1 CPU = 5 vCPUs is okay?

Not_Oles (Hosting Provider, Content Writer)

From: https://nb.fedorapeople.org/cvsfedora/web/html/docs/virtualization-guide/f12/en-US/html/sect-Virtualization_Guide-Tips_and_tricks-Overcommitting_with_KVM.html

Virtualized CPUs are overcommitted best when each virtualized guest only has a single VCPU. The Linux scheduler is very efficient with this type of load. KVM should safely support guests with loads under 100% at a ratio of 5 VCPUs. Overcommitting single VCPU virtualized guests is not an issue.

So, 1 CPU = 5 vCPUs is okay?

I hope everyone gets the servers they want!

Thanked by (2)jureve bikegremlin
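The 5:1 guidance above is simple arithmetic on physical cores. A minimal sketch; the 16-core node and the helper names are illustrative assumptions, not anything from the Fedora docs:

```python
def max_vcpus(physical_cores: int, ratio: int = 5) -> int:
    """Upper bound on total vCPUs under the 5:1 guidance
    (single-vCPU guests, loads under 100%)."""
    return physical_cores * ratio

def within_guidance(allocated_vcpus: int, physical_cores: int, ratio: int = 5) -> bool:
    """True if an allocation stays at or under the suggested ratio."""
    return allocated_vcpus <= max_vcpus(physical_cores, ratio)

# Hypothetical 16-core node: up to 80 single-vCPU guests under the guidance.
print(max_vcpus(16))            # 80
print(within_guidance(99, 16))  # False: 99 vCPUs exceeds the 80-vCPU cap
```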

Comments

  • Doesn't matter how "efficient" it is.

    One PMSing benchmark junkie is all it takes to bring down a node.

    Thanked by (1)Not_Oles

    ♻ Amitz day is October 21.
    ♻ Join Nigh sect by adopting my avatar. Let us spread the joys of the end.

  • Not_Oles (Hosting Provider, Content Writer)

    Also, from the same Fedora page:

    Assigning guests VCPUs up to the number of physical cores is appropriate and works as expected.

    I hope everyone gets the servers they want!

  • Not_Oles (Hosting Provider, Content Writer)

    @deank said: PMSing benchmark junkie

    We don't have anybody like that here! :)

    PMSing = yes
    benchmark junkie = yes
    PMSing + benchmark junkie = no :)

    I hope everyone gets the servers they want!

  • Not_Oles (Hosting Provider, Content Writer)

    DOI: 10.1109/CLOUD.2012.131

    Thanked by (1)mfs

    I hope everyone gets the servers they want!

  • Not_Oles (Hosting Provider, Content Writer)
    edited October 2020

    On page 29 of the cited article:

    Proposed approach using a running example. For the sample group of 32 VMs presented in this analysis, the total number of vCPUs requested is 99. We assume that these VMs are to be provisioned on a PM with 16 physical CPU cores that can support a maximum of 32 vCPUs. **Thus, the vCPU capacity is over-committed by a factor of 99/32 or 3.09.** For this group of VMs, we predict the probability of exceeding a set threshold utilization and estimate the risk over a 24 hr time window. Specifically, the analysis is presented for two values of threshold utilization: 70% and 95%. Since the collected data contains utilization samples at 15 minute intervals, a total of 96 samples are recorded over a 24 hr window. Figure 4 shows the **aggregate CPU utilization** in each time interval and the set thresholds at 70% and 95% utilization. Observe that, for the given group of VMs, the 70% threshold is exceeded by a small number of samples, although **none exceeds the 95% threshold.**

    So, assuming similar use to the private cloud example from the cited article, it looks like 1 CPU = 3 vCPUs might be okay?

    Here is perhaps a more convenient citation for the article from which the above quote is taken: Biting Off Safely More Than You Can Chew: Predictive Analytics for Resource Over-Commit in IaaS Cloud.

    Thanked by (1)bikegremlin

    I hope everyone gets the servers they want!
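The paper's risk estimate boils down to counting how often aggregate utilization crosses a threshold over a 24 hr window of 96 samples (one every 15 minutes). A sketch of that counting step; the utilization samples here are invented purely for illustration, not the paper's data:

```python
def fraction_exceeding(samples, threshold):
    """Fraction of utilization samples (0-100 scale) above a threshold:
    a crude empirical estimate of the exceedance probability."""
    over = sum(1 for u in samples if u > threshold)
    return over / len(samples)

# 96 synthetic samples = 24 h at 15-minute intervals (made-up data:
# mostly calm at 60%, with six spikes to 75%).
samples = [60.0] * 90 + [75.0] * 6

print(fraction_exceeding(samples, 70))  # 0.0625 -- a few samples exceed 70%
print(fraction_exceeding(samples, 95))  # 0.0 -- none exceed 95%
```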

  • _MS_ (OG)
    edited October 2020

    @Not_Oles said:
    So, assuming similar use to the private cloud example from the cited article, it looks like 1 CPU = 3 vCPUs might be okay?

    From the example, 16 pCPU cores = 32 provider vCPUs = 99 client vCPUs.
    Which makes 1 pCPU core = 2 provider vCPUs = 6 client vCPUs.

    Thanked by (1)Not_Oles
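The per-core breakdown above is just division; a quick check of the paper's example numbers (16 cores, 32 provider vCPUs, 99 client vCPUs):

```python
# Figures from the paper's running example.
cores, provider_vcpus, client_vcpus = 16, 32, 99

print(provider_vcpus / cores)         # 2.0 provider vCPUs per core
print(client_vcpus / cores)           # 6.1875 -> roughly 6 client vCPUs per core
print(client_vcpus / provider_vcpus)  # 3.09375 -> the paper's 3.09 over-commit factor
```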
  • Interesting, how about RAM ratio?

    Thanked by (1)Not_Oles
  • Well, NanoKVM/microLXC always gave each VM 1 core, so it won't trip any monitoring or cause performance issues if someone compiles stuff or maxes out his core. A single user cannot impact the node's performance or affect someone else.

    Which is quite nice, so everything works flawlessly.
    Of course you can overcommit, but you may end up with issues.

    Thanked by (1)Not_Oles
  • Not_Oles (Hosting Provider, Content Writer)

    @_MS_ said:

    @Not_Oles said:
    So, assuming similar use to the private cloud example from the cited article, it looks like 1 CPU = 3 vCPUs might be okay?

    From the example, 16 pCPU cores = 32 provider vCPUs = 99 client vCPUs.
    Which makes 1 pCPU core = 2 provider vCPUs = 6 client vCPUs.

    It's my poor vocabulary. Sometimes I use the term "CPU" to mean "thread." I should be more clear. Thanks for highlighting my ambiguity. That's very helpful! :)

    Thanked by (1)_MS_

    I hope everyone gets the servers they want!

  • jarland (Hosting Provider, OG)

    @Not_Oles said: So, 1 CPU = 5 vCPUs is okay?

    Sounds like pretty safe territory to me under average conditions.

    Thanked by (2)Not_Oles Abdullah

    Do everything as though everyone you’ll ever know is watching.

  • Not_Oles (Hosting Provider, Content Writer)

    @gks said:
    Interesting, how about RAM ratio?

    Section VI of the cited paper mentions related research involving memory and says that the analytical approach in the cited paper can be extended to run-time memory allocation.

    Swap also needs to be considered.

    I hope everyone gets the servers they want!

  • jarland (Hosting Provider, OG)

    It's really interesting to see their recommendations on overcommitting memory. Truth be told, the professional crowd is a lot less shy about overcommitting than the hobbyist crowd.

    Thanked by (1)Not_Oles

    Do everything as though everyone you’ll ever know is watching.

  • Not_Oles (Hosting Provider, Content Writer)

    @Neoon said: NanoKVM/microLXC always gave a VM 1 core

    Just to be certain, please let me ask: do you mean "1 core" in the sense of "1 thread" which would be 1 "processing unit" in the output of GNU nproc?

    @Neoon said: Which is quite nice, so everything works flawless.

    There is a guy who uses a desktop environment with Firefox and VSCodium under LXC on one of my Xeon D 1521 servers. For a while I kept asking him if everything was working okay. Eventually he said, "I insist it works okay." So I stopped asking. :)

    LXC doesn't get nearly the love that it deserves!

    I hope everyone gets the servers they want!
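On the "core" vs "thread" ambiguity: GNU nproc and Python's os.cpu_count() both report logical processors (threads), so on an SMT machine they show double the physical core count. A sketch of the distinction; the /proc/cpuinfo parsing is Linux-specific and returns None elsewhere:

```python
import os

def logical_cpus():
    """Logical processors (threads) -- the same count GNU nproc prints."""
    return os.cpu_count()

def physical_cores():
    """Count unique (physical id, core id) pairs in /proc/cpuinfo.
    Linux only; returns None if the info is unavailable."""
    try:
        with open("/proc/cpuinfo") as f:
            text = f.read()
    except OSError:
        return None
    cores = set()
    phys = core = None
    for line in text.splitlines():
        if line.startswith("physical id"):
            phys = line.split(":")[1].strip()
        elif line.startswith("core id"):
            core = line.split(":")[1].strip()
        elif not line.strip():  # blank line ends one processor block
            if phys is not None and core is not None:
                cores.add((phys, core))
            phys = core = None
    if phys is not None and core is not None:
        cores.add((phys, core))
    return len(cores) or None

print(logical_cpus())  # threads; may be 2x physical_cores() with SMT enabled
```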
