I like to think about CPU as a resource that is either under contention (more
processes need CPU time than there are CPU cores to go around) or not:

CPU caps limit how much you can use when there's no contention. In the
Joyent Cloud they ensure people don't get more CPU time than they pay for.
(example 1 below)

CPU shares ensure fairness when there is contention. If CPU shares are
doled out in proportion to memory doled out (like in the Joyent Cloud),
then when everyone is fighting for CPU, the more memory you're paying for,
the more CPU cycles you'll get. (example 2 below)

If all the zones on a box belong to the same customer (as might be the case
on your machine), you could set the cpu caps much higher, and depend on the
shares for fairness.

Examples for clarification:

1. I fire up a g4-highcpu-2G zone in the Joyent cloud. It gets a CPU cap of
100. It happens to land on a freshly deployed compute node with no other
zones on it.
I then run 2 CPU intensive processes. There is no contention so the machine
is totally capable of scheduling each of them on a separate CPU and running
them full throttle.
However, because I'm only paying for 1 vCPU, I hit that cap of 100 and my
two processes have to take turns running. If my workload really is that CPU
intensive, I should scale the zone up to g4-highcpu-4G, which gets 2 vCPUs
and a cap of 200.
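The cap arithmetic in example 1 can be sketched in a few lines of Python
(a minimal model, not SmartOS code; it assumes the 100-per-vCPU convention
used by the packages above):

```python
# Hypothetical sketch of Joyent-style cap arithmetic: a cpu_cap of 100
# equals one full CPU's worth of time, no matter how many cores the
# zone can see.

def throughput_per_process(cpu_cap, n_processes, n_cores):
    """Fraction of a full core each CPU-bound process actually gets."""
    # Without a cap, each process could have a whole core to itself,
    # as long as there are enough cores (no contention).
    uncapped = min(1.0, n_cores / n_processes)
    # The cap limits the zone's total CPU time to cpu_cap/100 cores,
    # shared across all runnable processes in the zone.
    capped_total = cpu_cap / 100
    return min(uncapped, capped_total / n_processes)

# g4-highcpu-2G: cap of 100, two busy processes on an idle 8-core box:
print(throughput_per_process(100, 2, 8))   # 0.5 -- they take turns
# g4-highcpu-4G: a cap of 200 lets both run full throttle:
print(throughput_per_process(200, 2, 8))   # 1.0
```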

2. At home I have a machine with 8 cores running SmartOS. I don't put caps
on my zones because I am the only customer on the box so to speak. I
provision 3 zones on the machine: the "important" zone that gets 600
shares, and two "unimportant" ones that each get 200 shares. Each of them
is running 8 CPU intensive processes (apparently I'm a glutton for
punishment). I have given out 1000 shares in total. Because my numbers all
divide evenly I can use simple fractions. The important zone gets 6/10 or
3/5 of the CPU, and the other two each get 2/10 or 1/5. My math adds up:
3/5 + 1/5 + 1/5 = 5/5 = 1. The important zone gets approximately 4.8 cores
worth of CPU time, and the unimportant ones each get approximately 1.6
cores worth.
This is as good as things can be when I foolishly run too many CPU
intensive workloads on my machine at home. Maybe it's time for a beefier
box. :-P
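The share math in example 2 boils down to one line: under contention, a
zone's slice of the CPU is its shares divided by the total shares handed
out. A small Python sketch (zone names are just the ones from my example):

```python
# Compute each zone's fair-share CPU allocation under full contention:
# slice = (zone's shares / total shares) * total cores.

def fair_share(shares, total_cores):
    """Map each zone to the cores' worth of CPU it gets under contention."""
    total = sum(shares.values())
    return {zone: s / total * total_cores for zone, s in shares.items()}

alloc = fair_share(
    {"important": 600, "unimportant-1": 200, "unimportant-2": 200},
    total_cores=8,
)
for zone, cores in alloc.items():
    print(f"{zone}: ~{cores} cores")
# important gets ~4.8 cores; each unimportant zone gets ~1.6
```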

Was this helpful?
-Nahum

On Fri, Dec 2, 2016 at 3:47 AM, Len Weincier <[email protected]> wrote:

> Just to note I see that in triton the cpu_cap is set as well as cpu_shares
> == max_physical_memory
>
> I am struggling to understand how this works. I am guessing that if the
> ratio of cpu_cap to cores is low then it's not that noticeable to the
> guests, since the threads get through in a reasonable amount of time; but
> with the E7-x CPUs that have high core counts, where the ratio is higher,
> this will become a more serious problem, especially for multithreaded apps?
>
> Thanks
> Len
>
>
> On 2 December 2016 at 09:44, Len Weincier <[email protected]> wrote:
>
>> This is the classic symptom of a machine with a cpu_cap less than its
>>> CPU (core) count.
>>>
>>
>> So the VM sees all the cores and dispatches threads / processes for that
>> many cores, but the actual usage is limited by the cpu_cap, so the threads
>> are stalling waiting for the OS to give them a slot to run, hence the load
>> avg climbing, IIUC. Setting cpu_cap=0 sets the machine free and it's
>> responsive.
>>
>> What's the right way to share the cores for LX brand machines then, seeing
>> as we can't limit the access to the cores? In Triton I see the default
>> packages set the cpu_cap as well, which would result in the same issue?
>>
>> I have had a look at
>> https://wiki.smartos.org/display/DOC/CPU+Caps+and+Shares and am not sure
>> what to do.
>>
>> Thanks
>> Len
>>
>>
>



-------------------------------------------
smartos-discuss
Archives: https://www.listbox.com/member/archive/184463/=now
RSS Feed: https://www.listbox.com/member/archive/rss/184463/25769125-55cfbc00
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=25769125&id_secret=25769125-7688e9fb
Powered by Listbox: http://www.listbox.com