On 06/30/2017 07:06 AM, sfinu...@redhat.com wrote:
> On Thu, 2017-06-29 at 12:20 -0600, Chris Friesen wrote:
>> On 06/29/2017 10:59 AM, sfinu...@redhat.com wrote:
>>> From the above, there are 3-4 work items:
>>> - Add an 'emulator_pin_set' or 'cpu_emulator_threads_mask' configuration option
>>> - If using a mask, rename 'vcpu_pin_set' to 'pin_set' (or, better, 'usable_cpus')
>>> - Add an 'emulator_overcommit_ratio', which will do for emulator threads what the other ratios do for vCPUs and memory
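Just to make the first and third items above concrete, I'm picturing something along these lines in nova.conf (the option names are only the ones floated in this thread; apart from vcpu_pin_set none of them exist today):

    [DEFAULT]
    # existing option: pCPUs that instance vCPUs may be pinned to
    vcpu_pin_set = 4-15
    # proposed: pCPUs reserved for emulator threads of instances using
    # hw:emulator_thread_policy=isolate
    emulator_pin_set = 2,3
    # proposed: how many instances' emulator threads may share those pCPUs
    emulator_overcommit_ratio = 4.0

A mask variant ('cpu_emulator_threads_mask') would presumably express the same thing as a mask against the pin set, hence the suggested rename in the second item.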
>> If we were going to support "emulator_overcommit_ratio", then we wouldn't
>> necessarily need an explicit mask/set as a config option. If someone wants
>> to run with 'hw:emulator_thread_policy=isolate' and we're below the
>> overcommit ratio, then we run it; otherwise nova could try to allocate a
>> new pCPU to add to the emulator_pin_set internally tracked by nova. This
>> would allow the number of pCPUs in emulator_pin_set to vary depending on
>> the number of instances with 'hw:emulator_thread_policy=isolate' on the
>> compute node, which should allow for optimal packing.
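To sketch what I was getting at (all the names below are invented for illustration; only hw:emulator_thread_policy is a real extra spec), the claim logic could go roughly like this:

    # Rough sketch of the claim logic described above.  The function and
    # argument names are made up; this is not how nova's NUMA fitting
    # code is actually structured.

    def pick_emulator_pcpu(free_cpus, emulator_pool, pool_users, overcommit_ratio):
        """Choose a pCPU for an 'isolate' instance's emulator threads.

        Returns (pcpu, grew_pool): share an existing pool CPU while we
        are under the overcommit ratio, otherwise grow the pool by
        pulling in a free pCPU.  None means the claim fails and we look
        at another cell/host.
        """
        if emulator_pool and pool_users < len(emulator_pool) * overcommit_ratio:
            return min(emulator_pool), False
        candidates = free_cpus - emulator_pool
        if not candidates:
            return None, False
        return min(candidates), True

    # The first 'isolate' instance pulls a free pCPU into the pool...
    print(pick_emulator_pcpu({2, 3, 4, 5}, set(), 0, 4.0))   # (2, True)
    # ...and later ones share it until the ratio is exceeded.
    print(pick_emulator_pcpu({3, 4, 5}, {2}, 1, 4.0))        # (2, False)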
> So we'd now mark pCPUs not only as used, but also as used for a specific
> purpose? That would probably be more flexible than using a static pool of
> CPUs, particularly if instances are heterogeneous. I'd imagine it would,
> however, be much tougher to do right. I need to think on this.
I think you could do it with a new "emulator_cpus" field in NUMACell, and a new
"emulator_pcpu" field in InstanceNUMACell.
> As an aside, what would we do about billing? Currently we include CPUs used
> for emulator threads as overhead. Would this change?
We currently have local changes to allow instances with "shared" and "dedicated"
CPUs to coexist on the same compute node. For CPU usage, "dedicated" CPUs count
as "1", and "shared" CPUs count as 1/cpu_overcommit_ratio. That way the total
CPU usage can never exceed the number of available CPUs.
You could follow this model and bill for an extra 1/emulator_overcommit_ratio
worth of a CPU for instances with 'hw:emulator_thread_policy=isolate'.
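In other words, something like the following (a purely illustrative helper; the ratio values are just examples):

    def cpu_usage_for_instance(num_vcpus, dedicated, isolate_emulator,
                               cpu_overcommit_ratio=16.0,
                               emulator_overcommit_ratio=4.0):
        """Return how many pCPUs' worth of capacity an instance consumes."""
        # Dedicated vCPUs consume a whole pCPU each; shared vCPUs consume
        # 1/cpu_overcommit_ratio of a pCPU each.
        usage = num_vcpus * (1.0 if dedicated else 1.0 / cpu_overcommit_ratio)
        # An 'isolate' instance also pays for a share of an emulator pCPU.
        if isolate_emulator:
            usage += 1.0 / emulator_overcommit_ratio
        return usage

    # 4 dedicated vCPUs + isolated emulator threads -> 4.25 pCPUs of capacity
    print(cpu_usage_for_instance(4, dedicated=True, isolate_emulator=True))
    # 4 shared vCPUs, no isolation -> 0.25 pCPUs of capacity
    print(cpu_usage_for_instance(4, dedicated=False, isolate_emulator=False))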
>>> - Deprecate 'hw:emulator_thread_policy'???
>> I'm not sure we need to deprecate it; it would instead signify whether the
>> emulator threads should be isolated from the vCPU threads. If set to
>> "isolate" then they would run on the emulator_pin_set identified above
>> (potentially sharing those pCPUs with emulator threads from other
>> instances) rather than each instance getting a whole pCPU for its emulator
>> threads.
> I'm confused; I thought we weren't going to need 'emulator_pin_set'?
I meant whatever field we use internally to track which pCPUs are currently
being used to run emulator threads as opposed to vCPU threads (i.e. the
"emulator_cpus" field in NUMACell suggested above).
> In any case, it's probably less about deprecating the extra spec and more
> about changing how things work under the hood. We'd actually still want
> something to signify "I want my emulator overhead accounted for separately".
Agreed.
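For what it's worth, from the end user's side that signal stays exactly the extra spec we have today, e.g.:

    $ openstack flavor set --property hw:emulator_thread_policy=isolate <flavor>

All of the above is just about how nova places and accounts for those emulator threads internally.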
Chris