Hi Roman,
I haven't, but I will write a blueprint for the core pinning part.
I considered using the vcpu element as well, but with it you cannot pin an
individual vcpu, e.g. vcpu-0 to pcpu-0: the vcpus and the emulator all share
every pcpu defined in its cpuset. That is why I decided to use the cputune
element, which allows per-vcpu pinning.
Are you using extra specs to carry the cpuset attributes in your
implementation?
Br, Tuomas
On 18.11.2013 17:14, Roman Verchikov wrote:
Tuomas,
Have you published your code/blueprints anywhere? Looks like we’re working on
the same stuff. I have implemented almost the same feature set (haven’t
published anything yet because of this thread), except for the scheduler part.
The main goal is to be able to pin VCPUs in a NUMA environment.
Have you considered adding placement and cpuset attributes to the <vcpu> element?
For example:
<vcpu placement='static' cpuset='%whatever%'>
Thanks,
Roman
On Nov 13, 2013, at 14:46, Tuomas Paappanen <tuomas.paappa...@tieto.com> wrote:
Hi all,
I would like to hear your thoughts about core pinning in OpenStack. Currently
nova (with qemu-kvm) supports defining a cpuset of PCPUs that instances are
allowed to use. I didn't find a blueprint for it, but I think the feature is
meant to isolate the cpus used by the host from the cpus used by instances
(VCPUs).
But from a performance point of view it is better to dedicate PCPUs
exclusively to VCPUs and the emulator. In some cases you may want to guarantee
that only one instance (and its VCPUs) is using certain PCPUs. With core
pinning you can optimize instance performance based on e.g. cache sharing,
NUMA topology, interrupt handling, PCI passthrough (SR-IOV) on multi-socket
hosts, etc.
We have already implemented a feature like this (a PoC with limitations) on
top of Nova Grizzly and would like to hear your opinion about it.
The current implementation consists of three main parts:
- Definition of pcpu-vcpu maps for instances and instance spawning
- (optional) Compute resource and capability advertising including free pcpus
and NUMA topology.
- (optional) Scheduling based on free cpus and NUMA topology.
The implementation is quite simple:
(additional/optional parts)
The nova-compute nodes advertise their free pcpus and NUMA topology in the
same manner as host capabilities, and instances are scheduled based on this
information.
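For illustration only, here is a minimal sketch of how a compute node could
collect that topology from the libvirt capabilities XML. The pinned_pcpus
argument is a hypothetical set of already-dedicated pcpus; how it is tracked
and reported to the scheduler is not shown here.

import libvirt
from xml.etree import ElementTree

def get_numa_topology(pinned_pcpus=frozenset()):
    # Connect to the local hypervisor and parse its capabilities XML.
    conn = libvirt.openReadOnly(None)
    caps = ElementTree.fromstring(conn.getCapabilities())
    topology = {}
    for cell in caps.findall('./host/topology/cells/cell'):
        cpu_ids = [int(cpu.get('id')) for cpu in cell.findall('./cpus/cpu')]
        topology[int(cell.get('id'))] = {
            'pcpus': cpu_ids,
            'free': [c for c in cpu_ids if c not in pinned_pcpus],
        }
    return topology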
(core pinning)
The admin can set the PCPUs for the VCPUs and for the emulator process, or
select the NUMA cell for the instance's vcpus, by adding key:value pairs to
the flavor's extra specs.
EXAMPLE:
instance has 4 vcpus
<key>:<value>
vcpus:1,2,3,4 --> vcpu0 pinned to pcpu1, vcpu1 pinned to pcpu2...
emulator:5 --> emulator pinned to pcpu5
or
numacell:0 --> all vcpus are pinned to pcpus in numa cell 0.
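For completeness, with the PoC key names above an admin would attach these to
a flavor through the normal extra spec mechanism, e.g.
"nova flavor-key m1.large set vcpus=1,2,3,4 emulator=5" on the command line,
or via python-novaclient. In this sketch the credentials, endpoint and flavor
name are placeholders:

from novaclient.v1_1 import client

nova = client.Client('admin', 'secret', 'admin', 'http://controller:5000/v2.0/')
flavor = nova.flavors.find(name='m1.large')
# Pin vcpu0..vcpu3 to pcpus 1..4 and the emulator to pcpu5.
flavor.set_keys({'vcpus': '1,2,3,4', 'emulator': '5'})
# Or, alternatively, keep all vcpus inside NUMA cell 0:
# flavor.set_keys({'numacell': '0'})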
In nova-compute, the core pinning information is read from the extra specs
and added to the domain XML in the same way as the cpu quota values (cputune):
<cputune>
<vcpupin vcpu='0' cpuset='1'/>
<vcpupin vcpu='1' cpuset='2'/>
<vcpupin vcpu='2' cpuset='3'/>
<vcpupin vcpu='3' cpuset='4'/>
<emulatorpin cpuset='5'/>
</cputune>
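To make the mapping concrete, here is a standalone sketch of how the extra
specs could be turned into that element. It is not the actual patch, just an
illustration using the key names from the example above:

from xml.etree import ElementTree

def build_cputune(extra_specs):
    # Build a <cputune> element from the PoC extra spec keys.
    cputune = ElementTree.Element('cputune')
    if 'vcpus' in extra_specs:
        for vcpu, pcpu in enumerate(extra_specs['vcpus'].split(',')):
            ElementTree.SubElement(cputune, 'vcpupin',
                                   vcpu=str(vcpu), cpuset=pcpu)
    if 'emulator' in extra_specs:
        ElementTree.SubElement(cputune, 'emulatorpin',
                               cpuset=extra_specs['emulator'])
    return ElementTree.tostring(cputune)

# build_cputune({'vcpus': '1,2,3,4', 'emulator': '5'}) produces the
# same <cputune> structure as the XML shown above.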
What do you think? Are there implementation alternatives? Is this worth a
blueprint? All related comments are welcome!
Regards,
Tuomas
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev