What was configured was pinning to a set, and that is reflected, no? Or is that not referred to as “pinning”? In any case, for performance we didn’t see a difference between 1:1 pinning and confining the vCPUs to a set, as long as the instance is aware of the underlying NUMA topology.
Cheers,
Arne

> On 15 Dec 2015, at 17:11, Chris Friesen <chris.frie...@windriver.com> wrote:
>
> Actually no, I don't think that's right. When pinning is enabled, each vCPU
> will be affined to a single host CPU. What is shown below is what I would
> expect if the instance was using non-dedicated CPUs.
>
> To the original poster: you should be using
>
> 'hw:cpu_policy': 'dedicated'
>
> in your flavor extra-specs to enable CPU pinning, and you should enable the
> NUMATopologyFilter scheduler filter.
>
> Chris
>
>
> On 12/15/2015 09:23 AM, Arne Wiebalck wrote:
>> The pinning seems to have done what you asked for, but you probably
>> want to confine your vCPUs to NUMA nodes.
>>
>> Cheers,
>> Arne
>>
>>
>>> On 15 Dec 2015, at 16:12, Satish Patel <satish....@gmail.com> wrote:
>>>
>>> Sorry, forgot to reply-all :)
>>>
>>> This is what I am getting:
>>>
>>> [root@compute-1 ~]# virsh vcpupin instance-00000043
>>> VCPU: CPU Affinity
>>> ----------------------------------
>>>    0: 2-3,6-7
>>>    1: 2-3,6-7
>>>
>>> And the following NUMA info:
>>>
>>> [root@compute-1 ~]# numactl --hardware
>>> available: 2 nodes (0-1)
>>> node 0 cpus: 0 3 5 6
>>> node 0 size: 2047 MB
>>> node 0 free: 270 MB
>>> node 1 cpus: 1 2 4 7
>>> node 1 size: 2038 MB
>>> node 1 free: 329 MB
>>> node distances:
>>> node   0   1
>>>   0:  10  20
>>>   1:  20  10
>>>
>>> On Tue, Dec 15, 2015 at 8:36 AM, Arne Wiebalck <arne.wieba...@cern.ch> wrote:
>>>> The pinning we set up does indeed go into the <cputune> block:
>>>>
>>>> —>
>>>> <vcpu placement='static'>32</vcpu>
>>>> <cputune>
>>>>   <shares>32768</shares>
>>>>   <vcpupin vcpu='0' cpuset='0-7,16-23'/>
>>>>   <vcpupin vcpu='1' cpuset='0-7,16-23'/>
>>>>   …
>>>> <—
>>>>
>>>> What does “virsh vcpupin <domain>” give for your instance?
>>>>
>>>> Cheers,
>>>> Arne
>>>>
>>>>
>>>>> On 15 Dec 2015, at 13:02, Satish Patel <satish....@gmail.com> wrote:
>>>>>
>>>>> I am running the JUNO version with qemu-kvm-ev-2.1.2-23.el7_1.9.1.x86_64
>>>>> on CentOS 7.1.
>>>>>
>>>>> I am trying to configure CPU pinning because my application is CPU
>>>>> hungry. This is what I did in /etc/nova/nova.conf:
>>>>>
>>>>> vcpu_pin_set=2,3,6,7
>>>>>
>>>>> I created a host aggregate with pinning=true and created a flavor
>>>>> with pinning. After that, when I start a VM on the host, I can see
>>>>> the following in the guest XML:
>>>>>
>>>>> ...
>>>>> <vcpu placement='static' cpuset='2-3,6-7'>2</vcpu>
>>>>> ...
>>>>>
>>>>> But I am not seeing any <cputune> info.
>>>>>
>>>>> I just want to make sure my pinning is working correctly and nothing
>>>>> is wrong. How do I verify that my pinning config is correct?
>>>>>
>>>>> _______________________________________________
>>>>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>>> Post to     : openstack@lists.openstack.org
>>>>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>>
>>>> --
>>>> Arne Wiebalck
>>>> CERN IT
>>>>
>>
>> --
>> Arne Wiebalck
>> CERN IT

--
Arne Wiebalck
CERN IT
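To make the verification step discussed above concrete: whether an instance is 1:1 pinned or merely confined to a CPU set can be read off the <cputune> block of `virsh dumpxml <domain>`. The following is a minimal sketch, not an official tool; the helper names are made up for illustration, and the embedded sample XML mirrors the 2-3,6-7 affinity seen in this thread (a set-confined instance, not 1:1 pinning).

```python
# Sketch: classify libvirt vCPU pinning from domain XML (e.g. the output
# of `virsh dumpxml <domain>`). A vCPU is 1:1 pinned only when its
# <vcpupin> cpuset resolves to exactly one host CPU.
import xml.etree.ElementTree as ET


def expand_cpuset(cpuset):
    """Expand a libvirt cpuset string like '2-3,6-7' into a set of ints."""
    cpus = set()
    for part in cpuset.split(','):
        if '-' in part:
            lo, hi = part.split('-')
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus


def pinning_report(domain_xml):
    """Return {vcpu_id: set_of_host_cpus} from the <cputune> block (empty if none)."""
    root = ET.fromstring(domain_xml)
    return {
        int(pin.get('vcpu')): expand_cpuset(pin.get('cpuset'))
        for pin in root.findall('./cputune/vcpupin')
    }


# Sample domain XML matching the affinity reported in this thread:
sample = """
<domain>
  <vcpu placement='static'>2</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2-3,6-7'/>
    <vcpupin vcpu='1' cpuset='2-3,6-7'/>
  </cputune>
</domain>
"""

for vcpu, cpus in sorted(pinning_report(sample).items()):
    kind = '1:1 pinned' if len(cpus) == 1 else 'confined to set'
    print(vcpu, sorted(cpus), kind)
```

With `hw:cpu_policy=dedicated` working as Chris describes, each vCPU's cpuset should come back as a single host CPU; the set-style output above is what you get without dedicated pinning.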