If you specify "vcpu_pin_set=2,3,6,7" in /etc/nova/nova.conf, then nova will restrict VMs to running on that subset of host CPUs. This involves pinning from libvirt, but isn't really considered a "dedicated" instance in nova.
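
With that setting, the guest XML typically just carries the allowed set on the
<vcpu> element rather than one-to-one pins, something along the lines of:

  <vcpu placement='static' cpuset='2-3,6-7'>2</vcpu>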

By default, instances that can run on all of the allowed host CPUs are considered "non-dedicated", "shared", or "floating" because the vCPUs are not mapped one-to-one with a single host CPU. For these instances the CPU overcommit value determines how many vCPUs can run on a single host CPU.

If an instance specifies a cpu_policy of "dedicated", then each vCPU is mapped one-to-one with a single host CPU. This is known as a "pinned" or "dedicated" instance.
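
As a rough illustration (the host CPU numbers here are made up), "virsh vcpupin"
for a 2-vCPU dedicated instance would show a single host CPU per vCPU, e.g.:

  # virsh vcpupin <domain>
  VCPU: CPU Affinity
  ----------------------------------
     0: 2
     1: 3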

The performance differences between the two come into play when there is contention. By default a non-dedicated vCPU can have 16x overcommit, so there can be 16 vCPUs trying to run on a single host CPU. In contrast, a dedicated vCPU gets a whole host CPU all to itself.
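
The 16x figure comes from the scheduler's CPU allocation ratio, which can be
tuned in nova.conf (the value shown is the default):

  [DEFAULT]
  cpu_allocation_ratio=16.0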

As for the guest being aware of the underlying NUMA topology, this isn't possible when the number of vCPUs in the instance isn't a multiple of the number of hyperthreads in a host core. The problem is that qemu doesn't have a way to specify cores with varying numbers of threads.
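
When the vCPU count does line up, a guest NUMA topology can be requested via
flavor extra specs, for example (the flavor name is just a placeholder):

  nova flavor-key m1.pinned set hw:numa_nodes=2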

Chris

On 12/15/2015 10:50 AM, Arne Wiebalck wrote:
What was configured was pinning to a set and that is reflected, no? Or is that 
not referred to as “pinning”?
Anyway, for performance we didn’t see a difference between 1:1 pinning and 
confining the vCPUs to a set, as long as the instance is aware of the 
underlying NUMA topology.

Cheers,
  Arne


On 15 Dec 2015, at 17:11, Chris Friesen <chris.frie...@windriver.com> wrote:

Actually no, I don't think that's right.  When pinning is enabled, each vCPU 
will be affined to a single host CPU.  What is shown below is what I would 
expect if the instance were using non-dedicated CPUs.

To the original poster, you should be using

'hw:cpu_policy': 'dedicated'

in your flavor extra-specs to enable CPU pinning.  And you should enable the 
NUMATopologyFilter scheduler filter.
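
As a concrete sketch (the flavor name is just an example):

  nova flavor-key m1.pinned set hw:cpu_policy=dedicated

and in nova.conf on the scheduler host, keep your existing filter list and
append the NUMA filter:

  scheduler_default_filters = <your existing filters>,NUMATopologyFilter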

Chris



On 12/15/2015 09:23 AM, Arne Wiebalck wrote:
The pinning seems to have done what you asked for, but you probably
want to confine your vCPUs to NUMA nodes.

Cheers,
  Arne


On 15 Dec 2015, at 16:12, Satish Patel <satish....@gmail.com> wrote:

Sorry forgot to reply all :)

This is what I am getting:

[root@compute-1 ~]# virsh vcpupin instance-00000043
VCPU: CPU Affinity
----------------------------------
   0: 2-3,6-7
   1: 2-3,6-7


Following is the NUMA info:

[root@compute-1 ~]# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 3 5 6
node 0 size: 2047 MB
node 0 free: 270 MB
node 1 cpus: 1 2 4 7
node 1 size: 2038 MB
node 1 free: 329 MB
node distances:
node   0   1
  0:  10  20
  1:  20  10

On Tue, Dec 15, 2015 at 8:36 AM, Arne Wiebalck <arne.wieba...@cern.ch> wrote:
The pinning we set up goes indeed into the <cputune> block:

—>
<vcpu placement='static'>32</vcpu>
  <cputune>
    <shares>32768</shares>
    <vcpupin vcpu='0' cpuset='0-7,16-23'/>
    <vcpupin vcpu='1' cpuset='0-7,16-23'/>
    …
<—
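
For comparison, a one-to-one pinned instance would show a single host CPU per
<vcpupin> entry, e.g. (CPU numbers illustrative):

    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='3'/>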

What does “virsh vcpupin <domain>” give for your instance?

Cheers,
Arne


On 15 Dec 2015, at 13:02, Satish Patel <satish....@gmail.com> wrote:

I am running the Juno release with qemu-kvm-ev-2.1.2-23.el7_1.9.1.x86_64
on CentOS 7.1.

I am trying to configure CPU pinning because my application is CPU
hungry. This is what I did:

In /etc/nova/nova.conf:

vcpu_pin_set=2,3,6,7


I have created a host aggregate with pinning=true and created a flavor
with pinning. After that, when I start a VM on the host, I can see the
following in the guest:

...
...
<vcpu placement='static' cpuset='2-3,6-7'>2</vcpu>
...
...

But I am not seeing any <cputune> info.

I just want to make sure my pinning is working correctly and nothing
is wrong. How do I verify my pinning config is correct?
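
For reference, the aggregate and flavor part was set up roughly along these
lines (the names here are placeholders, not the real ones):

  nova aggregate-create pinned-hosts
  nova aggregate-set-metadata pinned-hosts pinning=true
  nova flavor-key m1.pinned set aggregate_instance_extra_specs:pinning=true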


--
Arne Wiebalck
CERN IT







--
Arne Wiebalck
CERN IT



