Reviewed:  https://review.openstack.org/285321
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=0b2e34f92507fd490faaec3285049b28446dc94c
Submitter: Jenkins
Branch:    master
commit 0b2e34f92507fd490faaec3285049b28446dc94c
Author: Stephen Finucane <stephen.finuc...@intel.com>
Date:   Fri Feb 26 13:07:56 2016 +0000

    virt/hardware: Fix 'isolate' case on non-SMT hosts

    The 'isolate' policy is supposed to function on both hosts with an
    SMT architecture (e.g. HyperThreading) and those without. The former
    is true, but the latter is broken due to an underlying implementation
    detail in how vCPUs are "packed" onto pCPUs.

    The '_pack_instance_onto_cores' function expects to work with a list
    of sibling sets. Since non-SMT hosts don't have siblings, the
    function is being given a list of all cores as one big sibling set.
    However, this conflicts with the idea that, in the 'isolate' case,
    only one sibling from each sibling set should be used. Using one
    sibling from the one available sibling set means it is not possible
    to schedule instances with more than one vCPU.

    Resolve this mismatch by instead providing the function with a list
    of multiple sibling sets, each containing a single core.

    This also resolves another bug. When booting instances on a non-HT
    host, the resulting NUMA topology should not define threads. By
    correctly considering the cores on these systems as non-siblings,
    the resulting instance topology will contain multiple cores with
    only a single thread in each.

    Change-Id: I2153f25fdb6382ada8e62fddf4215d9a0e3a6aa7
    Closes-bug: #1550317
    Closes-bug: #1417723

** Changed in: nova
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1417723

Title:
  when using dedicated cpus, the guest topology doesn't match the host

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  According to "http://specs.openstack.org/openstack/nova-specs/specs/juno/approved/virt-driver-cpu-pinning.html",
  the topology of the guest is set up as follows:

  "In the absence of an explicit vCPU topology request, the virt drivers
  typically expose all vCPUs as sockets with 1 core and 1 thread. When
  strict CPU pinning is in effect the guest CPU topology will be setup
  to match the topology of the CPUs to which it is pinned."

  What I'm seeing is that when strict CPU pinning is in use, the guest
  seems to be configured with multiple threads, even if the host doesn't
  have threading enabled.

  As an example, I set up a flavor with 2 vCPUs and enabled dedicated
  CPUs. I then booted an instance of this flavor on two separate compute
  nodes, one with hyperthreading enabled and one with hyperthreading
  disabled. In both cases, "virsh dumpxml" gave the following topology:

    <topology sockets='1' cores='1' threads='2'/>

  When running on the system with hyperthreading disabled, this should
  presumably have been set to "cores='2' threads='1'".

  Taking this a bit further, even if hyperthreading is enabled on the
  host, it would be more accurate to specify multiple threads in the
  guest topology only if the vCPUs are actually affined to multiple
  threads of the same host core. Otherwise it would be more accurate to
  specify the guest topology with multiple cores of one thread each.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1417723/+subscriptions
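
To illustrate the sibling-set change described in the commit message, here is a
minimal Python sketch; the helper name and arguments are hypothetical and this
is not the actual nova.virt.hardware code:

    # Hypothetical sketch of the sibling-set handling described above.
    def sibling_sets_for_isolate(host_cores, host_siblings):
        """Return the sibling sets to hand to the packing logic.

        host_cores    -- set of all usable pCPU ids on the host
        host_siblings -- list of sets of thread siblings (empty on non-SMT hosts)
        """
        if host_siblings:
            # SMT host: use the real sibling sets; 'isolate' picks one
            # thread from each set and leaves the others unused.
            return host_siblings
        # Non-SMT host: previously all cores were wrapped in ONE sibling set,
        # e.g. [{0, 1, 2, 3}], so 'isolate' could only ever use one of them.
        # The fix treats every core as its own single-element sibling set.
        return [{core} for core in sorted(host_cores)]

    # Example: a 4-core host without hyperthreading.
    print(sibling_sets_for_isolate({0, 1, 2, 3}, []))
    # -> [{0}, {1}, {2}, {3}]  (one set per core, so a 2-vCPU instance fits)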
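
The bug description argues that the guest topology should only report multiple
threads when the pinned vCPUs really are thread siblings of the same host core.
A rough sketch of that rule (hypothetical helper, not Nova's implementation):

    # Hypothetical sketch: only report threads > 1 when the pinned pCPUs
    # are thread siblings; otherwise expose one thread per guest core.
    def guest_topology(pinned_pcpus, host_siblings):
        """Return (sockets, cores, threads) for a pinned guest.

        pinned_pcpus  -- list of host pCPU ids the vCPUs are pinned to
        host_siblings -- list of sets of thread siblings on the host
        """
        n_vcpus = len(pinned_pcpus)
        pinned = set(pinned_pcpus)
        if any(pinned <= sibling_set for sibling_set in host_siblings):
            # All pinned pCPUs are threads of the same host core.
            return (1, 1, n_vcpus)
        # Non-SMT host, or vCPUs spread across distinct cores.
        return (1, n_vcpus, 1)

    # Hyperthreaded host, 2 vCPUs pinned to siblings of core 0:
    print(guest_topology([0, 8], [{0, 8}, {1, 9}]))   # -> (1, 1, 2)
    # Non-SMT host, 2 vCPUs pinned to separate cores:
    print(guest_topology([0, 1], []))                 # -> (1, 2, 1)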