----- Original Message -----
> From: "Srinivasa Rao Ragolu" <[email protected]>
> To: "joejiang" <[email protected]>
> 
> Hi Joejiang,
> 
> Thanks for quick reply. Above xml is getting generated fine if I set
> "vcpu_pin_set=1-12" in /etc/nova/nova.conf.
> 
> But how to pin each vcpu with pcpu something like below
> 
> <cputune>
>    <vcpupin vcpu='0' cpuset='1-5,12-17'/>
> 
>    <vcpupin vcpu='1' cpuset='2-3,12-17'/>
> 
> </cputune>
> 
> 
> One more question: are NUMA nodes compulsory for pinning each vCPU to a
> pCPU?

The specification for the CPU pinning functionality recently implemented in 
Nova is here:

    http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/virt-driver-cpu-pinning.html

Note that exact vCPU to pCPU pinning is not exposed to the user as this would 
require them to have direct knowledge of the host pCPU layout. Instead they 
request that the instance receive "dedicated" CPU resourcing and Nova handles 
allocation of pCPUs and pinning of vCPUs to them.

Example usage:

* Create a host aggregate and set metadata on it to indicate it is to be used 
for pinning. 'pinned' is the key used in this example, but any key can be 
used; the same key must be used in the later steps, though::

    $ nova aggregate-create cpu_pinning
    $ nova aggregate-set-metadata 1 pinned=true

  NB: For aggregates/flavors that won't be used for dedicated instances, set 
pinned=false.

* Set all existing flavors to avoid this aggregate::

    $ for FLAVOR in `nova flavor-list | cut -f 2 -d ' ' | grep -o '[0-9]*'`; do \
        nova flavor-key ${FLAVOR} set "aggregate_instance_extra_specs:pinned"="false"; \
      done

* Create a flavor that has the extra spec "hw:cpu_policy" set to "dedicated". 
In this example it is created with an ID of 6, 2048 MB of RAM, a 20 GB disk, 
and 2 vCPUs::

    $ nova flavor-create pinned.medium 6 2048 20 2
    $ nova flavor-key 6 set "hw:cpu_policy"="dedicated"

* Set the flavor to require the aggregate set aside for dedicated pinning of 
guests::

    $ nova flavor-key 6 set "aggregate_instance_extra_specs:pinned"="true"

* Add a compute host to the created aggregate (see nova host-list to get the 
host name(s))::

    $ nova aggregate-add-host 1 my_packstack_host_name

* Add the AggregateInstanceExtraSpecsFilter and CPUPinningFilter filters to 
scheduler_default_filters in /etc/nova/nova.conf::

    scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,
                                ComputeFilter,ComputeCapabilitiesFilter,
                                ImagePropertiesFilter,
                                ServerGroupAntiAffinityFilter,
                                ServerGroupAffinityFilter,
                                AggregateInstanceExtraSpecsFilter,
                                CPUPinningFilter

  NB: On the Kilo code base I believe the filter is NUMATopologyFilter.

* Restart the scheduler::

    # systemctl restart openstack-nova-scheduler

* After the above, as a normal (non-admin) user, try to boot an instance with 
the newly created flavor::

    $ nova boot --image fedora --flavor 6 test_pinning

* Confirm the instance has successfully booted and that each of its vCPUs is 
pinned to _a single_ host CPU by observing the <cputune> element of the 
generated domain XML::

    # virsh list
     Id    Name                           State
    ----------------------------------------------------
     2     instance-00000001              running
    # virsh dumpxml instance-00000001
    ...
    <vcpu placement='static' cpuset='0-3'>2</vcpu>
    <cputune>
      <vcpupin vcpu='0' cpuset='0'/>
      <vcpupin vcpu='1' cpuset='1'/>
    </cputune>
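The same check can be scripted rather than done by eye. The sketch below (not 
part of the original walkthrough) parses a domain XML dump with Python's 
standard xml.etree.ElementTree and prints the vCPU-to-pCPU map; the sample XML 
is embedded here for illustration, but in practice it would come from the 
output of `virsh dumpxml`:

```python
# Sketch: extract the vCPU -> pCPU pinning from a libvirt domain XML dump.
# The XML below mirrors the `virsh dumpxml` excerpt above; in real use, read
# it from a file or from the virsh output instead of embedding it.
import xml.etree.ElementTree as ET

domain_xml = """
<domain>
  <vcpu placement='static' cpuset='0-3'>2</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
  </cputune>
</domain>
"""

root = ET.fromstring(domain_xml)

# Map each vCPU number to the host CPU set it is pinned to.
pinning = {pin.get('vcpu'): pin.get('cpuset')
           for pin in root.findall('./cputune/vcpupin')}
print(pinning)

# With a "dedicated" cpu_policy, each cpuset should name a single host CPU
# (no ranges like '1-5' and no comma-separated lists).
dedicated = all('-' not in c and ',' not in c for c in pinning.values())
print('dedicated pinning:', dedicated)
```

For the sample XML this prints {'0': '0', '1': '1'}, i.e. each vCPU is pinned 
to exactly one host CPU, which is what the dedicated policy should produce.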


-Steve


_______________________________________________
OpenStack-dev mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev