On 06/09/2016 05:15 AM, Paul Michali wrote:
> 1) On the host, I was seeing 32768 huge pages, of 2MB size.

Please check the number of huge pages _per host numa node_.
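For example, a quick way to see the per-node counts (assuming 2MB pages; these are the standard Linux sysfs paths) is something like:

    import glob, os

    # Sketch: print total/free 2MB hugepages for each host NUMA node
    # using the standard Linux sysfs layout.
    for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
        base = os.path.join(node, "hugepages", "hugepages-2048kB")
        with open(os.path.join(base, "nr_hugepages")) as f:
            total = int(f.read())
        with open(os.path.join(base, "free_hugepages")) as f:
            free = int(f.read())
        print("%s: %d total, %d free (2MB pages)" % (os.path.basename(node), total, free))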

> 2) I changed mem_page_size from 1024 to 2048 in the flavor, and then when VMs
> were created, they were being evenly assigned to the two NUMA nodes. Each using
> 1024 huge pages. At this point I could create more than half, but when there
> were 1945 pages left, it failed to create a VM. Did it fail because the
> mem_page_size was 2048 and the available pages were 1945, even though we were
> only requesting 1024 pages?

I do not think that "1024" is a valid page size (at least for x86).

Be careful about units: mem_page_size is in units of KB. For x86, the valid numerical sizes are 4, 2048, and 1048576 (i.e. 4KB, 2MB, and 1GB pages). The flavor, on the other hand, specifies memory size in MB.
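To make the units concrete, here is the rough arithmetic (the flavor size is just an example; this is how a 2GB guest ends up using the 1024 pages mentioned above):

    # Sketch: pages consumed by a guest, given flavor RAM (MB) and page size (KB).
    flavor_ram_mb = 2048        # example: a flavor with 2GB of RAM
    mem_page_size_kb = 2048     # hw:mem_page_size=2048 -> 2MB pages

    pages_needed = flavor_ram_mb * 1024 // mem_page_size_kb
    print(pages_needed)         # 1024 two-megabyte pages for a 2GB guest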

> 3) Related to #2, is there a relationship between mem_page_size, the allocation
> of VMs to NUMA nodes, and the flavor size? IOW, if I use the medium flavor
> (4GB), will I need a larger mem_page_size? (I'll play with this variation, as
> soon as I can). Gets back to understanding how the scheduling determines how to
> assign the VMs.

Valid mem_page_size values are determined by the host CPU. You do not need a larger page size for flavors with larger memory sizes.
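If it helps, one rough way to check what the host CPU advertises (x86 only; in /proc/cpuinfo the pse flag maps to 2MB pages and pdpe1gb to 1GB pages):

    # Sketch: report x86 hugepage support from the CPU flags in /proc/cpuinfo.
    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                break

    print("2MB pages supported:", "pse" in flags)
    print("1GB pages supported:", "pdpe1gb" in flags)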

VMs with a NUMA topology (hugepages, pinned CPUs, PCI devices, etc.) will be pinned to a single host NUMA node.
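That single-node constraint is one plausible reason the boot in #2 failed with 1945 pages still free: if those pages are split roughly evenly across two nodes, neither node can hold the 1024 pages a 2GB guest needs. A rough sketch of that placement check (all numbers hypothetical):

    # Sketch: the guest's hugepages must all come from one host NUMA node.
    free_pages_per_node = {0: 973, 1: 972}   # e.g. 1945 free pages split across two nodes
    pages_needed = 1024                      # 2GB guest with 2MB pages

    fits = any(free >= pages_needed for free in free_pages_per_node.values())
    print("guest can be placed:", fits)      # False: no single node has 1024 free pages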


Chris


