On 27 September 2017 at 23:19, Jakub Jursa <jakub.ju...@chillisys.com> wrote:
> 'hw:cpu_policy=dedicated' (while NOT setting 'hw:numa_nodes') results in
> libvirt pinning CPUs and setting 'strict' memory mode
>
> (from libvirt xml for given instance)
> ...
>   <numatune>
>     <memory mode='strict' nodeset='1'/>
>     <memnode cellid='0' mode='strict' nodeset='1'/>
>   </numatune>
> ...
>
> So yeah, the instance is not able to allocate memory from another NUMA node.

I can't recall what the docs say on this, but I wouldn't be surprised
if that were a bug. That said, I think most users would want CPU and
NUMA pinning together (you haven't shared your use case, but perhaps
you do too?).
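
For reference, a minimal sketch of what combining the two looks like
on a flavor ('m1.pinned' is a made-up flavor name, and the property
values are just one sensible combination, not a recommendation):

    $ openstack flavor set m1.pinned \
        --property hw:cpu_policy=dedicated \
        --property hw:numa_nodes=1

With hw:numa_nodes set explicitly, the guest NUMA topology (and hence
memory placement) becomes a deliberate choice rather than whatever
falls out implicitly.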

> I'm not quite sure what you mean by 'memory will be locked for the
> guest'. Also, aren't huge pages enabled in the kernel by default?

I think that suggestion was probably referring to static hugepages,
which can be reserved (per NUMA node) at boot; assuming your host is
configured correctly, QEMU can then back guest RAM with them - see
the sketch below.
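
Roughly (sizes and counts here are illustrative, not recommendations):

    # reserve 1G hugepages on the kernel command line at boot
    default_hugepagesz=1G hugepagesz=1G hugepages=16

    # or top up a specific NUMA node at runtime via sysfs (this can
    # fail once memory is fragmented, hence boot-time is preferred)
    echo 8 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages

    # then ask Nova for hugepage-backed guest RAM on the flavor
    openstack flavor set m1.pinned --property hw:mem_page_size=large

(hw:mem_page_size also accepts explicit sizes like '2MB' or '1GB' if
you want to be more specific than 'large'.)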

You are probably thinking of THP (transparent hugepages), which are
now on by default in Linux but can be hit-and-miss on a long-running
host where memory has become fragmented or the pagecache is large. In
our experience, performance can be severely degraded when even a
small fraction of guest memory misses hugepage backing. We have also
seen memory management fail THP allocations when the pagecache is
highly utilised, despite none of it being dirty (so it should all be
droppable immediately).
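
If it's useful, these are the standard Linux knobs we'd poke at to see
whether THP is actually covering guest memory (root is required for
the writes, and the echoes are blunt instruments, so use them with
care on a loaded host):

    cat /sys/kernel/mm/transparent_hugepage/enabled
    grep AnonHugePages /proc/meminfo      # anon memory currently THP-backed
    grep AnonHugePages /proc/<qemu-pid>/smaps   # per-guest view
    cat /proc/buddyinfo                   # free-page fragmentation per order
    echo 1 > /proc/sys/vm/compact_memory  # force compaction
    echo 3 > /proc/sys/vm/drop_caches     # drop clean pagecache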

-- 
Cheers,
~Blairo

