On 09/12/2013 19:10, Marcelo Tosatti wrote:
> On Mon, Dec 09, 2013 at 06:33:41PM +0100, Paolo Bonzini wrote:
>> On 06/12/2013 19:49, Marcelo Tosatti wrote:
>>>>> You'll have with your patches (without them it's worse of course):
>>>>>
>>>>>    RAM offset    physical address   host node
>>>>>    0-3840M       0-3840M            host node 0
>>>>>    4096M-4352M   4096M-4352M        host node 0
>>>>>    4352M-8192M   4352M-8192M        host node 1
>>>>>    3840M-4096M   8192M-8448M        host node 1
>>>>>
>>>>> So only 0-3G and 5-8G are aligned; 3G-5G and 8G-8.25G cannot use
>>>>> gigabyte pages because they are split across host nodes.
>>> AFAIK the TLB caches virt->phys translations, so why would the specifics
>>> of a given phys address be a factor in TLB caching?
>>
>> The problem is that "-numa mem" receives memory sizes and these do not
>> take into account the hole below 4G.
>>
>> Thus, two adjacent host-physical addresses (two adjacent ram_addr_t-s)
>> map to guest-physical addresses that are far apart, are assigned to
>> different guest nodes, and from there to different host nodes.  In the above
>> example this happens for 3G-5G.
> 
> The physical address, which is what the TLB uses, does not take node
> information into account.

Indeed.  What I should have written is "two adjacent host-virtual
addresses".

>> On second thought, this is not particularly important, or at least not
>> yet.  It's not really possible to control the NUMA policy for
>> hugetlbfs-allocated memory, right?
> 
> It is possible. I don't know what happens if conflicting NUMA policies
> are specified for different virtual address ranges that map to a single
> huge page.

So what will happen is that 3G-5G will use GB pages, but that memory will
not be on the requested node.
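
For what it's worth, the binding itself would look roughly like the sketch
below (not QEMU's code; the 1G hugetlbfs mount point, file name, sizes and
node numbers are made-up assumptions, and it needs -lnuma).  What the kernel
does when two such requests fall inside the same huge page is exactly the
open question:

#include <fcntl.h>
#include <numaif.h>         /* mbind(), MPOL_BIND; link with -lnuma */
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define GB (1024UL * 1024 * 1024)

int main(void)
{
    /* Assumes a hugetlbfs mount with pagesize=1G at /dev/hugepages-1G. */
    int fd = open("/dev/hugepages-1G/guest-ram", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, 2 * GB) < 0) { perror("ftruncate"); return 1; }

    char *ram = mmap(NULL, 2 * GB, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (ram == MAP_FAILED) { perror("mmap"); return 1; }

    /* Bind the first GB page to host node 0 ... */
    unsigned long node0 = 1UL << 0;
    if (mbind(ram, GB, MPOL_BIND, &node0, sizeof(node0) * 8, 0) < 0) {
        perror("mbind node 0");
    }

    /* ... and the second GB page to host node 1.  If these two ranges
     * instead landed on the *same* huge page, the policies would conflict. */
    unsigned long node1 = 1UL << 1;
    if (mbind(ram + GB, GB, MPOL_BIND, &node1, sizeof(node1) * 8, 0) < 0) {
        perror("mbind node 1");
    }
    return 0;
}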

Paolo
