> On Feb 16, 2021, at 3:29 PM, Jan Beulich <jbeul...@suse.com> wrote:
> 
> On 16.02.2021 11:28, George Dunlap wrote:
>> --- /dev/null
>> +++ b/docs/hypervisor-guide/memory-allocation-functions.rst
>> @@ -0,0 +1,118 @@
>> +.. SPDX-License-Identifier: CC-BY-4.0
>> +
>> +Xenheap memory allocation functions
>> +===================================
>> +
>> +In general Xen contains two pools (or "heaps") of memory: the *xen
>> +heap* and the *dom heap*.  Please see the comment at the top of
>> +``xen/common/page_alloc.c`` for the canonical explanation.
>> +
>> +This document describes the various functions available to allocate
>> +memory from the xen heap: their properties and rules for when they should be
>> +used.
> 
> Irrespective of your subsequent indication of you disliking the
> proposal (which I understand only affects the guidelines further
> down anyway) I'd like to point out that vmalloc() does not
> allocate from the Xen heap. Therefore a benefit of always
> recommending use of xvmalloc() would be that the function could
> fall back to vmalloc() (and hence the larger domain heap) when
> xmalloc() failed.

OK, that’s good to know.

So just trying to think this through: address space is the limiting factor for how 
big the xenheap can be, right?  Presumably “vmap” space is also limited, and 
will be much smaller?  So in a sense the “fallback” is less about getting more 
memory than about using up that extra little bit of virtual address space?

Another question that this raises: are there times when it’s advantageous to 
specify which heap to allocate from?  If there are good reasons for allocations 
to be in the xenheap or in the domheap / vmap area, then the guidelines should 
probably say so as well.

And, of course, will the whole concept of the xenheap / domheap split go away 
if we ever get rid of the 1:1 map?

> 
>> +TLDR guidelines
>> +---------------
>> +
>> +* By default, ``xvmalloc`` (or its helper cognates) should be used
>> +  unless you know you have specific properties that need to be met.
>> +
>> +* If you need memory which needs to be physically contiguous, and may
>> +  be larger than ``PAGE_SIZE``...
>> +  
>> +  - ...and is order 2, use ``alloc_xenheap_pages``.
>> +    
>> +  - ...and is not order 2, use ``xmalloc`` (or its helper cognates)..
> 
> ITYM "an exact power of 2 number of pages"?

Yes, I’ll fix that.
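As a sanity check on the TLDR guidelines above (with the "order 2" wording corrected to "an exact power of 2 number of pages"), the decision tree might be sketched like this. The selector function itself is hypothetical, purely for illustration; only the allocator names in the returned strings correspond to real Xen functions:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define PAGE_SIZE 4096UL  /* illustrative; real value is per-arch */

/* Return nonzero iff nr is an exact power of 2. */
static int is_pow2(unsigned long nr)
{
    return nr && !(nr & (nr - 1));
}

/*
 * Hypothetical selector mirroring the TLDR guidelines; not Xen code.
 * need_phys_contig: caller requires physically contiguous memory.
 */
static const char *choose_allocator(size_t bytes, int need_phys_contig)
{
    size_t pages = (bytes + PAGE_SIZE - 1) / PAGE_SIZE;

    if ( need_phys_contig && bytes > PAGE_SIZE )
        /* Exact power-of-2 number of pages: use the page allocator. */
        return is_pow2(pages) ? "alloc_xenheap_pages" : "xmalloc";

    if ( !need_phys_contig && bytes > PAGE_SIZE )
        return "vmalloc";

    if ( bytes < PAGE_SIZE )
        return "xmalloc";

    /* Default when no special property is required. */
    return "xvmalloc";
}
```

For example, a 3-page physically contiguous request would land on ``xmalloc``, while a 4-page one would land on ``alloc_xenheap_pages``.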

> 
>> +* If you don't need memory to be physically contiguous, and know the
>> +  allocation will always be larger than ``PAGE_SIZE``, you may use
>> +  ``vmalloc`` (or one of its helper cognates).
>> +
>> +* If you know that allocation will always be less than ``PAGE_SIZE``,
>> +  you may use ``xmalloc``.
> 
> As per Julien's and your own replies, this wants to be "minimum
> possible page size", which of course depends on where in the
> tree the piece of code is to live. (It would be "maximum
> possible page size" in the earlier paragraph.)

I’ll see if I can clarify this.
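One way to capture Jan's minimum/maximum page-size point: in arch-independent code, "always less than ``PAGE_SIZE``" is only a safe claim against the smallest granule any target supports, while "may be larger than ``PAGE_SIZE``" has to be judged against the largest. A sketch, with the constants and helpers entirely hypothetical:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Hypothetical per-architecture page-size bounds, for illustration
 * only; real values depend on the architecture's supported granules.
 */
#define MIN_PAGE_SIZE (4UL << 10)   /* e.g. smallest granule, 4 KiB */
#define MAX_PAGE_SIZE (64UL << 10)  /* e.g. largest granule, 64 KiB */

/*
 * "Always less than PAGE_SIZE" must hold even for the minimum
 * possible granule to be safe in tree-wide code.
 */
static int always_subpage(size_t bytes)
{
    return bytes < MIN_PAGE_SIZE;
}

/*
 * "May be larger than PAGE_SIZE" must be assumed once the size can
 * exceed the maximum possible granule.
 */
static int may_exceed_page(size_t bytes)
{
    return bytes > MAX_PAGE_SIZE;
}
```

So an 8 KiB allocation is neither safely sub-page nor guaranteed multi-page under these assumed bounds, which is exactly the ambiguity the guidelines need to spell out.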

> 
>> +Properties of various allocation functions
>> +------------------------------------------
>> +
>> +Ultimately, the underlying allocator for all of these functions is
>> +``alloc_xenheap_pages``.  They differ on several different properties:
>> +
>> +1. What underlying allocation sizes are.  This in turn has an effect
>> +   on:
>> +
>> +   - How much memory is wasted when requested size doesn't match
>> +
>> +   - How such allocations are affected by memory fragmentation
>> +
>> +   - How such allocations affect memory fragmentation
>> +
>> +2. Whether the underlying pages are physically contiguous
>> +
>> +3. Whether allocation and deallocation require the cost of mapping and
>> +   unmapping
>> +
>> +``alloc_xenheap_pages`` will allocate a physically contiguous set of
>> +pages on orders of 2.  No mapping or unmapping is done.
> 
> That's the case today, but meant to change rather sooner than later
> (when the 1:1 map disappears).

Is that the kind of thing we want to add to this document?  I suppose it 
would be good to write the guidelines now such that they produce code which is 
as easy as possible to adapt to the new way of doing things.
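On the size-rounding consequence of item 1 above: because ``alloc_xenheap_pages`` allocates in power-of-2 numbers of pages, a request just over a power-of-2 boundary wastes almost half the allocation. A quick illustration, using a get_order-style helper written here from scratch (not the Xen implementation):

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SHIFT 12                      /* illustrative 4 KiB pages */
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Smallest order such that (1 << order) pages cover `bytes`. */
static unsigned int order_from_bytes(size_t bytes)
{
    unsigned int order = 0;
    size_t pages = (bytes + PAGE_SIZE - 1) >> PAGE_SHIFT;

    while ( (1UL << order) < pages )
        order++;
    return order;
}

/* Bytes allocated beyond what was requested, for an order allocation. */
static size_t waste_for(size_t bytes)
{
    return ((1UL << order_from_bytes(bytes)) << PAGE_SHIFT) - bytes;
}
```

A 5-page (20 KiB) request, for instance, rounds up to order 3 (8 pages), wasting 3 pages; a 9-page request rounds up to 16, wasting 7. This is the fragmentation/waste trade-off the properties section is getting at.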

 -George
