On 10/26/2018 12:20 PM, Jan Beulich wrote:
>>>> On 26.10.18 at 12:51, <george.dun...@citrix.com> wrote:
>> On 10/26/2018 10:56 AM, Jan Beulich wrote:
>>>>>> On 26.10.18 at 11:28, <wei.l...@citrix.com> wrote:
>>>> On Fri, Oct 26, 2018 at 03:16:15AM -0600, Jan Beulich wrote:
>>>>>>>> On 25.10.18 at 18:29, <andrew.coop...@citrix.com> wrote:
>>>>>> A split xenheap model means that data pertaining to other guests isn't
>>>>>> mapped in the context of this vcpu, so cannot be brought into the cache.
>>>>>
>>>>> It was not clear to me from Wei's original mail that the talk here was
>>>>> about "split" in the sense of "per-domain"; I was assuming the
>>>>> CONFIG_SEPARATE_XENHEAP mode instead.
>>>>
>>>> The split heap was indeed referring to CONFIG_SEPARATE_XENHEAP mode, but
>>>> what I wanted most is the partial direct map, which reduces the amount
>>>> of data mapped inside Xen's context -- the original idea was removing the
>>>> direct map, as discussed during one of the calls IIRC. I thought making
>>>> the partial direct map mode work, and making it as small as possible,
>>>> would get us 90% of the way there.
>>>>
>>>> The "per-domain" heap is a different work item.
>>>
>>> But if we mean to go that route, going (back) to the separate
>>> Xen heap model seems just like an extra complication to me.
>>> Yet I agree that this would remove the need for a fair chunk of
>>> the direct map. Otoh a statically partitioned Xen heap would
>>> bring back scalability issues which we had specifically meant to
>>> get rid of by moving away from that model.
>>
>> I think turning SEPARATE_XENHEAP back on would just be the first step.
>> We definitely would then need to sort things out so that it's scalable
>> again.
>>
>> After system set-up, the key difference between xenheap and domheap
>> pages is that xenheap pages are assumed to be always mapped (i.e., you
>> can keep a pointer to them and it will remain valid), whereas domheap
>> pages cannot be assumed to be mapped, and accesses need to be wrapped
>> with [un]map_domain_page().
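
[ To illustrate the difference in usage -- a rough sketch, with the
  signatures written from memory rather than checked against the tree,
  and "d" standing for whichever domain the memory is for: ]

    /* xenheap: always mapped, so the pointer can simply be kept. */
    void *p = alloc_xenheap_pages(0, 0);

    if ( p )
        memset(p, 0, PAGE_SIZE);      /* usable at any later point, too */

    /* domheap: no mapping guarantee, so every access gets wrapped. */
    struct page_info *pg = alloc_domheap_pages(d, 0, 0);

    if ( pg )
    {
        void *va = map_domain_page(page_to_mfn(pg));

        memset(va, 0, PAGE_SIZE);
        unmap_domain_page(va);        /* va must not be kept around */
    }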
>>
>> The basic solution involves having a xenheap virtual address mapping
>> area not tied to the physical layout of the memory.  domheap and xenheap
>> memory would have to come from the same pool, but xenheap would need to
>> be mapped into the xenheap virtual memory region before being returned.
> 
> Wouldn't this most easily be done by making alloc_xenheap_pages()
> call alloc_domheap_pages() and then vmap() the result? Of course
> we may need to grow the vmap area in that case.

I couldn't answer that question without a lot more digging. :-)  I'd
always assumed that the original reason for having the xenheap
direct-mapped on 32-bit was something to do with early-boot allocation;
if there is something tricky there, we'd need to special-case the
early-boot allocation somehow.
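
FWIW, ignoring the early-boot question, the shape I'd imagine for what
you describe is roughly the following -- purely illustrative and
untested, assuming vmap()'s mfn-array interface, and assuming something
like vmap_to_page() exists (or can be added) to get from the virtual
address back to the page on free:

    void *alloc_xenheap_pages(unsigned int order, unsigned int memflags)
    {
        struct page_info *pg = alloc_domheap_pages(NULL, order, memflags);
        mfn_t mfns[1u << order];      /* on-stack only for illustration */
        unsigned int i;
        void *va;

        if ( !pg )
            return NULL;

        for ( i = 0; i < (1u << order); i++ )
            mfns[i] = mfn_add(page_to_mfn(pg), i);

        /* Needs the vmap area grown, as you say. */
        va = vmap(mfns, 1u << order);
        if ( !va )
            free_domheap_pages(pg, order);

        return va;
    }

    void free_xenheap_pages(void *v, unsigned int order)
    {
        struct page_info *pg;

        if ( !v )
            return;

        pg = vmap_to_page(v);         /* hypothetical lookup helper */
        vunmap(v);
        free_domheap_pages(pg, order);
    }

The fiddly bits are looking the page back up before tearing down the
mapping, and the fact that none of this helps for anything allocated
before the vmap area is up -- which is exactly the early-boot worry.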

 -George
