On 5/10/19 10:43 PM, Julien Grall wrote:
Hi,
On 10/05/2019 21:51, Stefano Stabellini wrote:
On Tue, 7 May 2019, Julien Grall wrote:
Hi Stefano,
On 4/30/19 10:02 PM, Stefano Stabellini wrote:
Reserved memory regions are automatically remapped to dom0. Their device
tree nodes are also added to the dom0 device tree. However, the dom0 memory
node is not currently extended to cover the ranges of the reserved-memory
regions, as required by the spec. This commit fixes it.
AFAICT, this does not cover the problem mentioned by Amit in [1].
What do you think is required to fix Amit's problem?
I haven't investigated the problem fully enough to answer the question
here, although I provided some insights in:
<b293d89c-9ed1-2033-44e5-227643ae1...@arm.com>
But I am still not happy with the approach taken for the reserved-memory
regions in this series. As I pointed out before, they are just normal
memory
that was reserved for other purposes (CMA, framebuffer...).
Treating them as "device" from Xen's POV is a clear abuse of the
meaning and I
don't believe it is a viable solution long term.
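For reference, the devicetree reserved-memory binding describes such a region as ordinary RAM carved out under a /reserved-memory node; a minimal sketch of a static framebuffer carveout (node name, addresses and sizes are all made up for illustration) would be:

```dts
/ {
	reserved-memory {
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;

		/* Hypothetical static carveout: this is normal RAM that the
		 * OS allocator must leave alone, not a device MMIO range. */
		framebuffer@78000000 {
			reg = <0x0 0x78000000 0x0 0x00800000>;
		};
	};
};
```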
If we don't consider "reusable" memory regions as part of the
discussion, the distinction becomes more philosophical than practical:
- Xen is not supposed to use them for anything
- they are only given to the VM configured for them
I don't see much of a difference with MMIO regions, except for the
expected pagetable attributes: i.e. cacheable, not-cacheable. But even
in that case, there could be reasonable use cases for non-cacheable
mappings of reserved-memory regions, even if reserved-memory regions are
"normal" memory.
Could you please help me understand why you see them so differently, as
far as saying that "treating them as "device" from Xen POV is a clear
abuse of the meaning"?
Obviously if you take half of the picture, then it makes things easier.
However, we are not here to discuss half of the picture but the full one
(even if in the end you only implement half of it).
Indeed, some of the regions may have a "reusable" property allowing
the OS
to use them until they are claimed by the device driver owning the
region. I
don't know how Linux (or any other OS) is using it today, but I don't
see what
would prevent it from using them as hypercall buffers. This would
obviously not
work because they are not actual RAM from Xen's POV.
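For context, the binding expresses this with the "reusable" property; a hypothetical node (all names and values made up) would look like:

```dts
/* A "reusable" region: the OS may use the memory for its own purposes
 * (which is what would make e.g. hypercall buffers possible) until the
 * owning driver reclaims it under the binding's semantics. */
multimedia@66000000 {
	compatible = "shared-dma-pool";
	reusable;
	reg = <0x0 0x66000000 0x0 0x04000000>;
};
```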
I haven't attempted to handle "reusable" reserved-memory regions
because I don't have a test environment and/or a use-case for them. In
other words, I don't have any "reusable" reserved-memory regions in any
of the boards (Xilinx and not Xilinx) I have access to. I could add a
warning if we find a "reusable" reserved-memory region at boot.
Don't get me wrong, I don't ask for the implementation now, so a warning
would be fine here. However, you need at least to show me some grounds
that re-usable memory can be supported with your solution, or that it is
not a concern for Xen at all.
Nonetheless, if you have a concrete suggestion which doesn't require a
complete rework of this series, I can try to put extra effort to handle
this case even if it is not a benefit to my employer. I am also open to
the possibility of dropping patches 6-10 from the series.
I don't think the series as it is would allow us to support re-usable
memory. However, I haven't spent enough time to understand how this
could possibly be dealt with, so I am happy to be proved wrong.
I thought a bit more about this series during the night. I do agree that
we need to improve the support of reserved-memory today, as we may
give memory to the allocator that could be exposed to a guest via a
different method (iomem). So carving out the reserved-memory regions
from the memory allocator is the right first step.
Now we have to differentiate the hardware domain from the other guests.
I don't have any objection regarding the way to map reserved-memory
region to the hardware domain because this is completely internal to
Xen. However, I have some objections to the current interface for DomU:
1) It is still unclear how "reusable" property would fit in that story
2) It is definitely not possible for a user to use 'iomem' for a
reserved-memory region today because the partial Device-Tree doesn't
allow you to create a /reserved-memory node nor a /memory node
3) AFAIK, there is no way to prevent the hardware domain from using
the reserved region (status = "disabled" would not work).
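For comparison, the only user-facing knob today is the xl 'iomem' option, which takes hexadecimal frame numbers and says nothing about creating the corresponding /reserved-memory or /memory nodes in the partial Device-Tree. A sketch (the frame numbers are hypothetical):

```
# xl guest config: map 0x10 machine frames starting at MFN 0x66000
# (physical address 0x66000000) 1:1 into the guest, since no @GFN
# is given.
iomem = [ "0x66000,10" ]
```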
So, IMHO, the guest support for reserved-memory is not in shape. I
think it would be best if we don't permit reserved-memory regions in
the iomem rangeset. This would avoid tying us to an interface until
we figure out the correct plan for guests.
With that in place, I don't have a strong objection to patches 6-10.
In any case I think you should clearly spell out in the commit message
what kind of reserved-memory region is supported. For instance, by just
going through the binding, I have the feeling that those properties are
not actually supported:
1) "no-map" - It is used to tell the OS to not create a virtual
memory of the region as part of its standard mapping of system memory,
nor permit speculative access to it under any circumstances other than
under the control of the device driver using the region. On Arm64, Xen
will map reserved-memory as part of xenheap (i.e the direct mapping),
but carving out from xenheap would not be sufficient as we use 1GB block
for the mapping. So they may still be covered. I would assume this is
used for memory that needs to be mapped non-cacheable, so it is
potentially critical as Xen would map them cacheable in the stage-1
hypervisor page-tables.
2) "alloc-ranges": it is used to specify regions of memory where it
is acceptable to allocate memory from. This may not play well with the
Dom0 memory allocator.
3) "reusable": I mention here only for completeness. My
understanding is it could potentially be used for hypercall buffer. This
needs to be investigated.
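For concreteness, here is a sketch of how the first two properties appear in a device tree per the binding (all node names and values are hypothetical):

```dts
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	/* 1) "no-map": the region must never be part of the normal
	 *    system-memory mapping (here, that would include Xen's
	 *    direct map). */
	secure@50000000 {
		reg = <0x0 0x50000000 0x0 0x00100000>;
		no-map;
	};

	/* 2) "alloc-ranges": dynamic region with no fixed "reg"; the
	 *    OS picks a placement inside the given window at boot. */
	dyn-pool {
		compatible = "shared-dma-pool";
		size = <0x0 0x00400000>;
		alloc-ranges = <0x0 0x40000000 0x0 0x20000000>;
	};
};
```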
Cheers,
--
Julien Grall
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel