Hi Jonah,
On 7/31/2024 7:09 AM, Jonah Palmer wrote:
Let me clarify; correct me if I'm wrong:
1) The IOVA allocator is still implemented via a tree; we just don't
   need to store how the IOVA is used.
2) A dedicated GPA -> IOVA tree, updated via listeners and used in the
   datapath SVQ translation.
3) A linear mapping or another SVQ -> IOVA tree used for SVQ.
His solution is composed of three trees:
1) One for the IOVA allocations, so we know where to allocate new ranges.
2) One for the GPA -> SVQ IOVA translations.
3) Another one for the SVQ vring translations.
To check my understanding, say we have these 3 memory mappings:

         HVA                                   GPA                          IOVA
    ----------------------------------------------------------------------------------------------
Map (1) [0x7f7903e00000, 0x7f7983e00000)  [0x0, 0x80000000)            [0x1000, 0x80000000)
    (2) [0x7f7983e00000, 0x7f9903e00000)  [0x100000000, 0x2080000000)  [0x80001000, 0x2000001000)
    (3) [0x7f7903ea0000, 0x7f7903ec0000)  [0xfeda0000, 0xfedc0000)     [0x2000001000, 0x2000021000)
And then say when we go to unmap (e.g. vhost_vdpa_svq_unmap_ring)
we're given an HVA of 0x7f7903eb0000, which fits in both the first and
third mappings.
The correct one to remove here would be the third mapping, right? Not
only because the HVA range of the third mapping has a more "specific"
or "tighter" range fit given an HVA of 0x7f7903eb0000 (which, as I
understand, may not always be the case in other scenarios), but mainly
because the HVA->GPA translation would give GPA 0xfedb0000, which only
fits in the third mapping's GPA range. Am I understanding this correctly?
You're correct, we would still need a GPA -> IOVA tree for mapping and
unmapping guest memory. I talked to Eugenio this morning and I think he
is now aligned. Granted, this GPA tree covers only part of the IOVA
space and doesn't contain ranges for host-only memory (e.g. memory
backing SVQ descriptors or buffers). We could create API variants of
vhost_iova_tree_map_alloc() and vhost_iova_tree_map_remove() that not
only add the IOVA -> HVA range to the HVA tree but also manipulate the
GPA tree to maintain guest memory mappings; they would be invoked only
from the memory listener ops, so that this new API is distinguishable
from the one used in the SVQ mapping and unmapping path, which only
manipulates the HVA tree.
I think the only case you may need to pay attention to in the
implementation is the SVQ address translation path: when you come to an
HVA for translation, you need to tell apart which tree to look up. If
the HVA is backed by guest memory, you can use
qemu_ram_block_from_host() to infer the RAMBlock and then the GPA, and
do the lookup on the GPA tree; otherwise the HVA may come from the SVQ
mappings, where you'd have to search the HVA tree for a host-only range
before you can claim the HVA is a bogus/unmapped address. For now this
additional second lookup is sub-optimal but unavoidable, but I think
both of us agreed that you could implement this version first and look
for future opportunities to optimize the lookup performance on top.
---
In the case where the first mapping (GPA [0x0, 0x80000000)) is removed,
why do we use the word "reintroduce"? As I understand it, when we
remove a mapping, we're essentially invalidating the IOVA range
associated with that mapping, right? In other words, the IOVA ranges
here don't overlap, so removing a mapping whose HVA range overlaps
another mapping's HVA range shouldn't affect the other mapping, since
they have unique IOVA ranges. Is my understanding correct, or am I
missing something?
With the GPA tree I think this case should work fine. I've
double-checked the implementation of the vhost-vdpa iotlb and didn't
see a red flag there.
Thanks,
-Siwei