1 Sharing with granted references
==================================
1-1 Buffer allocated @DomU
--------------------------
@DomU
alloc_xenballooned_pages(nr_pages, pages);
cur_ref = gnttab_claim_grant_reference(&priv_gref_head);
gnttab_grant_foreign_access_ref(cur_ref, otherend_id, ...);
<pass grant_ref_t[] to Dom0>
@Dom0
alloc_xenballooned_pages(nr_pages, pages);
gnttab_set_map_op(&map_ops[i], addr,
                  GNTMAP_host_map | GNTMAP_device_map,
                  grefs[i], otherend_id);
gnttab_map_refs(map_ops, NULL, pages, nr_pages);
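
To make the per-page loop explicit, below is a minimal sketch of the DomU
granting side. The helper name domu_grant_pages is mine, and it assumes
priv_gref_head was pre-filled with gnttab_alloc_grant_references();
error handling is omitted:

#include <xen/grant_table.h>
#include <xen/page.h>

/* DomU, sketch only: grant nr_pages ballooned pages to otherend_id. */
static void domu_grant_pages(domid_t otherend_id, struct page **pages,
                             grant_ref_t *grefs, int nr_pages,
                             grant_ref_t *priv_gref_head)
{
        int i;

        for (i = 0; i < nr_pages; i++) {
                grant_ref_t cur_ref =
                        gnttab_claim_grant_reference(priv_gref_head);

                /* Last argument 0: allow read/write mappings. */
                gnttab_grant_foreign_access_ref(cur_ref, otherend_id,
                                                xen_page_to_gfn(pages[i]), 0);
                grefs[i] = cur_ref;
        }
        /* grefs[] can now be passed to Dom0, e.g. via xenstore. */
}
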
1-2 Buffer allocated @Dom0
--------------------------
@Dom0
<the code below is equivalent to xen_alloc_ballooned_pages without
 PV MMU support as seen in the balloon driver; the difference is that
 the pages are explicitly allocated to be used for DMA>
dma_alloc_wc(dev, size, &dev_addr, GFP_KERNEL | __GFP_NOWARN);
HYPERVISOR_memory_op(XENMEM_populate_physmap, &reservation);
cur_ref = gnttab_claim_grant_reference(&priv_gref_head);
gnttab_grant_foreign_access_ref(cur_ref, otherend_id, ...);
<pass grant_ref_t[] to DomU>
@DomU
alloc_xenballooned_pages(nr_pages, pages);
gnttab_set_map_op(&map_ops[i], addr, GNTMAP_host_map,
                  grefs[i], otherend_id);
gnttab_map_refs(map_ops, NULL, pages, nr_pages);
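
The mapping side is the same in both cases (Dom0 in 1-1, DomU in 1-2), so
below is one minimal sketch covering both. The helper name
map_granted_pages is mine; error handling and the GNTMAP_device_map
variant from 1-1 are omitted:

#include <xen/balloon.h>
#include <xen/grant_table.h>
#include <xen/page.h>

/* Mapping side, sketch only: map nr_pages grants from otherend_id. */
static int map_granted_pages(domid_t otherend_id, grant_ref_t *grefs,
                             struct gnttab_map_grant_ref *map_ops,
                             struct page **pages, int nr_pages)
{
        int i, ret;

        /* Get gfns without backing to map the foreign pages into. */
        ret = alloc_xenballooned_pages(nr_pages, pages);
        if (ret)
                return ret;

        for (i = 0; i < nr_pages; i++) {
                phys_addr_t addr = (phys_addr_t)
                        pfn_to_kaddr(page_to_xen_pfn(pages[i]));

                gnttab_set_map_op(&map_ops[i], addr, GNTMAP_host_map,
                                  grefs[i], otherend_id);
        }
        return gnttab_map_refs(map_ops, NULL, pages, nr_pages);
}
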
2 Sharing with page transfers (GNTTABOP_transfer)
==================================================
FIXME: This use-case seems to be needed only when allocating physically
contiguous buffers at Dom0. For the reverse path, method 1-1 can be used.

This approach relies on the GNTTABOP_transfer API: "transfer <frame> to a
foreign domain. The foreign domain has previously registered its interest
in the transfer via <domid, ref>"; for full documentation see [1]. The
process of transferring pages is explained by Christopher Clark at [2],
and an implementation is available at [3], [4]. The relevant logic is in
xen/common/grant_table.c:gnttab_transfer.
Basic workflow as explained to me by Christopher (a sketch of the
sender's hypercall follows the list):
- The mfn starts as owned by the sending domain, and that domain removes
  any mappings of it from its page tables. Xen will enforce that the
  reference count must be low enough for the transfer to succeed.
- The receiving domain indicates interest in receiving a page by writing
  an entry in its grant table.
- You'll need to communicate the grant ref from the receiver to the
  sender (e.g. via xenstore or another existing channel).
- The sending domain invokes the hypercall, with the grant ref from the
  receiving domain.
- The sending domain notifies the receiving domain somehow that the
  transfer has completed (e.g. via an event channel or xenstore).
- Once the transfer has completed, the receiving domain will need to map
  the newly assigned page.
- Note: for the transfer, the receiving domain must have enough headroom
  to receive the new page, which means it must not have already allocated
  all of its memory quota prior to the transfer. Typically this can be
  ensured by freeing enough memory back to Xen before writing the grant
  ref.
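
For reference, below is a minimal sketch of the sender's hypercall,
assuming the receiver's <domid, ref> pair was already obtained
(transfer_frame is my naming; error handling is simplified):

#include <xen/interface/grant_table.h>
#include <asm/xen/hypercall.h>

/* Sending domain, sketch only: transfer one frame to receiving_domid. */
static int transfer_frame(xen_pfn_t mfn, domid_t receiving_domid,
                          grant_ref_t ref)
{
        struct gnttab_transfer xfer = {
                .mfn   = mfn,
                .domid = receiving_domid,
                .ref   = ref,
        };
        int rc = HYPERVISOR_grant_table_op(GNTTABOP_transfer, &xfer, 1);

        /* xfer.status is GNTST_okay when the transfer succeeded. */
        return rc ? rc : (xfer.status == GNTST_okay ? 0 : -EIO);
}
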
3 Sharing with page exchange (XENMEM_exchange)
==============================================
This API was pointed out to me by Stefano Stabellini as one of the
possible ways to achieve zero-copy and to share physically contiguous
buffers. It is used by the x86 SWIOTLB code (xen_create_contiguous_region,
[5]), but as per my understanding this API cannot be used on ARM as of
now [6]. Conclusion: not an option for ARM at the moment.
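
For completeness, below is what the call looks like when reduced to a
sketch of the x86 usage in [5] (exchange_for_contiguous is my naming and
I have not tried this myself): nr_pages order-0 frames are exchanged for
a single physically contiguous extent of the given order:

#include <xen/interface/memory.h>
#include <asm/xen/hypercall.h>

/* Sketch only, modelled on xen_create_contiguous_region in [5].
 * Assumes nr_pages == 1 << order so the exchange balances. */
static int exchange_for_contiguous(xen_pfn_t *pfns_in, xen_pfn_t *mfn_out,
                                   unsigned int nr_pages, unsigned int order)
{
        struct xen_memory_exchange exchange = {
                .in = {
                        .nr_extents   = nr_pages,
                        .extent_order = 0,
                        .domid        = DOMID_SELF,
                },
                .out = {
                        .nr_extents   = 1,
                        .extent_order = order,
                        .address_bits = 32, /* e.g. a DMA restriction */
                        .domid        = DOMID_SELF,
                },
        };
        int rc;

        set_xen_guest_handle(exchange.in.extent_start, pfns_in);
        set_xen_guest_handle(exchange.out.extent_start, mfn_out);

        rc = HYPERVISOR_memory_op(XENMEM_exchange, &exchange);
        return (rc == 0 && exchange.nr_exchanged == nr_pages) ? 0 : -EFAULT;
}
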
Comparison for display use-case
===============================
1 Number of grant references used
1-1 grant references: nr_pages
1-2 GNTTABOP_transfer: nr_pages
1-3 XENMEM_exchange: not an option
2 Effect of DomU crash on Dom0 (its mapped pages)
2-1 grant references: pages can be unmapped by Dom0, so Dom0 fully recovers
2-2 GNTTABOP_transfer: pages will be returned to the hypervisor, lost for Dom0
2-3 XENMEM_exchange: not an option
3 Security issues from sharing Dom0 pages to DomU
3-1 grant references: none
3-2 GNTTABOP_transfer: none
3-3 XENMEM_exchange: not an option
At the moment, approach 1 with granted references seems to be the winner
for sharing buffers both ways, i.e. Dom0 -> DomU and DomU -> Dom0.
Conclusion
==========
I would like to get some feedback from the community on which approach is
more suitable for sharing large buffers, and to have a clear vision of the
pros and cons of each one: please feel free to add other metrics I have
missed and to correct the ones I commented on. I would appreciate help
with comparing approaches 2 and 3, as I have little knowledge of these
APIs (2 seems to be addressed by Christopher, and 3 seems to be relevant
to what Konrad/Stefano do WRT SWIOTLB).
Thank you,
Oleksandr
[1] https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=xen/include/public/grant_table.h;h=018036e825f8f2999812cdb089f7fa2195789231;hb=HEAD#l414
[2] https://xenbits.xen.org/docs/4.9-testing/misc/grant-tables.txt
[3] https://xenbits.xen.org/hg/linux-2.6.18-xen.hg/file/7d14715efcac/drivers/xen/netfront
[4] https://xenbits.xen.org/hg/linux-2.6.18-xen.hg/file/7d14715efcac/drivers/xen/netback
[5] http://elixir.free-electrons.com/linux/latest/source/arch/x86/xen/mmu_pv.c#L2618
[6] https://lists.xenproject.org/archives/html/xen-devel/2015-12/msg02110.html