-----Original Message-----
From: Ding, Xuan <xuan.d...@intel.com>
Sent: Wednesday, September 29, 2021 10:41 AM
To: dev@dpdk.org; Burakov, Anatoly <anatoly.bura...@intel.com>;
maxime.coque...@redhat.com; Xia, Chenbo <chenbo....@intel.com>
Cc: Hu, Jiayu <jiayu...@intel.com>; Jiang, Cheng1 <cheng1.ji...@intel.com>;
Richardson, Bruce <bruce.richard...@intel.com>; Pai G, Sunil
<sunil.pa...@intel.com>; Wang, Yinan <yinan.w...@intel.com>; Yang,
YvonneX <yvonnex.y...@intel.com>; Ding, Xuan <xuan.d...@intel.com>
Subject: [PATCH v6 2/2] vhost: enable IOMMU for async vhost
The use of IOMMU has many advantages, such as isolation and address
translation. This patch extends the capability of the DMA engine to use
IOMMU if the DMA engine is bound to vfio.
When the memory table is set, the guest memory will be mapped into the
default container of DPDK.
Signed-off-by: Xuan Ding <xuan.d...@intel.com>
---
+async_dma_map(struct rte_vhost_mem_region *region, bool *dma_map_success,
+		bool do_map)
+{
+	uint64_t host_iova;
+	int ret = 0;
+
+	host_iova = rte_mem_virt2iova((void *)(uintptr_t)region->host_user_addr);
+	if (do_map) {
+		/* Add mapped region into the default container of DPDK. */
+		ret = rte_vfio_container_dma_map(RTE_VFIO_DEFAULT_CONTAINER_FD,
+						 region->host_user_addr,
+						 host_iova,
+						 region->size);
+		*dma_map_success = ret == 0;
+
+		if (ret) {
+			/*
+			 * The DMA device may be bound to a kernel driver, in
+			 * which case we don't need to program the IOMMU
+			 * manually. However, if no device is bound with
+			 * vfio/uio in DPDK, and the vfio kernel module is
+			 * loaded, the API will still be called and return
+			 * with ENODEV/ENOTSUP.
+			 *
+			 * DPDK vfio only returns ENODEV/ENOTSUP in very
+			 * similar situations (vfio either unsupported, or
+			 * supported but no devices found). Either way, no
+			 * mappings could be performed. We treat it as a
+			 * normal case in the async path.
+			 */