On 22-Apr-19 7:15 AM, kirankum...@marvell.com wrote:
From: Kiran Kumar K <kirankum...@marvell.com>

With the current KNI implementation, the kernel module works only in
IOVA=PA mode. This patch adds support for the kernel module to work
in IOVA=VA mode.

The idea is to translate the IOVA to a physical address using the
iommu_iova_to_phys API, and then use the phys_to_virt API to convert
that physical address to a kernel virtual address.

We have compared the performance of this approach against IOVA=PA
mode and observed no difference; the kernel path itself appears to be
the dominant overhead.

This approach will not work with kernel versions older than 4.4.0
because of API compatibility issues.
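
(For illustration only, not part of the patch: the version dependency
would presumably be enforced with the usual compile-time guard in the
KNI kernel sources; the macro name below is hypothetical.)

#include <linux/version.h>

/* IOVA=VA translation relies on IOMMU APIs only usable since 4.4.0 */
#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 4, 0)
#define KNI_SUPPORT_IOVA_VA 1
#endif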

Signed-off-by: Kiran Kumar K <kirankum...@marvell.com>
---

<snip>

diff --git a/kernel/linux/kni/kni_net.c b/kernel/linux/kni/kni_net.c
index be9e6b0b9..e77a28066 100644
--- a/kernel/linux/kni/kni_net.c
+++ b/kernel/linux/kni/kni_net.c
@@ -35,6 +35,22 @@ static void kni_net_rx_normal(struct kni_dev *kni);
  /* kni rx function pointer, with default to normal rx */
  static kni_net_rx_t kni_net_rx_func = kni_net_rx_normal;

+/* iova to kernel virtual address */
+static void *
+iova2kva(struct kni_dev *kni, void *pa)
+{
+       return phys_to_virt(iommu_iova_to_phys(kni->domain,
+                               (uintptr_t)pa));
+}
+
+static void *
+iova2data_kva(struct kni_dev *kni, struct rte_kni_mbuf *m)
+{
+       return phys_to_virt((iommu_iova_to_phys(kni->domain,
+                                       (uintptr_t)m->buf_physaddr) +
+                            m->data_off));
+}
+

Apologies, I've accidentally responded to the previous version with this comment.

I don't see how this could possibly work: for any IOVA-contiguous chunk of memory, mbufs are allowed to cross page boundaries. In this function you're translating only the start address of the buffer, but there is no guarantee that the end of the buffer is on the same physical page.
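
To make that safe, the translation would have to either verify that
the whole buffer is physically contiguous or fall back to handling it
page by page. A rough sketch of the kind of check I mean (helper name
and placement are hypothetical, not taken from the patch):

#include <linux/iommu.h>
#include <linux/mm.h>
#include <linux/types.h>

/*
 * Hypothetical helper: returns true only if every page touched by the
 * IOVA range [iova, iova + len) maps to physical memory contiguous
 * with the first page, so that phys_to_virt() of the start address
 * can safely be used for the whole buffer.
 */
static bool
kni_iova_range_phys_contig(struct iommu_domain *domain,
			   dma_addr_t iova, size_t len)
{
	unsigned long page_off = offset_in_page(iova);
	dma_addr_t page_iova = iova - page_off;	/* page-aligned start */
	phys_addr_t first = iommu_iova_to_phys(domain, page_iova);
	size_t span = page_off + len;		/* bytes covered from page start */
	size_t off;

	for (off = PAGE_SIZE; off < span; off += PAGE_SIZE) {
		/* each further page must follow the first one physically */
		if (iommu_iova_to_phys(domain, page_iova + off) != first + off)
			return false;
	}
	return true;
}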

--
Thanks,
Anatoly
