On 22-Apr-19 5:39 AM, kirankum...@marvell.com wrote:
From: Kiran Kumar K <kirankum...@marvell.com>
With the current KNI implementation, the kernel module works only in
IOVA=PA mode. This patch adds support for the kernel module to work
in IOVA=VA mode.

The idea is to obtain the physical address for an IOVA using the
iommu_iova_to_phys API, and then use phys_to_virt to convert that
physical address to a kernel virtual address.

With this approach we compared the performance against IOVA=PA and
observed no difference; the kernel itself appears to be the
bottleneck.

This approach will not work with kernel versions older than 4.4.0
because of API compatibility issues.
Signed-off-by: Kiran Kumar K <kirankum...@marvell.com>
---
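[For illustration only, not part of the quoted patch: such a
minimum-kernel requirement would typically be enforced with a
build-time version guard in the module source; a minimal sketch,
assuming the check is done at compile time:]

#include <linux/version.h>

#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 4, 0)
#error "KNI IOVA=VA support requires kernel 4.4.0 or newer"
#endif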
<snip>
+/* iova to kernel virtual address */
+static void *
+iova2kva(struct kni_dev *kni, void *pa)
+{
+	return phys_to_virt(iommu_iova_to_phys(kni->domain,
+				(dma_addr_t)pa));
+}
+
+static void *
+iova2data_kva(struct kni_dev *kni, struct rte_kni_mbuf *m)
+{
+	return phys_to_virt((iommu_iova_to_phys(kni->domain,
+				(dma_addr_t)m->buf_physaddr) +
+			     m->data_off));
+}
Does this account for mbufs crossing a page boundary? In IOVA-as-VA
mode, the mempool is likely allocated in one go, so the mempool
allocator will not take care to prevent mbufs from crossing page
boundaries. The data may very well start at the very end of one page
and continue at the beginning of the next, which will have a
different physical address.
--
Thanks,
Anatoly