dpaa_sec needs to translate between physical and virtual addresses.
The V to P translation is relatively fast, as memory is managed in
contiguous segments.

The result of each V to P translation is used to update the DPAA iova
table, which should be kept up to date by a mem event callback, but is
not. This way, the DPAA iova table ends up with entries for all the
needed memory ranges.

With this patch, dpaa_mem_ptov can always resolve addresses through
dpaax_iova_table_get_va, keeping the P to V lookup on its fast path.
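
For illustration, here is a minimal sketch of the resulting P to V fast
path. It is based on the public dpaax iova table API and assumes
rte_mem_iova2virt as the slow-path fallback; the exact dpaa_sec helper
may differ in detail.

    #include <rte_memory.h>          /* rte_mem_iova2virt() */
    #include <dpaax_iova_table.h>    /* dpaax_iova_table_get_va() */

    /* Sketch only, not the exact driver code. */
    static inline void *
    dpaa_mem_ptov(rte_iova_t paddr)
    {
            void *va;

            /* Fast path: the dpaax iova table is populated by every
             * successful dpaa_mem_vtop() call (see the diff below).
             */
            va = dpaax_iova_table_get_va(paddr);
            if (va != NULL)
                    return va;

            /* Slow path: fall back to the EAL-wide P to V lookup. */
            return rte_mem_iova2virt(paddr);
    }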

Fixes: 5a7dbb934d75 ('dpaa: enable dpaax library')
Cc: sta...@dpdk.org
Cc: akhil.go...@nxp.com
Cc: hemant.agra...@nxp.com

Signed-off-by: Thierry Herbelot <thierry.herbe...@6wind.com>
---
 drivers/crypto/dpaa_sec/dpaa_sec.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index 122c80a072ff..22b8b1d63ce0 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -38,6 +38,7 @@
 #include <rte_dpaa_bus.h>
 #include <dpaa_sec.h>
 #include <dpaa_sec_log.h>
+#include <dpaax_iova_table.h>
 
 enum rta_sec_era rta_sec_era;
 
@@ -100,8 +101,10 @@ dpaa_mem_vtop(void *vaddr)
        const struct rte_memseg *ms;
 
        ms = rte_mem_virt2memseg(vaddr, NULL);
-       if (ms)
+       if (ms) {
+               dpaax_iova_table_update(ms->iova, (void *)ms->addr_64, ms->len);
                return ms->iova + RTE_PTR_DIFF(vaddr, ms->addr);
+       }
        return (size_t)NULL;
 }
 
-- 
2.20.1
