On 20/09/2021 12:24, Conor Walsh wrote:

From: Konstantin Ananyev <konstantin.anan...@intel.com>

Few changes in ioat sample behaviour:
- Always do SW copy for packet metadata (mbuf fields)
- Always use same lcore for both DMA requests enqueue and dequeue

Main reasons for that:
a) it is safer, as idxd PMD doesn't support MT safe enqueue/dequeue (yet).
b) sort of more apples to apples comparison with sw copy.
c) from my testing things are faster that way.

Signed-off-by: Konstantin Ananyev <konstantin.anan...@intel.com>
---
  examples/ioat/ioatfwd.c | 185 ++++++++++++++++++++++------------------
  1 file changed, 101 insertions(+), 84 deletions(-)

diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index b3977a8be5..1498343492 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -331,43 +331,36 @@ update_mac_addrs(struct rte_mbuf *m, uint32_t dest_portid)
    /* Perform packet copy there is a user-defined function. 8< */
  static inline void
-pktmbuf_sw_copy(struct rte_mbuf *src, struct rte_mbuf *dst)
+pktmbuf_metadata_copy(const struct rte_mbuf *src, struct rte_mbuf *dst)
  {
-    /* Copy packet metadata */
-    rte_memcpy(&dst->rearm_data,
-        &src->rearm_data,
-        offsetof(struct rte_mbuf, cacheline1)
-        - offsetof(struct rte_mbuf, rearm_data));
+    dst->data_off = src->data_off;
+    memcpy(&dst->rx_descriptor_fields1, &src->rx_descriptor_fields1,
+        offsetof(struct rte_mbuf, buf_len) -
+        offsetof(struct rte_mbuf, rx_descriptor_fields1));
+}
-    /* Copy packet data */
+/* Copy packet data */
+static inline void
+pktmbuf_sw_copy(struct rte_mbuf *src, struct rte_mbuf *dst)
+{
      rte_memcpy(rte_pktmbuf_mtod(dst, char *),
          rte_pktmbuf_mtod(src, char *), src->data_len);
  }
  /* >8 End of perform packet copy there is a user-defined function. */

Might need to redo these snippet markers as the function is now split in two.

Will rework this with the overall documentation update after moving to dmafwd.
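For reference, the metadata/payload split above can be sketched standalone with a toy struct (field names here are hypothetical stand-ins, not the real struct rte_mbuf layout):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Toy stand-in for struct rte_mbuf: just enough fields to show the split. */
struct toy_mbuf {
	uint16_t data_off;
	uint32_t pkt_len;	/* "rx descriptor" style metadata */
	uint16_t data_len;
	char data[64];		/* packet payload */
};

/* SW copy of metadata only: in the reworked sample this is always done in
 * software, while the payload copy may be offloaded to the DMA engine. */
static inline void
toy_metadata_copy(const struct toy_mbuf *src, struct toy_mbuf *dst)
{
	dst->data_off = src->data_off;
	dst->pkt_len = src->pkt_len;
	dst->data_len = src->data_len;
}

/* Full SW copy: metadata copy plus payload copy. */
static inline void
toy_sw_copy(const struct toy_mbuf *src, struct toy_mbuf *dst)
{
	toy_metadata_copy(src, dst);
	memcpy(dst->data, src->data, src->data_len);
}
```

The point of the split is that the SW-copy path composes the two helpers, while the HW-copy path calls only the metadata helper and hands the payload to the DMA device.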



<snip>

+static inline uint32_t
+ioat_dequeue(struct rte_mbuf *src[], struct rte_mbuf *dst[], uint32_t num,
+    uint16_t dev_id)
+{
+    int32_t rc;

rc should be uint32_t, but this is removed in patch 4 of this set during the change from raw to dma so it shouldn't really matter.

I believe int32_t is correct here, since the return type of rte_ioat_completed_ops() is "int". Otherwise we would not be able to error-check the return.

If rc is negative, we set it to 0 before returning. This ensures a non-negative value, which can safely be converted to uint32_t on return.

Thanks for the review, Conor!


+    /* Dequeue the mbufs from IOAT device. Since all memory
+     * is DPDK pinned memory and therefore all addresses should
+     * be valid, we don't check for copy errors
+     */
+    rc = rte_ioat_completed_ops(dev_id, num, NULL, NULL,
+        (void *)src, (void *)dst);
+    if (rc < 0) {
+        RTE_LOG(CRIT, IOAT,
+            "rte_ioat_completed_ops(%hu) failed, error: %d\n",
+            dev_id, rte_errno);
+        rc = 0;
+    }
+    return rc;
+}
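The signed-rc pattern discussed above can be shown in isolation with a stub in place of rte_ioat_completed_ops() (the stub is hypothetical; the real call takes the device id and op arrays):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stub for rte_ioat_completed_ops(): returns the number of
 * completed operations, or a negative value on error. */
static int
completed_ops_stub(int fail)
{
	return fail ? -1 : 3;
}

/* Keep rc signed so the negative error case is detectable, clamp it to 0
 * on error (after logging), then the value is guaranteed non-negative and
 * the conversion to uint32_t at return is safe. */
static uint32_t
dequeue_sketch(int fail)
{
	int32_t rc = completed_ops_stub(fail);

	if (rc < 0)
		rc = 0;	/* error already reported; zero completions */
	return (uint32_t)rc;
}
```

Declaring rc as uint32_t instead would make the `rc < 0` check always false, silently discarding the error path.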

Reviewed-by: Conor Walsh <conor.wa...@intel.com>
