For S2M rings, ring->head is updated only by the sender, and the
eth_memif_tx function is called in the context of the sending thread.
The sender's loads of ring->head do not need to synchronize with the
sender's own stores, so the load can be relaxed.

Fixes: a2aafb9aa651 ("net/memif: optimize with one-way barrier")
Cc: phil.y...@arm.com
Cc: sta...@dpdk.org

Signed-off-by: Honnappa Nagarahalli <honnappa.nagaraha...@arm.com>
Reviewed-by: Phil Yang <phil.y...@arm.com>
---
 drivers/net/memif/rte_eth_memif.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
index 0d064c8fa..435c6345c 100644
--- a/drivers/net/memif/rte_eth_memif.c
+++ b/drivers/net/memif/rte_eth_memif.c
@@ -569,7 +569,13 @@ eth_memif_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
        mask = ring_size - 1;
 
        if (type == MEMIF_RING_S2M) {
-               slot = __atomic_load_n(&ring->head, __ATOMIC_ACQUIRE);
+               /* For S2M queues ring->head is updated by the sender and
+                * this function is called in the context of sending thread.
+                * The loads in the sender do not need to synchronize with
+                * its own stores. Hence, the following load can be a
+                * relaxed load.
+                */
+               slot = __atomic_load_n(&ring->head, __ATOMIC_RELAXED);
                n_free = ring_size - slot +
                                __atomic_load_n(&ring->tail, __ATOMIC_ACQUIRE);
        } else {
-- 
2.17.1
