In __rte_ring_move_prod_head, move the __atomic_load_n up and out of the
do {} while loop: on failure, __atomic_compare_exchange_n already updates
old_head with the freshly observed value, so issuing another load is costly
and not necessary.
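For illustration, a minimal standalone sketch (not part of the patch;
move_head and its parameters are made up for this example) of the pattern
the change relies on: with weak=0, __atomic_compare_exchange_n writes the
value it actually observed back into its expected argument on failure, so a
single load before the loop is sufficient:

#include <stdint.h>

/* Illustrative only: a simplified head-advance loop mirroring the
 * post-patch shape of __rte_ring_move_prod_head.
 */
static uint32_t
move_head(uint32_t *head, uint32_t n)
{
	uint32_t old_head, new_head;
	int success;

	/* One load before the loop; the acquire pairs with the
	 * store-release of the opposite index in update_tail.
	 */
	old_head = __atomic_load_n(head, __ATOMIC_ACQUIRE);
	do {
		new_head = old_head + n;
		/* On failure the CAS stores the observed value of *head
		 * into old_head, so no reload at the top of the loop.
		 */
		success = __atomic_compare_exchange_n(head, &old_head,
				new_head, 0, __ATOMIC_ACQUIRE,
				__ATOMIC_RELAXED);
	} while (success == 0);

	return old_head;
}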
This helps a little on latency, by about 1~5%.

Test result with the patch (two cores):
SP/SC bulk enq/dequeue (size: 8): 5.64
MP/MC bulk enq/dequeue (size: 8): 9.58
SP/SC bulk enq/dequeue (size: 32): 1.98
MP/MC bulk enq/dequeue (size: 32): 2.30

Fixes: 39368ebfc606 ("ring: introduce C11 memory model barrier option")
Cc: sta...@dpdk.org

Signed-off-by: Gavin Hu <gavin...@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagaraha...@arm.com>
Reviewed-by: Steve Capper <steve.cap...@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljed...@arm.com>
Reviewed-by: Jia He <justin...@arm.com>
Acked-by: Jerin Jacob <jerin.ja...@caviumnetworks.com>
Tested-by: Jerin Jacob <jerin.ja...@caviumnetworks.com>
---
 doc/guides/rel_notes/release_18_11.rst |  7 +++++++
 lib/librte_ring/rte_ring_c11_mem.h     | 10 ++++------
 2 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/doc/guides/rel_notes/release_18_11.rst b/doc/guides/rel_notes/release_18_11.rst
index 376128f..c9c2b86 100644
--- a/doc/guides/rel_notes/release_18_11.rst
+++ b/doc/guides/rel_notes/release_18_11.rst
@@ -69,6 +69,13 @@ New Features
   checked out against that dma mask and rejected if out of range. If more than
   one device has addressing limitations, the dma mask is the more restricted one.
 
+* **Updated the ring library with C11 memory model.**
+
+  Updated the ring library with C11 memory model including the following changes:
+
+  * Synchronize the load and store of the tail
+  * Move the atomic load of head above the loop
+
 * **Added hot-unplug handle mechanism.**
 
   ``rte_dev_hotplug_handle_enable`` and ``rte_dev_hotplug_handle_disable`` are
diff --git a/lib/librte_ring/rte_ring_c11_mem.h b/lib/librte_ring/rte_ring_c11_mem.h
index 52da95a..7bc74a4 100644
--- a/lib/librte_ring/rte_ring_c11_mem.h
+++ b/lib/librte_ring/rte_ring_c11_mem.h
@@ -61,13 +61,11 @@ __rte_ring_move_prod_head(struct rte_ring *r, unsigned int is_sp,
 	unsigned int max = n;
 	int success;
 
+	*old_head = __atomic_load_n(&r->prod.head, __ATOMIC_ACQUIRE);
 	do {
 		/* Reset n to the initial burst count */
 		n = max;
 
-		*old_head = __atomic_load_n(&r->prod.head,
-					__ATOMIC_ACQUIRE);
-
 		/* load-acquire synchronize with store-release of ht->tail
 		 * in update_tail.
 		 */
@@ -93,6 +91,7 @@ __rte_ring_move_prod_head(struct rte_ring *r, unsigned int is_sp,
 		if (is_sp)
 			r->prod.head = *new_head, success = 1;
 		else
+			/* on failure, *old_head is updated */
 			success = __atomic_compare_exchange_n(&r->prod.head,
 					old_head, *new_head,
 					0, __ATOMIC_ACQUIRE,
@@ -135,13 +134,11 @@ __rte_ring_move_cons_head(struct rte_ring *r, int is_sc,
 	int success;
 
 	/* move cons.head atomically */
+	*old_head = __atomic_load_n(&r->cons.head, __ATOMIC_ACQUIRE);
 	do {
 		/* Restore n as it may change every loop */
 		n = max;
 
-		*old_head = __atomic_load_n(&r->cons.head,
-					__ATOMIC_ACQUIRE);
-
 		/* this load-acquire synchronize with store-release of ht->tail
 		 * in update_tail.
 		 */
@@ -166,6 +163,7 @@ __rte_ring_move_cons_head(struct rte_ring *r, int is_sc,
 		if (is_sc)
 			r->cons.head = *new_head, success = 1;
 		else
+			/* on failure, *old_head will be updated */
 			success = __atomic_compare_exchange_n(&r->cons.head,
 					old_head, *new_head,
 					0, __ATOMIC_ACQUIRE,
-- 
2.7.4