> Consumer queue dequeuing must be guaranteed to complete fully before
> the tail is updated. A read barrier does not guarantee this; change it
> to a write barrier just before the tail update, which in practice
> guarantees the correct order of reads and writes.
>
> Signed-off-by: Juhamatti Kuusisaari <juhamatti.kuusisaari at coriant.com>
> ---
>  lib/librte_ring/rte_ring.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
> index eb45e41..14920af 100644
> --- a/lib/librte_ring/rte_ring.h
> +++ b/lib/librte_ring/rte_ring.h
> @@ -748,7 +748,7 @@ __rte_ring_sc_do_dequeue(struct rte_ring *r, void **obj_table,
>
>  	/* copy in table */
>  	DEQUEUE_PTRS();
> -	rte_smp_rmb();
> +	rte_smp_wmb();
>
>  	__RING_STAT_ADD(r, deq_success, n);
>  	r->cons.tail = cons_next;
> --
Acked-by: Konstantin Ananyev <konstantin.ananyev at intel.com>

> 2.9.0
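For readers outside the DPDK tree, the ordering concern above can be sketched in portable C11 atomics rather than the `rte_smp_rmb()`/`rte_smp_wmb()` macros. This is a minimal, hypothetical single-producer/single-consumer ring (names `spsc_ring`, `spsc_enqueue`, `spsc_dequeue` and the size are mine, not DPDK's): the consumer publishes its tail with a release store, so the slot reads before it cannot be reordered past the point where the producer may reuse the slot. A release store orders both prior loads and prior stores, which sidesteps the rmb-vs-wmb question entirely; whether a plain `rte_smp_wmb()` suffices for the load-before-store case is exactly what the patch hedges on with "in practice".

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical minimal SPSC ring for illustration only; this is NOT
 * the actual rte_ring implementation. RING_SIZE must be a power of
 * two so the index can wrap with a mask. */
#define RING_SIZE 8

struct spsc_ring {
	void *slots[RING_SIZE];
	_Atomic size_t head;	/* written by producer, read by consumer */
	_Atomic size_t tail;	/* written by consumer, read by producer */
};

static int spsc_enqueue(struct spsc_ring *r, void *obj)
{
	size_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
	size_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);

	if (head - tail == RING_SIZE)
		return -1;				/* ring full */
	r->slots[head & (RING_SIZE - 1)] = obj;
	/* Release: the slot store above is visible before the new head. */
	atomic_store_explicit(&r->head, head + 1, memory_order_release);
	return 0;
}

static int spsc_dequeue(struct spsc_ring *r, void **obj)
{
	size_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
	size_t head = atomic_load_explicit(&r->head, memory_order_acquire);

	if (head == tail)
		return -1;				/* ring empty */
	*obj = r->slots[tail & (RING_SIZE - 1)];
	/* Analogue of the barrier before r->cons.tail = cons_next in the
	 * patch: the slot read above must complete before the updated
	 * tail lets the producer overwrite that slot. */
	atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
	return 0;
}
```

Single-threaded usage still exercises the API shape: enqueue an object, dequeue it back, and observe empty/full return codes.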