On Sun, Apr 26, 2020 at 2:09 PM Gavin Hu <[email protected]> wrote:
>
> In acquiring a spinlock, cores repeatedly poll the lock variable.
> This polling is replaced by the rte_wait_until_equal API.
>
> Running the micro benchmark and the testpmd and l3fwd traffic tests
> on ThunderX2, Ampere eMAG80 and Arm N1SDP, everything went well and no
> notable performance gain or degradation was measured.
>
> Signed-off-by: Gavin Hu <[email protected]>
> Reviewed-by: Ruifeng Wang <[email protected]>
> Reviewed-by: Phil Yang <[email protected]>
> Reviewed-by: Steve Capper <[email protected]>
> Reviewed-by: Ola Liljedahl <[email protected]>
> Reviewed-by: Honnappa Nagarahalli <[email protected]>
> Tested-by: Pavan Nikhilesh <[email protected]>
Acked-by: Jerin Jacob <[email protected]>

> ---
>  lib/librte_eal/include/generic/rte_spinlock.h | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/lib/librte_eal/include/generic/rte_spinlock.h b/lib/librte_eal/include/generic/rte_spinlock.h
> index 87ae7a4f1..40fe49d5a 100644
> --- a/lib/librte_eal/include/generic/rte_spinlock.h
> +++ b/lib/librte_eal/include/generic/rte_spinlock.h
> @@ -65,8 +65,8 @@ rte_spinlock_lock(rte_spinlock_t *sl)
>
>  	while (!__atomic_compare_exchange_n(&sl->locked, &exp, 1, 0,
>  				__ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) {
> -		while (__atomic_load_n(&sl->locked, __ATOMIC_RELAXED))
> -			rte_pause();
> +		rte_wait_until_equal_32((volatile uint32_t *)&sl->locked,
> +			0, __ATOMIC_RELAXED);
>  		exp = 0;
>  	}
>  }
> --
> 2.17.1
>

