> > > Hi,
> > >
> > > Add performance test on all available cores to benchmark the
> > > scaling up performance and fairness of rw_lock.
> > >
> > > Fixes: af75078faf ("first public release")
> > > Cc: sta...@dpdk.org
> > >
> > > Suggested-by: Gavin Hu <gavin...@arm.com>
> > > Signed-off-by: Joyce Kong <joyce.k...@arm.com>
> > > Reviewed-by: Honnappa Nagarahalli <honnappa.nagaraha...@arm.com>
> > > Reviewed-by: Ola Liljedahl <ola.liljed...@arm.com>
> > > Reviewed-by: Gavin Hu <gavin...@arm.com>
> > > Reviewed-by: Ruifeng Wang <ruifeng.w...@arm.com>
> > > ---
> > >  test/test/test_rwlock.c | 71 +++++++++++++++++++++++++++++++++++++++++++++++++
> > >  1 file changed, 71 insertions(+)
> > >
> > > diff --git a/test/test/test_rwlock.c b/test/test/test_rwlock.c
> > > index 29171c4..4766c09 100644
> > > --- a/test/test/test_rwlock.c
> > > +++ b/test/test/test_rwlock.c
> > > @@ -4,6 +4,7 @@
> > >
> > >  #include <stdio.h>
> > >  #include <stdint.h>
> > > +#include <inttypes.h>
> > >  #include <unistd.h>
> > >  #include <sys/queue.h>
> > >
> > > @@ -44,6 +45,7 @@
> > >
> > >  static rte_rwlock_t sl;
> > >  static rte_rwlock_t sl_tab[RTE_MAX_LCORE];
> > > +static rte_atomic32_t synchro;
> > >
> > >  static int
> > >  test_rwlock_per_core(__attribute__((unused)) void *arg)
> > > @@ -65,6 +67,72 @@ test_rwlock_per_core(__attribute__((unused)) void *arg)
> > >  	return 0;
> > >  }
> > >
> > > +static rte_rwlock_t lk = RTE_RWLOCK_INITIALIZER;
> > > +static uint64_t lock_count[RTE_MAX_LCORE] = {0};
> > > +
> > > +#define TIME_MS 100
> > > +
> > > +static int
> > > +load_loop_fn(__attribute__((unused)) void *arg)
> > > +{
> > > +	uint64_t time_diff = 0, begin;
> > > +	uint64_t hz = rte_get_timer_hz();
> > > +	uint64_t lcount = 0;
> > > +	const unsigned int lcore = rte_lcore_id();
> > > +
> > > +	/* wait synchro for slaves */
> > > +	if (lcore != rte_get_master_lcore())
> > > +		while (rte_atomic32_read(&synchro) == 0)
> > > +			;
> > > +
> > > +	begin = rte_rdtsc_precise();
> > > +	while (time_diff < hz * TIME_MS / 1000) {
> > > +		rte_rwlock_write_lock(&lk);
> > > +		rte_pause();
> >
> > Wouldn't it be more realistic to write/read some shared data here?
> > Again, extra checking could be done in that case that the lock
> > behaves as expected.
>
> Will do it in v2, thanks!
>
> > > +		rte_rwlock_write_unlock(&lk);
> > > +		rte_rwlock_read_lock(&lk);
> > > +		rte_rwlock_read_lock(&lk);
> >
> > Wonder what is the point of double rdlock here?
> > Konstantin
>
> The double rd lock is to check that rd locks do not block each other.
> Anyway, I will remove it in v2 if there are no concerns here.
>
> > > +		rte_pause();
> > > +		rte_rwlock_read_unlock(&lk);
> > > +		rte_rwlock_read_unlock(&lk);
> > > +		lcount++;
> > > +		/* delay to make lock duty cycle slightly realistic */
> > > +		rte_pause();
> > > +		time_diff = rte_rdtsc_precise() - begin;
> > > +	}

Should we change the way the measurement is done? We are measuring 'how
many locks/unlocks per <certain time>'. This adds overhead because of
the rte_rdtsc_precise() call on every iteration. If we instead measure
'how many cycles it takes to do <certain number of locks/unlocks>', the
overhead of rte_rdtsc_precise() is amortized over the whole loop and
becomes negligible.
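Just to illustrate the idea, a rough sketch of the amortized variant
(MAX_LOOP is a made-up constant for this sketch, and lock_count[] is
reused to hold cycles per iteration purely for illustration):

#define MAX_LOOP 10000

static int
load_loop_fn(__attribute__((unused)) void *arg)
{
	const unsigned int lcore = rte_lcore_id();
	uint64_t begin, cycles;
	uint64_t i;

	/* wait synchro for slaves */
	if (lcore != rte_get_master_lcore())
		while (rte_atomic32_read(&synchro) == 0)
			;

	/* read the TSC once around the whole loop, not per iteration */
	begin = rte_rdtsc_precise();
	for (i = 0; i < MAX_LOOP; i++) {
		rte_rwlock_write_lock(&lk);
		rte_pause();
		rte_rwlock_write_unlock(&lk);

		rte_rwlock_read_lock(&lk);
		rte_pause();
		rte_rwlock_read_unlock(&lk);
	}
	cycles = rte_rdtsc_precise() - begin;

	/* report cycles per lock/unlock iteration instead of iterations per TIME_MS */
	lock_count[lcore] = cycles / MAX_LOOP;
	return 0;
}

With this, rte_rdtsc_precise() is called only twice per core, so the
measured cycles are dominated by the lock/unlock operations themselves.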
> > > +	lock_count[lcore] = lcount;
> > > +	return 0;
> > > +}
> > > +
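On Konstantin's earlier point about touching shared data in the
critical section, a minimal sketch of the kind of check that could be
added (the 'shared' struct and update_and_check_shared() are
hypothetical names, not part of this patch; 'lk' is the rwlock already
defined in the test):

static struct {
	uint64_t a;
	uint64_t b;	/* kept equal to 'a' by every writer */
} shared;

static int
update_and_check_shared(void)
{
	uint64_t a, b;

	/* writer updates both fields while holding the write lock */
	rte_rwlock_write_lock(&lk);
	shared.a++;
	shared.b = shared.a;
	rte_rwlock_write_unlock(&lk);

	/* reader must never observe a half-done update */
	rte_rwlock_read_lock(&lk);
	a = shared.a;
	b = shared.b;
	rte_rwlock_read_unlock(&lk);

	return (a == b) ? 0 : -1;
}

If a reader ever sees the two fields differing, the lock let a read
happen in the middle of a write; it also makes the perf loop touch real
shared memory instead of only spinning on rte_pause().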