To benchmark the spinlock performance precisely, use the precise version of the timestamp read, which enforces that the timestamps are taken at the expected places.
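
For context, rte_get_timer_cycles() reads whichever default timer source is configured, while rte_rdtsc_precise() reads the TSC after issuing a full memory barrier, so the read cannot drift past the code being measured. A minimal sketch of the precise variant, assuming the generic rte_cycles.h definition in this DPDK tree:

    /* sketch of the generic DPDK definition of rte_rdtsc_precise() */
    static inline uint64_t
    rte_rdtsc_precise(void)
    {
    	/* full barrier: keep the TSC read from being reordered with
    	 * the benchmarked loads/stores around it */
    	rte_mb();
    	return rte_rdtsc();
    }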
Signed-off-by: Gavin Hu <gavin...@arm.com>
Reviewed-by: Phil Yang <phil.y...@arm.com>
---
 test/test/test_spinlock.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/test/test/test_spinlock.c b/test/test/test_spinlock.c
index 6795195ae..648474833 100644
--- a/test/test/test_spinlock.c
+++ b/test/test/test_spinlock.c
@@ -113,14 +113,14 @@ load_loop_fn(void *func_param)
 	if (lcore != rte_get_master_lcore())
 		while (rte_atomic32_read(&synchro) == 0);
 
-	begin = rte_get_timer_cycles();
+	begin = rte_rdtsc_precise();
 	while (time_diff < hz * TIME_MS / 1000) {
 		if (use_lock)
 			rte_spinlock_lock(&lk);
 		lcount++;
 		if (use_lock)
 			rte_spinlock_unlock(&lk);
-		time_diff = rte_get_timer_cycles() - begin;
+		time_diff = rte_rdtsc_precise() - begin;
 	}
 	lock_count[lcore] = lcount;
 	return 0;
-- 
2.11.0