-----Original Message-----
> Date: Fri, 26 Oct 2018 00:37:32 -0500
> From: Honnappa Nagarahalli <honnappa.nagaraha...@arm.com>
> To: bruce.richard...@intel.com, pablo.de.lara.gua...@intel.com
> CC: dev@dpdk.org, yipeng1.w...@intel.com, honnappa.nagaraha...@arm.com,
>  dharmik.thak...@arm.com, gavin...@arm.com, n...@arm.com
> Subject: [dpdk-dev] [PATCH v7 4/5] hash: add lock-free read-write
>  concurrency
> X-Mailer: git-send-email 2.7.4
> 
> 
> Add lock-free read-write concurrency. This is achieved by the
> following changes.
> 
> 1) Add memory ordering to avoid race conditions. The only race
> condition that can occur is using the key store element before
> the key write is completed. Hence, the release memory order is
> used while inserting the element. Any other race condition is
> caught by the key comparison. Memory orderings are added only
> where needed; for example, reads in the writer's context do not
> need memory ordering as there is a single writer.
> 
> key_idx in the bucket entry and pdata in the key store element are
> used for synchronisation. key_idx is used to release an inserted
> entry in the bucket to the reader. Use of pdata for synchronisation
> is required to handle the update of an existing entry, wherein only
> pdata is updated without updating key_idx.
> 
> 2) The reader-writer concurrency issue caused by moving keys
> to their alternative locations during key insert is solved
> by introducing a global counter (tbl_chng_cnt) indicating a
> change in the table.
> 
> 3) Add a flag to enable read-write concurrency at run time.
> 
> Signed-off-by: Honnappa Nagarahalli <honnappa.nagaraha...@arm.com>

Hi Honnappa,

This patch causes a _~24%_ performance regression in mpps/core with
64B packets with l3fwd in EM mode on octeontx.

Example command to reproduce with 2-core + 2-port l3fwd in hash mode (-E):

# l3fwd -v -c 0xf00000 -n 4 -- -P -E -p 0x3 --config="(0, 0, 23),(1, 0, 22)"

Observations:
1) When the hash lookup is a _success_, the regression is only 3%, which
kind of makes sense given the additional atomic instructions.

What I mean by lookup _success_ is: configuring the traffic gen as below
so that lookups match the entries defined in
ipv4_l3fwd_em_route_array() in examples/l3fwd/l3fwd_em.c

dest.ip      port0    201.0.0.0
src.ip       port0    200.20.0.1
dest.port    port0    102
src.port     port0    12

dest.ip      port1    101.0.0.0
src.ip       port1    100.10.0.1
dest.port    port1    101
src.port     port1    11

tx.type      IPv4+TCP



2) When the hash lookup _fails_, the per-core mpps regression is around
24% with 64B packet size.

What I mean by lookup _failure_ is: configuring the traffic gen not to
hit the 5-tuples defined in
ipv4_l3fwd_em_route_array() in examples/l3fwd/l3fwd_em.c


3) perf top _without_ this patch
  37.30%  l3fwd         [.] em_main_loop
  22.40%  l3fwd         [.] rte_hash_lookup
  13.05%  l3fwd         [.] nicvf_recv_pkts_cksum
   9.70%  l3fwd         [.] nicvf_xmit_pkts
   6.18%  l3fwd         [.] ipv4_hash_crc
   4.77%  l3fwd         [.] nicvf_fill_rbdr
   4.50%  l3fwd         [.] nicvf_single_pool_free_xmited_buffers
   1.16%  libc-2.28.so  [.] memcpy
   0.47%  l3fwd         [.] common_ring_mp_enqueue
   0.44%  l3fwd         [.] common_ring_mc_dequeue
   0.03%  l3fwd         [.] strerror_r@plt

4) perf top _with_ this patch

  47.41%  l3fwd         [.] rte_hash_lookup
  23.55%  l3fwd         [.] em_main_loop
   9.53%  l3fwd         [.] nicvf_recv_pkts_cksum
   6.95%  l3fwd         [.] nicvf_xmit_pkts
   4.63%  l3fwd         [.] ipv4_hash_crc
   3.30%  l3fwd         [.] nicvf_fill_rbdr
   3.29%  l3fwd         [.] nicvf_single_pool_free_xmited_buffers
   0.76%  libc-2.28.so  [.] memcpy
   0.30%  l3fwd         [.] common_ring_mp_enqueue
   0.25%  l3fwd         [.] common_ring_mc_dequeue
   0.04%  l3fwd         [.] strerror_r@plt


5) Based on the assembly, most of the cycles are spent in
rte_hash_lookup around key_idx = __atomic_load_n(&bkt->key_idx[i],
__ATOMIC_ACQUIRE) (which compiles to LDAR) and
"if (bkt->sig_current[i] == sig && key_idx != EMPTY_SLOT) {"


6) Since this patch is big and does the 3 things mentioned above,
it is difficult to pinpoint what exactly is causing the issue.
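
To make the discussion concrete, item (2) alone adds a counter check
around every lookup. My understanding from the commit message is
roughly the following sketch (search_both_buckets is a stand-in for
the real bucket probing, not a function in the patch):

  uint32_t cnt_b, cnt_a;
  do {
          /* Snapshot the table-change counter before probing. */
          cnt_b = __atomic_load_n(&h->tbl_chng_cnt, __ATOMIC_ACQUIRE);

          ret = search_both_buckets(h, prim_bkt, sec_bkt, sig, key, data);
          if (ret != -1)
                  return ret;

          /* If the writer moved keys to their alternative locations
           * meanwhile, the counter changed; retry the whole lookup. */
          cnt_a = __atomic_load_n(&h->tbl_chng_cnt, __ATOMIC_ACQUIRE);
  } while (cnt_b != cnt_a);
  return -ENOENT;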

My primary analysis points to item (1) (the added atomic barriers),
but I need to spend more cycles to find out the exact cause.
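
As I understand item (1), the reader-side acquire loads exist to pair
with release stores on the insert side, roughly like this (again a
sketch based on the commit message, not the exact patch code):

  /* Fill in the key store entry first... */
  memcpy(k->key, key, h->key_len);
  __atomic_store_n(&k->pdata, data, __ATOMIC_RELEASE);
  /* ...then publish the entry: the release store on key_idx makes a
   * reader that acquire-loads key_idx see the fully written entry. */
  __atomic_store_n(&bkt->key_idx[i], new_idx, __ATOMIC_RELEASE);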

Use cases like l3fwd in hash mode, where the writer does not update
anything in the fastpath (i.e. no insert op), will be impacted by this
patch.

7) Have you checked the l3fwd lookup-failure use case in your
environment? If so, please share your observations; if not, could you
please check it?

8) IMO, such a performance regression is not acceptable for the l3fwd
use case, where the hash insert op is done in the slowpath.
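
Ideally an application would pay for the lock-free scheme only when it
asks for it at rte_hash_create() time, something like the sketch below
(using the extra_flag added by this series; the other field values are
just for illustration):

  #include <rte_hash.h>

  struct rte_hash_parameters params = {
          .name = "l3fwd_em_hash",
          .entries = 1024 * 1024,
          .key_len = sizeof(union ipv4_5tuple_host), /* from l3fwd_em.c */
          .hash_func = ipv4_hash_crc,
          .socket_id = 0,
          /* Only readers/writers that need lock-free concurrency set
           * this; lookup-only fastpaths like l3fwd would leave it
           * unset and keep plain loads. */
          .extra_flag = RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
  };
  struct rte_hash *h = rte_hash_create(&params);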

9) Is anyone else facing this problem?

