Some general comments on the various __atomic_store/load calls added:

1. Although the code passes the compiler check, I want to confirm whether we should use the GCC/clang builtins directly, or whether there are higher-level APIs in DPDK for atomic operations (a sketch of what I mean is at the end of this mail).
2. We believe the compiler will translate these atomic store/load builtins into regular MOV instructions on Total Store Order architectures (e.g. x86_64); a codegen sketch is at the end of this mail. However, we ran the perf test on x86, and below is the lookup performance relative to master head. I am not sure whether the performance drop comes from the atomic builtins.

Keysize | single lookup | bulk lookup
      4 |          0.93 |        0.95
      8 |          0.95 |        0.96
     16 |          0.97 |        0.96
     32 |          0.97 |        1.00
     48 |          1.03 |        0.99
     64 |          1.04 |        0.98
      9 |          0.91 |        0.96
     13 |          0.97 |        0.98
     37 |          1.04 |        1.03
     40 |          1.02 |        0.98

Other comments inlined.

>-----Original Message-----
>Only race condition that can occur is - using the key store element
>before the key write is completed. Hence, while inserting the element
>the release memory order is used. Any other race condition is caught
>by the key comparison. Memory orderings are added only where needed.
>For ex: reads in the writer's context do not need memory ordering
>as there is a single writer.
>

[Wang, Yipeng] I assume this commit works together with the next one to enable read-write concurrency on machines with relaxed memory ordering; this commit by itself does not do that, right? If so, should we merge the two commits, or at least explain the relationship a little more in the commit message?

>key_idx in the bucket entry and pdata in the key store element are
>used for synchronisation. key_idx is used to release an inserted
>entry in the bucket to the reader. Use of pdata for synchronisation
>is required due to updation of an existing entry where-in only
>the pdata is updated without updating key_idx.
>
>
>-/* Search a key from bucket and update its data */
>+/* Search a key from bucket and update its data.
>+ * Writer holds the lock before calling this.
>+ */
> static inline int32_t
> search_and_update(const struct rte_hash *h, void *data, const void *key,
>         struct rte_hash_bucket *bkt, hash_sig_t sig, hash_sig_t alt_hash)
>@@ -499,8 +501,13 @@ search_and_update(const struct rte_hash *h, void *data, const void *key,
>                k = (struct rte_hash_key *) ((char *)keys +
>                                bkt->key_idx[i] * h->key_entry_size);
>                if (rte_hash_cmp_eq(key, k->key, h) == 0) {
>-                       /* Update data */
>-                       k->pdata = data;
>+                       /* 'pdata' acts as the synchronization point
>+                        * when an existing hash entry is updated.
>+                        * Key is not updated in this case.
>+                        */

[Wang, Yipeng] I did not quite understand why we need synchronization for the hash data update. Since the pdata write is already atomic, a lookup will read out either the stale data or the new data, which should be fine without synchronization. Or is it to order the reads that follow the pdata load in the lookup threads? (A sketch of the pairing I have in mind is at the end of this mail.)

>@@ -1243,11 +1289,19 @@ __rte_hash_lookup_bulk(const struct rte_hash *h, const void **keys,
>        while (sec_hitmask[i]) {
>                uint32_t hit_index = __builtin_ctzl(sec_hitmask[i]);
>
>-               uint32_t key_idx = secondary_bkt[i]->key_idx[hit_index];
>+               uint32_t key_idx =
>+                       __atomic_load_n(
>+                               &secondary_bkt[i]->key_idx[hit_index],
>+                               __ATOMIC_ACQUIRE);
>                const struct rte_hash_key *key_slot =
>                        (const struct rte_hash_key *)(
>                        (const char *)h->key_store +
>                        key_idx * h->key_entry_size);
>+
>+               if (key_idx != EMPTY_SLOT)

[Wang, Yipeng] I think even the current code needs this EMPTY_SLOT check. Could you split it out as a separate bug-fix commit?
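
[Wang, Yipeng] A few illustrative sketches to make the points above concrete; every type and function name below that is not in the patch is made up.

For point 1, by "higher-level APIs" I mean the existing rte_atomic family from rte_atomic.h. As far as I can tell, these wrappers take no memory-order argument (read/set are plain volatile accesses), so they cannot express the acquire/release pairing this patch relies on:

#include <rte_atomic.h>

/* Sketch only: DPDK's existing wrappers from rte_atomic.h.
 * rte_atomic32_read()/rte_atomic32_set() are plain volatile
 * accesses with no memory-order parameter.
 */
static inline int32_t
refcnt_example(void)
{
        rte_atomic32_t refcnt;

        rte_atomic32_init(&refcnt);
        rte_atomic32_set(&refcnt, 1);
        return rte_atomic32_read(&refcnt);
}

If that is right, using the builtins directly seems reasonable until DPDK grows C11-style wrappers.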
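For point 2, this is the code generation I expect on x86_64: under TSO, an acquire load and a release store each compile to a single MOV with no fence instruction (only a __ATOMIC_SEQ_CST store would emit an XCHG or MFENCE). So if the builtins cost anything, it is more likely from the compiler optimizing less aggressively around them than from extra instructions. The helper names are made up:

#include <stdint.h>

/* On x86_64, GCC/clang compile both helpers to a plain MOV:
 * TSO already gives loads acquire semantics and stores release
 * semantics, so no fence is needed.
 */
static inline uint32_t
load_idx_acquire(const uint32_t *p)
{
        return __atomic_load_n(p, __ATOMIC_ACQUIRE);    /* mov eax, [rdi] */
}

static inline void
store_idx_release(uint32_t *p, uint32_t v)
{
        __atomic_store_n(p, v, __ATOMIC_RELEASE);       /* mov [rdi], esi */
}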
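On the pdata synchronization: the only purpose I can see for the release/acquire pair is the usual pointer-publication pattern, i.e. ordering the reads that follow the pdata load, as I guessed above. A sketch with hypothetical simplified types (not the patch code):

#include <stdint.h>

/* Hypothetical simplified types, for illustration only. */
struct flow_data {
        uint64_t counters[4];
};

struct key_slot {
        struct flow_data *pdata;
        uint8_t key[16];
};

/* Writer (single writer, holds the lock), updating an existing
 * entry: the plain stores to *fd are ordered before the release
 * store that publishes the pointer.
 */
static void
update_entry(struct key_slot *k, struct flow_data *fd)
{
        fd->counters[0] = 0;    /* plain writes to the new data */
        __atomic_store_n(&k->pdata, fd, __ATOMIC_RELEASE);
}

/* Reader, in lookup: the acquire pairs with the release above,
 * so once the new pointer is observed, the writes to *fd are too.
 */
static struct flow_data *
read_entry(struct key_slot *k)
{
        return __atomic_load_n(&k->pdata, __ATOMIC_ACQUIRE);
}

If pdata always points at data that was fully published before insertion and is never modified afterwards, a relaxed load would seem sufficient; it would help if the commit message stated which of the two is intended.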
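Finally, the shape of the standalone EMPTY_SLOT fix I have in mind for the current (pre-patch) lookup path, as a fragment of the existing code:

/* Fragment, illustrative only: guard the key-store access so we
 * never compare against the slot that key_idx == EMPTY_SLOT
 * would otherwise select.
 */
uint32_t key_idx = secondary_bkt[i]->key_idx[hit_index];

if (key_idx != EMPTY_SLOT) {
        const struct rte_hash_key *key_slot =
                (const struct rte_hash_key *)(
                (const char *)h->key_store +
                key_idx * h->key_entry_size);
        /* ... only then call rte_hash_cmp_eq(key, key_slot->key, h) ... */
}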