The multi-writer support in rte_cuckoo_hash is broken if write
operations are called from a non-EAL thread.

rte_lcore_id() will return LCORE_ID_ANY (UINT32_MAX) for a non-EAL
thread, and that leads to indexing the wrong (out-of-bounds) local
free-slot cache.
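
For illustration, a minimal reproducer sketch (hypothetical and not part
of this patch; EAL arguments, return-value checks and error handling
omitted) that hits the bad cache index from a plain pthread:

    #include <stdint.h>
    #include <pthread.h>

    #include <rte_eal.h>
    #include <rte_hash.h>
    #include <rte_jhash.h>

    static struct rte_hash *tbl;

    /* Runs in a plain pthread, i.e. not an EAL thread: rte_lcore_id()
     * returns LCORE_ID_ANY here, so the per-lcore free-slot cache is
     * indexed out of bounds inside rte_hash_add_key().
     */
    static void *non_eal_writer(void *arg)
    {
            uint32_t key = 42;

            (void)arg;
            rte_hash_add_key(tbl, &key);
            return NULL;
    }

    int main(int argc, char **argv)
    {
            struct rte_hash_parameters params = {
                    .name = "mw_table",
                    .entries = 1024,
                    .key_len = sizeof(uint32_t),
                    .hash_func = rte_jhash,
                    .socket_id = 0,
                    .extra_flag = RTE_HASH_EXTRA_FLAGS_MULTI_WRITER_ADD,
            };
            pthread_t t;

            rte_eal_init(argc, argv);
            tbl = rte_hash_create(&params);

            pthread_create(&t, NULL, non_eal_writer, NULL);
            pthread_join(t, NULL);
            return 0;
    }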

Add error checks and document the restriction.

Fixes: 9d033dac7d7c ("hash: support no free on delete")
Fixes: 5915699153d7 ("hash: fix scaling by reducing contention")
Signed-off-by: Stephen Hemminger <step...@networkplumber.org>
Cc: honnappa.nagaraha...@arm.com
Cc: pablo.de.lara.gua...@intel.com
---
 doc/guides/prog_guide/hash_lib.rst | 1 +
 lib/librte_hash/rte_cuckoo_hash.c  | 9 +++++++++
 2 files changed, 10 insertions(+)

diff --git a/doc/guides/prog_guide/hash_lib.rst b/doc/guides/prog_guide/hash_lib.rst
index d06c7de2ead1..29b41a425a43 100644
--- a/doc/guides/prog_guide/hash_lib.rst
+++ b/doc/guides/prog_guide/hash_lib.rst
@@ -85,6 +85,7 @@ For concurrent writes, and concurrent reads and writes the following flag values
 
 *  If the multi-writer flag (RTE_HASH_EXTRA_FLAGS_MULTI_WRITER_ADD) is set, multiple threads writing to the table is allowed.
    Key add, delete, and table reset are protected from other writer threads. With only this flag set, readers are not protected from ongoing writes.
+   The writer threads must be EAL threads; it is not safe to write to a multi-writer hash table from interrupt, control, or other non-EAL threads.
 
 *  If the read/write concurrency (RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY) is set, multithread read/write operation is safe
    (i.e., application does not need to stop the readers from accessing the hash table until writers finish their updates. Readers and writers can operate on the table concurrently).
diff --git a/lib/librte_hash/rte_cuckoo_hash.c b/lib/librte_hash/rte_cuckoo_hash.c
index 90cb99b0eef8..79c94107a582 100644
--- a/lib/librte_hash/rte_cuckoo_hash.c
+++ b/lib/librte_hash/rte_cuckoo_hash.c
@@ -979,6 +979,9 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key,
        /* Did not find a match, so get a new slot for storing the new key */
        if (h->use_local_cache) {
                lcore_id = rte_lcore_id();
+               if (lcore_id == LCORE_ID_ANY)
+                       return -EINVAL;
+
                cached_free_slots = &h->local_free_slots[lcore_id];
                /* Try to get a free slot from the local cache */
                if (cached_free_slots->len == 0) {
@@ -1382,6 +1385,10 @@ remove_entry(const struct rte_hash *h, struct rte_hash_bucket *bkt, unsigned i)
 
        if (h->use_local_cache) {
                lcore_id = rte_lcore_id();
+               ERR_IF_TRUE((lcore_id == LCORE_ID_ANY),
+                           "%s: attempt to remove entry from non EAL thread\n",
+                           __func__);
+
                cached_free_slots = &h->local_free_slots[lcore_id];
                /* Cache full, need to free it. */
                if (cached_free_slots->len == LCORE_CACHE_SIZE) {
@@ -1637,6 +1644,8 @@ rte_hash_free_key_with_position(const struct rte_hash *h,
 
        if (h->use_local_cache) {
                lcore_id = rte_lcore_id();
+               RETURN_IF_TRUE((lcore_id == LCORE_ID_ANY), -EINVAL);
+
                cached_free_slots = &h->local_free_slots[lcore_id];
                /* Cache full, need to free it. */
                if (cached_free_slots->len == LCORE_CACHE_SIZE) {
-- 
2.26.2
