On 08/16/2018 10:33 PM, Honnappa Nagarahalli wrote:
+/* Get the primary bucket index given the precomputed hash value. */
+static inline uint32_t
+rte_hash_get_primary_bucket(const struct rte_hash *h, hash_sig_t sig)
+{
+	return sig & h->bucket_bitmask;
+}
+
+/* Get the secondary bucket index given the precomputed hash value. */
+static inline uint32_t
+rte_hash_get_secondary_bucket(const struct rte_hash *h, hash_sig_t sig)
+{
+	return rte_hash_secondary_hash(sig) & h->bucket_bitmask;
+}
+
IMO, to keep the code consistent, we do not need to have the above 2 functions.
Ok.
+int32_t __rte_experimental
+rte_hash_iterate_conflict_entries(struct rte_conflict_iterator_state *state,
+	const void **key, const void **data)
+{
+	struct rte_hash_iterator_conflict_entries_state *__state;
+
+	RETURN_IF_TRUE(((state == NULL) || (key == NULL) ||
+		(data == NULL)), -EINVAL);
+
+	__state = (struct rte_hash_iterator_conflict_entries_state *)state;
+
+	while (__state->vnext < RTE_HASH_BUCKET_ENTRIES * 2) {
+		uint32_t bidx = (__state->vnext < RTE_HASH_BUCKET_ENTRIES) ?
+			__state->primary_bidx : __state->secondary_bidx;
+		uint32_t next = __state->vnext & (RTE_HASH_BUCKET_ENTRIES - 1);
+		uint32_t position = __state->h->buckets[bidx].key_idx[next];
+		struct rte_hash_key *next_key;
+
+		/*
+		 * The test below is unlikely because this iterator is meant
+		 * to be used after a failed insert.
+		 */
+		if (unlikely(position == EMPTY_SLOT))
+			goto next;
+
+		/* Get the entry in the key table. */
+		next_key = (struct rte_hash_key *)(
+			(char *)__state->h->key_store +
+			position * __state->h->key_entry_size);
+		/* Return key and data. */
+		*key = next_key->key;
+		*data = next_key->pdata;
+
+next:
+		/* Increment iterator. */
+		__state->vnext++;
+
+		if (likely(position != EMPTY_SLOT))
+			return position - 1;
+	}
+
+	return -ENOENT;
+}
I think we can make this API similar to 'rte_hash_iterate'. I suggest the
following API signature:

int32_t
rte_hash_iterate_conflict_entries(const struct rte_hash *h, const void **key,
	void **data, hash_sig_t sig, uint32_t *next)
The goal of our interface is to support changing the underlying hash
table algorithm without requiring changes in applications. As Yipeng
Wang exemplified in the discussion of the first version of this patch,
"in future, rte_hash may use three hash functions, or as I mentioned
each bucket may have an additional linked list or even a second level
hash table, or if the hopscotch hash replaces cuckoo hash as the new
algorithm." These new algorithms may require more state than sig and
next can efficiently provide in order to browse the conflicting entries.
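For illustration only, here is a rough sketch of how an application could
use the opaque-state interface after a failed insert. should_evict() and
example_conflict_iterator_init() are made-up names for this sketch; only
rte_hash_iterate_conflict_entries() comes from this patch:

#include <rte_hash.h>

/* should_evict() and example_conflict_iterator_init() are invented names
 * used only for this sketch; they are not part of the patch.
 */
extern int should_evict(const void *key, const void *data);
extern void example_conflict_iterator_init(const struct rte_hash *h,
	hash_sig_t sig, struct rte_conflict_iterator_state *state);

/* Try to insert; on a full table, scan the conflicting entries and let
 * the application pick a victim to evict before retrying.
 */
static int
insert_or_make_room(const struct rte_hash *h, const void *new_key,
	void *new_data)
{
	struct rte_conflict_iterator_state state;
	const void *key;
	const void *data;
	int ret;

	ret = rte_hash_add_key_data(h, new_key, new_data);
	if (ret != -ENOSPC)
		return ret;

	example_conflict_iterator_init(h, rte_hash_hash(h, new_key), &state);

	while (rte_hash_iterate_conflict_entries(&state, &key, &data) >= 0) {
		if (should_evict(key, data)) {
			rte_hash_del_key(h, key);
			return rte_hash_add_key_data(h, new_key, new_data);
		}
	}

	return -ENOSPC;
}

Whatever the underlying algorithm becomes, only the init helper and the
iterator need to change; the application code above stays the same.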
I also suggest changing the API name to 'rte_hash_iterate_bucket_entries' -
'bucket' is a well-understood term in the context of hash algorithms.
It's a matter of semantics here. rte_hash_iterate_conflict_entries()
may cross more than one bucket. In fact, the first version of this patch
tried to do exactly that, but doing so exposed the underlying algorithm. In
addition, future algorithms may stretch what is being browsed even further.
Do we also need to have a 'rte_hash_iterate_conflict_entries_with_hash' API?
I may not have understood the question. We are already working with
the hash (i.e. sig). Did you mean something else?
diff --git a/lib/librte_hash/rte_hash.h b/lib/librte_hash/rte_hash.h
index f71ca9fbf..7ecb6a7eb 100644
--- a/lib/librte_hash/rte_hash.h
+++ b/lib/librte_hash/rte_hash.h
@@ -61,6 +61,11 @@ struct rte_hash_parameters {
 /** @internal A hash table structure. */
 struct rte_hash;
+/** @internal A hash table conflict iterator state structure. */
+struct rte_conflict_iterator_state {
+	uint8_t space[64];
+};
+
The size depends on the current size of the state, which is subject to change
with the algorithm used.
We chose a size that should be large enough for any future underlying
algorithm. Do you have a suggestion on how to go about it? We went with a
simple struct so that applications can allocate the state as a local
variable and avoid a memory allocation.
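To make that concrete, the implementation could tie the two sizes together
with a compile-time check, so a future algorithm cannot silently outgrow the
opaque buffer. The sketch below reconstructs the internal state from the
iterator code quoted above; the actual layout in the patch may differ:

#include <rte_common.h>
#include <rte_hash.h>

/* Internal state as suggested by the iterator code above; this is a
 * reconstruction for illustration, not the definition from the patch.
 */
struct rte_hash_iterator_conflict_entries_state {
	const struct rte_hash *h;
	uint32_t vnext;
	uint32_t primary_bidx;
	uint32_t secondary_bidx;
};

/* Called from a function body (e.g. the iterator init); the build fails
 * if the internal state no longer fits in the 64-byte opaque struct.
 */
static inline void
conflict_state_size_check(void)
{
	RTE_BUILD_BUG_ON(
		sizeof(struct rte_hash_iterator_conflict_entries_state) >
		sizeof(struct rte_conflict_iterator_state));
}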
[ ]'s
Michel Machado