Hi Konstantin,

Thanks for the review,

On 07/04/2021 15:53, Ananyev, Konstantin wrote:
Hi Vladimir,

Few comments below, mostly minor.
One generic one - doc seems missing.
With that in place:
Acked-by: Konstantin Ananyev <konstantin.anan...@intel.com>


This patch implements predictable RSS functionality.

Signed-off-by: Vladimir Medvedkin <vladimir.medved...@intel.com>

<snip>

+#define RETA_SZ_MIN	2U
+#define RETA_SZ_MAX	16U

Should these RETA_SZ defines be in the public header,
so the user can know what the allowed values are?


I don't think this is necessary: the user doesn't choose this value arbitrarily, it depends on the NIC.

+#define RETA_SZ_IN_RANGE(reta_sz)	((reta_sz >= RETA_SZ_MIN) && \

<snip>

+	uint32_t i;

Empty line is missing.


Thanks

+	if ((name == NULL) || (key_len == 0) || !RETA_SZ_IN_RANGE(reta_sz)) {
+		rte_errno = EINVAL;
+		return NULL;
+	}

<snip>

+static inline void
+set_bit(uint8_t *ptr, uint32_t bit, uint32_t pos)
+{
+	uint32_t byte_idx = pos >> 3;

Just as a nit to be consistent with the line below:
pos / CHAR_BIT;


Fixed

+	uint32_t bit_idx = (CHAR_BIT - 1) - (pos & (CHAR_BIT - 1));
+	uint8_t tmp;

<snip>

+	ent = rte_zmalloc(NULL, sizeof(struct rte_thash_subtuple_helper) +
+		sizeof(uint32_t) * (1 << ctx->reta_sz_log), 0);

The helper can be used by data-path code (via rte_thash_get_compliment()), right?
Then it might be better to align it on a cache line.


Agree, I'll fix it

+	if (ent == NULL)
+		return -ENOMEM;

<snip>

  uint32_t
-rte_thash_get_compliment(struct rte_thash_subtuple_helper *h __rte_unused,
-	uint32_t hash __rte_unused, uint32_t desired_hash __rte_unused)
+rte_thash_get_compliment(struct rte_thash_subtuple_helper *h,
+	uint32_t hash, uint32_t desired_hash)
  {
-	return 0;
+	return h->compl_table[(hash ^ desired_hash) & h->lsb_msk];
  }

Would it make sense to add another one for multiple values:
rte_thash_get_compliment(uint32_t hash, const uint32_t desired_hashes[],
	uint32_t adj_hash[], uint32_t num);
so the user can get adjustment values for multiple queues at once?


At the moment I can't find a scenario where we would need a bulk version of this function.


  const uint8_t *
-rte_thash_get_key(struct rte_thash_ctx *ctx __rte_unused)
+rte_thash_get_key(struct rte_thash_ctx *ctx)
  {
-	return NULL;
+	return ctx->hash_key;
+}
+
+static inline void
+xor_bit(uint8_t *ptr, uint32_t bit, uint32_t pos)
+{
+	uint32_t byte_idx = pos >> 3;
+	uint32_t bit_idx = (CHAR_BIT - 1) - (pos & (CHAR_BIT - 1));
+	uint8_t tmp;
+
+	tmp = ptr[byte_idx];
+	tmp ^= bit << bit_idx;
+	ptr[byte_idx] = tmp;
+}
+
+int
+rte_thash_adjust_tuple(struct rte_thash_subtuple_helper *h,
+	uint8_t *orig_tuple, uint32_t adj_bits,
+	rte_thash_check_tuple_t fn, void *userdata)
+{
+	unsigned i;
+
+	if ((h == NULL) || (orig_tuple == NULL))
+		return -EINVAL;
+
+	adj_bits &= h->lsb_msk;
+	/* Hint: LSB of adj_bits corresponds to offset + len bit of tuple */
+	for (i = 0; i < sizeof(uint32_t) * CHAR_BIT; i++) {
+		uint8_t bit = (adj_bits >> i) & 0x1;
+		if (bit)
+			xor_bit(orig_tuple, bit,
+				h->tuple_offset + h->tuple_len - 1 - i);
+	}
+
+	if (fn != NULL)
+		return (fn(userdata, orig_tuple)) ? 0 : -EEXIST;
+
+	return 0;
  }

Not sure there is much point in having a callback that is called only once.
Might be better to rework the function so that the user provides two
callbacks: one to generate a new value, a second to check it.
Something like that:

int
rte_thash_gen_tuple(struct rte_thash_subtuple_helper *h,
	uint8_t *tuple, uint32_t desired_hash,
	int (*cb_gen_tuple)(uint8_t *, void *),
	int (*cb_check_tuple)(const uint8_t *, void *),
	void *userdata)
{
	do {
		rc = cb_gen_tuple(tuple, userdata);
		if (rc != 0)
			return rc;
		hash = rte_softrss(tuple, ...);
		adj = rte_thash_get_compliment(h, hash, desired_hash);
		update_tuple(tuple, adj, ...);
		rc = cb_check_tuple(tuple, userdata);
	} while (rc != 0);

	return rc;
}

Agree, there is no point in invoking the callback for a single function call. I'll rework rte_thash_adjust_tuple() and send a new version in v3. As for gen_tuple, I don't think we need a separate callback: the new rte_thash_adjust_tuple() implementation randomly changes the corresponding bits of the tuple (based on the offset and length configured in the helper).


diff --git a/lib/librte_hash/rte_thash.h b/lib/librte_hash/rte_thash.h
index 38a641b..fd67931 100644
--- a/lib/librte_hash/rte_thash.h
+++ b/lib/librte_hash/rte_thash.h
@@ -360,6 +360,48 @@ __rte_experimental
  const uint8_t *
  rte_thash_get_key(struct rte_thash_ctx *ctx);

+/**
+ * Function prototype for the rte_thash_adjust_tuple
+ * to check if adjusted tuple could be used.
+ * Generally it is some kind of lookup function to check
+ * if adjusted tuple is already in use.
+ *
+ * @param userdata
+ *  Pointer to the userdata. It could be a pointer to the
+ *  table with used tuples to search.
+ * @param tuple
+ *  Pointer to the tuple to check
+ *
+ * @return
+ *  1 on success
+ *  0 otherwise
+ */
+typedef int (*rte_thash_check_tuple_t)(void *userdata, uint8_t *tuple);
+
+/**
+ * Adjust tuple with complimentary bits.
+ *
+ * @param h
+ *  Pointer to the helper struct
+ * @param orig_tuple
+ *  Pointer to the tuple to be adjusted
+ * @param adj_bits
+ *  Value returned by rte_thash_get_compliment()
+ * @param fn
+ *  Callback function to check adjusted tuple. Could be NULL
+ * @param userdata
+ *  Pointer to the userdata to be passed to fn(). Could be NULL
+ *
+ * @return
+ *  0 on success
+ *  negative otherwise
+ */
+__rte_experimental
+int
+rte_thash_adjust_tuple(struct rte_thash_subtuple_helper *h,
+	uint8_t *orig_tuple, uint32_t adj_bits,
+	rte_thash_check_tuple_t fn, void *userdata);
+
  #ifdef __cplusplus
  }
  #endif
diff --git a/lib/librte_hash/version.map b/lib/librte_hash/version.map
index 93cb230..a992a1e 100644
--- a/lib/librte_hash/version.map
+++ b/lib/librte_hash/version.map
@@ -32,6 +32,7 @@ DPDK_21 {
  EXPERIMENTAL {
  global:

+	rte_thash_adjust_tuple;
  rte_hash_free_key_with_position;
  rte_hash_lookup_with_hash_bulk;
  rte_hash_lookup_with_hash_bulk_data;
--
2.7.4


--
Regards,
Vladimir
