Re: [PATCH v12 1/6] memarea: introduce memarea library

2023-01-20 Thread fengchengwen
Hi Morten,

On 2023/1/15 15:58, Morten Brørup wrote:
>> From: Chengwen Feng [mailto:fengcheng...@huawei.com]
>> Sent: Saturday, 14 January 2023 12.50
>>
>> The memarea library is an allocator of variable-size object which based
>> on a memory region.
>>
>> This patch provides rte_memarea_create() and rte_memarea_destroy() API.
>>
>> Signed-off-by: Chengwen Feng 
>> Reviewed-by: Dongdong Liu 
>> ---
> 
> [...]
> 
>> +struct memarea_obj {
>> +TAILQ_ENTRY(memarea_obj) obj_node;
>> +TAILQ_ENTRY(memarea_obj) free_node;
>> +size_t   size;
>> +size_t   alloc_size;
>> +uint32_t magic;
>> +};
> 
> The magic cookie is for debug purposes only, and should be enclosed by #ifdef 
> RTE_LIBRTE_MEMAREA_DEBUG, like in the mempool library [1].

In the mempool library the cookie mechanism is under a debug macro, and the
main reason for that should be performance considerations.

Also, the community recommends that compilation macros be used as little as
possible.

So I think we could add the following new API like:

/*
 * Enable audit in alloc/free/audit.
 */
rte_memarea_audit_enable(struct rte_memarea *ma)

/*
 * Disable audit in alloc/free/audit.
 */
rte_memarea_audit_disable(struct rte_memarea *ma)

/*
 * If ptr is NULL, audit all objects.
 * Otherwise, audit only the object that ptr points to.
 * If the audit fails, an error code is returned and an error log is emitted.
 *
 * The audit is performed only when the audit function is enabled for the
 * memarea.
 */
int rte_memarea_audit_object(struct rte_memarea *ma, void *ptr)


So an application can invoke rte_memarea_audit() in its code without
macro control.
If it considers that memory corruption may occur in a certain scenario, it
can enable the audit function and find the corruption through the error log
or return code.

This method causes a slight performance loss, but it guarantees that no
recompilation is needed and a single binary can be used.
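
For illustration, a minimal sketch of how an application might use the
proposed audit API (function names as proposed above, not yet part of DPDK;
the header name is assumed):

#include <stdio.h>
#include <rte_memarea.h>   /* assumed header name for the new library */

/* Enable auditing around a scenario suspected of corrupting memory. */
void example_audit(struct rte_memarea *ma, void *obj)
{
	rte_memarea_audit_enable(ma);

	/* ... run the suspected scenario ... */

	if (rte_memarea_audit_object(ma, obj) != 0)
		printf("object %p is corrupted\n", obj);
	if (rte_memarea_audit_object(ma, NULL) != 0)
		printf("memarea contains corrupted objects\n");

	rte_memarea_audit_disable(ma);
}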

> 
> [1]: 
> https://elixir.bootlin.com/dpdk/latest/source/lib/mempool/rte_mempool.h#L154
> 
> With that added:
> 
> Series-acked-by: Morten Brørup 
> 
> 
> .
> 


Re: [PATCH v6 1/2] ethdev: add query_update sync and async function calls

2023-01-20 Thread Andrew Rybchenko

On 1/19/23 19:47, Gregory Etelson wrote:

The current API allows either querying or updating an indirect flow action.
If an indirect action must be conditionally updated according to its
present state, the application must first issue an action query, then
analyze the returned data and, if needed, issue an update request.
By the time the update is processed, the action state can have changed and
the update can invalidate the action.

The patch adds `rte_flow_action_handle_query_update` function call,


"This patch adds ..." -> "Add ..."


and it's async version `rte_flow_async_action_handle_query_update`
to atomically query and update flow action.


Sorry, maybe I'm missing something, but after reading the previous
discussion I have a feeling that the update could depend on the queried
data. However, I don't understand how that could be possible with the API
below. Is it just my misunderstanding?



Application can control query and update order, if that is supported
by port hardware, by setting `qu_mode` parameter to
RTE_FLOW_QU_QUERY_FIRST or RTE_FLOW_QU_UPDATE_FIRST.
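
As an illustration, a minimal sketch of a synchronous caller (the
update/query payloads are action-specific; "update_conf" and "query_data"
below are placeholders, not concrete DPDK types):

#include <errno.h>
#include <stdio.h>
#include <rte_flow.h>

/* Atomically query and update an indirect action, letting the PMD run the
 * query before the update. */
static int
example_query_update(uint16_t port_id, struct rte_flow_action_handle *handle,
		     const void *update_conf, void *query_data)
{
	struct rte_flow_error error;
	int ret;

	ret = rte_flow_action_handle_query_update(port_id, handle,
						  update_conf, query_data,
						  RTE_FLOW_QU_QUERY_FIRST,
						  &error);
	if (ret == -ENOTSUP)
		printf("port %u: atomic query-and-update not supported\n", port_id);
	return ret;
}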

Signed-off-by: Gregory Etelson 
---
v2: Remove RTE_FLOW_QU_DEFAULT query-update mode.
v3: Update release release notes.
 Fix doxygen errors.
v4: Add returned errno codes.
v5: Update the patch description.
 Fix typos.
v6: Resolve merge conflict with the main branch.
---
  doc/guides/rel_notes/release_23_03.rst |  7 ++
  lib/ethdev/rte_flow.c  | 39 ++
  lib/ethdev/rte_flow.h  | 99 ++
  lib/ethdev/rte_flow_driver.h   | 15 
  lib/ethdev/version.map |  5 ++
  5 files changed, 165 insertions(+)

diff --git a/doc/guides/rel_notes/release_23_03.rst 
b/doc/guides/rel_notes/release_23_03.rst
index c15f6fbb9f..6941d20abc 100644
--- a/doc/guides/rel_notes/release_23_03.rst
+++ b/doc/guides/rel_notes/release_23_03.rst
@@ -69,6 +69,13 @@ New Features
  ``rte_event_dev_config::nb_single_link_event_port_queues`` parameter
  required for eth_rx, eth_tx, crypto and timer eventdev adapters.
  
+* **Added functions to atomically query and update indirect flow action.**

+
+  Added synchronous and asynchronous functions to atomically query and update
+  indirect flow action:
+
+  - ``rte_flow_action_handle_query_update``
+  - ``rte_flow_async_action_handle_query_update``


Please add one more empty line, since two empty lines should separate
sections.

  
  Removed Items

  -
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 7d0c24366c..8b8aa940be 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1883,3 +1883,42 @@ rte_flow_async_action_handle_query(uint16_t port_id,
  action_handle, data, user_data, 
error);
return flow_err(port_id, ret, error);
  }
+
+int
+rte_flow_action_handle_query_update(uint16_t port_id,
+   struct rte_flow_action_handle *handle,
+   const void *update, void *query,
+   enum rte_flow_query_update_mode mode,
+   struct rte_flow_error *error)
+{
+   struct rte_eth_dev *dev = &rte_eth_devices[port_id];


I know that nearby code does the same, but I still dislike it.
If port_id is invalid, it could be an out-of-bounds access.
Yes, port_id is checked on ops get, but it would be safer to
do the assignment after the ops check below.
Right now there are no bugs, but future modifications
could break it.


+   const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+   int ret;
+
+   if (!ops || !ops->action_handle_query_update)
+   return -ENOTSUP;
+   ret = ops->action_handle_query_update(dev, handle, update, query,
+ mode, error);
+   return flow_err(port_id, ret, error);
+}
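
For clarity, the first wrapper with the suggested reordering applied (fetch
the ops first, which validates port_id, and only then index
rte_eth_devices[]; a sketch, not part of the patch):

int
rte_flow_action_handle_query_update(uint16_t port_id,
				    struct rte_flow_action_handle *handle,
				    const void *update, void *query,
				    enum rte_flow_query_update_mode mode,
				    struct rte_flow_error *error)
{
	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
	struct rte_eth_dev *dev;
	int ret;

	if (!ops || !ops->action_handle_query_update)
		return -ENOTSUP;
	/* port_id has been validated by rte_flow_ops_get() above. */
	dev = &rte_eth_devices[port_id];
	ret = ops->action_handle_query_update(dev, handle, update, query,
					      mode, error);
	return flow_err(port_id, ret, error);
}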
+
+int
+rte_flow_async_action_handle_query_update(uint16_t port_id, uint32_t queue_id,
+ const struct rte_flow_op_attr *attr,
+ struct rte_flow_action_handle *handle,
+ const void *update, void *query,
+ enum rte_flow_query_update_mode mode,
+ void *user_data,
+ struct rte_flow_error *error)
+{
+   struct rte_eth_dev *dev = &rte_eth_devices[port_id];


same here


+   const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+   int ret;
+
+   if (!ops || !ops->async_action_handle_query_update)
+   return -ENOTSUP;
+   ret = ops->async_action_handle_query_update(dev, queue_id, attr,
+   handle, update, query, mode,
+   user_data, error);
+   return flow_err(port_id, ret, error);
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index b60987db4b

[PATCH v6 0/6] add trace points in ethdev library

2023-01-20 Thread Ankur Dwivedi
This series adds trace points for functions in the ethdev library.
The trace points are added in ethdev, flow, mtr and tm files.

v6:
 - Resolves a compilation error with the 32-bit build.
 - Resolves a bug found in v5 in the trace autotest application where
   the traces were not getting generated after the first blob test case.
   The trace size needs to be known at trace point registration, so a
   dynamic-length array cannot be implemented with the current
   implementation. So the blob metadata is changed to make the array
   64 bytes. The length denotes the blob length (passed by the
   application). The trailing unused bytes are set to zero if the length
   is less than 64.

   For example, the following is the ctf metadata created to display
   a mac addr array in rte_eth_trace_macaddr_get():
   struct {
  ...
  uint8_t len;
  uint8_t mac_addr_addr_bytes[64];
   };
 - Minor changes in the subject of patches (2/6) and (3/6).

v5:
 - The rte_trace_point_emit_char_array function is renamed to
   rte_trace_point_emit_blob. With this function an array of
   any length up to 65535 bytes can be captured.
   For example, the following is the ctf metadata created to display
   a mac addr array in rte_eth_trace_macaddr_get():
   struct {
  ...
  uint16_t len;
  uint8_t mac_addr_addr_bytes[len];
   };
 - Added additional test cases for the rte_eal_trace_generic_blob
   tracepoint.
 - Capturing of the return value of a function is added to the tracepoints
   in the flow, mtr and tm patches.
 - In the ethdev patch (1/6), removed an extra line. Also added the rx_pkts
   and tx_pkts pointers to the trace points.

v4:
 - Adds a tracepoint function to emit a char array. Also adds the
   test case.
 - Resolved review comments on the "ethdev: add trace point" patch.
   This patch is divided into 2 patches to minimize per-patch
   size.
 - From the earlier version (v3), a few tracepoints in ethdev,
   flow, mtr and tm are made fast path tracepoints. The
   tracepoints I was unsure about are made fast path.
   All the fast path tracepoints can be found in
   rte_ethdev_trace_fp.h and rte_ethdev_trace_fp_burst.h.
   All the slow path tracepoints can be found in rte_ethdev_trace.h.
 - Capturing of the return value is added to the tracepoints in ethdev.
   For flow, mtr and tm these changes are still yet to be done.
   Will do it in the next versions.
 - Moved the trace functions from INTERNAL to EXPERIMENTAL in
   version.map.

v3:
 - Moved the trace functions from EXPERIMENTAL to INTERNAL in
   version.map.
 - Moved the trace function calls to the end, in ethdev and flow trace.
 - Added code to print the input value of features in
   rte_eth_trace_rx_metadata_negotiate().
 - Added code to capture return value in flow trace.

Ankur Dwivedi (6):
  eal: trace: add trace point emit for blob
  ethdev: add trace points for ethdev (part one)
  ethdev: add trace points for ethdev (part two)
  ethdev: add trace points for flow
  ethdev: add trace points for mtr
  ethdev: add trace points for tm

 app/test/test_trace.c  |   11 +
 doc/guides/prog_guide/trace_lib.rst|   12 +
 lib/eal/common/eal_common_trace_points.c   |2 +
 lib/eal/include/rte_eal_trace.h|6 +
 lib/eal/include/rte_trace_point.h  |   32 +
 lib/eal/include/rte_trace_point_register.h |9 +
 lib/eal/version.map|3 +
 lib/ethdev/ethdev_private.c|5 +
 lib/ethdev/ethdev_trace_points.c   |  715 ++
 lib/ethdev/meson.build |1 +
 lib/ethdev/rte_ethdev.c|  711 --
 lib/ethdev/rte_ethdev.h|2 +-
 lib/ethdev/rte_ethdev_cman.c   |   30 +-
 lib/ethdev/rte_ethdev_trace.h  | 1450 
 lib/ethdev/rte_ethdev_trace_fp.h   | 1005 +-
 lib/ethdev/rte_ethdev_trace_fp_burst.h |   44 +
 lib/ethdev/rte_flow.c  |  314 -
 lib/ethdev/rte_mtr.c   |  156 ++-
 lib/ethdev/rte_tm.c|  247 +++-
 lib/ethdev/version.map |  235 
 20 files changed, 4754 insertions(+), 236 deletions(-)
 create mode 100644 lib/ethdev/rte_ethdev_trace_fp_burst.h

-- 
2.25.1



[PATCH v6 1/6] eal: trace: add trace point emit for blob

2023-01-20 Thread Ankur Dwivedi
Adds a trace point emit function for capturing a blob. The blob
captures the length passed by the application followed by the array.

The maximum number of blob bytes which can be captured is bounded by
the RTE_TRACE_BLOB_LEN_MAX macro, whose value is 64 bytes. If the length
is less than 64, the remaining trailing bytes are set to zero.

This patch also adds a test case for the emit blob tracepoint function.

Signed-off-by: Ankur Dwivedi 
---
 app/test/test_trace.c  | 11 
 doc/guides/prog_guide/trace_lib.rst| 12 
 lib/eal/common/eal_common_trace_points.c   |  2 ++
 lib/eal/include/rte_eal_trace.h|  6 
 lib/eal/include/rte_trace_point.h  | 32 ++
 lib/eal/include/rte_trace_point_register.h |  9 ++
 lib/eal/version.map|  3 ++
 7 files changed, 75 insertions(+)

diff --git a/app/test/test_trace.c b/app/test/test_trace.c
index 6bedf14024..ad4a394a29 100644
--- a/app/test/test_trace.c
+++ b/app/test/test_trace.c
@@ -4,6 +4,7 @@
 
 #include 
 #include 
+#include 
 #include 
 
 #include "test.h"
@@ -177,7 +178,12 @@ test_fp_trace_points(void)
 static int
 test_generic_trace_points(void)
 {
+   uint8_t arr[RTE_TRACE_BLOB_LEN_MAX];
int tmp;
+   int i;
+
+   for (i = 0; i < RTE_TRACE_BLOB_LEN_MAX; i++)
+   arr[i] = i;
 
rte_eal_trace_generic_void();
rte_eal_trace_generic_u64(0x10);
@@ -195,6 +201,11 @@ test_generic_trace_points(void)
rte_eal_trace_generic_ptr(&tmp);
rte_eal_trace_generic_str("my string");
rte_eal_trace_generic_size_t(sizeof(void *));
+   rte_eal_trace_generic_blob(arr, 0);
+   rte_eal_trace_generic_blob(arr, 17);
+   rte_eal_trace_generic_blob(arr, RTE_TRACE_BLOB_LEN_MAX);
+   rte_eal_trace_generic_blob(arr, rte_rand() %
+   RTE_TRACE_BLOB_LEN_MAX);
RTE_EAL_TRACE_GENERIC_FUNC;
 
return TEST_SUCCESS;
diff --git a/doc/guides/prog_guide/trace_lib.rst 
b/doc/guides/prog_guide/trace_lib.rst
index 9a8f38073d..3e0ea5835c 100644
--- a/doc/guides/prog_guide/trace_lib.rst
+++ b/doc/guides/prog_guide/trace_lib.rst
@@ -352,3 +352,15 @@ event ID.
 The ``packet.header`` and ``packet.context`` will be written in the slow path
 at the time of trace memory creation. The ``trace.header`` and trace payload
 will be emitted when the tracepoint function is invoked.
+
+Limitations
+---
+
+- The ``rte_trace_point_emit_blob()`` function can capture a maximum blob of
+  length ``RTE_TRACE_BLOB_LEN_MAX`` bytes. The application can call
+  ``rte_trace_point_emit_blob()`` multiple times with length less than or
+  equal to ``RTE_TRACE_BLOB_LEN_MAX``, if it needs to capture more than
+  ``RTE_TRACE_BLOB_LEN_MAX`` bytes.
+- If the length passed to the ``rte_trace_point_emit_blob()`` is less than
+  ``RTE_TRACE_BLOB_LEN_MAX``, then the trailing ``(RTE_TRACE_BLOB_LEN_MAX - len)``
+  bytes in the trace are set to zero.
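
As an illustration of the first limitation, a buffer larger than
RTE_TRACE_BLOB_LEN_MAX could be emitted in chunks along these lines (a
sketch, not part of the patch):

#include <rte_common.h>
#include <rte_eal_trace.h>

/* Emit a large buffer as a sequence of bounded blob trace records. */
static void
example_emit_large_blob(const uint8_t *buf, size_t len)
{
	size_t off;
	uint8_t chunk;

	for (off = 0; off < len; off += chunk) {
		chunk = (uint8_t)RTE_MIN(len - off, (size_t)RTE_TRACE_BLOB_LEN_MAX);
		rte_eal_trace_generic_blob((void *)(uintptr_t)(buf + off), chunk);
	}
}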
diff --git a/lib/eal/common/eal_common_trace_points.c 
b/lib/eal/common/eal_common_trace_points.c
index 0b0b254615..051f89809c 100644
--- a/lib/eal/common/eal_common_trace_points.c
+++ b/lib/eal/common/eal_common_trace_points.c
@@ -40,6 +40,8 @@ RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_size_t,
lib.eal.generic.size_t)
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_func,
lib.eal.generic.func)
+RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_blob,
+   lib.eal.generic.blob)
 
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_alarm_set,
lib.eal.alarm.set)
diff --git a/lib/eal/include/rte_eal_trace.h b/lib/eal/include/rte_eal_trace.h
index 5ef4398230..e0b836eb2f 100644
--- a/lib/eal/include/rte_eal_trace.h
+++ b/lib/eal/include/rte_eal_trace.h
@@ -143,6 +143,12 @@ RTE_TRACE_POINT(
rte_trace_point_emit_string(func);
 )
 
+RTE_TRACE_POINT(
+   rte_eal_trace_generic_blob,
+   RTE_TRACE_POINT_ARGS(void *in, uint8_t len),
+   rte_trace_point_emit_blob(in, len);
+)
+
 #define RTE_EAL_TRACE_GENERIC_FUNC rte_eal_trace_generic_func(__func__)
 
 /* Interrupt */
diff --git a/lib/eal/include/rte_trace_point.h 
b/lib/eal/include/rte_trace_point.h
index 0f8700974f..aca8344dbf 100644
--- a/lib/eal/include/rte_trace_point.h
+++ b/lib/eal/include/rte_trace_point.h
@@ -144,6 +144,16 @@ _tp _args \
 #define rte_trace_point_emit_ptr(val)
 /** Tracepoint function payload for string datatype */
 #define rte_trace_point_emit_string(val)
+/**
+ * Tracepoint function to capture a blob.
+ *
+ * @param val
+ *   Pointer to the array to be captured.
+ * @param len
+ *   Length to be captured. The maximum supported length is
+ *   RTE_TRACE_BLOB_LEN_MAX bytes.
+ */
+#define rte_trace_point_emit_blob(val, len)
 
 #endif /* __DOXYGEN__ */
 
@@ -152,6 +162,9 @@ _tp _args \
 /** @internal Macro to define event header size. */
 #define __RTE_TRACE_EVENT_HEADER_SZ sizeof(uint64_t)
 
+/** Macro to define maximum emit length of blob. */
+#d

[PATCH v6 2/6] ethdev: add trace points for ethdev (part one)

2023-01-20 Thread Ankur Dwivedi
Adds trace points for ethdev functions.
Moved rte_ethdev_trace_rx_burst and rte_ethdev_trace_tx_burst to
a new file, rte_ethdev_trace_fp_burst.h. This is needed to resolve
a cyclic dependency between rte_ethdev.h and rte_ethdev_trace_fp.h.

Signed-off-by: Ankur Dwivedi 
---
 lib/ethdev/ethdev_private.c|   5 +
 lib/ethdev/ethdev_trace_points.c   | 193 +
 lib/ethdev/meson.build |   1 +
 lib/ethdev/rte_ethdev.c| 235 +---
 lib/ethdev/rte_ethdev.h|   2 +-
 lib/ethdev/rte_ethdev_trace.h  | 285 +
 lib/ethdev/rte_ethdev_trace_fp.h   | 279 +++-
 lib/ethdev/rte_ethdev_trace_fp_burst.h |  44 
 lib/ethdev/version.map |  66 ++
 9 files changed, 1075 insertions(+), 35 deletions(-)
 create mode 100644 lib/ethdev/rte_ethdev_trace_fp_burst.h

diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 48090c879a..fd16b25e55 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -5,6 +5,7 @@
 #include 
 
 #include "rte_ethdev.h"
+#include "rte_ethdev_trace_fp.h"
 #include "ethdev_driver.h"
 #include "ethdev_private.h"
 
@@ -297,6 +298,8 @@ rte_eth_call_rx_callbacks(uint16_t port_id, uint16_t 
queue_id,
cb = cb->next;
}
 
+   rte_eth_trace_call_rx_callbacks(port_id, queue_id, rx_pkts, nb_rx, 
nb_pkts);
+
return nb_rx;
 }
 
@@ -312,6 +315,8 @@ rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t 
queue_id,
cb = cb->next;
}
 
+   rte_eth_trace_call_tx_callbacks(port_id, queue_id, tx_pkts, nb_pkts);
+
return nb_pkts;
 }
 
diff --git a/lib/ethdev/ethdev_trace_points.c b/lib/ethdev/ethdev_trace_points.c
index 2919409a15..4fea76e0ff 100644
--- a/lib/ethdev/ethdev_trace_points.c
+++ b/lib/ethdev/ethdev_trace_points.c
@@ -5,6 +5,7 @@
 #include 
 
 #include 
+#include 
 
 RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_configure,
lib.ethdev.configure)
@@ -29,3 +30,195 @@ RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_rx_burst,
 
 RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_tx_burst,
lib.ethdev.tx.burst)
+
+RTE_TRACE_POINT_REGISTER(rte_eth_trace_call_rx_callbacks,
+   lib.ethdev.call_rx_callbacks)
+
+RTE_TRACE_POINT_REGISTER(rte_eth_trace_call_tx_callbacks,
+   lib.ethdev.call_tx_callbacks)
+
+RTE_TRACE_POINT_REGISTER(rte_eth_trace_iterator_init,
+   lib.ethdev.iterator_init)
+
+RTE_TRACE_POINT_REGISTER(rte_eth_trace_iterator_next,
+   lib.ethdev.iterator_next)
+
+RTE_TRACE_POINT_REGISTER(rte_eth_trace_iterator_cleanup,
+   lib.ethdev.iterator_cleanup)
+
+RTE_TRACE_POINT_REGISTER(rte_eth_trace_find_next,
+   lib.ethdev.find_next)
+
+RTE_TRACE_POINT_REGISTER(rte_eth_trace_find_next_of,
+   lib.ethdev.find_next_of)
+
+RTE_TRACE_POINT_REGISTER(rte_eth_trace_find_next_sibling,
+   lib.ethdev.find_next_sibling)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_is_valid_port,
+   lib.ethdev.is_valid_port)
+
+RTE_TRACE_POINT_REGISTER(rte_eth_trace_find_next_owned_by,
+   lib.ethdev.find_next_owned_by)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_owner_new,
+   lib.ethdev.owner_new)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_owner_set,
+   lib.ethdev.owner_set)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_owner_unset,
+   lib.ethdev.owner_unset)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_owner_delete,
+   lib.ethdev.owner_delete)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_owner_get,
+   lib.ethdev.owner_get)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_socket_id,
+   lib.ethdev.socket_id)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_get_sec_ctx,
+   lib.ethdev.get_sec_ctx)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_count_avail,
+   lib.ethdev.count_avail)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_count_total,
+   lib.ethdev.count_total)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_get_name_by_port,
+   lib.ethdev.get_name_by_port)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_get_port_by_name,
+   lib.ethdev.get_port_by_name)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_rx_queue_start,
+   lib.ethdev.rx_queue_start)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_rx_queue_stop,
+   lib.ethdev.rx_queue_stop)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_tx_queue_start,
+   lib.ethdev.tx_queue_start)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_tx_queue_stop,
+   lib.ethdev.tx_queue_stop)
+
+RTE_TRACE_POINT_REGISTER(rte_eth_trace_speed_bitflag,
+   lib.ethdev.speed_bitflag)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_rx_offload_name,
+   lib.ethdev.rx_offload_name)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_tx_offload_name,
+   lib.ethdev.tx_offload_name)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_capability_name,
+   lib.ethdev.capability_name)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_set_link_up,
+   lib.ethdev.set_link_up)
+
+R

[PATCH v6 3/6] ethdev: add trace points for ethdev (part two)

2023-01-20 Thread Ankur Dwivedi
Adds trace points for remaining ethdev functions.

Signed-off-by: Ankur Dwivedi 
---
 lib/ethdev/ethdev_trace_points.c | 252 +++
 lib/ethdev/rte_ethdev.c  | 476 ++-
 lib/ethdev/rte_ethdev_cman.c |  30 +-
 lib/ethdev/rte_ethdev_trace.h| 529 +++
 lib/ethdev/rte_ethdev_trace_fp.h | 342 
 lib/ethdev/version.map   |  81 +
 6 files changed, 1622 insertions(+), 88 deletions(-)

diff --git a/lib/ethdev/ethdev_trace_points.c b/lib/ethdev/ethdev_trace_points.c
index 4fea76e0ff..102a18fcc1 100644
--- a/lib/ethdev/ethdev_trace_points.c
+++ b/lib/ethdev/ethdev_trace_points.c
@@ -222,3 +222,255 @@ 
RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_set_rx_queue_stats_mapping,
 
 RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_fw_version_get,
lib.ethdev.fw_version_get)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_info_get,
+   lib.ethdev.info_get)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_conf_get,
+   lib.ethdev.conf_get)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_get_supported_ptypes,
+   lib.ethdev.get_supported_ptypes)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_set_ptypes,
+   lib.ethdev.set_ptypes)
+
+RTE_TRACE_POINT_REGISTER(rte_eth_trace_macaddrs_get,
+   lib.ethdev.macaddrs_get)
+
+RTE_TRACE_POINT_REGISTER(rte_eth_trace_macaddr_get,
+   lib.ethdev.macaddr_get)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_get_mtu,
+   lib.ethdev.get_mtu)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_set_mtu,
+   lib.ethdev.set_mtu)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_vlan_filter,
+   lib.ethdev.vlan_filter)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_set_vlan_strip_on_queue,
+   lib.ethdev.set_vlan_strip_on_queue)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_set_vlan_ether_type,
+   lib.ethdev.set_vlan_ether_type)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_set_vlan_offload,
+   lib.ethdev.set_vlan_offload)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_get_vlan_offload,
+   lib.ethdev.get_vlan_offload)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_set_vlan_pvid,
+   lib.ethdev.set_vlan_pvid)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_flow_ctrl_get,
+   lib.ethdev.flow_ctrl_get)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_flow_ctrl_set,
+   lib.ethdev.flow_ctrl_set)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_priority_flow_ctrl_set,
+   lib.ethdev.priority_flow_ctrl_set)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_priority_flow_ctrl_queue_info_get,
+   lib.ethdev.priority_flow_ctrl_queue_info_get)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_priority_flow_ctrl_queue_configure,
+   lib.ethdev.priority_flow_ctrl_queue_configure)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_rss_reta_update,
+   lib.ethdev.rss_reta_update)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_rss_reta_query,
+   lib.ethdev.rss_reta_query)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_rss_hash_update,
+   lib.ethdev.rss_hash_update)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_rss_hash_conf_get,
+   lib.ethdev.rss_hash_conf_get)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_udp_tunnel_port_add,
+   lib.ethdev.udp_tunnel_port_add)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_udp_tunnel_port_delete,
+   lib.ethdev.udp_tunnel_port_delete)
+
+RTE_TRACE_POINT_REGISTER(rte_eth_trace_led_on,
+   lib.ethdev.led_on)
+
+RTE_TRACE_POINT_REGISTER(rte_eth_trace_led_off,
+   lib.ethdev.led_off)
+
+RTE_TRACE_POINT_REGISTER(rte_eth_trace_fec_get_capability,
+   lib.ethdev.fec_get_capability)
+
+RTE_TRACE_POINT_REGISTER(rte_eth_trace_fec_get,
+   lib.ethdev.fec_get)
+
+RTE_TRACE_POINT_REGISTER(rte_eth_trace_fec_set,
+   lib.ethdev.fec_set)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_mac_addr_add,
+   lib.ethdev.mac_addr_add)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_mac_addr_remove,
+   lib.ethdev.mac_addr_remove)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_default_mac_addr_set,
+   lib.ethdev.default_mac_addr_set)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_uc_hash_table_set,
+   lib.ethdev.uc_hash_table_set)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_uc_all_hash_table_set,
+   lib.ethdev.uc_all_hash_table_set)
+
+RTE_TRACE_POINT_REGISTER(rte_eth_trace_set_queue_rate_limit,
+   lib.ethdev.set_queue_rate_limit)
+
+RTE_TRACE_POINT_REGISTER(rte_eth_trace_rx_avail_thresh_set,
+   lib.ethdev.rx_avail_thresh_set)
+
+RTE_TRACE_POINT_REGISTER(rte_eth_trace_rx_avail_thresh_query,
+   lib.ethdev.rx_avail_thresh_query)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_callback_register,
+   lib.ethdev.callback_register)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_callback_unregister,
+   lib.ethdev.callback_unregister)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_rx_intr_ctl,
+   lib.ethdev.rx_intr_ctl)
+
+RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_rx_intr_ctl_q_get_fd,
+   lib.ethdev.rx_intr_ctl_q_g

[PATCH v6 4/6] ethdev: add trace points for flow

2023-01-20 Thread Ankur Dwivedi
Adds trace points for rte_flow specific functions in ethdev lib.

Signed-off-by: Ankur Dwivedi 
---
 lib/ethdev/ethdev_trace_points.c | 117 ++
 lib/ethdev/rte_ethdev_trace.h| 383 +++
 lib/ethdev/rte_ethdev_trace_fp.h | 113 +
 lib/ethdev/rte_flow.c| 314 +++--
 lib/ethdev/version.map   |  37 +++
 5 files changed, 898 insertions(+), 66 deletions(-)

diff --git a/lib/ethdev/ethdev_trace_points.c b/lib/ethdev/ethdev_trace_points.c
index 102a18fcc1..1953d90a5a 100644
--- a/lib/ethdev/ethdev_trace_points.c
+++ b/lib/ethdev/ethdev_trace_points.c
@@ -474,3 +474,120 @@ RTE_TRACE_POINT_REGISTER(rte_eth_trace_cman_config_set,
 
 RTE_TRACE_POINT_REGISTER(rte_eth_trace_cman_config_get,
lib.ethdev.cman_config_get)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_copy,
+   lib.ethdev.flow.copy)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_create,
+   lib.ethdev.flow.create)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_destroy,
+   lib.ethdev.flow.destroy)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_flush,
+   lib.ethdev.flow.flush)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_isolate,
+   lib.ethdev.flow.isolate)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_query,
+   lib.ethdev.flow.query)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_validate,
+   lib.ethdev.flow.validate)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_conv,
+   lib.ethdev.flow.conv)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_dynf_metadata_register,
+   lib.ethdev.dynf_metadata_register)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_get_aged_flows,
+   lib.ethdev.flow.get_aged_flows)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_get_q_aged_flows,
+   lib.ethdev.flow.get_q_aged_flows)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_tunnel_decap_set,
+   lib.ethdev.flow.tunnel_decap_set)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_tunnel_match,
+   lib.ethdev.flow.tunnel_match)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_get_restore_info,
+   lib.ethdev.flow.get_restore_info)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_tunnel_action_decap_release,
+   lib.ethdev.flow.tunnel_action_decap_release)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_tunnel_item_release,
+   lib.ethdev.flow.tunnel_item_release)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_action_handle_create,
+   lib.ethdev.flow.action_handle_create)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_action_handle_destroy,
+   lib.ethdev.flow.action_handle_destroy)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_action_handle_update,
+   lib.ethdev.flow.action_handle_update)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_action_handle_query,
+   lib.ethdev.flow.action_handle_query)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_flex_item_create,
+   lib.ethdev.flow.flex_item_create)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_flex_item_release,
+   lib.ethdev.flow.flex_item_release)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_pick_transfer_proxy,
+   lib.ethdev.flow.pick_transfer_proxy)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_info_get,
+   lib.ethdev.flow.info_get)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_configure,
+   lib.ethdev.flow.configure)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_pattern_template_create,
+   lib.ethdev.flow.pattern_template_create)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_pattern_template_destroy,
+   lib.ethdev.flow.pattern_template_destroy)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_actions_template_create,
+   lib.ethdev.flow.actions_template_create)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_actions_template_destroy,
+   lib.ethdev.flow.actions_template_destroy)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_template_table_create,
+   lib.ethdev.flow.template_table_create)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_template_table_destroy,
+   lib.ethdev.flow.template_table_destroy)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_async_create,
+   lib.ethdev.flow.async_create)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_async_destroy,
+   lib.ethdev.flow.async_destroy)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_push,
+   lib.ethdev.flow.push)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_pull,
+   lib.ethdev.flow.pull)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_async_action_handle_create,
+   lib.ethdev.flow.async_action_handle_create)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_async_action_handle_destroy,
+   lib.ethdev.flow.async_action_handle_destroy)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_async_action_handle_update,
+   lib.ethdev.flow.async_action_handle_update)
+
+RTE_TRACE_POINT_REGISTER(rte_flow_trace_async_action_handle_query,
+   lib.ethdev.flow.async.action.handle.query)
diff --git a/lib/ethdev/rte_ethdev_trace.h b/lib/ethdev/rte_ethdev_trace.h
index fa195c5cde..fd4fbaf6de 100644
--- a/lib/ethdev/rte_ethdev_trace.h
+++ b/lib/ethdev/rte_ethdev_trace.h
@@ -902,6 +902,389 @@ RTE

[PATCH v6 5/6] ethdev: add trace points for mtr

2023-01-20 Thread Ankur Dwivedi
Adds trace points for rte_mtr specific functions in ethdev lib.

Signed-off-by: Ankur Dwivedi 
---
 lib/ethdev/ethdev_trace_points.c |  63 +
 lib/ethdev/rte_ethdev_trace.h| 112 ++
 lib/ethdev/rte_ethdev_trace_fp.h | 100 
 lib/ethdev/rte_mtr.c | 156 +++
 lib/ethdev/version.map   |  21 +
 5 files changed, 434 insertions(+), 18 deletions(-)

diff --git a/lib/ethdev/ethdev_trace_points.c b/lib/ethdev/ethdev_trace_points.c
index 1953d90a5a..067e5fdab4 100644
--- a/lib/ethdev/ethdev_trace_points.c
+++ b/lib/ethdev/ethdev_trace_points.c
@@ -591,3 +591,66 @@ 
RTE_TRACE_POINT_REGISTER(rte_flow_trace_async_action_handle_update,
 
 RTE_TRACE_POINT_REGISTER(rte_flow_trace_async_action_handle_query,
lib.ethdev.flow.async.action.handle.query)
+
+RTE_TRACE_POINT_REGISTER(rte_mtr_trace_capabilities_get,
+   lib.ethdev.mtr.capabilities_get)
+
+RTE_TRACE_POINT_REGISTER(rte_mtr_trace_create,
+   lib.ethdev.mtr.create)
+
+RTE_TRACE_POINT_REGISTER(rte_mtr_trace_destroy,
+   lib.ethdev.mtr.destroy)
+
+RTE_TRACE_POINT_REGISTER(rte_mtr_trace_meter_disable,
+   lib.ethdev.mtr.meter_disable)
+
+RTE_TRACE_POINT_REGISTER(rte_mtr_trace_meter_dscp_table_update,
+   lib.ethdev.mtr.meter_dscp_table_update)
+
+RTE_TRACE_POINT_REGISTER(rte_mtr_trace_meter_enable,
+   lib.ethdev.mtr.meter_enable)
+
+RTE_TRACE_POINT_REGISTER(rte_mtr_trace_meter_profile_add,
+   lib.ethdev.mtr.meter_profile_add)
+
+RTE_TRACE_POINT_REGISTER(rte_mtr_trace_meter_profile_delete,
+   lib.ethdev.mtr.meter_profile_delete)
+
+RTE_TRACE_POINT_REGISTER(rte_mtr_trace_meter_profile_get,
+   lib.ethdev.mtr.meter_profile_get)
+
+RTE_TRACE_POINT_REGISTER(rte_mtr_trace_meter_profile_update,
+   lib.ethdev.mtr.meter_profile_update)
+
+RTE_TRACE_POINT_REGISTER(rte_mtr_trace_stats_read,
+   lib.ethdev.mtr.stats_read)
+
+RTE_TRACE_POINT_REGISTER(rte_mtr_trace_stats_update,
+   lib.ethdev.mtr.stats_update)
+
+RTE_TRACE_POINT_REGISTER(rte_mtr_trace_meter_policy_add,
+   lib.ethdev.mtr.meter_policy_add)
+
+RTE_TRACE_POINT_REGISTER(rte_mtr_trace_meter_policy_delete,
+   lib.ethdev.mtr.meter_policy_delete)
+
+RTE_TRACE_POINT_REGISTER(rte_mtr_trace_meter_policy_get,
+   lib.ethdev.mtr.meter_policy_get)
+
+RTE_TRACE_POINT_REGISTER(rte_mtr_trace_meter_policy_update,
+   lib.ethdev.mtr.meter_policy_update)
+
+RTE_TRACE_POINT_REGISTER(rte_mtr_trace_meter_policy_validate,
+   lib.ethdev.mtr.meter_policy_validate)
+
+RTE_TRACE_POINT_REGISTER(rte_mtr_trace_meter_vlan_table_update,
+   lib.ethdev.mtr.meter_vlan_table_update)
+
+RTE_TRACE_POINT_REGISTER(rte_mtr_trace_color_in_protocol_get,
+   lib.ethdev.mtr.color_in_protocol_get)
+
+RTE_TRACE_POINT_REGISTER(rte_mtr_trace_color_in_protocol_priority_get,
+   lib.ethdev.mtr.color_in_protocol_priority_get)
+
+RTE_TRACE_POINT_REGISTER(rte_mtr_trace_color_in_protocol_set,
+   lib.ethdev.mtr.color_in_protocol_set)
diff --git a/lib/ethdev/rte_ethdev_trace.h b/lib/ethdev/rte_ethdev_trace.h
index fd4fbaf6de..2a0516e10d 100644
--- a/lib/ethdev/rte_ethdev_trace.h
+++ b/lib/ethdev/rte_ethdev_trace.h
@@ -18,6 +18,7 @@ extern "C" {
 #include 
 
 #include "rte_ethdev.h"
+#include "rte_mtr.h"
 
 RTE_TRACE_POINT(
rte_ethdev_trace_configure,
@@ -1285,6 +1286,117 @@ RTE_TRACE_POINT(
rte_trace_point_emit_int(ret);
 )
 
+RTE_TRACE_POINT(
+   rte_mtr_trace_capabilities_get,
+   RTE_TRACE_POINT_ARGS(uint16_t port_id,
+   struct rte_mtr_capabilities *cap, int ret),
+   rte_trace_point_emit_u16(port_id);
+   rte_trace_point_emit_ptr(cap);
+   rte_trace_point_emit_int(ret);
+)
+
+RTE_TRACE_POINT(
+   rte_mtr_trace_create,
+   RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t mtr_id,
+   struct rte_mtr_params *params, int shared, int ret),
+   rte_trace_point_emit_u16(port_id);
+   rte_trace_point_emit_u32(mtr_id);
+   rte_trace_point_emit_ptr(params);
+   rte_trace_point_emit_u32(params->meter_profile_id);
+   rte_trace_point_emit_int(params->use_prev_mtr_color);
+   rte_trace_point_emit_int(params->meter_enable);
+   rte_trace_point_emit_u64(params->stats_mask);
+   rte_trace_point_emit_u32(params->meter_policy_id);
+   rte_trace_point_emit_int(params->default_input_color);
+   rte_trace_point_emit_int(shared);
+   rte_trace_point_emit_int(ret);
+)
+
+RTE_TRACE_POINT(
+   rte_mtr_trace_destroy,
+   RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t mtr_id, int ret),
+   rte_trace_point_emit_u16(port_id);
+   rte_trace_point_emit_u32(mtr_id);
+   rte_trace_point_emit_int(ret);
+)
+
+RTE_TRACE_POINT(
+   rte_mtr_trace_meter_disable,
+   RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t mtr_id, int ret),
+   rte_trace_point_emit_u16(port_id);
+   rte_trace_point_emit_u32(mtr_id);
+   rte_trace_point_emit_int(ret);
+)
+
+RTE_TRACE_PO

[PATCH v6 6/6] ethdev: add trace points for tm

2023-01-20 Thread Ankur Dwivedi
Adds trace points for rte_tm specific functions in ethdev lib.

Signed-off-by: Ankur Dwivedi 
---
 lib/ethdev/ethdev_trace_points.c |  90 +++
 lib/ethdev/rte_ethdev_trace.h| 141 ++
 lib/ethdev/rte_ethdev_trace_fp.h | 171 +
 lib/ethdev/rte_tm.c  | 247 +++
 lib/ethdev/version.map   |  30 
 5 files changed, 650 insertions(+), 29 deletions(-)

diff --git a/lib/ethdev/ethdev_trace_points.c b/lib/ethdev/ethdev_trace_points.c
index 067e5fdab4..af66a139df 100644
--- a/lib/ethdev/ethdev_trace_points.c
+++ b/lib/ethdev/ethdev_trace_points.c
@@ -654,3 +654,93 @@ 
RTE_TRACE_POINT_REGISTER(rte_mtr_trace_color_in_protocol_priority_get,
 
 RTE_TRACE_POINT_REGISTER(rte_mtr_trace_color_in_protocol_set,
lib.ethdev.mtr.color_in_protocol_set)
+
+RTE_TRACE_POINT_REGISTER(rte_tm_trace_capabilities_get,
+   lib.ethdev.tm.capabilities_get)
+
+RTE_TRACE_POINT_REGISTER(rte_tm_trace_get_number_of_leaf_nodes,
+   lib.ethdev.tm.get_number_of_leaf_nodes)
+
+RTE_TRACE_POINT_REGISTER(rte_tm_trace_hierarchy_commit,
+   lib.ethdev.tm.hierarchy_commit)
+
+RTE_TRACE_POINT_REGISTER(rte_tm_trace_level_capabilities_get,
+   lib.ethdev.tm.level_capabilities_get)
+
+RTE_TRACE_POINT_REGISTER(rte_tm_trace_mark_ip_dscp,
+   lib.ethdev.tm.mark_ip_dscp)
+
+RTE_TRACE_POINT_REGISTER(rte_tm_trace_mark_ip_ecn,
+   lib.ethdev.tm.mark_ip_ecn)
+
+RTE_TRACE_POINT_REGISTER(rte_tm_trace_mark_vlan_dei,
+   lib.ethdev.tm.mark_vlan_dei)
+
+RTE_TRACE_POINT_REGISTER(rte_tm_trace_node_add,
+   lib.ethdev.tm.node_add)
+
+RTE_TRACE_POINT_REGISTER(rte_tm_trace_node_capabilities_get,
+   lib.ethdev.tm.node_capabilities_get)
+
+RTE_TRACE_POINT_REGISTER(rte_tm_trace_node_cman_update,
+   lib.ethdev.tm.node_cman_update)
+
+RTE_TRACE_POINT_REGISTER(rte_tm_trace_node_delete,
+   lib.ethdev.tm.node_delete)
+
+RTE_TRACE_POINT_REGISTER(rte_tm_trace_node_parent_update,
+   lib.ethdev.tm.node_parent_update)
+
+RTE_TRACE_POINT_REGISTER(rte_tm_trace_node_resume,
+   lib.ethdev.tm.node_resume)
+
+RTE_TRACE_POINT_REGISTER(rte_tm_trace_node_shaper_update,
+   lib.ethdev.tm.node_shaper_update)
+
+RTE_TRACE_POINT_REGISTER(rte_tm_trace_node_shared_shaper_update,
+   lib.ethdev.tm.node_shared_shaper_update)
+
+RTE_TRACE_POINT_REGISTER(rte_tm_trace_node_shared_wred_context_update,
+   lib.ethdev.tm.node_shared_wred_context_update)
+
+RTE_TRACE_POINT_REGISTER(rte_tm_trace_node_stats_read,
+   lib.ethdev.tm.node_stats_read)
+
+RTE_TRACE_POINT_REGISTER(rte_tm_trace_node_stats_update,
+   lib.ethdev.tm.node_stats_update)
+
+RTE_TRACE_POINT_REGISTER(rte_tm_trace_node_suspend,
+   lib.ethdev.tm.node_suspend)
+
+RTE_TRACE_POINT_REGISTER(rte_tm_trace_node_type_get,
+   lib.ethdev.tm.node_type_get)
+
+RTE_TRACE_POINT_REGISTER(rte_tm_trace_node_wfq_weight_mode_update,
+   lib.ethdev.tm.node_wfq_weight_mode_update)
+
+RTE_TRACE_POINT_REGISTER(rte_tm_trace_node_wred_context_update,
+   lib.ethdev.tm.node_wred_context_update)
+
+RTE_TRACE_POINT_REGISTER(rte_tm_trace_shaper_profile_add,
+   lib.ethdev.tm.shaper_profile_add)
+
+RTE_TRACE_POINT_REGISTER(rte_tm_trace_shaper_profile_delete,
+   lib.ethdev.tm.shaper_profile_delete)
+
+RTE_TRACE_POINT_REGISTER(rte_tm_trace_shared_shaper_add_update,
+   lib.ethdev.tm.shared_shaper_add_update)
+
+RTE_TRACE_POINT_REGISTER(rte_tm_trace_shared_shaper_delete,
+   lib.ethdev.tm.shared_shaper_delete)
+
+RTE_TRACE_POINT_REGISTER(rte_tm_trace_shared_wred_context_add_update,
+   lib.ethdev.tm.shared_wred_context_add_update)
+
+RTE_TRACE_POINT_REGISTER(rte_tm_trace_shared_wred_context_delete,
+   lib.ethdev.tm.shared_wred_context_delete)
+
+RTE_TRACE_POINT_REGISTER(rte_tm_trace_wred_profile_add,
+   lib.ethdev.tm.wred_profile_add)
+
+RTE_TRACE_POINT_REGISTER(rte_tm_trace_wred_profile_delete,
+   lib.ethdev.tm.wred_profile_delete)
diff --git a/lib/ethdev/rte_ethdev_trace.h b/lib/ethdev/rte_ethdev_trace.h
index 2a0516e10d..694ab0c731 100644
--- a/lib/ethdev/rte_ethdev_trace.h
+++ b/lib/ethdev/rte_ethdev_trace.h
@@ -19,6 +19,7 @@ extern "C" {
 
 #include "rte_ethdev.h"
 #include "rte_mtr.h"
+#include "rte_tm.h"
 
 RTE_TRACE_POINT(
rte_ethdev_trace_configure,
@@ -1397,6 +1398,146 @@ RTE_TRACE_POINT(
rte_trace_point_emit_int(ret);
 )
 
+RTE_TRACE_POINT(
+   rte_tm_trace_capabilities_get,
+   RTE_TRACE_POINT_ARGS(uint16_t port_id,
+   struct rte_tm_capabilities *cap, int ret),
+   rte_trace_point_emit_u16(port_id);
+   rte_trace_point_emit_ptr(cap);
+   rte_trace_point_emit_int(ret);
+)
+
+RTE_TRACE_POINT(
+   rte_tm_trace_hierarchy_commit,
+   RTE_TRACE_POINT_ARGS(uint16_t port_id, int clear_on_fail, int ret),
+   rte_trace_point_emit_u16(port_id);
+   rte_trace_point_emit_int(clear_on_fail);
+   rte_trace_point_emit_int(ret);
+)
+
+RTE_TRACE_POINT(
+   rte_tm_trace_level_capabiliti

Re: [PATCH v6 2/2] ethdev: add quota flow action and item

2023-01-20 Thread Andrew Rybchenko

On 1/19/23 19:47, Gregory Etelson wrote:

The quota action limits traffic according to a pre-defined configuration.
Quota reflects overall traffic usage regardless of bandwidth.
The quota flow action is initialized with a signed token number value.
The quota flow action updates the token number according to
these rules:
1. If quota was configured to count packet length, then for each packet
of size S the token number is reduced by S.
2. If quota was configured to count packets, each packet decrements
the token number.
The quota action sets packet metadata according to the number of remaining
tokens:
   PASS - the remaining token number is non-negative.
   BLOCK - the remaining token number is negative.

The quota flow item matches on that data.

The application updates the token number in a quota flow action
with SET or ADD calls:
  SET(QUOTA, val) - arm quota with the new token number set to val
  ADD(QUOTA, val) - increase the existing quota token number by val

Both SET and ADD return to the application the number of tokens stored in
the port before the update.

If the quota state was BLOCK (negative action token number), the
application can change it to PASS by providing enough tokens to
raise the action token number to 0 or above.

The application must create a rule with the quota action to mark a flow and
match on the mark with the quota item in a following flow rule.
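
As an illustration of the token accounting described above (pseudo-logic
only, not driver code; the names here are hypothetical):

#include <stdbool.h>
#include <stdint.h>

enum quota_verdict { QUOTA_PASS, QUOTA_BLOCK };

/* One packet's effect on the quota token counter and the resulting verdict. */
static enum quota_verdict
quota_account(int64_t *tokens, bool count_packet_length, uint32_t pkt_len)
{
	/* Rule 1: length mode subtracts the packet size;
	 * rule 2: packet mode subtracts one token per packet. */
	*tokens -= count_packet_length ? (int64_t)pkt_len : 1;
	/* Non-negative remaining tokens -> PASS, negative -> BLOCK. */
	return (*tokens >= 0) ? QUOTA_PASS : QUOTA_BLOCK;
}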

Signed-off-by: Gregory Etelson 
Acked-by: Ori Kam 


[snip]



diff --git a/doc/guides/rel_notes/release_23_03.rst 
b/doc/guides/rel_notes/release_23_03.rst
index 6941d20abc..713b58267f 100644
--- a/doc/guides/rel_notes/release_23_03.rst
+++ b/doc/guides/rel_notes/release_23_03.rst
@@ -77,6 +77,33 @@ New Features
- ``rte_flow_action_handle_query_update``
- ``rte_flow_async_action_handle_query_update``
  
+* **Added quota flow action and quota flow item.**

+
+  Quota action limits traffic according to pre-defined configuration.
+  Quota reflects overall traffic usage regardless bandwidth.
+  Quota flow action initialized with signed tokens number value.
+  Quota flow action updates tokens number according to
+  these rules:
+  * If quota was configured to count packet length, for each packet
+  of size S, tokens number reduced by S.
+  * If quota was configured to count packets, each packet decrements
+  tokens number.
+  Quota action sets packet metadata according to a number of remaining
+  tokens number:
+  * ``PASS`` - remaining tokens number is non-negative.
+  * ``BLOCK`` - remaining tokens number is negative.
+
+  Quota flow item matches on that metadata.
+
+  - ``RTE_FLOW_ACTION_TYPE_QUOTA``
+  - ``RTE_FLOW_ITEM_TYPE_QUOTA``


Quota description should be a part of action/item
documentation, not release notes.


+
+* **Updated testpmd to support quota flow action and item.**
+
+  Added tokens to ``token_list[]`` for flow quota action and item support.


IMHO 'token_list' is inappropriate in release notes. It is too deep a
technical detail.


+
+
+
  Removed Items
  -
  



diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 280567a3ae..0068fc275b 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -624,7 +624,45 @@ enum rte_flow_item_type {
 * See struct rte_flow_item_meter_color.
 */
RTE_FLOW_ITEM_TYPE_METER_COLOR,
+
+   /**
+* Match Quota state
+*
+* @see struct rte_flow_item_quota
+*/
+RTE_FLOW_ITEM_TYPE_QUOTA,
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * QUOTA state.
+ *
+ * @see struct rte_flow_item_quota
+ */
+enum rte_flow_quota_state {
+   RTE_FLOW_QUOTA_STATE_PASS, /** PASS quota state */
+   RTE_FLOW_QUOTA_STATE_BLOCK /** BLOCK quota state */


Enum member comments should start with /**< since it is
documentation placed after the documented code.


+};
+
+/**
+ * RTE_FLOW_ITEM_TYPE_QUOTA
+ *
+ * Matches QUOTA state
+ */
+struct rte_flow_item_quota {
+   enum rte_flow_quota_state state;
+};
+
+/**
+ * Default mask for RTE_FLOW_ITEM_TYPE_QUOTA
+ */
+#ifndef __cplusplus
+static const struct rte_flow_item_quota rte_flow_item_quota_mask = {
+   .state = (enum rte_flow_quota_state)0xff


I don't understand why it is just 0xff, not 0x1, not 0xf,
not 0x, not 0x.

Isn't it better to rename 'state' -> 'state_mask' and use
PASS and BLOCK as corresponding bits in a mask? If so,
the value here should have all known bits set.
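
A sketch of that alternative (bit-mask state with only the known bits set in
the default mask; the names below are illustrative, not from the patch):

#include <rte_bitops.h>

enum rte_flow_quota_state_bits {
	RTE_FLOW_QUOTA_STATE_BIT_PASS  = RTE_BIT32(0),
	RTE_FLOW_QUOTA_STATE_BIT_BLOCK = RTE_BIT32(1),
};

struct rte_flow_item_quota_alt {
	uint32_t state_mask; /* bits from enum rte_flow_quota_state_bits */
};

static const struct rte_flow_item_quota_alt rte_flow_item_quota_alt_mask = {
	.state_mask = RTE_FLOW_QUOTA_STATE_BIT_PASS |
		      RTE_FLOW_QUOTA_STATE_BIT_BLOCK,
};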


  };
+#endif
  
  /**

   *
@@ -2736,6 +2774,81 @@ enum rte_flow_action_type {
 * No associated configuration structure.
 */
RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL,
+
+   /**
+* Apply the quota verdict (PASS or BLOCK) to a flow.
+*
+* @see struct rte_flow_action_quota
+* @see struct rte_flow_query_quota
+* @see struct rte_flow_update_quota
+*/
+RTE_FLOW_ACTION_TYPE_QUOTA,
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * QUOTA operational mode.
+ *
+ * @see struct rte_flow_action_quota
+ */
+enum rt

RE: [PATCH 1/3] eventdev/eth_rx: add params set/get APIs

2023-01-20 Thread Naga Harish K, S V
Hi Jerin,

> -Original Message-
> From: Jerin Jacob 
> Sent: Wednesday, January 18, 2023 3:52 PM
> To: Naga Harish K, S V 
> Cc: jer...@marvell.com; Gujjar, Abhinandan S
> ; dev@dpdk.org; Jayatheerthan, Jay
> 
> Subject: Re: [PATCH 1/3] eventdev/eth_rx: add params set/get APIs
> 
> On Sat, Jan 7, 2023 at 9:49 PM Naga Harish K S V
>  wrote:
> >
> > The adapter configuration parameters defined in the ``struct
> > rte_event_eth_rx_adapter_config_params`` can be configured and
> > retrieved using ``rte_event_eth_rx_adapter_set_params`` and
> > ``rte_event_eth_tx_adapter_get_params`` respectively.
> >
> > Signed-off-by: Naga Harish K S V 
> > ---
> >
> > +/**
> > + * Adapter configuration parameters
> > + */
> > +struct rte_event_eth_rx_adapter_runtime_params {
> > +   uint32_t max_nb_rx;
> > +   /**< The adapter can return early if it has processed at least
> > +* max_nb_rx mbufs. This isn't treated as a requirement; batching 
> > may
> > +* cause the adapter to process more than max_nb_rx mbufs.
> 
> This parameter as specific to SW only driver. So future something added from
> HW only driver item then it won't work for SW driver. So we need capability
> per adapter.
> 
> So,  I would suggest following theme across _all_ adapters.
> 
> 1) Introduce RTE_EVENT_ETH_RX_ADAPTER_CAP_RUNTIME_XYZ and
> associate each parameter(the one we think, it is not common for all
> adapters)

The parameters that are exposed in the patch are all existing parameters,
and they are made runtime configurable for the SW implementation. I think
there are no such parameters existing today for the HW driver
implementation. Hence it may be better to introduce these flags when the HW
driver implementation requires runtime configurable parameters.

> 2) Add some reserved fields in rte_event_eth_rx_adapter_runtime_params
> so that we don't break ABI in future

Agreed.

> 3) Add rte_event_eth_rx_adapter_runtime_params_init() function just
> make structure fill with default to avoid ABI break
> 4) Add rte_event_eth_rx_adapter_runtime_params_info_get(). Lets
> capability flags and other items can be return via this

These two items (3, 4) can be taken up as and when item "1" above is implemented.

> 5) Change rte_event_eth_rx_adapter_set_params as
> rte_event_eth_rx_adapter_runtime_set()  or
> rte_event_eth_rx_adapter_runtime_params_set() to make it runtime
> explicit

Agreed

> 6) Change rte_event_eth_rx_adapter_get_params as
> rte_event_eth_rx_adapter_runtime_get() or
> rte_event_eth_rx_adapter_runtime_params_get()  to make it runtime
> explicit

Agreed
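
A sketch of the agreed pieces (reserved fields and the runtime-explicit
names; illustrative, not an existing DPDK API):

/* Runtime parameters with reserved space to avoid future ABI breakage. */
struct rte_event_eth_rx_adapter_runtime_params {
	uint32_t max_nb_rx;   /* SW adapter: return early after this many mbufs */
	uint32_t rsvd[15];    /* reserved for future use, must be zero */
};

/* Runtime-explicit set/get, as suggested in items 5) and 6). */
int rte_event_eth_rx_adapter_runtime_params_set(uint8_t id,
		struct rte_event_eth_rx_adapter_runtime_params *params);
int rte_event_eth_rx_adapter_runtime_params_get(uint8_t id,
		struct rte_event_eth_rx_adapter_runtime_params *params);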


RE: [PATCH v12 1/6] memarea: introduce memarea library

2023-01-20 Thread Morten Brørup
> From: fengchengwen [mailto:fengcheng...@huawei.com]
> Sent: Friday, 20 January 2023 09.21
> 
> Hi Morten,
> 
> On 2023/1/15 15:58, Morten Brørup wrote:
> >> From: Chengwen Feng [mailto:fengcheng...@huawei.com]
> >> Sent: Saturday, 14 January 2023 12.50
> >>
> >> The memarea library is an allocator of variable-size object which
> based
> >> on a memory region.
> >>
> >> This patch provides rte_memarea_create() and rte_memarea_destroy()
> API.
> >>
> >> Signed-off-by: Chengwen Feng 
> >> Reviewed-by: Dongdong Liu 
> >> ---
> >
> > [...]
> >
> >> +struct memarea_obj {
> >> +  TAILQ_ENTRY(memarea_obj) obj_node;
> >> +  TAILQ_ENTRY(memarea_obj) free_node;
> >> +  size_t   size;
> >> +  size_t   alloc_size;
> >> +  uint32_t magic;
> >> +};
> >
> > The magic cookie is for debug purposes only, and should be enclosed
> by #ifdef RTE_LIBRTE_MEMAREA_DEBUG, like in the mempool library [1].

It was just one example; the mbuf library also has RTE_LIBRTE_MBUF_DEBUG.

> 
> In the mempool the cookie mechanism is under debug macro, the main
> reason should be performance considerations.

Yes, the main reason is performance (of production builds).

The secondary reason is pollution of the generated code - unnecessary bloat
makes it harder to read the assembly output when debugging, because the
assembly output is not clean, but full of irrelevant runtime checks.

> 
> And community recommends that compilation macro be used as little as
> possible.

Yes, but I don't think this recommendation applies to debug code.

I strongly oppose having run-time bug detection code, such as checking magic
cookies, in production builds. It is a well-established "best practice" to be
able to build software projects for debug or production, and the DPDK project
should not deviate from this best practice!

> 
> So I think we could add the following new API like:
> 
> /*
>  * Enable audit in alloc/free/audit.
>  */
> rte_memarea_audit_enable(struct rte_memarea *ma)
> 
> /*
>  * Disable audit in alloc/free/audit.
>  */
> rte_memarea_audit_disable(struct rte_memarea *ma)
> 
> /*
>  * if ptr is NULL, then audit the all objects.
>  * else, only audit the object which the ptr pointers.
>  * if audit fail, will return an error code, and the error log will
> emit.
>  *
>  * The audit is performed only when the audit function is enabled for
> the memarea.
>  */
> int rte_memarea_audit_object(struct rte_memarea *ma, void *ptr)
> 
> 
> So for an application, it can invoke rte_memarea_audit() in its code
> without macro control.
> If it considers that memory corruption may occur in a certain scenario,
> the audit function can be enabled, so could find it by error log or
> retcode.
> 
> This method causes slight performance loss. But it's guaranteed that
> there's no need to recompile, and a binary can be done.

It still pollutes the generated code, making debugging by hand (i.e. reading 
assembly output) more difficult.

> 
> >
> > [1]:
> https://elixir.bootlin.com/dpdk/latest/source/lib/mempool/rte_mempool.h
> #L154
> >
> > With that added:
> >
> > Series-acked-by: Morten Brørup 

If you think that this library's cookie checking is important for typical 
users, you can leave RTE_LIBRTE_MEMAREA_DEBUG enabled by default.

But it must be possible to completely omit such run-time debug checks when 
building for production.
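
A sketch of the kind of compile-time guarded check being argued for, modeled
on the mempool cookies (the macro and field names below are illustrative;
RTE_LIBRTE_MEMAREA_DEBUG is the guard proposed in this thread):

#ifdef RTE_LIBRTE_MEMAREA_DEBUG
#define MEMAREA_OBJECT_CHECK(obj) do { \
		if ((obj)->magic != MEMAREA_OBJECT_MAGIC) \
			RTE_LOG(ERR, USER1, "memarea object %p corrupted\n", \
				(void *)(obj)); \
	} while (0)
#else
/* Compiles to nothing in production builds: no code, no branches. */
#define MEMAREA_OBJECT_CHECK(obj) do { } while (0)
#endif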



Re: [PATCH v2 01/11] ethdev: add flex item modify field support

2023-01-20 Thread Andrew Rybchenko

On 1/19/23 07:58, Rongwei Liu wrote:

Add flex item as a modify field destination.
Add "struct rte_flow_item_flex_handle *flex_handle" into
"struct rte_flow_action_modify_data" as a union with the existing
"level" member. This new member is dedicated to modifying a
flex item.

Add flex item modify field cmdline support. Now the user can use the
testpmd CLI to specify which flex item is to be modified, either
source or destination.

Syntax is as below:
modify_field op set dst_type flex_item dst_level 0
dst_offset 16 src_type value src_value 0x123456781020 width 8

Signed-off-by: Rongwei Liu 
Acked-by: Ori Kam 


[snip]


diff --git a/doc/guides/rel_notes/release_23_03.rst 
b/doc/guides/rel_notes/release_23_03.rst
index b8c5b68d6c..c673205e5e 100644
--- a/doc/guides/rel_notes/release_23_03.rst
+++ b/doc/guides/rel_notes/release_23_03.rst
@@ -56,6 +56,10 @@ New Features
   ===
  
  


There should be just one empty line here.


+* ethdev: added a new field:


"added a new field' is too generic.


+
+  - modify flex item: ``rte_flow_action_modify_data.flex_handle``
+


And two empty lines here.


  Removed Items
  -
  
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h

index b60987db4b..c66a65351d 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3528,6 +3528,7 @@ enum rte_flow_field_id {
RTE_FLOW_FIELD_IPV6_ECN,/**< IPv6 ECN. */
RTE_FLOW_FIELD_GTP_PSC_QFI, /**< GTP QFI. */
RTE_FLOW_FIELD_METER_COLOR, /**< Meter color marker. */
+   RTE_FLOW_FIELD_FLEX_ITEM,   /**< Flex item. */
  };
  
  /**

@@ -3541,8 +3542,11 @@ struct rte_flow_action_modify_data {
RTE_STD_C11
union {
struct {
-   /** Encapsulation level or tag index. */
-   uint32_t level;
+   /**< Encapsulation level or tag index or flex item 
handle. */


Have you tried to generate the documentation? If it is union documentation,
it should be /**, not /**<.

In general, it is better to document the union from an overall
point of view. What is it logically? Do not define the union
as just a union of its fields.


+   union {
+   uint32_t level;
+   struct rte_flow_item_flex_handle *flex_handle;


Documentation for the union members is missing.


+   };
/** Number of bits to skip from a field. */
uint32_t offset;
};
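
A sketch of the documentation style asked for above (the comment wording is
illustrative, not from the patch):

		struct {
			/**
			 * Packet field reference: either an encapsulation
			 * level / tag index, or the handle of the flex item
			 * whose field is to be modified.
			 */
			union {
				/** Encapsulation level or tag index. */
				uint32_t level;
				/** Flex item handle of the field to modify. */
				struct rte_flow_item_flex_handle *flex_handle;
			};
			/** Number of bits to skip from a field. */
			uint32_t offset;
		};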




Re: [PATCH v2 1/8] ethdev: add IPv6 routing extension header definition

2023-01-20 Thread Andrew Rybchenko

On 1/19/23 06:11, Rongwei Liu wrote:

Add the IPv6 routing extension header definition, without
TLV support for now.

At the rte_flow layer, new items are defined for matching the
type/nexthdr/segments_left fields.

Add command line support for IPv6 routing extension header
matching: type/nexthdr/segment_list.

Signed-off-by: Rongwei Liu 
Acked-by: Ori Kam 


[snip]


diff --git a/doc/guides/prog_guide/rte_flow.rst 
b/doc/guides/prog_guide/rte_flow.rst
index 3e6242803d..ae99036be0 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1544,6 +1544,15 @@ Matches Color Marker set by a Meter.
  
  - ``color``: Metering color marker.
  
+Item: ``IPV6_ROUTING_EXT``

+^^
+
+Matches ipv6 routing extension header.


ipv6 -> IPv6


+
+- ``next_hdr``: Next layer header type.
+- ``type``: IPv6 routing extension header type.
+- ``segments_left``: How many IPv6 destination addresses carries on


Why are only 3 fields mentioned above?


+
  Actions
  ~~~
  
diff --git a/doc/guides/rel_notes/release_23_03.rst b/doc/guides/rel_notes/release_23_03.rst

index b8c5b68d6c..2a794d598e 100644
--- a/doc/guides/rel_notes/release_23_03.rst
+++ b/doc/guides/rel_notes/release_23_03.rst
@@ -55,6 +55,11 @@ New Features
   Also, make sure to start the actual text at the margin.
   ===
  
+* **Added rte_flow support for matching IPv6 routing extension header fields.**

+
+  Added ``ipv6_routing_ext`` items in rte_flow to match IPv6 routing extension
+  header


Missing full stop above.


+
  
  Removed Items

  -
@@ -84,6 +89,11 @@ API Changes
 Also, make sure to start the actual text at the margin.
 ===
  
+* ethdev: added a new structure:

+
+- IPv6 routing extension header ``rte_flow_item_ipv6_routing_ext`` and
+  ``rte_ipv6_routing_ext``
+


If I'm not mistaken, additions should not be here. It is not an
API change.

  
  ABI Changes

  ---
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 7d0c24366c..4074b475c8 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -76,6 +76,20 @@ rte_flow_item_flex_conv(void *buf, const void *data)
return src->length;
  }
  
+static size_t

+rte_flow_item_ipv6_routing_ext_conv(void *buf, const void *data)
+{
+   struct rte_flow_item_ipv6_routing_ext *dst = buf;
+   const struct rte_flow_item_ipv6_routing_ext *src = data;
+   size_t len;
+
+   len = src->hdr.hdr_len ? src->hdr.hdr_len << 3 : src->hdr.segments_left 
<< 4;


Compare hdr_len vs 0 explicitly.
Also I'd add parentheses around the ternary operator
values to make it simpler to understand.


+   if (buf)


Please compare vs NULL explicitly. Maybe 'dst' would be
better here?


+   rte_memcpy((void *)((uintptr_t)(dst->hdr.segments)),
+  src->hdr.segments, len);
+   return len;
+}
+
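
For illustration, the same conversion helper with both suggestions applied
(explicit comparisons and parenthesized ternary values; a sketch, not the
patch author's code):

static size_t
rte_flow_item_ipv6_routing_ext_conv(void *buf, const void *data)
{
	struct rte_flow_item_ipv6_routing_ext *dst = buf;
	const struct rte_flow_item_ipv6_routing_ext *src = data;
	size_t len;

	len = (src->hdr.hdr_len != 0) ?
	      ((size_t)src->hdr.hdr_len << 3) :
	      ((size_t)src->hdr.segments_left << 4);
	if (dst != NULL)
		rte_memcpy((void *)(uintptr_t)dst->hdr.segments,
			   src->hdr.segments, len);
	return len;
}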
  /** Generate flow_item[] entry. */
  #define MK_FLOW_ITEM(t, s) \
[RTE_FLOW_ITEM_TYPE_ ## t] = { \
@@ -157,6 +171,8 @@ static const struct rte_flow_desc_data rte_flow_desc_item[] 
= {
MK_FLOW_ITEM(L2TPV2, sizeof(struct rte_flow_item_l2tpv2)),
MK_FLOW_ITEM(PPP, sizeof(struct rte_flow_item_ppp)),
MK_FLOW_ITEM(METER_COLOR, sizeof(struct rte_flow_item_meter_color)),
+   MK_FLOW_ITEM_FN(IPV6_ROUTING_EXT, sizeof(struct 
rte_flow_item_ipv6_routing_ext),
+   rte_flow_item_ipv6_routing_ext_conv),
  };
  
  /** Generate flow_action[] entry. */

diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index b60987db4b..0120d3e7d2 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -624,6 +624,13 @@ enum rte_flow_item_type {
 * See struct rte_flow_item_meter_color.
 */
RTE_FLOW_ITEM_TYPE_METER_COLOR,
+
+   /**
+* Matches the presence of IPv6 routing extension header.
+*
+* See struct rte_flow_item_ipv6_routing_ext.


@see


+*/
+   RTE_FLOW_ITEM_TYPE_IPV6_ROUTING_EXT,
  };
  
  /**

@@ -873,6 +880,18 @@ struct rte_flow_item_ipv6 {
uint32_t reserved:23;
  };
  
+/**

+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ITEM_TYPE_IPV6_ROUTING_EXT.
+ *
+ * Matches an IPv6 routing extension header.
+ */
+struct rte_flow_item_ipv6_routing_ext {
+   struct rte_ipv6_routing_ext hdr;
+};
+


What about default mask?


  /** Default mask for RTE_FLOW_ITEM_TYPE_IPV6. */
  #ifndef __cplusplus
  static const struct rte_flow_item_ipv6 rte_flow_item_ipv6_mask = {
diff --git a/lib/net/rte_ip.h b/lib/net/rte_ip.h
index 9c8e8206f0..158a2f83ce 100644
--- a/lib/net/rte_ip.h
+++ b/lib/net/rte_ip.h
@@ -539,6 +539,27 @@ struct rte_ipv6_hdr {
uint8_t  dst_addr[16];  /**< IP address of destination host(s). */
  } __rte_packed;
  
+/**

+ * IPv6 Routing Extension Header
+ */
+struct rte_ipv6_routing_ext {
+   

Re: [PATCH 1/3] eventdev/eth_rx: add params set/get APIs

2023-01-20 Thread Jerin Jacob
On Fri, Jan 20, 2023 at 2:28 PM Naga Harish K, S V
 wrote:
>
> Hi Jerin,
>
> > -Original Message-
> > From: Jerin Jacob 
> > Sent: Wednesday, January 18, 2023 3:52 PM
> > To: Naga Harish K, S V 
> > Cc: jer...@marvell.com; Gujjar, Abhinandan S
> > ; dev@dpdk.org; Jayatheerthan, Jay
> > 
> > Subject: Re: [PATCH 1/3] eventdev/eth_rx: add params set/get APIs
> >
> > On Sat, Jan 7, 2023 at 9:49 PM Naga Harish K S V
> >  wrote:
> > >
> > > The adapter configuration parameters defined in the ``struct
> > > rte_event_eth_rx_adapter_config_params`` can be configured and
> > > retrieved using ``rte_event_eth_rx_adapter_set_params`` and
> > > ``rte_event_eth_tx_adapter_get_params`` respectively.
> > >
> > > Signed-off-by: Naga Harish K S V 
> > > ---
> > >
> > > +/**
> > > + * Adapter configuration parameters
> > > + */
> > > +struct rte_event_eth_rx_adapter_runtime_params {
> > > +   uint32_t max_nb_rx;
> > > +   /**< The adapter can return early if it has processed at least
> > > +* max_nb_rx mbufs. This isn't treated as a requirement; batching 
> > > may
> > > +* cause the adapter to process more than max_nb_rx mbufs.
> >
> > This parameter as specific to SW only driver. So future something added from
> > HW only driver item then it won't work for SW driver. So we need capability
> > per adapter.
> >
> > So,  I would suggest following theme across _all_ adapters.
> >
> > 1) Introduce RTE_EVENT_ETH_RX_ADAPTER_CAP_RUNTIME_XYZ and
> > associate each parameter(the one we think, it is not common for all
> > adapters)
>
> The parameters that are exposed in the patch are all existing parameters and 
> they are made
> runtime configurable for SW implementation. I think, there are no such 
> parameters existing today for
> HW driver implementation. Hence it may be better to introduce these flags 
> when the HW driver
> Implementation requires runtime configurable parameters.

Since current values are not applicable to HW. So we any way need the
capability now to tell this is not
applicable for HW.

>
> > 2) Add some reserved fields in rte_event_eth_rx_adapter_runtime_params
> > so that we don't break ABI in future
>
> Agreed.
>
> > 3) Add rte_event_eth_rx_adapter_runtime_params_init() function just
> > make structure fill with default to avoid ABI break
> > 4) Add rte_event_eth_rx_adapter_runtime_params_info_get(). Lets
> > capability flags and other items can be return via this
>
> These two items(3,4) can be taken as and when item "1" above is implemented.

See above.

>
> > 5) Change rte_event_eth_rx_adapter_set_params as
> > rte_event_eth_rx_adapter_runtime_set()  or
> > rte_event_eth_rx_adapter_runtime_params_set() to make it runtime
> > explicit
>
> Agreed
>
> > 6) Change rte_event_eth_rx_adapter_get_params as
> > rte_event_eth_rx_adapter_runtime_get() or
> > rte_event_eth_rx_adapter_runtime_params_get()  to make it runtime
> > explicit
>
> Agreed
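
For illustration, the direction agreed in items 2, 5 and 6 above might end up
looking roughly like this (a sketch only; the names, fields and reserved-space
size are not final):

/* Hypothetical shape of the runtime-params API discussed above. */
struct rte_event_eth_rx_adapter_runtime_params {
	uint32_t max_nb_rx;	/* SW adapter specific, see capability discussion */
	uint32_t rsvd[15];	/* reserved for future additions, must be zero */
};

void
rte_event_eth_rx_adapter_runtime_params_init(
		struct rte_event_eth_rx_adapter_runtime_params *params);

int
rte_event_eth_rx_adapter_runtime_params_set(uint8_t id,
		struct rte_event_eth_rx_adapter_runtime_params *params);

int
rte_event_eth_rx_adapter_runtime_params_get(uint8_t id,
		struct rte_event_eth_rx_adapter_runtime_params *params);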


RE: [PATCH v6 1/4] lib: add generic support for reading PMU events

2023-01-20 Thread Morten Brørup
> From: Tomasz Duszynski [mailto:tduszyn...@marvell.com]
> Sent: Friday, 20 January 2023 00.39
> 
> Add support for programming PMU counters and reading their values
> in runtime bypassing kernel completely.
> 
> This is especially useful in cases where CPU cores are isolated
> (nohz_full) i.e run dedicated tasks. In such cases one cannot use
> standard perf utility without sacrificing latency and performance.
> 
> Signed-off-by: Tomasz Duszynski 
> ---

If you insist on passing lcore_id around as a function parameter, the function 
description must mention that the lcore_id parameter must be set to 
rte_lcore_id() for the functions where this is a requirement, including all 
functions that use those functions.

Alternatively, follow my previous suggestion: Omit the lcore_id function 
parameter, and use rte_lcore_id() instead.
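
As a rough sketch of that second option (the struct and function names below are
made up for illustration and are not from the patch):

#include <rte_lcore.h>

/* Hypothetical per-lcore counter table; not part of the PMU patch. */
struct pmu_example_state {
	uint64_t counter[RTE_MAX_LCORE];
};

/* No lcore_id parameter: the index is always taken from the calling
 * thread via rte_lcore_id(), so it cannot be passed incorrectly. */
static inline uint64_t
pmu_example_read(const struct pmu_example_state *st)
{
	return st->counter[rte_lcore_id()];
}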




RE: [PATCH v6 1/6] eal: trace: add trace point emit for blob

2023-01-20 Thread Morten Brørup
> From: Ankur Dwivedi [mailto:adwiv...@marvell.com]
> Sent: Friday, 20 January 2023 09.41
> 
> Adds a trace point emit function for capturing a blob. The blob
> captures the length passed by the application followed by the array.
> 
> The maximum blob bytes which can be captured is bounded by
> RTE_TRACE_BLOB_LEN_MAX macro. The value for max blob length macro is
> 64 bytes. If the length is less than 64 the remaining trailing bytes
> are set to zero.
> 
> This patch also adds test case for emit blob tracepoint function.
> 
> Signed-off-by: Ankur Dwivedi 
> ---

[...]

> +/** Macro to define maximum emit length of blob. */
> +#define RTE_TRACE_BLOB_LEN_MAX 64
> +
>  /**
>   * Enable recording events of the given tracepoint in the trace
> buffer.
>   *
> @@ -374,12 +387,31 @@ do { \
>   mem = RTE_PTR_ADD(mem, __RTE_TRACE_EMIT_STRING_LEN_MAX); \
>  } while (0)
> 
> +#define rte_trace_point_emit_blob(in, len) \
> +do { \
> + if (unlikely(in == NULL)) \
> + return; \
> + if (len > RTE_TRACE_BLOB_LEN_MAX) \
> + len = RTE_TRACE_BLOB_LEN_MAX; \
> + __rte_trace_point_emit(len, uint8_t); \
> + memcpy(mem, in, len); \
> + mem = RTE_PTR_ADD(mem, len); \
> + memset(mem, 0, RTE_TRACE_BLOB_LEN_MAX - len); \
> + mem = RTE_PTR_ADD(mem, RTE_TRACE_BLOB_LEN_MAX - len); \

Alternatively (just a suggestion):

memcpy(mem, in, len); \
memset(RTE_PTR_ADD(mem, len), 0, RTE_TRACE_BLOB_LEN_MAX - len); \
mem = RTE_PTR_ADD(mem, RTE_TRACE_BLOB_LEN_MAX); \

(If memset() is annotated to inform the compiler that the mem pointer is 
constant, the compiler should generate exactly the same code.)
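
Spelled out in full, the macro with that change applied would read as follows
(a sketch combining the hunk above with the suggestion; untested):

#define rte_trace_point_emit_blob(in, len) \
do { \
	if (unlikely(in == NULL)) \
		return; \
	if (len > RTE_TRACE_BLOB_LEN_MAX) \
		len = RTE_TRACE_BLOB_LEN_MAX; \
	__rte_trace_point_emit(len, uint8_t); \
	memcpy(mem, in, len); \
	memset(RTE_PTR_ADD(mem, len), 0, RTE_TRACE_BLOB_LEN_MAX - len); \
	mem = RTE_PTR_ADD(mem, RTE_TRACE_BLOB_LEN_MAX); \
} while (0)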

> +} while (0)
> +
>  #else
> 
>  #define __rte_trace_point_emit_header_generic(t) RTE_SET_USED(t)
>  #define __rte_trace_point_emit_header_fp(t) RTE_SET_USED(t)
>  #define __rte_trace_point_emit(in, type) RTE_SET_USED(in)
>  #define rte_trace_point_emit_string(in) RTE_SET_USED(in)
> +#define rte_trace_point_emit_blob(in, len) \
> +do { \
> + RTE_SET_USED(in); \
> + RTE_SET_USED(len); \
> +} while (0)
> +
> 
>  #endif /* ALLOW_EXPERIMENTAL_API */
>  #endif /* _RTE_TRACE_POINT_REGISTER_H_ */
> diff --git a/lib/eal/include/rte_trace_point_register.h
> b/lib/eal/include/rte_trace_point_register.h
> index a32f4d731b..7efbac8a72 100644
> --- a/lib/eal/include/rte_trace_point_register.h
> +++ b/lib/eal/include/rte_trace_point_register.h
> @@ -47,6 +47,15 @@ do { \
>   RTE_STR(in)"[32]", "string_bounded_t"); \
>  } while (0)
> 
> +#define rte_trace_point_emit_blob(in, len) \
> +do { \
> + RTE_SET_USED(in); \
> + __rte_trace_point_emit(len, uint8_t); \
> + __rte_trace_point_emit_field(RTE_TRACE_BLOB_LEN_MAX, \
> + RTE_STR(in)"["RTE_STR(RTE_TRACE_BLOB_LEN_MAX)"]", \
> + RTE_STR(uint8_t)); \
> +} while (0)

I guess the variable sized trace entry couldn't be implemented.

Anyway, this BLOB implementation is still very useful!

Acked-by: Morten Brørup 



RE: [PATCH v3 01/10] ethdev: check return result of rte_eth_dev_info_get

2023-01-20 Thread Morten Brørup
> From: ok...@kernel.org [mailto:ok...@kernel.org]
> Sent: Friday, 20 January 2023 05.42
> 
> From: Sinan Kaya 
> 
> rte_class_eth: eth_mac_cmp: The status of this call to
> rte_eth_dev_info_get
> is not checked, potentially leaving dev_info uninitialized.
> 
> Signed-off-by: Sinan Kaya 
> ---
>  lib/ethdev/rte_class_eth.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/lib/ethdev/rte_class_eth.c b/lib/ethdev/rte_class_eth.c
> index 838b3a8f9f..559ce22491 100644
> --- a/lib/ethdev/rte_class_eth.c
> +++ b/lib/ethdev/rte_class_eth.c
> @@ -50,8 +50,10 @@ eth_mac_cmp(const char *key __rte_unused,
>   if (rte_ether_unformat_addr(value, &mac) < 0)
>   return -1; /* invalid devargs value */
> 
> + if (rte_eth_dev_info_get(data->port_id, &dev_info) < 0)
> + return -1;
> +
>   /* Return 0 if devargs MAC is matching one of the device MACs. */
> - rte_eth_dev_info_get(data->port_id, &dev_info);
>   for (index = 0; index < dev_info.max_mac_addrs; index++)
>   if (rte_is_same_ether_addr(&mac, &data->mac_addrs[index]))
>   return 0;
> --
> 2.25.1
> 

Although rte_eth_dev_info_get() can return various error codes, eth_mac_cmp() 
returns only 0 or -1, so this is correct.

Reviewed-by: Morten Brørup 



[PATCH 0/3] reorder: introduce new APIs

2023-01-20 Thread Volodymyr Fialko
This patch series provides new APIs for reorder library and test cases for
them.

Volodymyr Fialko (3):
  reorder: add new drain up to seq number API
  reorder: add ability to set min sequence number
  test/reorder: add cases to cover new API

 app/test/test_reorder.c   | 160 ++
 lib/reorder/rte_reorder.c | 108 +
 lib/reorder/rte_reorder.h |  43 ++
 lib/reorder/version.map   |   2 +
 4 files changed, 313 insertions(+)

-- 
2.34.1



[PATCH 1/3] reorder: add new drain up to seq number API

2023-01-20 Thread Volodymyr Fialko
Introduce new reorder drain API:
`rte_reorder_drain_up_to_seqn` - exhaustively drain all inserted mbufs
up to the given sequence number.

Currently there is no way to force a drain from the reorder buffer,
i.e. only consecutive ordered or ready packets can be drained.
The new function gives the user the ability to drain inserted packets
without waiting for missing or newer packets.
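
As a usage sketch (assuming the prototype added below; buffer sizing and error
handling are the caller's responsibility):

/* Flush everything older than 'cutoff' from reorder buffer 'b' into
 * 'out' (at most 'n' entries), even if there are gaps in the sequence. */
static unsigned int
flush_older_than(struct rte_reorder_buffer *b, rte_reorder_seqn_t cutoff,
		 struct rte_mbuf **out, unsigned int n)
{
	return rte_reorder_drain_up_to_seqn(b, out, n, cutoff);
}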

Signed-off-by: Volodymyr Fialko 
---
 lib/reorder/rte_reorder.c | 77 +++
 lib/reorder/rte_reorder.h | 25 +
 lib/reorder/version.map   |  1 +
 3 files changed, 103 insertions(+)

diff --git a/lib/reorder/rte_reorder.c b/lib/reorder/rte_reorder.c
index 385ee479da..57cc1b286b 100644
--- a/lib/reorder/rte_reorder.c
+++ b/lib/reorder/rte_reorder.c
@@ -406,3 +406,80 @@ rte_reorder_drain(struct rte_reorder_buffer *b, struct 
rte_mbuf **mbufs,
 
return drain_cnt;
 }
+
+/* Binary search seqn in ready buffer */
+static inline uint32_t
+ready_buffer_seqn_find(const struct cir_buffer *ready_buf, const uint32_t seqn)
+{
+   uint32_t mid, value, position, high;
+   uint32_t low = 0;
+
+   if (ready_buf->tail > ready_buf->head)
+   high = ready_buf->tail - ready_buf->head;
+   else
+   high = ready_buf->head - ready_buf->tail;
+
+   while (low <= high) {
+   mid = low + (high - low) / 2;
+   position = (ready_buf->tail + mid) & ready_buf->mask;
+   value = *rte_reorder_seqn(ready_buf->entries[position]);
+   if (seqn == value)
+   return mid;
+   else if (seqn > value)
+   low = mid + 1;
+   else
+   high = mid - 1;
+   }
+
+   return low;
+}
+
+unsigned int
+rte_reorder_drain_up_to_seqn(struct rte_reorder_buffer *b, struct rte_mbuf 
**mbufs,
+   const unsigned int max_mbufs, const rte_reorder_seqn_t seqn)
+{
+   uint32_t i, position, offset;
+   unsigned int drain_cnt = 0;
+
+   struct cir_buffer *order_buf = &b->order_buf,
+   *ready_buf = &b->ready_buf;
+
+   /* Seqn in Ready buffer */
+   if (seqn < b->min_seqn) {
+   /* All sequence numbers are higher then given */
+   if (*rte_reorder_seqn(ready_buf->entries[ready_buf->tail]) > 
seqn)
+   return 0;
+
+   offset = ready_buffer_seqn_find(ready_buf, seqn);
+
+   for (i = 0; (i < offset) && (drain_cnt < max_mbufs); i++) {
+   position = (ready_buf->tail + i) & ready_buf->mask;
+   mbufs[drain_cnt++] = ready_buf->entries[position];
+   ready_buf->entries[position] = NULL;
+   }
+   ready_buf->tail = (ready_buf->tail + i) & ready_buf->mask;
+
+   return drain_cnt;
+   }
+
+   /* Seqn in Order buffer, add all buffers from Ready buffer */
+   while ((drain_cnt < max_mbufs) && (ready_buf->tail != ready_buf->head)) 
{
+   mbufs[drain_cnt++] = ready_buf->entries[ready_buf->tail];
+   ready_buf->entries[ready_buf->tail] = NULL;
+   ready_buf->tail = (ready_buf->tail + 1) & ready_buf->mask;
+   }
+
+   /* Fetch buffers from Order buffer up to given sequence number 
(exclusive) */
+   offset = RTE_MIN(seqn - b->min_seqn, b->order_buf.size);
+   for (i = 0; (i < offset) && (drain_cnt < max_mbufs); i++) {
+   position = (order_buf->head + i) & order_buf->mask;
+   if (order_buf->entries[position] == NULL)
+   continue;
+   mbufs[drain_cnt++] = order_buf->entries[position];
+   order_buf->entries[position] = NULL;
+   }
+   b->min_seqn += i;
+   order_buf->head = (order_buf->head + i) & order_buf->mask;
+
+   return drain_cnt;
+}
diff --git a/lib/reorder/rte_reorder.h b/lib/reorder/rte_reorder.h
index 5abdb258e2..c5b354b53d 100644
--- a/lib/reorder/rte_reorder.h
+++ b/lib/reorder/rte_reorder.h
@@ -167,6 +167,31 @@ unsigned int
 rte_reorder_drain(struct rte_reorder_buffer *b, struct rte_mbuf **mbufs,
unsigned max_mbufs);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Fetch set of reordered packets up to specified sequence number (exclusive)
+ *
+ * Returns a set of in-order packets from the reorder buffer structure.
+ * Gaps may be present since reorder buffer will try to fetch all possible 
packets up to given
+ * sequence number.
+ *
+ * @param b
+ *   Reorder buffer instance from which packets are to be drained.
+ * @param mbufs
+ *   Array of mbufs where reordered packets will be inserted from reorder 
buffer.
+ * @param max_mbufs
+ *   The number of elements in the mbufs array.
+ * @param seqn
+ *   Sequence number up to which buffer will be drained.
+ * @return
+ *   Number of mbuf pointers written to mbufs. 0 <= N < max_mbufs.
+ */
+__rte_experi

[PATCH 2/3] reorder: add ability to set min sequence number

2023-01-20 Thread Volodymyr Fialko
Add API `rte_reorder_min_seqn_set` to allow user to specify minimum
sequence number.
Currently sequence number of first inserted packet is used as minimum
sequence number. But for the case where we want to wait for packets that
come before the first received one, this will not work.
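
For example (a sketch only, reusing the create call from the existing tests):

struct rte_reorder_buffer *b;

b = rte_reorder_create("rx_reorder", rte_socket_id(), 512);
/* Wait for sequence numbers starting at 1000, even though the first
 * packet actually inserted may carry a newer sequence number. */
if (b != NULL)
	rte_reorder_min_seqn_set(b, 1000);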

Signed-off-by: Volodymyr Fialko 
---
 lib/reorder/rte_reorder.c | 31 +++
 lib/reorder/rte_reorder.h | 18 ++
 lib/reorder/version.map   |  1 +
 3 files changed, 50 insertions(+)

diff --git a/lib/reorder/rte_reorder.c b/lib/reorder/rte_reorder.c
index 57cc1b286b..0d4c7209f9 100644
--- a/lib/reorder/rte_reorder.c
+++ b/lib/reorder/rte_reorder.c
@@ -483,3 +483,34 @@ rte_reorder_drain_up_to_seqn(struct rte_reorder_buffer *b, 
struct rte_mbuf **mbu
 
return drain_cnt;
 }
+
+static bool
+rte_reorder_is_empty(const struct rte_reorder_buffer *b)
+{
+   const struct cir_buffer *order_buf = &b->order_buf, *ready_buf = 
&b->ready_buf;
+   unsigned int i;
+
+   /* Ready buffer does not have gaps */
+   if (ready_buf->tail != ready_buf->head)
+   return false;
+
+   /* Order buffer could have gaps, iterate */
+   for (i = 0; i < order_buf->size; i++) {
+   if (order_buf->entries[i] != NULL)
+   return false;
+   }
+
+   return true;
+}
+
+unsigned int
+rte_reorder_min_seqn_set(struct rte_reorder_buffer *b, uint32_t min_seqn)
+{
+   if (!rte_reorder_is_empty(b))
+   return -ENOTEMPTY;
+
+   b->min_seqn = min_seqn;
+   b->is_initialized = true;
+
+   return 0;
+}
diff --git a/lib/reorder/rte_reorder.h b/lib/reorder/rte_reorder.h
index c5b354b53d..9f6710bad2 100644
--- a/lib/reorder/rte_reorder.h
+++ b/lib/reorder/rte_reorder.h
@@ -192,6 +192,24 @@ __rte_experimental
 unsigned int
 rte_reorder_drain_up_to_seqn(struct rte_reorder_buffer *b, struct rte_mbuf 
**mbufs,
unsigned int max_mbufs, rte_reorder_seqn_t seqn);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Set minimum sequence number of packet allowed to be buffered.
+ * To successfully set new value reorder buffer has to be empty(after create, 
reset or drain_all).
+ *
+ * @param b
+ *   Empty reorder buffer instance to modify.
+ * @param min_seqn
+ *   New sequence number to set.
+ * @return
+ *   0 on success, a negative value otherwise.
+ */
+__rte_experimental
+unsigned int
+rte_reorder_min_seqn_set(struct rte_reorder_buffer *b, uint32_t min_seqn);
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/reorder/version.map b/lib/reorder/version.map
index e3f41ea7ef..aafdf0b5ae 100644
--- a/lib/reorder/version.map
+++ b/lib/reorder/version.map
@@ -17,4 +17,5 @@ EXPERIMENTAL {
 
rte_reorder_seqn_dynfield_offset;
rte_reorder_drain_up_to_seqn;
+   rte_reorder_min_seqn_set;
 };
-- 
2.34.1



[PATCH 3/3] test/reorder: add cases to cover new API

2023-01-20 Thread Volodymyr Fialko
Add new test cases to cover `rte_reorder_drain_up_to_seqn` and
`rte_reorder_min_seqn_set`.

Signed-off-by: Volodymyr Fialko 
---
 app/test/test_reorder.c | 160 
 1 file changed, 160 insertions(+)

diff --git a/app/test/test_reorder.c b/app/test/test_reorder.c
index f0714a5c18..c345a72e0c 100644
--- a/app/test/test_reorder.c
+++ b/app/test/test_reorder.c
@@ -335,6 +335,164 @@ test_reorder_drain(void)
return ret;
 }
 
+static void
+buffer_to_reorder_move(struct rte_mbuf **mbuf, struct rte_reorder_buffer *b)
+{
+   rte_reorder_insert(b, *mbuf);
+   *mbuf = NULL;
+}
+
+static int
+test_reorder_drain_up_to_seqn(void)
+{
+   struct rte_mempool *p = test_params->p;
+   struct rte_reorder_buffer *b = NULL;
+   const unsigned int num_bufs = 10;
+   const unsigned int size = 4;
+   unsigned int i, cnt;
+   int ret = 0;
+
+   struct rte_mbuf *bufs[num_bufs];
+   struct rte_mbuf *robufs[num_bufs];
+
+   /* initialize all robufs to NULL */
+   memset(robufs, 0, sizeof(robufs));
+
+   /* This would create a reorder buffer instance consisting of:
+* reorder_seq = 0
+* ready_buf: RB[size] = {NULL, NULL, NULL, NULL}
+* order_buf: OB[size] = {NULL, NULL, NULL, NULL}
+*/
+   b = rte_reorder_create("test_drain_up_to_seqn", rte_socket_id(), size);
+   TEST_ASSERT_NOT_NULL(b, "Failed to create reorder buffer");
+
+   for (i = 0; i < num_bufs; i++) {
+   bufs[i] = rte_pktmbuf_alloc(p);
+   TEST_ASSERT_NOT_NULL(bufs[i], "Packet allocation failed\n");
+   *rte_reorder_seqn(bufs[i]) = i;
+   }
+
+   /* Insert packet with seqn 1 and 3:
+* RB[] = {NULL, NULL, NULL, NULL}
+* OB[] = {1, 2, 3, NULL}
+*/
+   buffer_to_reorder_move(&bufs[1], b);
+   buffer_to_reorder_move(&bufs[2], b);
+   buffer_to_reorder_move(&bufs[3], b);
+   /* Draining 1, 2 */
+   cnt = rte_reorder_drain_up_to_seqn(b, robufs, num_bufs, 3);
+   if (cnt != 2) {
+   printf("%s:%d:%d: number of expected packets not drained\n",
+   __func__, __LINE__, cnt);
+   ret = -1;
+   goto exit;
+   }
+   for (i = 0; i < 2; i++)
+   rte_pktmbuf_free(robufs[i]);
+   memset(robufs, 0, sizeof(robufs));
+
+   /* Insert more packets
+* RB[] = {NULL, NULL, NULL, NULL}
+* OB[] = {3, 4, NULL, 6}
+*/
+   buffer_to_reorder_move(&bufs[4], b);
+   buffer_to_reorder_move(&bufs[6], b);
+   /* Insert more packets to utilize Ready buffer
+* RB[] = {3, NULL, 5, 6}
+* OB[] = {NULL, NULL, 8, NULL}
+*/
+   buffer_to_reorder_move(&bufs[8], b);
+
+   /* Drain 3 and 5 */
+   cnt = rte_reorder_drain_up_to_seqn(b, robufs, num_bufs, 6);
+   if (cnt != 2) {
+   printf("%s:%d:%d: number of expected packets not drained\n",
+   __func__, __LINE__, cnt);
+   ret = -1;
+   goto exit;
+   }
+   for (i = 0; i < 2; i++)
+   rte_pktmbuf_free(robufs[i]);
+   memset(robufs, 0, sizeof(robufs));
+
+   ret = 0;
+exit:
+   rte_reorder_free(b);
+   for (i = 0; i < num_bufs; i++) {
+   rte_pktmbuf_free(bufs[i]);
+   rte_pktmbuf_free(robufs[i]);
+   }
+   return ret;
+}
+
+static int
+test_reorder_set_seqn(void)
+{
+   struct rte_mempool *p = test_params->p;
+   struct rte_reorder_buffer *b = NULL;
+   const unsigned int num_bufs = 7;
+   const unsigned int size = 4;
+   unsigned int i;
+   int ret = 0;
+
+   struct rte_mbuf *bufs[num_bufs];
+
+   /* This would create a reorder buffer instance consisting of:
+* reorder_seq = 0
+* ready_buf: RB[size] = {NULL, NULL, NULL, NULL}
+* order_buf: OB[size] = {NULL, NULL, NULL, NULL}
+*/
+   b = rte_reorder_create("test_min_seqn_set", rte_socket_id(), size);
+   TEST_ASSERT_NOT_NULL(b, "Failed to create reorder buffer");
+
+   for (i = 0; i < num_bufs; i++) {
+   bufs[i] = rte_pktmbuf_alloc(p);
+   if (bufs[i] == NULL) {
+   printf("Packet allocation failed\n");
+   goto exit;
+   }
+   *rte_reorder_seqn(bufs[i]) = i;
+   }
+
+   ret = rte_reorder_min_seqn_set(b, 5);
+   if (ret != 0) {
+   printf("%s:%d: Error in setting min sequence number\n", 
__func__, __LINE__);
+   ret = -1;
+   goto exit;
+   }
+
+   ret = rte_reorder_insert(b, bufs[0]);
+   if (ret >= 0) {
+   printf("%s:%d: Insertion with value less the min seq number\n", 
__func__, __LINE__);
+   ret = -1;
+   goto exit;
+   }
+
+   ret = rte_reorder_insert(b, bufs[5]);
+   if (ret != 0) {
+   printf("%s:%d: Error inser

RE: [PATCH 1/3] eventdev/eth_rx: add params set/get APIs

2023-01-20 Thread Naga Harish K, S V
Hi Jerin,

> -Original Message-
> From: Jerin Jacob 
> Sent: Friday, January 20, 2023 3:02 PM
> To: Naga Harish K, S V 
> Cc: jer...@marvell.com; Gujjar, Abhinandan S
> ; dev@dpdk.org; Jayatheerthan, Jay
> 
> Subject: Re: [PATCH 1/3] eventdev/eth_rx: add params set/get APIs
> 
> On Fri, Jan 20, 2023 at 2:28 PM Naga Harish K, S V
>  wrote:
> >
> > Hi Jerin,
> >
> > > -Original Message-
> > > From: Jerin Jacob 
> > > Sent: Wednesday, January 18, 2023 3:52 PM
> > > To: Naga Harish K, S V 
> > > Cc: jer...@marvell.com; Gujjar, Abhinandan S
> > > ; dev@dpdk.org; Jayatheerthan, Jay
> > > 
> > > Subject: Re: [PATCH 1/3] eventdev/eth_rx: add params set/get APIs
> > >
> > > On Sat, Jan 7, 2023 at 9:49 PM Naga Harish K S V
> > >  wrote:
> > > >
> > > > The adapter configuration parameters defined in the ``struct
> > > > rte_event_eth_rx_adapter_config_params`` can be configured and
> > > > retrieved using ``rte_event_eth_rx_adapter_set_params`` and
> > > > ``rte_event_eth_tx_adapter_get_params`` respectively.
> > > >
> > > > Signed-off-by: Naga Harish K S V 
> > > > ---
> > > >
> > > > +/**
> > > > + * Adapter configuration parameters  */ struct
> > > > +rte_event_eth_rx_adapter_runtime_params {
> > > > +   uint32_t max_nb_rx;
> > > > +   /**< The adapter can return early if it has processed at least
> > > > +* max_nb_rx mbufs. This isn't treated as a requirement; 
> > > > batching
> may
> > > > +* cause the adapter to process more than max_nb_rx mbufs.
> > >
> > > This parameter as specific to SW only driver. So future something
> > > added from HW only driver item then it won't work for SW driver. So
> > > we need capability per adapter.
> > >
> > > So,  I would suggest following theme across _all_ adapters.
> > >
> > > 1) Introduce RTE_EVENT_ETH_RX_ADAPTER_CAP_RUNTIME_XYZ and
> associate
> > > each parameter(the one we think, it is not common for all
> > > adapters)
> >
> > The parameters that are exposed in the patch are all existing
> > parameters and they are made runtime configurable for SW
> > implementation. I think, there are no such parameters existing today
> > for HW driver implementation. Hence it may be better to introduce these
> flags when the HW driver Implementation requires runtime configurable
> parameters.
> 
> Since current values are not applicable to HW. So we any way need the
> capability now to tell this is not applicable for HW.
> 

Depending on the existing adapter capability flag 
"RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT",
the current values can be applied only to the SW implementation. This way,
there is no need to create new capability flags.

> >
> > > 2) Add some reserved fields in
> > > rte_event_eth_rx_adapter_runtime_params
> > > so that we don't break ABI in future
> >
> > Agreed.
> >
> > > 3) Add rte_event_eth_rx_adapter_runtime_params_init() function just
> > > make structure fill with default to avoid ABI break
> > > 4) Add rte_event_eth_rx_adapter_runtime_params_info_get(). Lets
> > > capability flags and other items can be return via this
> >
> > These two items(3,4) can be taken as and when item "1" above is
> implemented.
> 
> See above.
> 
> >
> > > 5) Change rte_event_eth_rx_adapter_set_params as
> > > rte_event_eth_rx_adapter_runtime_set()  or
> > > rte_event_eth_rx_adapter_runtime_params_set() to make it runtime
> > > explicit
> >
> > Agreed
> >
> > > 6) Change rte_event_eth_rx_adapter_get_params as
> > > rte_event_eth_rx_adapter_runtime_get() or
> > > rte_event_eth_rx_adapter_runtime_params_get()  to make it runtime
> > > explicit
> >
> > Agreed


RE: [PATCH v6 1/2] ethdev: add query_update sync and async function calls

2023-01-20 Thread Gregory Etelson
Hello Andrew,

[]

> > The patch adds `rte_flow_action_handle_query_update` function call,
> 
> "This patch adds ..." -> "Add ..."
> 

Will fix in v7.

> > and it's async version `rte_flow_async_action_handle_query_update`
> > to atomically query and update flow action.
> 
> Sorry, may be I'm missing something, but after reading previous
> discussion I have a feeling that update could be queried data
> dependent. However, I don't understand how it could be possible
> with API below. Just my misunderstanding?
> 

The goal of `rte_flow_action_handle_query_update()` is to execute query and 
update actions atomically.
The function guarantees that the action state will not be changed by any event
before both update and query complete.
If the function was called with the ` RTE_FLOW_QU_QUERY_FIRST ` `mode` 
argument, then update execution can depend
on query result. That's up to query format, PMD implementation and hardware 
capabilities.
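
To make that concrete, a caller might do something like this (a sketch against
the v6 prototype in the patch; the query/update payloads are action-type
specific, so the count query here is only a placeholder):

static int
sync_query_then_update(uint16_t port_id, struct rte_flow_action_handle *handle,
		       const void *update_conf)
{
	struct rte_flow_query_count cnt = { .reset = 0 };
	struct rte_flow_error err;

	/* The PMD performs the query before applying the update, atomically. */
	return rte_flow_action_handle_query_update(port_id, handle,
						   update_conf, &cnt,
						   RTE_FLOW_QU_QUERY_FIRST,
						   &err);
}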
I hope that answered your question.
  
> >
> > Application can control query and update order, if that is supported
> > by port hardware, by setting `qu_mode` parameter to
> > RTE_FLOW_QU_QUERY_FIRST or RTE_FLOW_QU_UPDATE_FIRST.
> >
> > Signed-off-by: Gregory Etelson 
> > ---
> > v2: Remove RTE_FLOW_QU_DEFAULT query-update mode.
> > v3: Update release release notes.
> >  Fix doxygen errors.
> > v4: Add returned errno codes.
> > v5: Update the patch description.
> >  Fix typos.
> > v6: Resolve merge conflict with the main branch.
> > ---
> >   doc/guides/rel_notes/release_23_03.rst |  7 ++
> >   lib/ethdev/rte_flow.c  | 39 ++
> >   lib/ethdev/rte_flow.h  | 99 ++
> >   lib/ethdev/rte_flow_driver.h   | 15 
> >   lib/ethdev/version.map |  5 ++
> >   5 files changed, 165 insertions(+)
> >
> > diff --git a/doc/guides/rel_notes/release_23_03.rst
> b/doc/guides/rel_notes/release_23_03.rst
> > index c15f6fbb9f..6941d20abc 100644
> > --- a/doc/guides/rel_notes/release_23_03.rst
> > +++ b/doc/guides/rel_notes/release_23_03.rst
> > @@ -69,6 +69,13 @@ New Features
> >   ``rte_event_dev_config::nb_single_link_event_port_queues``
> parameter
> >   required for eth_rx, eth_tx, crypto and timer eventdev adapters.
> >
> > +* **Added functions to atomically query and update indirect flow
> action.**
> > +
> > +  Added synchronous and asynchronous functions to atomically query
> and update
> > +  indirect flow action:
> > +
> > +  - ``rte_flow_action_handle_query_update``
> > +  - ``rte_flow_async_action_handle_query_update``
> 
> Please, add one more empty line since two should separate
> sections.
> 

Will fix in v7.

> >
> >   Removed Items
> >   -
> > diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
> > index 7d0c24366c..8b8aa940be 100644
> > --- a/lib/ethdev/rte_flow.c
> > +++ b/lib/ethdev/rte_flow.c
> > @@ -1883,3 +1883,42 @@ rte_flow_async_action_handle_query(uint16_t
> port_id,
> > action_handle, data, user_data, 
> > error);
> >   return flow_err(port_id, ret, error);
> >   }
> > +
> > +int
> > +rte_flow_action_handle_query_update(uint16_t port_id,
> > + struct rte_flow_action_handle *handle,
> > + const void *update, void *query,
> > + enum rte_flow_query_update_mode mode,
> > + struct rte_flow_error *error)
> > +{
> > + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> 
> I know that nearby code does the same, but I still dislike it.
> If port_id is invalid, it could be out-of-boundary access.
> Yes, port_id is checked on ops get, but it would be safer to
> do the assignment after ops check below.
> Right now there is no bugs, but the future modifications
> could break it.
> 

Will fix in v7.

> > + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> > + int ret;
> > +
> > + if (!ops || !ops->action_handle_query_update)
> > + return -ENOTSUP;
> > + ret = ops->action_handle_query_update(dev, handle, update, query,
> > +   mode, error);
> > + return flow_err(port_id, ret, error);
> > +}
> > +
> > +int
> > +rte_flow_async_action_handle_query_update(uint16_t port_id, uint32_t
> queue_id,
> > +   const struct rte_flow_op_attr *attr,
> > +   struct rte_flow_action_handle 
> > *handle,
> > +   const void *update, void *query,
> > +   enum rte_flow_query_update_mode 
> > mode,
> > +   void *user_data,
> > +   struct rte_flow_error *error)
> > +{
> > + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> 
> same here
> 

Will fix in v7.

> > + const struct rte_flow_ops *ops = rte_flow_ops_

TCP stack support on DPDK

2023-01-20 Thread Sam Kirubakaran
Hi team,

My name is Sam and I am a Software Engineer.
Currently, we are trying to improve the performance of proprietary in-house
tools by using DPDK, but we are not sure whether DPDK has support for a TCP
stack.
If yes, could you please point me to a link to the APIs; I will take it
from there.
Could you please guide me on this? Your response would be highly
appreciated!

Regards,
Sam


net/bnxt: wrong link status when lsc_intr is used

2023-01-20 Thread Edwin Brossette
Hello,

I am trying to operate a Broadcom BCM57414 2x10G nic using dpdk bnxt pmd. I
use DPDK 22.11.
However, doing so I stumbled over a number of different issues. Mainly,
using the dpdk rte_eth library, I don't seem to be able to correctly poll
the link status: I suspect my NIC has a problem using autoneg to set the
link speed/duplex, because these parameters remain unknown while autoneg is
on. However, after trying to set link up, instead of showing the link state
as down, I see the link being up, which is in truth not the case, as no
packets can transit on the line and the switch at the other end sees it
down.

When searching around and trying to debug the code, I found the function
bnxt_dev_info_get_op() sets my nic in interrupt mode:

> eth_dev->data->dev_conf.intr_conf.lsc = 1;

Which is a bit of a surprising thing to do for a function meant to get
info.
Thus my card ends up working in a mode I didn't configure it to, which may
be the cause of my issue: later when setting the link up in function
bnxt_dev_set_link_up_op():

>  if (!bp->link_info->link_up)
> rc = bnxt_set_hwrm_link_config(bp, true);
>  if (!rc)
> eth_dev->data->dev_link.link_status = 1;

So link_status in eth_dev gets set to 1 as long as the operation did not
return any error code. This is the case when setting my card's link up
(rc=0), although the link clearly can't get up, for whatever other bug is
present. Now this shouldn't be much of an issue given we will update the
link status at some point, mainly in rte_eth_link_get_nowait():

>  if (dev->data->dev_conf.intr_conf.lsc && dev->data->dev_started)
> rte_eth_linkstatus_get(dev, eth_link);
>  else {
> if (*dev->dev_ops->link_update == NULL)
> return -ENOTSUP;
> (*dev->dev_ops->link_update)(dev, 0);
>  *eth_link = dev->data->dev_link;

Here we can see in the else statement that the link status gets updated.
However because the pmd auto-configured the nic in interrupt mode when
calling the get_info function, we are not going through that else
statement. So when reading the value of the link_status, we read 1 instead
of 0. I suppose with interrupt mode enabled, the nic should be able to
update this variable on its own, but it is clearly not the case in my
setup: link status is never updated and incorrectly indicates the link is
UP.

I can suggest a testpmd reproduction setup using the --no-lsc-interrupt
option. With this option, dev_conf.intr_conf.lsc should be 0. In addition,
I added a log to the rte_eth library in rte_eth_dev_start() to display
dev_conf.intr_conf.lsc state when starting the port:

> + RTE_ETHDEV_LOG(ERR, "><><><>:
dev->data->dev_conf.intr_conf=%d\n", dev->data->dev_conf.intr_conf.lsc);
>   diag = (*dev->dev_ops->dev_start)(dev);

Running testpmd, we can see the following outpout when starting port:

dpdk-testpmd --log-level=pmd.net.bnxt.driver:8 -a :02:00.0 -a
:02:00.1 -- -i --rxq=2 --txq=2 --coremask=0x0c --total-num-mbufs=25
--no-lsc-interrupt
[...]
Configuring Port 0 (socket 0)
bnxt_rx_queue_setup_op(): App supplied RXQ drop_en status : 1
bnxt_rx_queue_setup_op(): RX Buf MTU 1500
bnxt_rx_queue_setup_op(): RX Buf size is 9728
bnxt_rx_queue_setup_op(): App supplied RXQ drop_en status : 1
bnxt_rx_queue_setup_op(): RX Buf MTU 1500
bnxt_rx_queue_setup_op(): RX Buf size is 9728
><><><>: dev->data->dev_conf.intr_conf=1
bnxt_mq_rx_configure(): pools = 1 nb_q_per_grp = 2
bnxt_mq_rx_configure(): rxq[0] = 0x105fb7ac0 vnic[0] = 0x100227080
bnxt_mq_rx_configure(): rxq[1] = 0x105fb0e40 vnic[0] = 0x100227080
bnxt_setup_one_vnic(): vnic[0] = 0x100227080 vnic->fw_grp_ids = 0x105fa7e00
bnxt_hwrm_vnic_alloc(): Alloc VNIC. Start 0, End 2
bnxt_hwrm_vnic_alloc(): VNIC ID 2
bnxt_setup_one_vnic(): rxq[0]->vnic=0x100227080 vnic->fw_grp_ids=0x105fa7e00
bnxt_setup_one_vnic(): rxq[1]->vnic=0x100227080 vnic->fw_grp_ids=0x105fa7e00
bnxt_setup_one_vnic(): vnic->rx_queue_cnt = 2
bnxt_hwrm_port_phy_qcfg(): Link Speed:0,Auto:4:64:140,Support:140,Force:0
bnxt_hwrm_port_phy_qcfg(): Link Signal:0,PAM::Auto:0,Support:0,Force:0
bnxt_hwrm_port_phy_qcfg(): Link Speed:0,Auto:4:64:140,Support:140,Force:0
bnxt_hwrm_port_phy_qcfg(): Link Signal:0,PAM::Auto:0,Support:0,Force:0
bnxt_ulp_port_init(): Skip ulp init for port: 0, TF is not enabled
bnxt_receive_function(): Using SSE vector mode receive for port 0
bnxt_transmit_function(): Using SSE vector mode transmit for port 0
Port 0: 00:0A:F7:B6:E3:D0
Configuring Port 1 (socket 0)
bnxt_rx_queue_setup_op(): App supplied RXQ drop_en status : 1
bnxt_rx_queue_setup_op(): RX Buf MTU 1500
bnxt_rx_queue_setup_op(): RX Buf size is 9728
bnxt_rx_queue_setup_op(): App supplied RXQ drop_en status : 1
bnxt_rx_queue_setup_op(): RX Buf MTU 1500
bnxt_rx_queue_setup_op(): RX Buf size is 9728
><><><>: dev->data->dev_conf.intr_conf=1
bnxt_mq_rx_configure(): pools = 1 nb_q_per_grp = 2
bnxt_mq_rx_configure(): rxq[0] = 0x106200280 vnic[0] = 0x100200080
bnxt_mq_rx_configure(): rxq[1] = 0x105f10e40 vnic[0] = 0x100200080
bnxt_setup_one_vnic()

Re: [PATCH v6 1/2] ethdev: add query_update sync and async function calls

2023-01-20 Thread Andrew Rybchenko

On 1/20/23 13:46, Gregory Etelson wrote:

and it's async version `rte_flow_async_action_handle_query_update`
to atomically query and update flow action.


Sorry, may be I'm missing something, but after reading previous
discussion I have a feeling that update could be queried data
dependent. However, I don't understand how it could be possible
with API below. Just my misunderstanding?



The goal of `rte_flow_action_handle_query_update()` is to execute query and 
update actions atomically.
The function guarantees that the action state will not be changed by any event
before both update and query complete.
If the function was called with the ` RTE_FLOW_QU_QUERY_FIRST ` `mode` 
argument, then update execution can depend
on query result. That's up to query format, PMD implementation and hardware 
capabilities.
I hope that answered your question.


Sorry, I'm still confused. Could you explain a bit more,
please. How update could depend on query result?
Caller already specified update structure...



Testpmd/l3fwd port shutdown failure on Arm Altra systems

2023-01-20 Thread Juraj Linkeš
Hello i40e and testpmd maintainers,

We're hitting an issue with DPDK testpmd on Ampere Altra servers in FD.io lab.

A bit of background: along with VPP performance tests (which uses DPDK), we're 
running a small number of basic DPDK testpmd and l3fwd tests in FD.io as well. 
This is to catch any performance differences due to VPP updating its DPDK 
version.

We're running both l3fwd tests and testpmd tests. The Altra servers are two 
socket and the topology is TG -> DUT1 -> DUT2 -> TG, traffic flows in both 
directions, but nothing gets forwarded (with a slight caveat - put a pin in 
this). There's nothing special in the tests, just forwarding traffic. The NIC 
we're testing is xl710-QDA2.

The same tests are passing on all other testbeds - we have various two node (1 
DUT, 1 TG) and three node (2 DUT, 1 TG) Intel and Arm testbeds and with various 
NICs (Intel 700 and 800 series and the Intel testbeds use some Mellanox NICs as 
well). We don't have quite the same combination of another three node topology 
with the same NIC though, so it looks like something with testpmd/l3fwd and 
xl710-QDA2 on Altra servers.

VPP performance tests are passing, but l3fwd and testpmd fail. This leads us to
believe it's a software issue, but there could be something wrong with the
hardware. I'll talk about testpmd from now on, but as far as we can tell, the
behavior is the same for testpmd and l3fwd.

Getting back to the caveat mentioned earlier, there seems to be something wrong 
with port shutdown. When running testpmd on a testbed that hasn't been used for 
a while it seems that all ports are up right away (we don't see any "Port 0|1: 
link state change event") and the setup works fine (forwarding works). After 
restarting testpmd (restarting on one server is sufficient), the ports between 
DUT1 and DUT2 (but not between DUTs and TG) go down and are not usable in DPDK, 
VPP or in Linux (with i40e kernel driver) for a while (measured in minutes, 
sometimes dozens of minutes; the duration is seemingly random). The ports 
eventually recover and can be used again, but there's nothing in syslog 
suggesting what happened.

What seems to be happening is that testpmd puts the ports into some faulty state.
This only happens on the DUT1 -> DUT2 link though (the ports between the two 
testpmds), not on TG -> DUT1 link (the TG port is left alone).

Some more info:
We've come across the issue with this configuration:
OS: Ubuntu20.04 with kernel 5.4.0-65-generic.
Old NIC firmware, never upgraded: 6.01 0x800035da 1.1747.0.
Drivers versions: i40e 2.17.15 and iavf 4.3.19.

As well as with this configuration:
OS: Ubuntu22.04 with kernel 5.15.0-46-generic.
Updated firmware: 8.30 0x8000a4ae 1.2926.0.
Drivers: i40e 2.19.3 and iavf 4.5.3.

Unsafe noiommu mode is disabled:
cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
N

We used DPDK 22.07 in manual testing and built it on DUTs, using generic build:
meson -Dexamples=l3fwd -Dc_args=-DRTE_LIBRTE_I40E_16BYTE_RX_DESC=y 
-Dplatform=generic build

We're running testpmd with this command:
sudo build/app/dpdk-testpmd -v -l 1,2 -a 0004:04:00.1 -a 0004:04:00.0 
--in-memory -- -i --forward-mode=io --burst=64 --txq=1 --rxq=1 
--tx-offloads=0x0 --numa --auto-start --total-num-mbufs=32768 --nb-ports=2 
--portmask=0x3 --max-pkt-len=1518 --mbuf-size=16384 --nb-cores=1

And l3fwd (with different macs on the other server):
sudo /tmp/openvpp-testing/dpdk/build/examples/dpdk-l3fwd -v -l 1,2 -a 
0004:04:00.0 -a 0004:04:00.1 --in-memory -- --parse-ptype 
--eth-dest="0,40:a6:b7:85:e7:79" --eth-dest="1,3c:fd:fe:c3:e7:a1" --config="(0, 
0, 2),(1, 0, 2)" -P -L -p 0x3

We tried adding logs with  --log-level=pmd,debug and --no-lsc-interrupt, but 
that didn't reveal anything helpful, as far as we can tell - please have a look 
at the attached log. The faulty port is port0 (starts out as down, then we 
waited for around 25 minutes for it to go up and then we shut down testpmd).

We'd like to ask for pointers on what could be the cause or how to debug this 
issue further.

Thanks,
Juraj


RE: Testpmd/l3fwd port shutdown failure on Arm Altra systems

2023-01-20 Thread Juraj Linkeš
Adding the logfile.

One thing that's in the logs but that I didn't explicitly mention is the DPDK
version we've tried this with:
EAL: RTE Version: 'DPDK 22.07.0'

We also tried earlier versions going back to 21.08, with no luck. I also did a 
quick check on 22.11, also with no luck.

Juraj



testpmd.log
Description: testpmd.log


Re: [RFC] ethdev: sharing indirect actions between ports

2023-01-20 Thread Andrew Rybchenko

On 1/18/23 19:37, Slava Ovsiienko wrote:




-Original Message-
From: Thomas Monjalon 
Sent: Wednesday, January 18, 2023 6:22 PM
To: Slava Ovsiienko ; Ori Kam

Cc: dev@dpdk.org; Matan Azrad ; Raslan Darawsheh
; andrew.rybche...@oktetlabs.ru;
ivan.ma...@oktetlabs.ru; ferruh.yi...@amd.com
Subject: Re: [RFC] ethdev: sharing indirect actions between ports

18/01/2023 16:17, Ori Kam:

From: Thomas Monjalon 

28/12/2022 17:54, Viacheslav Ovsiienko:

The RTE Flow API implements the concept of shared objects, known
as indirect actions (RTE_FLOW_ACTION_TYPE_INDIRECT).
An application can create the indirect action of desired type and
configuration with rte_flow_action_handle_create call and then
specify the obtained action handle in multiple flows.

The initial concept supposes the action handle has strict
attachment to the port it was created on and to be used
exclusively in the flows being installed on the port.

Nowadays the multipath network topologies are quite common,
packets belonging to the same connection might arrive and be sent
over multiple ports, and there is the raising demand to handle
these "spread" connections. To fulfil this demand it is proposed
to extend indirect action sharing across the multiple ports. This
kind of sharing would be extremely useful for the meters and
counters, allowing to manage the single connection over the
multiple ports.

This cross-port object sharing is hard to implement in generic way
merely with software on the upper layers, but can be provided by
the driver over the single hardware instance, where  multiple
ports reside on the same physical NIC and share the same hardware
context.

To allow this action sharing application should specify the "host
port" during flow configuring to claim the intention to share the
indirect actions. All indirect actions reside within "host port"
context and can be shared in flows being installed


I don't like the word "host" because it may refer to the host CPU.
Also if I understand well, the application must choose one port
between all ports of the NIC and keep using the same.
I guess we don't want to create a NIC id.
So I would suggest to rename to nic_ref_port or something like that.



I think that host is the correct word since this port hosts all
resources for other ports. (this is also why the host is used in case
of CPU 😊)
I don't think it is correct to use bad wording due to the fact that
some one else also uses this word.
in rte_flow we never talk about host CPU so I don't think this is confusing.


The confusion is that we can think of a port on the host.


In my humble opinion, "_port_id" suffix explicitly specifies what field is and 
does not leave
too much space for confusion.

"root_port_id"? "base_port_id"?  "container_port_id" ? "mgmnt_port_id" ?
  Looks worse to me and does not reflect the exact meaning.
As Ori mentioned this is DPDK port ID that embraces all the shared actions.
It plays a host role for them.


Maybe 'owner_port_id' or 'rsrc_port_id' ?




RE: [PATCH 3/3] test/reorder: add cases to cover new API

2023-01-20 Thread Volodymyr Fialko
Depends-on: series-26425 ("lib/reorder: fix drain/free issues")

Test failure is caused by this bug in drain/api test - 
http://patches.dpdk.org/project/dpdk/list/?series=26425
It wasn't seen before because drain was the last test, but now the following
test receives a pointer to the same packet twice due to a double free.

> -Original Message-
> From: Volodymyr Fialko 
> Sent: Friday, January 20, 2023 11:22 AM
> To: dev@dpdk.org; Reshma Pattan 
> Cc: Jerin Jacob Kollanukkaran ; Anoob Joseph 
> ;
> Volodymyr Fialko 
> Subject: [PATCH 3/3] test/reorder: add cases to cover new API
> 
> Add new test cases to cover `rte_reorder_drain_up_to_seqn` and 
> `rte_reorder_min_seqn_set`.
> 
> Signed-off-by: Volodymyr Fialko 
> ---
>  app/test/test_reorder.c | 160 
>  1 file changed, 160 insertions(+)
> 
> diff --git a/app/test/test_reorder.c b/app/test/test_reorder.c index 
> f0714a5c18..c345a72e0c 100644
> --- a/app/test/test_reorder.c
> +++ b/app/test/test_reorder.c
> @@ -335,6 +335,164 @@ test_reorder_drain(void)
>   return ret;
>  }
> 
> +static void
> +buffer_to_reorder_move(struct rte_mbuf **mbuf, struct
> +rte_reorder_buffer *b) {
> + rte_reorder_insert(b, *mbuf);
> + *mbuf = NULL;
> +}
> +
> +static int
> +test_reorder_drain_up_to_seqn(void)
> +{
> + struct rte_mempool *p = test_params->p;
> + struct rte_reorder_buffer *b = NULL;
> + const unsigned int num_bufs = 10;
> + const unsigned int size = 4;
> + unsigned int i, cnt;
> + int ret = 0;
> +
> + struct rte_mbuf *bufs[num_bufs];
> + struct rte_mbuf *robufs[num_bufs];
> +
> + /* initialize all robufs to NULL */
> + memset(robufs, 0, sizeof(robufs));
> +
> + /* This would create a reorder buffer instance consisting of:
> +  * reorder_seq = 0
> +  * ready_buf: RB[size] = {NULL, NULL, NULL, NULL}
> +  * order_buf: OB[size] = {NULL, NULL, NULL, NULL}
> +  */
> + b = rte_reorder_create("test_drain_up_to_seqn", rte_socket_id(), size);
> + TEST_ASSERT_NOT_NULL(b, "Failed to create reorder buffer");
> +
> + for (i = 0; i < num_bufs; i++) {
> + bufs[i] = rte_pktmbuf_alloc(p);
> + TEST_ASSERT_NOT_NULL(bufs[i], "Packet allocation failed\n");
> + *rte_reorder_seqn(bufs[i]) = i;
> + }
> +
> + /* Insert packet with seqn 1 and 3:
> +  * RB[] = {NULL, NULL, NULL, NULL}
> +  * OB[] = {1, 2, 3, NULL}
> +  */
> + buffer_to_reorder_move(&bufs[1], b);
> + buffer_to_reorder_move(&bufs[2], b);
> + buffer_to_reorder_move(&bufs[3], b);
> + /* Draining 1, 2 */
> + cnt = rte_reorder_drain_up_to_seqn(b, robufs, num_bufs, 3);
> + if (cnt != 2) {
> + printf("%s:%d:%d: number of expected packets not drained\n",
> + __func__, __LINE__, cnt);
> + ret = -1;
> + goto exit;
> + }
> + for (i = 0; i < 2; i++)
> + rte_pktmbuf_free(robufs[i]);
> + memset(robufs, 0, sizeof(robufs));
> +
> + /* Insert more packets
> +  * RB[] = {NULL, NULL, NULL, NULL}
> +  * OB[] = {3, 4, NULL, 6}
> +  */
> + buffer_to_reorder_move(&bufs[4], b);
> + buffer_to_reorder_move(&bufs[6], b);
> + /* Insert more packets to utilize Ready buffer
> +  * RB[] = {3, NULL, 5, 6}
> +  * OB[] = {NULL, NULL, 8, NULL}
> +  */
> + buffer_to_reorder_move(&bufs[8], b);
> +
> + /* Drain 3 and 5 */
> + cnt = rte_reorder_drain_up_to_seqn(b, robufs, num_bufs, 6);
> + if (cnt != 2) {
> + printf("%s:%d:%d: number of expected packets not drained\n",
> + __func__, __LINE__, cnt);
> + ret = -1;
> + goto exit;
> + }
> + for (i = 0; i < 2; i++)
> + rte_pktmbuf_free(robufs[i]);
> + memset(robufs, 0, sizeof(robufs));
> +
> + ret = 0;
> +exit:
> + rte_reorder_free(b);
> + for (i = 0; i < num_bufs; i++) {
> + rte_pktmbuf_free(bufs[i]);
> + rte_pktmbuf_free(robufs[i]);
> + }
> + return ret;
> +}
> +
> +static int
> +test_reorder_set_seqn(void)
> +{
> + struct rte_mempool *p = test_params->p;
> + struct rte_reorder_buffer *b = NULL;
> + const unsigned int num_bufs = 7;
> + const unsigned int size = 4;
> + unsigned int i;
> + int ret = 0;
> +
> + struct rte_mbuf *bufs[num_bufs];
> +
> + /* This would create a reorder buffer instance consisting of:
> +  * reorder_seq = 0
> +  * ready_buf: RB[size] = {NULL, NULL, NULL, NULL}
> +  * order_buf: OB[size] = {NULL, NULL, NULL, NULL}
> +  */
> + b = rte_reorder_create("test_min_seqn_set", rte_socket_id(), size);
> + TEST_ASSERT_NOT_NULL(b, "Failed to create reorder buffer");
> +
> + for (i = 0; i < num_bufs; i++) {
> + bufs[i] = rte_pktmbuf_alloc(p);
> + if (bufs[i] == NULL) {
> + printf("Packet allocation failed\n");
> +  

Re: TCP stack support on DPDK

2023-01-20 Thread Thomas Monjalon
20/01/2023 09:11, Sam Kirubakaran:
> Hi team,
> 
> My name is Sam and I am a Software Engineer.
> Currently, we are trying to improve the performance of proprietary in-house
> tools by using DPDK but we are not sure whether DPDK has support for TCP
> stack.

No, there is no TCP stack inside DPDK.
You can find some projects here: https://www.dpdk.org/ecosystem/#projects




RE: [PATCH] net/i40e: rework maximum frame size configuration

2023-01-20 Thread Su, Simei
Hi David,

> -Original Message-
> From: David Marchand 
> Sent: Friday, January 20, 2023 3:34 PM
> To: Su, Simei 
> Cc: Xing, Beilei ; Zhang, Yuying
> ; dev@dpdk.org; Zhang, Qi Z
> ; Yang, Qiming ;
> sta...@dpdk.org; Zhang, Helin 
> Subject: Re: [PATCH] net/i40e: rework maximum frame size configuration
> 
> On Mon, Jan 16, 2023 at 1:15 PM Su, Simei  wrote:
> >
> > Hi David,
> >
> > > -Original Message-
> > > From: David Marchand 
> > > Sent: Monday, January 16, 2023 7:19 PM
> > > To: Su, Simei 
> > > Cc: Xing, Beilei ; Zhang, Yuying
> > > ; dev@dpdk.org; Zhang, Qi Z
> > > ; Yang, Qiming ;
> > > sta...@dpdk.org; Zhang, Helin 
> > > Subject: Re: [PATCH] net/i40e: rework maximum frame size
> > > configuration
> > >
> > > On Mon, Jan 16, 2023 at 11:54 AM Simei Su  wrote:
> > > >
> > > > This patch removes unnecessary link status check.
> > > >
> > > > Fixes: a4ba77367923 ("net/i40e: enable maximum frame size at port
> > > > level")
> > > > Fixes: 2184f7cdeeaa ("net/i40e: fix max frame size config at port
> > > > level")
> > > > Fixes: 719469f13b11 ("net/i40e: fix jumbo frame Rx with X722")
> > > > Cc: sta...@dpdk.org
> > > >
> > > > Signed-off-by: Simei Su 
> > >
> > > Thanks for looking into the issue.
> > >
> > > This is rather close to what I had tried [1] along my original
> > > report, but it failed in the CI.
> > > Let's see how the validation of your patch goes.
> > >
> > > 1:
> > >
> https://patchwork.dpdk.org/project/dpdk/patch/20221212143715.29649-1
> > > -d
> > > avid.march...@redhat.com/
> > >
> >
> > OK. We will find one environment to see why the unit test failed.
> 
> Any update?

We can reproduce the CI error, but "ifconfig interface up" needs to be done
first to reproduce it; otherwise the error does not occur. The specific reason
hasn't been found yet. We will discuss after Chinese New Year how to handle it
more reasonably.

> 
> 
> --
> David Marchand



Re: [PATCH] net/i40e: rework maximum frame size configuration

2023-01-20 Thread David Marchand
On Fri, Jan 20, 2023 at 2:58 PM Su, Simei  wrote:
>
> Hi David,
>
> > -Original Message-
> > From: David Marchand 
> > Sent: Friday, January 20, 2023 3:34 PM
> > To: Su, Simei 
> > Cc: Xing, Beilei ; Zhang, Yuying
> > ; dev@dpdk.org; Zhang, Qi Z
> > ; Yang, Qiming ;
> > sta...@dpdk.org; Zhang, Helin 
> > Subject: Re: [PATCH] net/i40e: rework maximum frame size configuration
> >
> > On Mon, Jan 16, 2023 at 1:15 PM Su, Simei  wrote:
> > >
> > > Hi David,
> > >
> > > > -Original Message-
> > > > From: David Marchand 
> > > > Sent: Monday, January 16, 2023 7:19 PM
> > > > To: Su, Simei 
> > > > Cc: Xing, Beilei ; Zhang, Yuying
> > > > ; dev@dpdk.org; Zhang, Qi Z
> > > > ; Yang, Qiming ;
> > > > sta...@dpdk.org; Zhang, Helin 
> > > > Subject: Re: [PATCH] net/i40e: rework maximum frame size
> > > > configuration
> > > >
> > > > On Mon, Jan 16, 2023 at 11:54 AM Simei Su  wrote:
> > > > >
> > > > > This patch removes unnecessary link status check.
> > > > >
> > > > > Fixes: a4ba77367923 ("net/i40e: enable maximum frame size at port
> > > > > level")
> > > > > Fixes: 2184f7cdeeaa ("net/i40e: fix max frame size config at port
> > > > > level")
> > > > > Fixes: 719469f13b11 ("net/i40e: fix jumbo frame Rx with X722")
> > > > > Cc: sta...@dpdk.org
> > > > >
> > > > > Signed-off-by: Simei Su 
> > > >
> > > > Thanks for looking into the issue.
> > > >
> > > > This is rather close to what I had tried [1] along my original
> > > > report, but it failed in the CI.
> > > > Let's see how the validation of your patch goes.
> > > >
> > > > 1:
> > > >
> > https://patchwork.dpdk.org/project/dpdk/patch/20221212143715.29649-1
> > > > -d
> > > > avid.march...@redhat.com/
> > > >
> > >
> > > OK. We will find one environment to see why the unit test failed.
> >
> > Any update?
>
> We can reproduce the CI error, but "ifconfig interface up" needs to be done
> first to reproduce it; otherwise the error does not occur. The specific reason
> hasn't been found yet. We will discuss after Chinese New Year how to handle it
> more reasonably.

We have regressions in stable releases.
Please put priority when the team is back so that this topic is fixed in 23.03.


Thanks.

-- 
David Marchand



RE: [PATCH] net/i40e: rework maximum frame size configuration

2023-01-20 Thread Su, Simei

> -Original Message-
> From: David Marchand 
> Sent: Friday, January 20, 2023 10:47 PM
> To: Su, Simei 
> Cc: Xing, Beilei ; Zhang, Yuying
> ; dev@dpdk.org; Zhang, Qi Z
> ; Yang, Qiming ;
> sta...@dpdk.org; Zhang, Helin ; Mcnamara, John
> 
> Subject: Re: [PATCH] net/i40e: rework maximum frame size configuration
> 
> On Fri, Jan 20, 2023 at 2:58 PM Su, Simei  wrote:
> >
> > Hi David,
> >
> > > -Original Message-
> > > From: David Marchand 
> > > Sent: Friday, January 20, 2023 3:34 PM
> > > To: Su, Simei 
> > > Cc: Xing, Beilei ; Zhang, Yuying
> > > ; dev@dpdk.org; Zhang, Qi Z
> > > ; Yang, Qiming ;
> > > sta...@dpdk.org; Zhang, Helin 
> > > Subject: Re: [PATCH] net/i40e: rework maximum frame size
> > > configuration
> > >
> > > On Mon, Jan 16, 2023 at 1:15 PM Su, Simei  wrote:
> > > >
> > > > Hi David,
> > > >
> > > > > -Original Message-
> > > > > From: David Marchand 
> > > > > Sent: Monday, January 16, 2023 7:19 PM
> > > > > To: Su, Simei 
> > > > > Cc: Xing, Beilei ; Zhang, Yuying
> > > > > ; dev@dpdk.org; Zhang, Qi Z
> > > > > ; Yang, Qiming ;
> > > > > sta...@dpdk.org; Zhang, Helin 
> > > > > Subject: Re: [PATCH] net/i40e: rework maximum frame size
> > > > > configuration
> > > > >
> > > > > On Mon, Jan 16, 2023 at 11:54 AM Simei Su 
> wrote:
> > > > > >
> > > > > > This patch removes unnecessary link status check.
> > > > > >
> > > > > > Fixes: a4ba77367923 ("net/i40e: enable maximum frame size at
> > > > > > port
> > > > > > level")
> > > > > > Fixes: 2184f7cdeeaa ("net/i40e: fix max frame size config at
> > > > > > port
> > > > > > level")
> > > > > > Fixes: 719469f13b11 ("net/i40e: fix jumbo frame Rx with X722")
> > > > > > Cc: sta...@dpdk.org
> > > > > >
> > > > > > Signed-off-by: Simei Su 
> > > > >
> > > > > Thanks for looking into the issue.
> > > > >
> > > > > This is rather close to what I had tried [1] along my original
> > > > > report, but it failed in the CI.
> > > > > Let's see how the validation of your patch goes.
> > > > >
> > > > > 1:
> > > > >
> > >
> https://patchwork.dpdk.org/project/dpdk/patch/20221212143715.29649-1
> > > > > -d
> > > > > avid.march...@redhat.com/
> > > > >
> > > >
> > > > OK. We will find one environment to see why the unit test failed.
> > >
> > > Any update?
> >
> > We can reproduce the CI error, but "ifconfig interface up" needs to be
> > done first to reproduce it; otherwise the error does not occur. The root
> > cause has not been found yet. We will discuss how to handle it more
> > reasonably after Chinese New Year.
> 
> We have regressions in stable releases.
> Please put priority when the team is back so that this topic is fixed in 
> 23.03.
> 
> 
> Thanks.
> 
> --
> David Marchand

Sorry for any inconvenience. OK, we will handle it as soon as possible when we
are back and fix it in 23.03.

Thanks,
Simei




Re: ICE DDP Load assistance

2023-01-20 Thread Ben Magistro
Okay, I believe we have tracked this down and can explain most of the current
behavior. The question I have now is whether this is really what is desired,
whether it should be possible to pass DPDK a flag so it loads a DDP file
anyway, or whether it should always try to update the DDP if a newer one is
found.  In our case we are unlikely to utilize the ICE-based driver in the
OS and would only be using the device with DPDK.  To me, in this scenario, it
makes little sense to maintain/think about the OS driver and related items.

With some increased logging from testpmd and a look at the ICE PMD source,
we identified that we are hitting `ICE_ERR_AQ_NO_WORK` in `ice_ddp.c ->
ice_acquire_global_cfg_lock:2232`, which indicates that another driver has
already written the package or has found that no update was necessary.  My
uneducated guess is that this happens as the OS starts up and then
transitions the device over to vfio-pci, which would also explain why the
initial firmware load issue was resolved after running dracut.  In the log
below, an update should still be necessary, as 1.3.30 is found on disk but
1.3.26 is the package actually loaded.



1822 pkg: /lib/firmware/updates/intel/ice/ddp/ice-40a6b78437f8.pkg
1829 pkg: /lib/firmware/intel/ice/ddp/ice-40a6b78437f8.pkg
1835 pkg: /lib/firmware/updates/intel/ice/ddp/ice.pkg
1847 pkg: /lib/firmware/updates/intel/ice/ddp/ice.pkg
ice_load_pkg(): DDP package name:
/lib/firmware/updates/intel/ice/ddp/ice.pkg
init enter
ice_find_seg_in_pkg(): ice 00.0 Package format version: 1.0.0.0
ice_init_pkg_info(): ice 00.0 Pkg using segment id: 0x0010
ice_find_seg_in_pkg(): ice 00.0 Package format version: 1.0.0.0
ice_init_pkg_info(): ice 00.0 Pkg: 1.3.30.0, ICE OS Default Package
ice_init_pkg_info(): ice 00.0 Ice Seg: 1.0.1.0, ICE Configuration Data
ice_find_seg_in_pkg(): ice 00.0 Package format version: 1.0.0.0
ice_clean_sq(): ice 00.0 ntc 10 head 11.
ice_sq_send_cmd_nolock(): ice 00.0 ATQ: Control Send queue desc and buffer:
ice_sq_send_cmd_nolock(): ice 00.0 ATQ: desc and buffer writeback:
dl w sig
ice_download_pkg_with_sig_seg(): ice 00.0 Segment ID 16
ice_download_pkg_with_sig_seg(): ice 00.0 Signature type 1
ice_acquire_res(): ice 00.0 ice_acquire_res
ice_aq_req_res(): ice 00.0 ice_aq_req_res
ice_clean_sq(): ice 00.0 ntc 11 head 12.
ice_sq_send_cmd_nolock(): ice 00.0 ATQ: Control Send queue desc and buffer:
ice_sq_send_cmd_nolock(): ice 00.0 ATQ: desc and buffer writeback:
ice_acquire_res(): ice 00.0 resource indicates no work to do.
ice_acquire_global_cfg_lock(): ice 00.0 Global config lock: No work to do
ice_clean_sq(): ice 00.0 ntc 12 head 13.
ice_sq_send_cmd_nolock(): ice 00.0 ATQ: Control Send queue desc and buffer:
ice_sq_send_cmd_nolock(): ice 00.0 ATQ: desc and buffer writeback:
ice_clean_sq(): ice 00.0 ntc 13 head 14.
ice_sq_send_cmd_nolock(): ice 00.0 ATQ: Control Send queue desc and buffer:
ice_sq_send_cmd_nolock(): ice 00.0 ATQ: desc and buffer writeback:
ice_clean_sq(): ice 00.0 ntc 14 head 15.
ice_sq_send_cmd_nolock(): ice 00.0 ATQ: Control Send queue desc and buffer:
ice_sq_send_cmd_nolock(): ice 00.0 ATQ: desc and buffer writeback:
ice_init_pkg(): ice 00.0 package previously loaded - no work.
ice_clean_sq(): ice 00.0 ntc 15 head 16.
ice_sq_send_cmd_nolock(): ice 00.0 ATQ: Control Send queue desc and buffer:
ice_sq_send_cmd_nolock(): ice 00.0 ATQ: desc and buffer writeback:
ice_get_pkg_info(): ice 00.0 Pkg[0]: 1.0.0.17,ICE NVM Package,BMN
ice_get_pkg_info(): ice 00.0 Pkg[1]: 1.3.26.0,ICE OS Default Package,A
init end
ice_load_pkg_type(): Active package is: 1.3.26.0, ICE OS Default Package
(single VLAN mode)

On Thu, Jan 19, 2023 at 9:14 AM Ben Magistro  wrote:

> Hello,
>
> We are still trying to track this down but want to reach out to the
> community and see if there is something obvious we are missing.  This
> happens to be a CentOS 7 based host with DPDK 22.11.1 in use.  We have
> obtained the latest compatible Intel drivers and DDP from
> https://sourceforge.net/projects/e1000/files/ice%20stable/ , at this time
> that is version 1.6.7 with DDP 1.3.26.0.  As best as we can tell, testpmd
> is still loading the OS default DDP 1.3.4.0.  The updated DDP package is
> located in /usr/lib/firmware/updates/intel/ice/ddp/ice.pkg.
>
> I've added some additional log lines to ice_ethdev.c and ice_ddp.c to try
> and start debugging.  I've attached what I believe is the relevant
> information below.  Quickly jumping out at me is that the pkg_name going
> into `load_fw` in ice_ethdev.c appears to be the desired path however in
> `ice_download_pkg` says it does not include a signature.  The hex dump from
> the desired file appears to show a signature which makes us think this file
> isn't actually being loaded/utilized.
>
> We will continue trying to debug this too but any additional assistance
> would be greatly appreciated.
>
> 
>
> EAL: Probe PCI driver: net_ice (8086:1593) device: :ca:00.2 (socket 1)
> 1822 pkg: /lib/firmware/updates/intel/ice/ddp/ice-40a6b7843810.pkg
> 1829 pk

Re: [PATCH v3 10/10] bus/vdev: check result of rte_vdev_device_name

2023-01-20 Thread Stephen Hemminger
On Thu, 19 Jan 2023 23:41:40 -0500
ok...@kernel.org wrote:

> diff --git a/lib/ethdev/ethdev_vdev.h b/lib/ethdev/ethdev_vdev.h
> index 364f140f91..6d94a65d97 100644
> --- a/lib/ethdev/ethdev_vdev.h
> +++ b/lib/ethdev/ethdev_vdev.h
> @@ -34,6 +34,8 @@ rte_eth_vdev_allocate(struct rte_vdev_device *dev, size_t 
> private_data_size)
>  {
>   struct rte_eth_dev *eth_dev;
>   const char *name = rte_vdev_device_name(dev);
> + if (name == NULL)
> + return NULL;


Please add a blank line after declarations and before code.
For some reason the DPDK version of checkpatch suppresses this warning.
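
For illustration, the requested layout is simply this (a tiny self-contained
sketch with hypothetical names, not the actual patch):

struct widget {
	const char *name;
};

static const char *
widget_name(const struct widget *w)
{
	return w->name;
}

static int
widget_check_name(const struct widget *w)
{
	const char *name = widget_name(w);

	/* note the blank line between the declaration above and this code */
	if (name == NULL)
		return -1;

	return 0;
}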


RE: [PATCH v6 1/2] ethdev: add query_update sync and async function calls

2023-01-20 Thread Gregory Etelson
Hello Andrew,

[]

> On 1/20/23 13:46, Gregory Etelson wrote:
> >>> and it's async version `rte_flow_async_action_handle_query_update`
> >>> to atomically query and update flow action.
> >>
> >> Sorry, may be I'm missing something, but after reading previous
> >> discussion I have a feeling that update could be queried data
> >> dependent. However, I don't understand how it could be possible
> >> with API below. Just my misunderstanding?
> >>
> >
> > The goal of `rte_flow_action_handle_query_update()` is to execute query
> and update actions atomically.
> > The function guaranties that action state will not change by any event
> before both update and query complete.
> > If the function was called with the ` RTE_FLOW_QU_QUERY_FIRST `
> `mode` argument, then update execution can depend
> > on query result. That's up to query format, PMD implementation and
> hardware capabilities.
> > I hope that answered your question.
> 
> Sorry, I'm still confused. Could you explain a bit more,
> please. How update could depend on query result?

The conditional update I described requires special action properties.
Consider an action object with a method that receives both a query and an
update as parameters.
The method applies the update only if the query result satisfies the action
state.
If the action was updated, both query and update were atomic from the
application's point of view.
The function returns the action state - updated or not.
The application is aware of the action's properties, and applies them to the
returned state to discover whether the action was updated.
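
For illustration only, the ordering/atomicity guarantee can be pictured with
this pseudo-C (none of the names below are real rte_flow symbols, and this is
not the proposed API):

#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-in for the PMD/HW-side state of an indirect action. */
struct action_state {
	uint64_t hits;
	bool armed;
};

/*
 * "Query first, then conditionally update", executed as one atomic step:
 * no other flow event can modify the action between the query and the update.
 */
static int
query_then_update(struct action_state *st, uint64_t *query_out, bool new_armed)
{
	*query_out = st->hits;          /* query the current state */
	if (st->armed) {                /* update depends on the queried state */
		st->armed = new_armed;
		return 1;               /* update took effect */
	}
	return 0;                       /* update skipped */
}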

> Caller already specified update structure...
 
Regards,
Gregory


[PATCH v2 0/8] start cleanup of rte_flow_item_*

2023-01-20 Thread Ferruh Yigit
There was a plan to have structures from lib/net/ at the beginning
of corresponding flow item structures.
Unfortunately this plan has not been followed up so far.
This series is a step to make the most used items compliant with
the inheritance design explained above.
The old API is kept in anonymous union for compatibility,
but the code in drivers and apps is updated to use the new API.
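
The compatibility pattern applied across the series looks roughly like the
sketch below (hypothetical names, not a copy of any single item; the real
code marks the anonymous members with RTE_STD_C11):

#include <rte_byteorder.h>

/* stand-in for a protocol header that would live in lib/net/ */
struct rte_example_hdr {
	rte_be16_t proto;
};

/* The old field name is kept in an anonymous struct that shares storage
 * with the full header, so existing users still compile while new code
 * accesses the same bytes through 'hdr'. */
struct rte_flow_item_example {
	union {
		struct {
			rte_be16_t proto; /* deprecated name */
		};
		struct rte_example_hdr hdr; /* preferred access path */
	};
};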


v2: (by Ferruh)
 * Rebased on latest next-net for v23.03
 * 'struct rte_gre_hdr' endianness annotation added to protocol field
 * more driver code updated for rte_flow_item_eth & rte_flow_item_vlan
 * 'struct rte_gre_hdr' updated to have a combined "rte_be16_t c_rsvd0_ver"
   field and updated drivers accordingly
 * more driver code updated for rte_flow_item_gre
 * more driver code updated for rte_flow_item_gtp


Cc: David Marchand 

Thomas Monjalon (8):
  ethdev: use Ethernet protocol struct for flow matching
  net: add smaller fields for VXLAN
  ethdev: use VXLAN protocol struct for flow matching
  ethdev: use GRE protocol struct for flow matching
  ethdev: use GTP protocol struct for flow matching
  ethdev: use ARP protocol struct for flow matching
  doc: fix description of L2TPV2 flow item
  net: mark all big endian types

For series,
Acked-by: Ferruh Yigit 

 app/test-flow-perf/actions_gen.c |   2 +-
 app/test-flow-perf/items_gen.c   |  24 +--
 app/test-pmd/cmdline_flow.c  | 180 +++
 doc/guides/prog_guide/rte_flow.rst   |  57 ++-
 doc/guides/rel_notes/deprecation.rst |   6 +-
 drivers/net/bnxt/bnxt_flow.c |  54 +++
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 112 +++---
 drivers/net/bonding/rte_eth_bond_pmd.c   |  12 +-
 drivers/net/cxgbe/cxgbe_flow.c   |  44 +++---
 drivers/net/dpaa2/dpaa2_flow.c   |  60 
 drivers/net/dpaa2/dpaa2_mux.c|   2 +-
 drivers/net/e1000/igb_flow.c |  14 +-
 drivers/net/enic/enic_flow.c |  24 +--
 drivers/net/enic/enic_fm_flow.c  |  16 +-
 drivers/net/hinic/hinic_pmd_flow.c   |  14 +-
 drivers/net/hns3/hns3_flow.c |  40 ++---
 drivers/net/i40e/i40e_fdir.c |  14 +-
 drivers/net/i40e/i40e_flow.c | 124 
 drivers/net/i40e/i40e_hash.c |   4 +-
 drivers/net/iavf/iavf_fdir.c |  18 +--
 drivers/net/iavf/iavf_fsub.c |  10 +-
 drivers/net/iavf/iavf_ipsec_crypto.c |   4 +-
 drivers/net/ice/ice_acl_filter.c |  20 +--
 drivers/net/ice/ice_fdir_filter.c|  24 +--
 drivers/net/ice/ice_switch_filter.c  |  64 
 drivers/net/igc/igc_flow.c   |   8 +-
 drivers/net/ipn3ke/ipn3ke_flow.c |  12 +-
 drivers/net/ixgbe/ixgbe_flow.c   |  58 
 drivers/net/mlx4/mlx4_flow.c |  38 ++---
 drivers/net/mlx5/hws/mlx5dr_definer.c|  48 +++---
 drivers/net/mlx5/mlx5_flow.c |  62 
 drivers/net/mlx5/mlx5_flow_dv.c  | 178 +++---
 drivers/net/mlx5/mlx5_flow_hw.c  |  80 +-
 drivers/net/mlx5/mlx5_flow_verbs.c   |  46 +++---
 drivers/net/mlx5/mlx5_trigger.c  |  28 ++--
 drivers/net/mvpp2/mrvl_flow.c|  28 ++--
 drivers/net/nfp/nfp_flow.c   |  21 +--
 drivers/net/sfc/sfc_flow.c   |  52 +++
 drivers/net/sfc/sfc_mae.c|  46 +++---
 drivers/net/tap/tap_flow.c   |  58 
 drivers/net/txgbe/txgbe_flow.c   |  28 ++--
 lib/ethdev/rte_flow.h| 117 ++-
 lib/net/rte_arp.h|  28 ++--
 lib/net/rte_gre.h|   7 +-
 lib/net/rte_higig.h  |   6 +-
 lib/net/rte_mpls.h   |   2 +-
 lib/net/rte_vxlan.h  |  35 -
 47 files changed, 982 insertions(+), 947 deletions(-)

--
2.25.1



[PATCH v2 2/8] net: add smaller fields for VXLAN

2023-01-20 Thread Ferruh Yigit
From: Thomas Monjalon 

The VXLAN and VXLAN-GPE headers were including reserved fields
with other fields in big uint32_t struct members.

Some more precise definitions are added as union of the old ones.

The new struct members are smaller in size and in names.

Signed-off-by: Thomas Monjalon 
Acked-by: Ferruh Yigit 
---
 lib/net/rte_vxlan.h | 35 +--
 1 file changed, 29 insertions(+), 6 deletions(-)

diff --git a/lib/net/rte_vxlan.h b/lib/net/rte_vxlan.h
index 929fa7a1dd01..997fc784fc84 100644
--- a/lib/net/rte_vxlan.h
+++ b/lib/net/rte_vxlan.h
@@ -30,9 +30,20 @@ extern "C" {
  * Contains the 8-bit flag, 24-bit VXLAN Network Identifier and
  * Reserved fields (24 bits and 8 bits)
  */
+__extension__ /* no named member in struct */
 struct rte_vxlan_hdr {
-   rte_be32_t vx_flags; /**< flag (8) + Reserved (24). */
-   rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+   union {
+   struct {
+   rte_be32_t vx_flags; /**< flags (8) + Reserved (24). */
+   rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+   };
+   struct {
+   uint8_tflags;/**< Should be 8 (I flag). */
+   uint8_trsvd0[3]; /**< Reserved. */
+   uint8_tvni[3];   /**< VXLAN identifier. */
+   uint8_trsvd1;/**< Reserved. */
+   };
+   };
 } __rte_packed;
 
 /** VXLAN tunnel header length. */
@@ -45,11 +56,23 @@ struct rte_vxlan_hdr {
  * Contains the 8-bit flag, 8-bit next-protocol, 24-bit VXLAN Network
  * Identifier and Reserved fields (16 bits and 8 bits).
  */
+__extension__ /* no named member in struct */
 struct rte_vxlan_gpe_hdr {
-   uint8_t vx_flags;/**< flag (8). */
-   uint8_t reserved[2]; /**< Reserved (16). */
-   uint8_t proto;   /**< next-protocol (8). */
-   rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+   union {
+   struct {
+   uint8_t vx_flags;/**< flag (8). */
+   uint8_t reserved[2]; /**< Reserved (16). */
+   uint8_t protocol;/**< next-protocol (8). */
+   rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+   };
+   struct {
+   uint8_t flags;/**< Flags. */
+   uint8_t rsvd0[2]; /**< Reserved. */
+   uint8_t proto;/**< Next protocol. */
+   uint8_t vni[3];   /**< VXLAN identifier. */
+   uint8_t rsvd1;/**< Reserved. */
+   };
+   };
 } __rte_packed;
 
 /** VXLAN-GPE tunnel header length. */
-- 
2.25.1
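
As a usage sketch (not part of the patch), the new byte-wise fields let a
VNI be written directly without assembling a 32-bit big-endian word; the
helper name below is hypothetical:

#include <stdint.h>
#include <rte_vxlan.h>

/* Fill a VXLAN header for the given 24-bit VNI using the new small fields. */
static void
vxlan_hdr_fill(struct rte_vxlan_hdr *vxlan, uint32_t vni)
{
	vxlan->flags = 0x08;                 /* I flag: VNI is valid */
	vxlan->rsvd0[0] = 0;
	vxlan->rsvd0[1] = 0;
	vxlan->rsvd0[2] = 0;
	vxlan->vni[0] = (vni >> 16) & 0xff;  /* network byte order: MSB first */
	vxlan->vni[1] = (vni >> 8) & 0xff;
	vxlan->vni[2] = vni & 0xff;
	vxlan->rsvd1 = 0;
}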



[PATCH v2 1/8] ethdev: use Ethernet protocol struct for flow matching

2023-01-20 Thread Ferruh Yigit
From: Thomas Monjalon 

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.
The Ethernet headers (including VLAN) structures are used
instead of the redundant fields in the flow items.

The remaining protocols to clean up are listed for future work
in the deprecation list.
Some protocols are not even defined in the directory net yet.

Signed-off-by: Thomas Monjalon 
Acked-by: Ferruh Yigit 
---
 app/test-flow-perf/items_gen.c   |   4 +-
 app/test-pmd/cmdline_flow.c  | 140 +++
 doc/guides/prog_guide/rte_flow.rst   |   7 +-
 doc/guides/rel_notes/deprecation.rst |   2 +
 drivers/net/bnxt/bnxt_flow.c |  42 +++
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c |  58 +-
 drivers/net/bonding/rte_eth_bond_pmd.c   |  12 +-
 drivers/net/cxgbe/cxgbe_flow.c   |  44 +++
 drivers/net/dpaa2/dpaa2_flow.c   |  48 
 drivers/net/dpaa2/dpaa2_mux.c|   2 +-
 drivers/net/e1000/igb_flow.c |  14 +--
 drivers/net/enic/enic_flow.c |  24 ++--
 drivers/net/enic/enic_fm_flow.c  |  16 +--
 drivers/net/hinic/hinic_pmd_flow.c   |  14 +--
 drivers/net/hns3/hns3_flow.c |  28 ++---
 drivers/net/i40e/i40e_flow.c | 100 
 drivers/net/i40e/i40e_hash.c |   4 +-
 drivers/net/iavf/iavf_fdir.c |  10 +-
 drivers/net/iavf/iavf_fsub.c |  10 +-
 drivers/net/iavf/iavf_ipsec_crypto.c |   4 +-
 drivers/net/ice/ice_acl_filter.c |  20 ++--
 drivers/net/ice/ice_fdir_filter.c|  14 +--
 drivers/net/ice/ice_switch_filter.c  |  34 +++---
 drivers/net/igc/igc_flow.c   |   8 +-
 drivers/net/ipn3ke/ipn3ke_flow.c |   8 +-
 drivers/net/ixgbe/ixgbe_flow.c   |  40 +++
 drivers/net/mlx4/mlx4_flow.c |  38 +++---
 drivers/net/mlx5/hws/mlx5dr_definer.c|  26 ++---
 drivers/net/mlx5/mlx5_flow.c |  24 ++--
 drivers/net/mlx5/mlx5_flow_dv.c  |  94 +++
 drivers/net/mlx5/mlx5_flow_hw.c  |  80 ++---
 drivers/net/mlx5/mlx5_flow_verbs.c   |  30 ++---
 drivers/net/mlx5/mlx5_trigger.c  |  28 ++---
 drivers/net/mvpp2/mrvl_flow.c|  28 ++---
 drivers/net/nfp/nfp_flow.c   |  12 +-
 drivers/net/sfc/sfc_flow.c   |  46 
 drivers/net/sfc/sfc_mae.c|  38 +++---
 drivers/net/tap/tap_flow.c   |  58 +-
 drivers/net/txgbe/txgbe_flow.c   |  28 ++---
 39 files changed, 618 insertions(+), 619 deletions(-)

diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index a73de9031f54..b7f51030a119 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -37,10 +37,10 @@ add_vlan(struct rte_flow_item *items,
__rte_unused struct additional_para para)
 {
static struct rte_flow_item_vlan vlan_spec = {
-   .tci = RTE_BE16(VLAN_VALUE),
+   .hdr.vlan_tci = RTE_BE16(VLAN_VALUE),
};
static struct rte_flow_item_vlan vlan_mask = {
-   .tci = RTE_BE16(0x),
+   .hdr.vlan_tci = RTE_BE16(0x),
};
 
items[items_counter].type = RTE_FLOW_ITEM_TYPE_VLAN;
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 88108498e0c3..694a7eb647c5 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -3633,19 +3633,19 @@ static const struct token token_list[] = {
.name = "dst",
.help = "destination MAC",
.next = NEXT(item_eth, NEXT_ENTRY(COMMON_MAC_ADDR), item_param),
-   .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, dst)),
+   .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, 
hdr.dst_addr)),
},
[ITEM_ETH_SRC] = {
.name = "src",
.help = "source MAC",
.next = NEXT(item_eth, NEXT_ENTRY(COMMON_MAC_ADDR), item_param),
-   .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, src)),
+   .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, 
hdr.src_addr)),
},
[ITEM_ETH_TYPE] = {
.name = "type",
.help = "EtherType",
.next = NEXT(item_eth, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
-   .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, type)),
+   .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, 
hdr.ether_type)),
},
[ITEM_ETH_HAS_VLAN] = {
.name = "has_vlan",
@@ -3666,7 +3666,7 @@ static const struct token token_list[] = {
.help = "tag control information",
.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 item_param),
-   .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow

[PATCH v2 3/8] ethdev: use VXLAN protocol struct for flow matching

2023-01-20 Thread Ferruh Yigit
From: Thomas Monjalon 

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.

In the case of VXLAN-GPE, the protocol struct is added
in an unnamed union, keeping old field names.

The VXLAN headers (including VXLAN-GPE) are used in apps and drivers
instead of the redundant fields in the flow items.

Signed-off-by: Thomas Monjalon 
Acked-by: Ferruh Yigit 
---
 app/test-flow-perf/actions_gen.c |  2 +-
 app/test-flow-perf/items_gen.c   | 12 +++
 app/test-pmd/cmdline_flow.c  | 10 +++---
 doc/guides/prog_guide/rte_flow.rst   | 11 ++-
 doc/guides/rel_notes/deprecation.rst |  1 -
 drivers/net/bnxt/bnxt_flow.c | 12 ---
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 42 
 drivers/net/hns3/hns3_flow.c | 12 +++
 drivers/net/i40e/i40e_flow.c |  4 +--
 drivers/net/ice/ice_switch_filter.c  | 18 +-
 drivers/net/ipn3ke/ipn3ke_flow.c |  4 +--
 drivers/net/ixgbe/ixgbe_flow.c   | 18 +-
 drivers/net/mlx5/mlx5_flow.c | 16 -
 drivers/net/mlx5/mlx5_flow_dv.c  | 40 +++---
 drivers/net/mlx5/mlx5_flow_verbs.c   |  8 ++---
 drivers/net/sfc/sfc_flow.c   |  6 ++--
 drivers/net/sfc/sfc_mae.c|  8 ++---
 lib/ethdev/rte_flow.h| 24 ++
 18 files changed, 126 insertions(+), 122 deletions(-)

diff --git a/app/test-flow-perf/actions_gen.c b/app/test-flow-perf/actions_gen.c
index 63f05d87fa86..f1d59313256d 100644
--- a/app/test-flow-perf/actions_gen.c
+++ b/app/test-flow-perf/actions_gen.c
@@ -874,7 +874,7 @@ add_vxlan_encap(struct rte_flow_action *actions,
items[2].type = RTE_FLOW_ITEM_TYPE_UDP;
 
 
-   item_vxlan.vni[2] = 1;
+   item_vxlan.hdr.vni[2] = 1;
items[3].spec = &item_vxlan;
items[3].mask = &item_vxlan;
items[3].type = RTE_FLOW_ITEM_TYPE_VXLAN;
diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index b7f51030a119..a58245239ba1 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -128,12 +128,12 @@ add_vxlan(struct rte_flow_item *items,
 
/* Set standard vxlan vni */
for (i = 0; i < 3; i++) {
-   vxlan_specs[ti].vni[2 - i] = vni_value >> (i * 8);
-   vxlan_masks[ti].vni[2 - i] = 0xff;
+   vxlan_specs[ti].hdr.vni[2 - i] = vni_value >> (i * 8);
+   vxlan_masks[ti].hdr.vni[2 - i] = 0xff;
}
 
/* Standard vxlan flags */
-   vxlan_specs[ti].flags = 0x8;
+   vxlan_specs[ti].hdr.flags = 0x8;
 
items[items_counter].type = RTE_FLOW_ITEM_TYPE_VXLAN;
items[items_counter].spec = &vxlan_specs[ti];
@@ -155,12 +155,12 @@ add_vxlan_gpe(struct rte_flow_item *items,
 
/* Set vxlan-gpe vni */
for (i = 0; i < 3; i++) {
-   vxlan_gpe_specs[ti].vni[2 - i] = vni_value >> (i * 8);
-   vxlan_gpe_masks[ti].vni[2 - i] = 0xff;
+   vxlan_gpe_specs[ti].hdr.vni[2 - i] = vni_value >> (i * 8);
+   vxlan_gpe_masks[ti].hdr.vni[2 - i] = 0xff;
}
 
/* vxlan-gpe flags */
-   vxlan_gpe_specs[ti].flags = 0x0c;
+   vxlan_gpe_specs[ti].hdr.flags = 0x0c;
 
items[items_counter].type = RTE_FLOW_ITEM_TYPE_VXLAN_GPE;
items[items_counter].spec = &vxlan_gpe_specs[ti];
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 694a7eb647c5..b904f8c3d45c 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -3984,7 +3984,7 @@ static const struct token token_list[] = {
.help = "VXLAN identifier",
.next = NEXT(item_vxlan, NEXT_ENTRY(COMMON_UNSIGNED),
 item_param),
-   .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan, vni)),
+   .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan, 
hdr.vni)),
},
[ITEM_VXLAN_LAST_RSVD] = {
.name = "last_rsvd",
@@ -3992,7 +3992,7 @@ static const struct token token_list[] = {
.next = NEXT(item_vxlan, NEXT_ENTRY(COMMON_UNSIGNED),
 item_param),
.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan,
-rsvd1)),
+hdr.rsvd1)),
},
[ITEM_E_TAG] = {
.name = "e_tag",
@@ -4210,7 +4210,7 @@ static const struct token token_list[] = {
.next = NEXT(item_vxlan_gpe, NEXT_ENTRY(COMMON_UNSIGNED),
 item_param),
.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan_gpe,
-vni)),
+hdr.vni)),
},
[ITEM_ARP_ETH_IPV4] = {
.name = "arp_e

[PATCH v2 4/8] ethdev: use GRE protocol struct for flow matching

2023-01-20 Thread Ferruh Yigit
From: Thomas Monjalon 

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.

The protocol struct is added in an unnamed union, keeping old field names.

The GRE header struct members are used in apps and drivers
instead of the redundant fields in the flow items.

Signed-off-by: Thomas Monjalon 
Acked-by: Ferruh Yigit 
---
 app/test-flow-perf/items_gen.c   |  4 ++--
 app/test-pmd/cmdline_flow.c  | 14 ++---
 doc/guides/prog_guide/rte_flow.rst   |  6 +-
 doc/guides/rel_notes/deprecation.rst |  1 -
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 12 +--
 drivers/net/dpaa2/dpaa2_flow.c   | 12 +--
 drivers/net/mlx5/hws/mlx5dr_definer.c|  8 
 drivers/net/mlx5/mlx5_flow.c | 22 +---
 drivers/net/mlx5/mlx5_flow_dv.c  | 26 +---
 drivers/net/mlx5/mlx5_flow_verbs.c   |  8 
 drivers/net/nfp/nfp_flow.c   |  9 
 lib/ethdev/rte_flow.h| 24 +++---
 lib/net/rte_gre.h|  5 +
 13 files changed, 81 insertions(+), 70 deletions(-)

diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index a58245239ba1..0f19e5e53648 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -173,10 +173,10 @@ add_gre(struct rte_flow_item *items,
__rte_unused struct additional_para para)
 {
static struct rte_flow_item_gre gre_spec = {
-   .protocol = RTE_BE16(RTE_ETHER_TYPE_TEB),
+   .hdr.proto = RTE_BE16(RTE_ETHER_TYPE_TEB),
};
static struct rte_flow_item_gre gre_mask = {
-   .protocol = RTE_BE16(0x),
+   .hdr.proto = RTE_BE16(0x),
};
 
items[items_counter].type = RTE_FLOW_ITEM_TYPE_GRE;
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index b904f8c3d45c..0e115956514c 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -4071,7 +4071,7 @@ static const struct token token_list[] = {
.next = NEXT(item_gre, NEXT_ENTRY(COMMON_UNSIGNED),
 item_param),
.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gre,
-protocol)),
+hdr.proto)),
},
[ITEM_GRE_C_RSVD0_VER] = {
.name = "c_rsvd0_ver",
@@ -4082,7 +4082,7 @@ static const struct token token_list[] = {
.next = NEXT(item_gre, NEXT_ENTRY(COMMON_UNSIGNED),
 item_param),
.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gre,
-c_rsvd0_ver)),
+hdr.c_rsvd0_ver)),
},
[ITEM_GRE_C_BIT] = {
.name = "c_bit",
@@ -4090,7 +4090,7 @@ static const struct token token_list[] = {
.next = NEXT(item_gre, NEXT_ENTRY(COMMON_BOOLEAN),
 item_param),
.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_gre,
- c_rsvd0_ver,
+ hdr.c_rsvd0_ver,
  "\x80\x00\x00\x00")),
},
[ITEM_GRE_S_BIT] = {
@@ -4098,7 +4098,7 @@ static const struct token token_list[] = {
.help = "sequence number bit (S)",
.next = NEXT(item_gre, NEXT_ENTRY(COMMON_BOOLEAN), item_param),
.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_gre,
- c_rsvd0_ver,
+ hdr.c_rsvd0_ver,
  "\x10\x00\x00\x00")),
},
[ITEM_GRE_K_BIT] = {
@@ -4106,7 +4106,7 @@ static const struct token token_list[] = {
.help = "key bit (K)",
.next = NEXT(item_gre, NEXT_ENTRY(COMMON_BOOLEAN), item_param),
.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_gre,
- c_rsvd0_ver,
+ hdr.c_rsvd0_ver,
  "\x20\x00\x00\x00")),
},
[ITEM_FUZZY] = {
@@ -7837,7 +7837,7 @@ parse_vc_action_mplsogre_encap(struct context *ctx, const 
struct token *token,
},
};
struct rte_flow_item_gre gre = {
-   .protocol = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
+   .hdr.proto = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
};
struct rte_flow_item_mpls mpls = {
.ttl = 0,
@@ -7935,7 +7935,7 @@ parse_vc_action_mplsogre_decap(struct context *ctx, const 
struct token *token,

[PATCH v2 7/8] doc: fix description of L2TPV2 flow item

2023-01-20 Thread Ferruh Yigit
From: Thomas Monjalon 

The flow item structure includes the protocol definition
from the directory lib/net/, so it is reflected in the guide.

Section title underlining is also fixed around.

Fixes: 3a929df1f286 ("ethdev: support L2TPv2 and PPP procotol")
Cc: sta...@dpdk.org

Signed-off-by: Thomas Monjalon 
Acked-by: Ferruh Yigit 
---
Cc: jie1x.w...@intel.com
---
 doc/guides/prog_guide/rte_flow.rst | 13 +++--
 1 file changed, 3 insertions(+), 10 deletions(-)

diff --git a/doc/guides/prog_guide/rte_flow.rst 
b/doc/guides/prog_guide/rte_flow.rst
index 8bf85df2f611..c01b53aad8ed 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1485,22 +1485,15 @@ rte_flow_flex_item_create() routine.
   value and mask.
 
 Item: ``L2TPV2``
-^^^
+
 
 Matches a L2TPv2 header.
 
-- ``flags_version``: flags(12b), version(4b).
-- ``length``: total length of the message.
-- ``tunnel_id``: identifier for the control connection.
-- ``session_id``: identifier for a session within a tunnel.
-- ``ns``: sequence number for this date or control message.
-- ``nr``: sequence number expected in the next control message to be received.
-- ``offset_size``: offset of payload data.
-- ``offset_padding``: offset padding, variable length.
+- ``hdr``:  header definition (``rte_l2tpv2.h``).
 - Default ``mask`` matches flags_version only.
 
 Item: ``PPP``
-^^^
+^
 
 Matches a PPP header.
 
-- 
2.25.1



[PATCH v2 6/8] ethdev: use ARP protocol struct for flow matching

2023-01-20 Thread Ferruh Yigit
From: Thomas Monjalon 

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.

The protocol struct is added in an unnamed union, keeping old field names.

The ARP header struct members are used in testpmd
instead of the redundant fields in the flow items.

Signed-off-by: Thomas Monjalon 
Acked-by: Ferruh Yigit 
---
 app/test-pmd/cmdline_flow.c  |  8 +++---
 doc/guides/prog_guide/rte_flow.rst   | 10 +---
 doc/guides/rel_notes/deprecation.rst |  1 -
 lib/ethdev/rte_flow.h| 37 ++--
 4 files changed, 29 insertions(+), 27 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index dd6da9d98d9b..1d337a96199d 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -4226,7 +4226,7 @@ static const struct token token_list[] = {
.next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_MAC_ADDR),
 item_param),
.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4,
-sha)),
+hdr.arp_data.arp_sha)),
},
[ITEM_ARP_ETH_IPV4_SPA] = {
.name = "spa",
@@ -4234,7 +4234,7 @@ static const struct token token_list[] = {
.next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_IPV4_ADDR),
 item_param),
.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4,
-spa)),
+hdr.arp_data.arp_sip)),
},
[ITEM_ARP_ETH_IPV4_THA] = {
.name = "tha",
@@ -4242,7 +4242,7 @@ static const struct token token_list[] = {
.next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_MAC_ADDR),
 item_param),
.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4,
-tha)),
+hdr.arp_data.arp_tha)),
},
[ITEM_ARP_ETH_IPV4_TPA] = {
.name = "tpa",
@@ -4250,7 +4250,7 @@ static const struct token token_list[] = {
.next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_IPV4_ADDR),
 item_param),
.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4,
-tpa)),
+hdr.arp_data.arp_tip)),
},
[ITEM_IPV6_EXT] = {
.name = "ipv6_ext",
diff --git a/doc/guides/prog_guide/rte_flow.rst 
b/doc/guides/prog_guide/rte_flow.rst
index ec2e335fac3d..8bf85df2f611 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1100,15 +1100,7 @@ Item: ``ARP_ETH_IPV4``
 
 Matches an ARP header for Ethernet/IPv4.
 
-- ``hdr``: hardware type, normally 1.
-- ``pro``: protocol type, normally 0x0800.
-- ``hln``: hardware address length, normally 6.
-- ``pln``: protocol address length, normally 4.
-- ``op``: opcode (1 for request, 2 for reply).
-- ``sha``: sender hardware address.
-- ``spa``: sender IPv4 address.
-- ``tha``: target hardware address.
-- ``tpa``: target IPv4 address.
+- ``hdr``:  header definition (``rte_arp.h``).
 - Default ``mask`` matches SHA, SPA, THA and TPA.
 
 Item: ``IPV6_EXT``
diff --git a/doc/guides/rel_notes/deprecation.rst 
b/doc/guides/rel_notes/deprecation.rst
index b89450b239ef..8e3683990117 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -64,7 +64,6 @@ Deprecation Notices
   These items are not compliant (not including struct from lib/net/):
 
   - ``rte_flow_item_ah``
-  - ``rte_flow_item_arp_eth_ipv4``
   - ``rte_flow_item_e_tag``
   - ``rte_flow_item_geneve``
   - ``rte_flow_item_geneve_opt``
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 85ca73d1dc04..a215daa83640 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -20,6 +20,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -1255,26 +1256,36 @@ static const struct rte_flow_item_vxlan_gpe 
rte_flow_item_vxlan_gpe_mask = {
  *
  * Matches an ARP header for Ethernet/IPv4.
  */
+RTE_STD_C11
 struct rte_flow_item_arp_eth_ipv4 {
-   rte_be16_t hrd; /**< Hardware type, normally 1. */
-   rte_be16_t pro; /**< Protocol type, normally 0x0800. */
-   uint8_t hln; /**< Hardware address length, normally 6. */
-   uint8_t pln; /**< Protocol address length, normally 4. */
-   rte_be16_t op; /**< Opcode (1 for request, 2 for reply). */
-   struct rte_ether_addr sha; /**< Sender hardware address. */
-   rte_be32_t spa; /**< Sender IPv4 address. */
-   struct rte_ether_addr tha; /**< Target hardware address. */
-   rte_be32_t tpa; /**< Target IPv4 address. */
+

[PATCH v2 5/8] ethdev: use GTP protocol struct for flow matching

2023-01-20 Thread Ferruh Yigit
From: Thomas Monjalon 

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.

The protocol struct is added in an unnamed union, keeping old field names.

The GTP header struct members are used in apps and drivers
instead of the redundant fields in the flow items.

Signed-off-by: Thomas Monjalon 
Acked-by: Ferruh Yigit 
---
 app/test-flow-perf/items_gen.c|  4 ++--
 app/test-pmd/cmdline_flow.c   |  8 +++
 doc/guides/prog_guide/rte_flow.rst| 10 ++---
 doc/guides/rel_notes/deprecation.rst  |  1 -
 drivers/net/i40e/i40e_fdir.c  | 14 ++--
 drivers/net/i40e/i40e_flow.c  | 20 -
 drivers/net/iavf/iavf_fdir.c  |  8 +++
 drivers/net/ice/ice_fdir_filter.c | 10 -
 drivers/net/ice/ice_switch_filter.c   | 12 +-
 drivers/net/mlx5/hws/mlx5dr_definer.c | 14 ++--
 drivers/net/mlx5/mlx5_flow_dv.c   | 18 +++
 lib/ethdev/rte_flow.h | 32 ++-
 12 files changed, 77 insertions(+), 74 deletions(-)

diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index 0f19e5e53648..55eb6f5cf009 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -213,10 +213,10 @@ add_gtp(struct rte_flow_item *items,
__rte_unused struct additional_para para)
 {
static struct rte_flow_item_gtp gtp_spec = {
-   .teid = RTE_BE32(TEID_VALUE),
+   .hdr.teid = RTE_BE32(TEID_VALUE),
};
static struct rte_flow_item_gtp gtp_mask = {
-   .teid = RTE_BE32(0x),
+   .hdr.teid = RTE_BE32(0x),
};
 
items[items_counter].type = RTE_FLOW_ITEM_TYPE_GTP;
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 0e115956514c..dd6da9d98d9b 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -4137,19 +4137,19 @@ static const struct token token_list[] = {
.help = "GTP flags",
.next = NEXT(item_gtp, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
.args = ARGS(ARGS_ENTRY(struct rte_flow_item_gtp,
-   v_pt_rsv_flags)),
+   hdr.gtp_hdr_info)),
},
[ITEM_GTP_MSG_TYPE] = {
.name = "msg_type",
.help = "GTP message type",
.next = NEXT(item_gtp, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
-   .args = ARGS(ARGS_ENTRY(struct rte_flow_item_gtp, msg_type)),
+   .args = ARGS(ARGS_ENTRY(struct rte_flow_item_gtp, 
hdr.msg_type)),
},
[ITEM_GTP_TEID] = {
.name = "teid",
.help = "tunnel endpoint identifier",
.next = NEXT(item_gtp, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
-   .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gtp, teid)),
+   .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gtp, 
hdr.teid)),
},
[ITEM_GTPC] = {
.name = "gtpc",
@@ -11224,7 +11224,7 @@ cmd_set_raw_parsed(const struct buffer *in)
goto error;
}
gtp = item->spec;
-   if ((gtp->v_pt_rsv_flags & 0x07) != 0x04) {
+   if (gtp->hdr.s == 1 || gtp->hdr.pn == 1) {
/* Only E flag should be set. */
fprintf(stderr,
"Error - GTP unsupported flags\n");
diff --git a/doc/guides/prog_guide/rte_flow.rst 
b/doc/guides/prog_guide/rte_flow.rst
index 603e1b866be3..ec2e335fac3d 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1064,12 +1064,7 @@ Note: GTP, GTPC and GTPU use the same structure. GTPC 
and GTPU item
 are defined for a user-friendly API when creating GTP-C and GTP-U
 flow rules.
 
-- ``v_pt_rsv_flags``: version (3b), protocol type (1b), reserved (1b),
-  extension header flag (1b), sequence number flag (1b), N-PDU number
-  flag (1b).
-- ``msg_type``: message type.
-- ``msg_len``: message length.
-- ``teid``: tunnel endpoint identifier.
+- ``hdr``:  header definition (``rte_gtp.h``).
 - Default ``mask`` matches teid only.
 
 Item: ``ESP``
@@ -1235,8 +1230,7 @@ Item: ``GTP_PSC``
 
 Matches a GTP PDU extension header with type 0x85.
 
-- ``pdu_type``: PDU type.
-- ``qfi``: QoS flow identifier.
+- ``hdr``:  header definition (``rte_gtp.h``).
 - Default ``mask`` matches QFI only.
 
 Item: ``PPPOES``, ``PPPOED``
diff --git a/doc/guides/rel_notes/deprecation.rst 
b/doc/guides/rel_notes/deprecation.rst
index 80bf7209065a..b89450b239ef 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -68,7 +68,6 @@ Deprecation Notices
   - ``rte_flow_item_e_tag``
   - ``rte_flow_item_gene

[PATCH v2 8/8] net: mark all big endian types

2023-01-20 Thread Ferruh Yigit
From: Thomas Monjalon 

Some protocols (ARP, MPLS and HIGIG2) were using uint16_t and uint32_t
types for their 16 and 32-bit fields.
It was correct but not conveying the big endian nature of these fields.

As for other protocols defined in this directory,
all types are explicitly marked as big endian fields.

Signed-off-by: Thomas Monjalon 
Acked-by: Ferruh Yigit 
---
 lib/net/rte_arp.h   | 28 ++--
 lib/net/rte_gre.h   |  2 +-
 lib/net/rte_higig.h |  6 +++---
 lib/net/rte_mpls.h  |  2 +-
 4 files changed, 19 insertions(+), 19 deletions(-)

diff --git a/lib/net/rte_arp.h b/lib/net/rte_arp.h
index 076c8ab314ee..151e6c641fc5 100644
--- a/lib/net/rte_arp.h
+++ b/lib/net/rte_arp.h
@@ -23,28 +23,28 @@ extern "C" {
  */
 struct rte_arp_ipv4 {
struct rte_ether_addr arp_sha;  /**< sender hardware address */
-   uint32_t  arp_sip;  /**< sender IP address */
+   rte_be32_tarp_sip;  /**< sender IP address */
struct rte_ether_addr arp_tha;  /**< target hardware address */
-   uint32_t  arp_tip;  /**< target IP address */
+   rte_be32_tarp_tip;  /**< target IP address */
 } __rte_packed __rte_aligned(2);
 
 /**
  * ARP header.
  */
 struct rte_arp_hdr {
-   uint16_t arp_hardware;/* format of hardware address */
-#define RTE_ARP_HRD_ETHER 1  /* ARP Ethernet address format */
+   rte_be16_t arp_hardware; /** format of hardware address */
+#define RTE_ARP_HRD_ETHER 1  /** ARP Ethernet address format */
 
-   uint16_t arp_protocol;/* format of protocol address */
-   uint8_t  arp_hlen;/* length of hardware address */
-   uint8_t  arp_plen;/* length of protocol address */
-   uint16_t arp_opcode; /* ARP opcode (command) */
-#defineRTE_ARP_OP_REQUEST1 /* request to resolve address */
-#defineRTE_ARP_OP_REPLY  2 /* response to previous request */
-#defineRTE_ARP_OP_REVREQUEST 3 /* request proto addr given hardware */
-#defineRTE_ARP_OP_REVREPLY   4 /* response giving protocol address */
-#defineRTE_ARP_OP_INVREQUEST 8 /* request to identify peer */
-#defineRTE_ARP_OP_INVREPLY   9 /* response identifying peer */
+   rte_be16_t arp_protocol; /** format of protocol address */
+   uint8_tarp_hlen; /** length of hardware address */
+   uint8_tarp_plen; /** length of protocol address */
+   rte_be16_t arp_opcode;   /** ARP opcode (command) */
+#defineRTE_ARP_OP_REQUEST1  /** request to resolve address */
+#defineRTE_ARP_OP_REPLY  2  /** response to previous request */
+#defineRTE_ARP_OP_REVREQUEST 3  /** request proto addr given hardware 
*/
+#defineRTE_ARP_OP_REVREPLY   4  /** response giving protocol address */
+#defineRTE_ARP_OP_INVREQUEST 8  /** request to identify peer */
+#defineRTE_ARP_OP_INVREPLY   9  /** response identifying peer */
 
struct rte_arp_ipv4 arp_data;
 } __rte_packed __rte_aligned(2);
diff --git a/lib/net/rte_gre.h b/lib/net/rte_gre.h
index 210b81c99018..6b1169c8b0c1 100644
--- a/lib/net/rte_gre.h
+++ b/lib/net/rte_gre.h
@@ -50,7 +50,7 @@ struct rte_gre_hdr {
};
rte_be16_t c_rsvd0_ver;
};
-   uint16_t proto;  /**< Protocol Type */
+   rte_be16_t proto;  /**< Protocol Type */
 } __rte_packed;
 
 /**
diff --git a/lib/net/rte_higig.h b/lib/net/rte_higig.h
index b55fb1a7db44..bba3898a883f 100644
--- a/lib/net/rte_higig.h
+++ b/lib/net/rte_higig.h
@@ -112,9 +112,9 @@ struct rte_higig2_ppt_type0 {
  */
 __extension__
 struct rte_higig2_ppt_type1 {
-   uint16_t classification;
-   uint16_t resv;
-   uint16_t vid;
+   rte_be16_t classification;
+   rte_be16_t resv;
+   rte_be16_t vid;
 #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
uint16_t opcode:3;
uint16_t resv1:2;
diff --git a/lib/net/rte_mpls.h b/lib/net/rte_mpls.h
index 3e8cb90ec383..51523e7a1188 100644
--- a/lib/net/rte_mpls.h
+++ b/lib/net/rte_mpls.h
@@ -23,7 +23,7 @@ extern "C" {
  */
 __extension__
 struct rte_mpls_hdr {
-   uint16_t tag_msb;   /**< Label(msb). */
+   rte_be16_t tag_msb; /**< Label(msb). */
 #if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
uint8_t tag_lsb:4;  /**< Label(lsb). */
uint8_t tc:3;   /**< Traffic class. */
-- 
2.25.1
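
As a brief usage sketch (not from the patch), the annotations make the
required byte-order conversions explicit when filling such a header; the
helper name is hypothetical:

#include <rte_arp.h>
#include <rte_byteorder.h>
#include <rte_ether.h>

/* Fill the fixed part of an ARP request; every rte_be16_t field gets an
 * explicit CPU-to-big-endian conversion. */
static void
arp_request_fill(struct rte_arp_hdr *arp)
{
	arp->arp_hardware = rte_cpu_to_be_16(RTE_ARP_HRD_ETHER);
	arp->arp_protocol = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
	arp->arp_hlen = RTE_ETHER_ADDR_LEN;
	arp->arp_plen = 4; /* IPv4 address length */
	arp->arp_opcode = rte_cpu_to_be_16(RTE_ARP_OP_REQUEST);
}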



Re: [PATCH 4/8] ethdev: use GRE protocol struct for flow matching

2023-01-20 Thread Ferruh Yigit
On 10/26/2022 9:45 AM, David Marchand wrote:
> On Tue, Oct 25, 2022 at 11:45 PM Thomas Monjalon  wrote:
>> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
>> index 6045a352ae..fd9be56e31 100644
>> --- a/lib/ethdev/rte_flow.h
>> +++ b/lib/ethdev/rte_flow.h
>> @@ -1069,19 +1069,29 @@ static const struct rte_flow_item_mpls 
>> rte_flow_item_mpls_mask = {
>>   *
>>   * Matches a GRE header.
>>   */
>> +RTE_STD_C11
>>  struct rte_flow_item_gre {
>> -   /**
>> -* Checksum (1b), reserved 0 (12b), version (3b).
>> -* Refer to RFC 2784.
>> -*/
>> -   rte_be16_t c_rsvd0_ver;
>> -   rte_be16_t protocol; /**< Protocol type. */
>> +   union {
>> +   struct {
>> +   /*
>> +* These are old fields kept for compatibility.
>> +* Please prefer hdr field below.
>> +*/
>> +   /**
>> +* Checksum (1b), reserved 0 (12b), version (3b).
>> +* Refer to RFC 2784.
>> +*/
>> +   rte_be16_t c_rsvd0_ver;
>> +   rte_be16_t protocol; /**< Protocol type. */
>> +   };
>> +   struct rte_gre_hdr hdr; /**< GRE header definition. */
>> +   };
>>  };
>>
>>  /** Default mask for RTE_FLOW_ITEM_TYPE_GRE. */
>>  #ifndef __cplusplus
>>  static const struct rte_flow_item_gre rte_flow_item_gre_mask = {
>> -   .protocol = RTE_BE16(0x),
>> +   .hdr.proto = RTE_BE16(UINT16_MAX),
> 
> 
> The proto field in struct rte_gre_hdr from lib/net lacks endianness 
> annotation.
> This triggers a sparse warning (from OVS dpdk-latest build):
> 
> /home/runner/work/ovs/ovs/dpdk-dir/build/include/rte_flow.h:1095:22:
> error: incorrect type in initializer (different base types)
> /home/runner/work/ovs/ovs/dpdk-dir/build/include/rte_flow.h:1095:22:
> expected unsigned short [usertype] proto
> /home/runner/work/ovs/ovs/dpdk-dir/build/include/rte_flow.h:1095:22:
> got restricted ovs_be16 [usertype]
> 
> 

Added the endianness annotation for the GRE 'proto' field in v2; can you
please check whether it fixes the warning?


[PATCH v4 1/3] eal/windows: move fnmatch function to header file

2023-01-20 Thread Bruce Richardson
To allow the fnmatch function to be shared between libraries, without
having to export it into the public namespace (since it's not prefixed
with "rte"), we can convert fnmatch.c to replace fnmatch.h. This allows
fnmatch function to be static and limited in scope to the current file,
preventing duplicate definitions if it is used by two libraries, while
also not requiring export for sharing.

Signed-off-by: Bruce Richardson 
Acked-by: Morten Brørup 
Acked-by: Tyler Retzlaff 
---
 lib/eal/windows/fnmatch.c | 172 -
 lib/eal/windows/include/fnmatch.h | 175 +++---
 lib/eal/windows/meson.build   |   1 -
 3 files changed, 162 insertions(+), 186 deletions(-)
 delete mode 100644 lib/eal/windows/fnmatch.c

diff --git a/lib/eal/windows/fnmatch.c b/lib/eal/windows/fnmatch.c
deleted file mode 100644
index f622bf54c5..00
--- a/lib/eal/windows/fnmatch.c
+++ /dev/null
@@ -1,172 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (c) 1989, 1993, 1994
- * The Regents of the University of California.  All rights reserved.
- *
- * This code is derived from software contributed to Berkeley by
- * Guido van Rossum.
- */
-
-#if defined(LIBC_SCCS) && !defined(lint)
-static const char sccsid[] = "@(#)fnmatch.c8.2 (Berkeley) 4/16/94";
-#endif /* LIBC_SCCS and not lint */
-
-/*
- * Function fnmatch() as specified in POSIX 1003.2-1992, section B.6.
- * Compares a filename or pathname to a pattern.
- */
-
-#include 
-#include 
-#include 
-
-#include "fnmatch.h"
-
-#define EOS'\0'
-
-static const char *rangematch(const char *, char, int);
-
-int
-fnmatch(const char *pattern, const char *string, int flags)
-{
-   const char *stringstart;
-   char c, test;
-
-   for (stringstart = string;;)
-   switch (c = *pattern++) {
-   case EOS:
-   if ((flags & FNM_LEADING_DIR) && *string == '/')
-   return (0);
-   return (*string == EOS ? 0 : FNM_NOMATCH);
-   case '?':
-   if (*string == EOS)
-   return (FNM_NOMATCH);
-   if (*string == '/' && (flags & FNM_PATHNAME))
-   return (FNM_NOMATCH);
-   if (*string == '.' && (flags & FNM_PERIOD) &&
-   (string == stringstart ||
-   ((flags & FNM_PATHNAME) && *(string - 1) == '/')))
-   return (FNM_NOMATCH);
-   ++string;
-   break;
-   case '*':
-   c = *pattern;
-   /* Collapse multiple stars. */
-   while (c == '*')
-   c = *++pattern;
-
-   if (*string == '.' && (flags & FNM_PERIOD) &&
-   (string == stringstart ||
-   ((flags & FNM_PATHNAME) && *(string - 1) == '/')))
-   return (FNM_NOMATCH);
-
-   /* Optimize for pattern with * at end or before /. */
-   if (c == EOS)
-   if (flags & FNM_PATHNAME)
-   return ((flags & FNM_LEADING_DIR) ||
-   strchr(string, '/') == NULL ?
-   0 : FNM_NOMATCH);
-   else
-   return (0);
-   else if (c == '/' && flags & FNM_PATHNAME) {
-   string = strchr(string, '/');
-   if (string == NULL)
-   return (FNM_NOMATCH);
-   break;
-   }
-
-   /* General case, use recursion. */
-   while ((test = *string) != EOS) {
-   if (!fnmatch(pattern, string,
-   flags & ~FNM_PERIOD))
-   return (0);
-   if (test == '/' && flags & FNM_PATHNAME)
-   break;
-   ++string;
-   }
-   return (FNM_NOMATCH);
-   case '[':
-   if (*string == EOS)
-   return (FNM_NOMATCH);
-   if (*string == '/' && flags & FNM_PATHNAME)
-   return (FNM_NOMATCH);
-   pattern = rangematch(pattern, *string, flags);
-   if (pattern == NULL)
-   return (FNM_NOMATCH);
-   ++string;
-   break;
-   case '\\':
-   if (!(flags & FNM_NOESCAPE)) {
-   c

[PATCH v4 0/3] Split logging functionality out of EAL

2023-01-20 Thread Bruce Richardson
There is a general desire to reduce the size and scope of EAL. To this
end, this patchset makes a (very) small step in that direction by taking
the logging functionality out of EAL and putting it into its own library
that can be built and maintained separately.

As with the first RFC for this, the main obstacle is the "fnmatch"
function, which is needed by both EAL and the new log library when
building on Windows. The function cannot stay in EAL - or we would
have a circular dependency - but moving it to a new library, or just
putting it in the log library, has the disadvantage that it then "leaks"
into the public namespace without an rte_ prefix, which could cause issues.
Since only a single function is involved, subsequent versions take a
different approach to v1 and just move the offending function to be a
static function in a header file. This allows use by multiple libraries
without conflicting names or making it public.
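
A minimal sketch of that approach (hypothetical names; the real header keeps
the full BSD-derived fnmatch implementation):

/* match_helper.h - shared between libraries without exporting a symbol */
#ifndef MATCH_HELPER_H
#define MATCH_HELPER_H

#include <string.h>

/* 'static inline' gives every including translation unit its own private
 * copy, so nothing is added to the public namespace or the ABI. */
static inline int
match_helper(const char *pattern, const char *string)
{
	if (strcmp(pattern, "*") == 0)
		return 0;
	return strcmp(pattern, string) == 0 ? 0 : 1;
}

#endif /* MATCH_HELPER_H */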

The other complication, as explained in the v1 RFC, was that of multiple
implementations for different OSes. This is solved here in the same
way as in v1, by including the OS in the name and having meson pick the
correct file for each build. Since only one file is involved, there
seemed little need for replicating EAL's separate subdirectories
per-OS.

v4:
* Fixed windows build error, due to missing strdup (_strdup on windows)
* Added doc updates to programmers guide.

v3:
* Fixed missing log file for BSD
* Removed "eal" from the filenames of files in the log directory
* added prefixes to elements in the fnmatch header to avoid conflicts
* fixed space indentation in new lines in telemetry.c (checkpatch)
* removed "extern int logtype" definition in telemetry.c (checkpatch)
* added log directory to list for doxygen scanning

Bruce Richardson (3):
  eal/windows: move fnmatch function to header file
  log: separate logging functions out of EAL
  telemetry: use standard logging

 doc/api/doxy-api.conf.in  |   1 +
 .../prog_guide/env_abstraction_layer.rst  |   4 +-
 doc/guides/prog_guide/index.rst   |   1 +
 doc/guides/prog_guide/log_lib.rst | 115 
 lib/eal/common/eal_common_options.c   |   2 +-
 lib/eal/common/eal_private.h  |   7 -
 lib/eal/common/meson.build|   1 -
 lib/eal/freebsd/eal.c |   6 +-
 lib/eal/include/meson.build   |   1 -
 lib/eal/linux/eal.c   |   8 +-
 lib/eal/linux/meson.build |   1 -
 lib/eal/meson.build   |   2 +-
 lib/eal/version.map   |  17 --
 lib/eal/windows/eal.c |   2 +-
 lib/eal/windows/fnmatch.c | 172 -
 lib/eal/windows/include/fnmatch.h | 175 --
 lib/eal/windows/meson.build   |   2 -
 lib/kvargs/meson.build|   3 +-
 .../common/eal_common_log.c => log/log.c} |   7 +-
 lib/log/log_freebsd.c |  12 ++
 .../common/eal_log.h => log/log_internal.h}   |  18 +-
 lib/{eal/linux/eal_log.c => log/log_linux.c}  |   2 +-
 .../windows/eal_log.c => log/log_windows.c}   |   2 +-
 lib/log/meson.build   |   9 +
 lib/{eal/include => log}/rte_log.h|   0
 lib/log/version.map   |  34 
 lib/meson.build   |   1 +
 lib/telemetry/meson.build |   3 +-
 lib/telemetry/telemetry.c |  11 +-
 lib/telemetry/telemetry_internal.h|   3 +-
 30 files changed, 370 insertions(+), 252 deletions(-)
 create mode 100644 doc/guides/prog_guide/log_lib.rst
 delete mode 100644 lib/eal/windows/fnmatch.c
 rename lib/{eal/common/eal_common_log.c => log/log.c} (99%)
 create mode 100644 lib/log/log_freebsd.c
 rename lib/{eal/common/eal_log.h => log/log_internal.h} (69%)
 rename lib/{eal/linux/eal_log.c => log/log_linux.c} (97%)
 rename lib/{eal/windows/eal_log.c => log/log_windows.c} (93%)
 create mode 100644 lib/log/meson.build
 rename lib/{eal/include => log}/rte_log.h (100%)
 create mode 100644 lib/log/version.map

--
2.37.2



[PATCH v4 2/3] log: separate logging functions out of EAL

2023-01-20 Thread Bruce Richardson
Move the logging capability to a separate library, free from EAL. Rename
files as appropriate, and use meson.build to select the correct file to
be built for each operating system, rather than having a subdir per-os.
Add new documentation section in programmers guide to cover logging in
more detail.

Signed-off-by: Bruce Richardson 
Acked-by: Morten Brørup 
Acked-by: Tyler Retzlaff 
---
 doc/api/doxy-api.conf.in  |   1 +
 .../prog_guide/env_abstraction_layer.rst  |   4 +-
 doc/guides/prog_guide/index.rst   |   1 +
 doc/guides/prog_guide/log_lib.rst | 115 ++
 lib/eal/common/eal_common_options.c   |   2 +-
 lib/eal/common/eal_private.h  |   7 --
 lib/eal/common/meson.build|   1 -
 lib/eal/include/meson.build   |   1 -
 lib/eal/linux/eal.c   |   2 +-
 lib/eal/linux/meson.build |   1 -
 lib/eal/meson.build   |   2 +-
 lib/eal/version.map   |  17 ---
 lib/eal/windows/eal.c |   2 +-
 lib/eal/windows/meson.build   |   1 -
 lib/kvargs/meson.build|   3 +-
 .../common/eal_common_log.c => log/log.c} |   7 +-
 lib/log/log_freebsd.c |  12 ++
 .../common/eal_log.h => log/log_internal.h}   |  18 ++-
 lib/{eal/linux/eal_log.c => log/log_linux.c}  |   2 +-
 .../windows/eal_log.c => log/log_windows.c}   |   2 +-
 lib/log/meson.build   |   9 ++
 lib/{eal/include => log}/rte_log.h|   0
 lib/log/version.map   |  34 ++
 lib/meson.build   |   1 +
 lib/telemetry/meson.build |   3 +-
 25 files changed, 202 insertions(+), 46 deletions(-)
 create mode 100644 doc/guides/prog_guide/log_lib.rst
 rename lib/{eal/common/eal_common_log.c => log/log.c} (99%)
 create mode 100644 lib/log/log_freebsd.c
 rename lib/{eal/common/eal_log.h => log/log_internal.h} (69%)
 rename lib/{eal/linux/eal_log.c => log/log_linux.c} (97%)
 rename lib/{eal/windows/eal_log.c => log/log_windows.c} (93%)
 create mode 100644 lib/log/meson.build
 rename lib/{eal/include => log}/rte_log.h (100%)
 create mode 100644 lib/log/version.map

diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index f0886c3bd1..442703c01a 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -51,6 +51,7 @@ INPUT   = @TOPDIR@/doc/api/doxy-api-index.md \
   @TOPDIR@/lib/kni \
   @TOPDIR@/lib/kvargs \
   @TOPDIR@/lib/latencystats \
+  @TOPDIR@/lib/log \
   @TOPDIR@/lib/lpm \
   @TOPDIR@/lib/mbuf \
   @TOPDIR@/lib/member \
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst 
b/doc/guides/prog_guide/env_abstraction_layer.rst
index 35fbebe1be..5fede6bc70 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -443,9 +443,7 @@ Per-lcore variables are implemented using *Thread Local 
Storage* (TLS) to provid
 Logs
 
 
-A logging API is provided by EAL.
-By default, in a Linux application, logs are sent to syslog and also to the 
console.
-However, the log function can be overridden by the user to use a different 
logging mechanism.
+While originally part of EAL, DPDK logging functionality is now provided by 
the :ref:`Log_Library`.
 
 Trace and Debug Functions
 ^
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 8564883018..6344c178c1 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -12,6 +12,7 @@ Programmer's Guide
 overview
 source_org
 env_abstraction_layer
+log_lib
 service_cores
 trace_lib
 rcu_lib
diff --git a/doc/guides/prog_guide/log_lib.rst 
b/doc/guides/prog_guide/log_lib.rst
new file mode 100644
index 00..4481dd0854
--- /dev/null
+++ b/doc/guides/prog_guide/log_lib.rst
@@ -0,0 +1,115 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+Copyright(c) 2023 Intel Corporation.
+
+.. _log_library:
+
+Log Library
+
+
+The DPDK Log library provides the logging functionality for other DPDK 
libraries and drivers.
+By default, in a Linux application, logs are sent to syslog and also to the 
console.
+On FreeBSD and Windows applications, logs are sent only to the console.
+However, the log function can be overridden by the user to use a different 
logging mechanism.
+
+Log Levels
+---
+
+Log messages from apps and libraries are reported with a given level of 
severity.
+These levels, specified in ``rte_log.h`` are (from most to least important):
+
+#. Emergency
+#. Alert
+#. Critical
+#. Error
+#. Warning
+#. Notice
+#. Information
+#. Debug
+
+At runtim

[PATCH v4 3/3] telemetry: use standard logging

2023-01-20 Thread Bruce Richardson
Now that logging is moved out of EAL, we don't need injection of the
logtype and logging function from EAL to telemetry library, simplifying
things.

Signed-off-by: Bruce Richardson 
Acked-by: Morten Brørup 
Acked-by: Tyler Retzlaff 
---
 lib/eal/freebsd/eal.c  |  6 +-
 lib/eal/linux/eal.c|  6 +-
 lib/telemetry/telemetry.c  | 11 +++
 lib/telemetry/telemetry_internal.h |  3 +--
 4 files changed, 6 insertions(+), 20 deletions(-)

diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index 607684c1a3..820c4524e5 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -871,13 +871,9 @@ rte_eal_init(int argc, char **argv)
return -1;
}
if (rte_eal_process_type() == RTE_PROC_PRIMARY && 
!internal_conf->no_telemetry) {
-   int tlog = rte_log_register_type_and_pick_level(
-   "lib.telemetry", RTE_LOG_WARNING);
-   if (tlog < 0)
-   tlog = RTE_LOGTYPE_EAL;
if (rte_telemetry_init(rte_eal_get_runtime_dir(),
rte_version(),
-   &internal_conf->ctrl_cpuset, rte_log, tlog) != 
0)
+   &internal_conf->ctrl_cpuset) != 0)
return -1;
}
 
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index 0df9f1f353..dec0041094 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -1319,13 +1319,9 @@ rte_eal_init(int argc, char **argv)
return -1;
}
if (rte_eal_process_type() == RTE_PROC_PRIMARY && 
!internal_conf->no_telemetry) {
-   int tlog = rte_log_register_type_and_pick_level(
-   "lib.telemetry", RTE_LOG_WARNING);
-   if (tlog < 0)
-   tlog = RTE_LOGTYPE_EAL;
if (rte_telemetry_init(rte_eal_get_runtime_dir(),
rte_version(),
-   &internal_conf->ctrl_cpuset, rte_log, tlog) != 
0)
+   &internal_conf->ctrl_cpuset) != 0)
return -1;
}
 
diff --git a/lib/telemetry/telemetry.c b/lib/telemetry/telemetry.c
index 8fbb4f3060..13a32f4279 100644
--- a/lib/telemetry/telemetry.c
+++ b/lib/telemetry/telemetry.c
@@ -54,11 +54,9 @@ static struct socket v1_socket; /* socket for v1 telemetry */
 static const char *telemetry_version; /* save rte_version */
 static const char *socket_dir;/* runtime directory */
 static rte_cpuset_t *thread_cpuset;
-static rte_log_fn rte_log_ptr;
-static uint32_t logtype;
 
-#define TMTY_LOG(l, ...) \
-rte_log_ptr(RTE_LOG_ ## l, logtype, "TELEMETRY: " __VA_ARGS__)
+RTE_LOG_REGISTER_DEFAULT(logtype, WARNING);
+#define TMTY_LOG(l, ...) rte_log(RTE_LOG_ ## l, logtype, "TELEMETRY: " 
__VA_ARGS__)
 
 /* list of command callbacks, with one command registered by default */
 static struct cmd_callback *callbacks;
@@ -612,14 +610,11 @@ telemetry_v2_init(void)
 #endif /* !RTE_EXEC_ENV_WINDOWS */
 
 int32_t
-rte_telemetry_init(const char *runtime_dir, const char *rte_version, 
rte_cpuset_t *cpuset,
-   rte_log_fn log_fn, uint32_t registered_logtype)
+rte_telemetry_init(const char *runtime_dir, const char *rte_version, 
rte_cpuset_t *cpuset)
 {
telemetry_version = rte_version;
socket_dir = runtime_dir;
thread_cpuset = cpuset;
-   rte_log_ptr = log_fn;
-   logtype = registered_logtype;
 
 #ifndef RTE_EXEC_ENV_WINDOWS
if (telemetry_v2_init() != 0)
diff --git a/lib/telemetry/telemetry_internal.h 
b/lib/telemetry/telemetry_internal.h
index d085c492dc..5c75d73183 100644
--- a/lib/telemetry/telemetry_internal.h
+++ b/lib/telemetry/telemetry_internal.h
@@ -109,7 +109,6 @@ typedef int (*rte_log_fn)(uint32_t level, uint32_t logtype, 
const char *format,
  */
 __rte_internal
 int
-rte_telemetry_init(const char *runtime_dir, const char *rte_version, 
rte_cpuset_t *cpuset,
-   rte_log_fn log_fn, uint32_t registered_logtype);
+rte_telemetry_init(const char *runtime_dir, const char *rte_version, 
rte_cpuset_t *cpuset);
 
 #endif
-- 
2.37.2



Re: [PATCH v6 1/4] lib: add generic support for reading PMU events

2023-01-20 Thread Tyler Retzlaff
On Fri, Jan 20, 2023 at 12:39:12AM +0100, Tomasz Duszynski wrote:
> Add support for programming PMU counters and reading their values
> in runtime bypassing kernel completely.
> 
> This is especially useful in cases where CPU cores are isolated
> (nohz_full) i.e run dedicated tasks. In such cases one cannot use
> standard perf utility without sacrificing latency and performance.
> 
> Signed-off-by: Tomasz Duszynski 
> ---
>  MAINTAINERS|   5 +
>  app/test/meson.build   |   4 +
>  app/test/test_pmu.c|  42 +++
>  doc/api/doxy-api-index.md  |   3 +-
>  doc/api/doxy-api.conf.in   |   1 +
>  doc/guides/prog_guide/profile_app.rst  |   8 +
>  doc/guides/rel_notes/release_23_03.rst |   7 +
>  lib/meson.build|   1 +
>  lib/pmu/meson.build|  13 +
>  lib/pmu/pmu_private.h  |  29 ++
>  lib/pmu/rte_pmu.c  | 436 +
>  lib/pmu/rte_pmu.h  | 206 
>  lib/pmu/version.map|  19 ++
>  13 files changed, 773 insertions(+), 1 deletion(-)
>  create mode 100644 app/test/test_pmu.c
>  create mode 100644 lib/pmu/meson.build
>  create mode 100644 lib/pmu/pmu_private.h
>  create mode 100644 lib/pmu/rte_pmu.c
>  create mode 100644 lib/pmu/rte_pmu.h
>  create mode 100644 lib/pmu/version.map
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 9a0f416d2e..9f13eafd95 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -1697,6 +1697,11 @@ M: Nithin Dabilpuram 
>  M: Pavan Nikhilesh 
>  F: lib/node/
>  
> +PMU - EXPERIMENTAL
> +M: Tomasz Duszynski 
> +F: lib/pmu/
> +F: app/test/test_pmu*
> +
>  
>  Test Applications
>  -
> diff --git a/app/test/meson.build b/app/test/meson.build
> index f34d19e3c3..b2c2a618b1 100644
> --- a/app/test/meson.build
> +++ b/app/test/meson.build
> @@ -360,6 +360,10 @@ if dpdk_conf.has('RTE_LIB_METRICS')
>  test_sources += ['test_metrics.c']
>  fast_tests += [['metrics_autotest', true, true]]
>  endif
> +if is_linux
> +test_sources += ['test_pmu.c']
> +fast_tests += [['pmu_autotest', true, true]]
> +endif

traditionally we don't conditionally include tests at the meson.build
level; instead we run all tests and have them skip when executed in
unsupported exec environments.

you can take a look at test_eventdev.c as an example of a test that is
skipped on windows; i'm sure it could be adapted to skip on freebsd if
you aren't supporting it.
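
something along these lines would keep the test registered everywhere and
just skip at runtime (only a sketch, names are illustrative and not taken
from your patch):

#include <stdio.h>
#include "test.h"

static int
test_pmu(void)
{
#ifndef RTE_EXEC_ENV_LINUX
        printf("pmu_autotest is only supported on linux, skipping\n");
        return TEST_SKIPPED;
#else
        /* real pmu test body goes here */
        return TEST_SUCCESS;
#endif
}

REGISTER_TEST_COMMAND(pmu_autotest, test_pmu);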



[PATCH v1 0/1] DPDK Coverity issue 381631, 381646

2023-01-20 Thread Hernan Vargas
Upstream fix for 23.03 addressing a potential issue of dereferencing a pointer
before a null check.

Hernan Vargas (1):
  baseband/acc: fix check after deref and dead code

 drivers/baseband/acc/rte_acc100_pmd.c | 7 ++-
 1 file changed, 2 insertions(+), 5 deletions(-)

-- 
2.37.1



[PATCH v1 1/1] baseband/acc: fix check after deref and dead code

2023-01-20 Thread Hernan Vargas
Fix a potential issue of dereferencing a pointer before a null check.
Remove a null check on a value that can never be null.

Coverity issue: 381646, 381631
Fixes: 989dec301a9 ("baseband/acc100: add ring companion address")

Signed-off-by: Hernan Vargas 
---
 drivers/baseband/acc/rte_acc100_pmd.c | 7 ++-
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/drivers/baseband/acc/rte_acc100_pmd.c 
b/drivers/baseband/acc/rte_acc100_pmd.c
index 0992cb5d1e..a600511f4b 100644
--- a/drivers/baseband/acc/rte_acc100_pmd.c
+++ b/drivers/baseband/acc/rte_acc100_pmd.c
@@ -4106,12 +4106,9 @@ acc100_dequeue_ldpc_enc(struct rte_bbdev_queue_data 
*q_data,
int ret, cbm;
struct rte_bbdev_enc_op *op;
 
-   if (q == NULL)
-   return 0;
-#ifdef RTE_LIBRTE_BBDEV_DEBUG
-   if (unlikely(ops == 0))
+   if (avail == 0)
return 0;
-#endif
+
op = acc_op_tail(q, 0);
if (unlikely(ops == NULL || op == NULL))
return 0;
-- 
2.37.1



Re: [PATCH v6 0/2] eal: provide leading and trailing zero bit count abstraction

2023-01-20 Thread Tyler Retzlaff
hi folks,

i think this one can probably be merged?

Series-acked-by: Morten Brørup 
Series-acked-by: Bruce Richardson 
patch 1/2 Acked-by: Ferruh Yigit 

thanks!

On Tue, Jan 10, 2023 at 11:46:39AM -0800, Tyler Retzlaff wrote:
> v6:
>   * remove stray #include 
> 
> v5:
>   * fix implementation of msvc versions of rte_clz{32,64}
> incorrect use of _BitscanReverse{,64} index.
>   * fix and expand unit test to exercise full range of counting
> over uint{32,64}_t input values. (which would have caught
> above mistake).
>   * reduce commit title length
>   * correct commit author
> 
> v4:
>   * combine unit test commit into function addition commit
> 
> v3:
>   * rename to use 32/64 instead of l/ll suffixes
>   * add new functions to rte_bitops.h instead of new header
>   * move other bit functions from rte_common.h to rte_bitops.h
> 
> v2:
>   * use unsigned int instead of unsigned (checkpatches)
>   * match multiple include guard naming convention to rte_common.h
>   * add explicit extern "C" linkage to rte_bitcount.h
> note: not really needed but checkpatches required
>   * add missing space around '-'
> 
> Tyler Retzlaff (2):
>   eal: move bit operation common to bitops header
>   eal: provide leading and trailing zero bit count abstraction
> 
>  app/test/meson.build|   2 +
>  app/test/test_bitcount.c|  93 
>  app/test/test_common.c  |   1 +
>  lib/eal/common/rte_reciprocal.c |   1 +
>  lib/eal/include/rte_bitops.h| 460 
> 
>  lib/eal/include/rte_common.h| 293 -
>  6 files changed, 557 insertions(+), 293 deletions(-)
>  create mode 100644 app/test/test_bitcount.c
> 
> -- 
> 
> Series-acked-by: Morten Brørup 
> Series-acked-by: Bruce Richardson 


RE: [RFC] ethdev: add template table insertion and matching types

2023-01-20 Thread Alexander Kozyrev
> 14/12/2022 03:21, Alexander Kozyrev:
> > Bring more flexibility and control over both flow rule insertion
> > and packet matching mechanisms. Introduce 2 new flow table types:
> >
> > 1. Allow a user to specify the insertion type used in template tables.
> > The insertion type is responsible for choosing the appropriate key
> > value used to map inserted flow rules into a template table.
> >
> > Flow rules can be inserted by calculating the hash value for
> > the pattern or inserted by index via the new create_by_index() API.
> > The idea of the index-based insertion is to avoid additional matches
> > and simply execute predefined actions after jumping to the index.
> 
> I don't understand the idea.
> Why is it avoiding additional matches?

If you jump directly to an index in a table, you don't need to match anything.
For the regular pattern-based table you must calculate the hash and match on it.

> > The insertion to an already existing index is undefined and
> 
> Why is it undefined?

The behavior is up to a PMD to decide. It is free to replace the rule or throw 
an error.

> > depends on PMD implementation. An old rule must be destroyed first.
> > The index cannot be bigger than the size of the table.
> >
> > 2. Allow a user to specify the hash calculation function used in template
> > tables. The hash calculation type is responsible for the calculation of
> > the flow rule index a packet would hit upon arrival at the table.
> >
> > Control over this is useful for applications with custom RSS algorithms,
> > for example. An application can select various packet fields to serve as
> > a hash calculation source and jump to the appropriate flow rule location.
> > The RSS hash result will be used as the index in the table. For the linear
> > hash function, the mapping is one-to-one and the hash result is the index.
> > For other hash functions, the index is the hash result module table size.
> > The RSS hash result can be retrieved via modify_field API: HASH_RESULT.
> >
> > Signed-off-by: Alexander Kozyrev 
> > ---
> >  doc/guides/prog_guide/rte_flow.rst | 20 +++
> >  lib/ethdev/rte_flow.c  | 24 
> >  lib/ethdev/rte_flow.h  | 96 ++
> >  lib/ethdev/rte_flow_driver.h   | 11 
> >  lib/ethdev/version.map |  3 +
> >  5 files changed, 154 insertions(+)
> 
> Is there a PMD implementation available on the mailing list?
> This is required to accept a new API.

This is an RFC. The PMD implementation will follow shortly in the v1 patches.



[PATCH 0/4] ethdev: add template table insertion and matching types

2023-01-20 Thread Alexander Kozyrev
Bring more flexibility and control over both flow rule insertion
and packet matching mechanisms. Introduce 2 new flow table types:

1. Allow a user to specify the insertion type used in template tables.
The insertion type is responsible for choosing the appropriate key
value used to map inserted flow rules into a template table.

Flow rules can be inserted by calculating the hash value for
the pattern or inserted by index via the new create_by_index() API.
The idea of the index-based insertion is to avoid additional matches
and simply execute predefined actions after jumping to the index.

The insertion to an already existing index is undefined and
depends on PMD implementation. An old rule must be destroyed first.
The index cannot be bigger than the size of the table.

2. Allow a user to specify the hash calculation function used in template
tables. The hash calculation type is responsible for the calculation of
the flow rule index a packet would hit upon arrival at the table.

Control over this is useful for applications with custom RSS algorithms,
for example. An application can select various packet fields to serve as
a hash calculation source and jump to the appropriate flow rule location.
The RSS hash result will be used as the index in the table. For the linear
hash function, the mapping is one-to-one and the hash result is the index.
For other hash functions, the index is the hash result modulo the table size.
The RSS hash result can be retrieved via modify_field API: HASH_RESULT.
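
For illustration, usage of the two new table attributes together with
rule insertion by index could look roughly like this (a sketch only;
the pattern/actions templates, op_attr, actions, port_id, queue_id and
error variables are assumed to be set up as usual):

struct rte_flow_template_table_attr attr = {
        .flow_attr = { .group = 1, .ingress = 1 },
        .nb_flows = 1024,
        .insertion_type = RTE_FLOW_TABLE_INSERTION_TYPE_INDEX,
        .hash_func = RTE_FLOW_TABLE_HASH_FUNC_LINEAR,
};
struct rte_flow_template_table *tbl;
struct rte_flow *flow;

tbl = rte_flow_template_table_create(port_id, &attr,
                                     pattern_templates, 1,
                                     actions_templates, 1, &error);

/* no pattern matching here: the rule is placed directly at index 5 */
flow = rte_flow_async_create_by_index(port_id, queue_id, &op_attr,
                                      tbl, 5, actions, 0, NULL, &error);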

RFC: 
https://patchwork.dpdk.org/project/dpdk/patch/20221214022110.393410-1-akozy...@nvidia.com/
PMD implementation will follow shortly.

Alexander Kozyrev (4):
  ethdev: add template table insertion type
  ethdev: add template table hash calculation function
  app/testpmd: add template table insertion type
  app/testpmd: add template table hash calculation function

 app/test-pmd/cmdline_flow.c| 166 +++--
 app/test-pmd/config.c  |  10 +-
 app/test-pmd/testpmd.h |   2 +-
 doc/guides/prog_guide/rte_flow.rst |  20 +++
 doc/guides/rel_notes/release_23_03.rst |  11 ++
 lib/ethdev/rte_flow.c  |  24 
 lib/ethdev/rte_flow.h  |  96 ++
 lib/ethdev/rte_flow_driver.h   |  11 ++
 lib/ethdev/version.map |   3 +
 9 files changed, 332 insertions(+), 11 deletions(-)

-- 
2.18.2



[PATCH 1/4] ethdev: add template table insertion type

2023-01-20 Thread Alexander Kozyrev
Allow a user to specify the insertion type used in template tables.
The insertion type is responsible for choosing the appropriate key
value used to map inserted flow rules into a template table.

Flow rules can be inserted by calculating the hash value for
the pattern or inserted by index via the new create_by_index() API.
The idea of the index-based insertion is to avoid additional matches
and simply execute predefined actions after jumping to the index.

The insertion to an already existing index is undefined and
depends on the PMD implementation. An old rule must be destroyed first.
The index cannot be bigger than the size of the table.
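
To make the destroy-first rule above concrete, replacing a rule at an
occupied index could look roughly like this (illustrative sketch only;
the table, queue, op_attr and action variables are assumed to exist):

/* free index 7 first, then reuse it */
rte_flow_async_destroy(port_id, queue_id, &op_attr, old_flow, NULL, &error);
rte_flow_push(port_id, queue_id, &error);
/* ... wait for the destroy completion via rte_flow_pull() ... */
new_flow = rte_flow_async_create_by_index(port_id, queue_id, &op_attr,
                                          tbl, 7, actions, 0, NULL, &error);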

Signed-off-by: Alexander Kozyrev 
---
 doc/guides/prog_guide/rte_flow.rst | 20 
 doc/guides/rel_notes/release_23_03.rst |  7 +++
 lib/ethdev/rte_flow.c  | 24 ++
 lib/ethdev/rte_flow.h  | 66 ++
 lib/ethdev/rte_flow_driver.h   | 11 +
 lib/ethdev/version.map |  3 ++
 6 files changed, 131 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst 
b/doc/guides/prog_guide/rte_flow.rst
index 3e6242803d..7d05acb2db 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3669,6 +3669,26 @@ Enqueueing a flow rule creation operation is similar to 
simple creation.
 A valid handle in case of success is returned. It must be destroyed later
 by calling ``rte_flow_async_destroy()`` even if the rule is rejected by HW.
 
+Enqueue creation by index operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Enqueueing a flow rule creation operation to insert the rule at a table index.
+
+.. code-block:: c
+
+   struct rte_flow *
+   rte_flow_async_create_by_index(uint16_t port_id,
+ uint32_t queue_id,
+ const struct rte_flow_op_attr *op_attr,
+ struct rte_flow_template_table *template_table,
+ uint32_t rule_index,
+ const struct rte_flow_action actions[],
+ uint8_t actions_template_index,
+ void *user_data,
+ struct rte_flow_error *error);
+
+A valid handle in case of success is returned. It must be destroyed later
+by calling ``rte_flow_async_destroy()`` even if the rule is rejected by HW.
+
 Enqueue destruction operation
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/doc/guides/rel_notes/release_23_03.rst 
b/doc/guides/rel_notes/release_23_03.rst
index b8c5b68d6c..13dd368f2e 100644
--- a/doc/guides/rel_notes/release_23_03.rst
+++ b/doc/guides/rel_notes/release_23_03.rst
@@ -56,6 +56,13 @@ New Features
  ===
 
 
+* **Added index-based rules insertion in flow API.**
+
+  Added ``rte_flow_table_insertion_type`` to allow the creation
+  of index-based template tables in addition to pattern-based tables.
+  Introduced new function ``rte_flow_async_create_by_index()``
+  to insert rules by index into index-based template tables.
+
 Removed Items
 -------------
 
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 7d0c24366c..013eb355ca 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1765,6 +1765,30 @@ rte_flow_async_create(uint16_t port_id,
return flow;
 }
 
+struct rte_flow *
+rte_flow_async_create_by_index(uint16_t port_id,
+  uint32_t queue_id,
+  const struct rte_flow_op_attr *op_attr,
+  struct rte_flow_template_table *template_table,
+  uint32_t rule_index,
+  const struct rte_flow_action actions[],
+  uint8_t actions_template_index,
+  void *user_data,
+  struct rte_flow_error *error)
+{
+   struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+   const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+   struct rte_flow *flow;
+
+   flow = ops->async_create_by_index(dev, queue_id,
+ op_attr, template_table, rule_index,
+ actions, actions_template_index,
+ user_data, error);
+   if (flow == NULL)
+   flow_err(port_id, -rte_errno, error);
+   return flow;
+}
+
 int
 rte_flow_async_destroy(uint16_t port_id,
   uint32_t queue_id,
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index b60987db4b..88d0c6a724 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -5187,6 +5187,23 @@ rte_flow_actions_template_destroy(uint16_t port_id,
  */
 struct rte_flow_template_table;
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Template table flow rules insertion type.
+ */
+enum rte_flow_table_insertion_type {
+   /**
+* Pattern-based insertion.

[PATCH 3/4] app/testpmd: add template table insertion type

2023-01-20 Thread Alexander Kozyrev
Add testpmd CLI interface for specifying a template table insertion type.
Available types are: pattern and index.
flow template_table 0 create table_id 0 insertion_type index ...
Allow specifying the rule index instead of the pattern template index:
flow queue 0 create 0 template_table 0 rule_index 5 actions_template 0 ...

Signed-off-by: Alexander Kozyrev 
---
 app/test-pmd/cmdline_flow.c | 97 ++---
 app/test-pmd/config.c   | 10 ++--
 app/test-pmd/testpmd.h  |  2 +-
 3 files changed, 99 insertions(+), 10 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 88108498e0..f1d6813baa 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -133,12 +133,11 @@ enum index {
QUEUE_INDIRECT_ACTION,
 
/* Queue create arguments. */
-   QUEUE_CREATE_ID,
QUEUE_CREATE_POSTPONE,
QUEUE_TEMPLATE_TABLE,
QUEUE_PATTERN_TEMPLATE,
QUEUE_ACTIONS_TEMPLATE,
-   QUEUE_SPEC,
+   QUEUE_RULE_ID,
 
/* Queue destroy arguments. */
QUEUE_DESTROY_ID,
@@ -179,6 +178,8 @@ enum index {
TABLE_DESTROY,
TABLE_CREATE_ID,
TABLE_DESTROY_ID,
+   TABLE_INSERTION_TYPE,
+   TABLE_INSERTION_TYPE_NAME,
TABLE_GROUP,
TABLE_PRIORITY,
TABLE_INGRESS,
@@ -814,6 +815,12 @@ static const char *const meter_colors[] = {
"green", "yellow", "red", "all", NULL
 };
 
+static const char *const table_insertion_types[] = {
+   "pattern", "index", NULL
+};
+
+#define RAW_IPSEC_CONFS_MAX_NUM 8
+
 /** Maximum number of subsequent tokens and arguments on the stack. */
 #define CTX_STACK_SIZE 16
 
@@ -1015,6 +1022,7 @@ struct buffer {
struct {
uint32_t table_id;
uint32_t pat_templ_id;
+   uint32_t rule_id;
uint32_t act_templ_id;
struct rte_flow_attr attr;
struct tunnel_ops tunnel_ops;
@@ -1154,6 +1162,7 @@ static const enum index next_table_subcmd[] = {
 static const enum index next_table_attr[] = {
TABLE_CREATE_ID,
TABLE_GROUP,
+   TABLE_INSERTION_TYPE,
TABLE_PRIORITY,
TABLE_INGRESS,
TABLE_EGRESS,
@@ -1287,6 +1296,12 @@ static const enum index next_ia_destroy_attr[] = {
ZERO,
 };
 
+static const enum index next_async_insert_subcmd[] = {
+   QUEUE_PATTERN_TEMPLATE,
+   QUEUE_RULE_ID,
+   ZERO,
+};
+
 static const enum index item_param[] = {
ITEM_PARAM_IS,
ITEM_PARAM_SPEC,
@@ -2399,6 +2414,9 @@ static int parse_meter_policy_id2ptr(struct context *ctx,
 static int parse_meter_color(struct context *ctx, const struct token *token,
 const char *str, unsigned int len, void *buf,
 unsigned int size);
+static int parse_insertion_table_type(struct context *ctx, const struct token 
*token,
+ const char *str, unsigned int len, void 
*buf,
+ unsigned int size);
 static int comp_none(struct context *, const struct token *,
 unsigned int, char *, unsigned int);
 static int comp_boolean(struct context *, const struct token *,
@@ -2431,6 +2449,8 @@ static int comp_queue_id(struct context *, const struct 
token *,
 unsigned int, char *, unsigned int);
 static int comp_meter_color(struct context *, const struct token *,
unsigned int, char *, unsigned int);
+static int comp_insertion_table_type(struct context *, const struct token *,
+unsigned int, char *, unsigned int);
 
 /** Token definitions. */
 static const struct token token_list[] = {
@@ -2901,6 +2921,20 @@ static const struct token token_list[] = {
args.table_destroy.table_id)),
.call = parse_table_destroy,
},
+   [TABLE_INSERTION_TYPE] = {
+   .name = "insertion_type",
+   .help = "specify insertion type",
+   .next = NEXT(next_table_attr,
+NEXT_ENTRY(TABLE_INSERTION_TYPE_NAME)),
+   .args = ARGS(ARGS_ENTRY(struct buffer,
+   args.table.attr.insertion_type)),
+   },
+   [TABLE_INSERTION_TYPE_NAME] = {
+   .name = "insertion_type_name",
+   .help = "insertion type name",
+   .call = parse_insertion_table_type,
+   .comp = comp_insertion_table_type,
+   },
[TABLE_GROUP] = {
.name = "group",
.help = "specify a group",
@@ -3002,7 +3036,7 @@ static const struct token token_list[] = {
[QUEUE_TEMPLATE_TABLE] = {
.name = "template_table",
.help = "specify table id",
-   .next = NEXT(NEXT_ENTRY(QUEUE_PATTERN_TEMPL

[PATCH 2/4] ethdev: add template table hash calculation function

2023-01-20 Thread Alexander Kozyrev
Allow a user to specify the hash calculation function used in template tables.
The hash calculation type is responsible for the calculation of the flow
rule index a packet would hit upon arrival at the table.

Control over this is useful for applications with custom RSS algorithms,
for example. An application can select various packet fields to serve as
a hash calculation source and jump to the appropriate flow rule location.
The RSS hash result will be used as the index in the table. For the linear
hash function, the mapping is one-to-one and the hash result is the index.
For other hash functions, the index is the hash result modulo the table size.
The RSS hash result can be retrieved via the modify_field API: HASH_RESULT.
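
As an illustration (not part of this patch's diff), the hash result could
be copied into packet metadata with the modify_field action so software
can observe which index a packet hit:

struct rte_flow_action_modify_field copy_hash = {
        .operation = RTE_FLOW_MODIFY_SET,
        .dst = { .field = RTE_FLOW_FIELD_META },
        .src = { .field = RTE_FLOW_FIELD_HASH_RESULT },
        .width = 32,
};
struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, .conf = &copy_hash },
        { .type = RTE_FLOW_ACTION_TYPE_END },
};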

Signed-off-by: Alexander Kozyrev 
---
 doc/guides/rel_notes/release_23_03.rst |  4 
 lib/ethdev/rte_flow.h  | 30 ++
 2 files changed, 34 insertions(+)

diff --git a/doc/guides/rel_notes/release_23_03.rst 
b/doc/guides/rel_notes/release_23_03.rst
index 13dd368f2e..0535205c8b 100644
--- a/doc/guides/rel_notes/release_23_03.rst
+++ b/doc/guides/rel_notes/release_23_03.rst
@@ -63,6 +63,10 @@ New Features
   Introduced new function ``rte_flow_async_create_by_index()``
   to insert rules by index into index-based template tables.
 
+* **Added template table hash calculation function in flow API.**
+  Added a hash calculation function used in template tables
+  to allow control over the calculation of the rule index for a packet.
+
 Removed Items
 -
 
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 88d0c6a724..0c793bce8a 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3528,6 +3528,7 @@ enum rte_flow_field_id {
RTE_FLOW_FIELD_IPV6_ECN,/**< IPv6 ECN. */
RTE_FLOW_FIELD_GTP_PSC_QFI, /**< GTP QFI. */
RTE_FLOW_FIELD_METER_COLOR, /**< Meter color marker. */
+   RTE_FLOW_FIELD_HASH_RESULT, /**< Hash result. */
 };
 
 /**
@@ -5204,6 +5205,31 @@ enum rte_flow_table_insertion_type {
RTE_FLOW_TABLE_INSERTION_TYPE_INDEX,
 };
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Template table hash index calculation function.
+ */
+enum rte_flow_table_hash_func {
+   /**
+* Default hash calculation.
+*/
+   RTE_FLOW_TABLE_HASH_FUNC_DEFAULT,
+   /**
+* Linear hash calculation.
+*/
+   RTE_FLOW_TABLE_HASH_FUNC_LINEAR,
+   /**
+* 32-bit checksum hash calculation.
+*/
+   RTE_FLOW_TABLE_HASH_FUNC_CRC32,
+   /**
+* 16-bit checksum hash calculation.
+*/
+   RTE_FLOW_TABLE_HASH_FUNC_CRC16,
+};
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
@@ -5223,6 +5249,10 @@ struct rte_flow_template_table_attr {
 * Insertion type for flow rules.
 */
enum rte_flow_table_insertion_type insertion_type;
+   /**
+* Hash calculation function for the packet matching.
+*/
+   enum rte_flow_table_hash_func hash_func;
 };
 
 /**
-- 
2.18.2



[PATCH 4/4] app/testpmd: add template table hash calculation function

2023-01-20 Thread Alexander Kozyrev
Add testpmd CLI interface for a template table hash function.
Available types are: default, linear, crc32 and crc16.
flow template_table 0 create table_id 0 hash_func linear ...

Signed-off-by: Alexander Kozyrev 
---
 app/test-pmd/cmdline_flow.c | 69 -
 1 file changed, 68 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index f1d6813baa..007d31c5cf 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -180,6 +180,8 @@ enum index {
TABLE_DESTROY_ID,
TABLE_INSERTION_TYPE,
TABLE_INSERTION_TYPE_NAME,
+   TABLE_HASH_FUNC,
+   TABLE_HASH_FUNC_NAME,
TABLE_GROUP,
TABLE_PRIORITY,
TABLE_INGRESS,
@@ -808,7 +810,8 @@ static const char *const modify_field_ids[] = {
"udp_port_src", "udp_port_dst",
"vxlan_vni", "geneve_vni", "gtp_teid",
"tag", "mark", "meta", "pointer", "value",
-   "ipv4_ecn", "ipv6_ecn", "gtp_psc_qfi", "meter_color", NULL
+   "ipv4_ecn", "ipv6_ecn", "gtp_psc_qfi", "meter_color",
+   "hash_result", NULL
 };
 
 static const char *const meter_colors[] = {
@@ -819,6 +822,10 @@ static const char *const table_insertion_types[] = {
"pattern", "index", NULL
 };
 
+static const char *const table_hash_funcs[] = {
+   "default", "linear", "crc32", "crc16", NULL
+};
+
 #define RAW_IPSEC_CONFS_MAX_NUM 8
 
 /** Maximum number of subsequent tokens and arguments on the stack. */
@@ -1163,6 +1170,7 @@ static const enum index next_table_attr[] = {
TABLE_CREATE_ID,
TABLE_GROUP,
TABLE_INSERTION_TYPE,
+   TABLE_HASH_FUNC,
TABLE_PRIORITY,
TABLE_INGRESS,
TABLE_EGRESS,
@@ -2417,6 +2425,9 @@ static int parse_meter_color(struct context *ctx, const 
struct token *token,
 static int parse_insertion_table_type(struct context *ctx, const struct token 
*token,
  const char *str, unsigned int len, void 
*buf,
  unsigned int size);
+static int parse_hash_table_type(struct context *ctx, const struct token 
*token,
+const char *str, unsigned int len, void *buf,
+unsigned int size);
 static int comp_none(struct context *, const struct token *,
 unsigned int, char *, unsigned int);
 static int comp_boolean(struct context *, const struct token *,
@@ -2451,6 +2462,8 @@ static int comp_meter_color(struct context *, const 
struct token *,
unsigned int, char *, unsigned int);
 static int comp_insertion_table_type(struct context *, const struct token *,
 unsigned int, char *, unsigned int);
+static int comp_hash_table_type(struct context *, const struct token *,
+   unsigned int, char *, unsigned int);
 
 /** Token definitions. */
 static const struct token token_list[] = {
@@ -2935,6 +2948,20 @@ static const struct token token_list[] = {
.call = parse_insertion_table_type,
.comp = comp_insertion_table_type,
},
+   [TABLE_HASH_FUNC] = {
+   .name = "hash_func",
+   .help = "specify hash calculation function",
+   .next = NEXT(next_table_attr,
+NEXT_ENTRY(TABLE_HASH_FUNC_NAME)),
+   .args = ARGS(ARGS_ENTRY(struct buffer,
+   args.table.attr.hash_func)),
+   },
+   [TABLE_HASH_FUNC_NAME] = {
+   .name = "hash_func_name",
+   .help = "hash calculation function name",
+   .call = parse_hash_table_type,
+   .comp = comp_hash_table_type,
+   },
[TABLE_GROUP] = {
.name = "group",
.help = "specify a group",
@@ -10123,6 +10150,32 @@ parse_insertion_table_type(struct context *ctx, const 
struct token *token,
return ret > 0 ? (int)len : ret;
 }
 
+/** Parse Hash Calculation Table Type name */
+static int
+parse_hash_table_type(struct context *ctx, const struct token *token,
+ const char *str, unsigned int len, void *buf,
+ unsigned int size)
+{
+   const struct arg *arg = pop_args(ctx);
+   unsigned int i;
+   char tmp[2];
+   int ret;
+
+   (void)size;
+   /* Argument is expected. */
+   if (!arg)
+   return -1;
+   for (i = 0; table_hash_funcs[i]; ++i)
+   if (!strcmp_partial(table_hash_funcs[i], str, len))
+   break;
+   if (!table_hash_funcs[i])
+   return -1;
+   push_args(ctx, arg);
+   snprintf(tmp, sizeof(tmp), "%u", i);
+   ret = parse_int(ctx, token, tmp, strlen(tmp), buf, sizeof(i));
+   return ret > 0 ? (int)len : ret;
+}
+
 /** No completion. */
 static int
 comp_none(struct context *ctx, const struct token *token,
@@ -10442,6 +1049