[PATCH v2 1/2] security: add fallback security processing and Rx inject

2023-09-29 Thread Anoob Joseph
Add an alternate datapath API for security processing which would do Rx
injection (similar to loopback) after successful security processing.

With inline protocol offload, the variable part of the session context
(AR windows, lifetime etc. in case of IPsec) is not accessible to the
application. If packets are not getting processed in the inline path
due to non-security reasons (such as outer fragmentation or rte_flow
packet steering limitations), then the packet cannot be security
processed as the session context is private to the PMD and the security
library doesn't provide alternate APIs to make use of the same session.

Introduce a new API and Rx injection as a fallback mechanism for
security processing failures due to non-security reasons. For example,
when there is outer fragmentation and the PMD doesn't support reassembly
of outer fragments, the application would receive the fragments, which
it can then reassemble. Post successful reassembly, the packet can be
submitted for security processing and Rx injection. The packets can then
be received in the application as normal inline protocol processed
packets.

The same API can be leveraged in lookaside protocol offload mode to
inject packets to Rx. This would help in using rte_flow based packet
parsing after security processing. For example, with IPsec, this will
help in inner parsing and flow splitting after IPsec processing is done.

With both inline protocol capable ethdevs and lookaside protocol capable
cryptodevs, the packet would be received back on the eth port & queue
determined by rte_flow rules and packet parsing after security
processing. The API would behave like a loopback but with the additional
security processing.
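As a usage sketch of the flow described above (illustrative fragment
only: error handling is elided, `app_reassemble()` is a hypothetical
application helper, and the port number is arbitrary):

```c
/* One-time setup: allow the security device to inject into ethdev port 0. */
ret = rte_security_rx_inject_configure(sec_ctx, 0 /* port_id */, true);

/* Fast path: packets that missed inline processing for non-security
 * reasons (e.g. outer fragments) are reassembled by the application,
 * then submitted for security processing plus Rx injection. */
pkt = app_reassemble(frags, nb_frags);	/* hypothetical helper */
nb_inj = rte_security_inb_pkt_rx_inject(sec_ctx, &pkt, &sess, 1);

/* The processed packet is later received via rte_eth_rx_burst() on the
 * port/queue selected by rte_flow rules, like any inline-processed packet. */
```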

Signed-off-by: Anoob Joseph 
Signed-off-by: Vidya Sagar Velumuri 
---
v2:
* Added a new API for configuring security device to do Rx inject to a specific
  ethdev port
* Rebased

 doc/guides/cryptodevs/features/default.ini |  1 +
 lib/cryptodev/rte_cryptodev.h  |  2 +
 lib/security/rte_security.c| 22 ++
 lib/security/rte_security.h| 85 ++
 lib/security/rte_security_driver.h | 44 +++
 lib/security/version.map   |  3 +
 6 files changed, 157 insertions(+)

diff --git a/doc/guides/cryptodevs/features/default.ini b/doc/guides/cryptodevs/features/default.ini
index 6f637fa7e2..f411d4bab7 100644
--- a/doc/guides/cryptodevs/features/default.ini
+++ b/doc/guides/cryptodevs/features/default.ini
@@ -34,6 +34,7 @@ Sym raw data path API  =
 Cipher multiple data units =
 Cipher wrapped key =
 Inner checksum =
+Rx inject  =
 
 ;
 ; Supported crypto algorithms of a default crypto driver.
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index 9f07e1ed2c..05aabb6526 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -534,6 +534,8 @@ rte_cryptodev_asym_get_xform_string(enum rte_crypto_asym_xform_type xform_enum);
 /**< Support wrapped key in cipher xform  */
 #define RTE_CRYPTODEV_FF_SECURITY_INNER_CSUM   (1ULL << 27)
 /**< Support inner checksum computation/verification */
+#define RTE_CRYPTODEV_FF_SECURITY_RX_INJECT(1ULL << 28)
+/**< Support Rx injection after security processing */
 
 /**
  * Get the name of a crypto device feature flag
diff --git a/lib/security/rte_security.c b/lib/security/rte_security.c
index ab44bbe0f0..fa8d2bb7ce 100644
--- a/lib/security/rte_security.c
+++ b/lib/security/rte_security.c
@@ -321,6 +321,28 @@ rte_security_capability_get(void *ctx, struct rte_security_capability_idx *idx)
return NULL;
 }
 
+int
+rte_security_rx_inject_configure(void *ctx, uint16_t port_id, bool enable)
+{
+   struct rte_security_ctx *instance = ctx;
+
+   RTE_PTR_OR_ERR_RET(instance, -EINVAL);
+   RTE_PTR_OR_ERR_RET(instance->ops, -ENOTSUP);
+   RTE_PTR_OR_ERR_RET(instance->ops->rx_inject_configure, -ENOTSUP);
+
+   return instance->ops->rx_inject_configure(instance->device, port_id, enable);
+}
+
+uint16_t
+rte_security_inb_pkt_rx_inject(void *ctx, struct rte_mbuf **pkts, void **sess,
+  uint16_t nb_pkts)
+{
+   struct rte_security_ctx *instance = ctx;
+
+   return instance->ops->inb_pkt_rx_inject(instance->device, pkts,
+   (struct rte_security_session **)sess, nb_pkts);
+}
+
 static int
 security_handle_cryptodev_list(const char *cmd __rte_unused,
   const char *params __rte_unused,
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index c9cc7a45a6..fe8e8e9813 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -1310,6 +1310,91 @@ const struct rte_security_capability *
 rte_security_capability_get(void *instance,
struct rte_security_capability_idx *idx);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Configure security de

[PATCH v2 2/2] test/cryptodev: add Rx inject test

2023-09-29 Thread Anoob Joseph
From: Vidya Sagar Velumuri 

Add a test to verify Rx inject. The test case added pushes a known
vector to the cryptodev, which is then injected to ethdev Rx. The test
case verifies that the packet is received from ethdev Rx and is
processed successfully. It also verifies that the userdata matches the
expectation.

Signed-off-by: Anoob Joseph 
Signed-off-by: Vidya Sagar Velumuri 
---
 app/test/test_cryptodev.c| 341 +++
 app/test/test_cryptodev_security_ipsec.h |   1 +
 2 files changed, 289 insertions(+), 53 deletions(-)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index f2112e181e..420f60553d 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -17,6 +17,7 @@
 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -1426,6 +1427,93 @@ ut_setup_security(void)
return dev_configure_and_start(0);
 }
 
+static int
+ut_setup_security_rx_inject(void)
+{
+   struct rte_mempool *mbuf_pool = rte_mempool_lookup("CRYPTO_MBUFPOOL");
+   struct crypto_testsuite_params *ts_params = &testsuite_params;
+   struct rte_eth_conf port_conf = {
+   .rxmode = {
+   .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM |
+   RTE_ETH_RX_OFFLOAD_SECURITY,
+   },
+   .txmode = {
+   .offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
+   },
+   .lpbk_mode = 1,  /* Enable loopback */
+   };
+   struct rte_cryptodev_info dev_info;
+   struct rte_eth_rxconf rx_conf = {
+   .rx_thresh = {
+   .pthresh = 8,
+   .hthresh = 8,
+   .wthresh = 8,
+   },
+   .rx_free_thresh = 32,
+   };
+   uint16_t nb_ports;
+   void *sec_ctx;
+   int ret;
+
+   rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+   if (!(dev_info.feature_flags & RTE_CRYPTODEV_FF_SECURITY_RX_INJECT) ||
+   !(dev_info.feature_flags & RTE_CRYPTODEV_FF_SECURITY)) {
+   RTE_LOG(INFO, USER1, "Feature requirements for IPsec Rx inject test case not met\n");
+   return TEST_SKIPPED;
+   }
+
+   sec_ctx = rte_cryptodev_get_sec_ctx(ts_params->valid_devs[0]);
+   if (sec_ctx == NULL)
+   return TEST_SKIPPED;
+
+   nb_ports = rte_eth_dev_count_avail();
+   if (nb_ports == 0)
+   return TEST_SKIPPED;
+
+   ret = rte_eth_dev_configure(0 /* port_id */,
+   1 /* nb_rx_queue */,
+   0 /* nb_tx_queue */,
+   &port_conf);
+   if (ret) {
+   printf("Could not configure ethdev port 0 [err=%d]\n", ret);
+   return TEST_SKIPPED;
+   }
+
+   /* Rx queue setup */
+   ret = rte_eth_rx_queue_setup(0 /* port_id */,
+0 /* rx_queue_id */,
+1024 /* nb_rx_desc */,
+SOCKET_ID_ANY,
+&rx_conf,
+mbuf_pool);
+   if (ret) {
+   printf("Could not setup eth port 0 queue 0\n");
+   return TEST_SKIPPED;
+   }
+
+   ret = rte_security_rx_inject_configure(sec_ctx, 0, true);
+   if (ret) {
+   printf("Could not enable Rx inject offload");
+   return TEST_SKIPPED;
+   }
+
+   ret = rte_eth_dev_start(0);
+   if (ret) {
+   printf("Could not start ethdev");
+   return TEST_SKIPPED;
+   }
+
+   ret = rte_eth_promiscuous_enable(0);
+   if (ret) {
+   printf("Could not enable promiscuous mode");
+   return TEST_SKIPPED;
+   }
+
+   /* Configure and start cryptodev with no features disabled */
+   return dev_configure_and_start(0);
+}
+
 void
 ut_teardown(void)
 {
@@ -1478,6 +1566,33 @@ ut_teardown(void)
rte_cryptodev_stop(ts_params->valid_devs[0]);
 }
 
+static void
+ut_teardown_rx_inject(void)
+{
+   struct crypto_testsuite_params *ts_params = &testsuite_params;
+   void *sec_ctx;
+   int ret;
+
+   if  (rte_eth_dev_count_avail() != 0) {
+   ret = rte_eth_dev_reset(0);
+   if (ret)
+   printf("Could not reset eth port 0");
+
+   }
+
+   ut_teardown();
+
+   sec_ctx = rte_cryptodev_get_sec_ctx(ts_params->valid_devs[0]);
+   if (sec_ctx == NULL)
+   return;
+
+   ret = rte_security_rx_inject_configure(sec_ctx, 0, false);
+   if (ret) {
+   printf("Could not disable Rx inject offload");
+   return;
+   }
+}
+
 static int
 test_device_configure_invalid_dev_id(void)
 {
@@ -9875,6 +9990,137 @@ ext_mbuf_create(struct rte_mempool *mbuf_pool, int pkt_len,
return NULL;
 }
 

Re: [PATCH v5 12/12] app/test: add event DMA adapter auto-test

2023-09-29 Thread Jerin Jacob
On Fri, Sep 29, 2023 at 9:46 AM Amit Prakash Shukla
 wrote:
>
> Added testsuite to test the dma adapter functionality.
> The testsuite detects event and DMA device capability
> and accordingly dma adapter is configured and modes are
> tested. Test command:
>
> /app/test/dpdk-test event_dma_adapter_autotest

Use the below command with SW driver so that anyone can run it.

>
> Signed-off-by: Amit Prakash Shukla 

sudo ./build/app/test/dpdk-test --vdev=dma_skeleton event_dma_adapter_autotest
There are failures with the above as the skeleton dmadev, like most of
the remaining drivers, does not support SG. So please change to the following.

[for-main]dell[dpdk-next-eventdev] $ git diff
diff --git a/lib/eventdev/rte_event_dma_adapter.c b/lib/eventdev/rte_event_dma_adapter.c
index 4899bc5d0f..bbdfd3daa6 100644
--- a/lib/eventdev/rte_event_dma_adapter.c
+++ b/lib/eventdev/rte_event_dma_adapter.c
@@ -256,8 +256,13 @@ edma_circular_buffer_flush_to_dma_dev(struct event_dma_adapter *adapter,

	for (i = 0; i < n; i++) {
		op = bufp->op_buffer[*head];
-		ret = rte_dma_copy_sg(dma_dev_id, vchan, op->src_seg, op->dst_seg,
-				      op->nb_src, op->nb_dst, op->flags);
+		if (op->nb_src == 1 && op->nb_dst == 1)
+			ret = rte_dma_copy(dma_dev_id, vchan, op->src_seg->addr, op->dst_seg->addr,
+					   op->src_seg->length, op->flags);
+		else
+			ret = rte_dma_copy_sg(dma_dev_id, vchan, op->src_seg, op->dst_seg,
+					      op->nb_src, op->nb_dst, op->flags);
+

With the above change, all test cases are passing on the skeleton device.

[for-main]dell[dpdk-next-eventdev] $ sudo ./build/app/test/dpdk-test
--vdev=dma_skeleton event_dma_adapter_autotest
EAL: Detected CPU lcores: 56
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
skeldma_probe(): Create dma_skeleton dmadev with lcore-id -1
APP: HPET is not enabled, using TSC as default timer
RTE>>event_dma_adapter_autotest
 + --- +
 + Test Suite : Event dma adapter test suite
 + --- +
 + TestCase [ 0] : test_dma_adapter_create succeeded
 + TestCase [ 1] : test_dma_adapter_vchan_add_del succeeded
 +--+
 + DMA adapter stats for instance 0:
 + Event port poll count 0x0
 + Event dequeue count   0x0
 + DMA dev enqueue count 0x0
 + DMA dev enqueue failed count  0x0
 + DMA dev dequeue count 0x0
 + Event enqueue count   0x0
 + Event enqueue retry count 0x0
 + Event enqueue fail count  0x0
 +--+
 + TestCase [ 2] : test_dma_adapter_stats succeeded
 + TestCase [ 3] : test_dma_adapter_params succeeded
 +--+
 + DMA adapter stats for instance 0:
 + Event port poll count 0xc5df
 + Event dequeue count   0x20
 + DMA dev enqueue count 0x20
 + DMA dev enqueue failed count  0x0
 + DMA dev dequeue count 0x20
 + Event enqueue count   0x20
 + Event enqueue retry count 0x0
 + Event enqueue fail count  0x0
 +--+
 + TestCase [ 4] : test_with_op_forward_mode succeeded
EVENTDEV: rte_event_dev_stop() line 1427: Device with dev_id=0already stopped
 + --- +
 + Test Suite Summary : Event dma adapter test suite
 + --- +
 + Tests Total :5
 + Tests Skipped :  0
 + Tests Executed : 5
 + Tests Unsupported:   0
 + Tests Passed :   5
 + Tests Failed :   0
 + --- +
Test OK
RTE>>skeldma_remove(): Remove dma_skeleton dmadev



# Please fix the second warning by using rte_strscpy

[for-main]dell[dpdk-next-eventdev] $ ./devtools/checkpatches.sh -n 12
&& ./devtools/check-git-log.sh -n 12

### [PATCH] eventdev/dma: support adapter create and free

WARNING:MACRO_WITH_FLOW_CONTROL: Macros with flow control statements
should be avoided
#60: FILE: lib/eventdev/rte_event_dma_adapter.c:19:
+#define EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, retval) \
+   do { \
+   if (!edma_adapter_valid_id(id)) { \
+   RTE_EDEV_LOG_ERR("Invalid DMA adapter id = %d\n", id); \
+   return retval; \
+   } \
+   } while (0)

WARNING:STRCPY: Prefer strscpy over strcpy - see:
https://github.com/KSPP/linux/issues/88
#302: FILE: lib/eventdev/rte_event_dma_adapter.c:261:
+   strcpy(adapter->mem_name, name);

total: 0 errors, 2 warnings, 349 lines checked

Rest of the changes looks good to me. Good to merge next version.
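The dispatch suggested above is simple enough to sketch standalone;
`use_flat_copy()` below is a hypothetical helper, not part of the patch,
mirroring the condition that picks rte_dma_copy() over rte_dma_copy_sg():

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Mirror of the condition in the suggested change: fall back to the
 * flat copy API when the op is a single-segment transfer, so drivers
 * without SG support (like the skeleton dmadev) still work. */
bool use_flat_copy(uint16_t nb_src, uint16_t nb_dst)
{
	return nb_src == 1 && nb_dst == 1;
}
```

Only ops with exactly one source and one destination segment take the
flat-copy path; everything else still requires SG support in the driver.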


Re: [PATCH v6 1/6] eal: provide rte stdatomics optional atomics API

2023-09-29 Thread David Marchand
On Thu, Sep 28, 2023 at 10:06 AM Thomas Monjalon  wrote:
>
> 22/08/2023 23:00, Tyler Retzlaff:
> > --- a/lib/eal/include/generic/rte_rwlock.h
> > +++ b/lib/eal/include/generic/rte_rwlock.h
> > @@ -32,6 +32,7 @@
> >  #include 
> >  #include 
> >  #include 
> > +#include 
>
> I'm not sure about adding the include in patch 1 if it is not used here.

Yes, this is something I had already fixed locally.

>
> > --- /dev/null
> > +++ b/lib/eal/include/rte_stdatomic.h
> > @@ -0,0 +1,198 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2023 Microsoft Corporation
> > + */
> > +
> > +#ifndef _RTE_STDATOMIC_H_
> > +#define _RTE_STDATOMIC_H_
> > +
> > +#include 
> > +
> > +#ifdef __cplusplus
> > +extern "C" {
> > +#endif
> > +
> > +#ifdef RTE_ENABLE_STDATOMIC
> > +#ifdef __STDC_NO_ATOMICS__
> > +#error enable_stdatomics=true but atomics not supported by toolchain
> > +#endif
> > +
> > +#include 
> > +
> > +/* RTE_ATOMIC(type) is provided for use as a type specifier
> > + * permitting designation of an rte atomic type.
> > + */
> > +#define RTE_ATOMIC(type) _Atomic(type)
> > +
> > +/* __rte_atomic is provided for type qualification permitting
> > + * designation of an rte atomic qualified type-name.
>
> Sorry I don't understand this comment.

The difference between atomic qualifier and atomic specifier and the
need for exposing those two notions are not obvious to me.

One clue I have is with one use later in the series:
+rte_mcslock_lock(RTE_ATOMIC(rte_mcslock_t *) *msl, rte_mcslock_t *me)
...
+prev = rte_atomic_exchange_explicit(msl, me, rte_memory_order_acq_rel);

So at least RTE_ATOMIC() seems necessary.


>
> > + */
> > +#define __rte_atomic _Atomic
> > +
> > +/* The memory order is an enumerated type in C11. */
> > +typedef memory_order rte_memory_order;
> > +
> > +#define rte_memory_order_relaxed memory_order_relaxed
> > +#ifdef __ATOMIC_RELAXED
> > +static_assert(rte_memory_order_relaxed == __ATOMIC_RELAXED,
> > + "rte_memory_order_relaxed == __ATOMIC_RELAXED");
>
> Not sure about using static_assert or RTE_BUILD_BUG_ON

Do you mean you want no check at all in a public facing header?

Or is it that we have RTE_BUILD_BUG_ON and we should keep using it
instead of static_assert?

I remember some problems with RTE_BUILD_BUG_ON where the compiler
would silently drop the whole expression and reported no problem as it
could not evaluate the expression.
At least, with static_assert (iirc, it is new to C11) the compiler
complains with a clear "error: expression in static assertion is not
constant".
We could fix RTE_BUILD_BUG_ON, but I guess the fix would be equivalent
to mapping it to static_assert(!condition).
Using language standard constructs seems a better choice.
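As an illustration of the point (GCC/Clang assumed, since the
__ATOMIC_* values are compiler builtins), the check in the patch boils
down to a compile-time equality that static_assert refuses to silently
drop; a minimal standalone sketch in the same shape as rte_stdatomic.h:

```c
#include <assert.h>      /* static_assert (C11) */
#include <stdatomic.h>   /* memory_order_* (C11) */

/* Alias the standard memory order, as the patch does, and verify at
 * compile time that it lines up with the __ATOMIC_* builtin value used
 * by the legacy rte_atomic API. A non-constant expression here would be
 * a hard compile error, not a silently dropped check. */
typedef memory_order rte_memory_order;

#define rte_memory_order_relaxed memory_order_relaxed

#ifdef __ATOMIC_RELAXED
static_assert(rte_memory_order_relaxed == __ATOMIC_RELAXED,
	      "rte_memory_order_relaxed == __ATOMIC_RELAXED");
#endif
```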


-- 
David Marchand



[PATCH v7 00/12] event DMA adapter library support

2023-09-29 Thread Amit Prakash Shukla
This series adds support for the event DMA adapter library. APIs defined
as part of this library can be used by the application for DMA transfer
of data using an event based mechanism.
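As a rough application-level sketch of the APIs defined in this series
(identifiers such as `ADAPTER_ID`, `evdev_id`, `dma_dev_id`, `vchan`,
`port_conf` and `ev` are illustrative, and error checks are elided):

```c
uint32_t caps = 0;

/* Query how the eventdev can interact with this dmadev. */
rte_event_dma_adapter_caps_get(evdev_id, dma_dev_id, &caps);

/* Create the adapter and bind a vchan; an event must be supplied when
 * the PMD reports the VCHAN_EV_BIND capability. */
rte_event_dma_adapter_create(ADAPTER_ID, evdev_id, &port_conf,
			     RTE_EVENT_DMA_ADAPTER_OP_FORWARD);
rte_event_dma_adapter_vchan_add(ADAPTER_ID, dma_dev_id, vchan,
	(caps & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_VCHAN_EV_BIND) ?
	&ev : NULL);
rte_event_dma_adapter_start(ADAPTER_ID);
```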

v7:
- Resolved review comments.

v6:
- Resolved review comments.
- Updated git commit message.

v5:
- Resolved review comments.

v4:
- Fixed compilation error.

v3:
- Resolved checkpatch warnings.
- Fixed compilation error on intel.
- Updated git commit message.

v2:
- Resolved review comments.
- Patch split into multiple patches.

Amit Prakash Shukla (12):
  eventdev/dma: introduce DMA adapter
  eventdev/dma: support adapter capabilities get
  eventdev/dma: support adapter create and free
  eventdev/dma: support vchan add and delete
  eventdev/dma: support adapter service function
  eventdev/dma: support adapter start and stop
  eventdev/dma: support adapter service ID get
  eventdev/dma: support adapter runtime params
  eventdev/dma: support adapter stats
  eventdev/dma: support adapter enqueue
  eventdev/dma: support adapter event port get
  app/test: add event DMA adapter auto-test

 MAINTAINERS   |7 +
 app/test/meson.build  |1 +
 app/test/test_event_dma_adapter.c |  805 +
 config/rte_config.h   |1 +
 doc/api/doxy-api-index.md |1 +
 doc/guides/eventdevs/features/default.ini |8 +
 doc/guides/prog_guide/event_dma_adapter.rst   |  264 +++
 doc/guides/prog_guide/eventdev.rst|8 +-
 .../img/event_dma_adapter_op_forward.svg  | 1086 +
 .../img/event_dma_adapter_op_new.svg  | 1079 +
 doc/guides/prog_guide/index.rst   |1 +
 doc/guides/rel_notes/release_23_11.rst|5 +
 lib/eventdev/eventdev_pmd.h   |  171 +-
 lib/eventdev/eventdev_private.c   |   10 +
 lib/eventdev/meson.build  |4 +-
 lib/eventdev/rte_event_dma_adapter.c  | 1434 +
 lib/eventdev/rte_event_dma_adapter.h  |  581 +++
 lib/eventdev/rte_eventdev.c   |   23 +
 lib/eventdev/rte_eventdev.h   |   44 +
 lib/eventdev/rte_eventdev_core.h  |8 +-
 lib/eventdev/version.map  |   16 +
 lib/meson.build   |2 +-
 22 files changed, 5552 insertions(+), 7 deletions(-)
 create mode 100644 app/test/test_event_dma_adapter.c
 create mode 100644 doc/guides/prog_guide/event_dma_adapter.rst
 create mode 100644 doc/guides/prog_guide/img/event_dma_adapter_op_forward.svg
 create mode 100644 doc/guides/prog_guide/img/event_dma_adapter_op_new.svg
 create mode 100644 lib/eventdev/rte_event_dma_adapter.c
 create mode 100644 lib/eventdev/rte_event_dma_adapter.h

-- 
2.25.1



[PATCH v7 01/12] eventdev/dma: introduce DMA adapter

2023-09-29 Thread Amit Prakash Shukla
Introduce the event DMA adapter interface to transfer packets between
a DMA device and an event device.

Signed-off-by: Amit Prakash Shukla 
Acked-by: Jerin Jacob 
---
 MAINTAINERS   |6 +
 doc/api/doxy-api-index.md |1 +
 doc/guides/eventdevs/features/default.ini |8 +
 doc/guides/prog_guide/event_dma_adapter.rst   |  264 
 doc/guides/prog_guide/eventdev.rst|8 +-
 .../img/event_dma_adapter_op_forward.svg  | 1086 +
 .../img/event_dma_adapter_op_new.svg  | 1079 
 doc/guides/prog_guide/index.rst   |1 +
 doc/guides/rel_notes/release_23_11.rst|5 +
 lib/eventdev/eventdev_pmd.h   |  171 ++-
 lib/eventdev/eventdev_private.c   |   10 +
 lib/eventdev/meson.build  |1 +
 lib/eventdev/rte_event_dma_adapter.h  |  581 +
 lib/eventdev/rte_eventdev.h   |   44 +
 lib/eventdev/rte_eventdev_core.h  |8 +-
 lib/eventdev/version.map  |   16 +
 lib/meson.build   |2 +-
 17 files changed, 3285 insertions(+), 6 deletions(-)
 create mode 100644 doc/guides/prog_guide/event_dma_adapter.rst
 create mode 100644 doc/guides/prog_guide/img/event_dma_adapter_op_forward.svg
 create mode 100644 doc/guides/prog_guide/img/event_dma_adapter_op_new.svg
 create mode 100644 lib/eventdev/rte_event_dma_adapter.h

diff --git a/MAINTAINERS b/MAINTAINERS
index a926155f26..4ebbbe8bb3 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -540,6 +540,12 @@ F: lib/eventdev/*crypto_adapter*
 F: app/test/test_event_crypto_adapter.c
 F: doc/guides/prog_guide/event_crypto_adapter.rst
 
+Eventdev DMA Adapter API
+M: Amit Prakash Shukla 
+T: git://dpdk.org/next/dpdk-next-eventdev
+F: lib/eventdev/*dma_adapter*
+F: doc/guides/prog_guide/event_dma_adapter.rst
+
 Raw device API
 M: Sachin Saxena 
 M: Hemant Agrawal 
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index fdeda13932..b7df7be4d9 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -29,6 +29,7 @@ The public API headers are grouped by topics:
   [event_eth_tx_adapter](@ref rte_event_eth_tx_adapter.h),
   [event_timer_adapter](@ref rte_event_timer_adapter.h),
   [event_crypto_adapter](@ref rte_event_crypto_adapter.h),
+  [event_dma_adapter](@ref rte_event_dma_adapter.h),
   [rawdev](@ref rte_rawdev.h),
   [metrics](@ref rte_metrics.h),
   [bitrate](@ref rte_bitrate.h),
diff --git a/doc/guides/eventdevs/features/default.ini b/doc/guides/eventdevs/features/default.ini
index 00360f60c6..73a52d915b 100644
--- a/doc/guides/eventdevs/features/default.ini
+++ b/doc/guides/eventdevs/features/default.ini
@@ -44,6 +44,14 @@ internal_port_op_fwd   =
 internal_port_qp_ev_bind   =
 session_private_data   =
 
+;
+; Features of a default DMA adapter.
+;
+[DMA adapter Features]
+internal_port_op_new   =
+internal_port_op_fwd   =
+internal_port_vchan_ev_bind =
+
 ;
 ; Features of a default Timer adapter.
 ;
diff --git a/doc/guides/prog_guide/event_dma_adapter.rst b/doc/guides/prog_guide/event_dma_adapter.rst
new file mode 100644
index 00..701e50d042
--- /dev/null
+++ b/doc/guides/prog_guide/event_dma_adapter.rst
@@ -0,0 +1,264 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+Copyright (c) 2023 Marvell.
+
+Event DMA Adapter Library
+=========================
+
+DPDK :doc:`Eventdev library ` provides event driven programming model with features
+to schedule events. :doc:`DMA Device library ` provides an interface to DMA poll mode
+drivers that support DMA operations. Event DMA Adapter is intended to bridge between the event
+device and the DMA device.
+
+Packet flow from DMA device to the event device can be accomplished using software and hardware
+based transfer mechanisms. The adapter queries an eventdev PMD to determine which mechanism to
+be used. The adapter uses an EAL service core function for software based packet transfer and
+uses the eventdev PMD functions to configure hardware based packet transfer between DMA device
+and the event device. DMA adapter uses a new event type called ``RTE_EVENT_TYPE_DMADEV`` to
+indicate the source of event.
+
+Application can choose to submit a DMA operation directly to a DMA device or send it to a DMA
+adapter via eventdev based on ``RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD`` capability. The
+first mode is known as the event new (``RTE_EVENT_DMA_ADAPTER_OP_NEW``) mode and the second as the
+event forward (``RTE_EVENT_DMA_ADAPTER_OP_FORWARD``) mode. Choice of mode can be specified while
+creating the adapter. In the former mode, it is the application's responsibility to enable
+ingress packet ordering. In the latter mode, it is the adapter's responsibility to enable
+ingress packet ordering.
+
+
+Adapter Modes
+-------------
+
+RTE_EVENT_DMA_ADAPTER_OP_NEW mode
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In the

[PATCH v7 02/12] eventdev/dma: support adapter capabilities get

2023-09-29 Thread Amit Prakash Shukla
Added a new eventdev API rte_event_dma_adapter_caps_get() to get
DMA adapter capabilities supported by the driver.

Signed-off-by: Amit Prakash Shukla 
---
 lib/eventdev/meson.build|  2 +-
 lib/eventdev/rte_eventdev.c | 23 +++
 2 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/lib/eventdev/meson.build b/lib/eventdev/meson.build
index 21347f7c4c..b46bbbc9aa 100644
--- a/lib/eventdev/meson.build
+++ b/lib/eventdev/meson.build
@@ -43,5 +43,5 @@ driver_sdk_headers += files(
 'event_timer_adapter_pmd.h',
 )
 
-deps += ['ring', 'ethdev', 'hash', 'mempool', 'mbuf', 'timer', 'cryptodev']
-deps += ['ring', 'ethdev', 'hash', 'mempool', 'mbuf', 'timer', 'cryptodev']
+deps += ['ring', 'ethdev', 'hash', 'mempool', 'mbuf', 'timer', 'cryptodev', 'dmadev']
 deps += ['telemetry']
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 6ab4524332..60509c6efb 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -20,6 +20,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 
@@ -224,6 +225,28 @@ rte_event_eth_tx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id,
: 0;
 }
 
+int
+rte_event_dma_adapter_caps_get(uint8_t dev_id, uint8_t dma_dev_id, uint32_t *caps)
+{
+   struct rte_eventdev *dev;
+
+   RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+   if (!rte_dma_is_valid(dma_dev_id))
+   return -EINVAL;
+
+   dev = &rte_eventdevs[dev_id];
+
+   if (caps == NULL)
+   return -EINVAL;
+
+   *caps = 0;
+
+   if (dev->dev_ops->dma_adapter_caps_get)
+   return (*dev->dev_ops->dma_adapter_caps_get)(dev, dma_dev_id, caps);
+
+   return 0;
+}
+
 static inline int
 event_dev_queue_config(struct rte_eventdev *dev, uint8_t nb_queues)
 {
-- 
2.25.1



[PATCH v7 03/12] eventdev/dma: support adapter create and free

2023-09-29 Thread Amit Prakash Shukla
Added API support to create and free the DMA adapter. The create function
shall be called with the event device to be associated with the adapter
and a port configuration to set up an event port.

Signed-off-by: Amit Prakash Shukla 
---
 config/rte_config.h  |   1 +
 lib/eventdev/meson.build |   1 +
 lib/eventdev/rte_event_dma_adapter.c | 335 +++
 3 files changed, 337 insertions(+)
 create mode 100644 lib/eventdev/rte_event_dma_adapter.c

diff --git a/config/rte_config.h b/config/rte_config.h
index 400e44e3cf..401727703f 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -77,6 +77,7 @@
 #define RTE_EVENT_ETH_INTR_RING_SIZE 1024
 #define RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE 32
 #define RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE 32
+#define RTE_EVENT_DMA_ADAPTER_MAX_INSTANCE 32
 
 /* rawdev defines */
 #define RTE_RAWDEV_MAX_DEVS 64
diff --git a/lib/eventdev/meson.build b/lib/eventdev/meson.build
index b46bbbc9aa..250abcb154 100644
--- a/lib/eventdev/meson.build
+++ b/lib/eventdev/meson.build
@@ -17,6 +17,7 @@ sources = files(
 'eventdev_private.c',
 'eventdev_trace_points.c',
 'rte_event_crypto_adapter.c',
+'rte_event_dma_adapter.c',
 'rte_event_eth_rx_adapter.c',
 'rte_event_eth_tx_adapter.c',
 'rte_event_ring.c',
diff --git a/lib/eventdev/rte_event_dma_adapter.c b/lib/eventdev/rte_event_dma_adapter.c
new file mode 100644
index 00..241327d2a7
--- /dev/null
+++ b/lib/eventdev/rte_event_dma_adapter.c
@@ -0,0 +1,335 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2023 Marvell.
+ */
+
+#include 
+
+#include "rte_event_dma_adapter.h"
+
+#define DMA_BATCH_SIZE 32
+#define DMA_DEFAULT_MAX_NB 128
+#define DMA_ADAPTER_NAME_LEN 32
+#define DMA_ADAPTER_BUFFER_SIZE 1024
+
+#define DMA_ADAPTER_OPS_BUFFER_SIZE (DMA_BATCH_SIZE + DMA_BATCH_SIZE)
+
+#define DMA_ADAPTER_ARRAY "event_dma_adapter_array"
+
+/* Macros to check for valid adapter */
+#define EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, retval) \
+   do { \
+   if (!edma_adapter_valid_id(id)) { \
+   RTE_EDEV_LOG_ERR("Invalid DMA adapter id = %d\n", id); \
+   return retval; \
+   } \
+   } while (0)
+
+/* DMA ops circular buffer */
+struct dma_ops_circular_buffer {
+   /* Index of head element */
+   uint16_t head;
+
+   /* Index of tail element */
+   uint16_t tail;
+
+   /* Number of elements in buffer */
+   uint16_t count;
+
+   /* Size of circular buffer */
+   uint16_t size;
+
+   /* Pointer to hold rte_event_dma_adapter_op for processing */
+   struct rte_event_dma_adapter_op **op_buffer;
+} __rte_cache_aligned;
+
+/* DMA device information */
+struct dma_device_info {
+   /* Number of vchans configured for a DMA device. */
+   uint16_t num_dma_dev_vchan;
+} __rte_cache_aligned;
+
+struct event_dma_adapter {
+   /* Event device identifier */
+   uint8_t eventdev_id;
+
+   /* Event port identifier */
+   uint8_t event_port_id;
+
+   /* Adapter mode */
+   enum rte_event_dma_adapter_mode mode;
+
+   /* Memory allocation name */
+   char mem_name[DMA_ADAPTER_NAME_LEN];
+
+   /* Socket identifier cached from eventdev */
+   int socket_id;
+
+   /* Lock to serialize config updates with service function */
+   rte_spinlock_t lock;
+
+   /* DMA device structure array */
+   struct dma_device_info *dma_devs;
+
+   /* Circular buffer for processing DMA ops to eventdev */
+   struct dma_ops_circular_buffer ebuf;
+
+   /* Configuration callback for rte_service configuration */
+   rte_event_dma_adapter_conf_cb conf_cb;
+
+   /* Configuration callback argument */
+   void *conf_arg;
+
+   /* Set if  default_cb is being used */
+   int default_cb_arg;
+} __rte_cache_aligned;
+
+static struct event_dma_adapter **event_dma_adapter;
+
+static inline int
+edma_adapter_valid_id(uint8_t id)
+{
+   return id < RTE_EVENT_DMA_ADAPTER_MAX_INSTANCE;
+}
+
+static inline struct event_dma_adapter *
+edma_id_to_adapter(uint8_t id)
+{
+   return event_dma_adapter ? event_dma_adapter[id] : NULL;
+}
+
+static int
+edma_array_init(void)
+{
+   const struct rte_memzone *mz;
+   uint32_t sz;
+
+   mz = rte_memzone_lookup(DMA_ADAPTER_ARRAY);
+   if (mz == NULL) {
+   sz = sizeof(struct event_dma_adapter *) * RTE_EVENT_DMA_ADAPTER_MAX_INSTANCE;
+   sz = RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
+
+   mz = rte_memzone_reserve_aligned(DMA_ADAPTER_ARRAY, sz, rte_socket_id(), 0,
+RTE_CACHE_LINE_SIZE);
+   if (mz == NULL) {
+   RTE_EDEV_LOG_ERR("Failed to reserve memzone : %s, err = %d",
+DMA_ADAPTER_ARRAY, rte_errno);
+   return -rte_errno;
+   }
+   }
+
+   e
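The head/tail/count bookkeeping of `struct dma_ops_circular_buffer`
above can be sketched standalone as follows (a simplification for
illustration, not the patch's exact code; the patch sizes the buffer as
DMA_ADAPTER_OPS_BUFFER_SIZE = DMA_BATCH_SIZE + DMA_BATCH_SIZE = 64):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define OPS_BUF_SIZE 64	/* DMA_ADAPTER_OPS_BUFFER_SIZE in the patch */

struct ops_circular_buffer {
	uint16_t head;   /* index of next element to consume */
	uint16_t tail;   /* index of next free slot */
	uint16_t count;  /* elements currently buffered */
	uint16_t size;   /* capacity */
	void *op_buffer[OPS_BUF_SIZE];
};

/* Append one op; returns false when the buffer is full. */
bool ops_buf_add(struct ops_circular_buffer *b, void *op)
{
	if (b->count == b->size)
		return false;
	b->op_buffer[b->tail] = op;
	b->tail = (b->tail + 1) % b->size;
	b->count++;
	return true;
}

/* Pop the oldest op; returns NULL when the buffer is empty. */
void *ops_buf_take(struct ops_circular_buffer *b)
{
	void *op;

	if (b->count == 0)
		return NULL;
	op = b->op_buffer[b->head];
	b->head = (b->head + 1) % b->size;
	b->count--;
	return op;
}
```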

[PATCH v7 04/12] eventdev/dma: support vchan add and delete

2023-09-29 Thread Amit Prakash Shukla
Added API support to add and delete vchans from the DMA adapter. The DMA
dev ID and vchan are added to the adapter instance by calling
rte_event_dma_adapter_vchan_add and deleted using
rte_event_dma_adapter_vchan_del.

Signed-off-by: Amit Prakash Shukla 
---
 lib/eventdev/rte_event_dma_adapter.c | 204 +++
 1 file changed, 204 insertions(+)

diff --git a/lib/eventdev/rte_event_dma_adapter.c b/lib/eventdev/rte_event_dma_adapter.c
index 241327d2a7..fa2e29b9d3 100644
--- a/lib/eventdev/rte_event_dma_adapter.c
+++ b/lib/eventdev/rte_event_dma_adapter.c
@@ -42,8 +42,31 @@ struct dma_ops_circular_buffer {
struct rte_event_dma_adapter_op **op_buffer;
 } __rte_cache_aligned;
 
+/* Vchan information */
+struct dma_vchan_info {
+   /* Set to indicate vchan queue is enabled */
+   bool vq_enabled;
+
+   /* Circular buffer for batching DMA ops to dma_dev */
+   struct dma_ops_circular_buffer dma_buf;
+} __rte_cache_aligned;
+
 /* DMA device information */
 struct dma_device_info {
+   /* Pointer to vchan queue info */
+   struct dma_vchan_info *vchanq;
+
+   /* Pointer to vchan queue info.
+* This holds ops passed by application till the
+* dma completion is done.
+*/
+   struct dma_vchan_info *tqmap;
+
+   /* If num_vchanq > 0, the start callback will
+* be invoked if not already invoked
+*/
+   uint16_t num_vchanq;
+
/* Number of vchans configured for a DMA device. */
uint16_t num_dma_dev_vchan;
 } __rte_cache_aligned;
@@ -81,6 +104,9 @@ struct event_dma_adapter {
 
/* Set if  default_cb is being used */
int default_cb_arg;
+
+   /* No. of vchan queue configured */
+   uint16_t nb_vchanq;
 } __rte_cache_aligned;
 
 static struct event_dma_adapter **event_dma_adapter;
@@ -333,3 +359,181 @@ rte_event_dma_adapter_free(uint8_t id)
 
return 0;
 }
+
+static void
+edma_update_vchanq_info(struct event_dma_adapter *adapter, struct dma_device_info *dev_info,
+   uint16_t vchan, uint8_t add)
+{
+   struct dma_vchan_info *vchan_info;
+   struct dma_vchan_info *tqmap_info;
+   int enabled;
+   uint16_t i;
+
+   if (dev_info->vchanq == NULL)
+   return;
+
+   if (vchan == RTE_DMA_ALL_VCHAN) {
+   for (i = 0; i < dev_info->num_dma_dev_vchan; i++)
+   edma_update_vchanq_info(adapter, dev_info, i, add);
+   } else {
+   tqmap_info = &dev_info->tqmap[vchan];
+   vchan_info = &dev_info->vchanq[vchan];
+   enabled = vchan_info->vq_enabled;
+   if (add) {
+   adapter->nb_vchanq += !enabled;
+   dev_info->num_vchanq += !enabled;
+   } else {
+   adapter->nb_vchanq -= enabled;
+   dev_info->num_vchanq -= enabled;
+   }
+   vchan_info->vq_enabled = !!add;
+   tqmap_info->vq_enabled = !!add;
+   }
+}
+
+int
+rte_event_dma_adapter_vchan_add(uint8_t id, int16_t dma_dev_id, uint16_t vchan,
+   const struct rte_event *event)
+{
+   struct event_dma_adapter *adapter;
+   struct dma_device_info *dev_info;
+   struct rte_eventdev *dev;
+   uint32_t cap;
+   int ret;
+
+   EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+   if (!rte_dma_is_valid(dma_dev_id)) {
+   RTE_EDEV_LOG_ERR("Invalid dma_dev_id = %" PRIu8, dma_dev_id);
+   return -EINVAL;
+   }
+
+   adapter = edma_id_to_adapter(id);
+   if (adapter == NULL)
+   return -EINVAL;
+
+   dev = &rte_eventdevs[adapter->eventdev_id];
+   ret = rte_event_dma_adapter_caps_get(adapter->eventdev_id, dma_dev_id, &cap);
+   if (ret) {
   RTE_EDEV_LOG_ERR("Failed to get adapter caps dev %u dma_dev %u", id, dma_dev_id);
+   return ret;
+   }
+
+   if ((cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_VCHAN_EV_BIND) && (event == NULL)) {
+   RTE_EDEV_LOG_ERR("Event can not be NULL for dma_dev_id = %u", dma_dev_id);
+   return -EINVAL;
+   }
+
+   dev_info = &adapter->dma_devs[dma_dev_id];
+   if (vchan != RTE_DMA_ALL_VCHAN && vchan >= dev_info->num_dma_dev_vchan) {
+   RTE_EDEV_LOG_ERR("Invalid vchan %u", vchan);
+   return -EINVAL;
+   }
+
+   /* In case HW cap is RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, no
+* need of service core as HW supports event forward capability.
+*/
+   if ((cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) ||
+   (cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_VCHAN_EV_BIND &&
+adapter->mode == RTE_EVENT_DMA_ADAPTER_OP_NEW) ||
+   (cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_NEW &&
+adapter->mode == RTE_EVENT_DMA_ADAPTER_OP_NEW)) {
+   if (*dev->dev_ops
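The enable/disable accounting in edma_update_vchanq_info() above can be modeled in isolation. The sketch below is hypothetical stand-alone C (no DPDK types; the struct and function names are invented), showing why repeated add or delete calls for the same vchan leave the counters consistent: a counter only moves on an actual state transition.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-alone model of the vchan enable/disable counting. */
struct model {
	uint16_t nb_vchanq; /* adapter-wide count of enabled vchan queues */
	bool vq_enabled;    /* per-vchan enabled flag */
};

static void update(struct model *m, bool add)
{
	bool enabled = m->vq_enabled;

	if (add)
		m->nb_vchanq += !enabled; /* count only a 0 -> 1 transition */
	else
		m->nb_vchanq -= enabled;  /* count only a 1 -> 0 transition */
	m->vq_enabled = add;
}
```

Because the increment/decrement is gated on the previous state, calling add (or delete) twice for the same vchan is harmless.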

[PATCH v7 05/12] eventdev/dma: support adapter service function

2023-09-29 Thread Amit Prakash Shukla
Added support for DMA adapter service function for event devices.
Enqueue and dequeue of events to and from the eventdev and DMA device
are done based on the adapter mode and the supported HW capabilities.

Signed-off-by: Amit Prakash Shukla 
---
 lib/eventdev/rte_event_dma_adapter.c | 592 +++
 1 file changed, 592 insertions(+)

diff --git a/lib/eventdev/rte_event_dma_adapter.c b/lib/eventdev/rte_event_dma_adapter.c
index fa2e29b9d3..1d8bae0422 100644
--- a/lib/eventdev/rte_event_dma_adapter.c
+++ b/lib/eventdev/rte_event_dma_adapter.c
@@ -3,6 +3,7 @@
  */
 
 #include 
+#include 
 
 #include "rte_event_dma_adapter.h"
 
@@ -69,6 +70,10 @@ struct dma_device_info {
 
/* Number of vchans configured for a DMA device. */
uint16_t num_dma_dev_vchan;
+
+   /* Next queue pair to be processed */
+   uint16_t next_vchan_id;
+
 } __rte_cache_aligned;
 
 struct event_dma_adapter {
@@ -90,6 +95,9 @@ struct event_dma_adapter {
/* Lock to serialize config updates with service function */
rte_spinlock_t lock;
 
+   /* Next dma device to be processed */
+   uint16_t next_dmadev_id;
+
/* DMA device structure array */
struct dma_device_info *dma_devs;
 
@@ -107,6 +115,26 @@ struct event_dma_adapter {
 
/* No. of vchan queue configured */
uint16_t nb_vchanq;
+
+   /* Per adapter EAL service ID */
+   uint32_t service_id;
+
+   /* Service initialization state */
+   uint8_t service_initialized;
+
+   /* Max DMA ops processed in any service function invocation */
+   uint32_t max_nb;
+
+   /* Store event port's implicit release capability */
+   uint8_t implicit_release_disabled;
+
+   /* Flag to indicate backpressure at dma_dev
+* Stop further dequeuing events from eventdev
+*/
+   bool stop_enq_to_dma_dev;
+
+   /* Loop counter to flush dma ops */
+   uint16_t transmit_loop_count;
 } __rte_cache_aligned;
 
 static struct event_dma_adapter **event_dma_adapter;
@@ -148,6 +176,18 @@ edma_array_init(void)
return 0;
 }
 
+static inline bool
+edma_circular_buffer_batch_ready(struct dma_ops_circular_buffer *bufp)
+{
+   return bufp->count >= DMA_BATCH_SIZE;
+}
+
+static inline bool
+edma_circular_buffer_space_for_batch(struct dma_ops_circular_buffer *bufp)
+{
+   return (bufp->size - bufp->count) >= DMA_BATCH_SIZE;
+}
+
 static inline int
edma_circular_buffer_init(const char *name, struct dma_ops_circular_buffer *buf, uint16_t sz)
 {
@@ -166,6 +206,71 @@ edma_circular_buffer_free(struct dma_ops_circular_buffer *buf)
rte_free(buf->op_buffer);
 }
 
+static inline int
+edma_circular_buffer_add(struct dma_ops_circular_buffer *bufp, struct rte_event_dma_adapter_op *op)
+{
+   uint16_t *tail = &bufp->tail;
+
+   bufp->op_buffer[*tail] = op;
+
+   /* circular buffer, go round */
+   *tail = (*tail + 1) % bufp->size;
+   bufp->count++;
+
+   return 0;
+}
+
+static inline int
+edma_circular_buffer_flush_to_dma_dev(struct event_dma_adapter *adapter,
+ struct dma_ops_circular_buffer *bufp, uint8_t dma_dev_id,
+ uint16_t vchan, uint16_t *nb_ops_flushed)
+{
+   struct rte_event_dma_adapter_op *op;
+   struct dma_vchan_info *tq;
+   uint16_t *head = &bufp->head;
+   uint16_t *tail = &bufp->tail;
+   uint16_t n;
+   uint16_t i;
+   int ret;
+
+   if (*tail > *head)
+   n = *tail - *head;
+   else if (*tail < *head)
+   n = bufp->size - *head;
+   else {
+   *nb_ops_flushed = 0;
+   return 0; /* buffer empty */
+   }
+
+   tq = &adapter->dma_devs[dma_dev_id].tqmap[vchan];
+
+   for (i = 0; i < n; i++) {
+   op = bufp->op_buffer[*head];
+   if (op->nb_src == 1 && op->nb_dst == 1)
+   ret = rte_dma_copy(dma_dev_id, vchan, op->src_seg->addr, op->dst_seg->addr,
+  op->src_seg->length, op->flags);
+   else
+   ret = rte_dma_copy_sg(dma_dev_id, vchan, op->src_seg, op->dst_seg,
+ op->nb_src, op->nb_dst, op->flags);
+   if (ret < 0)
+   break;
+
+   /* Enqueue in transaction queue. */
+   edma_circular_buffer_add(&tq->dma_buf, op);
+
+   *head = (*head + 1) % bufp->size;
+   }
+
+   *nb_ops_flushed = i;
+   bufp->count -= *nb_ops_flushed;
+   if (!bufp->count) {
+   *head = 0;
+   *tail = 0;
+   }
+
+   return *nb_ops_flushed == n ? 0 : -1;
+}
+
 static int
edma_default_config_cb(uint8_t id, uint8_t evdev_id, struct rte_event_dma_adapter_conf *conf,
   void *arg)
@@ -360,6 +465,406 @@ rte_event_dma_adapter_free(uint8_t id)
return 0;
 }
 
+static inline unsigned int
+edma_enq_to_d
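The head/tail bookkeeping used by edma_circular_buffer_add() and edma_circular_buffer_flush_to_dma_dev() above can be sketched stand-alone. The model below is hypothetical (ops are plain integers instead of rte_event_dma_adapter_op pointers; all names are invented), mirroring the patch's behavior of flushing only the contiguous region up to the wrap point and resetting head/tail once the buffer drains:

```c
#include <stdint.h>
#include <string.h>

#define BUF_SIZE 8 /* arbitrary small capacity for the model */

/* Hypothetical model of the adapter's op ring. */
struct circ_buf {
	int ops[BUF_SIZE];
	uint16_t size;  /* ring capacity */
	uint16_t count; /* ops currently buffered */
	uint16_t head;  /* next op to flush */
	uint16_t tail;  /* next free slot */
};

static void circ_buf_init(struct circ_buf *b)
{
	memset(b, 0, sizeof(*b));
	b->size = BUF_SIZE;
}

static void circ_buf_add(struct circ_buf *b, int op)
{
	b->ops[b->tail] = op;
	b->tail = (b->tail + 1) % b->size; /* circular buffer, go round */
	b->count++;
}

/* Flush the contiguous region [head, tail) or, after a wrap, [head, size).
 * Returns the number flushed, like *nb_ops_flushed in the patch. */
static uint16_t circ_buf_flush(struct circ_buf *b, int *out)
{
	uint16_t n, i;

	if (b->tail > b->head)
		n = b->tail - b->head;
	else if (b->tail < b->head)
		n = b->size - b->head; /* only up to the wrap point */
	else
		return 0; /* buffer empty */

	for (i = 0; i < n; i++) {
		out[i] = b->ops[b->head];
		b->head = (b->head + 1) % b->size;
	}
	b->count -= n;
	if (b->count == 0)
		b->head = b->tail = 0; /* reset when fully drained */
	return n;
}
```

As in the patch, a flush after a wraparound returns only the ops up to the end of the array; a second call picks up the remainder from index zero.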

[PATCH v7 06/12] eventdev/dma: support adapter start and stop

2023-09-29 Thread Amit Prakash Shukla
Added API support to start and stop the DMA adapter.
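The control flow of the start/stop helper added below can be sketched as a stand-alone model (hypothetical names, no DPDK code): start skips devices with no configured vchan queues, stop skips devices that were never started, and a service core is only needed when some device lacks an internal event port.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_DEVS 3 /* arbitrary device count for the model */

/* Hypothetical per-device state mirroring struct dma_device_info. */
struct dev_model {
	uint16_t num_vchanq;      /* vchan queues configured */
	bool dev_started;
	bool internal_event_port; /* HW moves events itself */
};

/* Returns true when a service core must run, i.e. some processed
 * device has no internal event port (the use_service logic). */
static bool adapter_ctrl(struct dev_model devs[], int n, bool start)
{
	bool use_service = false;
	int i;

	for (i = 0; i < n; i++) {
		if (start && devs[i].num_vchanq == 0)
			continue; /* nothing configured: skip on start */
		if (!start && !devs[i].dev_started)
			continue; /* never started: skip on stop */
		use_service |= !devs[i].internal_event_port;
		devs[i].dev_started = start;
	}
	return use_service;
}
```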

Signed-off-by: Amit Prakash Shukla 
---
 lib/eventdev/rte_event_dma_adapter.c | 69 
 1 file changed, 69 insertions(+)

diff --git a/lib/eventdev/rte_event_dma_adapter.c b/lib/eventdev/rte_event_dma_adapter.c
index 1d8bae0422..be6c2623e9 100644
--- a/lib/eventdev/rte_event_dma_adapter.c
+++ b/lib/eventdev/rte_event_dma_adapter.c
@@ -74,6 +74,13 @@ struct dma_device_info {
/* Next queue pair to be processed */
uint16_t next_vchan_id;
 
+   /* Set to indicate processing has been started */
+   uint8_t dev_started;
+
+   /* Set to indicate dmadev->eventdev packet
+* transfer uses a hardware mechanism
+*/
+   uint8_t internal_event_port;
 } __rte_cache_aligned;
 
 struct event_dma_adapter {
@@ -1129,3 +1136,65 @@ rte_event_dma_adapter_vchan_del(uint8_t id, int16_t dma_dev_id, uint16_t vchan)
 
return ret;
 }
+
+static int
+edma_adapter_ctrl(uint8_t id, int start)
+{
+   struct event_dma_adapter *adapter;
+   struct dma_device_info *dev_info;
+   struct rte_eventdev *dev;
+   uint16_t num_dma_dev;
+   int stop = !start;
+   int use_service;
+   uint32_t i;
+
+   use_service = 0;
+   EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+   adapter = edma_id_to_adapter(id);
+   if (adapter == NULL)
+   return -EINVAL;
+
+   num_dma_dev = rte_dma_count_avail();
+   dev = &rte_eventdevs[adapter->eventdev_id];
+
+   for (i = 0; i < num_dma_dev; i++) {
+   dev_info = &adapter->dma_devs[i];
+   /* start check for num queue pairs */
+   if (start && !dev_info->num_vchanq)
+   continue;
+   /* stop check if dev has been started */
+   if (stop && !dev_info->dev_started)
+   continue;
+   use_service |= !dev_info->internal_event_port;
+   dev_info->dev_started = start;
+   if (dev_info->internal_event_port == 0)
+   continue;
+   start ? (*dev->dev_ops->dma_adapter_start)(dev, i) :
+   (*dev->dev_ops->dma_adapter_stop)(dev, i);
+   }
+
+   if (use_service)
+   rte_service_runstate_set(adapter->service_id, start);
+
+   return 0;
+}
+
+int
+rte_event_dma_adapter_start(uint8_t id)
+{
+   struct event_dma_adapter *adapter;
+
+   EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+   adapter = edma_id_to_adapter(id);
+   if (adapter == NULL)
+   return -EINVAL;
+
+   return edma_adapter_ctrl(id, 1);
+}
+
+int
+rte_event_dma_adapter_stop(uint8_t id)
+{
+   return edma_adapter_ctrl(id, 0);
+}
-- 
2.25.1



[PATCH v7 07/12] eventdev/dma: support adapter service ID get

2023-09-29 Thread Amit Prakash Shukla
Added API support to get the DMA adapter service ID. The service ID
returned by the API call shall be used by the application to map a
service core.

Signed-off-by: Amit Prakash Shukla 
---
 lib/eventdev/rte_event_dma_adapter.c | 17 +
 1 file changed, 17 insertions(+)

diff --git a/lib/eventdev/rte_event_dma_adapter.c b/lib/eventdev/rte_event_dma_adapter.c
index be6c2623e9..c3b014aaf9 100644
--- a/lib/eventdev/rte_event_dma_adapter.c
+++ b/lib/eventdev/rte_event_dma_adapter.c
@@ -1137,6 +1137,23 @@ rte_event_dma_adapter_vchan_del(uint8_t id, int16_t dma_dev_id, uint16_t vchan)
return ret;
 }
 
+int
+rte_event_dma_adapter_service_id_get(uint8_t id, uint32_t *service_id)
+{
+   struct event_dma_adapter *adapter;
+
+   EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+   adapter = edma_id_to_adapter(id);
+   if (adapter == NULL || service_id == NULL)
+   return -EINVAL;
+
+   if (adapter->service_initialized)
+   *service_id = adapter->service_id;
+
+   return adapter->service_initialized ? 0 : -ESRCH;
+}
+
 static int
 edma_adapter_ctrl(uint8_t id, int start)
 {
-- 
2.25.1



[PATCH v7 08/12] eventdev/dma: support adapter runtime params

2023-09-29 Thread Amit Prakash Shukla
Added support to set and get runtime params for DMA adapter. The
parameters that can be set/get are defined in
struct rte_event_dma_adapter_runtime_params.
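The set/get flow can be sketched stand-alone (hypothetical names, no DPDK code): init() zeroes the params struct and applies the 128-op default, while set() copies max_nb into the adapter (done under the adapter lock in the real patch).

```c
#include <stdint.h>
#include <string.h>

#define DEFAULT_MAX_NB 128 /* default from the patch */

/* Hypothetical model of rte_event_dma_adapter_runtime_params. */
struct runtime_params {
	uint32_t max_nb; /* max DMA ops processed per service invocation */
};

struct adapter_model {
	uint32_t max_nb;
};

static int params_init(struct runtime_params *p)
{
	if (p == NULL)
		return -1; /* -EINVAL in the patch */
	memset(p, 0, sizeof(*p));
	p->max_nb = DEFAULT_MAX_NB;
	return 0;
}

static void params_set(struct adapter_model *a, const struct runtime_params *p)
{
	a->max_nb = p->max_nb; /* taken under the adapter lock in the patch */
}
```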

Signed-off-by: Amit Prakash Shukla 
---
 lib/eventdev/rte_event_dma_adapter.c | 93 
 1 file changed, 93 insertions(+)

diff --git a/lib/eventdev/rte_event_dma_adapter.c b/lib/eventdev/rte_event_dma_adapter.c
index c3b014aaf9..632169a7c2 100644
--- a/lib/eventdev/rte_event_dma_adapter.c
+++ b/lib/eventdev/rte_event_dma_adapter.c
@@ -1215,3 +1215,96 @@ rte_event_dma_adapter_stop(uint8_t id)
 {
return edma_adapter_ctrl(id, 0);
 }
+
+#define DEFAULT_MAX_NB 128
+
+int
+rte_event_dma_adapter_runtime_params_init(struct rte_event_dma_adapter_runtime_params *params)
+{
+   if (params == NULL)
+   return -EINVAL;
+
+   memset(params, 0, sizeof(*params));
+   params->max_nb = DEFAULT_MAX_NB;
+
+   return 0;
+}
+
+static int
+dma_adapter_cap_check(struct event_dma_adapter *adapter)
+{
+   uint32_t caps;
+   int ret;
+
+   if (!adapter->nb_vchanq)
+   return -EINVAL;
+
+   ret = rte_event_dma_adapter_caps_get(adapter->eventdev_id, adapter->next_dmadev_id, &caps);
+   if (ret) {
+   RTE_EDEV_LOG_ERR("Failed to get adapter caps dev %" PRIu8 " cdev %" PRIu8,
+adapter->eventdev_id, adapter->next_dmadev_id);
+   return ret;
+   }
+
+   if ((caps & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) ||
+   (caps & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_NEW))
+   return -ENOTSUP;
+
+   return 0;
+}
+
+int
+rte_event_dma_adapter_runtime_params_set(uint8_t id,
+struct rte_event_dma_adapter_runtime_params *params)
+{
+   struct event_dma_adapter *adapter;
+   int ret;
+
+   EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+   if (params == NULL) {
+   RTE_EDEV_LOG_ERR("params pointer is NULL\n");
+   return -EINVAL;
+   }
+
+   adapter = edma_id_to_adapter(id);
+   if (adapter == NULL)
+   return -EINVAL;
+
+   ret = dma_adapter_cap_check(adapter);
+   if (ret)
+   return ret;
+
+   rte_spinlock_lock(&adapter->lock);
+   adapter->max_nb = params->max_nb;
+   rte_spinlock_unlock(&adapter->lock);
+
+   return 0;
+}
+
+int
+rte_event_dma_adapter_runtime_params_get(uint8_t id,
+struct rte_event_dma_adapter_runtime_params *params)
+{
+   struct event_dma_adapter *adapter;
+   int ret;
+
+   EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+   if (params == NULL) {
+   RTE_EDEV_LOG_ERR("params pointer is NULL\n");
+   return -EINVAL;
+   }
+
+   adapter = edma_id_to_adapter(id);
+   if (adapter == NULL)
+   return -EINVAL;
+
+   ret = dma_adapter_cap_check(adapter);
+   if (ret)
+   return ret;
+
+   params->max_nb = adapter->max_nb;
+
+   return 0;
+}
-- 
2.25.1



[PATCH v7 09/12] eventdev/dma: support adapter stats

2023-09-29 Thread Amit Prakash Shukla
Added DMA adapter stats API support to get and reset stats. The get
API reports the DMA SW adapter stats along with the enqueue and
dequeue stats supported by the eventdev driver.

Signed-off-by: Amit Prakash Shukla 
---
 lib/eventdev/rte_event_dma_adapter.c | 95 
 1 file changed, 95 insertions(+)

diff --git a/lib/eventdev/rte_event_dma_adapter.c b/lib/eventdev/rte_event_dma_adapter.c
index 632169a7c2..6c67e6d499 100644
--- a/lib/eventdev/rte_event_dma_adapter.c
+++ b/lib/eventdev/rte_event_dma_adapter.c
@@ -142,6 +142,9 @@ struct event_dma_adapter {
 
/* Loop counter to flush dma ops */
uint16_t transmit_loop_count;
+
+   /* Per instance stats structure */
+   struct rte_event_dma_adapter_stats dma_stats;
 } __rte_cache_aligned;
 
 static struct event_dma_adapter **event_dma_adapter;
@@ -475,6 +478,7 @@ rte_event_dma_adapter_free(uint8_t id)
 static inline unsigned int
 edma_enq_to_dma_dev(struct event_dma_adapter *adapter, struct rte_event *ev, unsigned int cnt)
 {
+   struct rte_event_dma_adapter_stats *stats = &adapter->dma_stats;
struct dma_vchan_info *vchan_qinfo = NULL;
struct rte_event_dma_adapter_op *dma_op;
uint16_t vchan, nb_enqueued = 0;
@@ -484,6 +488,7 @@ edma_enq_to_dma_dev(struct event_dma_adapter *adapter, struct rte_event *ev, uns
 
ret = 0;
n = 0;
+   stats->event_deq_count += cnt;
 
for (i = 0; i < cnt; i++) {
dma_op = ev[i].event_ptr;
@@ -506,6 +511,7 @@ edma_enq_to_dma_dev(struct event_dma_adapter *adapter, struct rte_event *ev, uns
	ret = edma_circular_buffer_flush_to_dma_dev(adapter, &vchan_qinfo->dma_buf,
							    dma_dev_id, vchan,
							    &nb_enqueued);
+   stats->dma_enq_count += nb_enqueued;
n += nb_enqueued;
 
/**
@@ -552,6 +558,7 @@ edma_adapter_dev_flush(struct event_dma_adapter *adapter, int16_t dma_dev_id,
 static unsigned int
 edma_adapter_enq_flush(struct event_dma_adapter *adapter)
 {
+   struct rte_event_dma_adapter_stats *stats = &adapter->dma_stats;
int16_t dma_dev_id;
uint16_t nb_enqueued = 0;
uint16_t nb_ops_flushed = 0;
@@ -566,6 +573,8 @@ edma_adapter_enq_flush(struct event_dma_adapter *adapter)
if (!nb_ops_flushed)
adapter->stop_enq_to_dma_dev = false;
 
+   stats->dma_enq_count += nb_enqueued;
+
return nb_enqueued;
 }
 
@@ -577,6 +586,7 @@ edma_adapter_enq_flush(struct event_dma_adapter *adapter)
 static int
 edma_adapter_enq_run(struct event_dma_adapter *adapter, unsigned int max_enq)
 {
+   struct rte_event_dma_adapter_stats *stats = &adapter->dma_stats;
uint8_t event_port_id = adapter->event_port_id;
uint8_t event_dev_id = adapter->eventdev_id;
struct rte_event ev[DMA_BATCH_SIZE];
@@ -596,6 +606,7 @@ edma_adapter_enq_run(struct event_dma_adapter *adapter, unsigned int max_enq)
break;
}
 
+   stats->event_poll_count++;
	n = rte_event_dequeue_burst(event_dev_id, event_port_id, ev, DMA_BATCH_SIZE, 0);
 
if (!n)
@@ -616,6 +627,7 @@ static inline uint16_t
 edma_ops_enqueue_burst(struct event_dma_adapter *adapter, struct rte_event_dma_adapter_op **ops,
   uint16_t num)
 {
+   struct rte_event_dma_adapter_stats *stats = &adapter->dma_stats;
uint8_t event_port_id = adapter->event_port_id;
uint8_t event_dev_id = adapter->eventdev_id;
struct rte_event events[DMA_BATCH_SIZE];
@@ -655,6 +667,10 @@ edma_ops_enqueue_burst(struct event_dma_adapter *adapter, struct rte_event_dma_a
 
	} while (retry++ < DMA_ADAPTER_MAX_EV_ENQ_RETRIES && nb_enqueued < nb_ev);
 
+   stats->event_enq_fail_count += nb_ev - nb_enqueued;
+   stats->event_enq_count += nb_enqueued;
+   stats->event_enq_retry_count += retry - 1;
+
return nb_enqueued;
 }
 
@@ -709,6 +725,7 @@ edma_ops_buffer_flush(struct event_dma_adapter *adapter)
 static inline unsigned int
 edma_adapter_deq_run(struct event_dma_adapter *adapter, unsigned int max_deq)
 {
+   struct rte_event_dma_adapter_stats *stats = &adapter->dma_stats;
struct dma_vchan_info *vchan_info;
struct dma_ops_circular_buffer *tq_buf;
struct rte_event_dma_adapter_op *ops;
@@ -746,6 +763,7 @@ edma_adapter_deq_run(struct event_dma_adapter *adapter, unsigned int max_deq)
continue;
 
done = false;
+   stats->dma_deq_count += n;
 
tq_buf = &dev_info->tqmap[vchan].dma_buf;
 
@@ -1308,3 +1326,80 @@ rte_event_dma_adapter_runtime_params_get(uint8_t id,
 
return 0;
 }
+
+int
+rte_event_dma_adapter_stats_get(uint8_t id,

[PATCH v7 10/12] eventdev/dma: support adapter enqueue

2023-09-29 Thread Amit Prakash Shukla
Added API support to enqueue a DMA operation to the DMA driver.
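The enqueue API added below is a thin fast-path wrapper over a per-device ops table. A hypothetical stand-alone model of that dispatch pattern (invented names, plain int events instead of struct rte_event) looks like this:

```c
#include <stdint.h>

#define MAX_DEVS 4  /* arbitrary bounds for the model */
#define MAX_PORTS 4

/* Hypothetical model of the fast-path ops table: a function pointer
 * plus per-port private data, both indexed by the public API. */
struct fp_ops {
	uint16_t (*dma_enqueue)(void *port, int ev[], uint16_t nb_events);
	void *data[MAX_PORTS]; /* per-port private data */
};

static struct fp_ops fp_ops_tbl[MAX_DEVS];

static uint16_t dummy_dma_enqueue(void *port, int ev[], uint16_t nb_events)
{
	(void)port;
	(void)ev;
	return nb_events; /* pretend everything was enqueued */
}

/* Thin wrapper, mirroring rte_event_dma_adapter_enqueue(): look up the
 * device's ops table and the port's data, then dispatch. */
static uint16_t adapter_enqueue(uint8_t dev_id, uint8_t port_id, int ev[],
				uint16_t nb_events)
{
	struct fp_ops *ops = &fp_ops_tbl[dev_id];

	return ops->dma_enqueue(ops->data[port_id], ev, nb_events);
}
```

Keeping the wrapper this thin is what allows the call to stay on the fast path: no validation beyond what the driver itself performs.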

Signed-off-by: Amit Prakash Shukla 
---
 lib/eventdev/rte_event_dma_adapter.c | 13 +
 1 file changed, 13 insertions(+)

diff --git a/lib/eventdev/rte_event_dma_adapter.c b/lib/eventdev/rte_event_dma_adapter.c
index 6c67e6d499..f299914dec 100644
--- a/lib/eventdev/rte_event_dma_adapter.c
+++ b/lib/eventdev/rte_event_dma_adapter.c
@@ -1403,3 +1403,16 @@ rte_event_dma_adapter_stats_reset(uint8_t id)
 
return 0;
 }
+
+uint16_t
+rte_event_dma_adapter_enqueue(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
+ uint16_t nb_events)
+{
+   const struct rte_event_fp_ops *fp_ops;
+   void *port;
+
+   fp_ops = &rte_event_fp_ops[dev_id];
+   port = fp_ops->data[port_id];
+
+   return fp_ops->dma_enqueue(port, ev, nb_events);
+}
-- 
2.25.1



[PATCH v7 11/12] eventdev/dma: support adapter event port get

2023-09-29 Thread Amit Prakash Shukla
Added support for DMA adapter event port get.

Signed-off-by: Amit Prakash Shukla 
---
 lib/eventdev/rte_event_dma_adapter.c | 16 
 1 file changed, 16 insertions(+)

diff --git a/lib/eventdev/rte_event_dma_adapter.c b/lib/eventdev/rte_event_dma_adapter.c
index f299914dec..af4b5ad388 100644
--- a/lib/eventdev/rte_event_dma_adapter.c
+++ b/lib/eventdev/rte_event_dma_adapter.c
@@ -475,6 +475,22 @@ rte_event_dma_adapter_free(uint8_t id)
return 0;
 }
 
+int
+rte_event_dma_adapter_event_port_get(uint8_t id, uint8_t *event_port_id)
+{
+   struct event_dma_adapter *adapter;
+
+   EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+   adapter = edma_id_to_adapter(id);
+   if (adapter == NULL || event_port_id == NULL)
+   return -EINVAL;
+
+   *event_port_id = adapter->event_port_id;
+
+   return 0;
+}
+
 static inline unsigned int
 edma_enq_to_dma_dev(struct event_dma_adapter *adapter, struct rte_event *ev, unsigned int cnt)
 {
-- 
2.25.1



[PATCH v7 12/12] app/test: add event DMA adapter auto-test

2023-09-29 Thread Amit Prakash Shukla
Added a testsuite to test the DMA adapter functionality.
The testsuite detects event and DMA device capabilities and
configures the DMA adapter accordingly; the supported modes
are then tested. Test command:

sudo /app/test/dpdk-test --vdev=dma_skeleton \
event_dma_adapter_autotest

Signed-off-by: Amit Prakash Shukla 
---
 MAINTAINERS   |   1 +
 app/test/meson.build  |   1 +
 app/test/test_event_dma_adapter.c | 805 ++
 3 files changed, 807 insertions(+)
 create mode 100644 app/test/test_event_dma_adapter.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 4ebbbe8bb3..92c0b47618 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -544,6 +544,7 @@ Eventdev DMA Adapter API
 M: Amit Prakash Shukla 
 T: git://dpdk.org/next/dpdk-next-eventdev
 F: lib/eventdev/*dma_adapter*
+F: app/test/test_event_dma_adapter.c
 F: doc/guides/prog_guide/event_dma_adapter.rst
 
 Raw device API
diff --git a/app/test/meson.build b/app/test/meson.build
index 05bae9216d..7caf5ae5fc 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -66,6 +66,7 @@ source_file_deps = {
 'test_errno.c': [],
 'test_ethdev_link.c': ['ethdev'],
 'test_event_crypto_adapter.c': ['cryptodev', 'eventdev', 'bus_vdev'],
+'test_event_dma_adapter.c': ['dmadev', 'eventdev', 'bus_vdev'],
 'test_event_eth_rx_adapter.c': ['ethdev', 'eventdev', 'bus_vdev'],
 'test_event_eth_tx_adapter.c': ['bus_vdev', 'ethdev', 'net_ring', 'eventdev'],
 'test_event_ring.c': ['eventdev'],
diff --git a/app/test/test_event_dma_adapter.c b/app/test/test_event_dma_adapter.c
new file mode 100644
index 00..1e193f4b52
--- /dev/null
+++ b/app/test/test_event_dma_adapter.c
@@ -0,0 +1,805 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2023 Marvell.
+ */
+
+#include "test.h"
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#ifdef RTE_EXEC_ENV_WINDOWS
+static int
+test_event_dma_adapter(void)
+{
+   printf("event_dma_adapter not supported on Windows, skipping test\n");
+   return TEST_SKIPPED;
+}
+
+#else
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#define NUM_MBUFS (8191)
+#define MBUF_CACHE_SIZE   (256)
+#define TEST_APP_PORT_ID   0
+#define TEST_APP_EV_QUEUE_ID   0
+#define TEST_APP_EV_PRIORITY   0
+#define TEST_APP_EV_FLOWID 0xAABB
+#define TEST_DMA_EV_QUEUE_ID   1
+#define TEST_ADAPTER_ID0
+#define TEST_DMA_DEV_ID0
+#define TEST_DMA_VCHAN_ID  0
+#define PACKET_LENGTH  1024
+#define NB_TEST_PORTS  1
+#define NB_TEST_QUEUES 2
+#define NUM_CORES  2
+#define DMA_OP_POOL_SIZE   128
+#define TEST_MAX_OP32
+#define TEST_RINGSIZE  512
+
+#define MBUF_SIZE  (RTE_PKTMBUF_HEADROOM + PACKET_LENGTH)
+
+/* Handle log statements in same manner as test macros */
+#define LOG_DBG(...) RTE_LOG(DEBUG, EAL, __VA_ARGS__)
+
+struct event_dma_adapter_test_params {
+   struct rte_mempool *src_mbuf_pool;
+   struct rte_mempool *dst_mbuf_pool;
+   struct rte_mempool *op_mpool;
+   uint8_t dma_event_port_id;
+   uint8_t internal_port_op_fwd;
+};
+
+struct rte_event dma_response_info = {
+   .queue_id = TEST_APP_EV_QUEUE_ID,
+   .sched_type = RTE_SCHED_TYPE_ATOMIC,
+   .flow_id = TEST_APP_EV_FLOWID,
+   .priority = TEST_APP_EV_PRIORITY
+};
+
+static struct event_dma_adapter_test_params params;
+static uint8_t dma_adapter_setup_done;
+static uint32_t slcore_id;
+static int evdev;
+
+static int
+send_recv_ev(struct rte_event *ev)
+{
+   struct rte_event recv_ev[TEST_MAX_OP];
+   uint16_t nb_enqueued = 0;
+   int i = 0;
+
+   if (params.internal_port_op_fwd) {
+   nb_enqueued = rte_event_dma_adapter_enqueue(evdev, TEST_APP_PORT_ID, ev,
+   TEST_MAX_OP);
+   } else {
+   while (nb_enqueued < TEST_MAX_OP) {
+   nb_enqueued += rte_event_enqueue_burst(evdev, TEST_APP_PORT_ID,
+  &ev[nb_enqueued], TEST_MAX_OP -
+  nb_enqueued);
+   }
+   }
+
+   TEST_ASSERT_EQUAL(nb_enqueued, TEST_MAX_OP, "Failed to send event to dma adapter\n");
+
+   while (i < TEST_MAX_OP) {
+   if (rte_event_dequeue_burst(evdev, TEST_APP_PORT_ID, &recv_ev[i], 1, 0) != 1)
+   continue;
+   i++;
+   }
+
+   TEST_ASSERT_EQUAL(i, TEST_MAX_OP, "Test failed. Failed to dequeue events.\n");
+
+   return TEST_SUCCESS;
+}
+
+static int
+test_dma_adapter_stats(void)
+{
+   struct rte_event_dma_adapter_stats stats;
+
+   rte_event_dma_adapter_stats_get(TEST_ADAPTER_ID, &stats);
+   printf(" +--+\n");
+   printf(" + DMA adapt

RE: [PATCH v6 1/6] eal: provide rte stdatomics optional atomics API

2023-09-29 Thread Morten Brørup
> From: David Marchand [mailto:david.march...@redhat.com]
> Sent: Friday, 29 September 2023 10.04
> 
> On Thu, Sep 28, 2023 at 10:06 AM Thomas Monjalon  wrote:
> >
> > 22/08/2023 23:00, Tyler Retzlaff:

[...]

> > > +/* The memory order is an enumerated type in C11. */
> > > +typedef memory_order rte_memory_order;
> > > +
> > > +#define rte_memory_order_relaxed memory_order_relaxed
> > > +#ifdef __ATOMIC_RELAXED
> > > +static_assert(rte_memory_order_relaxed == __ATOMIC_RELAXED,
> > > + "rte_memory_order_relaxed == __ATOMIC_RELAXED");
> >
> > Not sure about using static_assert or RTE_BUILD_BUG_ON
> 
> Do you mean you want no check at all in a public facing header?
> 
> Or is it that we have RTE_BUILD_BUG_ON and we should keep using it
> instead of static_assert?
> 
> I remember some problems with RTE_BUILD_BUG_ON where the compiler
> would silently drop the whole expression and reported no problem as it
> could not evaluate the expression.
> At least, with static_assert (iirc, it is new to C11) the compiler
> complains with a clear "error: expression in static assertion is not
> constant".
> We could fix RTE_BUILD_BUG_ON, but I guess the fix would be equivalent
> to map it to static_assert(!condition).
> Using language standard constructs seems a better choice.

+1 to using language standard constructs.

static_assert became standard in C11. (Formally, _Static_assert is standard 
C11, and static_assert is available through a convenience macro in C11 [1].)

In my opinion, our move to C11 thus makes RTE_BUILD_BUG_ON obsolete.

We should mark RTE_BUILD_BUG_ON as deprecated, and disallow RTE_BUILD_BUG_ON in 
new code. Perhaps checkpatches could catch this?

[1]: https://en.cppreference.com/w/c/language/_Static_assert

PS: static_assert also has the advantage that it can be used directly in header 
files. RTE_BUILD_BUG_ON can only be used in functions, and thus needs to be 
wrapped in a dummy (static inline) function when used in a header file.
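The scoping difference can be demonstrated stand-alone (hypothetical macro name; not the DPDK definition): static_assert works directly at file scope, while the negative-array-size expression behind RTE_BUILD_BUG_ON is an expression statement and must sit inside a function.

```c
#include <assert.h>  /* provides the static_assert convenience macro (C11) */
#include <stdint.h>

/* File-scope compile-time check: fine with static_assert. */
static_assert(sizeof(uint32_t) == 4, "uint32_t must be 4 bytes");

/* The classic negative-array-size trick: compiles only when the
 * condition is false, but only usable at block scope. */
#define BUILD_BUG_ON(condition) ((void)sizeof(char[1 - 2*!!(condition)]))

/* Header-file usage of the trick needs a dummy function like this. */
static inline void header_checks(void)
{
	BUILD_BUG_ON(sizeof(uint16_t) != 2);
}
```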

> 
> 
> --
> David Marchand



Re: [PATCH v6 1/6] eal: provide rte stdatomics optional atomics API

2023-09-29 Thread David Marchand
On Fri, Sep 29, 2023 at 10:54 AM Morten Brørup  
wrote:
> In my opinion, our move to C11 thus makes RTE_BUILD_BUG_ON obsolete.

That's my thought too.

>
> We should mark RTE_BUILD_BUG_ON as deprecated, and disallow RTE_BUILD_BUG_ON 
> in new code. Perhaps checkpatches could catch this?

For a clear deprecation of a part of DPDK API, I don't see a need to
add something in checkpatch.
Putting a RTE_DEPRECATED in RTE_BUILD_BUG_ON directly triggers a build
warning (caught by CI since we run with Werror).


diff --git a/lib/eal/include/rte_common.h b/lib/eal/include/rte_common.h
index 771c70f2c8..40542629c1 100644
--- a/lib/eal/include/rte_common.h
+++ b/lib/eal/include/rte_common.h
@@ -495,7 +495,7 @@ rte_is_aligned(const void * const __rte_restrict
ptr, const unsigned int align)
 /**
  * Triggers an error at compilation time if the condition is true.
  */
-#define RTE_BUILD_BUG_ON(condition) ((void)sizeof(char[1 - 2*!!(condition)]))
+#define RTE_BUILD_BUG_ON(condition) RTE_DEPRECATED(RTE_BUILD_BUG_ON) ((void)sizeof(char[1 - 2*!!(condition)]))

 /*** Cache line related macros /



$ ninja -C build-mini
...
[18/333] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
../lib/eal/common/eal_common_trace.c: In function ‘eal_trace_init’:
../lib/eal/common/eal_common_trace.c:44:20: warning:
"RTE_BUILD_BUG_ON" is deprecated
   44 | RTE_BUILD_BUG_ON((offsetof(struct __rte_trace_header,
mem) % 8) != 0);
  |^~~~
[38/333] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
../lib/eal/common/malloc_heap.c: In function ‘malloc_heap_destroy’:
../lib/eal/common/malloc_heap.c:1398:20: warning: "RTE_BUILD_BUG_ON"
is deprecated
 1398 | RTE_BUILD_BUG_ON(offsetof(struct malloc_heap, lock) != 0);
  |^~~~
[50/333] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
../lib/eal/unix/rte_thread.c: In function ‘rte_thread_self’:
../lib/eal/unix/rte_thread.c:239:20: warning: "RTE_BUILD_BUG_ON" is deprecated
  239 | RTE_BUILD_BUG_ON(sizeof(pthread_t) > sizeof(uintptr_t));
  |^~~~


-- 
David Marchand



RE: [PATCH v16 1/8] net/ntnic: initial commit which adds register defines

2023-09-29 Thread Christian Koue Muf
On 9/21/2023 4:05 PM, Ferruh Yigit wrote:
>On 9/20/2023 2:17 PM, Thomas Monjalon wrote:
>> Hello,
>> 
>> 19/09/2023 11:06, Christian Koue Muf:
>>> On 9/18/23 10:34 AM, Ferruh Yigit wrote:
 On 9/15/2023 7:37 PM, Morten Brørup wrote:
>> From: Ferruh Yigit [mailto:ferruh.yi...@amd.com]
>> Sent: Friday, 15 September 2023 17.55
>>
>> On 9/8/2023 5:07 PM, Mykola Kostenok wrote:
>>> From: Christian Koue Muf 
>>>
>>> The NTNIC PMD does not rely on a kernel space Napatech driver, 
>>> thus all defines related to the register layout is part of the 
>>> PMD code, which will be added in later commits.
>>>
>>> Signed-off-by: Christian Koue Muf 
>>> Reviewed-by: Mykola Kostenok 
>>>
>>
>> Hi Mykola, Christiam,
>>
>> This PMD scares me, overall it is a big drop:
>> "249 files changed, 87128 insertions(+)"
>>
>> I think it is not possible to review all in one release cycle, and 
>> it is not even possible to say if all code used or not.
>>
>> I can see code is already developed, and it is difficult to 
>> restructure developed code, but restructure it into small pieces 
>> really helps for reviews.
>>
>>
>> Driver supports good list of features, can it be possible to 
>> distribute upstream effort into multiple release.
>> Starting from basic functionality and add features gradually.
>> Target for this release can be providing datapath, and add more if 
>> we have time in the release, what do you think?
>> 
>> I was expecting to get only Rx/Tx in this release, not really more.
>> 
>> I agree it may be interesting to discuss some design and check whether 
>> we need more features in ethdev as part of the driver upstreaming 
>> process.
>> 
>> 
>> Also there are large amount of base code (HAL / FPGA code), 
>> instead of adding them as a bulk, relevant ones with a feature can 
>> be added with the feature patch, this eliminates dead code in the 
>> base code layer, also helps user/review to understand the link 
>> between driver code and base code.
>> 
>> Yes it would be interesting to see what is really needed for the basic 
>> initialization and what is linked to a specific offload or configuration 
>> feature.
>> 
>> As a maintainer, I have to do some changes across all drivers 
>> sometimes, and I use git blame a lot to understand why something was added.
>> 
>> 
> Jumping in here with an opinion about welcoming new NIC vendors to the 
> community:
>
> Generally, if a NIC vendor supplies a PMD for their NIC, I expect the 
> vendor to take responsibility for the quality of the PMD, including 
> providing a maintainer and support backporting of fixes to the PMD in LTS 
> releases. This should align with the vendor's business case for 
> upstreaming their driver.
>
> If the vendor provides one big patch series, which may be difficult to 
> understand/review, the fallout mainly hits the vendor's customers (and 
> thus the vendor's support organization), not the community as a whole.
>

 Hi Morten,

 I was thinking same before making my above comment, what happens if 
 vendors submit as one big patch and when a problem occurs we can ask owner 
 to fix. Probably this makes vendor happy and makes my life (or any other 
 maintainer's life) easier, it is always easier to say yes.


 But I come up with two main reasons to ask for a rework:

 1- Technically any vendor can deliver their software to their 
 customers via a public git repository, they don't have to upstream 
 to https://dpdk.org, but upstreaming has many benefits.

 One of those benefits is upstreaming provides a quality assurance for 
 vendor's customers (that is why customer can be asking for this, as we are 
 having in many cases), and this quality assurance comes from additional 
 eyes reviewing the code and guiding vendors for the DPDK quality standards 
 (some vendors already doing pretty good, but new ones sometimes requires 
 hand-holding).

 If driver is one big patch series, it is practically not possible to 
 review it, I can catch a few bits here or there, you may some others, but 
 practically it will be merged without review, and we will fail on our 
 quality assurance task.

 2- Make code more accessible to the rest of the world.

 When it is a big patch, the code can be functional, but lots of details, 
 reasoning, and relations between components get lost, which makes it even 
 harder for an external developer, like me, to understand it (I am a mere 
 guinea pig here :).

 If a customer would like to add a feature themselves, or

Re: [PATCH v6 1/6] eal: provide rte stdatomics optional atomics API

2023-09-29 Thread Bruce Richardson
On Fri, Sep 29, 2023 at 11:02:38AM +0200, David Marchand wrote:
> On Fri, Sep 29, 2023 at 10:54 AM Morten Brørup  
> wrote:
> > In my opinion, our move to C11 thus makes RTE_BUILD_BUG_ON obsolete.
> 
> That's my thought too.
> 
> >
> > We should mark RTE_BUILD_BUG_ON as deprecated, and disallow 
> > RTE_BUILD_BUG_ON in new code. Perhaps checkpatches could catch this?
> 
> For a clear deprecation of a part of DPDK API, I don't see a need to
> add something in checkpatch.
> Putting a RTE_DEPRECATED in RTE_BUILD_BUG_ON directly triggers a build
> warning (caught by CI since we run with Werror).
> 

Would it not be sufficient to just make it an alias for the C11 static
assertions? It's not like it's a lot of code to maintain, and if app users
have it in their code, I'm not sure we get massive benefit from forcing them
to edit their code. I'd rather see it kept as a one-line macro purely from
a backward compatibility viewpoint. We can replace internal usages, though
- which can be checked by checkpatch.

/Bruce


RE: [EXT] Re: [PATCH v4 0/3] Introduce event link profiles

2023-09-29 Thread Pavan Nikhilesh Bhagavatula
> On Thu, Sep 28, 2023 at 3:42 PM  wrote:
> >
> > From: Pavan Nikhilesh 
> 
> + @Thomas Monjalon  @David Marchand  @Aaron Conole  @Michael
> Santana
> 
> There is a CI failure in the apply stage [1], where it is taking the main
> tree commit. Not sure why it is taking the main tree.
> 
> Pavan,
> 
> Could you resend this series again to give the CI one more chance?
> 
> 
> [1]
> https://patches.dpdk.org/project/dpdk/patch/20230928101205.4352-2-pbhagavatula@marvell.com/
> 
> 

The CI script which decides the tree to run tests on needs an update when 
a series contains a spec change followed by a driver implementation. 
I submitted the following patch to c...@dpdk.org:

https://patches.dpdk.org/project/ci/patch/20230929083443.9925-1-pbhagavat...@marvell.com/

> 
> >
> > A collection of event queues linked to an event port can be associated
> > with a unique identifier called a link profile; multiple such profiles
> > can be configured based on the event device capability using the function
> > `rte_event_port_profile_links_set`, which takes arguments similar to
> > `rte_event_port_link` in addition to the profile identifier.
> >
> > The maximum link profiles that are supported by an event device is
> > advertised through the structure member
> > `rte_event_dev_info::max_profiles_per_port`.
> >
> > By default, event ports are configured to use the link profile 0 on
> > initialization.
> >
> > Once multiple link profiles are set up and the event device is started, the
> > application can use the function `rte_event_port_profile_switch` to change
> > the currently active profile on an event port. This affects the next
> > `rte_event_dequeue_burst` call, where the event queues associated with
> > the newly active link profile will participate in scheduling.
> >
> > A rudimentary workflow would look something like:
> >
> > Config path:
> >
> > uint8_t lq[4] = {4, 5, 6, 7};
> > uint8_t hq[4] = {0, 1, 2, 3};
> >
> > if (rte_event_dev_info.max_profiles_per_port < 2)
> > return -ENOTSUP;
> >
> > rte_event_port_profile_links_set(0, 0, hq, NULL, 4, 0);
> > rte_event_port_profile_links_set(0, 0, lq, NULL, 4, 1);
> >
> > Worker path:
> >
> > empty_high_deq = 0;
> > empty_low_deq = 0;
> > is_low_deq = 0;
> > while (1) {
> > deq = rte_event_dequeue_burst(0, 0, &ev, 1, 0);
> > if (deq == 0) {
> > /**
> >  * Change link profile based on work activity on current
> >  * active profile
> >  */
> > if (is_low_deq) {
> > empty_low_deq++;
> > if (empty_low_deq == MAX_LOW_RETRY) {
> > rte_event_port_profile_switch(0, 0, 0);
> > is_low_deq = 0;
> > empty_low_deq = 0;
> > }
> > continue;
> > }
> >
> > /* count empty dequeues on the high-priority profile */
> > empty_high_deq++;
> > if (empty_high_deq == MAX_HIGH_RETRY) {
> > rte_event_port_profile_switch(0, 0, 1);
> > is_low_deq = 1;
> > empty_high_deq = 0;
> > }
> > continue;
> > }
> >
> > // Process the event received.
> >
> > if (is_low_deq++ == MAX_LOW_EVENTS) {
> > rte_event_port_profile_switch(0, 0, 0);
> > is_low_deq = 0;
> > }
> > }
> >
> > An application could use heuristic data of load/activity of a given event
> > port and change its active profile to adapt to the traffic pattern.
> >
> > An unlink function `rte_event_port_profile_unlink` is provided to
> > modify the links associated to a profile, and
> > `rte_event_port_profile_links_get` can be used to retrieve the links
> > associated with a profile.
> >
> > Using link profiles can reduce the overhead of linking/unlinking and
> > waiting for in-progress unlinks in the fast path, and gives applications
> > the ability to switch between preset profiles on the fly.
> >
> > v4 Changes:
> > --
> > - Address review comments (Jerin).
> >
> > v3 Changes:
> > --
> > - Rebase to next-eventdev
> > - Rename testcase name to match API.
> >
> > v2 Changes:
> > --
> > - Fix compilation.
> >
> > Pavan Nikhilesh (3):
> >   eventdev: introduce link profiles
> >   event/cnxk: implement event link profiles
> >   test/event: add event link profile test
> >
> >  app/test/test_eventdev.c  | 117 +++
> >  config/rte_config.h   |   1 +
> >  doc/guides/eventdevs/cnxk.rst |   1 +
> >  doc/guides/eventdevs/features/cnxk.ini|   3 +-
> >  doc/guides/eventdevs/features/default.ini |   1 +
> >  doc/guides/prog_guide/eventdev.rst|  40 
> >  doc/guides/rel_notes/release_23_11.rst|  14 +-
> >  drivers/common/cnx

Re: [PATCH v6 1/6] eal: provide rte stdatomics optional atomics API

2023-09-29 Thread David Marchand
On Fri, Sep 29, 2023 at 11:26 AM Bruce Richardson
 wrote:
>
> On Fri, Sep 29, 2023 at 11:02:38AM +0200, David Marchand wrote:
> > On Fri, Sep 29, 2023 at 10:54 AM Morten Brørup  
> > wrote:
> > > In my opinion, our move to C11 thus makes RTE_BUILD_BUG_ON obsolete.
> >
> > That's my thought too.
> >
> > >
> > > We should mark RTE_BUILD_BUG_ON as deprecated, and disallow 
> > > RTE_BUILD_BUG_ON in new code. Perhaps checkpatches could catch this?
> >
> > For a clear deprecation of a part of DPDK API, I don't see a need to
> > add something in checkpatch.
> > Putting a RTE_DEPRECATED in RTE_BUILD_BUG_ON directly triggers a build
> > warning (caught by CI since we run with Werror).
> >
>
> Would it not be sufficient to just make it an alias for the C11 static
> assertions? It's not like it's a lot of code to maintain, and if app users
> have it in their code I'm not sure we get massive benefit from forcing them
> to edit their code. I'd rather see it kept as a one-line macro purely from
> a backward compatibility viewpoint. We can replace internal usages, though
> - which can be checked by checkpatch.

No, there is no massive benefit; I am just trying to reduce our
ever-growing API surface.

Note, this macro should have been kept internal, but it was introduced
at a time when such matters were not considered...


-- 
David Marchand



Re: [PATCH v16 1/8] net/ntnic: initial commit which adds register defines

2023-09-29 Thread Ferruh Yigit
On 9/29/2023 10:21 AM, Christian Koue Muf wrote:
> On 9/21/2023 4:05 PM, Ferruh Yigit wrote:
>> On 9/20/2023 2:17 PM, Thomas Monjalon wrote:
>>> Hello,
>>>
>>> 19/09/2023 11:06, Christian Koue Muf:
 On 9/18/23 10:34 AM, Ferruh Yigit wrote:
> On 9/15/2023 7:37 PM, Morten Brørup wrote:
>>> From: Ferruh Yigit [mailto:ferruh.yi...@amd.com]
>>> Sent: Friday, 15 September 2023 17.55
>>>
>>> On 9/8/2023 5:07 PM, Mykola Kostenok wrote:
 From: Christian Koue Muf 

 The NTNIC PMD does not rely on a kernel space Napatech driver,
 thus all defines related to the register layout are part of the
 PMD code, which will be added in later commits.

 Signed-off-by: Christian Koue Muf 
 Reviewed-by: Mykola Kostenok 

>>>
>>> Hi Mykola, Christiam,
>>>
>>> This PMD scares me, overall it is a big drop:
>>> "249 files changed, 87128 insertions(+)"
>>>
>>> I think it is not possible to review it all in one release cycle, and
>>> it is not even possible to say if all the code is used or not.
>>>
>>> I can see code is already developed, and it is difficult to
>>> restructure developed code, but restructure it into small pieces
>>> really helps for reviews.
>>>
>>>
>>> The driver supports a good list of features; would it be possible to
>>> distribute the upstream effort across multiple releases,
>>> starting from basic functionality and adding features gradually?
>>> The target for this release can be providing the datapath, and adding
>>> more if we have time in the release. What do you think?
>>>
>>> I was expecting to get only Rx/Tx in this release, not really more.
>>>
>>> I agree it may be interesting to discuss some design and check whether
>>> we need more features in ethdev as part of the driver upstreaming
>>> process.
>>>
>>>
>>> Also there are large amount of base code (HAL / FPGA code),
>>> instead of adding them as a bulk, relevant ones with a feature can
>>> be added with the feature patch, this eliminates dead code in the
>>> base code layer, also helps user/review to understand the link
>>> between driver code and base code.
>>>
>>> Yes it would be interesting to see what is really needed for the basic
>>> initialization and what is linked to a specific offload or configuration 
>>> feature.
>>>
>>> As a maintainer, I have to do some changes across all drivers
>>> sometimes, and I use git blame a lot to understand why something was added.
>>>
>>>
>> Jumping in here with an opinion about welcoming new NIC vendors to the 
>> community:
>>
>> Generally, if a NIC vendor supplies a PMD for their NIC, I expect the 
>> vendor to take responsibility for the quality of the PMD, including 
>> providing a maintainer and support backporting of fixes to the PMD in 
>> LTS releases. This should align with the vendor's business case for 
>> upstreaming their driver.
>>
>> If the vendor provides one big patch series, which may be difficult to 
>> understand/review, the fallout mainly hits the vendor's customers (and 
>> thus the vendor's support organization), not the community as a whole.
>>
>
> Hi Morten,
>
> I was thinking the same before making my above comment: what happens if 
> vendors submit one big patch and, when a problem occurs, we ask the 
> owner to fix it? Probably this makes the vendor happy and makes my life (or 
> any other maintainer's life) easier; it is always easier to say yes.
>
>
> But I come up with two main reasons to ask for a rework:
>
> 1- Technically any vendor can deliver their software to their
> customers via a public git repository; they don't have to upstream
> to https://dpdk.org, but upstreaming has many benefits.
>
> One of those benefits is that upstreaming provides a quality assurance for 
> the vendor's customers (that is why customers may be asking for this, as we 
> are seeing in many cases), and this quality assurance comes from 
> additional eyes reviewing the code and guiding vendors toward the DPDK 
> quality standards (some vendors are already doing pretty well, but new ones 
> sometimes require hand-holding).
>
> If the driver is one big patch series, it is practically not possible to 
> review it. I can catch a few bits here or there, and you may catch some 
> others, but in practice it will be merged without review, and we will fail 
> in our quality assurance task.
>
> 2- Make code more accessible to the rest of the world.
>
> When it is a big patch, the code can be functional, but lots of details, 
> reasoning, and relations between components get lost, which makes it even 
> harder for an external develope

[PATCH v8 00/12] add CLI based graph application

2023-09-29 Thread skori
From: Sunil Kumar Kori 

In continuation of the following feedback
https://patches.dpdk.org/project/dpdk/patch/20230425131516.3308612-5-vattun...@marvell.com/
this patch series adds a dpdk-graph application to exercise various
use cases using graph.

1. Each use case is defined in terms of a .cli file which will contain a
set of commands to configure the system and to create a graph for
that use case.

2. Each module, like ethdev, mempool, route etc., exposes its set of commands
for global and node-specific configuration.

3. Command parsing is backed by the command line library.

Rakesh Kudurumalla (5):
  app/graph: add mempool command line interfaces
  app/graph: add ipv6_lookup command line interfaces
  app/graph: add ethdev_rx command line interfaces
  app/graph: add graph command line interfaces
  app/graph: add l3fwd use case

Sunil Kumar Kori (7):
  app/graph: add application framework to read CLI
  app/graph: add telnet connectivity framework
  app/graph: add parser utility APIs
  app/graph: add ethdev command line interfaces
  app/graph: add ipv4_lookup command line interfaces
  app/graph: add neigh command line interfaces
  app/graph: add CLI option to enable graph stats

 MAINTAINERS  |   7 +
 app/graph/cli.c  | 136 +++
 app/graph/cli.h  |  32 +
 app/graph/conn.c | 282 ++
 app/graph/conn.h |  46 +
 app/graph/ethdev.c   | 885 +++
 app/graph/ethdev.h   |  40 +
 app/graph/ethdev_priv.h  | 112 +++
 app/graph/ethdev_rx.c| 165 
 app/graph/ethdev_rx.h|  37 +
 app/graph/ethdev_rx_priv.h   |  39 +
 app/graph/examples/l3fwd.cli |  87 ++
 app/graph/graph.c| 550 
 app/graph/graph.h|  21 +
 app/graph/graph_priv.h   |  70 ++
 app/graph/ip4_route.c| 224 +
 app/graph/ip6_route.c| 229 +
 app/graph/l3fwd.c| 136 +++
 app/graph/l3fwd.h|  11 +
 app/graph/main.c | 237 +
 app/graph/mempool.c  | 140 +++
 app/graph/mempool.h  |  24 +
 app/graph/mempool_priv.h |  34 +
 app/graph/meson.build|  25 +
 app/graph/module_api.h   |  31 +
 app/graph/neigh.c| 366 
 app/graph/neigh.h|  17 +
 app/graph/neigh_priv.h   |  49 +
 app/graph/route.h|  40 +
 app/graph/route_priv.h   |  44 +
 app/graph/utils.c| 156 
 app/graph/utils.h|  14 +
 app/meson.build  |   1 +
 doc/guides/tools/graph.rst   | 241 +
 doc/guides/tools/img/graph-usecase-l3fwd.svg | 210 +
 doc/guides/tools/index.rst   |   1 +
 36 files changed, 4739 insertions(+)
 create mode 100644 app/graph/cli.c
 create mode 100644 app/graph/cli.h
 create mode 100644 app/graph/conn.c
 create mode 100644 app/graph/conn.h
 create mode 100644 app/graph/ethdev.c
 create mode 100644 app/graph/ethdev.h
 create mode 100644 app/graph/ethdev_priv.h
 create mode 100644 app/graph/ethdev_rx.c
 create mode 100644 app/graph/ethdev_rx.h
 create mode 100644 app/graph/ethdev_rx_priv.h
 create mode 100644 app/graph/examples/l3fwd.cli
 create mode 100644 app/graph/graph.c
 create mode 100644 app/graph/graph.h
 create mode 100644 app/graph/graph_priv.h
 create mode 100644 app/graph/ip4_route.c
 create mode 100644 app/graph/ip6_route.c
 create mode 100644 app/graph/l3fwd.c
 create mode 100644 app/graph/l3fwd.h
 create mode 100644 app/graph/main.c
 create mode 100644 app/graph/mempool.c
 create mode 100644 app/graph/mempool.h
 create mode 100644 app/graph/mempool_priv.h
 create mode 100644 app/graph/meson.build
 create mode 100644 app/graph/module_api.h
 create mode 100644 app/graph/neigh.c
 create mode 100644 app/graph/neigh.h
 create mode 100644 app/graph/neigh_priv.h
 create mode 100644 app/graph/route.h
 create mode 100644 app/graph/route_priv.h
 create mode 100644 app/graph/utils.c
 create mode 100644 app/graph/utils.h
 create mode 100644 doc/guides/tools/graph.rst
 create mode 100644 doc/guides/tools/img/graph-usecase-l3fwd.svg

-- 
2.25.1



[PATCH v8 01/12] app/graph: add application framework to read CLI

2023-09-29 Thread skori
From: Sunil Kumar Kori 

It adds the base framework to read a given .cli file passed via the
command line parameter "-s".

Example:
 # ./dpdk-graph -c 0xff -- -s ./app/graph/examples/dummy.cli

Each .cli file will contain commands to configure different modules like
mempool, ethdev, lookup tables, graph etc. Command parsing is backed by
the commandline library.

Each module needs to expose its supported commands and corresponding
callback functions to the commandline library to get them parsed.

Signed-off-by: Sunil Kumar Kori 
Signed-off-by: Rakesh Kudurumalla 
---
v7..v8:
 - Fix klocwork issues.

v6..v7:
 - Fix FreeBSD build error.
 - Make route and neigh runtime configuration too.

v5..v6:
 - Fix build errors.
 - Fix checkpatch errors.
 - Fix individual patch build errors.

v4..v5:
 - Fix application exit issue.
 - Enable graph packet capture feature.
 - Fix graph coremask synchronization with eal coremask.
 - Update user guide.

https://patches.dpdk.org/project/dpdk/patch/20230919160455.1678716-1-sk...@marvell.com/

v3..v4:
 - Use commandline library to parse command tokens.
 - Split to multiple smaller patches.
 - Make neigh and route as dynamic database.
 - add ethdev and graph stats command via telnet.
 - Update user guide.

https://patches.dpdk.org/project/dpdk/patch/20230908104907.4060511-1-sk...@marvell.com/

 MAINTAINERS|   7 ++
 app/graph/cli.c| 113 
 app/graph/cli.h|  32 ++
 app/graph/main.c   | 128 +
 app/graph/meson.build  |  15 +
 app/graph/module_api.h |  16 +
 app/meson.build|   1 +
 doc/guides/tools/graph.rst |  82 
 doc/guides/tools/index.rst |   1 +
 9 files changed, 395 insertions(+)
 create mode 100644 app/graph/cli.c
 create mode 100644 app/graph/cli.h
 create mode 100644 app/graph/main.c
 create mode 100644 app/graph/meson.build
 create mode 100644 app/graph/module_api.h
 create mode 100644 doc/guides/tools/graph.rst

diff --git a/MAINTAINERS b/MAINTAINERS
index 00f5a5f9e6..7998be98f1 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1811,6 +1811,13 @@ F: dts/
 F: devtools/dts-check-format.sh
 F: doc/guides/tools/dts.rst
 
+Graph application
+M: Sunil Kumar Kori 
+M: Rakesh Kudurumalla 
+F: app/graph/
+F: doc/guides/tools/graph.rst
+F: doc/guides/tools/img/graph-usecase-l3fwd.svg
+
 
 Other Example Applications
 --
diff --git a/app/graph/cli.c b/app/graph/cli.c
new file mode 100644
index 00..473fa1635a
--- /dev/null
+++ b/app/graph/cli.c
@@ -0,0 +1,113 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Marvell.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "module_api.h"
+
+#define CMD_MAX_TOKENS 256
+#define MAX_LINE_SIZE 2048
+
+cmdline_parse_ctx_t modules_ctx[] = {
+   NULL,
+};
+
+static struct cmdline *cl;
+
+static int
+is_comment(char *in)
+{
+   if ((strlen(in) && index("!#%;", in[0])) ||
+   (strncmp(in, "//", 2) == 0) ||
+   (strncmp(in, "--", 2) == 0))
+   return 1;
+
+   return 0;
+}
+
+void
+cli_init(void)
+{
+   cl = cmdline_stdin_new(modules_ctx, "");
+}
+
+void
+cli_exit(void)
+{
+   cmdline_stdin_exit(cl);
+}
+
+void
+cli_process(char *in, char *out, size_t out_size, __rte_unused void *obj)
+{
+   int rc;
+
+   if (is_comment(in))
+   return;
+
+   rc = cmdline_parse(cl, in);
+   if (rc == CMDLINE_PARSE_AMBIGUOUS)
+   snprintf(out, out_size, MSG_CMD_FAIL, "Ambiguous command");
+   else if (rc == CMDLINE_PARSE_NOMATCH)
+   snprintf(out, out_size, MSG_CMD_FAIL, "Command mismatch");
+   else if (rc == CMDLINE_PARSE_BAD_ARGS)
+   snprintf(out, out_size, MSG_CMD_FAIL, "Bad arguments");
+
+   return;
+
+}
+
+int
+cli_script_process(const char *file_name, size_t msg_in_len_max, size_t 
msg_out_len_max, void *obj)
+{
+   char *msg_in = NULL, *msg_out = NULL;
+   FILE *f = NULL;
+
+   /* Check input arguments */
+   if ((file_name == NULL) || (strlen(file_name) == 0) || (msg_in_len_max 
== 0) ||
+   (msg_out_len_max == 0))
+   return -EINVAL;
+
+   msg_in = malloc(msg_in_len_max + 1);
+   msg_out = malloc(msg_out_len_max + 1);
+   if ((msg_in == NULL) || (msg_out == NULL)) {
+   free(msg_out);
+   free(msg_in);
+   return -ENOMEM;
+   }
+
+   /* Open input file */
+   f = fopen(file_name, "r");
+   if (f == NULL) {
+   free(msg_out);
+   free(msg_in);
+   return -EIO;
+   }
+
+   /* Read file */
+   while (fgets(msg_in, msg_in_len_max, f) != NULL) {
+   msg_out[0] = 0;
+
+   cli_process(msg_in, msg_out, msg_out_len_max, obj);
+
+   if (strlen(msg_out))
+   printf("%s",

[PATCH v8 02/12] app/graph: add telnet connectivity framework

2023-09-29 Thread skori
From: Sunil Kumar Kori 

It adds a framework to initiate a telnet session with the application.

Some configuration and debug commands are exposed as runtime APIs.
Those commands can be invoked using a telnet session.

The application starts a telnet server with host address 0.0.0.0
and port number 8086 by default.

To make it configurable, "-h" and "-p" options are provided.
Using them, the user can pass the host address and port number on which
the application will start the telnet server.

Using the same host address and port number, a telnet client can connect
to the application.

Syntax to connect with application:
# telnet  

Once session is connected, "graph> " prompt will be available.
Example:
# telnet 10.28.35.207 5
  Trying 10.28.35.207...
  Connected to 10.28.35.207.
  Escape character is '^]'.

  Welcome!

  graph>

Signed-off-by: Sunil Kumar Kori 
Signed-off-by: Rakesh Kudurumalla 
---
 app/graph/conn.c   | 282 +
 app/graph/conn.h   |  46 ++
 app/graph/main.c   | 103 +-
 app/graph/meson.build  |   1 +
 app/graph/module_api.h |   3 +
 doc/guides/tools/graph.rst |  38 +
 6 files changed, 468 insertions(+), 5 deletions(-)
 create mode 100644 app/graph/conn.c
 create mode 100644 app/graph/conn.h

diff --git a/app/graph/conn.c b/app/graph/conn.c
new file mode 100644
index 00..8c88500605
--- /dev/null
+++ b/app/graph/conn.c
@@ -0,0 +1,282 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Marvell.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "module_api.h"
+
+#define MSG_CMD_TOO_LONG "Command too long."
+
+static int
+data_event_handle(struct conn *conn, int fd_client)
+{
+   ssize_t len, i, rc = 0;
+
+   /* Read input message */
+   len = read(fd_client, conn->buf, conn->buf_size);
+   if (len == -1) {
+   if ((errno == EAGAIN) || (errno == EWOULDBLOCK))
+   return 0;
+
+   return -1;
+   }
+
+   if (len == 0)
+   return rc;
+
+   /* Handle input messages */
+   for (i = 0; i < len; i++) {
+   if (conn->buf[i] == '\n') {
+   size_t n;
+
+   conn->msg_in[conn->msg_in_len] = 0;
+   conn->msg_out[0] = 0;
+
+   conn->msg_handle(conn->msg_in, conn->msg_out, 
conn->msg_out_len_max,
+conn->msg_handle_arg);
+
+   n = strlen(conn->msg_out);
+   if (n) {
+   rc = write(fd_client, conn->msg_out, n);
+   if (rc == -1)
+   goto exit;
+   }
+
+   conn->msg_in_len = 0;
+   } else if (conn->msg_in_len < conn->msg_in_len_max) {
+   conn->msg_in[conn->msg_in_len] = conn->buf[i];
+   conn->msg_in_len++;
+   } else {
+   rc = write(fd_client, MSG_CMD_TOO_LONG, 
strlen(MSG_CMD_TOO_LONG));
+   if (rc == -1)
+   goto exit;
+
+   conn->msg_in_len = 0;
+   }
+   }
+
+   /* Write prompt */
+   rc = write(fd_client, conn->prompt, strlen(conn->prompt));
+   rc = (rc == -1) ? -1 : 0;
+
+exit:
+   return rc;
+}
+
+static int
+control_event_handle(struct conn *conn, int fd_client)
+{
+   int rc;
+
+   rc = epoll_ctl(conn->fd_client_group, EPOLL_CTL_DEL, fd_client, NULL);
+   if (rc == -1)
+   goto exit;
+
+   rc = close(fd_client);
+   if (rc == -1)
+   goto exit;
+
+   rc = 0;
+
+exit:
+   return rc;
+}
+
+struct conn *
+conn_init(struct conn_params *p)
+{
+   int fd_server, fd_client_group, rc;
+   struct sockaddr_in server_address;
+   struct conn *conn = NULL;
+   int reuse = 1;
+
+   memset(&server_address, 0, sizeof(server_address));
+
+   /* Check input arguments */
+   if ((p == NULL) || (p->welcome == NULL) || (p->prompt == NULL) || 
(p->addr == NULL) ||
+   (p->buf_size == 0) || (p->msg_in_len_max == 0) || 
(p->msg_out_len_max == 0) ||
+   (p->msg_handle == NULL))
+   goto exit;
+
+   rc = inet_aton(p->addr, &server_address.sin_addr);
+   if (rc == 0)
+   goto exit;
+
+   /* Memory allocation */
+   conn = calloc(1, sizeof(struct conn));
+   if (conn == NULL)
+   goto exit;
+
+   conn->welcome = calloc(1, CONN_WELCOME_LEN_MAX + 1);
+   conn->prompt = calloc(1, CONN_PROMPT_LEN_MAX + 1);
+   conn->buf = calloc(1, p->buf_size);
+   conn->msg_in = calloc(1, p->msg_in_len_max + 1);
+   conn->msg_out = calloc(1, p->msg_out_len_max + 1);
+
+   if ((conn->welcome == NULL)

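The newline-framed input handling in data_event_handle() above (accumulate bytes until '\n', dispatch the completed command, reject over-long input) can be modeled compactly. This is a rough Python model for illustration only; the function name and the 16-byte limit are invented, and the real C code additionally writes the prompt and handles EAGAIN on the socket:

```python
def feed(data: str, pending: list, handle, max_len: int = 16):
    """Model of the newline-framed loop in data_event_handle():
    characters accumulate in `pending` until '\n', then the completed
    command is dispatched via `handle`; input exceeding max_len is
    rejected with an error reply, mirroring MSG_CMD_TOO_LONG."""
    replies = []
    for ch in data:
        if ch == '\n':
            replies.append(handle(''.join(pending)))
            pending.clear()
        elif len(pending) < max_len:
            pending.append(ch)
        else:
            replies.append("Command too long.")
            pending.clear()
    return replies

# Commands may arrive split across reads; state persists in `pending`.
state = []
print(feed("show st", state, str.upper))  # no newline yet, so no reply
print(feed("ats\n", state, str.upper))    # newline completes "show stats"
```

Keeping the partial command in per-connection state is what lets the C code handle a command split across multiple read() calls on the telnet socket.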
[PATCH v8 03/12] app/graph: add parser utility APIs

2023-09-29 Thread skori
From: Sunil Kumar Kori 

It adds some helper functions to parse IPv4, IPv6 and MAC address
strings into their respective data types.

Signed-off-by: Sunil Kumar Kori 
Signed-off-by: Rakesh Kudurumalla 
---
 app/graph/meson.build  |   1 +
 app/graph/module_api.h |   1 +
 app/graph/utils.c  | 156 +
 app/graph/utils.h  |  14 
 4 files changed, 172 insertions(+)
 create mode 100644 app/graph/utils.c
 create mode 100644 app/graph/utils.h

diff --git a/app/graph/meson.build b/app/graph/meson.build
index 644e5c39f2..d322f27d8e 100644
--- a/app/graph/meson.build
+++ b/app/graph/meson.build
@@ -13,4 +13,5 @@ sources = files(
 'cli.c',
 'conn.c',
 'main.c',
+'utils.c',
 )
diff --git a/app/graph/module_api.h b/app/graph/module_api.h
index 9826303f0c..ad4fb50989 100644
--- a/app/graph/module_api.h
+++ b/app/graph/module_api.h
@@ -10,6 +10,7 @@
 
 #include "cli.h"
 #include "conn.h"
+#include "utils.h"
 /*
  * Externs
  */
diff --git a/app/graph/utils.c b/app/graph/utils.c
new file mode 100644
index 00..c7b6ae83cf
--- /dev/null
+++ b/app/graph/utils.c
@@ -0,0 +1,156 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Marvell.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+
+#include "module_api.h"
+
+#define white_spaces_skip(pos) \
+({ \
+   __typeof__(pos) _p = (pos); \
+   for ( ; isspace(*_p); _p++) \
+   ;   \
+   _p; \
+})
+
+static void
+hex_string_to_uint64(uint64_t *dst, const char *hexs)
+{
+   char buf[2] = {0};
+   uint8_t shift = 4;
+   int iter = 0;
+   char c;
+
+   while ((c = *hexs++)) {
+   buf[0] = c;
+   *dst |= (strtol(buf, NULL, 16) << shift);
+   shift -= 4;
+   iter++;
+   if (iter == 2) {
+   iter = 0;
+   shift = 4;
+   dst++;
+   }
+   }
+}
+
+int
+parser_uint64_read(uint64_t *value, const char *p)
+{
+   char *next;
+   uint64_t val;
+
+   p = white_spaces_skip(p);
+   if (!isdigit(*p))
+   return -EINVAL;
+
+   val = strtoul(p, &next, 0);
+   if (p == next)
+   return -EINVAL;
+
+   p = next;
+   switch (*p) {
+   case 'T':
+   val *= 1024ULL;
+   /* fall through */
+   case 'G':
+   val *= 1024ULL;
+   /* fall through */
+   case 'M':
+   val *= 1024ULL;
+   /* fall through */
+   case 'k':
+   case 'K':
+   val *= 1024ULL;
+   p++;
+   break;
+   }
+
+   p = white_spaces_skip(p);
+   if (*p != '\0')
+   return -EINVAL;
+
+   *value = val;
+   return 0;
+}
+
+int
+parser_uint32_read(uint32_t *value, const char *p)
+{
+   uint64_t val = 0;
+   int rc = parser_uint64_read(&val, p);
+
+   if (rc < 0)
+   return rc;
+
+   if (val > UINT32_MAX)
+   return -ERANGE;
+
+   *value = val;
+   return 0;
+}
+
+int
+parser_ip4_read(uint32_t *value, char *p)
+{
+   uint8_t shift = 24;
+   uint32_t ip = 0;
+   char *token;
+
+   token = strtok(p, ".");
+   while (token != NULL) {
+   ip |= (((uint32_t)strtoul(token, NULL, 10)) << shift);
+   token = strtok(NULL, ".");
+   shift -= 8;
+   }
+
+   *value = ip;
+
+   return 0;
+}
+
+int
+parser_ip6_read(uint8_t *value, char *p)
+{
+   uint64_t val = 0;
+   char *token;
+
+   token = strtok(p, ":");
+   while (token != NULL) {
+   hex_string_to_uint64(&val, token);
+   *value = val;
+   token = strtok(NULL, ":");
+   value++;
+   val = 0;
+   }
+
+   return 0;
+}
+
+int
+parser_mac_read(uint64_t *value, char *p)
+{
+   uint64_t mac = 0, val = 0;
+   uint8_t shift = 40;
+   char *token;
+
+   token = strtok(p, ":");
+   while (token != NULL) {
+   hex_string_to_uint64(&val, token);
+   mac |= val << shift;
+   token = strtok(NULL, ":");
+   shift -= 8;
+   val = 0;
+   }
+
+   *value = mac;
+
+   return 0;
+}
diff --git a/app/graph/utils.h b/app/graph/utils.h
new file mode 100644
index 00..0ebb5de55a
--- /dev/null
+++ b/app/graph/utils.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Marvell.
+ */
+
+#ifndef APP_GRAPH_UTILS_H
+#define APP_GRAPH_UTILS_H
+
+int parser_uint64_read(uint64_t *value, const char *p);
+int parser_uint32_read(uint32_t *value, const char *p);
+int parser_ip4_read(uint32_t *value, char *p);
+int parser_ip6_read(uint8_t *value, char *p);
+int parser_

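The suffix handling in parser_uint64_read() above is easy to mis-read because of the fall-through switch: k/K, M, G and T multiply the parsed value by successive powers of 1024, and the base value is parsed with strtoul base 0, so hex input works. A small Python model of the same arithmetic, for reference only (it skips the whitespace and trailing-garbage checks the C code performs):

```python
def parse_uint64(text: str) -> int:
    """Model of parser_uint64_read(): an integer (base 0, so 0x.. works)
    with an optional k/K/M/G/T suffix meaning powers of 1024."""
    text = text.strip()
    multipliers = {'k': 1024, 'K': 1024,
                   'M': 1024 ** 2, 'G': 1024 ** 3, 'T': 1024 ** 4}
    if text and text[-1] in multipliers:
        return int(text[:-1], 0) * multipliers[text[-1]]
    return int(text, 0)

print(parse_uint64("4K"))    # 4096
print(parse_uint64("2M"))    # 2097152
print(parse_uint64("0x10"))  # 16
```

In the C version the fall-through cases implement the successive multiplications: 'T' falls through 'G', 'M' and 'k', multiplying by 1024 four times in total.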
[PATCH v8 04/12] app/graph: add mempool command line interfaces

2023-09-29 Thread skori
From: Rakesh Kudurumalla 

It adds a mempool module which creates mempools.

Following commands are exposed:
 - mempool  size  buffers  \
cache  numa 
 - help mempool

The user will add this command to the .cli file according to their needs.
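Reconstructing the command syntax from the token definitions in mempool.c below (mempool name, "size" buf_sz, "buffers" nb_bufs, "cache" cache_size, "numa" node), a .cli line might look like the following; the concrete name and sizes are illustrative only:

```
mempool mempool0 size 2048 buffers 4096 cache 256 numa 0
```

This would create a pktmbuf pool named mempool0 with 4096 buffers of 2048 bytes each, a per-lcore cache of 256, on NUMA node 0.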

Signed-off-by: Sunil Kumar Kori 
Signed-off-by: Rakesh Kudurumalla 
---
 app/graph/cli.c|   2 +
 app/graph/mempool.c| 140 +
 app/graph/mempool.h|  24 +++
 app/graph/mempool_priv.h   |  34 +
 app/graph/meson.build  |   1 +
 app/graph/module_api.h |   2 +
 doc/guides/tools/graph.rst |   8 +++
 7 files changed, 211 insertions(+)
 create mode 100644 app/graph/mempool.c
 create mode 100644 app/graph/mempool.h
 create mode 100644 app/graph/mempool_priv.h

diff --git a/app/graph/cli.c b/app/graph/cli.c
index 473fa1635a..c9f932517e 100644
--- a/app/graph/cli.c
+++ b/app/graph/cli.c
@@ -20,6 +20,8 @@
 #define MAX_LINE_SIZE 2048
 
 cmdline_parse_ctx_t modules_ctx[] = {
+   (cmdline_parse_inst_t *)&mempool_config_cmd_ctx,
+   (cmdline_parse_inst_t *)&mempool_help_cmd_ctx,
NULL,
 };
 
diff --git a/app/graph/mempool.c b/app/graph/mempool.c
new file mode 100644
index 00..901f07f461
--- /dev/null
+++ b/app/graph/mempool.c
@@ -0,0 +1,140 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Marvell.
+ */
+
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "mempool_priv.h"
+#include "module_api.h"
+
+static const char
+cmd_mempool_help[] = "mempool  size  buffers 
 "
+"cache  numa ";
+
+struct mempools mpconfig;
+
+int
+mempool_process(struct mempool_config *config)
+{
+   struct rte_mempool *mp;
+   uint8_t nb_pools;
+
+   nb_pools = mpconfig.nb_pools;
+   strcpy(mpconfig.config[nb_pools].name, config->name);
+   mpconfig.config[nb_pools].pool_size = config->pool_size;
+   mpconfig.config[nb_pools].buffer_size = config->buffer_size;
+   mpconfig.config[nb_pools].cache_size = config->cache_size;
+   mpconfig.config[nb_pools].numa_node = config->numa_node;
+
+   mp = rte_pktmbuf_pool_create(config->name, config->pool_size, 
config->cache_size,
+   64, config->buffer_size, config->numa_node);
+   if (!mp)
+   return -EINVAL;
+
+   mpconfig.mp[nb_pools] = mp;
+   nb_pools++;
+   mpconfig.nb_pools = nb_pools;
+
+   return 0;
+}
+
+static void
+cli_mempool_help(__rte_unused void *parsed_result, __rte_unused struct cmdline 
*cl,
+__rte_unused void *data)
+{
+   size_t len;
+
+   len = strlen(conn->msg_out);
+   conn->msg_out += len;
+   snprintf(conn->msg_out, conn->msg_out_len_max, "\n%s\n%s\n",
+"- mempool command help 
-",
+cmd_mempool_help);
+
+   len = strlen(conn->msg_out);
+   conn->msg_out_len_max -= len;
+}
+
+static void
+cli_mempool(void *parsed_result, __rte_unused struct cmdline *cl, __rte_unused 
void *data)
+{
+   struct mempool_config_cmd_tokens *res = parsed_result;
+   struct mempool_config config;
+   int rc = -EINVAL;
+
+
+   strcpy(config.name, res->name);
+   config.name[strlen(res->name)] = '\0';
+   config.pool_size = res->nb_bufs;
+   config.buffer_size = res->buf_sz;
+   config.cache_size = res->cache_size;
+   config.numa_node = res->node;
+
+   rc = mempool_process(&config);
+   if (rc < 0)
+   printf(MSG_CMD_FAIL, "mempool");
+}
+
+cmdline_parse_token_string_t mempool_config_add_mempool =
+   TOKEN_STRING_INITIALIZER(struct mempool_config_cmd_tokens, mempool, "mempool");
+cmdline_parse_token_string_t mempool_config_add_name =
+   TOKEN_STRING_INITIALIZER(struct mempool_config_cmd_tokens, name, NULL);
+cmdline_parse_token_string_t mempool_config_add_size =
+   TOKEN_STRING_INITIALIZER(struct mempool_config_cmd_tokens, size, "size");
+cmdline_parse_token_num_t mempool_config_add_buf_sz =
+   TOKEN_NUM_INITIALIZER(struct mempool_config_cmd_tokens, buf_sz, RTE_UINT16);
+cmdline_parse_token_string_t mempool_config_add_buffers =
+   TOKEN_STRING_INITIALIZER(struct mempool_config_cmd_tokens, buffers, "buffers");
+cmdline_parse_token_num_t mempool_config_add_nb_bufs =
+   TOKEN_NUM_INITIALIZER(struct mempool_config_cmd_tokens, nb_bufs, RTE_UINT16);
+cmdline_parse_token_string_t mempool_config_add_cache =
+   TOKEN_STRING_INITIALIZER(struct mempool_config_cmd_tokens, cache, "cache");
+cmdline_parse_token_num_t mempool_config_add_cache_size =
+   TOKEN_NUM_INITIALIZER(struct mempool_config_cmd_tokens, cache_size, RTE_UINT16);
+cmdline_parse_token_string_t mempool_config_add_numa =
+   TOKEN_STRING_INITIALIZER(struct mempool_config_cmd_tokens, numa, "numa");
+cmdline_parse_token_num_t mempool_config_add_node =
+   TOKEN_NUM_INITIALIZER(struct mempool_config_cmd_tokens, node, RTE_UINT16);

[PATCH v8 05/12] app/graph: add ethdev command line interfaces

2023-09-29 Thread skori
From: Sunil Kumar Kori 

It adds the ethdev module to configure Ethernet devices.

The following commands are exposed:
 - ethdev  rxq  txq  
 - ethdev  mtu 
 - ethdev  promiscuous 
 - ethdev  show
 - ethdev  stats
 - ethdev  ip4 addr add  netmask 
 - ethdev  ip6 addr add  netmask 
 - help ethdev

Signed-off-by: Sunil Kumar Kori 
Signed-off-by: Rakesh Kudurumalla 
---
 app/graph/cli.c|   8 +
 app/graph/ethdev.c | 882 +
 app/graph/ethdev.h |  40 ++
 app/graph/ethdev_priv.h| 112 +
 app/graph/main.c   |   1 +
 app/graph/meson.build  |   1 +
 app/graph/module_api.h |   1 +
 doc/guides/tools/graph.rst |  47 ++
 8 files changed, 1092 insertions(+)
 create mode 100644 app/graph/ethdev.c
 create mode 100644 app/graph/ethdev.h
 create mode 100644 app/graph/ethdev_priv.h

diff --git a/app/graph/cli.c b/app/graph/cli.c
index c9f932517e..c4b5cf3ce1 100644
--- a/app/graph/cli.c
+++ b/app/graph/cli.c
@@ -22,6 +22,14 @@
 cmdline_parse_ctx_t modules_ctx[] = {
(cmdline_parse_inst_t *)&mempool_config_cmd_ctx,
(cmdline_parse_inst_t *)&mempool_help_cmd_ctx,
+   (cmdline_parse_inst_t *)&ethdev_show_cmd_ctx,
+   (cmdline_parse_inst_t *)&ethdev_stats_cmd_ctx,
+   (cmdline_parse_inst_t *)&ethdev_mtu_cmd_ctx,
+   (cmdline_parse_inst_t *)&ethdev_prom_mode_cmd_ctx,
+   (cmdline_parse_inst_t *)&ethdev_ip4_cmd_ctx,
+   (cmdline_parse_inst_t *)&ethdev_ip6_cmd_ctx,
+   (cmdline_parse_inst_t *)&ethdev_cmd_ctx,
+   (cmdline_parse_inst_t *)&ethdev_help_cmd_ctx,
NULL,
 };
 
diff --git a/app/graph/ethdev.c b/app/graph/ethdev.c
new file mode 100644
index 00..74e80679d9
--- /dev/null
+++ b/app/graph/ethdev.c
@@ -0,0 +1,882 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Marvell.
+ */
+
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "ethdev_priv.h"
+#include "module_api.h"
+
+static const char
+cmd_ethdev_mtu_help[] = "ethdev  mtu ";
+
+static const char
+cmd_ethdev_prom_mode_help[] = "ethdev  promiscuous ";
+
+static const char
+cmd_ethdev_help[] = "ethdev  rxq  txq  ";
+static const char
+cmd_ethdev_show_help[] = "ethdev  show";
+
+static const char
+cmd_ethdev_ip4_addr_help[] = "ethdev  ip4 addr add  netmask ";
+
+static const char
+cmd_ethdev_ip6_addr_help[] = "ethdev  ip6 addr add  netmask ";
+
+static struct rte_eth_conf port_conf_default = {
+   .link_speeds = 0,
+   .rxmode = {
+   .mq_mode = RTE_ETH_MQ_RX_NONE,
+   .mtu = 9000 - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN), /* Jumbo frame MTU */
+   },
+   .rx_adv_conf = {
+   .rss_conf = {
+   .rss_key = NULL,
+   .rss_key_len = 40,
+   .rss_hf = 0,
+   },
+   },
+   .txmode = {
+   .mq_mode = RTE_ETH_MQ_TX_NONE,
+   },
+   .lpbk_mode = 0,
+};
+
+uint32_t enabled_port_mask;
+static struct ethdev_head eth_node = TAILQ_HEAD_INITIALIZER(eth_node);
+
+static struct ethdev *
+ethdev_port_by_id(uint16_t port_id)
+{
+   struct ethdev *port;
+
+   TAILQ_FOREACH(port, &eth_node, next) {
+   if (port->config.port_id == port_id)
+   return port;
+   }
+   return NULL;
+}
+
+void *
+ethdev_mempool_list_by_portid(uint16_t portid)
+{
+   struct ethdev *port;
+
+   if (portid >= RTE_MAX_ETHPORTS)
+   return NULL;
+
+   port = ethdev_port_by_id(portid);
+   if (port)
+   return &(port->config.rx.mp);
+   else
+   return NULL;
+}
+
+int16_t
+ethdev_portid_by_ip4(uint32_t ip, uint32_t mask)
+{
+   int portid = -EINVAL;
+   struct ethdev *port;
+
+   TAILQ_FOREACH(port, &eth_node, next) {
+   if (mask == 0) {
+   if ((port->ip4_addr.ip & port->ip4_addr.mask) == (ip & port->ip4_addr.mask))
+   return port->config.port_id;
+   } else {
+   if ((port->ip4_addr.ip & port->ip4_addr.mask) == (ip & mask))
+   return port->config.port_id;
+   }
+   }
+   }
+
+   return portid;
+}
+
+int16_t
+ethdev_portid_by_ip6(uint8_t *ip, uint8_t *mask)
+{
+   int portid = -EINVAL;
+   struct ethdev *port;
+   int j;
+
+   TAILQ_FOREACH(port, &eth_node, next) {
+   for (j = 0; j < ETHDEV_IPV6_ADDR_LEN; j++) {
+   if (mask == NULL) {
+   if ((port->ip6_addr.ip[j] & port->ip6_addr.mask[j]) !=
+   (ip[j] & port->ip6_addr.mask[j]))
+   break;
+
+   } else {
+   if ((port->ip6_addr.ip[j] & port->ip6_addr.mask[j]) !=
+   (ip[j] & mask[j]))
+   break;
+   }

[PATCH v8 06/12] app/graph: add ipv4_lookup command line interfaces

2023-09-29 Thread skori
From: Sunil Kumar Kori 

It adds an ipv4_lookup module to configure the LPM table. This LPM table
will be used for IPv4 lookup and forwarding.

The following commands are exposed:
 - ipv4_lookup route add ipv4  netmask  via 
 - help ipv4_lookup

Signed-off-by: Sunil Kumar Kori 
Signed-off-by: Rakesh Kudurumalla 
---
 app/graph/cli.c|   2 +
 app/graph/ethdev.c |   2 +-
 app/graph/ip4_route.c  | 221 +
 app/graph/meson.build  |   1 +
 app/graph/module_api.h |   1 +
 app/graph/route.h  |  26 +
 app/graph/route_priv.h |  44 
 doc/guides/tools/graph.rst |   9 ++
 8 files changed, 305 insertions(+), 1 deletion(-)
 create mode 100644 app/graph/ip4_route.c
 create mode 100644 app/graph/route.h
 create mode 100644 app/graph/route_priv.h

diff --git a/app/graph/cli.c b/app/graph/cli.c
index c4b5cf3ce1..430750db6e 100644
--- a/app/graph/cli.c
+++ b/app/graph/cli.c
@@ -30,6 +30,8 @@ cmdline_parse_ctx_t modules_ctx[] = {
(cmdline_parse_inst_t *)&ethdev_ip6_cmd_ctx,
(cmdline_parse_inst_t *)&ethdev_cmd_ctx,
(cmdline_parse_inst_t *)&ethdev_help_cmd_ctx,
+   (cmdline_parse_inst_t *)&ipv4_lookup_cmd_ctx,
+   (cmdline_parse_inst_t *)&ipv4_lookup_help_cmd_ctx,
NULL,
 };
 
diff --git a/app/graph/ethdev.c b/app/graph/ethdev.c
index 74e80679d9..4d2bc73e7c 100644
--- a/app/graph/ethdev.c
+++ b/app/graph/ethdev.c
@@ -160,7 +160,7 @@ ethdev_stop(void)
}
 
ethdev_list_clean();
-   rte_eal_cleanup();
+   route_ip4_list_clean();
printf("Bye...\n");
 }
 
diff --git a/app/graph/ip4_route.c b/app/graph/ip4_route.c
new file mode 100644
index 00..db3354c270
--- /dev/null
+++ b/app/graph/ip4_route.c
@@ -0,0 +1,221 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Marvell.
+ */
+
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "module_api.h"
+#include "route_priv.h"
+
+static const char
+cmd_ipv4_lookup_help[] = "ipv4_lookup route add ipv4  netmask  via ";
+
+struct ip4_route route4 = TAILQ_HEAD_INITIALIZER(route4);
+
+void
+route_ip4_list_clean(void)
+{
+   struct route_ipv4_config *route;
+
+   while (!TAILQ_EMPTY(&route4)) {
+   route = TAILQ_FIRST(&route4);
+   TAILQ_REMOVE(&route4, route, next);
+   }
+}
+
+static struct route_ipv4_config *
+find_route4_entry(struct route_ipv4_config *route)
+{
+   struct route_ipv4_config *ipv4route;
+
+   TAILQ_FOREACH(ipv4route, &route4, next) {
+   if (!memcmp(ipv4route, route, sizeof(*route)))
+   return ipv4route;
+   }
+   return NULL;
+
+}
+
+static uint8_t
+convert_netmask_to_depth(uint32_t netmask)
+{
+   uint8_t zerobits = 0;
+
+   while ((netmask & 0x1) == 0) {
+   netmask = netmask >> 1;
+   zerobits++;
+   }
+
+   return (32 - zerobits);
+}
+
+static int
+route4_rewirte_table_update(struct route_ipv4_config *ipv4route)
+{
+   uint8_t depth;
+   int portid;
+
+   portid = ethdev_portid_by_ip4(ipv4route->via, ipv4route->netmask);
+   if (portid < 0) {
+   printf("Invalid portid found to install the route\n");
+   return portid;
+   }
+
+   depth = convert_netmask_to_depth(ipv4route->netmask);
+
+   return rte_node_ip4_route_add(ipv4route->ip, depth, portid,
+   RTE_NODE_IP4_LOOKUP_NEXT_REWRITE);
+}
+
+static int
+route_ip4_add(struct route_ipv4_config *route)
+{
+   struct route_ipv4_config *ipv4route;
+   int rc = -EINVAL;
+
+   ipv4route = find_route4_entry(route);
+
+   if (!ipv4route) {
+   ipv4route = malloc(sizeof(struct route_ipv4_config));
+   if (!ipv4route)
+   return -ENOMEM;
+   } else {
+   return 0;
+   }
+
+   ipv4route->ip = route->ip;
+   ipv4route->netmask = route->netmask;
+   ipv4route->via = route->via;
+   ipv4route->is_used = true;
+
+   /* FIXME: Get graph status here and then update table */
+   rc = route4_rewirte_table_update(ipv4route);
+   if (rc)
+   goto free;
+
+   TAILQ_INSERT_TAIL(&route4, ipv4route, next);
+   return 0;
+free:
+   free(ipv4route);
+   return rc;
+}
+
+int
+route_ip4_add_to_lookup(void)
+{
+   struct route_ipv4_config *route = NULL;
+   int rc = -EINVAL;
+
+   TAILQ_FOREACH(route, &route4, next) {
+   rc = route4_rewirte_table_update(route);
+   if (rc < 0)
+   return rc;
+   }
+
+   return 0;
+}
+
+static void
+cli_ipv4_lookup_help(__rte_unused void *parsed_result, __rte_unused struct cmdline *cl,
+__rte_unused void *data)
+{
+   size_t len;
+
+   len = strlen(conn->msg_out);
+   conn->msg_out += len;
+   snprintf(conn->msg_out, conn->msg_out_len_max, "\n%s\n%s\n",
+"

[PATCH v8 07/12] app/graph: add ipv6_lookup command line interfaces

2023-09-29 Thread skori
From: Rakesh Kudurumalla 

It adds an ipv6_lookup module to configure the LPM6 table. This LPM6 table
will be used for IPv6 lookup and forwarding.

The following commands are exposed:
 - ipv6_lookup route add ipv6  netmask  via 
 - help ipv6_lookup

Signed-off-by: Sunil Kumar Kori 
Signed-off-by: Rakesh Kudurumalla 
---
 app/graph/cli.c|   2 +
 app/graph/ethdev.c |   1 +
 app/graph/ip6_route.c  | 226 +
 app/graph/meson.build  |   1 +
 app/graph/route.h  |  14 +++
 doc/guides/tools/graph.rst |   9 ++
 6 files changed, 253 insertions(+)
 create mode 100644 app/graph/ip6_route.c

diff --git a/app/graph/cli.c b/app/graph/cli.c
index 430750db6e..7213a91ad2 100644
--- a/app/graph/cli.c
+++ b/app/graph/cli.c
@@ -32,6 +32,8 @@ cmdline_parse_ctx_t modules_ctx[] = {
(cmdline_parse_inst_t *)ðdev_help_cmd_ctx,
(cmdline_parse_inst_t *)&ipv4_lookup_cmd_ctx,
(cmdline_parse_inst_t *)&ipv4_lookup_help_cmd_ctx,
+   (cmdline_parse_inst_t *)&ipv6_lookup_cmd_ctx,
+   (cmdline_parse_inst_t *)&ipv6_lookup_help_cmd_ctx,
NULL,
 };
 
diff --git a/app/graph/ethdev.c b/app/graph/ethdev.c
index 4d2bc73e7c..4c70953b99 100644
--- a/app/graph/ethdev.c
+++ b/app/graph/ethdev.c
@@ -161,6 +161,7 @@ ethdev_stop(void)
 
ethdev_list_clean();
route_ip4_list_clean();
+   route_ip6_list_clean();
printf("Bye...\n");
 }
 
diff --git a/app/graph/ip6_route.c b/app/graph/ip6_route.c
new file mode 100644
index 00..e793cde830
--- /dev/null
+++ b/app/graph/ip6_route.c
@@ -0,0 +1,226 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Marvell.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+
+#include "module_api.h"
+#include "route_priv.h"
+
+static const char
+cmd_ipv6_lookup_help[] = "ipv6_lookup route add ipv6  netmask  via ";
+
+struct ip6_route route6 = TAILQ_HEAD_INITIALIZER(route6);
+
+void
+route_ip6_list_clean(void)
+{
+   struct route_ipv6_config *route;
+
+   while (!TAILQ_EMPTY(&route6)) {
+   route = TAILQ_FIRST(&route6);
+   TAILQ_REMOVE(&route6, route, next);
+   }
+}
+
+static struct route_ipv6_config *
+find_route6_entry(struct route_ipv6_config *route)
+{
+   struct route_ipv6_config *ipv6route;
+
+   TAILQ_FOREACH(ipv6route, &route6, next) {
+   if (!memcmp(ipv6route, route, sizeof(*route)))
+   return ipv6route;
+   }
+   return NULL;
+}
+
+static uint8_t
+convert_ip6_netmask_to_depth(uint8_t *netmask)
+{
+   uint8_t setbits = 0;
+   uint8_t mask;
+   int i;
+
+   for (i = 0; i < ETHDEV_IPV6_ADDR_LEN; i++) {
+   mask = netmask[i];
+   while (mask & 0x80) {
+   mask = mask << 1;
+   setbits++;
+   }
+   }
+
+   return setbits;
+}
+
+static int
+route6_rewirte_table_update(struct route_ipv6_config *ipv6route)
+{
+   uint8_t depth;
+   int portid;
+
+   portid = ethdev_portid_by_ip6(ipv6route->gateway, ipv6route->mask);
+   if (portid < 0) {
+   printf("Invalid portid found to install the route\n");
+   return portid;
+   }
+   depth = convert_ip6_netmask_to_depth(ipv6route->mask);
+
+   return rte_node_ip6_route_add(ipv6route->ip, depth, portid,
+   RTE_NODE_IP6_LOOKUP_NEXT_REWRITE);
+
+}
+
+static int
+route_ip6_add(struct route_ipv6_config *route)
+{
+   struct route_ipv6_config *ipv6route;
+   int rc = -EINVAL;
+   int j;
+
+   ipv6route = find_route6_entry(route);
+   if (!ipv6route) {
+   ipv6route = malloc(sizeof(struct route_ipv6_config));
+   if (!ipv6route)
+   return -ENOMEM;
+   } else {
+   return 0;
+   }
+
+   for (j = 0; j < ETHDEV_IPV6_ADDR_LEN; j++) {
+   ipv6route->ip[j] = route->ip[j];
+   ipv6route->mask[j] = route->mask[j];
+   ipv6route->gateway[j] = route->gateway[j];
+   }
+   ipv6route->is_used = true;
+
+   /* FIXME: Get graph status here and then update table */
+   rc = route6_rewirte_table_update(ipv6route);
+   if (rc)
+   goto free;
+
+   TAILQ_INSERT_TAIL(&route6, ipv6route, next);
+   return 0;
+free:
+   free(ipv6route);
+   return rc;
+}
+
+int
+route_ip6_add_to_lookup(void)
+{
+   struct route_ipv6_config *route = NULL;
+   int rc = -EINVAL;
+
+   TAILQ_FOREACH(route, &route6, next) {
+   rc = route6_rewirte_table_update(route);
+   if (rc < 0)
+   return rc;
+   }
+
+   return 0;
+}
+
+static void
+cli_ipv6_lookup_help(__rte_unused void *parsed_result, __rte_unused struct cmdline *cl,
+__rte_unused void *data)
+{
+   size_t len;
+
+   len = strlen(conn->msg_out);
+   conn->msg_out += le

[PATCH v8 08/12] app/graph: add neigh command line interfaces

2023-09-29 Thread skori
From: Sunil Kumar Kori 

It adds a neigh module to configure ARP/neighbour entries. This module uses
the ip4_rewrite and ip6_rewrite nodes to write neighbour information.

The following commands are exposed:
 - neigh add ipv4  
 - neigh add ipv6  
 - help neigh

Signed-off-by: Sunil Kumar Kori 
Signed-off-by: Rakesh Kudurumalla 
---
 app/graph/cli.c|   3 +
 app/graph/ethdev.c |   2 +
 app/graph/meson.build  |   1 +
 app/graph/module_api.h |   2 +
 app/graph/neigh.c  | 360 +
 app/graph/neigh.h  |  17 ++
 app/graph/neigh_priv.h |  49 +
 doc/guides/tools/graph.rst |  12 ++
 8 files changed, 446 insertions(+)
 create mode 100644 app/graph/neigh.c
 create mode 100644 app/graph/neigh.h
 create mode 100644 app/graph/neigh_priv.h

diff --git a/app/graph/cli.c b/app/graph/cli.c
index 7213a91ad2..36338d5173 100644
--- a/app/graph/cli.c
+++ b/app/graph/cli.c
@@ -34,6 +34,9 @@ cmdline_parse_ctx_t modules_ctx[] = {
(cmdline_parse_inst_t *)&ipv4_lookup_help_cmd_ctx,
(cmdline_parse_inst_t *)&ipv6_lookup_cmd_ctx,
(cmdline_parse_inst_t *)&ipv6_lookup_help_cmd_ctx,
+   (cmdline_parse_inst_t *)&neigh_v4_cmd_ctx,
+   (cmdline_parse_inst_t *)&neigh_v6_cmd_ctx,
+   (cmdline_parse_inst_t *)&neigh_help_cmd_ctx,
NULL,
 };
 
diff --git a/app/graph/ethdev.c b/app/graph/ethdev.c
index 4c70953b99..b43b16c300 100644
--- a/app/graph/ethdev.c
+++ b/app/graph/ethdev.c
@@ -162,6 +162,8 @@ ethdev_stop(void)
ethdev_list_clean();
route_ip4_list_clean();
route_ip6_list_clean();
+   neigh4_list_clean();
+   neigh6_list_clean();
printf("Bye...\n");
 }
 
diff --git a/app/graph/meson.build b/app/graph/meson.build
index c3261a2162..39fe0b984f 100644
--- a/app/graph/meson.build
+++ b/app/graph/meson.build
@@ -17,5 +17,6 @@ sources = files(
 'ip6_route.c',
 'main.c',
 'mempool.c',
+'neigh.c',
 'utils.c',
 )
diff --git a/app/graph/module_api.h b/app/graph/module_api.h
index bd4d245c75..e9e42da7cc 100644
--- a/app/graph/module_api.h
+++ b/app/graph/module_api.h
@@ -12,8 +12,10 @@
 #include "conn.h"
 #include "ethdev.h"
 #include "mempool.h"
+#include "neigh.h"
 #include "route.h"
 #include "utils.h"
+
 /*
  * Externs
  */
diff --git a/app/graph/neigh.c b/app/graph/neigh.c
new file mode 100644
index 00..af69fc8ade
--- /dev/null
+++ b/app/graph/neigh.c
@@ -0,0 +1,360 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Marvell.
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "neigh_priv.h"
+#include "module_api.h"
+
+static const char
+cmd_neigh_v4_help[] = "neigh add ipv4  ";
+
+static const char
+cmd_neigh_v6_help[] = "neigh add ipv6  ";
+
+struct neigh4_head neigh4 = TAILQ_HEAD_INITIALIZER(neigh4);
+struct neigh6_head neigh6 = TAILQ_HEAD_INITIALIZER(neigh6);
+
+void
+neigh4_list_clean(void)
+{
+   struct neigh_ipv4_config *v4_config;
+
+   while (!TAILQ_EMPTY(&neigh4)) {
+   v4_config = TAILQ_FIRST(&neigh4);
+   TAILQ_REMOVE(&neigh4, v4_config, next);
+   }
+}
+
+void
+neigh6_list_clean(void)
+{
+   struct neigh_ipv6_config *v6_config;
+
+   while (!TAILQ_EMPTY(&neigh6)) {
+   v6_config = TAILQ_FIRST(&neigh6);
+   TAILQ_REMOVE(&neigh6, v6_config, next);
+   }
+}
+
+static struct neigh_ipv4_config *
+find_neigh4_entry(uint32_t ip, uint64_t mac)
+{
+   struct neigh_ipv4_config *v4_config;
+
+   TAILQ_FOREACH(v4_config, &neigh4, next) {
+   if ((v4_config->ip == ip) && (v4_config->mac == mac))
+   return v4_config;
+   }
+   return NULL;
+}
+
+static struct neigh_ipv6_config *
+find_neigh6_entry(uint8_t *ip, uint64_t mac)
+{
+   struct neigh_ipv6_config *v6_config;
+
+   TAILQ_FOREACH(v6_config, &neigh6, next) {
+   if (!(memcmp(v6_config->ip, ip, 16)) && (v6_config->mac == mac))
+   return v6_config;
+   }
+   return NULL;
+}
+
+static int
+ip6_rewrite_node_add(struct neigh_ipv6_config *v6_config)
+{
+   uint8_t data[2 * RTE_ETHER_ADDR_LEN];
+   uint8_t len = 2 * RTE_ETHER_ADDR_LEN;
+   struct rte_ether_addr smac;
+   int16_t portid = 0;
+   int rc;
+
+   portid = ethdev_portid_by_ip6(v6_config->ip, NULL);
+   if (portid < 0) {
+   printf("Invalid portid found to add neigh\n");
+   return -EINVAL;
+   }
+
+   memset(data, 0, len);
+
+   /* Copy dst mac */
+   rte_memcpy((void *)&data[0], (void *)&v6_config->mac, RTE_ETHER_ADDR_LEN);
+
+   /* Copy src mac */
+   rc = rte_eth_macaddr_get(portid, &smac);
+   if (rc < 0)
+   return rc;
+
+   rte_memcpy(&data[RTE_ETHER_ADDR_LEN], smac.addr_bytes, RTE_ETHER_ADDR_LEN);
+
+   return rte_node_ip6_rewrite_add(portid, data, len, portid);
+}
+
+stat

[PATCH v8 09/12] app/graph: add ethdev_rx command line interfaces

2023-09-29 Thread skori
From: Rakesh Kudurumalla 

It adds an ethdev_rx module to create the port-queue-core mapping.

The mapping is used to launch graph worker threads and dequeue
packets on the mentioned core from the desired port/queue.

The following commands are exposed:
 - ethdev_rx map port  queue  core 
 - help ethdev_rx

Signed-off-by: Sunil Kumar Kori 
Signed-off-by: Rakesh Kudurumalla 
---
 app/graph/cli.c|   2 +
 app/graph/ethdev_rx.c  | 165 +
 app/graph/ethdev_rx.h  |  37 +
 app/graph/ethdev_rx_priv.h |  39 +
 app/graph/meson.build  |   1 +
 app/graph/module_api.h |   1 +
 doc/guides/tools/graph.rst |  10 +++
 7 files changed, 255 insertions(+)
 create mode 100644 app/graph/ethdev_rx.c
 create mode 100644 app/graph/ethdev_rx.h
 create mode 100644 app/graph/ethdev_rx_priv.h

diff --git a/app/graph/cli.c b/app/graph/cli.c
index 36338d5173..e947f61ee4 100644
--- a/app/graph/cli.c
+++ b/app/graph/cli.c
@@ -30,6 +30,8 @@ cmdline_parse_ctx_t modules_ctx[] = {
(cmdline_parse_inst_t *)&ethdev_ip6_cmd_ctx,
(cmdline_parse_inst_t *)&ethdev_cmd_ctx,
(cmdline_parse_inst_t *)&ethdev_help_cmd_ctx,
+   (cmdline_parse_inst_t *)&ethdev_rx_cmd_ctx,
+   (cmdline_parse_inst_t *)&ethdev_rx_help_cmd_ctx,
(cmdline_parse_inst_t *)&ipv4_lookup_cmd_ctx,
(cmdline_parse_inst_t *)&ipv4_lookup_help_cmd_ctx,
(cmdline_parse_inst_t *)&ipv6_lookup_cmd_ctx,
diff --git a/app/graph/ethdev_rx.c b/app/graph/ethdev_rx.c
new file mode 100644
index 00..f2cb8cf9a5
--- /dev/null
+++ b/app/graph/ethdev_rx.c
@@ -0,0 +1,165 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Marvell.
+ */
+
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "ethdev_rx_priv.h"
+#include "module_api.h"
+
+static const char
+cmd_ethdev_rx_help[] = "ethdev_rx map port  queue  core ";
+
+static struct lcore_params lcore_params_array[ETHDEV_RX_LCORE_PARAMS_MAX];
+struct rte_node_ethdev_config ethdev_conf[RTE_MAX_ETHPORTS];
+struct lcore_params *lcore_params = lcore_params_array;
+struct lcore_conf lcore_conf[RTE_MAX_LCORE];
+uint16_t nb_lcore_params;
+
+static void
+rx_map_configure(uint8_t port_id, uint32_t queue, uint32_t core)
+{
+   uint8_t n_rx_queue;
+
+   n_rx_queue = lcore_conf[core].n_rx_queue;
+   lcore_conf[core].rx_queue_list[n_rx_queue].port_id = port_id;
+   lcore_conf[core].rx_queue_list[n_rx_queue].queue_id = queue;
+   lcore_conf[core].n_rx_queue++;
+}
+
+uint8_t
+ethdev_rx_num_rx_queues_get(uint16_t port)
+{
+   int queue = -1;
+   uint16_t i;
+
+   for (i = 0; i < nb_lcore_params; ++i) {
+   if (lcore_params[i].port_id == port) {
+   if (lcore_params[i].queue_id == queue + 1)
+   queue = lcore_params[i].queue_id;
+   else
+   rte_exit(EXIT_FAILURE,
+"Queue ids of the port %d must be"
+" in sequence and must start with 0\n",
+lcore_params[i].port_id);
+   }
+   }
+
+   return (uint8_t)(++queue);
+}
+
+static int
+ethdev_rx_map_add(char *name, uint32_t queue, uint32_t core)
+{
+   uint64_t coremask;
+   uint16_t port_id;
+   int rc;
+
+   if (nb_lcore_params >= ETHDEV_RX_LCORE_PARAMS_MAX)
+   return -EINVAL;
+
+   rc = rte_eth_dev_get_port_by_name(name, &port_id);
+   if (rc)
+   return -EINVAL;
+
+   coremask = 0xff; /* FIXME: Read from graph configuration */
+
+   if (!(coremask & (1 << core)))
+   return -EINVAL;
+
+   rx_map_configure(port_id, queue, core);
+
+   lcore_params_array[nb_lcore_params].port_id = port_id;
+   lcore_params_array[nb_lcore_params].queue_id = queue;
+   lcore_params_array[nb_lcore_params].lcore_id = core;
+   nb_lcore_params++;
+   return 0;
+}
+
+static void
+cli_ethdev_rx_help(__rte_unused void *parsed_result, __rte_unused struct cmdline *cl,
+  __rte_unused void *data)
+{
+   size_t len;
+
+   len = strlen(conn->msg_out);
+   conn->msg_out += len;
+   snprintf(conn->msg_out, conn->msg_out_len_max, "\n%s\n%s\n",
+"- ethdev_rx command help -",
+cmd_ethdev_rx_help);
+
+   len = strlen(conn->msg_out);
+   conn->msg_out_len_max -= len;
+}
+
+static void
+cli_ethdev_rx(void *parsed_result, __rte_unused struct cmdline *cl, void *data __rte_unused)
+{
+   struct ethdev_rx_cmd_tokens *res = parsed_result;
+   int rc = -EINVAL;
+
+   rc = ethdev_rx_map_add(res->dev, res->qid, res->core_id);
+   if (rc < 0) {
+   cli_exit();
+   printf(MSG_CMD_FAIL, res->cmd);
+   rte_exit(EXIT_FAILURE, "Input core is invalid\n");
+   }
+
+}
+
+cmdline_p

[PATCH v8 10/12] app/graph: add graph command line interfaces

2023-09-29 Thread skori
From: Rakesh Kudurumalla 

It adds a graph module to create a graph for a given use case such as
l3fwd.

The following commands are exposed:
 - graph  [bsz ] [tmo ] [coremask ] \
model  pcap_enable <0 | 1> num_pcap_pkts  \
pcap_file 
 - graph start
 - graph stats show
 - help graph

Signed-off-by: Sunil Kumar Kori 
Signed-off-by: Rakesh Kudurumalla 
---
 app/graph/cli.c|   4 +
 app/graph/ethdev_rx.c  |   2 +-
 app/graph/graph.c  | 550 +
 app/graph/graph.h  |  21 ++
 app/graph/graph_priv.h |  70 +
 app/graph/ip4_route.c  |   5 +-
 app/graph/ip6_route.c  |   5 +-
 app/graph/meson.build  |   1 +
 app/graph/module_api.h |   1 +
 app/graph/neigh.c  |  10 +-
 doc/guides/tools/graph.rst |  19 +-
 11 files changed, 681 insertions(+), 7 deletions(-)
 create mode 100644 app/graph/graph.c
 create mode 100644 app/graph/graph.h
 create mode 100644 app/graph/graph_priv.h

diff --git a/app/graph/cli.c b/app/graph/cli.c
index e947f61ee4..c43af5925c 100644
--- a/app/graph/cli.c
+++ b/app/graph/cli.c
@@ -20,6 +20,10 @@
 #define MAX_LINE_SIZE 2048
 
 cmdline_parse_ctx_t modules_ctx[] = {
+   (cmdline_parse_inst_t *)&graph_config_cmd_ctx,
+   (cmdline_parse_inst_t *)&graph_start_cmd_ctx,
+   (cmdline_parse_inst_t *)&graph_stats_cmd_ctx,
+   (cmdline_parse_inst_t *)&graph_help_cmd_ctx,
(cmdline_parse_inst_t *)&mempool_config_cmd_ctx,
(cmdline_parse_inst_t *)&mempool_help_cmd_ctx,
(cmdline_parse_inst_t *)&ethdev_show_cmd_ctx,
diff --git a/app/graph/ethdev_rx.c b/app/graph/ethdev_rx.c
index f2cb8cf9a5..03f8effcca 100644
--- a/app/graph/ethdev_rx.c
+++ b/app/graph/ethdev_rx.c
@@ -69,7 +69,7 @@ ethdev_rx_map_add(char *name, uint32_t queue, uint32_t core)
if (rc)
return -EINVAL;
 
-   coremask = 0xff; /* FIXME: Read from graph configuration */
+   coremask = graph_coremask_get();
 
if (!(coremask & (1 << core)))
return -EINVAL;
diff --git a/app/graph/graph.c b/app/graph/graph.c
new file mode 100644
index 00..b27abcf1f9
--- /dev/null
+++ b/app/graph/graph.c
@@ -0,0 +1,550 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Marvell.
+ */
+
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "graph_priv.h"
+#include "module_api.h"
+
+#define RTE_LOGTYPE_APP_GRAPH RTE_LOGTYPE_USER1
+
+static const char
+cmd_graph_help[] = "graph  bsz  tmo  coremask  "
+  "model  pcap_enable <0 | 1> num_pcap_pkts "
+  "pcap_file ";
+
+static const char * const supported_usecases[] = {"l3fwd"};
+struct graph_config graph_config;
+bool graph_started;
+
+/* Check the link status of all ports in up to 9s, and print them finally */
+static void
+check_all_ports_link_status(uint32_t port_mask)
+{
+#define CHECK_INTERVAL 100 /* 100ms */
+#define MAX_CHECK_TIME 90  /* 9s (90 * 100ms) in total */
+   char link_rc_text[RTE_ETH_LINK_MAX_STR_LEN];
+   uint8_t count, all_ports_up, print_flag = 0;
+   struct rte_eth_link link;
+   uint16_t portid;
+   int rc;
+
+   printf("\nChecking link status");
+   fflush(stdout);
+   for (count = 0; count <= MAX_CHECK_TIME; count++) {
+   if (force_quit)
+   return;
+
+   all_ports_up = 1;
+   RTE_ETH_FOREACH_DEV(portid)
+   {
+   if (force_quit)
+   return;
+
+   if ((port_mask & (1 << portid)) == 0)
+   continue;
+
+   memset(&link, 0, sizeof(link));
+   rc = rte_eth_link_get_nowait(portid, &link);
+   if (rc < 0) {
+   all_ports_up = 0;
+   if (print_flag == 1)
+   printf("Port %u link get failed: %s\n",
+  portid, rte_strerror(-rc));
+   continue;
+   }
+
+   /* Print link status if flag set */
+   if (print_flag == 1) {
+   rte_eth_link_to_str(link_rc_text, sizeof(link_rc_text), &link);
+   printf("Port %d %s\n", portid, link_rc_text);
+   continue;
+   }
+
+   /* Clear all_ports_up flag if any link down */
+   if (link.link_status == RTE_ETH_LINK_DOWN) {
+   all_ports_up = 0;
+   break;
+   }
+   }
+
+   /* After finally printing all link status, get out */
+   if (print_flag == 1)
+   break;
+
+   if (all_ports_up == 0) {

[PATCH v8 11/12] app/graph: add CLI option to enable graph stats

2023-09-29 Thread skori
From: Sunil Kumar Kori 

It adds the application command-line parameter "--enable-graph-stats"
to enable dumping graph stats on the console.

By default, no graph stats are printed on the console, but the same can
be dumped via a telnet session using the "graph stats show" command.

Signed-off-by: Sunil Kumar Kori 
Signed-off-by: Rakesh Kudurumalla 
---
 app/graph/main.c   | 17 -
 app/graph/module_api.h |  2 ++
 doc/guides/tools/graph.rst |  4 
 3 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/app/graph/main.c b/app/graph/main.c
index c1cb435588..465376425c 100644
--- a/app/graph/main.c
+++ b/app/graph/main.c
@@ -21,12 +21,13 @@
 volatile bool force_quit;
 struct conn *conn;
 
-static const char usage[] = "%s EAL_ARGS -- -s SCRIPT [-h HOST] [-p PORT] "
+static const char usage[] = "%s EAL_ARGS -- -s SCRIPT [-h HOST] [-p PORT] [--enable-graph-stats] "
"[--help]\n";
 
 static struct app_params {
struct conn_params conn;
char *script_name;
+   bool enable_graph_stats;
 } app = {
.conn = {
.welcome = "\nWelcome!\n\n",
@@ -40,6 +41,7 @@ static struct app_params {
.msg_handle_arg = NULL, /* set later. */
},
.script_name = NULL,
+   .enable_graph_stats = false,
 };
 
 static void
@@ -56,6 +58,7 @@ app_args_parse(int argc, char **argv)
 {
struct option lgopts[] = {
{"help", 0, 0, 'H'},
+   {"enable-graph-stats", 0, 0, 'g'},
};
int h_present, p_present, s_present, n_args, i;
char *app_name = argv[0];
@@ -133,6 +136,12 @@ app_args_parse(int argc, char **argv)
}
break;
 
+   case 'g':
+   app.enable_graph_stats = true;
+   printf("WARNING! Telnet session cannot be accessed with"
+  " --enable-graph-stats\n");
+   break;
+
case 'H':
default:
printf(usage, app_name);
@@ -144,6 +153,12 @@ app_args_parse(int argc, char **argv)
return 0;
 }
 
+bool
+app_graph_stats_enabled(void)
+{
+   return app.enable_graph_stats;
+}
+
 bool
 app_graph_exit(void)
 {
diff --git a/app/graph/module_api.h b/app/graph/module_api.h
index 392dcfb222..a7d287f5c8 100644
--- a/app/graph/module_api.h
+++ b/app/graph/module_api.h
@@ -24,5 +24,7 @@
 extern volatile bool force_quit;
 extern struct conn *conn;
 
+bool app_graph_stats_enabled(void);
 bool app_graph_exit(void);
+
 #endif
diff --git a/doc/guides/tools/graph.rst b/doc/guides/tools/graph.rst
index 7d2aa95c95..d548cb67ec 100644
--- a/doc/guides/tools/graph.rst
+++ b/doc/guides/tools/graph.rst
@@ -57,6 +57,10 @@ Following are the application command-line options:
 a mandatory parameter which will be used to create desired graph
 for a given use case.
 
+* ``--enable-graph-stats``
+
+   Enable graph statistics printing on the console. By default, graph statistics are disabled.
+
 * ``--help``
 
Dumps application usage
-- 
2.25.1



[PATCH v8 12/12] app/graph: add l3fwd use case

2023-09-29 Thread skori
From: Rakesh Kudurumalla 

It adds an l3fwd use case. It contains a dedicated l3fwd.cli file
listing the commands to configure the required resources.

Once the application successfully parses l3fwd.cli, a graph is
created with the below nodes:
 - ethdev_rx -> pkt_cls

 - pkt_cls -> ip4_lookup
 - pkt_cls -> ip6_lookup
 - pkt_cls -> pkt_drop

 - ip4_lookup -> ip4_rewrite
 - ip4_lookup -> pkt_drop

 - ip6_lookup -> ip6_rewrite
 - ip6_lookup -> pkt_drop

 - ip4_rewrite -> ethdev_tx
 - ip4_rewrite -> pkt_drop

 - ip6_rewrite -> ethdev_tx
 - ip6_rewrite -> pkt_drop

 - ethdev_tx -> pkt_drop

Signed-off-by: Sunil Kumar Kori 
Signed-off-by: Rakesh Kudurumalla 
---
 app/graph/examples/l3fwd.cli |  87 
 app/graph/graph.c|   2 +-
 app/graph/l3fwd.c| 136 
 app/graph/l3fwd.h|  11 +
 app/graph/meson.build|   1 +
 app/graph/module_api.h   |   1 +
 doc/guides/tools/graph.rst   |   9 +-
 doc/guides/tools/img/graph-usecase-l3fwd.svg | 210 +++
 8 files changed, 455 insertions(+), 2 deletions(-)
 create mode 100644 app/graph/examples/l3fwd.cli
 create mode 100644 app/graph/l3fwd.c
 create mode 100644 app/graph/l3fwd.h
 create mode 100644 doc/guides/tools/img/graph-usecase-l3fwd.svg

diff --git a/app/graph/examples/l3fwd.cli b/app/graph/examples/l3fwd.cli
new file mode 100644
index 00..1038fde04e
--- /dev/null
+++ b/app/graph/examples/l3fwd.cli
@@ -0,0 +1,87 @@
+; SPDX-License-Identifier: BSD-3-Clause
+; Copyright(c) 2023 Marvell.
+
+;
+; Graph configuration for given usecase
+;
+graph l3fwd coremask 0xff bsz 32 tmo 10 model default pcap_enable 1 num_pcap_pkts 10 pcap_file /tmp/output.pcap
+
+;
+; Mempools to be attached with ethdev
+;
+mempool mempool0 size 8192 buffers 4000 cache 256 numa 0
+
+;
+; DPDK devices and configuration.
+;
+; Note: Customize the parameters below to match your setup.
+;
+ethdev 0002:04:00.0 rxq 1 txq 8 mempool0
+ethdev 0002:05:00.0 rxq 1 txq 8 mempool0
+ethdev 0002:06:00.0 rxq 1 txq 8 mempool0
+ethdev 0002:07:00.0 rxq 1 txq 8 mempool0
+ethdev 0002:04:00.0 mtu 1700
+ethdev 0002:05:00.0 promiscuous on
+
+;
+; IPv4 addresses assigned to DPDK devices
+;
+ethdev 0002:04:00.0 ip4 addr add 10.0.2.1 netmask 255.255.255.0
+ethdev 0002:05:00.0 ip4 addr add 20.0.2.1 netmask 255.255.255.0
+ethdev 0002:06:00.0 ip4 addr add 30.0.2.1 netmask 255.255.255.0
+ethdev 0002:07:00.0 ip4 addr add 40.0.2.1 netmask 255.255.255.0
+
+;
+; IPv6 addresses assigned to DPDK devices
+;
+ethdev 0002:04:00.0 ip6 addr add 52:20:DA:4F:68:70:52:20:DA:4F:68:70:52:20:DA:4A netmask FF:FF:FF:FF:FF:FF:FF:FF:FF:00:00:00:00:00:00:00
+ethdev 0002:05:00.0 ip6 addr add 62:20:DA:4F:68:70:52:20:DA:4F:68:70:52:20:DA:4B netmask FF:FF:FF:FF:FF:FF:FF:FF:FF:00:00:00:00:00:00:00
+ethdev 0002:06:00.0 ip6 addr add 72:20:DA:4F:68:70:52:20:DA:4F:68:70:52:20:DA:4C netmask FF:FF:FF:FF:FF:FF:FF:FF:FF:00:00:00:00:00:00:00
+ethdev 0002:07:00.0 ip6 addr add 82:20:DA:4F:68:70:52:20:DA:4F:68:70:52:20:DA:4D netmask FF:FF:FF:FF:FF:FF:FF:FF:FF:00:00:00:00:00:00:00
+
+;
+; IPv4 routes which are installed to ipv4_lookup node for LPM processing
+;
+ipv4_lookup route add ipv4 10.0.2.0 netmask 255.255.255.0 via 10.0.2.1
+ipv4_lookup route add ipv4 20.0.2.0 netmask 255.255.255.0 via 20.0.2.1
+ipv4_lookup route add ipv4 30.0.2.0 netmask 255.255.255.0 via 30.0.2.1
+ipv4_lookup route add ipv4 40.0.2.0 netmask 255.255.255.0 via 40.0.2.1
+
+;
+; IPv6 routes which are installed to ipv6_lookup node for LPM processing
+;
+ipv6_lookup route add ipv6 52:20:DA:4F:68:70:52:20:DA:4F:68:70:52:20:DA:4A netmask FF:FF:FF:FF:FF:FF:FF:FF:FF:00:00:00:00:00:00:00 via 52:20:DA:4F:68:70:52:20:DA:4F:68:70:52:20:DA:4A
+ipv6_lookup route add ipv6 62:20:DA:4F:68:70:52:20:DA:4F:68:70:52:20:DA:4B netmask FF:FF:FF:FF:FF:FF:FF:FF:FF:00:00:00:00:00:00:00 via 62:20:DA:4F:68:70:52:20:DA:4F:68:70:52:20:DA:4B
+ipv6_lookup route add ipv6 72:20:DA:4F:68:70:52:20:DA:4F:68:70:52:20:DA:4C netmask FF:FF:FF:FF:FF:FF:FF:FF:FF:00:00:00:00:00:00:00 via 72:20:DA:4F:68:70:52:20:DA:4F:68:70:52:20:DA:4C
+ipv6_lookup route add ipv6 82:20:DA:4F:68:70:52:20:DA:4F:68:70:52:20:DA:4D netmask FF:FF:FF:FF:FF:FF:FF:FF:FF:00:00:00:00:00:00:00 via 82:20:DA:4F:68:70:52:20:DA:4F:68:70:52:20:DA:4D
+
+;
+; Peer MAC and IPv4 address mapping
+;
+neigh add ipv4 10.0.2.2 52:20:DA:4F:68:70
+neigh add ipv4 20.0.2.2 62:20:DA:4F:68:70
+neigh add ipv4 30.0.2.2 72:20:DA:4F:68:70
+neigh add ipv4 40.0.2.2 82:20:DA:4F:68:70
+
+;
+; Peer MAC and IPv6 address mapping
+;
+neigh add ipv6 52:20:DA:4F:68:70:52:20:DA:4F:68:70:52:20:DA:4A 52:20:DA:4F:68:70
+neigh add ipv6 62:20:DA:4F:68:70:52:20:DA:4F:68:70:52:20:DA:4B 62:20:DA:4F:68:70
+neigh add ipv6 72:20:DA:4F:68:70:52:20:DA:4F:68:70:52:20:DA:4C 72:20:DA:4F:68:70
+neigh add ipv6 82:20:DA:4F:68:70:52:20:DA:4F:68:70:52:20:DA:4D 82:20:DA:4F:68:70
+
+;
+; Port-Queue-Core mapping fo

Re: [PATCH v3 3/9] net/nfp: initialize IPsec related content

2023-09-29 Thread Ferruh Yigit
On 9/29/2023 3:08 AM, Chaoyong He wrote:
> From: Chang Miao 
> 
> If the IPsec capability bit is enabled, the driver needs to initialize
> IPsec. Set the security context and security offload capabilities in the
> datapath. Define a private session and add an SA array for each PF to save
> all SA data in the driver. Add an internal mbuf dynamic flag and field to
> save IPsec related data to the dynamic mbuf field.
> 
> Also update the release note, adding the inline IPsec offload section for
> the NFP PMD.
> 
> Signed-off-by: Chang Miao 
> Signed-off-by: Shihong Wang 
> Reviewed-by: Chaoyong He 
> 

<...>

> @@ -38,6 +38,11 @@ DPDK Release 23.11
>  which also added support for standard atomics
>  (Ref: https://releases.llvm.org/3.6.0/tools/clang/docs/ReleaseNotes.html)
>  
> +  * **Added inline IPsec offload feature for NFP PMD.**
> +
> +  Added the inline IPsec offload feature based on the security framework for
> +  NFP PMD.
> +
>  New Features
>  
>  

Moving release notes update to "New Features" section below while merging.


Re: [PATCH v3 0/9] add the support of ipsec offload

2023-09-29 Thread Ferruh Yigit
On 9/29/2023 3:08 AM, Chaoyong He wrote:
> This patch series adds support for the IPsec offload feature, including:
> * Implement the communication channel between PMD and firmware through
>   mailbox.
> * Implement the IPsec offload related APIs based on the security framework.
> * Implement the IPsec packet processing logic in the data path.
> 
> ---
> v3:
> * Squash the update of the mailmap file.
> * Add an entry in the release note.
> * Remove some unnecessary logic in the TLVs capability parsing function.
> v2:
> * Fix one spelling error causing a check warning.
> * Try to fix one compile error in the Alpine environment of CI.
> ---
> 
> Chang Miao (2):
>   net/nfp: initialize IPsec related content
>   net/nfp: create security session
> 
> Shihong Wang (7):
>   net/nfp: add TLVs capability parsing
>   net/nfp: add mailbox to support IPsec offload
>   net/nfp: get security capabilities and session size
>   net/nfp: get IPsec Rx/Tx packet statistics
>   net/nfp: update security session
>   net/nfp: support IPsec Rx and Tx offload
>   net/nfp: destroy security session
> 

Series applied to dpdk-next-net/main, thanks.



Re: [PATCH v16 1/8] net/ntnic: initial commit which adds register defines

2023-09-29 Thread Thomas Monjalon
29/09/2023 11:46, Ferruh Yigit:
> On 9/29/2023 10:21 AM, Christian Koue Muf wrote:
> > On 9/21/2023 4:05 PM, Ferruh Yigit wrote:
> >> On 9/20/2023 2:17 PM, Thomas Monjalon wrote:
> >>> Hello,
> >>>
> >>> 19/09/2023 11:06, Christian Koue Muf:
>  On 9/18/23 10:34 AM, Ferruh Yigit wrote:
> > On 9/15/2023 7:37 PM, Morten Brørup wrote:
> >>> From: Ferruh Yigit [mailto:ferruh.yi...@amd.com]
> >>> Sent: Friday, 15 September 2023 17.55
> >>>
> >>> On 9/8/2023 5:07 PM, Mykola Kostenok wrote:
>  From: Christian Koue Muf 
> 
>  The NTNIC PMD does not rely on a kernel space Napatech driver,
>  thus all defines related to the register layout are part of the
>  PMD code, which will be added in later commits.
> 
>  Signed-off-by: Christian Koue Muf 
>  Reviewed-by: Mykola Kostenok 
> 
> >>>
> >>> Hi Mykola, Christian,
> >>>
> >>> This PMD scares me, overall it is a big drop:
> >>> "249 files changed, 87128 insertions(+)"
> >>>
> >>> I think it is not possible to review all in one release cycle, and
> >>> it is not even possible to say if all code used or not.
> >>>
> >>> I can see code is already developed, and it is difficult to
> >>> restructure developed code, but restructure it into small pieces
> >>> really helps for reviews.
> >>>
> >>>
> >>> Driver supports good list of features, can it be possible to
> >>> distribute upstream effort into multiple release.
> >>> Starting from basic functionality and add features gradually.
> >>> Target for this release can be providing datapath, and add more if
> >>> we have time in the release, what do you think?
> >>>
> >>> I was expecting to get only Rx/Tx in this release, not really more.
> >>>
> >>> I agree it may be interesting to discuss some design and check whether
> >>> we need more features in ethdev as part of the driver upstreaming
> >>> process.
> >>>
> >>>
> >>> Also there are large amount of base code (HAL / FPGA code),
> >>> instead of adding them as a bulk, relevant ones with a feature can
> >>> be added with the feature patch, this eliminates dead code in the
> >>> base code layer, also helps user/review to understand the link
> >>> between driver code and base code.
> >>>
> >>> Yes it would be interesting to see what is really needed for the basic
> >>> initialization and what is linked to a specific offload or configuration 
> >>> feature.
> >>>
> >>> As a maintainer, I have to do some changes across all drivers
> >>> sometimes, and I use git blame a lot to understand why something was 
> >>> added.
> >>>
> >>>
> >> Jumping in here with an opinion about welcoming new NIC vendors to the 
> >> community:
> >>
> >> Generally, if a NIC vendor supplies a PMD for their NIC, I expect the 
> >> vendor to take responsibility for the quality of the PMD, including 
> >> providing a maintainer and support backporting of fixes to the PMD in 
> >> LTS releases. This should align with the vendor's business case for 
> >> upstreaming their driver.
> >>
> >> If the vendor provides one big patch series, which may be difficult to 
> >> understand/review, the fallout mainly hits the vendor's customers (and 
> >> thus the vendor's support organization), not the community as a whole.
> >>
> >
> > Hi Morten,
> >
> > I was thinking same before making my above comment, what happens if 
> > vendors submit as one big patch and when a problem occurs we can ask 
> > owner to fix. Probably this makes vendor happy and makes my life (or 
> > any other maintainer's life) easier, it is always easier to say yes.
> >
> >
> > But I come up with two main reasons to ask for a rework:
> >
> > 1- Technically any vendor can deliver their software to their
> > customers via a public git repository, they don't have to upstream
> > to https://dpdk.org, but upstreaming has many benefits.
> >
> > One of those benefits is upstreaming provides a quality assurance for 
> > vendor's customers (that is why customer can be asking for this, as we 
> > are having in many cases), and this quality assurance comes from 
> > additional eyes reviewing the code and guiding vendors for the DPDK 
> > quality standards (some vendors already doing pretty good, but new ones 
> > sometimes requires hand-holding).
> >
> > If driver is one big patch series, it is practically not possible to 
> > review it, I can catch a few bits here or there, you may some others, 
> > but practically it will be merged without review, and we will fail on 
> > our quality assurance task.
> >
> >>>

Re: [PATCH v6 1/6] eal: provide rte stdatomics optional atomics API

2023-09-29 Thread Thomas Monjalon
29/09/2023 11:34, David Marchand:
> On Fri, Sep 29, 2023 at 11:26 AM Bruce Richardson
>  wrote:
> >
> > On Fri, Sep 29, 2023 at 11:02:38AM +0200, David Marchand wrote:
> > > On Fri, Sep 29, 2023 at 10:54 AM Morten Brørup 
> > >  wrote:
> > > > In my opinion, our move to C11 thus makes RTE_BUILD_BUG_ON obsolete.
> > >
> > > That's my thought too.
> > >
> > > >
> > > > We should mark RTE_BUILD_BUG_ON as deprecated, and disallow 
> > > > RTE_BUILD_BUG_ON in new code. Perhaps checkpatches could catch this?
> > >
> > > For a clear deprecation of a part of DPDK API, I don't see a need to
> > > add something in checkpatch.
> > > Putting a RTE_DEPRECATED in RTE_BUILD_BUG_ON directly triggers a build
> > > warning (caught by CI since we run with Werror).
> > >
> >
> > Would it not be sufficient to just make it an alias for the C11 static
> > assertions? It's not like its a lot of code to maintain, and if app users
> > have it in their code I'm not sure we get massive benefit from forcing them
> > to edit their code. I'd rather see it kept as a one-line macro purely from
> > a backward compatibility viewpoint. We can replace internal usages, though
> > - which can be checked by checkpatch.
> 
> No, there is no massive benefit, just trying to reduce our ever
> growing API surface.
> 
> Note, this macro should have been kept internal but it was introduced
> at a time such matter was not considered...

I agree with all.
Now taking techboard hat, we agreed to avoid breaking API if possible.
So we should keep RTE_BUILD_BUG_ON forever even if not used.
Internally we can replace its usages.




[PATCH v3] vhost: add IRQ suppression

2023-09-29 Thread Maxime Coquelin
Guest notifications offloading, which has been introduced
in v23.07, aims at offloading syscalls out of the datapath.

This patch optimizes the offloading by not offloading the
guest notification for a given virtqueue if one is already
being offloaded by the application.

With a single VDUSE device, we can already see a few
notifications being suppressed when doing throughput
testing with iperf3. We can expect to see many more being
suppressed when the offloading thread is under pressure.

Signed-off-by: Maxime Coquelin 
---

v3: s/0/false/ (David)

 lib/vhost/vhost.c |  4 
 lib/vhost/vhost.h | 27 +--
 2 files changed, 25 insertions(+), 6 deletions(-)

diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index c03bb9c6eb..7fde412ef3 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -49,6 +49,8 @@ static const struct vhost_vq_stats_name_off vhost_vq_stat_strings[] = {
 		stats.guest_notifications_offloaded)},
 	{"guest_notifications_error", offsetof(struct vhost_virtqueue,
 		stats.guest_notifications_error)},
+	{"guest_notifications_suppressed", offsetof(struct vhost_virtqueue,
+		stats.guest_notifications_suppressed)},
 	{"iotlb_hits", offsetof(struct vhost_virtqueue, stats.iotlb_hits)},
 	{"iotlb_misses", offsetof(struct vhost_virtqueue, stats.iotlb_misses)},
 	{"inflight_submitted", offsetof(struct vhost_virtqueue, stats.inflight_submitted)},
@@ -1517,6 +1519,8 @@ rte_vhost_notify_guest(int vid, uint16_t queue_id)
 
 	rte_rwlock_read_lock(&vq->access_lock);
 
+	__atomic_store_n(&vq->irq_pending, false, __ATOMIC_RELEASE);
+
 	if (dev->backend_ops->inject_irq(dev, vq)) {
 		if (dev->flags & VIRTIO_DEV_STATS_ENABLED)
 			__atomic_fetch_add(&vq->stats.guest_notifications_error,
diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index 9723429b1c..5fc9035a1f 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -156,6 +156,7 @@ struct virtqueue_stats {
uint64_t iotlb_misses;
uint64_t inflight_submitted;
uint64_t inflight_completed;
+   uint64_t guest_notifications_suppressed;
/* Counters below are atomic, and should be incremented as such. */
uint64_t guest_notifications;
uint64_t guest_notifications_offloaded;
@@ -346,6 +347,8 @@ struct vhost_virtqueue {

struct vhost_vring_addr ring_addrs;
struct virtqueue_stats  stats;
+
+   bool irq_pending;
 } __rte_cache_aligned;

 /* Virtio device status as per Virtio specification */
@@ -908,12 +911,24 @@ vhost_need_event(uint16_t event_idx, uint16_t new_idx, uint16_t old)
 static __rte_always_inline void
 vhost_vring_inject_irq(struct virtio_net *dev, struct vhost_virtqueue *vq)
 {
-	if (dev->notify_ops->guest_notify &&
-	    dev->notify_ops->guest_notify(dev->vid, vq->index)) {
-		if (dev->flags & VIRTIO_DEV_STATS_ENABLED)
-			__atomic_fetch_add(&vq->stats.guest_notifications_offloaded,
-				1, __ATOMIC_RELAXED);
-		return;
+	bool expected = false;
+
+	if (dev->notify_ops->guest_notify) {
+		if (__atomic_compare_exchange_n(&vq->irq_pending, &expected, true, 0,
+				__ATOMIC_RELEASE, __ATOMIC_RELAXED)) {
+			if (dev->notify_ops->guest_notify(dev->vid, vq->index)) {
+				if (dev->flags & VIRTIO_DEV_STATS_ENABLED)
+					__atomic_fetch_add(&vq->stats.guest_notifications_offloaded,
+						1, __ATOMIC_RELAXED);
+				return;
+			}
+
+			/* Offloading failed, fallback to direct IRQ injection */
+			__atomic_store_n(&vq->irq_pending, false, __ATOMIC_RELEASE);
+		} else {
+			vq->stats.guest_notifications_suppressed++;
+			return;
+		}
}

if (dev->backend_ops->inject_irq(dev, vq)) {
--
2.41.0



Re: [PATCH v6 1/2] node: add IPv4 local node to handle local pkts

2023-09-29 Thread Nithin Dabilpuram
Series-acked-by: Nithin Dabilpuram 


On Fri, Sep 29, 2023 at 1:21 AM Rakesh Kudurumalla
 wrote:
>
> Local or host-destined pkts can be redirected to the IPv4 local node
> using IP4 lookup node entries with a prefix of 32, and handed to
> this IP4 local node for further processing.
>
> Signed-off-by: Rakesh Kudurumalla 
> ---
> Depends-on: series-29670 ("remove MAX macro from all nodes")
>
> v6: Resolve dependency
>
>  doc/guides/prog_guide/graph_lib.rst   |  15 ++
>  .../img/graph_inbuilt_node_flow.svg   | 138 ++
>  lib/node/ip4_local.c  |  88 +++
>  lib/node/ip4_lookup.c |   1 +
>  lib/node/meson.build  |   1 +
>  lib/node/rte_node_ip4_api.h   |  12 ++
>  6 files changed, 196 insertions(+), 59 deletions(-)
>  create mode 100644 lib/node/ip4_local.c
>
> diff --git a/doc/guides/prog_guide/graph_lib.rst 
> b/doc/guides/prog_guide/graph_lib.rst
> index e7b6e12004..f2e04a68b9 100644
> --- a/doc/guides/prog_guide/graph_lib.rst
> +++ b/doc/guides/prog_guide/graph_lib.rst
> @@ -498,3 +498,18 @@ Uses ``poll`` function to poll on the socket fd
>  for ``POLLIN`` events to read the packets from raw socket
>  to stream buffer and does ``rte_node_next_stream_move()``
>  when there are received packets.
> +
> +ip4_local
> +~
> +This node is an intermediate node that does ``packet_type`` lookup for
> +the received ipv4 packets and the result determines each packets next node.
> +
> +On successful ``packet_type`` lookup, for any IPv4 protocol the result
> +contains the ``next_node`` id and ``next-hop`` id with which the packet
> +needs to be further processed.
> +
> +On packet_type lookup failure, objects are redirected to ``pkt_drop`` node.
> +``rte_node_ip4_route_add()`` is control path API to add ipv4 address with 32 
> bit
> +depth to receive to packets.
> +To achieve home run, node use ``rte_node_stream_move()`` as mentioned in 
> above
> +sections.
> diff --git a/doc/guides/prog_guide/img/graph_inbuilt_node_flow.svg 
> b/doc/guides/prog_guide/img/graph_inbuilt_node_flow.svg
> index 7eea94701f..b954f6fba1 100644
> --- a/doc/guides/prog_guide/img/graph_inbuilt_node_flow.svg
> +++ b/doc/guides/prog_guide/img/graph_inbuilt_node_flow.svg
> @@ -37,174 +37,194 @@ digraph dpdk_inbuilt_nodes_flow {
>  ethdev_tx -> pkt_drop [color="cyan" style="dashed"]
>  pkt_cls->pkt_drop   [color="cyan" style="dashed"]
>  kernel_tx -> kernel_rx [color="red" style="dashed"]
> +ip4_lookup -> ip4_local
> +ip4_local -> pkt_drop [color="cyan" style="dashed"]
>  }
>
>   -->
>  
>  
>  [mangled SVG markup diff elided]

Re: [PATCH v6 1/2] node: add IPv4 local node to handle local pkts

2023-09-29 Thread Nithin Dabilpuram
Acked-by: Nithin Dabilpuram 

On Fri, Sep 29, 2023 at 4:24 PM Nithin Dabilpuram  wrote:
>
> Series-acked-by: Nithin Dabilpuram 
>
>
> On Fri, Sep 29, 2023 at 1:21 AM Rakesh Kudurumalla
>  wrote:
> >
> > Local or host-destined pkts can be redirected to the IPv4 local node
> > using IP4 lookup node entries with a prefix of 32, and handed to
> > this IP4 local node for further processing.
> >
> > Signed-off-by: Rakesh Kudurumalla 
> > ---
> > Depends-on: series-29670 ("remove MAX macro from all nodes")
> >
> > v6: Resolve dependency
> >
> >  doc/guides/prog_guide/graph_lib.rst   |  15 ++
> >  .../img/graph_inbuilt_node_flow.svg   | 138 ++
> >  lib/node/ip4_local.c  |  88 +++
> >  lib/node/ip4_lookup.c |   1 +
> >  lib/node/meson.build  |   1 +
> >  lib/node/rte_node_ip4_api.h   |  12 ++
> >  6 files changed, 196 insertions(+), 59 deletions(-)
> >  create mode 100644 lib/node/ip4_local.c
> >
> > diff --git a/doc/guides/prog_guide/graph_lib.rst 
> > b/doc/guides/prog_guide/graph_lib.rst
> > index e7b6e12004..f2e04a68b9 100644
> > --- a/doc/guides/prog_guide/graph_lib.rst
> > +++ b/doc/guides/prog_guide/graph_lib.rst
> > @@ -498,3 +498,18 @@ Uses ``poll`` function to poll on the socket fd
> >  for ``POLLIN`` events to read the packets from raw socket
> >  to stream buffer and does ``rte_node_next_stream_move()``
> >  when there are received packets.
> > +
> > +ip4_local
> > +~~~~~~~~~
> > +This node is an intermediate node that does a ``packet_type`` lookup for
> > +the received IPv4 packets, and the result determines each packet's next
> > +node.
> > +
> > +On successful ``packet_type`` lookup, for any IPv4 protocol the result
> > +contains the ``next_node`` id and ``next-hop`` id with which the packet
> > +needs to be further processed.
> > +
> > +On ``packet_type`` lookup failure, objects are redirected to the
> > +``pkt_drop`` node. ``rte_node_ip4_route_add()`` is the control path API
> > +to add an IPv4 address with a 32-bit depth for packets to be received.
> > +To achieve a home run, the node uses ``rte_node_stream_move()`` as
> > +mentioned in the above sections.
> > diff --git a/doc/guides/prog_guide/img/graph_inbuilt_node_flow.svg 
> > b/doc/guides/prog_guide/img/graph_inbuilt_node_flow.svg
> > index 7eea94701f..b954f6fba1 100644
> > --- a/doc/guides/prog_guide/img/graph_inbuilt_node_flow.svg
> > +++ b/doc/guides/prog_guide/img/graph_inbuilt_node_flow.svg
> > @@ -37,174 +37,194 @@ digraph dpdk_inbuilt_nodes_flow {
> >  ethdev_tx -> pkt_drop [color="cyan" style="dashed"]
> >  pkt_cls->pkt_drop   [color="cyan" style="dashed"]
> >  kernel_tx -> kernel_rx [color="red" style="dashed"]
> > +ip4_lookup -> ip4_local
> > +ip4_local -> pkt_drop [color="cyan" style="dashed"]
> >  }
> >
> >   -->
> >  
> >  
> >  [mangled SVG markup diff elided]

Re: [PATCH v6 2/2] node: add UDP v4 support

2023-09-29 Thread Nithin Dabilpuram
Acked-by: Nithin Dabilpuram 

On Thu, Sep 28, 2023 at 4:06 PM Rakesh Kudurumalla
 wrote:
>
> IPv4 UDP packets are given to the application
> with the UDP destination port specified by the user.
>
> Signed-off-by: Rakesh Kudurumalla 
> ---
>  doc/api/doxy-api-index.md |   3 +-
>  doc/guides/prog_guide/graph_lib.rst   |  25 ++
>  .../img/graph_inbuilt_node_flow.svg   | 165 -
>  lib/node/meson.build  |   2 +
>  lib/node/rte_node_udp4_input_api.h|  61 +
>  lib/node/udp4_input.c | 226 ++
>  lib/node/version.map  |   2 +
>  7 files changed, 418 insertions(+), 66 deletions(-)
>  create mode 100644 lib/node/rte_node_udp4_input_api.h
>  create mode 100644 lib/node/udp4_input.c
>
> diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
> index fdeda13932..96282d3fd0 100644
> --- a/doc/api/doxy-api-index.md
> +++ b/doc/api/doxy-api-index.md
> @@ -206,7 +206,8 @@ The public API headers are grouped by topics:
>* graph_nodes:
>  [eth_node](@ref rte_node_eth_api.h),
>  [ip4_node](@ref rte_node_ip4_api.h),
> -[ip6_node](@ref rte_node_ip6_api.h)
> +[ip6_node](@ref rte_node_ip6_api.h),
> +[udp4_input_node](@ref rte_node_udp4_input_api.h)
>
>  - **basic**:
>[bitops](@ref rte_bitops.h),
> diff --git a/doc/guides/prog_guide/graph_lib.rst 
> b/doc/guides/prog_guide/graph_lib.rst
> index f2e04a68b9..3572560362 100644
> --- a/doc/guides/prog_guide/graph_lib.rst
> +++ b/doc/guides/prog_guide/graph_lib.rst
> @@ -513,3 +513,28 @@ On packet_type lookup failure, objects are redirected to 
> ``pkt_drop`` node.
>  depth to receive to packets.
>  To achieve home run, node use ``rte_node_stream_move()`` as mentioned in 
> above
>  sections.
> +
> +udp4_input
> +~~~~~~~~~~
> +This node is an intermediate node that does a UDP destination port lookup
> +for the received IPv4 packets, and the result determines each packet's
> +next node.
> +
> +The user registers a new node ``udp4_input`` into the graph library during
> +initialization and attaches a user-specified node as an edge to this node
> +using ``rte_node_udp4_usr_node_add()``, and creates an empty hash table
> +with destination port and node id as its fields.
> +
> +After successful addition of the user node as an edge, the edge id is
> +returned to the user.
> +
> +The user registers the ``ip4_lookup`` table with a specified IP address
> +and a 32-bit mask for IP filtering using the API
> +``rte_node_ip4_route_add()``.
> +
> +After the graph is created, the user updates the hash table with the
> +custom port and the previously obtained edge id using the API
> +``rte_node_udp4_dst_port_add()``.
> +
> +When a packet is received, an LPM lookup is performed; if the IP matches,
> +the packet is handed over to the ``ip4_local`` node, then verified for
> +the UDP protocol and, on success, enqueued to the ``udp4_input`` node.
> +
> +A hash lookup is performed in the ``udp4_input`` node with the registered
> +destination port and the destination port in the UDP packet; on success
> +the packet is handed to ``udp_user_node``.
> diff --git a/doc/guides/prog_guide/img/graph_inbuilt_node_flow.svg 
> b/doc/guides/prog_guide/img/graph_inbuilt_node_flow.svg
> index b954f6fba1..7c451371a7 100644
> --- a/doc/guides/prog_guide/img/graph_inbuilt_node_flow.svg
> +++ b/doc/guides/prog_guide/img/graph_inbuilt_node_flow.svg
> @@ -39,192 +39,227 @@ digraph dpdk_inbuilt_nodes_flow {
>  kernel_tx -> kernel_rx [color="red" style="dashed"]
>  ip4_lookup -> ip4_local
>  ip4_local -> pkt_drop [color="cyan" style="dashed"]
> +ip4_local -> udp4_input [ label="udpv4"]
> +udp4_input -> udp_user_node
> +udp4_input -> pkt_drop [color="cyan" style="dashed"]
> +
>  }
>
>   -->
>  
>  
>  [mangled SVG markup diff elided]

Re: [PATCH 0/2] ethdev: add group set miss actions API

2023-09-29 Thread Ferruh Yigit
On 9/20/2023 1:52 PM, Tomer Shmilovich wrote:
> Introduce new group set miss actions API:
> rte_flow_group_set_miss_actions().
> 
> A group's miss actions are a set of actions to be performed
> in case of a miss on a group, i.e. when a packet didn't hit any flow
> rules in the group.
> 
> Currently, the expected behavior in this case is undefined.
> In order to achieve such functionality, a user can add a flow rule
> that matches on all traffic with the lowest priority in the group -
> this is not explicit however, and can be overridden by another flow rule
> with a lower priority.
> 
> This new API function allows a user to set a group's miss actions in an
> explicit way.
> 
> RFC discussion: 
> http://patches.dpdk.org/project/dpdk/patch/20230807133601.164018-1-tshmilov...@nvidia.com/
> 
> Tomer Shmilovich (2):
>   ethdev: add group set miss actions API
>   app/testpmd: add group set miss actions CLI commands
> 

Series applied to dpdk-next-net/main, thanks.


Re: [PATCH v6 1/6] eal: provide rte stdatomics optional atomics API

2023-09-29 Thread David Marchand
On Fri, Sep 29, 2023 at 12:26 PM Thomas Monjalon  wrote:
>
> 29/09/2023 11:34, David Marchand:
> > On Fri, Sep 29, 2023 at 11:26 AM Bruce Richardson
> >  wrote:
> > >
> > > On Fri, Sep 29, 2023 at 11:02:38AM +0200, David Marchand wrote:
> > > > On Fri, Sep 29, 2023 at 10:54 AM Morten Brørup 
> > > >  wrote:
> > > > > In my opinion, our move to C11 thus makes RTE_BUILD_BUG_ON obsolete.
> > > >
> > > > That's my thought too.
> > > >
> > > > >
> > > > > We should mark RTE_BUILD_BUG_ON as deprecated, and disallow 
> > > > > RTE_BUILD_BUG_ON in new code. Perhaps checkpatches could catch this?
> > > >
> > > > For a clear deprecation of a part of DPDK API, I don't see a need to
> > > > add something in checkpatch.
> > > > Putting a RTE_DEPRECATED in RTE_BUILD_BUG_ON directly triggers a build
> > > > warning (caught by CI since we run with Werror).
> > > >
> > >
> > > Would it not be sufficient to just make it an alias for the C11 static
> > > assertions? It's not like its a lot of code to maintain, and if app users
> > > have it in their code I'm not sure we get massive benefit from forcing 
> > > them
> > > to edit their code. I'd rather see it kept as a one-line macro purely from
> > > a backward compatibility viewpoint. We can replace internal usages, though
> > > - which can be checked by checkpatch.
> >
> > No, there is no massive benefit, just trying to reduce our ever
> > growing API surface.
> >
> > Note, this macro should have been kept internal but it was introduced
> > at a time such matter was not considered...
>
> I agree with all.
> Now taking techboard hat, we agreed to avoid breaking API if possible.
> So we should keep RTE_BUILD_BUG_ON forever even if not used.
> Internally we can replace its usages.

So back to the original topic, I get that static_assert is ok for this patch.


-- 
David Marchand



Re: [PATCH v3] net/af_xdp: fix missing UMEM feature

2023-09-29 Thread Ferruh Yigit
On 9/28/2023 10:32 AM, Bruce Richardson wrote:
> On Thu, Sep 28, 2023 at 09:25:53AM +, Shibin Koikkara Reeny wrote:
>> Shared UMEM feature is missing in the af_xdp driver build
>> after the commit 33d66940e9ba ("build: use C11 standard").
>>
>> Runtime Error log while using Shared UMEM feature:
>> rte_pmd_af_xdp_probe(): Initializing pmd_af_xdp for net_af_xdp0
>> init_internals(): Shared UMEM feature not available. Check kernel
>> and libbpf version
>> rte_pmd_af_xdp_probe(): Failed to init internals
>> vdev_probe(): failed to initialize net_af_xdp0 device
>> EAL: Bus (vdev) probe failed.
>>
>> Reason for the missing UMEM feature is because the C11 standard
>> doesn't include the GNU compiler extensions typeof and asm, used
>> by the libbpf and libxdp header files.
>>
>> Meson error log:
>>  In file included from
>> dpdk/build/meson-private/tmpf74nkhqd/testfile.c:5:
>> /usr/local/include/bpf/xsk.h: In function 'xsk_prod_nb_free':
>> /usr/local/include/bpf/xsk.h:165:26: error: expected ';' before '___p1'
>>   165 | r->cached_cons = libbpf_smp_load_acquire(r->consumer);
>>   |  ^~~
>> /usr/local/include/bpf/xsk.h:165:26: error: 'asm' undeclared (first use
>> in this function)
>> ...
>> /usr/local/include/bpf/xsk.h:199:9: error: unknown type name 'typeof'
>>   199 | libbpf_smp_store_release(prod->producer, *prod->producer
>>   + nb);
>>   | ^~~~
>>
>> Fix is to provide alternative keywords by using Option Controlling C
>> Dialect [1].
>>
> 
> Minor nit, this patch provides the alternative keywords using macros rather 
> than any
> C dialect options.
> 

Fixed while merging.

>> Fixes: 33d66940e9ba ("build: use C11 standard")
>>
>> [1] https://gcc.gnu.org/onlinedocs/gcc/C-Dialect-Options.html
>>
>> v3: Used alternative keywords fix.
>> v2: Added original commit causing the issue.
>> Signed-off-by: Shibin Koikkara Reeny 
>> ---
> 
> Acked-by: Bruce Richardson 
> 

Applied to dpdk-next-net/main, thanks.


Re: [PATCH v6 1/6] eal: provide rte stdatomics optional atomics API

2023-09-29 Thread Thomas Monjalon
29/09/2023 13:38, David Marchand:
> On Fri, Sep 29, 2023 at 12:26 PM Thomas Monjalon  wrote:
> >
> > 29/09/2023 11:34, David Marchand:
> > > On Fri, Sep 29, 2023 at 11:26 AM Bruce Richardson
> > >  wrote:
> > > >
> > > > On Fri, Sep 29, 2023 at 11:02:38AM +0200, David Marchand wrote:
> > > > > On Fri, Sep 29, 2023 at 10:54 AM Morten Brørup 
> > > > >  wrote:
> > > > > > In my opinion, our move to C11 thus makes RTE_BUILD_BUG_ON obsolete.
> > > > >
> > > > > That's my thought too.
> > > > >
> > > > > >
> > > > > > We should mark RTE_BUILD_BUG_ON as deprecated, and disallow 
> > > > > > RTE_BUILD_BUG_ON in new code. Perhaps checkpatches could catch this?
> > > > >
> > > > > For a clear deprecation of a part of DPDK API, I don't see a need to
> > > > > add something in checkpatch.
> > > > > Putting a RTE_DEPRECATED in RTE_BUILD_BUG_ON directly triggers a build
> > > > > warning (caught by CI since we run with Werror).
> > > > >
> > > >
> > > > Would it not be sufficient to just make it an alias for the C11 static
> > > > assertions? It's not like its a lot of code to maintain, and if app 
> > > > users
> > > > have it in their code I'm not sure we get massive benefit from forcing 
> > > > them
> > > > to edit their code. I'd rather see it kept as a one-line macro purely 
> > > > from
> > > > a backward compatibility viewpoint. We can replace internal usages, 
> > > > though
> > > > - which can be checked by checkpatch.
> > >
> > > No, there is no massive benefit, just trying to reduce our ever
> > > growing API surface.
> > >
> > > Note, this macro should have been kept internal but it was introduced
> > > at a time such matter was not considered...
> >
> > I agree with all.
> > Now taking techboard hat, we agreed to avoid breaking API if possible.
> > So we should keep RTE_BUILD_BUG_ON forever even if not used.
> > Internally we can replace its usages.
> 
> So back to the original topic, I get that static_assert is ok for this patch.

Yes we can use static_assert.




[PATCH v8 00/12] event DMA adapter library support

2023-09-29 Thread Amit Prakash Shukla
This series adds support for the event DMA adapter library. APIs defined
as part of this library can be used by the application for DMA transfer
of data using an event-based mechanism.

v8:
- Re-arranged DMA adapter section in release notes.

v7:
- Resolved review comments.

v6:
- Resolved review comments.
- Updated git commit message.

v5:
- Resolved review comments.

v4:
- Fixed compilation error.

v3:
- Resolved checkpatch warnings.
- Fixed compilation error on intel.
- Updated git commit message.

v2:
- Resolved review comments.
- Patch split into multiple patches.

Amit Prakash Shukla (12):
  eventdev/dma: introduce DMA adapter
  eventdev/dma: support adapter capabilities get
  eventdev/dma: support adapter create and free
  eventdev/dma: support vchan add and delete
  eventdev/dma: support adapter service function
  eventdev/dma: support adapter start and stop
  eventdev/dma: support adapter service ID get
  eventdev/dma: support adapter runtime params
  eventdev/dma: support adapter stats
  eventdev/dma: support adapter enqueue
  eventdev/dma: support adapter event port get
  app/test: add event DMA adapter auto-test

 MAINTAINERS   |7 +
 app/test/meson.build  |1 +
 app/test/test_event_dma_adapter.c |  805 +
 config/rte_config.h   |1 +
 doc/api/doxy-api-index.md |1 +
 doc/guides/eventdevs/features/default.ini |8 +
 doc/guides/prog_guide/event_dma_adapter.rst   |  264 +++
 doc/guides/prog_guide/eventdev.rst|8 +-
 .../img/event_dma_adapter_op_forward.svg  | 1086 +
 .../img/event_dma_adapter_op_new.svg  | 1079 +
 doc/guides/prog_guide/index.rst   |1 +
 doc/guides/rel_notes/release_23_11.rst|6 +
 lib/eventdev/eventdev_pmd.h   |  171 +-
 lib/eventdev/eventdev_private.c   |   10 +
 lib/eventdev/meson.build  |4 +-
 lib/eventdev/rte_event_dma_adapter.c  | 1434 +
 lib/eventdev/rte_event_dma_adapter.h  |  581 +++
 lib/eventdev/rte_eventdev.c   |   23 +
 lib/eventdev/rte_eventdev.h   |   44 +
 lib/eventdev/rte_eventdev_core.h  |8 +-
 lib/eventdev/version.map  |   16 +
 lib/meson.build   |2 +-
 22 files changed, 5553 insertions(+), 7 deletions(-)
 create mode 100644 app/test/test_event_dma_adapter.c
 create mode 100644 doc/guides/prog_guide/event_dma_adapter.rst
 create mode 100644 doc/guides/prog_guide/img/event_dma_adapter_op_forward.svg
 create mode 100644 doc/guides/prog_guide/img/event_dma_adapter_op_new.svg
 create mode 100644 lib/eventdev/rte_event_dma_adapter.c
 create mode 100644 lib/eventdev/rte_event_dma_adapter.h

-- 
2.25.1



[PATCH v8 01/12] eventdev/dma: introduce DMA adapter

2023-09-29 Thread Amit Prakash Shukla
Introduce the event DMA adapter interface to transfer packets between
a DMA device and an event device.

Signed-off-by: Amit Prakash Shukla 
Acked-by: Jerin Jacob 
---
 MAINTAINERS   |6 +
 doc/api/doxy-api-index.md |1 +
 doc/guides/eventdevs/features/default.ini |8 +
 doc/guides/prog_guide/event_dma_adapter.rst   |  264 
 doc/guides/prog_guide/eventdev.rst|8 +-
 .../img/event_dma_adapter_op_forward.svg  | 1086 +
 .../img/event_dma_adapter_op_new.svg  | 1079 
 doc/guides/prog_guide/index.rst   |1 +
 doc/guides/rel_notes/release_23_11.rst|6 +
 lib/eventdev/eventdev_pmd.h   |  171 ++-
 lib/eventdev/eventdev_private.c   |   10 +
 lib/eventdev/meson.build  |1 +
 lib/eventdev/rte_event_dma_adapter.h  |  581 +
 lib/eventdev/rte_eventdev.h   |   44 +
 lib/eventdev/rte_eventdev_core.h  |8 +-
 lib/eventdev/version.map  |   16 +
 lib/meson.build   |2 +-
 17 files changed, 3286 insertions(+), 6 deletions(-)
 create mode 100644 doc/guides/prog_guide/event_dma_adapter.rst
 create mode 100644 doc/guides/prog_guide/img/event_dma_adapter_op_forward.svg
 create mode 100644 doc/guides/prog_guide/img/event_dma_adapter_op_new.svg
 create mode 100644 lib/eventdev/rte_event_dma_adapter.h

diff --git a/MAINTAINERS b/MAINTAINERS
index a926155f26..4ebbbe8bb3 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -540,6 +540,12 @@ F: lib/eventdev/*crypto_adapter*
 F: app/test/test_event_crypto_adapter.c
 F: doc/guides/prog_guide/event_crypto_adapter.rst
 
+Eventdev DMA Adapter API
+M: Amit Prakash Shukla 
+T: git://dpdk.org/next/dpdk-next-eventdev
+F: lib/eventdev/*dma_adapter*
+F: doc/guides/prog_guide/event_dma_adapter.rst
+
 Raw device API
 M: Sachin Saxena 
 M: Hemant Agrawal 
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index fdeda13932..b7df7be4d9 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -29,6 +29,7 @@ The public API headers are grouped by topics:
   [event_eth_tx_adapter](@ref rte_event_eth_tx_adapter.h),
   [event_timer_adapter](@ref rte_event_timer_adapter.h),
   [event_crypto_adapter](@ref rte_event_crypto_adapter.h),
+  [event_dma_adapter](@ref rte_event_dma_adapter.h),
   [rawdev](@ref rte_rawdev.h),
   [metrics](@ref rte_metrics.h),
   [bitrate](@ref rte_bitrate.h),
diff --git a/doc/guides/eventdevs/features/default.ini 
b/doc/guides/eventdevs/features/default.ini
index 00360f60c6..73a52d915b 100644
--- a/doc/guides/eventdevs/features/default.ini
+++ b/doc/guides/eventdevs/features/default.ini
@@ -44,6 +44,14 @@ internal_port_op_fwd   =
 internal_port_qp_ev_bind   =
 session_private_data   =
 
+;
+; Features of a default DMA adapter.
+;
+[DMA adapter Features]
+internal_port_op_new   =
+internal_port_op_fwd   =
+internal_port_vchan_ev_bind =
+
 ;
 ; Features of a default Timer adapter.
 ;
diff --git a/doc/guides/prog_guide/event_dma_adapter.rst 
b/doc/guides/prog_guide/event_dma_adapter.rst
new file mode 100644
index 00..701e50d042
--- /dev/null
+++ b/doc/guides/prog_guide/event_dma_adapter.rst
@@ -0,0 +1,264 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+Copyright (c) 2023 Marvell.
+
+Event DMA Adapter Library
+=========================
+
+DPDK :doc:`Eventdev library ` provides event driven programming 
model with features
+to schedule events. :doc:`DMA Device library ` provides an interface 
to DMA poll mode
+drivers that support DMA operations. Event DMA Adapter is intended to bridge 
between the event
+device and the DMA device.
+
+Packet flow from DMA device to the event device can be accomplished using 
software and hardware
+based transfer mechanisms. The adapter queries an eventdev PMD to determine 
which mechanism to
+be used. The adapter uses an EAL service core function for software based 
packet transfer and
+uses the eventdev PMD functions to configure hardware based packet transfer 
between DMA device
+and the event device. DMA adapter uses a new event type called 
``RTE_EVENT_TYPE_DMADEV`` to
+indicate the source of event.
+
+Application can choose to submit a DMA operation directly to a DMA device or 
send it to a DMA
+adapter via eventdev based on 
``RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD`` capability. The
+first mode is known as the event new (``RTE_EVENT_DMA_ADAPTER_OP_NEW``) mode 
and the second as the
+event forward (``RTE_EVENT_DMA_ADAPTER_OP_FORWARD``) mode. Choice of mode can 
be specified while
+creating the adapter. In the former mode, it is the application's 
responsibility to enable
+ingress packet ordering. In the latter mode, it is the adapter's 
responsibility to enable
+ingress packet ordering.
+
+
+Adapter Modes
+-
+
+RTE_EVENT_DMA_ADAPTER_OP_NEW mode
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In the

[PATCH v8 02/12] eventdev/dma: support adapter capabilities get

2023-09-29 Thread Amit Prakash Shukla
Added a new eventdev API rte_event_dma_adapter_caps_get() to get the
DMA adapter capabilities supported by the driver.

Signed-off-by: Amit Prakash Shukla 
---
 lib/eventdev/meson.build|  2 +-
 lib/eventdev/rte_eventdev.c | 23 +++
 2 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/lib/eventdev/meson.build b/lib/eventdev/meson.build
index 21347f7c4c..b46bbbc9aa 100644
--- a/lib/eventdev/meson.build
+++ b/lib/eventdev/meson.build
@@ -43,5 +43,5 @@ driver_sdk_headers += files(
 'event_timer_adapter_pmd.h',
 )
 
-deps += ['ring', 'ethdev', 'hash', 'mempool', 'mbuf', 'timer', 'cryptodev']
+deps += ['ring', 'ethdev', 'hash', 'mempool', 'mbuf', 'timer', 'cryptodev', 
'dmadev']
 deps += ['telemetry']
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 6ab4524332..60509c6efb 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -20,6 +20,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 
@@ -224,6 +225,28 @@ rte_event_eth_tx_adapter_caps_get(uint8_t dev_id, uint16_t 
eth_port_id,
: 0;
 }
 
+int
+rte_event_dma_adapter_caps_get(uint8_t dev_id, uint8_t dma_dev_id, uint32_t 
*caps)
+{
+   struct rte_eventdev *dev;
+
+   RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+   if (!rte_dma_is_valid(dma_dev_id))
+   return -EINVAL;
+
+   dev = &rte_eventdevs[dev_id];
+
+   if (caps == NULL)
+   return -EINVAL;
+
+   *caps = 0;
+
+   if (dev->dev_ops->dma_adapter_caps_get)
+   return (*dev->dev_ops->dma_adapter_caps_get)(dev, dma_dev_id, 
caps);
+
+   return 0;
+}
+
 static inline int
 event_dev_queue_config(struct rte_eventdev *dev, uint8_t nb_queues)
 {
-- 
2.25.1



[PATCH v8 03/12] eventdev/dma: support adapter create and free

2023-09-29 Thread Amit Prakash Shukla
Added API support to create and free the DMA adapter. The create function
shall be called with the event device to be associated with the adapter and
the port configuration to set up an event port.

Signed-off-by: Amit Prakash Shukla 
---
 config/rte_config.h  |   1 +
 lib/eventdev/meson.build |   1 +
 lib/eventdev/rte_event_dma_adapter.c | 335 +++
 3 files changed, 337 insertions(+)
 create mode 100644 lib/eventdev/rte_event_dma_adapter.c

diff --git a/config/rte_config.h b/config/rte_config.h
index 400e44e3cf..401727703f 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -77,6 +77,7 @@
 #define RTE_EVENT_ETH_INTR_RING_SIZE 1024
 #define RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE 32
 #define RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE 32
+#define RTE_EVENT_DMA_ADAPTER_MAX_INSTANCE 32
 
 /* rawdev defines */
 #define RTE_RAWDEV_MAX_DEVS 64
diff --git a/lib/eventdev/meson.build b/lib/eventdev/meson.build
index b46bbbc9aa..250abcb154 100644
--- a/lib/eventdev/meson.build
+++ b/lib/eventdev/meson.build
@@ -17,6 +17,7 @@ sources = files(
 'eventdev_private.c',
 'eventdev_trace_points.c',
 'rte_event_crypto_adapter.c',
+'rte_event_dma_adapter.c',
 'rte_event_eth_rx_adapter.c',
 'rte_event_eth_tx_adapter.c',
 'rte_event_ring.c',
diff --git a/lib/eventdev/rte_event_dma_adapter.c 
b/lib/eventdev/rte_event_dma_adapter.c
new file mode 100644
index 00..241327d2a7
--- /dev/null
+++ b/lib/eventdev/rte_event_dma_adapter.c
@@ -0,0 +1,335 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2023 Marvell.
+ */
+
+#include 
+
+#include "rte_event_dma_adapter.h"
+
+#define DMA_BATCH_SIZE 32
+#define DMA_DEFAULT_MAX_NB 128
+#define DMA_ADAPTER_NAME_LEN 32
+#define DMA_ADAPTER_BUFFER_SIZE 1024
+
+#define DMA_ADAPTER_OPS_BUFFER_SIZE (DMA_BATCH_SIZE + DMA_BATCH_SIZE)
+
+#define DMA_ADAPTER_ARRAY "event_dma_adapter_array"
+
+/* Macros to check for valid adapter */
+#define EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, retval) \
+   do { \
+   if (!edma_adapter_valid_id(id)) { \
+   RTE_EDEV_LOG_ERR("Invalid DMA adapter id = %d\n", id); \
+   return retval; \
+   } \
+   } while (0)
+
+/* DMA ops circular buffer */
+struct dma_ops_circular_buffer {
+   /* Index of head element */
+   uint16_t head;
+
+   /* Index of tail element */
+   uint16_t tail;
+
+   /* Number of elements in buffer */
+   uint16_t count;
+
+   /* Size of circular buffer */
+   uint16_t size;
+
+   /* Pointer to hold rte_event_dma_adapter_op for processing */
+   struct rte_event_dma_adapter_op **op_buffer;
+} __rte_cache_aligned;
+
+/* DMA device information */
+struct dma_device_info {
+   /* Number of vchans configured for a DMA device. */
+   uint16_t num_dma_dev_vchan;
+} __rte_cache_aligned;
+
+struct event_dma_adapter {
+   /* Event device identifier */
+   uint8_t eventdev_id;
+
+   /* Event port identifier */
+   uint8_t event_port_id;
+
+   /* Adapter mode */
+   enum rte_event_dma_adapter_mode mode;
+
+   /* Memory allocation name */
+   char mem_name[DMA_ADAPTER_NAME_LEN];
+
+   /* Socket identifier cached from eventdev */
+   int socket_id;
+
+   /* Lock to serialize config updates with service function */
+   rte_spinlock_t lock;
+
+   /* DMA device structure array */
+   struct dma_device_info *dma_devs;
+
+   /* Circular buffer for processing DMA ops to eventdev */
+   struct dma_ops_circular_buffer ebuf;
+
+   /* Configuration callback for rte_service configuration */
+   rte_event_dma_adapter_conf_cb conf_cb;
+
+   /* Configuration callback argument */
+   void *conf_arg;
+
+   /* Set if  default_cb is being used */
+   int default_cb_arg;
+} __rte_cache_aligned;
+
+static struct event_dma_adapter **event_dma_adapter;
+
+static inline int
+edma_adapter_valid_id(uint8_t id)
+{
+   return id < RTE_EVENT_DMA_ADAPTER_MAX_INSTANCE;
+}
+
+static inline struct event_dma_adapter *
+edma_id_to_adapter(uint8_t id)
+{
+   return event_dma_adapter ? event_dma_adapter[id] : NULL;
+}
+
+static int
+edma_array_init(void)
+{
+   const struct rte_memzone *mz;
+   uint32_t sz;
+
+   mz = rte_memzone_lookup(DMA_ADAPTER_ARRAY);
+   if (mz == NULL) {
+   sz = sizeof(struct event_dma_adapter *) * 
RTE_EVENT_DMA_ADAPTER_MAX_INSTANCE;
+   sz = RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
+
+   mz = rte_memzone_reserve_aligned(DMA_ADAPTER_ARRAY, sz, 
rte_socket_id(), 0,
+RTE_CACHE_LINE_SIZE);
+   if (mz == NULL) {
+   RTE_EDEV_LOG_ERR("Failed to reserve memzone : %s, err = 
%d",
+DMA_ADAPTER_ARRAY, rte_errno);
+   return -rte_errno;
+   }
+   }
+
+   e

[PATCH v8 04/12] eventdev/dma: support vchan add and delete

2023-09-29 Thread Amit Prakash Shukla
Added API support to add and delete vchans from the DMA adapter. DMA dev ID
and vchan are added to the adapter instance by calling
rte_event_dma_adapter_vchan_add() and deleted using
rte_event_dma_adapter_vchan_del().

Signed-off-by: Amit Prakash Shukla 
---
 lib/eventdev/rte_event_dma_adapter.c | 204 +++
 1 file changed, 204 insertions(+)

diff --git a/lib/eventdev/rte_event_dma_adapter.c 
b/lib/eventdev/rte_event_dma_adapter.c
index 241327d2a7..fa2e29b9d3 100644
--- a/lib/eventdev/rte_event_dma_adapter.c
+++ b/lib/eventdev/rte_event_dma_adapter.c
@@ -42,8 +42,31 @@ struct dma_ops_circular_buffer {
struct rte_event_dma_adapter_op **op_buffer;
 } __rte_cache_aligned;
 
+/* Vchan information */
+struct dma_vchan_info {
+   /* Set to indicate vchan queue is enabled */
+   bool vq_enabled;
+
+   /* Circular buffer for batching DMA ops to dma_dev */
+   struct dma_ops_circular_buffer dma_buf;
+} __rte_cache_aligned;
+
 /* DMA device information */
 struct dma_device_info {
+   /* Pointer to vchan queue info */
+   struct dma_vchan_info *vchanq;
+
+   /* Pointer to vchan queue info.
+* This holds ops passed by application till the
+* dma completion is done.
+*/
+   struct dma_vchan_info *tqmap;
+
+   /* If num_vchanq > 0, the start callback will
+* be invoked if not already invoked
+*/
+   uint16_t num_vchanq;
+
/* Number of vchans configured for a DMA device. */
uint16_t num_dma_dev_vchan;
 } __rte_cache_aligned;
@@ -81,6 +104,9 @@ struct event_dma_adapter {
 
/* Set if  default_cb is being used */
int default_cb_arg;
+
+   /* No. of vchan queue configured */
+   uint16_t nb_vchanq;
 } __rte_cache_aligned;
 
 static struct event_dma_adapter **event_dma_adapter;
@@ -333,3 +359,181 @@ rte_event_dma_adapter_free(uint8_t id)
 
return 0;
 }
+
+static void
+edma_update_vchanq_info(struct event_dma_adapter *adapter, struct 
dma_device_info *dev_info,
+   uint16_t vchan, uint8_t add)
+{
+   struct dma_vchan_info *vchan_info;
+   struct dma_vchan_info *tqmap_info;
+   int enabled;
+   uint16_t i;
+
+   if (dev_info->vchanq == NULL)
+   return;
+
+   if (vchan == RTE_DMA_ALL_VCHAN) {
+   for (i = 0; i < dev_info->num_dma_dev_vchan; i++)
+   edma_update_vchanq_info(adapter, dev_info, i, add);
+   } else {
+   tqmap_info = &dev_info->tqmap[vchan];
+   vchan_info = &dev_info->vchanq[vchan];
+   enabled = vchan_info->vq_enabled;
+   if (add) {
+   adapter->nb_vchanq += !enabled;
+   dev_info->num_vchanq += !enabled;
+   } else {
+   adapter->nb_vchanq -= enabled;
+   dev_info->num_vchanq -= enabled;
+   }
+   vchan_info->vq_enabled = !!add;
+   tqmap_info->vq_enabled = !!add;
+   }
+}
+
+int
+rte_event_dma_adapter_vchan_add(uint8_t id, int16_t dma_dev_id, uint16_t vchan,
+   const struct rte_event *event)
+{
+   struct event_dma_adapter *adapter;
+   struct dma_device_info *dev_info;
+   struct rte_eventdev *dev;
+   uint32_t cap;
+   int ret;
+
+   EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+   if (!rte_dma_is_valid(dma_dev_id)) {
+   RTE_EDEV_LOG_ERR("Invalid dma_dev_id = %" PRIu8, dma_dev_id);
+   return -EINVAL;
+   }
+
+   adapter = edma_id_to_adapter(id);
+   if (adapter == NULL)
+   return -EINVAL;
+
+   dev = &rte_eventdevs[adapter->eventdev_id];
+   ret = rte_event_dma_adapter_caps_get(adapter->eventdev_id, dma_dev_id, 
&cap);
+   if (ret) {
+   RTE_EDEV_LOG_ERR("Failed to get adapter caps dev %u dma_dev 
%u", id, dma_dev_id);
+   return ret;
+   }
+
+   if ((cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_VCHAN_EV_BIND) && 
(event == NULL)) {
+   RTE_EDEV_LOG_ERR("Event can not be NULL for dma_dev_id = %u", 
dma_dev_id);
+   return -EINVAL;
+   }
+
+   dev_info = &adapter->dma_devs[dma_dev_id];
+   if (vchan != RTE_DMA_ALL_VCHAN && vchan >= dev_info->num_dma_dev_vchan) 
{
+   RTE_EDEV_LOG_ERR("Invalid vchan %u", vchan);
+   return -EINVAL;
+   }
+
+   /* In case HW cap is RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, no
+* need of service core as HW supports event forward capability.
+*/
+   if ((cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) ||
+   (cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_VCHAN_EV_BIND &&
+adapter->mode == RTE_EVENT_DMA_ADAPTER_OP_NEW) ||
+   (cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_NEW &&
+adapter->mode == RTE_EVENT_DMA_ADAPTER_OP_NEW)) {
+   if (*dev->dev_ops

[PATCH v8 05/12] eventdev/dma: support adapter service function

2023-09-29 Thread Amit Prakash Shukla
Added support for the DMA adapter service function for event devices.
Enqueue and dequeue of events from the eventdev and DMA device are done
based on the adapter mode and the supported HW capabilities.

Signed-off-by: Amit Prakash Shukla 
---
 lib/eventdev/rte_event_dma_adapter.c | 592 +++
 1 file changed, 592 insertions(+)

diff --git a/lib/eventdev/rte_event_dma_adapter.c 
b/lib/eventdev/rte_event_dma_adapter.c
index fa2e29b9d3..1d8bae0422 100644
--- a/lib/eventdev/rte_event_dma_adapter.c
+++ b/lib/eventdev/rte_event_dma_adapter.c
@@ -3,6 +3,7 @@
  */
 
 #include 
+#include 
 
 #include "rte_event_dma_adapter.h"
 
@@ -69,6 +70,10 @@ struct dma_device_info {
 
/* Number of vchans configured for a DMA device. */
uint16_t num_dma_dev_vchan;
+
+   /* Next queue pair to be processed */
+   uint16_t next_vchan_id;
+
 } __rte_cache_aligned;
 
 struct event_dma_adapter {
@@ -90,6 +95,9 @@ struct event_dma_adapter {
/* Lock to serialize config updates with service function */
rte_spinlock_t lock;
 
+   /* Next dma device to be processed */
+   uint16_t next_dmadev_id;
+
/* DMA device structure array */
struct dma_device_info *dma_devs;
 
@@ -107,6 +115,26 @@ struct event_dma_adapter {
 
/* No. of vchan queue configured */
uint16_t nb_vchanq;
+
+   /* Per adapter EAL service ID */
+   uint32_t service_id;
+
+   /* Service initialization state */
+   uint8_t service_initialized;
+
+   /* Max DMA ops processed in any service function invocation */
+   uint32_t max_nb;
+
+   /* Store event port's implicit release capability */
+   uint8_t implicit_release_disabled;
+
+   /* Flag to indicate backpressure at dma_dev
+* Stop further dequeuing events from eventdev
+*/
+   bool stop_enq_to_dma_dev;
+
+   /* Loop counter to flush dma ops */
+   uint16_t transmit_loop_count;
 } __rte_cache_aligned;
 
 static struct event_dma_adapter **event_dma_adapter;
@@ -148,6 +176,18 @@ edma_array_init(void)
return 0;
 }
 
+static inline bool
+edma_circular_buffer_batch_ready(struct dma_ops_circular_buffer *bufp)
+{
+   return bufp->count >= DMA_BATCH_SIZE;
+}
+
+static inline bool
+edma_circular_buffer_space_for_batch(struct dma_ops_circular_buffer *bufp)
+{
+   return (bufp->size - bufp->count) >= DMA_BATCH_SIZE;
+}
+
 static inline int
 edma_circular_buffer_init(const char *name, struct dma_ops_circular_buffer 
*buf, uint16_t sz)
 {
@@ -166,6 +206,71 @@ edma_circular_buffer_free(struct dma_ops_circular_buffer 
*buf)
rte_free(buf->op_buffer);
 }
 
+static inline int
+edma_circular_buffer_add(struct dma_ops_circular_buffer *bufp, struct 
rte_event_dma_adapter_op *op)
+{
+   uint16_t *tail = &bufp->tail;
+
+   bufp->op_buffer[*tail] = op;
+
+   /* circular buffer, go round */
+   *tail = (*tail + 1) % bufp->size;
+   bufp->count++;
+
+   return 0;
+}
+
+static inline int
+edma_circular_buffer_flush_to_dma_dev(struct event_dma_adapter *adapter,
+ struct dma_ops_circular_buffer *bufp, 
uint8_t dma_dev_id,
+ uint16_t vchan, uint16_t *nb_ops_flushed)
+{
+   struct rte_event_dma_adapter_op *op;
+   struct dma_vchan_info *tq;
+   uint16_t *head = &bufp->head;
+   uint16_t *tail = &bufp->tail;
+   uint16_t n;
+   uint16_t i;
+   int ret;
+
+   if (*tail > *head)
+   n = *tail - *head;
+   else if (*tail < *head)
+   n = bufp->size - *head;
+   else {
+   *nb_ops_flushed = 0;
+   return 0; /* buffer empty */
+   }
+
+   tq = &adapter->dma_devs[dma_dev_id].tqmap[vchan];
+
+   for (i = 0; i < n; i++) {
+   op = bufp->op_buffer[*head];
+   if (op->nb_src == 1 && op->nb_dst == 1)
+   ret = rte_dma_copy(dma_dev_id, vchan, 
op->src_seg->addr, op->dst_seg->addr,
+  op->src_seg->length, op->flags);
+   else
+   ret = rte_dma_copy_sg(dma_dev_id, vchan, op->src_seg, 
op->dst_seg,
+ op->nb_src, op->nb_dst, 
op->flags);
+   if (ret < 0)
+   break;
+
+   /* Enqueue in transaction queue. */
+   edma_circular_buffer_add(&tq->dma_buf, op);
+
+   *head = (*head + 1) % bufp->size;
+   }
+
+   *nb_ops_flushed = i;
+   bufp->count -= *nb_ops_flushed;
+   if (!bufp->count) {
+   *head = 0;
+   *tail = 0;
+   }
+
+   return *nb_ops_flushed == n ? 0 : -1;
+}
+
 static int
 edma_default_config_cb(uint8_t id, uint8_t evdev_id, struct 
rte_event_dma_adapter_conf *conf,
   void *arg)
@@ -360,6 +465,406 @@ rte_event_dma_adapter_free(uint8_t id)
return 0;
 }
 
+static inline unsigned int
+edma_enq_to_d

[PATCH v8 06/12] eventdev/dma: support adapter start and stop

2023-09-29 Thread Amit Prakash Shukla
Added API support to start and stop the DMA adapter.

Signed-off-by: Amit Prakash Shukla 
---
 lib/eventdev/rte_event_dma_adapter.c | 69 
 1 file changed, 69 insertions(+)

diff --git a/lib/eventdev/rte_event_dma_adapter.c 
b/lib/eventdev/rte_event_dma_adapter.c
index 1d8bae0422..be6c2623e9 100644
--- a/lib/eventdev/rte_event_dma_adapter.c
+++ b/lib/eventdev/rte_event_dma_adapter.c
@@ -74,6 +74,13 @@ struct dma_device_info {
/* Next queue pair to be processed */
uint16_t next_vchan_id;
 
+   /* Set to indicate processing has been started */
+   uint8_t dev_started;
+
+   /* Set to indicate dmadev->eventdev packet
+* transfer uses a hardware mechanism
+*/
+   uint8_t internal_event_port;
 } __rte_cache_aligned;
 
 struct event_dma_adapter {
@@ -1129,3 +1136,65 @@ rte_event_dma_adapter_vchan_del(uint8_t id, int16_t 
dma_dev_id, uint16_t vchan)
 
return ret;
 }
+
+static int
+edma_adapter_ctrl(uint8_t id, int start)
+{
+   struct event_dma_adapter *adapter;
+   struct dma_device_info *dev_info;
+   struct rte_eventdev *dev;
+   uint16_t num_dma_dev;
+   int stop = !start;
+   int use_service;
+   uint32_t i;
+
+   use_service = 0;
+   EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+   adapter = edma_id_to_adapter(id);
+   if (adapter == NULL)
+   return -EINVAL;
+
+   num_dma_dev = rte_dma_count_avail();
+   dev = &rte_eventdevs[adapter->eventdev_id];
+
+   for (i = 0; i < num_dma_dev; i++) {
+   dev_info = &adapter->dma_devs[i];
+   /* start check for num queue pairs */
+   if (start && !dev_info->num_vchanq)
+   continue;
+   /* stop check if dev has been started */
+   if (stop && !dev_info->dev_started)
+   continue;
+   use_service |= !dev_info->internal_event_port;
+   dev_info->dev_started = start;
+   if (dev_info->internal_event_port == 0)
+   continue;
+   start ? (*dev->dev_ops->dma_adapter_start)(dev, i) :
+   (*dev->dev_ops->dma_adapter_stop)(dev, i);
+   }
+
+   if (use_service)
+   rte_service_runstate_set(adapter->service_id, start);
+
+   return 0;
+}
+
+int
+rte_event_dma_adapter_start(uint8_t id)
+{
+   struct event_dma_adapter *adapter;
+
+   EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+   adapter = edma_id_to_adapter(id);
+   if (adapter == NULL)
+   return -EINVAL;
+
+   return edma_adapter_ctrl(id, 1);
+}
+
+int
+rte_event_dma_adapter_stop(uint8_t id)
+{
+   return edma_adapter_ctrl(id, 0);
+}
-- 
2.25.1



[PATCH v8 07/12] eventdev/dma: support adapter service ID get

2023-09-29 Thread Amit Prakash Shukla
Added API support to get the DMA adapter service ID. The service ID
returned by the API call shall be used by the application to map the
adapter to a service core.

Signed-off-by: Amit Prakash Shukla 
---
 lib/eventdev/rte_event_dma_adapter.c | 17 +
 1 file changed, 17 insertions(+)

diff --git a/lib/eventdev/rte_event_dma_adapter.c 
b/lib/eventdev/rte_event_dma_adapter.c
index be6c2623e9..c3b014aaf9 100644
--- a/lib/eventdev/rte_event_dma_adapter.c
+++ b/lib/eventdev/rte_event_dma_adapter.c
@@ -1137,6 +1137,23 @@ rte_event_dma_adapter_vchan_del(uint8_t id, int16_t 
dma_dev_id, uint16_t vchan)
return ret;
 }
 
+int
+rte_event_dma_adapter_service_id_get(uint8_t id, uint32_t *service_id)
+{
+   struct event_dma_adapter *adapter;
+
+   EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+   adapter = edma_id_to_adapter(id);
+   if (adapter == NULL || service_id == NULL)
+   return -EINVAL;
+
+   if (adapter->service_initialized)
+   *service_id = adapter->service_id;
+
+   return adapter->service_initialized ? 0 : -ESRCH;
+}
+
 static int
 edma_adapter_ctrl(uint8_t id, int start)
 {
-- 
2.25.1



[PATCH v8 08/12] eventdev/dma: support adapter runtime params

2023-09-29 Thread Amit Prakash Shukla
Added support to set and get runtime params for DMA adapter. The
parameters that can be set/get are defined in
struct rte_event_dma_adapter_runtime_params.

Signed-off-by: Amit Prakash Shukla 
---
 lib/eventdev/rte_event_dma_adapter.c | 93 
 1 file changed, 93 insertions(+)

diff --git a/lib/eventdev/rte_event_dma_adapter.c 
b/lib/eventdev/rte_event_dma_adapter.c
index c3b014aaf9..632169a7c2 100644
--- a/lib/eventdev/rte_event_dma_adapter.c
+++ b/lib/eventdev/rte_event_dma_adapter.c
@@ -1215,3 +1215,96 @@ rte_event_dma_adapter_stop(uint8_t id)
 {
return edma_adapter_ctrl(id, 0);
 }
+
+#define DEFAULT_MAX_NB 128
+
+int
+rte_event_dma_adapter_runtime_params_init(struct 
rte_event_dma_adapter_runtime_params *params)
+{
+   if (params == NULL)
+   return -EINVAL;
+
+   memset(params, 0, sizeof(*params));
+   params->max_nb = DEFAULT_MAX_NB;
+
+   return 0;
+}
+
+static int
+dma_adapter_cap_check(struct event_dma_adapter *adapter)
+{
+   uint32_t caps;
+   int ret;
+
+   if (!adapter->nb_vchanq)
+   return -EINVAL;
+
+   ret = rte_event_dma_adapter_caps_get(adapter->eventdev_id, 
adapter->next_dmadev_id, &caps);
+   if (ret) {
+   RTE_EDEV_LOG_ERR("Failed to get adapter caps dev %" PRIu8 " 
cdev %" PRIu8,
+adapter->eventdev_id, adapter->next_dmadev_id);
+   return ret;
+   }
+
+   if ((caps & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) ||
+   (caps & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_NEW))
+   return -ENOTSUP;
+
+   return 0;
+}
+
+int
+rte_event_dma_adapter_runtime_params_set(uint8_t id,
+struct 
rte_event_dma_adapter_runtime_params *params)
+{
+   struct event_dma_adapter *adapter;
+   int ret;
+
+   EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+   if (params == NULL) {
+   RTE_EDEV_LOG_ERR("params pointer is NULL\n");
+   return -EINVAL;
+   }
+
+   adapter = edma_id_to_adapter(id);
+   if (adapter == NULL)
+   return -EINVAL;
+
+   ret = dma_adapter_cap_check(adapter);
+   if (ret)
+   return ret;
+
+   rte_spinlock_lock(&adapter->lock);
+   adapter->max_nb = params->max_nb;
+   rte_spinlock_unlock(&adapter->lock);
+
+   return 0;
+}
+
+int
+rte_event_dma_adapter_runtime_params_get(uint8_t id,
+struct 
rte_event_dma_adapter_runtime_params *params)
+{
+   struct event_dma_adapter *adapter;
+   int ret;
+
+   EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+   if (params == NULL) {
+   RTE_EDEV_LOG_ERR("params pointer is NULL\n");
+   return -EINVAL;
+   }
+
+   adapter = edma_id_to_adapter(id);
+   if (adapter == NULL)
+   return -EINVAL;
+
+   ret = dma_adapter_cap_check(adapter);
+   if (ret)
+   return ret;
+
+   params->max_nb = adapter->max_nb;
+
+   return 0;
+}
-- 
2.25.1



[PATCH v8 09/12] eventdev/dma: support adapter stats

2023-09-29 Thread Amit Prakash Shukla
Added DMA adapter stats APIs to get and reset stats. The get API
reports both the SW adapter's own stats and the enqueue/dequeue
stats supported by the eventdev driver.

Signed-off-by: Amit Prakash Shukla 
---
 lib/eventdev/rte_event_dma_adapter.c | 95 
 1 file changed, 95 insertions(+)

diff --git a/lib/eventdev/rte_event_dma_adapter.c 
b/lib/eventdev/rte_event_dma_adapter.c
index 632169a7c2..6c67e6d499 100644
--- a/lib/eventdev/rte_event_dma_adapter.c
+++ b/lib/eventdev/rte_event_dma_adapter.c
@@ -142,6 +142,9 @@ struct event_dma_adapter {
 
/* Loop counter to flush dma ops */
uint16_t transmit_loop_count;
+
+   /* Per instance stats structure */
+   struct rte_event_dma_adapter_stats dma_stats;
 } __rte_cache_aligned;
 
 static struct event_dma_adapter **event_dma_adapter;
@@ -475,6 +478,7 @@ rte_event_dma_adapter_free(uint8_t id)
 static inline unsigned int
 edma_enq_to_dma_dev(struct event_dma_adapter *adapter, struct rte_event *ev, 
unsigned int cnt)
 {
+   struct rte_event_dma_adapter_stats *stats = &adapter->dma_stats;
struct dma_vchan_info *vchan_qinfo = NULL;
struct rte_event_dma_adapter_op *dma_op;
uint16_t vchan, nb_enqueued = 0;
@@ -484,6 +488,7 @@ edma_enq_to_dma_dev(struct event_dma_adapter *adapter, 
struct rte_event *ev, uns
 
ret = 0;
n = 0;
+   stats->event_deq_count += cnt;
 
for (i = 0; i < cnt; i++) {
dma_op = ev[i].event_ptr;
@@ -506,6 +511,7 @@ edma_enq_to_dma_dev(struct event_dma_adapter *adapter, 
struct rte_event *ev, uns
ret = edma_circular_buffer_flush_to_dma_dev(adapter, 
&vchan_qinfo->dma_buf,
dma_dev_id, 
vchan,

&nb_enqueued);
+   stats->dma_enq_count += nb_enqueued;
n += nb_enqueued;
 
/**
@@ -552,6 +558,7 @@ edma_adapter_dev_flush(struct event_dma_adapter *adapter, 
int16_t dma_dev_id,
 static unsigned int
 edma_adapter_enq_flush(struct event_dma_adapter *adapter)
 {
+   struct rte_event_dma_adapter_stats *stats = &adapter->dma_stats;
int16_t dma_dev_id;
uint16_t nb_enqueued = 0;
uint16_t nb_ops_flushed = 0;
@@ -566,6 +573,8 @@ edma_adapter_enq_flush(struct event_dma_adapter *adapter)
if (!nb_ops_flushed)
adapter->stop_enq_to_dma_dev = false;
 
+   stats->dma_enq_count += nb_enqueued;
+
return nb_enqueued;
 }
 
@@ -577,6 +586,7 @@ edma_adapter_enq_flush(struct event_dma_adapter *adapter)
 static int
 edma_adapter_enq_run(struct event_dma_adapter *adapter, unsigned int max_enq)
 {
+   struct rte_event_dma_adapter_stats *stats = &adapter->dma_stats;
uint8_t event_port_id = adapter->event_port_id;
uint8_t event_dev_id = adapter->eventdev_id;
struct rte_event ev[DMA_BATCH_SIZE];
@@ -596,6 +606,7 @@ edma_adapter_enq_run(struct event_dma_adapter *adapter, 
unsigned int max_enq)
break;
}
 
+   stats->event_poll_count++;
n = rte_event_dequeue_burst(event_dev_id, event_port_id, ev, 
DMA_BATCH_SIZE, 0);
 
if (!n)
@@ -616,6 +627,7 @@ static inline uint16_t
 edma_ops_enqueue_burst(struct event_dma_adapter *adapter, struct 
rte_event_dma_adapter_op **ops,
   uint16_t num)
 {
+   struct rte_event_dma_adapter_stats *stats = &adapter->dma_stats;
uint8_t event_port_id = adapter->event_port_id;
uint8_t event_dev_id = adapter->eventdev_id;
struct rte_event events[DMA_BATCH_SIZE];
@@ -655,6 +667,10 @@ edma_ops_enqueue_burst(struct event_dma_adapter *adapter, 
struct rte_event_dma_a
 
} while (retry++ < DMA_ADAPTER_MAX_EV_ENQ_RETRIES && nb_enqueued < 
nb_ev);
 
+   stats->event_enq_fail_count += nb_ev - nb_enqueued;
+   stats->event_enq_count += nb_enqueued;
+   stats->event_enq_retry_count += retry - 1;
+
return nb_enqueued;
 }
 
@@ -709,6 +725,7 @@ edma_ops_buffer_flush(struct event_dma_adapter *adapter)
 static inline unsigned int
 edma_adapter_deq_run(struct event_dma_adapter *adapter, unsigned int max_deq)
 {
+   struct rte_event_dma_adapter_stats *stats = &adapter->dma_stats;
struct dma_vchan_info *vchan_info;
struct dma_ops_circular_buffer *tq_buf;
struct rte_event_dma_adapter_op *ops;
@@ -746,6 +763,7 @@ edma_adapter_deq_run(struct event_dma_adapter *adapter, 
unsigned int max_deq)
continue;
 
done = false;
+   stats->dma_deq_count += n;
 
tq_buf = &dev_info->tqmap[vchan].dma_buf;
 
@@ -1308,3 +1326,80 @@ rte_event_dma_adapter_runtime_params_get(uint8_t id,
 
return 0;
 }
+
+int
+rte_event_dma_adapter_stats_get(uint8_t id,

[PATCH v8 10/12] eventdev/dma: support adapter enqueue

2023-09-29 Thread Amit Prakash Shukla
Added API support to enqueue a DMA operation to the DMA driver.

Signed-off-by: Amit Prakash Shukla 
---
 lib/eventdev/rte_event_dma_adapter.c | 13 +
 1 file changed, 13 insertions(+)

diff --git a/lib/eventdev/rte_event_dma_adapter.c 
b/lib/eventdev/rte_event_dma_adapter.c
index 6c67e6d499..f299914dec 100644
--- a/lib/eventdev/rte_event_dma_adapter.c
+++ b/lib/eventdev/rte_event_dma_adapter.c
@@ -1403,3 +1403,16 @@ rte_event_dma_adapter_stats_reset(uint8_t id)
 
return 0;
 }
+
+uint16_t
+rte_event_dma_adapter_enqueue(uint8_t dev_id, uint8_t port_id, struct 
rte_event ev[],
+ uint16_t nb_events)
+{
+   const struct rte_event_fp_ops *fp_ops;
+   void *port;
+
+   fp_ops = &rte_event_fp_ops[dev_id];
+   port = fp_ops->data[port_id];
+
+   return fp_ops->dma_enqueue(port, ev, nb_events);
+}
-- 
2.25.1



[PATCH v8 11/12] eventdev/dma: support adapter event port get

2023-09-29 Thread Amit Prakash Shukla
Added support for DMA adapter event port get.

Signed-off-by: Amit Prakash Shukla 
---
 lib/eventdev/rte_event_dma_adapter.c | 16 
 1 file changed, 16 insertions(+)

diff --git a/lib/eventdev/rte_event_dma_adapter.c 
b/lib/eventdev/rte_event_dma_adapter.c
index f299914dec..af4b5ad388 100644
--- a/lib/eventdev/rte_event_dma_adapter.c
+++ b/lib/eventdev/rte_event_dma_adapter.c
@@ -475,6 +475,22 @@ rte_event_dma_adapter_free(uint8_t id)
return 0;
 }
 
+int
+rte_event_dma_adapter_event_port_get(uint8_t id, uint8_t *event_port_id)
+{
+   struct event_dma_adapter *adapter;
+
+   EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+   adapter = edma_id_to_adapter(id);
+   if (adapter == NULL || event_port_id == NULL)
+   return -EINVAL;
+
+   *event_port_id = adapter->event_port_id;
+
+   return 0;
+}
+
 static inline unsigned int
 edma_enq_to_dma_dev(struct event_dma_adapter *adapter, struct rte_event *ev, 
unsigned int cnt)
 {
-- 
2.25.1



[PATCH v8 12/12] app/test: add event DMA adapter auto-test

2023-09-29 Thread Amit Prakash Shukla
Added a testsuite for the DMA adapter functionality.
The testsuite detects the event and DMA device capabilities,
configures the DMA adapter accordingly, and tests the
supported modes. Test command:

sudo /app/test/dpdk-test --vdev=dma_skeleton \
event_dma_adapter_autotest

Signed-off-by: Amit Prakash Shukla 
---
 MAINTAINERS   |   1 +
 app/test/meson.build  |   1 +
 app/test/test_event_dma_adapter.c | 805 ++
 3 files changed, 807 insertions(+)
 create mode 100644 app/test/test_event_dma_adapter.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 4ebbbe8bb3..92c0b47618 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -544,6 +544,7 @@ Eventdev DMA Adapter API
 M: Amit Prakash Shukla 
 T: git://dpdk.org/next/dpdk-next-eventdev
 F: lib/eventdev/*dma_adapter*
+F: app/test/test_event_dma_adapter.c
 F: doc/guides/prog_guide/event_dma_adapter.rst
 
 Raw device API
diff --git a/app/test/meson.build b/app/test/meson.build
index 05bae9216d..7caf5ae5fc 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -66,6 +66,7 @@ source_file_deps = {
 'test_errno.c': [],
 'test_ethdev_link.c': ['ethdev'],
 'test_event_crypto_adapter.c': ['cryptodev', 'eventdev', 'bus_vdev'],
+'test_event_dma_adapter.c': ['dmadev', 'eventdev', 'bus_vdev'],
 'test_event_eth_rx_adapter.c': ['ethdev', 'eventdev', 'bus_vdev'],
 'test_event_eth_tx_adapter.c': ['bus_vdev', 'ethdev', 'net_ring', 
'eventdev'],
 'test_event_ring.c': ['eventdev'],
diff --git a/app/test/test_event_dma_adapter.c 
b/app/test/test_event_dma_adapter.c
new file mode 100644
index 00..1e193f4b52
--- /dev/null
+++ b/app/test/test_event_dma_adapter.c
@@ -0,0 +1,805 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2023 Marvell.
+ */
+
+#include "test.h"
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#ifdef RTE_EXEC_ENV_WINDOWS
+static int
+test_event_dma_adapter(void)
+{
+   printf("event_dma_adapter not supported on Windows, skipping test\n");
+   return TEST_SKIPPED;
+}
+
+#else
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#define NUM_MBUFS (8191)
+#define MBUF_CACHE_SIZE   (256)
+#define TEST_APP_PORT_ID   0
+#define TEST_APP_EV_QUEUE_ID   0
+#define TEST_APP_EV_PRIORITY   0
+#define TEST_APP_EV_FLOWID 0xAABB
+#define TEST_DMA_EV_QUEUE_ID   1
+#define TEST_ADAPTER_ID0
+#define TEST_DMA_DEV_ID0
+#define TEST_DMA_VCHAN_ID  0
+#define PACKET_LENGTH  1024
+#define NB_TEST_PORTS  1
+#define NB_TEST_QUEUES 2
+#define NUM_CORES  2
+#define DMA_OP_POOL_SIZE   128
+#define TEST_MAX_OP32
+#define TEST_RINGSIZE  512
+
+#define MBUF_SIZE  (RTE_PKTMBUF_HEADROOM + PACKET_LENGTH)
+
+/* Handle log statements in same manner as test macros */
+#define LOG_DBG(...)RTE_LOG(DEBUG, EAL, __VA_ARGS__)
+
+struct event_dma_adapter_test_params {
+   struct rte_mempool *src_mbuf_pool;
+   struct rte_mempool *dst_mbuf_pool;
+   struct rte_mempool *op_mpool;
+   uint8_t dma_event_port_id;
+   uint8_t internal_port_op_fwd;
+};
+
+struct rte_event dma_response_info = {
+   .queue_id = TEST_APP_EV_QUEUE_ID,
+   .sched_type = RTE_SCHED_TYPE_ATOMIC,
+   .flow_id = TEST_APP_EV_FLOWID,
+   .priority = TEST_APP_EV_PRIORITY
+};
+
+static struct event_dma_adapter_test_params params;
+static uint8_t dma_adapter_setup_done;
+static uint32_t slcore_id;
+static int evdev;
+
+static int
+send_recv_ev(struct rte_event *ev)
+{
+   struct rte_event recv_ev[TEST_MAX_OP];
+   uint16_t nb_enqueued = 0;
+   int i = 0;
+
+   if (params.internal_port_op_fwd) {
+   nb_enqueued = rte_event_dma_adapter_enqueue(evdev, 
TEST_APP_PORT_ID, ev,
+   TEST_MAX_OP);
+   } else {
+   while (nb_enqueued < TEST_MAX_OP) {
+   nb_enqueued += rte_event_enqueue_burst(evdev, 
TEST_APP_PORT_ID,
+  
&ev[nb_enqueued], TEST_MAX_OP -
+  nb_enqueued);
+   }
+   }
+
+   TEST_ASSERT_EQUAL(nb_enqueued, TEST_MAX_OP, "Failed to send event to 
dma adapter\n");
+
+   while (i < TEST_MAX_OP) {
+   if (rte_event_dequeue_burst(evdev, TEST_APP_PORT_ID, 
&recv_ev[i], 1, 0) != 1)
+   continue;
+   i++;
+   }
+
+   TEST_ASSERT_EQUAL(i, TEST_MAX_OP, "Test failed. Failed to dequeue 
events.\n");
+
+   return TEST_SUCCESS;
+}
+
+static int
+test_dma_adapter_stats(void)
+{
+   struct rte_event_dma_adapter_stats stats;
+
+   rte_event_dma_adapter_stats_get(TEST_ADAPTER_ID, &stats);
+   printf(" +--+\n");
+   printf(" + DMA adapt

RE: [PATCH v3 4/7] cryptodev: set private and public keys in EC session

2023-09-29 Thread Power, Ciara


Hi Gowrishankar,

> -Original Message-
> From: Gowrishankar Muthukrishnan 
> Sent: Thursday, September 28, 2023 6:09 PM
> To: dev@dpdk.org
> Cc: ano...@marvell.com; Akhil Goyal ; Fan Zhang
> ; Ji, Kai ; Kusztal, ArkadiuszX
> ; Power, Ciara ;
> Gowrishankar Muthukrishnan 
> Subject: [PATCH v3 4/7] cryptodev: set private and public keys in EC session
> 
> Set EC private and public keys into xform so that, it can be maintained per
> session.
> 
> Signed-off-by: Gowrishankar Muthukrishnan 
> Change-Id: Ib8251987c805bc304f819bf13f94f310f225a0e3

What is this Change-Id for?

> ---
>  app/test/test_cryptodev_asym.c   | 60 ++--
>  drivers/common/cnxk/roc_ae.h | 18 ++
>  drivers/common/cpt/cpt_mcode_defines.h   | 18 ++
>  drivers/common/cpt/cpt_ucode_asym.h  | 22 +++
>  drivers/crypto/cnxk/cnxk_ae.h| 37 
>  drivers/crypto/openssl/rte_openssl_pmd.c | 53 +
>  drivers/crypto/openssl/rte_openssl_pmd_ops.c | 35 
>  drivers/crypto/qat/qat_asym.c|  6 +-
>  examples/fips_validation/main.c  | 14 +++--
>  lib/cryptodev/rte_crypto_asym.h  | 18 ++
>  10 files changed, 158 insertions(+), 123 deletions(-)
> 


Acked-by: Ciara Power 



Re: [PATCH 11/11] net/ngbe: add YT PHY fiber mode autoneg 100M support

2023-09-29 Thread Ferruh Yigit
On 9/28/2023 10:47 AM, Jiawen Wu wrote:
> Support auto-neg 100M for YT PHY fiber mode.
> 
> Signed-off-by: Jiawen Wu 
> ---
>  doc/guides/rel_notes/release_23_11.rst |  4 ++
>  drivers/net/ngbe/base/ngbe_phy_yt.c| 66 +++---
>  2 files changed, 52 insertions(+), 18 deletions(-)
> 
> diff --git a/doc/guides/rel_notes/release_23_11.rst 
> b/doc/guides/rel_notes/release_23_11.rst
> index 9746809a66..578900f504 100644
> --- a/doc/guides/rel_notes/release_23_11.rst
> +++ b/doc/guides/rel_notes/release_23_11.rst
> @@ -41,6 +41,10 @@ DPDK Release 23.11
>  New Features
>  
>  
> +* **Updated Wangxun ngbe driver.**
> +
> +  * Added 100M and auto-neg support in YT PHY fiber mode.
> +
>

This was added into the section comment; moved to the proper location while merging.




Re: [PATCH 00/11] Wanguxn NICs fixes and supports

2023-09-29 Thread Ferruh Yigit
On 9/28/2023 10:47 AM, Jiawen Wu wrote:
> Fix some bugs in txgbe/ngbe driver, and support new feature in
> ngbe driver.
> 
> Jiawen Wu (11):
>   net/txgbe: add Tx queue maximum limit
>   net/txgbe: fix GRE tunnel packet checksum
>   net/ngbe: fix to set flow control
>   net/ngbe: prevent the NIC from slowing down link speed
>   net/txgbe: reconfigure MAC Rx when link update
>   net/ngbe: reconfigure MAC Rx when link update
>   net/txgbe: fix to keep link down after device close
>   net/ngbe: fix to keep link down after device close
>   net/txgbe: check process type in close operation
>   net/ngbe: check process type in close operation
>   net/ngbe: add YT PHY fiber mode autoneg 100M support
> 

Series applied to dpdk-next-net/main, thanks.



Re: [PATCH v2] net/tap: resolve stringop-overflow with gcc 12 on ppc64le

2023-09-29 Thread Ferruh Yigit
On 6/7/2023 7:47 PM, Ferruh Yigit wrote:
> On 5/16/2023 10:55 AM, Ferruh Yigit wrote:
>> On 5/16/2023 2:28 AM, Stephen Hemminger wrote:
>>> On Tue, 16 May 2023 00:35:56 +0100
>>> Ferruh Yigit  wrote:
>>>
 Yes, only some scripts and possibly applications that hotplug a tap
 interface with hardcoded parameters may be impacted; I don't know how
 large that set is, but this ends up breaking something that was working
 before upgrading DPDK for them.

 And I believe the motivation is weak to break the behavior.

 Won't it be better to update 'rte_ether_unformat_addr()' to accept more
 flexible syntax, and use it? Is there any disadvantage of this approach?
>>>
>>> It is already more flexible than the standard ether_aton().
>>
>> I mean to accept single chars, as 'tap' currently does, like "a:a:a:a:a:a".
>>
>> Agree that impact of tap change is small, but if we can eliminate it
>> completely without any side affect, why not?
>>
>>
>> As accepting single char will be expanding 'rte_ether_unformat_addr()'
>> capability, it will be backward compatible, am I missing anything?
>>
> 
> Hi David,
> 
> If API update is not planned, what do you think to just solve the build
> error without changing functionality with a change something like below:
> 
> ```
>  -   (strlen(mac_byte) == strspn(mac_byte,
>  -   ETH_TAP_CMP_MAC_FMT))) {
>  +   (strlen(mac_byte) == strspn(mac_byte, ETH_TAP_CMP_MAC_FMT)) &&
>  +   index < RTE_ETHER_ADDR_LEN) {
> 
> ```

Hi David,

If you can confirm above fixes the issue, I can send a patch for it.


[PATCH] doc: add cnxk dmadev performance tuning details

2023-09-29 Thread Amit Prakash Shukla
Updated the cnxk DMA driver document to explain the performance
tuning parameters of the kernel module.

Signed-off-by: Amit Prakash Shukla 
---
 doc/guides/dmadevs/cnxk.rst | 30 ++
 1 file changed, 30 insertions(+)

diff --git a/doc/guides/dmadevs/cnxk.rst b/doc/guides/dmadevs/cnxk.rst
index 418b9a9d63..8d841b1f12 100644
--- a/doc/guides/dmadevs/cnxk.rst
+++ b/doc/guides/dmadevs/cnxk.rst
@@ -56,3 +56,33 @@ Performing Data Copies
 Refer to the :ref:`Enqueue / Dequeue APIs ` section
 of the dmadev library documentation
 for details on operation enqueue and submission API usage.
+
+Performance Tuning Parameters
+~
+
+To achieve higher performance, DMA device needs to be tuned using PF kernel 
driver
+module params. The PF kernel driver is part of the octeon sdk. Module params 
shall be
+configured during module insert as in below example::
+
+$ sudo insmod octeontx2_dpi.ko mps=128 mrrs=128 eng_fifo_buf=0x101008080808
+
+* ``mps``
+  Maximum payload size. MPS size shall not exceed the size selected by PCI 
config.
+  Max size that shall be configured can be found on executing ``lspci`` command
+  for the device.
+
+* ``mrrs``
+  Maximum read request size. MRRS size shall not exceed the size selected by 
PCI
+  config. Max size that shall be configured can be found on executing ``lspci``
+  command for the device.
+
+* ``eng_fifo_buf``
+  CNXK supports 6 DMA engines and each engine has an associated FIFO. 
By-default
+  all engine's FIFO is configured to 8 KB. Engine FIFO size can be tuned using 
this
+  64 bit variable, where each byte represents an engine. In the example above 
engine
+  0-3 FIFO are configure as 8 KB and engine 4-5 are configured as 16 KB.
+
+.. note::
+MPS and MRRS performance tuning parameters helps achieve higher performance
+only for Inbound and Outbound DMA transfers. The parameter has no effect 
for
+Internal only DMA transfer.
-- 
2.25.1



Re: [PATCH] net/dpaa2: change threshold value

2023-09-29 Thread Ferruh Yigit
On 6/9/2023 3:20 PM, Ferruh Yigit wrote:
> On 5/15/2023 9:16 AM, Sachin Saxena (OSS) wrote:
>> On 5/8/2023 4:11 PM, Tianli Lai wrote:
>>> Caution: This is an external email. Please take care when clicking
>>> links or opening attachments. When in doubt, report the message using
>>> the 'Report this email' button
>>>
>>>
>>> this threshold value can be changed with function argument nb_rx_desc.
>>>
>>> Signed-off-by: Tianli Lai 
>>> ---
>>>   drivers/net/dpaa2/dpaa2_ethdev.c | 2 +-
>>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c
>>> b/drivers/net/dpaa2/dpaa2_ethdev.c
>>> index 679f33ae1a..ad892ded4a 100644
>>> --- a/drivers/net/dpaa2/dpaa2_ethdev.c
>>> +++ b/drivers/net/dpaa2/dpaa2_ethdev.c
>>> @@ -829,7 +829,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
>>>  dpaa2_q->cgid,
>>> &taildrop);
>>>  } else {
>>>  /*enabling per rx queue congestion control */
>>> -   taildrop.threshold = CONG_THRESHOLD_RX_BYTES_Q;
>>> +   taildrop.threshold = nb_rx_desc * 1024;
>>>  taildrop.units = DPNI_CONGESTION_UNIT_BYTES;
>>>  taildrop.oal = CONG_RX_OAL;
>>>  DPAA2_PMD_DEBUG("Enabling Byte based Drop on
>>> queue= %d",
>>> -- 
>>> 2.27.0
>>>
>> Hi Tianli,
>>
>> The byte-based tail drop threshold value
>> "CONG_THRESHOLD_RX_BYTES_Q" is an optimized value for the dpaa2
>> platform; we concluded this value after multiple benchmark experiments
>> in the past. The frame-based threshold value, however, is based on
>> "nb_rx_desc" only. We will further review this suggestion and get back.
>>
> 
> Hi Sachin, What is the status of this patch?
> 

Ping


Re: [PATCH] net/sfc: support packet replay in transfer flows

2023-09-29 Thread Ferruh Yigit
On 9/27/2023 11:36 AM, Ivan Malov wrote:
> Packet replay enables users to leverage multiple counters in
> one flow and allows requesting delivery to multiple ports.
> 
> A given flow rule may use either one inline count action
> and multiple indirect counters or just multiple indirect
> counters. The inline count action (if any) must come
> before the first delivery action or before the first
> indirect count action, whichever comes earlier.
> 
> These are some testpmd examples of supported
> multi-count and mirroring use cases:
> 
> flow create 0 transfer pattern represented_port ethdev_port_id is 0 / end \
>  actions port_representor port_id 0 / port_representor port_id 1 / end
> 
> or
> 
> flow indirect_action 0 create action_id 239 transfer action count / end
> 
> flow create 0 transfer pattern represented_port ethdev_port_id is 0 / end \
>  actions count / port_representor port_id 0 / indirect 239 / \
>  port_representor port_id 1 / end
> 
> or
> 
> flow indirect_action 0 create action_id 239 transfer action count / end
> 
> flow create 0 transfer pattern represented_port ethdev_port_id is 0 / end \
>  actions indirect 239 / port_representor port_id 0 / indirect 239 / \
>  port_representor port_id 1 / end
> 
> and the likes.
> 
> Signed-off-by: Ivan Malov 
> Reviewed-by: Andy Moreton 
> 

Hi Andrew, Reminder of this patch waiting for review.


Re: [PATCH v6 0/6] rte atomics API for optional stdatomic

2023-09-29 Thread David Marchand
Hello,

On Tue, Aug 22, 2023 at 11:00 PM Tyler Retzlaff
 wrote:
>
> This series introduces API additions prefixed in the rte namespace that allow
> the optional use of stdatomics.h from C11 using enable_stdatomics=true for
> targets where enable_stdatomics=false no functional change is intended.
>
> Be aware this does not contain all changes to use stdatomics across the DPDK
> tree it only introduces the minimum to allow the option to be used which is
> a pre-requisite for a clean CI (probably using clang) that can be run
> with enable_stdatomics=true enabled.
>
> It is planned that subsequent series will be introduced per lib/driver as
> appropriate to further enable stdatomics use when enable_stdatomics=true.
>
> Notes:
>
> * Additional libraries beyond EAL make visible atomics use across the
>   API/ABI surface they will be converted in the subsequent series.
>
> * The eal: add rte atomic qualifier with casts patch needs some discussion
>   as to whether or not the legacy rte_atomic APIs should be converted to
>   work with enable_stdatomic=true right now some implementation dependent
>   casts are used to prevent cascading / having to convert too much in
>   the intial series.
>
> * Windows will obviously need complete conversion of libraries including
>   atomics that are not crossing API/ABI boundaries. those conversions will
>   introduced in separate series as new along side the existing msvc series.
>
> Please keep in mind we would like to prioritize the review / acceptance of
> this patch since it needs to be completed in the 23.11 merge window.
>
> Thank you all for the discussion that lead to the formation of this series.

I did a number of updates on this v6:
- moved rte_stdatomic.h from patch 1 to later patches where needed,
- added a RN entry,
- tried to consistently/adjusted indent,
- fixed mentions of stdatomic*s* to simple atomic, like in the build
option name,
- removed unneded comments (Thomas review on patch 1),

Series applied, thanks Tyler.


Two things are missing:
- add doxygen tags in the new API (this can be fixed later in this
release, can you look at it?),
- add compilation tests for enable_stdatomic (I'll send a patch soon
for devtools and GHA),


-- 
David Marchand



[PATCH] ci: test stdatomic API

2023-09-29 Thread David Marchand
Add some compilation tests with C11 atomics enabled.
The headers check can't be enabled (as gcc and clang don't provide
stdatomic before C++23).

Signed-off-by: David Marchand 
---
 .ci/linux-build.sh| 6 +-
 .github/workflows/build.yml   | 8 
 devtools/test-meson-builds.sh | 3 +++
 3 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/.ci/linux-build.sh b/.ci/linux-build.sh
index e0b62bac90..b09df07b55 100755
--- a/.ci/linux-build.sh
+++ b/.ci/linux-build.sh
@@ -92,7 +92,11 @@ fi
 OPTS="$OPTS -Dplatform=generic"
 OPTS="$OPTS -Ddefault_library=$DEF_LIB"
 OPTS="$OPTS -Dbuildtype=$buildtype"
-OPTS="$OPTS -Dcheck_includes=true"
+if [ "$STDATOMIC" = "true" ]; then
+   OPTS="$OPTS -Denable_stdatomic=true"
+else
+   OPTS="$OPTS -Dcheck_includes=true"
+fi
 if [ "$MINI" = "true" ]; then
 OPTS="$OPTS -Denable_drivers=net/null"
 OPTS="$OPTS -Ddisable_libs=*"
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index 7a2ac0ceee..14328622fb 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -30,6 +30,7 @@ jobs:
   REF_GIT_TAG: none
   RISCV64: ${{ matrix.config.cross == 'riscv64' }}
   RUN_TESTS: ${{ contains(matrix.config.checks, 'tests') }}
+  STDATOMIC: ${{ contains(matrix.config.checks, 'stdatomic') }}
 
 strategy:
   fail-fast: false
@@ -38,6 +39,12 @@ jobs:
   - os: ubuntu-20.04
 compiler: gcc
 mini: mini
+  - os: ubuntu-20.04
+compiler: gcc
+checks: stdatomic
+  - os: ubuntu-20.04
+compiler: clang
+checks: stdatomic
   - os: ubuntu-20.04
 compiler: gcc
 checks: debug+doc+examples+tests
@@ -241,6 +248,7 @@ jobs:
 > ~/env
 echo CC=ccache ${{ matrix.config.compiler }} >> ~/env
 echo DEF_LIB=${{ matrix.config.library }} >> ~/env
+echo STDATOMIC=false >> ~/env
 - name: Load the cached image
   run: |
 docker load -i ~/.image/${{ matrix.config.image }}.tar
diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
index c41659d28b..ca32e3d5a5 100755
--- a/devtools/test-meson-builds.sh
+++ b/devtools/test-meson-builds.sh
@@ -239,6 +239,9 @@ done
 build build-mini cc skipABI $use_shared -Ddisable_libs=* \
-Denable_drivers=net/null
 
+build build-gcc-shared-stdatomic gcc skipABI -Denable_stdatomic=true 
$use_shared
+build build-clang-shared-stdatomic clang skipABI -Denable_stdatomic=true 
$use_shared
+
 # test compilation with minimal x86 instruction set
 # Set the install path for libraries to "lib" explicitly to prevent problems
 # with pkg-config prefixes if installed in "lib/x86_64-linux-gnu" later.
-- 
2.41.0



Re: [PATCH 0/2] Add eventdev tests to test suites

2023-09-29 Thread David Marchand
On Thu, Sep 28, 2023 at 5:14 PM Bruce Richardson
 wrote:
>
> The eventdev library includes a selftest API which can be used by
> drivers for testing. Add the relevant automated self-test commands
> into meson test suites as appropriate.
>
> Bruce Richardson (2):
>   event/sw: add self tests to fast tests
>   event/*: add driver selftests to driver-tests suite
>
>  app/test/test_eventdev.c | 10 +-
>  drivers/event/sw/sw_evdev_selftest.c |  2 +-
>  2 files changed, 6 insertions(+), 6 deletions(-)
>

On the principle, the series lgtm.
Acked-by: David Marchand 


-- 
David Marchand



Re: [PATCH] eal/linux: prevent deadlocks on rte init and cleanup

2023-09-29 Thread David Marchand
Hello Jonathan,

On Thu, Jul 20, 2023 at 7:19 AM Jonathan Erb
 wrote:
>
> Resolves a deadlock that can occur when multiple secondary
> processes are starting and stopping. A deadlock can occur because
> eal_memalloc_init() is taking a second unnecessary read lock.
> If another DPDK process that is terminating enters rte_eal_memory_detach()
> and acquires a write lock wait state before the starting process can
> acquire it's second read lock then no process will be able to proceed.
>
> Cc: sta...@dpdk.org
>
> Signed-off-by: Jonathan Erb 

Thanks for the report and fix and sorry for the late reply.

Artemy came with a similar report and a more complete fix.
Could you confirm it works for you?

https://patchwork.dpdk.org/project/dpdk/list/?series=29463&state=%2A&archive=both


Thanks,

-- 
David Marchand



Re: [PATCH v1] net/memif: fix segfault with large burst size

2023-09-29 Thread Ferruh Yigit
On 9/4/2023 8:10 AM, Joyce Kong wrote:
> There will be a segfault when the Rx burst size is greater than
> memif's MAX_PKT_BURST. Fix the issue by correcting the wrong mbuf
> index in eth_memif_rx, which results in accessing an invalid
> memory address.
> 
> Bugzilla ID: 1273
> Fixes: aa17df860891 ("net/memif: add a Rx fast path")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Joyce Kong 
> Reviewed-by: Feifei Wang 
> Reviewed-by: Ruifeng Wang 
> 

Hi Joyce, good catch.

Reviewed-by: Ferruh Yigit 

Applied to dpdk-next-net/main, thanks.



For the record: if nb_pkts > MAX_PKT_BURST, the memif buffer is consumed
in chunks of MAX_PKT_BURST mbufs, and consumption of the next chunk
starts with the 'goto next_bulk' call.

For each chunk, MAX_PKT_BURST mbufs are allocated and filled. They are
accessed by the 'n_rx_pkts' index, but 'n_rx_pkts' is the overall count
of received mbufs, so it shouldn't be used as the index within a chunk;
'rx_pkts', which is reset at the beginning of each chunk, should be used
instead.

For the first chunk, using 'n_rx_pkts' or 'rx_pkts' is the same, which
explains how the issue survived until now: as the commit log mentions,
the issue can only be observed when nb_pkts > MAX_PKT_BURST.



Re: [PATCH] net/virtio: fix descriptors buffer addresses on 32 bits builds

2023-09-29 Thread Dave Johnson (davejo)
Hi Maxime,
I backported the patch to v22.11.2 and it worked for us with both the testpmd
app and our 32-bit DPDK (virtio-pci) application.  The change to set the
mbuf_addr_mask was moved under virtio_init_queue() in v22.11.2 (see below).

I’m in the process of updating the application to v23.07 and will test there as 
well.  Thank you for looking into this and providing the patch.
Regards, Dave

diff --git a/drivers/net/virtio/virtio_ethdev.c 
b/drivers/net/virtio/virtio_ethdev.c
index b72334455e..bd90ba9d49 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -565,10 +565,13 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t 
queue_idx)
memset(mz->addr, 0, mz->len);
-   if (hw->use_va)
+   if (hw->use_va) {
vq->vq_ring_mem = (uintptr_t)mz->addr;
-   else
+   vq->mbuf_addr_mask = UINTPTR_MAX;
+   } else {
vq->vq_ring_mem = mz->iova;
+   vq->mbuf_addr_mask = UINT64_MAX;
+   }
vq->vq_ring_virt_mem = mz->addr;
PMD_INIT_LOG(DEBUG, "vq->vq_ring_mem: 0x%" PRIx64, vq->vq_ring_mem);

From: Maxime Coquelin 
Date: Wednesday, September 20, 2023 at 9:02 AM
To: dev@dpdk.org , Roger Melton (rmelton) , 
Dave Johnson (davejo) , Sampath Peechu (speechu) 
, chenbo@outlook.com , Malcolm 
Bumgardner (mbumgard) , Chris Brezovec (cbrezove) 
, david.march...@redhat.com 
Cc: Maxime Coquelin , sta...@dpdk.org 

Subject: [PATCH] net/virtio: fix descriptors buffer addresses on 32 bits builds
With Virtio-user, the Virtio descriptor buffer address is the
virtual address of the mbuf's buffer. On 32 bits builds, it is
expected to be 32 bits.

With Virtio-PCI, the Virtio descriptor buffer address is the
physical address of the mbuf's buffer. On 32 bits builds running
on 64 bits kernel, it is expected to be up to 64 bits.

This patch introduces a new mask field in virtqueue's struct to
filter out the upper 4 bytes of the address only when necessary.
An optimization is introduced for 64 bits builds to remove the
masking, as the address is always 64 bits wide.

Fixes: ba55c94a7ebc ("net/virtio: revert forcing IOVA as VA mode for 
virtio-user")
Cc: sta...@dpdk.org

Reported-by: Sampath Peechu 
Signed-off-by: Maxime Coquelin 
---
 drivers/net/virtio/virtqueue.c |  2 ++
 drivers/net/virtio/virtqueue.h | 18 ++
 2 files changed, 16 insertions(+), 4 deletions(-)

diff --git a/drivers/net/virtio/virtqueue.c b/drivers/net/virtio/virtqueue.c
index 1d836f2530..6f419665f1 100644
--- a/drivers/net/virtio/virtqueue.c
+++ b/drivers/net/virtio/virtqueue.c
@@ -469,9 +469,11 @@ virtqueue_alloc(struct virtio_hw *hw, uint16_t index, 
uint16_t num, int type,
 if (hw->use_va) {
 vq->vq_ring_mem = (uintptr_t)mz->addr;
 vq->mbuf_addr_offset = offsetof(struct rte_mbuf, buf_addr);
+   vq->mbuf_addr_mask = UINTPTR_MAX;
 } else {
 vq->vq_ring_mem = mz->iova;
 vq->mbuf_addr_offset = offsetof(struct rte_mbuf, buf_iova);
+   vq->mbuf_addr_mask = UINT64_MAX;
 }

 PMD_INIT_LOG(DEBUG, "vq->vq_ring_mem: 0x%" PRIx64, vq->vq_ring_mem);
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 9d4aba11a3..c1cb941c43 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -114,17 +114,26 @@ virtqueue_store_flags_packed(struct vring_packed_desc *dp,

 #define VIRTQUEUE_MAX_NAME_SZ 32

+#ifdef RTE_ARCH_32
+#define VIRTIO_MBUF_ADDR_MASK(vq) ((vq)->mbuf_addr_mask)
+#else
+#define VIRTIO_MBUF_ADDR_MASK(vq) UINT64_MAX
+#endif
+
 /**
  * Return the IOVA (or virtual address in case of virtio-user) of mbuf
  * data buffer.
  *
  * The address is firstly casted to the word size (sizeof(uintptr_t))
- * before casting it to uint64_t. This is to make it work with different
- * combination of word size (64 bit and 32 bit) and virtio device
- * (virtio-pci and virtio-user).
+ * before casting it to uint64_t. It is then masked with the expected
+ * address length (64 bits for virtio-pci, word size for virtio-user).
+ *
+ * This is to make it work with different combination of word size (64
+ * bit and 32 bit) and virtio device (virtio-pci and virtio-user).
  */
 #define VIRTIO_MBUF_ADDR(mb, vq) \
-   ((uint64_t)(*(uintptr_t *)((uintptr_t)(mb) + (vq)->mbuf_addr_offset)))
+   ((*(uint64_t *)((uintptr_t)(mb) + (vq)->mbuf_addr_offset)) & \
+   VIRTIO_MBUF_ADDR_MASK(vq))

 /**
  * Return the physical address (or virtual address in case of
@@ -194,6 +203,7 @@ struct virtqueue {
 void *vq_ring_virt_mem;  /**< linear address of vring*/
 unsigned int vq_ring_size;
 uint16_t mbuf_addr_offset;
+   uint64_t mbuf_addr_mask;

 union {
 struct virtnet_rx rxq;
--
2.41.0


Re: [PATCH 1/3] vhost: fix build for powerpc

2023-09-29 Thread Maxime Coquelin




On 8/31/23 14:10, Bruce Richardson wrote:

When building on Ubuntu using the packaged powerpc compiler[1], a
warning is issued about the print format of the __u64 values.

../../lib/vhost/vduse.c: In function ‘vduse_vring_setup’:
../../lib/vhost/vhost.h:676:17: error: format ‘%llx’ expects argument of
type ‘long long unsigned int’, but argument 5 has type ‘__u64’ {aka
‘long unsigned int’} [-Werror=format=]
   676 | "VHOST_CONFIG: (%s) " fmt, prefix, ##args)
   | ^

Changing the format specifier to %lx, or using PRIx64, breaks other
builds, so the safest solution is to explicitly typecast the printed
values to match the format string.

[1] powerpc64le-linux-gnu-gcc (Ubuntu 12.3.0-1ubuntu1~23.04) 12.3.0

Fixes: a9120db8b98b ("vhost: add VDUSE device startup")
Cc: maxime.coque...@redhat.com
Cc: sta...@dpdk.org

Signed-off-by: Bruce Richardson 
---
  lib/vhost/vduse.c | 9 ++---
  1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/lib/vhost/vduse.c b/lib/vhost/vduse.c
index 73ed424232..e2b6d35d37 100644
--- a/lib/vhost/vduse.c
+++ b/lib/vhost/vduse.c
@@ -162,9 +162,12 @@ vduse_vring_setup(struct virtio_net *dev, unsigned int 
index)
  
  	VHOST_LOG_CONFIG(dev->ifname, INFO, "VQ %u info:\n", index);

VHOST_LOG_CONFIG(dev->ifname, INFO, "\tnum: %u\n", vq_info.num);
-   VHOST_LOG_CONFIG(dev->ifname, INFO, "\tdesc_addr: %llx\n", 
vq_info.desc_addr);
-   VHOST_LOG_CONFIG(dev->ifname, INFO, "\tdriver_addr: %llx\n", 
vq_info.driver_addr);
-   VHOST_LOG_CONFIG(dev->ifname, INFO, "\tdevice_addr: %llx\n", 
vq_info.device_addr);
+   VHOST_LOG_CONFIG(dev->ifname, INFO, "\tdesc_addr: %llx\n",
+   (unsigned long long)vq_info.desc_addr);
+   VHOST_LOG_CONFIG(dev->ifname, INFO, "\tdriver_addr: %llx\n",
+   (unsigned long long)vq_info.driver_addr);
+   VHOST_LOG_CONFIG(dev->ifname, INFO, "\tdevice_addr: %llx\n",
+   (unsigned long long)vq_info.device_addr);
VHOST_LOG_CONFIG(dev->ifname, INFO, "\tavail_idx: %u\n", 
vq_info.split.avail_index);
VHOST_LOG_CONFIG(dev->ifname, INFO, "\tready: %u\n", vq_info.ready);
  


It is surprising PRIx64 does not work on other architectures.
I don't see a better solution, so:

Acked-by: Maxime Coquelin 

Thanks,
Maxime



Re: [PATCH] gpu: add support for rtx 6000 variant

2023-09-29 Thread David Marchand
Hello,

On Fri, Apr 29, 2022 at 6:53 PM Cliff Burdick  wrote:
>
> Added second GPU PCI device ID for RTX 6000
>
> Signed-off-by: Cliff Burdick 
> ---
> drivers/gpu/cuda/cuda.c| 6 +-
> drivers/gpu/cuda/devices.h | 3 ++-
> 2 files changed, 7 insertions(+), 2 deletions(-)

Is this patch still wanted?
It seems it fell through the cracks.

Cc: maintainers.


-- 
David Marchand



Re: [PATCH] gpu/cuda: Add missing stdlib include

2023-09-29 Thread David Marchand
On Tue, Sep 26, 2023 at 8:24 PM Aaron Conole  wrote:
>
>
> From: John Romein 
>
> getenv needs stdlib.h to be included.
>
> Bugzilla ID: 1133
>
> Fixes: 24c77594e08f ("gpu/cuda: map GPU memory with GDRCopy")
> Signed-off-by: John Romein 

Thanks for the patch, it seems to be a duplicate of the prior patch
sent by Levend:
https://patchwork.dpdk.org/project/dpdk/patch/20230803162512.41396-1-levendsa...@gmail.com/


-- 
David Marchand



Re: [PATCH] gpu/cuda: fix getenv related build error

2023-09-29 Thread David Marchand
On Thu, Aug 3, 2023 at 6:25 PM Levend Sayar  wrote:
>
> If gdrapi.h is available, meson sets DRIVERS_GPU_CUDA_GDRCOPY_H as 1.
> This causes the gdrcopy.c build to give an error,
> because the compiler cannot find the signature of getenv.
> stdlib.h is included for the definition of getenv function.
>

There was a bug report for this issue:
Bugzilla ID: 1133

> Fixes: ca12f5e8a7db ("gpu/cuda: mark unused GDRCopy functions parameters")

It is probably worth backporting:
Cc: sta...@dpdk.org

>
> Signed-off-by: Levend Sayar 

Elena, this is a quick one, review please.


-- 
David Marchand



Re: [PATCH v3] bus/cdx: provide driver flag for optional resource mapping

2023-09-29 Thread David Marchand
On Tue, Jul 11, 2023 at 7:52 AM Abhijit Gangurde
 wrote:
> @@ -383,10 +384,12 @@ cdx_probe_one_driver(struct rte_cdx_driver *dr,
> CDX_BUS_DEBUG("  probe device %s using driver: %s", dev_name,
> dr->driver.name);
>
> -   ret = cdx_vfio_map_resource(dev);
> -   if (ret != 0) {
> -   CDX_BUS_ERR("CDX map device failed: %d", ret);
> -   goto error_map_device;
> +   if (dr->drv_flags & RTE_CDX_DRV_NEED_MAPPING) {
> +   ret = cdx_vfio_map_resource(dev);
> +   if (ret != 0) {
> +   CDX_BUS_ERR("CDX map device failed: %d", ret);
> +   goto error_map_device;
> +   }
> }
>
> /* call the driver probe() function */
> diff --git a/drivers/bus/cdx/rte_bus_cdx.h b/drivers/bus/cdx/rte_bus_cdx.h
> new file mode 100644
> index 00..4ca12f90c4
> --- /dev/null
> +++ b/drivers/bus/cdx/rte_bus_cdx.h
> @@ -0,0 +1,52 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright (C) 2023, Advanced Micro Devices, Inc.
> + */
> +
> +#ifndef RTE_BUS_CDX_H
> +#define RTE_BUS_CDX_H
> +
> +/**
> + * @file
> + * CDX device & driver interface
> + */
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +/* Forward declarations */
> +struct rte_cdx_device;
> +
> +/**
> + * Map the CDX device resources in user space virtual memory address.
> + *
> + * Note that driver should not call this function when flag
> + * RTE_CDX_DRV_NEED_MAPPING is set, as EAL will do that for
> + * you when it's on.

Why should we export this function in the application ABI, if it is
only used by drivers?


> + *
> + * @param dev
> + *   A pointer to a rte_cdx_device structure describing the device
> + *   to use.
> + *
> + * @return
> + *   0 on success, negative on error and positive if no driver
> + *   is found for the device.
> + */
> +__rte_experimental
> +int rte_cdx_map_device(struct rte_cdx_device *dev);
>


-- 
David Marchand



DPDK Release Status Meeting 2023-09-28

2023-09-29 Thread Mcnamara, John
Release status meeting minutes 2023-09-28
=

Agenda:
* Release Dates
* Subtrees
* Roadmaps
* LTS
* Defects
* Opens

Participants:
* AMD
* ARM [No]
* Debian/Microsoft [No]
* Intel
* Marvell
* Nvidia [No]
* Red Hat

Release Dates
-

The following are the proposed working dates for 23.11:

* V1:  12 August 2023
* RC1: 11 October 2023 - was 29 September 2023
* RC2: 20 October 2023
* RC3: 27 October 2023
* Release: 15 November 2023


Subtrees


* next-net
  * Some patches merged
  * Focusing on eth_dev patches for RC1
  * Around 50% of remaining patches are rte_flow
* needs some reviews

* next-net-intel
  * Some patches waiting for merge
  * Tree will be ready for RC1 by EOD 2023-09-28

* next-net-mlx
  * No update

* next-net-mvl
  * All patches merged.
  * Will send PR 2023-09-29

* next-eventdev
  * Waiting for some fixes for cnxk
  * Will send PR 2023-09-29

* next-baseband
  * Reviewing patches for VRB2 series.

* next-virtio
  * Working on improvements for VDUSE.
  * Other patches under review.

* next-crypto
  * 3 series to complete before PR
  * 2-3 new features from Marvell
  * RX inject patchset for review.
  * SSL/TLS patches for review.
  * SM2 patches need ack

* main
  * Working on Control Threads
  * PCI API cleanup merged. Subtrees take note.
  * Patch for Atomics
* This needs review since it is an important API that will touch a lot of 
components later.
* https://patchwork.dpdk.org/project/dpdk/list/?series=29318
  * Preparing for LTS release.
  * Removing deprecated libraries.

  * DPDK Summit videos are out:
https://www.youtube.com/playlist?list=PLo97Rhbj4ceJf9p-crjGvGvn8pMWrJ_cV


Proposed Schedule for 2023
--

See also http://core.dpdk.org/roadmap/#dates

23.11
  * Proposal deadline (RFC/v1 patches): 12 August 2023
  * API freeze (-rc1): 11 October 2023
  * PMD features freeze (-rc2): 20 October 2023
  * Builtin applications features freeze (-rc3): 27 October 2023
  * Release: 15 November 2023


LTS
---

Backports ongoing. Awaiting test results.

Next LTS releases:

* 22.11.2 - Released
* 21.11.5 - Released
* 20.11.9 - Released
* 19.11.15
  * Will only be updated with CVE and critical fixes.


* Distros
  * v22.11 in Debian 12
  * Ubuntu 22.04-LTS contains 21.11
  * Ubuntu 23.04 contains 22.11

Defects
---

* Bugzilla links, 'Bugs',  added for hosted projects
  * https://www.dpdk.org/hosted-projects/



DPDK Release Status Meetings


The DPDK Release Status Meeting is intended for DPDK Committers to discuss the
status of the master tree and sub-trees, and for project managers to track
progress or milestone dates.

The meeting occurs on every Thursday at 9:30 UTC over Jitsi on 
https://meet.jit.si/DPDK

You don't need an invite to join the meeting but if you want a calendar 
reminder just
send an email to "John McNamara john.mcnam...@intel.com" for the invite.




Re: [PATCH v2 0/2] update rsu implementation

2023-09-29 Thread David Marchand
Hello,

On Fri, Jun 10, 2022 at 4:17 AM Wei Huang  wrote:
>
> The first patch introduce PMCI driver to provide interface to access
> PMCI functions which include flash controller.
> The second patch update RSU (Remote System Update) implementation
> to adapt with PMCI controller.

Is this series still relevant?
If so, it needs some rebasing.

Thanks.


-- 
David Marchand



Re: [PATCH] hash: fix SSE comparison

2023-09-29 Thread David Marchand
On Wed, Sep 6, 2023 at 4:31 AM Jieqiang Wang  wrote:
>
> __mm_cmpeq_epi16 returns 0x if the corresponding 16-bit elements are
> equal. In original SSE2 implementation for function compare_signatures,
> it utilizes _mm_movemask_epi8 to create mask from the MSB of each 8-bit
> element, while we should only care about the MSB of lower 8-bit in each
> 16-bit element.
> For example, if the comparison result is all equal, SSE2 path returns
> 0x while NEON and default scalar path return 0x.
> Although this bug is not causing any negative effects since the caller
> function solely examines the trailing zeros of each match mask, we
> recommend this fix to ensure consistency with NEON and default scalar
> code behaviors.
>
> Fixes: c7d93df552c2 ("hash: use partial-key hashing")
> Cc: yipeng1.w...@intel.com
> Cc: sta...@dpdk.org
>
> Signed-off-by: Feifei Wang 
> Signed-off-by: Jieqiang Wang 
> Reviewed-by: Ruifeng Wang 

A review from this library maintainers please?


-- 
David Marchand



Re: [PATCH 1/1] hash: add SVE support for bulk key lookup

2023-09-29 Thread David Marchand
On Thu, Aug 17, 2023 at 11:24 PM Harjot Singh  wrote:
>
> From: Harjot Singh 
>
> - Implemented Vector Length Agnostic SVE code for comparing signatures
> in bulk lookup.
> - Added Defines in code for SVE code support.
> - New optimised SVE code is 1-2 CPU cycles slower than NEON on the N2
> processor.
>
> Performance Numbers from hash_perf_autotest :
>
> Elements in Primary or Secondary Location
>
> Results (in CPU cycles/operation)
> ---
>  Operations without data
>
> Without pre-computed hash values
>
> Keysize Add/Lookup/Lookup_bulk
> Neon SVE
> 4   93/71/26 93/71/27
> 8   93/70/26 93/70/27
> 9   94/74/27 94/74/28
> 13  100/80/31100/79/32
> 16  100/78/30100/78/31
> 32  109/110/38   108/110/39
>
> With pre-computed hash values
>
> Keysize Add/Lookup/Lookup_bulk
> Neon SVE
> 4   83/58/27 83/58/29
> 8   83/57/27 83/57/28
> 9   83/60/28 83/60/29
> 13  84/60/28 83/60/29
> 16  83/58/27 83/58/29
> 32  84/68/31 84/68/32
>
> Signed-off-by: Harjot Singh 
> Reviewed-by: Nathan Brown 
> Reviewed-by: Feifei Wang 
> Reviewed-by: Jieqiang Wang 
> Reviewed-by: Honnappa Nagarahalli 

Thanks for the patch, please update the release notes.


-- 
David Marchand



Re: [PATCH v9] hash: add XOR32 hash function

2023-09-29 Thread David Marchand
On Tue, Jul 11, 2023 at 12:00 AM Bili Dong  wrote:
>
> An XOR32 hash is needed in the Software Switch (SWX) Pipeline for its
> use case in P4. We implement it in this patch so it could be easily
> registered in the pipeline later.
>
> Signed-off-by: Bili Dong 

Review, please.


-- 
David Marchand



[PATCH v3 1/2] security: add fallback security processing and Rx inject

2023-09-29 Thread Anoob Joseph
Add alternate datapath API for security processing which would do Rx
injection (similar to loopback) after successful security processing.

With inline protocol offload, variable part of the session context
(AR windows, lifetime etc in case of IPsec), is not accessible to the
application. If packets are not getting processed in the inline path
due to non security reasons (such as outer fragmentation or rte_flow
packet steering limitations), then the packet cannot be security
processed as the session context is private to the PMD and security
library doesn't provide alternate APIs to make use of the same session.

Introduce new API and Rx injection as fallback mechanism to security
processing failures due to non-security reasons. For example, when there
is outer fragmentation and PMD doesn't support reassembly of outer
fragments, application would receive fragments which it can then
reassemble. Post successful reassembly, packet can be submitted for
security processing and Rx inject. The packets can be then received in
the application as normal inline protocol processed packets.

Same API can be leveraged in lookaside protocol offload mode to inject
packet to Rx. This would help in using rte_flow based packet parsing
after security processing. For example, with IPsec, this will help in
flow splitting after IPsec processing is done.

In both inline protocol capable ethdevs and lookaside protocol capable
cryptodevs, the packet would be received back in eth port & queue based
on rte_flow rules and packet parsing after security processing. The API
would behave like a loopback but with the additional security
processing.

Signed-off-by: Anoob Joseph 
Signed-off-by: Vidya Sagar Velumuri 
---
v3:
* Resolved compilation error with 32 bit build

v2:
* Added a new API for configuring security device to do Rx inject to a specific
  ethdev port
* Rebased

 doc/guides/cryptodevs/features/default.ini |  1 +
 lib/cryptodev/rte_cryptodev.h  |  2 +
 lib/security/rte_security.c| 22 ++
 lib/security/rte_security.h| 85 ++
 lib/security/rte_security_driver.h | 44 +++
 lib/security/version.map   |  3 +
 6 files changed, 157 insertions(+)

diff --git a/doc/guides/cryptodevs/features/default.ini 
b/doc/guides/cryptodevs/features/default.ini
index 6f637fa7e2..f411d4bab7 100644
--- a/doc/guides/cryptodevs/features/default.ini
+++ b/doc/guides/cryptodevs/features/default.ini
@@ -34,6 +34,7 @@ Sym raw data path API  =
 Cipher multiple data units =
 Cipher wrapped key =
 Inner checksum =
+Rx inject  =
 
 ;
 ; Supported crypto algorithms of a default crypto driver.
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index 9f07e1ed2c..05aabb6526 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -534,6 +534,8 @@ rte_cryptodev_asym_get_xform_string(enum 
rte_crypto_asym_xform_type xform_enum);
 /**< Support wrapped key in cipher xform  */
 #define RTE_CRYPTODEV_FF_SECURITY_INNER_CSUM   (1ULL << 27)
 /**< Support inner checksum computation/verification */
+#define RTE_CRYPTODEV_FF_SECURITY_RX_INJECT(1ULL << 28)
+/**< Support Rx injection after security processing */
 
 /**
  * Get the name of a crypto device feature flag
diff --git a/lib/security/rte_security.c b/lib/security/rte_security.c
index ab44bbe0f0..fa8d2bb7ce 100644
--- a/lib/security/rte_security.c
+++ b/lib/security/rte_security.c
@@ -321,6 +321,28 @@ rte_security_capability_get(void *ctx, struct 
rte_security_capability_idx *idx)
return NULL;
 }
 
+int
+rte_security_rx_inject_configure(void *ctx, uint16_t port_id, bool enable)
+{
+   struct rte_security_ctx *instance = ctx;
+
+   RTE_PTR_OR_ERR_RET(instance, -EINVAL);
+   RTE_PTR_OR_ERR_RET(instance->ops, -ENOTSUP);
+   RTE_PTR_OR_ERR_RET(instance->ops->rx_inject_configure, -ENOTSUP);
+
+   return instance->ops->rx_inject_configure(instance->device, port_id, 
enable);
+}
+
+uint16_t
+rte_security_inb_pkt_rx_inject(void *ctx, struct rte_mbuf **pkts, void **sess,
+  uint16_t nb_pkts)
+{
+   struct rte_security_ctx *instance = ctx;
+
+   return instance->ops->inb_pkt_rx_inject(instance->device, pkts,
+   (struct rte_security_session 
**)sess, nb_pkts);
+}
+
 static int
 security_handle_cryptodev_list(const char *cmd __rte_unused,
   const char *params __rte_unused,
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index c9cc7a45a6..fe8e8e9813 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -1310,6 +1310,91 @@ const struct rte_security_capability *
 rte_security_capability_get(void *instance,
struct rte_security_capability_idx *idx);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice

[PATCH v3 2/2] test/cryptodev: add Rx inject test

2023-09-29 Thread Anoob Joseph
From: Vidya Sagar Velumuri 

Add test to verify Rx inject. The test case added would push a known
vector to cryptodev which would be injected to ethdev Rx. The test
case verifies that the packet is received from ethdev Rx and is
processed successfully. It also verifies that the userdata matches with
the expectation.

Signed-off-by: Anoob Joseph 
Signed-off-by: Vidya Sagar Velumuri 
---
 app/test/test_cryptodev.c| 340 +++
 app/test/test_cryptodev_security_ipsec.h |   1 +
 2 files changed, 288 insertions(+), 53 deletions(-)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index f2112e181e..b645cb32f1 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -17,6 +17,7 @@
 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -1426,6 +1427,93 @@ ut_setup_security(void)
return dev_configure_and_start(0);
 }
 
+static int
+ut_setup_security_rx_inject(void)
+{
+   struct rte_mempool *mbuf_pool = rte_mempool_lookup("CRYPTO_MBUFPOOL");
+   struct crypto_testsuite_params *ts_params = &testsuite_params;
+   struct rte_eth_conf port_conf = {
+   .rxmode = {
+   .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM |
+   RTE_ETH_RX_OFFLOAD_SECURITY,
+   },
+   .txmode = {
+   .offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
+   },
+   .lpbk_mode = 1,  /* Enable loopback */
+   };
+   struct rte_cryptodev_info dev_info;
+   struct rte_eth_rxconf rx_conf = {
+   .rx_thresh = {
+   .pthresh = 8,
+   .hthresh = 8,
+   .wthresh = 8,
+   },
+   .rx_free_thresh = 32,
+   };
+   uint16_t nb_ports;
+   void *sec_ctx;
+   int ret;
+
+   rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+   if (!(dev_info.feature_flags & RTE_CRYPTODEV_FF_SECURITY_RX_INJECT) ||
+   !(dev_info.feature_flags & RTE_CRYPTODEV_FF_SECURITY)) {
+   RTE_LOG(INFO, USER1,
+   "Feature requirements for IPsec Rx inject test case not 
met\n");
+   return TEST_SKIPPED;
+   }
+
+   sec_ctx = rte_cryptodev_get_sec_ctx(ts_params->valid_devs[0]);
+   if (sec_ctx == NULL)
+   return TEST_SKIPPED;
+
+   nb_ports = rte_eth_dev_count_avail();
+   if (nb_ports == 0)
+   return TEST_SKIPPED;
+
+   ret = rte_eth_dev_configure(0 /* port_id */,
+   1 /* nb_rx_queue */,
+   0 /* nb_tx_queue */,
+   &port_conf);
+   if (ret) {
+   printf("Could not configure ethdev port 0 [err=%d]\n", ret);
+   return TEST_SKIPPED;
+   }
+
+   /* Rx queue setup */
+   ret = rte_eth_rx_queue_setup(0 /* port_id */,
+0 /* rx_queue_id */,
+1024 /* nb_rx_desc */,
+SOCKET_ID_ANY,
+&rx_conf,
+mbuf_pool);
+   if (ret) {
+   printf("Could not setup eth port 0 queue 0\n");
+   return TEST_SKIPPED;
+   }
+
+   ret = rte_security_rx_inject_configure(sec_ctx, 0, true);
+   if (ret) {
+   printf("Could not enable Rx inject offload");
+   return TEST_SKIPPED;
+   }
+
+   ret = rte_eth_dev_start(0);
+   if (ret) {
+   printf("Could not start ethdev");
+   return TEST_SKIPPED;
+   }
+
+   ret = rte_eth_promiscuous_enable(0);
+   if (ret) {
+   printf("Could not enable promiscuous mode");
+   return TEST_SKIPPED;
+   }
+
+   /* Configure and start cryptodev with no features disabled */
+   return dev_configure_and_start(0);
+}
+
 void
 ut_teardown(void)
 {
@@ -1478,6 +1566,33 @@ ut_teardown(void)
rte_cryptodev_stop(ts_params->valid_devs[0]);
 }
 
+static void
+ut_teardown_rx_inject(void)
+{
+   struct crypto_testsuite_params *ts_params = &testsuite_params;
+   void *sec_ctx;
+   int ret;
+
+   if  (rte_eth_dev_count_avail() != 0) {
+   ret = rte_eth_dev_reset(0);
+   if (ret)
+   printf("Could not reset eth port 0");
+
+   }
+
+   ut_teardown();
+
+   sec_ctx = rte_cryptodev_get_sec_ctx(ts_params->valid_devs[0]);
+   if (sec_ctx == NULL)
+   return;
+
+   ret = rte_security_rx_inject_configure(sec_ctx, 0, false);
+   if (ret) {
+   printf("Could not disable Rx inject offload");
+   return;
+   }
+}
+
 static int
 test_device_configure_invalid_dev_id(void)
 {
@@ -9875,6 +9990,136 @@ ext_mbuf_create(struct rte_mempool *mbuf_pool, int 
pkt_len,
return NULL;
 }
 

Re: [PATCH] lib/hash: new feature adding existing key

2023-09-29 Thread David Marchand
On Mon, Mar 13, 2023 at 8:36 AM Abdullah Ömer Yamaç
 wrote:
>
> In some use cases, data inserted with an existing key shouldn't be
> overwritten. We use a new flag in this patch to disable overwriting
> data for the same key.
>
> Signed-off-by: Abdullah Ömer Yamaç 

If this patch is still relevant, please rebase it and send a new revision.
Don't forget to copy the library maintainers.


-- 
David Marchand



[RFC] rte_ether_unformat: accept more inputs

2023-09-29 Thread Stephen Hemminger
This updates rte_ether_addr_unformat() to accept more types
of input. There have been requests to handle Windows and other formats.

Signed-off-by: Stephen Hemminger 
---
Marking this as RFC until unit tests are added.

 lib/net/rte_ether.c | 78 +++--
 lib/net/rte_ether.h |  6 ++--
 2 files changed, 66 insertions(+), 18 deletions(-)

diff --git a/lib/net/rte_ether.c b/lib/net/rte_ether.c
index 66d9a9d0699a..5250353eb162 100644
--- a/lib/net/rte_ether.c
+++ b/lib/net/rte_ether.c
@@ -38,7 +38,8 @@ static int8_t get_xdigit(char ch)
 }
 
 /* Convert 00:11:22:33:44:55 to ethernet address */
-static bool get_ether_addr6(const char *s0, struct rte_ether_addr *ea)
+static bool get_ether_addr6(const char *s0, struct rte_ether_addr *ea,
+   const char sep)
 {
const char *s = s0;
int i;
@@ -50,14 +51,17 @@ static bool get_ether_addr6(const char *s0, struct 
rte_ether_addr *ea)
if (x < 0)
return false;
 
-   ea->addr_bytes[i] = x << 4;
-   x = get_xdigit(*s++);
-   if (x < 0)
-   return false;
-   ea->addr_bytes[i] |= x;
+   ea->addr_bytes[i] = x;
+   if (*s != sep) {
+   x = get_xdigit(*s++);
+   if (x < 0)
+   return false;
+   ea->addr_bytes[i] <<= 4;
+   ea->addr_bytes[i] |= x;
+   }
 
if (i < RTE_ETHER_ADDR_LEN - 1 &&
-   *s++ != ':')
+   *s++ != sep)
return false;
}
 
@@ -66,7 +70,8 @@ static bool get_ether_addr6(const char *s0, struct 
rte_ether_addr *ea)
 }
 
 /* Convert 0011:2233:4455 to ethernet address */
-static bool get_ether_addr3(const char *s, struct rte_ether_addr *ea)
+static bool get_ether_addr3(const char *s, struct rte_ether_addr *ea,
+   const char sep)
 {
int i, j;
 
@@ -80,12 +85,14 @@ static bool get_ether_addr3(const char *s, struct 
rte_ether_addr *ea)
if (x < 0)
return false;
w = (w << 4) | x;
+   if (*s == sep)
+   break;
}
ea->addr_bytes[i] = w >> 8;
ea->addr_bytes[i + 1] = w & 0xff;
 
if (i < RTE_ETHER_ADDR_LEN - 2 &&
-   *s++ != ':')
+   *s++ != sep)
return false;
}
 
@@ -93,17 +100,56 @@ static bool get_ether_addr3(const char *s, struct 
rte_ether_addr *ea)
 }
 
 /*
- * Like ether_aton_r but can handle either
- * XX:XX:XX:XX:XX:XX or ::
- * and is more restrictive.
+ * Scan input to see if it is separated by dash, colon or period.
+ * Returns the separator and the number of matches.
+ * If separators are mixed, only occurrences of the first separator
+ * found are counted.
+ */
+static unsigned int get_ether_sep(const char *s, char *sep)
+{
+   const char seperators[] = "-:.";
+   unsigned int count = 0;
+   const char *cp;
+
+   cp = strpbrk(s, seperators);
+   if (cp == NULL)
+   return 0;
+
+   *sep = *cp;
+   do {
+   ++count;
+   /* find next instance of separator */
+   cp = strchr(cp + 1, *sep);
+   } while (cp != NULL);
+
+   return count;
+}
+
+/*
+ * Be liberal in accepting a wide variety of notational formats
+ * for a MAC address, including:
+ *  - Linux format: six groups of hexadecimal digits separated by colons
+ *  - Windows format: six groups separated by hyphens
+ *  - three groups of four hexadecimal digits
  */
 int
 rte_ether_unformat_addr(const char *s, struct rte_ether_addr *ea)
 {
-   if (get_ether_addr6(s, ea))
-   return 0;
-   if (get_ether_addr3(s, ea))
-   return 0;
+   unsigned int count;
+   char sep = '\0';
+
+   count = get_ether_sep(s, &sep);
+   switch (count) {
+   case 5: /* i.e 01:23:45:67:89:AB */
+   if (get_ether_addr6(s, ea, sep))
+   return 0;
+   break;
+   case 2: /* i.e 0123.4567.89AB */
+   if (get_ether_addr3(s, ea, sep))
+   return 0;
+   break;
+   default:
+   break;
+   }
 
rte_errno = EINVAL;
return -1;
diff --git a/lib/net/rte_ether.h b/lib/net/rte_ether.h
index b35c72c7b0e0..e9a4ba9b5860 100644
--- a/lib/net/rte_ether.h
+++ b/lib/net/rte_ether.h
@@ -254,8 +254,10 @@ rte_ether_format_addr(char *buf, uint16_t size,
  *
  * @param str
  *   A pointer to buffer contains the formatted MAC address.
- *   The supported formats are:
- * XX:XX:XX:XX:XX:XX or ::
+ *   The example formats are:
+ * XX:XX:XX:XX:XX:XX - Canonical form
+ * XX-XX-XX-XX-XX-XX - Windows and IEEE 802
+ * ::- original DPDK
  *   where XX is a hex digi

[PATCH v3 00/12] VRB2 bbdev PMD introduction

2023-09-29 Thread Nicolas Chautru
v3: updates based on v2 review:
- split into smaller incremental commits
- FFT windowing exposed through a more generic structure
- refactor using wrapper functions to manage device variants
- removed custom dump function
- consider a request for the unsupported SO option as an error
instead of falling back. 
- cosmetic and doc update.
Thanks

v2: doc, comments and commit-log updates.

This series includes changes to the VRB BBDEV PMD for 23.11.

This allows the VRB unified driver to support the new VRB2
implementation variant on GNR-D.

This also includes a minor change to the dev_info to expose FFT version
flexibility, giving the application information on which windowing
LUT is configured dynamically on the device.

Nicolas Chautru (12):
  bbdev: add FFT window width member in driver info
  baseband/acc: add FFT window width in the VRB PMD
  baseband/acc: remove the 4G SO capability for VRB1
  baseband/acc: allocate FCW memory separately
  baseband/acc: add support for MLD operation
  baseband/acc: refactor to allow unified driver extension
  baseband/acc: adding VRB2 device variant
  baseband/acc: add FEC capabilities for the VRB2 variant
  baseband/acc: add FFT support to VRB2 variant
  baseband/acc: add MLD support in VRB2 variant
  baseband/acc: add support for VRB2 engine error detection
  baseband/acc: add configure helper for VRB2

 doc/guides/bbdevs/features/vrb2.ini|   14 +
 doc/guides/bbdevs/index.rst|1 +
 doc/guides/bbdevs/vrb1.rst |4 -
 doc/guides/bbdevs/vrb2.rst |  206 +++
 doc/guides/rel_notes/release_23_11.rst |3 +
 drivers/baseband/acc/acc100_pmd.h  |2 +
 drivers/baseband/acc/acc_common.h  |  172 ++-
 drivers/baseband/acc/rte_acc100_pmd.c  |   10 +-
 drivers/baseband/acc/rte_vrb_pmd.c | 1801 ++--
 drivers/baseband/acc/vrb1_pf_enum.h|   17 +-
 drivers/baseband/acc/vrb2_pf_enum.h|  124 ++
 drivers/baseband/acc/vrb2_vf_enum.h|  121 ++
 drivers/baseband/acc/vrb_cfg.h |   16 +
 drivers/baseband/acc/vrb_pmd.h |  173 ++-
 lib/bbdev/rte_bbdev.h  |2 +
 lib/bbdev/rte_bbdev_op.h   |2 +
 16 files changed, 2502 insertions(+), 166 deletions(-)
 create mode 100644 doc/guides/bbdevs/features/vrb2.ini
 create mode 100644 doc/guides/bbdevs/vrb2.rst
 create mode 100644 drivers/baseband/acc/vrb2_pf_enum.h
 create mode 100644 drivers/baseband/acc/vrb2_vf_enum.h

-- 
2.34.1



[PATCH v3 01/12] bbdev: add FFT window width member in driver info

2023-09-29 Thread Nicolas Chautru
This exposes the width of each windowing shape configured on
the device. It allows distinguishing different versions of the
flexible pointwise windowing applied to the FFT and exposes
this platform configuration to the application.

The SRS processing chain
(https://doc.dpdk.org/guides/prog_guide/bbdev.html#bbdev-fft-operation)
includes a pointwise multiplication by time window whose shape width
needs to be exposed, notably for accurate SNR estimate.
Using that mechanism user application can retrieve information related
to what has been dynamically programmed on any bbdev device
supporting FFT windowing operation.

Signed-off-by: Nicolas Chautru 
---
 lib/bbdev/rte_bbdev.h| 2 ++
 lib/bbdev/rte_bbdev_op.h | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h
index 2985c9f42b..df691c479f 100644
--- a/lib/bbdev/rte_bbdev.h
+++ b/lib/bbdev/rte_bbdev.h
@@ -349,6 +349,8 @@ struct rte_bbdev_driver_info {
const struct rte_bbdev_op_cap *capabilities;
/** Device cpu_flag requirements */
const enum rte_cpu_flag_t *cpu_flag_reqs;
+   /** FFT window width relative to a 2048-point FFT, for each window. */
+   uint16_t fft_window_width[RTE_BBDEV_MAX_FFT_WIN];
 };
 
 /** Macro used at end of bbdev PMD list */
diff --git a/lib/bbdev/rte_bbdev_op.h b/lib/bbdev/rte_bbdev_op.h
index 693baa8386..9d27226ca6 100644
--- a/lib/bbdev/rte_bbdev_op.h
+++ b/lib/bbdev/rte_bbdev_op.h
@@ -51,6 +51,8 @@ extern "C" {
 /* 12 CS maximum */
 #define RTE_BBDEV_MAX_CS_2 (6)
 #define RTE_BBDEV_MAX_CS   (12)
+/* Up to 16 windows for FFT. */
+#define RTE_BBDEV_MAX_FFT_WIN (16)
 /* MLD-TS up to 4 layers */
 #define RTE_BBDEV_MAX_MLD_LAYERS (4)
 /* 12 SB per RB */
-- 
2.34.1



[PATCH v3 03/12] baseband/acc: remove the 4G SO capability for VRB1

2023-09-29 Thread Nicolas Chautru
This removes the specific capability and support for the LTE Decoder
Soft Output option on the VRB1 PMD.

This is triggered by a vendor decision to defeature the related optional
capability, so as to avoid a theoretical risk of race conditions
impacting the device reliability. That optional APP LLR output does
not impact the actual decoder hard output.

Signed-off-by: Nicolas Chautru 
---
 doc/guides/bbdevs/vrb1.rst |  4 
 drivers/baseband/acc/rte_vrb_pmd.c | 10 ++
 2 files changed, 6 insertions(+), 8 deletions(-)

diff --git a/doc/guides/bbdevs/vrb1.rst b/doc/guides/bbdevs/vrb1.rst
index 9c48d30964..fdefb20651 100644
--- a/doc/guides/bbdevs/vrb1.rst
+++ b/doc/guides/bbdevs/vrb1.rst
@@ -71,11 +71,7 @@ The Intel vRAN Boost v1.0 PMD supports the following bbdev capabilities:
 - ``RTE_BBDEV_TURBO_EARLY_TERMINATION``: set early termination feature.
 - ``RTE_BBDEV_TURBO_DEC_SCATTER_GATHER``: supports scatter-gather for input/output data.
 - ``RTE_BBDEV_TURBO_HALF_ITERATION_EVEN``: set half iteration granularity.
-   - ``RTE_BBDEV_TURBO_SOFT_OUTPUT``: set the APP LLR soft output.
-   - ``RTE_BBDEV_TURBO_EQUALIZER``: set the turbo equalizer feature.
-   - ``RTE_BBDEV_TURBO_SOFT_OUT_SATURATE``: set the soft output saturation.
 - ``RTE_BBDEV_TURBO_CONTINUE_CRC_MATCH``: set to run an extra odd iteration after CRC match.
-   - ``RTE_BBDEV_TURBO_NEG_LLR_1_BIT_SOFT_OUT``: set if negative APP LLR output supported.
 - ``RTE_BBDEV_TURBO_MAP_DEC``: supports flexible parallel MAP engine decoding.
 
 * For the FFT operation:
diff --git a/drivers/baseband/acc/rte_vrb_pmd.c b/drivers/baseband/acc/rte_vrb_pmd.c
index c5a74bae11..f11882f90e 100644
--- a/drivers/baseband/acc/rte_vrb_pmd.c
+++ b/drivers/baseband/acc/rte_vrb_pmd.c
@@ -1025,15 +1025,11 @@ vrb_dev_info_get(struct rte_bbdev *dev, struct rte_bbdev_driver_info *dev_info)
RTE_BBDEV_TURBO_SUBBLOCK_DEINTERLEAVE |
RTE_BBDEV_TURBO_CRC_TYPE_24B |
RTE_BBDEV_TURBO_DEC_CRC_24B_DROP |
-   RTE_BBDEV_TURBO_EQUALIZER |
-   RTE_BBDEV_TURBO_SOFT_OUT_SATURATE |
RTE_BBDEV_TURBO_HALF_ITERATION_EVEN |
RTE_BBDEV_TURBO_CONTINUE_CRC_MATCH |
-   RTE_BBDEV_TURBO_SOFT_OUTPUT |
RTE_BBDEV_TURBO_EARLY_TERMINATION |
RTE_BBDEV_TURBO_DEC_INTERRUPTS |
RTE_BBDEV_TURBO_NEG_LLR_1_BIT_IN |
-   RTE_BBDEV_TURBO_NEG_LLR_1_BIT_SOFT_OUT |
RTE_BBDEV_TURBO_MAP_DEC |
RTE_BBDEV_TURBO_DEC_TB_CRC_24B_KEEP |
RTE_BBDEV_TURBO_DEC_SCATTER_GATHER,
@@ -1982,6 +1978,12 @@ enqueue_dec_one_op_cb(struct acc_queue *q, struct rte_bbdev_dec_op *op,
 struct rte_mbuf *input, *h_output_head, *h_output,
 *s_output_head, *s_output;
 
+   if ((q->d->device_variant == VRB1_VARIANT) &&
+   (check_bit(op->turbo_dec.op_flags, RTE_BBDEV_TURBO_SOFT_OUTPUT))) {
+   /* SO not supported for VRB1. */
+   return -EPERM;
+   }
+
desc = acc_desc(q, total_enqueued_cbs);
vrb_fcw_td_fill(op, &desc->req.fcw_td);
 
-- 
2.34.1



[PATCH v3 02/12] baseband/acc: add FFT window width in the VRB PMD

2023-09-29 Thread Nicolas Chautru
This exposes the FFT window width introduced in the previous commit,
based on what is configured dynamically on the device platform.

Signed-off-by: Nicolas Chautru 
---
 drivers/baseband/acc/acc_common.h  |  3 +++
 drivers/baseband/acc/rte_vrb_pmd.c | 29 +
 2 files changed, 32 insertions(+)

diff --git a/drivers/baseband/acc/acc_common.h b/drivers/baseband/acc/acc_common.h
index 5bb00746c3..7d24c644c0 100644
--- a/drivers/baseband/acc/acc_common.h
+++ b/drivers/baseband/acc/acc_common.h
@@ -512,6 +512,8 @@ struct acc_deq_intr_details {
 enum {
ACC_VF2PF_STATUS_REQUEST = 1,
ACC_VF2PF_USING_VF = 2,
+   ACC_VF2PF_LUT_VER_REQUEST = 3,
+   ACC_VF2PF_FFT_WIN_REQUEST = 4,
 };
 
 
@@ -558,6 +560,7 @@ struct acc_device {
queue_offset_fun_t queue_offset;  /* Device specific queue offset */
uint16_t num_qgroups;
uint16_t num_aqs;
+   uint16_t fft_window_width[RTE_BBDEV_MAX_FFT_WIN]; /* FFT windowing width. */
 };
 
 /* Structure associated with each queue. */
diff --git a/drivers/baseband/acc/rte_vrb_pmd.c b/drivers/baseband/acc/rte_vrb_pmd.c
index 9e5a73c9c7..c5a74bae11 100644
--- a/drivers/baseband/acc/rte_vrb_pmd.c
+++ b/drivers/baseband/acc/rte_vrb_pmd.c
@@ -298,6 +298,34 @@ vrb_device_status(struct rte_bbdev *dev)
return reg;
 }
 
+/* Request device FFT windowing information. */
+static inline void
+vrb_device_fft_win(struct rte_bbdev *dev, struct rte_bbdev_driver_info *dev_info)
+{
+   struct acc_device *d = dev->data->dev_private;
+   uint32_t reg, time_out = 0, win;
+
+   if (d->pf_device)
+   return;
+
+   /* Check from the device the first time. */
+   if (d->fft_window_width[0] == 0) {
+   for (win = 0; win < RTE_BBDEV_MAX_FFT_WIN; win++) {
+   vrb_vf2pf(d, ACC_VF2PF_FFT_WIN_REQUEST | win);
+   reg = acc_reg_read(d, d->reg_addr->pf2vf_doorbell);
+   while ((time_out < ACC_STATUS_TO) && (reg == RTE_BBDEV_DEV_NOSTATUS)) {
+   usleep(ACC_STATUS_WAIT); /* Wait for VF->PF->VF comms. */
+   reg = acc_reg_read(d, d->reg_addr->pf2vf_doorbell);
+   time_out++;
+   }
+   d->fft_window_width[win] = reg;
+   }
+   }
+
+   for (win = 0; win < RTE_BBDEV_MAX_FFT_WIN; win++)
+   dev_info->fft_window_width[win] = d->fft_window_width[win];
+}
+
/* Checks PF Info Ring to find the interrupt cause and handles it accordingly. */
 static inline void
 vrb_check_ir(struct acc_device *acc_dev)
@@ -1100,6 +1128,7 @@ vrb_dev_info_get(struct rte_bbdev *dev, struct rte_bbdev_driver_info *dev_info)
fetch_acc_config(dev);
/* Check the status of device. */
dev_info->device_status = vrb_device_status(dev);
+   vrb_device_fft_win(dev, dev_info);
 
/* Exposed number of queues. */
dev_info->num_queues[RTE_BBDEV_OP_NONE] = 0;
-- 
2.34.1



[PATCH v3 04/12] baseband/acc: allocate FCW memory separately

2023-09-29 Thread Nicolas Chautru
This allows more flexibility in the FCW size for the
unified driver. No actual functional change.

Signed-off-by: Nicolas Chautru 
---
 drivers/baseband/acc/acc_common.h  |  4 +++-
 drivers/baseband/acc/rte_vrb_pmd.c | 25 -
 2 files changed, 27 insertions(+), 2 deletions(-)

diff --git a/drivers/baseband/acc/acc_common.h b/drivers/baseband/acc/acc_common.h
index 7d24c644c0..2c7425e524 100644
--- a/drivers/baseband/acc/acc_common.h
+++ b/drivers/baseband/acc/acc_common.h
@@ -101,6 +101,7 @@
 #define ACC_NUM_QGRPS_PER_WORD 8
 #define ACC_MAX_NUM_QGRPS  32
 #define ACC_RING_SIZE_GRANULARITY  64
+#define ACC_MAX_FCW_SIZE  128
 
 /* Constants from K0 computation from 3GPP 38.212 Table 5.4.2.1-2 */
 #define ACC_N_ZC_1 66 /* N = 66 Zc for BG 1 */
@@ -584,13 +585,14 @@ struct __rte_cache_aligned acc_queue {
uint32_t aq_enqueued;  /* Count how many "batches" have been enqueued */
uint32_t aq_dequeued;  /* Count how many "batches" have been dequeued */
uint32_t irq_enable;  /* Enable ops dequeue interrupts if set to 1 */
-   struct rte_mempool *fcw_mempool;  /* FCW mempool */
enum rte_bbdev_op_type op_type;  /* Type of this Queue: TE or TD */
/* Internal Buffers for loopback input */
uint8_t *lb_in;
uint8_t *lb_out;
+   uint8_t *fcw_ring;
rte_iova_t lb_in_addr_iova;
rte_iova_t lb_out_addr_iova;
+   rte_iova_t fcw_ring_addr_iova;
int8_t *derm_buffer; /* interim buffer for de-rm in SDK */
struct acc_device *d;
 };
diff --git a/drivers/baseband/acc/rte_vrb_pmd.c b/drivers/baseband/acc/rte_vrb_pmd.c
index f11882f90e..cf0551c0c7 100644
--- a/drivers/baseband/acc/rte_vrb_pmd.c
+++ b/drivers/baseband/acc/rte_vrb_pmd.c
@@ -890,6 +890,25 @@ vrb_queue_setup(struct rte_bbdev *dev, uint16_t queue_id,
goto free_companion_ring_addr;
}
 
+   q->fcw_ring = rte_zmalloc_socket(dev->device->driver->name,
+   ACC_MAX_FCW_SIZE * d->sw_ring_max_depth,
+   RTE_CACHE_LINE_SIZE, conf->socket);
+   if (q->fcw_ring == NULL) {
+   rte_bbdev_log(ERR, "Failed to allocate fcw_ring memory");
+   ret = -ENOMEM;
+   goto free_companion_ring_addr;
+   }
+   q->fcw_ring_addr_iova = rte_malloc_virt2iova(q->fcw_ring);
+
+   /* For FFT we need to store the FCW separately */
+   if (conf->op_type == RTE_BBDEV_OP_FFT) {
+   for (desc_idx = 0; desc_idx < d->sw_ring_max_depth; desc_idx++) {
+   desc = q->ring_addr + desc_idx;
+   desc->req.data_ptrs[0].address = q->fcw_ring_addr_iova +
+   desc_idx * ACC_MAX_FCW_SIZE;
+   }
+   }
+
q->qgrp_id = (q_idx >> VRB1_GRP_ID_SHIFT) & 0xF;
q->vf_id = (q_idx >> VRB1_VF_ID_SHIFT)  & 0x3F;
q->aq_id = q_idx & 0xF;
@@ -1001,6 +1020,7 @@ vrb_queue_release(struct rte_bbdev *dev, uint16_t q_id)
if (q != NULL) {
/* Mark the Queue as un-assigned. */
d->q_assigned_bit_map[q->qgrp_id] &= (~0ULL - (1 << (uint64_t) q->aq_id));
+   rte_free(q->fcw_ring);
rte_free(q->companion_ring_addr);
rte_free(q->lb_in);
rte_free(q->lb_out);
@@ -3234,7 +3254,10 @@ vrb_enqueue_fft_one_op(struct acc_queue *q, struct rte_bbdev_fft_op *op,
output = op->fft.base_output.data;
in_offset = op->fft.base_input.offset;
out_offset = op->fft.base_output.offset;
-   fcw = &desc->req.fcw_fft;
+
+   fcw = (struct acc_fcw_fft *) (q->fcw_ring +
+   ((q->sw_ring_head + total_enqueued_cbs) & q->sw_ring_wrap_mask)
+   * ACC_MAX_FCW_SIZE);
 
vrb1_fcw_fft_fill(op, fcw);
vrb1_dma_desc_fft_fill(op, &desc->req, input, output, &in_offset, 
&out_offset);
-- 
2.34.1



[PATCH v3 05/12] baseband/acc: add support for MLD operation

2023-09-29 Thread Nicolas Chautru
This adds no functionality related to the MLD operation yet,
but allows the unified PMD to support the operation
being added moving forward.

Signed-off-by: Nicolas Chautru 
Reviewed-by: Maxime Coquelin 
---
 drivers/baseband/acc/acc_common.h  |  1 +
 drivers/baseband/acc/rte_vrb_pmd.c | 39 --
 drivers/baseband/acc/vrb_pmd.h | 12 +
 3 files changed, 45 insertions(+), 7 deletions(-)

diff --git a/drivers/baseband/acc/acc_common.h b/drivers/baseband/acc/acc_common.h
index 2c7425e524..788abf1a3c 100644
--- a/drivers/baseband/acc/acc_common.h
+++ b/drivers/baseband/acc/acc_common.h
@@ -87,6 +87,7 @@
 #define ACC_FCW_LE_BLEN32
 #define ACC_FCW_LD_BLEN36
 #define ACC_FCW_FFT_BLEN   28
+#define ACC_FCW_MLDTS_BLEN 32
 #define ACC_5GUL_SIZE_016
 #define ACC_5GUL_SIZE_140
 #define ACC_5GUL_OFFSET_0  36
diff --git a/drivers/baseband/acc/rte_vrb_pmd.c b/drivers/baseband/acc/rte_vrb_pmd.c
index cf0551c0c7..a1de012b40 100644
--- a/drivers/baseband/acc/rte_vrb_pmd.c
+++ b/drivers/baseband/acc/rte_vrb_pmd.c
@@ -37,7 +37,7 @@ vrb1_queue_offset(bool pf_device, uint8_t vf_id, uint8_t qgrp_id, uint16_t aq_id
return ((qgrp_id << 7) + (aq_id << 3) + VRB1_VfQmgrIngressAq);
 }
 
-enum {UL_4G = 0, UL_5G, DL_4G, DL_5G, FFT, NUM_ACC};
+enum {UL_4G = 0, UL_5G, DL_4G, DL_5G, FFT, MLD, NUM_ACC};
 
 /* Return the accelerator enum for a Queue Group Index. */
 static inline int
@@ -53,6 +53,7 @@ accFromQgid(int qg_idx, const struct rte_acc_conf *acc_conf)
NumQGroupsPerFn[DL_4G] = acc_conf->q_dl_4g.num_qgroups;
NumQGroupsPerFn[DL_5G] = acc_conf->q_dl_5g.num_qgroups;
NumQGroupsPerFn[FFT] = acc_conf->q_fft.num_qgroups;
+   NumQGroupsPerFn[MLD] = acc_conf->q_mld.num_qgroups;
for (acc = UL_4G;  acc < NUM_ACC; acc++)
for (qgIdx = 0; qgIdx < NumQGroupsPerFn[acc]; qgIdx++)
accQg[qgIndex++] = acc;
@@ -83,6 +84,9 @@ qtopFromAcc(struct rte_acc_queue_topology **qtop, int acc_enum, struct rte_acc_c
case FFT:
p_qtop = &(acc_conf->q_fft);
break;
+   case MLD:
+   p_qtop = &(acc_conf->q_mld);
+   break;
default:
/* NOTREACHED. */
rte_bbdev_log(ERR, "Unexpected error evaluating %s using %d", __func__, acc_enum);
@@ -139,6 +143,9 @@ initQTop(struct rte_acc_conf *acc_conf)
acc_conf->q_fft.num_aqs_per_groups = 0;
acc_conf->q_fft.num_qgroups = 0;
acc_conf->q_fft.first_qgroup_index = -1;
+   acc_conf->q_mld.num_aqs_per_groups = 0;
+   acc_conf->q_mld.num_qgroups = 0;
+   acc_conf->q_mld.first_qgroup_index = -1;
 }
 
 static inline void
@@ -250,7 +257,7 @@ fetch_acc_config(struct rte_bbdev *dev)
}
 
rte_bbdev_log_debug(
-   "%s Config LLR SIGN IN/OUT %s %s QG %u %u %u %u %u AQ 
%u %u %u %u %u Len %u %u %u %u %u\n",
+   "%s Config LLR SIGN IN/OUT %s %s QG %u %u %u %u %u %u 
AQ %u %u %u %u %u %u Len %u %u %u %u %u %u\n",
(d->pf_device) ? "PF" : "VF",
(acc_conf->input_pos_llr_1_bit) ? "POS" : "NEG",
(acc_conf->output_pos_llr_1_bit) ? "POS" : "NEG",
@@ -259,16 +266,19 @@ fetch_acc_config(struct rte_bbdev *dev)
acc_conf->q_ul_5g.num_qgroups,
acc_conf->q_dl_5g.num_qgroups,
acc_conf->q_fft.num_qgroups,
+   acc_conf->q_mld.num_qgroups,
acc_conf->q_ul_4g.num_aqs_per_groups,
acc_conf->q_dl_4g.num_aqs_per_groups,
acc_conf->q_ul_5g.num_aqs_per_groups,
acc_conf->q_dl_5g.num_aqs_per_groups,
acc_conf->q_fft.num_aqs_per_groups,
+   acc_conf->q_mld.num_aqs_per_groups,
acc_conf->q_ul_4g.aq_depth_log2,
acc_conf->q_dl_4g.aq_depth_log2,
acc_conf->q_ul_5g.aq_depth_log2,
acc_conf->q_dl_5g.aq_depth_log2,
-   acc_conf->q_fft.aq_depth_log2);
+   acc_conf->q_fft.aq_depth_log2,
+   acc_conf->q_mld.aq_depth_log2);
 }
 
 static inline void
@@ -339,7 +349,7 @@ vrb_check_ir(struct acc_device *acc_dev)
 
while (ring_data->valid) {
if ((ring_data->int_nb < ACC_PF_INT_DMA_DL_DESC_IRQ) || (
-   ring_data->int_nb > ACC_PF_INT_DMA_DL5G_DESC_IRQ)) {
+   ring_data->int_nb > ACC_PF_INT_DMA_MLD_DESC_IRQ)) {
 rte_bbdev_log(WARNING, "InfoRing: ITR:%d Info:0x%x",
 ring_data->int_nb, ring_data->detailed_info);
/* Initialize Info Ring entry and move forward. */
@@ -37

[PATCH v3 06/12] baseband/acc: refactor to allow unified driver extension

2023-09-29 Thread Nicolas Chautru
Adding a few functions and common code prior to
extending the VRB driver.

Signed-off-by: Nicolas Chautru 
---
 drivers/baseband/acc/acc_common.h | 164 +++---
 drivers/baseband/acc/rte_acc100_pmd.c |   4 +-
 drivers/baseband/acc/rte_vrb_pmd.c|  62 +-
 3 files changed, 184 insertions(+), 46 deletions(-)

diff --git a/drivers/baseband/acc/acc_common.h b/drivers/baseband/acc/acc_common.h
index 788abf1a3c..89893eae43 100644
--- a/drivers/baseband/acc/acc_common.h
+++ b/drivers/baseband/acc/acc_common.h
@@ -18,6 +18,7 @@
 #define ACC_DMA_BLKID_OUT_HARQ  3
 #define ACC_DMA_BLKID_IN_HARQ   3
 #define ACC_DMA_BLKID_IN_MLD_R  3
+#define ACC_DMA_BLKID_DEWIN_IN  3
 
 /* Values used in filling in decode FCWs */
 #define ACC_FCW_TD_VER  1
@@ -103,6 +104,9 @@
 #define ACC_MAX_NUM_QGRPS  32
 #define ACC_RING_SIZE_GRANULARITY  64
 #define ACC_MAX_FCW_SIZE  128
+#define ACC_IQ_SIZE4
+
+#define ACC_FCW_FFT_BLEN_3 28
 
 /* Constants from K0 computation from 3GPP 38.212 Table 5.4.2.1-2 */
 #define ACC_N_ZC_1 66 /* N = 66 Zc for BG 1 */
@@ -132,6 +136,17 @@
 #define ACC_LIM_21 14 /* 0.21 */
 #define ACC_LIM_31 20 /* 0.31 */
 #define ACC_MAX_E (128 * 1024 - 2)
+#define ACC_MAX_CS 12
+
+#define ACC100_VARIANT  0
+#define VRB1_VARIANT   2
+#define VRB2_VARIANT   3
+
+/* Queue Index Hierarchy */
+#define VRB1_GRP_ID_SHIFT10
+#define VRB1_VF_ID_SHIFT 4
+#define VRB2_GRP_ID_SHIFT12
+#define VRB2_VF_ID_SHIFT 6
 
 /* Helper macro for logging */
 #define rte_acc_log(level, fmt, ...) \
@@ -332,6 +347,37 @@ struct __rte_packed acc_fcw_fft {
res:19;
 };
 
+/* FFT Frame Control Word. */
+struct __rte_packed acc_fcw_fft_3 {
+   uint32_t in_frame_size:16,
+   leading_pad_size:16;
+   uint32_t out_frame_size:16,
+   leading_depad_size:16;
+   uint32_t cs_window_sel;
+   uint32_t cs_window_sel2:16,
+   cs_enable_bmap:16;
+   uint32_t num_antennas:8,
+   idft_size:8,
+   dft_size:8,
+   cs_offset:8;
+   uint32_t idft_shift:8,
+   dft_shift:8,
+   cs_multiplier:16;
+   uint32_t bypass:2,
+   fp16_in:1,
+   fp16_out:1,
+   exp_adj:4,
+   power_shift:4,
+   power_en:1,
+   enable_dewin:1,
+   freq_resample_mode:2,
+   depad_output_size:16;
+   uint16_t cs_theta_0[ACC_MAX_CS];
+   uint32_t cs_theta_d[ACC_MAX_CS];
+   int8_t cs_time_offset[ACC_MAX_CS];
+};
+
+
 /* MLD-TS Frame Control Word */
 struct __rte_packed acc_fcw_mldts {
uint32_t fcw_version:4,
@@ -473,14 +519,14 @@ union acc_info_ring_data {
uint16_t valid: 1;
};
struct {
-   uint32_t aq_id_3: 6;
-   uint32_t qg_id_3: 5;
-   uint32_t vf_id_3: 6;
-   uint32_t int_nb_3: 6;
-   uint32_t msi_0_3: 1;
-   uint32_t vf2pf_3: 6;
-   uint32_t loop_3: 1;
-   uint32_t valid_3: 1;
+   uint32_t aq_id_vrb2: 6;
+   uint32_t qg_id_vrb2: 5;
+   uint32_t vf_id_vrb2: 6;
+   uint32_t int_nb_vrb2: 6;
+   uint32_t msi_0_vrb2: 1;
+   uint32_t vf2pf_vrb2: 6;
+   uint32_t loop_vrb2: 1;
+   uint32_t valid_vrb2: 1;
};
 } __rte_packed;
 
@@ -761,22 +807,105 @@ alloc_sw_rings_min_mem(struct rte_bbdev *dev, struct acc_device *d,
free_base_addresses(base_addrs, i);
 }
 
+/* Wrapper to provide VF index from ring data. */
+static inline uint16_t
+vf_from_ring(const union acc_info_ring_data ring_data, uint16_t device_variant) {
+   if (device_variant == VRB2_VARIANT)
+   return ring_data.vf_id_vrb2;
+   else
+   return ring_data.vf_id;
+}
+
+/* Wrapper to provide QG index from ring data. */
+static inline uint16_t
+qg_from_ring(const union acc_info_ring_data ring_data, uint16_t device_variant) {
+   if (device_variant == VRB2_VARIANT)
+   return ring_data.qg_id_vrb2;
+   else
+   return ring_data.qg_id;
+}
+
+/* Wrapper to provide AQ index from ring data. */
+static inline uint16_t
+aq_from_ring(const union acc_info_ring_data ring_data, uint16_t device_variant) {
+   if (device_variant == VRB2_VARIANT)
+   return ring_data.aq_id_vrb2;
+   else
+   return ring_data.aq_id;
+}
+
+/* Wrapper to provide int index from ring data. */
+static inline uint16_t
+int_from_ring(const union acc_info_ring_data ring_data, uint16_t device_variant) {
+   if (device_variant == VRB2_VARIANT)
+   return ring_data.int_nb_vrb2;
+   else
+   return ring_data.int_nb;
+}
+
+/* Wrapper to provide queue index from group and aq index. */
+static inline int
+queue_index(uint1

[PATCH v3 07/12] baseband/acc: adding VRB2 device variant

2023-09-29 Thread Nicolas Chautru
No functionality is exposed yet, only device enumeration and
configuration.

Signed-off-by: Nicolas Chautru 
---
 doc/guides/bbdevs/features/vrb2.ini|  14 ++
 doc/guides/bbdevs/index.rst|   1 +
 doc/guides/bbdevs/vrb2.rst | 206 +
 doc/guides/rel_notes/release_23_11.rst |   3 +
 drivers/baseband/acc/rte_vrb_pmd.c | 156 +++
 drivers/baseband/acc/vrb2_pf_enum.h| 124 +++
 drivers/baseband/acc/vrb2_vf_enum.h| 121 +++
 drivers/baseband/acc/vrb_pmd.h | 161 ++-
 8 files changed, 751 insertions(+), 35 deletions(-)
 create mode 100644 doc/guides/bbdevs/features/vrb2.ini
 create mode 100644 doc/guides/bbdevs/vrb2.rst
 create mode 100644 drivers/baseband/acc/vrb2_pf_enum.h
 create mode 100644 drivers/baseband/acc/vrb2_vf_enum.h

diff --git a/doc/guides/bbdevs/features/vrb2.ini b/doc/guides/bbdevs/features/vrb2.ini
new file mode 100644
index 00..23ca6990b7
--- /dev/null
+++ b/doc/guides/bbdevs/features/vrb2.ini
@@ -0,0 +1,14 @@
+;
+; Supported features of the 'Intel vRAN Boost v2' baseband driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Turbo Decoder (4G) = Y
+Turbo Encoder (4G) = Y
+LDPC Decoder (5G)  = Y
+LDPC Encoder (5G)  = Y
+LLR/HARQ Compression   = Y
+FFT/SRS= Y
+External DDR Access= N
+HW Accelerated = Y
diff --git a/doc/guides/bbdevs/index.rst b/doc/guides/bbdevs/index.rst
index 77d4c54664..269157d77f 100644
--- a/doc/guides/bbdevs/index.rst
+++ b/doc/guides/bbdevs/index.rst
@@ -15,4 +15,5 @@ Baseband Device Drivers
 fpga_5gnr_fec
 acc100
 vrb1
+vrb2
 la12xx
diff --git a/doc/guides/bbdevs/vrb2.rst b/doc/guides/bbdevs/vrb2.rst
new file mode 100644
index 00..2a30002e05
--- /dev/null
+++ b/doc/guides/bbdevs/vrb2.rst
@@ -0,0 +1,206 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+Copyright(c) 2023 Intel Corporation
+
+.. include:: 
+
+Intel\ |reg| vRAN Boost v2 Poll Mode Driver (PMD)
+=================================================
+
+The Intel\ |reg| vRAN Boost integrated accelerator enables
+cost-effective 4G and 5G next-generation virtualized Radio Access Network (vRAN) solutions.
+The Intel vRAN Boost v2.0 (VRB2 in the code) is specifically integrated on the
+Intel\ |reg| Xeon\ |reg| Granite Rapids-D processor (GNR-D).
+
+Features
+========
+
+Intel vRAN Boost v2.0 includes a 5G Low Density Parity Check (LDPC) 
encoder/decoder,
+rate match/dematch, Hybrid Automatic Repeat Request (HARQ) with access to DDR
+memory for buffer management, a 4G Turbo encoder/decoder,
+a Fast Fourier Transform (FFT) block providing DFT/iDFT processing offload
+for the 5G Sounding Reference Signal (SRS), a MLD-TS accelerator, a Queue Manager (QMGR),
+and a DMA subsystem.
+There is no dedicated on-card memory for HARQ; the coherent memory on the CPU side is used instead.
+
+These hardware blocks provide the following features exposed by the PMD:
+
+- LDPC Encode in the Downlink (5GNR)
+- LDPC Decode in the Uplink (5GNR)
+- Turbo Encode in the Downlink (4G)
+- Turbo Decode in the Uplink (4G)
+- FFT processing
+- MLD-TS processing
+- Single Root I/O Virtualization (SR-IOV) with 16 Virtual Functions (VFs) per Physical Function (PF)
+- Maximum of 2048 queues per VF
+- Message Signaled Interrupts (MSIs)
+
+The Intel vRAN Boost v2.0 PMD supports the following bbdev capabilities:
+
+* For the LDPC encode operation:
+   - ``RTE_BBDEV_LDPC_CRC_24B_ATTACH``: set to attach CRC24B to CB(s).
+   - ``RTE_BBDEV_LDPC_RATE_MATCH``: if set then do not do Rate Match bypass.
+   - ``RTE_BBDEV_LDPC_INTERLEAVER_BYPASS``: if set then bypass interleaver.
+   - ``RTE_BBDEV_LDPC_ENC_SCATTER_GATHER``: supports scatter-gather for input/output data.
+   - ``RTE_BBDEV_LDPC_ENC_CONCATENATION``: concatenate code blocks with bit granularity.
+
+* For the LDPC decode operation:
+   - ``RTE_BBDEV_LDPC_CRC_TYPE_24B_CHECK``: check CRC24B from CB(s).
+   - ``RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP``: drops CRC24B bits appended while decoding.
+   - ``RTE_BBDEV_LDPC_CRC_TYPE_24A_CHECK``: check CRC24A from CB(s).
+   - ``RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK``: check CRC16 from CB(s).
+   - ``RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE``: provides an input for HARQ combining.
+   - ``RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE``: provides an output for HARQ combining.
+   - ``RTE_BBDEV_LDPC_ITERATION_STOP_ENABLE``: disable early termination.
+   - ``RTE_BBDEV_LDPC_DEC_SCATTER_GATHER``: supports scatter-gather for input/output data.
+   - ``RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION``: supports compression of the HARQ input/output.
+   - ``RTE_BBDEV_LDPC_LLR_COMPRESSION``: supports LLR input compression.
+   - ``RTE_BBDEV_LDPC_HARQ_4BIT_COMPRESSION``: supports compression of the HARQ input/output.
+   - ``RTE_BBDEV_LDPC_SOFT_OUT_ENABLE``: set the APP LLR soft output.
+   - ``RTE_BBDEV_LDPC_SOFT_OUT_RM_BYPASS``: set the APP LLR soft output after rate
[PATCH v3 09/12] baseband/acc: add FFT support to VRB2 variant

2023-09-29 Thread Nicolas Chautru
Support for the FFT processing specific to the
VRB2 variant.

Signed-off-by: Nicolas Chautru 
---
 drivers/baseband/acc/rte_vrb_pmd.c | 132 -
 1 file changed, 128 insertions(+), 4 deletions(-)

diff --git a/drivers/baseband/acc/rte_vrb_pmd.c b/drivers/baseband/acc/rte_vrb_pmd.c
index 93add82947..ce4b90d8e7 100644
--- a/drivers/baseband/acc/rte_vrb_pmd.c
+++ b/drivers/baseband/acc/rte_vrb_pmd.c
@@ -903,6 +903,9 @@ vrb_queue_setup(struct rte_bbdev *dev, uint16_t queue_id,
ACC_FCW_LD_BLEN : (conf->op_type == RTE_BBDEV_OP_FFT ? ACC_FCW_FFT_BLEN : ACC_FCW_MLDTS_BLEN);
 
+   if ((q->d->device_variant == VRB2_VARIANT) && (conf->op_type == RTE_BBDEV_OP_FFT))
+   fcw_len = ACC_FCW_FFT_BLEN_3;
+
for (desc_idx = 0; desc_idx < d->sw_ring_max_depth; desc_idx++) {
desc = q->ring_addr + desc_idx;
desc->req.word0 = ACC_DMA_DESC_TYPE;
@@ -1323,6 +1326,24 @@ vrb_dev_info_get(struct rte_bbdev *dev, struct rte_bbdev_driver_info *dev_info)
.num_buffers_soft_out = 0,
}
},
+   {
+   .type   = RTE_BBDEV_OP_FFT,
+   .cap.fft = {
+   .capability_flags =
+   RTE_BBDEV_FFT_WINDOWING |
+   RTE_BBDEV_FFT_CS_ADJUSTMENT |
+   RTE_BBDEV_FFT_DFT_BYPASS |
+   RTE_BBDEV_FFT_IDFT_BYPASS |
+   RTE_BBDEV_FFT_FP16_INPUT |
+   RTE_BBDEV_FFT_FP16_OUTPUT |
+   RTE_BBDEV_FFT_POWER_MEAS |
+   RTE_BBDEV_FFT_WINDOWING_BYPASS,
+   .num_buffers_src =
+   1,
+   .num_buffers_dst =
+   1,
+   }
+   },
RTE_BBDEV_END_OF_CAPABILITIES_LIST()
};
 
@@ -3849,6 +3870,47 @@ vrb1_fcw_fft_fill(struct rte_bbdev_fft_op *op, struct acc_fcw_fft *fcw)
fcw->bypass = 0;
 }
 
+/* Fill in a frame control word for FFT processing. */
+static inline void
+vrb2_fcw_fft_fill(struct rte_bbdev_fft_op *op, struct acc_fcw_fft_3 *fcw)
+{
+   fcw->in_frame_size = op->fft.input_sequence_size;
+   fcw->leading_pad_size = op->fft.input_leading_padding;
+   fcw->out_frame_size = op->fft.output_sequence_size;
+   fcw->leading_depad_size = op->fft.output_leading_depadding;
+   fcw->cs_window_sel = op->fft.window_index[0] +
+   (op->fft.window_index[1] << 8) +
+   (op->fft.window_index[2] << 16) +
+   (op->fft.window_index[3] << 24);
+   fcw->cs_window_sel2 = op->fft.window_index[4] +
+   (op->fft.window_index[5] << 8);
+   fcw->cs_enable_bmap = op->fft.cs_bitmap;
+   fcw->num_antennas = op->fft.num_antennas_log2;
+   fcw->idft_size = op->fft.idft_log2;
+   fcw->dft_size = op->fft.dft_log2;
+   fcw->cs_offset = op->fft.cs_time_adjustment;
+   fcw->idft_shift = op->fft.idft_shift;
+   fcw->dft_shift = op->fft.dft_shift;
+   fcw->cs_multiplier = op->fft.ncs_reciprocal;
+   fcw->power_shift = op->fft.power_shift;
+   fcw->exp_adj = op->fft.fp16_exp_adjust;
+   fcw->fp16_in = check_bit(op->fft.op_flags, RTE_BBDEV_FFT_FP16_INPUT);
+   fcw->fp16_out = check_bit(op->fft.op_flags, RTE_BBDEV_FFT_FP16_OUTPUT);
+   fcw->power_en = check_bit(op->fft.op_flags, RTE_BBDEV_FFT_POWER_MEAS);
+   if (check_bit(op->fft.op_flags,
+   RTE_BBDEV_FFT_IDFT_BYPASS)) {
+   if (check_bit(op->fft.op_flags,
+   RTE_BBDEV_FFT_WINDOWING_BYPASS))
+   fcw->bypass = 2;
+   else
+   fcw->bypass = 1;
+   } else if (check_bit(op->fft.op_flags,
+   RTE_BBDEV_FFT_DFT_BYPASS))
+   fcw->bypass = 3;
+   else
+   fcw->bypass = 0;
+}
+
 static inline int
 vrb1_dma_desc_fft_fill(struct rte_bbdev_fft_op *op,
struct acc_dma_req_desc *desc,
@@ -3882,6 +3944,58 @@ vrb1_dma_desc_fft_fill(struct rte_bbdev_fft_op *op,
return 0;
 }
 
+static inline int
+vrb2_dma_desc_fft_fill(struct rte_bbdev_fft_op *op,
+   struct acc_dma_req_desc *desc,
+   struct rte_mbuf *input, struct rte_mbuf *output, struct rte_mbuf *win_input,
+   struct rte_mbuf *pwr, uint32_t *in_offset, uint32_t *out_offset,
+   uint32_t *win_offset, uint32_t *pwr_offset)
+{
+   bool pwr_en = check_bit(op->fft.op_flags, RTE_BBDEV_FFT_POWER_MEAS);
+  

[PATCH v3 08/12] baseband/acc: add FEC capabilities for the VRB2 variant

2023-09-29 Thread Nicolas Chautru
New implementation for some of the FEC features
specific to the VRB2 variant.

Signed-off-by: Nicolas Chautru 
---
 drivers/baseband/acc/rte_vrb_pmd.c | 567 -
 1 file changed, 548 insertions(+), 19 deletions(-)

diff --git a/drivers/baseband/acc/rte_vrb_pmd.c b/drivers/baseband/acc/rte_vrb_pmd.c
index 48e779ce77..93add82947 100644
--- a/drivers/baseband/acc/rte_vrb_pmd.c
+++ b/drivers/baseband/acc/rte_vrb_pmd.c
@@ -1235,6 +1235,94 @@ vrb_dev_info_get(struct rte_bbdev *dev, struct rte_bbdev_driver_info *dev_info)
};
 
static const struct rte_bbdev_op_cap vrb2_bbdev_capabilities[] = {
+   {
+   .type = RTE_BBDEV_OP_TURBO_DEC,
+   .cap.turbo_dec = {
+   .capability_flags =
+   RTE_BBDEV_TURBO_SUBBLOCK_DEINTERLEAVE |
+   RTE_BBDEV_TURBO_CRC_TYPE_24B |
+   RTE_BBDEV_TURBO_DEC_CRC_24B_DROP |
+   RTE_BBDEV_TURBO_EQUALIZER |
+   RTE_BBDEV_TURBO_SOFT_OUT_SATURATE |
+   RTE_BBDEV_TURBO_HALF_ITERATION_EVEN |
+   RTE_BBDEV_TURBO_CONTINUE_CRC_MATCH |
+   RTE_BBDEV_TURBO_SOFT_OUTPUT |
+   RTE_BBDEV_TURBO_EARLY_TERMINATION |
+   RTE_BBDEV_TURBO_DEC_INTERRUPTS |
+   RTE_BBDEV_TURBO_NEG_LLR_1_BIT_IN |
+   RTE_BBDEV_TURBO_NEG_LLR_1_BIT_SOFT_OUT |
+   RTE_BBDEV_TURBO_MAP_DEC |
+   RTE_BBDEV_TURBO_DEC_TB_CRC_24B_KEEP |
+   RTE_BBDEV_TURBO_DEC_SCATTER_GATHER,
+   .max_llr_modulus = INT8_MAX,
+   .num_buffers_src =
+   RTE_BBDEV_TURBO_MAX_CODE_BLOCKS,
+   .num_buffers_hard_out =
+   RTE_BBDEV_TURBO_MAX_CODE_BLOCKS,
+   .num_buffers_soft_out =
+   RTE_BBDEV_TURBO_MAX_CODE_BLOCKS,
+   }
+   },
+   {
+   .type = RTE_BBDEV_OP_TURBO_ENC,
+   .cap.turbo_enc = {
+   .capability_flags =
+   RTE_BBDEV_TURBO_CRC_24B_ATTACH |
+   RTE_BBDEV_TURBO_RV_INDEX_BYPASS |
+   RTE_BBDEV_TURBO_RATE_MATCH |
+   RTE_BBDEV_TURBO_ENC_INTERRUPTS |
+   RTE_BBDEV_TURBO_ENC_SCATTER_GATHER,
+   .num_buffers_src =
+   RTE_BBDEV_TURBO_MAX_CODE_BLOCKS,
+   .num_buffers_dst =
+   RTE_BBDEV_TURBO_MAX_CODE_BLOCKS,
+   }
+   },
+   {
+   .type   = RTE_BBDEV_OP_LDPC_ENC,
+   .cap.ldpc_enc = {
+   .capability_flags =
+   RTE_BBDEV_LDPC_RATE_MATCH |
+   RTE_BBDEV_LDPC_CRC_24B_ATTACH |
+   RTE_BBDEV_LDPC_INTERLEAVER_BYPASS |
+   RTE_BBDEV_LDPC_ENC_INTERRUPTS |
+   RTE_BBDEV_LDPC_ENC_SCATTER_GATHER |
+   RTE_BBDEV_LDPC_ENC_CONCATENATION,
+   .num_buffers_src =
+   RTE_BBDEV_LDPC_MAX_CODE_BLOCKS,
+   .num_buffers_dst =
+   RTE_BBDEV_LDPC_MAX_CODE_BLOCKS,
+   }
+   },
+   {
+   .type   = RTE_BBDEV_OP_LDPC_DEC,
+   .cap.ldpc_dec = {
+   .capability_flags =
+   RTE_BBDEV_LDPC_CRC_TYPE_24B_CHECK |
+   RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP |
+   RTE_BBDEV_LDPC_CRC_TYPE_24A_CHECK |
+   RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK |
+   RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE |
+   RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE |
+   RTE_BBDEV_LDPC_ITERATION_STOP_ENABLE |
+   RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS |
+   RTE_BBDEV_LDPC_DEC_SCATTER_GATHER |
+   

[PATCH v3 10/12] baseband/acc: add MLD support in VRB2 variant

2023-09-29 Thread Nicolas Chautru
Adding the capability for the MLD-TS processing specific to
the VRB2 variant.

Signed-off-by: Nicolas Chautru 
---
 drivers/baseband/acc/rte_vrb_pmd.c | 378 +
 1 file changed, 378 insertions(+)

diff --git a/drivers/baseband/acc/rte_vrb_pmd.c b/drivers/baseband/acc/rte_vrb_pmd.c
index ce4b90d8e7..a9d3db86e6 100644
--- a/drivers/baseband/acc/rte_vrb_pmd.c
+++ b/drivers/baseband/acc/rte_vrb_pmd.c
@@ -1344,6 +1344,17 @@ vrb_dev_info_get(struct rte_bbdev *dev, struct rte_bbdev_driver_info *dev_info)
1,
}
},
+   {
+   .type   = RTE_BBDEV_OP_MLDTS,
+   .cap.mld = {
+   .capability_flags =
+   RTE_BBDEV_MLDTS_REP,
+   .num_buffers_src =
+   1,
+   .num_buffers_dst =
+   1,
+   }
+   },
RTE_BBDEV_END_OF_CAPABILITIES_LIST()
};
 
@@ -4151,6 +4162,371 @@ vrb_dequeue_fft(struct rte_bbdev_queue_data *q_data,
return i;
 }
 
+/* Fill in a frame control word for MLD-TS processing. */
+static inline void
+vrb2_fcw_mldts_fill(struct rte_bbdev_mldts_op *op, struct acc_fcw_mldts *fcw)
+{
+   fcw->nrb = op->mldts.num_rbs;
+   fcw->NLayers = op->mldts.num_layers - 1;
+   fcw->Qmod0 = (op->mldts.q_m[0] >> 1) - 1;
+   fcw->Qmod1 = (op->mldts.q_m[1] >> 1) - 1;
+   fcw->Qmod2 = (op->mldts.q_m[2] >> 1) - 1;
+   fcw->Qmod3 = (op->mldts.q_m[3] >> 1) - 1;
+   /* Mark some layers as disabled */
+   if (op->mldts.num_layers == 2) {
+   fcw->Qmod2 = 3;
+   fcw->Qmod3 = 3;
+   }
+   if (op->mldts.num_layers == 3)
+   fcw->Qmod3 = 3;
+   fcw->Rrep = op->mldts.r_rep;
+   fcw->Crep = op->mldts.c_rep;
+}
+
+/* Fill in descriptor for one MLD-TS processing operation. */
+static inline int
+vrb2_dma_desc_mldts_fill(struct rte_bbdev_mldts_op *op,
+   struct acc_dma_req_desc *desc,
+   struct rte_mbuf *input_q, struct rte_mbuf *input_r,
+   struct rte_mbuf *output,
+   uint32_t *in_offset, uint32_t *out_offset)
+{
+   uint16_t qsize_per_re[VRB2_MLD_LAY_SIZE] = {8, 12, 16}; /* Layer 2 to 4. */
+   uint16_t rsize_per_re[VRB2_MLD_LAY_SIZE] = {14, 26, 42};
+   uint16_t sc_factor_per_rrep[VRB2_MLD_RREP_SIZE] = {12, 6, 4, 3, 0, 2};
+   uint16_t i, outsize_per_re = 0;
+   uint32_t sc_num, r_num, q_size, r_size, out_size;
+
+   /* Prevent out of range access. */
+   if (op->mldts.r_rep > 5)
+   op->mldts.r_rep = 5;
+   if (op->mldts.num_layers < 2)
+   op->mldts.num_layers = 2;
+   if (op->mldts.num_layers > 4)
+   op->mldts.num_layers = 4;
+   for (i = 0; i < op->mldts.num_layers; i++)
+   outsize_per_re += op->mldts.q_m[i];
+   sc_num = op->mldts.num_rbs * RTE_BBDEV_SCPERRB * (op->mldts.c_rep + 1);
+   r_num = op->mldts.num_rbs * sc_factor_per_rrep[op->mldts.r_rep];
+   q_size = qsize_per_re[op->mldts.num_layers - 2] * sc_num;
+   r_size = rsize_per_re[op->mldts.num_layers - 2] * r_num;
+   out_size =  sc_num * outsize_per_re;
+   /* printf("Sc %d R num %d Size %d %d %d\n", sc_num, r_num, q_size, r_size, out_size); */
+
+   /* FCW already done. */
+   acc_header_init(desc);
+   desc->data_ptrs[1].address = rte_pktmbuf_iova_offset(input_q, *in_offset);
+   desc->data_ptrs[1].blen = q_size;
+   desc->data_ptrs[1].blkid = ACC_DMA_BLKID_IN;
+   desc->data_ptrs[1].last = 0;
+   desc->data_ptrs[1].dma_ext = 0;
+   desc->data_ptrs[2].address = rte_pktmbuf_iova_offset(input_r, *in_offset);
+   desc->data_ptrs[2].blen = r_size;
+   desc->data_ptrs[2].blkid = ACC_DMA_BLKID_IN_MLD_R;
+   desc->data_ptrs[2].last = 1;
+   desc->data_ptrs[2].dma_ext = 0;
+   desc->data_ptrs[3].address = rte_pktmbuf_iova_offset(output, *out_offset);
+   desc->data_ptrs[3].blen = out_size;
+   desc->data_ptrs[3].blkid = ACC_DMA_BLKID_OUT_HARD;
+   desc->data_ptrs[3].last = 1;
+   desc->data_ptrs[3].dma_ext = 0;
+   desc->m2dlen = 3;
+   desc->d2mlen = 1;
+   desc->op_addr = op;
+   desc->cbs_in_tb = 1;
+
+   return 0;
+}
+
+/* Check whether the MLD operation can be processed as a single operation. */
+static inline bool
+vrb2_check_mld_r_constraint(struct rte_bbdev_mldts_op *op) {
+   uint8_t layer_idx, rrep_idx;
+   uint16_t max_rb[VRB2_MLD_LAY_SIZE][VRB2_MLD_RREP_SIZE] = {
+   {188, 275, 275, 275, 0, 275},
+   {101, 202, 275, 275, 0, 275},
+   {62, 124, 186, 248, 0, 275} };
+
+   if (op->mldts.c_rep == 0)
+   return true;
+
+ 
