RE: [PATCH v2] eventdev/eth_rx: update adapter create APIs

2023-08-10 Thread Naga Harish K, S V
Hi Jerin,
 As per the DPDK guidelines, API changes or ABI breakage are allowed during
LTS releases
(https://doc.dpdk.org/guides/contributing/abi_policy.html#abi-breakages).

Also, there are previous instances where API changes happened; some of them are
mentioned below.

In DPDK 22.11, the cryptodev library underwent the following API changes:
* rte_cryptodev_sym_session_create() and rte_cryptodev_asym_session_create()
  API parameters changed.
* rte_cryptodev_sym_session_free() and rte_cryptodev_asym_session_free()
  API parameters changed.
* rte_cryptodev_sym_session_init() and rte_cryptodev_asym_session_init()
  APIs were removed.
 
* eventdev: The function ``rte_event_crypto_adapter_queue_pair_add`` was updated
   to accept configuration of type ``rte_event_crypto_adapter_queue_conf``
   instead of ``rte_event``,
   similar to ``rte_event_eth_rx_adapter_queue_add`` signature.
   Event will be one of the configuration fields,
   together with additional vector parameters.

 Applications have to change to accommodate the above API changes.

As discussed earlier, fewer adapter-create APIs are useful for the application 
design.
Please let us know your thoughts on the same.
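
For illustration, a rough sketch of how an existing caller would look under
the proposed prototype (the extra argument and its NULL behaviour are taken
from this proposal, not merged API):

#include <rte_event_eth_rx_adapter.h>

/* Sketch only: the 4th argument (struct rte_event_eth_rx_adapter_params *)
 * is the addition proposed here; passing NULL keeps today's default
 * behaviour for applications that do not need the extra parameters.
 */
static int
create_rx_adapter(uint8_t id, uint8_t dev_id,
                  struct rte_event_port_conf *port_conf)
{
        return rte_event_eth_rx_adapter_create(id, dev_id, port_conf, NULL);
}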

-Harish

> -Original Message-
> From: Jerin Jacob 
> Sent: Wednesday, August 2, 2023 9:42 PM
> To: Naga Harish K, S V 
> Cc: dev@dpdk.org; Jayatheerthan, Jay 
> Subject: Re: [PATCH v2] eventdev/eth_rx: update adapter create APIs
> 
> On Wed, Aug 2, 2023 at 7:58 PM Naga Harish K, S V
>  wrote:
> >
> > Hi Jerin,
> 
> 
> Hi Harish,
> 
> >
> > The API “rte_event_eth_rx_adapter_create_with_params()” is an extension to
> rte_event_eth_rx_adapter_create() with an additional adapter configuration
> params structure.
> > There is no equivalent API existing today for the
> “rte_event_eth_rx_adapter_create_ext()” API which takes additional adapter
> params.
> > There are use cases where a create_ext() version of the create API with
> > additional parameters is needed. We may need one more adapter create API
> > for this.
> > That would make too many adapter create APIs (four in total) and would be
> > confusing for the user.
> >
> > That's why we proposed the following changes to the Rx adapter create
> > APIs, which consolidate the create APIs to two, covering all possible
> > combinations.
> > The applications that are currently using
> > rte_event_eth_rx_adapter_create_ext() or rte_event_eth_rx_adapter_create()
> > for creating the Rx adapter can pass NULL for the newly added argument,
> > which will behave the same as before.
> >
> > Trying to understand what the concerns are, from your perspective, with
> > this consolidated API approach.
> 
> If a single application code base needs to support both versions of DPDK,
> it needs #ifdef clutter based on a DPDK version check, since we are changing
> the function prototype.
> IMO, we should change an API prototype only as a last resort. It is quite
> common to have two API versions of a single operation, one with more
> specialized parameters.
> 
> 
> 
> >
> > -Harish
> >
> > > -Original Message-
> > > From: Jerin Jacob 
> > > Sent: Tuesday, August 1, 2023 8:54 PM
> > > To: Naga Harish K, S V 
> > > Cc: dev@dpdk.org; Jayatheerthan, Jay 
> > > Subject: Re: [PATCH v2] eventdev/eth_rx: update adapter create APIs
> > >
> > > On Tue, Aug 1, 2023 at 7:22 PM Naga Harish K S V
> > >  wrote:
> > > >
> > > > The adapter create APIs such as
> > > > rte_event_eth_rx_adapter_create_ext()
> > > > and
> > > > rte_event_eth_rx_adapter_create() are updated to take additional
> > > > argument as a pointer of type struct rte_event_eth_rx_adapter_params.
> > > >
> > > > The API rte_event_eth_rx_adapter_create_with_params() is deprecated.
> > > >
> > > > Signed-off-by: Naga Harish K S V 
> > >
> > > Please check the v1 comment
> > > http://mails.dpdk.org/archives/dev/2023-August/273602.html
> > >
> > > > ---
> > > > v2:
> > > > *  Fix doxygen compile issue and warning
> > > > ---
> > > > ---
> > > >  app/test-eventdev/test_perf_common.c  |   2 +-
> > > >  app/test-eventdev/test_pipeline_common.c  |   2 +-
> > > >  app/test/test_event_eth_rx_adapter.c  |  22 ++--
> > > >  app/test/test_security_inline_proto.c |   2 +-
> > > >  .../pipeline_worker_generic.c |   2 +-
> > > >  .../eventdev_pipeline/pipeline_worker_tx.c|   2 +-
> > > >  examples/ipsec-secgw/event_helper.c   |   2 +-
> > > >  examples/l2fwd-event/l2fwd_event_generic.c|   2 +-
> > > >  .../l2fwd-event/l2fwd_event_internal_port.c   |   2 +-
> > > >  examples/l3fwd/l3fwd_event_generic.c  |   2 +-
> > > >  examples/l3fwd/l3fwd_event_internal_port.c|   2 +-
> > > >  lib/eventdev/rte_event_eth_rx_adapter.c   | 100 --
> > > >  lib/eventdev/rte_event_eth_rx_adapter.h   |  36 ++-
> > > >  lib/eventdev/version.map  |   1 -
> > > >  14 files changed, 74 insertions(+), 105 deletions(-)
> > > >
> > > > diff --git a/app/test-eventdev/test_perf_commo

Re: [PATCH v2] eventdev/eth_rx: update adapter create APIs

2023-08-10 Thread Jerin Jacob
On Thu, Aug 10, 2023 at 1:09 PM Naga Harish K, S V
 wrote:
>
> Hi Jerin,
>  As per DPDK Guidelines, API changes or ABI breakage is allowed during 
> LTS releases
>  
> (https://doc.dpdk.org/guides/contributing/abi_policy.html#abi-breakages)

Yes, provided the deprecation notice has been sent and approved, and the
changes are absolutely needed.

>
> Also, there are previous instances where API changes happened, some of them 
> are mentioned below.

These are not cases where existing APIs were removed and another prototype
was changed to cover up the removed function.

>
>In DPDK 22.11, the cryptodev library had undergone the following API 
> changes.
> * rte_cryptodev_sym_session_create() and rte_cryptodev_asym_session_create() 
> API parameters changed.
>rte_cryptodev_sym_session_free() and rte_cryptodev_asym_session_free() API 
> parameters changed.
>rte_cryptodev_sym_session_init() and rte_cryptodev_asym_session_init() 
> APIs are removed.
>
> * eventdev: The function ``rte_event_crypto_adapter_queue_pair_add`` was 
> updated
>to accept configuration of type ``rte_event_crypto_adapter_queue_conf``
>instead of ``rte_event``,
>similar to ``rte_event_eth_rx_adapter_queue_add`` signature.
>Event will be one of the configuration fields,
>together with additional vector parameters.
>
>  Applications have to change to accommodate the above API changes.
>
> As discussed earlier, fewer adapter-create APIs are useful for the 
> application design.
> Please let us know your thoughts on the same.


mempool has different variants of its create API. IMO, different variants
of a _create API are OK, and the application can pick the correct one based
on its needs.
It is OK to break an API prototype if absolutely needed; in this case it is
not.
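
To illustrate the version-check clutter mentioned earlier in this thread, an
application that must build against both releases would need something like
the sketch below (the four-argument form is the proposed, not merged,
prototype, and the 23.11 boundary is only assumed for the example):

#include <rte_version.h>
#include <rte_event_eth_rx_adapter.h>

static int
app_create_rx_adapter(uint8_t id, uint8_t dev_id,
                      struct rte_event_port_conf *port_conf)
{
#if RTE_VERSION >= RTE_VERSION_NUM(23, 11, 0, 0)
        /* hypothetical changed prototype with the extra params argument */
        return rte_event_eth_rx_adapter_create(id, dev_id, port_conf, NULL);
#else
        /* current prototype */
        return rte_event_eth_rx_adapter_create(id, dev_id, port_conf);
#endif
}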


Re: [PATCH 04/15] eal: make rte_version_XXX API's stable

2023-08-10 Thread Bruce Richardson
On Wed, Aug 09, 2023 at 09:42:56AM -0700, Stephen Hemminger wrote:
> The subparts of rte_version were added in 2020 and
> can now be marked stable.
> 
> Signed-off-by: Stephen Hemminger 
> ---
Acked-by: Bruce Richardson 


RE: [PATCH 06/15] eal: make rte_service_lcore_may_be_active stable

2023-08-10 Thread Van Haaren, Harry
> -Original Message-
> From: Stephen Hemminger 
> Sent: Wednesday, August 9, 2023 5:43 PM
> To: dev@dpdk.org
> Cc: Stephen Hemminger ; Van Haaren, Harry
> 
> Subject: [PATCH 06/15] eal: make rte_service_lcore_may_be_active stable
> 
> This API was added in 2020.
> 
> Signed-off-by: Stephen Hemminger 

It looks like parts of the "rte_drand" patch accidentally got into the
service-cores patch. With that minor update addressed;

Acked-by: Harry van Haaren 


Below are the diffs that belong to the 5/15 rte_drand patch.

> diff --git a/lib/eal/version.map b/lib/eal/version.map
> index e6d2fda95770..2e50d6857d26 100644
> --- a/lib/eal/version.map
> +++ b/lib/eal/version.map
> @@ -58,6 +58,7 @@ DPDK_24 {
>   rte_devargs_parsef;
>   rte_devargs_remove;
>   rte_devargs_type_count;
> + rte_drand;
>   rte_driver_name;
>   rte_dump_physmem_layout;
>   rte_dump_stack;



> @@ -400,7 +401,6 @@ EXPERIMENTAL {
>   rte_intr_type_set;
> 
>   # added in 22.07
> - rte_drand;
>   rte_thread_get_affinity_by_id;
>   rte_thread_get_priority;
>   rte_thread_self;
> --
> 2.39.2



RE: [RFC PATCH] dmadev: offload to free source buffer

2023-08-10 Thread Morten Brørup
> From: Amit Prakash Shukla [mailto:amitpraka...@marvell.com]
> Sent: Wednesday, 9 August 2023 20.12
> 
> > From: Morten Brørup 
> > Sent: Wednesday, August 9, 2023 8:19 PM
> >
> > > From: Amit Prakash Shukla [mailto:amitpraka...@marvell.com]
> > > Sent: Wednesday, 9 August 2023 16.27
> > >
> > > > From: Morten Brørup 
> > > > Sent: Wednesday, August 9, 2023 2:37 PM
> > > >
> > > > > From: Amit Prakash Shukla [mailto:amitpraka...@marvell.com]
> > > > > Sent: Wednesday, 9 August 2023 08.09
> > > > >
> > > > > This changeset adds support in DMA library to free source DMA
> > > > > buffer by hardware. On a supported hardware, application can pass
> > > > > on the mempool information as part of vchan config when the DMA
> > > > > transfer direction is configured as RTE_DMA_DIR_MEM_TO_DEV.
> > > >
> > > > Isn't the DMA source buffer a memory area, and what needs to be
> > > > freed
> > > is
> > > > the mbuf holding the memory area, i.e. two different pointers?
> > > No, it is same pointer. Assume mbuf created via mempool, mempool needs
> > > to be given via vchan config and iova passed to
> > > rte_dma_copy/rte_dma_copy_sg's can be any address in mbuf area of
> > > given mempool element.
> > > For example, mempool element size is S. dequeued buff from mempool is
> > > at X. Any address in (X, X+S) can be given as iova to rte_dma_copy.
> >
> > So the DMA library determines the pointer to the mbuf (in the given
> > mempool) by looking at the iova passed to rte_dma_copy/rte_dma_copy_sg,
> > and then calls rte_mempool_put with that pointer?
> 
> No. DMA hardware would determine the pointer to the mbuf using iova address
> and mempool. Hardware will free the buffer, on completion of data transfer.

OK. If there are any requirements on the mempool, they need to be documented in
the source code comments. E.g. does it work with mempools where the mempool
driver is an MP_RTS/MC_RTS ring, or a stack?

> 
> >
> > >
> > > >
> > > > I like the concept. Something similar might also be useful for
> > > > RTE_DMA_DIR_MEM_TO_MEM, e.g. packet capture. Although such a use
> > > > case might require decrementing the mbuf refcount instead of freeing
> > > the
> > > > mbuf directly to the mempool.
> > > This operation is not supported in our hardware. It can be implemented
> > > in future if any hardware supports it.
> >
> > OK, I didn't expect that - just floating the idea. :-)
> >
> > >
> > > >
> > > > PS: It has been a while since I looked at the DMA library, so ignore
> > > > my comments if I got this wrong.



[v1 0/6] cryptodev: support digest message in SM2

2023-08-10 Thread Gowrishankar Muthukrishnan
This patch series fixes the SM2 algorithm implementation to support a digest
message as input, in addition to the plain message supported today.

Gowrishankar Muthukrishnan (6):
  crypto/openssl: include SM2 in asymmetric capabilities
  cryptodev: add RNG capability in EC based xform
  cryptodev: add hash support in asymmetric capability
  cryptodev: use generic EC xform params for SM2
  app/test: check asymmetric capabilities in SM2 test
  crypto/cnxk: add SM2 support

 app/test/test_cryptodev_asym.c| 131 ++
 app/test/test_cryptodev_sm2_test_vectors.h|  32 ++-
 doc/guides/cryptodevs/features/cn10k.ini  |   1 +
 doc/guides/rel_notes/release_23_11.rst|   6 +
 drivers/common/cnxk/hw/cpt.h  |   3 +-
 drivers/common/cnxk/roc_ae.c  |  32 ++-
 drivers/common/cnxk/roc_ae.h  |   3 +-
 drivers/common/cnxk/roc_ae_fpm_tables.c   | 190 ++
 drivers/crypto/cnxk/cnxk_ae.h | 232 +-
 drivers/crypto/cnxk/cnxk_cryptodev.h  |   2 +-
 .../crypto/cnxk/cnxk_cryptodev_capabilities.c |  17 ++
 drivers/crypto/openssl/rte_openssl_pmd_ops.c  |  19 +-
 lib/cryptodev/cryptodev_trace.h   |   9 +
 lib/cryptodev/cryptodev_trace_points.c|   3 +
 lib/cryptodev/rte_crypto_asym.h   |  15 +-
 lib/cryptodev/rte_cryptodev.c |  16 ++
 lib/cryptodev/rte_cryptodev.h |  25 ++
 lib/cryptodev/version.map |   1 +
 18 files changed, 666 insertions(+), 71 deletions(-)

-- 
2.25.1



[v1 2/6] cryptodev: add RNG capability in EC based xform

2023-08-10 Thread Gowrishankar Muthukrishnan
Elliptic curve based asymmetric operations use a cryptographically
secure random number in their computation. If the PMD supports an RNG
for such ops, the application can skip computing it on its own.
This patch adds a new field in the asymmetric capability to declare
this capability.
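
A minimal usage sketch, reusing the capability lookup pattern from the test
code in this series (dev_id, asym_op and the application-supplied random
value are assumed context):

        struct rte_cryptodev_asym_capability_idx idx;
        const struct rte_cryptodev_asymmetric_xform_capability *capa;

        idx.type = RTE_CRYPTO_ASYM_XFORM_SM2;
        capa = rte_cryptodev_asym_capability_get(dev_id, &idx);
        if (capa != NULL && capa->internal_rng != 0) {
                /* PMD generates the random number internally */
                asym_op->sm2.k.data = NULL;
                asym_op->sm2.k.length = 0;
        } else {
                /* application must supply the random value itself */
                asym_op->sm2.k.data = app_random;      /* assumed buffer */
                asym_op->sm2.k.length = app_random_len;
        }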

Signed-off-by: Gowrishankar Muthukrishnan 
---
 drivers/crypto/openssl/rte_openssl_pmd_ops.c | 2 ++
 lib/cryptodev/rte_cryptodev.h| 6 ++
 2 files changed, 8 insertions(+)

diff --git a/drivers/crypto/openssl/rte_openssl_pmd_ops.c 
b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
index 2eb450fcfd..0f88669f41 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd_ops.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
@@ -603,6 +603,8 @@ static const struct rte_cryptodev_capabilities 
openssl_pmd_capabilities[] = {
 (1 << RTE_CRYPTO_ASYM_OP_VERIFY) |
 (1 << RTE_CRYPTO_ASYM_OP_ENCRYPT) |
 (1 << RTE_CRYPTO_ASYM_OP_DECRYPT)),
+   {.internal_rng = 1
+   }
}
}
}
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index ba730373fb..64810c9ec4 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -182,6 +182,12 @@ struct rte_cryptodev_asymmetric_xform_capability {
/**< Range of modulus length supported by modulus based xform.
 * Value 0 mean implementation default
 */
+
+   uint8_t internal_rng;
+   /**< Availability of random number generator for Elliptic curve 
based xform.
+* Value 0 means unavailable, and application should pass the 
required
+* random value. Otherwise, PMD would internally compute the 
random number.
+*/
};
 };
 
-- 
2.25.1



[v1 1/6] crypto/openssl: include SM2 in asymmetric capabilities

2023-08-10 Thread Gowrishankar Muthukrishnan
Include SM2 algorithm in the asymmetric capabilities supported
by OpenSSL PMD.

Fixes: 3b7d638fb11f ("crypto/openssl: support asymmetric SM2")

Signed-off-by: Gowrishankar Muthukrishnan 
---
 drivers/crypto/openssl/rte_openssl_pmd_ops.c | 14 ++
 1 file changed, 14 insertions(+)

diff --git a/drivers/crypto/openssl/rte_openssl_pmd_ops.c 
b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
index 85a4fa3e55..2eb450fcfd 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd_ops.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
@@ -593,6 +593,20 @@ static const struct rte_cryptodev_capabilities 
openssl_pmd_capabilities[] = {
},
}
},
+   {   /* SM2 */
+   .op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,
+   {.asym = {
+   .xform_capa = {
+   .xform_type = RTE_CRYPTO_ASYM_XFORM_SM2,
+   .op_types =
+   ((1<

[v1 3/6] cryptodev: add hash support in asymmetric capability

2023-08-10 Thread Gowrishankar Muthukrishnan
Most of the asymmetric operations start with a hash of the input.
Add a new field in the asymmetric capability to declare the hash
operations that the PMD supports for asymmetric operations.
The application can skip computing the hash if the PMD already
supports it.
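
A minimal usage sketch (it uses the generic xform.ec.hash field introduced
later in this series; capa is the asymmetric capability already fetched by
the application):

        if (rte_cryptodev_asym_xform_capability_check_hash(capa,
                        RTE_CRYPTO_AUTH_SM3))
                xform.ec.hash = RTE_CRYPTO_AUTH_SM3;  /* PMD hashes message */
        else
                xform.ec.hash = RTE_CRYPTO_AUTH_NULL; /* app passes a digest */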

Signed-off-by: Gowrishankar Muthukrishnan 
---
 drivers/crypto/openssl/rte_openssl_pmd_ops.c |  1 +
 lib/cryptodev/cryptodev_trace.h  |  9 +
 lib/cryptodev/cryptodev_trace_points.c   |  3 +++
 lib/cryptodev/rte_crypto_asym.h  |  3 +++
 lib/cryptodev/rte_cryptodev.c| 16 
 lib/cryptodev/rte_cryptodev.h| 19 +++
 lib/cryptodev/version.map|  1 +
 7 files changed, 52 insertions(+)

diff --git a/drivers/crypto/openssl/rte_openssl_pmd_ops.c 
b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
index 0f88669f41..0b3601db40 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd_ops.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
@@ -598,6 +598,7 @@ static const struct rte_cryptodev_capabilities 
openssl_pmd_capabilities[] = {
{.asym = {
.xform_capa = {
.xform_type = RTE_CRYPTO_ASYM_XFORM_SM2,
+   .hash_algos = (1 << RTE_CRYPTO_AUTH_SM3),
.op_types =
((1hash_algos, hash, ret);
+
+   return ret;
+}
+
 /* spinlock for crypto device enq callbacks */
 static rte_spinlock_t rte_cryptodev_callback_lock = RTE_SPINLOCK_INITIALIZER;
 
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index 64810c9ec4..536e082244 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -189,6 +189,9 @@ struct rte_cryptodev_asymmetric_xform_capability {
 * random value. Otherwise, PMD would internally compute the 
random number.
 */
};
+
+   uint64_t hash_algos;
+   /**< Bitmask of hash algorithms supported for op_type. */
 };
 
 /**
@@ -348,6 +351,22 @@ rte_cryptodev_asym_xform_capability_check_modlen(
const struct rte_cryptodev_asymmetric_xform_capability *capability,
uint16_t modlen);
 
+/**
+ * Check if hash algorithm is supported.
+ *
+ * @param  capability  Asymmetric crypto capability.
+ * @param  hashHash algorithm.
+ *
+ * @return
+ *   - Return true if the hash algorithm is supported.
+ *   - Return false if the hash algorithm is not supported.
+ */
+__rte_experimental
+bool
+rte_cryptodev_asym_xform_capability_check_hash(
+   const struct rte_cryptodev_asymmetric_xform_capability *capability,
+   enum rte_crypto_auth_algorithm hash);
+
 /**
  * Provide the cipher algorithm enum, given an algorithm string
  *
diff --git a/lib/cryptodev/version.map b/lib/cryptodev/version.map
index ae8d9327b4..3c2d1780e0 100644
--- a/lib/cryptodev/version.map
+++ b/lib/cryptodev/version.map
@@ -54,6 +54,7 @@ EXPERIMENTAL {
rte_cryptodev_asym_get_xform_enum;
rte_cryptodev_asym_session_create;
rte_cryptodev_asym_session_free;
+   rte_cryptodev_asym_xform_capability_check_hash;
rte_cryptodev_asym_xform_capability_check_modlen;
rte_cryptodev_asym_xform_capability_check_optype;
rte_cryptodev_sym_cpu_crypto_process;
-- 
2.25.1



[v1 4/6] cryptodev: use generic EC xform params for SM2

2023-08-10 Thread Gowrishankar Muthukrishnan
Now, the generic EC xform parameters include a hash algorithm field.
Hence, the SM2 curve can use this generic struct for setting the hash
algorithm, which also requires the SM2 curve ID to be enumerated
along with the other curves, as listed in:
https://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml

Signed-off-by: Gowrishankar Muthukrishnan 
---
 app/test/test_cryptodev_asym.c   | 12 
 app/test/test_cryptodev_sm2_test_vectors.h   |  4 +++-
 doc/guides/rel_notes/release_23_11.rst   |  2 ++
 drivers/crypto/openssl/rte_openssl_pmd_ops.c |  2 +-
 lib/cryptodev/rte_crypto_asym.h  | 16 ++--
 5 files changed, 16 insertions(+), 20 deletions(-)

diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index 0ef2642fdd..b08772a9bf 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -1838,7 +1838,8 @@ _test_sm2_sign(bool rnd_secret)
/* Setup asym xform */
xform.next = NULL;
xform.xform_type = RTE_CRYPTO_ASYM_XFORM_SM2;
-   xform.sm2.hash = RTE_CRYPTO_AUTH_SM3;
+   xform.ec.curve_id = input_params.curve;
+   xform.ec.hash = RTE_CRYPTO_AUTH_SM3;
 
ret = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool, 
&sess);
if (ret < 0) {
@@ -2019,7 +2020,8 @@ test_sm2_verify(void)
/* Setup asym xform */
xform.next = NULL;
xform.xform_type = RTE_CRYPTO_ASYM_XFORM_SM2;
-   xform.sm2.hash = RTE_CRYPTO_AUTH_SM3;
+   xform.ec.curve_id = input_params.curve;
+   xform.ec.hash = RTE_CRYPTO_AUTH_SM3;
 
ret = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool, 
&sess);
if (ret < 0) {
@@ -2120,7 +2122,8 @@ _test_sm2_enc(bool rnd_secret)
/* Setup asym xform */
xform.next = NULL;
xform.xform_type = RTE_CRYPTO_ASYM_XFORM_SM2;
-   xform.sm2.hash = RTE_CRYPTO_AUTH_SM3;
+   xform.ec.curve_id = input_params.curve;
+   xform.ec.hash = RTE_CRYPTO_AUTH_SM3;
 
ret = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool, 
&sess);
if (ret < 0) {
@@ -2299,7 +2302,8 @@ test_sm2_dec(void)
/* Setup asym xform */
xform.next = NULL;
xform.xform_type = RTE_CRYPTO_ASYM_XFORM_SM2;
-   xform.sm2.hash = RTE_CRYPTO_AUTH_SM3;
+   xform.ec.curve_id = input_params.curve;
+   xform.ec.hash = RTE_CRYPTO_AUTH_SM3;
 
ret = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool, 
&sess);
if (ret < 0) {
diff --git a/app/test/test_cryptodev_sm2_test_vectors.h 
b/app/test/test_cryptodev_sm2_test_vectors.h
index 7a4ce70c10..3d2dba1359 100644
--- a/app/test/test_cryptodev_sm2_test_vectors.h
+++ b/app/test/test_cryptodev_sm2_test_vectors.h
@@ -17,6 +17,7 @@ struct crypto_testsuite_sm2_params {
rte_crypto_param id;
rte_crypto_param cipher;
rte_crypto_param message;
+   int curve;
 };
 
 static uint8_t fp256_pkey[] = {
@@ -123,7 +124,8 @@ struct crypto_testsuite_sm2_params sm2_param_fp256 = {
.cipher = {
.data = fp256_cipher,
.length = sizeof(fp256_cipher),
-   }
+   },
+   .curve = RTE_CRYPTO_EC_GROUP_SM2
 };
 
 #endif /* __TEST_CRYPTODEV_SM2_TEST_VECTORS_H__ */
diff --git a/doc/guides/rel_notes/release_23_11.rst 
b/doc/guides/rel_notes/release_23_11.rst
index 4411bb32c1..23c89e8ea9 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -91,6 +91,8 @@ Removed Items
 
 * kni: Removed the Kernel Network Interface (KNI) library and driver.
 
+* crypto: Removed SM2 xform parameter in asymmetric xform.
+
 
 API Changes
 ---
diff --git a/drivers/crypto/openssl/rte_openssl_pmd_ops.c 
b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
index 0b3601db40..e521c0c830 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd_ops.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
@@ -1307,7 +1307,7 @@ static int openssl_set_asym_session_parameters(
OSSL_PARAM *params = NULL;
int ret = -1;
 
-   if (xform->sm2.hash != RTE_CRYPTO_AUTH_SM3)
+   if (xform->ec.hash != RTE_CRYPTO_AUTH_SM3)
return -1;
 
param_bld = OSSL_PARAM_BLD_new();
diff --git a/lib/cryptodev/rte_crypto_asym.h b/lib/cryptodev/rte_crypto_asym.h
index 51f5476c6e..9b68c3f5e2 100644
--- a/lib/cryptodev/rte_crypto_asym.h
+++ b/lib/cryptodev/rte_crypto_asym.h
@@ -69,7 +69,8 @@ enum rte_crypto_curve_id {
RTE_CRYPTO_EC_GROUP_SECP224R1 = 21,
RTE_CRYPTO_EC_GROUP_SECP256R1 = 23,
RTE_CRYPTO_EC_GROUP_SECP384R1 = 24,
-   RTE_CRYPTO_EC_GROUP_SECP521R1 = 25
+   RTE_CRYPTO_EC_GROUP_SECP521R1 = 25,
+   RTE_CRYPTO_EC_GROUP_SM2   = 41,
 };
 
 /**
@@ -382,16 +383,6 @@ struct rte_crypto_ec_xform {
/**< Hash algorithm used in EC op. */
 };
 
-/**
- * Asymmetric SM2 transform data.
- *
- * Structure describing SM2 xform params.
- */
-struct rte_crypto_sm2_x

[v1 5/6] app/test: check asymmetric capabilities in SM2 test

2023-08-10 Thread Gowrishankar Muthukrishnan
Check asymmetric capabilities such as SM3 hash support and
internal RNG, and choose op params for the SM2 test accordingly.

Signed-off-by: Gowrishankar Muthukrishnan 
---
 app/test/test_cryptodev_asym.c | 127 ++---
 app/test/test_cryptodev_sm2_test_vectors.h |  28 +++--
 2 files changed, 103 insertions(+), 52 deletions(-)

diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index b08772a9bf..1f39b1f017 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -608,6 +608,7 @@ static inline void print_asym_capa(
break;
case RTE_CRYPTO_ASYM_XFORM_ECDSA:
case RTE_CRYPTO_ASYM_XFORM_ECPM:
+   case RTE_CRYPTO_ASYM_XFORM_SM2:
default:
break;
}
@@ -1806,12 +1807,14 @@ test_ecpm_all_curve(void)
 }
 
 static int
-_test_sm2_sign(bool rnd_secret)
+test_sm2_sign(void)
 {
struct crypto_testsuite_params_asym *ts_params = &testsuite_params;
struct crypto_testsuite_sm2_params input_params = sm2_param_fp256;
+   const struct rte_cryptodev_asymmetric_xform_capability *capa;
struct rte_mempool *sess_mpool = ts_params->session_mpool;
struct rte_mempool *op_mpool = ts_params->op_mpool;
+   struct rte_cryptodev_asym_capability_idx idx;
uint8_t dev_id = ts_params->valid_devs[0];
struct rte_crypto_op *result_op = NULL;
uint8_t output_buf_r[TEST_DATA_SIZE];
@@ -1822,6 +1825,12 @@ _test_sm2_sign(bool rnd_secret)
int ret, status = TEST_SUCCESS;
void *sess = NULL;
 
+   /* Check SM2 capability */
+   idx.type = RTE_CRYPTO_ASYM_XFORM_SM2;
+   capa = rte_cryptodev_asym_capability_get(dev_id, &idx);
+   if (capa == NULL)
+   return -ENOTSUP;
+
/* Setup crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (op == NULL) {
@@ -1839,7 +1848,10 @@ _test_sm2_sign(bool rnd_secret)
xform.next = NULL;
xform.xform_type = RTE_CRYPTO_ASYM_XFORM_SM2;
xform.ec.curve_id = input_params.curve;
-   xform.ec.hash = RTE_CRYPTO_AUTH_SM3;
+   if (rte_cryptodev_asym_xform_capability_check_hash(capa, 
RTE_CRYPTO_AUTH_SM3))
+   xform.ec.hash = RTE_CRYPTO_AUTH_SM3;
+   else
+   xform.ec.hash = RTE_CRYPTO_AUTH_NULL;
 
ret = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool, 
&sess);
if (ret < 0) {
@@ -1857,17 +1869,25 @@ _test_sm2_sign(bool rnd_secret)
 
/* Populate op with operational details */
asym_op->sm2.op_type = RTE_CRYPTO_ASYM_OP_SIGN;
-   asym_op->sm2.message.data = input_params.message.data;
-   asym_op->sm2.message.length = input_params.message.length;
+   if (xform.ec.hash == RTE_CRYPTO_AUTH_SM3) {
+   asym_op->sm2.message.data = input_params.message.data;
+   asym_op->sm2.message.length = input_params.message.length;
+   asym_op->sm2.id.data = input_params.id.data;
+   asym_op->sm2.id.length = input_params.id.length;
+   } else {
+   asym_op->sm2.message.data = input_params.digest.data;
+   asym_op->sm2.message.length = input_params.digest.length;
+   asym_op->sm2.id.data = NULL;
+   asym_op->sm2.id.length = 0;
+   }
+
asym_op->sm2.pkey.data = input_params.pkey.data;
asym_op->sm2.pkey.length = input_params.pkey.length;
asym_op->sm2.q.x.data = input_params.pubkey_qx.data;
asym_op->sm2.q.x.length = input_params.pubkey_qx.length;
asym_op->sm2.q.y.data = input_params.pubkey_qy.data;
asym_op->sm2.q.y.length = input_params.pubkey_qy.length;
-   asym_op->sm2.id.data = input_params.id.data;
-   asym_op->sm2.id.length = input_params.id.length;
-   if (rnd_secret) {
+   if (capa->internal_rng != 0) {
asym_op->sm2.k.data = NULL;
asym_op->sm2.k.length = 0;
} else {
@@ -1916,7 +1936,7 @@ _test_sm2_sign(bool rnd_secret)
debug_hexdump(stdout, "s:",
asym_op->sm2.s.data, asym_op->sm2.s.length);
 
-   if (!rnd_secret) {
+   if (capa->internal_rng == 0) {
/* Verify sign (by comparison). */
if (memcmp(input_params.sign_r.data, asym_op->sm2.r.data,
   asym_op->sm2.r.length) != 0) {
@@ -1977,25 +1997,15 @@ _test_sm2_sign(bool rnd_secret)
return status;
 };
 
-static int
-test_sm2_sign_rnd_secret(void)
-{
-   return _test_sm2_sign(true);
-}
-
-__rte_used static int
-test_sm2_sign_plain_secret(void)
-{
-   return _test_sm2_sign(false);
-}
-
 static int
 test_sm2_verify(void)
 {
struct crypto_testsuite_params_asym *ts_params = &testsuite_params;
struct crypto_testsuite_sm2_params input_params = sm2_param_fp256;
+   const struct rte_cryptodev_asymmetric_xform_capability *capa;
struct rte_mempool *sess_mpool = 

[v1 6/6] crypto/cnxk: add SM2 support

2023-08-10 Thread Gowrishankar Muthukrishnan
Add SM2 asymmetric algorithm support in cnxk PMD.

Signed-off-by: Gowrishankar Muthukrishnan 
---
 doc/guides/cryptodevs/features/cn10k.ini  |   1 +
 doc/guides/rel_notes/release_23_11.rst|   4 +
 drivers/common/cnxk/hw/cpt.h  |   3 +-
 drivers/common/cnxk/roc_ae.c  |  32 ++-
 drivers/common/cnxk/roc_ae.h  |   3 +-
 drivers/common/cnxk/roc_ae_fpm_tables.c   | 190 ++
 drivers/crypto/cnxk/cnxk_ae.h | 232 +-
 drivers/crypto/cnxk/cnxk_cryptodev.h  |   2 +-
 .../crypto/cnxk/cnxk_cryptodev_capabilities.c |  17 ++
 9 files changed, 479 insertions(+), 5 deletions(-)

diff --git a/doc/guides/cryptodevs/features/cn10k.ini 
b/doc/guides/cryptodevs/features/cn10k.ini
index 55a1226965..15e2dd48a8 100644
--- a/doc/guides/cryptodevs/features/cn10k.ini
+++ b/doc/guides/cryptodevs/features/cn10k.ini
@@ -103,6 +103,7 @@ Modular Inversion   =
 Diffie-hellman  =
 ECDSA   = Y
 ECPM= Y
+SM2 = Y
 
 ;
 ; Supported Operating systems of the 'cn10k' crypto driver.
diff --git a/doc/guides/rel_notes/release_23_11.rst 
b/doc/guides/rel_notes/release_23_11.rst
index 23c89e8ea9..234fa2e6ee 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -72,6 +72,10 @@ New Features
  Also, make sure to start the actual text at the margin.
  ===
 
+* **Updated CNXK crypto driver.**
+
+  * Added SM2 algorithm support in asymmetric crypto operations.
+
 
 Removed Items
 -
diff --git a/drivers/common/cnxk/hw/cpt.h b/drivers/common/cnxk/hw/cpt.h
index 5e1519e202..ce57de8788 100644
--- a/drivers/common/cnxk/hw/cpt.h
+++ b/drivers/common/cnxk/hw/cpt.h
@@ -79,7 +79,8 @@ union cpt_eng_caps {
uint64_t __io reserved_23_33 : 11;
uint64_t __io pdcp_chain : 1;
uint64_t __io sg_ver2 : 1;
-   uint64_t __io reserved_36_63 : 28;
+   uint64_t __io sm2 : 1;
+   uint64_t __io reserved_37_63 : 27;
};
 };
 
diff --git a/drivers/common/cnxk/roc_ae.c b/drivers/common/cnxk/roc_ae.c
index 336b927641..e6a013d7c4 100644
--- a/drivers/common/cnxk/roc_ae.c
+++ b/drivers/common/cnxk/roc_ae.c
@@ -149,7 +149,37 @@ const struct roc_ae_ec_group ae_ec_grp[ROC_AE_EC_ID_PMAX] 
= {
 0xBF, 0x07, 0x35, 0x73, 0xDF, 0x88, 0x3D, 0x2C,
 0x34, 0xF1, 0xEF, 0x45, 0x1F, 0xD4, 0x6B, 0x50,
 0x3F, 0x00},
-   .length = 66}}};
+   .length = 66},
+   },
+   {},
+   {},
+   {},
+   {
+   .prime = {.data = {0xFF, 0xFF, 0xFF, 0xFE, 0xFF, 0xFF, 0xFF,
+  0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+  0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00,
+  0x00, 0x00, 0x00, 0xFF, 0xFF, 0xFF, 0xFF,
+  0xFF, 0xFF, 0xFF, 0xFF},
+ .length = 32},
+   .order = {.data = {0xFF, 0xFF, 0xFF, 0xFE, 0xFF, 0xFF, 0xFF,
+  0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+  0xFF, 0xFF, 0x72, 0x03, 0xDF, 0x6B, 0x21,
+  0xC6, 0x05, 0x2B, 0x53, 0xBB, 0xF4, 0x09,
+  0x39, 0xD5, 0x41, 0x23},
+ .length = 32},
+   .consta = {.data = {0xFF, 0xFF, 0xFF, 0xFE, 0xFF, 0xFF, 0xFF,
+   0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+   0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00,
+   0x00, 0x00, 0x00, 0xFF, 0xFF, 0xFF, 0xFF,
+   0xFF, 0xFF, 0xFF, 0xFC},
+  .length = 32},
+   .constb = {.data = {0x28, 0xE9, 0xFA, 0x9E, 0x9D, 0x9F, 0x5E,
+   0x34, 0x4D, 0x5A, 0x9E, 0x4B, 0xCF, 0x65,
+   0x09, 0xA7, 0xF3, 0x97, 0x89, 0xF5, 0x15,
+   0xAB, 0x8F, 0x92, 0xDD, 0xBC, 0xBD, 0x41,
+   0x4D, 0x94, 0x0E, 0x93},
+  .length = 32},
+   }};
 
 int
 roc_ae_ec_grp_get(struct roc_ae_ec_group **tbl)
diff --git a/drivers/common/cnxk/roc_ae.h b/drivers/common/cnxk/roc_ae.h
index c972878eff..6ea4df2334 100644
--- a/drivers/common/cnxk/roc_ae.h
+++ b/drivers/common/cnxk/roc_ae.h
@@ -34,7 +34,8 @@ typedef enum {
ROC_AE_EC_ID_P160 = 5,
ROC_AE_EC_ID_P320 = 6,
ROC_AE_EC_ID_P512 = 7,
-   ROC_AE_EC_ID_PMAX = 8
+   ROC_AE_EC_ID_SM2  = 8,
+   ROC_AE_EC_ID_PMAX
 } roc_ae_ec_id;
 
 /* Prime and order fields of built-in elliptic curves */
diff --git a/drivers/common/cnxk/roc_ae_fpm_tables.c 
b/dri

[PATCH] app/dma-perf: validate copied memory

2023-08-10 Thread Gowrishankar Muthukrishnan
Validate copied memory to ensure DMA copy did not fail.

Signed-off-by: Gowrishankar Muthukrishnan 
Change-Id: I9f888c061f3d077f6b7b2d8a66c6a7cb7e5f2437
---
 app/test-dma-perf/benchmark.c | 23 +--
 app/test-dma-perf/main.c  | 16 +++-
 app/test-dma-perf/main.h  |  2 +-
 3 files changed, 33 insertions(+), 8 deletions(-)

diff --git a/app/test-dma-perf/benchmark.c b/app/test-dma-perf/benchmark.c
index 0601e0d171..9e5b5dc770 100644
--- a/app/test-dma-perf/benchmark.c
+++ b/app/test-dma-perf/benchmark.c
@@ -12,6 +12,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include "main.h"
 
@@ -306,7 +307,7 @@ setup_memory_env(struct test_configure *cfg, struct 
rte_mbuf ***srcs,
struct rte_mbuf ***dsts)
 {
unsigned int buf_size = cfg->buf_size.cur;
-   unsigned int nr_sockets;
+   unsigned int nr_sockets, i;
uint32_t nr_buf = cfg->nr_buf;
 
nr_sockets = rte_socket_count();
@@ -360,10 +361,15 @@ setup_memory_env(struct test_configure *cfg, struct 
rte_mbuf ***srcs,
return -1;
}
 
+   for (i = 0; i < nr_buf; i++) {
+   memset(rte_pktmbuf_mtod((*srcs)[i], void *), rte_rand(), 
buf_size);
+   memset(rte_pktmbuf_mtod((*dsts)[i], void *), 0, buf_size);
+   }
+
return 0;
 }
 
-void
+int
 mem_copy_benchmark(struct test_configure *cfg, bool is_dma)
 {
uint16_t i;
@@ -381,6 +387,7 @@ mem_copy_benchmark(struct test_configure *cfg, bool is_dma)
uint32_t avg_cycles_total;
float mops, mops_total;
float bandwidth, bandwidth_total;
+   int ret = 0;
 
if (setup_memory_env(cfg, &srcs, &dsts) < 0)
goto out;
@@ -454,6 +461,16 @@ mem_copy_benchmark(struct test_configure *cfg, bool is_dma)
 
rte_eal_mp_wait_lcore();
 
+   for (i = 0; i < cfg->nr_buf; i++) {
+   if (memcmp(rte_pktmbuf_mtod(srcs[i], void *),
+  rte_pktmbuf_mtod(dsts[i], void *),
+  cfg->buf_size.cur) != 0) {
+   printf("Copy validation fails for buffer number %d\n", 
i);
+   ret = -1;
+   goto out;
+   }
+   }
+
mops_total = 0;
bandwidth_total = 0;
avg_cycles_total = 0;
@@ -505,4 +522,6 @@ mem_copy_benchmark(struct test_configure *cfg, bool is_dma)
rte_dma_stop(ldm->dma_ids[i]);
}
}
+
+   return ret;
 }
diff --git a/app/test-dma-perf/main.c b/app/test-dma-perf/main.c
index de37120df6..bbba06ec1b 100644
--- a/app/test-dma-perf/main.c
+++ b/app/test-dma-perf/main.c
@@ -86,20 +86,24 @@ output_header(uint32_t case_id, struct test_configure 
*case_cfg)
output_csv(true);
 }
 
-static void
+static int
 run_test_case(struct test_configure *case_cfg)
 {
+   int ret = 0;
+
switch (case_cfg->test_type) {
case TEST_TYPE_DMA_MEM_COPY:
-   mem_copy_benchmark(case_cfg, true);
+   ret = mem_copy_benchmark(case_cfg, true);
break;
case TEST_TYPE_CPU_MEM_COPY:
-   mem_copy_benchmark(case_cfg, false);
+   ret = mem_copy_benchmark(case_cfg, false);
break;
default:
printf("Unknown test type. %s\n", case_cfg->test_type_str);
break;
}
+
+   return ret;
 }
 
 static void
@@ -144,8 +148,10 @@ run_test(uint32_t case_id, struct test_configure *case_cfg)
case_cfg->scenario_id++;
printf("\nRunning scenario %d\n", case_cfg->scenario_id);
 
-   run_test_case(case_cfg);
-   output_csv(false);
+   if (run_test_case(case_cfg) < 0)
+   printf("\nTest fails! skipping this scenario.\n");
+   else
+   output_csv(false);
 
if (var_entry->op == OP_ADD)
var_entry->cur += var_entry->incr;
diff --git a/app/test-dma-perf/main.h b/app/test-dma-perf/main.h
index 12bc3f4e3f..57a9f71a06 100644
--- a/app/test-dma-perf/main.h
+++ b/app/test-dma-perf/main.h
@@ -59,6 +59,6 @@ struct test_configure {
uint8_t scenario_id;
 };
 
-void mem_copy_benchmark(struct test_configure *cfg, bool is_dma);
+int mem_copy_benchmark(struct test_configure *cfg, bool is_dma);
 
 #endif /* _MAIN_H_ */
-- 
2.25.1



[PATCH] app/dma-perf: add SG copy support

2023-08-10 Thread Gowrishankar Muthukrishnan
Add SG copy support.

Signed-off-by: Gowrishankar Muthukrishnan 
Change-Id: I17c736bec5c8309b4c9cbe9fb1eafa5b5a00a3fe
---
 app/test-dma-perf/benchmark.c | 204 +-
 app/test-dma-perf/config.ini  |  17 +++
 app/test-dma-perf/main.c  |  35 +-
 app/test-dma-perf/main.h  |   5 +-
 4 files changed, 231 insertions(+), 30 deletions(-)

diff --git a/app/test-dma-perf/benchmark.c b/app/test-dma-perf/benchmark.c
index 9e5b5dc770..5f03f99b7b 100644
--- a/app/test-dma-perf/benchmark.c
+++ b/app/test-dma-perf/benchmark.c
@@ -46,6 +46,10 @@ struct lcore_params {
uint16_t test_secs;
struct rte_mbuf **srcs;
struct rte_mbuf **dsts;
+   struct rte_dma_sge **src_sges;
+   struct rte_dma_sge **dst_sges;
+   uint8_t src_ptrs;
+   uint8_t dst_ptrs;
volatile struct worker_info worker_info;
 };
 
@@ -86,21 +90,31 @@ calc_result(uint32_t buf_size, uint32_t nr_buf, uint16_t 
nb_workers, uint16_t te
 }
 
 static void
-output_result(uint8_t scenario_id, uint32_t lcore_id, char *dma_name, uint16_t 
ring_size,
-   uint16_t kick_batch, uint64_t ave_cycle, uint32_t 
buf_size, uint32_t nr_buf,
-   float memory, float bandwidth, float mops, bool is_dma)
+output_result(struct test_configure *cfg, struct lcore_params *para,
+   uint16_t kick_batch, uint64_t ave_cycle, uint32_t 
buf_size,
+   uint32_t nr_buf, float memory, float bandwidth, float 
mops)
 {
-   if (is_dma)
-   printf("lcore %u, DMA %s, DMA Ring Size: %u, Kick Batch Size: 
%u.\n",
-   lcore_id, dma_name, ring_size, kick_batch);
-   else
+   uint16_t ring_size = cfg->ring_size.cur;
+   uint8_t scenario_id = cfg->scenario_id;
+   uint32_t lcore_id = para->lcore_id;
+   char *dma_name = para->dma_name;
+
+   if (cfg->is_dma) {
+   printf("lcore %u, DMA %s, DMA Ring Size: %u, Kick Batch Size: 
%u", lcore_id,
+  dma_name, ring_size, kick_batch);
+   if (cfg->is_sg)
+   printf(" DMA src ptrs: %u, dst ptrs: %u",
+  para->src_ptrs, para->dst_ptrs);
+   printf(".\n");
+   } else {
printf("lcore %u\n", lcore_id);
+   }
 
printf("Average Cycles/op: %" PRIu64 ", Buffer Size: %u B, Buffer 
Number: %u, Memory: %.2lf MB, Frequency: %.3lf Ghz.\n",
ave_cycle, buf_size, nr_buf, memory, 
rte_get_timer_hz()/10.0);
printf("Average Bandwidth: %.3lf Gbps, MOps: %.3lf\n", bandwidth, mops);
 
-   if (is_dma)
+   if (cfg->is_dma)
snprintf(output_str[lcore_id], MAX_OUTPUT_STR_LEN, 
CSV_LINE_DMA_FMT,
scenario_id, lcore_id, dma_name, ring_size, kick_batch, 
buf_size,
nr_buf, memory, ave_cycle, bandwidth, mops);
@@ -130,7 +144,7 @@ cache_flush_buf(__rte_unused struct rte_mbuf **array,
 
 /* Configuration of device. */
 static void
-configure_dmadev_queue(uint32_t dev_id, uint32_t ring_size)
+configure_dmadev_queue(uint32_t dev_id, uint32_t ring_size, uint8_t ptrs_max)
 {
uint16_t vchan = 0;
struct rte_dma_info info;
@@ -153,6 +167,10 @@ configure_dmadev_queue(uint32_t dev_id, uint32_t ring_size)
rte_exit(EXIT_FAILURE, "Error, no configured queues reported on 
device id. %u\n",
dev_id);
 
+   if (info.max_sges < ptrs_max)
+   rte_exit(EXIT_FAILURE, "Error, DMA ptrs more than supported by 
device id %u.\n",
+   dev_id);
+
if (rte_dma_start(dev_id) != 0)
rte_exit(EXIT_FAILURE, "Error with dma start.\n");
 }
@@ -166,8 +184,12 @@ config_dmadevs(struct test_configure *cfg)
uint32_t i;
int dev_id;
uint16_t nb_dmadevs = 0;
+   uint8_t ptrs_max = 0;
char *dma_name;
 
+   if (cfg->is_sg)
+   ptrs_max = RTE_MAX(cfg->src_ptrs, cfg->dst_ptrs);
+
for (i = 0; i < ldm->cnt; i++) {
dma_name = ldm->dma_names[i];
dev_id = rte_dma_get_dev_id_by_name(dma_name);
@@ -177,7 +199,7 @@ config_dmadevs(struct test_configure *cfg)
}
 
ldm->dma_ids[i] = dev_id;
-   configure_dmadev_queue(dev_id, ring_size);
+   configure_dmadev_queue(dev_id, ring_size, ptrs_max);
++nb_dmadevs;
}
 
@@ -217,7 +239,7 @@ do_dma_submit_and_poll(uint16_t dev_id, uint64_t *async_cnt,
 }
 
 static inline int
-do_dma_mem_copy(void *p)
+do_dma_plain_mem_copy(void *p)
 {
struct lcore_params *para = (struct lcore_params *)p;
volatile struct worker_info *worker_info = &(para->worker_info);
@@ -270,6 +292,61 @@ do_dma_mem_copy(void *p)
return 0;
 }
 
+static inline int
+do_dma_sg_mem_copy(void *p)
+{
+   struct lcore_params *para = (struct lcore_params *)p;
+   

RE: [RFC PATCH] dmadev: offload to free source buffer

2023-08-10 Thread Amit Prakash Shukla



> -Original Message-
> From: Morten Brørup 
> Sent: Thursday, August 10, 2023 3:03 PM
> To: Amit Prakash Shukla ; Chengwen Feng
> ; Kevin Laatz ; Bruce
> Richardson 
> Cc: dev@dpdk.org; Jerin Jacob Kollanukkaran ;
> conor.wa...@intel.com; Vamsi Krishna Attunuru ;
> g.si...@nxp.com; sachin.sax...@oss.nxp.com; hemant.agra...@nxp.com;
> cheng1.ji...@intel.com; Nithin Kumar Dabilpuram
> ; Anoob Joseph 
> Subject: [EXT] RE: [RFC PATCH] dmadev: offload to free source buffer
> 
> External Email
> 
> --
> > From: Amit Prakash Shukla [mailto:amitpraka...@marvell.com]
> > Sent: Wednesday, 9 August 2023 20.12
> >
> > > From: Morten Brørup 
> > > Sent: Wednesday, August 9, 2023 8:19 PM
> > >
> > > > From: Amit Prakash Shukla [mailto:amitpraka...@marvell.com]
> > > > Sent: Wednesday, 9 August 2023 16.27
> > > >
> > > > > From: Morten Brørup 
> > > > > Sent: Wednesday, August 9, 2023 2:37 PM
> > > > >
> > > > > > From: Amit Prakash Shukla [mailto:amitpraka...@marvell.com]
> > > > > > Sent: Wednesday, 9 August 2023 08.09
> > > > > >
> > > > > > This changeset adds support in DMA library to free source DMA
> > > > > > buffer by hardware. On a supported hardware, application can
> > > > > > pass on the mempool information as part of vchan config when
> > > > > > the DMA transfer direction is configured as
> RTE_DMA_DIR_MEM_TO_DEV.
> > > > >
> > > > > Isn't the DMA source buffer a memory area, and what needs to be
> > > > > freed
> > > > is
> > > > > the mbuf holding the memory area, i.e. two different pointers?
> > > > No, it is same pointer. Assume mbuf created via mempool, mempool
> > > > needs to be given via vchan config and iova passed to
> > > > rte_dma_copy/rte_dma_copy_sg's can be any address in mbuf area of
> > > > given mempool element.
> > > > For example, mempool element size is S. dequeued buff from
> mempool
> > > > is at X. Any address in (X, X+S) can be given as iova to rte_dma_copy.
> > >
> > > So the DMA library determines the pointer to the mbuf (in the given
> > > mempool) by looking at the iova passed to
> > > rte_dma_copy/rte_dma_copy_sg, and then calls rte_mempool_put with
> that pointer?
> >
> > No. DMA hardware would determine the pointer to the mbuf using iova
> > address and mempool. Hardware will free the buffer, on completion of
> data transfer.
> 
> OK. If there are any requirements to the mempool, it needs to be
> documented in the source code comments. E.g. does it work with mempools
> where the mempool driver is an MP_RTS/MC_RTS ring, or a stack?

I think adding a comment about the type of supported mempool in the dma library
code might not be needed, as it is driver-implementation dependent. The call to
dev->dev_ops->vchan_setup for the driver shall check and return an error for an
unsupported type of mempool.

> 
> >
> > >
> > > >
> > > > >
> > > > > I like the concept. Something similar might also be useful for
> > > > > RTE_DMA_DIR_MEM_TO_MEM, e.g. packet capture. Although such a
> use
> > > > > case might require decrementing the mbuf refcount instead of
> > > > > freeing
> > > > the
> > > > > mbuf directly to the mempool.
> > > > This operation is not supported in our hardware. It can be
> > > > implemented in future if any hardware supports it.
> > >
> > > OK, I didn't expect that - just floating the idea. :-)
> > >
> > > >
> > > > >
> > > > > PS: It has been a while since I looked at the DMA library, so
> > > > > ignore my comments if I got this wrong.



[PATCH] app/dma-perf: validate copied memory

2023-08-10 Thread Gowrishankar Muthukrishnan
Validate copied memory to ensure DMA copy did not fail.

Fixes: 623dc9364dc ("app/dma-perf: introduce DMA performance test")

Signed-off-by: Gowrishankar Muthukrishnan 
---
v2:
 - patch issue fixed.
---
 app/test-dma-perf/benchmark.c | 23 +--
 app/test-dma-perf/main.c  | 16 +++-
 app/test-dma-perf/main.h  |  2 +-
 3 files changed, 33 insertions(+), 8 deletions(-)

diff --git a/app/test-dma-perf/benchmark.c b/app/test-dma-perf/benchmark.c
index 0601e0d171..9e5b5dc770 100644
--- a/app/test-dma-perf/benchmark.c
+++ b/app/test-dma-perf/benchmark.c
@@ -12,6 +12,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include "main.h"
 
@@ -306,7 +307,7 @@ setup_memory_env(struct test_configure *cfg, struct 
rte_mbuf ***srcs,
struct rte_mbuf ***dsts)
 {
unsigned int buf_size = cfg->buf_size.cur;
-   unsigned int nr_sockets;
+   unsigned int nr_sockets, i;
uint32_t nr_buf = cfg->nr_buf;
 
nr_sockets = rte_socket_count();
@@ -360,10 +361,15 @@ setup_memory_env(struct test_configure *cfg, struct 
rte_mbuf ***srcs,
return -1;
}
 
+   for (i = 0; i < nr_buf; i++) {
+   memset(rte_pktmbuf_mtod((*srcs)[i], void *), rte_rand(), 
buf_size);
+   memset(rte_pktmbuf_mtod((*dsts)[i], void *), 0, buf_size);
+   }
+
return 0;
 }
 
-void
+int
 mem_copy_benchmark(struct test_configure *cfg, bool is_dma)
 {
uint16_t i;
@@ -381,6 +387,7 @@ mem_copy_benchmark(struct test_configure *cfg, bool is_dma)
uint32_t avg_cycles_total;
float mops, mops_total;
float bandwidth, bandwidth_total;
+   int ret = 0;
 
if (setup_memory_env(cfg, &srcs, &dsts) < 0)
goto out;
@@ -454,6 +461,16 @@ mem_copy_benchmark(struct test_configure *cfg, bool is_dma)
 
rte_eal_mp_wait_lcore();
 
+   for (i = 0; i < cfg->nr_buf; i++) {
+   if (memcmp(rte_pktmbuf_mtod(srcs[i], void *),
+  rte_pktmbuf_mtod(dsts[i], void *),
+  cfg->buf_size.cur) != 0) {
+   printf("Copy validation fails for buffer number %d\n", 
i);
+   ret = -1;
+   goto out;
+   }
+   }
+
mops_total = 0;
bandwidth_total = 0;
avg_cycles_total = 0;
@@ -505,4 +522,6 @@ mem_copy_benchmark(struct test_configure *cfg, bool is_dma)
rte_dma_stop(ldm->dma_ids[i]);
}
}
+
+   return ret;
 }
diff --git a/app/test-dma-perf/main.c b/app/test-dma-perf/main.c
index e5bccc27da..f917be4216 100644
--- a/app/test-dma-perf/main.c
+++ b/app/test-dma-perf/main.c
@@ -86,20 +86,24 @@ output_header(uint32_t case_id, struct test_configure 
*case_cfg)
output_csv(true);
 }
 
-static void
+static int
 run_test_case(struct test_configure *case_cfg)
 {
+   int ret = 0;
+
switch (case_cfg->test_type) {
case TEST_TYPE_DMA_MEM_COPY:
-   mem_copy_benchmark(case_cfg, true);
+   ret = mem_copy_benchmark(case_cfg, true);
break;
case TEST_TYPE_CPU_MEM_COPY:
-   mem_copy_benchmark(case_cfg, false);
+   ret = mem_copy_benchmark(case_cfg, false);
break;
default:
printf("Unknown test type. %s\n", case_cfg->test_type_str);
break;
}
+
+   return ret;
 }
 
 static void
@@ -144,8 +148,10 @@ run_test(uint32_t case_id, struct test_configure *case_cfg)
case_cfg->scenario_id++;
printf("\nRunning scenario %d\n", case_cfg->scenario_id);
 
-   run_test_case(case_cfg);
-   output_csv(false);
+   if (run_test_case(case_cfg) < 0)
+   printf("\nTest fails! skipping this scenario.\n");
+   else
+   output_csv(false);
 
if (var_entry->op == OP_ADD)
var_entry->cur += var_entry->incr;
diff --git a/app/test-dma-perf/main.h b/app/test-dma-perf/main.h
index f65e264378..658f22f673 100644
--- a/app/test-dma-perf/main.h
+++ b/app/test-dma-perf/main.h
@@ -59,6 +59,6 @@ struct test_configure {
uint8_t scenario_id;
 };
 
-void mem_copy_benchmark(struct test_configure *cfg, bool is_dma);
+int mem_copy_benchmark(struct test_configure *cfg, bool is_dma);
 
 #endif /* MAIN_H */
-- 
2.25.1



[PATCH v2] app/dma-perf: add SG copy support

2023-08-10 Thread Gowrishankar Muthukrishnan
Add SG copy support.

Signed-off-by: Gowrishankar Muthukrishnan 
---
v2:
 - patch issue fixed.
---
 app/test-dma-perf/benchmark.c | 204 +-
 app/test-dma-perf/config.ini  |  17 +++
 app/test-dma-perf/main.c  |  35 +-
 app/test-dma-perf/main.h  |   5 +-
 4 files changed, 231 insertions(+), 30 deletions(-)

diff --git a/app/test-dma-perf/benchmark.c b/app/test-dma-perf/benchmark.c
index 9e5b5dc770..5f03f99b7b 100644
--- a/app/test-dma-perf/benchmark.c
+++ b/app/test-dma-perf/benchmark.c
@@ -46,6 +46,10 @@ struct lcore_params {
uint16_t test_secs;
struct rte_mbuf **srcs;
struct rte_mbuf **dsts;
+   struct rte_dma_sge **src_sges;
+   struct rte_dma_sge **dst_sges;
+   uint8_t src_ptrs;
+   uint8_t dst_ptrs;
volatile struct worker_info worker_info;
 };
 
@@ -86,21 +90,31 @@ calc_result(uint32_t buf_size, uint32_t nr_buf, uint16_t 
nb_workers, uint16_t te
 }
 
 static void
-output_result(uint8_t scenario_id, uint32_t lcore_id, char *dma_name, uint16_t 
ring_size,
-   uint16_t kick_batch, uint64_t ave_cycle, uint32_t 
buf_size, uint32_t nr_buf,
-   float memory, float bandwidth, float mops, bool is_dma)
+output_result(struct test_configure *cfg, struct lcore_params *para,
+   uint16_t kick_batch, uint64_t ave_cycle, uint32_t 
buf_size,
+   uint32_t nr_buf, float memory, float bandwidth, float 
mops)
 {
-   if (is_dma)
-   printf("lcore %u, DMA %s, DMA Ring Size: %u, Kick Batch Size: 
%u.\n",
-   lcore_id, dma_name, ring_size, kick_batch);
-   else
+   uint16_t ring_size = cfg->ring_size.cur;
+   uint8_t scenario_id = cfg->scenario_id;
+   uint32_t lcore_id = para->lcore_id;
+   char *dma_name = para->dma_name;
+
+   if (cfg->is_dma) {
+   printf("lcore %u, DMA %s, DMA Ring Size: %u, Kick Batch Size: 
%u", lcore_id,
+  dma_name, ring_size, kick_batch);
+   if (cfg->is_sg)
+   printf(" DMA src ptrs: %u, dst ptrs: %u",
+  para->src_ptrs, para->dst_ptrs);
+   printf(".\n");
+   } else {
printf("lcore %u\n", lcore_id);
+   }
 
printf("Average Cycles/op: %" PRIu64 ", Buffer Size: %u B, Buffer 
Number: %u, Memory: %.2lf MB, Frequency: %.3lf Ghz.\n",
ave_cycle, buf_size, nr_buf, memory, 
rte_get_timer_hz()/10.0);
printf("Average Bandwidth: %.3lf Gbps, MOps: %.3lf\n", bandwidth, mops);
 
-   if (is_dma)
+   if (cfg->is_dma)
snprintf(output_str[lcore_id], MAX_OUTPUT_STR_LEN, 
CSV_LINE_DMA_FMT,
scenario_id, lcore_id, dma_name, ring_size, kick_batch, 
buf_size,
nr_buf, memory, ave_cycle, bandwidth, mops);
@@ -130,7 +144,7 @@ cache_flush_buf(__rte_unused struct rte_mbuf **array,
 
 /* Configuration of device. */
 static void
-configure_dmadev_queue(uint32_t dev_id, uint32_t ring_size)
+configure_dmadev_queue(uint32_t dev_id, uint32_t ring_size, uint8_t ptrs_max)
 {
uint16_t vchan = 0;
struct rte_dma_info info;
@@ -153,6 +167,10 @@ configure_dmadev_queue(uint32_t dev_id, uint32_t ring_size)
rte_exit(EXIT_FAILURE, "Error, no configured queues reported on 
device id. %u\n",
dev_id);
 
+   if (info.max_sges < ptrs_max)
+   rte_exit(EXIT_FAILURE, "Error, DMA ptrs more than supported by 
device id %u.\n",
+   dev_id);
+
if (rte_dma_start(dev_id) != 0)
rte_exit(EXIT_FAILURE, "Error with dma start.\n");
 }
@@ -166,8 +184,12 @@ config_dmadevs(struct test_configure *cfg)
uint32_t i;
int dev_id;
uint16_t nb_dmadevs = 0;
+   uint8_t ptrs_max = 0;
char *dma_name;
 
+   if (cfg->is_sg)
+   ptrs_max = RTE_MAX(cfg->src_ptrs, cfg->dst_ptrs);
+
for (i = 0; i < ldm->cnt; i++) {
dma_name = ldm->dma_names[i];
dev_id = rte_dma_get_dev_id_by_name(dma_name);
@@ -177,7 +199,7 @@ config_dmadevs(struct test_configure *cfg)
}
 
ldm->dma_ids[i] = dev_id;
-   configure_dmadev_queue(dev_id, ring_size);
+   configure_dmadev_queue(dev_id, ring_size, ptrs_max);
++nb_dmadevs;
}
 
@@ -217,7 +239,7 @@ do_dma_submit_and_poll(uint16_t dev_id, uint64_t *async_cnt,
 }
 
 static inline int
-do_dma_mem_copy(void *p)
+do_dma_plain_mem_copy(void *p)
 {
struct lcore_params *para = (struct lcore_params *)p;
volatile struct worker_info *worker_info = &(para->worker_info);
@@ -270,6 +292,61 @@ do_dma_mem_copy(void *p)
return 0;
 }
 
+static inline int
+do_dma_sg_mem_copy(void *p)
+{
+   struct lcore_params *para = (struct lcore_params *)p;
+   volatile struct worker_

RE: [RFC PATCH] dmadev: offload to free source buffer

2023-08-10 Thread Morten Brørup
> From: Amit Prakash Shukla [mailto:amitpraka...@marvell.com]
> Sent: Thursday, 10 August 2023 12.28
> 
> > From: Morten Brørup 
> > Sent: Thursday, August 10, 2023 3:03 PM
> >
> > > From: Amit Prakash Shukla [mailto:amitpraka...@marvell.com]
> > > Sent: Wednesday, 9 August 2023 20.12
> > >
> > > > From: Morten Brørup 
> > > > Sent: Wednesday, August 9, 2023 8:19 PM
> > > >
> > > > > From: Amit Prakash Shukla [mailto:amitpraka...@marvell.com]
> > > > > Sent: Wednesday, 9 August 2023 16.27
> > > > >
> > > > > > From: Morten Brørup 
> > > > > > Sent: Wednesday, August 9, 2023 2:37 PM
> > > > > >
> > > > > > > From: Amit Prakash Shukla [mailto:amitpraka...@marvell.com]
> > > > > > > Sent: Wednesday, 9 August 2023 08.09
> > > > > > >
> > > > > > > This changeset adds support in DMA library to free source DMA
> > > > > > > buffer by hardware. On a supported hardware, application can
> > > > > > > pass on the mempool information as part of vchan config when
> > > > > > > the DMA transfer direction is configured as
> > RTE_DMA_DIR_MEM_TO_DEV.
> > > > > >
> > > > > > Isn't the DMA source buffer a memory area, and what needs to be
> > > > > > freed
> > > > > is
> > > > > > the mbuf holding the memory area, i.e. two different pointers?
> > > > > No, it is same pointer. Assume mbuf created via mempool, mempool
> > > > > needs to be given via vchan config and iova passed to
> > > > > rte_dma_copy/rte_dma_copy_sg's can be any address in mbuf area of
> > > > > given mempool element.
> > > > > For example, mempool element size is S. dequeued buff from
> > mempool
> > > > > is at X. Any address in (X, X+S) can be given as iova to
> rte_dma_copy.
> > > >
> > > > So the DMA library determines the pointer to the mbuf (in the given
> > > > mempool) by looking at the iova passed to
> > > > rte_dma_copy/rte_dma_copy_sg, and then calls rte_mempool_put with
> > that pointer?
> > >
> > > No. DMA hardware would determine the pointer to the mbuf using iova
> > > address and mempool. Hardware will free the buffer, on completion of
> > data transfer.
> >
> > OK. If there are any requirements to the mempool, it needs to be
> > documented in the source code comments. E.g. does it work with mempools
> > where the mempool driver is an MP_RTS/MC_RTS ring, or a stack?
> 
> I think adding a comment, related to type of supported mempool, in dma
> library code might not be needed as it is driver implementation dependent.
> Call to dev->dev_ops->vchan_setup for the driver shall check and return
> error for unsupported type of mempool.

Makes sense. But I still think that it needs to be mentioned that 
RTE_DMA_CAPA_MEM_TO_DEV_SOURCE_BUFFER_FREE has some limitations, and doesn't 
mean that any type of mempool can be used.

I suggest you add a note to the description of the new "struct rte_mempool 
*mem_to_dev_src_buf_pool" field in the rte_dma_vchan_conf structure, such as:

Note: If the mempool is not supported by the DMA driver, rte_dma_vchan_setup() 
will fail.

You should also mention it with the description of 
RTE_DMA_CAPA_MEM_TO_DEV_SOURCE_BUFFER_FREE flag, such as:

Note: Even though the DMA driver has this capability, it may not support all 
mempool drivers. If the mempool is not supported by the DMA driver, 
rte_dma_vchan_setup() will fail.
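
For example, a caller under this RFC would look roughly like the sketch below
(the mem_to_dev_src_buf_pool field is the one proposed in the RFC, and
handle_unsupported_pool() is just a placeholder for the application fallback):

        struct rte_dma_vchan_conf conf = { 0 };

        conf.direction = RTE_DMA_DIR_MEM_TO_DEV;
        conf.nb_desc = 1024;
        /* destination device/port setup omitted for brevity */
        conf.mem_to_dev_src_buf_pool = mbuf_pool; /* pool backing the sources */

        if (rte_dma_vchan_setup(dev_id, vchan, &conf) != 0) {
                /* e.g. the pool's mempool driver (ring, stack, ...) is not
                 * supported by this DMA driver
                 */
                handle_unsupported_pool();
        }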


> 
> >
> > >
> > > >
> > > > >
> > > > > >
> > > > > > I like the concept. Something similar might also be useful for
> > > > > > RTE_DMA_DIR_MEM_TO_MEM, e.g. packet capture. Although such a
> > use
> > > > > > case might require decrementing the mbuf refcount instead of
> > > > > > freeing
> > > > > the
> > > > > > mbuf directly to the mempool.
> > > > > This operation is not supported in our hardware. It can be
> > > > > implemented in future if any hardware supports it.
> > > >
> > > > OK, I didn't expect that - just floating the idea. :-)
> > > >
> > > > >
> > > > > >
> > > > > > PS: It has been a while since I looked at the DMA library, so
> > > > > > ignore my comments if I got this wrong.



RE: [PATCH v2] eventdev/eth_rx: update adapter create APIs

2023-08-10 Thread Naga Harish K, S V
Hi Jerin,
 Thinking of another approach for this patch.
Instead of changing all the create APIs, update
rte_event_eth_rx_adapter_create_ext() alone with an additional parameter.
The rte_event_eth_rx_adapter_create() and
rte_event_eth_rx_adapter_create_with_params() APIs will remain untouched.

How about this approach?
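
A rough sketch of what that would look like (existing parameters paraphrased;
the trailing rxa_params argument is the proposed addition, and NULL keeps
today's behaviour):

        int
        rte_event_eth_rx_adapter_create_ext(uint8_t id, uint8_t dev_id,
                        rte_event_eth_rx_adapter_conf_cb conf_cb,
                        void *conf_arg,
                        struct rte_event_eth_rx_adapter_params *rxa_params);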

-Harish

> -Original Message-
> From: Jerin Jacob 
> Sent: Thursday, August 10, 2023 1:37 PM
> To: Naga Harish K, S V 
> Cc: dev@dpdk.org; Jayatheerthan, Jay 
> Subject: Re: [PATCH v2] eventdev/eth_rx: update adapter create APIs
> 
> On Thu, Aug 10, 2023 at 1:09 PM Naga Harish K, S V
>  wrote:
> >
> > Hi Jerin,
> >  As per DPDK Guidelines, API changes or ABI breakage is allowed during 
> > LTS
> releases
> >
> > (https://doc.dpdk.org/guides/contributing/abi_policy.html#abi-breakage
> > s)
> 
> Yes. Provided if depreciation notice has sent, approved and changes absolutely
> needed.
> 
> >
> > Also, there are previous instances where API changes happened, some of them
> are mentioned below.
> 
> These are not the cases where existing APIs removed and changed prototype to
> cover up the removed function.
> 
> >
> >In DPDK 22.11, the cryptodev library had undergone the following API
> changes.
> > * rte_cryptodev_sym_session_create() and
> rte_cryptodev_asym_session_create() API parameters changed.
> >rte_cryptodev_sym_session_free() and rte_cryptodev_asym_session_free()
> API parameters changed.
> >rte_cryptodev_sym_session_init() and rte_cryptodev_asym_session_init()
> APIs are removed.
> >
> > * eventdev: The function ``rte_event_crypto_adapter_queue_pair_add`` was
> updated
> >to accept configuration of type ``rte_event_crypto_adapter_queue_conf``
> >instead of ``rte_event``,
> >similar to ``rte_event_eth_rx_adapter_queue_add`` signature.
> >Event will be one of the configuration fields,
> >together with additional vector parameters.
> >
> >  Applications have to change to accommodate the above API changes.
> >
> > As discussed earlier, fewer adapter-create APIs are useful for the 
> > application
> design.
> > Please let us know your thoughts on the same.
> 
> 
> mempool has different variants of its create API. IMO, different variants of the
> _create API are fine, and the application can pick the correct one based on its
> needs.
> It is OK to break an API prototype if absolutely needed; in this case it is not.


[PATCH] test/dma: add test skip status

2023-08-10 Thread Gowrishankar Muthukrishnan
Add status on skipped tests.

Signed-off-by: Gowrishankar Muthukrishnan 
---
 app/test/test_dmadev_api.c | 26 +++---
 1 file changed, 19 insertions(+), 7 deletions(-)

diff --git a/app/test/test_dmadev_api.c b/app/test/test_dmadev_api.c
index 4a181af90a..a1646472b0 100644
--- a/app/test/test_dmadev_api.c
+++ b/app/test/test_dmadev_api.c
@@ -9,6 +9,8 @@
 #include 
 #include 
 
+#include "test.h"
+
 extern int test_dma_api(uint16_t dev_id);
 
 #define DMA_TEST_API_RUN(test) \
@@ -17,9 +19,6 @@ extern int test_dma_api(uint16_t dev_id);
 #define TEST_MEMCPY_SIZE   1024
 #define TEST_WAIT_US_VAL   5
 
-#define TEST_SUCCESS 0
-#define TEST_FAILED  -1
-
 static int16_t test_dev_id;
 static int16_t invalid_dev_id;
 
@@ -29,6 +28,7 @@ static char *dst;
 static int total;
 static int passed;
 static int failed;
+static int skipped;
 
 static int
 testsuite_setup(int16_t dev_id)
@@ -49,6 +49,7 @@ testsuite_setup(int16_t dev_id)
total = 0;
passed = 0;
failed = 0;
+   skipped = 0;
 
/* Set dmadev log level to critical to suppress unnecessary output
 * during API tests.
@@ -78,12 +79,22 @@ testsuite_run_test(int (*test)(void), const char *name)
 
if (test) {
ret = test();
-   if (ret < 0) {
-   failed++;
-   printf("%s Failed\n", name);
-   } else {
+   switch (ret) {
+   case TEST_SUCCESS:
passed++;
printf("%s Passed\n", name);
+   break;
+   case TEST_FAILED:
+   failed++;
+   printf("%s Failed\n", name);
+   break;
+   case TEST_SKIPPED:
+   skipped++;
+   printf("%s Skipped\n", name);
+   break;
+   default:
+   printf("Invalid test status\n");
+   break;
}
}
 
@@ -566,6 +577,7 @@ test_dma_api(uint16_t dev_id)
printf("Total tests   : %d\n", total);
printf("Passed: %d\n", passed);
printf("Failed: %d\n", failed);
+   printf("Skipped   : %d\n", skipped);
 
if (failed)
return -1;
-- 
2.25.1
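
As a usage illustration (not part of the patch), a test body can now return
TEST_SKIPPED when a precondition is not met, and testsuite_run_test() will count it
separately; a minimal sketch, with the device capability check chosen arbitrarily:

#include <rte_dmadev.h>

#include "test.h"

/* Sketch of a test body using the new skip status. The device id mirrors the
 * file-scope test_dev_id of test_dmadev_api.c (redeclared here only to keep
 * the sketch self-contained). */
static int16_t test_dev_id;

static int
test_dma_requires_sg(void)
{
    struct rte_dma_info info;

    if (rte_dma_info_get(test_dev_id, &info) != 0)
        return TEST_FAILED;

    /* Skip, rather than fail, when the device lacks SG copy support. */
    if (!(info.dev_capa & RTE_DMA_CAPA_OPS_COPY_SG))
        return TEST_SKIPPED;

    return TEST_SUCCESS;
}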



[PATCH] test/dma: test vchan reconfiguration

2023-08-10 Thread Gowrishankar Muthukrishnan
Reconfigure the vchan count and validate that the new count takes effect.

Signed-off-by: Gowrishankar Muthukrishnan 
---
 app/test/test_dmadev_api.c | 51 ++
 1 file changed, 51 insertions(+)

diff --git a/app/test/test_dmadev_api.c b/app/test/test_dmadev_api.c
index a1646472b0..5cdd87e162 100644
--- a/app/test/test_dmadev_api.c
+++ b/app/test/test_dmadev_api.c
@@ -355,6 +355,56 @@ test_dma_start_stop(void)
return TEST_SUCCESS;
 }
 
+static int
+test_dma_reconfigure(void)
+{
+   struct rte_dma_vchan_conf vchan_conf = { 0 };
+   struct rte_dma_conf dev_conf = { 0 };
+   struct rte_dma_info dev_info = { 0 };
+   uint16_t cfg_vchans;
+   int ret;
+
+   ret = rte_dma_info_get(test_dev_id, &dev_info);
+   RTE_TEST_ASSERT_SUCCESS(ret, "Failed to obtain device info, %d", ret);
+
+   /* At least two vchans required for the test */
+   if (dev_info.max_vchans < 2)
+   return TEST_SKIPPED;
+
+   /* Setup one vchan for later test */
+   ret = setup_one_vchan();
+   RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup one vchan, %d", ret);
+
+   ret = rte_dma_start(test_dev_id);
+   RTE_TEST_ASSERT_SUCCESS(ret, "Failed to start, %d", ret);
+
+   ret = rte_dma_stop(test_dev_id);
+   RTE_TEST_ASSERT_SUCCESS(ret, "Failed to stop, %d", ret);
+
+   /* Check reconfigure and vchan setup after device stopped */
+   cfg_vchans = dev_conf.nb_vchans = (dev_info.max_vchans - 1);
+
+   ret = rte_dma_configure(test_dev_id, &dev_conf);
+   RTE_TEST_ASSERT_SUCCESS(ret, "Failed to configure, %d", ret);
+
+   vchan_conf.direction = RTE_DMA_DIR_MEM_TO_MEM;
+   vchan_conf.nb_desc = dev_info.min_desc;
+   ret = rte_dma_vchan_setup(test_dev_id, 0, &vchan_conf);
+   RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup vchan, %d", ret);
+
+   ret = rte_dma_start(test_dev_id);
+   RTE_TEST_ASSERT_SUCCESS(ret, "Failed to start, %d", ret);
+
+   ret = rte_dma_info_get(test_dev_id, &dev_info);
+   RTE_TEST_ASSERT_SUCCESS(ret, "Failed to obtain device info, %d", ret);
+   RTE_TEST_ASSERT_EQUAL(dev_info.nb_vchans, cfg_vchans, "incorrect 
reconfiguration");
+
+   ret = rte_dma_stop(test_dev_id);
+   RTE_TEST_ASSERT_SUCCESS(ret, "Failed to stop, %d", ret);
+
+   return TEST_SUCCESS;
+}
+
 static int
 test_dma_stats(void)
 {
@@ -567,6 +617,7 @@ test_dma_api(uint16_t dev_id)
DMA_TEST_API_RUN(test_dma_configure);
DMA_TEST_API_RUN(test_dma_vchan_setup);
DMA_TEST_API_RUN(test_dma_start_stop);
+   DMA_TEST_API_RUN(test_dma_reconfigure);
DMA_TEST_API_RUN(test_dma_stats);
DMA_TEST_API_RUN(test_dma_dump);
DMA_TEST_API_RUN(test_dma_completed);
-- 
2.25.1
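
For reference, the stop/reconfigure/start sequence exercised by the test above,
reduced to a minimal standalone sketch (error handling kept deliberately simple):

#include <rte_dmadev.h>

/* Sketch: the vchan count can only be changed while the device is stopped,
 * and vchans must be set up again after rte_dma_configure(). */
static int
reconfigure_nb_vchans(int16_t dev_id, uint16_t nb_vchans, uint16_t nb_desc)
{
    struct rte_dma_conf dev_conf = { .nb_vchans = nb_vchans };
    struct rte_dma_vchan_conf vchan_conf = {
        .direction = RTE_DMA_DIR_MEM_TO_MEM,
        .nb_desc = nb_desc,
    };
    int ret;

    ret = rte_dma_stop(dev_id);
    if (ret != 0)
        return ret;

    ret = rte_dma_configure(dev_id, &dev_conf);
    if (ret != 0)
        return ret;

    ret = rte_dma_vchan_setup(dev_id, 0, &vchan_conf);
    if (ret != 0)
        return ret;

    return rte_dma_start(dev_id);
}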



[PATCH] test/dma: add SG copy tests

2023-08-10 Thread Gowrishankar Muthukrishnan
Add scatter-gather copy tests.

Signed-off-by: Vidya Sagar Velumuri 
Signed-off-by: Gowrishankar Muthukrishnan 
---
 app/test/test_dmadev.c | 124 +++-
 app/test/test_dmadev_api.c | 163 ++---
 app/test/test_dmadev_api.h |   2 +
 3 files changed, 274 insertions(+), 15 deletions(-)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 0736ff2a18..abe970baaf 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -18,7 +18,7 @@
 
 #define ERR_RETURN(...) do { print_err(__func__, __LINE__, __VA_ARGS__); 
return -1; } while (0)
 
-#define COPY_LEN 1024
+#define COPY_LEN 1032
 
 static struct rte_mempool *pool;
 static uint16_t id_count;
@@ -346,6 +346,120 @@ test_stop_start(int16_t dev_id, uint16_t vchan)
return 0;
 }
 
+static int
+test_enqueue_sg_copies(int16_t dev_id, uint16_t vchan)
+{
+   unsigned int src_len, dst_len, n_sge, len, i, j, k;
+   char orig_src[COPY_LEN], orig_dst[COPY_LEN];
+   struct rte_dma_info info = { 0 };
+   enum rte_dma_status_code status;
+   uint16_t id, n_src, n_dst;
+
+   if (rte_dma_info_get(dev_id, &info) < 0)
+   ERR_RETURN("Failed to get dev info");
+
+   n_sge = RTE_MIN(info.max_sges, TEST_SG_MAX);
+   len = COPY_LEN;
+
+   for (n_src = 1; n_src <= n_sge; n_src++) {
+   src_len = len / n_src;
+   for (n_dst = 1; n_dst <= n_sge; n_dst++) {
+   dst_len = len / n_dst;
+
+   struct rte_dma_sge sg_src[n_sge], sg_dst[n_sge];
+   struct rte_mbuf *src[n_sge], *dst[n_sge];
+   char *src_data[n_sge], *dst_data[n_sge];
+
+   for (i = 0 ; i < COPY_LEN; i++)
+   orig_src[i] = rte_rand() & 0xFF;
+
+   memset(orig_dst, 0, COPY_LEN);
+
+   for (i = 0; i < n_src; i++) {
+   src[i] = rte_pktmbuf_alloc(pool);
+   RTE_ASSERT(src[i] != NULL);
+   sg_src[i].addr = rte_pktmbuf_iova(src[i]);
+   sg_src[i].length = src_len;
+   src_data[i] = rte_pktmbuf_mtod(src[i], char *);
+   }
+
+   for (k = 0; k < n_dst; k++) {
+   dst[k] = rte_pktmbuf_alloc(pool);
+   RTE_ASSERT(dst[k] != NULL);
+   sg_dst[k].addr = rte_pktmbuf_iova(dst[k]);
+   sg_dst[k].length = dst_len;
+   dst_data[k] = rte_pktmbuf_mtod(dst[k], char *);
+   }
+
+   for (i = 0; i < n_src; i++) {
+   for (j = 0; j < src_len; j++)
+   src_data[i][j] = orig_src[i * src_len + 
j];
+   }
+
+   for (k = 0; k < n_dst; k++)
+   memset(dst_data[k], 0, dst_len);
+
+   printf("\tsrc segs: %2d [seg len: %4d] - dst segs: %2d 
[seg len : %4d]\n",
+   n_src, src_len, n_dst, dst_len);
+
+   id = rte_dma_copy_sg(dev_id, vchan, sg_src, sg_dst, 
n_src, n_dst,
+RTE_DMA_OP_FLAG_SUBMIT);
+
+   if (id != id_count)
+   ERR_RETURN("Error with rte_dma_copy_sg, got %u, 
expected %u\n",
+   id, id_count);
+
+   /* Give time for copy to finish, then check it was done 
*/
+   await_hw(dev_id, vchan);
+
+   for (k = 0; k < n_dst; k++)
+   memcpy((&orig_dst[0] + k * dst_len), 
dst_data[k], dst_len);
+
+   if (memcmp(orig_src, orig_dst, COPY_LEN))
+   ERR_RETURN("Data mismatch");
+
+   /* Verify completion */
+   id = ~id;
+   if (rte_dma_completed(dev_id, vchan, 1, &id, NULL) != 1)
+   ERR_RETURN("Error with rte_dma_completed\n");
+
+   /* Verify expected index(id_count) */
+   if (id != id_count)
+   ERR_RETURN("Error:incorrect job id received, %u 
[expected %u]\n",
+   id, id_count);
+
+   /* Check for completed and id when no job done */
+   id = ~id;
+   if (rte_dma_completed(dev_id, vchan, 1, &id, NULL) != 0)
+   ERR_RETURN("Error with rte_dma_completed when 
no job done\n");
+
+   if (id != id_count)
+   ERR_RETURN("Error:incorrect job id received 
when no job done, %u [expected %u]\n",
+   

RE: [RFC PATCH] dmadev: offload to free source buffer

2023-08-10 Thread Amit Prakash Shukla


> > > >
> > > > No. DMA hardware would determine the pointer to the mbuf using
> > > > iova address and mempool. Hardware will free the buffer, on
> > > > completion of
> > > data transfer.
> > >
> > > OK. If there are any requirements to the mempool, it needs to be
> > > documented in the source code comments. E.g. does it work with
> > > mempools where the mempool driver is an MP_RTS/MC_RTS ring, or a
> stack?
> >
> > I think adding a comment, related to type of supported mempool, in dma
> > library code might not be needed as it is driver implementation dependent.
> > Call to dev->dev_ops->vchan_setup for the driver shall check and
> > return error for unsupported type of mempool.
> 
> Makes sense. But I still think that it needs to be mentioned that
> RTE_DMA_CAPA_MEM_TO_DEV_SOURCE_BUFFER_FREE has some
> limitations, and doesn't mean that any type of mempool can be used.
> 
> I suggest you add a note to the description of the new "struct rte_mempool
> *mem_to_dev_src_buf_pool" field in the rte_dma_vchan_conf structure,
> such as:
> 
> Note: If the mempool is not supported by the DMA driver,
> rte_dma_vchan_setup() will fail.
> 
> You should also mention it with the description of
> RTE_DMA_CAPA_MEM_TO_DEV_SOURCE_BUFFER_FREE flag, such as:
> 
> Note: Even though the DMA driver has this capability, it may not support all
> mempool drivers. If the mempool is not supported by the DMA driver,
> rte_dma_vchan_setup() will fail.

Sure, I will add a note in the next version of the patch.

Thanks.



[PATCH 0/4] test/dma: run tests as per configured pmd

2023-08-10 Thread Gowrishankar Muthukrishnan
This series enables the dmadev tests to run per configured PMD,
similar to other subsystems such as cryptodev.

Gowrishankar Muthukrishnan (4):
  dmadev: add function to get list of device identifiers
  test/dma: run tests according to PMD
  dma/cnxk: update PCI driver name
  test/dma: enable cnxk tests

 app/test/meson.build   |  8 ++-
 app/test/test_dmadev.c | 96 ++
 drivers/dma/cnxk/cnxk_dmadev.c |  8 +--
 lib/dmadev/rte_dmadev.c| 20 +++
 lib/dmadev/rte_dmadev.h| 21 
 lib/dmadev/version.map |  1 +
 6 files changed, 128 insertions(+), 26 deletions(-)

-- 
2.25.1



[PATCH 1/4] dmadev: add function to get list of device identifiers

2023-08-10 Thread Gowrishankar Muthukrishnan
Add a function to get the list of device identifiers for a given driver
name.

Signed-off-by: Gowrishankar Muthukrishnan 
---
 lib/dmadev/rte_dmadev.c | 20 
 lib/dmadev/rte_dmadev.h | 21 +
 lib/dmadev/version.map  |  1 +
 3 files changed, 42 insertions(+)

diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
index 8c095e1f35..f2a106564d 100644
--- a/lib/dmadev/rte_dmadev.c
+++ b/lib/dmadev/rte_dmadev.c
@@ -388,6 +388,26 @@ rte_dma_get_dev_id_by_name(const char *name)
return dev->data->dev_id;
 }
 
+uint8_t
+rte_dma_get_dev_list_by_driver(const char *name, int16_t *devs, uint8_t 
nb_devs)
+{
+   uint8_t i, count = 0;
+
+   if (name == NULL)
+   return count;
+
+   for (i = 0; i < dma_devices_max && count < nb_devs; i++) {
+   if (rte_dma_devices[i].state == RTE_DMA_DEV_UNUSED)
+   continue;
+
+   if (strncmp(rte_dma_devices[i].device->driver->name,
+   name, strlen(name) + 1) == 0)
+   devs[count++] = i;
+   }
+
+   return count;
+}
+
 bool
 rte_dma_is_valid(int16_t dev_id)
 {
diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h
index e61d71959e..689062a686 100644
--- a/lib/dmadev/rte_dmadev.h
+++ b/lib/dmadev/rte_dmadev.h
@@ -191,6 +191,27 @@ int rte_dma_dev_max(size_t dev_max);
 __rte_experimental
 int rte_dma_get_dev_id_by_name(const char *name);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Get the list of device identifiers for the DMA driver.
+ *
+ * @param name
+ *   DMA driver name.
+ * @param devs
+ *   Output devices identifiers.
+ * @param nb_devs
+ *   Maximal number of devices.
+ *
+ * @return
+ *   Returns number of device identifiers.
+ */
+__rte_experimental
+uint8_t rte_dma_get_dev_list_by_driver(const char *name,
+  int16_t *devs,
+  uint8_t nb_devs);
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
diff --git a/lib/dmadev/version.map b/lib/dmadev/version.map
index 7031d6b335..b4d56b41a0 100644
--- a/lib/dmadev/version.map
+++ b/lib/dmadev/version.map
@@ -7,6 +7,7 @@ EXPERIMENTAL {
rte_dma_dev_max;
rte_dma_dump;
rte_dma_get_dev_id_by_name;
+   rte_dma_get_dev_list_by_driver;
rte_dma_info_get;
rte_dma_is_valid;
rte_dma_next_dev;
-- 
2.25.1
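
A brief usage sketch of the API added above (experimental, so the prototype may
still change): collect and print the identifiers of all dmadevs bound to one driver.

#include <stdint.h>
#include <stdio.h>

#include <rte_dmadev.h>

/* Usage sketch for the API added in this patch. */
static void
print_devs_for_driver(const char *driver)
{
    int16_t devs[UINT8_MAX];
    uint8_t nb, i;

    nb = rte_dma_get_dev_list_by_driver(driver, devs, UINT8_MAX);
    for (i = 0; i < nb; i++)
        printf("driver %s: dmadev id %d\n", driver, devs[i]);
}

For example, print_devs_for_driver("dma_skeleton") would list the skeleton
instances created for the autotests.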



[PATCH 2/4] test/dma: run tests according to PMD

2023-08-10 Thread Gowrishankar Muthukrishnan
Run tests according to the configured PMD.

Signed-off-by: Gowrishankar Muthukrishnan 
---
 app/test/meson.build   |  7 +++-
 app/test/test_dmadev.c | 87 +++---
 2 files changed, 71 insertions(+), 23 deletions(-)

diff --git a/app/test/meson.build b/app/test/meson.build
index 66897c14a3..de671b665f 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -319,7 +319,12 @@ driver_test_names = [
 'cryptodev_sw_snow3g_autotest',
 'cryptodev_sw_zuc_autotest',
 'cryptodev_uadk_autotest',
-'dmadev_autotest',
+'dmadev_dpaa_autotest',
+'dmadev_dpaa2_autotest',
+'dmadev_hisilicon_autotest',
+'dmadev_idxd_autotest',
+'dmadev_ioat_autotest',
+'dmadev_skeleton_autotest',
 ]
 
 dump_test_names = []
diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index abe970baaf..5e72e8535c 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -1025,40 +1025,83 @@ test_dmadev_instance(int16_t dev_id)
 }
 
 static int
-test_apis(void)
+test_dma(const char *pmd)
+{
+   int16_t devs[UINT8_MAX];
+   uint8_t nb_devs;
+   int i;
+
+   if (rte_dma_count_avail() == 0)
+   return TEST_SKIPPED;
+
+   nb_devs = rte_dma_get_dev_list_by_driver(pmd, devs, UINT8_MAX);
+   if (nb_devs == 0)
+   ERR_RETURN("Error, No device found for pmd %s\n", pmd);
+
+   printf("\n### Test dmadev infrastructure using %s driver\n", pmd);
+   if (test_dma_api(devs[0]) < 0)
+   ERR_RETURN("Error, test failure for %d device\n", devs[0]);
+
+   for (i = 0; i < nb_devs; i++)
+   if (test_dmadev_instance(devs[i]) < 0)
+   ERR_RETURN("Error, test failure for %d device\n", 
devs[i]);
+
+   return 0;
+}
+
+static int
+test_skeleton_dma(void)
 {
const char *pmd = "dma_skeleton";
-   int id;
-   int ret;
 
-   /* attempt to create skeleton instance - ignore errors due to one being 
already present */
+   /* Attempt to create skeleton instance - ignore errors due to one being 
already present */
rte_vdev_init(pmd, NULL);
-   id = rte_dma_get_dev_id_by_name(pmd);
-   if (id < 0)
-   return TEST_SKIPPED;
-   printf("\n### Test dmadev infrastructure using skeleton driver\n");
-   ret = test_dma_api(id);
+   return test_dma(pmd);
+}
 
-   return ret;
+static int
+test_dpaa_dma(void)
+{
+   const char *pmd = "dpaa_qdma";
+
+   return test_dma(pmd);
 }
 
 static int
-test_dma(void)
+test_dpaa2_dma(void)
 {
-   int i;
+   const char *pmd = "dpaa2_qdma";
 
-   /* basic sanity on dmadev infrastructure */
-   if (test_apis() < 0)
-   ERR_RETURN("Error performing API tests\n");
+   return test_dma(pmd);
+}
 
-   if (rte_dma_count_avail() == 0)
-   return TEST_SKIPPED;
+static int
+test_hisilicon_dma(void)
+{
+   const char *pmd = "dma_hisilicon";
 
-   RTE_DMA_FOREACH_DEV(i)
-   if (test_dmadev_instance(i) < 0)
-   ERR_RETURN("Error, test failure for device %d\n", i);
+   return test_dma(pmd);
+}
 
-   return 0;
+static int
+test_idxd_dma(void)
+{
+   const char *pmd = "dmadev_idxd_pci";
+
+   return test_dma(pmd);
+}
+
+static int
+test_ioat_dma(void)
+{
+   const char *pmd = "dmadev_ioat";
+
+   return test_dma(pmd);
 }
 
-REGISTER_TEST_COMMAND(dmadev_autotest, test_dma);
+REGISTER_TEST_COMMAND(dmadev_skeleton_autotest, test_skeleton_dma);
+REGISTER_TEST_COMMAND(dmadev_dpaa_autotest, test_dpaa_dma);
+REGISTER_TEST_COMMAND(dmadev_dpaa2_autotest, test_dpaa2_dma);
+REGISTER_TEST_COMMAND(dmadev_hisilicon_autotest, test_hisilicon_dma);
+REGISTER_TEST_COMMAND(dmadev_idxd_autotest, test_idxd_dma);
+REGISTER_TEST_COMMAND(dmadev_ioat_autotest, test_ioat_dma);
-- 
2.25.1



[PATCH 3/4] dma/cnxk: update PCI driver name

2023-08-10 Thread Gowrishankar Muthukrishnan
Follow the standard PCI driver naming convention used across drivers.

Signed-off-by: Gowrishankar Muthukrishnan 
---
 drivers/dma/cnxk/cnxk_dmadev.c | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/dma/cnxk/cnxk_dmadev.c b/drivers/dma/cnxk/cnxk_dmadev.c
index a6f4a31e0e..8c5d1409b3 100644
--- a/drivers/dma/cnxk/cnxk_dmadev.c
+++ b/drivers/dma/cnxk/cnxk_dmadev.c
@@ -17,6 +17,8 @@
 #include 
 #include 
 
+#define PCI_DRIVER_NAME dma_cnxk
+
 static int
 cnxk_dmadev_info_get(const struct rte_dma_dev *dev,
 struct rte_dma_info *dev_info, uint32_t size)
@@ -719,6 +721,6 @@ static struct rte_pci_driver cnxk_dmadev = {
.remove= cnxk_dmadev_remove,
 };
 
-RTE_PMD_REGISTER_PCI(cnxk_dmadev_pci_driver, cnxk_dmadev);
-RTE_PMD_REGISTER_PCI_TABLE(cnxk_dmadev_pci_driver, cnxk_dma_pci_map);
-RTE_PMD_REGISTER_KMOD_DEP(cnxk_dmadev_pci_driver, "vfio-pci");
+RTE_PMD_REGISTER_PCI(PCI_DRIVER_NAME, cnxk_dmadev);
+RTE_PMD_REGISTER_PCI_TABLE(PCI_DRIVER_NAME, cnxk_dma_pci_map);
+RTE_PMD_REGISTER_KMOD_DEP(PCI_DRIVER_NAME, "vfio-pci");
-- 
2.25.1



[PATCH 4/4] test/dma: enable cnxk tests

2023-08-10 Thread Gowrishankar Muthukrishnan
Enable DMA tests for CNXK driver.

Signed-off-by: Gowrishankar Muthukrishnan 
---
 app/test/meson.build   | 1 +
 app/test/test_dmadev.c | 9 +
 2 files changed, 10 insertions(+)

diff --git a/app/test/meson.build b/app/test/meson.build
index de671b665f..fa5987f3a2 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -319,6 +319,7 @@ driver_test_names = [
 'cryptodev_sw_snow3g_autotest',
 'cryptodev_sw_zuc_autotest',
 'cryptodev_uadk_autotest',
+'dmadev_cnxk_autotest',
 'dmadev_dpaa_autotest',
 'dmadev_dpaa2_autotest',
 'dmadev_hisilicon_autotest',
diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 5e72e8535c..090a6e3ce3 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -1099,9 +1099,18 @@ test_ioat_dma(void)
return test_dma(pmd);
 }
 
+static int
+test_cnxk_dma(void)
+{
+   const char *pmd = "dma_cnxk";
+
+   return test_dma(pmd);
+}
+
 REGISTER_TEST_COMMAND(dmadev_skeleton_autotest, test_skeleton_dma);
 REGISTER_TEST_COMMAND(dmadev_dpaa_autotest, test_dpaa_dma);
 REGISTER_TEST_COMMAND(dmadev_dpaa2_autotest, test_dpaa2_dma);
 REGISTER_TEST_COMMAND(dmadev_hisilicon_autotest, test_hisilicon_dma);
 REGISTER_TEST_COMMAND(dmadev_idxd_autotest, test_idxd_dma);
 REGISTER_TEST_COMMAND(dmadev_ioat_autotest, test_ioat_dma);
+REGISTER_TEST_COMMAND(dmadev_cnxk_autotest, test_cnxk_dma);
-- 
2.25.1



[PATCH v2 0/3] test/dma: add vchan reconfig and SG tests

2023-08-10 Thread Gowrishankar Muthukrishnan
This patch series adds vchan reconfiguration and SG tests.

v2:
 - combined the individual test patches with 1/3, as tests can be
   skipped unless supported by the PMD.

Gowrishankar Muthukrishnan (3):
  test/dma: add test skip status
  test/dma: test vchan reconfiguration
  test/dma: add SG copy tests

 app/test/test_dmadev.c | 124 ++-
 app/test/test_dmadev_api.c | 238 ++---
 app/test/test_dmadev_api.h |   2 +
 3 files changed, 343 insertions(+), 21 deletions(-)

-- 
2.25.1



[PATCH v2 2/3] test/dma: test vchan reconfiguration

2023-08-10 Thread Gowrishankar Muthukrishnan
Reconfigure the vchan count and validate that the new count takes effect.

Signed-off-by: Gowrishankar Muthukrishnan 
---
 app/test/test_dmadev_api.c | 51 ++
 1 file changed, 51 insertions(+)

diff --git a/app/test/test_dmadev_api.c b/app/test/test_dmadev_api.c
index a1646472b0..5cdd87e162 100644
--- a/app/test/test_dmadev_api.c
+++ b/app/test/test_dmadev_api.c
@@ -355,6 +355,56 @@ test_dma_start_stop(void)
return TEST_SUCCESS;
 }
 
+static int
+test_dma_reconfigure(void)
+{
+   struct rte_dma_vchan_conf vchan_conf = { 0 };
+   struct rte_dma_conf dev_conf = { 0 };
+   struct rte_dma_info dev_info = { 0 };
+   uint16_t cfg_vchans;
+   int ret;
+
+   ret = rte_dma_info_get(test_dev_id, &dev_info);
+   RTE_TEST_ASSERT_SUCCESS(ret, "Failed to obtain device info, %d", ret);
+
+   /* At least two vchans required for the test */
+   if (dev_info.max_vchans < 2)
+   return TEST_SKIPPED;
+
+   /* Setup one vchan for later test */
+   ret = setup_one_vchan();
+   RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup one vchan, %d", ret);
+
+   ret = rte_dma_start(test_dev_id);
+   RTE_TEST_ASSERT_SUCCESS(ret, "Failed to start, %d", ret);
+
+   ret = rte_dma_stop(test_dev_id);
+   RTE_TEST_ASSERT_SUCCESS(ret, "Failed to stop, %d", ret);
+
+   /* Check reconfigure and vchan setup after device stopped */
+   cfg_vchans = dev_conf.nb_vchans = (dev_info.max_vchans - 1);
+
+   ret = rte_dma_configure(test_dev_id, &dev_conf);
+   RTE_TEST_ASSERT_SUCCESS(ret, "Failed to configure, %d", ret);
+
+   vchan_conf.direction = RTE_DMA_DIR_MEM_TO_MEM;
+   vchan_conf.nb_desc = dev_info.min_desc;
+   ret = rte_dma_vchan_setup(test_dev_id, 0, &vchan_conf);
+   RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup vchan, %d", ret);
+
+   ret = rte_dma_start(test_dev_id);
+   RTE_TEST_ASSERT_SUCCESS(ret, "Failed to start, %d", ret);
+
+   ret = rte_dma_info_get(test_dev_id, &dev_info);
+   RTE_TEST_ASSERT_SUCCESS(ret, "Failed to obtain device info, %d", ret);
+   RTE_TEST_ASSERT_EQUAL(dev_info.nb_vchans, cfg_vchans, "incorrect 
reconfiguration");
+
+   ret = rte_dma_stop(test_dev_id);
+   RTE_TEST_ASSERT_SUCCESS(ret, "Failed to stop, %d", ret);
+
+   return TEST_SUCCESS;
+}
+
 static int
 test_dma_stats(void)
 {
@@ -567,6 +617,7 @@ test_dma_api(uint16_t dev_id)
DMA_TEST_API_RUN(test_dma_configure);
DMA_TEST_API_RUN(test_dma_vchan_setup);
DMA_TEST_API_RUN(test_dma_start_stop);
+   DMA_TEST_API_RUN(test_dma_reconfigure);
DMA_TEST_API_RUN(test_dma_stats);
DMA_TEST_API_RUN(test_dma_dump);
DMA_TEST_API_RUN(test_dma_completed);
-- 
2.25.1



[PATCH v2 1/3] test/dma: add test skip status

2023-08-10 Thread Gowrishankar Muthukrishnan
Add status on skipped tests.

Signed-off-by: Gowrishankar Muthukrishnan 
---
 app/test/test_dmadev_api.c | 26 +++---
 1 file changed, 19 insertions(+), 7 deletions(-)

diff --git a/app/test/test_dmadev_api.c b/app/test/test_dmadev_api.c
index 4a181af90a..a1646472b0 100644
--- a/app/test/test_dmadev_api.c
+++ b/app/test/test_dmadev_api.c
@@ -9,6 +9,8 @@
 #include 
 #include 
 
+#include "test.h"
+
 extern int test_dma_api(uint16_t dev_id);
 
 #define DMA_TEST_API_RUN(test) \
@@ -17,9 +19,6 @@ extern int test_dma_api(uint16_t dev_id);
 #define TEST_MEMCPY_SIZE   1024
 #define TEST_WAIT_US_VAL   5
 
-#define TEST_SUCCESS 0
-#define TEST_FAILED  -1
-
 static int16_t test_dev_id;
 static int16_t invalid_dev_id;
 
@@ -29,6 +28,7 @@ static char *dst;
 static int total;
 static int passed;
 static int failed;
+static int skipped;
 
 static int
 testsuite_setup(int16_t dev_id)
@@ -49,6 +49,7 @@ testsuite_setup(int16_t dev_id)
total = 0;
passed = 0;
failed = 0;
+   skipped = 0;
 
/* Set dmadev log level to critical to suppress unnecessary output
 * during API tests.
@@ -78,12 +79,22 @@ testsuite_run_test(int (*test)(void), const char *name)
 
if (test) {
ret = test();
-   if (ret < 0) {
-   failed++;
-   printf("%s Failed\n", name);
-   } else {
+   switch (ret) {
+   case TEST_SUCCESS:
passed++;
printf("%s Passed\n", name);
+   break;
+   case TEST_FAILED:
+   failed++;
+   printf("%s Failed\n", name);
+   break;
+   case TEST_SKIPPED:
+   skipped++;
+   printf("%s Skipped\n", name);
+   break;
+   default:
+   printf("Invalid test status\n");
+   break;
}
}
 
@@ -566,6 +577,7 @@ test_dma_api(uint16_t dev_id)
printf("Total tests   : %d\n", total);
printf("Passed: %d\n", passed);
printf("Failed: %d\n", failed);
+   printf("Skipped   : %d\n", skipped);
 
if (failed)
return -1;
-- 
2.25.1



[PATCH v2 3/3] test/dma: add SG copy tests

2023-08-10 Thread Gowrishankar Muthukrishnan
Add scatter-gather copy tests.

Signed-off-by: Vidya Sagar Velumuri 
Signed-off-by: Gowrishankar Muthukrishnan 
---
 app/test/test_dmadev.c | 124 +++-
 app/test/test_dmadev_api.c | 163 ++---
 app/test/test_dmadev_api.h |   2 +
 3 files changed, 274 insertions(+), 15 deletions(-)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 0736ff2a18..abe970baaf 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -18,7 +18,7 @@
 
 #define ERR_RETURN(...) do { print_err(__func__, __LINE__, __VA_ARGS__); 
return -1; } while (0)
 
-#define COPY_LEN 1024
+#define COPY_LEN 1032
 
 static struct rte_mempool *pool;
 static uint16_t id_count;
@@ -346,6 +346,120 @@ test_stop_start(int16_t dev_id, uint16_t vchan)
return 0;
 }
 
+static int
+test_enqueue_sg_copies(int16_t dev_id, uint16_t vchan)
+{
+   unsigned int src_len, dst_len, n_sge, len, i, j, k;
+   char orig_src[COPY_LEN], orig_dst[COPY_LEN];
+   struct rte_dma_info info = { 0 };
+   enum rte_dma_status_code status;
+   uint16_t id, n_src, n_dst;
+
+   if (rte_dma_info_get(dev_id, &info) < 0)
+   ERR_RETURN("Failed to get dev info");
+
+   n_sge = RTE_MIN(info.max_sges, TEST_SG_MAX);
+   len = COPY_LEN;
+
+   for (n_src = 1; n_src <= n_sge; n_src++) {
+   src_len = len / n_src;
+   for (n_dst = 1; n_dst <= n_sge; n_dst++) {
+   dst_len = len / n_dst;
+
+   struct rte_dma_sge sg_src[n_sge], sg_dst[n_sge];
+   struct rte_mbuf *src[n_sge], *dst[n_sge];
+   char *src_data[n_sge], *dst_data[n_sge];
+
+   for (i = 0 ; i < COPY_LEN; i++)
+   orig_src[i] = rte_rand() & 0xFF;
+
+   memset(orig_dst, 0, COPY_LEN);
+
+   for (i = 0; i < n_src; i++) {
+   src[i] = rte_pktmbuf_alloc(pool);
+   RTE_ASSERT(src[i] != NULL);
+   sg_src[i].addr = rte_pktmbuf_iova(src[i]);
+   sg_src[i].length = src_len;
+   src_data[i] = rte_pktmbuf_mtod(src[i], char *);
+   }
+
+   for (k = 0; k < n_dst; k++) {
+   dst[k] = rte_pktmbuf_alloc(pool);
+   RTE_ASSERT(dst[k] != NULL);
+   sg_dst[k].addr = rte_pktmbuf_iova(dst[k]);
+   sg_dst[k].length = dst_len;
+   dst_data[k] = rte_pktmbuf_mtod(dst[k], char *);
+   }
+
+   for (i = 0; i < n_src; i++) {
+   for (j = 0; j < src_len; j++)
+   src_data[i][j] = orig_src[i * src_len + 
j];
+   }
+
+   for (k = 0; k < n_dst; k++)
+   memset(dst_data[k], 0, dst_len);
+
+   printf("\tsrc segs: %2d [seg len: %4d] - dst segs: %2d 
[seg len : %4d]\n",
+   n_src, src_len, n_dst, dst_len);
+
+   id = rte_dma_copy_sg(dev_id, vchan, sg_src, sg_dst, 
n_src, n_dst,
+RTE_DMA_OP_FLAG_SUBMIT);
+
+   if (id != id_count)
+   ERR_RETURN("Error with rte_dma_copy_sg, got %u, 
expected %u\n",
+   id, id_count);
+
+   /* Give time for copy to finish, then check it was done 
*/
+   await_hw(dev_id, vchan);
+
+   for (k = 0; k < n_dst; k++)
+   memcpy((&orig_dst[0] + k * dst_len), 
dst_data[k], dst_len);
+
+   if (memcmp(orig_src, orig_dst, COPY_LEN))
+   ERR_RETURN("Data mismatch");
+
+   /* Verify completion */
+   id = ~id;
+   if (rte_dma_completed(dev_id, vchan, 1, &id, NULL) != 1)
+   ERR_RETURN("Error with rte_dma_completed\n");
+
+   /* Verify expected index(id_count) */
+   if (id != id_count)
+   ERR_RETURN("Error:incorrect job id received, %u 
[expected %u]\n",
+   id, id_count);
+
+   /* Check for completed and id when no job done */
+   id = ~id;
+   if (rte_dma_completed(dev_id, vchan, 1, &id, NULL) != 0)
+   ERR_RETURN("Error with rte_dma_completed when 
no job done\n");
+
+   if (id != id_count)
+   ERR_RETURN("Error:incorrect job id received 
when no job done, %u [expected %u]\n",
+   

[PATCH v3 0/2] app/dma-perf: add SG copy support

2023-08-10 Thread Gowrishankar Muthukrishnan
Add SG copy support in dma-perf application.

v3:
 - Combined the patch that does copy validation with
   this patch, which gives better validation for SG.

Gowrishankar Muthukrishnan (2):
  app/dma-perf: validate copied memory
  app/dma-perf: add SG copy support

 app/test-dma-perf/benchmark.c | 227 ++
 app/test-dma-perf/config.ini  |  17 +++
 app/test-dma-perf/main.c  |  47 +--
 app/test-dma-perf/main.h  |   5 +-
 4 files changed, 261 insertions(+), 35 deletions(-)

-- 
2.25.1
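
For readers unfamiliar with the dmadev SG API used by this series, a minimal
enqueue sketch; the addresses are assumed to be valid IOVAs already prepared by
the caller (e.g. via rte_pktmbuf_iova()):

#include <rte_common.h>
#include <rte_dmadev.h>

/* Sketch: gather two equally sized source segments into one destination
 * segment and submit immediately. Returns the job index or a negative errno. */
static int
sg_copy_once(int16_t dev_id, uint16_t vchan,
             rte_iova_t src0, rte_iova_t src1, rte_iova_t dst, uint32_t seg_len)
{
    struct rte_dma_sge src[2] = {
        { .addr = src0, .length = seg_len },
        { .addr = src1, .length = seg_len },
    };
    struct rte_dma_sge dst_sge = { .addr = dst, .length = 2 * seg_len };

    return rte_dma_copy_sg(dev_id, vchan, src, &dst_sge, 2, 1,
                           RTE_DMA_OP_FLAG_SUBMIT);
}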



[PATCH v3 1/2] app/dma-perf: validate copied memory

2023-08-10 Thread Gowrishankar Muthukrishnan
Validate copied memory to ensure DMA copy did not fail.

Fixes: 623dc9364dc ("app/dma-perf: introduce DMA performance test")

Signed-off-by: Gowrishankar Muthukrishnan 
---
 app/test-dma-perf/benchmark.c | 23 +--
 app/test-dma-perf/main.c  | 16 +++-
 app/test-dma-perf/main.h  |  2 +-
 3 files changed, 33 insertions(+), 8 deletions(-)

diff --git a/app/test-dma-perf/benchmark.c b/app/test-dma-perf/benchmark.c
index 0601e0d171..9e5b5dc770 100644
--- a/app/test-dma-perf/benchmark.c
+++ b/app/test-dma-perf/benchmark.c
@@ -12,6 +12,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include "main.h"
 
@@ -306,7 +307,7 @@ setup_memory_env(struct test_configure *cfg, struct 
rte_mbuf ***srcs,
struct rte_mbuf ***dsts)
 {
unsigned int buf_size = cfg->buf_size.cur;
-   unsigned int nr_sockets;
+   unsigned int nr_sockets, i;
uint32_t nr_buf = cfg->nr_buf;
 
nr_sockets = rte_socket_count();
@@ -360,10 +361,15 @@ setup_memory_env(struct test_configure *cfg, struct 
rte_mbuf ***srcs,
return -1;
}
 
+   for (i = 0; i < nr_buf; i++) {
+   memset(rte_pktmbuf_mtod((*srcs)[i], void *), rte_rand(), 
buf_size);
+   memset(rte_pktmbuf_mtod((*dsts)[i], void *), 0, buf_size);
+   }
+
return 0;
 }
 
-void
+int
 mem_copy_benchmark(struct test_configure *cfg, bool is_dma)
 {
uint16_t i;
@@ -381,6 +387,7 @@ mem_copy_benchmark(struct test_configure *cfg, bool is_dma)
uint32_t avg_cycles_total;
float mops, mops_total;
float bandwidth, bandwidth_total;
+   int ret = 0;
 
if (setup_memory_env(cfg, &srcs, &dsts) < 0)
goto out;
@@ -454,6 +461,16 @@ mem_copy_benchmark(struct test_configure *cfg, bool is_dma)
 
rte_eal_mp_wait_lcore();
 
+   for (i = 0; i < cfg->nr_buf; i++) {
+   if (memcmp(rte_pktmbuf_mtod(srcs[i], void *),
+  rte_pktmbuf_mtod(dsts[i], void *),
+  cfg->buf_size.cur) != 0) {
+   printf("Copy validation fails for buffer number %d\n", 
i);
+   ret = -1;
+   goto out;
+   }
+   }
+
mops_total = 0;
bandwidth_total = 0;
avg_cycles_total = 0;
@@ -505,4 +522,6 @@ mem_copy_benchmark(struct test_configure *cfg, bool is_dma)
rte_dma_stop(ldm->dma_ids[i]);
}
}
+
+   return ret;
 }
diff --git a/app/test-dma-perf/main.c b/app/test-dma-perf/main.c
index e5bccc27da..f917be4216 100644
--- a/app/test-dma-perf/main.c
+++ b/app/test-dma-perf/main.c
@@ -86,20 +86,24 @@ output_header(uint32_t case_id, struct test_configure 
*case_cfg)
output_csv(true);
 }
 
-static void
+static int
 run_test_case(struct test_configure *case_cfg)
 {
+   int ret = 0;
+
switch (case_cfg->test_type) {
case TEST_TYPE_DMA_MEM_COPY:
-   mem_copy_benchmark(case_cfg, true);
+   ret = mem_copy_benchmark(case_cfg, true);
break;
case TEST_TYPE_CPU_MEM_COPY:
-   mem_copy_benchmark(case_cfg, false);
+   ret = mem_copy_benchmark(case_cfg, false);
break;
default:
printf("Unknown test type. %s\n", case_cfg->test_type_str);
break;
}
+
+   return ret;
 }
 
 static void
@@ -144,8 +148,10 @@ run_test(uint32_t case_id, struct test_configure *case_cfg)
case_cfg->scenario_id++;
printf("\nRunning scenario %d\n", case_cfg->scenario_id);
 
-   run_test_case(case_cfg);
-   output_csv(false);
+   if (run_test_case(case_cfg) < 0)
+   printf("\nTest fails! skipping this scenario.\n");
+   else
+   output_csv(false);
 
if (var_entry->op == OP_ADD)
var_entry->cur += var_entry->incr;
diff --git a/app/test-dma-perf/main.h b/app/test-dma-perf/main.h
index f65e264378..658f22f673 100644
--- a/app/test-dma-perf/main.h
+++ b/app/test-dma-perf/main.h
@@ -59,6 +59,6 @@ struct test_configure {
uint8_t scenario_id;
 };
 
-void mem_copy_benchmark(struct test_configure *cfg, bool is_dma);
+int mem_copy_benchmark(struct test_configure *cfg, bool is_dma);
 
 #endif /* MAIN_H */
-- 
2.25.1



[PATCH v3 2/2] app/dma-perf: add SG copy support

2023-08-10 Thread Gowrishankar Muthukrishnan
Add SG copy support.

Signed-off-by: Gowrishankar Muthukrishnan 
---
 app/test-dma-perf/benchmark.c | 204 +-
 app/test-dma-perf/config.ini  |  17 +++
 app/test-dma-perf/main.c  |  35 +-
 app/test-dma-perf/main.h  |   5 +-
 4 files changed, 231 insertions(+), 30 deletions(-)

diff --git a/app/test-dma-perf/benchmark.c b/app/test-dma-perf/benchmark.c
index 9e5b5dc770..5f03f99b7b 100644
--- a/app/test-dma-perf/benchmark.c
+++ b/app/test-dma-perf/benchmark.c
@@ -46,6 +46,10 @@ struct lcore_params {
uint16_t test_secs;
struct rte_mbuf **srcs;
struct rte_mbuf **dsts;
+   struct rte_dma_sge **src_sges;
+   struct rte_dma_sge **dst_sges;
+   uint8_t src_ptrs;
+   uint8_t dst_ptrs;
volatile struct worker_info worker_info;
 };
 
@@ -86,21 +90,31 @@ calc_result(uint32_t buf_size, uint32_t nr_buf, uint16_t 
nb_workers, uint16_t te
 }
 
 static void
-output_result(uint8_t scenario_id, uint32_t lcore_id, char *dma_name, uint16_t 
ring_size,
-   uint16_t kick_batch, uint64_t ave_cycle, uint32_t 
buf_size, uint32_t nr_buf,
-   float memory, float bandwidth, float mops, bool is_dma)
+output_result(struct test_configure *cfg, struct lcore_params *para,
+   uint16_t kick_batch, uint64_t ave_cycle, uint32_t 
buf_size,
+   uint32_t nr_buf, float memory, float bandwidth, float 
mops)
 {
-   if (is_dma)
-   printf("lcore %u, DMA %s, DMA Ring Size: %u, Kick Batch Size: 
%u.\n",
-   lcore_id, dma_name, ring_size, kick_batch);
-   else
+   uint16_t ring_size = cfg->ring_size.cur;
+   uint8_t scenario_id = cfg->scenario_id;
+   uint32_t lcore_id = para->lcore_id;
+   char *dma_name = para->dma_name;
+
+   if (cfg->is_dma) {
+   printf("lcore %u, DMA %s, DMA Ring Size: %u, Kick Batch Size: 
%u", lcore_id,
+  dma_name, ring_size, kick_batch);
+   if (cfg->is_sg)
+   printf(" DMA src ptrs: %u, dst ptrs: %u",
+  para->src_ptrs, para->dst_ptrs);
+   printf(".\n");
+   } else {
printf("lcore %u\n", lcore_id);
+   }
 
printf("Average Cycles/op: %" PRIu64 ", Buffer Size: %u B, Buffer 
Number: %u, Memory: %.2lf MB, Frequency: %.3lf Ghz.\n",
ave_cycle, buf_size, nr_buf, memory, 
rte_get_timer_hz()/10.0);
printf("Average Bandwidth: %.3lf Gbps, MOps: %.3lf\n", bandwidth, mops);
 
-   if (is_dma)
+   if (cfg->is_dma)
snprintf(output_str[lcore_id], MAX_OUTPUT_STR_LEN, 
CSV_LINE_DMA_FMT,
scenario_id, lcore_id, dma_name, ring_size, kick_batch, 
buf_size,
nr_buf, memory, ave_cycle, bandwidth, mops);
@@ -130,7 +144,7 @@ cache_flush_buf(__rte_unused struct rte_mbuf **array,
 
 /* Configuration of device. */
 static void
-configure_dmadev_queue(uint32_t dev_id, uint32_t ring_size)
+configure_dmadev_queue(uint32_t dev_id, uint32_t ring_size, uint8_t ptrs_max)
 {
uint16_t vchan = 0;
struct rte_dma_info info;
@@ -153,6 +167,10 @@ configure_dmadev_queue(uint32_t dev_id, uint32_t ring_size)
rte_exit(EXIT_FAILURE, "Error, no configured queues reported on 
device id. %u\n",
dev_id);
 
+   if (info.max_sges < ptrs_max)
+   rte_exit(EXIT_FAILURE, "Error, DMA ptrs more than supported by 
device id %u.\n",
+   dev_id);
+
if (rte_dma_start(dev_id) != 0)
rte_exit(EXIT_FAILURE, "Error with dma start.\n");
 }
@@ -166,8 +184,12 @@ config_dmadevs(struct test_configure *cfg)
uint32_t i;
int dev_id;
uint16_t nb_dmadevs = 0;
+   uint8_t ptrs_max = 0;
char *dma_name;
 
+   if (cfg->is_sg)
+   ptrs_max = RTE_MAX(cfg->src_ptrs, cfg->dst_ptrs);
+
for (i = 0; i < ldm->cnt; i++) {
dma_name = ldm->dma_names[i];
dev_id = rte_dma_get_dev_id_by_name(dma_name);
@@ -177,7 +199,7 @@ config_dmadevs(struct test_configure *cfg)
}
 
ldm->dma_ids[i] = dev_id;
-   configure_dmadev_queue(dev_id, ring_size);
+   configure_dmadev_queue(dev_id, ring_size, ptrs_max);
++nb_dmadevs;
}
 
@@ -217,7 +239,7 @@ do_dma_submit_and_poll(uint16_t dev_id, uint64_t *async_cnt,
 }
 
 static inline int
-do_dma_mem_copy(void *p)
+do_dma_plain_mem_copy(void *p)
 {
struct lcore_params *para = (struct lcore_params *)p;
volatile struct worker_info *worker_info = &(para->worker_info);
@@ -270,6 +292,61 @@ do_dma_mem_copy(void *p)
return 0;
 }
 
+static inline int
+do_dma_sg_mem_copy(void *p)
+{
+   struct lcore_params *para = (struct lcore_params *)p;
+   volatile struct worker_info *worker_info = &(para->wo

[PATCH] app/test: validate shorter private key in ECDSA P521 test

2023-08-10 Thread Gowrishankar Muthukrishnan
Update the ECDSA P521 curve test vector to validate a private key
shorter than the prime length. As the prime length for this curve
is not 8-byte aligned, the new test vector exercises any alignment
issue along with the signature validation.

Signed-off-by: Gowrishankar Muthukrishnan 
---
 app/test/test_cryptodev_asym.c   |   6 +
 app/test/test_cryptodev_ecdsa_test_vectors.h | 120 ++-
 2 files changed, 125 insertions(+), 1 deletion(-)

diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index 0ef2642fdd..ef050f8b72 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -1477,6 +1477,9 @@ test_ecdsa_sign_verify(enum curve curve_id)
case SECP521R1:
input_params = ecdsa_param_secp521r1;
break;
+   case SECP521R1_UA:
+   input_params = ecdsa_param_secp521r1_ua;
+   break;
default:
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
@@ -1792,6 +1795,9 @@ test_ecpm_all_curve(void)
const char *msg;
 
for (curve_id = SECP192R1; curve_id < END_OF_CURVE_LIST; curve_id++) {
+   if (curve_id == SECP521R1_UA)
+   continue;
+
status = test_ecpm(curve_id);
if (status == TEST_SUCCESS) {
msg = "succeeded";
diff --git a/app/test/test_cryptodev_ecdsa_test_vectors.h 
b/app/test/test_cryptodev_ecdsa_test_vectors.h
index 55fbda5979..f1477639ba 100644
--- a/app/test/test_cryptodev_ecdsa_test_vectors.h
+++ b/app/test/test_cryptodev_ecdsa_test_vectors.h
@@ -14,6 +14,7 @@ enum curve {
SECP256R1,
SECP384R1,
SECP521R1,
+   SECP521R1_UA,
END_OF_CURVE_LIST
 };
 
@@ -21,7 +22,9 @@ const char *curve[] = {"SECP192R1",
   "SECP224R1",
   "SECP256R1",
   "SECP384R1",
-  "SECP521R1"};
+  "SECP521R1",
+  "SECP521R1(unaligned)",
+};
 
 struct crypto_testsuite_ecdsa_params {
rte_crypto_param pubkey_qx;
@@ -502,4 +505,119 @@ struct crypto_testsuite_ecdsa_params 
ecdsa_param_secp521r1 = {
.curve = RTE_CRYPTO_EC_GROUP_SECP521R1
 };
 
+/* SECP521R1 (P-521 NIST) test vectors (unaligned) */
+
+static uint8_t ua_digest_secp521r1[] = {
+   0x7b, 0xec, 0xf5, 0x96, 0xa8, 0x12, 0x04, 0x4c,
+   0x07, 0x96, 0x98, 0x4b, 0xe2, 0x3e, 0x9c, 0x02,
+   0xbf, 0xc5, 0x90, 0x96, 0xf4, 0x2f, 0xfc, 0x8a,
+   0x3f, 0x9a, 0x65, 0x0e
+};
+
+static uint8_t ua_pkey_secp521r1[] = {
+   0x00, 0x70, 0xa8, 0x4d, 0x30, 0xfd, 0xc9, 0x01,
+   0x1c, 0xc6, 0xc3, 0x38, 0xd4, 0x75, 0x6f, 0x3e,
+   0x59, 0xd8, 0x91, 0xaa, 0xb4, 0x18, 0x3e, 0x3c,
+   0xa5, 0x3d, 0x3f, 0x23, 0xd8, 0xe6, 0xfb, 0x3c,
+   0x54, 0x5a, 0xa1, 0xdd, 0x40, 0xec, 0xc5, 0xa0,
+   0x40, 0xa7, 0xb1, 0xb1, 0xbc, 0xfe, 0x34, 0xe4,
+   0xbf, 0xdb, 0x40, 0x89, 0x45, 0xb5, 0xf7, 0x45,
+   0x69, 0xca, 0xa7, 0xc1, 0x9e, 0x4a, 0x76, 0xa8,
+   0x05, 0x58
+};
+
+static uint8_t ua_scalar_secp521r1[] = {
+   0x00, 0x70, 0xa8, 0x4d, 0x30, 0xfd, 0xc9, 0x01,
+   0x1c, 0xc6, 0xc3, 0x38, 0xd4, 0x75, 0x6f, 0x3e,
+   0x59, 0xd8, 0x91, 0xaa, 0xb4, 0x18, 0x3e, 0x3c,
+   0xa5, 0x3d, 0x3f, 0x23, 0xd8, 0xe6, 0xfb, 0x3c,
+   0x54, 0x5a, 0xa1, 0xdd, 0x40, 0xec, 0xc5, 0xa0,
+   0x40, 0xa7, 0xb1, 0xb1, 0xbc, 0xfe, 0x34, 0xe4,
+   0xbf, 0xdb, 0x40, 0x89, 0x45, 0xb5, 0xf7, 0x45,
+   0x69, 0xca, 0xa7, 0xc1, 0x9e, 0x4a, 0x76, 0xa8,
+   0x05, 0x57
+};
+
+static uint8_t ua_pubkey_qx_secp521r1[] = {
+   0x01, 0x29, 0x15, 0x13, 0xa6, 0x45, 0x98, 0x5c,
+   0x5e, 0x2b, 0xc3, 0x99, 0xc5, 0x25, 0x64, 0x29,
+   0x14, 0x91, 0x12, 0xcc, 0x58, 0x3a, 0x9d, 0x91,
+   0x95, 0x64, 0x10, 0x9e, 0xc3, 0x2d, 0xde, 0xe2,
+   0xb1, 0xac, 0x44, 0xb7, 0x90, 0x70, 0xbf, 0xb5,
+   0x50, 0x3b, 0x06, 0x78, 0x36, 0x05, 0x7e, 0x48,
+   0xe7, 0x31, 0x6e, 0x3f, 0x78, 0x3b, 0x37, 0xbc,
+   0xa8, 0xcd, 0xc0, 0x34, 0xb6, 0x4f, 0xf8, 0x73,
+   0xd0, 0xb3
+};
+
+static uint8_t ua_pubkey_qy_secp521r1[] = {
+   0x00, 0xc1, 0x46, 0x92, 0x6e, 0x1a, 0xb5, 0xe6,
+   0xee, 0x25, 0xe3, 0x62, 0x68, 0x30, 0x38, 0xef,
+   0x44, 0x2a, 0xb0, 0xb8, 0xa9, 0xbc, 0x4b, 0x4b,
+   0x55, 0x4c, 0x35, 0xde, 0x50, 0xcc, 0xc6, 0x9e,
+   0xf9, 0x9d, 0x8d, 0xe9, 0x0f, 0x84, 0x95, 0xcb,
+   0x41, 0xa2, 0xc7, 0xf3, 0x7d, 0xea, 0xb1, 0x8b,
+   0x52, 0x5d, 0x58, 0x45, 0xac, 0xa0, 0xb4, 0x64,
+   0x60, 0x74, 0x1f, 0x59, 0x71, 0x97, 0xe8, 0x6b,
+   0x9f, 0x5d
+};
+
+static uint8_t ua_sign_secp521r1_r[] = {
+   0x00, 0xf1, 0xea, 0x3b, 0x7b, 0xfb, 0x49, 0x60,
+   0xf3, 0x93, 0x66, 0x8d, 0x81, 0x28, 0x7f, 0x40,
+   0xe9, 0x35, 0xd6, 0x13, 0xe1, 0x51, 0x1a, 0xee,
+   0xc8, 0x98, 0xa1, 0xf9, 0x62, 0xb6, 0x9f, 0xf3,
+   0x18, 0xdd, 0x45, 0x3c, 0xbb, 0x9d, 0xee, 0x89,
+ 

Re: [PATCH v2] eventdev/eth_rx: update adapter create APIs

2023-08-10 Thread Jerin Jacob
On Thu, Aug 10, 2023 at 5:28 PM Naga Harish K, S V
 wrote:
>
> Hi Jerin,
>  Thinking of another approach for this patch.
> Instead of changing all create APIs,  update 
> rte_event_eth_rx_adapter_create_ext() alone with additional parameters.
> rte_event_eth_rx_adapter_create() and 
> rte_event_eth_rx_adapter_create_with_params() APIs will be untouched.

I am not sure that is of any help to an existing application which is
using rte_event_eth_rx_adapter_create_ext() and
needs to support two DPDK versions.

Also, rte_event_eth_rx_adapter_create_ext() is not an experimental API,
so we need a deprecation notice to change the API.


[PATCH] build: deprecate enable_kmods option

2023-08-10 Thread Bruce Richardson
With the removal of the kni kernel driver, there are no longer any
Linux kernel modules in our repository, leaving only modules for FreeBSD
present. Since:

* BSD has no issues with out-of-tree modules and
* There are no in-tree equivalents for those modules in BSD

there is no point in building for BSD without those modules.

Therefore, we can remove the enable_kmods option, always building the
kmods for BSD.

We can also remove the infrastructure for Linux kmods, since use of
out-of-tree modules for Linux is not something the DPDK project wants to
pursue in the future.

Signed-off-by: Bruce Richardson 
---
 doc/guides/rel_notes/deprecation.rst |   7 ++
 kernel/linux/meson.build | 103 ---
 kernel/meson.build   |   4 +-
 meson.build  |   6 +-
 meson_options.txt|   4 +-
 5 files changed, 14 insertions(+), 110 deletions(-)
 delete mode 100644 kernel/linux/meson.build

diff --git a/doc/guides/rel_notes/deprecation.rst 
b/doc/guides/rel_notes/deprecation.rst
index 317875c505..d5d04ff8d7 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -21,6 +21,13 @@ Deprecation Notices
   won't be possible anymore through the use of the ``disable_libs`` build 
option.
   A new build option for deprecated libraries will be introduced instead.
 
+* build: The ``enable_kmods`` option is deprecated and will be removed in a 
future release.
+  Setting/clearing the option has no impact on the build.
+  Instead, kernel modules will be always built for OS's where out-of-tree 
kernel modules
+  are required for DPDK operation.
+  Currently, this means that modules will only be built for FreeBSD.
+  No modules are shipped with DPDK for either Linux or Windows.
+
 * kvargs: The function ``rte_kvargs_process`` will get a new parameter
   for returning key match count. It will ease handling of no-match case.
 
diff --git a/kernel/linux/meson.build b/kernel/linux/meson.build
deleted file mode 100644
index 8d47074621..00
--- a/kernel/linux/meson.build
+++ /dev/null
@@ -1,103 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2018 Intel Corporation
-
-subdirs = []
-
-kernel_build_dir = get_option('kernel_dir')
-kernel_source_dir = get_option('kernel_dir')
-kernel_install_dir = ''
-install = not meson.is_cross_build()
-cross_args = []
-
-if not meson.is_cross_build()
-# native build
-kernel_version = run_command('uname', '-r', check: true).stdout().strip()
-if kernel_source_dir != ''
-# Try kernel release from sources first
-r = run_command('make', '-s', '-C', kernel_source_dir, 
'kernelrelease', check: false)
-if r.returncode() == 0
-kernel_version = r.stdout().strip()
-endif
-else
-# use default path for native builds
-kernel_source_dir = '/lib/modules/' + kernel_version + '/source'
-endif
-kernel_install_dir = '/lib/modules/' + kernel_version + '/extra/dpdk'
-if kernel_build_dir == ''
-# use default path for native builds
-kernel_build_dir = '/lib/modules/' + kernel_version + '/build'
-endif
-
-# test running make in kernel directory, using "make kernelversion"
-make_returncode = run_command('make', '-sC', kernel_build_dir,
-'kernelversion', check: true).returncode()
-if make_returncode != 0
-# backward compatibility:
-# the headers could still be in the 'build' subdir
-if not kernel_build_dir.endswith('build') and not 
kernel_build_dir.endswith('build/')
-kernel_build_dir = join_paths(kernel_build_dir, 'build')
-make_returncode = run_command('make', '-sC', kernel_build_dir,
-'kernelversion', check: true).returncode()
-endif
-endif
-
-if make_returncode != 0
-error('Cannot compile kernel modules as requested - are kernel headers 
installed?')
-endif
-
-# DO ACTUAL MODULE BUILDING
-foreach d:subdirs
-subdir(d)
-endforeach
-
-subdir_done()
-endif
-
-# cross build
-# if we are cross-compiling we need kernel_build_dir specified
-if kernel_build_dir == ''
-error('Need "kernel_dir" option for kmod compilation when cross-compiling')
-endif
-cross_compiler = find_program('c').path()
-if cross_compiler.endswith('gcc')
-cross_prefix = run_command([py3, '-c', 'print("' + cross_compiler + 
'"[:-3])'],
-check: true).stdout().strip()
-elif cross_compiler.endswith('clang')
-cross_prefix = ''
-found_target = false
-# search for '-target' and use the arg that follows
-# (i.e. the value of '-target') as cross_prefix
-foreach cross_c_arg : meson.get_cross_property('c_args')
-if found_target and cross_prefix == ''
-cross_prefix = cross_c_arg
-endif
-if cross_c_arg == '-target'
-found_target = true
-endif
-endforeach
-if cross_prefix == ''
-

Re: [PATCH v5] build: update DPDK to use C11 standard

2023-08-10 Thread Thomas Monjalon
03/08/2023 15:36, David Marchand:
> On Wed, Aug 2, 2023 at 2:32 PM Bruce Richardson
>  wrote:
> >
> > As previously announced, DPDK 23.11 will require a C11 supporting
> > compiler and will use the C11 standard in all builds.
> >
> > Forcing use of the C standard, rather than the standard with
> > GNU extensions, means that some posix definitions which are not in
> > the C standard are unavailable by default. We fix this by ensuring
> > the correct defines or cflags are passed to the components that
> > need them.
> >
> > Signed-off-by: Bruce Richardson 
> > Acked-by: Morten Brørup 
> > Acked-by: Tyler Retzlaff 
> Tested-by: Ali Alnubani 
> 
> The CI results look good.
> 
> Applied, thanks!

The compiler support is updated; that's fine.
Should we go further and document some major Linux distributions?
One concern is to make it clear that RHEL 7 is no longer supported.
Should it be a release note?




Re: [PATCH v5] build: update DPDK to use C11 standard

2023-08-10 Thread Stephen Hemminger
On Thu, 10 Aug 2023 15:34:43 +0200
Thomas Monjalon  wrote:

> 03/08/2023 15:36, David Marchand:
> > On Wed, Aug 2, 2023 at 2:32 PM Bruce Richardson
> >  wrote:  
> > >
> > > As previously announced, DPDK 23.11 will require a C11 supporting
> > > compiler and will use the C11 standard in all builds.
> > >
> > > Forcing use of the C standard, rather than the standard with
> > > GNU extensions, means that some posix definitions which are not in
> > > the C standard are unavailable by default. We fix this by ensuring
> > > the correct defines or cflags are passed to the components that
> > > need them.
> > >
> > > Signed-off-by: Bruce Richardson 
> > > Acked-by: Morten Brørup 
> > > Acked-by: Tyler Retzlaff   
> > Tested-by: Ali Alnubani 
> > 
> > The CI results look good.
> > 
> > Applied, thanks!  
> 
> The compiler support is updated, that's fine.
> Should we go further and document some major Linux distributions?
> One concern is to make clear RHEL 7 is not supported anymore.
> Should it be a release note?
> 
> 

This should be addressed in linux/sys_reqs.rst as well as in the deprecation notice.
Also, is it possible to add an automated check in the build for the compiler version?


[PATCH] app: fix silent enqueue fail in test_mbuf test_refcnt_iter

2023-08-10 Thread jhascoet
From: Julien Hascoet 

If the ring is full, retry the enqueue
operation to avoid mbuf loss.

Fixes: af75078fece ("first public release")

Signed-off-by: Julien Hascoet 
---
 app/test/test_mbuf.c | 11 ---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
index efac01806b..be114e3302 100644
--- a/app/test/test_mbuf.c
+++ b/app/test/test_mbuf.c
@@ -1033,12 +1033,17 @@ test_refcnt_iter(unsigned int lcore, unsigned int iter,
tref += ref;
if ((ref & 1) != 0) {
rte_pktmbuf_refcnt_update(m, ref);
-   while (ref-- != 0)
-   rte_ring_enqueue(refcnt_mbuf_ring, m);
+   while (ref-- != 0) {
+   /* retry in case of failure */
+   while (rte_ring_enqueue(refcnt_mbuf_ring, m) != 
0)
+   ;
+   }
} else {
while (ref-- != 0) {
rte_pktmbuf_refcnt_update(m, 1);
-   rte_ring_enqueue(refcnt_mbuf_ring, m);
+   /* retry in case of failure */
+   while (rte_ring_enqueue(refcnt_mbuf_ring, m) != 
0)
+   ;
}
}
rte_pktmbuf_free(m);
-- 
2.34.1



Re: [PATCH] app: fix silent enqueue fail in test_mbuf test_refcnt_iter

2023-08-10 Thread Stephen Hemminger
On Thu, 10 Aug 2023 08:00:30 +0200
jhascoet  wrote:

> diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
> index efac01806b..be114e3302 100644
> --- a/app/test/test_mbuf.c
> +++ b/app/test/test_mbuf.c
> @@ -1033,12 +1033,17 @@ test_refcnt_iter(unsigned int lcore, unsigned int 
> iter,
>   tref += ref;
>   if ((ref & 1) != 0) {
>   rte_pktmbuf_refcnt_update(m, ref);
> - while (ref-- != 0)
> - rte_ring_enqueue(refcnt_mbuf_ring, m);
> + while (ref-- != 0) {
> + /* retry in case of failure */
> + while (rte_ring_enqueue(refcnt_mbuf_ring, m) != 
> 0)
> + ;

Since the other side needs to consume these and might be on the same lcore,
this might be a good place to add rte_pause() or sched_yield()?
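
A sketch of what that suggestion could look like (rte_pause() is used here;
sched_yield() would be similar), as a hypothetical helper rather than the final
patch:

#include <rte_mbuf.h>
#include <rte_pause.h>
#include <rte_ring.h>

/* Hypothetical helper: retry the enqueue until it succeeds, relaxing the
 * CPU between attempts so a consumer sharing the core can make progress. */
static void
enqueue_with_retry(struct rte_ring *r, struct rte_mbuf *m)
{
    while (rte_ring_enqueue(r, m) != 0)
        rte_pause();
}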


RE: [PUB] Re: [PATCH] app: fix silent enqueue fail in test_mbuf test_refcnt_iter

2023-08-10 Thread Julien Hascoet
Yes, just did it.
Thanks!

From: Stephen Hemminger 
Sent: Thursday, 10 August 2023 17:33
To: jhascoet 
Cc: david.march...@redhat.com ; dev@dpdk.org 

Subject: [PUB] Re: [PATCH] app: fix silent enqueue fail in test_mbuf 
test_refcnt_iter

On Thu, 10 Aug 2023 08:00:30 +0200
jhascoet  wrote:

> diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
> index efac01806b..be114e3302 100644
> --- a/app/test/test_mbuf.c
> +++ b/app/test/test_mbuf.c
> @@ -1033,12 +1033,17 @@ test_refcnt_iter(unsigned int lcore, unsigned int 
> iter,
>tref += ref;
>if ((ref & 1) != 0) {
>rte_pktmbuf_refcnt_update(m, ref);
> - while (ref-- != 0)
> - rte_ring_enqueue(refcnt_mbuf_ring, m);
> + while (ref-- != 0) {
> + /* retry in case of failure */
> + while (rte_ring_enqueue(refcnt_mbuf_ring, m) != 
> 0)
> + ;

Since other side needs to consume these and might be on same lcore,
it might be good place to add rte_pause or sched_yield here?


Re: [PATCH v5 14/14] bus/vmbus: update MAINTAINERS and docs

2023-08-10 Thread Stephen Hemminger
On Sat, 23 Apr 2022 09:58:49 +0530
Srikanth Kaka  wrote:

> updated MAINTAINERS and doc files for FreeBSD support
> 
> Signed-off-by: Srikanth Kaka 
> Signed-off-by: Vag Singh 
> Signed-off-by: Anand Thulasiram 
> ---
>  MAINTAINERS|  2 ++
>  doc/guides/nics/netvsc.rst | 11 +++
>  2 files changed, 13 insertions(+)
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 7c4f541..01a494e 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -567,6 +567,7 @@ F: app/test/test_vdev.c
>  VMBUS bus driver
>  M: Stephen Hemminger 
>  M: Long Li 
> +M: Srikanth Kaka 
>  F: drivers/bus/vmbus/
>  
>  
> @@ -823,6 +824,7 @@ F: doc/guides/nics/vdev_netvsc.rst
>  Microsoft Hyper-V netvsc
>  M: Stephen Hemminger 
>  M: Long Li 
> +M: Srikanth Kaka 
>  F: drivers/net/netvsc/
>  F: doc/guides/nics/netvsc.rst
>  F: doc/guides/nics/features/netvsc.ini
> diff --git a/doc/guides/nics/netvsc.rst b/doc/guides/nics/netvsc.rst
> index 77efe1d..12d1702 100644
> --- a/doc/guides/nics/netvsc.rst
> +++ b/doc/guides/nics/netvsc.rst
> @@ -91,6 +91,12 @@ operations:
>  
> The dpdk-devbind.py script can not be used since it only handles PCI 
> devices.
>  
> +On FreeBSD, with hv_uio kernel driver loaded, do the following:
> +
> +.. code-block:: console
> +
> +devctl set driver -f hn1 hv_uio
> +
>  
>  Prerequisites
>  -
> @@ -101,6 +107,11 @@ The following prerequisites apply:
>  Full support of multiple queues requires the 4.17 kernel. It is possible
>  to use the netvsc PMD with 4.16 kernel but it is limited to a single 
> queue.
>  
> +*   FreeBSD support for UIO on vmbus is done with hv_uio driver and it is 
> still
> +in `review`_
> +
> +.. _`review`: https://reviews.freebsd.org/D32184

Looks like the FreeBSD UIO driver is still not merged upstream.
Any update on that?

For now, I will leave the DPDK patches in patchwork (though they need to be 
rebased) and mark them as "Awaiting upstream".

Alternatively, the BSD driver could be carried in DPDK.
It is up to Bruce, the FreeBSD maintainer, to give feedback.



RE: [PATCH] build: deprecate enable_kmods option

2023-08-10 Thread Morten Brørup
> From: Bruce Richardson [mailto:bruce.richard...@intel.com]
> Sent: Thursday, 10 August 2023 15.18
> 
> With the removal of the kni kernel driver, there are no longer any
> Linux kernel modules in our repository, leaving only modules for FreeBSD
> present. Since:
> 
> * BSD has no issues with out-of-tree modules and
> * There are no in-tree equivalents for those modules in BSD
> 
> there is no point in building for BSD without those modules.
> 
> Therefore, we can remove the enable_kmods option, always building the
> kmods for BSD.
> 
> We can also remove the infrastructure for Linux kmods too, since use of
> out-of-tree modules for Linux is not something the DPDK project wants to
> pursue in future.
> 
> Signed-off-by: Bruce Richardson 
> ---

Acked-by: Morten Brørup 



Re: [dpdk-dev] [PATCH RFC] net/ena: Add Windows support.

2023-08-10 Thread Stephen Hemminger
On Thu, 23 Mar 2023 17:19:55 +0300
Dmitry Kozlyuk  wrote:

> > >> This is a very old thread, but still in the patchwork, I wonder if is
> > >> there any update on the issue?
> > > 
> > > Hi Ferruh,
> > > 
> > > sorry for the late reply - nothing new from my side.
> > > 
> > > I'm not sure what's the current state of the netuio interrupt support
> > > for the Windows, but if it still didn't land, then the original issue
> > > still persists.
> > > 
> > 
> > Hi William, Dmitry,
> > 
> > Is there any update on the netuio interrupt support?
> > What is the plan for this patch?  
> 
> Hi Ferruh,
> 
> The work on the interrupt support has been abandoned unfortunately.
> I'd like to complete it one day, but can't make a commitment right now.
> 
> Ref:
> http://patchwork.dpdk.org/project/dpdk/patch/20211012011107.431188-1-dmitry.kozl...@gmail.com/

Marking this patch as "Awaiting upstream" since it depends on interrupt
support in Windows which is not available.


[PATCH] mbuf: add ESP packet type

2023-08-10 Thread Alexander Kozyrev
Support the IP Encapsulating Security Payload (ESP) in transport mode.

Signed-off-by: Alexander Kozyrev 
---
 lib/mbuf/rte_mbuf_ptype.h | 23 +++
 1 file changed, 23 insertions(+)

diff --git a/lib/mbuf/rte_mbuf_ptype.h b/lib/mbuf/rte_mbuf_ptype.h
index 17a2dd3576..7cb7da 100644
--- a/lib/mbuf/rte_mbuf_ptype.h
+++ b/lib/mbuf/rte_mbuf_ptype.h
@@ -308,6 +308,17 @@ extern "C" {
  * | 'version'=4, 'protocol'=2, 'MF'=0, 'frag_offset'=0>
  */
 #define RTE_PTYPE_L4_IGMP   0x0700
+/**
+ * ESP (IP Encapsulating Security Payload) transport packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=50>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=50>
+ */
+#define RTE_PTYPE_L4_ESP0x0800
 /**
  * Mask of layer 4 packet types.
  * It is used for outer packet for tunneling cases.
@@ -658,6 +669,18 @@ extern "C" {
  * | 'version'=6, 'next header'!=[6|17|44|132|1]>
  */
 #define RTE_PTYPE_INNER_L4_NONFRAG  0x0600
+/**
+ * ESP (IP Encapsulating Security Payload) transport packet type.
+ * It is used for inner packet only.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=50>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=50>
+ */
+#define RTE_PTYPE_INNER_L4_ESP  0x0800
 /**
  * Mask of inner layer 4 packet types.
  */
-- 
2.18.2
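
Assuming the RTE_PTYPE_L4_ESP and RTE_PTYPE_INNER_L4_ESP values proposed above,
classification code could then test for ESP transport packets as in this sketch:

#include <rte_mbuf.h>
#include <rte_mbuf_ptype.h>

/* Sketch: check whether a parsed mbuf carries an ESP transport payload,
 * for either the outer or the (tunnelled) inner headers. */
static int
pkt_is_esp(const struct rte_mbuf *m)
{
    return (m->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_ESP ||
           (m->packet_type & RTE_PTYPE_INNER_L4_MASK) == RTE_PTYPE_INNER_L4_ESP;
}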



RE: [PATCH] mbuf: add ESP packet type

2023-08-10 Thread Morten Brørup
> From: Alexander Kozyrev [mailto:akozy...@nvidia.com]
> Sent: Thursday, 10 August 2023 17.54
> 
> Support the IP Encapsulating Security Payload (ESP) in transport mode.
> 
> Signed-off-by: Alexander Kozyrev 
> ---
>  lib/mbuf/rte_mbuf_ptype.h | 23 +++
>  1 file changed, 23 insertions(+)
> 
> diff --git a/lib/mbuf/rte_mbuf_ptype.h b/lib/mbuf/rte_mbuf_ptype.h
> index 17a2dd3576..7cb7da 100644
> --- a/lib/mbuf/rte_mbuf_ptype.h
> +++ b/lib/mbuf/rte_mbuf_ptype.h
> @@ -308,6 +308,17 @@ extern "C" {
>   * | 'version'=4, 'protocol'=2, 'MF'=0, 'frag_offset'=0>
>   */
>  #define RTE_PTYPE_L4_IGMP   0x0700
> +/**
> + * ESP (IP Encapsulating Security Payload) transport packet type.
> + *
> + * Packet format:
> + * <'ether type'=0x0800
> + * | 'version'=4, 'protocol'=50>

The non-fragment criteria seem to be missing:

* | 'version'=4, 'protocol'=50, 'MF'=0, 'frag_offset'=0>

> + * or,
> + * <'ether type'=0x86DD
> + * | 'version'=6, 'next header'=50>
> + */
> +#define RTE_PTYPE_L4_ESP    0x0800
>  /**
>   * Mask of layer 4 packet types.
>   * It is used for outer packet for tunneling cases.
> @@ -658,6 +669,18 @@ extern "C" {
>   * | 'version'=6, 'next header'!=[6|17|44|132|1]>
>   */
>  #define RTE_PTYPE_INNER_L4_NONFRAG  0x0600
> +/**
> + * ESP (IP Encapsulating Security Payload) transport packet type.
> + * It is used for inner packet only.
> + *
> + * Packet format (inner only):
> + * <'ether type'=0x0800
> + * | 'version'=4, 'protocol'=50>

Also missing here:
* | 'version'=4, 'protocol'=50, 'MF'=0, 'frag_offset'=0>

> + * or,
> + * <'ether type'=0x86DD
> + * | 'version'=6, 'next header'=50>
> + */
> +#define RTE_PTYPE_INNER_L4_ESP  0x0800
>  /**
>   * Mask of inner layer 4 packet types.
>   */
> --
> 2.18.2



[PATCH] ethdev: add packet type matching item

2023-08-10 Thread Alexander Kozyrev
Add RTE_FLOW_ITEM_TYPE_PTYPE to allow matching on
L2/L3/L4 and tunnel information as defined in mbuf.

Signed-off-by: Alexander Kozyrev 
---
 app/test-pmd/cmdline_flow.c | 27 +
 doc/guides/nics/features/default.ini|  1 +
 doc/guides/prog_guide/rte_flow.rst  |  7 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  4 +++
 lib/ethdev/rte_flow.c   |  1 +
 lib/ethdev/rte_flow.h   | 25 +++
 6 files changed, 65 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 94827bcc4a..853a6d25e0 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -524,6 +524,8 @@ enum index {
ITEM_IB_BTH_PSN,
ITEM_IPV6_PUSH_REMOVE_EXT,
ITEM_IPV6_PUSH_REMOVE_EXT_TYPE,
+   ITEM_PTYPE,
+   ITEM_PTYPE_VALUE,
 
/* Validate/create actions. */
ACTIONS,
@@ -1561,6 +1563,7 @@ static const enum index next_item[] = {
ITEM_AGGR_AFFINITY,
ITEM_TX_QUEUE,
ITEM_IB_BTH,
+   ITEM_PTYPE,
END_SET,
ZERO,
 };
@@ -2079,6 +2082,12 @@ static const enum index item_ib_bth[] = {
ZERO,
 };
 
+static const enum index item_ptype[] = {
+   ITEM_PTYPE_VALUE,
+   ITEM_NEXT,
+   ZERO,
+};
+
 static const enum index next_action[] = {
ACTION_END,
ACTION_VOID,
@@ -5827,6 +5836,21 @@ static const struct token token_list[] = {
.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ib_bth,
 hdr.psn)),
},
+   [ITEM_PTYPE] = {
+   .name = "ptype",
+   .help = "match L2/L3/L4 and tunnel information",
+   .priv = PRIV_ITEM(PTYPE,
+ sizeof(struct rte_flow_item_ptype)),
+   .next = NEXT(item_ptype),
+   .call = parse_vc,
+   },
+   [ITEM_PTYPE_VALUE] = {
+   .name = "packet_type",
+   .help = "packet type as defined in rte_mbuf_ptype",
+   .next = NEXT(item_ptype, NEXT_ENTRY(COMMON_UNSIGNED),
+item_param),
+   .args = ARGS(ARGS_ENTRY(struct rte_flow_item_ptype, 
packet_type)),
+   },
/* Validate/create actions. */
[ACTIONS] = {
.name = "actions",
@@ -12689,6 +12713,9 @@ flow_item_default_mask(const struct rte_flow_item *item)
case RTE_FLOW_ITEM_TYPE_IB_BTH:
mask = &rte_flow_item_ib_bth_mask;
break;
+   case RTE_FLOW_ITEM_TYPE_PTYPE:
+   mask = &rte_flow_item_ptype_mask;
+   break;
default:
break;
}
diff --git a/doc/guides/nics/features/default.ini 
b/doc/guides/nics/features/default.ini
index 2011e97127..e41a97b3bb 100644
--- a/doc/guides/nics/features/default.ini
+++ b/doc/guides/nics/features/default.ini
@@ -137,6 +137,7 @@ ppp  =
 pppoed   =
 pppoes   =
 pppoe_proto_id   =
+ptype=
 quota=
 raw  =
 represented_port =
diff --git a/doc/guides/prog_guide/rte_flow.rst 
b/doc/guides/prog_guide/rte_flow.rst
index 5bc998a433..62a6dbb7f9 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1566,6 +1566,13 @@ Matches an InfiniBand base transport header in RoCE 
packet.
 
 - ``hdr``: InfiniBand base transport header definition (``rte_ib.h``).
 
+Item: ``PTYPE``
+^^^
+
+Matches the packet type as defined in rte_mbuf_ptype.
+
+- ``packet_type``: L2/L3/L4 and tunnel information.
+
 Actions
 ~~~
 
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst 
b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index a182479ab2..8dc711bfc4 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3805,6 +3805,10 @@ This section lists supported pattern items and their 
attributes, if any.
 
 - ``send_to_kernel``: send packets to kernel.
 
+- ``ptype``: match the packet type (L2/L3/L4 and tunnel information).
+
+- ``packet_type {unsigned}``: packet type.
+
 
 Actions list
 
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 271d854f78..71583bc174 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -166,6 +166,7 @@ static const struct rte_flow_desc_data rte_flow_desc_item[] 
= {
MK_FLOW_ITEM(AGGR_AFFINITY, sizeof(struct rte_flow_item_aggr_affinity)),
MK_FLOW_ITEM(TX_QUEUE, sizeof(struct rte_flow_item_tx_queue)),
MK_FLOW_ITEM(IB_BTH, sizeof(struct rte_flow_item_ib_bth)),
+   MK_FLOW_ITEM(PTYPE, sizeof(struct rte_flow_item_ptype)),
 };
 
 /** Generate flow_action[] entry. */
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 86ed98c562..de941a5867 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -688,6 +688,14 @@ enum rte_flow_item_ty
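
As a usage aside (not taken from the patch): a minimal sketch of installing a
rule with the proposed PTYPE item via rte_flow_create(), matching on the
existing RTE_PTYPE_L4_TCP value. The function name, port id and queue index
are placeholders, and success depends on the PMD supporting the new item.

/* Minimal sketch: create_ptype_tcp_flow() is a hypothetical helper. */
#include <rte_flow.h>
#include <rte_mbuf_ptype.h>

static struct rte_flow *
create_ptype_tcp_flow(uint16_t port_id, struct rte_flow_error *error)
{
        const struct rte_flow_attr attr = { .ingress = 1 };
        /* Proposed item conf: value and mask as defined in rte_mbuf_ptype. */
        const struct rte_flow_item_ptype spec = { .packet_type = RTE_PTYPE_L4_TCP };
        const struct rte_flow_item_ptype mask = { .packet_type = RTE_PTYPE_L4_MASK };
        const struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_PTYPE, .spec = &spec, .mask = &mask },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        const struct rte_flow_action_queue queue = { .index = 0 };
        const struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(port_id, &attr, pattern, actions, error);
}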

Re: [PATCH v5] build: update DPDK to use C11 standard

2023-08-10 Thread Bruce Richardson
On Thu, Aug 10, 2023 at 07:48:39AM -0700, Stephen Hemminger wrote:
> On Thu, 10 Aug 2023 15:34:43 +0200
> Thomas Monjalon  wrote:
> 
> > 03/08/2023 15:36, David Marchand:
> > > On Wed, Aug 2, 2023 at 2:32 PM Bruce Richardson
> > >  wrote:  
> > > >
> > > > As previously announced, DPDK 23.11 will require a C11 supporting
> > > > compiler and will use the C11 standard in all builds.
> > > >
> > > > Forcing use of the C standard, rather than the standard with
> > > > GNU extensions, means that some posix definitions which are not in
> > > > the C standard are unavailable by default. We fix this by ensuring
> > > > the correct defines or cflags are passed to the components that
> > > > need them.
> > > >
> > > > Signed-off-by: Bruce Richardson 
> > > > Acked-by: Morten Brørup 
> > > > Acked-by: Tyler Retzlaff   
> > > Tested-by: Ali Alnubani 
> > > 
> > > The CI results look good.
> > > 
> > > Applied, thanks!  
> > 
> > The compiler support is updated, that's fine.
> > Should we go further and document some major Linux distributions?
> > One concern is to make clear RHEL 7 is not supported anymore.
> > Should it be a release note?
> > 

Well, DPDK currently is still building fine on Centos 7 for me, so let's
hold off on claiming anything until it's definitely broken.

> > 
> 
> Should be addressed in linux/sys_reqs.rst as well as deprecation notice.
> Also, is it possible to add automated check in build for compiler version?

I'd be a little careful about what we claim, and I think current docs are
accurate vs our original plans. What we didn't plan to support was the GCC
and Clang compiler versions in RHEL 7, but if one installs an updated GCC,
for example, the build should be fine on RHEL 7.

Now, though, we are having to re-evaluate our use of stdatomics, which
means we may not actually break RHEL 7 compatibility after all. We'll have
to "watch this space" as the saying goes!

Overall, I think the approach of build-time checks is the best, but not
for specific versions, but instead for capabilities. If/when we add support
for stdatomics to DPDK builds on Linux/BSD, at that point we put in the
initial compiler checks a suitable check for them being present and output
a suitable error if not found.

/Bruce


Re: [PATCH v5] build: update DPDK to use C11 standard

2023-08-10 Thread Thomas Monjalon
10/08/2023 18:35, Bruce Richardson:
> On Thu, Aug 10, 2023 at 07:48:39AM -0700, Stephen Hemminger wrote:
> > On Thu, 10 Aug 2023 15:34:43 +0200
> > Thomas Monjalon  wrote:
> > 
> > > 03/08/2023 15:36, David Marchand:
> > > > On Wed, Aug 2, 2023 at 2:32 PM Bruce Richardson
> > > >  wrote:  
> > > > >
> > > > > As previously announced, DPDK 23.11 will require a C11 supporting
> > > > > compiler and will use the C11 standard in all builds.
> > > > >
> > > > > Forcing use of the C standard, rather than the standard with
> > > > > GNU extensions, means that some posix definitions which are not in
> > > > > the C standard are unavailable by default. We fix this by ensuring
> > > > > the correct defines or cflags are passed to the components that
> > > > > need them.
> > > > >
> > > > > Signed-off-by: Bruce Richardson 
> > > > > Acked-by: Morten Brørup 
> > > > > Acked-by: Tyler Retzlaff   
> > > > Tested-by: Ali Alnubani 
> > > > 
> > > > The CI results look good.
> > > > 
> > > > Applied, thanks!  
> > > 
> > > The compiler support is updated, that's fine.
> > > Should we go further and document some major Linux distributions?
> > > One concern is to make clear RHEL 7 is not supported anymore.
> > > Should it be a release note?
> > > 
> 
> Well, DPDK currently is still building fine on Centos 7 for me, so let's
> hold off on claiming anything until it's definitely broken.
> 
> > > 
> > 
> > Should be addressed in linux/sys_reqs.rst as well as deprecation notice.
> > Also, is it possible to add automated check in build for compiler version?
> 
> I'd be a little careful about what we claim, and I think current docs are
> accurate vs our original plans. What we didn't plan to support was the GCC
> and Clang compiler versions in RHEL 7, but if one installs an updated GCC,
> for example, the build should be fine on RHEL 7.
> 
> Now, though, we are having to re-evaluate our use of stdatomics, which
> means we may not actually break RHEL 7 compatibility after all. We'll have
> to "watch this space" as the saying goes!
> 
> Overall, I think the approach of build-time checks is the best, but not
> for specific versions, but instead for capabilities. If/when we add support
> for stdatomics to DPDK builds on Linux/BSD, at that point we put in the
> initial compiler checks a suitable check for them being present and output
> a suitable error if not found.

OK looks good






[RFC PATCH v2] dmadev: offload to free source buffer

2023-08-10 Thread Amit Prakash Shukla
This changeset adds support in the DMA library to free the source DMA buffer
by hardware. On supported hardware, the application can pass the mempool
information as part of the vchan config when the DMA transfer direction is
configured as RTE_DMA_DIR_MEM_TO_DEV.

Signed-off-by: Amit Prakash Shukla 
---
v2:
- Added note related to mempool.

 lib/dmadev/rte_dmadev.h | 27 +++
 1 file changed, 27 insertions(+)

diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h
index e61d71959e..f4879d9fd8 100644
--- a/lib/dmadev/rte_dmadev.h
+++ b/lib/dmadev/rte_dmadev.h
@@ -278,6 +278,13 @@ int16_t rte_dma_next_dev(int16_t start_dev_id);
 #define RTE_DMA_CAPA_OPS_COPY_SG   RTE_BIT64(33)
 /** Support fill operation. */
 #define RTE_DMA_CAPA_OPS_FILL  RTE_BIT64(34)
+/** Support for source buffer free for mem to dev transfer.
+ *
+ * @note Even though the DMA driver has this capability, it may not support all
+ * mempool drivers. If the mempool is not supported by the DMA driver,
+ * rte_dma_vchan_setup() will fail.
+ **/
+#define RTE_DMA_CAPA_MEM_TO_DEV_SOURCE_BUFFER_FREE RTE_BIT64(35)
 /**@}*/
 
 /**
@@ -582,6 +589,19 @@ struct rte_dma_vchan_conf {
 * @see struct rte_dma_port_param
 */
struct rte_dma_port_param dst_port;
+   /** mempool from which source buffer is allocated. mempool info is used
+* for freeing source buffer by hardware when configured direction is
+* RTE_DMA_DIR_MEM_TO_DEV. To free the source buffer by hardware,
+* RTE_DMA_OP_FLAG_FREE_SBUF must be set while calling rte_dma_copy and
+* rte_dma_copy_sg().
+*
+* @note If the mempool is not supported by the DMA driver,
+* rte_dma_vchan_setup() will fail.
+*
+* @see RTE_DMA_OP_FLAG_FREE_SBUF
+*/
+   struct rte_mempool *mem_to_dev_src_buf_pool;
+
 };
 
 /**
@@ -819,6 +839,13 @@ struct rte_dma_sge {
  * capability bit for this, driver should not return error if this flag was 
set.
  */
 #define RTE_DMA_OP_FLAG_LLC RTE_BIT64(2)
+/** Mem to dev source buffer free flag.
+ * Used for freeing source DMA buffer by hardware when the transfer direction 
is
+ * configured as RTE_DMA_DIR_MEM_TO_DEV.
+ *
+ * @see struct rte_dma_vchan_conf::mem_to_dev_src_buf_pool
+ */
+#define RTE_DMA_OP_FLAG_FREE_SBUF  RTE_BIT64(3)
 /**@}*/
 
 /**
-- 
2.25.1
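
As a usage aside (not part of the patch): a minimal sketch of how an
application might opt in to the proposed offload. The function name,
device/vchan ids and descriptor count are placeholders, and the dst_port,
configure and start steps required for a MEM_TO_DEV vchan are only noted
in comments.

/* Minimal sketch: setup_and_copy() is a hypothetical helper. */
#include <errno.h>
#include <rte_dmadev.h>
#include <rte_mempool.h>

static int
setup_and_copy(int16_t dev_id, uint16_t vchan, struct rte_mempool *src_pool,
               rte_iova_t src, rte_iova_t dst, uint32_t len)
{
        struct rte_dma_info info;
        struct rte_dma_vchan_conf conf = {
                .direction = RTE_DMA_DIR_MEM_TO_DEV,
                .nb_desc = 1024,
                /* Proposed field: mempool backing the source buffers. */
                .mem_to_dev_src_buf_pool = src_pool,
                /* .dst_port (PCIe target) setup omitted for brevity. */
        };

        if (rte_dma_info_get(dev_id, &info) != 0)
                return -1;

        /* Proposed capability bit; only request the offload if advertised. */
        if (!(info.dev_capa & RTE_DMA_CAPA_MEM_TO_DEV_SOURCE_BUFFER_FREE))
                return -ENOTSUP;

        /* rte_dma_configure() is assumed to have been called beforehand.
         * Fails if the mempool driver is not supported by the DMA driver.
         */
        if (rte_dma_vchan_setup(dev_id, vchan, &conf) != 0)
                return -1;

        /* rte_dma_start() is assumed before enqueueing. The proposed flag
         * asks the hardware to free the source buffer once the copy is done.
         */
        return rte_dma_copy(dev_id, vchan, src, dst, len,
                            RTE_DMA_OP_FLAG_FREE_SBUF | RTE_DMA_OP_FLAG_SUBMIT);
}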



Re: [PATCH v5] build: update DPDK to use C11 standard

2023-08-10 Thread Stephen Hemminger
On Thu, 10 Aug 2023 18:49:09 +0200
Thomas Monjalon  wrote:

> 10/08/2023 18:35, Bruce Richardson:
> > On Thu, Aug 10, 2023 at 07:48:39AM -0700, Stephen Hemminger wrote:  
> > > On Thu, 10 Aug 2023 15:34:43 +0200
> > > Thomas Monjalon  wrote:
> > >   
> > > > 03/08/2023 15:36, David Marchand:  
> > > > > On Wed, Aug 2, 2023 at 2:32 PM Bruce Richardson
> > > > >  wrote:
> > > > > >
> > > > > > As previously announced, DPDK 23.11 will require a C11 supporting
> > > > > > compiler and will use the C11 standard in all builds.
> > > > > >
> > > > > > Forcing use of the C standard, rather than the standard with
> > > > > > GNU extensions, means that some posix definitions which are not in
> > > > > > the C standard are unavailable by default. We fix this by ensuring
> > > > > > the correct defines or cflags are passed to the components that
> > > > > > need them.
> > > > > >
> > > > > > Signed-off-by: Bruce Richardson 
> > > > > > Acked-by: Morten Brørup 
> > > > > > Acked-by: Tyler Retzlaff 
> > > > > Tested-by: Ali Alnubani 
> > > > > 
> > > > > The CI results look good.
> > > > > 
> > > > > Applied, thanks!
> > > > 
> > > > The compiler support is updated, that's fine.
> > > > Should we go further and document some major Linux distributions?
> > > > One concern is to make clear RHEL 7 is not supported anymore.
> > > > Should it be a release note?
> > > >   
> > 
> > Well, DPDK currently is still building fine on Centos 7 for me, so let's
> > hold off on claiming anything until it's definitely broken.
> >   
> > > >   
> > > 
> > > Should be addressed in linux/sys_reqs.rst as well as deprecation notice.
> > > Also, is it possible to add automated check in build for compiler 
> > > version?  
> > 
> > I'd be a little careful about what we claim, and I think current docs are
> > accurate vs our original plans. What we didn't plan to support was the GCC
> > and Clang compiler versions in RHEL 7, but if one installs an updated GCC,
> > for example, the build should be fine on RHEL 7.
> > 
> > Now, though, we are having to re-evaluate our use of stdatomics, which
> > means we may not actually break RHEL 7 compatibility after all. We'll have
> > to "watch this space" as the saying goes!
> > 
> > Overall, I think the approach of build-time checks is the best, but not
> > for specific versions, but instead for capabilities. If/when we add support
> > for stdatomics to DPDK builds on Linux/BSD, at that point we put in the
> > initial compiler checks a suitable check for them being present and output
> > a suitable error if not found.  
> 
> OK looks good

Note: RHEL 7 official end of maintenance support is not until June 2024.




[PATCH 1/1] net/sfc: add missing error code indication to MAE init path

2023-08-10 Thread Ivan Malov
A failure to allocate a bounce buffer for encap. header
parsing results in falling to the error path but does
not set an appropriate error code. Fix this.

Fixes: 1bbd1ec2348a ("net/sfc: support action VXLAN encap in MAE backend")
Cc: sta...@dpdk.org

Signed-off-by: Ivan Malov 
Reviewed-by: Andy Moreton 
---
 .mailmap  | 2 +-
 drivers/net/sfc/sfc_mae.c | 4 +++-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/.mailmap b/.mailmap
index 864d33ee46..ec31ab8bd0 100644
--- a/.mailmap
+++ b/.mailmap
@@ -106,7 +106,7 @@ Andriy Berestovskyy  

 Andrzej Ostruszka  
 Andy Gospodarek  
 Andy Green 
-Andy Moreton  
+Andy Moreton   

 Andy Pei 
 Anirudh Venkataramanan 
 Ankur Dwivedi   

diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index f5fe55b46f..bf1c2f60c2 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -215,8 +215,10 @@ sfc_mae_attach(struct sfc_adapter *sa)
bounce_eh->buf_size = limits.eml_encap_header_size_limit;
bounce_eh->buf = rte_malloc("sfc_mae_bounce_eh",
bounce_eh->buf_size, 0);
-   if (bounce_eh->buf == NULL)
+   if (bounce_eh->buf == NULL) {
+   rc = ENOMEM;
goto fail_mae_alloc_bounce_eh;
+   }
 
mae->nb_outer_rule_prios_max = limits.eml_max_n_outer_prios;
mae->nb_action_rule_prios_max = limits.eml_max_n_action_prios;
-- 
2.17.1



[PATCH] graph: mark API's as stable

2023-08-10 Thread Stephen Hemminger
The graph library has been marked experimental since initial
release in 2020. Time to take the training wheels off.

Signed-off-by: Stephen Hemminger 
---
 MAINTAINERS|  2 +-
 lib/graph/rte_graph.h  | 34 --
 lib/graph/rte_graph_model_mcore_dispatch.h |  8 -
 lib/graph/rte_graph_worker.h   |  1 -
 lib/graph/rte_graph_worker_common.h| 18 
 lib/graph/version.map  |  2 +-
 6 files changed, 2 insertions(+), 63 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index 6345e7f8a65d..0d36c7e7e84d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1705,7 +1705,7 @@ F: app/test/test_bpf.c
 F: app/test-pmd/bpf_cmd.*
 F: doc/guides/prog_guide/bpf_lib.rst
 
-Graph - EXPERIMENTAL
+Graph
 M: Jerin Jacob 
 M: Kiran Kumar K 
 M: Nithin Dabilpuram 
diff --git a/lib/graph/rte_graph.h b/lib/graph/rte_graph.h
index 7e94c151ae42..7c606eda710d 100644
--- a/lib/graph/rte_graph.h
+++ b/lib/graph/rte_graph.h
@@ -8,10 +8,6 @@
 /**
  * @file rte_graph.h
  *
- * @warning
- * @b EXPERIMENTAL:
- * All functions in this file may be changed or removed without prior notice.
- *
  * Graph architecture abstracts the data processing functions as
  * "node" and "link" them together to create a complex "graph" to enable
  * reusable/modular data processing functions.
@@ -249,7 +245,6 @@ struct rte_graph_cluster_node_stats {
  * @return
  *   Unique graph id on success, RTE_GRAPH_ID_INVALID otherwise.
  */
-__rte_experimental
 rte_graph_t rte_graph_create(const char *name, struct rte_graph_param *prm);
 
 /**
@@ -263,7 +258,6 @@ rte_graph_t rte_graph_create(const char *name, struct 
rte_graph_param *prm);
  * @return
  *   0 on success, error otherwise.
  */
-__rte_experimental
 int rte_graph_destroy(rte_graph_t id);
 
 /**
@@ -285,7 +279,6 @@ int rte_graph_destroy(rte_graph_t id);
  * @return
  *   Valid graph id on success, RTE_GRAPH_ID_INVALID otherwise.
  */
-__rte_experimental
 rte_graph_t rte_graph_clone(rte_graph_t id, const char *name, struct 
rte_graph_param *prm);
 
 /**
@@ -297,7 +290,6 @@ rte_graph_t rte_graph_clone(rte_graph_t id, const char 
*name, struct rte_graph_p
  * @return
  *   Graph id on success, RTE_GRAPH_ID_INVALID otherwise.
  */
-__rte_experimental
 rte_graph_t rte_graph_from_name(const char *name);
 
 /**
@@ -309,7 +301,6 @@ rte_graph_t rte_graph_from_name(const char *name);
  * @return
  *   Graph name on success, NULL otherwise.
  */
-__rte_experimental
 char *rte_graph_id_to_name(rte_graph_t id);
 
 /**
@@ -323,7 +314,6 @@ char *rte_graph_id_to_name(rte_graph_t id);
  * @return
  *   0 on success, error otherwise.
  */
-__rte_experimental
 int rte_graph_export(const char *name, FILE *f);
 
 /**
@@ -336,7 +326,6 @@ int rte_graph_export(const char *name, FILE *f);
  * @return
  *   0 on success, error otherwise.
  */
-__rte_experimental
 int rte_graph_model_mcore_dispatch_core_bind(rte_graph_t id, int lcore);
 
 /**
@@ -345,7 +334,6 @@ int rte_graph_model_mcore_dispatch_core_bind(rte_graph_t 
id, int lcore);
  * @param id
  * Graph id to get the pointer of graph object
  */
-__rte_experimental
 void rte_graph_model_mcore_dispatch_core_unbind(rte_graph_t id);
 
 /**
@@ -362,7 +350,6 @@ void rte_graph_model_mcore_dispatch_core_unbind(rte_graph_t 
id);
  *
  * @see rte_graph_walk()
  */
-__rte_experimental
 struct rte_graph *rte_graph_lookup(const char *name);
 
 /**
@@ -371,7 +358,6 @@ struct rte_graph *rte_graph_lookup(const char *name);
  * @return
  *   Maximum graph count.
  */
-__rte_experimental
 rte_graph_t rte_graph_max_count(void);
 
 /**
@@ -382,7 +368,6 @@ rte_graph_t rte_graph_max_count(void);
  * @param id
  *   Graph id to get graph info.
  */
-__rte_experimental
 void rte_graph_dump(FILE *f, rte_graph_t id);
 
 /**
@@ -391,7 +376,6 @@ void rte_graph_dump(FILE *f, rte_graph_t id);
  * @param f
  *   File pointer to dump graph info.
  */
-__rte_experimental
 void rte_graph_list_dump(FILE *f);
 
 /**
@@ -404,7 +388,6 @@ void rte_graph_list_dump(FILE *f);
  * @param all
  *   true to dump nodes in the graph.
  */
-__rte_experimental
 void rte_graph_obj_dump(FILE *f, struct rte_graph *graph, bool all);
 
 /** Macro to browse rte_node object after the graph creation */
@@ -425,7 +408,6 @@ void rte_graph_obj_dump(FILE *f, struct rte_graph *graph, 
bool all);
  * @return
  *   Node pointer on success, NULL otherwise.
  */
-__rte_experimental
 struct rte_node *rte_graph_node_get(rte_graph_t graph_id, rte_node_t node_id);
 
 /**
@@ -439,7 +421,6 @@ struct rte_node *rte_graph_node_get(rte_graph_t graph_id, 
rte_node_t node_id);
  * @return
  *   Node pointer on success, NULL otherwise.
  */
-__rte_experimental
 struct rte_node *rte_graph_node_get_by_name(const char *graph,
const char *name);
 
@@ -453,7 +434,6 @@ struct rte_node *rte_graph_node_get_by_name(const char 
*graph,
  * @return
  *   Valid pointer on success, NULL otherwise.
  */
-__rte_ex

[PATCH 1/2] net/sfc: offer indirect VXLAN encap action in transfer flows

2023-08-10 Thread Ivan Malov
Parsing an inline VXLAN_ENCAP action that repeats across many flows is
expensive, so offer support for its indirect version. The query
operation is not supported for this action. The next patch
will add a means to update the encapsulation header data.

Signed-off-by: Ivan Malov 
Reviewed-by: Andy Moreton 
---
 .mailmap   |  2 +-
 doc/guides/rel_notes/release_23_11.rst |  4 ++
 drivers/net/sfc/sfc_flow.h |  1 +
 drivers/net/sfc/sfc_mae.c  | 51 ++
 drivers/net/sfc/sfc_mae.h  |  1 +
 5 files changed, 58 insertions(+), 1 deletion(-)

diff --git a/.mailmap b/.mailmap
index 864d33ee46..ec31ab8bd0 100644
--- a/.mailmap
+++ b/.mailmap
@@ -106,7 +106,7 @@ Andriy Berestovskyy  

 Andrzej Ostruszka  
 Andy Gospodarek  
 Andy Green 
-Andy Moreton  
+Andy Moreton   

 Andy Pei 
 Anirudh Venkataramanan 
 Ankur Dwivedi   

diff --git a/doc/guides/rel_notes/release_23_11.rst 
b/doc/guides/rel_notes/release_23_11.rst
index 6b4dd21fd0..dd10110fff 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -55,6 +55,10 @@ New Features
  Also, make sure to start the actual text at the margin.
  ===
 
+* **Updated Solarflare network PMD.**
+
+  * Added support for transfer flow action INDIRECT with subtype VXLAN_ENCAP.
+
 
 Removed Items
 -
diff --git a/drivers/net/sfc/sfc_flow.h b/drivers/net/sfc/sfc_flow.h
index 601f93e540..95fcb5abe0 100644
--- a/drivers/net/sfc/sfc_flow.h
+++ b/drivers/net/sfc/sfc_flow.h
@@ -98,6 +98,7 @@ struct rte_flow_action_handle {
enum rte_flow_action_type   type;
 
union {
+   struct sfc_mae_encap_header *encap_header;
struct sfc_mae_counter  *counter;
};
 };
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index f5fe55b46f..45a66307d2 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -663,6 +663,9 @@ sfc_mae_encap_header_attach(struct sfc_adapter *sa,
SFC_ASSERT(sfc_adapter_is_locked(sa));
 
TAILQ_FOREACH(encap_header, &mae->encap_headers, entries) {
+   if (encap_header->indirect)
+   continue;
+
if (encap_header->size == bounce_eh->size &&
memcmp(encap_header->buf, bounce_eh->buf,
   bounce_eh->size) == 0) {
@@ -4057,6 +4060,9 @@ sfc_mae_rule_parse_action_vxlan_encap(
/* Take care of the masks. */
sfc_mae_header_force_item_masks(buf, parsed_items, nb_parsed_items);
 
+   if (spec == NULL)
+   return 0;
+
rc = efx_mae_action_set_populate_encap(spec);
if (rc != 0) {
rc = rte_flow_error_set(error, rc, RTE_FLOW_ERROR_TYPE_ACTION,
@@ -4160,6 +4166,23 @@ sfc_mae_rule_parse_action_indirect(struct sfc_adapter 
*sa,
sfc_dbg(sa, "attaching to indirect_action=%p", entry);
 
switch (entry->type) {
+   case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
+   if (ctx->encap_header != NULL) {
+   return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "cannot have multiple actions 
VXLAN_ENCAP in one flow");
+   }
+
+   rc = 
efx_mae_action_set_populate_encap(ctx->spec);
+   if (rc != 0) {
+   return rte_flow_error_set(error, rc,
+RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+"failed to add ENCAP to MAE action 
set");
+   }
+
+   ctx->encap_header = entry->encap_header;
+   ++(ctx->encap_header->refcnt);
+   break;
case RTE_FLOW_ACTION_TYPE_COUNT:
if (ft_rule_type != SFC_FT_RULE_NONE) {
return rte_flow_error_set(error, EINVAL,
@@ -5182,12 +5205,31 @@ sfc_mae_indir_action_create(struct sfc_adapter *sa,
struct rte_flow_action_handle *handle,
struct rte_flow_error *error)
 {
+   struct sfc_mae *mae = &sa->mae;
+   bool custom_error = false;
int ret;
 
SFC_ASSERT(sfc_adapter_is_locked(sa));
SFC_ASSERT(handle != NULL);
 
switch (action->type) {
+   case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
+   /* Cleanup after previous encap. header bounce buffer usage. */
+   sfc_mae_bounce_eh_invalidate(&mae->bounce_eh);
+
+   ret = sfc_mae_rule_parse_action_vxlan_encap(mae, action->conf,
+

[PATCH 2/2] net/sfc: support updating indirect VXLAN encap action

2023-08-10 Thread Ivan Malov
Such updates are helpful as they let applications avoid
costly flow re-insertions when the header data changes.

Signed-off-by: Ivan Malov 
Reviewed-by: Andy Moreton 
---
 drivers/common/sfc_efx/base/efx.h |  9 +++
 drivers/common/sfc_efx/base/efx_mae.c | 80 
 drivers/common/sfc_efx/version.map|  1 +
 drivers/net/sfc/sfc_flow.c| 35 +++
 drivers/net/sfc/sfc_mae.c | 88 +++
 drivers/net/sfc/sfc_mae.h |  5 ++
 6 files changed, 218 insertions(+)

diff --git a/drivers/common/sfc_efx/base/efx.h 
b/drivers/common/sfc_efx/base/efx.h
index efefea717f..b4d8cfe9d8 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -4811,6 +4811,15 @@ efx_mae_encap_header_alloc(
__insize_t header_size,
__out   efx_mae_eh_id_t *eh_idp);
 
+LIBEFX_API
+extern __checkReturn   efx_rc_t
+efx_mae_encap_header_update(
+   __inefx_nic_t *enp,
+   __inefx_mae_eh_id_t *eh_idp,
+   __inefx_tunnel_protocol_t encap_type,
+   __in_bcount(header_size)const uint8_t *header_data,
+   __insize_t header_size);
+
 LIBEFX_API
 extern __checkReturn   efx_rc_t
 efx_mae_encap_header_free(
diff --git a/drivers/common/sfc_efx/base/efx_mae.c 
b/drivers/common/sfc_efx/base/efx_mae.c
index d36cdc71be..0d7b24d351 100644
--- a/drivers/common/sfc_efx/base/efx_mae.c
+++ b/drivers/common/sfc_efx/base/efx_mae.c
@@ -2937,6 +2937,86 @@ efx_mae_encap_header_alloc(
EFSYS_PROBE(fail6);
 fail5:
EFSYS_PROBE(fail5);
+fail4:
+   EFSYS_PROBE(fail4);
+fail3:
+   EFSYS_PROBE(fail3);
+fail2:
+   EFSYS_PROBE(fail2);
+fail1:
+   EFSYS_PROBE1(fail1, efx_rc_t, rc);
+   return (rc);
+}
+
+   __checkReturn   efx_rc_t
+efx_mae_encap_header_update(
+   __inefx_nic_t *enp,
+   __inefx_mae_eh_id_t *eh_idp,
+   __inefx_tunnel_protocol_t encap_type,
+   __in_bcount(header_size)const uint8_t *header_data,
+   __insize_t header_size)
+{
+   const efx_nic_cfg_t *encp = efx_nic_cfg_get(enp);
+   efx_mcdi_req_t req;
+   EFX_MCDI_DECLARE_BUF(payload,
+   MC_CMD_MAE_ENCAP_HEADER_UPDATE_IN_LENMAX_MCDI2,
+   MC_CMD_MAE_ENCAP_HEADER_UPDATE_OUT_LEN);
+   uint32_t encap_type_mcdi;
+   efx_rc_t rc;
+
+   if (encp->enc_mae_supported == B_FALSE) {
+   rc = ENOTSUP;
+   goto fail1;
+   }
+
+   switch (encap_type) {
+   case EFX_TUNNEL_PROTOCOL_NONE:
+   encap_type_mcdi = MAE_MCDI_ENCAP_TYPE_NONE;
+   break;
+   case EFX_TUNNEL_PROTOCOL_VXLAN:
+   encap_type_mcdi = MAE_MCDI_ENCAP_TYPE_VXLAN;
+   break;
+   case EFX_TUNNEL_PROTOCOL_GENEVE:
+   encap_type_mcdi = MAE_MCDI_ENCAP_TYPE_GENEVE;
+   break;
+   case EFX_TUNNEL_PROTOCOL_NVGRE:
+   encap_type_mcdi = MAE_MCDI_ENCAP_TYPE_NVGRE;
+   break;
+   default:
+   rc = ENOTSUP;
+   goto fail2;
+   }
+
+   if (header_size >
+  MC_CMD_MAE_ENCAP_HEADER_UPDATE_IN_HDR_DATA_MAXNUM_MCDI2) {
+   rc = EINVAL;
+   goto fail3;
+   }
+
+   req.emr_cmd = MC_CMD_MAE_ENCAP_HEADER_UPDATE;
+   req.emr_in_buf = payload;
+   req.emr_in_length = MC_CMD_MAE_ENCAP_HEADER_UPDATE_IN_LEN(header_size);
+   req.emr_out_buf = payload;
+   req.emr_out_length = MC_CMD_MAE_ENCAP_HEADER_UPDATE_OUT_LEN;
+
+   MCDI_IN_SET_DWORD(req,
+   MAE_ENCAP_HEADER_UPDATE_IN_EH_ID, eh_idp->id);
+
+   MCDI_IN_SET_DWORD(req,
+   MAE_ENCAP_HEADER_UPDATE_IN_ENCAP_TYPE, encap_type_mcdi);
+
+   memcpy(MCDI_IN2(req, uint8_t, MAE_ENCAP_HEADER_UPDATE_IN_HDR_DATA),
+   header_data, header_size);
+
+   efx_mcdi_execute(enp, &req);
+
+   if (req.emr_rc != 0) {
+   rc = req.emr_rc;
+   goto fail4;
+   }
+
+   return (0);
+
 fail4:
EFSYS_PROBE(fail4);
 fail3:
diff --git a/drivers/common/sfc_efx/version.map 
b/drivers/common/sfc_efx/version.map
index 40c97ad2b4..43e8e52ab9 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -123,6 +123,7 @@ INTERNAL {
efx_mae_counters_stream_stop;
efx_mae_encap_header_alloc;
efx_mae_encap_header_free;
+   efx_mae_encap_header_update;
efx_mae_fini;
efx_mae_get_limits;
efx_mae_init;
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index a35f20770d..1b50aefe5c 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -2864,6 +2864,40 @@ sfc_flow_actio

RE: [PATCH v5] build: update DPDK to use C11 standard

2023-08-10 Thread Morten Brørup
> From: Stephen Hemminger [mailto:step...@networkplumber.org]
> Sent: Thursday, 10 August 2023 19.03
> 
> On Thu, 10 Aug 2023 18:49:09 +0200
> Thomas Monjalon  wrote:
> 
> > 10/08/2023 18:35, Bruce Richardson:
> > > On Thu, Aug 10, 2023 at 07:48:39AM -0700, Stephen Hemminger wrote:
> > > > On Thu, 10 Aug 2023 15:34:43 +0200
> > > > Thomas Monjalon  wrote:
> > > >
> > > > > 03/08/2023 15:36, David Marchand:
> > > > > > On Wed, Aug 2, 2023 at 2:32 PM Bruce Richardson
> > > > > >  wrote:
> > > > > > >
> > > > > > > As previously announced, DPDK 23.11 will require a C11
> supporting
> > > > > > > compiler and will use the C11 standard in all builds.
> > > > > > >
> > > > > > > Forcing use of the C standard, rather than the standard with
> > > > > > > GNU extensions, means that some posix definitions which are
> not in
> > > > > > > the C standard are unavailable by default. We fix this by
> ensuring
> > > > > > > the correct defines or cflags are passed to the components
> that
> > > > > > > need them.
> > > > > > >
> > > > > > > Signed-off-by: Bruce Richardson 
> > > > > > > Acked-by: Morten Brørup 
> > > > > > > Acked-by: Tyler Retzlaff 
> > > > > > Tested-by: Ali Alnubani 
> > > > > >
> > > > > > The CI results look good.
> > > > > >
> > > > > > Applied, thanks!
> > > > >
> > > > > The compiler support is updated, that's fine.
> > > > > Should we go further and document some major Linux distributions?
> > > > > One concern is to make clear RHEL 7 is not supported anymore.
> > > > > Should it be a release note?
> > > > >
> > >
> > > Well, DPDK currently is still building fine on Centos 7 for me, so
> let's
> > > hold off on claiming anything until it's definitely broken.
> > >
> > > > >
> > > >
> > > > Should be addressed in linux/sys_reqs.rst as well as deprecation
> notice.
> > > > Also, is it possible to add automated check in build for compiler
> version?
> > >
> > > I'd be a little careful about what we claim, and I think current docs
> are
> > > accurate vs our original plans. What we didn't plan to support was
> the GCC
> > > and Clang compiler versions in RHEL 7, but if one installs an updated
> GCC,
> > > for example, the build should be fine on RHEL 7.
> > >
> > > Now, though, we are having to re-evaluate our use of stdatomics,
> which
> > > means we may not actually break RHEL 7 compatibility after all. We'll
> have
> > > to "watch this space" as the saying goes!
> > >
> > > Overall, I think the approach of build-time checks is the best, but
> not
> > > for specific versions, but instead for capabilities. If/when we add
> support
> > > for stdatomics to DPDK builds on Linux/BSD, at that point we put in
> the
> > > initial compiler checks a suitable check for them being present and
> output
> > > a suitable error if not found.

Exactly. Capabilities checks is the right way to go when cross compiling.

> >
> > OK looks good
> 
> Note: RHEL 7 official end of maintenance support is not until June 2024.
> 

It was agreed to abandon RHEL 7, mainly driven by the need for C11 stdatomic.h, 
which is not supported by the GCC C compiler included with RHEL 7. So it pains 
me to admit that Stephen has a valid point here, after it turned out that the 
GCC g++ is not C11 compatible.

Regardless, I think that DPDK 23.11 support for RHEL 7 should be limited to 
"might work on RHEL 7", rather than guaranteed support for RHEL 7 (which would 
require DPDK CI to resume testing on RHEL 7).

IIRC, there was also the argument that DPDK 23.11 LTS support ends after June 
2024.

Here's another argument for abandoning RHEL 7: RHEL 7 uses Linux Kernel 3.10. 
Although DPDK requires Linux Kernel >= 4.14, we promise backwards compatibility 
for RHEL/CentOS 7. Do we really still want to do that? (Note: RHEL 8 uses Linux 
Kernel 4.18.)

While we're discussing the Linux Kernel version required... Is it documented 
anywhere why a specific Linux Kernel version is required by DPDK? Or more 
specifically: Is it documented anywhere which DPDK features require which 
specific Linux Kernel versions?



RE: [RFC PATCH v2] dmadev: offload to free source buffer

2023-08-10 Thread Morten Brørup
> From: Amit Prakash Shukla [mailto:amitpraka...@marvell.com]
> Sent: Thursday, 10 August 2023 18.53
> 
> This changeset adds support in the DMA library to free the source DMA buffer
> by hardware. On supported hardware, the application can pass the mempool
> information as part of the vchan config when the DMA transfer direction is
> configured as RTE_DMA_DIR_MEM_TO_DEV.
> 
> Signed-off-by: Amit Prakash Shukla 
> ---
> v2:
> - Added note related to mempool.
> 

This seems useful. With the v2 additions,

Acked-by: Morten Brørup 



[RFC] net/sfc: support packet replay in transfer flows

2023-08-10 Thread Ivan Malov
Packet replay enables users to leverage multiple counters in
one flow and allows requesting delivery to multiple ports.

A given flow rule may use either one inline count action
and multiple indirect counters or just multiple indirect
counters. The inline count action (if any) must come
before the first delivery action or before the first
indirect count action, whichever comes earlier.

These are some testpmd examples of supported
multi-count and mirroring use cases:

flow create 0 transfer pattern represented_port ethdev_port_id is 0 / end \
 actions port_representor port_id 0 / port_representor port_id 1 / end

or

flow indirect_action 0 create action_id 239 transfer action count / end

flow create 0 transfer pattern represented_port ethdev_port_id is 0 / end \
 actions count / port_representor port_id 0 / indirect 239 / \
 port_representor port_id 1 / end

or

flow indirect_action 0 create action_id 239 transfer action count / end

flow create 0 transfer pattern represented_port ethdev_port_id is 0 / end \
 actions indirect 239 / port_representor port_id 0 / indirect 239 / \
 port_representor port_id 1 / end

and the likes.

Signed-off-by: Ivan Malov 
---
 doc/guides/rel_notes/release_23_11.rst |   2 +
 drivers/common/sfc_efx/base/efx.h  |  32 ++
 drivers/common/sfc_efx/base/efx_mae.c  | 175 ++
 drivers/common/sfc_efx/version.map |   3 +
 drivers/net/sfc/sfc_mae.c  | 712 +
 drivers/net/sfc/sfc_mae.h  |  37 ++
 6 files changed, 870 insertions(+), 91 deletions(-)

diff --git a/doc/guides/rel_notes/release_23_11.rst 
b/doc/guides/rel_notes/release_23_11.rst
index dd10110fff..066495c622 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -59,6 +59,8 @@ New Features
 
   * Added support for transfer flow action INDIRECT with subtype VXLAN_ENCAP.
 
+  * Supported packet replay (multi-count / multi-delivery) in transfer flows.
+
 
 Removed Items
 -
diff --git a/drivers/common/sfc_efx/base/efx.h 
b/drivers/common/sfc_efx/base/efx.h
index b4d8cfe9d8..3312c2fa8f 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -5327,6 +5327,38 @@ efx_table_entry_delete(
__in_bcount(data_size)  uint8_t *entry_datap,
__inunsigned int data_size);
 
+/*
+ * Clone the given MAE action set specification
+ * and drop actions COUNT and DELIVER from it.
+ */
+LIBEFX_API
+extern __checkReturn   efx_rc_t
+efx_mae_action_set_replay(
+   __inefx_nic_t *enp,
+   __inconst efx_mae_actions_t *spec_orig,
+   __out   efx_mae_actions_t **spec_clonep);
+
+/*
+ * The actual limit may be lower than this.
+ * This define merely limits the number of
+ * entries in a single allocation request.
+ */
+#define EFX_MAE_ACTION_SET_LIST_MAX_NENTRIES   254
+
+LIBEFX_API
+extern __checkReturn   efx_rc_t
+efx_mae_action_set_list_alloc(
+   __inefx_nic_t *enp,
+   __inunsigned int n_asets,
+   __in_ecount(n_asets)const efx_mae_aset_id_t *aset_ids,
+   __out   efx_mae_aset_list_id_t *aset_list_idp);
+
+LIBEFX_API
+extern __checkReturn   efx_rc_t
+efx_mae_action_set_list_free(
+   __inefx_nic_t *enp,
+   __inconst efx_mae_aset_list_id_t *aset_list_idp);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/drivers/common/sfc_efx/base/efx_mae.c 
b/drivers/common/sfc_efx/base/efx_mae.c
index 0d7b24d351..9ae136dcce 100644
--- a/drivers/common/sfc_efx/base/efx_mae.c
+++ b/drivers/common/sfc_efx/base/efx_mae.c
@@ -4273,4 +4273,179 @@ efx_mae_read_mport_journal(
return (rc);
 }
 
+   __checkReturn   efx_rc_t
+efx_mae_action_set_replay(
+   __inefx_nic_t *enp,
+   __inconst efx_mae_actions_t *spec_orig,
+   __out   efx_mae_actions_t **spec_clonep)
+{
+   const efx_nic_cfg_t *encp = efx_nic_cfg_get(enp);
+   efx_mae_actions_t *spec_clone;
+   efx_rc_t rc;
+
+   EFSYS_KMEM_ALLOC(enp->en_esip, sizeof (*spec_clone), spec_clone);
+   if (spec_clone == NULL) {
+   rc = ENOMEM;
+   goto fail1;
+   }
+
+   *spec_clone = *spec_orig;
+
+   spec_clone->ema_rsrc.emar_counter_id.id = EFX_MAE_RSRC_ID_INVALID;
+   spec_clone->ema_actions &= ~(1U << EFX_MAE_ACTION_COUNT);
+   spec_clone->ema_n_count_actions = 0;
+
+   (void)efx_mae_mport_invalid(&spec_clone->ema_deliver_mport);
+   spec_clone->ema_actions &= ~(1U << EFX_MAE_ACTION_DELIVER);
+
+   *spec_clonep = spec_clone;
+
+   return (0);
+
+fail1:
+   EFSYS_PROBE1(fail1, efx_rc_t, rc);
+   return (rc);
+}
+
+   __checkReturn   efx_rc_t
+efx_mae_action_set_list_alloc(
+   __inefx_nic_t *enp,
+   __in
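
Relating the earlier testpmd examples to the C API, here is a rough
illustrative sketch (not taken from this RFC) of the second example: one
inline count, a shared indirect counter used twice, and delivery to two port
representors. The function name, port ids and error handling are placeholders.

/* Minimal sketch: replay_flow() is a hypothetical helper. */
#include <rte_flow.h>

static struct rte_flow *
replay_flow(uint16_t proxy_port, struct rte_flow_error *err)
{
        const struct rte_flow_attr attr = { .transfer = 1 };
        const struct rte_flow_indir_action_conf indir_conf = { .transfer = 1 };
        const struct rte_flow_action count_action = {
                .type = RTE_FLOW_ACTION_TYPE_COUNT,
        };
        const struct rte_flow_item_ethdev port = { .port_id = 0 };
        const struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT, .spec = &port },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        const struct rte_flow_action_ethdev repr0 = { .port_id = 0 };
        const struct rte_flow_action_ethdev repr1 = { .port_id = 1 };
        struct rte_flow_action_handle *counter;

        /* Shared counter, as in "flow indirect_action 0 create ... count". */
        counter = rte_flow_action_handle_create(proxy_port, &indir_conf,
                                                &count_action, err);
        if (counter == NULL)
                return NULL;

        /* Inline count, two deliveries, indirect counter referenced once. */
        const struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_COUNT },
                { .type = RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR, .conf = &repr0 },
                { .type = RTE_FLOW_ACTION_TYPE_INDIRECT, .conf = counter },
                { .type = RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR, .conf = &repr1 },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(proxy_port, &attr, pattern, actions, err);
}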

DPDK Release Status Meeting 2023-08-10

2023-08-10 Thread Mcnamara, John
Release status meeting minutes 2023-08-10
=========================================

Agenda:
* Release Dates
* Subtrees
* Roadmaps
* LTS
* Defects
* Opens

Participants:
* AMD [No]
* ARM
* Debian/Microsoft
* Intel
* Marvell
* Nvidia
* Red Hat

Release Dates
-------------

The following are the proposed working dates for 23.11:

* V1:  12 August 2023
* RC1: 29 September 2023
* RC2: 20 October 2023
* RC3: 27 October 2023
* Release: 15 November 2023


Subtrees


* next-net
  * New driver from Napatech
  * New driver RNP: https://patchwork.dpdk.org/project/dpdk/list/?series=29118
  * 1 other new driver

* next-net-intel
  * No update.

* next-net-mlx
  * No update.

* next-net-mvl
  * No update.

* next-eventdev
  * No update.

* next-baseband
  * No update.

* next-virtio
  * No update.

* next-crypto
  * Some patches in progress.
  * Patch to remove some of the experimental API tags.

* main
  * Preparing for LTS release.
  * Removing deprecated libraries.


Proposed Schedule for 2023
--------------------------

See also http://core.dpdk.org/roadmap/#dates

23.11
  * Proposal deadline (RFC/v1 patches): 12 August 2023
  * API freeze (-rc1): 29 September 2023
  * PMD features freeze (-rc2): 20 October 2023
  * Builtin applications features freeze (-rc3): 27 October 2023
  * Release: 15 November 2023


LTS
---

Backports ongoing. Awaiting test results.

Next LTS releases:

* 22.11.2
* 21.11.5
* 20.11.9
* 19.11.15
  * Will be updated with CVE and critical fixes only.


* Distros
  * v22.11 in Debian 12
  * Ubuntu 22.04-LTS contains 21.11
  * Ubuntu 23.04 contains 22.11

Defects
-------

* Bugzilla links, 'Bugs',  added for hosted projects
  * https://www.dpdk.org/hosted-projects/


Opens
-----

* None


DPDK Release Status Meetings


The DPDK Release Status Meeting is intended for DPDK Committers to discuss the
status of the master tree and sub-trees, and for project managers to track
progress or milestone dates.

The meeting occurs on every Thursday at 9:30 UTC over Jitsi on 
https://meet.jit.si/DPDK

You don't need an invite to join the meeting but if you want a calendar 
reminder just
send an email to "John McNamara john.mcnam...@intel.com" for the invite.



Re: [PATCH v5] build: update DPDK to use C11 standard

2023-08-10 Thread Tyler Retzlaff
On Thu, Aug 10, 2023 at 08:17:23PM +0200, Morten Brørup wrote:
> > From: Stephen Hemminger [mailto:step...@networkplumber.org]
> > Sent: Thursday, 10 August 2023 19.03
> > 
> > On Thu, 10 Aug 2023 18:49:09 +0200
> > Thomas Monjalon  wrote:
> > 
> > > 10/08/2023 18:35, Bruce Richardson:
> > > > On Thu, Aug 10, 2023 at 07:48:39AM -0700, Stephen Hemminger wrote:
> > > > > On Thu, 10 Aug 2023 15:34:43 +0200
> > > > > Thomas Monjalon  wrote:
> > > > >
> > > > > > 03/08/2023 15:36, David Marchand:
> > > > > > > On Wed, Aug 2, 2023 at 2:32 PM Bruce Richardson
> > > > > > >  wrote:
> > > > > > > >
> > > > > > > > As previously announced, DPDK 23.11 will require a C11
> > supporting
> > > > > > > > compiler and will use the C11 standard in all builds.
> > > > > > > >
> > > > > > > > Forcing use of the C standard, rather than the standard with
> > > > > > > > GNU extensions, means that some posix definitions which are
> > not in
> > > > > > > > the C standard are unavailable by default. We fix this by
> > ensuring
> > > > > > > > the correct defines or cflags are passed to the components
> > that
> > > > > > > > need them.
> > > > > > > >
> > > > > > > > Signed-off-by: Bruce Richardson 
> > > > > > > > Acked-by: Morten Brørup 
> > > > > > > > Acked-by: Tyler Retzlaff 
> > > > > > > Tested-by: Ali Alnubani 
> > > > > > >
> > > > > > > The CI results look good.
> > > > > > >
> > > > > > > Applied, thanks!
> > > > > >
> > > > > > The compiler support is updated, that's fine.
> > > > > > Should we go further and document some major Linux distributions?
> > > > > > One concern is to make clear RHEL 7 is not supported anymore.
> > > > > > Should it be a release note?
> > > > > >
> > > >
> > > > Well, DPDK currently is still building fine on Centos 7 for me, so
> > let's
> > > > hold off on claiming anything until it's definitely broken.
> > > >
> > > > > >
> > > > >
> > > > > Should be addressed in linux/sys_reqs.rst as well as deprecation
> > notice.
> > > > > Also, is it possible to add automated check in build for compiler
> > version?
> > > >
> > > > I'd be a little careful about what we claim, and I think current docs
> > are
> > > > accurate vs our original plans. What we didn't plan to support was
> > the GCC
> > > > and Clang compiler versions in RHEL 7, but if one installs an updated
> > GCC,
> > > > for example, the build should be fine on RHEL 7.
> > > >
> > > > Now, though, we are having to re-evaluate our use of stdatomics,
> > which
> > > > means we may not actually break RHEL 7 compatibility after all. We'll
> > have
> > > > to "watch this space" as the saying goes!
> > > >
> > > > Overall, I think the approach of build-time checks is the best, but
> > not
> > > > for specific versions, but instead for capabilities. If/when we add
> > support
> > > > for stdatomics to DPDK builds on Linux/BSD, at that point we put in
> > the
> > > > initial compiler checks a suitable check for them being present and
> > output
> > > > a suitable error if not found.
> 
> Exactly. Capabilities checks is the right way to go when cross compiling.
> 
> > >
> > > OK looks good
> > 
> > Note: RHEL 7 official end of maintenance support is not until June 2024.
> > 
> 
> It was agreed to abandon RHEL 7, mainly driven by the need for C11 
> stdatomic.h, which is not supported by the GCC C compiler included with RHEL 
> 7. So it pains me to admit that Stephen has a valid point here, after it 
> turned out that the GCC g++ is not C11 compatible.

Retaining C11 would substantially reduce the porting delta; there are a
number of other C11 features that help with portability which we could
utilize, and which I hadn't brought up before since it had been resolved
to adopt C11.

It would be really unfortunate to say we aren't going to require C11,
since that would cause me to have to bring a lot more conditional
compilation into the tree.

Just FYI.

> 
> Regardless, I think that DPDK 23.11 support for RHEL 7 should be limited to 
> "might work on RHEL 7", rather than guaranteed support for RHEL 7 (which 
> would require DPDK CI to resume testing on RHEL 7).
> 
> IIRC, there was also the argument that DPDK 23.11 LTS support ends after June 
> 2024.
> 
> Here's another argument for abandoning RHEL 7: RHEL 7 uses Linux Kernel 3.10. 
> Although DPDK requires Linux Kernel >= 4.14, we promise backwards 
> compatibility for RHEL/CentOS 7. Do we really still want to do that? (Note: 
> RHEL 8 uses Linux Kernel 4.18.)
> 
> While we're discussing the Linux Kernel version required... Is it documented 
> anywhere why a specific Linux Kernel version is required by DPDK? Or more 
> specifically: Is it documented anywhere which DPDK features require which 
> specific Linux Kernel versions?
> 


[PATCH 0/6] RFC optional rte optional stdatomics API

2023-08-10 Thread Tyler Retzlaff
This series introduces API additions prefixed in the rte namespace that allow
the optional use of C11 stdatomic.h when built with enable_stdatomics=true; for
targets where enable_stdatomics=false, no functional change is intended.

Be aware this does not contain all the changes to use stdatomics across the DPDK
tree; it only introduces the minimum to allow the option to be used, which is
a prerequisite for a clean CI (probably using clang) that can be run
with enable_stdatomics=true.

It is planned that subsequent series will be introduced per lib/driver as
appropriate to further enable stdatomics use when enable_stdatomics=true.

Notes:

* Additional libraries beyond EAL make atomics use visible across the
  API/ABI surface; they will be converted in the next series.

* The "eal: add rte atomic qualifier with casts" patch needs some discussion
  as to whether or not the legacy rte_atomic APIs should be converted to
  work with enable_stdatomic=true. Right now, some implementation-dependent
  casts are used to prevent cascading changes / having to convert too much in
  the initial series.

* Windows will obviously need complete conversion of libraries, including
  atomics that are not crossing API/ABI boundaries. Those conversions will
  be introduced in separate series alongside the existing MSVC series.

Please keep in mind we would like to prioritize the review / acceptance of
this patch since it needs to be completed in the 23.11 merge window.

Thank you all for the discussion that lead to the formation of this series.

Tyler Retzlaff (6):
  eal: provide rte stdatomics optional atomics API
  eal: adapt EAL to present rte optional atomics API
  eal: add rte atomic qualifier with casts
  distributor: adapt for EAL optional atomics API changes
  bpf: adapt for EAL optional atomics API changes
  devtools: forbid new direct use of GCC atomic builtins

 app/test/test_mcslock.c  |   6 +-
 config/meson.build   |   1 +
 config/rte_config.h  |   1 +
 devtools/checkpatches.sh |   8 ++
 lib/bpf/bpf_pkt.c|   6 +-
 lib/distributor/distributor_private.h|   2 +-
 lib/distributor/rte_distributor_single.c |  44 -
 lib/eal/arm/include/rte_atomic_64.h  |  32 +++---
 lib/eal/arm/include/rte_pause_64.h   |  26 ++---
 lib/eal/arm/rte_power_intrinsics.c   |   8 +-
 lib/eal/common/eal_common_trace.c|  16 +--
 lib/eal/include/generic/rte_atomic.h |  66 -
 lib/eal/include/generic/rte_pause.h  |  41 
 lib/eal/include/generic/rte_rwlock.h |  47 -
 lib/eal/include/generic/rte_spinlock.h   |  19 ++--
 lib/eal/include/meson.build  |   1 +
 lib/eal/include/rte_mcslock.h|  50 +-
 lib/eal/include/rte_pflock.h |  24 ++---
 lib/eal/include/rte_seqcount.h   |  18 ++--
 lib/eal/include/rte_stdatomic.h  | 162 +++
 lib/eal/include/rte_ticketlock.h |  42 
 lib/eal/include/rte_trace_point.h|   4 +-
 lib/eal/ppc/include/rte_atomic.h |  50 +-
 lib/eal/x86/include/rte_atomic.h |   4 +-
 lib/eal/x86/include/rte_spinlock.h   |   2 +-
 lib/eal/x86/rte_power_intrinsics.c   |   7 +-
 meson_options.txt|   1 +
 27 files changed, 445 insertions(+), 243 deletions(-)
 create mode 100644 lib/eal/include/rte_stdatomic.h

-- 
1.8.3.1



[PATCH 4/6] distributor: adapt for EAL optional atomics API changes

2023-08-10 Thread Tyler Retzlaff
Adapt distributor for EAL optional atomics API changes

Signed-off-by: Tyler Retzlaff 
---
 lib/distributor/distributor_private.h|  2 +-
 lib/distributor/rte_distributor_single.c | 44 
 2 files changed, 23 insertions(+), 23 deletions(-)

diff --git a/lib/distributor/distributor_private.h 
b/lib/distributor/distributor_private.h
index 7101f63..ffbdae5 100644
--- a/lib/distributor/distributor_private.h
+++ b/lib/distributor/distributor_private.h
@@ -52,7 +52,7 @@
  * Only 64-bits of the memory is actually used though.
  */
 union rte_distributor_buffer_single {
-   volatile int64_t bufptr64;
+   volatile int64_t __rte_atomic bufptr64;
char pad[RTE_CACHE_LINE_SIZE*3];
 } __rte_cache_aligned;
 
diff --git a/lib/distributor/rte_distributor_single.c 
b/lib/distributor/rte_distributor_single.c
index 2c77ac4..ad43c13 100644
--- a/lib/distributor/rte_distributor_single.c
+++ b/lib/distributor/rte_distributor_single.c
@@ -32,10 +32,10 @@
int64_t req = (((int64_t)(uintptr_t)oldpkt) << RTE_DISTRIB_FLAG_BITS)
| RTE_DISTRIB_GET_BUF;
RTE_WAIT_UNTIL_MASKED(&buf->bufptr64, RTE_DISTRIB_FLAGS_MASK,
-   ==, 0, __ATOMIC_RELAXED);
+   ==, 0, rte_memory_order_relaxed);
 
/* Sync with distributor on GET_BUF flag. */
-   __atomic_store_n(&(buf->bufptr64), req, __ATOMIC_RELEASE);
+   rte_atomic_store_explicit(&buf->bufptr64, req, 
rte_memory_order_release);
 }
 
 struct rte_mbuf *
@@ -44,7 +44,7 @@ struct rte_mbuf *
 {
union rte_distributor_buffer_single *buf = &d->bufs[worker_id];
/* Sync with distributor. Acquire bufptr64. */
-   if (__atomic_load_n(&buf->bufptr64, __ATOMIC_ACQUIRE)
+   if (rte_atomic_load_explicit(&buf->bufptr64, rte_memory_order_acquire)
& RTE_DISTRIB_GET_BUF)
return NULL;
 
@@ -72,10 +72,10 @@ struct rte_mbuf *
uint64_t req = (((int64_t)(uintptr_t)oldpkt) << RTE_DISTRIB_FLAG_BITS)
| RTE_DISTRIB_RETURN_BUF;
RTE_WAIT_UNTIL_MASKED(&buf->bufptr64, RTE_DISTRIB_FLAGS_MASK,
-   ==, 0, __ATOMIC_RELAXED);
+   ==, 0, rte_memory_order_relaxed);
 
/* Sync with distributor on RETURN_BUF flag. */
-   __atomic_store_n(&(buf->bufptr64), req, __ATOMIC_RELEASE);
+   rte_atomic_store_explicit(&buf->bufptr64, req, 
rte_memory_order_release);
return 0;
 }
 
@@ -119,7 +119,7 @@ struct rte_mbuf *
d->in_flight_tags[wkr] = 0;
d->in_flight_bitmask &= ~(1UL << wkr);
/* Sync with worker. Release bufptr64. */
-   __atomic_store_n(&(d->bufs[wkr].bufptr64), 0, __ATOMIC_RELEASE);
+   rte_atomic_store_explicit(&d->bufs[wkr].bufptr64, 0, 
rte_memory_order_release);
if (unlikely(d->backlog[wkr].count != 0)) {
/* On return of a packet, we need to move the
 * queued packets for this core elsewhere.
@@ -165,21 +165,21 @@ struct rte_mbuf *
for (wkr = 0; wkr < d->num_workers; wkr++) {
uintptr_t oldbuf = 0;
/* Sync with worker. Acquire bufptr64. */
-   const int64_t data = __atomic_load_n(&(d->bufs[wkr].bufptr64),
-   __ATOMIC_ACQUIRE);
+   const int64_t data = 
rte_atomic_load_explicit(&d->bufs[wkr].bufptr64,
+   
rte_memory_order_acquire);
 
if (data & RTE_DISTRIB_GET_BUF) {
flushed++;
if (d->backlog[wkr].count)
/* Sync with worker. Release bufptr64. */
-   __atomic_store_n(&(d->bufs[wkr].bufptr64),
+   
rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
backlog_pop(&d->backlog[wkr]),
-   __ATOMIC_RELEASE);
+   rte_memory_order_release);
else {
/* Sync with worker on GET_BUF flag. */
-   __atomic_store_n(&(d->bufs[wkr].bufptr64),
+   
rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
RTE_DISTRIB_GET_BUF,
-   __ATOMIC_RELEASE);
+   rte_memory_order_release);
d->in_flight_tags[wkr] = 0;
d->in_flight_bitmask &= ~(1UL << wkr);
}
@@ -217,8 +217,8 @@ struct rte_mbuf *
while (next_idx < num_mbufs || next_mb != NULL) {
uintptr_t oldbuf = 0;
/* Sync with worker. Acquire bufptr64. */
-   int64_t data = __atomic_load_n(&(d->bufs[wkr].bufptr64),
-   __ATOMIC_ACQUIRE);
+   int64_t data

[PATCH 1/6] eal: provide rte stdatomics optional atomics API

2023-08-10 Thread Tyler Retzlaff
Provide API for atomic operations in the rte namespace that may
optionally be configured to use C11 atomics with meson
option enable_stdatomics=true

Signed-off-by: Tyler Retzlaff 
---
 config/meson.build  |   1 +
 config/rte_config.h |   1 +
 lib/eal/include/meson.build |   1 +
 lib/eal/include/rte_stdatomic.h | 162 
 meson_options.txt   |   1 +
 5 files changed, 166 insertions(+)
 create mode 100644 lib/eal/include/rte_stdatomic.h

diff --git a/config/meson.build b/config/meson.build
index d822371..ec49964 100644
--- a/config/meson.build
+++ b/config/meson.build
@@ -303,6 +303,7 @@ endforeach
 # set other values pulled from the build options
 dpdk_conf.set('RTE_MAX_ETHPORTS', get_option('max_ethports'))
 dpdk_conf.set('RTE_LIBEAL_USE_HPET', get_option('use_hpet'))
+dpdk_conf.set('RTE_ENABLE_STDATOMIC', get_option('enable_stdatomic'))
 dpdk_conf.set('RTE_ENABLE_TRACE_FP', get_option('enable_trace_fp'))
 # values which have defaults which may be overridden
 dpdk_conf.set('RTE_MAX_VFIO_GROUPS', 64)
diff --git a/config/rte_config.h b/config/rte_config.h
index 400e44e..f17b6ae 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -13,6 +13,7 @@
 #define _RTE_CONFIG_H_
 
 #include 
+#include 
 
 /* legacy defines */
 #ifdef RTE_EXEC_ENV_LINUX
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index b0db9b3..f8a47b3 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -43,6 +43,7 @@ headers += files(
 'rte_seqlock.h',
 'rte_service.h',
 'rte_service_component.h',
+'rte_stdatomic.h',
 'rte_string_fns.h',
 'rte_tailq.h',
 'rte_thread.h',
diff --git a/lib/eal/include/rte_stdatomic.h b/lib/eal/include/rte_stdatomic.h
new file mode 100644
index 000..832fd07
--- /dev/null
+++ b/lib/eal/include/rte_stdatomic.h
@@ -0,0 +1,162 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Microsoft Corporation
+ */
+
+#ifndef _RTE_STDATOMIC_H_
+#define _RTE_STDATOMIC_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+typedef int rte_memory_order;
+
+#ifdef RTE_ENABLE_STDATOMIC
+#ifdef __STDC_NO_ATOMICS__
+#error enable_stdatomics=true but atomics not supported by toolchain
+#endif
+
+#include 
+
+#define __rte_atomic _Atomic
+
+#define rte_memory_order_relaxed memory_order_relaxed
+#ifdef __ATOMIC_RELAXED
+_Static_assert(rte_memory_order_relaxed == __ATOMIC_RELAXED,
+   "rte_memory_order_relaxed == __ATOMIC_RELAXED");
+#endif
+
+#define rte_memory_order_consume memory_order_consume
+#ifdef __ATOMIC_CONSUME
+_Static_assert(rte_memory_order_consume == __ATOMIC_CONSUME,
+   "rte_memory_order_consume == __ATOMIC_CONSUME");
+#endif
+
+#define rte_memory_order_acquire memory_order_acquire
+#ifdef __ATOMIC_ACQUIRE
+_Static_assert(rte_memory_order_acquire == __ATOMIC_ACQUIRE,
+   "rte_memory_order_acquire == __ATOMIC_ACQUIRE");
+#endif
+
+#define rte_memory_order_release memory_order_release
+#ifdef __ATOMIC_RELEASE
+_Static_assert(rte_memory_order_release == __ATOMIC_RELEASE,
+   "rte_memory_order_release == __ATOMIC_RELEASE");
+#endif
+
+#define rte_memory_order_acq_rel memory_order_acq_rel
+#ifdef __ATOMIC_ACQ_REL
+_Static_assert(rte_memory_order_acq_rel == __ATOMIC_ACQ_REL,
+   "rte_memory_order_acq_rel == __ATOMIC_ACQ_REL");
+#endif
+
+#define rte_memory_order_seq_cst memory_order_seq_cst
+#ifdef __ATOMIC_SEQ_CST
+_Static_assert(rte_memory_order_seq_cst == __ATOMIC_SEQ_CST,
+   "rte_memory_order_seq_cst == __ATOMIC_SEQ_CST");
+#endif
+
+#define rte_atomic_load_explicit(ptr, memorder) \
+   atomic_load_explicit(ptr, memorder)
+
+#define rte_atomic_store_explicit(ptr, val, memorder) \
+   atomic_store_explicit(ptr, val, memorder)
+
+#define rte_atomic_exchange_explicit(ptr, val, memorder) \
+   atomic_exchange_explicit(ptr, val, memorder)
+
+#define rte_atomic_compare_exchange_strong_explicit( \
+   ptr, expected, desired, succ_memorder, fail_memorder) \
+   atomic_compare_exchange_strong_explicit( \
+   ptr, expected, desired, succ_memorder, fail_memorder)
+
+#define rte_atomic_compare_exchange_weak_explicit( \
+   ptr, expected, desired, succ_memorder, fail_memorder) \
+   atomic_compare_exchange_strong_explicit( \
+   ptr, expected, desired, succ_memorder, fail_memorder)
+
+#define rte_atomic_fetch_add_explicit(ptr, val, memorder) \
+   atomic_fetch_add_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_sub_explicit(ptr, val, memorder) \
+   atomic_fetch_sub_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_and_explicit(ptr, val, memorder) \
+   atomic_fetch_and_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_xor_explicit(ptr, val, memorder) \
+   atomic_fetch_xor_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_or_explicit(ptr, val, memorder) \
+   atomic_fetch_or_explicit(ptr, val, memorder)
+
+#define r

[PATCH 2/6] eal: adapt EAL to present rte optional atomics API

2023-08-10 Thread Tyler Retzlaff
Adapt the EAL public headers to use the rte optional atomics API instead
of directly using and exposing toolchain-specific atomic builtin
intrinsics.
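
For illustration only (not part of this patch), a sketch of the
conversion pattern applied across the EAL headers, using a hypothetical
flag variable: a toolchain builtin taking a __ATOMIC_* constant becomes
the rte wrapper taking an rte_memory_order_* constant.

#include <stdbool.h>
#include <rte_pause.h>
#include <rte_stdatomic.h>

static bool __rte_atomic ready;

static inline void
publish(void)
{
        /* before: __atomic_store_n(&ready, true, __ATOMIC_RELEASE); */
        rte_atomic_store_explicit(&ready, true, rte_memory_order_release);
}

static inline void
wait_ready(void)
{
        /* before: while (!__atomic_load_n(&ready, __ATOMIC_ACQUIRE)) ... */
        while (!rte_atomic_load_explicit(&ready, rte_memory_order_acquire))
                rte_pause();
}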

Signed-off-by: Tyler Retzlaff 
---
 app/test/test_mcslock.c|  6 ++--
 lib/eal/arm/include/rte_atomic_64.h| 32 +++---
 lib/eal/arm/include/rte_pause_64.h | 26 +-
 lib/eal/arm/rte_power_intrinsics.c |  8 +++---
 lib/eal/common/eal_common_trace.c  | 16 ++-
 lib/eal/include/generic/rte_atomic.h   | 50 +-
 lib/eal/include/generic/rte_pause.h| 38 +-
 lib/eal/include/generic/rte_rwlock.h   | 47 +---
 lib/eal/include/generic/rte_spinlock.h | 19 ++---
 lib/eal/include/rte_mcslock.h  | 50 +-
 lib/eal/include/rte_pflock.h   | 24 
 lib/eal/include/rte_seqcount.h | 18 ++--
 lib/eal/include/rte_ticketlock.h   | 42 ++--
 lib/eal/include/rte_trace_point.h  |  4 +--
 lib/eal/ppc/include/rte_atomic.h   | 50 +-
 lib/eal/x86/include/rte_atomic.h   |  4 +--
 lib/eal/x86/include/rte_spinlock.h |  2 +-
 lib/eal/x86/rte_power_intrinsics.c |  6 ++--
 18 files changed, 225 insertions(+), 217 deletions(-)

diff --git a/app/test/test_mcslock.c b/app/test/test_mcslock.c
index 52e45e7..cc25970 100644
--- a/app/test/test_mcslock.c
+++ b/app/test/test_mcslock.c
@@ -36,9 +36,9 @@
  *   lock multiple times.
  */
 
-rte_mcslock_t *p_ml;
-rte_mcslock_t *p_ml_try;
-rte_mcslock_t *p_ml_perf;
+rte_mcslock_t * __rte_atomic p_ml;
+rte_mcslock_t * __rte_atomic p_ml_try;
+rte_mcslock_t * __rte_atomic p_ml_perf;
 
 static unsigned int count;
 
diff --git a/lib/eal/arm/include/rte_atomic_64.h 
b/lib/eal/arm/include/rte_atomic_64.h
index 6047911..ac3cec9 100644
--- a/lib/eal/arm/include/rte_atomic_64.h
+++ b/lib/eal/arm/include/rte_atomic_64.h
@@ -107,33 +107,33 @@
 */
RTE_SET_USED(failure);
/* Find invalid memory order */
-   RTE_ASSERT(success == __ATOMIC_RELAXED ||
-   success == __ATOMIC_ACQUIRE ||
-   success == __ATOMIC_RELEASE ||
-   success == __ATOMIC_ACQ_REL ||
-   success == __ATOMIC_SEQ_CST);
+   RTE_ASSERT(success == rte_memory_order_relaxed ||
+   success == rte_memory_order_acquire ||
+   success == rte_memory_order_release ||
+   success == rte_memory_order_acq_rel ||
+   success == rte_memory_order_seq_cst);
 
rte_int128_t expected = *exp;
rte_int128_t desired = *src;
rte_int128_t old;
 
 #if defined(__ARM_FEATURE_ATOMICS) || defined(RTE_ARM_FEATURE_ATOMICS)
-   if (success == __ATOMIC_RELAXED)
+   if (success == rte_memory_order_relaxed)
__cas_128_relaxed(dst, exp, desired);
-   else if (success == __ATOMIC_ACQUIRE)
+   else if (success == rte_memory_order_acquire)
__cas_128_acquire(dst, exp, desired);
-   else if (success == __ATOMIC_RELEASE)
+   else if (success == rte_memory_order_release)
__cas_128_release(dst, exp, desired);
else
__cas_128_acq_rel(dst, exp, desired);
old = *exp;
 #else
-#define __HAS_ACQ(mo) ((mo) != __ATOMIC_RELAXED && (mo) != __ATOMIC_RELEASE)
-#define __HAS_RLS(mo) ((mo) == __ATOMIC_RELEASE || (mo) == __ATOMIC_ACQ_REL || 
\
-   (mo) == __ATOMIC_SEQ_CST)
+#define __HAS_ACQ(mo) ((mo) != rte_memory_order_relaxed && (mo) != 
rte_memory_order_release)
+#define __HAS_RLS(mo) ((mo) == rte_memory_order_release || (mo) == 
rte_memory_order_acq_rel || \
+   (mo) == rte_memory_order_seq_cst)
 
-   int ldx_mo = __HAS_ACQ(success) ? __ATOMIC_ACQUIRE : __ATOMIC_RELAXED;
-   int stx_mo = __HAS_RLS(success) ? __ATOMIC_RELEASE : __ATOMIC_RELAXED;
+   int ldx_mo = __HAS_ACQ(success) ? rte_memory_order_acquire : 
rte_memory_order_relaxed;
+   int stx_mo = __HAS_RLS(success) ? rte_memory_order_release : 
rte_memory_order_relaxed;
 
 #undef __HAS_ACQ
 #undef __HAS_RLS
@@ -153,7 +153,7 @@
: "Q" (src->val[0])   \
: "memory"); }
 
-   if (ldx_mo == __ATOMIC_RELAXED)
+   if (ldx_mo == rte_memory_order_relaxed)
__LOAD_128("ldxp", dst, old)
else
__LOAD_128("ldaxp", dst, old)
@@ -170,7 +170,7 @@
: "memory"); }
 
if (likely(old.int128 == expected.int128)) {
-   if (stx_mo == __ATOMIC_RELAXED)
+   if (stx_mo == rte_memory_order_relaxed)
__STORE_128("stxp", dst, desired, ret)
else
__STORE_128("stlxp", dst, desired, ret)
@@ -181,7 +181,7 @@
 * needs to be stored back to ensure it wa

[PATCH 3/6] eal: add rte atomic qualifier with casts

2023-08-10 Thread Tyler Retzlaff
Introduce __rte_atomic qualifying casts in the rte optional atomics
inline functions to avoid cascading the need to pass __rte_atomic
qualified arguments.

Warning: this is implementation dependent and is done temporarily to
avoid converting more of the DPDK libraries and tests in the initial
series that introduces the API. The casts assume the ABI of the
qualified and unqualified types is ``the same''; should that assumption
not hold, the risk is only realized when enable_stdatomic=true.
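
For illustration only (not part of this patch), a sketch of why the
casts are needed, assuming the legacy counter layout is unchanged: the
legacy rte_atomic16_t member is not declared __rte_atomic, so the call
site casts instead of propagating the qualifier through every caller.

#include <rte_atomic.h>
#include <rte_stdatomic.h>

static inline int16_t
legacy_inc(rte_atomic16_t *v)
{
        /*
         * Without the cast, enable_stdatomic=true would reject the plain
         * int16_t argument to atomic_fetch_add_explicit().
         */
        return rte_atomic_fetch_add_explicit(
                (volatile int16_t __rte_atomic *)&v->cnt, 1,
                rte_memory_order_seq_cst) + 1;
}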

Signed-off-by: Tyler Retzlaff 
---
 lib/eal/include/generic/rte_atomic.h | 48 
 lib/eal/include/generic/rte_pause.h  |  9 ---
 lib/eal/x86/rte_power_intrinsics.c   |  7 +++---
 3 files changed, 42 insertions(+), 22 deletions(-)

diff --git a/lib/eal/include/generic/rte_atomic.h 
b/lib/eal/include/generic/rte_atomic.h
index 15a36f3..2c65304 100644
--- a/lib/eal/include/generic/rte_atomic.h
+++ b/lib/eal/include/generic/rte_atomic.h
@@ -273,7 +273,8 @@
 static inline void
 rte_atomic16_add(rte_atomic16_t *v, int16_t inc)
 {
-   rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+   rte_atomic_fetch_add_explicit((volatile int16_t __rte_atomic *)&v->cnt, 
inc,
+   rte_memory_order_seq_cst);
 }
 
 /**
@@ -287,7 +288,8 @@
 static inline void
 rte_atomic16_sub(rte_atomic16_t *v, int16_t dec)
 {
-   rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+   rte_atomic_fetch_sub_explicit((volatile int16_t __rte_atomic *)&v->cnt, 
dec,
+   rte_memory_order_seq_cst);
 }
 
 /**
@@ -340,7 +342,8 @@
 static inline int16_t
 rte_atomic16_add_return(rte_atomic16_t *v, int16_t inc)
 {
-   return rte_atomic_fetch_add_explicit(&v->cnt, inc, 
rte_memory_order_seq_cst) + inc;
+   return rte_atomic_fetch_add_explicit((volatile int16_t __rte_atomic 
*)&v->cnt, inc,
+   rte_memory_order_seq_cst) + inc;
 }
 
 /**
@@ -360,7 +363,8 @@
 static inline int16_t
 rte_atomic16_sub_return(rte_atomic16_t *v, int16_t dec)
 {
-   return rte_atomic_fetch_sub_explicit(&v->cnt, dec, 
rte_memory_order_seq_cst) - dec;
+   return rte_atomic_fetch_sub_explicit((volatile int16_t __rte_atomic 
*)&v->cnt, dec,
+   rte_memory_order_seq_cst) - dec;
 }
 
 /**
@@ -379,7 +383,8 @@
 #ifdef RTE_FORCE_INTRINSICS
 static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
 {
-   return rte_atomic_fetch_add_explicit(&v->cnt, 1, 
rte_memory_order_seq_cst) + 1 == 0;
+   return rte_atomic_fetch_add_explicit((volatile int16_t __rte_atomic 
*)&v->cnt, 1,
+   rte_memory_order_seq_cst) + 1 == 0;
 }
 #endif
 
@@ -399,7 +404,8 @@ static inline int rte_atomic16_inc_and_test(rte_atomic16_t 
*v)
 #ifdef RTE_FORCE_INTRINSICS
 static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v)
 {
-   return rte_atomic_fetch_sub_explicit(&v->cnt, 1, 
rte_memory_order_seq_cst) - 1 == 0;
+   return rte_atomic_fetch_sub_explicit((volatile int16_t __rte_atomic 
*)&v->cnt, 1,
+   rte_memory_order_seq_cst) - 1 == 0;
 }
 #endif
 
@@ -552,7 +558,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
 static inline void
 rte_atomic32_add(rte_atomic32_t *v, int32_t inc)
 {
-   rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+   rte_atomic_fetch_add_explicit((volatile int32_t __rte_atomic *)&v->cnt, 
inc,
+   rte_memory_order_seq_cst);
 }
 
 /**
@@ -566,7 +573,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
 static inline void
 rte_atomic32_sub(rte_atomic32_t *v, int32_t dec)
 {
-   rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+   rte_atomic_fetch_sub_explicit((volatile int32_t __rte_atomic *)&v->cnt, 
dec,
+   rte_memory_order_seq_cst);
 }
 
 /**
@@ -619,7 +627,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
 static inline int32_t
 rte_atomic32_add_return(rte_atomic32_t *v, int32_t inc)
 {
-   return rte_atomic_fetch_add_explicit(&v->cnt, inc, 
rte_memory_order_seq_cst) + inc;
+   return rte_atomic_fetch_add_explicit((volatile int32_t __rte_atomic 
*)&v->cnt, inc,
+   rte_memory_order_seq_cst) + inc;
 }
 
 /**
@@ -639,7 +648,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
 static inline int32_t
 rte_atomic32_sub_return(rte_atomic32_t *v, int32_t dec)
 {
-   return rte_atomic_fetch_sub_explicit(&v->cnt, dec, 
rte_memory_order_seq_cst) - dec;
+   return rte_atomic_fetch_sub_explicit((volatile int32_t __rte_atomic 
*)&v->cnt, dec,
+   rte_memory_order_seq_cst) - dec;
 }
 
 /**
@@ -658,7 +668,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
 #ifdef RTE_FORCE_INTRINSICS
 static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
 {
-   return rte_atomic_fetch_add_explicit(&v->cnt, 1, 
rte_memory_order_seq_cst) + 1 == 0;
+   return rte_atomic_fetch_add_explicit((volatile int32_t __rte_atomic 
*)&v->cnt, 1,
+   rte_memory_order_seq_cst) + 1 ==

[PATCH 5/6] bpf: adapt for EAL optional atomics API changes

2023-08-10 Thread Tyler Retzlaff
Adapt bpf for EAL optional atomics API changes

Signed-off-by: Tyler Retzlaff 
---
 lib/bpf/bpf_pkt.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/lib/bpf/bpf_pkt.c b/lib/bpf/bpf_pkt.c
index ffd2db7..b300447 100644
--- a/lib/bpf/bpf_pkt.c
+++ b/lib/bpf/bpf_pkt.c
@@ -25,7 +25,7 @@
 
 struct bpf_eth_cbi {
/* used by both data & control path */
-   uint32_t use;/*usage counter */
+   uint32_t __rte_atomic use;/*usage counter */
const struct rte_eth_rxtx_callback *cb;  /* callback handle */
struct rte_bpf *bpf;
struct rte_bpf_jit jit;
@@ -110,8 +110,8 @@ struct bpf_eth_cbh {
 
/* in use, busy wait till current RX/TX iteration is finished */
if ((puse & BPF_ETH_CBI_INUSE) != 0) {
-   RTE_WAIT_UNTIL_MASKED((uint32_t *)(uintptr_t)&cbi->use,
-   UINT32_MAX, !=, puse, __ATOMIC_RELAXED);
+   RTE_WAIT_UNTIL_MASKED((uint32_t __rte_atomic 
*)(uintptr_t)&cbi->use,
+   UINT32_MAX, !=, puse, rte_memory_order_relaxed);
}
 }
 
-- 
1.8.3.1



[PATCH 6/6] devtools: forbid new direct use of GCC atomic builtins

2023-08-10 Thread Tyler Retzlaff
Refrain from using compiler __atomic_xxx builtins. DPDK now requires
the use of the rte_atomic_xxx_explicit macros when operating on DPDK
atomic variables.
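
For illustration only (not part of this patch), a sketch with a
hypothetical variable of what the new checkpatches.sh rule flags versus
what it accepts:

#include <stdint.h>
#include <rte_stdatomic.h>

static uint32_t __rte_atomic refcnt;

static inline uint32_t
refcnt_bump(void)
{
        /* flagged: direct use of a GCC __atomic_xxx builtin */
        /* return __atomic_fetch_add(&refcnt, 1, __ATOMIC_RELAXED); */

        /* accepted: the rte_atomic_xxx_explicit wrapper */
        return rte_atomic_fetch_add_explicit(&refcnt, 1,
                        rte_memory_order_relaxed);
}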

Signed-off-by: Tyler Retzlaff 
Acked-by: Morten Brørup 
---
 devtools/checkpatches.sh | 8 
 1 file changed, 8 insertions(+)

diff --git a/devtools/checkpatches.sh b/devtools/checkpatches.sh
index 43f5e36..a32f02e 100755
--- a/devtools/checkpatches.sh
+++ b/devtools/checkpatches.sh
@@ -102,6 +102,14 @@ check_forbidden_additions() { # 
-f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
"$1" || res=1
 
+   # refrain from using compiler __atomic_xxx builtins
+   awk -v FOLDERS="lib drivers app examples" \
+   -v EXPRESSIONS="__atomic_.*\\\(" \
+   -v RET_ON_FAIL=1 \
+   -v MESSAGE='Using __atomic_xxx builtins' \
+   -f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
+   "$1" || res=1
+
# refrain from using compiler __atomic_thread_fence()
# It should be avoided on x86 for SMP case.
awk -v FOLDERS="lib drivers app examples" \
-- 
1.8.3.1



[PATCH] dma/idxd: add reset in the init routine

2023-08-10 Thread Frank Du
Fix for Windows: nothing resets the device to a clean state there. On
Linux, the kernel driver resets the device during probe.

Signed-off-by: Frank Du 
---
 drivers/dma/idxd/idxd_pci.c | 8 
 1 file changed, 8 insertions(+)

diff --git a/drivers/dma/idxd/idxd_pci.c b/drivers/dma/idxd/idxd_pci.c
index 3696c7f452..a78889a7ef 100644
--- a/drivers/dma/idxd/idxd_pci.c
+++ b/drivers/dma/idxd/idxd_pci.c
@@ -196,6 +196,14 @@ init_pci_device(struct rte_pci_device *dev, struct 
idxd_dmadev *idxd,
pci->portals = dev->mem_resource[2].addr;
pci->wq_cfg_sz = (pci->regs->wqcap >> 24) & 0x0F;
 
+   /* reset */
+   idxd->u.pci = pci;
+   err_code = idxd_pci_dev_command(idxd, idxd_reset_device);
+   if (err_code) {
+   IDXD_PMD_ERR("Error reset device: code %#x", err_code);
+   goto err;
+   }
+
/* sanity check device status */
if (pci->regs->gensts & GENSTS_DEV_STATE_MASK) {
/* need function-level-reset (FLR) or is enabled */
-- 
2.34.1



[PATCH v2] net/iavf: add devargs to enable vf auto-reset

2023-08-10 Thread Shiyang He
Originally, the iavf PMD did not perform any special action when it
received a PF-to-VF reset event, leaving the VF offline and
unavailable.

This patch enables VF auto-reset when the 'enable_auto_reset' devarg is
set to 1. The iavf PMD then performs an automatic reset to bring the VF
back online when it receives a PF-to-VF reset event.
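
For illustration only (not part of this patch), a sketch of how an
application might observe such resets once the devarg is passed (for
example, started with -a 18:01.0,enable_auto_reset=1); the port handling
and callback name are hypothetical:

#include <stdio.h>
#include <rte_common.h>
#include <rte_ethdev.h>

static int
on_reset(uint16_t port_id, enum rte_eth_event_type event,
                void *cb_arg, void *ret_param)
{
        RTE_SET_USED(event);
        RTE_SET_USED(cb_arg);
        RTE_SET_USED(ret_param);
        printf("port %u: reset event received\n", port_id);
        return 0;
}

static void
watch_resets(uint16_t port_id)
{
        rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_RESET,
                        on_reset, NULL);
}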

v2: using event handler to handle reset

Signed-off-by: Shiyang He 
---
 doc/guides/nics/intel_vf.rst   |   3 +
 doc/guides/rel_notes/release_23_11.rst |   3 +
 drivers/net/iavf/iavf.h|  31 +
 drivers/net/iavf/iavf_ethdev.c | 177 -
 drivers/net/iavf/iavf_rxtx.c   |  25 
 drivers/net/iavf/iavf_rxtx.h   |   1 +
 drivers/net/iavf/iavf_vchnl.c  |  11 +-
 7 files changed, 248 insertions(+), 3 deletions(-)

diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index d365dbc185..c0acd2a7f5 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -101,6 +101,9 @@ For more detail on SR-IOV, please refer to the following 
documents:
 Set ``devargs`` parameter ``watchdog_period`` to adjust the watchdog 
period in microseconds, or set it to 0 to disable the watchdog,
 for example, ``-a 18:01.0,watchdog_period=5000`` or ``-a 
18:01.0,watchdog_period=0``.
 
+Enable vf auto-reset by setting the ``devargs`` parameter like ``-a 
18:01.0,enable_auto_reset=1`` when IAVF is backed
+by an Intel® E810 device or an Intel® 700 Series Ethernet device.
+
 The PCIE host-interface of Intel Ethernet Switch FM1 Series VF 
infrastructure
 
^
 
diff --git a/doc/guides/rel_notes/release_23_11.rst 
b/doc/guides/rel_notes/release_23_11.rst
index 4411bb32c1..80e2d7672f 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -72,6 +72,9 @@ New Features
  Also, make sure to start the actual text at the margin.
  ===
 
+* **Updated Intel iavf driver.**
+
+  Added support for iavf auto-reset.
 
 Removed Items
 -
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index 98861e4242..e78bcda962 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -95,6 +95,24 @@
 
 #define IAVF_L2TPV2_FLAGS_LEN  0x4000
 
+#ifndef DIV_ROUND_UP
+#define DIV_ROUND_UP(n, d) (   \
+{  \
+   const typeof(d) __d = d;\
+   (((n) + (__d) - 1) / (__d));\
+}  \
+)
+#endif
+#ifndef DELAY
+#define DELAY(x) rte_delay_us(x)
+#endif
+#ifndef msleep
+#define msleep(x) DELAY(1000 * (x))
+#endif
+#ifndef usleep_range
+#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000))
+#endif
+
 struct iavf_adapter;
 struct iavf_rx_queue;
 struct iavf_tx_queue;
@@ -305,6 +323,7 @@ struct iavf_devargs {
uint8_t proto_xtr[IAVF_MAX_QUEUE_NUM];
uint16_t quanta_size;
uint32_t watchdog_period;
+   uint8_t  enable_auto_reset;
 };
 
 struct iavf_security_ctx;
@@ -424,8 +443,19 @@ _atomic_set_async_response_cmd(struct iavf_info *vf, enum 
virtchnl_ops ops)
 
return !ret;
 }
+
+static inline bool
+iavf_is_reset(struct iavf_hw *hw)
+{
+   return !(IAVF_READ_REG(hw, IAVF_VF_ARQLEN1) &
+IAVF_VF_ARQLEN1_ARQENABLE_MASK);
+}
+
 int iavf_check_api_version(struct iavf_adapter *adapter);
 int iavf_get_vf_resource(struct iavf_adapter *adapter);
+void iavf_dev_event_post(struct rte_eth_dev *dev,
+   enum rte_eth_event_type event,
+   void *param, size_t param_alloc_size);
 void iavf_dev_event_handler_fini(void);
 int iavf_dev_event_handler_init(void);
 void iavf_handle_virtchnl_msg(struct rte_eth_dev *dev);
@@ -501,4 +531,5 @@ int iavf_flow_sub_check(struct iavf_adapter *adapter,
struct iavf_fsub_conf *filter);
 void iavf_dev_watchdog_enable(struct iavf_adapter *adapter);
 void iavf_dev_watchdog_disable(struct iavf_adapter *adapter);
+int iavf_handle_hw_reset(struct rte_eth_dev *dev);
 #endif /* _IAVF_ETHDEV_H_ */
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index f2fc5a5621..9ac612e12b 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -37,6 +37,7 @@
 #define IAVF_PROTO_XTR_ARG "proto_xtr"
 #define IAVF_QUANTA_SIZE_ARG   "quanta_size"
 #define IAVF_RESET_WATCHDOG_ARG"watchdog_period"
+#define IAVF_ENABLE_AUTO_RESET_ARG "enable_auto_reset"
 
 uint64_t iavf_timestamp_dynflag;
 int iavf_timestamp_dynfield_offset = -1;
@@ -45,6 +46,7 @@ static const char * const iavf_valid_args[] = {
IAVF_PROTO_XTR_ARG,
IAVF_QUANTA_SIZE_ARG,
IAVF_RESET_WATCHDOG_ARG,
+   IAVF_ENABLE_AUTO_RESET_ARG,
NULL
 };
 
@@ -305,8 +307,8 @@ iavf_dev_watchdog(void *cb_arg)

Re: 20.11.9 patches review and test

2023-08-10 Thread YangHang Liu
Hi, Luca

Red Hat QE did not find any new issues with DPDK 20.11.9-rc1 during the
tests.

I tested the 18 scenarios below and all of them passed on RHEL 9:

   - Guest with device assignment(PF) throughput testing(1G hugepage size):
   PASS
   - Guest with device assignment(PF) throughput testing(2M hugepage size)
   : PASS
   - Guest with device assignment(VF) throughput testing: PASS
   - PVP (host dpdk testpmd as vswitch) 1Q: throughput testing: PASS
   - PVP vhost-user 2Q throughput testing: PASS
   - PVP vhost-user 1Q - cross numa node throughput testing: PASS
   - Guest with vhost-user 2 queues throughput testing: PASS
   - vhost-user reconnect with dpdk-client, qemu-server qemu reconnect: PASS
   - vhost-user reconnect with dpdk-client, qemu-server ovs reconnect: PASS
   - PVP  reconnect with dpdk-client, qemu-server: PASS
   - PVP 1Q live migration testing: PASS
   - PVP 1Q cross numa node live migration testing: PASS
   - Guest with ovs+dpdk+vhost-user 1Q live migration testing: PASS
   - Guest with ovs+dpdk+vhost-user 1Q live migration testing (2M): PASS
   - Guest with ovs+dpdk+vhost-user 2Q live migration testing: PASS
   - Guest with ovs+dpdk+vhost-user 4Q live migration testing: PASS
   - Host PF + DPDK testing: PASS
   - Host VF + DPDK testing: PASS

Test Versions:

   - qemu-kvm-6.2.0
   - kernel 5.14
   - dpdk 20.11.9-rc1

# git log -1

commit 84df5f9791de5e5476a29f27fba3254761c399c3 (HEAD, tag: v20.11.9-rc1,
origin/20.11)
Author: Luca Boccassi 
Date:   Fri Jul 28 23:22:55 2023 +0100
version: 20.11.9-rc1
Signed-off-by: Luca Boccassi 


   - Test device : X540-AT2 NIC(ixgbe, 10G)


Best Regards,
YangHang Liu


On Sat, Jul 29, 2023 at 7:07 AM  wrote:

> Hi all,
>
> Here is a list of patches targeted for stable release 20.11.9.
>
> The planned date for the final release is the 14th of August 2023.
>
> Please help with testing and validation of your use cases and report
> any issues/results with reply-all to this mail. For the final release
> the fixes and reported validations will be added to the release notes.
>
> A release candidate tarball can be found at:
>
> https://dpdk.org/browse/dpdk-stable/tag/?id=v20.11.9-rc1
>
> These patches are located at branch 20.11 of dpdk-stable repo:
> https://dpdk.org/browse/dpdk-stable/
>
> Thanks.
>
> Luca Boccassi
>
> ---
> Aakash Sasidharan (1):
>   test/crypto: fix PDCP-SDAP test vectors
>
> Akhil Goyal (1):
>   doc: fix auth algos in cryptoperf app
>
> Alexander Kozyrev (2):
>   net/mlx5: forbid MPRQ restart
>   net/mlx5: fix MPRQ stride size to accommodate the headroom
>
> Ali Alnubani (1):
>   doc: fix typos and wording in flow API guide
>
> Artemii Morozov (1):
>   common/sfc_efx/base: fix Rx queue without RSS hash prefix
>
> Ashwin Sekhar T K (1):
>   doc: fix typo in graph guide
>
> Boleslav Stankevich (1):
>   net/virtio: fix initialization to return negative errno
>
> Bruce Richardson (5):
>   kernel/freebsd: fix function parameter list
>   build: fix case of project language name
>   telemetry: fix autotest on Alpine
>   eal: avoid calling cleanup twice
>   test/bonding: fix include of standard header
>
> Chaoyong He (2):
>   net/nfp: fix offloading flows
>   net/nfp: fix Tx descriptor free logic of NFD3
>
> Chengwen Feng (4):
>   net/hns3: fix Rx multiple firmware reset interrupts
>   net/hns3: fix mbuf leakage when RxQ started during reset
>   net/hns3: fix mbuf leakage when RxQ started after reset
>   net/hns3: fix device start return value
>
> Ciara Power (2):
>   crypto/scheduler: fix last element for valid args
>   app/crypto-perf: fix socket ID default value
>
> David Christensen (1):
>   net/tap: set locally administered bit for fixed MAC address
>
> David Marchand (5):
>   net/virtio-user: fix leak when initialisation fails
>   net/mlx5: enhance error log for tunnel offloading
>   examples/l2fwd-cat: fix external build
>   test: add graph tests
>   mbuf: fix Doxygen comment of distributor metadata
>
> Dengdui Huang (3):
>   net/hns3: fix variable type mismatch
>   net/hns3: fix inaccurate log
>   net/hns3: fix redundant line break in log
>
> Denis Pryazhennikov (3):
>   ethdev: update documentation for API to set FEC
>   ethdev: check that at least one FEC mode is specified
>   ethdev: update documentation for API to get FEC
>
> Devendra Singh Rawat (1):
>   net/qede: fix RSS indirection table initialization
>
> Didier Pallard (1):
>   crypto/openssl: skip workaround at compilation time
>
> Erez Ferber (1):
>   common/mlx5: adjust fork call with new kernel API
>
> Erik Gabriel Carrillo (1):
>   eventdev/timer: fix buffer flush
>
> Fengnan Chang (2):
>   eal/linux: fix legacy mem init with many segments
>   mem: fix memsegs exhausted message
>
> Ferruh Yigit (2):
>   kni: fix build with Linux 6.3
>   kni: fix build with Linux 6.5
>
> Heng Jiang (1):
>   net/mlx5: fix

RE: [PATCH 1/2] eal: remove RTE_CPUFLAG_NUMFLAGS

2023-08-10 Thread Tummala, Sivaprasad

```
From: Stanisław Kardach 
Sent: Thursday, August 3, 2023 5:20 AM
To: Tummala, Sivaprasad 
Cc: Ruifeng Wang ; Min Zhou ; David 
Christensen ; Bruce Richardson 
; Konstantin Ananyev 
; dev 
Subject: Re: [PATCH 1/2] eal: remove RTE_CPUFLAG_NUMFLAGS


On Wed, Aug 2, 2023, 23:12 Sivaprasad Tummala 
 wrote:
This patch removes RTE_CPUFLAG_NUMFLAGS to allow new CPU
features without breaking ABI each time.
I'm not sure I understand the reason for removing the last-element
canary. It's quite useful in the code that you're refactoring.
Isn't it that you essentially want to remove the test (the other commit
in this series)?
Because that I can understand as a forward-compatibility measure.
```
Yes, I will fix this in v2.
```
Signed-off-by: Sivaprasad Tummala 
---
 lib/eal/arm/include/rte_cpuflags_32.h| 1 -
 lib/eal/arm/include/rte_cpuflags_64.h| 1 -
 lib/eal/arm/rte_cpuflags.c   | 7 +--
 lib/eal/loongarch/include/rte_cpuflags.h | 1 -
 lib/eal/loongarch/rte_cpuflags.c | 7 +--
 lib/eal/ppc/include/rte_cpuflags.h   | 1 -
 lib/eal/ppc/rte_cpuflags.c   | 7 +--
 lib/eal/riscv/include/rte_cpuflags.h | 1 -
 lib/eal/riscv/rte_cpuflags.c | 7 +--
 lib/eal/x86/include/rte_cpuflags.h   | 1 -
 lib/eal/x86/rte_cpuflags.c   | 7 +--
 11 files changed, 25 insertions(+), 16 deletions(-)

diff --git a/lib/eal/arm/include/rte_cpuflags_32.h 
b/lib/eal/arm/include/rte_cpuflags_32.h
index 4e254428a2..41ab0d5f21 100644
--- a/lib/eal/arm/include/rte_cpuflags_32.h
+++ b/lib/eal/arm/include/rte_cpuflags_32.h
@@ -43,7 +43,6 @@ enum rte_cpu_flag_t {
RTE_CPUFLAG_V7L,
RTE_CPUFLAG_V8L,
/* The last item */
-   RTE_CPUFLAG_NUMFLAGS,/**< This should always be the last! */
 };

 #include "generic/rte_cpuflags.h"
diff --git a/lib/eal/arm/include/rte_cpuflags_64.h 
b/lib/eal/arm/include/rte_cpuflags_64.h
index aa7a56d491..ea5193e510 100644
--- a/lib/eal/arm/include/rte_cpuflags_64.h
+++ b/lib/eal/arm/include/rte_cpuflags_64.h
@@ -37,7 +37,6 @@ enum rte_cpu_flag_t {
RTE_CPUFLAG_SVEBF16,
RTE_CPUFLAG_AARCH64,
/* The last item */
-   RTE_CPUFLAG_NUMFLAGS,/**< This should always be the last! */
 };

 #include "generic/rte_cpuflags.h"
diff --git a/lib/eal/arm/rte_cpuflags.c b/lib/eal/arm/rte_cpuflags.c
index 56e7b2e689..447a8d9f9f 100644
--- a/lib/eal/arm/rte_cpuflags.c
+++ b/lib/eal/arm/rte_cpuflags.c
@@ -139,8 +139,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
 {
const struct feature_entry *feat;
hwcap_registers_t regs = {0};
+   unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);

-   if (feature >= RTE_CPUFLAG_NUMFLAGS)
+   if (feature >= num_flags)
return -ENOENT;

feat = &rte_cpu_feature_table[feature];
@@ -154,7 +155,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
 const char *
 rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
 {
-   if (feature >= RTE_CPUFLAG_NUMFLAGS)
+   unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
+
+   if (feature >= num_flags)
return NULL;
return rte_cpu_feature_table[feature].name;
 }
diff --git a/lib/eal/loongarch/include/rte_cpuflags.h 
b/lib/eal/loongarch/include/rte_cpuflags.h
index 1c80779262..9ff8baaa3c 100644
--- a/lib/eal/loongarch/include/rte_cpuflags.h
+++ b/lib/eal/loongarch/include/rte_cpuflags.h
@@ -27,7 +27,6 @@ enum rte_cpu_flag_t {
RTE_CPUFLAG_LBT_ARM,
RTE_CPUFLAG_LBT_MIPS,
/* The last item */
-   RTE_CPUFLAG_NUMFLAGS /**< This should always be the last! */
 };

 #include "generic/rte_cpuflags.h"
diff --git a/lib/eal/loongarch/rte_cpuflags.c b/lib/eal/loongarch/rte_cpuflags.c
index 0a75ca58d4..642eb42509 100644
--- a/lib/eal/loongarch/rte_cpuflags.c
+++ b/lib/eal/loongarch/rte_cpuflags.c
@@ -66,8 +66,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
 {
const struct feature_entry *feat;
hwcap_registers_t regs = {0};
+   unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);

-   if (feature >= RTE_CPUFLAG_NUMFLAGS)
+   if (feature >= num_flags)
return -ENOENT;

feat = &rte_cpu_feature_table[feature];
@@ -81,7 +82,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
 const char *
 rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
 {
-   if (feature >= RTE_CPUFLAG_NUMFLAGS)
+   unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
+
+   if (feature >= num_flags)
return NULL;
return rte_cpu_feature_table[feature].name;
 }
diff --git a/lib/eal/ppc/include/rte_cpuflags.h 
b/lib/eal/ppc/include/rte_cpuflags.h
index a88355d170..b74e7a73ee 100644
--- a/lib/eal/ppc/include

[PATCH v2 2/2] eal: remove NUMFLAGS enumeration

2023-08-10 Thread Sivaprasad Tummala
This patch removes RTE_CPUFLAG_NUMFLAGS so that new CPU features can be
added without breaking the ABI each time.
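
For illustration only (not part of this patch), a sketch of how an
application can still enumerate CPU flags without the removed sentinel,
relying on the behaviour shown in this patch that out-of-range values
return NULL / -ENOENT:

#include <stdio.h>
#include <rte_cpuflags.h>

static void
dump_cpu_flags(void)
{
        unsigned int i;

        for (i = 0; ; i++) {
                const char *name =
                        rte_cpu_get_flag_name((enum rte_cpu_flag_t)i);

                if (name == NULL)
                        break;
                printf("%-20s %s\n", name,
                        rte_cpu_get_flag_enabled((enum rte_cpu_flag_t)i) > 0 ?
                        "yes" : "no");
        }
}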

Signed-off-by: Sivaprasad Tummala 
---
 lib/eal/arm/include/rte_cpuflags_32.h| 1 -
 lib/eal/arm/include/rte_cpuflags_64.h| 1 -
 lib/eal/arm/rte_cpuflags.c   | 7 +--
 lib/eal/loongarch/include/rte_cpuflags.h | 1 -
 lib/eal/loongarch/rte_cpuflags.c | 7 +--
 lib/eal/ppc/include/rte_cpuflags.h   | 1 -
 lib/eal/ppc/rte_cpuflags.c   | 7 +--
 lib/eal/riscv/include/rte_cpuflags.h | 1 -
 lib/eal/riscv/rte_cpuflags.c | 7 +--
 lib/eal/x86/include/rte_cpuflags.h   | 1 -
 lib/eal/x86/rte_cpuflags.c   | 7 +--
 11 files changed, 25 insertions(+), 16 deletions(-)

diff --git a/lib/eal/arm/include/rte_cpuflags_32.h 
b/lib/eal/arm/include/rte_cpuflags_32.h
index 4e254428a2..41ab0d5f21 100644
--- a/lib/eal/arm/include/rte_cpuflags_32.h
+++ b/lib/eal/arm/include/rte_cpuflags_32.h
@@ -43,7 +43,6 @@ enum rte_cpu_flag_t {
RTE_CPUFLAG_V7L,
RTE_CPUFLAG_V8L,
/* The last item */
-   RTE_CPUFLAG_NUMFLAGS,/**< This should always be the last! */
 };
 
 #include "generic/rte_cpuflags.h"
diff --git a/lib/eal/arm/include/rte_cpuflags_64.h 
b/lib/eal/arm/include/rte_cpuflags_64.h
index aa7a56d491..ea5193e510 100644
--- a/lib/eal/arm/include/rte_cpuflags_64.h
+++ b/lib/eal/arm/include/rte_cpuflags_64.h
@@ -37,7 +37,6 @@ enum rte_cpu_flag_t {
RTE_CPUFLAG_SVEBF16,
RTE_CPUFLAG_AARCH64,
/* The last item */
-   RTE_CPUFLAG_NUMFLAGS,/**< This should always be the last! */
 };
 
 #include "generic/rte_cpuflags.h"
diff --git a/lib/eal/arm/rte_cpuflags.c b/lib/eal/arm/rte_cpuflags.c
index 56e7b2e689..f33fee242b 100644
--- a/lib/eal/arm/rte_cpuflags.c
+++ b/lib/eal/arm/rte_cpuflags.c
@@ -139,8 +139,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
 {
const struct feature_entry *feat;
hwcap_registers_t regs = {0};
+   unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
 
-   if (feature >= RTE_CPUFLAG_NUMFLAGS)
+   if ((unsigned int)feature >= num_flags)
return -ENOENT;
 
feat = &rte_cpu_feature_table[feature];
@@ -154,7 +155,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
 const char *
 rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
 {
-   if (feature >= RTE_CPUFLAG_NUMFLAGS)
+   unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
+
+   if ((unsigned int)feature >= num_flags)
return NULL;
return rte_cpu_feature_table[feature].name;
 }
diff --git a/lib/eal/loongarch/include/rte_cpuflags.h 
b/lib/eal/loongarch/include/rte_cpuflags.h
index 1c80779262..9ff8baaa3c 100644
--- a/lib/eal/loongarch/include/rte_cpuflags.h
+++ b/lib/eal/loongarch/include/rte_cpuflags.h
@@ -27,7 +27,6 @@ enum rte_cpu_flag_t {
RTE_CPUFLAG_LBT_ARM,
RTE_CPUFLAG_LBT_MIPS,
/* The last item */
-   RTE_CPUFLAG_NUMFLAGS /**< This should always be the last! */
 };
 
 #include "generic/rte_cpuflags.h"
diff --git a/lib/eal/loongarch/rte_cpuflags.c b/lib/eal/loongarch/rte_cpuflags.c
index 0a75ca58d4..73b53b8a3a 100644
--- a/lib/eal/loongarch/rte_cpuflags.c
+++ b/lib/eal/loongarch/rte_cpuflags.c
@@ -66,8 +66,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
 {
const struct feature_entry *feat;
hwcap_registers_t regs = {0};
+   unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
 
-   if (feature >= RTE_CPUFLAG_NUMFLAGS)
+   if ((unsigned int)feature >= num_flags)
return -ENOENT;
 
feat = &rte_cpu_feature_table[feature];
@@ -81,7 +82,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
 const char *
 rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
 {
-   if (feature >= RTE_CPUFLAG_NUMFLAGS)
+   unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
+
+   if ((unsigned int)feature >= num_flags)
return NULL;
return rte_cpu_feature_table[feature].name;
 }
diff --git a/lib/eal/ppc/include/rte_cpuflags.h 
b/lib/eal/ppc/include/rte_cpuflags.h
index a88355d170..b74e7a73ee 100644
--- a/lib/eal/ppc/include/rte_cpuflags.h
+++ b/lib/eal/ppc/include/rte_cpuflags.h
@@ -49,7 +49,6 @@ enum rte_cpu_flag_t {
RTE_CPUFLAG_HTM,
RTE_CPUFLAG_ARCH_2_07,
/* The last item */
-   RTE_CPUFLAG_NUMFLAGS,/**< This should always be the last! */
 };
 
 #include "generic/rte_cpuflags.h"
diff --git a/lib/eal/ppc/rte_cpuflags.c b/lib/eal/ppc/rte_cpuflags.c
index 61db5c216d..a173c62631 100644
--- a/lib/eal/ppc/rte_cpuflags.c
+++ b/lib/eal/ppc/rte_cpuflags.c
@@ -90,8 +90,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
 {
const struct feature_entry *feat;
hwcap_registers_t regs = {0};
+   unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
 
-   if (feature >= RTE_CPUFLAG_NUMFLAGS)
+   if ((unsigned int)feature >= num_flags)
return -ENOENT

[PATCH v2 1/2] test/cpuflags: removed test for NUMFLAGS

2023-08-10 Thread Sivaprasad Tummala
This patch removes the invalid-flag check that relied on
RTE_CPUFLAG_NUMFLAGS so that new CPU features can be added without
breaking the ABI each time.

Signed-off-by: Sivaprasad Tummala 
---
 app/test/test_cpuflags.c | 9 -
 1 file changed, 9 deletions(-)

diff --git a/app/test/test_cpuflags.c b/app/test/test_cpuflags.c
index a0e342ae48..2b8563602c 100644
--- a/app/test/test_cpuflags.c
+++ b/app/test/test_cpuflags.c
@@ -322,15 +322,6 @@ test_cpuflags(void)
CHECK_FOR_FLAG(RTE_CPUFLAG_LBT_MIPS);
 #endif
 
-   /*
-* Check if invalid data is handled properly
-*/
-   printf("\nCheck for invalid flag:\t");
-   result = rte_cpu_get_flag_enabled(RTE_CPUFLAG_NUMFLAGS);
-   printf("%s\n", cpu_flag_result(result));
-   if (result != -ENOENT)
-   return -1;
-
return 0;
 }
 
-- 
2.34.1



[PATCH] net/iavf: support no data path polling mode

2023-08-10 Thread Mingjin Ye
Currently, during a PF-to-VF reset caused by an action such as changing
trust settings on a VF, a DPDK application running with the iavf PMD
loses connectivity, and the only remedy is to restart the DPDK
application.

Instead of forcing a restart of the DPDK application to restore
connectivity, the iavf PMD handles the PF-to-VF reset event by
performing all the steps necessary to bring the VF back online.

To minimize downtime, a devarg "no_poll_on_link_down" is introduced in
the iavf PMD. When this flag is set, the PMD switches to no-poll mode
when the link goes down (Rx/Tx bursts return 0 immediately). When the
link comes back up, the PMD returns to the normal Rx/Tx burst path.
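
For illustration only (not part of this patch), a sketch of the
application-visible behaviour with -a 18:01.0,no_poll_on_link_down=1:
rte_eth_rx_burst() simply returns 0 while the link is down, so an
ordinary poll loop keeps working. Port/queue 0 and the packet handling
are hypothetical; mempool and device setup are assumed elsewhere.

#include <rte_ethdev.h>
#include <rte_mbuf.h>

static void
poll_port(uint16_t port_id)
{
        struct rte_mbuf *pkts[32];

        for (;;) {
                uint16_t nb = rte_eth_rx_burst(port_id, 0, pkts, 32);
                uint16_t i;

                /* nb is 0 while the PMD is in no-poll mode */
                for (i = 0; i < nb; i++)
                        rte_pktmbuf_free(pkts[i]); /* placeholder processing */
        }
}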

Signed-off-by: Mingjin Ye 
---
 doc/guides/nics/intel_vf.rst|  3 ++
 drivers/net/iavf/iavf.h |  2 ++
 drivers/net/iavf/iavf_ethdev.c  | 12 +++
 drivers/net/iavf/iavf_rxtx.c| 29 +++--
 drivers/net/iavf/iavf_rxtx.h|  1 +
 drivers/net/iavf/iavf_rxtx_vec_avx2.c   | 29 ++---
 drivers/net/iavf/iavf_rxtx_vec_avx512.c | 42 ++---
 drivers/net/iavf/iavf_rxtx_vec_sse.c| 21 -
 drivers/net/iavf/iavf_vchnl.c   | 20 
 9 files changed, 147 insertions(+), 12 deletions(-)

diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index d365dbc185..54cfb688b3 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -101,6 +101,9 @@ For more detail on SR-IOV, please refer to the following 
documents:
 Set ``devargs`` parameter ``watchdog_period`` to adjust the watchdog 
period in microseconds, or set it to 0 to disable the watchdog,
 for example, ``-a 18:01.0,watchdog_period=5000`` or ``-a 
18:01.0,watchdog_period=0``.
 
+Enable vf no-poll-on-link-down by setting the ``devargs`` parameter like 
``-a 18:01.0,no_poll_on_link_down=1`` when IAVF is backed
+by an Intel® E810 device or an Intel® 700 Series Ethernet device.
+
 The PCIE host-interface of Intel Ethernet Switch FM1 Series VF 
infrastructure
 
^
 
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index 98861e4242..30b05d25b6 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -305,6 +305,7 @@ struct iavf_devargs {
uint8_t proto_xtr[IAVF_MAX_QUEUE_NUM];
uint16_t quanta_size;
uint32_t watchdog_period;
+   uint16_t no_poll_on_link_down;
 };
 
 struct iavf_security_ctx;
@@ -323,6 +324,7 @@ struct iavf_adapter {
uint32_t ptype_tbl[IAVF_MAX_PKT_TYPE] __rte_cache_min_aligned;
bool stopped;
bool closed;
+   bool no_poll;
uint16_t fdir_ref_cnt;
struct iavf_devargs devargs;
 };
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index f2fc5a5621..2fdc845204 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -37,6 +37,7 @@
 #define IAVF_PROTO_XTR_ARG "proto_xtr"
 #define IAVF_QUANTA_SIZE_ARG   "quanta_size"
 #define IAVF_RESET_WATCHDOG_ARG"watchdog_period"
+#define IAVF_NO_POLL_ON_LINK_DOWN_ARG   "no_poll_on_link_down"
 
 uint64_t iavf_timestamp_dynflag;
 int iavf_timestamp_dynfield_offset = -1;
@@ -45,6 +46,7 @@ static const char * const iavf_valid_args[] = {
IAVF_PROTO_XTR_ARG,
IAVF_QUANTA_SIZE_ARG,
IAVF_RESET_WATCHDOG_ARG,
+   IAVF_NO_POLL_ON_LINK_DOWN_ARG,
NULL
 };
 
@@ -2237,6 +2239,7 @@ static int iavf_parse_devargs(struct rte_eth_dev *dev)
struct rte_kvargs *kvlist;
int ret;
int watchdog_period = -1;
+   uint16_t no_poll_on_link_down;
 
if (!devargs)
return 0;
@@ -2270,6 +2273,15 @@ static int iavf_parse_devargs(struct rte_eth_dev *dev)
else
ad->devargs.watchdog_period = watchdog_period;
 
+   ret = rte_kvargs_process(kvlist, IAVF_NO_POLL_ON_LINK_DOWN_ARG,
+&parse_u16, &no_poll_on_link_down);
+   if (ret)
+   goto bail;
+   if (no_poll_on_link_down == 0)
+   ad->devargs.no_poll_on_link_down = 0;
+   else
+   ad->devargs.no_poll_on_link_down = 1;
+
if (ad->devargs.quanta_size != 0 &&
(ad->devargs.quanta_size < 256 || ad->devargs.quanta_size > 4096 ||
 ad->devargs.quanta_size & 0x40)) {
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index f7df4665d1..447e306fee 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -770,6 +770,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_info *vf =
IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+   struct iavf_vsi *vsi = &vf->vsi;
struct iavf_tx_queue *txq;
const struct rte_memzone *mz;