RE: [PATCH] eal: add notes to SMP memory barrier APIs

2023-06-25 Thread Ruifeng Wang
> -Original Message-
> From: Thomas Monjalon 
> Sent: Wednesday, June 21, 2023 3:30 PM
> To: Ruifeng Wang 
> Cc: david.march...@redhat.com; dev@dpdk.org; konstantin.v.anan...@yandex.ru; 
> Honnappa
> Nagarahalli ; nd 
> Subject: Re: [PATCH] eal: add notes to SMP memory barrier APIs
> 
> 21/06/2023 08:44, Ruifeng Wang:
> > + * @note
> > + *  This function is deprecated. It adds complexity to the memory
> > + model
> > + *  used by this project. C11 memory model should always be used.
> > + *  rte_atomic_thread_fence() should be used instead.
> >   */
> >  static inline void rte_smp_mb(void);
> 
> I think it should be more explicit:
> "the memory model used by this project" -> the DPDK memory model Why it adds 
> complexity?

The rte_smp_xx APIs define a set of memory order semantics. This is
redundant given that we are using the memory order semantics defined in
the C language.
I'll make it more explicit in the next version.

> What do you mean by "C11 memory model"?

I mean the memory order semantics:
https://en.cppreference.com/w/c/atomic/memory_order

> 



RE: [PATCH] eal: add notes to SMP memory barrier APIs

2023-06-25 Thread Ruifeng Wang
> -Original Message-
> From: Mattias Rönnblom 
> Sent: Friday, June 23, 2023 2:20 AM
> To: Ruifeng Wang ; tho...@monjalon.net; 
> david.march...@redhat.com
> Cc: dev@dpdk.org; konstantin.v.anan...@yandex.ru; Honnappa Nagarahalli
> ; nd 
> Subject: Re: [PATCH] eal: add notes to SMP memory barrier APIs
> 
> On 2023-06-21 08:44, Ruifeng Wang wrote:
> > The rte_smp_xx() APIs are deprecated. But it is not mentioned in the
> > function header.
> > Added notes in function header for clarification.
> >
> > Signed-off-by: Ruifeng Wang 
> > ---
> >   lib/eal/include/generic/rte_atomic.h | 15 +++
> >   1 file changed, 15 insertions(+)
> >
> > diff --git a/lib/eal/include/generic/rte_atomic.h
> > b/lib/eal/include/generic/rte_atomic.h
> > index 58df843c54..542a2c16ff 100644
> > --- a/lib/eal/include/generic/rte_atomic.h
> > +++ b/lib/eal/include/generic/rte_atomic.h
> > @@ -55,6 +55,11 @@ static inline void rte_rmb(void);
> >* Guarantees that the LOAD and STORE operations that precede the
> >* rte_smp_mb() call are globally visible across the lcores
> >* before the LOAD and STORE operations that follows it.
> > + *
> > + * @note
> > + *  This function is deprecated. It adds complexity to the memory
> > + model
> > + *  used by this project. C11 memory model should always be used.
> > + *  rte_atomic_thread_fence() should be used instead.
> 
> It's somewhat confusing to learn I should use the C11 memory model, and then 
> in the next
> sentence that I should call a function which is not in C11.

I should say "memory order semantics". It will be more specific.
The wrapper function rte_atomic_thread_fence is a special case. It provides an 
optimized implementation
for __ATOMIC_SEQ_CST for x86:
https://www.dpdk.org/blog/2021/03/26/dpdk-adopts-the-c11-memory-model/

> 
> I think it would be helpful to say which memory_model parameters should be 
> used to replace
> the rte_smp_*mb() calls, and if there are any difference in semantics between 
> the Linux
> kernel-style barriers and their C11 (near-)equivalents.

As compiler atomic built-ins are being used, the memory model parameters should
be the ones listed in:
https://gcc.gnu.org/onlinedocs/gcc/_005f_005fatomic-Builtins.html
We are not adopting Linux kernel-style barriers, so there is no need to mention them.

> 
> Is there some particular reason these functions aren't marked 
> __rte_deprecated? Too many
> warnings?

Yes, warnings will come up. Some occurrences still remain in the project. 

> 
> >*/
> >   static inline void rte_smp_mb(void);
> >
> > @@ -64,6 +69,11 @@ static inline void rte_smp_mb(void);
> >* Guarantees that the STORE operations that precede the
> >* rte_smp_wmb() call are globally visible across the lcores
> >* before the STORE operations that follows it.
> > + *
> > + * @note
> > + *  This function is deprecated. It adds complexity to the memory
> > + model
> > + *  used by this project. C11 memory model should always be used.
> > + *  rte_atomic_thread_fence() should be used instead.
> >*/
> >   static inline void rte_smp_wmb(void);
> >
> > @@ -73,6 +83,11 @@ static inline void rte_smp_wmb(void);
> >* Guarantees that the LOAD operations that precede the
> >* rte_smp_rmb() call are globally visible across the lcores
> >* before the LOAD operations that follows it.
> > + *
> > + * @note
> > + *  This function is deprecated. It adds complexity to the memory
> > + model
> > + *  used by this project. C11 memory model should always be used.
> > + *  rte_atomic_thread_fence() should be used instead.
> >*/
> >   static inline void rte_smp_rmb(void);
> >   ///@}


[PATCH] net/ice: fix VLAN mode parser

2023-06-25 Thread Qiming Yang
The parser will not be created if the raw packet filter is not supported.
This patch adds a NULL pointer check for the parser structure when
configuring VLAN mode.

Fixes: 6e753d777ffc ("net/ice: initialize parser for double VLAN")
Cc: sta...@dpdk.org

Signed-off-by: Qiming Yang 
---
 drivers/net/ice/ice_generic_flow.c | 10 ++
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ice/ice_generic_flow.c 
b/drivers/net/ice/ice_generic_flow.c
index ed3075d555..91bf1d6fcb 100644
--- a/drivers/net/ice/ice_generic_flow.c
+++ b/drivers/net/ice/ice_generic_flow.c
@@ -1836,10 +1836,12 @@ ice_flow_init(struct ice_adapter *ad)
if (ice_parser_create(&ad->hw, &ad->psr) != ICE_SUCCESS)
PMD_INIT_LOG(WARNING, "Failed to initialize DDP parser, raw 
packet filter will not be supported");
 
-   if (ice_is_dvm_ena(&ad->hw))
-   ice_parser_dvm_set(ad->psr, true);
-   else
-   ice_parser_dvm_set(ad->psr, false);
+   if (ad->psr) {
+   if (ice_is_dvm_ena(&ad->hw))
+   ice_parser_dvm_set(ad->psr, true);
+   else
+   ice_parser_dvm_set(ad->psr, false);
+   }
 
RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
if (engine->init == NULL) {
-- 
2.25.1



RE: [PATCH] eal: add notes to SMP memory barrier APIs

2023-06-25 Thread Ruifeng Wang
> -Original Message-
> From: Tyler Retzlaff 
> Sent: Saturday, June 24, 2023 5:51 AM
> To: Mattias Rönnblom 
> Cc: Ruifeng Wang ; tho...@monjalon.net; 
> david.march...@redhat.com;
> dev@dpdk.org; konstantin.v.anan...@yandex.ru; Honnappa Nagarahalli
> ; nd 
> Subject: Re: [PATCH] eal: add notes to SMP memory barrier APIs
> 
> On Thu, Jun 22, 2023 at 08:19:30PM +0200, Mattias Rönnblom wrote:
> > On 2023-06-21 08:44, Ruifeng Wang wrote:
> > >The rte_smp_xx() APIs are deprecated. But it is not mentioned in the
> > >function header.
> > >Added notes in function header for clarification.
> > >
> > >Signed-off-by: Ruifeng Wang 
> > >---
> > >  lib/eal/include/generic/rte_atomic.h | 15 +++
> > >  1 file changed, 15 insertions(+)
> > >
> > >diff --git a/lib/eal/include/generic/rte_atomic.h
> > >b/lib/eal/include/generic/rte_atomic.h
> > >index 58df843c54..542a2c16ff 100644
> > >--- a/lib/eal/include/generic/rte_atomic.h
> > >+++ b/lib/eal/include/generic/rte_atomic.h
> > >@@ -55,6 +55,11 @@ static inline void rte_rmb(void);
> > >   * Guarantees that the LOAD and STORE operations that precede the
> > >   * rte_smp_mb() call are globally visible across the lcores
> > >   * before the LOAD and STORE operations that follows it.
> > >+ *
> > >+ * @note
> > >+ *  This function is deprecated. It adds complexity to the memory
> > >+ model
> > >+ *  used by this project. C11 memory model should always be used.
> > >+ *  rte_atomic_thread_fence() should be used instead.
> >
> > It's somewhat confusing to learn I should use the C11 memory model,
> > and then in the next sentence that I should call a function which is
> > not in C11.
> 
> i wonder if we can just do without the comments until we begin to adopt 
> changes for 23.11
> release because the guidance will be short lived.
> 
> in 23.07 we want to say that only gcc builtins that align with the standard 
> C++ memory
> model should be used.
> 
> in 23.11 we want to say that only standard C11 atomics should be used.

Good point. The memory order parameter will change in 23.11.

> 
> my suggestion i guess is just adapt the patch to be appropriate for
> 23.11 and only merge it after 23.07 release? might be easier to manage.

Agree to only merge it after 23.07. 
I will update the comment for standard C11 atomics.

> 
> >
> > I think it would be helpful to say which memory_model parameters
> > should be used to replace the rte_smp_*mb() calls, and if there are
> > any difference in semantics between the Linux kernel-style barriers
> > and their C11 (near-)equivalents.
> >
> > Is there some particular reason these functions aren't marked
> > __rte_deprecated? Too many warnings?
> >
> > >   */
> > >  static inline void rte_smp_mb(void); @@ -64,6 +69,11 @@ static
> > >inline void rte_smp_mb(void);
> > >   * Guarantees that the STORE operations that precede the
> > >   * rte_smp_wmb() call are globally visible across the lcores
> > >   * before the STORE operations that follows it.
> > >+ *
> > >+ * @note
> > >+ *  This function is deprecated. It adds complexity to the memory
> > >+ model
> > >+ *  used by this project. C11 memory model should always be used.
> > >+ *  rte_atomic_thread_fence() should be used instead.
> > >   */
> > >  static inline void rte_smp_wmb(void); @@ -73,6 +83,11 @@ static
> > >inline void rte_smp_wmb(void);
> > >   * Guarantees that the LOAD operations that precede the
> > >   * rte_smp_rmb() call are globally visible across the lcores
> > >   * before the LOAD operations that follows it.
> > >+ *
> > >+ * @note
> > >+ *  This function is deprecated. It adds complexity to the memory
> > >+ model
> > >+ *  used by this project. C11 memory model should always be used.
> > >+ *  rte_atomic_thread_fence() should be used instead.
> > >   */
> > >  static inline void rte_smp_rmb(void);  ///@}


Re: [PATCH] eal: add notes to SMP memory barrier APIs

2023-06-25 Thread Thomas Monjalon
25/06/2023 10:45, Ruifeng Wang:
> From: Tyler Retzlaff 
> > On Thu, Jun 22, 2023 at 08:19:30PM +0200, Mattias Rönnblom wrote:
> > > On 2023-06-21 08:44, Ruifeng Wang wrote:
> > > >+ *  This function is deprecated. It adds complexity to the memory
> > > >+ model
> > > >+ *  used by this project. C11 memory model should always be used.
> > > >+ *  rte_atomic_thread_fence() should be used instead.
> > >
> > > It's somewhat confusing to learn I should use the C11 memory model,
> > > and then in the next sentence that I should call a function which is
> > > not in C11.
> > 
> > i wonder if we can just do without the comments until we begin to adopt 
> > changes for 23.11
> > release because the guidance will be short lived.
> > 
> > in 23.07 we want to say that only gcc builtins that align with the standard 
> > C++ memory
> > model should be used.
> > 
> > in 23.11 we want to say that only standard C11 atomics should be used.
> 
> Good point. The memory order parameter will change in 23.11.
> 
> > 
> > my suggestion i guess is just adapt the patch to be appropriate for
> > 23.11 and only merge it after 23.07 release? might be easier to manage.
> 
> Agree to only merge it after 23.07. 
> I will update the comment for standard C11 atomics.

I would prefer having each step documented
so it will be clearer what is new in 23.11.





Re: [PATCH] drivers: remove compile-time option for IEEE 1588

2023-06-25 Thread Thomas Monjalon
23/06/2023 16:00, Ferruh Yigit:
> On 2/3/2023 1:28 PM, Thomas Monjalon wrote:
> > The option RTE_LIBRTE_IEEE1588 has no effect on any library,
> > despite what its name suggests.
> > 
> > Also, we are supposed to enable/disable features dynamically,
> > not at compilation time.
> > 
> > And the best part is that this macro is neither documented
> > nor in rte_config.h.
> > 
> > Keeping this flag looks like a mistake, so it is removed,
> > meaning the feature is always enabled.
> > PS: it disables the vector paths of some drivers.
> > 
> 
> PTP (IEEE1588) processing brings additional overhead to datapath.
> 
> Agree that it is not good to have an undocumented compile-time macro, but
> just removing it may cause performance degradation.
> 
> It may be possible to have a separate burst function that supports PTP,
> which the driver configures when the application explicitly requests it
> with a new offload flag (although it is not exactly an offload). What do
> you think?

The best is to enable dynamically with different functions.




Re: DPDK22 issue: Unable to set more than 4 queues in Azure

2023-06-25 Thread Stephen Hemminger
On Thu, 22 Jun 2023 22:06:10 +0530
Nageswara Rao  wrote:

> Hi All,
> 
> We are observing the following issue with DPDK 22.11. We didn’t find any
> upstream patches for this issue on the DPDK GitHub. If this is a known
> issue, please let us know.
> 
> 
> 
> *Issue:*
> 
> On the Azure platform, we are unable to configure more than 4 queues. When
> we try to configure more than 4 queues, it fails with an "EAL: Cannot send
> more than 8 FDs" error.
> 
> Here I am pasting the working and failing testpmd logs.
> 
> Please note that this issue is not observed in DPDK 21.11.
> 

You should be using the native netvsc PMD, not the vdev_netvsc, failsafe,
tap kludge.

I don't work on Azure any more but I suspect the issue is that the default
in the kernel for TAP is for the number of queues == number of cores.

You aren't going to see any real benefit from having more queues than
the number of DPDK cores.



Re: [PATCH] crypto/openssl: do not build useless workaround

2023-06-25 Thread Thomas Monjalon
18/04/2023 16:56, Didier Pallard:
> This workaround was needed before version 1.0.1f. Do not build it for
> versions >= 1.1.
> 
> Fixes: d61f70b4c918 ("crypto/libcrypto: add driver for OpenSSL library")
> Signed-off-by: Didier Pallard 
> Cc: sta...@dpdk.org
[...]
> +#if OPENSSL_VERSION_NUMBER < 0x10100000L
>   /* Workaround open ssl bug in version less then 1.0.1f */
>   if (EVP_EncryptUpdate(ctx, empty, &unused, empty, 0) <= 0)
>   goto process_auth_encryption_gcm_err;
> +#endif

What happens if we build with OpenSSL 1.1 and run with OpenSSL 1.0?
Can we have a runtime check?
Or is it better to always do the workaround, as before?




[PATCH 1/4] net/idpf: refine dev_link_update function

2023-06-25 Thread Mingxia Liu
This patch refines the idpf_dev_link_update callback function to align
with the CPFL PMD base code.

Signed-off-by: Mingxia Liu 
---
 drivers/net/idpf/idpf_ethdev.c | 63 --
 1 file changed, 30 insertions(+), 33 deletions(-)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index fb5965..bfdac92b95 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -30,6 +30,23 @@ static const char * const idpf_valid_args[] = {
NULL
 };
 
+uint32_t idpf_supported_speeds[] = {
+   RTE_ETH_SPEED_NUM_NONE,
+   RTE_ETH_SPEED_NUM_10M,
+   RTE_ETH_SPEED_NUM_100M,
+   RTE_ETH_SPEED_NUM_1G,
+   RTE_ETH_SPEED_NUM_2_5G,
+   RTE_ETH_SPEED_NUM_5G,
+   RTE_ETH_SPEED_NUM_10G,
+   RTE_ETH_SPEED_NUM_20G,
+   RTE_ETH_SPEED_NUM_25G,
+   RTE_ETH_SPEED_NUM_40G,
+   RTE_ETH_SPEED_NUM_50G,
+   RTE_ETH_SPEED_NUM_56G,
+   RTE_ETH_SPEED_NUM_100G,
+   RTE_ETH_SPEED_NUM_200G
+};
+
 static const uint64_t idpf_map_hena_rss[] = {
[IDPF_HASH_NONF_UNICAST_IPV4_UDP] =
RTE_ETH_RSS_NONFRAG_IPV4_UDP,
@@ -110,42 +127,22 @@ idpf_dev_link_update(struct rte_eth_dev *dev,
 {
struct idpf_vport *vport = dev->data->dev_private;
struct rte_eth_link new_link;
+   unsigned int i;
 
memset(&new_link, 0, sizeof(new_link));
 
-   switch (vport->link_speed) {
-   case RTE_ETH_SPEED_NUM_10M:
-   new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
-   break;
-   case RTE_ETH_SPEED_NUM_100M:
-   new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
-   break;
-   case RTE_ETH_SPEED_NUM_1G:
-   new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
-   break;
-   case RTE_ETH_SPEED_NUM_10G:
-   new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
-   break;
-   case RTE_ETH_SPEED_NUM_20G:
-   new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
-   break;
-   case RTE_ETH_SPEED_NUM_25G:
-   new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
-   break;
-   case RTE_ETH_SPEED_NUM_40G:
-   new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
-   break;
-   case RTE_ETH_SPEED_NUM_50G:
-   new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
-   break;
-   case RTE_ETH_SPEED_NUM_100G:
-   new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
-   break;
-   case RTE_ETH_SPEED_NUM_200G:
-   new_link.link_speed = RTE_ETH_SPEED_NUM_200G;
-   break;
-   default:
-   new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+   for (i = 0; i < RTE_DIM(idpf_supported_speeds); i++) {
+   if (vport->link_speed == idpf_supported_speeds[i]) {
+   new_link.link_speed = vport->link_speed;
+   break;
+   }
+   }
+
+   if (i == RTE_DIM(idpf_supported_speeds)) {
+   if (vport->link_up)
+   new_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
+   else
+   new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
}
 
new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
-- 
2.34.1



[PATCH] net/idpf: refine dev_link_update function

2023-06-25 Thread Mingxia Liu
This patch refines the idpf_dev_link_update callback function to align
with the CPFL PMD base code.

Signed-off-by: Mingxia Liu 
---
 drivers/net/idpf/idpf_ethdev.c | 63 --
 1 file changed, 30 insertions(+), 33 deletions(-)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index fb5965..bfdac92b95 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -30,6 +30,23 @@ static const char * const idpf_valid_args[] = {
NULL
 };
 
+uint32_t idpf_supported_speeds[] = {
+   RTE_ETH_SPEED_NUM_NONE,
+   RTE_ETH_SPEED_NUM_10M,
+   RTE_ETH_SPEED_NUM_100M,
+   RTE_ETH_SPEED_NUM_1G,
+   RTE_ETH_SPEED_NUM_2_5G,
+   RTE_ETH_SPEED_NUM_5G,
+   RTE_ETH_SPEED_NUM_10G,
+   RTE_ETH_SPEED_NUM_20G,
+   RTE_ETH_SPEED_NUM_25G,
+   RTE_ETH_SPEED_NUM_40G,
+   RTE_ETH_SPEED_NUM_50G,
+   RTE_ETH_SPEED_NUM_56G,
+   RTE_ETH_SPEED_NUM_100G,
+   RTE_ETH_SPEED_NUM_200G
+};
+
 static const uint64_t idpf_map_hena_rss[] = {
[IDPF_HASH_NONF_UNICAST_IPV4_UDP] =
RTE_ETH_RSS_NONFRAG_IPV4_UDP,
@@ -110,42 +127,22 @@ idpf_dev_link_update(struct rte_eth_dev *dev,
 {
struct idpf_vport *vport = dev->data->dev_private;
struct rte_eth_link new_link;
+   unsigned int i;
 
memset(&new_link, 0, sizeof(new_link));
 
-   switch (vport->link_speed) {
-   case RTE_ETH_SPEED_NUM_10M:
-   new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
-   break;
-   case RTE_ETH_SPEED_NUM_100M:
-   new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
-   break;
-   case RTE_ETH_SPEED_NUM_1G:
-   new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
-   break;
-   case RTE_ETH_SPEED_NUM_10G:
-   new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
-   break;
-   case RTE_ETH_SPEED_NUM_20G:
-   new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
-   break;
-   case RTE_ETH_SPEED_NUM_25G:
-   new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
-   break;
-   case RTE_ETH_SPEED_NUM_40G:
-   new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
-   break;
-   case RTE_ETH_SPEED_NUM_50G:
-   new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
-   break;
-   case RTE_ETH_SPEED_NUM_100G:
-   new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
-   break;
-   case RTE_ETH_SPEED_NUM_200G:
-   new_link.link_speed = RTE_ETH_SPEED_NUM_200G;
-   break;
-   default:
-   new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+   for (i = 0; i < RTE_DIM(idpf_supported_speeds); i++) {
+   if (vport->link_speed == idpf_supported_speeds[i]) {
+   new_link.link_speed = vport->link_speed;
+   break;
+   }
+   }
+
+   if (i == RTE_DIM(idpf_supported_speeds)) {
+   if (vport->link_up)
+   new_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
+   else
+   new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
}
 
new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
-- 
2.34.1



RE: [EXT] [PATCH v8] app/dma-perf: introduce dma-perf application

2023-06-25 Thread Anoob Joseph
Hi Cheng,

Please see inline.

Thanks,
Anoob

> -Original Message-
> From: Jiang, Cheng1 
> Sent: Saturday, June 24, 2023 5:23 PM
> To: Anoob Joseph ; tho...@monjalon.net;
> Richardson, Bruce ;
> m...@smartsharesystems.com; Xia, Chenbo ; Amit
> Prakash Shukla ; huangdeng...@huawei.com;
> Laatz, Kevin ; fengcheng...@huawei.com; Jerin
> Jacob Kollanukkaran 
> Cc: dev@dpdk.org; Hu, Jiayu ; Ding, Xuan
> ; Ma, WenwuX ; Wang,
> YuanX ; He, Xingguang ;
> Ling, WeiX 
> Subject: RE: [EXT] [PATCH v8] app/dma-perf: introduce dma-perf application
> 
> Hi Anoob,
> 
> Replies are inline.
> 
> Thanks,
> Cheng
> 
> > -Original Message-
> > From: Anoob Joseph 
> > Sent: Friday, June 23, 2023 2:53 PM
> > To: Jiang, Cheng1 ; tho...@monjalon.net;
> > Richardson, Bruce ;
> > m...@smartsharesystems.com; Xia, Chenbo ; Amit
> > Prakash Shukla ;
> huangdeng...@huawei.com;
> > Laatz, Kevin ; fengcheng...@huawei.com; Jerin
> > Jacob Kollanukkaran 
> > Cc: dev@dpdk.org; Hu, Jiayu ; Ding, Xuan
> > ; Ma, WenwuX ; Wang,
> YuanX
> > ; He, Xingguang ;
> Ling,
> > WeiX 
> > Subject: RE: [EXT] [PATCH v8] app/dma-perf: introduce dma-perf
> > application
> >
> > Hi Cheng,
> >
> > Thanks for the new version. Please see inline.
> >
> > Thanks,
> > Anoob
> >
> > > -Original Message-
> > > From: Cheng Jiang 
> > > Sent: Tuesday, June 20, 2023 12:24 PM
> > > To: tho...@monjalon.net; bruce.richard...@intel.com;
> > > m...@smartsharesystems.com; chenbo@intel.com; Amit Prakash
> Shukla
> > > ; Anoob Joseph ;
> > > huangdeng...@huawei.com; kevin.la...@intel.com;
> > > fengcheng...@huawei.com; Jerin Jacob Kollanukkaran
> > > 
> > > Cc: dev@dpdk.org; jiayu...@intel.com; xuan.d...@intel.com;
> > > wenwux...@intel.com; yuanx.w...@intel.com;
> xingguang...@intel.com;
> > > weix.l...@intel.com; Cheng Jiang 
> > > Subject: [EXT] [PATCH v8] app/dma-perf: introduce dma-perf
> > > application
> > >
> > > External Email
> > >
> > > 
> > > --
> > > There are many high-performance DMA devices supported in DPDK now,
> > > and these DMA devices can also be integrated into other modules of
> > > DPDK as accelerators, such as Vhost. Before integrating DMA into
> > > applications, developers need to know the performance of these DMA
> > > devices in various scenarios and the performance of CPUs in the same
> > > scenario, such as different buffer lengths. Only in this way can we
> > > know the target performance of the application accelerated by using
> > > them. This patch introduces a high-performance testing tool, which
> > > supports comparing the performance of CPU and DMA in different
> > > scenarios automatically with a pre-set config file. Memory Copy
> > > performance tests are supported for now.
> > >
> > > Signed-off-by: Cheng Jiang 
> > > Signed-off-by: Jiayu Hu 
> > > Signed-off-by: Yuan Wang 
> > > Acked-by: Morten Brørup 
> > > Acked-by: Chenbo Xia 
> > > ---
> > > v8:
> > >   fixed string copy issue in parse_lcore();
> > >   improved some data display format;
> > >   added doc in doc/guides/tools;
> > >   updated release notes;
> > >
> > > v7:
> > >   fixed some strcpy issues;
> > >   removed cache setup in calling rte_pktmbuf_pool_create();
> > >   fixed some typos;
> > >   added some memory free and null set operations;
> > >   improved result calculation;
> > > v6:
> > >   improved code based on Anoob's comments;
> > >   fixed some code structure issues;
> > > v5:
> > >   fixed some LONG_LINE warnings;
> > > v4:
> > >   fixed inaccuracy of the memory footprint display;
> > > v3:
> > >   fixed some typos;
> > > v2:
> > >   added lcore/dmadev designation;
> > >   added error case process;
> > >   removed worker_threads parameter from config.ini;
> > >   improved the logs;
> > >   improved config file;
> > >
> > >  app/meson.build|   1 +
> > >  app/test-dma-perf/benchmark.c  | 498 +
> > >  app/test-dma-perf/config.ini   |  61 +++
> > >  app/test-dma-perf/main.c   | 594 +
> > >  app/test-dma-perf/main.h   |  69 +++
> > >  app/test-dma-perf/meson.build  |  17 +
> > >  doc/guides/rel_notes/release_23_07.rst |   6 +
> > >  doc/guides/tools/dmaperf.rst   | 103 +
> > >  doc/guides/tools/index.rst |   1 +
> > >  9 files changed, 1350 insertions(+)  create mode 100644
> > > app/test-dma-perf/benchmark.c  create mode 100644
> > > app/test-dma-perf/config.ini  create mode 100644 app/test-dma-
> > > perf/main.c  create mode 100644 app/test-dma-perf/main.h  create
> > > mode
> > > 100644 app/test-dma-perf/meson.build  create mode 100644
> > > doc/guides/tools/dmaperf.rst
> > >
> >


 
> >
> > > + fprintf(stderr, "Error: Fail to find DMA %s.\n",
> > > dma_name);
> > > + goto end;
> > > + }
> > > +
> > > + ldm->dma_ids[i] = dev_id;
> > > + configure_dmadev_queue(dev_id, ri

[PATCH] net/idpf: optimize the code of IDPF PMD

2023-06-25 Thread Mingxia Liu
This patch moves the 'struct eth_dev_ops idpf_eth_dev_ops = {...}'
block to just after idpf_dev_close(), grouping dev_ops related
code together.

Signed-off-by: Mingxia Liu 
---
 drivers/net/idpf/idpf_ethdev.c | 56 +-
 1 file changed, 28 insertions(+), 28 deletions(-)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index bfdac92b95..801da57472 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -839,6 +839,34 @@ idpf_dev_close(struct rte_eth_dev *dev)
return 0;
 }
 
+static const struct eth_dev_ops idpf_eth_dev_ops = {
+   .dev_configure  = idpf_dev_configure,
+   .dev_close  = idpf_dev_close,
+   .rx_queue_setup = idpf_rx_queue_setup,
+   .tx_queue_setup = idpf_tx_queue_setup,
+   .dev_infos_get  = idpf_dev_info_get,
+   .dev_start  = idpf_dev_start,
+   .dev_stop   = idpf_dev_stop,
+   .link_update= idpf_dev_link_update,
+   .rx_queue_start = idpf_rx_queue_start,
+   .tx_queue_start = idpf_tx_queue_start,
+   .rx_queue_stop  = idpf_rx_queue_stop,
+   .tx_queue_stop  = idpf_tx_queue_stop,
+   .rx_queue_release   = idpf_dev_rx_queue_release,
+   .tx_queue_release   = idpf_dev_tx_queue_release,
+   .mtu_set= idpf_dev_mtu_set,
+   .dev_supported_ptypes_get   = idpf_dev_supported_ptypes_get,
+   .stats_get  = idpf_dev_stats_get,
+   .stats_reset= idpf_dev_stats_reset,
+   .reta_update= idpf_rss_reta_update,
+   .reta_query = idpf_rss_reta_query,
+   .rss_hash_update= idpf_rss_hash_update,
+   .rss_hash_conf_get  = idpf_rss_hash_conf_get,
+   .xstats_get = idpf_dev_xstats_get,
+   .xstats_get_names   = idpf_dev_xstats_get_names,
+   .xstats_reset   = idpf_dev_xstats_reset,
+};
+
 static int
 insert_value(struct idpf_devargs *devargs, uint16_t id)
 {
@@ -1206,34 +1234,6 @@ idpf_adapter_ext_init(struct rte_pci_device *pci_dev, 
struct idpf_adapter_ext *a
return ret;
 }
 
-static const struct eth_dev_ops idpf_eth_dev_ops = {
-   .dev_configure  = idpf_dev_configure,
-   .dev_close  = idpf_dev_close,
-   .rx_queue_setup = idpf_rx_queue_setup,
-   .tx_queue_setup = idpf_tx_queue_setup,
-   .dev_infos_get  = idpf_dev_info_get,
-   .dev_start  = idpf_dev_start,
-   .dev_stop   = idpf_dev_stop,
-   .link_update= idpf_dev_link_update,
-   .rx_queue_start = idpf_rx_queue_start,
-   .tx_queue_start = idpf_tx_queue_start,
-   .rx_queue_stop  = idpf_rx_queue_stop,
-   .tx_queue_stop  = idpf_tx_queue_stop,
-   .rx_queue_release   = idpf_dev_rx_queue_release,
-   .tx_queue_release   = idpf_dev_tx_queue_release,
-   .mtu_set= idpf_dev_mtu_set,
-   .dev_supported_ptypes_get   = idpf_dev_supported_ptypes_get,
-   .stats_get  = idpf_dev_stats_get,
-   .stats_reset= idpf_dev_stats_reset,
-   .reta_update= idpf_rss_reta_update,
-   .reta_query = idpf_rss_reta_query,
-   .rss_hash_update= idpf_rss_hash_update,
-   .rss_hash_conf_get  = idpf_rss_hash_conf_get,
-   .xstats_get = idpf_dev_xstats_get,
-   .xstats_get_names   = idpf_dev_xstats_get_names,
-   .xstats_reset   = idpf_dev_xstats_reset,
-};
-
 static uint16_t
 idpf_vport_idx_alloc(struct idpf_adapter_ext *ad)
 {
-- 
2.34.1



[PATCH] net/idpf: refine idpf_dev_vport_init() function

2023-06-25 Thread Mingxia Liu
This patch updates 'cur_vports' and 'cur_vport_nb' in the error path.

Signed-off-by: Mingxia Liu 
---
 drivers/net/idpf/idpf_ethdev.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 801da57472..3e66898aaf 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -1300,6 +1300,8 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void 
*init_params)
 err_mac_addrs:
adapter->vports[param->idx] = NULL;  /* reset */
idpf_vport_deinit(vport);
+   adapter->cur_vports &= ~RTE_BIT32(param->devarg_id);
+   adapter->cur_vport_nb--;
 err:
return ret;
 }
-- 
2.34.1



[PATCH] net/idpf: refine RTE_PMD_REGISTER_PARAM_STRING of IDPF PMD

2023-06-25 Thread Mingxia Liu
This patch refines 'IDPF_VPORT' param string in
'RTE_PMD_REGISTER_PARAM_STRING'.

Signed-off-by: Mingxia Liu 
---
 drivers/net/idpf/idpf_ethdev.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 3e66898aaf..34ca5909f1 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -1478,9 +1478,9 @@ RTE_PMD_REGISTER_PCI(net_idpf, rte_idpf_pmd);
 RTE_PMD_REGISTER_PCI_TABLE(net_idpf, pci_id_idpf_map);
 RTE_PMD_REGISTER_KMOD_DEP(net_idpf, "* igb_uio | vfio-pci");
 RTE_PMD_REGISTER_PARAM_STRING(net_idpf,
- IDPF_TX_SINGLE_Q "=<0|1> "
- IDPF_RX_SINGLE_Q "=<0|1> "
- IDPF_VPORT "=[vport_set0,[vport_set1],...]");
+   IDPF_TX_SINGLE_Q "=<0|1> "
+   IDPF_RX_SINGLE_Q "=<0|1> "
+   IDPF_VPORT 
"=[vport0_begin[-vport0_end][,vport1_begin[-vport1_end]][,..]]");
 
 RTE_LOG_REGISTER_SUFFIX(idpf_logtype_init, init, NOTICE);
 RTE_LOG_REGISTER_SUFFIX(idpf_logtype_driver, driver, NOTICE);
-- 
2.34.1



RE: [dpdk-dev] [PATCH v1] doc: add inbuilt graph nodes data flow

2023-06-25 Thread Yan, Zhirun



> -Original Message-
> From: jer...@marvell.com 
> Sent: Friday, June 23, 2023 3:36 PM
> To: dev@dpdk.org; Jerin Jacob ; Kiran Kumar K
> ; Nithin Dabilpuram ;
> Yan, Zhirun 
> Cc: tho...@monjalon.net; pbhagavat...@marvell.com
> Subject: [dpdk-dev] [PATCH v1] doc: add inbuilt graph nodes data flow
> 
> From: Jerin Jacob 
> 
> Added diagram to depict the data flow between inbuilt graph nodes.
> 
> To avoid a graphviz package dependency for the DPDK documentation, a manual
> step was added to create an svg file from the dot file. The details are
> documented in graph_inbuilt_node_flow.svg as a comment.
> 
> Signed-off-by: Jerin Jacob 
> ---

Reviewed-by: Zhirun Yan