Re: [PATCH v1] ethdev: add indirect action async query

2022-11-17 Thread David Marchand
Hello,

On Tue, Sep 20, 2022 at 9:12 AM Suanming Mou  wrote:
> @@ -2873,17 +2907,23 @@ port_queue_action_handle_destroy(portid_t port_id,
>  * of error.
>  */
> memset(&error, 0x99, sizeof(error));
> +   job = calloc(1, sizeof(*job));
> +   if (!job) {
> +   printf("Queue action destroy job allocate failed\n");
> +   return -ENOMEM;
> +   }
> +   job->type = QUEUE_JOB_TYPE_ACTION_DESTROY;
> +   job->pia = pia;
>
> if (pia->handle &&
> rte_flow_async_action_handle_destroy(port_id,
> -   queue_id, &attr, pia->handle, NULL, &error)) {
> +   queue_id, &attr, pia->handle, job, &error)) {
> ret = port_flow_complain(&error);
> continue;
> }
> *tmp = pia->next;
> printf("Indirect action #%u destruction queued\n",
>pia->id);
> -   free(pia);
> break;
> }
> if (i == n)

Our covscan tool reports a potential leak of "job" in this block.
I am unclear whether it is a normal occurrence, but it seems that if
pia->handle == NULL, then job is leaked.

Can you have a look and submit a fix if confirmed?

Thanks.


-- 
David Marchand



[Bug 1127] net/i40e does not preserve RSS redirection table upon port stop/start

2022-11-17 Thread bugzilla
https://bugs.dpdk.org/show_bug.cgi?id=1127

Bug ID: 1127
   Summary: net/i40e does not preserve RSS redirection table upon
port stop/start
   Product: DPDK
   Version: 22.07
  Hardware: All
OS: All
Status: UNCONFIRMED
  Severity: normal
  Priority: Normal
 Component: ethdev
  Assignee: dev@dpdk.org
  Reporter: andrew.rybche...@oktetlabs.ru
CC: ivan.ma...@oktetlabs.ru
  Target Milestone: ---

net/i40e does not preserve RSS redirection table upon port stop/start

lib/ethdev/rte_ethdev.h line 86:

  79  * Please note that some configuration is not stored between calls to
  80  * rte_eth_dev_stop()/rte_eth_dev_start(). The following configuration will
  81  * be retained:
  82  *
  83  * - MTU
  84  * - flow control settings
  85  * - receive mode configuration (promiscuous mode, all-multicast mode,
  86  *   hardware checksum mode, RSS/VMDq settings etc.)
  87  * - VLAN filtering configuration
  88  * - default MAC address
  89  * - MAC addresses supplied to MAC address array
  90  * - flow director filtering mode (but not filtering rules)
  91  * - NIC queue statistics mappings

defines that RSS settings must be retained across restart.

An example log from the dpdk-ethdev-ts tests is available at [1]. Line numbers
below refer to lines in that log.

Steps to reproduce should be trivial and as far as I know testpmd has all
required functionality: 
 - configure more than 1 Rx queue, 
 - start port (line 38),
 - update reta (lines 41 and 43),
 - stop (line 72),
 - start again (line 74),
 - read reta (line 77) and compare (example log shows that RETA is reset to
default).
Obviously, the updated RETA should not match the default configuration.

[1]
https://ts-factory.io/bublik/v2/log/189828?focusId=190450&mode=treeAndinfoAndlog
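
For reference, a minimal C sketch of the retention check described above
(port_id and reta_size are placeholders; the port is assumed to be already
configured with more than one Rx queue and started, and reta_size is assumed
to be at most 8 * RTE_ETH_RETA_GROUP_SIZE):

#include <string.h>
#include <rte_ethdev.h>

static int
reta_retained_across_restart(uint16_t port_id, uint16_t reta_size)
{
	struct rte_eth_rss_reta_entry64 set[8], got[8];
	uint16_t n = reta_size / RTE_ETH_RETA_GROUP_SIZE;
	uint16_t i, j;

	memset(set, 0, sizeof(set));
	memset(got, 0, sizeof(got));
	for (i = 0; i < n; i++) {
		set[i].mask = UINT64_MAX;
		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
			set[i].reta[j] = j % 2; /* must differ from the default */
	}
	if (rte_eth_dev_rss_reta_update(port_id, set, reta_size) != 0)
		return -1;
	if (rte_eth_dev_stop(port_id) != 0 || rte_eth_dev_start(port_id) != 0)
		return -1;
	for (i = 0; i < n; i++)
		got[i].mask = UINT64_MAX;
	if (rte_eth_dev_rss_reta_query(port_id, got, reta_size) != 0)
		return -1;
	for (i = 0; i < n; i++)
		if (memcmp(set[i].reta, got[i].reta, sizeof(set[i].reta)) != 0)
			return 1; /* RETA was reset to default - the bug */
	return 0; /* RETA retained as the ethdev doc requires */
}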

-- 
You are receiving this mail because:
You are the assignee for the bug.

RE: [PATCH v1] ethdev: add indirect action async query

2022-11-17 Thread Suanming Mou
Hi,

> -Original Message-
> From: David Marchand 
> Sent: Thursday, November 17, 2022 4:07 PM
> To: Suanming Mou 
> Cc: Ori Kam ; Aman Singh ;
> Yuying Zhang ; NBU-Contact-Thomas Monjalon
> (EXTERNAL) ; Ferruh Yigit ;
> Andrew Rybchenko ; Ray Kinsella
> ; dev@dpdk.org
> Subject: Re: [PATCH v1] ethdev: add indirect action async query
> 
> Hello,
> 
> On Tue, Sep 20, 2022 at 9:12 AM Suanming Mou 
> wrote:
> > @@ -2873,17 +2907,23 @@ port_queue_action_handle_destroy(portid_t port_id,
> >  * of error.
> >  */
> > memset(&error, 0x99, sizeof(error));
> > +   job = calloc(1, sizeof(*job));
> > +   if (!job) {
> > +   printf("Queue action destroy job allocate failed\n");
> > +   return -ENOMEM;
> > +   }
> > +   job->type = QUEUE_JOB_TYPE_ACTION_DESTROY;
> > +   job->pia = pia;
> >
> > if (pia->handle &&
> > rte_flow_async_action_handle_destroy(port_id,
> > -   queue_id, &attr, pia->handle, NULL, &error)) {
> > +   queue_id, &attr, pia->handle, job, &error)) {
> > ret = port_flow_complain(&error);
> > continue;
> > }
> > *tmp = pia->next;
> > printf("Indirect action #%u destruction queued\n",
> >pia->id);
> > -   free(pia);
> > break;
> > }
> > if (i == n)
> 
> Our covscan tool reports a potential leak of "job" in this block.
> I am unclear whether it is a normal occurrence, but it seems that if
> pia->handle == NULL, then job is leaked.

OK, this function can only be called when destroying a created action handle.
For the created action handle, the pia->handle should never be NULL here.
And we also have " if (actions[i] != pia->id) " several lines above to ensure
it is a valid pia.
I agree that from the tool's point of view it looks like a leak here. But it
should never happen.
Do you think we need a "fix" in that case?

Thanks.

> 
> Can you have a look and submit a fix if confirmed?
> 
> Thanks.
> 
> 
> --
> David Marchand



Re: release candidate 22.11-rc3

2022-11-17 Thread David Marchand
On Tue, Nov 15, 2022 at 6:33 PM Thomas Monjalon  wrote:
>
> A new DPDK release candidate is ready for testing:
> https://git.dpdk.org/dpdk/tag/?id=v22.11-rc3
>
> There are 161 new patches in this snapshot.
>
> Release notes:
> https://doc.dpdk.org/guides/rel_notes/release_22_11.html
>
> Please test and report issues on bugs.dpdk.org.
> You may share some release validation results
> by replying to this message at dev@dpdk.org
> and by adding tested hardware in the release notes.

RH QE ran its non regression tests on 22.11-rc3, with no issue to report.

Test scenario:

Guest with device assignment(PF) throughput testing(1G hugepage size): PASS
Guest with device assignment(PF) throughput testing(2M hugepage size) : PASS
Guest with device assignment(VF) throughput testing: PASS
PVP (host dpdk testpmd as vswitch) 1Q: throughput testing: PASS
PVP vhost-user 2Q throughput testing: PASS
PVP vhost-user 1Q - cross numa node throughput testing: PASS
Guest with vhost-user 2 queues throughput testing: PASS
vhost-user reconnect with dpdk-client, qemu-server: qemu reconnect: PASS
vhost-user reconnect with dpdk-client, qemu-server: ovs reconnect: PASS
PVP 1Q live migration testing: PASS
PVP 1Q cross numa node live migration testing: PASS
Guest with ovs+dpdk+vhost-user 1Q live migration testing: PASS
Guest with ovs+dpdk+vhost-user 1Q live migration testing (2M): PASS
Guest with ovs+dpdk+vhost-user 2Q live migration testing: PASS
Guest with ovs+dpdk+vhost-user 4Q live migration testing: PASS
Host PF + DPDK testing: PASS
Host VF + DPDK testing: PASS

Test version:

kernel 4.18
qemu 6.2
dpdk: git://dpdk.org/dpdk
# git log -1

commit 04f68bb92b6fee621ddf0f0948f5565fa31a84fd
Author: Thomas Monjalon 
Date:   Tue Nov 15 18:21:34 2022 +0100
version: 22.11-rc3
Signed-off-by: Thomas Monjalon 

NICs: X540-AT2 NIC(ixgbe, 10G)


-- 
David Marchand



Re: [PATCH v1] ethdev: add indirect action async query

2022-11-17 Thread David Marchand
On Thu, Nov 17, 2022 at 9:18 AM Suanming Mou  wrote:
>
> Hi,
>
> > -Original Message-
> > From: David Marchand 
> > Sent: Thursday, November 17, 2022 4:07 PM
> > To: Suanming Mou 
> > Cc: Ori Kam ; Aman Singh ;
> > Yuying Zhang ; NBU-Contact-Thomas Monjalon
> > (EXTERNAL) ; Ferruh Yigit ;
> > Andrew Rybchenko ; Ray Kinsella
> > ; dev@dpdk.org
> > Subject: Re: [PATCH v1] ethdev: add indirect action async query
> >
> > Hello,
> >
> > On Tue, Sep 20, 2022 at 9:12 AM Suanming Mou 
> > wrote:
> > > @@ -2873,17 +2907,23 @@ port_queue_action_handle_destroy(portid_t port_id,
> > >  * of error.
> > >  */
> > > memset(&error, 0x99, sizeof(error));
> > > +   job = calloc(1, sizeof(*job));
> > > +   if (!job) {
> > > +   printf("Queue action destroy job allocate failed\n");
> > > +   return -ENOMEM;
> > > +   }
> > > +   job->type = QUEUE_JOB_TYPE_ACTION_DESTROY;
> > > +   job->pia = pia;
> > >
> > > if (pia->handle &&
> > > rte_flow_async_action_handle_destroy(port_id,
> > > -   queue_id, &attr, pia->handle, NULL, &error)) {
> > > +   queue_id, &attr, pia->handle, job, &error)) {
> > > ret = port_flow_complain(&error);
> > > continue;
> > > }
> > > *tmp = pia->next;
> > > printf("Indirect action #%u destruction queued\n",
> > >pia->id);
> > > -   free(pia);
> > > break;
> > > }
> > > if (i == n)
> >
> > Our covscan tool reports a potential leak of "job" in this block.
> > I am unclear whether it is a normal occurrence, but it seems that if
> > pia->handle == NULL, then job is leaked.
>
> OK, this function can only be called when destroying a created action handle.
> For the created action handle, the pia->handle should never be NULL here.
> And we also have " if (actions[i] != pia->id) " several lines above to ensure
> it is a valid pia.
> I agree that from the tool's point of view it looks like a leak here. But it
> should never happen.
> Do you think we need a "fix" in that case?

- If you are sure of it, unnecessary checks must be removed.

- In the pia->handle != NULL branch, won't "job" be leaked too if
rte_flow_async_action_handle_destroy() fails?


-- 
David Marchand



RE: [PATCH] net/idpf: fix port start

2022-11-17 Thread Peng, Yuan
Tested-by: Peng, Yuan 

> -Original Message-
> From: Xing, Beilei 
> Sent: Thursday, November 17, 2022 11:08 AM
> To: Wu, Jingjing 
> Cc: dev@dpdk.org; Peng, Yuan ; Xing, Beilei
> 
> Subject: [PATCH] net/idpf: fix port start
> 
> From: Beilei Xing 
> 
> The port can't start successfully after being stopped and started again.
> This patch fixes port start by doing the required initialization.
> 
> Fixes: e9ff6df15b9a ("net/idpf: stop before closing device")
> 
> Signed-off-by: Beilei Xing 
> ---


[PATCH] net/nfp: fix issue of data len exceeds descriptor limitation

2022-11-17 Thread Chaoyong He
From: Long Wu 

If dma_len is larger than "NFDK_DESC_TX_DMA_LEN_HEAD", the value of
dma_len bitwise-ANDed with NFDK_DESC_TX_DMA_LEN_HEAD may be less than the
packet head length. Fill the maximum dma_len into the first Tx descriptor
to make sure the whole head is included in the first descriptor. In
addition, add some comments to make the NFDK code more readable.

Fixes: c73dced48c8c ("net/nfp: add NFDk Tx")
Cc: jin@corigine.com
Cc: sta...@dpdk.org

Signed-off-by: Long Wu 
Reviewed-by: Niklas Söderlund 
Reviewed-by: Chaoyong He 
---
 drivers/net/nfp/nfp_rxtx.c | 27 ++-
 1 file changed, 26 insertions(+), 1 deletion(-)

diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index b8c874d315..ed88d740fa 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -1064,6 +1064,7 @@ nfp_net_nfdk_tx_maybe_close_block(struct nfp_net_txq *txq, struct rte_mbuf *pkt)
if (unlikely(n_descs > NFDK_TX_DESC_GATHER_MAX))
return -EINVAL;
 
+   /* Under count by 1 (don't count meta) for the round down to work out */
n_descs += !!(pkt->ol_flags & RTE_MBUF_F_TX_TCP_SEG);
 
if (round_down(txq->wr_p, NFDK_TX_DESC_BLOCK_CNT) !=
@@ -1180,6 +1181,7 @@ nfp_net_nfdk_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
/* Sending packets */
while ((npkts < nb_pkts) && free_descs) {
uint32_t type, dma_len, dlen_type, tmp_dlen;
+   uint32_t tmp_hlen;
int nop_descs, used_descs;
 
pkt = *(tx_pkts + npkts);
@@ -1218,8 +1220,23 @@ nfp_net_nfdk_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
} else {
type = NFDK_DESC_TX_TYPE_GATHER;
}
+
+   /* Implicitly truncates to chunk in below logic */
dma_len -= 1;
-   dlen_type = (NFDK_DESC_TX_DMA_LEN_HEAD & dma_len) |
+
+   /*
+* We will do our best to pass as much data as we can in the
+* descriptor and we need to make sure the first descriptor
+* includes the whole head since there is a limitation on the
+* firmware side. Sometimes the value of dma_len bitwise-ANDed
+* with NFDK_DESC_TX_DMA_LEN_HEAD will be less than the packet
+* head len.
+*/
+   if (dma_len > NFDK_DESC_TX_DMA_LEN_HEAD)
+   tmp_hlen = NFDK_DESC_TX_DMA_LEN_HEAD;
+   else
+   tmp_hlen = dma_len;
+
+   dlen_type = (NFDK_DESC_TX_DMA_LEN_HEAD & tmp_hlen) |
(NFDK_DESC_TX_TYPE_HEAD & (type << 12));
ktxds->dma_len_type = rte_cpu_to_le_16(dlen_type);
dma_addr = rte_mbuf_data_iova(pkt);
@@ -1229,10 +1246,18 @@ nfp_net_nfdk_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
ktxds->dma_addr_lo = rte_cpu_to_le_32(dma_addr & 0x);
ktxds++;
 
+   /*
+* Preserve the original dlen_type, so that the EOP logic below
+* can use dlen_type.
+*/
tmp_dlen = dlen_type & NFDK_DESC_TX_DMA_LEN_HEAD;
dma_len -= tmp_dlen;
dma_addr += tmp_dlen + 1;
 
+   /*
+* The rest of the data (if any) will be in larger dma
+* descriptors and is handled with the dma_len loop.
+*/
while (pkt) {
if (*lmbuf)
rte_pktmbuf_free_seg(*lmbuf);
-- 
2.29.3
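
For illustration, a standalone sketch of the truncation the patch guards
against. The 12-bit mask value is an assumption for the example only; the
real NFDK_DESC_TX_DMA_LEN_HEAD value is defined in the driver headers.

#include <stdint.h>

#define LEN_HEAD_MASK 0x0fffu /* assumed 12-bit mask, illustration only */

int main(void)
{
	uint32_t dma_len = 0x1100 - 1; /* first segment of 4352 bytes */
	uint32_t masked = dma_len & LEN_HEAD_MASK; /* 0x0ff = 255 bytes */
	/* 255 bytes may be shorter than the packet headers, which the
	 * firmware cannot tolerate, so the fix clamps instead of masking: */
	uint32_t clamped = dma_len > LEN_HEAD_MASK ? LEN_HEAD_MASK : dma_len;

	return masked != clamped; /* non-zero whenever dma_len overflows the mask */
}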



[Bug 1128] [dpdk22.11-rc3]failed to start testpmd with the mbuf-size parameter

2022-11-17 Thread bugzilla
https://bugs.dpdk.org/show_bug.cgi?id=1128

Bug ID: 1128
   Summary: [dpdk22.11-rc3]failed to start testpmd with the
mbuf-size parameter
   Product: DPDK
   Version: 22.11
  Hardware: x86
OS: Linux
Status: UNCONFIRMED
  Severity: normal
  Priority: Normal
 Component: core
  Assignee: dev@dpdk.org
  Reporter: yingyax@intel.com
  Target Milestone: ---

[Test Setup]
Steps to reproduce

1. Use the following command to build DPDK: 
CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static
x86_64-native-linuxapp-gcc/ 
ninja -C x86_64-native-linuxapp-gcc/ 

2. bind ports to vfio-pci
 ./usertools/dpdk-devbind.py -b vfio-pci 17:00.0 4b:00.0 

3.start testpmd
./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5,6 -n 8
--force-max-simd-bitwidth=64 -- -i --portmask=0x3 --rxq=1 --txq=1 --txd=1024
--rxd=1024 --nb-cores=1 --mbuf-size=2048,2048
EAL: Detected CPU lcores: 128
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Ignore mapping IO port bar(1)
EAL: Ignore mapping IO port bar(4)
EAL: Probe PCI driver: net_ice (8086:1592) device: :31:00.0 (socket 0)
ice_load_pkg_type(): Active package is: 1.3.30.0, ICE OS Default Package
(single VLAN mode)
EAL: Ignore mapping IO port bar(1)
EAL: Ignore mapping IO port bar(4)
EAL: Probe PCI driver: net_ice (8086:1592) device: :4b:00.0 (socket 0)
ice_load_pkg_type(): Active package is: 1.3.30.0, ICE OS Default Package
(single VLAN mode)
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool : n=155456, size=2048, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool : n=155456, size=2048, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Too many Rx mempools 2 vs maximum 0
Fail to configure port 0 rx queues
EAL: Error - exiting with code: 1
  Cause: Start ports failed
Segmentation fault (core dumped)


4. DPDK bad commit:
commit 4f04edcda769770881832f8036fd209e7bb6ab9a
Author: Hanumanth Pothula 
Date:   Thu Nov 10 15:46:31 2022 +0530
   app/testpmd: support multiple mbuf pools per Rx queue

-- 
You are receiving this mail because:
You are the assignee for the bug.

Fwd: Regarding User Data in DPDK ACL Library.

2022-11-17 Thread venkatesh bs
-- Forwarded message -
From: venkatesh bs 
Date: Wed, Nov 16, 2022 at 7:28 PM
Subject: Regarding User Data in DPDK ACL Library.
To: 


Hi DPDK Team,

After the ACL match, the DPDK classification API returns the user data of
the highest-priority rule, as described below in the document:

53. Packet Classification and Access Control — Data Plane Development Kit
22.11.0-rc2 documentation (dpdk.org)


   - *userdata*: A user-defined value. For each category, a successful
   match returns the userdata field of the highest priority matched rule. When
   no rules match, returned value is zero

I wonder why the user data support does not return 64-bit values. It is
always possible that the user data in an application is 64 bits long, but
since 64-bit user data can't be returned by the DPDK ACL library, the
application has to convert from 64 to 32 bits during rule add and back
again after classification.

Has anyone faced this issue? Please point out if my understanding is wrong
somewhere, or suggest a possible solution if someone has already worked
through this.

Thanks In Advance.
Regards,
Venkatesh B Siddappa.
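
One common workaround is to keep the 64-bit values in an application-side
table and store only a 32-bit index in the rule, as in the sketch below;
the names and table size are illustrative and not part of the ACL API.

#include <stdint.h>

#define APP_MAX_RULES 1024
static uint64_t app_userdata64[APP_MAX_RULES]; /* application-side table */

/* At rule-add time: remember the 64-bit value and return the 32-bit
 * userdata to store in the rte_acl rule. It must be non-zero, since
 * zero means "no match" in the classification results. */
static uint32_t
app_userdata_encode(uint32_t rule_idx, uint64_t value)
{
	app_userdata64[rule_idx] = value;
	return rule_idx + 1;
}

/* For each result returned by rte_acl_classify(): map the 32-bit
 * userdata back to the original 64-bit value. */
static uint64_t
app_userdata_decode(uint32_t result)
{
	return result == 0 ? 0 : app_userdata64[result - 1];
}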


RE: [PATCH] net/mlx5: fix GENEVE resource management

2022-11-17 Thread Raslan Darawsheh
Hi,

> -Original Message-
> From: Suanming Mou 
> Sent: Wednesday, November 16, 2022 11:37 AM
> To: Matan Azrad ; Slava Ovsiienko
> 
> Cc: dev@dpdk.org; Raslan Darawsheh 
> Subject: [PATCH] net/mlx5: fix GENEVE resource management
> 
> The item translation split causes GENEVE TLV option resource register
> function flow_dev_geneve_tlv_option_resource_register() to be called
> twice incorrectly both in spec and mask translation.
> 
> In SWS mode the refcnt will only be decreased by 1 in flow release.
> The refcnt will never reach 0 again, which causes the resource to be
> leaked. In HWS mode the resource is allocated as global, so the refcnt
> should not be increased after the resource is allocated, and the
> resource should be released when the PMD exits.
> 
> This commit fixes GENEVE resource management.
> 
> Fixes: 75a00812b18f ("net/mlx5: add hardware steering item translation")
> Fixes: cd4ab742064a ("net/mlx5: split flow item matcher and value
> translation")
> 
> Signed-off-by: Suanming Mou 
> Acked-by: Viacheslav Ovsiienko 

Patch applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh


RE: [PATCH v1] ethdev: add indirect action async query

2022-11-17 Thread Suanming Mou
Hi,

> -Original Message-
> From: David Marchand 
> Sent: Thursday, November 17, 2022 4:32 PM
> To: Suanming Mou 
> Cc: Ori Kam ; Aman Singh ;
> Yuying Zhang ; NBU-Contact-Thomas Monjalon
> (EXTERNAL) ; Ferruh Yigit ;
> Andrew Rybchenko ; Ray Kinsella
> ; dev@dpdk.org
> Subject: Re: [PATCH v1] ethdev: add indirect action async query
> 
> On Thu, Nov 17, 2022 at 9:18 AM Suanming Mou 
> wrote:
> >
> > Hi,
> >
> > > -Original Message-
> > > From: David Marchand 
> > > Sent: Thursday, November 17, 2022 4:07 PM
> > > To: Suanming Mou 
> > > Cc: Ori Kam ; Aman Singh
> > > ; Yuying Zhang ;
> > > NBU-Contact-Thomas Monjalon
> > > (EXTERNAL) ; Ferruh Yigit
> > > ; Andrew Rybchenko
> > > ; Ray Kinsella ;
> > > dev@dpdk.org
> > > Subject: Re: [PATCH v1] ethdev: add indirect action async query
> > >
> > > Hello,
> > >
> > > On Tue, Sep 20, 2022 at 9:12 AM Suanming Mou 
> > > wrote:
> > > > @@ -2873,17 +2907,23 @@ port_queue_action_handle_destroy(portid_t port_id,
> > > >  * of error.
> > > >  */
> > > > memset(&error, 0x99, sizeof(error));
> > > > +   job = calloc(1, sizeof(*job));
> > > > +   if (!job) {
> > > > +   printf("Queue action destroy job allocate failed\n");
> > > > +   return -ENOMEM;
> > > > +   }
> > > > +   job->type = QUEUE_JOB_TYPE_ACTION_DESTROY;
> > > > +   job->pia = pia;
> > > >
> > > > if (pia->handle &&
> > > > rte_flow_async_action_handle_destroy(port_id,
> > > > -   queue_id, &attr, pia->handle, NULL, &error)) {
> > > > +   queue_id, &attr, pia->handle, job, &error)) {
> > > > ret = port_flow_complain(&error);
> > > > continue;
> > > > }
> > > > *tmp = pia->next;
> > > > printf("Indirect action #%u destruction queued\n",
> > > >pia->id);
> > > > -   free(pia);
> > > > break;
> > > > }
> > > > if (i == n)
> > >
> > > Our covscan tool reports a potential leak of "job" in this block.
> > I am unclear whether it is a normal occurrence, but it seems that if
> > > pia->handle == NULL, then job is leaked.
> >
> > OK, this function can only be called when destroying a created action
> > handle. For the created action handle, the pia->handle should never be
> > NULL here. And we also have " if (actions[i] != pia->id) " several
> > lines above to ensure it is a valid pia.
> > I agree that from the tool's point of view it looks like a leak here.
> > But it should never happen.
> > Do you think we need a "fix" in that case?
> 
> - If you are sure of it, unnecessary checks must be removed.

Sure, I will create a patch to remove that redundant check.

> 
> - In the pia->handle != NULL branch, won't "job" be leaked too if
> rte_flow_async_action_handle_destroy() fails?

Yes, you are right. 
Thanks, I will create a patch with the two changes.

> 
> 
> --
> David Marchand



RE: [PATCH v14 1/1] app/testpmd: support multiple mbuf pools per Rx queue

2022-11-17 Thread Jiang, YuX
Hi Hanumanth,

We met an issue with this patch; can you please have a look quickly?
https://bugs.dpdk.org/show_bug.cgi?id=1128

Best regards,
Yu Jiang

> -Original Message-
> From: Hanumanth Pothula 
> Sent: Thursday, November 10, 2022 6:17 PM
> To: Singh, Aman Deep ; Zhang, Yuying
> 
> Cc: dev@dpdk.org; andrew.rybche...@oktetlabs.ru; tho...@monjalon.net;
> jer...@marvell.com; ndabilpu...@marvell.com; hpoth...@marvell.com
> Subject: [PATCH v14 1/1] app/testpmd: support multiple mbuf pools per Rx
> queue
> 
> Some of the HW has support for choosing memory pools based on the packet's
> size. The pool sort capability allows PMD/NIC to choose a memory pool based
> on the packet's length.
> 
> On multiple mempool support enabled, populate mempool array accordingly.
> Also, print pool name on which packet is received.
> 
> Signed-off-by: Hanumanth Pothula 
> 


RE: release candidate 22.11-rc3

2022-11-17 Thread Jiang, YuX
> -Original Message-
> From: Thomas Monjalon 
> Sent: Wednesday, November 16, 2022 1:33 AM
> To: annou...@dpdk.org
> Subject: release candidate 22.11-rc3
>
> A new DPDK release candidate is ready for testing:
>   https://git.dpdk.org/dpdk/tag/?id=v22.11-rc3
>
> There are 161 new patches in this snapshot.
>
> Release notes:
>   https://doc.dpdk.org/guides/rel_notes/release_22_11.html
>
> Please test and report issues on bugs.dpdk.org.
> You may share some release validation results by replying to this message at
> dev@dpdk.org and by adding tested hardware in the release notes.
>
> DPDK 22.11-rc4 should be the last chance for bug fixes and doc updates, and it
> is planned for the end of this week.
>
> Thank you everyone
>

Update on the test status for the Intel part. So far the dpdk22.11-rc3 test
execution rate is 65%.
One new critical issue is found, hope it can be fixed in 22.11.
  https://bugs.dpdk.org/show_bug.cgi?id=1128 [dpdk22.11-rc3]failed to start 
testpmd with the mbuf-size parameter
  bad commit id:
  commit 4f04edcda769770881832f8036fd209e7bb6ab9a
Author: Hanumanth Pothula 
Date:   Thu Nov 10 15:46:31 2022 +0530

app/testpmd: support multiple mbuf pools per Rx queue

Some of the HW has support for choosing memory pools based on
the packet's size. The pool sort capability allows PMD/NIC to
choose a memory pool based on the packet's length.

On multiple mempool support enabled, populate mempool array
accordingly. Also, print pool name on which packet is received.

Signed-off-by: Hanumanth Pothula 
Reviewed-by: Andrew Rybchenko 

Found one new bug, "[DPDK22.11] idpf: failed to start port all"; an Intel dev
has provided a fix patch and the validation team verified it passes. We hope
it can be merged into RC4.
  - patch link: 
https://patches.dpdk.org/project/dpdk/patch/20221117030744.45460-1-beilei.x...@intel.com/

Meson test known bugs:
  1, https://bugs.dpdk.org/show_bug.cgi?id=1107 [22.11-rc1][meson test] 
seqlock_autotest test failed, which is only found on CentOS7.9/gcc4.8.5. No fix 
yet.
  2, https://bugs.dpdk.org/show_bug.cgi?id=1024 [dpdk-22.07][meson test] 
driver-tests/link_bonding_mode4_autotest bond handshake failed. No fix yet.
Asan test known bugs:
  https://bugs.dpdk.org/show_bug.cgi?id=1123 [dpdk-22.11][asan] the 
stack-buffer-overflow was found when quit testpmd in Redhat9. No fix yet.

# Basic Intel(R) NIC testing
* Build or compile:
 *Build: cover the build test combination with latest GCC/Clang version and the 
popular OS revision such as Ubuntu20.04.5, Ubuntu22.04.1, Ubuntu22.10, 
Fedora36, RHEL8.6 etc.
  - All test passed.
 *Compile: cover the CFLAGES(O0/O1/O2/O3) with popular OS such as Ubuntu22.04.1 
and RHEL8.6.
  - All test passed.
* PF/VF(i40e, ixgbe): test scenarios including 
PF/VF-RTE_FLOW/TSO/Jumboframe/checksum offload/VLAN/VXLAN, etc.
- Execution rate is 90%.
- Known Bug "vf_interrupt_pmd/nic_interrupt_VF_vfio_pci: l3fwd-power 
Wake up failed" on X722 37d0. Intel Dev is still under investigating it.
* PF/VF(ice): test scenarios including Switch features/Package Management/Flow 
Director/Advanced Tx/Advanced RSS/ACL/DCF/Flexible Descriptor, etc.
- Execution rate is 90%.
* idpf PMD and GVE PMD: basic test.
  - Under testing. Found one new bug, "[DPDK22.11] idpf: failed to start
port all".
* Intel NIC single core/NIC performance: test scenarios including PF/VF single 
core performance test, RFC2544 Zero packet loss performance test, etc.
- Execution rate is 80%. No new issue is found.
* Power and IPsec:
 * Power: test scenarios including bi-direction/Telemetry/Empty Poll 
Lib/Priority Base Frequency, etc.
- Execution rate is 60%. No new issue is found yet.
 * IPsec: test scenarios including ipsec/ipsec-gw/ipsec library basic test - 
QAT&SW/FIB library, etc.
- Execution rate is 50%. No new issue is found yet.
# Basic cryptodev and virtio testing
* Virtio: both function and performance test are covered. Such as 
PVP/Virtio_loopback/virtio-user loopback/virtio-net VM2VM perf testing/VMAWARE 
ESXI 7.0u3, etc.
- All test done. No new issue is found yet.
* Cryptodev:
 *Function test: test scenarios including Cryptodev API testing/CompressDev 
ISA-L/QAT/ZLIB PMD Testing/FIPS, etc.
- Under testing.
 *Performance test: test scenarios including Throughput Performance /Cryptodev 
Latency, etc.
- Under testing.

Best regards,
Yu Jiang


[PATCH] app/testpmd: fix action destruction memory leak

2022-11-17 Thread Suanming Mou
In case action handle destroy fails, the job memory was not freed
properly. This commit fixes the possible memory leak in the case where
action handle destruction fails.

Fixes: c9dc03840873 ("ethdev: add indirect action async query")

Signed-off-by: Suanming Mou 
---
 app/test-pmd/config.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 982549ffed..719bdd4261 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2873,9 +2873,9 @@ port_queue_action_handle_destroy(portid_t port_id,
job->type = QUEUE_JOB_TYPE_ACTION_DESTROY;
job->pia = pia;
 
-   if (pia->handle &&
-   rte_flow_async_action_handle_destroy(port_id,
+   if (rte_flow_async_action_handle_destroy(port_id,
queue_id, &attr, pia->handle, job, &error)) {
+   free(job);
ret = port_flow_complain(&error);
continue;
}
-- 
2.25.1



RE: [PATCH] doc: avoid meson deprecation in setup

2022-11-17 Thread Ruifeng Wang
> -Original Message-
> From: Stephen Hemminger 
> Sent: Wednesday, November 16, 2022 1:35 AM
> To: dev@dpdk.org
> Cc: Stephen Hemminger ; Ruifeng Wang 
> ;
> Zhangfei Gao ; Bruce Richardson 
> ;
> Elena Agostini ; Shepard Siegel 
> ; Ed
> Czeck ; John Miller ; 
> Zyta Szpak
> ; Liron Himi ; Nithin Dabilpuram
> ; Kiran Kumar K ; Sunil 
> Kumar Kori
> ; Satha Rao ; David Hunt 
> 
> Subject: [PATCH] doc: avoid meson deprecation in setup
> 
> The command "meson build" causes a deprecation warning with meson 0.64.0.
>   
>   WARNING:
> Running the setup command as `meson [options]` instead of `meson setup
> [options]` is ambiguous and deprecated.
> 
> Therefore fix the examples in the documentation.
> 
> Signed-off-by: Stephen Hemminger 
> ---
>  doc/guides/cryptodevs/armv8.rst  |  2 +-
>  doc/guides/cryptodevs/uadk.rst   |  2 +-
>  doc/guides/freebsd_gsg/build_dpdk.rst|  2 +-
>  doc/guides/gpus/cuda.rst |  4 ++--
>  doc/guides/howto/openwrt.rst |  4 ++--
>  doc/guides/nics/ark.rst  |  2 +-
>  doc/guides/nics/mvneta.rst   |  2 +-
>  doc/guides/nics/mvpp2.rst|  2 +-
>  doc/guides/platform/bluefield.rst|  4 ++--
>  doc/guides/platform/cnxk.rst |  4 ++--
>  doc/guides/platform/octeontx.rst |  8 
>  doc/guides/prog_guide/build-sdk-meson.rst|  4 ++--
>  doc/guides/prog_guide/lto.rst|  2 +-
>  doc/guides/prog_guide/profile_app.rst|  2 +-
>  doc/guides/sample_app_ug/vm_power_management.rst | 14 ++
>  15 files changed, 28 insertions(+), 30 deletions(-)
> 
> diff --git a/doc/guides/cryptodevs/armv8.rst 
> b/doc/guides/cryptodevs/armv8.rst index
> 8963f66a206c..1a006754cbe4 100644
> --- a/doc/guides/cryptodevs/armv8.rst
> +++ b/doc/guides/cryptodevs/armv8.rst

Change looks good to me. Thanks.
Some occurrences in doc/guides/linux_gsg/cross_build_dpdk_for_arm64.rst need
updating as well.



[PATCH] net/idpf: fix crash when launching l3fwd

2022-11-17 Thread beilei . xing
From: Beilei Xing 

There's a core dump when launching l3fwd with 1 queue and 1 core,
because a NULL pointer is used if device configuration fails.
This patch removes the incorrect check during device configuration,
and checks for a NULL pointer when excuting VIRTCHNL2_OP_DEALLOC_VECTORS.

Fixes: 549343c25db8 ("net/idpf: support device initialization")
Fixes: 70675bcc3a57 ("net/idpf: support RSS")
Fixes: 37291a68fd78 ("net/idpf: support write back based on ITR expire")

Signed-off-by: Beilei Xing 
---
 drivers/net/idpf/idpf_ethdev.c | 7 ---
 drivers/net/idpf/idpf_vchnl.c  | 3 +++
 2 files changed, 3 insertions(+), 7 deletions(-)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 20f088eb80..51fc97bf7b 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -372,13 +372,6 @@ idpf_dev_configure(struct rte_eth_dev *dev)
return -ENOTSUP;
}
 
-   if ((dev->data->nb_rx_queues == 1 && conf->rxmode.mq_mode != RTE_ETH_MQ_RX_NONE) ||
-   (dev->data->nb_rx_queues > 1 && conf->rxmode.mq_mode != RTE_ETH_MQ_RX_RSS)) {
-   PMD_INIT_LOG(ERR, "Multi-queue packet distribution mode %d is not supported",
-conf->rxmode.mq_mode);
-   return -ENOTSUP;
-   }
-
if (conf->txmode.mq_mode != RTE_ETH_MQ_TX_NONE) {
PMD_INIT_LOG(ERR, "Multi-queue TX mode %d is not supported",
 conf->txmode.mq_mode);
diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
index ac6486d4ef..88770447f8 100644
--- a/drivers/net/idpf/idpf_vchnl.c
+++ b/drivers/net/idpf/idpf_vchnl.c
@@ -1197,6 +1197,9 @@ idpf_vc_dealloc_vectors(struct idpf_vport *vport)
int err, len;
 
alloc_vec = vport->recv_vectors;
+   if (alloc_vec == NULL)
+   return -EINVAL;
+
vcs = &alloc_vec->vchunks;
 
len = sizeof(struct virtchnl2_vector_chunks) +
-- 
2.26.2



[PATCH] net/mlx5: fix push VLAN action mask iteration

2022-11-17 Thread Dariusz Sosnowski
Before this patch, during translation of OF_PUSH_VLAN actions the actions
iterator was moved forward to the position of OF_SET_VLAN_VID or
OF_SET_VLAN_PCP, but the masks iterator was not updated.
As a result, the following actions were incorrectly translated,
because the iterators were not aligned.

This patch fixes this behavior by properly adjusting the masks iterator
alongside the actions iterator.

Fixes: 773ca0e91ba1 ("net/mlx5: support VLAN push/pop/modify with HWS")
Cc: getel...@nvidia.com

Signed-off-by: Dariusz Sosnowski 
Acked-by: Viacheslav Ovsiienko 
---
 drivers/net/mlx5/mlx5_flow_hw.c | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index f4493ad556..ff0c3064c1 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -1346,6 +1346,7 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
bool actions_end = false;
uint32_t type;
bool reformat_used = false;
+   unsigned int of_vlan_offset;
uint16_t action_pos;
uint16_t jump_pos;
uint32_t ct_idx;
@@ -1413,9 +1414,11 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
(priv, acts, actions->type,
 actions - action_start, action_pos))
goto err;
-   actions += is_of_vlan_pcp_present(actions) ?
+   of_vlan_offset = is_of_vlan_pcp_present(actions) ?
MLX5_HW_VLAN_PUSH_PCP_IDX :
MLX5_HW_VLAN_PUSH_VID_IDX;
+   actions += of_vlan_offset;
+   masks += of_vlan_offset;
break;
case RTE_FLOW_ACTION_TYPE_OF_POP_VLAN:
action_pos = at->actions_off[actions - at->actions];
-- 
2.25.1



[PATCH v2] net/idpf: fix crash when launching l3fwd

2022-11-17 Thread beilei . xing
From: Beilei Xing 

There's a core dump when launching l3fwd with 1 queue and 1 core,
because a NULL pointer is used if device configuration fails.
This patch removes the incorrect check during device configuration,
and checks for a NULL pointer when executing VIRTCHNL2_OP_DEALLOC_VECTORS.

Fixes: 549343c25db8 ("net/idpf: support device initialization")
Fixes: 70675bcc3a57 ("net/idpf: support RSS")
Fixes: 37291a68fd78 ("net/idpf: support write back based on ITR expire")

Signed-off-by: Beilei Xing 
---
v2 change: Fix typo.

 drivers/net/idpf/idpf_ethdev.c | 7 ---
 drivers/net/idpf/idpf_vchnl.c  | 3 +++
 2 files changed, 3 insertions(+), 7 deletions(-)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 20f088eb80..51fc97bf7b 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -372,13 +372,6 @@ idpf_dev_configure(struct rte_eth_dev *dev)
return -ENOTSUP;
}
 
-   if ((dev->data->nb_rx_queues == 1 && conf->rxmode.mq_mode != RTE_ETH_MQ_RX_NONE) ||
-   (dev->data->nb_rx_queues > 1 && conf->rxmode.mq_mode != RTE_ETH_MQ_RX_RSS)) {
-   PMD_INIT_LOG(ERR, "Multi-queue packet distribution mode %d is not supported",
-conf->rxmode.mq_mode);
-   return -ENOTSUP;
-   }
-
if (conf->txmode.mq_mode != RTE_ETH_MQ_TX_NONE) {
PMD_INIT_LOG(ERR, "Multi-queue TX mode %d is not supported",
 conf->txmode.mq_mode);
diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
index ac6486d4ef..88770447f8 100644
--- a/drivers/net/idpf/idpf_vchnl.c
+++ b/drivers/net/idpf/idpf_vchnl.c
@@ -1197,6 +1197,9 @@ idpf_vc_dealloc_vectors(struct idpf_vport *vport)
int err, len;
 
alloc_vec = vport->recv_vectors;
+   if (alloc_vec == NULL)
+   return -EINVAL;
+
vcs = &alloc_vec->vchunks;
 
len = sizeof(struct virtchnl2_vector_chunks) +
-- 
2.26.2



[PATCH v1 1/1] app/testpmd: add valid check to verify multi mempool feature

2022-11-17 Thread Hanumanth Pothula
Validate ethdev parameter 'max_rx_mempools' to know wheater device
supports multi-mempool feature or not.

Bugzilla ID: 1128

Signed-off-by: Hanumanth Pothula 
---
 app/test-pmd/testpmd.c | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 78ea19fcbb..79c0951b62 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2648,16 +2648,22 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 {
union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
+   struct rte_eth_dev_info dev_info;
struct rte_mempool *mpx;
unsigned int i, mp_n;
int ret;
 
+   ret = rte_eth_dev_info_get(port_id, &dev_info);
+   if (ret != 0)
+   return ret;
+
/* Verify Rx queue configuration is single pool and segment or
 * multiple pool/segment.
+* @see rte_eth_dev_info::max_rx_mempools
 * @see rte_eth_rxconf::rx_mempools
 * @see rte_eth_rxconf::rx_seg
 */
-   if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
+   if (!(dev_info.max_rx_mempools != 0) && !(rx_pkt_nb_segs > 1 ||
((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
/* Single pool/segment configuration */
rx_conf->rx_seg = NULL;
-- 
2.25.1



RE: [PATCH v14 1/1] app/testpmd: support multiple mbuf pools per Rx queue

2022-11-17 Thread Hanumanth Reddy Pothula
Hi Yu Jiang,

Please find the fix for the below issue:
https://patches.dpdk.org/project/dpdk/patch/20221117113047.3088461-1-hpoth...@marvell.com

Verified changes locally, both with/without multi-mempool support.

Regards,
Hanumanth

> -Original Message-
> From: Jiang, YuX 
> Sent: Thursday, November 17, 2022 2:13 PM
> To: Hanumanth Reddy Pothula ; Singh, Aman
> Deep ; Zhang, Yuying
> 
> Cc: dev@dpdk.org; andrew.rybche...@oktetlabs.ru;
> tho...@monjalon.net; Jerin Jacob Kollanukkaran ;
> Nithin Kumar Dabilpuram 
> Subject: [EXT] RE: [PATCH v14 1/1] app/testpmd: support multiple mbuf
> pools per Rx queue
> 
> External Email
> 
> --
> Hi Hanumanth,
> 
> We met an issue with this patch; can you please have a look quickly?
> https://bugs.dpdk.org/show_bug.cgi?id=1128
> 
> Best regards,
> Yu Jiang
> 
> > -Original Message-
> > From: Hanumanth Pothula 
> > Sent: Thursday, November 10, 2022 6:17 PM
> > To: Singh, Aman Deep ; Zhang, Yuying
> > 
> > Cc: dev@dpdk.org; andrew.rybche...@oktetlabs.ru;
> tho...@monjalon.net;
> > jer...@marvell.com; ndabilpu...@marvell.com; hpoth...@marvell.com
> > Subject: [PATCH v14 1/1] app/testpmd: support multiple mbuf pools per
> > Rx queue
> >
> > Some of the HW has support for choosing memory pools based on the
> > packet's size. The pool sort capability allows PMD/NIC to choose a
> > memory pool based on the packet's length.
> >
> > On multiple mempool support enabled, populate mempool array
> accordingly.
> > Also, print pool name on which packet is received.
> >
> > Signed-off-by: Hanumanth Pothula 
> >


Re: [PATCH] app/testpmd: fix action destruction memory leak

2022-11-17 Thread David Marchand
On Thu, Nov 17, 2022 at 9:56 AM Suanming Mou  wrote:
>
> In case action handle destroy fails, the job memory was not freed
> properly. This commit fixes the possible memory leak in the case where
> action handle destruction fails.
>
> Fixes: c9dc03840873 ("ethdev: add indirect action async query")
>

Reported-by: David Marchand 
> Signed-off-by: Suanming Mou 

LGTM.
Reviewed-by: David Marchand 

Thanks.

-- 
David Marchand



[dpdk-dev v6] doc: support IPsec Multi-buffer lib v1.3

2022-11-17 Thread Kai Ji
From: Pablo de Lara 

Updated AESNI MB and AESNI GCM, KASUMI, ZUC, SNOW3G
and CHACHA20_POLY1305 PMD documentation guides
with information about the latest Intel IPSec Multi-buffer
library supported.

Signed-off-by: Pablo de Lara 
Acked-by: Ciara Power 
Acked-by: Brian Dooley 
Signed-off-by: Kai Ji 
---
-v6: Release notes update reword
-v5: Release notes update
-v4: Added information on CHACHA20_POLY1305 PMD guide
-v3: Fixed library version from 1.2 to 1.3 in one line
-v2: Removed repeated word 'the'
---
 doc/guides/cryptodevs/aesni_gcm.rst |  8 +++---
 doc/guides/cryptodevs/aesni_mb.rst  | 29 -
 doc/guides/cryptodevs/chacha20_poly1305.rst | 12 ++---
 doc/guides/cryptodevs/kasumi.rst| 15 ---
 doc/guides/cryptodevs/snow3g.rst| 15 ---
 doc/guides/cryptodevs/zuc.rst   | 14 +++---
 doc/guides/rel_notes/release_22_11.rst  | 11 +++-
 7 files changed, 77 insertions(+), 27 deletions(-)

diff --git a/doc/guides/cryptodevs/aesni_gcm.rst b/doc/guides/cryptodevs/aesni_gcm.rst
index 6229392f58..5192287ed8 100644
--- a/doc/guides/cryptodevs/aesni_gcm.rst
+++ b/doc/guides/cryptodevs/aesni_gcm.rst
@@ -40,8 +40,8 @@ Installation
 To build DPDK with the AESNI_GCM_PMD the user is required to download the 
multi-buffer
 library from `here `_
 and compile it on their user system before building DPDK.
-The latest version of the library supported by this PMD is v1.2, which
-can be downloaded in 
``_.
+The latest version of the library supported by this PMD is v1.3, which
+can be downloaded in 
``_.

 .. code-block:: console

@@ -84,8 +84,8 @@ and the external crypto libraries supported by them:
17.08 - 18.02  Multi-buffer library 0.46 - 0.48
18.05 - 19.02  Multi-buffer library 0.49 - 0.52
19.05 - 20.08  Multi-buffer library 0.52 - 0.55
-   20.11 - 21.08  Multi-buffer library 0.53 - 1.2*
-   21.11+ Multi-buffer library 1.0  - 1.2*
+   20.11 - 21.08  Multi-buffer library 0.53 - 1.3*
+   21.11+ Multi-buffer library 1.0  - 1.3*
=  

 \* Multi-buffer library 1.0 or newer only works for Meson but not Make build 
system.
diff --git a/doc/guides/cryptodevs/aesni_mb.rst b/doc/guides/cryptodevs/aesni_mb.rst
index 599ed5698f..b9bf03655d 100644
--- a/doc/guides/cryptodevs/aesni_mb.rst
+++ b/doc/guides/cryptodevs/aesni_mb.rst
@@ -1,7 +1,7 @@
 ..  SPDX-License-Identifier: BSD-3-Clause
 Copyright(c) 2015-2018 Intel Corporation.

-AESN-NI Multi Buffer Crypto Poll Mode Driver
+AES-NI Multi Buffer Crypto Poll Mode Driver
 


@@ -10,8 +10,6 @@ support for utilizing Intel multi buffer library, see the 
white paper
 `Fast Multi-buffer IPsec Implementations on Intel® Architecture Processors
 
`_.

-The AES-NI MB PMD has current only been tested on Fedora 21 64-bit with gcc.
-
 The AES-NI MB PMD supports synchronous mode of operation with
 ``rte_cryptodev_sym_cpu_crypto_process`` function call.

@@ -77,6 +75,23 @@ Limitations
 * RTE_CRYPTO_CIPHER_DES_DOCSISBPI is not supported for combined Crypto-CRC
   DOCSIS security protocol.

+AESNI MB PMD selection over SNOW3G/ZUC/KASUMI PMDs
+--
+
+This PMD supports wireless cipher suite (SNOW3G, ZUC and KASUMI).
+On Intel processors, it is recommended to use this PMD instead of SNOW3G, ZUC 
and KASUMI PMDs,
+as it enables algorithm mixing (e.g. cipher algorithm SNOW3G-UEA2 with
+authentication algorithm AES-CMAC-128) and performance over IMIX (packet size 
mix) traffic
+is significantly higher.
+
+AESNI MB PMD selection over CHACHA20-POLY1305 PMD
+-
+
+This PMD supports Chacha20-Poly1305 algorithm.
+On Intel processors, it is recommended to use this PMD instead of 
CHACHA20-POLY1305 PMD,
+as it delivers better performance on single segment buffers.
+For multi-segment buffers, it is still recommended to use CHACHA20-POLY1305 
PMD,
+until the new SGL API is introduced in the AESNI MB PMD.

 Installation
 
@@ -84,8 +99,8 @@ Installation
 To build DPDK with the AESNI_MB_PMD the user is required to download the 
multi-buffer
 library from `here `_
 and compile it on their user system before building DPDK.
-The latest version of the library supported by this PMD is v1.2, which
-can be downloaded from 
``_.
+The latest version of the library supported by this PMD is v1.3, which
+can be downloaded from 
``_.

 .. code-block:: console

@@ -130,8

[Bug 1129] net/mlx5: cannot transmit if Tx queue is setup with maximum number of descriptors

2022-11-17 Thread bugzilla
https://bugs.dpdk.org/show_bug.cgi?id=1129

Bug ID: 1129
   Summary: net/mlx5: cannot transmit if Tx queue is setup with
maximum number of descriptors
   Product: DPDK
   Version: 22.11
  Hardware: All
OS: All
Status: UNCONFIRMED
  Severity: normal
  Priority: Normal
 Component: ethdev
  Assignee: dev@dpdk.org
  Reporter: andrew.rybche...@oktetlabs.ru
  Target Milestone: ---

net/mlx5: cannot transmit if Tx queue is setup with maximum number of
descriptors

net/mlx5 does not report Rx/Tx descriptor limits (min/max/align).
As a result, the application sees the theoretical maximum UINT16_MAX (65535),
which is set as the default by ethdev.

If the application tries to use that value to set up a Tx queue, setup
succeeds, but Tx burst always returns 0. I guess the reason is in the
following log message:

mlx5_net: port 0 increased number of descriptors in Tx queue 0 to the next
power of two (0)

i.e. effective number of descriptors is equal to 0.
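
A plausible mechanism for the "(0)" in that message, stated as an assumption
rather than a confirmed root cause: rounding 65535 up to the next power of
two overflows a 16-bit descriptor count.

#include <stdint.h>
#include <rte_common.h>

static uint16_t
effective_nb_desc(void)
{
	uint16_t nb_desc = UINT16_MAX; /* 65535, the ethdev default maximum */
	/* rte_align32pow2(65535) == 65536, which truncates to 0 in 16 bits */
	return (uint16_t)rte_align32pow2(nb_desc);
}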

-- 
You are receiving this mail because:
You are the assignee for the bug.

[PATCH v2 1/1] app/testpmd: add valid check to verify multi mempool feature

2022-11-17 Thread Hanumanth Pothula
Validate ethdev parameter 'max_rx_mempools' to know wheater device
supports multi-mempool feature or not.

Bugzilla ID: 1128

Signed-off-by: Hanumanth Pothula 
v2:
 - Rebased on tip of next-net/main
---
 app/test-pmd/testpmd.c | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 4e25f77c6a..fd634bd5e6 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2655,16 +2655,22 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
struct rte_mempool *mpx;
+   struct rte_eth_dev_info dev_info;
unsigned int i, mp_n;
uint32_t prev_hdrs = 0;
int ret;
 
+   ret = rte_eth_dev_info_get(port_id, &dev_info);
+   if (ret != 0)
+   return ret;
+
/* Verify Rx queue configuration is single pool and segment or
 * multiple pool/segment.
+* @see rte_eth_dev_info::max_rx_mempools
 * @see rte_eth_rxconf::rx_mempools
 * @see rte_eth_rxconf::rx_seg
 */
-   if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
+   if (!(dev_info.max_rx_mempools != 0) && !(rx_pkt_nb_segs > 1 ||
((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
/* Single pool/segment configuration */
rx_conf->rx_seg = NULL;
-- 
2.25.1



Regarding User Data in DPDK ACL Library.

2022-11-17 Thread venkatesh bs
Hi DPDK Team,

After the ACL match, the DPDK classification API returns the user data of
the highest-priority rule, as described below in the document:

53. Packet Classification and Access Control — Data Plane Development Kit
22.11.0-rc2 documentation (dpdk.org)


   - *userdata*: A user-defined value. For each category, a successful
   match returns the userdata field of the highest priority matched rule. When
   no rules match, returned value is zero

I wonder why the user data support does not return 64-bit values. It is
always possible that the user data in an application is 64 bits long, but
since 64-bit user data can't be returned by the DPDK ACL library, the
application has to convert from 64 to 32 bits during rule add and back
again after classification.

Has anyone faced this issue? Please point out if my understanding is wrong
somewhere, or suggest a possible solution if someone has already worked
through this.

Thanks In Advance.
Regards,
Venkatesh B Siddappa.


[PATCH 1/2] net/mlx5: fix port private max_lro_msg_size

2022-11-17 Thread Gregory Etelson
The PMD analyzes the maximal LRO size of each Rx queue and selects one
that fits all queues to configure the TIR LRO attribute.
The TIR LRO attribute is the number of 256-byte chunks that matches the
selected maximal LRO size.

The PMD used `priv->max_lro_msg_size` both for the selected maximal LRO
size and for the number of TIR chunks.

Fixes: 9f1035b5f71c ("net/mlx5: fix port initialization with small LRO")

Signed-off-by: Gregory Etelson 
Acked-by: Matan Azrad 
---
 drivers/net/mlx5/mlx5.h  | 2 +-
 drivers/net/mlx5/mlx5_devx.c | 3 ++-
 drivers/net/mlx5/mlx5_rxq.c  | 4 +---
 3 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 02bee5808d..31982002ee 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1711,7 +1711,7 @@ struct mlx5_priv {
uint32_t refcnt; /**< Reference counter. */
/**< Verbs modify header action object. */
uint8_t ft_type; /**< Flow table type, Rx or Tx. */
-   uint8_t max_lro_msg_size;
+   uint32_t max_lro_msg_size;
uint32_t link_speed_capa; /* Link speed capabilities. */
struct mlx5_xstats_ctrl xstats_ctrl; /* Extended stats control. */
struct mlx5_stats_ctrl stats_ctrl; /* Stats control. */
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index c1305836cf..02deaac612 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -870,7 +870,8 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
if (lro) {
MLX5_ASSERT(priv->sh->config.lro_allowed);
tir_attr->lro_timeout_period_usecs = priv->config.lro_timeout;
-   tir_attr->lro_max_msg_sz = priv->max_lro_msg_size;
+   tir_attr->lro_max_msg_sz =
+   priv->max_lro_msg_size / MLX5_LRO_SEG_CHUNK_SIZE;
tir_attr->lro_enable_mask =
MLX5_TIRC_LRO_ENABLE_MASK_IPV4_LRO |
MLX5_TIRC_LRO_ENABLE_MASK_IPV6_LRO;
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 724cd6c7e6..81aa3f074a 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1533,7 +1533,6 @@ mlx5_max_lro_msg_size_adjust(struct rte_eth_dev *dev, uint16_t idx,
MLX5_MAX_TCP_HDR_OFFSET)
max_lro_size -= MLX5_MAX_TCP_HDR_OFFSET;
max_lro_size = RTE_MIN(max_lro_size, MLX5_MAX_LRO_SIZE);
-   max_lro_size /= MLX5_LRO_SEG_CHUNK_SIZE;
if (priv->max_lro_msg_size)
priv->max_lro_msg_size =
RTE_MIN((uint32_t)priv->max_lro_msg_size, max_lro_size);
@@ -1541,8 +1540,7 @@ mlx5_max_lro_msg_size_adjust(struct rte_eth_dev *dev, uint16_t idx,
priv->max_lro_msg_size = max_lro_size;
DRV_LOG(DEBUG,
"port %u Rx Queue %u max LRO message size adjusted to %u bytes",
-   dev->data->port_id, idx,
-   priv->max_lro_msg_size * MLX5_LRO_SEG_CHUNK_SIZE);
+   dev->data->port_id, idx, priv->max_lro_msg_size);
 }
 
 /**
-- 
2.34.1
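
A small sketch of the unit handling after this patch, with illustrative
values (MLX5_LRO_SEG_CHUNK_SIZE is 256, per the companion doc patch): the
selected size is kept in bytes, which no longer fits the old uint8_t field,
and is converted to chunks only when the TIR attribute is filled.

#include <stdint.h>

#define CHUNK_SIZE 256u /* stands in for MLX5_LRO_SEG_CHUNK_SIZE */

static uint32_t
lro_tir_chunks(uint32_t max_lro_msg_size_bytes)
{
	/* e.g. 65280 bytes -> 65280 / 256 = 255 chunks for the TIR */
	return max_lro_msg_size_bytes / CHUNK_SIZE;
}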



[PATCH 2/2] doc: update MLX5 LRO limitation

2022-11-17 Thread Gregory Etelson
The maximal LRO message size must be a multiple of 256.
Otherwise, the TCP payload may not fit into a single WQE.

Cc: sta...@dpdk.org
Signed-off-by: Gregory Etelson 
Acked-by: Matan Azrad 
---
 doc/guides/nics/mlx5.rst | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 4f0db21dde..98e0b24be4 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -278,6 +278,9 @@ Limitations
 - No Tx metadata go to the E-Switch steering domain for the Flow group 0.
   The flows within group 0 and set metadata action are rejected by hardware.
 
+- The driver rounds down the ``max_lro_pkt_size`` value in the port
+  configuration to a multiple of 256 due to HW limitation.
+
 .. note::
 
MAC addresses not already present in the bridge table of the associated
-- 
2.34.1



[PATCH] net/mlx5: fix modify field operation validation

2022-11-17 Thread Dariusz Sosnowski
This patch removes the following checks from validation
of modify field action:

- rejection of ADD operation,
- offsets should be aligned to 4 bytes.

These limitations were removed in
commit 0f4aa72b99da ("net/mlx5: support flow modify field with HWS"),
but non-HWS validation was not updated.

Notes about these limitations are removed from mlx5 PMD docs.
On top of that, the current offsetting behavior in modify field action
is clarified in the mlx5 docs.

Fixes: 0f4aa72b99da ("net/mlx5: support flow modify field with HWS")
Cc: suanmi...@nvidia.com

Signed-off-by: Dariusz Sosnowski 
Acked-by: Viacheslav Ovsiienko 
---
 doc/guides/nics/mlx5.rst| 10 --
 drivers/net/mlx5/mlx5_flow_dv.c | 21 -
 2 files changed, 16 insertions(+), 15 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 4f0db21dde..203bbd9d27 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -447,17 +447,23 @@ Limitations
 
 - Modify Field flow:
 
-  - Supports the 'set' operation only for 
``RTE_FLOW_ACTION_TYPE_MODIFY_FIELD`` action.
+  - Supports the 'set' and 'add' operations for 
``RTE_FLOW_ACTION_TYPE_MODIFY_FIELD`` action.
   - Modification of an arbitrary place in a packet via the special 
``RTE_FLOW_FIELD_START`` Field ID is not supported.
   - Modification of the 802.1Q Tag, VXLAN Network or GENEVE Network ID's is 
not supported.
   - Encapsulation levels are not supported, can modify outermost header fields 
only.
-  - Offsets must be 32-bits aligned, cannot skip past the boundary of a field.
+  - Offsets cannot skip past the boundary of a field.
   - If the field type is ``RTE_FLOW_FIELD_MAC_TYPE``
 and packet contains one or more VLAN headers,
 the meaningful type field following the last VLAN header
 is used as modify field operation argument.
 The modify field action is not intended to modify VLAN headers type field,
 dedicated VLAN push and pop actions should be used instead.
+  - For packet fields (e.g. MAC addresses, IPv4 addresses or L4 ports)
+offset specifies the number of bits to skip from field's start,
+starting from MSB in the first byte, in the network order.
+  - For flow metadata fields (e.g. META or TAG)
+offset specifies the number of bits to skip from field's start,
+starting from LSB in the least significant byte, in the host order.
 
 - Age action:
 
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index bc9a75f225..f1a3868e48 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -5035,13 +5035,11 @@ flow_dv_validate_action_modify_field(struct rte_eth_dev *dev,
" the width of a field");
if (action_modify_field->dst.field != RTE_FLOW_FIELD_VALUE &&
action_modify_field->dst.field != RTE_FLOW_FIELD_POINTER) {
-   if ((action_modify_field->dst.offset +
-action_modify_field->width > dst_width) ||
-   (action_modify_field->dst.offset % 32))
+   if (action_modify_field->dst.offset +
+   action_modify_field->width > dst_width)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, action,
-   "destination offset is too big"
-   " or not aligned to 4 bytes");
+   "destination offset is too big");
if (action_modify_field->dst.level &&
action_modify_field->dst.field != RTE_FLOW_FIELD_TAG)
return rte_flow_error_set(error, ENOTSUP,
@@ -5056,13 +5054,11 @@ flow_dv_validate_action_modify_field(struct rte_eth_dev *dev,
RTE_FLOW_ERROR_TYPE_ACTION, action,
"modify field action is not"
" supported for group 0");
-   if ((action_modify_field->src.offset +
-action_modify_field->width > src_width) ||
-   (action_modify_field->src.offset % 32))
+   if (action_modify_field->src.offset +
+   action_modify_field->width > src_width)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, action,
-   "source offset is too big"
-   " or not aligned to 4 bytes");
+   "source offset is too big");
if (action_modify_field->src.level &&
action_modify_field->src.field != RTE_FLOW_FIELD_TAG)
return rte_flow_error_set(error, ENOTSUP,
@@ -5132,11 +5128,10 @@ flow_dv_validate_action_modify_field(struct rte_eth_dev *dev,
"cann

Re: [PATCH v2 1/1] app/testpmd: add valid check to verify multi mempool feature

2022-11-17 Thread Singh, Aman Deep

On 11/17/2022 6:25 PM, Hanumanth Pothula wrote:

> Validate ethdev parameter 'max_rx_mempools' to know wheater device
> supports multi-mempool feature or not.

Spell check: whether

> Bugzilla ID: 1128
>
> Signed-off-by: Hanumanth Pothula 

Tested-by: Aman Singh 


> v2:
>  - Rebased on tip of next-net/main
> ---
>  app/test-pmd/testpmd.c | 8 +++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 4e25f77c6a..fd634bd5e6 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -2655,16 +2655,22 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
> union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
> struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
> struct rte_mempool *mpx;
> +   struct rte_eth_dev_info dev_info;
> unsigned int i, mp_n;
> uint32_t prev_hdrs = 0;
> int ret;
>
> +   ret = rte_eth_dev_info_get(port_id, &dev_info);
> +   if (ret != 0)
> +   return ret;
> +
> /* Verify Rx queue configuration is single pool and segment or
>  * multiple pool/segment.
> +* @see rte_eth_dev_info::max_rx_mempools
>  * @see rte_eth_rxconf::rx_mempools
>  * @see rte_eth_rxconf::rx_seg
>  */
> -   if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
> +   if (!(dev_info.max_rx_mempools != 0) && !(rx_pkt_nb_segs > 1 ||

Can we make the check simpler "(dev_info.max_rx_mempools == 0)"

> ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
> /* Single pool/segment configuration */
> rx_conf->rx_seg = NULL;




[PATCH] net/mlx5: fix invalid memory access in port closing

2022-11-17 Thread Michael Baum
The shared IB device (sh) has per-port data that is updated in port creation.
In port closing this port data is updated even when the SH still exists.

However, this update happened after the SH had been released, and for the
last port it actually accessed freed memory.

This patch updates the port data before the SH is released.

Fixes: 13c5c093905c ("net/mlx5: fix port event cleaning order")
Cc: michae...@nvidia.com

Signed-off-by: Michael Baum 
Acked-by: Matan Azrad 
---
 drivers/net/mlx5/mlx5.c | 12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 6a0d66247a..e55be8720e 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -2119,6 +2119,12 @@ mlx5_dev_close(struct rte_eth_dev *dev)
if (priv->hrxqs)
mlx5_list_destroy(priv->hrxqs);
mlx5_free(priv->ext_rxqs);
+   priv->sh->port[priv->dev_port - 1].nl_ih_port_id = RTE_MAX_ETHPORTS;
+   /*
+* The interrupt handler port id must be reset before priv is reset
+* since 'mlx5_dev_interrupt_nl_cb' uses priv.
+*/
+   rte_io_wmb();
/*
 * Free the shared context in last turn, because the cleanup
 * routines above may use some shared fields, like
@@ -2144,12 +2150,6 @@ mlx5_dev_close(struct rte_eth_dev *dev)
if (!c)
claim_zero(rte_eth_switch_domain_free(priv->domain_id));
}
-   priv->sh->port[priv->dev_port - 1].nl_ih_port_id = RTE_MAX_ETHPORTS;
-   /*
-* The interrupt handler port id must be reset before priv is reset
-* since 'mlx5_dev_interrupt_nl_cb' uses priv.
-*/
-   rte_io_wmb();
memset(priv, 0, sizeof(*priv));
priv->domain_id = RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID;
/*
-- 
2.25.1
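
For context, the race the reordering closes: the netlink interrupt handler
can run concurrently with port close, so the per-port slot must be
unpublished (and the store made visible) before priv is torn down. A
simplified sketch of the two sides; the reader side is paraphrased from
mlx5_dev_interrupt_nl_cb, not quoted:

	/* closing thread */
	sh->port[port - 1].nl_ih_port_id = RTE_MAX_ETHPORTS; /* unpublish */
	rte_io_wmb();                   /* order the store before teardown */
	memset(priv, 0, sizeof(*priv)); /* safe: readers see the port gone */

	/* interrupt thread, simplified */
	if (sh->port[i].nl_ih_port_id >= RTE_MAX_ETHPORTS)
		continue;               /* port unpublished: never touch priv */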



RE: [EXT] Re: [PATCH v2 1/1] app/testpmd: add valid check to verify multi mempool feature

2022-11-17 Thread Hanumanth Reddy Pothula


> -Original Message-
> From: Singh, Aman Deep 
> Sent: Thursday, November 17, 2022 8:30 PM
> To: Hanumanth Reddy Pothula ; Yuying Zhang
> 
> Cc: dev@dpdk.org; andrew.rybche...@oktetlabs.ru;
> tho...@monjalon.net; -yux.ji...@intel.com; Jerin Jacob Kollanukkaran
> ; Nithin Kumar Dabilpuram
> 
> Subject: [EXT] Re: [PATCH v2 1/1] app/testpmd: add valid check to verify
> multi mempool feature
> 
> 
> 
> On 11/17/2022 6:25 PM, Hanumanth Pothula wrote:
> > Validate ethdev parameter 'max_rx_mempools' to know wheater device
> > supports multi-mempool feature or not.
> 
> Spell check: whether
> 
> > Bugzilla ID: 1128
> >
> > Signed-off-by: Hanumanth Pothula 
> 
> Tested-by: Aman Singh 
> 
> > v2:
> >   - Rebased on tip of next-net/main
> > ---
> >   app/test-pmd/testpmd.c | 8 +++-
> >   1 file changed, 7 insertions(+), 1 deletion(-)
> >
> > diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
> > 4e25f77c6a..fd634bd5e6 100644
> > --- a/app/test-pmd/testpmd.c
> > +++ b/app/test-pmd/testpmd.c
> > @@ -2655,16 +2655,22 @@ rx_queue_setup(uint16_t port_id, uint16_t
> rx_queue_id,
> > union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
> > struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
> > struct rte_mempool *mpx;
> > +   struct rte_eth_dev_info dev_info;
> > unsigned int i, mp_n;
> > uint32_t prev_hdrs = 0;
> > int ret;
> >
> > +   ret = rte_eth_dev_info_get(port_id, &dev_info);
> > +   if (ret != 0)
> > +   return ret;
> > +
> > /* Verify Rx queue configuration is single pool and segment or
> >  * multiple pool/segment.
> > +* @see rte_eth_dev_info::max_rx_mempools
> >  * @see rte_eth_rxconf::rx_mempools
> >  * @see rte_eth_rxconf::rx_seg
> >  */
> > -   if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
> > +   if (!(dev_info.max_rx_mempools != 0) && !(rx_pkt_nb_segs > 1 ||
> 
> Can we make the check simpler "(dev_info.max_rx_mempools == 0)"
> 
Sure, will simplify the condition.

> > ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) !=
> 0))) {
> > /* Single pool/segment configuration */
> > rx_conf->rx_seg = NULL;



[PATCH v3 1/1] app/testpmd: add valid check to verify multi mempool feature

2022-11-17 Thread Hanumanth Pothula
Validate ethdev parameter 'max_rx_mempools' to know whether
device supports multi-mempool feature or not.

Bugzilla ID: 1128

Signed-off-by: Hanumanth Pothula 
v3:
 - Simplified conditional check.
 - Corrected spell, whether.
v2:
 - Rebased on tip of next-net/main.
---
 app/test-pmd/testpmd.c | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 4e25f77c6a..6c3d0948ec 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2655,16 +2655,22 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
struct rte_mempool *mpx;
+   struct rte_eth_dev_info dev_info;
unsigned int i, mp_n;
uint32_t prev_hdrs = 0;
int ret;
 
+   ret = rte_eth_dev_info_get(port_id, &dev_info);
+   if (ret != 0)
+   return ret;
+
/* Verify Rx queue configuration is single pool and segment or
 * multiple pool/segment.
+* @see rte_eth_dev_info::max_rx_mempools
 * @see rte_eth_rxconf::rx_mempools
 * @see rte_eth_rxconf::rx_seg
 */
-   if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
+   if ((dev_info.max_rx_mempools == 0) && !(rx_pkt_nb_segs > 1 ||
((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
/* Single pool/segment configuration */
rx_conf->rx_seg = NULL;
-- 
2.25.1
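
For readers following the thread, the contract being validated is the
rte_eth_dev_info::max_rx_mempools field added in 22.11; a hedged sketch of
the application-side decision it enables:

	struct rte_eth_dev_info info;

	if (rte_eth_dev_info_get(port_id, &info) != 0)
		return; /* or propagate the error */
	if (info.max_rx_mempools == 0) {
		/* PMD has no multi-mempool support: one pool per Rx queue */
	} else {
		/* up to info.max_rx_mempools pools may be passed via
		 * rte_eth_rxconf::rx_mempools / rx_nmempool
		 */
	}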



Re: Regarding User Data in DPDK ACL Library.

2022-11-17 Thread Stephen Hemminger
On Thu, 17 Nov 2022 19:28:12 +0530
venkatesh bs  wrote:

> Hi DPDK Team,
> 
> After an ACL match, the DPDK classification API returns the user data of
> the highest-priority rule, as described below in the documentation.
> 
> 53. Packet Classification and Access Control — Data Plane Development Kit
> 22.11.0-rc2 documentation (dpdk.org)
> 
> 
>- *userdata*: A user-defined value. For each category, a successful
>match returns the userdata field of the highest priority matched rule. When
>no rules match, returned value is zero
> 
> I wonder why the user data is not a 64-bit value. The application's user
> data can easily be 64 bits long, but since 64-bit user data can't be
> returned by the DPDK ACL library, the application has to maintain a
> conversion scheme from 64 to 32 bits at rule add and back again after
> classification.
> 
> I wonder if anyone has faced this issue. Please correct me if my
> understanding is wrong, or suggest a possible solution if someone has
> already worked through it.
> 
> Thanks In Advance.
> Regards,
> Venkatesh B Siddappa.

It looks like all users of this API use the userdata as an index
into a table of application-specific rules.
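
A minimal sketch of that pattern (MAX_RULES, rule_table, cookie and the
helper names are hypothetical application code, not part of rte_acl; only
the 32-bit userdata is handed to the library):

	#include <stdint.h>

	#define MAX_RULES 1024

	struct app_rule {
		uint64_t cookie; /* the real 64-bit application data */
	};

	static struct app_rule rule_table[MAX_RULES];
	static uint32_t rule_cnt;

	/* at rule-add time: store a 32-bit index, not the 64-bit value */
	static uint32_t
	app_rule_add(uint64_t app_data64)
	{
		uint32_t idx = rule_cnt++;

		rule_table[idx].cookie = app_data64;
		return idx + 1; /* goes into rte_acl userdata; 0 = no match */
	}

	/* after rte_acl_classify(): map the 32-bit result back */
	static uint64_t
	app_rule_cookie(uint32_t userdata)
	{
		return rule_table[userdata - 1].cookie;
	}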


Re: [PATCH v3 1/1] app/testpmd: add valid check to verify multi mempool feature

2022-11-17 Thread Ferruh Yigit
On 11/17/2022 4:03 PM, Hanumanth Pothula wrote:
> Validate ethdev parameter 'max_rx_mempools' to know whether
> device supports multi-mempool feature or not.
> 
> Bugzilla ID: 1128
> 
> Signed-off-by: Hanumanth Pothula 
> v3:
>  - Simplified conditional check.
>  - Corrected spell, whether.
> v2:
>  - Rebased on tip of next-net/main.
> ---
>  app/test-pmd/testpmd.c | 8 +++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 4e25f77c6a..6c3d0948ec 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -2655,16 +2655,22 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
>   union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
>   struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
>   struct rte_mempool *mpx;
> + struct rte_eth_dev_info dev_info;
>   unsigned int i, mp_n;
>   uint32_t prev_hdrs = 0;
>   int ret;
>  
> + ret = rte_eth_dev_info_get(port_id, &dev_info);
> + if (ret != 0)
> + return ret;
> +
>   /* Verify Rx queue configuration is single pool and segment or
>* multiple pool/segment.
> +  * @see rte_eth_dev_info::max_rx_mempools
>* @see rte_eth_rxconf::rx_mempools
>* @see rte_eth_rxconf::rx_seg
>*/
> - if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
> + if ((dev_info.max_rx_mempools == 0) && !(rx_pkt_nb_segs > 1 ||
>   ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
>   /* Single pool/segment configuration */
>   rx_conf->rx_seg = NULL;


Hi Yingya, Yu,

Can you please verify this patch?

Thanks,
ferruh


[PATCH 0/3] fix wrong increment of free list counter

2022-11-17 Thread Chaoyong He
The wrong increment of the free list counter occurs in three different
places, each introduced by a different commit.

Chaoyong He (3):
  net/nfp: fix wrong increment of free list counter
  net/nfp: fix wrong increment of free list counter for PF
  net/nfp: fix wrong increment of free list counter for VNIC

 drivers/net/nfp/flower/nfp_flower.c  | 3 +--
 drivers/net/nfp/flower/nfp_flower_ctrl.c | 3 +--
 drivers/net/nfp/nfp_rxtx.c   | 3 +--
 3 files changed, 3 insertions(+), 6 deletions(-)

-- 
2.29.3



[PATCH 1/3] net/nfp: fix wrong increment of free list counter

2022-11-17 Thread Chaoyong He
When receiving a packet that is larger than the mbuf size, the Rx
function will break the receive loop and send a free list descriptor
with a random DMA address.

Fix this by moving the increment of the free list descriptor counter
to after the packet size has been checked and acted on.

Fixes: bb340f56fcb7 ("net/nfp: fix memory leak in Rx")
Cc: long...@corigine.com
Cc: sta...@dpdk.org

Signed-off-by: Chaoyong He 
Reviewed-by: Niklas Söderlund 
---
 drivers/net/nfp/nfp_rxtx.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index b8c874d315..38377ca218 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -293,8 +293,6 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf 
**rx_pkts, uint16_t nb_pkts)
break;
}
 
-   nb_hold++;
-
/*
 * Grab the mbuf and refill the descriptor with the
 * previously allocated mbuf
@@ -365,6 +363,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf 
**rx_pkts, uint16_t nb_pkts)
rxds->fld.dd = 0;
rxds->fld.dma_addr_hi = (dma_addr >> 32) & 0xff;
rxds->fld.dma_addr_lo = dma_addr & 0xffffffff;
+   nb_hold++;
 
rxq->rd_p++;
if (unlikely(rxq->rd_p == rxq->rx_count)) /* wrapping?*/
-- 
2.29.3



[PATCH 2/3] net/nfp: fix wrong increment of free list counter for PF

2022-11-17 Thread Chaoyong He
When using the flower firmware application, and the PF receives a packet
that is larger than the mbuf size, the Rx function will break the
receive loop and send a free list descriptor with a random DMA address.

Fix this by moving the increment of the free list descriptor counter
to after the packet size has been checked and acted on.

Fixes: cf559c2a1d2a ("net/nfp: add flower PF Rx/Tx")

Signed-off-by: Chaoyong He 
Reviewed-by: Niklas Söderlund 
---
 drivers/net/nfp/flower/nfp_flower.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/net/nfp/flower/nfp_flower.c 
b/drivers/net/nfp/flower/nfp_flower.c
index aa8199dde2..e447258d97 100644
--- a/drivers/net/nfp/flower/nfp_flower.c
+++ b/drivers/net/nfp/flower/nfp_flower.c
@@ -369,8 +369,6 @@ nfp_flower_pf_recv_pkts(void *rx_queue,
break;
}
 
-   nb_hold++;
-
/*
 * Grab the mbuf and refill the descriptor with the
 * previously allocated mbuf
@@ -455,6 +453,7 @@ nfp_flower_pf_recv_pkts(void *rx_queue,
rxds->fld.dd = 0;
rxds->fld.dma_addr_hi = (dma_addr >> 32) & 0xff;
rxds->fld.dma_addr_lo = dma_addr & 0xffffffff;
+   nb_hold++;
 
rxq->rd_p++;
if (unlikely(rxq->rd_p == rxq->rx_count))
-- 
2.29.3



[PATCH 3/3] net/nfp: fix wrong increment of free list counter for VNIC

2022-11-17 Thread Chaoyong He
When using the flower firmware application, and the ctrl vNIC receives a
packet that is larger than the mbuf size, the Rx function will break the
receive loop and send a free list descriptor with a random DMA address.

Fix this by moving the increment of the free list descriptor counter
to after the packet size has been checked and acted on.

Fixes: a36634e87e16 ("net/nfp: add flower ctrl VNIC Rx/Tx")

Signed-off-by: Chaoyong He 
Reviewed-by: Niklas Söderlund 
---
 drivers/net/nfp/flower/nfp_flower_ctrl.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/net/nfp/flower/nfp_flower_ctrl.c 
b/drivers/net/nfp/flower/nfp_flower_ctrl.c
index 953ab6e98c..3631e764fe 100644
--- a/drivers/net/nfp/flower/nfp_flower_ctrl.c
+++ b/drivers/net/nfp/flower/nfp_flower_ctrl.c
@@ -74,8 +74,6 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue,
break;
}
 
-   nb_hold++;
-
/*
 * Grab the mbuf and refill the descriptor with the
 * previously allocated mbuf
@@ -127,6 +125,7 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue,
rxds->fld.dd = 0;
rxds->fld.dma_addr_hi = (dma_addr >> 32) & 0xff;
rxds->fld.dma_addr_lo = dma_addr & 0xffffffff;
+   nb_hold++;
 
rxq->rd_p++;
if (unlikely(rxq->rd_p == rxq->rx_count)) /* wrapping?*/
-- 
2.29.3
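
The fix is the same in all three Rx paths; schematically (a simplified
sketch with hypothetical helper names, not the literal driver code):

	/* per received packet */
	if (pkt_len > mbuf_data_size) {
		/* oversized: bail out BEFORE counting the slot */
		break;
	}
	refill_descriptor(rxds, new_mbuf); /* write a valid DMA address */
	nb_hold++; /* count only slots that were actually refilled */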



[Bug 1130] [22.11-rc3] lib/eal meson build error with clang15.0.4 on Fedora37

2022-11-17 Thread bugzilla
https://bugs.dpdk.org/show_bug.cgi?id=1130

Bug ID: 1130
   Summary: [22.11-rc3] lib/eal meson build error with clang15.0.4
on Fedora37
   Product: DPDK
   Version: 22.11
  Hardware: All
OS: All
Status: UNCONFIRMED
  Severity: critical
  Priority: Normal
 Component: core
  Assignee: dev@dpdk.org
  Reporter: chenyux.hu...@intel.com
  Target Milestone: ---

[git]
commit 04f68bb92b6fee621ddf0f0948f5565fa31a84fd (HEAD, tag: v22.11-rc3)
Author: Thomas Monjalon 
Date:   Tue Nov 15 18:21:34 2022 +0100

version: 22.11-rc3

Signed-off-by: Thomas Monjalon 

[OS version]
 Fedora Linux 37 (Server Edition)
 Linux 6.0.7-301.fc37.x86_64
 clang version 15.0.4 (Fedora 15.0.4-1.fc37)

[bad commit]
 The new Fedora 37 environment hits this problem, while the old Fedora 36 did not.

[Test setup]
CC=clang meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all
--default-library=static x86_64-native-linuxapp-clang
ninja -j 10 -C x86_64-native-linuxapp-clang

[error log]
[root@localhost dpdk]# ninja -j 10 -C x86_64-native-linuxapp-clang/
ninja: Entering directory `x86_64-native-linuxapp-clang/'
[34/3596] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
FAILED: lib/librte_eal.a.p/eal_common_rte_service.c.o
clang -Ilib/librte_eal.a.p -Ilib -I../lib -I. -I.. -Iconfig -I../config
-Ilib/eal/include -I../lib/eal/include -Ilib/eal/linux/include
-I../lib/eal/linux/include -Ilib/eal/x86/include -I../lib/eal/x86/include
-Ilib/eal/common -I../lib/eal/common -Ilib/eal -I../lib/eal -Ilib/kvargs
-I../lib/kvargs -Ilib/metrics -I../lib/metrics -Ilib/telemetry
-I../lib/telemetry -fcolor-diagnostics -D_FILE_OFFSET_BITS=64 -Wall
-Winvalid-pch -Wextra -Werror -O3 -include rte_config.h -Wcast-qual
-Wdeprecated -Wformat -Wformat-nonliteral -Wformat-security
-Wmissing-declarations -Wmissing-prototypes -Wnested-externs
-Wold-style-definition -Wpointer-arith -Wsign-compare -Wstrict-prototypes
-Wundef -Wwrite-strings -Wno-address-of-packed-member
-Wno-missing-field-initializers -D_GNU_SOURCE -fPIC -march=native
-DALLOW_EXPERIMENTAL_API -DALLOW_INTERNAL_API '-DABI_VERSION="22.2"'
-DRTE_LIBEAL_USE_GETENTROPY -DRTE_LOG_DEFAULT_LOGTYPE=lib.eal -MD -MQ
lib/librte_eal.a.p/eal_common_rte_service.c.o -MF
lib/librte_eal.a.p/eal_common_rte_service.c.o.d -o
lib/librte_eal.a.p/eal_common_rte_service.c.o -c
../lib/eal/common/rte_service.c
../lib/eal/common/rte_service.c:100:6: error: variable 'count' set but not used
[-Werror,-Wunused-but-set-variable]
int count = 0;
^
1 error generated.
[43/3596] Generating lib/telemetry.sym_chk with a custom command (wrapped by
meson to capture output)
ninja: build stopped: subcommand failed.

-- 
You are receiving this mail because:
You are the assignee for the bug.
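
For reference, clang 15 is complaining that 'count' in rte_service.c is
incremented but never read. A sketch of the usual remedies, assuming the
counter really is dead (the actual fix may simply delete it):

	int count = 0;
	...
	count++; /* set but never read: -Wunused-but-set-variable fires */

	/* either remove the variable, or, if it must stay for a follow-up
	 * change, mark it used with DPDK's helper from rte_common.h:
	 */
	RTE_SET_USED(count);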

[PATCH] net/idpf: add supported ptypes get

2022-11-17 Thread beilei . xing
From: Beilei Xing 

l3fwd fails to launch, and the log shows:
port 0 cannot parse packet type, please add --parse-ptype
This patch adds the dev_supported_ptypes_get ops.

Fixes: 549343c25db8 ("net/idpf: support device initialization")

Signed-off-by: Beilei Xing 
---
 drivers/net/idpf/idpf_ethdev.c | 19 +++
 1 file changed, 19 insertions(+)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 51fc97bf7b..1ea0ed69d8 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -115,6 +115,24 @@ idpf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu 
__rte_unused)
return 0;
 }
 
+static const uint32_t *
+idpf_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+{
+   static const uint32_t ptypes[] = {
+   RTE_PTYPE_L2_ETHER,
+   RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+   RTE_PTYPE_L3_IPV6_EXT_UNKNOWN,
+   RTE_PTYPE_L4_FRAG,
+   RTE_PTYPE_L4_UDP,
+   RTE_PTYPE_L4_TCP,
+   RTE_PTYPE_L4_SCTP,
+   RTE_PTYPE_L4_ICMP,
+   RTE_PTYPE_UNKNOWN
+   };
+
+   return ptypes;
+}
+
 static int
 idpf_init_vport_req_info(struct rte_eth_dev *dev)
 {
@@ -1040,6 +1058,7 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
.rx_queue_release   = idpf_dev_rx_queue_release,
.tx_queue_release   = idpf_dev_tx_queue_release,
.mtu_set= idpf_dev_mtu_set,
+   .dev_supported_ptypes_get   = idpf_dev_supported_ptypes_get,
 };
 
 static uint16_t
-- 
2.26.2



RE: [PATCH v2] net/idpf: fix crash when launching l3fwd

2022-11-17 Thread Wu, Jingjing
> -
>   if (conf->txmode.mq_mode != RTE_ETH_MQ_TX_NONE) {
>   PMD_INIT_LOG(ERR, "Multi-queue TX mode %d is not supported",
>conf->txmode.mq_mode);
> diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
> index ac6486d4ef..88770447f8 100644
> --- a/drivers/net/idpf/idpf_vchnl.c
> +++ b/drivers/net/idpf/idpf_vchnl.c
> @@ -1197,6 +1197,9 @@ idpf_vc_dealloc_vectors(struct idpf_vport *vport)
>   int err, len;
> 
>   alloc_vec = vport->recv_vectors;
> + if (alloc_vec == NULL)
> + return -EINVAL;
> +
Would it be better to check before idpf_vc_dealloc_vectors?



RE: [PATCH] net/idpf: add supported ptypes get

2022-11-17 Thread Wu, Jingjing



> -Original Message-
> From: Xing, Beilei 
> Sent: Friday, November 18, 2022 11:51 AM
> To: Wu, Jingjing 
> Cc: dev@dpdk.org; Peng, Yuan ; Xing, Beilei
> 
> Subject: [PATCH] net/idpf: add supported ptypes get
> 
> From: Beilei Xing 
> 
> Failed to launch l3fwd, the log shows:
> port 0 cannot parse packet type, please add --parse-ptype
> This patch adds dev_supported_ptypes_get ops.
> 
> Fixes: 549343c25db8 ("net/idpf: support device initialization")
> 
> Signed-off-by: Beilei Xing 
Reviewed-by: Jingjing Wu 



RE: [PATCH v3 1/1] app/testpmd: add valid check to verify multi mempool feature

2022-11-17 Thread Han, YingyaX
There is a new issue after applying the patch.
Configuring buffer_split for a single queue fails and the port can't come up.
The test steps and logs are as follows:
./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-9 -n 4  -a 31:00.0 
--force-max-simd-bitwidth=64 -- -i --mbuf-size=2048,2048 --txq=4 --rxq=4
testpmd> port stop all
testpmd> port 0 rxq 2 rx_offload buffer_split on
testpmd> show port 0 rx_offload configuration
Rx Offloading Configuration of port 0 :
  Port : RSS_HASH
  Queue[ 0] : RSS_HASH
  Queue[ 1] : RSS_HASH
  Queue[ 2] : RSS_HASH BUFFER_SPLIT
  Queue[ 3] : RSS_HASH
testpmd> set rxhdrs eth
testpmd> port start all
Configuring Port 0 (socket 0)
No Rx segmentation offload configured
Fail to configure port 0 rx queues

BRs,
Yingya

-Original Message-
From: Ferruh Yigit  
Sent: Friday, November 18, 2022 7:37 AM
To: Hanumanth Pothula ; Singh, Aman Deep 
; Zhang, Yuying ; Han, 
YingyaX ; Jiang, YuX 
Cc: dev@dpdk.org; andrew.rybche...@oktetlabs.ru; tho...@monjalon.net; 
jer...@marvell.com; ndabilpu...@marvell.com
Subject: Re: [PATCH v3 1/1] app/testpmd: add valid check to verify multi 
mempool feature

On 11/17/2022 4:03 PM, Hanumanth Pothula wrote:
> Validate ethdev parameter 'max_rx_mempools' to know whether device 
> supports multi-mempool feature or not.
> 
> Bugzilla ID: 1128
> 
> Signed-off-by: Hanumanth Pothula 
> v3:
>  - Simplified conditional check.
>  - Corrected spell, whether.
> v2:
>  - Rebased on tip of next-net/main.
> ---
>  app/test-pmd/testpmd.c | 8 +++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index 
> 4e25f77c6a..6c3d0948ec 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -2655,16 +2655,22 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
>   union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
>   struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
>   struct rte_mempool *mpx;
> + struct rte_eth_dev_info dev_info;
>   unsigned int i, mp_n;
>   uint32_t prev_hdrs = 0;
>   int ret;
>  
> + ret = rte_eth_dev_info_get(port_id, &dev_info);
> + if (ret != 0)
> + return ret;
> +
>   /* Verify Rx queue configuration is single pool and segment or
>* multiple pool/segment.
> +  * @see rte_eth_dev_info::max_rx_mempools
>* @see rte_eth_rxconf::rx_mempools
>* @see rte_eth_rxconf::rx_seg
>*/
> - if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
> + if ((dev_info.max_rx_mempools == 0) && !(rx_pkt_nb_segs > 1 ||
>   ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
>   /* Single pool/segment configuration */
>   rx_conf->rx_seg = NULL;


Hi Yingya, Yu,

Can you please verify this patch?

Thanks,
ferruh


[PATCH] net/iavf: fix outer udp checksum offload

2022-11-17 Thread Zhichao Zeng
Currently, when dealing with UDP tunnel packet checksum offloading,
the outer UDP checksum is offloaded by default, so the
'csum set outer-udp hw/sw' command does not work.

This patch makes the 'csum set outer-udp hw/sw' command effective by
adding a check of the outer UDP checksum offload flag.

Fixes: f7c8c36fdeb7 ("net/iavf: enable inner and outer Tx checksum offload")

Signed-off-by: Zhichao Zeng 
---
 drivers/net/iavf/iavf_rxtx.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index cf87a6beda..c12fb96cfd 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -2454,7 +2454,8 @@ iavf_fill_ctx_desc_tunnelling_field(volatile uint64_t 
*qw0,
 * Shall be set only if L4TUNT = 01b and EIPT is not zero
 */
if (!(eip_typ & IAVF_TX_CTX_EXT_IP_NONE) &&
-   (eip_typ & IAVF_TXD_CTX_UDP_TUNNELING))
+   (eip_typ & IAVF_TXD_CTX_UDP_TUNNELING) &&
+   (m->ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM))
eip_typ |= IAVF_TXD_CTX_QW0_L4T_CS_MASK;
}
 
-- 
2.25.1
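
With the flag check in place, the testpmd toggle takes effect. An
illustrative sequence on port 0 (csum configuration requires the port to
be stopped; use 'csum set outer-udp hw 0' instead to offload to hardware):

	testpmd> set fwd csum
	testpmd> port stop 0
	testpmd> csum set outer-udp sw 0
	testpmd> port start 0
	testpmd> start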



[PATCH] doc: fix max supported packet len for virtio driver

2022-11-17 Thread liyi1
From: Yi Li 

According to the VIRTIO_MAX_RX_PKTLEN macro definition, the maximum
packet size currently supported by the virtio driver is 9728 bytes.

Signed-off-by: Yi Li 
---
 doc/guides/nics/virtio.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/guides/nics/virtio.rst b/doc/guides/nics/virtio.rst
index aace780249..c422e7347a 100644
--- a/doc/guides/nics/virtio.rst
+++ b/doc/guides/nics/virtio.rst
@@ -43,7 +43,7 @@ Features and Limitations of virtio PMD
 In this release, the virtio PMD provides the basic functionality of packet 
reception and transmission.
 
 *   It supports merge-able buffers per packet when receiving packets and 
scattered buffer per packet
-when transmitting packets. The packet size supported is from 64 to 1518.
+when transmitting packets. The packet size supported is from 64 to 9728.
 
 *   It supports multicast packets and promiscuous mode.
 
-- 
2.31.1
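
For reference, the 9728 figure comes from the driver header (as in current
sources; the exact location may move between releases):

	/* drivers/net/virtio/virtio_ethdev.h */
	#define VIRTIO_MAX_RX_PKTLEN  9728U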



[PATCH v3] net/idpf: fix crash when launching l3fwd

2022-11-17 Thread beilei . xing
From: Beilei Xing 

There is a core dump when launching l3fwd with 1 queue and 1 core,
because a NULL pointer is dereferenced when device configuration fails.
This patch removes the incorrect check during device configuration,
and checks for a NULL pointer before executing
VIRTCHNL2_OP_DEALLOC_VECTORS.

Fixes: 549343c25db8 ("net/idpf: support device initialization")
Fixes: 70675bcc3a57 ("net/idpf: support RSS")
Fixes: 37291a68fd78 ("net/idpf: support write back based on ITR expire")

Signed-off-by: Beilei Xing 
---
v2 change: fix typo.
v3 change: check NULL pointer before virtual channel handling.

 drivers/net/idpf/idpf_ethdev.c | 10 ++
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 20f088eb80..491ef966a7 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -372,13 +372,6 @@ idpf_dev_configure(struct rte_eth_dev *dev)
return -ENOTSUP;
}
 
-   if ((dev->data->nb_rx_queues == 1 && conf->rxmode.mq_mode != 
RTE_ETH_MQ_RX_NONE) ||
-   (dev->data->nb_rx_queues > 1 && conf->rxmode.mq_mode != 
RTE_ETH_MQ_RX_RSS)) {
-   PMD_INIT_LOG(ERR, "Multi-queue packet distribution mode %d is 
not supported",
-conf->rxmode.mq_mode);
-   return -ENOTSUP;
-   }
-
if (conf->txmode.mq_mode != RTE_ETH_MQ_TX_NONE) {
PMD_INIT_LOG(ERR, "Multi-queue TX mode %d is not supported",
 conf->txmode.mq_mode);
@@ -620,7 +613,8 @@ idpf_dev_stop(struct rte_eth_dev *dev)
 
idpf_vc_config_irq_map_unmap(vport, false);
 
-   idpf_vc_dealloc_vectors(vport);
+   if (vport->recv_vectors != NULL)
+   idpf_vc_dealloc_vectors(vport);
 
vport->stopped = 1;
 
-- 
2.26.2



RE: [PATCH v2] net/idpf: fix crash when launching l3fwd

2022-11-17 Thread Xing, Beilei



> -Original Message-
> From: Wu, Jingjing 
> Sent: Friday, November 18, 2022 2:24 PM
> To: Xing, Beilei 
> Cc: dev@dpdk.org; Peng, Yuan 
> Subject: RE: [PATCH v2] net/idpf: fix crash when launching l3fwd
> 
> > -
> > if (conf->txmode.mq_mode != RTE_ETH_MQ_TX_NONE) {
> > PMD_INIT_LOG(ERR, "Multi-queue TX mode %d is not
> supported",
> >  conf->txmode.mq_mode);
> > diff --git a/drivers/net/idpf/idpf_vchnl.c
> > b/drivers/net/idpf/idpf_vchnl.c index ac6486d4ef..88770447f8 100644
> > --- a/drivers/net/idpf/idpf_vchnl.c
> > +++ b/drivers/net/idpf/idpf_vchnl.c
> > @@ -1197,6 +1197,9 @@ idpf_vc_dealloc_vectors(struct idpf_vport *vport)
> > int err, len;
> >
> > alloc_vec = vport->recv_vectors;
> > +   if (alloc_vec == NULL)
> > +   return -EINVAL;
> > +
> Would it be better to check before idpf_vc_dealloc_vectors?
Makes sense, will update in the next version.


[Bug 1131] vmdq && kni meson build error with gcc12.2.1+debug on Fedora37

2022-11-17 Thread bugzilla
https://bugs.dpdk.org/show_bug.cgi?id=1131

Bug ID: 1131
   Summary: vmdq && kni  meson build error with gcc12.2.1+debug on
Fedora37
   Product: DPDK
   Version: 22.11
  Hardware: All
OS: All
Status: UNCONFIRMED
  Severity: critical
  Priority: Normal
 Component: core
  Assignee: dev@dpdk.org
  Reporter: chenyux.hu...@intel.com
  Target Milestone: ---

[git]
commit 04f68bb92b6fee621ddf0f0948f5565fa31a84fd (HEAD, tag: v22.11-rc3)
Author: Thomas Monjalon 
Date:   Tue Nov 15 18:21:34 2022 +0100

version: 22.11-rc3

Signed-off-by: Thomas Monjalon 

[OS version]
 Fedora Linux 37 (Server Edition)
 Linux 6.0.7-301.fc37.x86_64
 gcc (GCC) 12.2.1

[Bad commit]
 The new Fedora 37 environment hits this problem, while the old Fedora 36 did not.

[Test setup]
CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all
--buildtype=debugoptimized --default-library=static
x86_64-native-linuxapp-gcc+debug
ninja -j 10 -C x86_64-native-linuxapp-gcc/

[Error log]
[root@localhost dpdk]# ninja -j 10 -C x86_64-native-linuxapp-gcc+debug/
ninja: Entering directory `x86_64-native-linuxapp-gcc+debug/'
[3383/3390] Generating kernel/linux/kni/rte_kni with a custom command
FAILED: kernel/linux/kni/rte_kni.ko
/usr/bin/make -j4 -C /lib/modules/6.0.7-301.fc37.x86_64/build
M=/opt/dpdk/x86_64-native-linuxapp-gcc+debug/kernel/linux/kni
src=/opt/dpdk/kernel/linux/kni 'MODULE_CFLAGS= -DHAVE_ARG_TX_QUEUE -include
/opt/dpdk/config/rte_config.h -I/opt/dpdk/lib/eal/include -I/opt/dpdk/lib/kni
-I/opt/dpdk/x86_64-native-linuxapp-gcc+debug -I/opt/dpdk/kernel/linux/kni'
modules
make: Entering directory '/usr/src/kernels/6.0.7-301.fc37.x86_64'
mkdir: cannot create directory
‘/opt/dpdk/x86_64-native-linuxapp-gcc+debug/kernel/linux/kni/.tmp_49588’:
Input/output error
/bin/sh: line 1: rm: command not found
make: /bin/sh: Input/output error
make: /bin/sh: Input/output error
make: /bin/sh: Input/output error
make: /bin/sh: Input/output error
make: *** arch/x86/Makefile: Input/output error.  Stop.
make: Leaving directory '/usr/src/kernels/6.0.7-301.fc37.x86_64'
[3384/3390] Linking target examples/dpdk-skeleton
FAILED: examples/dpdk-skeleton
gcc  -o examples/dpdk-skeleton examples/dpdk-skeleton.p/skeleton_basicfwd.c.o
-Wl,--as-needed -Wl,--no-undefined -Wl,--whole-archive -Wl,--start-group
lib/librte_node.a lib/librte_graph.a lib/librte_pipeline.a lib/librte_table.a
lib/librte_pdump.a lib/librte_port.a lib/librte_fib.a lib/librte_ipsec.a
lib/librte_vhost.a lib/librte_stack.a lib/librte_security.a lib/librte_sched.a
lib/librte_reorder.a lib/librte_rib.a lib/librte_dmadev.a lib/librte_regexdev.a
lib/librte_rawdev.a lib/librte_power.a lib/librte_pcapng.a lib/librte_member.a
lib/librte_lpm.a lib/librte_latencystats.a lib/librte_jobstats.a
lib/librte_ip_frag.a lib/librte_gso.a lib/librte_gro.a lib/librte_gpudev.a
lib/librte_eventdev.a lib/librte_efd.a lib/librte_distributor.a
lib/librte_cryptodev.a lib/librte_compressdev.a lib/librte_cfgfile.a
lib/librte_bpf.a lib/librte_bitratestats.a lib/librte_bbdev.a lib/librte_acl.a
lib/librte_timer.a lib/librte_hash.a lib/librte_metrics.a lib/librte_cmdline.a
lib/librte_pci.a lib/librte_ethdev.a lib/librte_meter.a lib/librte_net.a
lib/librte_mbuf.a lib/librte_mempool.a lib/librte_rcu.a lib/librte_ring.a
lib/librte_eal.a lib/librte_telemetry.a lib/librte_kvargs.a
drivers/librte_common_cpt.a drivers/librte_common_dpaax.a
drivers/librte_common_iavf.a drivers/librte_common_idpf.a
drivers/librte_common_octeontx.a drivers/librte_bus_auxiliary.a
drivers/librte_bus_dpaa.a drivers/librte_bus_fslmc.a drivers/librte_bus_ifpga.a
drivers/librte_bus_pci.a drivers/librte_bus_vdev.a drivers/librte_bus_vmbus.a
drivers/librte_common_cnxk.a drivers/librte_common_qat.a
drivers/librte_common_sfc_efx.a drivers/librte_mempool_bucket.a
drivers/librte_mempool_cnxk.a drivers/librte_mempool_dpaa.a
drivers/librte_mempool_dpaa2.a drivers/librte_mempool_octeontx.a
drivers/librte_mempool_ring.a drivers/librte_mempool_stack.a
drivers/librte_dma_cnxk.a drivers/librte_dma_dpaa.a drivers/librte_dma_dpaa2.a
drivers/librte_dma_hisilicon.a drivers/librte_dma_idxd.a
drivers/librte_dma_ioat.a drivers/librte_dma_skeleton.a
drivers/librte_net_af_packet.a drivers/librte_net_ark.a
drivers/librte_net_atlantic.a drivers/librte_net_avp.a
drivers/librte_net_axgbe.a drivers/librte_net_bnx2x.a drivers/librte_net_bnxt.a
drivers/librte_net_bond.a drivers/librte_net_cnxk.a drivers/librte_net_cxgbe.a
drivers/librte_net_dpaa.a drivers/librte_net_dpaa2.a drivers/librte_net_e1000.a
drivers/librte_net_ena.a drivers/librte_net_enetc.a
drivers/librte_net_enetfec.a drivers/librte_net_enic.a
drivers/librte_net_failsafe.a drivers/librte_net_fm10k.a
drivers/librte_net_gve.a drivers/librte_net_hinic.a drivers/librte_net_hns3.a
drivers/librte_net_i40e.a drivers/librte_net_iavf.a drivers/librte_net_ice.a
drivers/librte_net_idpf.a drivers/librte_net_igc.a dr