Re: [PATCH 00/11] Fixes for clang 15

2022-11-21 Thread David Marchand
On Sat, Nov 19, 2022 at 1:13 AM Tyler Retzlaff
 wrote:
>
> On Fri, Nov 18, 2022 at 09:53:02AM +0100, David Marchand wrote:
> > Fedora 37 has just been released with clang 15.
> > The latter seems more picky wrt unused variable.
> >
> > Fixes have been tested in GHA with a simple patch I used in my own repo:
> > https://github.com/david-marchand/dpdk/commit/82cd57ae5490
> > https://github.com/david-marchand/dpdk/actions/runs/3495454457
> >
> > --
>
> Series-Acked-By: Tyler Retzlaff 

Thanks for the ack.
Re-adding dev@.


-- 
David Marchand



Re: [PATCH] bus/pci: fix bus info memleak during PCI scan

2022-11-21 Thread David Marchand
On Fri, Nov 18, 2022 at 2:36 PM Tomasz Zawadzki
 wrote:
>
> During pci_scan_one() for devices that were already registered
> the pci_common_set() is called to set some of the fields again.
>
> This resulted in bus_info allocation leaking, so this patch
> ensures they are always freed beforehand.
>
> Fixes: 8f4de2dba9b9 ("bus/pci: fill bus specific information")
>
> Signed-off-by: Tomasz Zawadzki 
> ---
>  drivers/bus/pci/pci_common.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/drivers/bus/pci/pci_common.c b/drivers/bus/pci/pci_common.c
> index 9901c34f4e..9a866055e8 100644
> --- a/drivers/bus/pci/pci_common.c
> +++ b/drivers/bus/pci/pci_common.c
> @@ -114,6 +114,7 @@ pci_common_set(struct rte_pci_device *dev)
> /* Otherwise, it uses the internal, canonical form. */
> dev->device.name = dev->name;
>
> +   free(dev->bus_info);
> if (asprintf(&dev->bus_info, "vendor_id=%"PRIx16", device_id=%"PRIx16,
> dev->id.vendor_id, dev->id.device_id) != -1)
> dev->device.bus_info = dev->bus_info;

Indeed, good catch.

The bus_info content is constant for a given device, there is no need
to free and reallocate.
WDYT of:

@@ -114,8 +114,9 @@ pci_common_set(struct rte_pci_device *dev)
/* Otherwise, it uses the internal, canonical form. */
dev->device.name = dev->name;

-   if (asprintf(&dev->bus_info, "vendor_id=%"PRIx16", device_id=%"PRIx16,
-   dev->id.vendor_id, dev->id.device_id) != -1)
+   if (dev->bus_info != NULL ||
+   asprintf(&dev->bus_info, "vendor_id=%"PRIx16", device_id=%"PRIx16,
+   dev->id.vendor_id, dev->id.device_id) != -1)
dev->device.bus_info = dev->bus_info;
 }


-- 
David Marchand



[PATCH] doc: add tested platforms with NVIDIA NICs

2022-11-21 Thread Raslan Darawsheh
Add tested platforms with NVIDIA NICs to the 22.11 release notes.

Signed-off-by: Raslan Darawsheh 
---
 doc/guides/rel_notes/release_22_11.rst | 155 +
 1 file changed, 155 insertions(+)

diff --git a/doc/guides/rel_notes/release_22_11.rst 
b/doc/guides/rel_notes/release_22_11.rst
index 5e091403ad..fad0bf2b40 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -635,3 +635,158 @@ Tested Platforms
This section is a comment. Do not overwrite or remove it.
Also, make sure to start the actual text at the margin.
===
+
+* Intel\ |reg| platforms with NVIDIA \ |reg| NICs combinations
+
+  * CPU:
+
+* Intel\ |reg| Xeon\ |reg| Gold 6154 CPU @ 3.00GHz
+* Intel\ |reg| Xeon\ |reg| CPU E5-2697A v4 @ 2.60GHz
+* Intel\ |reg| Xeon\ |reg| CPU E5-2697 v3 @ 2.60GHz
+* Intel\ |reg| Xeon\ |reg| CPU E5-2680 v2 @ 2.80GHz
+* Intel\ |reg| Xeon\ |reg| CPU E5-2670 0 @ 2.60GHz
+* Intel\ |reg| Xeon\ |reg| CPU E5-2650 v4 @ 2.20GHz
+* Intel\ |reg| Xeon\ |reg| CPU E5-2650 v3 @ 2.30GHz
+* Intel\ |reg| Xeon\ |reg| CPU E5-2640 @ 2.50GHz
+* Intel\ |reg| Xeon\ |reg| CPU E5-2650 0 @ 2.00GHz
+* Intel\ |reg| Xeon\ |reg| CPU E5-2620 v4 @ 2.10GHz
+
+  * OS:
+
+* Red Hat Enterprise Linux release 8.6 (Ootpa)
+* Red Hat Enterprise Linux release 8.4 (Ootpa)
+* Red Hat Enterprise Linux release 8.2 (Ootpa)
+* Red Hat Enterprise Linux Server release 7.9 (Maipo)
+* Red Hat Enterprise Linux Server release 7.8 (Maipo)
+* Red Hat Enterprise Linux Server release 7.6 (Maipo)
+* Red Hat Enterprise Linux Server release 7.5 (Maipo)
+* Red Hat Enterprise Linux Server release 7.4 (Maipo)
+* Ubuntu 22.04
+* Ubuntu 20.04
+* Ubuntu 18.04
+* SUSE Enterprise Linux 15 SP2
+
+  * OFED:
+
+* MLNX_OFED 5.8-1.0.1.1 and above
+* MLNX_OFED 5.7-1.0.2.0
+
+  * upstream kernel:
+
+* Linux 6.1.0-rc3 and above
+
+  * rdma-core:
+
+* rdma-core-43.0 and above
+
+  * NICs:
+
+* NVIDIA\ |reg| ConnectX\ |reg|-3 Pro 40G MCX354A-FCC_Ax (2x40G)
+
+  * Host interface: PCI Express 3.0 x8
+  * Device ID: 15b3:1007
+  * Firmware version: 2.42.5000
+* Red Hat Enterprise Linux release 8.4 (Ootpa)
+
+* NVIDIA\ |reg| ConnectX\ |reg|-3 Pro 40G MCX354A-FCCT (2x40G)
+
+  * Host interface: PCI Express 3.0 x8
+  * Device ID: 15b3:1007
+  * Firmware version: 2.42.5000
+
+* NVIDIA\ |reg| ConnectX\ |reg|-4 Lx 25G MCX4121A-ACAT (2x25G)
+
+  * Host interface: PCI Express 3.0 x8
+  * Device ID: 15b3:1015
+  * Firmware version: 14.32.1010 and above
+
+* NVIDIA\ |reg| ConnectX\ |reg|-4 Lx 50G MCX4131A-GCAT (1x50G)
+
+  * Host interface: PCI Express 3.0 x8
+  * Device ID: 15b3:1015
+  * Firmware version: 14.32.1010 and above
+
+* NVIDIA\ |reg| ConnectX\ |reg|-5 100G MCX516A-CCAT (2x100G)
+
+  * Host interface: PCI Express 3.0 x16
+  * Device ID: 15b3:1017
+  * Firmware version: 16.35.1012 and above
+
+* NVIDIA\ |reg| ConnectX\ |reg|-5 100G MCX556A-ECAT (2x100G)
+
+  * Host interface: PCI Express 3.0 x16
+  * Device ID: 15b3:1017
+  * Firmware version: 16.35.1012 and above
+
+* NVIDIA\ |reg| ConnectX\ |reg|-5 100G MCX556A-EDAT (2x100G)
+
+  * Host interface: PCI Express 3.0 x16
+  * Device ID: 15b3:1017
+  * Firmware version: 16.35.1012 and above
+
+* NVIDIA\ |reg| ConnectX\ |reg|-5 Ex EN 100G MCX516A-CDAT (2x100G)
+
+  * Host interface: PCI Express 4.0 x16
+  * Device ID: 15b3:1019
+  * Firmware version: 16.35.1012 and above
+
+* NVIDIA\ |reg| ConnectX\ |reg|-6 Dx EN 100G MCX623106AN-CDAT (2x100G)
+
+  * Host interface: PCI Express 4.0 x16
+  * Device ID: 15b3:101d
+  * Firmware version: 22.35.1012 and above
+
+* NVIDIA\ |reg| ConnectX\ |reg|-6 Lx EN 25G MCX631102AN-ADAT (2x25G)
+
+  * Host interface: PCI Express 4.0 x8
+  * Device ID: 15b3:101f
+  * Firmware version: 26.35.1012 and above
+
+* NVIDIA\ |reg| ConnectX\ |reg| 7 200G CX713106AE-HEA_QP1_Ax (2x200G)
+
+  * Host interface: PCI Express 5.0 x16
+  * Device ID: 15b3:1021
+  * Firmware version: 28.35.1012 and above
+
+* NVIDIA \ |reg| BlueField\ |reg| SmartNIC
+
+  * NVIDIA\ |reg| BlueField\ |reg| 2 SmartNIC MT41686 - MBF2H332A-AEEOT_A1 (2x25G)
+
+* Host interface: PCI Express 3.0 x16
+* Device ID: 15b3:a2d6
+* Firmware version: 24.35.1012 and above
+
+  * Embedded software:
+
+* Ubuntu 20.04.3
+* MLNX_OFED 5.8-1.0.1.1 and above
+* DOCA 1.5 with BlueField 3.9.3
+* DPDK application running on Arm cores
+
+* IBM Power 9 platforms with NVIDIA\ |reg| NICs combinations
+
+  * CPU:
+
+* POWER9 2.2 (pvr 004e 1202)
+
+  * OS:
+
+* Ubuntu 20.04
+
+  * NICs:
+
+* NVIDIA\ |reg| ConnectX\ |reg|-5 100G MCX556A-ECAT (2x100G)
+
+  * Host interface: PCI Express 4.0 x16
+  * Device ID: 15b3:1017
+  * Fir

[PATCH v2] doc: fix max supported packet len for virtio driver

2022-11-21 Thread liyi1
From: Yi Li 

According to the VIRTIO_MAX_RX_PKTLEN macro definition, the packet size
currently supported by the virtio driver is 9728.

Fixes: fc1f2750a3ec ("doc: programmers guide")

Signed-off-by: Yi Li 
---

v2 change: Add "Fixes:" description in commit message.

 doc/guides/nics/virtio.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/guides/nics/virtio.rst b/doc/guides/nics/virtio.rst
index aace780249..c422e7347a 100644
--- a/doc/guides/nics/virtio.rst
+++ b/doc/guides/nics/virtio.rst
@@ -43,7 +43,7 @@ Features and Limitations of virtio PMD
 In this release, the virtio PMD provides the basic functionality of packet 
reception and transmission.
 
 *   It supports merge-able buffers per packet when receiving packets and 
scattered buffer per packet
-when transmitting packets. The packet size supported is from 64 to 1518.
+when transmitting packets. The packet size supported is from 64 to 9728.
 
 *   It supports multicast packets and promiscuous mode.
 
-- 
2.38.1



Re: [EXT] Re: [PATCH v4 1/1] app/testpmd: add valid check to verify multi mempool feature

2022-11-21 Thread Ferruh Yigit
On 11/19/2022 12:00 AM, Hanumanth Reddy Pothula wrote:
> 
> 
>> -Original Message-
>> From: Ferruh Yigit 
>> Sent: Saturday, November 19, 2022 2:26 AM
>> To: Hanumanth Reddy Pothula ;
>> tho...@monjalon.net; andrew.rybche...@oktetlabs.ru; Nithin Kumar
>> Dabilpuram 
>> Cc: dev@dpdk.org; yux.ji...@intel.com; Jerin Jacob Kollanukkaran
>> ; Aman Singh ; Yuying
>> Zhang 
>> Subject: [EXT] Re: [PATCH v4 1/1] app/testpmd: add valid check to verify
>> multi mempool feature
>>
>> On 11/18/2022 2:13 PM, Hanumanth Pothula wrote:
>>> Validate ethdev parameter 'max_rx_mempools' to know whether device
>>> supports multi-mempool feature or not.
>>>
>>
>> My preference would be revert the testpmd patch [1] that adds this new
>> feature after -rc2, and add it back next release with new testpmd argument
>> and below mentioned changes in setup function.
>>
>> @Andrew, @Thomas, @Jerin, what do you think?
>>
>>
>> [1]
>> 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx queue")
>>
>>> Bugzilla ID: 1128
>>>
>>
>> Can you please add fixes line?
>>
> Ack
>>> Signed-off-by: Hanumanth Pothula 
>>
>> Please put the changelog after '---', which then git will take it as note.
>>
> Ack
>>> v4:
>>>  - updated if condition.
>>> v3:
>>>  - Simplified conditional check.
>>>  - Corrected spell, whether.
>>> v2:
>>>  - Rebased on tip of next-net/main.
>>> ---
>>>  app/test-pmd/testpmd.c | 10 --
>>>  1 file changed, 8 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
>>> 4e25f77c6a..c1b4dbd716 100644
>>> --- a/app/test-pmd/testpmd.c
>>> +++ b/app/test-pmd/testpmd.c
>>> @@ -2655,17 +2655,23 @@ rx_queue_setup(uint16_t port_id, uint16_t
>> rx_queue_id,
>>> union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
>>> struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
>>> struct rte_mempool *mpx;
>>> +   struct rte_eth_dev_info dev_info;
>>> unsigned int i, mp_n;
>>> uint32_t prev_hdrs = 0;
>>> int ret;
>>>
>>> +   ret = rte_eth_dev_info_get(port_id, &dev_info);
>>> +   if (ret != 0)
>>> +   return ret;
>>> +
>>> /* Verify Rx queue configuration is single pool and segment or
>>>  * multiple pool/segment.
>>> +* @see rte_eth_dev_info::max_rx_mempools
>>>  * @see rte_eth_rxconf::rx_mempools
>>>  * @see rte_eth_rxconf::rx_seg
>>>  */
>>> -   if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
>>> -   ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) !=
>> 0))) {
>>> +   if ((dev_info.max_rx_mempools == 0) && (rx_pkt_nb_segs <= 1 ||
>>
>> Using `dev_info.max_rx_mempools` for check means if device supports
>> multiple mempool, multiple mempool will be configured independent from
>> user configuration. But user may prefer single mempool or buffer split.
>>
> Please find my suggested logic.
> 
>> Right now only PMD support multiple mempool is 'cnxk', so this doesn't
>> impact others but I think this is not correct.
>>
>> Instead of re-using testpmd "mbuf-size" parameter (it is already used for
>> two other features, and this is the reason of the defect) it would be better
>> to have an explicit parameter for multiple mempool feature.
>>
>>
>>> +   ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) ==
>> 0))) {
>>> /* Single pool/segment configuration */
>>> rx_conf->rx_seg = NULL;
>>> rx_conf->rx_nseg = 0;
>>
>>
>> Logic seems correct, although I have not tested.
>>
>> Current functions tries to detect the requested feature and setup queues
>> accordingly, features are:
>> - single mempool
>> - packet split (to multiple mempool)
>> - multiple mempool (various size)
>>
>> And the logic in the function is:
>> ``
>> if ( (! multiple mempool) && (! packet split))
>>  setup for single mempool
>>  exit
>>
>> if (packet split)
>>  setup packet split
>> else
>>  setup multiple mempool
>> ``
>>
>> What do you think to
>> a) simplify logic by making single mempool as fallback and last option,
>> instead of detecting non existence of other configs
>> b) have explicit check for multiple mempool
>>
>> Like:
>>
>> ``
>> if (packet split)
>>  setup packet split
>>  exit
>> else if (multiple mempool)
>>  setup multiple mempool
>>  exit
>>
>> setup for single mempool
>> ``
>>
>> I think this both solves the defect and simplifies the code.
> 
> Yes Ferruh your suggested logic simplifies the code.
> 
> In the lines of your proposed logic,  below if conditions might work fine for 
> all features(buffer-split/multi-mempool) supported by PMD and user preference,
> 
> if (rx_pkt_nb_segs > 1 ||
> rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
>   /*multi-segment (buffer split)*/
> } else if (mbuf_data_size_n > 1 && dev_info.max_rx_mempools > 1) {
>   /*multi-mempool*/
> } else {
>   /* single pool and segment */
> } 
> 

`mbuf_dat

Re: [PATCH 00/11] Fixes for clang 15

2022-11-21 Thread David Marchand
On Fri, Nov 18, 2022 at 9:55 AM David Marchand
 wrote:
>
> Fedora 37 has just been released with clang 15.
> The latter seems more picky wrt unused variable.
>
> Fixes have been tested in GHA with a simple patch I used in my own repo:
> https://github.com/david-marchand/dpdk/commit/82cd57ae5490
> https://github.com/david-marchand/dpdk/actions/runs/3495454457
>
> --
> David Marchand
>
> David Marchand (11):
>   service: fix build with clang 15
>   vhost: fix build with clang 15
>   bus/dpaa: fix build with clang 15
>   net/atlantic: fix build with clang 15
>   net/dpaa2: fix build with clang 15
>   net/ice: fix build with clang 15
>   app/testpmd: fix build with clang 15
>   app/testpmd: fix build with clang 15 in flow code
>   test/efd: fix build with clang 15
>   test/member: fix build with clang 15
>   test/event: fix build with clang 15
>
>  app/test-pmd/config.c   | 14 --
>  app/test-pmd/noisy_vnf.c|  2 +-
>  app/test/test_efd_perf.c|  1 -
>  app/test/test_event_timer_adapter.c |  2 --
>  app/test/test_member.c  |  1 -
>  app/test/test_member_perf.c |  1 -
>  drivers/bus/dpaa/base/qbman/bman.h  |  4 +---
>  drivers/net/atlantic/atl_rxtx.c |  5 ++---
>  drivers/net/dpaa2/dpaa2_rxtx.c  |  4 +---
>  drivers/net/ice/ice_ddp_package.c   |  3 ---
>  lib/eal/common/rte_service.c|  2 --
>  lib/vhost/virtio_net.c  |  2 --
>  12 files changed, 5 insertions(+), 36 deletions(-)

Series applied for rc4, thanks.


-- 
David Marchand



Re: [PATCH] ring: build with global includes

2022-11-21 Thread Bruce Richardson
On Fri, Nov 18, 2022 at 03:22:07PM -0800, Tyler Retzlaff wrote:
> ring has no dependencies and should be able to be built standalone but
> cannot be since it cannot find rte_config.h. this change directs meson
> to include global_inc paths just like is done with other libraries
> e.g. telemetry.
> 
> Tyler Retzlaff (1):
>   ring: build with global includes
> 
>  lib/ring/meson.build | 2 ++
>  1 file changed, 2 insertions(+)
>

I am a little confused by this change - how do you mean built-standalone?
The ring library depends upon EAL for memory management, does it not? Also,
no DPDK library can be built on its own without the rest of the top-level
build infrastructure, which will ensure that the global-include folders are
on the include path? 

In terms of other libs, e.g. telemetry, the only reason those need the
global includes added to their include path explicitly is because those are
built ahead of EAL. Anything that depends on EAL - including ring - will
have the global includes available.
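
For reference, libraries built ahead of EAL do this with a one-line stanza
in their meson.build, roughly like the sketch below (illustrative of the
convention only, not the exact contents of lib/telemetry/meson.build):

    # sketch: make rte_config.h and the other global headers visible to a
    # library that is built before EAL
    includes = [global_inc]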

Can you explain a little more about the use-case you are looking at here,
and how you are attempting to build ring?

/Bruce 


RE: [EXT] Re: [PATCH v4 1/1] app/testpmd: add valid check to verify multi mempool feature

2022-11-21 Thread Hanumanth Reddy Pothula


> -Original Message-
> From: Ferruh Yigit 
> Sent: Monday, November 21, 2022 3:38 PM
> To: Hanumanth Reddy Pothula ;
> tho...@monjalon.net; andrew.rybche...@oktetlabs.ru; Nithin Kumar
> Dabilpuram 
> Cc: dev@dpdk.org; yux.ji...@intel.com; Jerin Jacob Kollanukkaran
> ; Aman Singh ; Yuying
> Zhang 
> Subject: Re: [EXT] Re: [PATCH v4 1/1] app/testpmd: add valid check to
> verify multi mempool feature
> 
> On 11/19/2022 12:00 AM, Hanumanth Reddy Pothula wrote:
> >
> >
> >> -Original Message-
> >> From: Ferruh Yigit 
> >> Sent: Saturday, November 19, 2022 2:26 AM
> >> To: Hanumanth Reddy Pothula ;
> >> tho...@monjalon.net; andrew.rybche...@oktetlabs.ru; Nithin Kumar
> >> Dabilpuram 
> >> Cc: dev@dpdk.org; yux.ji...@intel.com; Jerin Jacob Kollanukkaran
> >> ; Aman Singh ;
> Yuying
> >> Zhang 
> >> Subject: [EXT] Re: [PATCH v4 1/1] app/testpmd: add valid check to
> >> verify multi mempool feature
> >>
> >> On 11/18/2022 2:13 PM, Hanumanth Pothula wrote:
> >>> Validate ethdev parameter 'max_rx_mempools' to know whether
> device
> >>> supports multi-mempool feature or not.
> >>>
> >>
> >> My preference would be revert the testpmd patch [1] that adds this
> >> new feature after -rc2, and add it back next release with new testpmd
> >> argument and below mentioned changes in setup function.
> >>
> >> @Andrew, @Thomas, @Jerin, what do you think?
> >>
> >>
> >> [1]
> >> 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx
> >> queue")
> >>
> >>> Bugzilla ID: 1128
> >>>
> >>
> >> Can you please add fixes line?
> >>
> > Ack
> >>> Signed-off-by: Hanumanth Pothula 
> >>
> >> Please put the changelog after '---', which then git will take it as note.
> >>
> > Ack
> >>> v4:
> >>>  - updated if condition.
> >>> v3:
> >>>  - Simplified conditional check.
> >>>  - Corrected spell, whether.
> >>> v2:
> >>>  - Rebased on tip of next-net/main.
> >>> ---
> >>>  app/test-pmd/testpmd.c | 10 --
> >>>  1 file changed, 8 insertions(+), 2 deletions(-)
> >>>
> >>> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
> >>> 4e25f77c6a..c1b4dbd716 100644
> >>> --- a/app/test-pmd/testpmd.c
> >>> +++ b/app/test-pmd/testpmd.c
> >>> @@ -2655,17 +2655,23 @@ rx_queue_setup(uint16_t port_id,
> uint16_t
> >> rx_queue_id,
> >>>   union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
> >>>   struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
> >>>   struct rte_mempool *mpx;
> >>> + struct rte_eth_dev_info dev_info;
> >>>   unsigned int i, mp_n;
> >>>   uint32_t prev_hdrs = 0;
> >>>   int ret;
> >>>
> >>> + ret = rte_eth_dev_info_get(port_id, &dev_info);
> >>> + if (ret != 0)
> >>> + return ret;
> >>> +
> >>>   /* Verify Rx queue configuration is single pool and segment or
> >>>* multiple pool/segment.
> >>> +  * @see rte_eth_dev_info::max_rx_mempools
> >>>* @see rte_eth_rxconf::rx_mempools
> >>>* @see rte_eth_rxconf::rx_seg
> >>>*/
> >>> - if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
> >>> - ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) !=
> >> 0))) {
> >>> + if ((dev_info.max_rx_mempools == 0) && (rx_pkt_nb_segs <= 1 ||
> >>
> >> Using `dev_info.max_rx_mempools` for check means if device supports
> >> multiple mempool, multiple mempool will be configured independent
> >> from user configuration. But user may prefer single mempool or buffer
> split.
> >>
> > Please find my suggested logic.
> >
> >> Right now only PMD support multiple mempool is 'cnxk', so this
> >> doesn't impact others but I think this is not correct.
> >>
> >> Instead of re-using testpmd "mbuf-size" parameter (it is already used
> >> for two other features, and this is the reason of the defect) it
> >> would be better to have an explicit parameter for multiple mempool
> feature.
> >>
> >>
> >>> + ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) ==
> >> 0))) {
> >>>   /* Single pool/segment configuration */
> >>>   rx_conf->rx_seg = NULL;
> >>>   rx_conf->rx_nseg = 0;
> >>
> >>
> >> Logic seems correct, although I have not tested.
> >>
> >> Current functions tries to detect the requested feature and setup
> >> queues accordingly, features are:
> >> - single mempool
> >> - packet split (to multiple mempool)
> >> - multiple mempool (various size)
> >>
> >> And the logic in the function is:
> >> ``
> >> if ( (! multiple mempool) && (! packet split))
> >>setup for single mempool
> >>exit
> >>
> >> if (packet split)
> >>setup packet split
> >> else
> >>setup multiple mempool
> >> ``
> >>
> >> What do you think to
> >> a) simplify logic by making single mempool as fallback and last
> >> option, instead of detecting non existence of other configs
> >> b) have explicit check for multiple mempool
> >>
> >> Like:
> >>
> >> ``
> >> if (packet split)
> >>setup packet split
> >>exit
> >> else if (multiple mempool)
> >>setup multiple mempool

Re: [EXT] [dpdk-dev v6] doc: support IPsec Multi-buffer lib v1.3

2022-11-21 Thread Zhang, Fan

Hi Akhil,


From 22.11 onwards, the ipsec-mb PMDs will be working on two different
libraries that may or may not behave the same.


We also have two different contributor groups working on adding features
on top of each library, and again one may or may not be compatible with the other.


I believe some wording is necessary to distinguish what each library
supports, as well as to credit the addition of certain features for a
given platform.



On 11/21/2022 6:57 AM, Akhil Goyal wrote:

diff --git a/doc/guides/rel_notes/release_22_11.rst
b/doc/guides/rel_notes/release_22_11.rst
index 4e55b543ef..b98b603fe7 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -240,7 +240,16 @@ New Features

  * **Updated ipsec_mb crypto driver.**

-  Added SNOW-3G and ZUC support for ARM platform.
+  * Added ARM64 port of ipsec-mb library support and SNOW-3G and ZUC
+support for ARM platform.

You need not update the above line.
* Added SNOW-3G and ZUC support for ARM platform.
Should be good enough.

+  * Added Intel IPsec MB v1.3 library support for x86 platform,
+see the following guides for more details:
+:doc:`../cryptodevs/aesni_gcm`
+:doc:`../cryptodevs/aesni_mb`
+:doc:`../cryptodevs/chacha20_poly1305`
+:doc:`../cryptodevs/kasumi`
+:doc:`../cryptodevs/snow3g`
+:doc:`../cryptodevs/zuc`

I believe adding reference for each guide is not needed.

* Added Intel IPsec MB v1.3 library support for x86 platform.
Added details in the guides for all the drivers supported by ipsec_mb.


The guideline Pablo/Kai added here only applies to x86, as:

a. ARM does not support algorithms other than SNOW3G and ZUC.

b. The performance guideline may not apply to ARM.

Regards,

Fan



[PATCH] bus/pci: fix leak with multiple bus scan

2022-11-21 Thread David Marchand
The addition of the bus_info field did not account for the fact that the
PCI bus can be scanned multiple times (like for device hotplug and other
uses in SPDK).
Indeed, during pci_scan_one() for devices that were already registered,
the pci_common_set() overwrites the bus_info field, leaking the
previously allocated memory.

Since the bus_info content is fixed for a PCI device, we can simply skip
allocation if dev->bus_info is already set.

Fixes: 8f4de2dba9b9 ("bus/pci: fill bus specific information")

Reported-by: Tomasz Zawadzki 
Signed-off-by: David Marchand 
---
 drivers/bus/pci/pci_common.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/bus/pci/pci_common.c b/drivers/bus/pci/pci_common.c
index 9901c34f4e..bc3a7f39fe 100644
--- a/drivers/bus/pci/pci_common.c
+++ b/drivers/bus/pci/pci_common.c
@@ -114,8 +114,9 @@ pci_common_set(struct rte_pci_device *dev)
/* Otherwise, it uses the internal, canonical form. */
dev->device.name = dev->name;
 
-   if (asprintf(&dev->bus_info, "vendor_id=%"PRIx16", device_id=%"PRIx16,
-   dev->id.vendor_id, dev->id.device_id) != -1)
+   if (dev->bus_info != NULL ||
+   asprintf(&dev->bus_info, "vendor_id=%"PRIx16", device_id=%"PRIx16,
+   dev->id.vendor_id, dev->id.device_id) != -1)
dev->device.bus_info = dev->bus_info;
 }
 
-- 
2.38.1



RE: [EXT] [dpdk-dev v6] doc: support IPsec Multi-buffer lib v1.3

2022-11-21 Thread Akhil Goyal
Hi Fan,
> Hi Akhil,
> 
> 
>  From 22.11 the ipsec-mb PMDs will be working on two different libraries
> that may or may not work the same.
> 
> We also have two different contributor groups working on adding features
> on top of each library, again one may or may not be compatible to another.
> 
> I believe there should be some words necessary to distinguish each
> library support as well as the credits for adding certain features for
> one platform.

Ok, but the release notes are not the correct place to mention that.
It should be part of the respective driver documentation.

> 
> 
> On 11/21/2022 6:57 AM, Akhil Goyal wrote:
> >> diff --git a/doc/guides/rel_notes/release_22_11.rst
> >> b/doc/guides/rel_notes/release_22_11.rst
> >> index 4e55b543ef..b98b603fe7 100644
> >> --- a/doc/guides/rel_notes/release_22_11.rst
> >> +++ b/doc/guides/rel_notes/release_22_11.rst
> >> @@ -240,7 +240,16 @@ New Features
> >>
> >>   * **Updated ipsec_mb crypto driver.**
> >>
> >> -  Added SNOW-3G and ZUC support for ARM platform.
> >> +  * Added ARM64 port of ipsec-mb library support and SNOW-3G and ZUC
> >> +support for ARM platform.
> > You need not update the above line.
> > * Added SNOW-3G and ZUC support for ARM platform.
> > Should be good enough.
> >> +  * Added Intel IPsec MB v1.3 library support for x86 platform,
> >> +see the following guides for more details:
> >> +:doc:`../cryptodevs/aesni_gcm`
> >> +:doc:`../cryptodevs/aesni_mb`
> >> +:doc:`../cryptodevs/chacha20_poly1305`
> >> +:doc:`../cryptodevs/kasumi`
> >> +:doc:`../cryptodevs/snow3g`
> >> +:doc:`../cryptodevs/zuc`
> > I believe adding reference for each guide is not needed.
> >
> > * Added Intel IPsec MB v1.3 library support for x86 platform.
> > Added details in the guides for all the drivers supported by ipsec_mb.
> 
> What the guideline Pablo/Kai added here only applies for x86 as
> 
> a. ARM does not support algorithms other than SNOW3G and ZUC.
This should be distinguished in the .rst file.
For the release notes, the above is sufficient.

> 
> b. The performance guideline may not apply to ARM.
Again, it should be part of driver documentation and not release notes.

And each of the release note bullets that I suggested mentions the
platform on which the support is added.




[PATCH v2 0/4] add support for self monitoring

2022-11-21 Thread Tomasz Duszynski
This series adds self-monitoring support, i.e. it allows configuring and
reading performance measurement unit (PMU) counters at runtime without
using the perf utility. This has certain advantages when the application
runs on isolated cores with the nohz_full kernel parameter.

Events can be read directly using rte_pmu_read() or via the dedicated
tracepoint rte_eal_trace_pmu_read(). The latter will cause events to be
stored inside a CTF file.

By design, all enabled events are grouped together and the same group
is attached to lcores that use the self-monitoring functionality.

Events are enabled by name; the names need to be read from the standard
location under sysfs, i.e.

/sys/bus/event_source/devices/PMU/events

where PMU is a core PMU, i.e. one measuring CPU events. As of today,
raw events are not supported.
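
As an illustration, a minimal usage sketch is shown below. The header
name, the event name, and the assumption that rte_pmu_add_event() returns
a non-negative index on success (errors reported as a negative value) are
taken from or inferred from the patches and are illustrative only:

    #include <inttypes.h>
    #include <stdio.h>

    #include <rte_pmu.h>

    /* Count CPU cycles spent in a code section on the calling lcore. */
    static void
    profile_section(void)
    {
        /* Event names come from /sys/bus/event_source/devices/<PMU>/events. */
        int idx = rte_pmu_add_event("cpu_cycles");
        uint64_t start, cycles;

        if (idx < 0)
            return;

        start = rte_pmu_read(idx);
        /* ... code under measurement ... */
        cycles = rte_pmu_read(idx) - start;

        printf("cycles spent: %" PRIu64 "\n", cycles);
    }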

v2:
- fix problems reported by test build infra

Tomasz Duszynski (4):
  eal: add generic support for reading PMU events
  eal/arm: support reading ARM PMU events in runtime
  eal/x86: support reading Intel PMU events in runtime
  eal: add PMU support to tracing library

 app/test/meson.build |   1 +
 app/test/test_pmu.c  |  47 ++
 app/test/test_trace_perf.c   |   4 +
 doc/guides/prog_guide/profile_app.rst|  13 +
 doc/guides/prog_guide/trace_lib.rst  |  32 ++
 lib/eal/arm/include/meson.build  |   1 +
 lib/eal/arm/include/rte_pmu_pmc.h|  39 ++
 lib/eal/arm/meson.build  |   4 +
 lib/eal/arm/rte_pmu.c| 103 +
 lib/eal/common/eal_common_trace_points.c |   3 +
 lib/eal/common/meson.build   |   3 +
 lib/eal/common/pmu_private.h |  41 ++
 lib/eal/common/rte_pmu.c | 519 +++
 lib/eal/include/meson.build  |   1 +
 lib/eal/include/rte_eal_trace.h  |  11 +
 lib/eal/include/rte_pmu.h| 207 +
 lib/eal/linux/eal.c  |   4 +
 lib/eal/version.map  |   6 +
 lib/eal/x86/include/meson.build  |   1 +
 lib/eal/x86/include/rte_pmu_pmc.h|  33 ++
 20 files changed, 1073 insertions(+)
 create mode 100644 app/test/test_pmu.c
 create mode 100644 lib/eal/arm/include/rte_pmu_pmc.h
 create mode 100644 lib/eal/arm/rte_pmu.c
 create mode 100644 lib/eal/common/pmu_private.h
 create mode 100644 lib/eal/common/rte_pmu.c
 create mode 100644 lib/eal/include/rte_pmu.h
 create mode 100644 lib/eal/x86/include/rte_pmu_pmc.h

--
2.25.1



[PATCH v2 1/4] eal: add generic support for reading PMU events

2022-11-21 Thread Tomasz Duszynski
Add support for programming PMU counters and reading their values
at runtime, bypassing the kernel completely.

This is especially useful in cases where CPU cores are isolated
(nohz_full), i.e. run dedicated tasks. In such cases one cannot use the
standard perf utility without sacrificing latency and performance.

Signed-off-by: Tomasz Duszynski 
---
 app/test/meson.build  |   1 +
 app/test/test_pmu.c   |  41 +++
 doc/guides/prog_guide/profile_app.rst |   8 +
 lib/eal/common/meson.build|   3 +
 lib/eal/common/pmu_private.h  |  41 +++
 lib/eal/common/rte_pmu.c  | 456 ++
 lib/eal/include/meson.build   |   1 +
 lib/eal/include/rte_pmu.h | 204 
 lib/eal/linux/eal.c   |   4 +
 lib/eal/version.map   |   5 +
 10 files changed, 764 insertions(+)
 create mode 100644 app/test/test_pmu.c
 create mode 100644 lib/eal/common/pmu_private.h
 create mode 100644 lib/eal/common/rte_pmu.c
 create mode 100644 lib/eal/include/rte_pmu.h

diff --git a/app/test/meson.build b/app/test/meson.build
index f34d19e3c3..93b3300309 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -143,6 +143,7 @@ test_sources = files(
 'test_timer_racecond.c',
 'test_timer_secondary.c',
 'test_ticketlock.c',
+'test_pmu.c',
 'test_trace.c',
 'test_trace_register.c',
 'test_trace_perf.c',
diff --git a/app/test/test_pmu.c b/app/test/test_pmu.c
new file mode 100644
index 00..fd331af9ee
--- /dev/null
+++ b/app/test/test_pmu.c
@@ -0,0 +1,41 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2022 Marvell International Ltd.
+ */
+
+#include 
+
+#include "test.h"
+
+static int
+test_pmu_read(void)
+{
+   uint64_t val = 0;
+   int tries = 10;
+   int event = -1;
+
+   while (tries--)
+   val += rte_pmu_read(event);
+
+   if (val == 0)
+   return TEST_FAILED;
+
+   return TEST_SUCCESS;
+}
+
+static struct unit_test_suite pmu_tests = {
+   .suite_name = "pmu autotest",
+   .setup = NULL,
+   .teardown = NULL,
+   .unit_test_cases = {
+   TEST_CASE(test_pmu_read),
+   TEST_CASES_END()
+   }
+};
+
+static int
+test_pmu(void)
+{
+   return unit_test_suite_runner(&pmu_tests);
+}
+
+REGISTER_TEST_COMMAND(pmu_autotest, test_pmu);
diff --git a/doc/guides/prog_guide/profile_app.rst 
b/doc/guides/prog_guide/profile_app.rst
index bd6700ef85..8fc1b20cab 100644
--- a/doc/guides/prog_guide/profile_app.rst
+++ b/doc/guides/prog_guide/profile_app.rst
@@ -7,6 +7,14 @@ Profile Your Application
 The following sections describe methods of profiling DPDK applications on
 different architectures.
 
+Performance counter based profiling
+---
+
+Majority of architectures support some sort hardware measurement unit which 
provides a set of
+programmable counters that monitor specific events. There are different tools 
which can gather
+that information, perf being an example here. Though in some scenarios, eg. 
when CPU cores are
+isolated (nohz_full) and run dedicated tasks, using perf is less than ideal. 
In such cases one can
+read specific events directly from application via ``rte_pmu_read()``.
 
 Profiling on x86
 
diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build
index 917758cc65..d6d05b56f3 100644
--- a/lib/eal/common/meson.build
+++ b/lib/eal/common/meson.build
@@ -38,6 +38,9 @@ sources += files(
 'rte_service.c',
 'rte_version.c',
 )
+if is_linux
+sources += files('rte_pmu.c')
+endif
 if is_linux or is_windows
 sources += files('eal_common_dynmem.c')
 endif
diff --git a/lib/eal/common/pmu_private.h b/lib/eal/common/pmu_private.h
new file mode 100644
index 00..cade4245e6
--- /dev/null
+++ b/lib/eal/common/pmu_private.h
@@ -0,0 +1,41 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Marvell
+ */
+
+#ifndef _PMU_PRIVATE_H_
+#define _PMU_PRIVATE_H_
+
+/**
+ * Architecture specific PMU init callback.
+ *
+ * @return
+ *   0 in case of success, negative value otherwise.
+ */
+int
+pmu_arch_init(void);
+
+/**
+ * Architecture specific PMU cleanup callback.
+ */
+void
+pmu_arch_fini(void);
+
+/**
+ * Apply architecture specific settings to config before passing it to syscall.
+ */
+void
+pmu_arch_fixup_config(uint64_t config[3]);
+
+/**
+ * Initialize PMU tracing internals.
+ */
+void
+eal_pmu_init(void);
+
+/**
+ * Cleanup PMU internals.
+ */
+void
+eal_pmu_fini(void);
+
+#endif /* _PMU_PRIVATE_H_ */
diff --git a/lib/eal/common/rte_pmu.c b/lib/eal/common/rte_pmu.c
new file mode 100644
index 00..dc169fb2cf
--- /dev/null
+++ b/lib/eal/common/rte_pmu.c
@@ -0,0 +1,456 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2022 Marvell International Ltd.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#inclu

[PATCH v2 2/4] eal/arm: support reading ARM PMU events in runtime

2022-11-21 Thread Tomasz Duszynski
Add support for reading ARM PMU events in runtime.

Signed-off-by: Tomasz Duszynski 
---
 app/test/test_pmu.c   |   4 ++
 lib/eal/arm/include/meson.build   |   1 +
 lib/eal/arm/include/rte_pmu_pmc.h |  39 +++
 lib/eal/arm/meson.build   |   4 ++
 lib/eal/arm/rte_pmu.c | 103 ++
 lib/eal/include/rte_pmu.h |   3 +
 6 files changed, 154 insertions(+)
 create mode 100644 lib/eal/arm/include/rte_pmu_pmc.h
 create mode 100644 lib/eal/arm/rte_pmu.c

diff --git a/app/test/test_pmu.c b/app/test/test_pmu.c
index fd331af9ee..f94866dff9 100644
--- a/app/test/test_pmu.c
+++ b/app/test/test_pmu.c
@@ -13,6 +13,10 @@ test_pmu_read(void)
int tries = 10;
int event = -1;
 
+#if defined(RTE_ARCH_ARM64)
+   event = rte_pmu_add_event("cpu_cycles");
+#endif
+
while (tries--)
val += rte_pmu_read(event);
 
diff --git a/lib/eal/arm/include/meson.build b/lib/eal/arm/include/meson.build
index 657bf58569..ab13b0220a 100644
--- a/lib/eal/arm/include/meson.build
+++ b/lib/eal/arm/include/meson.build
@@ -20,6 +20,7 @@ arch_headers = files(
 'rte_pause_32.h',
 'rte_pause_64.h',
 'rte_pause.h',
+'rte_pmu_pmc.h',
 'rte_power_intrinsics.h',
 'rte_prefetch_32.h',
 'rte_prefetch_64.h',
diff --git a/lib/eal/arm/include/rte_pmu_pmc.h 
b/lib/eal/arm/include/rte_pmu_pmc.h
new file mode 100644
index 00..10e2984813
--- /dev/null
+++ b/lib/eal/arm/include/rte_pmu_pmc.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Marvell.
+ */
+
+#ifndef _RTE_PMU_PMC_ARM_H_
+#define _RTE_PMU_PMC_ARM_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include 
+
+static __rte_always_inline uint64_t
+rte_pmu_pmc_read(int index)
+{
+   uint64_t val;
+
+   if (index == 31) {
+   /* CPU Cycles (0x11) must be read via pmccntr_el0 */
+   asm volatile("mrs %0, pmccntr_el0" : "=r" (val));
+   } else {
+   asm volatile(
+   "msr pmselr_el0, %x0\n"
+   "mrs %0, pmxevcntr_el0\n"
+   : "=r" (val)
+   : "rZ" (index)
+   );
+   }
+
+   return val;
+}
+#define rte_pmu_pmc_read rte_pmu_pmc_read
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_PMU_PMC_ARM_H_ */
diff --git a/lib/eal/arm/meson.build b/lib/eal/arm/meson.build
index dca1106aae..0c5575b197 100644
--- a/lib/eal/arm/meson.build
+++ b/lib/eal/arm/meson.build
@@ -9,3 +9,7 @@ sources += files(
 'rte_hypervisor.c',
 'rte_power_intrinsics.c',
 )
+
+if is_linux
+sources += files('rte_pmu.c')
+endif
diff --git a/lib/eal/arm/rte_pmu.c b/lib/eal/arm/rte_pmu.c
new file mode 100644
index 00..6c50a1b3c4
--- /dev/null
+++ b/lib/eal/arm/rte_pmu.c
@@ -0,0 +1,103 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2022 Marvell International Ltd.
+ */
+
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+
+#include "pmu_private.h"
+
+#define PERF_USER_ACCESS_PATH "/proc/sys/kernel/perf_user_access"
+
+static int restore_uaccess;
+
+static int
+read_attr_int(const char *path, int *val)
+{
+   char buf[BUFSIZ];
+   int ret, fd;
+
+   fd = open(path, O_RDONLY);
+   if (fd == -1)
+   return -errno;
+
+   ret = read(fd, buf, sizeof(buf));
+   if (ret == -1) {
+   close(fd);
+
+   return -errno;
+   }
+
+   *val = strtol(buf, NULL, 10);
+   close(fd);
+
+   return 0;
+}
+
+static int
+write_attr_int(const char *path, int val)
+{
+   char buf[BUFSIZ];
+   int num, ret, fd;
+
+   fd = open(path, O_WRONLY);
+   if (fd == -1)
+   return -errno;
+
+   num = snprintf(buf, sizeof(buf), "%d", val);
+   ret = write(fd, buf, num);
+   if (ret == -1) {
+   close(fd);
+
+   return -errno;
+   }
+
+   close(fd);
+
+   return 0;
+}
+
+int
+pmu_arch_init(void)
+{
+   int ret;
+
+   ret = read_attr_int(PERF_USER_ACCESS_PATH, &restore_uaccess);
+   if (ret) {
+   RTE_LOG(ERR, EAL, "failed to read %s\n", PERF_USER_ACCESS_PATH);
+
+   return ret;
+   }
+
+   ret = write_attr_int(PERF_USER_ACCESS_PATH, 1);
+   if (ret) {
+   RTE_LOG(ERR, EAL, "failed to enable perf user access\n"
+   "try enabling manually 'echo 1 > %s'\n",
+   PERF_USER_ACCESS_PATH);
+
+   return ret;
+   }
+
+   return 0;
+}
+
+void
+pmu_arch_fini(void)
+{
+   write_attr_int(PERF_USER_ACCESS_PATH, restore_uaccess);
+}
+
+void
+pmu_arch_fixup_config(uint64_t config[3])
+{
+   /* select 64 bit counters */
+   config[1] |= RTE_BIT64(0);
+   /* enable userspace access */
+   config[1] |= RTE_BIT64(1);
+}
diff --git a/lib/eal/include/rte_pmu.h b/lib/eal/include/rte_pmu.h
index 5955c22779..67b

[PATCH v2 3/4] eal/x86: support reading Intel PMU events in runtime

2022-11-21 Thread Tomasz Duszynski
Add support for reading Intel PMU events in runtime.

Signed-off-by: Tomasz Duszynski 
---
 app/test/test_pmu.c   |  2 ++
 lib/eal/include/rte_pmu.h |  2 +-
 lib/eal/x86/include/meson.build   |  1 +
 lib/eal/x86/include/rte_pmu_pmc.h | 33 +++
 4 files changed, 37 insertions(+), 1 deletion(-)
 create mode 100644 lib/eal/x86/include/rte_pmu_pmc.h

diff --git a/app/test/test_pmu.c b/app/test/test_pmu.c
index f94866dff9..016204c083 100644
--- a/app/test/test_pmu.c
+++ b/app/test/test_pmu.c
@@ -15,6 +15,8 @@ test_pmu_read(void)
 
 #if defined(RTE_ARCH_ARM64)
event = rte_pmu_add_event("cpu_cycles");
+#elif defined(RTE_ARCH_X86_64)
+   event = rte_pmu_add_event("cpu-cycles");
 #endif
 
while (tries--)
diff --git a/lib/eal/include/rte_pmu.h b/lib/eal/include/rte_pmu.h
index 67b1194a2a..bbe12d100d 100644
--- a/lib/eal/include/rte_pmu.h
+++ b/lib/eal/include/rte_pmu.h
@@ -20,7 +20,7 @@ extern "C" {
 #include 
 #include 
 #include 
-#if defined(RTE_ARCH_ARM64)
+#if defined(RTE_ARCH_ARM64) || defined(RTE_ARCH_X86_64)
 #include 
 #endif
 
diff --git a/lib/eal/x86/include/meson.build b/lib/eal/x86/include/meson.build
index 52d2f8e969..03d286ed25 100644
--- a/lib/eal/x86/include/meson.build
+++ b/lib/eal/x86/include/meson.build
@@ -9,6 +9,7 @@ arch_headers = files(
 'rte_io.h',
 'rte_memcpy.h',
 'rte_pause.h',
+'rte_pmu_pmc.h',
 'rte_power_intrinsics.h',
 'rte_prefetch.h',
 'rte_rtm.h',
diff --git a/lib/eal/x86/include/rte_pmu_pmc.h 
b/lib/eal/x86/include/rte_pmu_pmc.h
new file mode 100644
index 00..a2cd849fb1
--- /dev/null
+++ b/lib/eal/x86/include/rte_pmu_pmc.h
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Marvell.
+ */
+
+#ifndef _RTE_PMU_PMC_X86_H_
+#define _RTE_PMU_PMC_X86_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include 
+
+static __rte_always_inline uint64_t
+rte_pmu_pmc_read(int index)
+{
+   uint32_t high, low;
+
+   asm volatile(
+   "rdpmc\n"
+   : "=a" (low), "=d" (high)
+   : "c" (index)
+   );
+
+   return ((uint64_t)high << 32) | (uint64_t)low;
+}
+#define rte_pmu_pmc_read rte_pmu_pmc_read
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_PMU_PMC_X86_H_ */
-- 
2.25.1



[PATCH v2 4/4] eal: add PMU support to tracing library

2022-11-21 Thread Tomasz Duszynski
In order to profile an app one needs to store a significant amount of
samples somewhere for analysis later on. Since the trace library supports
storing data in the CTF format, let's take advantage of that and add a
dedicated PMU tracepoint.

Signed-off-by: Tomasz Duszynski 
---
 app/test/test_trace_perf.c   |  4 ++
 doc/guides/prog_guide/profile_app.rst|  5 ++
 doc/guides/prog_guide/trace_lib.rst  | 32 
 lib/eal/common/eal_common_trace_points.c |  3 ++
 lib/eal/common/rte_pmu.c | 63 
 lib/eal/include/rte_eal_trace.h  | 11 +
 lib/eal/version.map  |  1 +
 7 files changed, 119 insertions(+)

diff --git a/app/test/test_trace_perf.c b/app/test/test_trace_perf.c
index 46ae7d8074..4851b6852f 100644
--- a/app/test/test_trace_perf.c
+++ b/app/test/test_trace_perf.c
@@ -114,6 +114,8 @@ worker_fn_##func(void *arg) \
 #define GENERIC_DOUBLE rte_eal_trace_generic_double(3.6)
 #define GENERIC_STR rte_eal_trace_generic_str("hello world")
 #define VOID_FP app_dpdk_test_fp()
+/* 0 corresponds first event passed via --trace= */
+#define READ_PMU rte_eal_trace_pmu_read(0)
 
 WORKER_DEFINE(GENERIC_VOID)
 WORKER_DEFINE(GENERIC_U64)
@@ -122,6 +124,7 @@ WORKER_DEFINE(GENERIC_FLOAT)
 WORKER_DEFINE(GENERIC_DOUBLE)
 WORKER_DEFINE(GENERIC_STR)
 WORKER_DEFINE(VOID_FP)
+WORKER_DEFINE(READ_PMU)
 
 static void
 run_test(const char *str, lcore_function_t f, struct test_data *data, size_t 
sz)
@@ -174,6 +177,7 @@ test_trace_perf(void)
run_test("double", worker_fn_GENERIC_DOUBLE, data, sz);
run_test("string", worker_fn_GENERIC_STR, data, sz);
run_test("void_fp", worker_fn_VOID_FP, data, sz);
+   run_test("read_pmu", worker_fn_READ_PMU, data, sz);
 
rte_free(data);
return TEST_SUCCESS;
diff --git a/doc/guides/prog_guide/profile_app.rst 
b/doc/guides/prog_guide/profile_app.rst
index 8fc1b20cab..977800ea01 100644
--- a/doc/guides/prog_guide/profile_app.rst
+++ b/doc/guides/prog_guide/profile_app.rst
@@ -16,6 +16,11 @@ that information, perf being an example here. Though in some 
scenarios, eg. when
 isolated (nohz_full) and run dedicated tasks, using perf is less than ideal. 
In such cases one can
 read specific events directly from application via ``rte_pmu_read()``.
 
+Alternatively tracing library can be used which offers dedicated tracepoint
+``rte_eal_trace_pmu_event()``.
+
+Refer to :doc:`../prog_guide/trace_lib` for more details.
+
 Profiling on x86
 
 
diff --git a/doc/guides/prog_guide/trace_lib.rst 
b/doc/guides/prog_guide/trace_lib.rst
index 9a8f38073d..9a845fd86f 100644
--- a/doc/guides/prog_guide/trace_lib.rst
+++ b/doc/guides/prog_guide/trace_lib.rst
@@ -46,6 +46,7 @@ DPDK tracing library features
   trace format and is compatible with ``LTTng``.
   For detailed information, refer to
   `Common Trace Format `_.
+- Support reading PMU events on ARM64 and x86 (Intel)
 
 How to add a tracepoint?
 
@@ -137,6 +138,37 @@ the user must use ``RTE_TRACE_POINT_FP`` instead of 
``RTE_TRACE_POINT``.
 ``RTE_TRACE_POINT_FP`` is compiled out by default and it can be enabled using
 the ``enable_trace_fp`` option for meson build.
 
+PMU tracepoint
+--
+
+Performance measurement unit (PMU) event values can be read from hardware
+registers using predefined ``rte_pmu_read`` tracepoint.
+
+Tracing is enabled via ``--trace`` EAL option by passing both expression
+matching PMU tracepoint name i.e ``lib.eal.pmu.read`` and expression
+``e=ev1[,ev2,...]`` matching particular events::
+
+--trace='*pmu.read\|e=cpu_cycles,l1d_cache'
+
+Event names are available under ``/sys/bus/event_source/devices/PMU/events``
+directory, where ``PMU`` is a placeholder for either a ``cpu`` or a directory
+containing ``cpus``.
+
+In contrary to other tracepoints this does not need any extra variables
+added to source files. Instead, caller passes index which follows the order of
+events specified via ``--trace`` parameter. In the following example index 
``0``
+corresponds to ``cpu_cyclces`` while index ``1`` corresponds to ``l1d_cache``.
+
+.. code-block:: c
+
+ ...
+ rte_eal_trace_pmu_read(0);
+ rte_eal_trace_pmu_read(1);
+ ...
+
+PMU tracing support must be explicitly enabled using the ``enable_trace_fp``
+option for meson build.
+
 Event record mode
 -
 
diff --git a/lib/eal/common/eal_common_trace_points.c 
b/lib/eal/common/eal_common_trace_points.c
index 0b0b254615..de918ca618 100644
--- a/lib/eal/common/eal_common_trace_points.c
+++ b/lib/eal/common/eal_common_trace_points.c
@@ -75,3 +75,6 @@ RTE_TRACE_POINT_REGISTER(rte_eal_trace_intr_enable,
lib.eal.intr.enable)
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_intr_disable,
lib.eal.intr.disable)
+
+RTE_TRACE_POINT_REGISTER(rte_eal_trace_pmu_read,
+   lib.eal.pmu.read)
diff --git a/lib/eal/common/rte_pmu.c b/lib/eal/common/rte_pmu.c
index dc169fb2cf..6a417f74a9 100644
-

[PATCH v5 1/1] app/testpmd: add valid check to verify multi mempool feature

2022-11-21 Thread Hanumanth Pothula
Validate the ethdev parameter 'max_rx_mempools' to know whether the
device supports the multi-mempool feature or not.

Also, add a new testpmd command line argument, multi-mempool, to
control the multi-mempool feature. By default it is disabled.

Bugzilla ID: 1128
Fixes: 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx queue")

Signed-off-by: Hanumanth Pothula 

---
v5:
 - Added testpmd argument to enable multi-mempool feature.
 - Simplified logic to distinguish between multi-mempool,
   multi-segment and single pool/segment.
v4:
 - updated if condition.
v3:
 - Simplified conditional check.
 - Corrected spell, whether.
v2:
 - Rebased on tip of next-net/main.
---
 app/test-pmd/parameters.c |  3 ++
 app/test-pmd/testpmd.c| 58 +--
 app/test-pmd/testpmd.h|  1 +
 3 files changed, 41 insertions(+), 21 deletions(-)

diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index aed4cdcb84..26d6450db4 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -700,6 +700,7 @@ launch_args_parse(int argc, char** argv)
{ "rx-mq-mode", 1, 0, 0 },
{ "record-core-cycles", 0, 0, 0 },
{ "record-burst-stats", 0, 0, 0 },
+   { "multi-mempool",  0, 0, 0 },
{ PARAM_NUM_PROCS,  1, 0, 0 },
{ PARAM_PROC_ID,1, 0, 0 },
{ 0, 0, 0, 0 },
@@ -1449,6 +1450,8 @@ launch_args_parse(int argc, char** argv)
record_core_cycles = 1;
if (!strcmp(lgopts[opt_idx].name, "record-burst-stats"))
record_burst_stats = 1;
+   if (!strcmp(lgopts[opt_idx].name, "multi-mempool"))
+   multi_mempool = 1;
if (!strcmp(lgopts[opt_idx].name, PARAM_NUM_PROCS))
num_procs = atoi(optarg);
if (!strcmp(lgopts[opt_idx].name, PARAM_PROC_ID))
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 4e25f77c6a..9dfc4c9d0e 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -497,6 +497,11 @@ uint8_t record_burst_stats;
  */
 uint32_t rxq_share;
 
+/*
+ * Multi-mempool support, disabled by default.
+ */
+uint8_t multi_mempool;
+
 unsigned int num_sockets = 0;
 unsigned int socket_ids[RTE_MAX_NUMA_NODES];
 
@@ -2655,28 +2660,23 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
struct rte_mempool *mpx;
+   struct rte_eth_dev_info dev_info;
unsigned int i, mp_n;
uint32_t prev_hdrs = 0;
int ret;
 
+   ret = rte_eth_dev_info_get(port_id, &dev_info);
+   if (ret != 0)
+   return ret;
+
/* Verify Rx queue configuration is single pool and segment or
 * multiple pool/segment.
+* @see rte_eth_dev_info::max_rx_mempools
 * @see rte_eth_rxconf::rx_mempools
 * @see rte_eth_rxconf::rx_seg
 */
-   if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
-   ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
-   /* Single pool/segment configuration */
-   rx_conf->rx_seg = NULL;
-   rx_conf->rx_nseg = 0;
-   ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
-nb_rx_desc, socket_id,
-rx_conf, mp);
-   goto exit;
-   }
-
-   if (rx_pkt_nb_segs > 1 ||
-   rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
+   if ((rx_pkt_nb_segs > 1) &&
+   (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
/* multi-segment configuration */
for (i = 0; i < rx_pkt_nb_segs; i++) {
struct rte_eth_rxseg_split *rx_seg = &rx_useg[i].split;
@@ -2701,7 +2701,14 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
}
rx_conf->rx_nseg = rx_pkt_nb_segs;
rx_conf->rx_seg = rx_useg;
-   } else {
+   rx_conf->rx_mempools = NULL;
+   rx_conf->rx_nmempool = 0;
+   ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
+   socket_id, rx_conf, NULL);
+   rx_conf->rx_seg = NULL;
+   rx_conf->rx_nseg = 0;
+   } else if ((multi_mempool == 1) && (dev_info.max_rx_mempools != 0) &&
+ (mbuf_data_size_n > 1)) {
/* multi-pool configuration */
for (i = 0; i < mbuf_data_size_n; i++) {
mpx = mbuf_pool_find(socket_id, i);
@@ -2709,14 +2716,23 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
}
rx_conf->rx_me

Re: [PATCH v5 1/1] app/testpmd: add valid check to verify multi mempool feature

2022-11-21 Thread Ferruh Yigit
On 11/21/2022 12:45 PM, Hanumanth Pothula wrote:
> Validate ethdev parameter 'max_rx_mempools' to know whether
> device supports multi-mempool feature or not.
> 
> Also, add new testpmd command line argument, multi-mempool,
> to control multi-mempool feature. By default its disabled.
> 
> Bugzilla ID: 1128
> Fixes: 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx queue")
> 
> Signed-off-by: Hanumanth Pothula 
> 
> ---
> v5:
>  - Added testpmd argument to enable multi-mempool feature.
>  - Simplified logic to distinguish between multi-mempool,
>multi-segment and single pool/segment.
> v4:
>  - updated if condition.
> v3:
>  - Simplified conditional check.
>  - Corrected spell, whether.
> v2:
>  - Rebased on tip of next-net/main.
> ---
>  app/test-pmd/parameters.c |  3 ++
>  app/test-pmd/testpmd.c| 58 +--
>  app/test-pmd/testpmd.h|  1 +
>  3 files changed, 41 insertions(+), 21 deletions(-)
> 
> diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
> index aed4cdcb84..26d6450db4 100644
> --- a/app/test-pmd/parameters.c
> +++ b/app/test-pmd/parameters.c
> @@ -700,6 +700,7 @@ launch_args_parse(int argc, char** argv)
>   { "rx-mq-mode", 1, 0, 0 },
>   { "record-core-cycles", 0, 0, 0 },
>   { "record-burst-stats", 0, 0, 0 },
> + { "multi-mempool",  0, 0, 0 },

Can you please group this with related parameters instead of appending it
at the end? After the 'rxpkts' related parameters group (so after
'txpkts') would be a good location, since it is used for buffer split.

The new argument needs to be documented in 'doc/guides/testpmd_app_ug/run_app.rst'.

Also a help string needs to be added in the 'usage()' function, again
grouped with related parameters.

>   { PARAM_NUM_PROCS,  1, 0, 0 },
>   { PARAM_PROC_ID,1, 0, 0 },
>   { 0, 0, 0, 0 },
> @@ -1449,6 +1450,8 @@ launch_args_parse(int argc, char** argv)
>   record_core_cycles = 1;
>   if (!strcmp(lgopts[opt_idx].name, "record-burst-stats"))
>   record_burst_stats = 1;
> + if (!strcmp(lgopts[opt_idx].name, "multi-mempool"))
> + multi_mempool = 1;

Can you group this with related parameters, at the same location mentioned above?

>   if (!strcmp(lgopts[opt_idx].name, PARAM_NUM_PROCS))
>   num_procs = atoi(optarg);
>   if (!strcmp(lgopts[opt_idx].name, PARAM_PROC_ID))
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 4e25f77c6a..9dfc4c9d0e 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -497,6 +497,11 @@ uint8_t record_burst_stats;
>   */
>  uint32_t rxq_share;
>  
> +/*
> + * Multi-mempool support, disabled by default.
> + */
> +uint8_t multi_mempool;

Can you put this after 'rx_pkt_nb_segs' related group.

> +
>  unsigned int num_sockets = 0;
>  unsigned int socket_ids[RTE_MAX_NUMA_NODES];
>  
> @@ -2655,28 +2660,23 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
>   union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
>   struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
>   struct rte_mempool *mpx;
> + struct rte_eth_dev_info dev_info;
>   unsigned int i, mp_n;
>   uint32_t prev_hdrs = 0;
>   int ret;
>  
> + ret = rte_eth_dev_info_get(port_id, &dev_info);
> + if (ret != 0)
> + return ret;
> +
>   /* Verify Rx queue configuration is single pool and segment or
>* multiple pool/segment.
> +  * @see rte_eth_dev_info::max_rx_mempools
>* @see rte_eth_rxconf::rx_mempools
>* @see rte_eth_rxconf::rx_seg
>*/

Is above comment block still valid?

> - if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
> - ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
> - /* Single pool/segment configuration */
> - rx_conf->rx_seg = NULL;
> - rx_conf->rx_nseg = 0;
> - ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
> -  nb_rx_desc, socket_id,
> -  rx_conf, mp);
> - goto exit;
> - }
> -
> - if (rx_pkt_nb_segs > 1 ||
> - rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
> + if ((rx_pkt_nb_segs > 1) &&
> + (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
>   /* multi-segment configuration */
>   for (i = 0; i < rx_pkt_nb_segs; i++) {
>   struct rte_eth_rxseg_split *rx_seg = &rx_useg[i].split;
> @@ -2701,7 +2701,14 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
>   }
>   rx_conf->rx_nseg = rx_pkt_nb_segs;
>   rx_conf->rx_seg = rx_useg;
> - } else {
> + rx_conf->rx_mempools =

RE: [EXT] Re: [PATCH v5 1/1] app/testpmd: add valid check to verify multi mempool feature

2022-11-21 Thread Hanumanth Reddy Pothula


> -Original Message-
> From: Ferruh Yigit 
> Sent: Monday, November 21, 2022 6:53 PM
> To: Hanumanth Reddy Pothula ; Aman Singh
> ; Yuying Zhang 
> Cc: dev@dpdk.org; andrew.rybche...@oktetlabs.ru;
> tho...@monjalon.net; yux.ji...@intel.com; Jerin Jacob Kollanukkaran
> ; Nithin Kumar Dabilpuram
> 
> Subject: [EXT] Re: [PATCH v5 1/1] app/testpmd: add valid check to verify
> multi mempool feature
> 
> On 11/21/2022 12:45 PM, Hanumanth Pothula wrote:
> > Validate ethdev parameter 'max_rx_mempools' to know whether device
> > supports multi-mempool feature or not.
> >
> > Also, add new testpmd command line argument, multi-mempool, to
> control
> > multi-mempool feature. By default its disabled.
> >
> > Bugzilla ID: 1128
> > Fixes: 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx
> > queue")
> >
> > Signed-off-by: Hanumanth Pothula 
> >
> > ---
> > v5:
> >  - Added testpmd argument to enable multi-mempool feature.
> >  - Simplified logic to distinguish between multi-mempool,
> >multi-segment and single pool/segment.
> > v4:
> >  - updated if condition.
> > v3:
> >  - Simplified conditional check.
> >  - Corrected spell, whether.
> > v2:
> >  - Rebased on tip of next-net/main.
> > ---
> >  app/test-pmd/parameters.c |  3 ++
> >  app/test-pmd/testpmd.c| 58 +
> --
> >  app/test-pmd/testpmd.h|  1 +
> >  3 files changed, 41 insertions(+), 21 deletions(-)
> >
> > diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
> > index aed4cdcb84..26d6450db4 100644
> > --- a/app/test-pmd/parameters.c
> > +++ b/app/test-pmd/parameters.c
> > @@ -700,6 +700,7 @@ launch_args_parse(int argc, char** argv)
> > { "rx-mq-mode", 1, 0, 0 },
> > { "record-core-cycles", 0, 0, 0 },
> > { "record-burst-stats", 0, 0, 0 },
> > +   { "multi-mempool",  0, 0, 0 },
> 
> Can you please group with related parameters, instead of appending end,
> after 'rxpkts' related parameters group (so after 'txpkts') can be good
> location since it is used for buffer split.
> 
Ack

> need to document new argument on
> 'doc/guides/testpmd_app_ug/run_app.rst'
>
Ack
 
> Also need to add help string in 'usage()' function, again grouped in related
> parameters.
Sure, will add help string
> 
> > { PARAM_NUM_PROCS,  1, 0, 0 },
> > { PARAM_PROC_ID,1, 0, 0 },
> > { 0, 0, 0, 0 },
> > @@ -1449,6 +1450,8 @@ launch_args_parse(int argc, char** argv)
> > record_core_cycles = 1;
> > if (!strcmp(lgopts[opt_idx].name, "record-burst-
> stats"))
> > record_burst_stats = 1;
> > +   if (!strcmp(lgopts[opt_idx].name, "multi-
> mempool"))
> > +   multi_mempool = 1;
> 
> Can you group with related parameters, same as above mentioned location?
> 
Ack
> > if (!strcmp(lgopts[opt_idx].name,
> PARAM_NUM_PROCS))
> > num_procs = atoi(optarg);
> > if (!strcmp(lgopts[opt_idx].name,
> PARAM_PROC_ID)) diff --git
> > a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
> > 4e25f77c6a..9dfc4c9d0e 100644
> > --- a/app/test-pmd/testpmd.c
> > +++ b/app/test-pmd/testpmd.c
> > @@ -497,6 +497,11 @@ uint8_t record_burst_stats;
> >   */
> >  uint32_t rxq_share;
> >
> > +/*
> > + * Multi-mempool support, disabled by default.
> > + */
> > +uint8_t multi_mempool;
> 
> Can you put this after 'rx_pkt_nb_segs' related group.
> 
Ack
> > +
> >  unsigned int num_sockets = 0;
> >  unsigned int socket_ids[RTE_MAX_NUMA_NODES];
> >
> > @@ -2655,28 +2660,23 @@ rx_queue_setup(uint16_t port_id, uint16_t
> rx_queue_id,
> > union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
> > struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
> > struct rte_mempool *mpx;
> > +   struct rte_eth_dev_info dev_info;
> > unsigned int i, mp_n;
> > uint32_t prev_hdrs = 0;
> > int ret;
> >
> > +   ret = rte_eth_dev_info_get(port_id, &dev_info);
> > +   if (ret != 0)
> > +   return ret;
> > +
> > /* Verify Rx queue configuration is single pool and segment or
> >  * multiple pool/segment.
> > +* @see rte_eth_dev_info::max_rx_mempools
> >  * @see rte_eth_rxconf::rx_mempools
> >  * @see rte_eth_rxconf::rx_seg
> >  */
> 
> Is above comment block still valid?
Will remove
> 
> > -   if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
> > -   ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) !=
> 0))) {
> > -   /* Single pool/segment configuration */
> > -   rx_conf->rx_seg = NULL;
> > -   rx_conf->rx_nseg = 0;
> > -   ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
> > -nb_rx_desc, socket_id,
> > -   

[PATCH] compress/mlx5: add Bluefield-3 device ID

2022-11-21 Thread Raslan Darawsheh
This adds the Bluefield-3 device ids to the list of
supported NVIDIA devices that run the MLX5 compress PMDs.

Signed-off-by: Raslan Darawsheh 
---
 drivers/compress/mlx5/mlx5_compress.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/drivers/compress/mlx5/mlx5_compress.c 
b/drivers/compress/mlx5/mlx5_compress.c
index 3d2c45fcee..fb2bda9745 100644
--- a/drivers/compress/mlx5/mlx5_compress.c
+++ b/drivers/compress/mlx5/mlx5_compress.c
@@ -790,6 +790,10 @@ static const struct rte_pci_id mlx5_compress_pci_id_map[] 
= {
RTE_PCI_DEVICE(PCI_VENDOR_ID_MELLANOX,
PCI_DEVICE_ID_MELLANOX_CONNECTX6DXBF)
},
+   {
+   RTE_PCI_DEVICE(PCI_VENDOR_ID_MELLANOX,
+   PCI_DEVICE_ID_MELLANOX_CONNECTX7BF)
+   },
{
.vendor_id = 0
}
-- 
2.25.1



RE: release candidate 22.11-rc3

2022-11-21 Thread Jiang, YuX
> -Original Message-
> From: Jiang, YuX
> Sent: Thursday, November 17, 2022 4:49 PM
> To: Thomas Monjalon ; dev (dev@dpdk.org)
> 
> Cc: Devlin, Michelle ; Mcnamara, John
> ; Richardson, Bruce
> ; Zhang, Qi Z 
> Subject: RE: release candidate 22.11-rc3
>
> > -Original Message-
> > From: Thomas Monjalon 
> > Sent: Wednesday, November 16, 2022 1:33 AM
> > To: annou...@dpdk.org
> > Subject: release candidate 22.11-rc3
> >
> > A new DPDK release candidate is ready for testing:
> > https://git.dpdk.org/dpdk/tag/?id=v22.11-rc3
> >
> > There are 161 new patches in this snapshot.
> >
> > Release notes:
> > https://doc.dpdk.org/guides/rel_notes/release_22_11.html
> >
> > Please test and report issues on bugs.dpdk.org.
> > You may share some release validation results by replying to this
> > message at dev@dpdk.org and by adding tested hardware in the release
> notes.
> >
> > DPDK 22.11-rc4 should be the last chance for bug fixes and doc
> > updates, and it is planned for the end of this week.
> >
> > Thank you everyone
> >
>

Update the test status for Intel part. Till now dpdk22.11-rc3 validation test 
is almost finished.
3 bugs are found, Bug1 & Bug2 are critical issues, hope they can be fixed in 
22.11.
  Bug1: https://bugs.dpdk.org/show_bug.cgi?id=1128 [dpdk22.11-rc3]failed to 
start testpmd with the mbuf-size parameter
- Bad commit 4f04edcda769770881832f8036fd209e7bb6ab9a
- Verify v4 dpdk 
patch(https://patches.dpdk.org/project/dpdk/patch/20221118141334.3825072-1-hpoth...@marvell.com/),
 test passed.
  Bug2: idpf: core dumped when launch l3fwd with 1c1q. Verify patches passed, 
hope it can be merged into RC4.
- patch link: 
https://patches.dpdk.org/project/dpdk/patch/20221118070246.114513-1-beilei.x...@intel.com/
 & 
https://patches.dpdk.org/project/dpdk/patch/20221118035039.106084-1-beilei.x...@intel.com/
  Bug3: [DPDK22.11] idpf: failed to start port all. Verify patch passed, hope 
it can be merged into RC4
- patch link: 
https://patches.dpdk.org/project/dpdk/patch/20221117030744.45460-1-beilei.x...@intel.com/

Meson test known bugs:
  1, https://bugs.dpdk.org/show_bug.cgi?id=1107 [22.11-rc1][meson test] 
seqlock_autotest test failed, which is only found on CentOS7.9/gcc4.8.5. No fix 
yet.
  2, https://bugs.dpdk.org/show_bug.cgi?id=1024 [dpdk-22.07][meson test] 
driver-tests/link_bonding_mode4_autotest bond handshake failed. No fix yet.
Asan test known bugs:
  https://bugs.dpdk.org/show_bug.cgi?id=1123 [dpdk-22.11][asan] the 
stack-buffer-overflow was found when quit testpmd in Redhat9. No fix yet.

# Basic Intel(R) NIC testing
* Build or compile:
 *Build: cover the build test combination with latest GCC/Clang version and the 
popular OS revision such as Ubuntu20.04.5, Ubuntu22.04.1, Ubuntu22.10, 
Fedora36, RHEL8.6 etc.
  - All test passed.
 *Compile: cover the CFLAGES(O0/O1/O2/O3) with popular OS such as Ubuntu22.04.1 
and RHEL8.6.
  - All test passed.
* PF/VF(i40e, ixgbe): test scenarios including 
PF/VF-RTE_FLOW/TSO/Jumboframe/checksum offload/VLAN/VXLAN, etc.
- All test done. No new issue is found.
- Known Bug "vf_interrupt_pmd/nic_interrupt_VF_vfio_pci: l3fwd-power 
Wake up failed" on X722 37d0. Verify patch passed.
- patch link: 
https://patchwork.dpdk.org/project/dpdk/patch/20221117065726.277672-1-kaisenx@intel.com/
* PF/VF(ice): test scenarios including Switch features/Package Management/Flow 
Director/Advanced Tx/Advanced RSS/ACL/DCF/Flexible Descriptor, etc.
- All test done. No new issue is found.
* idpf PMD and GVE PMD: basic test.
- All test done. Find Bug2 & Bug3 on idpf PMD.
* Intel NIC single core/NIC performance: test scenarios including PF/VF single 
core performance test, RFC2544 Zero packet loss performance test, etc.
- All test done. No big performance drop.
* Power and IPsec and other modules:
 * Power: test scenarios including bi-direction/Telemetry/Empty Poll 
Lib/Priority Base Frequency, etc.
- All test done. No new issue is found.
 * IPsec: test scenarios including ipsec/ipsec-gw/ipsec library basic test - 
QAT&SW/FIB library, etc.
- All test done. No new issue is found.
# Basic cryptodev and virtio testing
* Virtio: both function and performance test are covered. Such as 
PVP/Virtio_loopback/virtio-user loopback/virtio-net VM2VM perf testing/VMAWARE 
ESXI 7.0u3, etc.
- All test done. No new issue is found.
* Cryptodev:
 *Function test: test scenarios including Cryptodev API testing/CompressDev 
ISA-L/QAT/ZLIB PMD Testing/FIPS, etc.
- All test done. No new issue is found.
 *Performance test: test scenarios including Throughput Performance /Cryptodev 
Latency, etc.
- All test done. No big performance drop.

Best regards,
Yu Jiang


Re: [EXT] [dpdk-dev v6] doc: support IPsec Multi-buffer lib v1.3

2022-11-21 Thread Zhang, Fan

Hi Akhil,

Agreed. Thanks for clarification.

Regards,

Fan

On 11/21/2022 11:35 AM, Akhil Goyal wrote:

Hi Fan,

Hi Akhil,


  From 22.11 the ipsec-mb PMDs will be working on two different libraries
that may or may not work the same.

We also have two different contributor groups working on adding features
on top of each library, again one may or may not be compatible to another.

I believe there should be some words necessary to distinguish each
library support as well as the credits for adding certain features for
one platform.

Ok, but release notes is not a correct place to mention that.
It should be part of the respective driver documentation.



On 11/21/2022 6:57 AM, Akhil Goyal wrote:

diff --git a/doc/guides/rel_notes/release_22_11.rst
b/doc/guides/rel_notes/release_22_11.rst
index 4e55b543ef..b98b603fe7 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -240,7 +240,16 @@ New Features

   * **Updated ipsec_mb crypto driver.**

-  Added SNOW-3G and ZUC support for ARM platform.
+  * Added ARM64 port of ipsec-mb library support and SNOW-3G and ZUC
+support for ARM platform.

You need not update the above line.
* Added SNOW-3G and ZUC support for ARM platform.
Should be good enough.

+  * Added Intel IPsec MB v1.3 library support for x86 platform,
+see the following guides for more details:
+:doc:`../cryptodevs/aesni_gcm`
+:doc:`../cryptodevs/aesni_mb`
+:doc:`../cryptodevs/chacha20_poly1305`
+:doc:`../cryptodevs/kasumi`
+:doc:`../cryptodevs/snow3g`
+:doc:`../cryptodevs/zuc`

I believe adding reference for each guide is not needed.

* Added Intel IPsec MB v1.3 library support for x86 platform.
 Added details in the guides for all the drivers supported by ipsec_mb.

What the guideline Pablo/Kai added here only applies for x86 as

a. ARM does not support algorithms other than SNOW3G and ZUC.

This should be distinguished in the .rst file.
For release notes, above thing is sufficient.


b. The performance guideline may not apply to ARM.

Again, it should be part of driver documentation and not release notes.

And for each of the release note bullet that I suggested are mentioning the
Platform on which the support is added.




Re: [EXT] Re: [PATCH v5 1/1] app/testpmd: add valid check to verify multi mempool feature

2022-11-21 Thread Ferruh Yigit
On 11/21/2022 1:36 PM, Hanumanth Reddy Pothula wrote:
> 
> 
>> -Original Message-
>> From: Ferruh Yigit 
>> Sent: Monday, November 21, 2022 6:53 PM
>> To: Hanumanth Reddy Pothula ; Aman Singh
>> ; Yuying Zhang 
>> Cc: dev@dpdk.org; andrew.rybche...@oktetlabs.ru;
>> tho...@monjalon.net; yux.ji...@intel.com; Jerin Jacob Kollanukkaran
>> ; Nithin Kumar Dabilpuram
>> 
>> Subject: [EXT] Re: [PATCH v5 1/1] app/testpmd: add valid check to verify
>> multi mempool feature
>>
>> On 11/21/2022 12:45 PM, Hanumanth Pothula wrote:
>>> Validate ethdev parameter 'max_rx_mempools' to know whether device
>>> supports multi-mempool feature or not.
>>>
>>> Also, add new testpmd command line argument, multi-mempool, to
>> control
>>> multi-mempool feature. By default its disabled.
>>>
>>> Bugzilla ID: 1128
>>> Fixes: 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx
>>> queue")
>>>
>>> Signed-off-by: Hanumanth Pothula 
>>>
>>> ---
>>> v5:
>>>  - Added testpmd argument to enable multi-mempool feature.
>>>  - Simplified logic to distinguish between multi-mempool,
>>>multi-segment and single pool/segment.
>>> v4:
>>>  - updated if condition.
>>> v3:
>>>  - Simplified conditional check.
>>>  - Corrected spell, whether.
>>> v2:
>>>  - Rebased on tip of next-net/main.
>>> ---
>>>  app/test-pmd/parameters.c |  3 ++
>>>  app/test-pmd/testpmd.c| 58 +
>> --
>>>  app/test-pmd/testpmd.h|  1 +
>>>  3 files changed, 41 insertions(+), 21 deletions(-)
>>>
>>> diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
>>> index aed4cdcb84..26d6450db4 100644
>>> --- a/app/test-pmd/parameters.c
>>> +++ b/app/test-pmd/parameters.c
>>> @@ -700,6 +700,7 @@ launch_args_parse(int argc, char** argv)
>>> { "rx-mq-mode", 1, 0, 0 },
>>> { "record-core-cycles", 0, 0, 0 },
>>> { "record-burst-stats", 0, 0, 0 },
>>> +   { "multi-mempool",  0, 0, 0 },
>>
>> Can you please group with relatet paramters, instead of appending end,
>> after 'rxpkts' related parameters group (so after 'txpkts') can be good
>> location since it is used for buffer split.
>>
> Ack
> 
>> need to document new argument on
>> 'doc/guides/testpmd_app_ug/run_app.rst'
>>
> Ack
>  
>> Also need to add help string in 'usage()' function, again grouped in related
>> paramters.
> Sure, will add help string
>>
>>> { PARAM_NUM_PROCS,  1, 0, 0 },
>>> { PARAM_PROC_ID,1, 0, 0 },
>>> { 0, 0, 0, 0 },
>>> @@ -1449,6 +1450,8 @@ launch_args_parse(int argc, char** argv)
>>> record_core_cycles = 1;
>>> if (!strcmp(lgopts[opt_idx].name, "record-burst-
>> stats"))
>>> record_burst_stats = 1;
>>> +   if (!strcmp(lgopts[opt_idx].name, "multi-
>> mempool"))
>>> +   multi_mempool = 1;
>>
>> Can you group with related paramters, same as above mentioned location?
>>
> Ack
>>> if (!strcmp(lgopts[opt_idx].name,
>> PARAM_NUM_PROCS))
>>> num_procs = atoi(optarg);
>>> if (!strcmp(lgopts[opt_idx].name,
>> PARAM_PROC_ID)) diff --git
>>> a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
>>> 4e25f77c6a..9dfc4c9d0e 100644
>>> --- a/app/test-pmd/testpmd.c
>>> +++ b/app/test-pmd/testpmd.c
>>> @@ -497,6 +497,11 @@ uint8_t record_burst_stats;
>>>   */
>>>  uint32_t rxq_share;
>>>
>>> +/*
>>> + * Multi-mempool support, disabled by default.
>>> + */
>>> +uint8_t multi_mempool;
>>
>> Can you put this after 'rx_pkt_nb_segs' related group.
>>
> Ack
>>> +
>>>  unsigned int num_sockets = 0;
>>>  unsigned int socket_ids[RTE_MAX_NUMA_NODES];
>>>
>>> @@ -2655,28 +2660,23 @@ rx_queue_setup(uint16_t port_id, uint16_t
>> rx_queue_id,
>>> union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
>>> struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
>>> struct rte_mempool *mpx;
>>> +   struct rte_eth_dev_info dev_info;
>>> unsigned int i, mp_n;
>>> uint32_t prev_hdrs = 0;
>>> int ret;
>>>
>>> +   ret = rte_eth_dev_info_get(port_id, &dev_info);
>>> +   if (ret != 0)
>>> +   return ret;
>>> +
>>> /* Verify Rx queue configuration is single pool and segment or
>>>  * multiple pool/segment.
>>> +* @see rte_eth_dev_info::max_rx_mempools
>>>  * @see rte_eth_rxconf::rx_mempools
>>>  * @see rte_eth_rxconf::rx_seg
>>>  */
>>
>> Is above comment block still valid?
> Will remove
>>
>>> -   if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
>>> -   ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) !=
>> 0))) {
>>> -   /* Single pool/segment configuration */
>>> -   rx_conf->rx_seg = NULL;
>>> -   rx_conf->rx_nseg = 0;
>>> -   ret = rte_eth_rx_queue_setup

Re: release candidate 22.11-rc3

2022-11-21 Thread Ferruh Yigit
On 11/21/2022 1:50 PM, Jiang, YuX wrote:
> Update the test status for Intel part. Till now dpdk22.11-rc3 validation test 
> is almost finished.
> 3 bugs are found, Bug1 & Bug2 are critical issues, hope they can be fixed in 
> 22.11.
>   Bug1: https://bugs.dpdk.org/show_bug.cgi?id=1128 [dpdk22.11-rc3]failed to 
> start testpmd with the mbuf-size parameter
> - Bad commit 4f04edcda769770881832f8036fd209e7bb6ab9a
> - Verify v4 dpdk 
> patch(https://patches.dpdk.org/project/dpdk/patch/20221118141334.3825072-1-hpoth...@marvell.com/),
>  test passed.

Hi Yu,

Can you please verify v5 too?
https://patches.dpdk.org/project/dpdk/patch/20221121124546.3920722-1-hpoth...@marvell.com/

There will be a v6 too, but I expect the logic will be the same; it will
just have some code reordering and a documentation update.

Thanks,
ferruh


[PATCH v2] compress/mlx5: add Bluefield-3 device ID

2022-11-21 Thread Raslan Darawsheh
This adds the Bluefield-3 device ids to the list of
supported NVIDIA devices that run the MLX5 compress PMDs.
The device is still in the development stage.

Signed-off-by: Raslan Darawsheh 
---
v2: update commit msg to mention the device is actually still in
development stage.

---
 drivers/compress/mlx5/mlx5_compress.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/drivers/compress/mlx5/mlx5_compress.c 
b/drivers/compress/mlx5/mlx5_compress.c
index 3d2c45fcee..fb2bda9745 100644
--- a/drivers/compress/mlx5/mlx5_compress.c
+++ b/drivers/compress/mlx5/mlx5_compress.c
@@ -790,6 +790,10 @@ static const struct rte_pci_id mlx5_compress_pci_id_map[] 
= {
RTE_PCI_DEVICE(PCI_VENDOR_ID_MELLANOX,
PCI_DEVICE_ID_MELLANOX_CONNECTX6DXBF)
},
+   {
+   RTE_PCI_DEVICE(PCI_VENDOR_ID_MELLANOX,
+   PCI_DEVICE_ID_MELLANOX_CONNECTX7BF)
+   },
{
.vendor_id = 0
}
-- 
2.25.1



RE: Regarding User Data in DPDK ACL Library.

2022-11-21 Thread Konstantin Ananyev


> > > On Thu, 17 Nov 2022 19:28:12 +0530
> > > venkatesh bs  wrote:
> > >
> > > > Hi DPDK Team,
> > > >
> > > > After the ACL match for highest priority DPDK Classification API
> > > > returns User Data Which is as mentioned below in the document.
> > > >
> > > > 53. Packet Classification and Access Control — Data Plane
> > > > Development Kit
> > > > 22.11.0-rc2 documentation (dpdk.org)
> > > >
> > > >
> > > >- *userdata*: A user-defined value. For each category, a successful
> > > >match returns the userdata field of the highest priority matched 
> > > > rule.
> > When
> > > >no rules match, returned value is zero
> > > >
> > > > I Wonder Why User Data Support does not returns 64 bit values,
> >
> > As I remember if first version of ACL code it was something about space
> > savings to improve performance...
> > Now I think it is more just a historical reason.
> > It would be good to change userdata to 64bit, but I presume it will be ABI
> > breakage.
> Agree. We should support 64b and even 128b (since architectures support 128b 
> atomic operations). This reduces required memory
> barriers required if the data size <= the size of atomic operations.

Hmm...  sorry, didn’t get you  here.
I do understand the user intention to save pointer to arbitrary memory location 
as user-data (64-bit).
But how does the size of atomic mem-ops relate?
Konstantin 


[PATCH] pipeline: fix validate header instruction

2022-11-21 Thread Cristian Dumitrescu
From: Yogesh Jangra 

The exported data structure for the header validate instruction did
not populate its struct_id field, which results in a segmentation fault.

Fixes: 216bc906d00 ("pipeline: export pipeline instructions to file")
Cc: sta...@dpdk.org

Signed-off-by: Yogesh Jangra 
Acked-by: Cristian Dumitrescu 
---
 lib/pipeline/rte_swx_pipeline.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/lib/pipeline/rte_swx_pipeline.c b/lib/pipeline/rte_swx_pipeline.c
index 232dafb95e..0e631dea2b 100644
--- a/lib/pipeline/rte_swx_pipeline.c
+++ b/lib/pipeline/rte_swx_pipeline.c
@@ -11793,10 +11793,12 @@ instr_hdr_validate_export(struct instruction *instr, 
FILE *f)
"\t\t.type = %s,\n"
"\t\t.valid = {\n"
"\t\t\t.header_id = %u,\n"
+   "\t\t\t.struct_id = %u,\n"
"\t\t},\n"
"\t},\n",
instr_type_to_name(instr),
-   instr->valid.header_id);
+   instr->valid.header_id,
+   instr->valid.struct_id);
 }
 
 static void
-- 
2.25.1



[PATCH v6 1/1] app/testpmd: add valid check to verify multi mempool feature

2022-11-21 Thread Hanumanth Pothula
Validate the ethdev parameter 'max_rx_mempools' to know whether
the device supports the multi-mempool feature or not.

Also, add a new testpmd command line argument, multi-mempool,
to control the multi-mempool feature. By default it is disabled.

Bugzilla ID: 1128
Fixes: 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx queue")

Signed-off-by: Hanumanth Pothula 

---
v6:
 - Updated run_app.rst file with multi-mempool argument.
 - defined and populated multi_mempool at related arguments.
 - invoking rte_eth_dev_info_get() within the multi-mempool condition
v5:
 - Added testpmd argument to enable multi-mempool feature.
 - Simplified logic to distinguish between multi-mempool,
   multi-segment and single pool/segment.
v4:
 - updated if condition.
v3:
 - Simplified conditional check.
 - Corrected spell, whether.
v2:
 - Rebased on tip of next-net/main.
---
 app/test-pmd/parameters.c |  4 ++
 app/test-pmd/testpmd.c| 66 +--
 app/test-pmd/testpmd.h|  1 +
 doc/guides/testpmd_app_ug/run_app.rst |  4 ++
 4 files changed, 50 insertions(+), 25 deletions(-)

diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index aed4cdcb84..d0f7b2f11d 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -155,6 +155,7 @@ usage(char* progname)
printf("  --rxhdrs=eth[,ipv4]*: set RX segment protocol to split.\n");
printf("  --txpkts=X[,Y]*: set TX segment sizes"
" or total packet length.\n");
+   printf(" --multi-mempool: enable multi-mempool support\n");
printf("  --txonly-multi-flow: generate multiple flows in txonly 
mode\n");
printf("  --tx-ip=src,dst: IP addresses in Tx-only mode\n");
printf("  --tx-udp=src[,dst]: UDP ports in Tx-only mode\n");
@@ -669,6 +670,7 @@ launch_args_parse(int argc, char** argv)
{ "rxpkts", 1, 0, 0 },
{ "rxhdrs", 1, 0, 0 },
{ "txpkts", 1, 0, 0 },
+   { "multi-mempool",  0, 0, 0 },
{ "txonly-multi-flow",  0, 0, 0 },
{ "rxq-share",  2, 0, 0 },
{ "eth-link-speed", 1, 0, 0 },
@@ -1295,6 +1297,8 @@ launch_args_parse(int argc, char** argv)
else
rte_exit(EXIT_FAILURE, "bad txpkts\n");
}
+   if (!strcmp(lgopts[opt_idx].name, "multi-mempool"))
+   multi_mempool = 1;
if (!strcmp(lgopts[opt_idx].name, "txonly-multi-flow"))
txonly_multi_flow = 1;
if (!strcmp(lgopts[opt_idx].name, "rxq-share")) {
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 4e25f77c6a..0bf2e4bd0d 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -245,6 +245,7 @@ uint32_t max_rx_pkt_len;
  */
 uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT];
 uint8_t  rx_pkt_nb_segs; /**< Number of segments to split */
+uint8_t multi_mempool; /**< Enables multi-mempool feature */
 uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT];
 uint8_t  rx_pkt_nb_offs; /**< Number of specified offsets */
 uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT];
@@ -258,6 +259,8 @@ uint16_t tx_pkt_seg_lengths[RTE_MAX_SEGS_PER_PKT] = {
 };
 uint8_t  tx_pkt_nb_segs = 1; /**< Number of segments in TXONLY packets */
 
+
+
 enum tx_pkt_split tx_pkt_split = TX_PKT_SPLIT_OFF;
 /**< Split policy for packets to TX. */
 
@@ -2659,24 +2662,9 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
uint32_t prev_hdrs = 0;
int ret;
 
-   /* Verify Rx queue configuration is single pool and segment or
-* multiple pool/segment.
-* @see rte_eth_rxconf::rx_mempools
-* @see rte_eth_rxconf::rx_seg
-*/
-   if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
-   ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
-   /* Single pool/segment configuration */
-   rx_conf->rx_seg = NULL;
-   rx_conf->rx_nseg = 0;
-   ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
-nb_rx_desc, socket_id,
-rx_conf, mp);
-   goto exit;
-   }
 
-   if (rx_pkt_nb_segs > 1 ||
-   rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
+   if ((rx_pkt_nb_segs > 1) &&
+   (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
/* multi-segment configuration */
for (i = 0; i < rx_pkt_nb_segs; i++) {
struct rte_eth_rxseg_split *rx_seg = &rx_useg[i].split;
@@ -2701,22 +2689,50 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
}
rx_conf->rx_nseg = rx_pkt_nb_segs;

Re: [PATCH 2/2] doc: update MLX5 LRO limitation

2022-11-21 Thread Thomas Monjalon
17/11/2022 15:39, Gregory Etelson:
> Maximal LRO message size must be multiply of 256.
> Otherwise, TCP payload may not fit into a single WQE.
> 
> Cc: sta...@dpdk.org
> Signed-off-by: Gregory Etelson 
> Acked-by: Matan Azrad 

Why the doc update is not in the same patch as the code change?

> @@ -278,6 +278,9 @@ Limitations
>  - No Tx metadata go to the E-Switch steering domain for the Flow group 0.
>The flows within group 0 and set metadata action are rejected by hardware.
>  
> +- The driver rounds down the ``max_lro_pkt_size`` value in the port
> +  configuration to a multiple of 256 due to HW limitation.
> +
>  .. note::
>  
> MAC addresses not already present in the bridge table of the associated

If you would like to read the doc, I guess you'd prefer to find this info
in the section dedicated to LRO, not in a random place.
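
As a side note, the rounding described above is simply a round-down to the
nearest multiple of 256, e.g. (illustrative C only, not the driver's actual
code):

/* Illustrative only: not taken from the mlx5 driver. */
#include <stdint.h>

static inline uint32_t
lro_round_down(uint32_t max_lro_pkt_size)
{
        return max_lro_pkt_size & ~UINT32_C(255); /* e.g. 9000 -> 8960 */
}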







RE: [PATCH v2] doc: update QAT device support

2022-11-21 Thread Ji, Kai
Acked-by: Kai Ji 

> -Original Message-
> From: Dooley, Brian 
> Sent: Friday, November 18, 2022 5:19 PM
> To: Ji, Kai 
> Cc: dev@dpdk.org; sta...@dpdk.org; gak...@marvell.com; Dooley, Brian
> 
> Subject: [PATCH v2] doc: update QAT device support
> 
> Update what drivers and devices are supported for Asymmetric Crypto Service
> on QAT
> 
> Signed-off-by: Brian Dooley 
> ---
>  doc/guides/cryptodevs/qat.rst | 17 ++---
>  1 file changed, 14 insertions(+), 3 deletions(-)
> 
> diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
> index 2d895e61ac..76d8187298 100644
> --- a/doc/guides/cryptodevs/qat.rst
> +++ b/doc/guides/cryptodevs/qat.rst
> @@ -168,8 +168,8 @@ poll mode crypto driver support for the following
> hardware accelerator devices:
>  * ``Intel QuickAssist Technology C62x``
>  * ``Intel QuickAssist Technology C3xxx``
>  * ``Intel QuickAssist Technology D15xx``
> -* ``Intel QuickAssist Technology C4xxx``
>  * ``Intel QuickAssist Technology 4xxx``
> +* ``Intel QuickAssist Technology 401xxx``
> 
>  The QAT ASYM PMD has support for:
> 
> @@ -393,9 +393,15 @@ to see the full table)
> 
> +-+-+-+-+--+---+---++
> +--+++
> | Yes | No  | No  | 3   | C4xxx| p | qat_c4xxx | 
> c4xxx  | 18a0   | 1
> | 18a1   | 128|
> 
> +-+-+-+-+--+---+---++
> +--+++
> -   | Yes | No  | No  | 4   | 4xxx | N/A   | qat_4xxx  | 4xxx 
>   | 4940   | 4
> | 4941   | 16 |
> +   | Yes | Yes | No  | 4   | 4xxx | linux/5.11+   | qat_4xxx  | 4xxx 
>   | 4940
> | 4| 4941   | 16 |
> +   
> +-+-+-+-+--+---+---++
> +--+++
> +   | Yes | Yes | Yes | 4   | 4xxx | linux/5.17+   | qat_4xxx  | 4xxx 
>   | 4940
> | 4| 4941   | 16 |
> +   
> +-+-+-+-+--+---+---++
> +--+++
> +   | Yes | No  | No  | 4   | 4xxx | IDZ/ N/A  | qat_4xxx  | 4xxx 
>   | 4940   |
> 4| 4941   | 16 |
> +   
> +-+-+-+-+--+---+---++
> +--+++
> +   | Yes | Yes | Yes | 4   | 401xxx   | linux/5.19+   | qat_401xxx| 4xxx 
>   |
> 4942   | 2| 4943   | 16 |
> 
> +-+-+-+-+--+---+---++
> +--+++
> -   | Yes | No  | No  | 4   | 401xxx   | N/A   | qat_401xxx| 4xxx 
>   | 4942
> | 2| 4943   | 16 |
> +   | Yes | No  | No  | 4   | 401xxx   | IDZ/ N/A  | qat_401xxx| 4xxx 
>   | 4942
> | 2| 4943   | 16 |
> 
> +-+-+-+-+--+---+---++
> +--+++
> 
>  * Note: Symmetric mixed crypto algorithms feature on Gen 2 works only with
> IDZ driver version 4.9.0+ @@ -416,6 +422,11 @@ If you are running on a kernel
> which includes a driver for your device, see  `Installation using IDZ QAT 
> driver`_.
> 
> 
> +.. Note::
> +
> +The Asymmetric service is not supported by DPDK QAT PMD for the Gen 3
> platform.
> +The actual Crypto services enabled on the system depend on QAT driver
> capabilities and hardware slice configuration.
> +
>  Installation using kernel.org driver
>  
> 
> --
> 2.25.1



[dpdk-dev v7] doc: support IPsec Multi-buffer lib v1.3

2022-11-21 Thread Kai Ji
From: Pablo de Lara 

Updated AESNI MB and AESNI GCM, KASUMI, ZUC, SNOW3G
and CHACHA20_POLY1305 PMD documentation guides
with information about the latest Intel IPSec Multi-buffer
library supported.

Signed-off-by: Pablo de Lara 
Acked-by: Ciara Power 
Acked-by: Brian Dooley 
Signed-off-by: Kai Ji 
---
-v7: Review comments update
-v6: Release notes update reword
-v5: Release notes update
-v4: Added information on CHACHA20_POLY1305 PMD guide
-v3: Fixed library version from 1.2 to 1.3 in one line
-v2: Removed repeated word 'the'
---
 doc/guides/cryptodevs/aesni_gcm.rst |  8 +++---
 doc/guides/cryptodevs/aesni_mb.rst  | 29 -
 doc/guides/cryptodevs/chacha20_poly1305.rst | 12 ++---
 doc/guides/cryptodevs/kasumi.rst| 15 ---
 doc/guides/cryptodevs/snow3g.rst| 19 +++---
 doc/guides/cryptodevs/zuc.rst   | 18 ++---
 doc/guides/rel_notes/release_22_11.rst  |  3 ++-
 7 files changed, 77 insertions(+), 27 deletions(-)

diff --git a/doc/guides/cryptodevs/aesni_gcm.rst 
b/doc/guides/cryptodevs/aesni_gcm.rst
index 6229392f58..5192287ed8 100644
--- a/doc/guides/cryptodevs/aesni_gcm.rst
+++ b/doc/guides/cryptodevs/aesni_gcm.rst
@@ -40,8 +40,8 @@ Installation
 To build DPDK with the AESNI_GCM_PMD the user is required to download the 
multi-buffer
 library from `here `_
 and compile it on their user system before building DPDK.
-The latest version of the library supported by this PMD is v1.2, which
-can be downloaded in 
``_.
+The latest version of the library supported by this PMD is v1.3, which
+can be downloaded in 
``_.

 .. code-block:: console

@@ -84,8 +84,8 @@ and the external crypto libraries supported by them:
17.08 - 18.02  Multi-buffer library 0.46 - 0.48
18.05 - 19.02  Multi-buffer library 0.49 - 0.52
19.05 - 20.08  Multi-buffer library 0.52 - 0.55
-   20.11 - 21.08  Multi-buffer library 0.53 - 1.2*
-   21.11+ Multi-buffer library 1.0  - 1.2*
+   20.11 - 21.08  Multi-buffer library 0.53 - 1.3*
+   21.11+ Multi-buffer library 1.0  - 1.3*
=  

 \* Multi-buffer library 1.0 or newer only works for Meson but not Make build 
system.
diff --git a/doc/guides/cryptodevs/aesni_mb.rst 
b/doc/guides/cryptodevs/aesni_mb.rst
index 599ed5698f..b9bf03655d 100644
--- a/doc/guides/cryptodevs/aesni_mb.rst
+++ b/doc/guides/cryptodevs/aesni_mb.rst
@@ -1,7 +1,7 @@
 ..  SPDX-License-Identifier: BSD-3-Clause
 Copyright(c) 2015-2018 Intel Corporation.

-AESN-NI Multi Buffer Crypto Poll Mode Driver
+AES-NI Multi Buffer Crypto Poll Mode Driver
 


@@ -10,8 +10,6 @@ support for utilizing Intel multi buffer library, see the 
white paper
 `Fast Multi-buffer IPsec Implementations on Intel® Architecture Processors
 
`_.

-The AES-NI MB PMD has current only been tested on Fedora 21 64-bit with gcc.
-
 The AES-NI MB PMD supports synchronous mode of operation with
 ``rte_cryptodev_sym_cpu_crypto_process`` function call.

@@ -77,6 +75,23 @@ Limitations
 * RTE_CRYPTO_CIPHER_DES_DOCSISBPI is not supported for combined Crypto-CRC
   DOCSIS security protocol.

+AESNI MB PMD selection over SNOW3G/ZUC/KASUMI PMDs
+--
+
+This PMD supports wireless cipher suite (SNOW3G, ZUC and KASUMI).
+On Intel processors, it is recommended to use this PMD instead of SNOW3G, ZUC 
and KASUMI PMDs,
+as it enables algorithm mixing (e.g. cipher algorithm SNOW3G-UEA2 with
+authentication algorithm AES-CMAC-128) and performance over IMIX (packet size 
mix) traffic
+is significantly higher.
+
+AESNI MB PMD selection over CHACHA20-POLY1305 PMD
+-
+
+This PMD supports Chacha20-Poly1305 algorithm.
+On Intel processors, it is recommended to use this PMD instead of 
CHACHA20-POLY1305 PMD,
+as it delivers better performance on single segment buffers.
+For multi-segment buffers, it is still recommended to use CHACHA20-POLY1305 
PMD,
+until the new SGL API is introduced in the AESNI MB PMD.

 Installation
 
@@ -84,8 +99,8 @@ Installation
 To build DPDK with the AESNI_MB_PMD the user is required to download the 
multi-buffer
 library from `here `_
 and compile it on their user system before building DPDK.
-The latest version of the library supported by this PMD is v1.2, which
-can be downloaded from 
``_.
+The latest version of the library supported by this PMD is v1.3, which
+can be downloaded from 
``_.

 .. co

RE: Regarding User Data in DPDK ACL Library.

2022-11-21 Thread Honnappa Nagarahalli

> 
> > > > On Thu, 17 Nov 2022 19:28:12 +0530 venkatesh bs
> > > >  wrote:
> > > >
> > > > > Hi DPDK Team,
> > > > >
> > > > > After the ACL match for highest priority DPDK Classification API
> > > > > returns User Data Which is as mentioned below in the document.
> > > > >
> > > > > 53. Packet Classification and Access Control — Data Plane
> > > > > Development Kit
> > > > > 22.11.0-rc2 documentation (dpdk.org)
> > > > >
> > > > >
> > > > >- *userdata*: A user-defined value. For each category, a successful
> > > > >match returns the userdata field of the highest priority matched
> rule.
> > > When
> > > > >no rules match, returned value is zero
> > > > >
> > > > > I Wonder Why User Data Support does not returns 64 bit values,
> > >
> > > As I remember if first version of ACL code it was something about
> > > space savings to improve performance...
> > > Now I think it is more just a historical reason.
> > > It would be good to change userdata to 64bit, but I presume it will
> > > be ABI breakage.
> > Agree. We should support 64b and even 128b (since architectures
> > support 128b atomic operations). This reduces required memory barriers
> required if the data size <= the size of atomic operations.
> 
> Hmm...  sorry, didn’t get you  here.
> I do understand the user intention to save pointer to arbitrary memory
> location as user-data (64-bit).
> But how does the size of atomic mem-ops relate?
> Konstantin
What I meant is, if your data fits within 64b or 128b, having another 
indirection requires:

1) one additional memory operation to store the data (the first one being the 
store to the index)
2) on the control plane, we would need a release barrier between 'store data' 
and 'store index' (not a significant issue). On the data plane, we could use 
relaxed ordering between 'load index' and 'load data', so we do not need a 
barrier here.

So, it looks like there is no barrier overhead on the data plane, but there is
the overhead of one additional memory operation.
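
To illustrate, here is a minimal C sketch of the indirection pattern discussed
above (hypothetical names and layout, not the ACL library API):

/* Hypothetical sketch, not ACL library code: publish per-match data
 * through a 32-bit index, as described above.
 */
#include <stdint.h>

struct match_data {
        uint64_t flow_id;
        uint64_t action;
};

static struct match_data table[1024];   /* data referenced by the index */
static uint32_t published_idx;          /* what the 32-bit userdata holds */

/* Control plane: write the data first, then publish the index with
 * release ordering so a reader never observes a half-written entry.
 */
static void
publish_entry(uint32_t idx, uint64_t flow_id, uint64_t action)
{
        table[idx].flow_id = flow_id;
        table[idx].action = action;
        __atomic_store_n(&published_idx, idx, __ATOMIC_RELEASE);
}

/* Data plane: a relaxed load of the index is enough because the data
 * load depends on the loaded index, but it still costs one extra memory
 * access compared to returning 64b/128b user data directly.
 */
static struct match_data
lookup_entry(void)
{
        uint32_t idx = __atomic_load_n(&published_idx, __ATOMIC_RELAXED);
        return table[idx];
}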


Re: [PATCH v6 1/1] app/testpmd: add valid check to verify multi mempool feature

2022-11-21 Thread Ferruh Yigit
On 11/21/2022 2:33 PM, Hanumanth Pothula wrote:
> Validate ethdev parameter 'max_rx_mempools' to know whether
> device supports multi-mempool feature or not.
> 

Validating 'max_rx_mempools' is not the main purpose of this patch; I would
move the paragraph below up.

> Also, add new testpmd command line argument, multi-mempool,
> to control multi-mempool feature. By default its disabled.
> 
> Bugzilla ID: 1128
> Fixes: 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx queue")
> 
> Signed-off-by: Hanumanth Pothula 
> 
> ---
> v6:
>  - Updated run_app.rst file with multi-mempool argument.
>  - defined and populated multi_mempool at related arguments.
>  - invoking rte_eth_dev_info_get() withing multi-mempool condition
> v5:
>  - Added testpmd argument to enable multi-mempool feature.
>  - Simplified logic to distinguish between multi-mempool,
>multi-segment and single pool/segment.
> v4:
>  - updated if condition.
> v3:
>  - Simplified conditional check.
>  - Corrected spell, whether.
> v2:
>  - Rebased on tip of next-net/main.
> ---
>  app/test-pmd/parameters.c |  4 ++
>  app/test-pmd/testpmd.c| 66 +--
>  app/test-pmd/testpmd.h|  1 +
>  doc/guides/testpmd_app_ug/run_app.rst |  4 ++
>  4 files changed, 50 insertions(+), 25 deletions(-)
> 
> diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
> index aed4cdcb84..d0f7b2f11d 100644
> --- a/app/test-pmd/parameters.c
> +++ b/app/test-pmd/parameters.c
> @@ -155,6 +155,7 @@ usage(char* progname)
>   printf("  --rxhdrs=eth[,ipv4]*: set RX segment protocol to split.\n");
>   printf("  --txpkts=X[,Y]*: set TX segment sizes"
>   " or total packet length.\n");
> + printf(" --multi-mempool: enable multi-mempool support\n");

Indentation is wrong, one space is missing.

Can you also update the '--mbuf-size=' definition, it has:
" ... extra memory pools will be created for allocating mbufs to receive
packets with buffer splitting features.",
Now it is for both "buffer splitting and multi Rx mempool features."
Even it can be possible to reference to new argument.

>   printf("  --txonly-multi-flow: generate multiple flows in txonly 
> mode\n");
>   printf("  --tx-ip=src,dst: IP addresses in Tx-only mode\n");
>   printf("  --tx-udp=src[,dst]: UDP ports in Tx-only mode\n");
> @@ -669,6 +670,7 @@ launch_args_parse(int argc, char** argv)
>   { "rxpkts", 1, 0, 0 },
>   { "rxhdrs", 1, 0, 0 },
>   { "txpkts", 1, 0, 0 },
> + { "multi-mempool",  0, 0, 0 },

Thinking twice, I am not sure about the 'multi-mempool' name, because
'mbuf-size' already cause to create multiple mempool, 'multi-mempool'
can be confusing.
As ethdev variable name is 'max_rx_mempools', what do you think to use
'multi-rx-mempools' here as argument?

>   { "txonly-multi-flow",  0, 0, 0 },
>   { "rxq-share",  2, 0, 0 },
>   { "eth-link-speed", 1, 0, 0 },
> @@ -1295,6 +1297,8 @@ launch_args_parse(int argc, char** argv)
>   else
>   rte_exit(EXIT_FAILURE, "bad txpkts\n");
>   }
> + if (!strcmp(lgopts[opt_idx].name, "multi-mempool"))
> + multi_mempool = 1;
>   if (!strcmp(lgopts[opt_idx].name, "txonly-multi-flow"))
>   txonly_multi_flow = 1;
>   if (!strcmp(lgopts[opt_idx].name, "rxq-share")) {
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 4e25f77c6a..0bf2e4bd0d 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -245,6 +245,7 @@ uint32_t max_rx_pkt_len;
>   */
>  uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT];
>  uint8_t  rx_pkt_nb_segs; /**< Number of segments to split */
> +uint8_t multi_mempool; /**< Enables multi-mempool feature */
>  uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT];
>  uint8_t  rx_pkt_nb_offs; /**< Number of specified offsets */
>  uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT];
> @@ -258,6 +259,8 @@ uint16_t tx_pkt_seg_lengths[RTE_MAX_SEGS_PER_PKT] = {
>  };
>  uint8_t  tx_pkt_nb_segs = 1; /**< Number of segments in TXONLY packets */
>  
> +
> +

Unintended change.

>  enum tx_pkt_split tx_pkt_split = TX_PKT_SPLIT_OFF;
>  /**< Split policy for packets to TX. */
>  
> @@ -2659,24 +2662,9 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
>   uint32_t prev_hdrs = 0;
>   int ret;
>  
> - /* Verify Rx queue configuration is single pool and segment or
> -  * multiple pool/segment.
> -  * @see rte_eth_rxconf::rx_mempools
> -  * @see rte_eth_rxconf::rx_seg
> -  */
> - if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
> - ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) 

RE: [EXT] Re: [PATCH v6 1/1] app/testpmd: add valid check to verify multi mempool feature

2022-11-21 Thread Hanumanth Reddy Pothula


> -Original Message-
> From: Ferruh Yigit 
> Sent: Monday, November 21, 2022 11:02 PM
> To: Hanumanth Reddy Pothula ; Aman Singh
> ; Yuying Zhang 
> Cc: dev@dpdk.org; andrew.rybche...@oktetlabs.ru;
> tho...@monjalon.net; yux.ji...@intel.com; Jerin Jacob Kollanukkaran
> ; Nithin Kumar Dabilpuram
> 
> Subject: [EXT] Re: [PATCH v6 1/1] app/testpmd: add valid check to verify
> multi mempool feature
> 
> On 11/21/2022 2:33 PM, Hanumanth Pothula wrote:
> > Validate ethdev parameter 'max_rx_mempools' to know whether device
> > supports multi-mempool feature or not.
> >
> 
> Validation 'max_rx_mempools' is not main purpose of this patch, I would
> move below paragraph up.
> 
> > Also, add new testpmd command line argument, multi-mempool, to
> control
> > multi-mempool feature. By default its disabled.
> >
> > Bugzilla ID: 1128
> > Fixes: 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx
> > queue")
> >
> > Signed-off-by: Hanumanth Pothula 
> >
> > ---
> > v6:
> >  - Updated run_app.rst file with multi-mempool argument.
> >  - defined and populated multi_mempool at related arguments.
> >  - invoking rte_eth_dev_info_get() withing multi-mempool condition
> > v5:
> >  - Added testpmd argument to enable multi-mempool feature.
> >  - Simplified logic to distinguish between multi-mempool,
> >multi-segment and single pool/segment.
> > v4:
> >  - updated if condition.
> > v3:
> >  - Simplified conditional check.
> >  - Corrected spell, whether.
> > v2:
> >  - Rebased on tip of next-net/main.
> > ---
> >  app/test-pmd/parameters.c |  4 ++
> >  app/test-pmd/testpmd.c| 66 +--
> >  app/test-pmd/testpmd.h|  1 +
> >  doc/guides/testpmd_app_ug/run_app.rst |  4 ++
> >  4 files changed, 50 insertions(+), 25 deletions(-)
> >
> > diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
> > index aed4cdcb84..d0f7b2f11d 100644
> > --- a/app/test-pmd/parameters.c
> > +++ b/app/test-pmd/parameters.c
> > @@ -155,6 +155,7 @@ usage(char* progname)
> > printf("  --rxhdrs=eth[,ipv4]*: set RX segment protocol to split.\n");
> > printf("  --txpkts=X[,Y]*: set TX segment sizes"
> > " or total packet length.\n");
> > +   printf(" --multi-mempool: enable multi-mempool support\n");
> 
> Indentation is wrong, one space is missing.
> 
> Can you also update the '--mbuf-size=' definition, it has:
> " ... extra memory pools will be created for allocating mbufs to receive
> packets with buffer splitting features.", Now it is for both "buffer splitting
> and multi Rx mempool features."
> Even it can be possible to reference to new argument.
Sure, will update. 
> 
> > printf("  --txonly-multi-flow: generate multiple flows in txonly
> mode\n");
> > printf("  --tx-ip=src,dst: IP addresses in Tx-only mode\n");
> > printf("  --tx-udp=src[,dst]: UDP ports in Tx-only mode\n"); @@
> > -669,6 +670,7 @@ launch_args_parse(int argc, char** argv)
> > { "rxpkts", 1, 0, 0 },
> > { "rxhdrs", 1, 0, 0 },
> > { "txpkts", 1, 0, 0 },
> > +   { "multi-mempool",  0, 0, 0 },
> 
> Thinking twice, I am not sure about the 'multi-mempool' name, because
> 'mbuf-size' already cause to create multiple mempool, 'multi-mempool'
> can be confusing.
> As ethdev variable name is 'max_rx_mempools', what do you think to use
> 'multi-rx-mempools' here as argument?

Yes, 'multi-rx-mempools' looks clean.

> 
> > { "txonly-multi-flow",  0, 0, 0 },
> > { "rxq-share",  2, 0, 0 },
> > { "eth-link-speed", 1, 0, 0 },
> > @@ -1295,6 +1297,8 @@ launch_args_parse(int argc, char** argv)
> > else
> > rte_exit(EXIT_FAILURE, "bad
> txpkts\n");
> > }
> > +   if (!strcmp(lgopts[opt_idx].name, "multi-
> mempool"))
> > +   multi_mempool = 1;
> > if (!strcmp(lgopts[opt_idx].name, "txonly-multi-
> flow"))
> > txonly_multi_flow = 1;
> > if (!strcmp(lgopts[opt_idx].name, "rxq-share")) { diff
> --git
> > a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
> > 4e25f77c6a..0bf2e4bd0d 100644
> > --- a/app/test-pmd/testpmd.c
> > +++ b/app/test-pmd/testpmd.c
> > @@ -245,6 +245,7 @@ uint32_t max_rx_pkt_len;
> >   */
> >  uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT];
> >  uint8_t  rx_pkt_nb_segs; /**< Number of segments to split */
> > +uint8_t multi_mempool; /**< Enables multi-mempool feature */
> >  uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT];
> >  uint8_t  rx_pkt_nb_offs; /**< Number of specified offsets */
> > uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT];
> > @@ -258,6 +259,8 @@ uint16_t
> tx_pkt_se

Re: [EXT] Re: [PATCH v6 1/1] app/testpmd: add valid check to verify multi mempool feature

2022-11-21 Thread Ferruh Yigit
On 11/21/2022 5:45 PM, Hanumanth Reddy Pothula wrote:
> 
> 
>> -Original Message-
>> From: Ferruh Yigit 
>> Sent: Monday, November 21, 2022 11:02 PM
>> To: Hanumanth Reddy Pothula ; Aman Singh
>> ; Yuying Zhang 
>> Cc: dev@dpdk.org; andrew.rybche...@oktetlabs.ru;
>> tho...@monjalon.net; yux.ji...@intel.com; Jerin Jacob Kollanukkaran
>> ; Nithin Kumar Dabilpuram
>> 
>> Subject: [EXT] Re: [PATCH v6 1/1] app/testpmd: add valid check to verify
>> multi mempool feature
>>
>> On 11/21/2022 2:33 PM, Hanumanth Pothula wrote:
>>> Validate ethdev parameter 'max_rx_mempools' to know whether device
>>> supports multi-mempool feature or not.
>>>
>>
>> Validation 'max_rx_mempools' is not main purpose of this patch, I would
>> move below paragraph up.
>>
>>> Also, add new testpmd command line argument, multi-mempool, to
>> control
>>> multi-mempool feature. By default its disabled.
>>>
>>> Bugzilla ID: 1128
>>> Fixes: 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx
>>> queue")
>>>
>>> Signed-off-by: Hanumanth Pothula 
>>>
>>> ---
>>> v6:
>>>  - Updated run_app.rst file with multi-mempool argument.
>>>  - defined and populated multi_mempool at related arguments.
>>>  - invoking rte_eth_dev_info_get() withing multi-mempool condition
>>> v5:
>>>  - Added testpmd argument to enable multi-mempool feature.
>>>  - Simplified logic to distinguish between multi-mempool,
>>>multi-segment and single pool/segment.
>>> v4:
>>>  - updated if condition.
>>> v3:
>>>  - Simplified conditional check.
>>>  - Corrected spell, whether.
>>> v2:
>>>  - Rebased on tip of next-net/main.
>>> ---
>>>  app/test-pmd/parameters.c |  4 ++
>>>  app/test-pmd/testpmd.c| 66 +--
>>>  app/test-pmd/testpmd.h|  1 +
>>>  doc/guides/testpmd_app_ug/run_app.rst |  4 ++
>>>  4 files changed, 50 insertions(+), 25 deletions(-)
>>>
>>> diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
>>> index aed4cdcb84..d0f7b2f11d 100644
>>> --- a/app/test-pmd/parameters.c
>>> +++ b/app/test-pmd/parameters.c
>>> @@ -155,6 +155,7 @@ usage(char* progname)
>>> printf("  --rxhdrs=eth[,ipv4]*: set RX segment protocol to split.\n");
>>> printf("  --txpkts=X[,Y]*: set TX segment sizes"
>>> " or total packet length.\n");
>>> +   printf(" --multi-mempool: enable multi-mempool support\n");
>>
>> Indentation is wrong, one space is missing.
>>
>> Can you also update the '--mbuf-size=' definition, it has:
>> " ... extra memory pools will be created for allocating mbufs to receive
>> packets with buffer splitting features.", Now it is for both "buffer 
>> splitting
>> and multi Rx mempool features."
>> Even it can be possible to reference to new argument.
> Sure, will update. 
>>
>>> printf("  --txonly-multi-flow: generate multiple flows in txonly
>> mode\n");
>>> printf("  --tx-ip=src,dst: IP addresses in Tx-only mode\n");
>>> printf("  --tx-udp=src[,dst]: UDP ports in Tx-only mode\n"); @@
>>> -669,6 +670,7 @@ launch_args_parse(int argc, char** argv)
>>> { "rxpkts", 1, 0, 0 },
>>> { "rxhdrs", 1, 0, 0 },
>>> { "txpkts", 1, 0, 0 },
>>> +   { "multi-mempool",  0, 0, 0 },
>>
>> Thinking twice, I am not sure about the 'multi-mempool' name, because
>> 'mbuf-size' already cause to create multiple mempool, 'multi-mempool'
>> can be confusing.
>> As ethdev variable name is 'max_rx_mempools', what do you think to use
>> 'multi-rx-mempools' here as argument?
> 
> Yes, 'multi-rx-mempools' looks clean.
> 
>>
>>> { "txonly-multi-flow",  0, 0, 0 },
>>> { "rxq-share",  2, 0, 0 },
>>> { "eth-link-speed", 1, 0, 0 },
>>> @@ -1295,6 +1297,8 @@ launch_args_parse(int argc, char** argv)
>>> else
>>> rte_exit(EXIT_FAILURE, "bad
>> txpkts\n");
>>> }
>>> +   if (!strcmp(lgopts[opt_idx].name, "multi-
>> mempool"))
>>> +   multi_mempool = 1;
>>> if (!strcmp(lgopts[opt_idx].name, "txonly-multi-
>> flow"))
>>> txonly_multi_flow = 1;
>>> if (!strcmp(lgopts[opt_idx].name, "rxq-share")) { diff
>> --git
>>> a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
>>> 4e25f77c6a..0bf2e4bd0d 100644
>>> --- a/app/test-pmd/testpmd.c
>>> +++ b/app/test-pmd/testpmd.c
>>> @@ -245,6 +245,7 @@ uint32_t max_rx_pkt_len;
>>>   */
>>>  uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT];
>>>  uint8_t  rx_pkt_nb_segs; /**< Number of segments to split */
>>> +uint8_t multi_mempool; /**< Enables multi-mempool feature */
>>>  uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT];
>>>  uint8_t  rx_pkt_nb_offs; /**< Number of specified offsets

[PATCH v7 1/1] app/testpmd: add valid check to verify multi mempool feature

2022-11-21 Thread Hanumanth Pothula
Validate the ethdev parameter 'max_rx_mempools' to know whether
the device supports the multi-mempool feature or not.

Also, add a new testpmd command line argument, multi-mempool,
to control the multi-mempool feature. By default it is disabled.

Bugzilla ID: 1128
Fixes: 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx queue")

Signed-off-by: Hanumanth Pothula 

---
v7:
 - Update testpmd argument name from multi-mempool to multi-rx-mempool.
 - Updated definition of testpmd argument, mbuf-size.
 - Resolved indentations.
v6:
 - Updated run_app.rst file with multi-mempool argument.
 - defined and populated multi_mempool at related arguments.
 - invoking rte_eth_dev_info_get() withing multi-mempool condition
v5:
 - Added testpmd argument to enable multi-mempool feature.
 - Simplified logic to distinguish between multi-mempool,
   multi-segment and single pool/segment.
v4:
 - updated if condition.
v3:
 - Simplified conditional check.
 - Corrected spell, whether.
v2:
 - Rebased on tip of next-net/main.
---
 app/test-pmd/parameters.c |  7 ++-
 app/test-pmd/testpmd.c| 64 ---
 app/test-pmd/testpmd.h|  1 +
 doc/guides/testpmd_app_ug/run_app.rst |  4 ++
 4 files changed, 50 insertions(+), 26 deletions(-)

diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index aed4cdcb84..af9ec39cf9 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -88,7 +88,8 @@ usage(char* progname)
   "in NUMA mode.\n");
printf("  --mbuf-size=N,[N1[,..Nn]: set the data size of mbuf to "
   "N bytes. If multiple numbers are specified the extra pools "
-  "will be created to receive with packet split features\n");
+  "will be created to receive packets based on the features "
+  "supported, like buufer-split, multi-mempool.\n");
printf("  --total-num-mbufs=N: set the number of mbufs to be allocated "
   "in mbuf pools.\n");
printf("  --max-pkt-len=N: set the maximum size of packet to N 
bytes.\n");
@@ -155,6 +156,7 @@ usage(char* progname)
printf("  --rxhdrs=eth[,ipv4]*: set RX segment protocol to split.\n");
printf("  --txpkts=X[,Y]*: set TX segment sizes"
" or total packet length.\n");
+   printf("  --multi-rx-mempool: enable multi-mempool support\n");
printf("  --txonly-multi-flow: generate multiple flows in txonly 
mode\n");
printf("  --tx-ip=src,dst: IP addresses in Tx-only mode\n");
printf("  --tx-udp=src[,dst]: UDP ports in Tx-only mode\n");
@@ -669,6 +671,7 @@ launch_args_parse(int argc, char** argv)
{ "rxpkts", 1, 0, 0 },
{ "rxhdrs", 1, 0, 0 },
{ "txpkts", 1, 0, 0 },
+   { "multi-rx-mempool",   0, 0, 0 },
{ "txonly-multi-flow",  0, 0, 0 },
{ "rxq-share",  2, 0, 0 },
{ "eth-link-speed", 1, 0, 0 },
@@ -1295,6 +1298,8 @@ launch_args_parse(int argc, char** argv)
else
rte_exit(EXIT_FAILURE, "bad txpkts\n");
}
+   if (!strcmp(lgopts[opt_idx].name, "multi-rx-mempool"))
+   multi_rx_mempool = 1;
if (!strcmp(lgopts[opt_idx].name, "txonly-multi-flow"))
txonly_multi_flow = 1;
if (!strcmp(lgopts[opt_idx].name, "rxq-share")) {
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 4e25f77c6a..716937925e 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -245,6 +245,7 @@ uint32_t max_rx_pkt_len;
  */
 uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT];
 uint8_t  rx_pkt_nb_segs; /**< Number of segments to split */
+uint8_t multi_rx_mempool; /**< Enables multi-mempool feature */
 uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT];
 uint8_t  rx_pkt_nb_offs; /**< Number of specified offsets */
 uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT];
@@ -2659,24 +2660,9 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
uint32_t prev_hdrs = 0;
int ret;
 
-   /* Verify Rx queue configuration is single pool and segment or
-* multiple pool/segment.
-* @see rte_eth_rxconf::rx_mempools
-* @see rte_eth_rxconf::rx_seg
-*/
-   if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
-   ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
-   /* Single pool/segment configuration */
-   rx_conf->rx_seg = NULL;
-   rx_conf->rx_nseg = 0;
-   ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
-nb_rx_desc, socket_id,
-rx_conf, mp);
-   goto ex

Re: [PATCH v7 1/1] app/testpmd: add valid check to verify multi mempool feature

2022-11-21 Thread Ferruh Yigit
On 11/21/2022 6:07 PM, Hanumanth Pothula wrote:
> Validate ethdev parameter 'max_rx_mempools' to know whether
> device supports multi-mempool feature or not.
> 
> Also, add new testpmd command line argument, multi-mempool,
> to control multi-mempool feature. By default its disabled.

s/multi-mempool/multi-rx-mempool/

Also moving argument paragraph up.

> 
> Bugzilla ID: 1128
> Fixes: 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx queue")
> 
> Signed-off-by: Hanumanth Pothula 

Reviewed-by: Ferruh Yigit 

With noted issues fixed,
Applied to dpdk-next-net/main, thanks.


Let's wait for the test report before requesting this to be merged to the main repo.

@Yu, @Yuying,

Can you please verify this at the latest head of the next-net tree?

Thanks,
ferruh

> 
> ---
> v7:
>  - Update testpmd argument name from multi-mempool to multi-rx-mempool.
>  - Upated defination of testpmd argument, mbuf-size.
>  - Resolved indentations.
> v6:
>  - Updated run_app.rst file with multi-mempool argument.
>  - defined and populated multi_mempool at related arguments.
>  - invoking rte_eth_dev_info_get() withing multi-mempool condition
> v5:
>  - Added testpmd argument to enable multi-mempool feature.
>  - Simplified logic to distinguish between multi-mempool,
>multi-segment and single pool/segment.
> v4:
>  - updated if condition.
> v3:
>  - Simplified conditional check.
>  - Corrected spell, whether.
> v2:
>  - Rebased on tip of next-net/main.
> ---
>  app/test-pmd/parameters.c |  7 ++-
>  app/test-pmd/testpmd.c| 64 ---
>  app/test-pmd/testpmd.h|  1 +
>  doc/guides/testpmd_app_ug/run_app.rst |  4 ++
>  4 files changed, 50 insertions(+), 26 deletions(-)
> 
> diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
> index aed4cdcb84..af9ec39cf9 100644
> --- a/app/test-pmd/parameters.c
> +++ b/app/test-pmd/parameters.c
> @@ -88,7 +88,8 @@ usage(char* progname)
>  "in NUMA mode.\n");
>   printf("  --mbuf-size=N,[N1[,..Nn]: set the data size of mbuf to "
>  "N bytes. If multiple numbers are specified the extra pools "
> -"will be created to receive with packet split features\n");
> +"will be created to receive packets based on the features "
> +"supported, like buufer-split, multi-mempool.\n");

s/buufer/buffer/

>   printf("  --total-num-mbufs=N: set the number of mbufs to be allocated "
>  "in mbuf pools.\n");
>   printf("  --max-pkt-len=N: set the maximum size of packet to N 
> bytes.\n");
> @@ -155,6 +156,7 @@ usage(char* progname)
>   printf("  --rxhdrs=eth[,ipv4]*: set RX segment protocol to split.\n");
>   printf("  --txpkts=X[,Y]*: set TX segment sizes"
>   " or total packet length.\n");
> + printf("  --multi-rx-mempool: enable multi-mempool support\n");
>   printf("  --txonly-multi-flow: generate multiple flows in txonly 
> mode\n");
>   printf("  --tx-ip=src,dst: IP addresses in Tx-only mode\n");
>   printf("  --tx-udp=src[,dst]: UDP ports in Tx-only mode\n");
> @@ -669,6 +671,7 @@ launch_args_parse(int argc, char** argv)
>   { "rxpkts", 1, 0, 0 },
>   { "rxhdrs", 1, 0, 0 },
>   { "txpkts", 1, 0, 0 },
> + { "multi-rx-mempool",   0, 0, 0 },
>   { "txonly-multi-flow",  0, 0, 0 },
>   { "rxq-share",  2, 0, 0 },
>   { "eth-link-speed", 1, 0, 0 },
> @@ -1295,6 +1298,8 @@ launch_args_parse(int argc, char** argv)
>   else
>   rte_exit(EXIT_FAILURE, "bad txpkts\n");
>   }
> + if (!strcmp(lgopts[opt_idx].name, "multi-rx-mempool"))
> + multi_rx_mempool = 1;
>   if (!strcmp(lgopts[opt_idx].name, "txonly-multi-flow"))
>   txonly_multi_flow = 1;
>   if (!strcmp(lgopts[opt_idx].name, "rxq-share")) {
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 4e25f77c6a..716937925e 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -245,6 +245,7 @@ uint32_t max_rx_pkt_len;
>   */
>  uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT];
>  uint8_t  rx_pkt_nb_segs; /**< Number of segments to split */
> +uint8_t multi_rx_mempool; /**< Enables multi-mempool feature */
>  uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT];
>  uint8_t  rx_pkt_nb_offs; /**< Number of specified offsets */
>  uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT];

Better to move new variable out of packet split related variables, and
below them.

> @@ -2659,24 +2660,9 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
>   uint32_t prev_hdrs = 0;
>   int ret;
>  
> - /* Verify Rx queue configuration is single pool and segment or
> - 

Re: [PATCH v2] compress/mlx5: add Bluefield-3 device ID

2022-11-21 Thread Thomas Monjalon
21/11/2022 15:12, Raslan Darawsheh:
> This adds the Bluefield-3 device ids to the list of
> supported NVIDIA devices that run the MLX5 compress PMDs.
> The devices is still in development stage.
> 
> Signed-off-by: Raslan Darawsheh 

Applied, thanks.




Re: [PATCH] doc: improve event core description in vDPA mlx5

2022-11-21 Thread Thomas Monjalon
> > The event core is mlx5 vDPA driver devarg that selects the CPU core for
> > the internal timer thread used to manage data-path events into the
> > driver.
> > 
> > Emphasize that this CPU should be isolated for vDPA mlx5 devices only in
> > order to save the performance and latency of the device.
> > 
> > Signed-off-by: Matan Azrad 
> 
> Reviewed-by: Chenbo Xia 

Applied, thanks.





Re: [PATCH] ring: build with global includes

2022-11-21 Thread Tyler Retzlaff
On Mon, Nov 21, 2022 at 10:31:29AM +, Bruce Richardson wrote:
> On Fri, Nov 18, 2022 at 03:22:07PM -0800, Tyler Retzlaff wrote:
> > ring has no dependencies and should be able to be built standalone but
> > cannot be since it cannot find rte_config.h. this change directs meson
> > to include global_inc paths just like is done with other libraries
> > e.g. telemetry.
> > 
> > Tyler Retzlaff (1):
> >   ring: build with global includes
> > 
> >  lib/ring/meson.build | 2 ++
> >  1 file changed, 2 insertions(+)
> >
> 
> I am a little confused by this change - how do you mean built-standalone?
> The ring library depends upon EAL for memory management, does it not? Also,
> no DPDK library can be built on its own without the rest of the top-level
> build infrastructure, which will ensure that the global-include folders are
> on the include path? 
> 
> In terms of other libs, e.g. telemetry, the only reason those need the
> global includes added to their include path explicitly is because those are
> built ahead of EAL. Anything that depends on EAL - including ring - will
> have the global includes available.

i'm having trouble seeing where in the meson.build that ring depends on
eal can you point me to where it is?

> 
> Can you explain a little more about the use-case you are looking at here,
> and how you are attempting to build ring?

so i found this by trying to understand other libraries dependencies
through a process of disabling the build of various subsets.

it's possible i didn't look deeply enough but i didn't see an explicit
dependency on eal (in the meson.build files). maybe you can point out
where it is because by just having rte_config.h available it compiles
and links.

e.g. i don't see.

deps += ['eal']

is the dependency on eal the library or just eal headers? because if it
is header only it is equivalent to telemetry i think?

thanks!

ty

> 
> /Bruce 


[PATCH] doc: announce the legacy pipeline API deprecation

2022-11-21 Thread Cristian Dumitrescu
Announce the deprecation of the legacy pipeline, table and port
library API and gradual stabilization of the new API.

Signed-off-by: Cristian Dumitrescu 
---
 doc/guides/rel_notes/deprecation.rst | 15 +++
 1 file changed, 15 insertions(+)

diff --git a/doc/guides/rel_notes/deprecation.rst 
b/doc/guides/rel_notes/deprecation.rst
index e2efa2f8b0..dfc6fa96ba 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -102,3 +102,18 @@ Deprecation Notices
   Its removal has been postponed to let potential users report interest
   in maintaining it.
   In the absence of such interest, this library will be removed in DPDK 23.11.
+
+* pipeline: The pipeline library legacy API (functions rte_pipeline_*) will be
+  deprecated and removed in DPDK 23.11 release. The new pipeline library API
+  (functions rte_swx_pipeline_*) will gradually transition from experimental
+  to stable status starting with DPDK 23.11 release.
+
+* table: The table library legacy API (functions rte_table_*) will be
+  deprecated and removed in DPDK 23.11 release. The new table library API
+  (functions rte_swx_table_*) will gradually transition from experimental
+  to stable status starting with DPDK 23.11 release.
+
+* port: The port library legacy API (functions rte_port_*) will be
+  deprecated and removed in DPDK 23.11 release. The new port library API
+  (functions rte_swx_port_*) will gradually transition from experimental
+  to stable status starting with DPDK 23.11 release.
-- 
2.34.1



Re: [PATCH] doc: add tested platforms with NVIDIA NICs

2022-11-21 Thread Thomas Monjalon
21/11/2022 10:10, Raslan Darawsheh:
> Add tested platforms with NVIDIA NICs to the 22.11 release notes.
> 
> Signed-off-by: Raslan Darawsheh 

Applied with a few typos fixed, thanks.





Re: release candidate 22.11-rc3

2022-11-21 Thread Thinh Tran

IBM - Power Systems
DPDK 22.11.0-rc3


* Basic PF on Mellanox: No new issues or regressions were seen.
* Performance: not tested.
* OS: RHEL 8.5  kernel: 4.18.0-348.el8.ppc64le
with gcc version 8.5.0 20210514 (Red Hat 8.5.0-10)
  RHEL 9.0  kernel: 5.14.0-70.13.1.el9_0.ppc64le
with gcc version 11.2.1 20220127 (Red Hat 11.2.1-9)

Systems tested:
 - IBM Power9 PowerNV 9006-22P
NICs:
 - Mellanox Technologies MT28800 Family [ConnectX-5 Ex]
 - firmware version: 16.34.1002
 - MLNX_OFED_LINUX-5.7-1.0.2.1 (OFED-5.7-1.0.2)

 - IBM Power10 PowerVM  IBM,9105-22A
NICs:
- Mellanox Technologies MT28800 Family [ConnectX-5 Ex]
- firmware version: 16.34.1002
- MLNX_OFED_LINUX-5.7-1.0.2.1 (OFED-5.7-1.0.2)

Regards,
Thinh Tran

On 11/15/2022 11:32 AM, Thomas Monjalon wrote:

A new DPDK release candidate is ready for testing:
https://git.dpdk.org/dpdk/tag/?id=v22.11-rc3

There are 161 new patches in this snapshot.

Release notes:
https://doc.dpdk.org/guides/rel_notes/release_22_11.html

Please test and report issues on bugs.dpdk.org.
You may share some release validation results
by replying to this message at dev@dpdk.org
and by adding tested hardware in the release notes.

DPDK 22.11-rc4 should be the last chance for bug fixes and doc updates,
and it is planned for the end of this week.

Thank you everyone




[PATCH 01/11] ethdev: check return result of rte_eth_dev_info_get

2022-11-21 Thread okaya
From: Sinan Kaya 

rte_class_eth: eth_mac_cmp: The status of this call to rte_eth_dev_info_get
is not checked, potentially leaving dev_info uninitialized.

Signed-off-by: Sinan Kaya 
---
 lib/ethdev/rte_class_eth.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/lib/ethdev/rte_class_eth.c b/lib/ethdev/rte_class_eth.c
index 838b3a8f9f..8165e5adc0 100644
--- a/lib/ethdev/rte_class_eth.c
+++ b/lib/ethdev/rte_class_eth.c
@@ -51,7 +51,9 @@ eth_mac_cmp(const char *key __rte_unused,
return -1; /* invalid devargs value */
 
/* Return 0 if devargs MAC is matching one of the device MACs. */
-   rte_eth_dev_info_get(data->port_id, &dev_info);
+   if (rte_eth_dev_info_get(data->port_id, &dev_info) < 0)
+   return -1;
+
for (index = 0; index < dev_info.max_mac_addrs; index++)
if (rte_is_same_ether_addr(&mac, &data->mac_addrs[index]))
return 0;
-- 
2.25.1



[PATCH 00/11] codeql fixes for various subsystems

2022-11-21 Thread okaya
From: Sinan Kaya 

Following up the codeql reported problems first submitted
by Stephen Hemminger here:

https://lore.kernel.org/all/20220527161210.77212d0b@hermes.local/t/

Posting a series of fixes about potential null pointer accesses.

Sinan Kaya (11):
  ethdev: check return result of rte_eth_dev_info_get
  net/tap: check if name is null
  memzone: check result of rte_fbarray_get
  memzone: check result of malloc_elem_from_data
  malloc: malloc_elem_join_adjacent_free can return null
  malloc: check result of rte_mem_virt2memseg_list
  malloc: check result of rte_fbarray_get
  malloc: check result of rte_mem_virt2memseg
  malloc: check result of malloc_elem_free
  malloc: check result of elem_start_pt
  bus/vdev: check result of rte_vdev_device_name

 drivers/net/tap/rte_eth_tap.c|  4 
 lib/eal/common/eal_common_memalloc.c |  4 +++-
 lib/eal/common/eal_common_memzone.c  | 10 +-
 lib/eal/common/malloc_elem.c | 14 +++---
 lib/eal/common/malloc_heap.c | 11 ++-
 lib/ethdev/ethdev_vdev.h |  2 ++
 lib/ethdev/rte_class_eth.c   |  4 +++-
 7 files changed, 42 insertions(+), 7 deletions(-)

-- 
2.25.1



[PATCH 04/11] memzone: check result of malloc_elem_from_data

2022-11-21 Thread okaya
From: Sinan Kaya 

In memzone_reserve_aligned_thread_unsafe, the result of the call
to malloc_elem_from_data is dereferenced and may be NULL.

Signed-off-by: Sinan Kaya 
---
 lib/eal/common/eal_common_memzone.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/lib/eal/common/eal_common_memzone.c 
b/lib/eal/common/eal_common_memzone.c
index 0ed03ad337..74aa5ac114 100644
--- a/lib/eal/common/eal_common_memzone.c
+++ b/lib/eal/common/eal_common_memzone.c
@@ -169,6 +169,10 @@ memzone_reserve_aligned_thread_unsafe(const char *name, 
size_t len,
}
 
struct malloc_elem *elem = malloc_elem_from_data(mz_addr);
+   if (!elem) {
+   rte_errno = ENOSPC;
+   return NULL;
+   }
 
/* fill the zone in config */
mz_idx = rte_fbarray_find_next_free(arr, 0);
-- 
2.25.1



[PATCH 02/11] net/tap: check if name is null

2022-11-21 Thread okaya
From: Sinan Kaya 

In rte_pmd_tun_probe, the result of the call to rte_vdev_device_name is
dereferenced and may be NULL.

Signed-off-by: Sinan Kaya 
---
 drivers/net/tap/rte_eth_tap.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index f2a6c33a19..aa640f8acc 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -2340,6 +2340,10 @@ rte_pmd_tun_probe(struct rte_vdev_device *dev)
struct rte_eth_dev *eth_dev;
 
name = rte_vdev_device_name(dev);
+   if (!name) {
+   return -1;
+   }
+
params = rte_vdev_device_args(dev);
memset(remote_iface, 0, RTE_ETH_NAME_MAX_LEN);
 
-- 
2.25.1



[PATCH 05/11] malloc: malloc_elem_join_adjacent_free can return null

2022-11-21 Thread okaya
From: Sinan Kaya 

In malloc_heap_add_memory, the result of the call to malloc_elem_join_adjacent_free
is dereferenced and may be NULL.

Signed-off-by: Sinan Kaya 
---
 lib/eal/common/malloc_heap.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/lib/eal/common/malloc_heap.c b/lib/eal/common/malloc_heap.c
index d7c410b786..d2ccc387bf 100644
--- a/lib/eal/common/malloc_heap.c
+++ b/lib/eal/common/malloc_heap.c
@@ -97,6 +97,9 @@ malloc_heap_add_memory(struct malloc_heap *heap, struct 
rte_memseg_list *msl,
malloc_elem_insert(elem);
 
elem = malloc_elem_join_adjacent_free(elem);
+   if (!elem) {
+   return NULL;
+   }
 
malloc_elem_free_list_insert(elem);
 
-- 
2.25.1



[PATCH 06/11] malloc: check result of rte_mem_virt2memseg_list

2022-11-21 Thread okaya
From: Sinan Kaya 

In alloc_pages_on_heap, the result of the call to rte_mem_virt2memseg_list
is dereferenced and may be NULL.

Signed-off-by: Sinan Kaya 
---
 lib/eal/common/malloc_heap.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/lib/eal/common/malloc_heap.c b/lib/eal/common/malloc_heap.c
index d2ccc387bf..438c0856e2 100644
--- a/lib/eal/common/malloc_heap.c
+++ b/lib/eal/common/malloc_heap.c
@@ -324,6 +324,9 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t 
pg_sz, size_t elt_size,
 
map_addr = ms[0]->addr;
msl = rte_mem_virt2memseg_list(map_addr);
+   if (!msl) {
+   return NULL;
+   }
 
/* check if we wanted contiguous memory but didn't get it */
if (contig && !eal_memalloc_is_contig(msl, map_addr, alloc_sz)) {
-- 
2.25.1



[PATCH 03/11] memzone: check result of rte_fbarray_get

2022-11-21 Thread okaya
From: Sinan Kaya 

In memzone_lookup_thread_unsafe, the result of the call to rte_fbarray_get
is dereferenced and may be NULL.

Signed-off-by: Sinan Kaya 
---
 lib/eal/common/eal_common_memzone.c | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/lib/eal/common/eal_common_memzone.c 
b/lib/eal/common/eal_common_memzone.c
index 860fb5fb64..0ed03ad337 100644
--- a/lib/eal/common/eal_common_memzone.c
+++ b/lib/eal/common/eal_common_memzone.c
@@ -41,7 +41,7 @@ memzone_lookup_thread_unsafe(const char *name)
i = rte_fbarray_find_next_used(arr, 0);
while (i >= 0) {
mz = rte_fbarray_get(arr, i);
-   if (mz->addr != NULL &&
+   if (mz && mz->addr != NULL &&
!strncmp(name, mz->name, RTE_MEMZONE_NAMESIZE))
return mz;
i = rte_fbarray_find_next_used(arr, i + 1);
@@ -358,6 +358,10 @@ dump_memzone(const struct rte_memzone *mz, void *arg)
fprintf(f, "physical segments used:\n");
ms_idx = RTE_PTR_DIFF(mz->addr, msl->base_va) / page_sz;
ms = rte_fbarray_get(&msl->memseg_arr, ms_idx);
+   if (!ms) {
+   RTE_LOG(DEBUG, EAL, "Skipping bad memzone\n");
+   return;
+   }
 
do {
fprintf(f, "  addr: %p iova: 0x%" PRIx64 " "
-- 
2.25.1



[PATCH 08/11] malloc: check result of rte_mem_virt2memseg

2022-11-21 Thread okaya
From: Sinan Kaya 

In malloc_elem_find_max_iova_contig, the result of the call to rte_mem_virt2memseg
is dereferenced and may be NULL.

Signed-off-by: Sinan Kaya 
---
 lib/eal/common/malloc_elem.c | 11 ---
 lib/eal/common/malloc_heap.c |  2 +-
 2 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/lib/eal/common/malloc_elem.c b/lib/eal/common/malloc_elem.c
index 83f05497cc..54d7b2f278 100644
--- a/lib/eal/common/malloc_elem.c
+++ b/lib/eal/common/malloc_elem.c
@@ -63,6 +63,8 @@ malloc_elem_find_max_iova_contig(struct malloc_elem *elem, 
size_t align)
 
cur_page = RTE_PTR_ALIGN_FLOOR(contig_seg_start, page_sz);
ms = rte_mem_virt2memseg(cur_page, elem->msl);
+   if (!ms)
+   return 0;
 
/* do first iteration outside the loop */
page_end = RTE_PTR_ADD(cur_page, page_sz);
@@ -91,9 +93,12 @@ malloc_elem_find_max_iova_contig(struct malloc_elem *elem, 
size_t align)
 * we're not blowing past data end.
 */
ms = rte_mem_virt2memseg(contig_seg_start, elem->msl);
-   cur_page = ms->addr;
-   /* don't trigger another recalculation */
-   expected_iova = ms->iova;
+   if (ms) {
+   cur_page = ms->addr;
+
+   /* don't trigger another recalculation */
+   expected_iova = ms->iova;
+   }
continue;
}
/* cur_seg_end ends on a page boundary or on data end. if we're
diff --git a/lib/eal/common/malloc_heap.c b/lib/eal/common/malloc_heap.c
index 438c0856e2..1bf2e94c83 100644
--- a/lib/eal/common/malloc_heap.c
+++ b/lib/eal/common/malloc_heap.c
@@ -932,7 +932,7 @@ malloc_heap_free(struct malloc_elem *elem)
const struct rte_memseg *tmp =
rte_mem_virt2memseg(aligned_start, msl);
 
-   if (tmp->flags & RTE_MEMSEG_FLAG_DO_NOT_FREE) {
+   if (tmp && (tmp->flags & RTE_MEMSEG_FLAG_DO_NOT_FREE)) {
/* this is an unfreeable segment, so move start */
aligned_start = RTE_PTR_ADD(tmp->addr, tmp->len);
}
-- 
2.25.1



[PATCH 07/11] malloc: check result of rte_fbarray_get

2022-11-21 Thread okaya
From: Sinan Kaya 

In eal_memalloc_is_contig, the result of the call to rte_fbarray_get
is dereferenced and may be NULL.

Signed-off-by: Sinan Kaya 
---
 lib/eal/common/eal_common_memalloc.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/lib/eal/common/eal_common_memalloc.c 
b/lib/eal/common/eal_common_memalloc.c
index ab04479c1c..e7f4bede39 100644
--- a/lib/eal/common/eal_common_memalloc.c
+++ b/lib/eal/common/eal_common_memalloc.c
@@ -126,6 +126,8 @@ eal_memalloc_is_contig(const struct rte_memseg_list *msl, 
void *start,
 
/* skip first iteration */
ms = rte_fbarray_get(&msl->memseg_arr, start_seg);
+   if (!ms)
+   return false;
cur = ms->iova;
expected = cur + pgsz;
 
@@ -137,7 +139,7 @@ eal_memalloc_is_contig(const struct rte_memseg_list *msl, 
void *start,
cur_seg++, expected += pgsz) {
ms = rte_fbarray_get(&msl->memseg_arr, cur_seg);
 
-   if (ms->iova != expected)
+   if (ms && (ms->iova != expected))
return false;
}
}
-- 
2.25.1



[PATCH 09/11] malloc: check result of malloc_elem_free

2022-11-21 Thread okaya
From: Sinan Kaya 

In malloc_heap_free, the result of the call to malloc_elem_free is dereferenced
and may be NULL.

Signed-off-by: Sinan Kaya 
---
 lib/eal/common/malloc_heap.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/lib/eal/common/malloc_heap.c b/lib/eal/common/malloc_heap.c
index 1bf2e94c83..78a540c860 100644
--- a/lib/eal/common/malloc_heap.c
+++ b/lib/eal/common/malloc_heap.c
@@ -894,6 +894,9 @@ malloc_heap_free(struct malloc_elem *elem)
/* anything after this is a bonus */
ret = 0;
 
+   if (!elem)
+   goto free_unlock;
+
/* ...of which we can't avail if we are in legacy mode, or if this is an
 * externally allocated segment.
 */
-- 
2.25.1



RE: [PATCH] ring: build with global includes

2022-11-21 Thread Konstantin Ananyev



> -Original Message-
> From: Tyler Retzlaff 
> Sent: Monday, November 21, 2022 7:53 PM
> To: Bruce Richardson 
> Cc: dev@dpdk.org
> Subject: Re: [PATCH] ring: build with global includes
> 
> On Mon, Nov 21, 2022 at 10:31:29AM +, Bruce Richardson wrote:
> > On Fri, Nov 18, 2022 at 03:22:07PM -0800, Tyler Retzlaff wrote:
> > > ring has no dependencies and should be able to be built standalone but
> > > cannot be since it cannot find rte_config.h. this change directs meson
> > > to include global_inc paths just like is done with other libraries
> > > e.g. telemetry.
> > >
> > > Tyler Retzlaff (1):
> > >   ring: build with global includes
> > >
> > >  lib/ring/meson.build | 2 ++
> > >  1 file changed, 2 insertions(+)
> > >
> >
> > I am a little confused by this change - how do you mean built-standalone?
> > The ring library depends upon EAL for memory management, does it not? Also,
> > no DPDK library can be built on its own without the rest of the top-level
> > build infrastructure, which will ensure that the global-include folders are
> > on the include path?
> >
> > In terms of other libs, e.g. telemetry, the only reason those need the
> > global includes added to their include path explicitly is because those are
> > built ahead of EAL. Anything that depends on EAL - including ring - will
> > have the global includes available.
> 
> i'm having trouble seeing where in the meson.build that ring depends on
> eal can you point me to where it is?
> 
> >
> > Can you explain a little more about the use-case you are looking at here,
> > and how you are attempting to build ring?
> 
> so i found this by trying to understand other libraries dependencies
> through a process of disabling the build of various subsets.
> 
> it's possible i didn't look deeply enough but i didn't see an explicit
> dependency on eal (in the meson.build files). maybe you can point out
> where it is because by just having rte_config.h available it compiles
> and links.
> 
> e.g. i don't see.
> 
> deps += ['eal']
> 
> is the dependency on eal the library or just eal headers? because if it
> is header only it is equivalent to telemetry i think?

rte_ring.c uses bunch of EAL functions:
rte_zmalloc, rte_memzone_*,  rte_log*, rte_mcfg*, etc.  

> thanks!
> 
> ty
> 
> >
> > /Bruce


Re: [PATCH v2 0/6] doc: some fixes

2022-11-21 Thread Thomas Monjalon
> Michael Baum (6):
>   doc: fix underlines too long in testpmd documentation
>   doc: fix the colon type in listing aged flow rules
>   doc: fix miss blank line in testpmd flow syntax doc
>   doc: fix miss blank line in release notes
>   doc: add mlx5 HWS aging support to release notes
>   doc: add ethdev pre-config flags to release notes

Applied, thanks.





Re: [PATCH] ring: build with global includes

2022-11-21 Thread Thomas Monjalon
21/11/2022 22:27, Konstantin Ananyev:
> From: Tyler Retzlaff 
> > e.g. i don't see.
> > 
> > deps += ['eal']
> > 
> > is the dependency on eal the library or just eal headers? because if it
> > is header only it is equivalent to telemetry i think?
> 
> rte_ring.c uses bunch of EAL functions:
> rte_zmalloc, rte_memzone_*,  rte_log*, rte_mcfg*, etc.  

I think deps += ['eal'] is missing in ring meson file.





Re: [PATCH 02/11] net/tap: check if name is null

2022-11-21 Thread Thomas Monjalon
21/11/2022 21:40, ok...@kernel.org:
> --- a/drivers/net/tap/rte_eth_tap.c
> +++ b/drivers/net/tap/rte_eth_tap.c
> @@ -2340,6 +2340,10 @@ rte_pmd_tun_probe(struct rte_vdev_device *dev)
>   struct rte_eth_dev *eth_dev;
>  
>   name = rte_vdev_device_name(dev);
> + if (!name) {

Please it is preferred to check against NULL,
because name is not a boolean, thanks.
I know it's longer but it is more explicit.

Thanks for the fixes in this series.





Re: [PATCH 02/11] net/tap: check if name is null

2022-11-21 Thread Sinan Kaya
On Mon, 2022-11-21 at 22:41 +0100, Thomas Monjalon wrote:
> 21/11/2022 21:40, ok...@kernel.org:
> > --- a/drivers/net/tap/rte_eth_tap.c
> > +++ b/drivers/net/tap/rte_eth_tap.c
> > @@ -2340,6 +2340,10 @@ rte_pmd_tun_probe(struct rte_vdev_device *dev)
> >   struct rte_eth_dev *eth_dev;
> >  
> >   name = rte_vdev_device_name(dev);
> > + if (!name) {
> 
> Please it is preferred to check against NULL,
> because name is not a boolean, thanks.
> I know it's longer but it is more explicit.

Sure, I can do that. Getting used to dpdk coding style. I wasn't sure
what to do with braces on single line too. At least, I got a warning on
that too.

> Thanks for the fixes in this series.
> 

Cheers
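
For illustration, a minimal C sketch (editor's example, not part of the series) of the style preference discussed above: pointers compared explicitly against NULL rather than used as booleans, and no braces around single-line statements. probe_example() is a hypothetical helper used only to show the style.

#include <stdlib.h>
#include <string.h>

static int
probe_example(const char *name)
{
	char *copy;

	if (name == NULL)	/* preferred over "if (!name)" */
		return -1;

	copy = strdup(name);
	if (copy == NULL)	/* same explicit check for allocation results */
		return -1;

	free(copy);
	return 0;
}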


[PATCH v2 00/11] codeql fixes for various subsystems

2022-11-21 Thread okaya
From: Sinan Kaya 

Following up the codeql reported problems first submitted
by Stephen Hemminger here:

https://lore.kernel.org/all/20220527161210.77212d0b@hermes.local/t/

Posting a series of fixes about potential null pointer accesses.

Changes from v1:
- Remove braces around single line statements
- use NULL comparisons

Sinan Kaya (11):
  ethdev: check return result of rte_eth_dev_info_get
  net/tap: check if name is null
  memzone: check result of rte_fbarray_get
  memzone: check result of malloc_elem_from_data
  malloc: malloc_elem_join_adjacent_free can return null
  malloc: check result of rte_mem_virt2memseg_list
  malloc: check result of rte_fbarray_get
  malloc: check result of rte_mem_virt2memseg
  malloc: check result of malloc_elem_free
  malloc: check result of elem_start_pt
  bus/vdev: check result of rte_vdev_device_name

 drivers/net/tap/rte_eth_tap.c|  3 +++
 lib/eal/common/eal_common_memalloc.c |  5 -
 lib/eal/common/eal_common_memzone.c  | 10 +-
 lib/eal/common/malloc_elem.c | 14 +++---
 lib/eal/common/malloc_heap.c |  9 -
 lib/ethdev/ethdev_vdev.h |  2 ++
 lib/ethdev/rte_class_eth.c   |  4 +++-
 7 files changed, 40 insertions(+), 7 deletions(-)

-- 
2.25.1



[PATCH v2 01/11] ethdev: check return result of rte_eth_dev_info_get

2022-11-21 Thread okaya
From: Sinan Kaya 

rte_class_eth: eth_mac_cmp: The status of this call to rte_eth_dev_info_get
is not checked, potentially leaving dev_info uninitialized.

Signed-off-by: Sinan Kaya 
---
 lib/ethdev/rte_class_eth.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/lib/ethdev/rte_class_eth.c b/lib/ethdev/rte_class_eth.c
index 838b3a8f9f..8165e5adc0 100644
--- a/lib/ethdev/rte_class_eth.c
+++ b/lib/ethdev/rte_class_eth.c
@@ -51,7 +51,9 @@ eth_mac_cmp(const char *key __rte_unused,
return -1; /* invalid devargs value */
 
/* Return 0 if devargs MAC is matching one of the device MACs. */
-   rte_eth_dev_info_get(data->port_id, &dev_info);
+   if (rte_eth_dev_info_get(data->port_id, &dev_info) < 0)
+   return -1;
+
for (index = 0; index < dev_info.max_mac_addrs; index++)
if (rte_is_same_ether_addr(&mac, &data->mac_addrs[index]))
return 0;
-- 
2.25.1



[PATCH v2 02/11] net/tap: check if name is null

2022-11-21 Thread okaya
From: Sinan Kaya 

In rte_pmd_tun_probe, the result of the call to rte_vdev_device_name is
dereferenced and may be NULL.

Signed-off-by: Sinan Kaya 
---
 drivers/net/tap/rte_eth_tap.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index f2a6c33a19..b99439e4f2 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -2340,6 +2340,9 @@ rte_pmd_tun_probe(struct rte_vdev_device *dev)
struct rte_eth_dev *eth_dev;
 
name = rte_vdev_device_name(dev);
+   if (name == NULL)
+   return -1;
+
params = rte_vdev_device_args(dev);
memset(remote_iface, 0, RTE_ETH_NAME_MAX_LEN);
 
-- 
2.25.1



[PATCH v2 03/11] memzone: check result of rte_fbarray_get

2022-11-21 Thread okaya
From: Sinan Kaya 

In memzone_lookup_thread_unsafe, the result of the call to rte_fbarray_get
is dereferenced and may be NULL.

Signed-off-by: Sinan Kaya 
---
 lib/eal/common/eal_common_memzone.c | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/lib/eal/common/eal_common_memzone.c 
b/lib/eal/common/eal_common_memzone.c
index 860fb5fb64..8d472505eb 100644
--- a/lib/eal/common/eal_common_memzone.c
+++ b/lib/eal/common/eal_common_memzone.c
@@ -41,7 +41,7 @@ memzone_lookup_thread_unsafe(const char *name)
i = rte_fbarray_find_next_used(arr, 0);
while (i >= 0) {
mz = rte_fbarray_get(arr, i);
-   if (mz->addr != NULL &&
+   if ((mz != NULL) && (mz->addr != NULL) &&
!strncmp(name, mz->name, RTE_MEMZONE_NAMESIZE))
return mz;
i = rte_fbarray_find_next_used(arr, i + 1);
@@ -358,6 +358,10 @@ dump_memzone(const struct rte_memzone *mz, void *arg)
fprintf(f, "physical segments used:\n");
ms_idx = RTE_PTR_DIFF(mz->addr, msl->base_va) / page_sz;
ms = rte_fbarray_get(&msl->memseg_arr, ms_idx);
+   if (ms == NULL) {
+   RTE_LOG(DEBUG, EAL, "Skipping bad memzone\n");
+   return;
+   }
 
do {
fprintf(f, "  addr: %p iova: 0x%" PRIx64 " "
-- 
2.25.1



[PATCH v2 04/11] memzone: check result of malloc_elem_from_data

2022-11-21 Thread okaya
From: Sinan Kaya 

In memzone_reserve_aligned_thread_unsafe, the result of the call
to malloc_elem_from_data is dereferenced and may be NULL.

Signed-off-by: Sinan Kaya 
---
 lib/eal/common/eal_common_memzone.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/lib/eal/common/eal_common_memzone.c 
b/lib/eal/common/eal_common_memzone.c
index 8d472505eb..930fee5fdc 100644
--- a/lib/eal/common/eal_common_memzone.c
+++ b/lib/eal/common/eal_common_memzone.c
@@ -169,6 +169,10 @@ memzone_reserve_aligned_thread_unsafe(const char *name, 
size_t len,
}
 
struct malloc_elem *elem = malloc_elem_from_data(mz_addr);
+   if (elem == NULL) {
+   rte_errno = ENOSPC;
+   return NULL;
+   }
 
/* fill the zone in config */
mz_idx = rte_fbarray_find_next_free(arr, 0);
-- 
2.25.1



[PATCH v2 05/11] malloc: malloc_elem_join_adjacent_free can return null

2022-11-21 Thread okaya
From: Sinan Kaya 

In malloc_heap_add_memory, the result of the call to malloc_elem_join_adjacent_free
is dereferenced and may be NULL.

Signed-off-by: Sinan Kaya 
---
 lib/eal/common/malloc_heap.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/eal/common/malloc_heap.c b/lib/eal/common/malloc_heap.c
index d7c410b786..503e551bf9 100644
--- a/lib/eal/common/malloc_heap.c
+++ b/lib/eal/common/malloc_heap.c
@@ -97,6 +97,8 @@ malloc_heap_add_memory(struct malloc_heap *heap, struct 
rte_memseg_list *msl,
malloc_elem_insert(elem);
 
elem = malloc_elem_join_adjacent_free(elem);
+   if (elem == NULL)
+   return NULL;
 
malloc_elem_free_list_insert(elem);
 
-- 
2.25.1



[PATCH v2 06/11] malloc: check result of rte_mem_virt2memseg_list

2022-11-21 Thread okaya
From: Sinan Kaya 

In alloc_pages_on_heap, the result of the call to rte_mem_virt2memseg_list
is dereferenced and may be NULL.

Signed-off-by: Sinan Kaya 
---
 lib/eal/common/malloc_heap.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/eal/common/malloc_heap.c b/lib/eal/common/malloc_heap.c
index 503e551bf9..3f41430e42 100644
--- a/lib/eal/common/malloc_heap.c
+++ b/lib/eal/common/malloc_heap.c
@@ -323,6 +323,8 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t 
pg_sz, size_t elt_size,
 
map_addr = ms[0]->addr;
msl = rte_mem_virt2memseg_list(map_addr);
+   if (msl == NULL)
+   return NULL;
 
/* check if we wanted contiguous memory but didn't get it */
if (contig && !eal_memalloc_is_contig(msl, map_addr, alloc_sz)) {
-- 
2.25.1



[PATCH v2 07/11] malloc: check result of rte_fbarray_get

2022-11-21 Thread okaya
From: Sinan Kaya 

In eal_memalloc_is_contig, the result of the call to rte_fbarray_get
is dereferenced and may be NULL.

Signed-off-by: Sinan Kaya 
---
 lib/eal/common/eal_common_memalloc.c | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/lib/eal/common/eal_common_memalloc.c 
b/lib/eal/common/eal_common_memalloc.c
index ab04479c1c..24506f8447 100644
--- a/lib/eal/common/eal_common_memalloc.c
+++ b/lib/eal/common/eal_common_memalloc.c
@@ -126,6 +126,9 @@ eal_memalloc_is_contig(const struct rte_memseg_list *msl, 
void *start,
 
/* skip first iteration */
ms = rte_fbarray_get(&msl->memseg_arr, start_seg);
+   if (ms == NULL)
+   return false;
+
cur = ms->iova;
expected = cur + pgsz;
 
@@ -137,7 +140,7 @@ eal_memalloc_is_contig(const struct rte_memseg_list *msl, 
void *start,
cur_seg++, expected += pgsz) {
ms = rte_fbarray_get(&msl->memseg_arr, cur_seg);
 
-   if (ms->iova != expected)
+   if ((ms != NULL) && (ms->iova != expected))
return false;
}
}
-- 
2.25.1



[PATCH v2 08/11] malloc: check result of rte_mem_virt2memseg

2022-11-21 Thread okaya
From: Sinan Kaya 

In malloc_elem_find_max_iova_contig, the result of the call to rte_mem_virt2memseg
is dereferenced and may be NULL.

Signed-off-by: Sinan Kaya 
---
 lib/eal/common/malloc_elem.c | 11 ---
 lib/eal/common/malloc_heap.c |  2 +-
 2 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/lib/eal/common/malloc_elem.c b/lib/eal/common/malloc_elem.c
index 83f05497cc..8f49812846 100644
--- a/lib/eal/common/malloc_elem.c
+++ b/lib/eal/common/malloc_elem.c
@@ -63,6 +63,8 @@ malloc_elem_find_max_iova_contig(struct malloc_elem *elem, 
size_t align)
 
cur_page = RTE_PTR_ALIGN_FLOOR(contig_seg_start, page_sz);
ms = rte_mem_virt2memseg(cur_page, elem->msl);
+   if (ms == NULL)
+   return 0;
 
/* do first iteration outside the loop */
page_end = RTE_PTR_ADD(cur_page, page_sz);
@@ -91,9 +93,12 @@ malloc_elem_find_max_iova_contig(struct malloc_elem *elem, 
size_t align)
 * we're not blowing past data end.
 */
ms = rte_mem_virt2memseg(contig_seg_start, elem->msl);
-   cur_page = ms->addr;
-   /* don't trigger another recalculation */
-   expected_iova = ms->iova;
+   if (ms != NULL) {
+   cur_page = ms->addr;
+
+   /* don't trigger another recalculation */
+   expected_iova = ms->iova;
+   }
continue;
}
/* cur_seg_end ends on a page boundary or on data end. if we're
diff --git a/lib/eal/common/malloc_heap.c b/lib/eal/common/malloc_heap.c
index 3f41430e42..88270ce4d2 100644
--- a/lib/eal/common/malloc_heap.c
+++ b/lib/eal/common/malloc_heap.c
@@ -930,7 +930,7 @@ malloc_heap_free(struct malloc_elem *elem)
const struct rte_memseg *tmp =
rte_mem_virt2memseg(aligned_start, msl);
 
-   if (tmp->flags & RTE_MEMSEG_FLAG_DO_NOT_FREE) {
+   if ((tmp != NULL) && (tmp->flags & 
RTE_MEMSEG_FLAG_DO_NOT_FREE)) {
/* this is an unfreeable segment, so move start */
aligned_start = RTE_PTR_ADD(tmp->addr, tmp->len);
}
-- 
2.25.1



[PATCH v2 09/11] malloc: check result of malloc_elem_free

2022-11-21 Thread okaya
From: Sinan Kaya 

In malloc_heap_free, the result of the call to malloc_elem_free is dereferenced
and may be NULL.

Signed-off-by: Sinan Kaya 
---
 lib/eal/common/malloc_heap.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/lib/eal/common/malloc_heap.c b/lib/eal/common/malloc_heap.c
index 88270ce4d2..6eb6fcda5e 100644
--- a/lib/eal/common/malloc_heap.c
+++ b/lib/eal/common/malloc_heap.c
@@ -892,6 +892,9 @@ malloc_heap_free(struct malloc_elem *elem)
/* anything after this is a bonus */
ret = 0;
 
+   if (elem == NULL)
+   goto free_unlock;
+
/* ...of which we can't avail if we are in legacy mode, or if this is an
 * externally allocated segment.
 */
-- 
2.25.1



Re: [PATCH] ring: build with global includes

2022-11-21 Thread Tyler Retzlaff
On Mon, Nov 21, 2022 at 10:36:24PM +0100, Thomas Monjalon wrote:
> 21/11/2022 22:27, Konstantin Ananyev:
> > From: Tyler Retzlaff 
> > > e.g. i don't see.
> > > 
> > > deps += ['eal']
> > > 
> > > is the dependency on eal the library or just eal headers? because if it
> > > is header only it is equivalent to telemetry i think?
> > 
> > rte_ring.c uses bunch of EAL functions:
> > rte_zmalloc, rte_memzone_*,  rte_log*, rte_mcfg*, etc.  
> 
> I think deps += ['eal'] is missing in ring meson file.

i guess that's what i'm kind of getting at... if it was there then the
patch i submitted is not required since depending on eal would drag in
global_inc.
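
For context, a minimal sketch (editor's illustration, not from the thread) of why ring needs EAL at build and link time: creating a ring goes through EAL services such as the memzone allocator, which is among the symbols rte_ring.c references.

#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_ring.h>

int
main(int argc, char **argv)
{
	struct rte_ring *r;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* rte_ring_create() reserves its backing memory through the EAL
	 * memzone allocator, one of the EAL dependencies discussed above. */
	r = rte_ring_create("demo_ring", 1024, rte_socket_id(),
			    RING_F_SP_ENQ | RING_F_SC_DEQ);
	if (r == NULL)
		return -1;

	rte_ring_free(r);
	rte_eal_cleanup();
	return 0;
}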


Re: [PATCH v3 0/3] ethdev: document special cases of port start and stop

2022-11-21 Thread Thomas Monjalon
> Dariusz Sosnowski (3):
>   net/mlx5: fix log level on failed transfer proxy stop
>   net/mlx5: document E-Switch limitations with HWS in mlx5 PMD
>   ethdev: document special cases of port start and stop

There is a change in testpmd but it looks simple enough for -rc4.
Applied, thanks.





Re: [PATCH 02/11] net/tap: check if name is null

2022-11-21 Thread Ferruh Yigit
On 11/21/2022 10:03 PM, Sinan Kaya wrote:
> On Mon, 2022-11-21 at 22:41 +0100, Thomas Monjalon wrote:
>> 21/11/2022 21:40, ok...@kernel.org:
>>> --- a/drivers/net/tap/rte_eth_tap.c
>>> +++ b/drivers/net/tap/rte_eth_tap.c
>>> @@ -2340,6 +2340,10 @@ rte_pmd_tun_probe(struct rte_vdev_device *dev)
>>> struct rte_eth_dev *eth_dev;
>>>  
>>> name = rte_vdev_device_name(dev);
>>> +   if (!name) {
>>
>> Please it is preferred to check against NULL,
>> because name is not a boolean, thanks.
>> I know it's longer but it is more explicit.
> 
> Sure, I can do that. Getting used to dpdk coding style. I wasn't sure
> what to do with braces on single line too. At least, I got a warning on
> that too.
> 

DPDK coding convention is documented if it helps:
https://doc.dpdk.org/guides/contributing/coding_style.html

>>
>> Thanks for the fixes in this series.
>>
>>
> 
> Cheers
> 



Re: [PATCH] net/nfp: fix return path in TSO processing function

2022-11-21 Thread Ferruh Yigit
On 11/18/2022 4:23 PM, Niklas Söderlund wrote:
> From: Fei Qin 
> 
> When enable TSO, nfp_net_nfdk_tx_tso() fills segment information in Tx
> descriptor. However, the return path for TSO is lost and the LSO related
> fields of Tx descriptor is filled with zeros which prevents packets from
> being sent.
> 
> This patch fixes the return path in TSO processing function to make sure
> TSO works fine.
> 
> Fixes: c73dced48c8c ("net/nfp: add NFDk Tx")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Fei Qin 
> Reviewed-by: Niklas Söderlund 
> Reviewed-by: Chaoyong He 
> Signed-off-by: Niklas Söderlund 

Applied to dpdk-next-net/main, thanks.



[PATCH v2 1/2] net/mlx5: fix port private max LRO msg size

2022-11-21 Thread Gregory Etelson
The PMD analyzes the maximal LRO size of each Rx queue and selects one that
fits all queues to configure the TIR LRO attribute.
The TIR LRO attribute is the number of 256-byte chunks that matches the
selected maximal LRO size.

The PMD used `priv->max_lro_msg_size` both for the selected maximal LRO size
and for the number of TIR chunks.

Fixes: b9f1f4c239 ("net/mlx5: fix port initialization with small LRO")

Signed-off-by: Gregory Etelson 
Acked-by: Matan Azrad 
---
 drivers/net/mlx5/mlx5.h  | 2 +-
 drivers/net/mlx5/mlx5_devx.c | 3 ++-
 drivers/net/mlx5/mlx5_rxq.c  | 4 +---
 3 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 02bee5808d..31982002ee 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1711,7 +1711,7 @@ struct mlx5_priv {
uint32_t refcnt; /**< Reference counter. */
/**< Verbs modify header action object. */
uint8_t ft_type; /**< Flow table type, Rx or Tx. */
-   uint8_t max_lro_msg_size;
+   uint32_t max_lro_msg_size;
uint32_t link_speed_capa; /* Link speed capabilities. */
struct mlx5_xstats_ctrl xstats_ctrl; /* Extended stats control. */
struct mlx5_stats_ctrl stats_ctrl; /* Stats control. */
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index c1305836cf..02deaac612 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -870,7 +870,8 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const 
uint8_t *rss_key,
if (lro) {
MLX5_ASSERT(priv->sh->config.lro_allowed);
tir_attr->lro_timeout_period_usecs = priv->config.lro_timeout;
-   tir_attr->lro_max_msg_sz = priv->max_lro_msg_size;
+   tir_attr->lro_max_msg_sz =
+   priv->max_lro_msg_size / MLX5_LRO_SEG_CHUNK_SIZE;
tir_attr->lro_enable_mask =
MLX5_TIRC_LRO_ENABLE_MASK_IPV4_LRO |
MLX5_TIRC_LRO_ENABLE_MASK_IPV6_LRO;
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 724cd6c7e6..81aa3f074a 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1533,7 +1533,6 @@ mlx5_max_lro_msg_size_adjust(struct rte_eth_dev *dev, 
uint16_t idx,
MLX5_MAX_TCP_HDR_OFFSET)
max_lro_size -= MLX5_MAX_TCP_HDR_OFFSET;
max_lro_size = RTE_MIN(max_lro_size, MLX5_MAX_LRO_SIZE);
-   max_lro_size /= MLX5_LRO_SEG_CHUNK_SIZE;
if (priv->max_lro_msg_size)
priv->max_lro_msg_size =
RTE_MIN((uint32_t)priv->max_lro_msg_size, max_lro_size);
@@ -1541,8 +1540,7 @@ mlx5_max_lro_msg_size_adjust(struct rte_eth_dev *dev, 
uint16_t idx,
priv->max_lro_msg_size = max_lro_size;
DRV_LOG(DEBUG,
"port %u Rx Queue %u max LRO message size adjusted to %u bytes",
-   dev->data->port_id, idx,
-   priv->max_lro_msg_size * MLX5_LRO_SEG_CHUNK_SIZE);
+   dev->data->port_id, idx, priv->max_lro_msg_size);
 }
 
 /**
-- 
2.34.1



[PATCH v2 2/2] doc: update MLX5 LRO limitation

2022-11-21 Thread Gregory Etelson
The maximal LRO message size must be a multiple of 256.
Otherwise, the TCP payload may not fit into a single WQE.

Cc: sta...@dpdk.org
Signed-off-by: Gregory Etelson 
Acked-by: Matan Azrad 
---
v2: move the patch to LRO section.
---
 doc/guides/nics/mlx5.rst | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 4f0db21dde..e77d79774b 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -411,6 +411,8 @@ Limitations
   - LRO packet aggregation is performed by HW only for packet size larger than
 ``lro_min_mss_size``. This value is reported on device start, when debug
 mode is enabled.
+  - The driver rounds down the ``max_lro_pkt_size`` value in the port 
configuration
+to a multiple of 256 due to HW limitation.
 
 - CRC:
 
-- 
2.34.1



RE: [PATCH 2/2] doc: update MLX5 LRO limitation

2022-11-21 Thread Gregory Etelson
Hello Thomas,

> >  .. note::
> >
> > MAC addresses not already present in the bridge table of the
> associated
> 
> If you would like to read the doc, I guess you'd prefer to find this info
> in the section dedicated to LRO, not in a random place.
> 
I moved the patch location in v2

Regards,
Gregory


[Bug 1131] [22.11-rc3] vmdq && kni meson build error with gcc12.2.1+debug on Fedora37

2022-11-21 Thread bugzilla
https://bugs.dpdk.org/show_bug.cgi?id=1131

huang chenyu (chenyux.hu...@intel.com) changed:

   What|Removed |Added

 Resolution|INVALID |---
 Status|RESOLVED|UNCONFIRMED

--- Comment #2 from huang chenyu (chenyux.hu...@intel.com) ---
After execution, the following error occurred:
[3390/3390] Generating kernel/linux/kni/rte_kni with a custom command
make: Entering directory '/usr/src/kernels/6.0.7-301.fc37.x86_64'
  CC [M] 
/root/FC37-64_K6.0.7_GCC12.1.1/x86_64-native-linuxapp-gcc+debug/20221121192322/dpdk/x86_64-native-linuxapp-gcc+debug/kernel/linux/kni/kni_misc.o
  CC [M] 
/root/FC37-64_K6.0.7_GCC12.1.1/x86_64-native-linuxapp-gcc+debug/20221121192322/dpdk/x86_64-native-linuxapp-gcc+debug/kernel/linux/kni/kni_net.o
  LD [M] 
/root/FC37-64_K6.0.7_GCC12.1.1/x86_64-native-linuxapp-gcc+debug/20221121192322/dpdk/x86_64-native-linuxapp-gcc+debug/kernel/linux/kni/rte_kni.o
  MODPOST
/root/FC37-64_K6.0.7_GCC12.1.1/x86_64-native-linuxapp-gcc+debug/20221121192322/dpdk/x86_64-native-linuxapp-gcc+debug/kernel/linux/kni/Module.symvers
  CC [M] 
/root/FC37-64_K6.0.7_GCC12.1.1/x86_64-native-linuxapp-gcc+debug/20221121192322/dpdk/x86_64-native-linuxapp-gcc+debug/kernel/linux/kni/rte_kni.mod.o
  LD [M] 
/root/FC37-64_K6.0.7_GCC12.1.1/x86_64-native-linuxapp-gcc+debug/20221121192322/dpdk/x86_64-native-linuxapp-gcc+debug/kernel/linux/kni/rte_kni.ko
  BTF [M]
/root/FC37-64_K6.0.7_GCC12.1.1/x86_64-native-linuxapp-gcc+debug/20221121192322/dpdk/x86_64-native-linuxapp-gcc+debug/kernel/linux/kni/rte_kni.ko
Skipping BTF generation for
/root/FC37-64_K6.0.7_GCC12.1.1/x86_64-native-linuxapp-gcc+debug/20221121192322/dpdk/x86_64-native-linuxapp-gcc+debug/kernel/linux/kni/rte_kni.ko
due to unavailability of vmlinux
make: Leaving directory '/usr/src/kernels/6.0.7-301.fc37.x86_64'

2022-11-22 12:57:56,999 INFO sqlalchemy.engine.Engine ROLLBACK
Filesystem                            Size  Used Avail Use% Mounted on
devtmpfs                              4.0M     0  4.0M   0% /dev
tmpfs                                  16G     0   16G   0% /dev/shm
tmpfs                                 6.3G  1.1M  6.3G   1% /run
/dev/mapper/fedora-root                99G  3.8G   96G   4% /
tmpfs                                  16G   48K   16G   1% /tmp
/dev/sda2                             960M  215M  746M  23% /boot
10.239.252.109:/srv/regression_share  890G  536G  309G  64% /root/regression_share
10.239.252.109:/regression_dts        917G   65G  806G   8% /regression_dts
tmpfs                                 3.2G  4.0K  3.2G   1% /run/user/0

[root@shwdeNPG189 platform_build]# ip a
-bash: ip: command not found
[root@shwdeNPG189 platform_build]# ls
-bash: /usr/bin/ls: Input/output error
[root@shwdeNPG189 platform_build]#

However, the device still has a lot of disk space

-- 
You are receiving this mail because:
You are the assignee for the bug.

RE: [PATCH v7 1/1] app/testpmd: add valid check to verify multi mempool feature

2022-11-21 Thread Han, YingyaX


> -Original Message-
> From: Hanumanth Pothula 
> Sent: Tuesday, November 22, 2022 2:08 AM
> To: Singh, Aman Deep ; Zhang, Yuying
> 
> Cc: dev@dpdk.org; andrew.rybche...@oktetlabs.ru;
> tho...@monjalon.net; Jiang, YuX ;
> jer...@marvell.com; ndabilpu...@marvell.com; hpoth...@marvell.com
> Subject: [PATCH v7 1/1] app/testpmd: add valid check to verify multi
> mempool feature
> 
> Validate the ethdev parameter 'max_rx_mempools' to know whether the device
> supports the multi-mempool feature or not.
> 
> Also, add a new testpmd command line argument, multi-mempool, to control
> the multi-mempool feature. By default it is disabled.
> 
> Bugzilla ID: 1128
> Fixes: 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx
> queue")
> 
> Signed-off-by: Hanumanth Pothula 

Tested-by: Yingya Han 
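
For illustration, a minimal sketch (editor's example, not the patch itself) of the kind of capability check the commit message describes, assuming multi-mempool support is advertised through the max_rx_mempools field filled by rte_eth_dev_info_get().

#include <rte_ethdev.h>

/* Return 1 if the port supports multiple mempools per Rx queue,
 * 0 if it does not, or a negative errno on query failure. */
static int
port_supports_multi_mempool(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	return dev_info.max_rx_mempools > 0;
}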

 



RE: [PATCH v7 1/1] app/testpmd: add valid check to verify multi mempool feature

2022-11-21 Thread Tang, Yaqi


> -Original Message-
> From: Han, YingyaX 
> Sent: Tuesday, November 22, 2022 2:43 PM
> To: Hanumanth Pothula ; Singh, Aman Deep
> ; Zhang, Yuying 
> Cc: dev@dpdk.org; andrew.rybche...@oktetlabs.ru; tho...@monjalon.net;
> Jiang, YuX ; jer...@marvell.com;
> ndabilpu...@marvell.com
> Subject: RE: [PATCH v7 1/1] app/testpmd: add valid check to verify multi
> mempool feature
> 
> 
> > -Original Message-
> > From: Hanumanth Pothula 
> > Sent: Tuesday, November 22, 2022 2:08 AM
> > To: Singh, Aman Deep ; Zhang, Yuying
> > 
> > Cc: dev@dpdk.org; andrew.rybche...@oktetlabs.ru;
> tho...@monjalon.net;
> > Jiang, YuX ; jer...@marvell.com;
> > ndabilpu...@marvell.com; hpoth...@marvell.com
> > Subject: [PATCH v7 1/1] app/testpmd: add valid check to verify multi
> > mempool feature
> >
> > Validate the ethdev parameter 'max_rx_mempools' to know whether the device
> > supports the multi-mempool feature or not.
> >
> > Also, add a new testpmd command line argument, multi-mempool, to control
> > the multi-mempool feature. By default it is disabled.
> >
> > Bugzilla ID: 1128
> > Fixes: 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx
> > queue")
> >
> > Signed-off-by: Hanumanth Pothula 
> 
> Tested-by: Yingya Han 
> 
> 

Tested-by: Yaqi Tang