[PATCH v3] doc: add iavf live migration guide

2023-07-06 Thread Lingyu Liu
Add iavf live migration steps based on KVM VFIO migration.

Signed-off-by: Lingyu Liu 
---
v2: Fixed CI.
Added brief introduction about live migration.
Clarified this is iavf feature.

v3: Added intro and link about vfio live migration.
Added description about kernel boot parameters.
Changed to use sysfs to bind device to driver.
Noted for running dpdk-testpmd.
Highlighted KVM vfio migration.
---
 doc/guides/nics/intel_vf.rst   | 113 +
 doc/guides/rel_notes/release_23_07.rst |   3 +
 2 files changed, 116 insertions(+)

diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index d365dbc185..8c24485bdd 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -622,6 +622,119 @@ which belongs to the destination VF on the VM.
Inter-VM Communication
 
 
+Live Migrating a VM running DPDK
+
+
+Live migration refers to the process of moving a running virtual machine (VM) 
or application
+between different physical machines without disconnecting the client or 
application
+(see https://en.wikipedia.org/wiki/Live_migration for more information).
+
+VFIO device migration refers to migrating a VM which has a VFIO device passed through
+(see https://qemu.readthedocs.io/en/latest/devel/vfio-migration.html for more information).
+
+This section describes the steps to migrate a VM which has an iavf device passed through.
+
+The following describes a target environment:
+
+*   Host Operating System: Ubuntu 20.04.5
+
+*   Guest Operating System: Ubuntu 20.04.5
+
+*   Linux Kernel Version: 5.15.0-72-generic
+
+*   Target Applications: dpdk-testpmd
+
+*   Ice Kernel Driver Version: 1.11.17.1 
+
+*   Qemu Version: 7.2
+
+The setup procedure is as follows:
+
+#.  Before booting the Host OS, open **BIOS setup** and enable **Intel® VT 
features**.
+
+#.  While booting the Host OS kernel, pass the intel_iommu=on kernel command 
line argument using GRUB.
+
+#.  In the Host OS
+
+Install the ice driver and migration driver:
+
+.. code-block:: console
+
+insmod ice.ko
+insmod ice-vfio-pci.ko
+
+Create 2 VFs and bind them to the ice-vfio-pci driver:
+
+.. code-block:: console
+
+echo 2 > /sys/bus/pci/devices/:ca:00.1/sriov_numvfs
+echo "8086 1889" > /sys/bus/pci/drivers/ice-vfio-pci/new_id
+echo :ca:11.0 > /sys/bus/pci/devices/:ca:11.0/driver/unbind
+echo :ca:11.0 > /sys/bus/pci/drivers/ice-vfio-pci/bind
+echo :ca:11.1 > /sys/bus/pci/devices/:ca:11.1/driver/unbind
+echo :ca:11.1 > /sys/bus/pci/drivers/ice-vfio-pci/bind
+
+.. note::
+
+The commands above create two VFs for device :ca:00.1:
+
+.. code-block:: console
+
+:ca:11.0 'Ethernet Adaptive Virtual Function 1889' if= 
drv=ice-vfio-pci unused=iavf
+:ca:11.1 'Ethernet Adaptive Virtual Function 1889' if= 
drv=ice-vfio-pci unused=iavf
+
+#.  Now, start the migration source Virtual Machine by running the following 
command:
+
+.. code-block:: console
+
+qemu/build/x86_64-softmmu/qemu-system-x86_64 -enable-kvm -cpu host -m 
4G -smp 1 -device 
vfio-pci,host=:ca:11.0,x-enable-migration=true,x-pre-copy-dirty-page-tracking=off
 -drive file=ubuntu-2004.qcow2 -nic user,hostfwd=tcp::-:22 -monitor stdio
+
+.. note::
+The vfio-pci,host=:ca:11.0 option attaches a vfio PCI device to the Virtual Machine,
+and the respective (Bus:Device.Function) numbers must be passed.
+x-enable-migration=true indicates that this VF supports migration.
+Dirty page tracking is not supported, so set x-pre-copy-dirty-page-tracking=off.
+
+#.  In the VM, install the iavf driver and the vfio-pci driver:
+
+.. code-block:: console
+
+insmod iavf.ko
+modprobe vfio enable_unsafe_noiommu_mode=1
+modprobe vfio-pci
+
+#.  Bind the net device to the vfio-pci driver and launch dpdk-testpmd:
+
+.. code-block:: console
+
+dpdk-testpmd -l 0-1 -- -i
+testpmd> set txpkts 64
+testpmd> start tx_first
+
+.. note::
+Please ensure dpdk-testpmd runs independently of the ssh console.
+It is suggested to run it in a background session such as tmux or screen,
+so that the migration does not close the ssh console and kill dpdk-testpmd.
+
+#. Start the migration destination Virtual Machine
+
+.. code-block:: console
+
+qemu/build/x86_64-softmmu/qemu-system-x86_64 -enable-kvm -cpu host -m 
4G -smp 1 -device 
vfio-pci,host=:ca:11.1,x-enable-migration=true,x-pre-copy-dirty-page-tracking=off
 -drive file=ubuntu-2004.qcow2 -nic user,hostfwd=tcp::5556-:22 -monitor stdio 
-incoming tcp:127.0.0.1:
+
+#.  Start migration by issuing the following command in the QEMU monitor console:
+
+.. code-block:: console
+
+migrate -d tcp:127.0.0.1:
+
+#.  Log in to the destination VM and check that dpdk-testpmd is still running.

RE: [PATCH] doc: support IPsec Multi-buffer lib v1.4

2023-07-06 Thread De Lara Guarch, Pablo



> -Original Message-
> From: Power, Ciara 
> Sent: Wednesday, July 5, 2023 3:34 PM
> To: dev@dpdk.org
> Cc: Ji, Kai ; De Lara Guarch, Pablo
> ; Power, Ciara 
> Subject: [PATCH] doc: support IPsec Multi-buffer lib v1.4
> 
> Updated AESNI MB and AESNI GCM, KASUMI, ZUC, SNOW3G and
> CHACHA20_POLY1305 PMD documentation guides with information about
> the latest Intel IPsec Multi-buffer library supported.
> 
> Signed-off-by: Ciara Power 

Acked-by: Pablo de Lara 


[PATCH v2 2/2] vhost: fix vduse features negotiation

2023-07-06 Thread Maxime Coquelin
The series introducing VDUSE support missed the
application capability to disable supported features.

This results in TSO being negotiated while not supported by
the application.

Fixes: 0adb8eccc6a6 ("vhost: add VDUSE device creation and destruction")

Signed-off-by: Maxime Coquelin 
---
 lib/vhost/socket.c | 19 +--
 lib/vhost/vduse.c  | 29 +++--
 lib/vhost/vduse.h  |  2 ++
 lib/vhost/vhost.h  |  8 +---
 lib/vhost/vhost_user.h |  9 +
 5 files changed, 32 insertions(+), 35 deletions(-)

diff --git a/lib/vhost/socket.c b/lib/vhost/socket.c
index 19a7469e45..f55fb299fd 100644
--- a/lib/vhost/socket.c
+++ b/lib/vhost/socket.c
@@ -921,6 +921,10 @@ rte_vhost_driver_register(const char *path, uint64_t flags)
VHOST_LOG_CONFIG(path, ERR, "failed to init connection 
mutex\n");
goto out_free;
}
+
+   if (!strncmp("/dev/vduse/", path, strlen("/dev/vduse/")))
+   vsocket->is_vduse = true;
+
vsocket->vdpa_dev = NULL;
vsocket->max_queue_pairs = VHOST_MAX_QUEUE_PAIRS;
vsocket->extbuf = flags & RTE_VHOST_USER_EXTBUF_SUPPORT;
@@ -950,9 +954,14 @@ rte_vhost_driver_register(const char *path, uint64_t flags)
 * two values.
 */
vsocket->use_builtin_virtio_net = true;
-   vsocket->supported_features = VIRTIO_NET_SUPPORTED_FEATURES;
-   vsocket->features   = VIRTIO_NET_SUPPORTED_FEATURES;
-   vsocket->protocol_features  = VHOST_USER_PROTOCOL_FEATURES;
+   if (vsocket->is_vduse) {
+   vsocket->supported_features = VDUSE_NET_SUPPORTED_FEATURES;
+   vsocket->features   = VDUSE_NET_SUPPORTED_FEATURES;
+   } else {
+   vsocket->supported_features = VHOST_USER_NET_SUPPORTED_FEATURES;
+   vsocket->features   = VHOST_USER_NET_SUPPORTED_FEATURES;
+   vsocket->protocol_features  = VHOST_USER_PROTOCOL_FEATURES;
+   }
 
if (vsocket->async_copy) {
vsocket->supported_features &= ~(1ULL << VHOST_F_LOG_ALL);
@@ -993,9 +1002,7 @@ rte_vhost_driver_register(const char *path, uint64_t flags)
 #endif
}
 
-   if (!strncmp("/dev/vduse/", path, strlen("/dev/vduse/"))) {
-   vsocket->is_vduse = true;
-   } else {
+   if (!vsocket->is_vduse) {
if ((flags & RTE_VHOST_USER_CLIENT) != 0) {
vsocket->reconnect = !(flags & 
RTE_VHOST_USER_NO_RECONNECT);
if (vsocket->reconnect && reconn_tid == 0) {
diff --git a/lib/vhost/vduse.c b/lib/vhost/vduse.c
index b9514e6c29..1478562be1 100644
--- a/lib/vhost/vduse.c
+++ b/lib/vhost/vduse.c
@@ -26,27 +26,6 @@
 #define VHOST_VDUSE_API_VERSION 0
 #define VDUSE_CTRL_PATH "/dev/vduse/control"
 
-#define VDUSE_NET_SUPPORTED_FEATURES ((1ULL << VIRTIO_NET_F_MRG_RXBUF) | \
-   (1ULL << VIRTIO_F_ANY_LAYOUT) | \
-   (1ULL << VIRTIO_F_VERSION_1)   | \
-   (1ULL << VIRTIO_NET_F_GSO) | \
-   (1ULL << VIRTIO_NET_F_HOST_TSO4) | \
-   (1ULL << VIRTIO_NET_F_HOST_TSO6) | \
-   (1ULL << VIRTIO_NET_F_HOST_UFO) | \
-   (1ULL << VIRTIO_NET_F_HOST_ECN) | \
-   (1ULL << VIRTIO_NET_F_CSUM)| \
-   (1ULL << VIRTIO_NET_F_GUEST_CSUM) | \
-   (1ULL << VIRTIO_NET_F_GUEST_TSO4) | \
-   (1ULL << VIRTIO_NET_F_GUEST_TSO6) | \
-   (1ULL << VIRTIO_NET_F_GUEST_UFO) | \
-   (1ULL << VIRTIO_NET_F_GUEST_ECN) | \
-   (1ULL << VIRTIO_RING_F_INDIRECT_DESC) | \
-   (1ULL << VIRTIO_RING_F_EVENT_IDX) | \
-   (1ULL << VIRTIO_F_IN_ORDER) | \
-   (1ULL << VIRTIO_F_IOMMU_PLATFORM) | \
-   (1ULL << VIRTIO_NET_F_CTRL_VQ) | \
-   (1ULL << VIRTIO_NET_F_MQ))
-
 struct vduse {
struct fdset fdset;
 };
@@ -441,7 +420,7 @@ vduse_device_create(const char *path)
struct virtio_net *dev;
struct virtio_net_config vnet_config = { 0 };
uint64_t ver = VHOST_VDUSE_API_VERSION;
-   uint64_t features = VDUSE_NET_SUPPORTED_FEATURES;
+   uint64_t features;
struct vduse_dev_config *dev_config = NULL;
const char *name = path + strlen("/dev/vduse/");
 
@@ -489,6 +468,12 @@ vduse_device_create(const char *path)
goto out_ctrl_close;
}
 
+   ret = rte_vhost_driver_get_features(path, &features);
+   if (ret < 0) {
+   VHOST_LOG_CONFIG(name, ERR, "Failed to get backend features\n");
+   goto out_free;
+   }
+
ret = rte_vhost_driver_get_queue_num(path, &max_que

[PATCH v2 0/2] VDUSE fixes for v23.07

2023-07-06 Thread Maxime Coquelin
This small series brings a couple of VDUSE fixes
for v23.07, discovered during testing with OVS-DPDK.

Changes in v2:
==
- Define a common set of features to highlight delta
  between Vhost and VDUSE (David)
- Change patches order for simplification

Maxime Coquelin (2):
  vduse: fix missing event index features
  vhost: fix vduse features negotiation

 lib/vhost/socket.c | 19 +--
 lib/vhost/vduse.c  | 28 +++-
 lib/vhost/vduse.h  |  2 ++
 lib/vhost/vhost.h  |  8 +---
 lib/vhost/vhost_user.h |  9 +
 5 files changed, 32 insertions(+), 34 deletions(-)

-- 
2.41.0



[PATCH v2 1/2] vduse: fix missing event index features

2023-07-06 Thread Maxime Coquelin
This feature was mistakenly removed; add it back.

Fixes: 0adb8eccc6a6 ("vhost: add VDUSE device creation and destruction")

Signed-off-by: Maxime Coquelin 
---
 lib/vhost/vduse.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/lib/vhost/vduse.c b/lib/vhost/vduse.c
index a509daf80c..b9514e6c29 100644
--- a/lib/vhost/vduse.c
+++ b/lib/vhost/vduse.c
@@ -41,6 +41,7 @@
(1ULL << VIRTIO_NET_F_GUEST_UFO) | \
(1ULL << VIRTIO_NET_F_GUEST_ECN) | \
(1ULL << VIRTIO_RING_F_INDIRECT_DESC) | \
+   (1ULL << VIRTIO_RING_F_EVENT_IDX) | \
(1ULL << VIRTIO_F_IN_ORDER) | \
(1ULL << VIRTIO_F_IOMMU_PLATFORM) | \
(1ULL << VIRTIO_NET_F_CTRL_VQ) | \
-- 
2.41.0



Re: [PATCH v2 0/2] VDUSE fixes for v23.07

2023-07-06 Thread David Marchand
On Thu, Jul 6, 2023 at 10:13 AM Maxime Coquelin
 wrote:
>
> This small series brings a couple of VDUSE fixes
> for v23.07, discovered during testing with OVS-DPDK.
>
> Changes in v2:
> ==
> - Define a common set of features to highlight delta
>   between Vhost and VDUSE (David)
> - Change patches order for simplification

For the series,
Reviewed-by: David Marchand 


-- 
David Marchand



Re: [dpdk-dev] [PATCH v5 0/4] improve options help

2023-07-06 Thread Thomas Monjalon
29/06/2023 18:27, Stephen Hemminger:
> On Mon,  5 Apr 2021 21:39:50 +0200
> Thomas Monjalon  wrote:
> 
> > After v4, this series is split in several parts.
> > The remaining 4 patches of this series are low priority.
> > 
> > Patches 1 and 3 are simple improvements.
> > 
> > Patches 2 and 4 lead to a new formatting of the usage text.
> > It is a matter of taste and should be discussed more.
> > 
> > v5: no change
> > 
> > Thomas Monjalon (4):
> >   eal: explain argv behaviour during init
> >   eal: improve options usage text
> >   eal: use macros for help option
> >   app: hook in EAL usage help
> 
> Thomas, this patchset seems ready but never made it in.
> What is best disposition for it:
>   1. Rebase and resubmit?
>   2. I could add it to the log patch series WIP?
>   3. Drop it since old?

I've applied patches 1 and 3 that you acked.

I let you revisit the patches 2 and 4 if you wish.




[Bug 1256] drivers/common/mlx5: mlx5_malloc() called on invalid socket ID when global MR cache is full and rte_extmem_* API is used

2023-07-06 Thread bugzilla
https://bugs.dpdk.org/show_bug.cgi?id=1256

Raslan Darawsheh (rasl...@nvidia.com) changed:

   What|Removed |Added

 Status|UNCONFIRMED |RESOLVED
 Resolution|--- |FIXED

--- Comment #2 from Raslan Darawsheh (rasl...@nvidia.com) ---
https://git.dpdk.org/dpdk/commit/?h=releases&id=147f6fb42bd7637b37a9180b0774275531c05f9b

-- 
You are receiving this mail because:
You are the assignee for the bug.

RE: [PATCH v2 1/2] vduse: fix missing event index features

2023-07-06 Thread Xia, Chenbo
> -Original Message-
> From: Maxime Coquelin 
> Sent: Thursday, July 6, 2023 4:12 PM
> To: dev@dpdk.org; Xia, Chenbo ;
> david.march...@redhat.com
> Cc: Maxime Coquelin 
> Subject: [PATCH v2 1/2] vduse: fix missing event index features
> 
> This feature was mistakenly removed; add it back.
> 
> Fixes: 0adb8eccc6a6 ("vhost: add VDUSE device creation and destruction")
> 
> Signed-off-by: Maxime Coquelin 
> ---
>  lib/vhost/vduse.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/lib/vhost/vduse.c b/lib/vhost/vduse.c
> index a509daf80c..b9514e6c29 100644
> --- a/lib/vhost/vduse.c
> +++ b/lib/vhost/vduse.c
> @@ -41,6 +41,7 @@
>   (1ULL << VIRTIO_NET_F_GUEST_UFO) | \
>   (1ULL << VIRTIO_NET_F_GUEST_ECN) | \
>   (1ULL << VIRTIO_RING_F_INDIRECT_DESC) | \
> + (1ULL << VIRTIO_RING_F_EVENT_IDX) | \
>   (1ULL << VIRTIO_F_IN_ORDER) | \
>   (1ULL << VIRTIO_F_IOMMU_PLATFORM) | \
>   (1ULL << VIRTIO_NET_F_CTRL_VQ) | \
> --
> 2.41.0

Reviewed-by: Chenbo Xia  


RE: [PATCH v2 2/2] vhost: fix vduse features negotiation

2023-07-06 Thread Xia, Chenbo
> -Original Message-
> From: Maxime Coquelin 
> Sent: Thursday, July 6, 2023 4:12 PM
> To: dev@dpdk.org; Xia, Chenbo ;
> david.march...@redhat.com
> Cc: Maxime Coquelin 
> Subject: [PATCH v2 2/2] vhost: fix vduse features negotiation
> 
> The series introducing VDUSE support missed the
> application capability to disable supported features.
> 
> This results in TSO being negotiated while not supported by
> the application.
> 
> Fixes: 0adb8eccc6a6 ("vhost: add VDUSE device creation and destruction")
> 
> Signed-off-by: Maxime Coquelin 
> ---
>  lib/vhost/socket.c | 19 +--
>  lib/vhost/vduse.c  | 29 +++--
>  lib/vhost/vduse.h  |  2 ++
>  lib/vhost/vhost.h  |  8 +---
>  lib/vhost/vhost_user.h |  9 +
>  5 files changed, 32 insertions(+), 35 deletions(-)
> 
> diff --git a/lib/vhost/socket.c b/lib/vhost/socket.c
> index 19a7469e45..f55fb299fd 100644
> --- a/lib/vhost/socket.c
> +++ b/lib/vhost/socket.c
> @@ -921,6 +921,10 @@ rte_vhost_driver_register(const char *path, uint64_t
> flags)
>   VHOST_LOG_CONFIG(path, ERR, "failed to init connection
> mutex\n");
>   goto out_free;
>   }
> +
> + if (!strncmp("/dev/vduse/", path, strlen("/dev/vduse/")))
> + vsocket->is_vduse = true;
> +
>   vsocket->vdpa_dev = NULL;
>   vsocket->max_queue_pairs = VHOST_MAX_QUEUE_PAIRS;
>   vsocket->extbuf = flags & RTE_VHOST_USER_EXTBUF_SUPPORT;
> @@ -950,9 +954,14 @@ rte_vhost_driver_register(const char *path, uint64_t
> flags)
>* two values.
>*/
>   vsocket->use_builtin_virtio_net = true;
> - vsocket->supported_features = VIRTIO_NET_SUPPORTED_FEATURES;
> - vsocket->features   = VIRTIO_NET_SUPPORTED_FEATURES;
> - vsocket->protocol_features  = VHOST_USER_PROTOCOL_FEATURES;
> + if (vsocket->is_vduse) {
> + vsocket->supported_features = VDUSE_NET_SUPPORTED_FEATURES;
> + vsocket->features   = VDUSE_NET_SUPPORTED_FEATURES;
> + } else {
> + vsocket->supported_features =
> VHOST_USER_NET_SUPPORTED_FEATURES;
> + vsocket->features   =
> VHOST_USER_NET_SUPPORTED_FEATURES;
> + vsocket->protocol_features  = VHOST_USER_PROTOCOL_FEATURES;
> + }
> 
>   if (vsocket->async_copy) {
>   vsocket->supported_features &= ~(1ULL << VHOST_F_LOG_ALL);
> @@ -993,9 +1002,7 @@ rte_vhost_driver_register(const char *path, uint64_t
> flags)
>  #endif
>   }
> 
> - if (!strncmp("/dev/vduse/", path, strlen("/dev/vduse/"))) {
> - vsocket->is_vduse = true;
> - } else {
> + if (!vsocket->is_vduse) {
>   if ((flags & RTE_VHOST_USER_CLIENT) != 0) {
>   vsocket->reconnect = !(flags &
> RTE_VHOST_USER_NO_RECONNECT);
>   if (vsocket->reconnect && reconn_tid == 0) {
> diff --git a/lib/vhost/vduse.c b/lib/vhost/vduse.c
> index b9514e6c29..1478562be1 100644
> --- a/lib/vhost/vduse.c
> +++ b/lib/vhost/vduse.c
> @@ -26,27 +26,6 @@
>  #define VHOST_VDUSE_API_VERSION 0
>  #define VDUSE_CTRL_PATH "/dev/vduse/control"
> 
> -#define VDUSE_NET_SUPPORTED_FEATURES ((1ULL << VIRTIO_NET_F_MRG_RXBUF) |
> \
> - (1ULL << VIRTIO_F_ANY_LAYOUT) | \
> - (1ULL << VIRTIO_F_VERSION_1)   | \
> - (1ULL << VIRTIO_NET_F_GSO) | \
> - (1ULL << VIRTIO_NET_F_HOST_TSO4) | \
> - (1ULL << VIRTIO_NET_F_HOST_TSO6) | \
> - (1ULL << VIRTIO_NET_F_HOST_UFO) | \
> - (1ULL << VIRTIO_NET_F_HOST_ECN) | \
> - (1ULL << VIRTIO_NET_F_CSUM)| \
> - (1ULL << VIRTIO_NET_F_GUEST_CSUM) | \
> - (1ULL << VIRTIO_NET_F_GUEST_TSO4) | \
> - (1ULL << VIRTIO_NET_F_GUEST_TSO6) | \
> - (1ULL << VIRTIO_NET_F_GUEST_UFO) | \
> - (1ULL << VIRTIO_NET_F_GUEST_ECN) | \
> - (1ULL << VIRTIO_RING_F_INDIRECT_DESC) | \
> - (1ULL << VIRTIO_RING_F_EVENT_IDX) | \
> - (1ULL << VIRTIO_F_IN_ORDER) | \
> - (1ULL << VIRTIO_F_IOMMU_PLATFORM) | \
> - (1ULL << VIRTIO_NET_F_CTRL_VQ) | \
> - (1ULL << VIRTIO_NET_F_MQ))
> -
>  struct vduse {
>   struct fdset fdset;
>  };
> @@ -441,7 +420,7 @@ vduse_device_create(const char *path)
>   struct virtio_net *dev;
>   struct virtio_net_config vnet_config = { 0 };
>   uint64_t ver = VHOST_VDUSE_API_VERSION;
> - uint64_t features = VDUSE_NET_SUPPORTED_FEATURES;
> + uint64_t features;
>   struct vduse_dev_config *dev_config = NULL;
>   const char *name = path + strlen("/dev/vduse/");
> 
> @@ -489,6 +468,12 @@ vduse_device

Re: [PATCH v4 3/3] ring: add telemetry cmd for ring info

2023-07-06 Thread David Marchand
On Tue, Jul 4, 2023 at 4:11 PM Thomas Monjalon  wrote:
>
> 04/07/2023 10:04, Jie Hai:
> > On 2023/6/20 22:34, Thomas Monjalon wrote:
> > > 20/06/2023 10:14, Jie Hai:
> > >> On 2023/2/20 20:55, David Marchand wrote:
> > >>> On Fri, Feb 10, 2023 at 3:50 AM Jie Hai  wrote:
> > 
> >  This patch supports dump of ring information by its name.
> >  An example using this command is shown below:
> > 
> >  --> /ring/info,MP_mb_pool_0
> >  {
> >  "/ring/info": {
> >    "name": "MP_mb_pool_0",
> >    "socket": 0,
> >    "flags": "0x0",
> >    "producer_type": "MP",
> >    "consumer_type": "MC",
> >    "size": 262144,
> >    "mask": "0x3",
> >    "capacity": 262143,
> >    "used_count": 153197,
> >    "consumer_tail": 2259,
> >    "consumer_head": 2259,
> >    "producer_tail": 155456,
> >    "producer_head": 155456,
> > >>>
> > >>> What would an external user make of such an information?
> > >>>
> > >>> I'd like to have a better idea what your usecase is.
> > >>> If it is for debugging, well, gdb is probably a better candidate.
> > >>>
> > >>>
> > >> Hi David,
> > >> Thanks for your question and I'm sorry for getting back to you so late.
> > >> There was a problem with my mailbox and I lost all my mails.
> > >>
> > >> The ring information exported by telemetry can be used to check the ring
> > >> status periodically during normal use. When an error occurs, the fault
> > >> cause can be deduced based on the information.
> > >> GDB is more suitable for locating errors only when they are sure that
> > >> errors will occur.
> > >
> > > Yes, when an error occurs, you can use GDB,
> > > and you don't need all these internal values in telemetry.
> > >
> > >
> > Hi, David, Thomas,
> >
> > Would it be better to delete the last four items?
> > "consumer_tail": 2259,
> > "consumer_head": 2259,
> > "producer_tail": 155456,
> > "producer_head": 155456,
>
> Yes it would be better.
> David, other maintainers, would it make the telemetry command a good idea?
>
>

Without the ring head/tail exposed, it seems ok.
It still exposes the ring flags which are kind of internal things, but
those are parts of the API/ABI, iiuc, so it should not be an issue.


-- 
David Marchand



[PATCH v3] net/mlx5: fix RSS expansion inner buffer overflow.

2023-07-06 Thread Maayan Kashani
The stack used for RSS expansion overflowed and trashed the RSS expansion data
(buf->entry[MLX5_RSS_EXP_ELT_N]).
Due to this overflow, packets such as ARP or LACP whose RSS types were
overwritten are dropped.

This patch increases the buffer size to avoid such overflows and adds a
relevant assert for the future.

Bugzilla ID: 1173

Signed-off-by: Maayan Kashani 
Acked-by: Ori Kam 
---
 drivers/net/mlx5/mlx5_flow.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index cf83db7b60..41e298855b 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -374,7 +374,7 @@ mlx5_flow_expand_rss_skip_explicit(const struct 
mlx5_flow_expand_node graph[],
return next;
 }
 
-#define MLX5_RSS_EXP_ELT_N 16
+#define MLX5_RSS_EXP_ELT_N 32
 
 /**
  * Expand RSS flows into several possible flows according to the RSS hash
@@ -539,6 +539,7 @@ mlx5_flow_expand_rss(struct mlx5_flow_expand_rss *buf, 
size_t size,
if (lsize > size)
return -EINVAL;
n = elt * sizeof(*item);
+   MLX5_ASSERT((buf->entries) < MLX5_RSS_EXP_ELT_N);
buf->entry[buf->entries].priority =
stack_pos + 1 + missed;
buf->entry[buf->entries].pattern = addr;
-- 
2.25.1



RE: [PATCH v2 1/2] net/virtio: fix legacy device IO port map in secondary process

2023-07-06 Thread Xia, Chenbo
> -Original Message-
> From: David Marchand 
> Sent: Monday, July 3, 2023 3:48 PM
> To: Li, Miao 
> Cc: dev@dpdk.org; sta...@dpdk.org; Maxime Coquelin
> ; Xia, Chenbo 
> Subject: Re: [PATCH v2 1/2] net/virtio: fix legacy device IO port map in
> secondary process
> 
> On Thu, Jun 29, 2023 at 4:27 AM Miao Li  wrote:
> >
> > When doing IO port map for legacy device in secondary process,
> > vfio_cfg setup for legacy device like vfio_group_fd and vfio_dev_fd
> > is missing. So, in secondary process, rte_pci_map_device is added
> > for legacy device to setup vfio_cfg and fill in region info like in
> > primary process.
> 
> I think, in legacy mode, there is no PCI mappable memory.
> So there should be no need for this call to rte_pci_map_device.
> 
> What is missing is a vfio setup, is this correct?
> I'd rather see this issue be fixed in the pci_vfio_ioport_map() function.

Thinking about this again: pci_vfio_ioport_map is defined to map a specific
ioport, so it does not make sense to do any device setup in such a function.
Any reason why we can't call rte_pci_map_device in secondary/legacy? The
rte_pci_map_device function is defined to set up the device and set up BAR
mapping if needed. The secondary process for any driver needs to set up the
device and BAR mapping again (right?). For a legacy device it can skip the
BAR mapping part, which rte_pci_map_device already does.

Any comments?

Thanks,
Chenbo

> 
> 
> >> Fixes: 512e27eeb743 ("net/virtio: move PCI specific dev init to PCI
> ethdev init")
> 
> This commit only moved code, and at this point, there was no need for
> a call to rte_pci_map_device in the secondary process case.
> It seems unlikely this is a faulty change.
> 
> The recent addition on the vfio side seems a better culprit, but I am
> fine with being proven wrong :-).
> 
> 
> > Cc: sta...@dpdk.org
> >
> > Signed-off-by: Miao Li 
> 
> 
> --
> David Marchand



Re: [PATCH] vhost: fix build with gcc 4.8

2023-07-06 Thread Maxime Coquelin




On 7/3/23 10:21, Ali Alnubani wrote:

Adds braces around initializer to resolve the following
false-positive build error with gcc 4.8.5 on CentOS:
lib/vhost/vduse.c:441:9: error: missing braces around initializer
   [-Werror=missing-braces]

Fixes: 653327e191f0 ("vhost: add multiqueue support to VDUSE")
Cc: maxime.coque...@redhat.com

Signed-off-by: Ali Alnubani 
---
  lib/vhost/vduse.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/vhost/vduse.c b/lib/vhost/vduse.c
index a509daf80c..aa5f1f564d 100644
--- a/lib/vhost/vduse.c
+++ b/lib/vhost/vduse.c
@@ -438,7 +438,7 @@ vduse_device_create(const char *path)
pthread_t fdset_tid;
uint32_t i, max_queue_pairs, total_queues;
struct virtio_net *dev;
-   struct virtio_net_config vnet_config = { 0 };
+   struct virtio_net_config vnet_config = {{ 0 }};
uint64_t ver = VHOST_VDUSE_API_VERSION;
uint64_t features = VDUSE_NET_SUPPORTED_FEATURES;
struct vduse_dev_config *dev_config = NULL;



Applied to dpdk-next-virtio/main.

Thanks,
Maxime



Re: [PATCH v2 0/2] VDUSE fixes for v23.07

2023-07-06 Thread Maxime Coquelin




On 7/6/23 10:12, Maxime Coquelin wrote:

This small series brings a couple of VDUSE fixes
for v23.07, discovered during testing with OVS-DPDK.

Changes in v2:
==
- Define a common set of features to highlight delta
   between Vhost and VDUSE (David)
- Change patches order for simplification

Maxime Coquelin (2):
   vduse: fix missing event index features
   vhost: fix vduse features negotiation

  lib/vhost/socket.c | 19 +--
  lib/vhost/vduse.c  | 28 +++-
  lib/vhost/vduse.h  |  2 ++
  lib/vhost/vhost.h  |  8 +---
  lib/vhost/vhost_user.h |  9 +
  5 files changed, 32 insertions(+), 34 deletions(-)




Applied to dpdk-next-virtio/main.

Thanks,
Maxime



[Bug 1256] drivers/common/mlx5: mlx5_malloc() called on invalid socket ID when global MR cache is full and rte_extmem_* API is used

2023-07-06 Thread bugzilla
https://bugs.dpdk.org/show_bug.cgi?id=1256

Marius-Cristian Baciu (baciumariuscrist...@yahoo.com) changed:

   What|Removed |Added

 Resolution|FIXED   |---
 Status|RESOLVED|UNCONFIRMED

--- Comment #3 from Marius-Cristian Baciu (baciumariuscrist...@yahoo.com) ---
Hi,

Unfortunately that patch only targets a memory socket issue with the ASO
mechanism. However, in my setup ASO is never an issue - I actually do not
believe it is enabled.

To give a little more insight, the problem I am describing manifests on the
data path:
- rte_eth_tx_burst();
- mlx5_tx_burst_*() is called;
- at some later point, in mr_lookup_caches(), mr_btree_lookup() returns
UINT32_MAX because all 256 entries in the cache have been occupied and last
memory registration did not catch an empty slot;
- when mr_lookup_caches() fails, mlx5_mr_create() -> mlx5_mr_create_primary()
is called;
- mlx5_malloc() at line 723 fails because it is called with an inappropriate
socket ID: the socket ID of the memseg list associated with an external buffer
(registered earlier with rte_extmem_register()), EXTERNAL_HEAP_MIN_SOCKET_ID,
which does not actually have a valid heap associated from which memory could be allocated.

-- 
You are receiving this mail because:
You are the assignee for the bug.

RE: [EXT] [PATCH] ipsec: fix NAT-T length calculation

2023-07-06 Thread Konstantin Ananyev


Hi Akhil,

> 
> Hi Konstantin,
> Can you review this patch?
> 
> > UDP header length is included in sa->hdr_len. Take care of that in
> > L3 header and packet length calculation.
> >
> > Fixes: 01eef5907fc3 ("ipsec: support NAT-T")
> >
> > Signed-off-by: Xiao Liang 
> > ---
> >  lib/ipsec/esp_outb.c | 2 +-
> >  lib/ipsec/sa.c   | 2 +-
> >  2 files changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
> > index 9cbd9202f6..ec87b1dce2 100644
> > --- a/lib/ipsec/esp_outb.c
> > +++ b/lib/ipsec/esp_outb.c
> > @@ -198,7 +198,7 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa,
> > rte_be64_t sqc,
> > struct rte_udp_hdr *udph = (struct rte_udp_hdr *)
> > (ph + sa->hdr_len - sizeof(struct rte_udp_hdr));
> > udph->dgram_len = rte_cpu_to_be_16(mb->pkt_len - sqh_len -
> > -   sa->hdr_l3_off - sa->hdr_len);
> > +   sa->hdr_len + sizeof(struct rte_udp_hdr));

To be honest, it is not clear to me why we shouldn't take into account 
sa->hdr_l3_off
 any more.
Probably the author can explain.
I would also like the author of NAT-T support to chime in.
Radu, any comments on that patch?
Thanks
Konstantin

> > }
> >
> > /* update original and new ip header fields */
> > diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
> > index 59a547637d..2297bd6d72 100644
> > --- a/lib/ipsec/sa.c
> > +++ b/lib/ipsec/sa.c
> > @@ -371,7 +371,7 @@ esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct
> > rte_ipsec_sa_prm *prm)
> >
> > /* update l2_len and l3_len fields for outbound mbuf */
> > sa->tx_offload.val = rte_mbuf_tx_offload(sa->hdr_l3_off,
> > -   sa->hdr_len - sa->hdr_l3_off, 0, 0, 0, 0, 0);
> > +   prm->tun.hdr_len - sa->hdr_l3_off, 0, 0, 0, 0, 0);
> >
> > esp_outb_init(sa, sa->hdr_len, prm->ipsec_xform.esn.value);
> >  }
> > --
> > 2.40.0



Re: [PATCH v2 1/6] baseband/fpga_5gnr_fec: fix possible div by zero

2023-07-06 Thread Maxime Coquelin




On 5/25/23 20:28, Hernan Vargas wrote:

Add fix to have an early exit when z_c is zero to prevent a possible
division by zero.

Fixes: 44dc6faa796f ("baseband/fpga_5gnr_fec: add LDPC processing functions")
Cc: sta...@dpdk.org

Signed-off-by: Hernan Vargas 
Reviewed-by: Maxime Coquelin 
---
  drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 4 +++-
  1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c 
b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
index f29565af8cca..99390c48160c 100644
--- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
+++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
@@ -877,9 +877,11 @@ check_desc_error(uint32_t error_code) {
  static inline uint16_t
  get_k0(uint16_t n_cb, uint16_t z_c, uint8_t bg, uint8_t rv_index)
  {
+   uint16_t n = (bg == 1 ? N_ZC_1 : N_ZC_2) * z_c;
if (rv_index == 0)
return 0;
-   uint16_t n = (bg == 1 ? N_ZC_1 : N_ZC_2) * z_c;
+   if (z_c == 0)
+   return 0;
if (n_cb == n) {
if (rv_index == 1)
return (bg == 1 ? K0_1_1 : K0_1_2) * z_c;



Applied to dpdk-next-baseband/for-main.

Thanks,
Maxime



Re: [PATCH v2 0/6] changes for 23.07

2023-07-06 Thread Maxime Coquelin




On 5/25/23 20:28, Hernan Vargas wrote:

v2: Targeting 23.11. Update in commits 1,2 based on review comments.
v1: Targeting 23.07 if possible. Add support for AGX100 (N6000) and corner case 
fixes.

Hernan Vargas (6):
   baseband/fpga_5gnr_fec: fix possible div by zero
   baseband/fpga_5gnr_fec: fix seg fault unconf queue
   baseband/fpga_5gnr_fec: renaming for consistency
   baseband/fpga_5gnr_fec: add Vista Creek variant
   baseband/fpga_5gnr_fec: add AGX100 support
   baseband/fpga_5gnr_fec: cosmetic comment changes

  doc/guides/bbdevs/fpga_5gnr_fec.rst   |   72 +-
  drivers/baseband/fpga_5gnr_fec/agx100_pmd.h   |  273 ++
  .../baseband/fpga_5gnr_fec/fpga_5gnr_fec.h|  349 +--
  .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 2279 -
  .../fpga_5gnr_fec/rte_pmd_fpga_5gnr_fec.h |   27 +-
  drivers/baseband/fpga_5gnr_fec/vc_5gnr_pmd.h  |  140 +
  6 files changed, 2166 insertions(+), 974 deletions(-)
  create mode 100644 drivers/baseband/fpga_5gnr_fec/agx100_pmd.h
  create mode 100644 drivers/baseband/fpga_5gnr_fec/vc_5gnr_pmd.h



Applied patches 1 & 2 only, as the other ones aren't fixes and missed
submission deadline.

Maxime



Re: [PATCH v1 1/1] bbdev: extend range of allocation function

2023-07-06 Thread Maxime Coquelin




On 6/2/23 04:04, Nicolas Chautru wrote:

Realign the argument to unsigned int to
align with the number supported by the underlying
rte_mempool_get_bulk function.

Signed-off-by: Nicolas Chautru 
---
  lib/bbdev/rte_bbdev_op.h | 6 +++---
  1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/lib/bbdev/rte_bbdev_op.h b/lib/bbdev/rte_bbdev_op.h
index 96a390cd9b..9353fd588b 100644
--- a/lib/bbdev/rte_bbdev_op.h
+++ b/lib/bbdev/rte_bbdev_op.h
@@ -982,7 +982,7 @@ rte_bbdev_op_pool_create(const char *name, enum 
rte_bbdev_op_type type,
   */
  static inline int
  rte_bbdev_enc_op_alloc_bulk(struct rte_mempool *mempool,
-   struct rte_bbdev_enc_op **ops, uint16_t num_ops)
+   struct rte_bbdev_enc_op **ops, unsigned int num_ops)
  {
struct rte_bbdev_op_pool_private *priv;
  
@@ -1013,7 +1013,7 @@ rte_bbdev_enc_op_alloc_bulk(struct rte_mempool *mempool,

   */
  static inline int
  rte_bbdev_dec_op_alloc_bulk(struct rte_mempool *mempool,
-   struct rte_bbdev_dec_op **ops, uint16_t num_ops)
+   struct rte_bbdev_dec_op **ops, unsigned int num_ops)
  {
struct rte_bbdev_op_pool_private *priv;
  
@@ -1045,7 +1045,7 @@ rte_bbdev_dec_op_alloc_bulk(struct rte_mempool *mempool,

  __rte_experimental
  static inline int
  rte_bbdev_fft_op_alloc_bulk(struct rte_mempool *mempool,
-   struct rte_bbdev_fft_op **ops, uint16_t num_ops)
+   struct rte_bbdev_fft_op **ops, unsigned int num_ops)
  {
struct rte_bbdev_op_pool_private *priv;
  



Applied to dpdk-next-baseband/for-main.

Thanks,
Maxime



Re: [PATCH v11 1/2] mempool cache: add zero-copy get and put functions

2023-07-06 Thread Konstantin Ananyev

05/07/2023 18:18, Kamalakshitha Aligeri wrote:

From: Morten Brørup 

Zero-copy access to mempool caches is beneficial for PMD performance.
Furthermore, having a zero-copy mempool API is considered a precondition
for fixing a certain category of bugs, present in some PMDs: For
performance reasons, some PMDs had bypassed the mempool API in order to
achieve zero-copy access to the mempool cache. This can only be fixed
in those PMDs without a performance regression if the mempool library
offers zero-copy access APIs, so the PMDs can use the proper mempool
API instead of copy-pasting code from the mempool library.
Furthermore, the copy-pasted code in those PMDs has not been kept up to
date with the improvements of the mempool library, so when they bypass
the mempool API, mempool trace is missing and mempool statistics is not
updated.
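
For illustration, below is a minimal sketch of how a PMD could use the
zero-copy put API on its Tx free path. It is modeled on the i40e change in
patch 2/2; the helper name and its parameters are made up for this example
and are not part of the patch:

    #include <rte_lcore.h>
    #include <rte_mempool.h>

    /* Hypothetical helper: free 'n' object pointers from 'objs' back to
     * their pool 'mp', copying the pointers straight into the lcore cache
     * when the zero-copy path is available.
     */
    static inline void
    free_objs_zc(struct rte_mempool *mp, void **objs, unsigned int n)
    {
        struct rte_mempool_cache *cache;
        void **cache_objs;
        unsigned int i;

        cache = rte_mempool_default_cache(mp, rte_lcore_id());
        /* NULL is returned if there is no cache or the request cannot fit. */
        cache_objs = cache ? rte_mempool_cache_zc_put_bulk(cache, mp, n) : NULL;
        if (cache_objs != NULL) {
            /* Zero-copy: store the pointers directly in the cache. */
            for (i = 0; i < n; i++)
                cache_objs[i] = objs[i];
        } else {
            /* Fall back to the regular (copying) mempool API. */
            rte_mempool_put_bulk(mp, objs, n);
        }
    }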

Bugzilla ID: 1052

Signed-off-by: Morten Brørup 
Signed-off-by: Kamalakshitha Aligeri 

v11:
* Changed patch description and version to 23.07
v10:
* Added mempool test cases with zero-copy API's
v9:
* Also set rte_errno in zero-copy put function, if returning NULL.
   (Honnappa)
* Revert v3 comparison to prevent overflow if n is really huge and len is
   non-zero. (Olivier)
v8:
* Actually include the rte_errno header file.
   Note to self: The changes only take effect on the disk after the file in
   the text editor has been saved.
v7:
* Fix typo in function description. (checkpatch)
* Zero-copy functions may set rte_errno; include rte_errno header file.
   (ci/loongarch-compilation)
v6:
* Improve description of the 'n' parameter to the zero-copy get function.
   (Konstantin, Bruce)
* The caches used for zero-copy may not be user-owned, so remove this word
   from the function descriptions. (Kamalakshitha)
v5:
* Bugfix: Compare zero-copy get request to the cache size instead of the
   flush threshold; otherwise refill could overflow the memory allocated
   for the cache. (Andrew)
* Split the zero-copy put function into an internal function doing the
   work, and a public function with trace.
* Avoid code duplication by rewriting rte_mempool_do_generic_put() to use
   the internal zero-copy put function. (Andrew)
* Corrected the return type of rte_mempool_cache_zc_put_bulk() from void *
   to void **; it returns a pointer to an array of objects.
* Fix coding style: Add missing curly brackets. (Andrew)
v4:
* Fix checkpatch warnings.
v3:
* Bugfix: Respect the cache size; compare to the flush threshold instead
   of RTE_MEMPOOL_CACHE_MAX_SIZE.
* Added 'rewind' function for incomplete 'put' operations. (Konstantin)
* Replace RTE_ASSERTs with runtime checks of the request size.
   Instead of failing, return NULL if the request is too big. (Konstantin)
* Modified comparison to prevent overflow if n is really huge and len is
   non-zero. (Andrew)
* Updated the comments in the code.
v2:
* Fix checkpatch warnings.
* Fix missing registration of trace points.
* The functions are inline, so they don't go into the map file.
v1 changes from the RFC:
* Removed run-time parameter checks. (Honnappa)
   This is a hot fast path function; requiring correct application
   behaviour, i.e. function parameters must be valid.
* Added RTE_ASSERT for parameters instead.
   Code for this is only generated if built with RTE_ENABLE_ASSERT.
* Removed fallback when 'cache' parameter is not set. (Honnappa)
* Chose the simple get function; i.e. do not move the existing objects in
   the cache to the top of the new stack, just leave them at the bottom.
* Renamed the functions. Other suggestions are welcome, of course. ;-)
* Updated the function descriptions.
* Added the functions to trace_fp and version.map.

---
  app/test/test_mempool.c|  81 +++---
  lib/mempool/mempool_trace_points.c |   9 ++
  lib/mempool/rte_mempool.h  | 239 +
  lib/mempool/rte_mempool_trace_fp.h |  23 +++
  lib/mempool/version.map|   9 ++
  5 files changed, 311 insertions(+), 50 deletions(-)

diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index 8e493eda47..6d29f5bc7b 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -74,7 +74,7 @@ my_obj_init(struct rte_mempool *mp, __rte_unused void *arg,

  /* basic tests (done on one core) */
  static int
-test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
+test_mempool_basic(struct rte_mempool *mp, int use_external_cache, int 
use_zc_api)
  {
uint32_t *objnum;
void **objtable;
@@ -84,6 +84,7 @@ test_mempool_basic(struct rte_mempool *mp, int 
use_external_cache)
unsigned i, j;
int offset;
struct rte_mempool_cache *cache;
+   void **cache_objs;

if (use_external_cache) {
/* Create a user-owned mempool cache. */
@@ -100,8 +101,13 @@ test_mempool_basic(struct rte_mempool *mp, int 
use_external_cache)
rte_mempool_dump(stdout, mp);

printf("get an object\n");
-   if (rte_mempool_generic_get(mp, &obj, 1, cache)

Re: [PATCH v11 2/2] net/i40e: replace put function

2023-07-06 Thread Konstantin Ananyev

05/07/2023 18:18, Kamalakshitha Aligeri wrote:

Integrated zero-copy put API in mempool cache in i40e PMD.
On Ampere Altra server, l3fwd single core's performance improves by 5%
with the new API

Signed-off-by: Kamalakshitha Aligeri 
Reviewed-by: Ruifeng Wang 
Reviewed-by: Feifei Wang 
---
  .mailmap|  1 +
  drivers/net/i40e/i40e_rxtx_vec_common.h | 27 -
  2 files changed, 23 insertions(+), 5 deletions(-)

diff --git a/.mailmap b/.mailmap
index a9f4f28fba..2581d0efe7 100644
--- a/.mailmap
+++ b/.mailmap
@@ -677,6 +677,7 @@ Kai Ji 
  Kaiwen Deng 
  Kalesh AP 
  Kamalakannan R 
+Kamalakshitha Aligeri 
  Kamil Bednarczyk 
  Kamil Chalupnik 
  Kamil Rytarowski 
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h 
b/drivers/net/i40e/i40e_rxtx_vec_common.h
index fe1a6ec75e..35cdb31b2e 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -95,18 +95,35 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)

n = txq->tx_rs_thresh;

-/* first buffer to free from S/W ring is at index
- * tx_next_dd - (tx_rs_thresh-1)
- */
+   /* first buffer to free from S/W ring is at index
+* tx_next_dd - (tx_rs_thresh-1)
+*/
txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];

if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
+   struct rte_mempool *mp = txep[0].mbuf->pool;
+   struct rte_mempool_cache *cache = rte_mempool_default_cache(mp, 
rte_lcore_id());
+   void **cache_objs;
+
+   if (unlikely(!cache))
+   goto fallback;
+
+   cache_objs = rte_mempool_cache_zc_put_bulk(cache, mp, n);
+   if (unlikely(!cache_objs))
+   goto fallback;
+
for (i = 0; i < n; i++) {
-   free[i] = txep[i].mbuf;
+   cache_objs[i] = txep[i].mbuf;
/* no need to reset txep[i].mbuf in vector path */
}
-   rte_mempool_put_bulk(free[0]->pool, (void **)free, n);
goto done;
+
+fallback:
+   for (i = 0; i < n; i++)
+   free[i] = txep[i].mbuf;
+   rte_mempool_generic_put(mp, (void **)free, n, cache);
+   goto done;
+
}

m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
--


Acked-by: Konstantin Ananyev 


2.25.1





RE: [PATCH] app/testpmd: revert primary process polling all queues fix

2023-07-06 Thread Jiale, SongX
> -Original Message-
> From: Ferruh Yigit 
> Sent: Wednesday, July 5, 2023 10:32 PM
> To: Singh, Aman Deep ; Zhang, Yuying
> ; Burakov, Anatoly ;
> Jie Hai 
> Cc: dev@dpdk.org; Thomas Monjalon ; David
> Marchand ; sta...@dpdk.org; Jiale, SongX
> ; Yang, Qiming 
> Subject: [PATCH] app/testpmd: revert primary process polling all queues fix
> 
> For some drivers [1], testpmd forwarding is broken with commit [2].
> 
> This is because with [2] testpmd gets queue state from ethdev and
> forwarding is done only on queues in started state, but some drivers don't
> update queue status properly, and this breaks forwarding for those drivers.
> 
> Drivers should be fixed but more time is required to verify drivers again,
> instead reverting [2] for now to not break drivers.
> Target is to merge [2] back at the beginning of next release cycle and fix
> drivers accordingly.
> 
> [1]
> Bugzilla ID: 1259
> 
> [2]
> Fixes: 141a520b35f7 ("app/testpmd: fix primary process not polling all
> queues")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Ferruh Yigit 
> ---
Tested-by: Jiale Song 


Re: [PATCH 1/7] net/mlx5: fix the modify field check of tag

2023-07-06 Thread Thomas Monjalon
03/07/2023 15:31, Bing Zhao:
> Hi Stephen,
> If I understand correctly, do you mean that the internal value and rte_flow 
> API value may have some conflict?
> All the MLX5 internal enum values start from INT_MIN. When treating it as an
> int value, it would not have the same value as the rte_flow enums, unless all
> 2^32 values are defined.
> But yes, this has some risk since there is no limitation of the values in the 
> rte_flow API.

We can assume it will never happen.
This is good to go.


> > -Original Message-
> > From: Stephen Hemminger 
> > Sent: Friday, June 30, 2023 2:09 PM
> > To: Bing Zhao 
> > Cc: Matan Azrad ; Slava Ovsiienko
> > ; Ori Kam ; Suanming Mou
> > ; Raslan Darawsheh ;
> > dev@dpdk.org; Michael Baum 
> > Subject: Re: [PATCH 1/7] net/mlx5: fix the modify field check of tag
> > 
> > External email: Use caution opening links or attachments
> > 
> > 
> > On Fri, 30 Jun 2023 08:43:03 +0300
> > Bing Zhao  wrote:
> > 
> > > @@ -1117,9 +1117,10 @@ flow_dv_fetch_field(const uint8_t *data,
> > > uint32_t size)  static inline bool
> > > flow_modify_field_support_tag_array(enum rte_flow_field_id field)  {
> > > - switch (field) {
> > > + switch ((int)field) {
> > >   case RTE_FLOW_FIELD_TAG:
> > >   case RTE_FLOW_FIELD_MPLS:
> > > + case MLX5_RTE_FLOW_FIELD_META_REG:
> > 
> > Mixing internal and API fields seems like something that could get easily
> > broken by changes to rte_flow.
> 







Re: [PATCH] devtools: fix mailmap check for parentheses

2023-07-06 Thread Thomas Monjalon
04/07/2023 01:40, Stephen Hemminger:
> On Mon, 26 Jun 2023 12:24:03 +0200
> Thomas Monjalon  wrote:
> 
> > When checking names having parentheses, the grep matching was failing.
> > It is fixed by escaping the open parenthesis.
> > 
> > Also, the mailmap path was relative to the root directory.
> > The path is made absolute.
> > 
> > Fixes: e83d41f0694d ("mailmap: add list of contributors")
> > Fixes: 83812de4f2f3 ("devtools: move mailmap check after patch applied")
> > Cc: sta...@dpdk.org
> > 
> > Signed-off-by: Thomas Monjalon 
> > ---
> 
> Acked-by: Stephen Hemminger 

Applied





[PATCH v7 0/4] Recycle mbufs from Tx queue to Rx queue

2023-07-06 Thread Feifei Wang
Currently, the transmit side frees the buffers into the lcore cache and
the receive side allocates buffers from the lcore cache. The transmit
side typically frees 32 buffers resulting in 32*8=256B of stores to
lcore cache. The receive side allocates 32 buffers and stores them in
the receive side software ring, resulting in 32*8=256B of stores and
256B of load from the lcore cache.

This patch proposes a mechanism to avoid freeing to/allocating from
the lcore cache. i.e. the receive side will free the buffers from
transmit side directly into its software ring. This will avoid the 256B
of loads and stores introduced by the lcore cache. It also frees up the
cache lines used by the lcore cache. We call this mode mbufs recycle mode.

In the latest version, mbufs recycle mode is packaged as a separate API.
This allows the user to change rxq/txq pairing in real time in the data plane,
according to the application's analysis of the packet flow, for example:
---
Step 1: upper application analyse the flow direction
Step 2: recycle_rxq_info = rte_eth_recycle_rx_queue_info_get(rx_portid, 
rx_queueid)
Step 3: rte_eth_recycle_mbufs(rx_portid, rx_queueid, tx_portid, tx_queueid, 
recycle_rxq_info);
Step 4: rte_eth_rx_burst(rx_portid,rx_queueid);
Step 5: rte_eth_tx_burst(tx_portid,tx_queueid);
---
The above allows the user to change rxq/txq pairing at run time without needing
to know the direction of the flow in advance. This can effectively expand the
use scenarios of mbufs recycle mode.
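
For reference, a minimal sketch of the data-plane flow listed in the steps
above (the loop structure, burst size and port/queue variables are assumptions
for the example; rte_eth_recycle_mbufs() is assumed to take a pointer to the
rte_eth_recycle_rxq_info filled in step 2):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    static void
    forward_with_recycle(uint16_t rx_port, uint16_t rx_queue,
            uint16_t tx_port, uint16_t tx_queue)
    {
        struct rte_eth_recycle_rxq_info recycle_rxq_info;
        struct rte_mbuf *pkts[32];
        uint16_t nb_rx, nb_tx;

        /* Step 2: query the Rx queue once the rxq/txq pairing is decided. */
        rte_eth_recycle_rx_queue_info_get(rx_port, rx_queue, &recycle_rxq_info);

        for (;;) {
            /* Step 3: move used mbufs from the Tx ring into the Rx ring. */
            rte_eth_recycle_mbufs(rx_port, rx_queue, tx_port, tx_queue,
                    &recycle_rxq_info);
            /* Steps 4-5: regular burst receive and transmit. */
            nb_rx = rte_eth_rx_burst(rx_port, rx_queue, pkts, 32);
            nb_tx = rte_eth_tx_burst(tx_port, tx_queue, pkts, nb_rx);
            /* Drop whatever the Tx queue could not take. */
            while (nb_tx < nb_rx)
                rte_pktmbuf_free(pkts[nb_tx++]);
        }
    }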

Furthermore, mbufs recycle mode is no longer limited to the same PMD:
it supports moving mbufs between PMDs from different vendors, and the mbufs
can even be put anywhere into your Rx mbuf ring as long as the address of the
mbuf ring is provided.
In the latest version, we enable mbufs recycle mode in the i40e and ixgbe PMDs,
and also try using the i40e driver in Rx and the ixgbe driver in Tx, achieving
a 7-9% performance improvement with mbufs recycle mode.

Difference between mbuf recycle, the ZC API used in mempool, and the general path:
For the general path:
        Rx: 32 pkts memcpy from mempool cache to rx_sw_ring
        Tx: 32 pkts memcpy from tx_sw_ring to temporary variable +
            32 pkts memcpy from temporary variable to mempool cache
For the ZC API used in mempool:
        Rx: 32 pkts memcpy from mempool cache to rx_sw_ring
        Tx: 32 pkts memcpy from tx_sw_ring to zero-copy mempool cache
        Refer link:
        http://patches.dpdk.org/project/dpdk/patch/20230221055205.22984-2-kamalakshitha.alig...@arm.com/
For mbufs recycle:
        Rx/Tx: 32 pkts memcpy from tx_sw_ring to rx_sw_ring
Thus, in one loop, mbufs recycle mode saves 32+32=64 pkts memcpy compared to
the general path, and 32 pkts memcpy compared to the ZC API used in mempool.
So, mbufs recycle has its own benefits.

Testing status:
(1) dpdk l3fwd test with multiple drivers:
port 0: 82599 NIC   port 1: XL710 NIC
-
Without fast free   With fast free
Thunderx2:  +7.53%  +13.54%
-

(2) dpdk l3fwd test with same driver:
port 0 && 1: XL710 NIC
-
Without fast free   With fast free
Ampere altra:   +12.61% +11.42%
n1sdp:  +8.30%  +3.85%
x86-sse:+8.43%  +3.72%
-

(3) Performance comparison with ZC_mempool used
port 0 && 1: XL710 NIC
with fast free
-
With recycle buffer With zc_mempool
Ampere altra:   11.42%  3.54%
-

Furthermore, we add a recycle_mbuf engine in testpmd. Because the XL710 NIC has
an I/O bottleneck in testpmd on Ampere Altra, we cannot see a throughput change
compared with the I/O fwd engine. However, using the record command in testpmd:
'$set record-burst-stats on'
we can see that the ratio of 'Rx/Tx burst size of 32' is reduced. This
indicates that mbufs recycle can save CPU cycles.

V2:
1. Use data-plane API to enable direct-rearm (Konstantin, Honnappa)
2. Add 'txq_data_get' API to get txq info for Rx (Konstantin)
3. Use input parameter to enable direct rearm in l3fwd (Konstantin)
4. Add condition detection for direct rearm API (Morten, Andrew Rybchenko)

V3:
1. Seperate Rx and Tx operation with two APIs in direct-rearm (Konstantin)
2. Delete L3fwd change for direct rearm (Jerin)
3. enable direct rearm in ixgbe driver in Arm

v4:
1. Rename direct-rearm as buffer recycle. Based on this, function name
and variable name are changed to let this m

[PATCH v7 1/4] ethdev: add API for mbufs recycle mode

2023-07-06 Thread Feifei Wang
Add 'rte_eth_recycle_rx_queue_info_get' and 'rte_eth_recycle_mbufs'
APIs to recycle used mbufs from a transmit queue of an Ethernet device,
and move these mbufs into a mbuf ring for a receive queue of an Ethernet
device. This can bypass mempool 'put/get' operations hence saving CPU
cycles.

For each burst of recycled mbufs, the rte_eth_recycle_mbufs() function performs
the following operations (see the sketch after the list):
- Copy used *rte_mbuf* buffer pointers from Tx mbuf ring into Rx mbuf
ring.
- Replenish the Rx descriptors with the recycling *rte_mbuf* mbufs freed
from the Tx mbuf ring.
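
A conceptual sketch of how the two driver callbacks compose (illustrative
only; the callback parameter types mirror the i40e implementations in patch
2/4, while the helper itself is hypothetical and not the real
rte_eth_recycle_mbufs() body):

    #include <rte_ethdev.h>

    /* 'txq'/'rxq' are the driver queue data pointers; 'tx_mbufs_reuse' and
     * 'rx_descriptors_refill' stand for the per-driver callbacks added below.
     */
    static uint16_t
    recycle_sketch(void *txq, void *rxq,
            uint16_t (*tx_mbufs_reuse)(void *, struct rte_eth_recycle_rxq_info *),
            void (*rx_descriptors_refill)(void *, uint16_t),
            struct rte_eth_recycle_rxq_info *recycle_rxq_info)
    {
        /* Tx side copies used mbuf pointers into the Rx mbuf ring. */
        uint16_t nb_mbufs = tx_mbufs_reuse(txq, recycle_rxq_info);

        if (nb_mbufs == 0)
            return 0;
        /* Rx side replenishes that many Rx descriptors from those mbufs. */
        rx_descriptors_refill(rxq, nb_mbufs);
        return nb_mbufs;
    }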

Suggested-by: Honnappa Nagarahalli 
Suggested-by: Ruifeng Wang 
Signed-off-by: Feifei Wang 
Reviewed-by: Ruifeng Wang 
Reviewed-by: Honnappa Nagarahalli 
---
 doc/guides/rel_notes/release_23_07.rst |   7 +
 lib/ethdev/ethdev_driver.h |  10 ++
 lib/ethdev/ethdev_private.c|   2 +
 lib/ethdev/rte_ethdev.c|  31 +
 lib/ethdev/rte_ethdev.h| 181 +
 lib/ethdev/rte_ethdev_core.h   |  23 +++-
 lib/ethdev/version.map |   2 +
 7 files changed, 250 insertions(+), 6 deletions(-)

diff --git a/doc/guides/rel_notes/release_23_07.rst 
b/doc/guides/rel_notes/release_23_07.rst
index 4459144140..7402262f22 100644
--- a/doc/guides/rel_notes/release_23_07.rst
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -200,6 +200,13 @@ New Features
 
   Enhanced the GRO library to support TCP packets over IPv6 network.
 
+* **Add mbufs recycling support. **
+
+  Added ``rte_eth_recycle_rx_queue_info_get`` and ``rte_eth_recycle_mbufs``
+  APIs which allow the user to copy used mbufs from the Tx mbuf ring
+  into the Rx mbuf ring. This feature supports the case that the Rx Ethernet
+  device is different from the Tx Ethernet device with respective driver
+  callback functions in ``rte_eth_recycle_mbufs``.
 
 Removed Items
 -
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 980f837ab6..b0c55a8523 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -58,6 +58,10 @@ struct rte_eth_dev {
eth_rx_descriptor_status_t rx_descriptor_status;
/** Check the status of a Tx descriptor */
eth_tx_descriptor_status_t tx_descriptor_status;
+   /** Pointer to PMD transmit mbufs reuse function */
+   eth_recycle_tx_mbufs_reuse_t recycle_tx_mbufs_reuse;
+   /** Pointer to PMD receive descriptors refill function */
+   eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
 
/**
 * Device data that is shared between primary and secondary processes
@@ -507,6 +511,10 @@ typedef void (*eth_rxq_info_get_t)(struct rte_eth_dev *dev,
 typedef void (*eth_txq_info_get_t)(struct rte_eth_dev *dev,
uint16_t tx_queue_id, struct rte_eth_txq_info *qinfo);
 
+typedef void (*eth_recycle_rxq_info_get_t)(struct rte_eth_dev *dev,
+   uint16_t rx_queue_id,
+   struct rte_eth_recycle_rxq_info *recycle_rxq_info);
+
 typedef int (*eth_burst_mode_get_t)(struct rte_eth_dev *dev,
uint16_t queue_id, struct rte_eth_burst_mode *mode);
 
@@ -1250,6 +1258,8 @@ struct eth_dev_ops {
eth_rxq_info_get_t rxq_info_get;
/** Retrieve Tx queue information */
eth_txq_info_get_t txq_info_get;
+   /** Retrieve mbufs recycle Rx queue information */
+   eth_recycle_rxq_info_get_t recycle_rxq_info_get;
eth_burst_mode_get_t   rx_burst_mode_get; /**< Get Rx burst mode */
eth_burst_mode_get_t   tx_burst_mode_get; /**< Get Tx burst mode */
eth_fw_version_get_t   fw_version_get; /**< Get firmware version */
diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 14ec8c6ccf..f8ab64f195 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -277,6 +277,8 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
fpo->rx_queue_count = dev->rx_queue_count;
fpo->rx_descriptor_status = dev->rx_descriptor_status;
fpo->tx_descriptor_status = dev->tx_descriptor_status;
+   fpo->recycle_tx_mbufs_reuse = dev->recycle_tx_mbufs_reuse;
+   fpo->recycle_rx_descriptors_refill = dev->recycle_rx_descriptors_refill;
 
fpo->rxq.data = dev->data->rx_queues;
fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 0840d2b594..ea89a101a1 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -5876,6 +5876,37 @@ rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t 
queue_id,
return 0;
 }
 
+int
+rte_eth_recycle_rx_queue_info_get(uint16_t port_id, uint16_t queue_id,
+   struct rte_eth_recycle_rxq_info *recycle_rxq_info)
+{
+   struct rte_eth_dev *dev;
+
+   RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+   dev = &rte_eth_devices[port_id];
+
+   if (queue_id >= dev->data->nb_rx_queues) {
+   RTE_ETHDEV_LOG(ERR, "Invalid Rx queue

[PATCH v7 2/4] net/i40e: implement mbufs recycle mode

2023-07-06 Thread Feifei Wang
Define the specific function implementations for the i40e driver.
Currently, mbufs recycle mode supports the 128-bit
vector path and the avx2 path, and can be enabled in both
fast free and no fast free modes.

Suggested-by: Honnappa Nagarahalli 
Signed-off-by: Feifei Wang 
Reviewed-by: Ruifeng Wang 
Reviewed-by: Honnappa Nagarahalli 
---
 drivers/net/i40e/i40e_ethdev.c|   1 +
 drivers/net/i40e/i40e_ethdev.h|   2 +
 .../net/i40e/i40e_recycle_mbufs_vec_common.c  | 147 ++
 drivers/net/i40e/i40e_rxtx.c  |  32 
 drivers/net/i40e/i40e_rxtx.h  |   4 +
 drivers/net/i40e/meson.build  |   1 +
 6 files changed, 187 insertions(+)
 create mode 100644 drivers/net/i40e/i40e_recycle_mbufs_vec_common.c

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 8271bbb394..50ba9aac94 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -496,6 +496,7 @@ static const struct eth_dev_ops i40e_eth_dev_ops = {
.flow_ops_get = i40e_dev_flow_ops_get,
.rxq_info_get = i40e_rxq_info_get,
.txq_info_get = i40e_txq_info_get,
+   .recycle_rxq_info_get = i40e_recycle_rxq_info_get,
.rx_burst_mode_get= i40e_rx_burst_mode_get,
.tx_burst_mode_get= i40e_tx_burst_mode_get,
.timesync_enable  = i40e_timesync_enable,
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 6f65d5e0ac..af758798e1 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -1355,6 +1355,8 @@ void i40e_rxq_info_get(struct rte_eth_dev *dev, uint16_t 
queue_id,
struct rte_eth_rxq_info *qinfo);
 void i40e_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_txq_info *qinfo);
+void i40e_recycle_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+   struct rte_eth_recycle_rxq_info *recycle_rxq_info);
 int i40e_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
   struct rte_eth_burst_mode *mode);
 int i40e_tx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
diff --git a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c 
b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
new file mode 100644
index 00..5663ecccde
--- /dev/null
+++ b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
@@ -0,0 +1,147 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2023 Arm Limited.
+ */
+
+#include 
+#include 
+
+#include "base/i40e_prototype.h"
+#include "base/i40e_type.h"
+#include "i40e_ethdev.h"
+#include "i40e_rxtx.h"
+
+#pragma GCC diagnostic ignored "-Wcast-qual"
+
+void
+i40e_recycle_rx_descriptors_refill_vec(void *rx_queue, uint16_t nb_mbufs)
+{
+   struct i40e_rx_queue *rxq = rx_queue;
+   struct i40e_rx_entry *rxep;
+   volatile union i40e_rx_desc *rxdp;
+   uint16_t rx_id;
+   uint64_t paddr;
+   uint64_t dma_addr;
+   uint16_t i;
+
+   rxdp = rxq->rx_ring + rxq->rxrearm_start;
+   rxep = &rxq->sw_ring[rxq->rxrearm_start];
+
+   for (i = 0; i < nb_mbufs; i++) {
+   /* Initialize rxdp descs. */
+   paddr = (rxep[i].mbuf)->buf_iova + RTE_PKTMBUF_HEADROOM;
+   dma_addr = rte_cpu_to_le_64(paddr);
+   /* flush desc with pa dma_addr */
+   rxdp[i].read.hdr_addr = 0;
+   rxdp[i].read.pkt_addr = dma_addr;
+   }
+
+   /* Update the descriptor initializer index */
+   rxq->rxrearm_start += nb_mbufs;
+   rx_id = rxq->rxrearm_start - 1;
+
+   if (unlikely(rxq->rxrearm_start >= rxq->nb_rx_desc)) {
+   rxq->rxrearm_start = 0;
+   rx_id = rxq->nb_rx_desc - 1;
+   }
+
+   rxq->rxrearm_nb -= nb_mbufs;
+
+   rte_io_wmb();
+   /* Update the tail pointer on the NIC */
+   I40E_PCI_REG_WRITE_RELAXED(rxq->qrx_tail, rx_id);
+}
+
+uint16_t
+i40e_recycle_tx_mbufs_reuse_vec(void *tx_queue,
+   struct rte_eth_recycle_rxq_info *recycle_rxq_info)
+{
+   struct i40e_tx_queue *txq = tx_queue;
+   struct i40e_tx_entry *txep;
+   struct rte_mbuf **rxep;
+   int i, n;
+   uint16_t nb_recycle_mbufs;
+   uint16_t avail = 0;
+   uint16_t mbuf_ring_size = recycle_rxq_info->mbuf_ring_size;
+   uint16_t mask = recycle_rxq_info->mbuf_ring_size - 1;
+   uint16_t refill_requirement = recycle_rxq_info->refill_requirement;
+   uint16_t refill_head = *recycle_rxq_info->refill_head;
+   uint16_t receive_tail = *recycle_rxq_info->receive_tail;
+
+   /* Get available recycling Rx buffers. */
+   avail = (mbuf_ring_size - (refill_head - receive_tail)) & mask;
+
+   /* Check Tx free thresh and Rx available space. */
+   if (txq->nb_tx_free > txq->tx_free_thresh || avail <= txq->tx_rs_thresh)
+   return 0;
+
+   /* check DD bits on threshold descr

[PATCH v7 3/4] net/ixgbe: implement mbufs recycle mode

2023-07-06 Thread Feifei Wang
Define specific function implementation for ixgbe driver.
Currently, recycle buffer mode can support the 128bit
vector path, and can be enabled in both fast free and
no fast free mode.

Suggested-by: Honnappa Nagarahalli 
Signed-off-by: Feifei Wang 
Reviewed-by: Ruifeng Wang 
Reviewed-by: Honnappa Nagarahalli 
---
 drivers/net/ixgbe/ixgbe_ethdev.c  |   1 +
 drivers/net/ixgbe/ixgbe_ethdev.h  |   3 +
 .../ixgbe/ixgbe_recycle_mbufs_vec_common.c| 143 ++
 drivers/net/ixgbe/ixgbe_rxtx.c|  29 
 drivers/net/ixgbe/ixgbe_rxtx.h|   4 +
 drivers/net/ixgbe/meson.build |   1 +
 6 files changed, 181 insertions(+)
 create mode 100644 drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 14a7d571e0..ea4c9dd561 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -543,6 +543,7 @@ static const struct eth_dev_ops ixgbe_eth_dev_ops = {
.set_mc_addr_list = ixgbe_dev_set_mc_addr_list,
.rxq_info_get = ixgbe_rxq_info_get,
.txq_info_get = ixgbe_txq_info_get,
+   .recycle_rxq_info_get = ixgbe_recycle_rxq_info_get,
.timesync_enable  = ixgbe_timesync_enable,
.timesync_disable = ixgbe_timesync_disable,
.timesync_read_rx_timestamp = ixgbe_timesync_read_rx_timestamp,
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 1291e9099c..22fc3be3d8 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -626,6 +626,9 @@ void ixgbe_rxq_info_get(struct rte_eth_dev *dev, uint16_t 
queue_id,
 void ixgbe_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_txq_info *qinfo);
 
+void ixgbe_recycle_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+   struct rte_eth_recycle_rxq_info *recycle_rxq_info);
+
 int ixgbevf_dev_rx_init(struct rte_eth_dev *dev);
 
 void ixgbevf_dev_tx_init(struct rte_eth_dev *dev);
diff --git a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c 
b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
new file mode 100644
index 00..9a8cc86954
--- /dev/null
+++ b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c
@@ -0,0 +1,143 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2023 Arm Limited.
+ */
+
+#include 
+#include 
+
+#include "ixgbe_ethdev.h"
+#include "ixgbe_rxtx.h"
+
+#pragma GCC diagnostic ignored "-Wcast-qual"
+
+void
+ixgbe_recycle_rx_descriptors_refill_vec(void *rx_queue, uint16_t nb_mbufs)
+{
+   struct ixgbe_rx_queue *rxq = rx_queue;
+   struct ixgbe_rx_entry *rxep;
+   volatile union ixgbe_adv_rx_desc *rxdp;
+   uint16_t rx_id;
+   uint64_t paddr;
+   uint64_t dma_addr;
+   uint16_t i;
+
+   rxdp = rxq->rx_ring + rxq->rxrearm_start;
+   rxep = &rxq->sw_ring[rxq->rxrearm_start];
+
+   for (i = 0; i < nb_mbufs; i++) {
+   /* Initialize rxdp descs. */
+   paddr = (rxep[i].mbuf)->buf_iova + RTE_PKTMBUF_HEADROOM;
+   dma_addr = rte_cpu_to_le_64(paddr);
+   /* Flush descriptors with pa dma_addr */
+   rxdp[i].read.hdr_addr = 0;
+   rxdp[i].read.pkt_addr = dma_addr;
+   }
+
+   /* Update the descriptor initializer index */
+   rxq->rxrearm_start += nb_mbufs;
+   if (rxq->rxrearm_start >= rxq->nb_rx_desc)
+   rxq->rxrearm_start = 0;
+
+   rxq->rxrearm_nb -= nb_mbufs;
+
+   rx_id = (uint16_t)((rxq->rxrearm_start == 0) ?
+   (rxq->nb_rx_desc - 1) : (rxq->rxrearm_start - 1));
+
+   /* Update the tail pointer on the NIC */
+   IXGBE_PCI_REG_WRITE(rxq->rdt_reg_addr, rx_id);
+}
+
+uint16_t
+ixgbe_recycle_tx_mbufs_reuse_vec(void *tx_queue,
+   struct rte_eth_recycle_rxq_info *recycle_rxq_info)
+{
+   struct ixgbe_tx_queue *txq = tx_queue;
+   struct ixgbe_tx_entry *txep;
+   struct rte_mbuf **rxep;
+   int i, n;
+   uint32_t status;
+   uint16_t nb_recycle_mbufs;
+   uint16_t avail = 0;
+   uint16_t mbuf_ring_size = recycle_rxq_info->mbuf_ring_size;
+   uint16_t mask = recycle_rxq_info->mbuf_ring_size - 1;
+   uint16_t refill_requirement = recycle_rxq_info->refill_requirement;
+   uint16_t refill_head = *recycle_rxq_info->refill_head;
+   uint16_t receive_tail = *recycle_rxq_info->receive_tail;
+
+   /* Get available recycling Rx buffers. */
+   avail = (mbuf_ring_size - (refill_head - receive_tail)) & mask;
+
+   /* Check Tx free thresh and Rx available space. */
+   if (txq->nb_tx_free > txq->tx_free_thresh || avail <= txq->tx_rs_thresh)
+   return 0;
+
+   /* check DD bits on threshold descriptor */
+   status = txq->tx_ring[txq->tx_next_dd].wb.status;
+   if (!(status & IXGBE_ADVTXD_STAT_DD))
+   return 0;
+
+   n = txq->

[PATCH v7 4/4] app/testpmd: add recycle mbufs engine

2023-07-06 Thread Feifei Wang
Add recycle mbufs engine for testpmd. This engine forwards pkts in
I/O forward mode, but enables the mbufs recycle feature to recycle used
txq mbufs into the rxq mbuf ring, which can bypass the mempool path and
save CPU cycles.
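
As a quick usage sketch (the core list and PCI address below are
placeholders, not taken from the patch), the new mode can be selected
either on the command line or from the interactive prompt:

    ./dpdk-testpmd -l 0-1 -a <port PCI address> -- -i --forward-mode=recycle_mbufs

    testpmd> set fwd recycle_mbufs
    testpmd> start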

Suggested-by: Jerin Jacob 
Signed-off-by: Feifei Wang 
Reviewed-by: Ruifeng Wang 
---
 app/test-pmd/meson.build|  1 +
 app/test-pmd/recycle_mbufs.c| 58 +
 app/test-pmd/testpmd.c  |  1 +
 app/test-pmd/testpmd.h  |  3 ++
 doc/guides/testpmd_app_ug/run_app.rst   |  1 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  5 +-
 6 files changed, 68 insertions(+), 1 deletion(-)
 create mode 100644 app/test-pmd/recycle_mbufs.c

diff --git a/app/test-pmd/meson.build b/app/test-pmd/meson.build
index d2e3f60892..6e5f067274 100644
--- a/app/test-pmd/meson.build
+++ b/app/test-pmd/meson.build
@@ -22,6 +22,7 @@ sources = files(
 'macswap.c',
 'noisy_vnf.c',
 'parameters.c',
+   'recycle_mbufs.c',
 'rxonly.c',
 'shared_rxq_fwd.c',
 'testpmd.c',
diff --git a/app/test-pmd/recycle_mbufs.c b/app/test-pmd/recycle_mbufs.c
new file mode 100644
index 00..6e9e1c5eb6
--- /dev/null
+++ b/app/test-pmd/recycle_mbufs.c
@@ -0,0 +1,58 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2023 Arm Limited.
+ */
+
+#include "testpmd.h"
+
+/*
+ * Forwarding of packets in I/O mode.
+ * Enable mbufs recycle mode to recycle txq used mbufs
+ * for rxq mbuf ring. This can bypass mempool path and
+ * save CPU cycles.
+ */
+static bool
+pkt_burst_recycle_mbufs(struct fwd_stream *fs)
+{
+   struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
+   uint16_t nb_rx;
+
+   /* Recycle used mbufs from the txq, and move these mbufs into
+* the rxq mbuf ring.
+*/
+   rte_eth_recycle_mbufs(fs->rx_port, fs->rx_queue,
+   fs->tx_port, fs->tx_queue, &(fs->recycle_rxq_info));
+
+   /*
+* Receive a burst of packets and forward them.
+*/
+   nb_rx = common_fwd_stream_receive(fs, pkts_burst, nb_pkt_per_burst);
+   if (unlikely(nb_rx == 0))
+   return false;
+
+   common_fwd_stream_transmit(fs, pkts_burst, nb_rx);
+
+   return true;
+}
+
+static void
+recycle_mbufs_stream_init(struct fwd_stream *fs)
+{
+   int rc;
+
+   /* Retrieve information about the given port's Rx queue
+* for recycling mbufs.
+*/
+   rc = rte_eth_recycle_rx_queue_info_get(fs->rx_port,
+   fs->rx_queue, &(fs->recycle_rxq_info));
+   if (rc != 0)
+   TESTPMD_LOG(WARNING,
+   "Failed to get rx queue mbufs recycle info\n");
+
+   common_fwd_stream_init(fs);
+}
+
+struct fwd_engine recycle_mbufs_engine = {
+   .fwd_mode_name  = "recycle_mbufs",
+   .stream_init= recycle_mbufs_stream_init,
+   .packet_fwd = pkt_burst_recycle_mbufs,
+};
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 1fc70650e0..b10128bb77 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -199,6 +199,7 @@ struct fwd_engine * fwd_engines[] = {
&icmp_echo_engine,
&noisy_vnf_engine,
&five_tuple_swap_fwd_engine,
+   &recycle_mbufs_engine,
 #ifdef RTE_LIBRTE_IEEE1588
&ieee1588_fwd_engine,
 #endif
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 1761768add..f1ddea16bd 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -188,6 +188,8 @@ struct fwd_stream {
struct pkt_burst_stats rx_burst_stats;
struct pkt_burst_stats tx_burst_stats;
struct fwd_lcore *lcore; /**< Lcore being scheduled. */
+   /**< Rx queue information for recycling mbufs */
+   struct rte_eth_recycle_rxq_info recycle_rxq_info;
 };
 
 /**
@@ -448,6 +450,7 @@ extern struct fwd_engine csum_fwd_engine;
 extern struct fwd_engine icmp_echo_engine;
 extern struct fwd_engine noisy_vnf_engine;
 extern struct fwd_engine five_tuple_swap_fwd_engine;
+extern struct fwd_engine recycle_mbufs_engine;
 #ifdef RTE_LIBRTE_IEEE1588
 extern struct fwd_engine ieee1588_fwd_engine;
 #endif
diff --git a/doc/guides/testpmd_app_ug/run_app.rst 
b/doc/guides/testpmd_app_ug/run_app.rst
index 6e9c552e76..24a086401e 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -232,6 +232,7 @@ The command line options are:
noisy
5tswap
shared-rxq
+   recycle_mbufs
 
 *   ``--rss-ip``
 
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst 
b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index b755c38c98..723ccb28cb 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -318,7 +318,7 @@ set fwd
 Set the packet forwarding mode::
 
testpmd> set fwd (io|mac|macswap|flowgen| \
- rxonly|txonly|csum|icmpecho|noisy|5tswap|shared-rxq) 
(""|retry)
+ 
r

RE: [PATCH 1/7] net/mlx5: fix the modify field check of tag

2023-07-06 Thread Bing Zhao
Thank you.

> -Original Message-
> From: Thomas Monjalon 
> Sent: Thursday, July 6, 2023 5:37 PM
> To: Stephen Hemminger ; Bing Zhao
> 
> Cc: dev@dpdk.org; Matan Azrad ; Slava Ovsiienko
> ; Ori Kam ; Suanming Mou
> ; Raslan Darawsheh ;
> dev@dpdk.org; Michael Baum 
> Subject: Re: [PATCH 1/7] net/mlx5: fix the modify field check of tag
> 
> External email: Use caution opening links or attachments
> 
> 
> 03/07/2023 15:31, Bing Zhao:
> > Hi Stephen,
> > If I understand correctly, do you mean that the internal value and the
> > rte_flow API value may have some conflict?
> > All the MLX5 internal enum values start from INT_MIN. When treated as an
> > int value, it would not have the same value as the rte_flow enums, unless
> > all 2^32 values are defined.
> > But yes, this has some risk since there is no limitation of the values in
> > the rte_flow API.
> 
> We can assume it will never happen.
> This is good to go.
> 
> 
> > > -Original Message-
> > > From: Stephen Hemminger 
> > > Sent: Friday, June 30, 2023 2:09 PM
> > > To: Bing Zhao 
> > > Cc: Matan Azrad ; Slava Ovsiienko
> > > ; Ori Kam ; Suanming Mou
> > > ; Raslan Darawsheh ;
> > > dev@dpdk.org; Michael Baum 
> > > Subject: Re: [PATCH 1/7] net/mlx5: fix the modify field check of tag
> > >
> > > External email: Use caution opening links or attachments
> > >
> > >
> > > On Fri, 30 Jun 2023 08:43:03 +0300
> > > Bing Zhao  wrote:
> > >
> > > > @@ -1117,9 +1117,10 @@ flow_dv_fetch_field(const uint8_t *data,
> > > > uint32_t size)  static inline bool
> > > > flow_modify_field_support_tag_array(enum rte_flow_field_id field)  {
> > > > - switch (field) {
> > > > + switch ((int)field) {
> > > >   case RTE_FLOW_FIELD_TAG:
> > > >   case RTE_FLOW_FIELD_MPLS:
> > > > + case MLX5_RTE_FLOW_FIELD_META_REG:
> > >
> > > Mixing internal and API fields seems like something that could get
> > > easily broken by changes to rte_flow.
> >
> 
> 
> 
> 
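
For illustration, the pattern under discussion boils down to the following
sketch (names are hypothetical, not the mlx5 ones): driver-internal values
are defined outside the public enum's range, starting at INT_MIN, and the
switch operates on int so a single switch can accept both kinds of value.

    #include <limits.h>
    #include <stdbool.h>

    enum pub_field { PUB_FIELD_TAG, PUB_FIELD_MPLS };
    enum { INTERNAL_FIELD_META_REG = INT_MIN };

    static bool
    field_supported(enum pub_field field)
    {
            /* cast so the internal enumerator is a valid case label too */
            switch ((int)field) {
            case PUB_FIELD_TAG:
            case PUB_FIELD_MPLS:
            case INTERNAL_FIELD_META_REG:
                    return true;
            default:
                    return false;
            }
    }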



RE: [PATCH] app/testpmd: revert primary process polling all queues fix

2023-07-06 Thread Ali Alnubani
> -Original Message-
> From: Ferruh Yigit 
> Sent: Wednesday, July 5, 2023 5:32 PM
> To: Aman Singh ; Yuying Zhang
> ; Anatoly Burakov ;
> Jie Hai 
> Cc: dev@dpdk.org; NBU-Contact-Thomas Monjalon (EXTERNAL)
> ; David Marchand ;
> sta...@dpdk.org; songx.ji...@intel.com; qiming.y...@intel.com
> Subject: [PATCH] app/testpmd: revert primary process polling all queues fix
> 
> For some drivers [1], testpmd forwarding is broken with commit [2].
> 
> This is because with [2] testpmd gets queue state from ethdev and
> forwarding is done only on queues in started state, but some drivers
> don't update queue status properly, and this breaks forwarding for those
> drivers.
> 
> Drivers should be fixed but more time is required to verify drivers
> again, instead reverting [2] for now to not break drivers.
> Target is to merge [2] back at the beginning of next release cycle and
> fix drivers accordingly.
> 
> [1]
> Bugzilla ID: 1259
> 
> [2]
> Fixes: 141a520b35f7 ("app/testpmd: fix primary process not polling all
> queues")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Ferruh Yigit 
> ---

Thanks, Ferruh. Tested on top of latest main (df60837ccd65).

Tested-by: Ali Alnubani 


Re: [EXT] [PATCH] ipsec: fix NAT-T length calculation

2023-07-06 Thread Radu Nicolau



On 06-Jul-23 10:08 AM, Konstantin Ananyev wrote:

Hi Akhil,


Hi Konstantin,
Can you review this patch?


UDP header length is included in sa->hdr_len. Take care of that in
L3 header and packet length calculation.

Fixes: 01eef5907fc3 ("ipsec: support NAT-T")

Signed-off-by: Xiao Liang 
---
  lib/ipsec/esp_outb.c | 2 +-
  lib/ipsec/sa.c   | 2 +-
  2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 9cbd9202f6..ec87b1dce2 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -198,7 +198,7 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa,
rte_be64_t sqc,
struct rte_udp_hdr *udph = (struct rte_udp_hdr *)
(ph + sa->hdr_len - sizeof(struct rte_udp_hdr));
udph->dgram_len = rte_cpu_to_be_16(mb->pkt_len - sqh_len -
-   sa->hdr_l3_off - sa->hdr_len);
+   sa->hdr_len + sizeof(struct rte_udp_hdr));

To be honest, it is not clear to me why we shouldn't take into account 
sa->hdr_l3_off
  any more.
Probably the author can explain.
Also would like author of  NAT-T support to chime in.
Radu, any comments on that patch?
I agree, hdr_l3_off should not be ignored. Also sa->hdr_len already 
includes the size of UDP header, see line 366 in esp_outb_tun_init in 
sa.c (or the line above this change, where the udph pointer is computed 
assuming this)

Thanks
Konstantin


}

/* update original and new ip header fields */
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 59a547637d..2297bd6d72 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -371,7 +371,7 @@ esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct
rte_ipsec_sa_prm *prm)

/* update l2_len and l3_len fields for outbound mbuf */
sa->tx_offload.val = rte_mbuf_tx_offload(sa->hdr_l3_off,
-   sa->hdr_len - sa->hdr_l3_off, 0, 0, 0, 0, 0);
+   prm->tun.hdr_len - sa->hdr_l3_off, 0, 0, 0, 0, 0);

esp_outb_init(sa, sa->hdr_len, prm->ipsec_xform.esn.value);
  }
--
2.40.0


Re: [PATCH] app/testpmd: revert primary process polling all queues fix

2023-07-06 Thread Ferruh Yigit
On 7/6/2023 10:24 AM, Jiale, SongX wrote:
>> -Original Message-
>> From: Ferruh Yigit 
>> Sent: Wednesday, July 5, 2023 10:32 PM
>> To: Singh, Aman Deep ; Zhang, Yuying
>> ; Burakov, Anatoly ;
>> Jie Hai 
>> Cc: dev@dpdk.org; Thomas Monjalon ; David
>> Marchand ; sta...@dpdk.org; Jiale, SongX
>> ; Yang, Qiming 
>> Subject: [PATCH] app/testpmd: revert primary process polling all queues fix
>>
>> For some drivers [1], testpmd forwarding is broken with commit [2].
>>
>> This is because with [2] testpmd gets queue state from ethdev and
>> forwarding is done only on queues in started state, but some drivers don't
>> update queue status properly, and this breaks forwarding for those drivers.
>>
>> Drivers should be fixed but more time is required to verify drivers again,
>> instead reverting [2] for now to not break drivers.
>> Target is to merge [2] back at the beginning of next release cycle and fix
>> drivers accordingly.
>>
>> [1]
>> Bugzilla ID: 1259
>>
>> [2]
>> Fixes: 141a520b35f7 ("app/testpmd: fix primary process not polling all
>> queues")
>> Cc: sta...@dpdk.org
>>
>> Signed-off-by: Ferruh Yigit 
>> ---
>
> Tested-by: Jiale Song 
>
> Tested-by: Ali Alnubani 
>

Applied to dpdk-next-net/main, thanks.


Re: [PATCH v2] app/pdump: exit if no device specified

2023-07-06 Thread Thomas Monjalon
03/07/2023 08:29, fengchengwen:
> Acked-by: Chengwen Feng 
> 
> On 2023/7/1 10:16, Stephen Hemminger wrote:
> > Simpler version of an earlier patch which had a good idea but was
> > implemented with more code than necessary.
> > If no device is specified don't start the capture loop.
> > 
> > Reported-by: usman.tanveer 
> > Signed-off-by: Stephen Hemminger 

Applied, thanks.




RE: [PATCH v2] pcap: support MTU set

2023-07-06 Thread Ido Goshen
I've suggested 2 ways to do it
1. Data path enforcement by pcap pmd [PATCH v4] 
http://patches.dpdk.org/project/dpdk/patch/20220606162147.57218-1-...@cgstowernetworks.com/
2. Control path only sets the underlying OS network interface MTU [PATCH v8]
http://patches.dpdk.org/project/dpdk/cover/20220620083944.51517-1-...@cgstowernetworks.com/

It was pretty long ago - not sure which is in favor

> -Original Message-
> From: Stephen Hemminger 
> Sent: Wednesday, 5 July 2023 18:19
> To: Ferruh Yigit 
> Cc: dev@dpdk.org; Ido Goshen 
> Subject: Re: [PATCH v2] pcap: support MTU set
> 
> On Wed, 5 Jul 2023 12:37:41 +0100
> Ferruh Yigit  wrote:
> 
> > On 7/4/2023 10:02 PM, Stephen Hemminger wrote:
> > > Support rte_eth_dev_set_mtu for pcap driver when the pcap device is
> > > configured to point to a network interface.
> > >
> > > This is rebased and consolidated from an earlier version.
> > > Added support for FreeBSD.
> > >
> >
> > As far as I understand motivation is to make pcap PMD behave close the
> > physical NIC and able to test the application MTU feature.
> > If so, Ido's v4 was simpler, which doesn't distinguish if pcap backed
> > by physical interface or .pcap file.
> > What was wrong with that approach?
> 
> I started with Ido's patch, then:
>   - combined the two patches into one.
>   - fixed the error handling (propagate errno correctly)
>   - add missing freebsd support
> 
> Normally I would just give feedback, but the patch was so old I was not sure
> if it was stuck and unlikely to get merged.


[PATCH v2] app/crypto-perf: fix socket ID default value

2023-07-06 Thread Ciara Power
Due to recent changes to the default device socket ID,
before being used as an index for the session mempool list,
the socket ID should be set to 0 if unknown (-1).

Fixes: 7dcd73e37965 ("drivers/bus: set device NUMA node to unknown by default")
Fixes: 64c469b9e7d8 ("app/crypto-perf: check range of socket id")
Cc: bruce.richard...@intel.com
Cc: olivier.m...@6wind.com
Cc: sta...@dpdk.org

Signed-off-by: Ciara Power 
Acked-by: Kai Ji 
---
v2: check if socket ID equals SOCKET_ID_ANY
---
 app/test-crypto-perf/main.c | 15 +--
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/app/test-crypto-perf/main.c b/app/test-crypto-perf/main.c
index af5bd0d23b..bc1f0f9659 100644
--- a/app/test-crypto-perf/main.c
+++ b/app/test-crypto-perf/main.c
@@ -193,11 +193,10 @@ cperf_initialize_cryptodev(struct cperf_options *opts, 
uint8_t *enabled_cdevs)
 #endif
 
struct rte_cryptodev_info cdev_info;
-   uint8_t socket_id = rte_cryptodev_socket_id(cdev_id);
-   /* range check the socket_id - negative values become big
-* positive ones due to use of unsigned value
-*/
-   if (socket_id >= RTE_MAX_NUMA_NODES)
+   int socket_id = rte_cryptodev_socket_id(cdev_id);
+
+   /* Use the first socket if SOCKET_ID_ANY is returned. */
+   if (socket_id == SOCKET_ID_ANY)
socket_id = 0;
 
rte_cryptodev_info_get(cdev_id, &cdev_info);
@@ -650,7 +649,11 @@ main(int argc, char **argv)
 
cdev_id = enabled_cdevs[cdev_index];
 
-   uint8_t socket_id = rte_cryptodev_socket_id(cdev_id);
+   int socket_id = rte_cryptodev_socket_id(cdev_id);
+
+   /* Use the first socket if SOCKET_ID_ANY is returned. */
+   if (socket_id == SOCKET_ID_ANY)
+   socket_id = 0;
 
ctx[i] = cperf_testmap[opts.test].constructor(
session_pool_socket[socket_id].sess_mp,
-- 
2.25.1



RE: [PATCH] doc: announce deprecation for security ops

2023-07-06 Thread Power, Ciara
Hi Akhil,

> -Original Message-
> From: Akhil Goyal 
> Sent: Tuesday 4 July 2023 20:45
> To: dev@dpdk.org
> Cc: tho...@monjalon.net; david.march...@redhat.com;
> jer...@marvell.com; ano...@marvell.com; ndabilpu...@marvell.com; De
> Lara Guarch, Pablo ;
> hemant.agra...@nxp.com; g.si...@nxp.com;
> konstantin.v.anan...@yandex.ru; Nicolau, Radu ;
> Power, Ciara ; ruifeng.w...@arm.com;
> ma...@nvidia.com; fanzhang@gmail.com; Akhil Goyal
> 
> Subject: [PATCH] doc: announce deprecation for security ops
> 
> Structure rte_security_ops and rte_security_ctx are meant to be used by
> rte_security library and the PMDs associated.
> These will be moved to an internal header in DPDK 23.11 release.
> 
> Signed-off-by: Akhil Goyal 
> ---


Seems a reasonable change to me.

Acked-by: Ciara Power 



RE: [PATCH v3] net/mlx5: fix RSS expansion inner buffer overflow.

2023-07-06 Thread Raslan Darawsheh
Hi,

> -Original Message-
> From: Maayan Kashani 
> Sent: Thursday, July 6, 2023 11:56 AM
> To: dev@dpdk.org
> Cc: Maayan Kashani ; Ori Kam ;
> Raslan Darawsheh ; Matan Azrad
> ; Slava Ovsiienko ; Suanming
> Mou 
> Subject: [PATCH v3] net/mlx5: fix RSS expansion inner buffer overflow.
> 
> The stack used for RSS expansion was overflowed and trashed the RSS
> expansion data.
[Raslan Darawsheh] line too long, wrap at 75 char will fix during integration 
> (buf->entry[MLX5_RSS_EXP_ELT_N]).
> Due to this overflow, packets such as ARP or LACP with overwritten RSS types
> will be dropped.
> 
> This increases the buffer size to avoid such overflows and adds relevant
> ASSERT for the future.
> 
> Bugzilla ID: 1173
> 
missing fixes tag and CC stable:
Fixes: 18ca4a4ec73a ("net/mlx5: support ESP SPI match and RSS hash")
Cc: sta...@dpdk.org
will fix during integration

missing:
Reported-by: Mário Kuka 
will fix during the integration

> Signed-off-by: Maayan Kashani 
> Acked-by: Ori Kam 
> ---
Reviewed-by: Raslan Darawsheh 

Patch applied to next-net-mlx,
Kindest regards
Raslan Darawsheh


[PATCH v2] examples/ipsec-secgw: fix of socket id default value

2023-07-06 Thread Kai Ji
Due to recent changes to the default device socket ID, before
being used as an index for the session mempool list,
set the socket ID to 0 if unknown (-1).

Fixes: 7dcd73e37965 ("drivers/bus: set device NUMA node to unknown by default")
Cc: olivier.m...@6wind.com
Cc: sta...@dpdk.org

Signed-off-by: Kai Ji 
---
 examples/ipsec-secgw/ipsec-secgw.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/examples/ipsec-secgw/ipsec-secgw.c 
b/examples/ipsec-secgw/ipsec-secgw.c
index 029749e522..72b3bfba9e 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -1699,6 +1699,9 @@ cryptodevs_init(enum eh_pkt_transfer_mode mode)
 
total_nb_qps += qp;
dev_conf.socket_id = rte_cryptodev_socket_id(cdev_id);
+   /* Use the first socket if SOCKET_ID_ANY is returned. */
+   if (dev_conf.socket_id == SOCKET_ID_ANY)
+   dev_conf.socket_id = 0;
dev_conf.nb_queue_pairs = qp;
dev_conf.ff_disable = RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO;
 
-- 
2.34.1



RE: [PATCH] service: avoid worker lcore exit deadlock

2023-07-06 Thread Van Haaren, Harry
> -Original Message-
> From: Mattias Rönnblom 
> Sent: Tuesday, July 4, 2023 10:44 PM
> To: Van Haaren, Harry ; Stephen Hemminger
> 
> Cc: hof...@lysator.liu.se; dev@dpdk.org; Suanming Mou
> ; tho...@monjalon.net;
> david.march...@redhat.com; mattias.ronnblom
> ; sta...@dpdk.org
> Subject: [PATCH] service: avoid worker lcore exit deadlock
> 
> Calling rte_exit() from a worker lcore thread causes a deadlock in
> rte_service_finalize().
> 
> This patch makes rte_service_finalize() deadlock-free by avoiding the
> need to synchronize with service lcore threads, which in turn is
> achieved by moving service and per-lcore state from the heap to being
> statically allocated.

Elegant solution to avoiding the malloc/free in cleanup issue.
Thanks for investigating & implementing the solution!

> The BSS segment increases with ~156 kB (on x86_64 with default
> RTE_MAX_LCORE and RTE_SERVICE_NUM_MAX).
> 
> According to the service perf autotest, this change also results in a
> slight reduction of service framework overhead.
> 
> Fixes: 33666b448f15 ("service: fix crash on exit")
> Cc: harry.van.haa...@intel.com
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Mattias Rönnblom 

Acked-by: Harry van Haaren 
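
For context, a minimal sketch of the trigger described in the quoted commit
message (the worker function and error condition are hypothetical):

    #include <stdlib.h>
    #include <rte_common.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>

    /* Calling rte_exit() on a worker lcore runs the EAL cleanup path and
     * ends up in rte_service_finalize(); before this patch that could
     * deadlock while service lcores were still running. */
    static int
    worker_main(void *arg)
    {
            RTE_SET_USED(arg);
            rte_exit(EXIT_FAILURE, "fatal error on lcore %u\n", rte_lcore_id());
            return 0; /* not reached */
    }

    /* launched from main() with rte_eal_remote_launch(worker_main, NULL, lcore_id) */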



RE: [PATCH v2] app/testpmd: add IP length field matching

2023-07-06 Thread Ori Kam


> -Original Message-
> From: Bing Zhao 
> Sent: Saturday, July 1, 2023 12:27 PM
> Subject: [PATCH v2] app/testpmd: add IP length field matching
> 
> Added support of parsing IPv4 total length and IPv6 payload length
> in the command line. The value of L3 length can be passed to the
> rte_flow API for testing.
> 
> Signed-off-by: Bing Zhao 
> ---

Acked-by: Ori Kam 
Best,
Ori


[PATCH] net/ice: allow setting CIR

2023-07-06 Thread markus . theil
From: Michael Rossberg 

The ice driver only allowed setting the peak information rate (PIR), while
the hardware also supports setting the committed information rate (CIR).
In many use cases both values are needed, therefore add support for CIR.
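
For reference, a hedged sketch (not part of the patch) of how both rates can
be handed to the driver through the generic traffic-management API; the
profile id and rates are made-up values, and rates are in bytes per second
(the driver converts them to Kbps internally):

    #include <rte_tm.h>

    static int
    add_cir_pir_profile(uint16_t port_id, uint32_t profile_id)
    {
            struct rte_tm_error error;
            struct rte_tm_shaper_params params = {
                    .committed = { .rate = 12500000 },  /* CIR: 100 Mbit/s guaranteed */
                    .peak      = { .rate = 125000000 }, /* PIR: 1 Gbit/s ceiling */
            };

            return rte_tm_shaper_profile_add(port_id, profile_id, &params, &error);
    }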

Signed-off-by: Michael Rossberg 
---
 drivers/net/ice/ice_tm.c | 39 ---
 1 file changed, 28 insertions(+), 11 deletions(-)

diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 34a0bfcff8..f5ea47ae83 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -693,6 +693,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
struct ice_vsi *vsi;
int ret_val = ICE_SUCCESS;
uint64_t peak = 0;
+   uint64_t committed = 0;
uint8_t priority;
uint32_t i;
uint32_t idx_vsi_child;
@@ -801,17 +802,33 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
q_teid = txq->q_teid;
if (tm_node->shaper_profile) {
/* Transfer from Byte per seconds to Kbps */
-   peak = tm_node->shaper_profile->profile.peak.rate;
-   peak = peak / 1000 * BITS_PER_BYTE;
-   ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
-  tm_node->tc, tm_node->id,
-  ICE_MAX_BW, (u32)peak);
-   if (ret_val) {
-   error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
-   PMD_DRV_LOG(ERR,
-   "configure queue %u bandwidth failed",
-   tm_node->id);
-   goto fail_clear;
+   if (tm_node->shaper_profile->profile.peak.rate > 0) {
+   peak = tm_node->shaper_profile->profile.peak.rate;
+   peak = peak / 1000 * BITS_PER_BYTE;
+   ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
+  tm_node->tc, tm_node->id,
+  ICE_MAX_BW, (u32)peak);
+   if (ret_val) {
+   error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+   PMD_DRV_LOG(ERR,
+   "configure queue %u peak bandwidth failed",
+   tm_node->id);
+   goto fail_clear;
+   }
+   }
+   if (tm_node->shaper_profile->profile.committed.rate > 0) {
+   committed = tm_node->shaper_profile->profile.committed.rate;
+   committed = committed / 1000 * BITS_PER_BYTE;
+   ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
+  tm_node->tc, tm_node->id,
+  ICE_MIN_BW, (u32)committed);
+   if (ret_val) {
+   error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+   PMD_DRV_LOG(ERR,
+   "configure queue %u committed bandwidth failed",
+   tm_node->id);
+   goto fail_clear;
+   }
}
}
priority = 7 - tm_node->priority;
-- 
2.41.0



[PATCH v2] net/mlx5: support symmetric RSS hash function

2023-07-06 Thread Xueming Li
This patch supports symmetric hash function that creating same
hash result for bi-direction traffic which having reverse
source and destination IP and L4 port.

Since the hash algorithom is different than spec(XOR), leave a
warning in validation.
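
For illustration, a hedged sketch of how an application might request the
symmetric function through the rte_flow RSS action; whether this patch maps
the device capability to RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ exactly is
an assumption, and the queue list and key handling are simplified:

    #include <rte_ethdev.h>
    #include <rte_flow.h>

    static const uint16_t rss_queues[] = { 0, 1, 2, 3 };

    static const struct rte_flow_action_rss rss_conf = {
            .func = RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ, /* assumed mapping */
            .types = RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP,
            .queue_num = 4,
            .queue = rss_queues,
    };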

Signed-off-by: Xueming Li 
---
 doc/guides/nics/mlx5.rst|  4 
 drivers/net/mlx5/mlx5.h |  3 +++
 drivers/net/mlx5/mlx5_devx.c| 11 ---
 drivers/net/mlx5/mlx5_flow.c| 10 --
 drivers/net/mlx5/mlx5_flow.h|  5 +
 drivers/net/mlx5/mlx5_flow_dv.c | 13 -
 drivers/net/mlx5/mlx5_flow_hw.c |  7 +++
 drivers/net/mlx5/mlx5_rx.h  |  2 +-
 drivers/net/mlx5/mlx5_rxq.c |  8 +---
 9 files changed, 53 insertions(+), 10 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index b9843edbd9..20420c7feb 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -110,6 +110,7 @@ Features
   and source only, destination only or both.
 - Several RSS hash keys, one for each flow type.
 - Default RSS operation with no hash key specification.
+- Symmetric RSS function.
 - Configurable RETA table.
 - Link flow control (pause frame).
 - Support for multiple MAC addresses.
@@ -708,6 +709,9 @@ Limitations
   The flow engine of a process cannot move from active to standby mode
   if preceding active application rules are still present and vice versa.
 
+- The symmetric RSS function is supported by swapping source and desitination
+  addresses and ports
+
 
 Statistics
 --
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 2a82348135..b7534933bc 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1509,6 +1509,7 @@ struct mlx5_mtr_config {
 
 /* RSS description. */
 struct mlx5_flow_rss_desc {
+   bool symmetric_hash_function; /**< Symmetric hash function */
uint32_t level;
uint32_t queue_num; /**< Number of entries in @p queue. */
uint64_t types; /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
@@ -1577,6 +1578,7 @@ struct mlx5_hrxq {
 #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H)
void *action; /* DV QP action pointer. */
 #endif
+   bool symmetric_hash_function; /* Symmetric hash function */
uint32_t hws_flags; /* Hw steering flags. */
uint64_t hash_fields; /* Verbs Hash fields. */
uint32_t rss_key_len; /* Hash key length in bytes. */
@@ -1648,6 +1650,7 @@ struct mlx5_obj_ops {
int (*hrxq_modify)(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq,
   const uint8_t *rss_key,
   uint64_t hash_fields,
+  bool symmetric_hash_function,
   const struct mlx5_ind_table_obj *ind_tbl);
void (*hrxq_destroy)(struct mlx5_hrxq *hrxq);
int (*drop_action_create)(struct rte_eth_dev *dev);
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 4369d2557e..f9d8dc6987 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -803,7 +803,8 @@ static void
 mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
   uint64_t hash_fields,
   const struct mlx5_ind_table_obj *ind_tbl,
-  int tunnel, struct mlx5_devx_tir_attr *tir_attr)
+  int tunnel, bool symmetric_hash_function,
+  struct mlx5_devx_tir_attr *tir_attr)
 {
struct mlx5_priv *priv = dev->data->dev_private;
bool is_hairpin;
@@ -834,6 +835,7 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const 
uint8_t *rss_key,
tir_attr->disp_type = MLX5_TIRC_DISP_TYPE_INDIRECT;
tir_attr->rx_hash_fn = MLX5_RX_HASH_FN_TOEPLITZ;
tir_attr->tunneled_offload_en = !!tunnel;
+   tir_attr->rx_hash_symmetric = symmetric_hash_function;
/* If needed, translate hash_fields bitmap to PRM format. */
if (hash_fields) {
struct mlx5_rx_hash_field_select *rx_hash_field_select =
@@ -902,7 +904,8 @@ mlx5_devx_hrxq_new(struct rte_eth_dev *dev, struct 
mlx5_hrxq *hrxq,
int err;
 
mlx5_devx_tir_attr_set(dev, hrxq->rss_key, hrxq->hash_fields,
-  hrxq->ind_table, tunnel, &tir_attr);
+  hrxq->ind_table, tunnel, 
hrxq->symmetric_hash_function,
+  &tir_attr);
hrxq->tir = mlx5_devx_cmd_create_tir(priv->sh->cdev->ctx, &tir_attr);
if (!hrxq->tir) {
DRV_LOG(ERR, "Port %u cannot create DevX TIR.",
@@ -969,13 +972,13 @@ static int
 mlx5_devx_hrxq_modify(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq,
   const uint8_t *rss_key,
   uint64_t hash_fields,
+  bool symmetric_hash_function,
   const struct mlx5_ind_table_obj *ind_tbl)
 {
struct mlx5_devx_modify_tir_attr modify_tir = {0};
 
  

[PATCH] net/mlx5: fix query for NIC flow cap

2023-07-06 Thread Ori Kam
Add query for nic flow table support bit.

Fixes: 5f44fb1958e5 ("common/mlx5: query capability of registers")
Cc: bi...@nvidia.com

Signed-off-by: Ori Kam 
Acked-by: Suanming Mou suanmi...@nvidia.com
---
 drivers/common/mlx5/mlx5_devx_cmds.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c 
b/drivers/common/mlx5/mlx5_devx_cmds.c
index ef87862a6d..66a77159a0 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -1078,6 +1078,7 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
 general_obj_types) &
  MLX5_GENERAL_OBJ_TYPES_CAP_CONN_TRACK_OFFLOAD);
attr->rq_delay_drop = MLX5_GET(cmd_hca_cap, hcattr, rq_delay_drop);
+   attr->nic_flow_table = MLX5_GET(cmd_hca_cap, hcattr, nic_flow_table);
attr->striding_rq = MLX5_GET(cmd_hca_cap, hcattr, striding_rq);
attr->ext_stride_num_range =
MLX5_GET(cmd_hca_cap, hcattr, ext_stride_num_range);
-- 
2.34.1



Re: [PATCH] doc: ensure sphinx output is reproducible

2023-07-06 Thread Christian Ehrhardt
On Mon, Jul 3, 2023 at 5:29 PM Thomas Monjalon  wrote:
>
> 29/06/2023 14:58, christian.ehrha...@canonical.com:
> > From: Christian Ehrhardt 
> >
> > By adding -j we build in parallel, to make building on multiprocessor
> > machines more effective. While that works it does also break
> > reproducible builds as the order of the sphinx generated searchindex.js
> > depends on the execution speed of the individual processes.
> [...]
> > -if Version(ver) >= Version('1.7'):
> > -sphinx_cmd += ['-j', 'auto']
>
> What is the impact on build speed on an average machine?

Hi,
I haven't tested this in isolation as it was just a mandatory change
on the Debian/Ubuntu side.
And the time for the doc build alone is hidden inside the
concurrency of meson.
But I can compare a full build [1] and a full build with the change [2].

That is an average build machine and it is 35 seconds slower with the
change to no more do doc builds in parallel.

[1]: 
https://launchpadlibrarian.net/673520160/buildlog_ubuntu-mantic-amd64.dpdk_22.11.2-2_BUILDING.txt.gz
[2]: 
https://launchpadlibrarian.net/674783718/buildlog_ubuntu-mantic-amd64.dpdk_22.11.2-3_BUILDING.txt.gz

-- 
Christian Ehrhardt
Senior Staff Engineer and acting Director, Ubuntu Server
Canonical Ltd


[PATCH v1] crypto/ipsec_mb: remove unused defines

2023-07-06 Thread Brian Dooley
removed AESNI_MB_DOCSIS_SEC_ENABLED defines as they are no longer used.

Fixes: fda5216fba55 ("crypto/aesni_mb: support DOCSIS protocol")
Cc: david.co...@intel.com

Signed-off-by: Brian Dooley 
---
 drivers/crypto/ipsec_mb/ipsec_mb_private.c  |  4 
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c  | 22 ++---
 drivers/crypto/ipsec_mb/pmd_aesni_mb_priv.h |  1 -
 3 files changed, 2 insertions(+), 25 deletions(-)

diff --git a/drivers/crypto/ipsec_mb/ipsec_mb_private.c 
b/drivers/crypto/ipsec_mb/ipsec_mb_private.c
index 64f2b4b604..f485d130b6 100644
--- a/drivers/crypto/ipsec_mb/ipsec_mb_private.c
+++ b/drivers/crypto/ipsec_mb/ipsec_mb_private.c
@@ -205,10 +205,6 @@ ipsec_mb_remove(struct rte_vdev_device *vdev)
rte_free(cryptodev->security_ctx);
cryptodev->security_ctx = NULL;
}
-#ifdef AESNI_MB_DOCSIS_SEC_ENABLED
-   rte_free(cryptodev->security_ctx);
-   cryptodev->security_ctx = NULL;
-#endif
 
for (qp_id = 0; qp_id < cryptodev->data->nb_queue_pairs; qp_id++)
ipsec_mb_qp_release(cryptodev, qp_id);
diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c 
b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index 7fcb8f99e0..9e298023d7 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -851,7 +851,6 @@ aesni_mb_session_configure(IMB_MGR *mb_mgr,
return 0;
 }
 
-#ifdef AESNI_MB_DOCSIS_SEC_ENABLED
 /** Check DOCSIS security session configuration is valid */
 static int
 check_docsis_sec_session(struct rte_security_session_conf *conf)
@@ -988,7 +987,6 @@ aesni_mb_set_docsis_sec_session_parameters(
free_mb_mgr(mb_mgr);
return ret;
 }
-#endif
 
 static inline uint64_t
 auth_start_offset(struct rte_crypto_op *op, struct aesni_mb_session *session,
@@ -1762,7 +1760,6 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
return 0;
 }
 
-#ifdef AESNI_MB_DOCSIS_SEC_ENABLED
 /**
  * Process a crypto operation containing a security op and complete a
  * IMB_JOB job structure for submission to the multi buffer library for
@@ -1853,7 +1850,6 @@ verify_docsis_sec_crc(IMB_JOB *job, uint8_t *status)
if (memcmp(job->auth_tag_output, crc, RTE_ETHER_CRC_LEN) != 0)
*status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
 }
-#endif
 
 static inline void
 verify_digest(IMB_JOB *job, void *digest, uint16_t len, uint8_t *status)
@@ -1921,8 +1917,6 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
struct aesni_mb_session *sess = NULL;
uint8_t *linear_buf = NULL;
int sgl = 0;
-
-#ifdef AESNI_MB_DOCSIS_SEC_ENABLED
uint8_t is_docsis_sec = 0;
 
if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
@@ -1933,7 +1927,6 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
is_docsis_sec = 1;
sess = SECURITY_GET_SESS_PRIV(op->sym->session);
} else
-#endif
sess = CRYPTODEV_GET_SYM_SESS_PRIV(op->sym->session);
 
if (likely(op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED)) {
@@ -1961,11 +1954,9 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
op->sym->aead.digest.data,
sess->auth.req_digest_len,
&op->status);
-#ifdef AESNI_MB_DOCSIS_SEC_ENABLED
else if (is_docsis_sec)
verify_docsis_sec_crc(job,
&op->status);
-#endif
else
verify_digest(job,
op->sym->auth.digest.data,
@@ -2098,12 +2089,10 @@ aesni_mb_dequeue_burst(void *queue_pair, struct 
rte_crypto_op **ops,
job = jobs[i];
op = deqd_ops[i];
 
-#ifdef AESNI_MB_DOCSIS_SEC_ENABLED
if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION)
retval = set_sec_mb_job_params(job, qp, op,
   &digest_idx);
else
-#endif
retval = set_mb_job_params(job, qp, op,
   &digest_idx, mb_mgr);
 
@@ -2259,12 +2248,10 @@ aesni_mb_dequeue_burst(void *queue_pair, struct 
rte_crypto_op **ops,
if (retval < 0)
break;
 
-#ifdef AESNI_MB_DOCSIS_SEC_ENABLED
if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION)
retval = set_sec_mb_job_params(job, qp, op,
&digest_idx);
else
-#endif
retval = set_mb_job_params(job, qp, op,
&digest_idx, mb_mgr);
 
@@ -2440,7 +2427,6 @@ struct rte_cryptodev_ops aesn

RE: [PATCH v7 1/4] ethdev: add API for mbufs recycle mode

2023-07-06 Thread Morten Brørup
> From: Feifei Wang [mailto:feifei.wa...@arm.com]
> Sent: Thursday, 6 July 2023 11.50
> 
> Add 'rte_eth_recycle_rx_queue_info_get' and 'rte_eth_recycle_mbufs'
> APIs to recycle used mbufs from a transmit queue of an Ethernet device,
> and move these mbufs into a mbuf ring for a receive queue of an Ethernet
> device. This can bypass mempool 'put/get' operations hence saving CPU
> cycles.
> 
> For each recycling mbufs, the rte_eth_recycle_mbufs() function performs
> the following operations:
> - Copy used *rte_mbuf* buffer pointers from Tx mbuf ring into Rx mbuf
> ring.
> - Replenish the Rx descriptors with the recycling *rte_mbuf* mbufs freed
> from the Tx mbuf ring.
> 
> Suggested-by: Honnappa Nagarahalli 
> Suggested-by: Ruifeng Wang 
> Signed-off-by: Feifei Wang 
> Reviewed-by: Ruifeng Wang 
> Reviewed-by: Honnappa Nagarahalli 
> ---

Acked-by: Morten Brørup 
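
For reference, a minimal usage sketch of the two APIs described above; it
mirrors the testpmd recycle_mbufs engine from patch 4/4 of this series
(single Rx/Tx queue pair assumed, port and queue setup omitted):

    #include <rte_ethdev.h>

    /* rxq_info is filled once after the queues are set up:
     *   rte_eth_recycle_rx_queue_info_get(rx_port, rx_queue, &rxq_info);
     */
    static uint16_t
    recycle_and_rx(uint16_t rx_port, uint16_t rx_queue,
                   uint16_t tx_port, uint16_t tx_queue,
                   struct rte_eth_recycle_rxq_info *rxq_info,
                   struct rte_mbuf **pkts, uint16_t nb_pkts)
    {
            /* move used Tx mbufs straight into the Rx mbuf ring,
             * bypassing the mempool put/get path */
            rte_eth_recycle_mbufs(rx_port, rx_queue, tx_port, tx_queue, rxq_info);

            return rte_eth_rx_burst(rx_port, rx_queue, pkts, nb_pkts);
    }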



RE: [PATCH v11 2/2] net/i40e: replace put function

2023-07-06 Thread Morten Brørup
> From: Kamalakshitha Aligeri [mailto:kamalakshitha.alig...@arm.com]
> Sent: Wednesday, 5 July 2023 19.18
> 
> Integrated zero-copy put API in mempool cache in i40e PMD.
> On Ampere Altra server, l3fwd single core's performance improves by 5%
> with the new API
> 
> Signed-off-by: Kamalakshitha Aligeri 
> Reviewed-by: Ruifeng Wang 
> Reviewed-by: Feifei Wang 
> ---

Acked-by: Morten Brørup 
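
For readers not familiar with the API, a hedged sketch of the zero-copy put
pattern referred to above, using the experimental mempool cache zero-copy
helpers; the fallback handling is a simplification, not the i40e code:

    #include <string.h>
    #include <rte_mempool.h>

    static void
    zc_put_objs(struct rte_mempool *mp, struct rte_mempool_cache *cache,
                void * const *objs, unsigned int n)
    {
            /* reserve n slots directly inside the cache, avoiding a pointer
             * copy through the mempool ring */
            void **slots = rte_mempool_cache_zc_put_bulk(cache, mp, n);

            if (slots != NULL)
                    memcpy(slots, objs, sizeof(objs[0]) * n);
            else
                    rte_mempool_generic_put(mp, objs, n, cache);
    }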



Re: [PATCH v2] app/testpmd: add IP length field matching

2023-07-06 Thread Ferruh Yigit
On 7/6/2023 12:38 PM, Ori Kam wrote:
> 
> 
>> -Original Message-
>> From: Bing Zhao 
>> Sent: Saturday, July 1, 2023 12:27 PM
>> Subject: [PATCH v2] app/testpmd: add IP length field matching
>>
>> Added support of parsing IPv4 total length and IPv6 payload length
>> in the command line. The value of L3 length can be passed to the
>> rte_flow API for testing.
>>
>> Signed-off-by: Bing Zhao 
>> ---
> 
> Acked-by: Ori Kam 
> 
Applied to dpdk-next-net/main, thanks.


Re: [PATCH] fib: fix adding a default route

2023-07-06 Thread Thomas Monjalon
03/07/2023 17:43, Vladimir Medvedkin:
> Fixed an issue that occurs when
> adding a default route as the first route.
> 
> Bugzilla ID: 1160
> Fixes: 7dc7868b200d ("fib: add DIR24-8 dataplane algorithm")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Vladimir Medvedkin 

Applied, thanks.





Re: [PATCH v3] eal: fix prompt info when remap_segment failed

2023-07-06 Thread Thomas Monjalon
05/07/2023 03:33, fengchengwen:
> This is a bugfix, I suggest adding Cc.
> 
> With above add, Acked-by: Chengwen Feng 
> 
> On 2023/7/4 20:17, Fengnan Chang wrote:
> > When there is not enough space for memsegs, we should prompt
> > which configuration should be modified instead of printing
> > some numbers.
> > 
> > Signed-off-by: Fengnan Chang 

Applied with
Fixes: dd61b34d580b ("mem: error if requesting more segments than 
MAX_MEMSEG")
Fixes: 66cc45e293ed ("mem: replace memseg with memseg lists")
Cc: sta...@dpdk.org





Re: [dpdk-dev] [PATCH v5 0/4] improve options help

2023-07-06 Thread Stephen Hemminger
On Thu, 06 Jul 2023 10:29:04 +0200
Thomas Monjalon  wrote:

> 29/06/2023 18:27, Stephen Hemminger:
> > On Mon,  5 Apr 2021 21:39:50 +0200
> > Thomas Monjalon  wrote:
> >   
> > > After v4, this series is split in several parts.
> > > The remaining 4 patches of this series are low priority.
> > > 
> > > Patches 1 and 3 are simple improvements.
> > > 
> > > Patches 2 and 4 lead to a new formatting of the usage text.
> > > It is a matter of taste and should be discussed more.
> > > 
> > > v5: no change
> > > 
> > > Thomas Monjalon (4):
> > >   eal: explain argv behaviour during init
> > >   eal: improve options usage text
> > >   eal: use macros for help option
> > >   app: hook in EAL usage help  
> > 
> > Thomas, this patchset seems ready but never made it in.
> > What is best disposition for it:
> >   1. Rebase and resubmit?
> >   2. I could add it to the log patch series WIP?
> >   3. Drop it since old?  
> 
> I've applied the patches 1 and 3 that you acked.
> 
> I let you revisit the patches 2 and 4 if you wish.

Thanks. Trying to reach 500 patches by 23.08 release.


Re: [PATCH v2 1/2] hash: fix reading unaligned bits implementation

2023-07-06 Thread Thomas Monjalon
30/06/2023 19:09, Vladimir Medvedkin:
> Fixes: 28ebff11c2dc ("hash: add predictable RSS")
> Cc: sta...@dpdk.org
> 
> Acked-by: Konstantin Ananyev 
> Tested-by: Konstantin Ananyev 
> Signed-off-by: Vladimir Medvedkin 

I've just merged another patch from you where the explanation is useless.
Here there is no explanation at all.
The tags are in the wrong order, and there is a checkpatch warning.

I apply this series, considering they are not big issues in this case,
but please consider this is the very last time.





RE: [v3 1/5] net/mlx5/hws: remove unneeded new line for DR_LOG

2023-07-06 Thread Raslan Darawsheh
Hi,

> -Original Message-
> From: Itamar Gozlan 
> Sent: Tuesday, July 4, 2023 7:05 PM
> To: Alex Vesker ; Slava Ovsiienko
> ; Matan Azrad ; NBU-
> Contact-Thomas Monjalon (EXTERNAL) ; Suanming
> Mou ; Ori Kam 
> Cc: dev@dpdk.org
> Subject: [v3 1/5] net/mlx5/hws: remove unneeded new line for DR_LOG
> 
> From: Alex Vesker 
> 
> In some places an extra new line was added; remove it to have clean prints.
> 
> Signed-off-by: Alex Vesker 
> Acked-by: Matan Azrad 
> ---
> v1 -> v3
> (1) amending a wrong subject prefix send (v1 instead of v3).
> (2) typo fix (uneeded -> unneeded)
> v2->v3
> 1. Right patches instead of wrong patches in the previous series
> v1->v2
> 1. Last patch in the series (net/mlx5/hws: support default miss action on FDB)
> needed some fixes to be properly rebased
> 
>  drivers/net/mlx5/hws/mlx5dr_action.c  | 30 +--
>  drivers/net/mlx5/hws/mlx5dr_cmd.c |  2 +-
>  drivers/net/mlx5/hws/mlx5dr_definer.c |  4 ++--
> drivers/net/mlx5/hws/mlx5dr_pat_arg.c |  4 ++--
>  4 files changed, 20 insertions(+), 20 deletions(-)
> 
Series applied to next-net-mlx,


Kindest regards,
Raslan Darawsheh


RE: [PATCH v2] net/mlx5: support symmetric RSS hash function

2023-07-06 Thread Matan Azrad


From: Xueming(Steven) Li 
> This patch supports symmetric hash function that creating same hash result
> for bi-direction traffic which having reverse source and destination IP and L4
> port.
> 
> Since the hash algorithom is different than spec(XOR), leave a warning in
> validation.
> 
> Signed-off-by: Xueming Li 

Acked-by: Matan Azrad 


RE: [PATCH v2] net/mlx5: support symmetric RSS hash function

2023-07-06 Thread Raslan Darawsheh
Hi,

> -Original Message-
> From: Xueming Li 
> Sent: Thursday, July 6, 2023 2:56 PM
> To: Matan Azrad ; Slava Ovsiienko
> ; Ori Kam ; Suanming Mou
> 
> Cc: Xueming(Steven) Li ; dev@dpdk.org
> Subject: [PATCH v2] net/mlx5: support symmetric RSS hash function
> 
> This patch supports symmetric hash function that creating same
> hash result for bi-direction traffic which having reverse
> source and destination IP and L4 port.
> 
> Since the hash algorithom is different than spec(XOR), leave a
fixed a typo algorithom -> algorithm

> warning in validation.
> 
> Signed-off-by: Xueming Li 
> ---
>  doc/guides/nics/mlx5.rst|  4 
>  drivers/net/mlx5/mlx5.h |  3 +++
>  drivers/net/mlx5/mlx5_devx.c| 11 ---
>  drivers/net/mlx5/mlx5_flow.c| 10 --
>  drivers/net/mlx5/mlx5_flow.h|  5 +
>  drivers/net/mlx5/mlx5_flow_dv.c | 13 -
>  drivers/net/mlx5/mlx5_flow_hw.c |  7 +++
>  drivers/net/mlx5/mlx5_rx.h  |  2 +-
>  drivers/net/mlx5/mlx5_rxq.c |  8 +---
>  9 files changed, 53 insertions(+), 10 deletions(-)
> 
> diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
> index b9843edbd9..20420c7feb 100644
> --- a/doc/guides/nics/mlx5.rst
> +++ b/doc/guides/nics/mlx5.rst
> @@ -110,6 +110,7 @@ Features
>and source only, destination only or both.
>  - Several RSS hash keys, one for each flow type.
>  - Default RSS operation with no hash key specification.
> +- Symmetric RSS function.
>  - Configurable RETA table.
>  - Link flow control (pause frame).
>  - Support for multiple MAC addresses.
> @@ -708,6 +709,9 @@ Limitations
>The flow engine of a process cannot move from active to standby mode
>if preceding active application rules are still present and vice versa.
> 
> +- The symmetric RSS function is supported by swapping source and
> desitination
Fixed typo desitination -> destination

Patch applied to next -net-mlx,

Kindest regards
Raslan Darawsheh

[PATCH] hash: fix segfault by adding param name NULL check

2023-07-06 Thread Conor Fogarty
Add a NULL pointer check for params->name, which is later
copied into the hash data structure. Without this check
the code segfaults on the strlcpy() of a NULL pointer.
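
For illustration, a minimal creation sketch (hypothetical sizes); with this
check a NULL .name now makes rte_hash_create() return NULL with rte_errno
set to EINVAL instead of crashing:

    #include <rte_hash.h>
    #include <rte_jhash.h>

    static struct rte_hash *
    create_example_hash(void)
    {
            struct rte_hash_parameters params = {
                    .name = "example_hash", /* must not be NULL */
                    .entries = 1024,
                    .key_len = sizeof(uint32_t),
                    .hash_func = rte_jhash,
                    .socket_id = 0,
            };

            return rte_hash_create(&params); /* NULL + rte_errno on failure */
    }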

Fixes: 48a399119619 ("hash: replace with cuckoo hash implementation")

Signed-off-by: Conor Fogarty 

---
Cc: pablo.de.lara.gua...@intel.com
---
 lib/hash/rte_cuckoo_hash.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/lib/hash/rte_cuckoo_hash.c b/lib/hash/rte_cuckoo_hash.c
index d92a903bb3..0aab091c4d 100644
--- a/lib/hash/rte_cuckoo_hash.c
+++ b/lib/hash/rte_cuckoo_hash.c
@@ -166,6 +166,7 @@ rte_hash_create(const struct rte_hash_parameters *params)
/* Check for valid parameters */
if ((params->entries > RTE_HASH_ENTRIES_MAX) ||
(params->entries < RTE_HASH_BUCKET_ENTRIES) ||
+   (params->name == NULL) ||
(params->key_len == 0)) {
rte_errno = EINVAL;
RTE_LOG(ERR, HASH, "rte_hash_create has invalid parameters\n");
-- 
2.25.1




Re: [dpdk-dev] [PATCH v5 0/4] improve options help

2023-07-06 Thread Thomas Monjalon
06/07/2023 16:44, Stephen Hemminger:
> Trying to reach 500 patches by 23.08 release.

Impossible. It is 23.07 :)




RE: [PATCH v2] examples/ipsec-secgw: fix of socket id default value

2023-07-06 Thread Power, Ciara



> -Original Message-
> From: Kai Ji 
> Sent: Thursday 6 July 2023 12:01
> To: dev@dpdk.org
> Cc: gak...@marvell.com; sta...@dpdk.org; Ji, Kai ; Matz,
> Olivier 
> Subject: [PATCH v2] examples/ipsec-secgw: fix of socket id default value
> 
> Due to recent changes to the default device socket ID, before being used as
> an index for session mempool list, set socket ID to 0 if unknown (-1).
> 
> Fixes: 7dcd73e37965 ("drivers/bus: set device NUMA node to unknown by
> default")
> Cc: olivier.m...@6wind.com
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Kai Ji 
> ---
>  examples/ipsec-secgw/ipsec-secgw.c | 3 +++

Acked-by: Ciara Power 


[PATCH v2 0/2] remove unused defines

2023-07-06 Thread Brian Dooley
This series removes some unused defines throughout common qat drivers
and crypto ipsec mb drivers. It also removes some defines that should
have been removed previously.

v2:
more defines removed in additional patch and changed fixline

Brian Dooley (2):
  crypto/ipsec_mb: remove unused defines
  common/qat: change define header

 drivers/common/qat/qat_qp.c |  2 +-
 drivers/crypto/ipsec_mb/ipsec_mb_private.c  |  4 
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c  | 22 ++---
 drivers/crypto/ipsec_mb/pmd_aesni_mb_priv.h |  1 -
 4 files changed, 3 insertions(+), 26 deletions(-)

-- 
2.25.1



[PATCH v2 1/2] crypto/ipsec_mb: remove unused defines

2023-07-06 Thread Brian Dooley
removed AESNI_MB_DOCSIS_SEC_ENABLED defines as they are no longer used.

Fixes: 66a9d8d0bc6d ("crypto/qat: remove security library presence checks")
Cc: maxime.coque...@redhat.com

Signed-off-by: Brian Dooley 
---
 drivers/crypto/ipsec_mb/ipsec_mb_private.c  |  4 
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c  | 22 ++---
 drivers/crypto/ipsec_mb/pmd_aesni_mb_priv.h |  1 -
 3 files changed, 2 insertions(+), 25 deletions(-)

diff --git a/drivers/crypto/ipsec_mb/ipsec_mb_private.c 
b/drivers/crypto/ipsec_mb/ipsec_mb_private.c
index 64f2b4b604..f485d130b6 100644
--- a/drivers/crypto/ipsec_mb/ipsec_mb_private.c
+++ b/drivers/crypto/ipsec_mb/ipsec_mb_private.c
@@ -205,10 +205,6 @@ ipsec_mb_remove(struct rte_vdev_device *vdev)
rte_free(cryptodev->security_ctx);
cryptodev->security_ctx = NULL;
}
-#ifdef AESNI_MB_DOCSIS_SEC_ENABLED
-   rte_free(cryptodev->security_ctx);
-   cryptodev->security_ctx = NULL;
-#endif
 
for (qp_id = 0; qp_id < cryptodev->data->nb_queue_pairs; qp_id++)
ipsec_mb_qp_release(cryptodev, qp_id);
diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c 
b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index 7fcb8f99e0..9e298023d7 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -851,7 +851,6 @@ aesni_mb_session_configure(IMB_MGR *mb_mgr,
return 0;
 }
 
-#ifdef AESNI_MB_DOCSIS_SEC_ENABLED
 /** Check DOCSIS security session configuration is valid */
 static int
 check_docsis_sec_session(struct rte_security_session_conf *conf)
@@ -988,7 +987,6 @@ aesni_mb_set_docsis_sec_session_parameters(
free_mb_mgr(mb_mgr);
return ret;
 }
-#endif
 
 static inline uint64_t
 auth_start_offset(struct rte_crypto_op *op, struct aesni_mb_session *session,
@@ -1762,7 +1760,6 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
return 0;
 }
 
-#ifdef AESNI_MB_DOCSIS_SEC_ENABLED
 /**
  * Process a crypto operation containing a security op and complete a
  * IMB_JOB job structure for submission to the multi buffer library for
@@ -1853,7 +1850,6 @@ verify_docsis_sec_crc(IMB_JOB *job, uint8_t *status)
if (memcmp(job->auth_tag_output, crc, RTE_ETHER_CRC_LEN) != 0)
*status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
 }
-#endif
 
 static inline void
 verify_digest(IMB_JOB *job, void *digest, uint16_t len, uint8_t *status)
@@ -1921,8 +1917,6 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
struct aesni_mb_session *sess = NULL;
uint8_t *linear_buf = NULL;
int sgl = 0;
-
-#ifdef AESNI_MB_DOCSIS_SEC_ENABLED
uint8_t is_docsis_sec = 0;
 
if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
@@ -1933,7 +1927,6 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
is_docsis_sec = 1;
sess = SECURITY_GET_SESS_PRIV(op->sym->session);
} else
-#endif
sess = CRYPTODEV_GET_SYM_SESS_PRIV(op->sym->session);
 
if (likely(op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED)) {
@@ -1961,11 +1954,9 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
op->sym->aead.digest.data,
sess->auth.req_digest_len,
&op->status);
-#ifdef AESNI_MB_DOCSIS_SEC_ENABLED
else if (is_docsis_sec)
verify_docsis_sec_crc(job,
&op->status);
-#endif
else
verify_digest(job,
op->sym->auth.digest.data,
@@ -2098,12 +2089,10 @@ aesni_mb_dequeue_burst(void *queue_pair, struct 
rte_crypto_op **ops,
job = jobs[i];
op = deqd_ops[i];
 
-#ifdef AESNI_MB_DOCSIS_SEC_ENABLED
if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION)
retval = set_sec_mb_job_params(job, qp, op,
   &digest_idx);
else
-#endif
retval = set_mb_job_params(job, qp, op,
   &digest_idx, mb_mgr);
 
@@ -2259,12 +2248,10 @@ aesni_mb_dequeue_burst(void *queue_pair, struct 
rte_crypto_op **ops,
if (retval < 0)
break;
 
-#ifdef AESNI_MB_DOCSIS_SEC_ENABLED
if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION)
retval = set_sec_mb_job_params(job, qp, op,
&digest_idx);
else
-#endif
retval = set_mb_job_params(job, qp, op,
&digest_idx, mb_mgr);
 
@@ -2440,7 +2427,6 @@ struct rte_cr

[PATCH v2 2/2] common/qat: change define header

2023-07-06 Thread Brian Dooley
change define from RTE_LIB_SECURITY to BUILD_QAT_SYM as
RTE_ETHER_CRC_LEN value is protected by BUILD_QAT_SYM.

Fixes: ce7a737c8f02 ("crypto/qat: support cipher-CRC offload")
Cc: kevin.osulli...@intel.com

Signed-off-by: Brian Dooley 
---
 drivers/common/qat/qat_qp.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
index 094d684abc..f284718441 100644
--- a/drivers/common/qat/qat_qp.c
+++ b/drivers/common/qat/qat_qp.c
@@ -11,7 +11,7 @@
 #include 
 #include 
 #include 
-#ifdef RTE_LIB_SECURITY
+#ifdef BUILD_QAT_SYM
 #include 
 #endif
 
-- 
2.25.1



RE: [PATCH v2 2/2] common/qat: change define header

2023-07-06 Thread Power, Ciara



> -Original Message-
> From: Brian Dooley 
> Sent: Thursday 6 July 2023 17:05
> To: Ji, Kai 
> Cc: dev@dpdk.org; gak...@marvell.com; Dooley, Brian
> ; O'Sullivan, Kevin 
> Subject: [PATCH v2 2/2] common/qat: change define header
> 
> change define from RTE_LIB_SECURITY to BUILD_QAT_SYM as
> RTE_ETHER_CRC_LEN value is protected by BUILD_QAT_SYM.
> 
> Fixes: ce7a737c8f02 ("crypto/qat: support cipher-CRC offload")
> Cc: kevin.osulli...@intel.com
> 
> Signed-off-by: Brian Dooley 
> ---
>  drivers/common/qat/qat_qp.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
> index 094d684abc..f284718441 100644


Acked-by: Ciara Power 


RE: [PATCH v2 1/2] crypto/ipsec_mb: remove unused defines

2023-07-06 Thread Power, Ciara
Hi Brian,

> -Original Message-
> From: Brian Dooley 
> Sent: Thursday 6 July 2023 17:05
> To: Ji, Kai ; De Lara Guarch, Pablo
> 
> Cc: dev@dpdk.org; gak...@marvell.com; Dooley, Brian
> ; maxime.coque...@redhat.com
> Subject: [PATCH v2 1/2] crypto/ipsec_mb: remove unused defines
> 
> removed AESNI_MB_DOCSIS_SEC_ENABLED defines as they are no longer
> used.
> 
> Fixes: 66a9d8d0bc6d ("crypto/qat: remove security library presence checks")
> Cc: maxime.coque...@redhat.com
> 

I think this fixes line should be:
Fixes: 798f9d134519 ("crypto/ipsec_mb: remove security lib presence checks")

Asides from that, code change looks good to me.

Acked-by: Ciara Power 


Re: [PATCH] ethtool: remove a redundant call to rte_eth_dev_stop()

2023-07-06 Thread Thomas Monjalon
04/07/2023 00:31, Stephen Hemminger:
> On Thu, 18 Aug 2022 15:18:36 +0500
> Usman Tanveer  wrote:
> 
> > Hi,
> > 
> > Can you please have a look and update the status?
> 
> Looks OK to me.
> 
> Acked-by: Stephen Hemminger 

better title: examples/ethtool: remove stop before start

Applied, thanks.




Re: [PATCH] member: fix PRNG seed reset in NitroSketch mode

2023-07-06 Thread Thomas Monjalon
03/07/2023 17:54, Stephen Hemminger:
> On Wed, 21 Jun 2023 00:17:20 +0300
> Dmitry Kozlyuk  wrote:
> 
> > Seeding the global PRNG at sketch creation
> > does not make the sketch operation deterministic:
> > it uses rte_rand() later, the PRNG may be seeded again by that point.
> > On the other hand, seeding the global PRNG with a hash seed
> > is likely undesired, because it may be low-entropy or even constant.
> > Deterministic operation can be achieved by seeding the PRNG externally.
> > 
> > Remove the call to rte_srand() at sketch creation.
> > Document that hash seeds are not used by SKETCH set summary type.
> > 
> > Fixes: db354bd2e1f8 ("member: add NitroSketch mode")
> > Cc: leyi.r...@intel.com
> > 
> > Signed-off-by: Dmitry Kozlyuk 
> 
> This raises a more global issue.
> rte_srand() overrides the system seed which is set during startup.
> This is a bad thing, it reduces the entropy in the random number generator.
> 
> There are two possible solutions to this:
> > 1. Remove all calls to rte_srand() and deprecate it.
> 2. Make rte_srand() add a fixed value to existing entropy. This is what the
>kernel PRNG does. It adds any user supplied additional entropy to original
>state.
> 
> Looking at current source.
>   - code in tests seeding PRNG with TSC. This is unnecessary and can be 
> removed.
>   - this code in member library. Should be removed.
> 
> Acked-by: Stephen Hemminger 

Applied, thanks.

What's next regarding rte_srand?




[PATCH v3 0/2] remove unused defines

2023-07-06 Thread Brian Dooley
This series removes some unused defines throughout common qat drivers
and crypto ipsec mb drivers. It also removes some defines that should
have been removed previously.

v3:
Incorrect fixline

v2:
more defines removed in additional patch and changed fixline

Brian Dooley (2):
  crypto/ipsec_mb: remove unused defines
  common/qat: change define header

 drivers/common/qat/qat_qp.c |  2 +-
 drivers/crypto/ipsec_mb/ipsec_mb_private.c  |  4 
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c  | 22 ++---
 drivers/crypto/ipsec_mb/pmd_aesni_mb_priv.h |  1 -
 4 files changed, 3 insertions(+), 26 deletions(-)

-- 
2.25.1



[PATCH v3 1/2] crypto/ipsec_mb: remove unused defines

2023-07-06 Thread Brian Dooley
removed AESNI_MB_DOCSIS_SEC_ENABLED defines as they are no longer used.

Fixes: 798f9d134519 ("crypto/ipsec_mb: remove security lib presence checks")
Cc: maxime.coque...@redhat.com

Signed-off-by: Brian Dooley 
---
 drivers/crypto/ipsec_mb/ipsec_mb_private.c  |  4 
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c  | 22 ++---
 drivers/crypto/ipsec_mb/pmd_aesni_mb_priv.h |  1 -
 3 files changed, 2 insertions(+), 25 deletions(-)

diff --git a/drivers/crypto/ipsec_mb/ipsec_mb_private.c 
b/drivers/crypto/ipsec_mb/ipsec_mb_private.c
index 64f2b4b604..f485d130b6 100644
--- a/drivers/crypto/ipsec_mb/ipsec_mb_private.c
+++ b/drivers/crypto/ipsec_mb/ipsec_mb_private.c
@@ -205,10 +205,6 @@ ipsec_mb_remove(struct rte_vdev_device *vdev)
rte_free(cryptodev->security_ctx);
cryptodev->security_ctx = NULL;
}
-#ifdef AESNI_MB_DOCSIS_SEC_ENABLED
-   rte_free(cryptodev->security_ctx);
-   cryptodev->security_ctx = NULL;
-#endif
 
for (qp_id = 0; qp_id < cryptodev->data->nb_queue_pairs; qp_id++)
ipsec_mb_qp_release(cryptodev, qp_id);
diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c 
b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index 7fcb8f99e0..9e298023d7 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -851,7 +851,6 @@ aesni_mb_session_configure(IMB_MGR *mb_mgr,
return 0;
 }
 
-#ifdef AESNI_MB_DOCSIS_SEC_ENABLED
 /** Check DOCSIS security session configuration is valid */
 static int
 check_docsis_sec_session(struct rte_security_session_conf *conf)
@@ -988,7 +987,6 @@ aesni_mb_set_docsis_sec_session_parameters(
free_mb_mgr(mb_mgr);
return ret;
 }
-#endif
 
 static inline uint64_t
 auth_start_offset(struct rte_crypto_op *op, struct aesni_mb_session *session,
@@ -1762,7 +1760,6 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
return 0;
 }
 
-#ifdef AESNI_MB_DOCSIS_SEC_ENABLED
 /**
  * Process a crypto operation containing a security op and complete a
  * IMB_JOB job structure for submission to the multi buffer library for
@@ -1853,7 +1850,6 @@ verify_docsis_sec_crc(IMB_JOB *job, uint8_t *status)
if (memcmp(job->auth_tag_output, crc, RTE_ETHER_CRC_LEN) != 0)
*status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
 }
-#endif
 
 static inline void
 verify_digest(IMB_JOB *job, void *digest, uint16_t len, uint8_t *status)
@@ -1921,8 +1917,6 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
struct aesni_mb_session *sess = NULL;
uint8_t *linear_buf = NULL;
int sgl = 0;
-
-#ifdef AESNI_MB_DOCSIS_SEC_ENABLED
uint8_t is_docsis_sec = 0;
 
if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
@@ -1933,7 +1927,6 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
is_docsis_sec = 1;
sess = SECURITY_GET_SESS_PRIV(op->sym->session);
} else
-#endif
sess = CRYPTODEV_GET_SYM_SESS_PRIV(op->sym->session);
 
if (likely(op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED)) {
@@ -1961,11 +1954,9 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
op->sym->aead.digest.data,
sess->auth.req_digest_len,
&op->status);
-#ifdef AESNI_MB_DOCSIS_SEC_ENABLED
else if (is_docsis_sec)
verify_docsis_sec_crc(job,
&op->status);
-#endif
else
verify_digest(job,
op->sym->auth.digest.data,
@@ -2098,12 +2089,10 @@ aesni_mb_dequeue_burst(void *queue_pair, struct 
rte_crypto_op **ops,
job = jobs[i];
op = deqd_ops[i];
 
-#ifdef AESNI_MB_DOCSIS_SEC_ENABLED
if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION)
retval = set_sec_mb_job_params(job, qp, op,
   &digest_idx);
else
-#endif
retval = set_mb_job_params(job, qp, op,
   &digest_idx, mb_mgr);
 
@@ -2259,12 +2248,10 @@ aesni_mb_dequeue_burst(void *queue_pair, struct 
rte_crypto_op **ops,
if (retval < 0)
break;
 
-#ifdef AESNI_MB_DOCSIS_SEC_ENABLED
if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION)
retval = set_sec_mb_job_params(job, qp, op,
&digest_idx);
else
-#endif
retval = set_mb_job_params(job, qp, op,
&digest_idx, mb_mgr);
 
@@ -2440,7 +2427,6 @@ struct rte_c

[PATCH v3 2/2] common/qat: change define header

2023-07-06 Thread Brian Dooley
change define from RTE_LIB_SECURITY to BUILD_QAT_SYM as
RTE_ETHER_CRC_LEN value is protected by BUILD_QAT_SYM.

Fixes: ce7a737c8f02 ("crypto/qat: support cipher-CRC offload")
Cc: kevin.osulli...@intel.com

Signed-off-by: Brian Dooley 
---
 drivers/common/qat/qat_qp.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
index 094d684abc..f284718441 100644
--- a/drivers/common/qat/qat_qp.c
+++ b/drivers/common/qat/qat_qp.c
@@ -11,7 +11,7 @@
 #include 
 #include 
 #include 
-#ifdef RTE_LIB_SECURITY
+#ifdef BUILD_QAT_SYM
 #include 
 #endif
 
-- 
2.25.1



RE: [PATCH v5 0/4] net/mlx5: introduce Tx datapath tracing

2023-07-06 Thread Raslan Darawsheh
Hi,

> -Original Message-
> From: Slava Ovsiienko 
> Sent: Wednesday, July 5, 2023 6:31 PM
> To: dev@dpdk.org
> Cc: jer...@marvell.com; Raslan Darawsheh 
> Subject: [PATCH v5 0/4] net/mlx5: introduce Tx datapath tracing
> 
> The mlx5 PMD provides send scheduling at a specific moment of time, and for
> that kind of application it would be extremely useful to have extra
> debug information - when and how packets were scheduled and when the
> actual sending was completed by the NIC hardware (this helps the application
> track internal delay issues).
> 
> Because the DPDK Tx datapath API does not provide any feedback
> from the driver and the feature looks to be mlx5 specific, it seems
> reasonable to engage the existing DPDK datapath tracing capability.
> 
> The work cycle is supposed to be:
>   - compile the application with tracing enabled
>   - run the application with EAL parameters configuring the tracing in the
>     mlx5 Tx datapath
>   - store the dump file with the gathered tracing information
>   - run the analysis script (in Python) to combine related events (packet
>     firing and completion) and see the data in a human-readable view
> 
> Below are detailed "how to" instructions for gathering all the debug data,
> including the full timing information, with an mlx5 NIC.
> 
> 
> 1. Build DPDK application with enabled datapath tracing
> 
> The meson option should be specified:
>--enable_trace_fp=true
> 
> The c_args should be specified:
>-DALLOW_EXPERIMENTAL_API
> 
> The DPDK configuration examples:
> 
>   meson configure --buildtype=debug -Denable_trace_fp=true
> -Dc_args='-DRTE_LIBRTE_MLX5_DEBUG -DRTE_ENABLE_ASSERT -
> DALLOW_EXPERIMENTAL_API' build
> 
>   meson configure --buildtype=debug -Denable_trace_fp=true
> -Dc_args='-DRTE_ENABLE_ASSERT -DALLOW_EXPERIMENTAL_API' build
> 
>   meson configure --buildtype=release -Denable_trace_fp=true
> -Dc_args='-DRTE_ENABLE_ASSERT -DALLOW_EXPERIMENTAL_API' build
> 
>   meson configure --buildtype=release -Denable_trace_fp=true
> -Dc_args='-DALLOW_EXPERIMENTAL_API' build
> 
> 
> 2. Configuring the NIC
> 
> If the sending completion timings are important, the NIC should be configured
> to provide realtime timestamps: the REAL_TIME_CLOCK_ENABLE NV settings
> parameter should be configured to TRUE, for example with the command below
> (followed by an FW/driver reset):
> 
>   sudo mlxconfig -d /dev/mst/mt4125_pciconf0 s
> REAL_TIME_CLOCK_ENABLE=1
> 
> 
> 3. Run DPDK application to gather the traces
> 
> EAL parameters controlling trace capability in runtime
> 
>   --trace=pmd.net.mlx5.tx - the regular expression enabling the tracepoints
> with matching names; at least "pmd.net.mlx5.tx"
> must be enabled to gather all events needed
> to analyze the mlx5 Tx datapath and its timings.
> By default all tracepoints are disabled.
> 
>   --trace-dir=/var/log - trace storing directory
> 
>   --trace-bufsz=B|K|M - optional, trace data buffer size
>per thread. The default is 1MB.
> 
>   --trace-mode=overwrite|discard  - optional, selects trace data buffer mode.
> 
> 
> 4. Installing or Building Babeltrace2 Package
> 
> The gathered trace data can be analyzed with the provided Python script.
> To parse the trace data, the script uses the Babeltrace2 library.
> The package should be either installed or built from source code as shown
> below:
> 
>   git clone https://github.com/efficios/babeltrace.git
>   cd babeltrace
>   ./bootstrap
>   ./configure -help
>   ./configure --disable-api-doc --disable-man-pages
>   --disable-python-bindings-doc --enable-python-plugins
>   --enable-python-bindings
> 
> 5. Running the Analyzing Script
> 
> The analysis script is located in the folder ./drivers/net/mlx5/tools.
> It requires Python 3.6 and the Babeltrace2 package, and it takes a single
> parameter: the trace data file. For example:
> 
>./mlx5_trace.py /var/log/rte-2023-01-23-AM-11-52-39
> 
> 
> 6. Interpreting the Script Output Data
> 
> All the timings are given in nanoseconds.
> The list of Tx (and coming Rx) bursts per port/queue is presented in the
> output.
> Each list element contains the list of built WQEs with specific opcodes, and
> each WQE contains the list of the encompassed packets to send.
> 
> Signed-off-by: Viacheslav Ovsiienko 
> 
> --
> v2: - comment addressed: "dump_trace" command is replaced with
> "save_trace"
> - Windows build failure addressed, Windows does not support tracing
> 
> v3: - tracepoint routines are moved to the net folder, no need to export
> - documentation added
> - testpmd patches moved out from series to the dedicated patches
> 
> v4: - Python comments addressed
> - codestyle issues fixed
> 
> v5: - traces are moved to the dedicated files, otherwise registration
>   header caused wrong code generation for 3rd party file
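
For illustration, the runtime options from step 3 above might be combined into a
single dpdk-testpmd invocation such as the following sketch (the build path,
PCI address and queue counts are placeholders, not taken from the thread):

    ./build/app/dpdk-testpmd -a 0000:3b:00.0 \
        --trace=pmd.net.mlx5.tx --trace-dir=/var/log --trace-bufsz=8M \
        -- --txq=2 --rxq=2 -i
    # After quitting testpmd, pass the resulting trace directory
    # (e.g. /var/log/rte-2023-01-23-AM-11-52-39) to mlx5_trace.py as in step 5.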

Re: [PATCH 1/1] app/mldev: add check for model and filelist option

2023-07-06 Thread Thomas Monjalon
23/03/2023 17:21, Srikanth Yalavarthi:
> The application currently doesn't check for empty models list and
> filelist entries. This causes the app to report incorrect
> error messages and test status when the lists are empty.
> 
> Fixes: bbd272edcb14 ("app/mldev: add ordered inferences")
> Fixes: f6661e6d9a3a ("app/mldev: validate model operations")
> 
> Signed-off-by: Srikanth Yalavarthi 

Applied, thanks.





Re: [PATCH v2] app/mldev: fix code formatting and typos

2023-07-06 Thread Thomas Monjalon
23/04/2023 06:58, Srikanth Yalavarthi:
> Updated ML application source files to have a uniform code formatting
> style throughout. Remove extra blank lines. Fix typos in application help.
> 
> Fixes: 8cb22a545447 ("app/mldev: fix debug build")
> Fixes: da6793390596 ("app/mldev: support inference validation")
> Fixes: c0e871657d6a ("app/mldev: support queue pairs and size")
> 
> Signed-off-by: Srikanth Yalavarthi 

Applied, thanks.





Re: [PATCH] member: fix PRNG seed reset in NitroSketch mode

2023-07-06 Thread Stephen Hemminger
On Thu, 06 Jul 2023 18:20:19 +0200
Thomas Monjalon  wrote:

> > 
> > This raises a more global issue.
> > rte_srand() overrides the system seed which is set during startup.
> > This is a bad thing, it reduces the entropy in the random number generator.
> > 
> > There are two possible solutions to this:
> > 1. Remove all calls to rte_srand() and deprecate it.
> > 2. Make rte_srand() add a fixed value to existing entropy. This is what the
> >    kernel PRNG does. It adds any user supplied additional entropy to
> >    original state.
> > 
> > Looking at current source.
> >   - code in tests seeding PRNG with TSC. This is unnecessary and can be 
> > removed.
> >   - this code in member library. Should be removed.
> > 
> > Acked-by: Stephen Hemminger   
> 
> Applied, thanks.
> 
> What's next regarding rte_srand?

I am not a random number expert and the topic gets complex with tradeoffs:
how secure do you want it, versus how fast, versus how paranoid.

OpenBSD is paranoid. The Linux kernel chooses secure. It looks like DPDK is
choosing fast, like the FreeBSD PRNG.

The problem is that (despite documentation) applications end up needing
cryptographically secure random numbers. Examples are hash seeds or
session keys.
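
For illustration only, a minimal C sketch of the two usages touched on above -
explicit seeding for reproducible tests versus leaving the EAL startup seed
alone. The function names are hypothetical:

    #include <stdint.h>
    #include <rte_random.h>   /* rte_srand(), rte_rand() */

    /* Test-only: seed the PRNG explicitly so sketch/hash behaviour is
     * reproducible across runs. Normal applications should not do this. */
    static void
    make_run_reproducible(uint64_t test_seed)
    {
            rte_srand(test_seed);
    }

    /* Application code: rely on the entropy installed by rte_eal_init();
     * note that rte_rand() is not a cryptographically secure generator. */
    static uint64_t
    pick_hash_seed(void)
    {
            return rte_rand();
    }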




Re: [PATCH 0/1] mempool: implement index-based per core cache

2023-07-06 Thread Stephen Hemminger
On Thu, 13 Jan 2022 05:31:18 +
Dharmik Thakkar  wrote:

> Hi,
> 
> Thank you for your valuable review comments and suggestions!
> 
> I will be sending out a v2 in which I have increased the size of the mempool 
> to 32GB by using division by sizeof(uintptr_t).
> However, I am seeing ~5% performance degradation with mempool_perf_autotest 
> (for bulk size of 32) with this change
> when compared to the base performance.
> Earlier, without this change, I was seeing an improvement of ~13% compared to 
> base performance. So, this is a significant degradation.
> I would appreciate your review comments on v2.
> 
> Thank you!
> 
> > On Jan 10, 2022, at 12:38 AM, Jerin Jacob  wrote:
> > 
> > On Sat, Jan 8, 2022 at 3:07 PM Morten Brørup  
> > wrote:  
> >>   
> >>> From: Bruce Richardson [mailto:bruce.richard...@intel.com]
> >>> Sent: Friday, 7 January 2022 14.51
> >>> 
> >>> On Fri, Jan 07, 2022 at 12:29:23PM +0100, Morten Brørup wrote:  
> > From: Bruce Richardson [mailto:bruce.richard...@intel.com]
> > Sent: Friday, 7 January 2022 12.16
> > 
> > On Sat, Dec 25, 2021 at 01:16:03AM +0100, Morten Brørup wrote:  
> >>> From: Dharmik Thakkar [mailto:dharmik.thak...@arm.com]
> >>> Sent: Friday, 24 December 2021 23.59
> >>> 
> >>> Current mempool per core cache implementation stores pointers to mbufs
> >>> On 64b architectures, each pointer consumes 8B This patch replaces it
> >>> with index-based implementation, where in each buffer is addressed by
> >>> (pool base address + index) It reduces the amount of memory/cache
> >>> required for per core cache
> >>> 
> >>> L3Fwd performance testing reveals minor improvements in the cache
> >>> performance (L1 and L2 misses reduced by 0.60%) with no change in
> >>> throughput
> >>> 
> >>> Micro-benchmarking the patch using mempool_perf_test shows significant
> >>> improvement with majority of the test cases
> >>> 
> >> 
> >> I still think this is very interesting. And your performance numbers are
> >> looking good.
> >> 
> >> However, it limits the size of a mempool to 4 GB. As previously
> >> discussed, the max mempool size can be increased by multiplying the
> >> index with a constant.
> >> 
> >> I would suggest using sizeof(uintptr_t) as the constant multiplier, so
> >> the mempool can hold objects of any size divisible by sizeof(uintptr_t).
> >> And it would be silly to use a mempool to hold objects smaller than
> >> sizeof(uintptr_t).
> >> 
> >> How does the performance look if you multiply the index by
> >> sizeof(uintptr_t)?
> > 
> > Each mempool entry is cache aligned, so we can use that if we want a
> > bigger multiplier.
>  
>  Thanks for chiming in, Bruce.
>  
>  Please also read this discussion about the multiplier:
>  http://inbox.dpdk.org/dev/calbae1prqyyog96f6ecew1vpf3toh1h7mzzuliy95z9xjbr...@mail.gmail.com/
>    
> >>> 
> >>> I actually wondered after I had sent the email whether we had indeed an
> >>> option to disable the cache alignment or not! Thanks for pointing out that
> >>> we do. This brings a couple additional thoughts:
> >>> 
> >>> * Using indexes for the cache should probably be a runtime flag rather than
> >>>   a build-time one.
> >>> * It would seem reasonable to me to disallow use of the indexed-cache flag
> >>>   and the non-cache aligned flag simultaneously.
> >>> * On the offchance that that restriction is unacceptable, then we can
> >>>   make things a little more complicated by doing a runtime computation of
> >>>   the "index-shiftwidth" to use.
> >>> 
> >>> Overall, I think defaulting to cacheline shiftwidth and disallowing
> >>> index-based addressing when using unaligned buffers is simplest and easiest
> >>> unless we can come up with a valid usecase for needing more than that.
> >>> 
> >>> /Bruce
> >> 
> >> This feature is a performance optimization.
> >> 
> >> With that in mind, it should not introduce function pointers or similar 
> >> run-time checks in the fast path, to determine what kind of cache to 
> >> use per mempool. And if an index multiplier is implemented, it should be a 
> >> compile time constant, probably something between sizeof(uintptr_t) or 
> >> RTE_MEMPOOL_ALIGN (=RTE_CACHE_LINE_SIZE).
> >> 
> >> The patch comes with a tradeoff between better performance and limited 
> >> mempool size, and possibly some limitations regarding very small objects 
> >> that are not cache line aligned to avoid wasting memory 
> >> (RTE_MEMPOOL_POPULATE_F_ALIGN_OBJ).
> >> 
> >> With no multiplier, the only tradeoff is that the mempool size is limited 
> >> to 4 GB.
> >> 
> >> If the multiplier is small 
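
For illustration of the index-based cache idea discussed in this thread, here is
an editorial sketch (not code from the patch under review; the names are made
up): each cached object is stored as a 32-bit index relative to the mempool base
address instead of an 8-byte pointer, and a sizeof(uintptr_t) multiplier extends
the addressable range from 4 GB to 32 GB.

    #include <stdint.h>

    #define OBJ_MULTIPLIER sizeof(uintptr_t)

    /* Convert an object pointer to its 32-bit cache index. */
    static inline uint32_t
    obj_to_index(const void *pool_base, const void *obj)
    {
            return (uint32_t)(((uintptr_t)obj - (uintptr_t)pool_base) /
                              OBJ_MULTIPLIER);
    }

    /* Convert a cached index back to the object pointer. */
    static inline void *
    index_to_obj(void *pool_base, uint32_t idx)
    {
            return (void *)((uintptr_t)pool_base +
                            (uintptr_t)idx * OBJ_MULTIPLIER);
    }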

[PATCH v3 00/14] Use rte_pktmbuf_mtod_offset() where possible

2023-07-06 Thread Stephen Hemminger
Run the coccinelle script for rte_pktmbuf_mtod_offset
against current main branch.

v3 - rebase to cover gro changes
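
As an illustration of the conversion applied throughout the series (a
hand-written sketch, not one of the hunks below):

    #include <rte_mbuf.h>
    #include <rte_ip.h>

    static inline struct rte_ipv4_hdr *
    ipv4_hdr_of(struct rte_mbuf *m)
    {
            /* before: open-coded pointer arithmetic
             *   return (struct rte_ipv4_hdr *)
             *          (rte_pktmbuf_mtod(m, char *) + m->l2_len);
             * after: a single macro call doing the same thing
             */
            return rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *, m->l2_len);
    }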

Stephen Hemminger (14):
  gro: use rte_pktmbuf_mtod_offset
  gso: use rte_pktmbuf_mtod_offset
  testpmd: use rte_pktmbuf_mtod_offset
  test: cryptodev use rte_pktmbuf_mtod_offset
  examples: use rte_pktmbuf_mtod_offset
  net/tap: use rte_pktmbuf_mtod_offset
  net/nfp: use rte_pktmbuf_mtod_offset
  crypto/ipsec_mb: use rte_pktmbuf_mtod_offset
  crypto/qat: use rte_pktmbuf_mtod_offset
  crypto/cnxk: use rte_ptkmbuf_mtod_offset
  common/cpt: use rte_pktmbuf_mtod_offset
  crypto/caam_jr: use rte_pktmbuf_mtod_offset
  net/mlx4: use rte_pktmbuf_mtod_offset
  baseband/fpga_5gnr: use rte_pktmbu_mtod_offset

 app/test-pmd/ieee1588fwd.c|  4 +-
 app/test/test_cryptodev.c | 66 ++-
 .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c |  7 +-
 drivers/common/cpt/cpt_ucode.h| 10 ++-
 drivers/crypto/caam_jr/caam_jr.c  |  8 +--
 drivers/crypto/cnxk/cn9k_cryptodev_ops.c  |  2 +-
 drivers/crypto/cnxk/cnxk_se.h |  5 +-
 drivers/crypto/ipsec_mb/pmd_kasumi.c  | 16 ++---
 drivers/crypto/ipsec_mb/pmd_snow3g.c  | 35 --
 drivers/crypto/ipsec_mb/pmd_zuc.c | 16 ++---
 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h  |  9 +--
 drivers/crypto/qat/qat_sym.h  |  9 +--
 drivers/net/mlx4/mlx4_rxtx.c  |  6 +-
 drivers/net/nfp/flower/nfp_flower_cmsg.h  |  3 +-
 drivers/net/nfp/flower/nfp_flower_ctrl.c  |  4 +-
 drivers/net/tap/rte_eth_tap.c |  3 +-
 examples/l2fwd-crypto/main.c  | 16 +++--
 examples/ptpclient/ptpclient.c| 18 ++---
 lib/gro/gro_tcp.h |  2 +-
 lib/gro/gro_tcp4.c|  2 +-
 lib/gro/gro_udp4.c|  4 +-
 lib/gro/gro_vxlan_tcp4.c  |  4 +-
 lib/gro/gro_vxlan_udp4.c  |  4 +-
 lib/gso/gso_common.h  | 11 ++--
 lib/gso/gso_tcp4.c|  8 +--
 lib/gso/gso_tunnel_tcp4.c | 12 ++--
 lib/gso/gso_tunnel_udp4.c | 18 ++---
 27 files changed, 150 insertions(+), 152 deletions(-)

-- 
2.39.2



[PATCH v3 01/14] gro: use rte_pktmbuf_mtod_offset

2023-07-06 Thread Stephen Hemminger
Use rte_pktmbuf_mtod_offset. Change was automatically generated
by cocci/mtod-offset.cocci.

Signed-off-by: Stephen Hemminger 
---
 lib/gro/gro_tcp.h| 2 +-
 lib/gro/gro_tcp4.c   | 2 +-
 lib/gro/gro_udp4.c   | 4 ++--
 lib/gro/gro_vxlan_tcp4.c | 4 ++--
 lib/gro/gro_vxlan_udp4.c | 4 ++--
 5 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/lib/gro/gro_tcp.h b/lib/gro/gro_tcp.h
index d926c4b8cc71..2c825413c261 100644
--- a/lib/gro/gro_tcp.h
+++ b/lib/gro/gro_tcp.h
@@ -150,7 +150,7 @@ check_seq_option(struct gro_tcp_item *item,
struct rte_tcp_hdr *tcph_orig;
uint16_t len, tcp_hl_orig;
 
-   iph_orig = (char *)(rte_pktmbuf_mtod(pkt_orig, char *) +
+   iph_orig = rte_pktmbuf_mtod_offset(pkt_orig, char *,
l2_offset + pkt_orig->l2_len);
tcph_orig = (struct rte_tcp_hdr *)(iph_orig + pkt_orig->l3_len);
tcp_hl_orig = pkt_orig->l4_len;
diff --git a/lib/gro/gro_tcp4.c b/lib/gro/gro_tcp4.c
index 6645de592b63..f8cd92950c63 100644
--- a/lib/gro/gro_tcp4.c
+++ b/lib/gro/gro_tcp4.c
@@ -223,7 +223,7 @@ update_header(struct gro_tcp_item *item)
struct rte_ipv4_hdr *ipv4_hdr;
struct rte_mbuf *pkt = item->firstseg;
 
-   ipv4_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
+   ipv4_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_ipv4_hdr *,
pkt->l2_len);
ipv4_hdr->total_length = rte_cpu_to_be_16(pkt->pkt_len -
pkt->l2_len);
diff --git a/lib/gro/gro_udp4.c b/lib/gro/gro_udp4.c
index 42596d33b6dc..019e05bcdea5 100644
--- a/lib/gro/gro_udp4.c
+++ b/lib/gro/gro_udp4.c
@@ -179,8 +179,8 @@ update_header(struct gro_udp4_item *item)
struct rte_mbuf *pkt = item->firstseg;
uint16_t frag_offset;
 
-   ipv4_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
-   pkt->l2_len);
+   ipv4_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_ipv4_hdr *,
+  pkt->l2_len);
ipv4_hdr->total_length = rte_cpu_to_be_16(pkt->pkt_len -
pkt->l2_len);
 
diff --git a/lib/gro/gro_vxlan_tcp4.c b/lib/gro/gro_vxlan_tcp4.c
index 6ab700192261..2752650389a4 100644
--- a/lib/gro/gro_vxlan_tcp4.c
+++ b/lib/gro/gro_vxlan_tcp4.c
@@ -263,8 +263,8 @@ update_vxlan_header(struct gro_vxlan_tcp4_item *item)
 
/* Update the outer IPv4 header. */
len = pkt->pkt_len - pkt->outer_l2_len;
-   ipv4_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
-   pkt->outer_l2_len);
+   ipv4_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_ipv4_hdr *,
+  pkt->outer_l2_len);
ipv4_hdr->total_length = rte_cpu_to_be_16(len);
 
/* Update the outer UDP header. */
diff --git a/lib/gro/gro_vxlan_udp4.c b/lib/gro/gro_vxlan_udp4.c
index b78a7ae89eef..ca8cee270d3d 100644
--- a/lib/gro/gro_vxlan_udp4.c
+++ b/lib/gro/gro_vxlan_udp4.c
@@ -259,8 +259,8 @@ update_vxlan_header(struct gro_vxlan_udp4_item *item)
 
/* Update the outer IPv4 header. */
len = pkt->pkt_len - pkt->outer_l2_len;
-   ipv4_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
-   pkt->outer_l2_len);
+   ipv4_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_ipv4_hdr *,
+  pkt->outer_l2_len);
ipv4_hdr->total_length = rte_cpu_to_be_16(len);
 
/* Update the outer UDP header. */
-- 
2.39.2



[PATCH v3 02/14] gso: use rte_pktmbuf_mtod_offset

2023-07-06 Thread Stephen Hemminger
Use the rte_pktmbuf_mtod_offset macro.
Change was automatically generated by cocci/mtod-offset.cocci.

Signed-off-by: Stephen Hemminger 
---
 lib/gso/gso_common.h  | 11 +--
 lib/gso/gso_tcp4.c|  8 
 lib/gso/gso_tunnel_tcp4.c | 12 ++--
 lib/gso/gso_tunnel_udp4.c | 18 +-
 4 files changed, 24 insertions(+), 25 deletions(-)

diff --git a/lib/gso/gso_common.h b/lib/gso/gso_common.h
index 9456d596d3c5..8987e368605c 100644
--- a/lib/gso/gso_common.h
+++ b/lib/gso/gso_common.h
@@ -52,8 +52,8 @@ update_udp_header(struct rte_mbuf *pkt, uint16_t udp_offset)
 {
struct rte_udp_hdr *udp_hdr;
 
-   udp_hdr = (struct rte_udp_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
-   udp_offset);
+   udp_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_udp_hdr *,
+ udp_offset);
udp_hdr->dgram_len = rte_cpu_to_be_16(pkt->pkt_len - udp_offset);
 }
 
@@ -77,8 +77,7 @@ update_tcp_header(struct rte_mbuf *pkt, uint16_t l4_offset, 
uint32_t sent_seq,
 {
struct rte_tcp_hdr *tcp_hdr;
 
-   tcp_hdr = (struct rte_tcp_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
-   l4_offset);
+   tcp_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_tcp_hdr *, l4_offset);
tcp_hdr->sent_seq = rte_cpu_to_be_32(sent_seq);
if (likely(non_tail))
tcp_hdr->tcp_flags &= (~(TCP_HDR_PSH_MASK |
@@ -104,8 +103,8 @@ update_ipv4_header(struct rte_mbuf *pkt, uint16_t 
l3_offset, uint16_t id)
 {
struct rte_ipv4_hdr *ipv4_hdr;
 
-   ipv4_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
-   l3_offset);
+   ipv4_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_ipv4_hdr *,
+  l3_offset);
ipv4_hdr->total_length = rte_cpu_to_be_16(pkt->pkt_len - l3_offset);
ipv4_hdr->packet_id = rte_cpu_to_be_16(id);
 }
diff --git a/lib/gso/gso_tcp4.c b/lib/gso/gso_tcp4.c
index d31feaff95cd..e2ae4aaf6c5a 100644
--- a/lib/gso/gso_tcp4.c
+++ b/lib/gso/gso_tcp4.c
@@ -16,8 +16,8 @@ update_ipv4_tcp_headers(struct rte_mbuf *pkt, uint8_t 
ipid_delta,
uint16_t l3_offset = pkt->l2_len;
uint16_t l4_offset = l3_offset + pkt->l3_len;
 
-   ipv4_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(pkt, char*) +
-   l3_offset);
+   ipv4_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_ipv4_hdr *,
+  l3_offset);
tcp_hdr = (struct rte_tcp_hdr *)((char *)ipv4_hdr + pkt->l3_len);
id = rte_be_to_cpu_16(ipv4_hdr->packet_id);
sent_seq = rte_be_to_cpu_32(tcp_hdr->sent_seq);
@@ -46,8 +46,8 @@ gso_tcp4_segment(struct rte_mbuf *pkt,
int ret;
 
/* Don't process the fragmented packet */
-   ipv4_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
-   pkt->l2_len);
+   ipv4_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_ipv4_hdr *,
+  pkt->l2_len);
frag_off = rte_be_to_cpu_16(ipv4_hdr->fragment_offset);
if (unlikely(IS_FRAGMENTED(frag_off))) {
return 0;
diff --git a/lib/gso/gso_tunnel_tcp4.c b/lib/gso/gso_tunnel_tcp4.c
index 1a7ef30ddebf..3a9159774b27 100644
--- a/lib/gso/gso_tunnel_tcp4.c
+++ b/lib/gso/gso_tunnel_tcp4.c
@@ -23,13 +23,13 @@ update_tunnel_ipv4_tcp_headers(struct rte_mbuf *pkt, 
uint8_t ipid_delta,
tcp_offset = inner_ipv4_offset + pkt->l3_len;
 
/* Outer IPv4 header. */
-   ipv4_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
-   outer_ipv4_offset);
+   ipv4_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_ipv4_hdr *,
+  outer_ipv4_offset);
outer_id = rte_be_to_cpu_16(ipv4_hdr->packet_id);
 
/* Inner IPv4 header. */
-   ipv4_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
-   inner_ipv4_offset);
+   ipv4_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_ipv4_hdr *,
+  inner_ipv4_offset);
inner_id = rte_be_to_cpu_16(ipv4_hdr->packet_id);
 
tcp_hdr = (struct rte_tcp_hdr *)((char *)ipv4_hdr + pkt->l3_len);
@@ -65,8 +65,8 @@ gso_tunnel_tcp4_segment(struct rte_mbuf *pkt,
int ret;
 
hdr_offset = pkt->outer_l2_len + pkt->outer_l3_len + pkt->l2_len;
-   inner_ipv4_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
-   hdr_offset);
+   inner_ipv4_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_ipv4_hdr *,
+hdr_offset);
/*
 * Don't process the packet whose MF bit or offset in the inner
 * IPv4 header are non-zero.
diff --git a/lib/gso/gso_tunnel_udp4.c b/lib/gso/gso_tunnel_udp4.c
index 1fc7a8dbc5aa..4fb275484ca8 100644
--- a/lib/gso/gso_tunnel_udp4.c
+++ b/lib/gso/gso_tunnel_udp4.c
@@ -22,13 +22

[PATCH v3 03/14] testpmd: use rte_pktmbuf_mtod_offset

2023-07-06 Thread Stephen Hemminger
Use helper macro.

Signed-off-by: Stephen Hemminger 
---
 app/test-pmd/ieee1588fwd.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/app/test-pmd/ieee1588fwd.c b/app/test-pmd/ieee1588fwd.c
index 386d9f10e642..3371771751dd 100644
--- a/app/test-pmd/ieee1588fwd.c
+++ b/app/test-pmd/ieee1588fwd.c
@@ -138,8 +138,8 @@ ieee1588_packet_fwd(struct fwd_stream *fs)
 * Check that the received PTP packet is a PTP V2 packet of type
 * PTP_SYNC_MESSAGE.
 */
-   ptp_hdr = (struct ptpv2_msg *) (rte_pktmbuf_mtod(mb, char *) +
-   sizeof(struct rte_ether_hdr));
+   ptp_hdr = rte_pktmbuf_mtod_offset(mb, struct ptpv2_msg *,
+ sizeof(struct rte_ether_hdr));
if (ptp_hdr->version != 0x02) {
printf("Port %u Received PTP V2 Ethernet frame with wrong PTP"
   " protocol version 0x%x (should be 0x02)\n",
-- 
2.39.2



[PATCH v3 04/14] test: cryptodev use rte_pktmbuf_mtod_offset

2023-07-06 Thread Stephen Hemminger
Based off patch generated by cocci/mtod-offset.cocci.
With some cleanup to shorten lines by using conditional
with omitted operand.

Signed-off-by: Stephen Hemminger 
---
 app/test/test_cryptodev.c | 66 +--
 1 file changed, 36 insertions(+), 30 deletions(-)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index fb2af40b99ee..5072b3b6ece5 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -3153,8 +3153,9 @@ test_snow3g_authentication(const struct 
snow3g_hash_test_data *tdata)
ut_params->op);
ut_params->obuf = ut_params->op->sym->m_src;
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
-   ut_params->digest = rte_pktmbuf_mtod(ut_params->obuf, uint8_t *)
-   + plaintext_pad_len;
+   ut_params->digest = rte_pktmbuf_mtod_offset(ut_params->obuf,
+   uint8_t *,
+   plaintext_pad_len);
 
/* Validate obuf */
TEST_ASSERT_BUFFERS_ARE_EQUAL(
@@ -3247,8 +3248,9 @@ test_snow3g_authentication_verify(const struct 
snow3g_hash_test_data *tdata)
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = ut_params->op->sym->m_src;
-   ut_params->digest = rte_pktmbuf_mtod(ut_params->obuf, uint8_t *)
-   + plaintext_pad_len;
+   ut_params->digest = rte_pktmbuf_mtod_offset(ut_params->obuf,
+   uint8_t *,
+   plaintext_pad_len);
 
/* Validate obuf */
if (ut_params->op->status == RTE_CRYPTO_OP_STATUS_SUCCESS)
@@ -3337,8 +3339,9 @@ test_kasumi_authentication(const struct 
kasumi_hash_test_data *tdata)
 
ut_params->obuf = ut_params->op->sym->m_src;
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
-   ut_params->digest = rte_pktmbuf_mtod(ut_params->obuf, uint8_t *)
-   + plaintext_pad_len;
+   ut_params->digest = rte_pktmbuf_mtod_offset(ut_params->obuf,
+   uint8_t *,
+   plaintext_pad_len);
 
/* Validate obuf */
TEST_ASSERT_BUFFERS_ARE_EQUAL(
@@ -3425,8 +3428,9 @@ test_kasumi_authentication_verify(const struct 
kasumi_hash_test_data *tdata)
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = ut_params->op->sym->m_src;
-   ut_params->digest = rte_pktmbuf_mtod(ut_params->obuf, uint8_t *)
-   + plaintext_pad_len;
+   ut_params->digest = rte_pktmbuf_mtod_offset(ut_params->obuf,
+   uint8_t *,
+   plaintext_pad_len);
 
/* Validate obuf */
if (ut_params->op->status == RTE_CRYPTO_OP_STATUS_SUCCESS)
@@ -4879,8 +4883,9 @@ test_zuc_cipher_auth(const struct wireless_test_data 
*tdata)
tdata->validDataLenInBits.len,
"ZUC Ciphertext data not as expected");
 
-   ut_params->digest = rte_pktmbuf_mtod(ut_params->obuf, uint8_t *)
-   + plaintext_pad_len;
+   ut_params->digest = rte_pktmbuf_mtod_offset(ut_params->obuf,
+   uint8_t *,
+   plaintext_pad_len);
 
/* Validate obuf */
TEST_ASSERT_BUFFERS_ARE_EQUAL(
@@ -4994,8 +4999,9 @@ test_snow3g_cipher_auth(const struct snow3g_test_data 
*tdata)
tdata->validDataLenInBits.len,
"SNOW 3G Ciphertext data not as expected");
 
-   ut_params->digest = rte_pktmbuf_mtod(ut_params->obuf, uint8_t *)
-   + plaintext_pad_len;
+   ut_params->digest = rte_pktmbuf_mtod_offset(ut_params->obuf,
+   uint8_t *,
+   plaintext_pad_len);
 
/* Validate obuf */
TEST_ASSERT_BUFFERS_ARE_EQUAL(
@@ -5163,9 +5169,9 @@ test_snow3g_auth_cipher(const struct snow3g_test_data 
*tdata,
debug_hexdump(stdout, "ciphertext expected:",
tdata->ciphertext.data, tdata->ciphertext.len >> 3);
 
-   ut_params->digest = rte_pktmbuf_mtod(ut_params->obuf, uint8_t *)
-   + (tdata->digest.offset_bytes == 0 ?
-   plaintext_pad_len : tdata->digest.offset_bytes);
+   ut_params->digest = rte_pktmbuf_mtod_offset(ut_params->obuf,
+   uint8_t *,
+   tdata->digest.offset_bytes ? : 
plaintext_pad_len);
 
debug_hexdump(stdout,

[PATCH v3 05/14] examples: use rte_pktmbuf_mtod_offset

2023-07-06 Thread Stephen Hemminger
Automatically generated from cocci/mtod-offset.cocci

Signed-off-by: Stephen Hemminger 
---
 examples/l2fwd-crypto/main.c   | 16 +---
 examples/ptpclient/ptpclient.c | 18 +-
 2 files changed, 18 insertions(+), 16 deletions(-)

diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index efe7eea2a768..403ed6b44de9 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -410,8 +410,8 @@ l2fwd_simple_crypto_enqueue(struct rte_mbuf *m,
 
ipdata_offset = sizeof(struct rte_ether_hdr);
 
-   ip_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(m, char *) +
-   ipdata_offset);
+   ip_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *,
+ipdata_offset);
 
ipdata_offset += (ip_hdr->version_ihl & RTE_IPV4_HDR_IHL_MASK)
* RTE_IPV4_IHL_MULTIPLIER;
@@ -479,8 +479,9 @@ l2fwd_simple_crypto_enqueue(struct rte_mbuf *m,
op->sym->auth.digest.data = (uint8_t 
*)rte_pktmbuf_append(m,
cparams->digest_length);
} else {
-   op->sym->auth.digest.data = rte_pktmbuf_mtod(m,
-   uint8_t *) + ipdata_offset + data_len;
+   op->sym->auth.digest.data = rte_pktmbuf_mtod_offset(m,
+   uint8_t *,
+   ipdata_offset + 
data_len);
}
 
op->sym->auth.digest.phys_addr = rte_pktmbuf_iova_offset(m,
@@ -540,8 +541,9 @@ l2fwd_simple_crypto_enqueue(struct rte_mbuf *m,
op->sym->aead.digest.data = (uint8_t 
*)rte_pktmbuf_append(m,
cparams->digest_length);
} else {
-   op->sym->aead.digest.data = rte_pktmbuf_mtod(m,
-   uint8_t *) + ipdata_offset + data_len;
+   op->sym->aead.digest.data = rte_pktmbuf_mtod_offset(m,
+   uint8_t *,
+   ipdata_offset + 
data_len);
}
 
op->sym->aead.digest.phys_addr = rte_pktmbuf_iova_offset(m,
@@ -631,7 +633,7 @@ l2fwd_simple_forward(struct rte_mbuf *m, uint16_t portid,
struct rte_ipv4_hdr *ip_hdr;
uint32_t ipdata_offset = sizeof(struct rte_ether_hdr);
 
-   ip_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(m, char *) +
+   ip_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *,
 ipdata_offset);
dst_port = l2fwd_dst_ports[portid];
 
diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
index cdf2da64dfee..2535d848a1e9 100644
--- a/examples/ptpclient/ptpclient.c
+++ b/examples/ptpclient/ptpclient.c
@@ -354,8 +354,8 @@ parse_sync(struct ptpv2_data_slave_ordinary *ptp_data, 
uint16_t rx_tstamp_idx)
 {
struct ptp_header *ptp_hdr;
 
-   ptp_hdr = (struct ptp_header *)(rte_pktmbuf_mtod(ptp_data->m, char *)
-   + sizeof(struct rte_ether_hdr));
+   ptp_hdr = rte_pktmbuf_mtod_offset(ptp_data->m, struct ptp_header *,
+ sizeof(struct rte_ether_hdr));
ptp_data->seqID_SYNC = rte_be_to_cpu_16(ptp_hdr->seq_id);
 
if (ptp_data->ptpset == 0) {
@@ -397,15 +397,15 @@ parse_fup(struct ptpv2_data_slave_ordinary *ptp_data)
int ret;
 
eth_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
-   ptp_hdr = (struct ptp_header *)(rte_pktmbuf_mtod(m, char *)
-   + sizeof(struct rte_ether_hdr));
+   ptp_hdr = rte_pktmbuf_mtod_offset(m, struct ptp_header *,
+ sizeof(struct rte_ether_hdr));
if (memcmp(&ptp_data->master_clock_id,
&ptp_hdr->source_port_id.clock_id,
sizeof(struct clock_id)) != 0)
return;
 
ptp_data->seqID_FOLLOWUP = rte_be_to_cpu_16(ptp_hdr->seq_id);
-   ptp_msg = (struct ptp_message *) (rte_pktmbuf_mtod(m, char *) +
+   ptp_msg = rte_pktmbuf_mtod_offset(m, struct ptp_message *,
  sizeof(struct rte_ether_hdr));
 
origin_tstamp = &ptp_msg->follow_up.precise_origin_tstamp;
@@ -537,8 +537,8 @@ parse_drsp(struct ptpv2_data_slave_ordinary *ptp_data)
struct tstamp *rx_tstamp;
uint16_t seq_id;
 
-   ptp_msg = (struct ptp_message *) (rte_pktmbuf_mtod(m, char *) +
-   sizeof(struct rte_ether_hdr));
+   ptp_msg = rte_pktmbuf_mtod_offset(m, struct ptp_message *,
+ sizeof(struct rte_ether_hdr));
seq_id = rte_be_to_cpu_16(ptp_msg->delay_resp.hdr.seq_id);
if (memcmp(&ptp_data->client_clock_id,
   &ptp_msg->de

[PATCH v3 06/14] net/tap: use rte_pktmbuf_mtod_offset

2023-07-06 Thread Stephen Hemminger
Automatically generated by cocci/mtod-offset.cocci

Signed-off-by: Stephen Hemminger 
---
 drivers/net/tap/rte_eth_tap.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index bf98f7555990..ebddbae9fe9f 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -672,8 +672,7 @@ tap_write_mbufs(struct tx_queue *txq, uint16_t num_mbufs,
if (seg_len > l234_hlen) {
iovecs[k].iov_len = seg_len - l234_hlen;
iovecs[k].iov_base =
-   rte_pktmbuf_mtod(seg, char *) +
-   l234_hlen;
+   rte_pktmbuf_mtod_offset(seg, char *, 
l234_hlen);
tap_tx_l4_add_rcksum(iovecs[k].iov_base,
iovecs[k].iov_len, l4_cksum,
&l4_raw_cksum);
-- 
2.39.2



[PATCH v3 07/14] net/nfp: use rte_pktmbuf_mtod_offset

2023-07-06 Thread Stephen Hemminger
Automatically generated by cocci/mtod-offset.cocci.

Signed-off-by: Stephen Hemminger 
---
 drivers/net/nfp/flower/nfp_flower_cmsg.h | 3 ++-
 drivers/net/nfp/flower/nfp_flower_ctrl.c | 4 ++--
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h 
b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index f643d54d39a4..787a38dc9aa0 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -381,7 +381,8 @@ enum nfp_flower_cmsg_port_vnic_type {
 static inline char*
 nfp_flower_cmsg_get_data(struct rte_mbuf *m)
 {
-   return rte_pktmbuf_mtod(m, char *) + 4 + 4 + NFP_FLOWER_CMSG_HLEN;
+   return rte_pktmbuf_mtod_offset(m, char *,
+  4 + 4 + NFP_FLOWER_CMSG_HLEN);
 }
 
 /*
diff --git a/drivers/net/nfp/flower/nfp_flower_ctrl.c 
b/drivers/net/nfp/flower/nfp_flower_ctrl.c
index 4cb2c2f99e04..18823a97887d 100644
--- a/drivers/net/nfp/flower/nfp_flower_ctrl.c
+++ b/drivers/net/nfp/flower/nfp_flower_ctrl.c
@@ -389,7 +389,7 @@ nfp_flower_cmsg_rx_stats(struct nfp_flow_priv *flow_priv,
uint32_t ctx_id;
struct nfp_flower_stats_frame *stats;
 
-   msg = rte_pktmbuf_mtod(mbuf, char *) + NFP_FLOWER_CMSG_HLEN;
+   msg = rte_pktmbuf_mtod_offset(mbuf, char *, NFP_FLOWER_CMSG_HLEN);
msg_len = mbuf->data_len - NFP_FLOWER_CMSG_HLEN;
count = msg_len / sizeof(struct nfp_flower_stats_frame);
 
@@ -412,7 +412,7 @@ nfp_flower_cmsg_rx_qos_stats(struct nfp_mtr_priv *mtr_priv,
struct nfp_mtr *mtr;
struct nfp_mtr_stats_reply *mtr_stats;
 
-   msg = rte_pktmbuf_mtod(mbuf, char *) + NFP_FLOWER_CMSG_HLEN;
+   msg = rte_pktmbuf_mtod_offset(mbuf, char *, NFP_FLOWER_CMSG_HLEN);
 
mtr_stats = (struct nfp_mtr_stats_reply *)msg;
profile_id = rte_be_to_cpu_32(mtr_stats->head.profile_id);
-- 
2.39.2



[PATCH v3 09/14] crypto/qat: use rte_pktmbuf_mtod_offset

2023-07-06 Thread Stephen Hemminger
Auto generated with cocci/mtod-offset.cocci

Signed-off-by: Stephen Hemminger 
---
 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h | 9 +
 drivers/crypto/qat/qat_sym.h | 9 +
 2 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h 
b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
index 1bafeb4a53e8..3e0dfea94c87 100644
--- a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
@@ -56,14 +56,15 @@ qat_bpicipher_preprocess(struct qat_sym_session *ctx,
uint8_t *last_block, *dst, *iv;
uint32_t last_block_offset = sym_op->cipher.data.offset +
sym_op->cipher.data.length - last_block_len;
-   last_block = (uint8_t *) rte_pktmbuf_mtod_offset(sym_op->m_src,
-   uint8_t *, last_block_offset);
+   last_block = rte_pktmbuf_mtod_offset(sym_op->m_src, uint8_t *,
+last_block_offset);
 
if (unlikely((sym_op->m_dst != NULL)
&& (sym_op->m_dst != sym_op->m_src)))
/* out-of-place operation (OOP) */
-   dst = (uint8_t *) rte_pktmbuf_mtod_offset(sym_op->m_dst,
-   uint8_t *, last_block_offset);
+   dst = rte_pktmbuf_mtod_offset(sym_op->m_dst,
+ uint8_t *,
+ last_block_offset);
else
dst = last_block;
 
diff --git a/drivers/crypto/qat/qat_sym.h b/drivers/crypto/qat/qat_sym.h
index 193281cd9135..d7ceb13b29cd 100644
--- a/drivers/crypto/qat/qat_sym.h
+++ b/drivers/crypto/qat/qat_sym.h
@@ -192,13 +192,14 @@ qat_bpicipher_postprocess(struct qat_sym_session *ctx,
 
last_block_offset = sym_op->cipher.data.offset +
sym_op->cipher.data.length - last_block_len;
-   last_block = (uint8_t *) rte_pktmbuf_mtod_offset(sym_op->m_src,
-   uint8_t *, last_block_offset);
+   last_block = rte_pktmbuf_mtod_offset(sym_op->m_src, uint8_t *,
+last_block_offset);
 
if (unlikely(sym_op->m_dst != NULL))
/* out-of-place operation (OOP) */
-   dst = (uint8_t *) rte_pktmbuf_mtod_offset(sym_op->m_dst,
-   uint8_t *, last_block_offset);
+   dst = rte_pktmbuf_mtod_offset(sym_op->m_dst,
+ uint8_t *,
+ last_block_offset);
else
dst = last_block;
 
-- 
2.39.2



[PATCH v3 08/14] crypto/ipsec_mb: use rte_pktmbuf_mtod_offset

2023-07-06 Thread Stephen Hemminger
Initial patch generated with cocci/mtod-offset.
Additional manual cleanups to indentation and remove unnecessary
parenthesis.

Signed-off-by: Stephen Hemminger 
---
 drivers/crypto/ipsec_mb/pmd_kasumi.c | 16 ++---
 drivers/crypto/ipsec_mb/pmd_snow3g.c | 35 +++-
 drivers/crypto/ipsec_mb/pmd_zuc.c| 16 ++---
 3 files changed, 30 insertions(+), 37 deletions(-)

diff --git a/drivers/crypto/ipsec_mb/pmd_kasumi.c 
b/drivers/crypto/ipsec_mb/pmd_kasumi.c
index 5db9c523cd9a..5b1694276468 100644
--- a/drivers/crypto/ipsec_mb/pmd_kasumi.c
+++ b/drivers/crypto/ipsec_mb/pmd_kasumi.c
@@ -83,13 +83,13 @@ process_kasumi_cipher_op(struct ipsec_mb_qp *qp, struct 
rte_crypto_op **ops,
uint32_t num_bytes[num_ops];
 
for (i = 0; i < num_ops; i++) {
-   src[i] = rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *)
-+ (ops[i]->sym->cipher.data.offset >> 3);
+   src[i] = rte_pktmbuf_mtod_offset(ops[i]->sym->m_src, uint8_t *,
+   ops[i]->sym->cipher.data.offset >> 3);
dst[i] = ops[i]->sym->m_dst
-? rte_pktmbuf_mtod(ops[i]->sym->m_dst, uint8_t *)
-  + (ops[i]->sym->cipher.data.offset >> 3)
-: rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *)
-  + (ops[i]->sym->cipher.data.offset >> 3);
+? rte_pktmbuf_mtod_offset(ops[i]->sym->m_dst, 
uint8_t *,
+  
ops[i]->sym->cipher.data.offset >> 3)
+: rte_pktmbuf_mtod_offset(ops[i]->sym->m_src, 
uint8_t *,
+  
ops[i]->sym->cipher.data.offset >> 3);
iv_ptr = rte_crypto_op_ctod_offset(ops[i], uint8_t *,
session->cipher_iv_offset);
iv[i] = *((uint64_t *)(iv_ptr));
@@ -155,8 +155,8 @@ process_kasumi_hash_op(struct ipsec_mb_qp *qp, struct 
rte_crypto_op **ops,
 
length_in_bits = ops[i]->sym->auth.data.length;
 
-   src = rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *)
- + (ops[i]->sym->auth.data.offset >> 3);
+   src = rte_pktmbuf_mtod_offset(ops[i]->sym->m_src, uint8_t *,
+ ops[i]->sym->auth.data.offset >> 
3);
/* Direction from next bit after end of message */
num_bytes = length_in_bits >> 3;
 
diff --git a/drivers/crypto/ipsec_mb/pmd_snow3g.c 
b/drivers/crypto/ipsec_mb/pmd_snow3g.c
index e64df1a462e3..90b8d80c2c56 100644
--- a/drivers/crypto/ipsec_mb/pmd_snow3g.c
+++ b/drivers/crypto/ipsec_mb/pmd_snow3g.c
@@ -111,14 +111,12 @@ process_snow3g_cipher_op(struct ipsec_mb_qp *qp, struct 
rte_crypto_op **ops,
 
cipher_off = ops[i]->sym->cipher.data.offset >> 3;
cipher_len = ops[i]->sym->cipher.data.length >> 3;
-   src[i] = rte_pktmbuf_mtod_offset(
-   ops[i]->sym->m_src, uint8_t *, cipher_off);
+   src[i] = rte_pktmbuf_mtod_offset(ops[i]->sym->m_src, uint8_t *, 
cipher_off);
 
/* If out-of-place operation */
if (ops[i]->sym->m_dst &&
ops[i]->sym->m_src != ops[i]->sym->m_dst) {
-   dst[i] = rte_pktmbuf_mtod_offset(
-   ops[i]->sym->m_dst, uint8_t *, cipher_off);
+   dst[i] = rte_pktmbuf_mtod_offset(ops[i]->sym->m_dst, 
uint8_t *, cipher_off);
 
/* In case of out-of-place, auth-cipher operation
 * with partial encryption of the digest, copy
@@ -133,16 +131,14 @@ process_snow3g_cipher_op(struct ipsec_mb_qp *qp, struct 
rte_crypto_op **ops,
cipher_off - cipher_len;
if (unencrypted_bytes > 0)
rte_memcpy(
-   rte_pktmbuf_mtod_offset(
-   ops[i]->sym->m_dst, uint8_t *,
+   
rte_pktmbuf_mtod_offset(ops[i]->sym->m_dst, uint8_t *,
cipher_off + cipher_len),
-   rte_pktmbuf_mtod_offset(
-   ops[i]->sym->m_src, uint8_t *,
+   
rte_pktmbuf_mtod_offset(ops[i]->sym->m_src, uint8_t *,
cipher_off + cipher_len),
unencrypted_bytes);
} else
-   dst[i] = rte_pktmbuf_mtod_offset(ops[i]->sym->m_src,
-   uint8_t *, cipher_off);
+   dst[i] = rte_pktmbuf_mtod_offset(ops[i]->sym->m_src, 
uint8_t *,

[PATCH v3 10/14] crypto/cnxk: use rte_ptkmbuf_mtod_offset

2023-07-06 Thread Stephen Hemminger
Autogenerated with cocci/mtod-offset.cocci

Signed-off-by: Stephen Hemminger 
---
 drivers/crypto/cnxk/cn9k_cryptodev_ops.c | 2 +-
 drivers/crypto/cnxk/cnxk_se.h| 5 ++---
 2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c 
b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
index 34d40b07d4c6..8b91d11b79cc 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
@@ -520,7 +520,7 @@ cn9k_cpt_sec_post_process(struct rte_crypto_op *cop,
 
if (infl_req->op_flags & CPT_OP_FLAGS_IPSEC_DIR_INBOUND) {
 
-   hdr = (struct roc_ie_on_inb_hdr *)rte_pktmbuf_mtod(m, char *);
+   hdr = rte_pktmbuf_mtod(m, struct roc_ie_on_inb_hdr *);
 
if (likely(m->next == NULL)) {
ip = PLT_PTR_ADD(hdr, ROC_IE_ON_INB_RPTR_HDR);
diff --git a/drivers/crypto/cnxk/cnxk_se.h b/drivers/crypto/cnxk/cnxk_se.h
index 75c1dce231bf..1392af5833d1 100644
--- a/drivers/crypto/cnxk/cnxk_se.h
+++ b/drivers/crypto/cnxk/cnxk_se.h
@@ -2724,7 +2724,7 @@ fill_fc_params(struct rte_crypto_op *cop, struct 
cnxk_se_sess *sess,
m = cpt_m_dst_get(cpt_op, m_src, m_dst);
 
/* Digest immediately following data is best case */
-   if (unlikely(rte_pktmbuf_mtod(m, uint8_t *) + mc_hash_off !=
+   if (unlikely(rte_pktmbuf_mtod_offset(m, uint8_t *, mc_hash_off) 
!=
 (uint8_t *)sym_op->aead.digest.data)) {
flags |= ROC_SE_VALID_MAC_BUF;
fc_params.mac_buf.size = sess->mac_len;
@@ -2759,8 +2759,7 @@ fill_fc_params(struct rte_crypto_op *cop, struct 
cnxk_se_sess *sess,
 
/* hmac immediately following data is best case */
if (!(op_minor & ROC_SE_FC_MINOR_OP_HMAC_FIRST) &&
-   (unlikely(rte_pktmbuf_mtod(m, uint8_t *) +
- mc_hash_off !=
+   (unlikely(rte_pktmbuf_mtod_offset(m, uint8_t *, 
mc_hash_off) !=
  (uint8_t *)sym_op->auth.digest.data))) {
flags |= ROC_SE_VALID_MAC_BUF;
fc_params.mac_buf.size = sess->mac_len;
-- 
2.39.2



[PATCH v3 11/14] common/cpt: use rte_pktmbuf_mtod_offset

2023-07-06 Thread Stephen Hemminger
Autogenerated with cocci/mtod-offset.cocci

Signed-off-by: Stephen Hemminger 
---
 drivers/common/cpt/cpt_ucode.h | 10 --
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/common/cpt/cpt_ucode.h b/drivers/common/cpt/cpt_ucode.h
index b393be4cf661..87a3ac80b9da 100644
--- a/drivers/common/cpt/cpt_ucode.h
+++ b/drivers/common/cpt/cpt_ucode.h
@@ -3167,9 +3167,8 @@ fill_fc_params(struct rte_crypto_op *cop,
m = m_src;
 
/* hmac immediately following data is best case */
-   if (unlikely(rte_pktmbuf_mtod(m, uint8_t *) +
-   mc_hash_off !=
-   (uint8_t *)sym_op->aead.digest.data)) {
+   if (unlikely(rte_pktmbuf_mtod_offset(m, uint8_t *, 
mc_hash_off) !=
+(uint8_t *)sym_op->aead.digest.data)) {
flags |= VALID_MAC_BUF;
fc_params.mac_buf.size = sess_misc->mac_len;
fc_params.mac_buf.vaddr =
@@ -3211,9 +3210,8 @@ fill_fc_params(struct rte_crypto_op *cop,
 
/* hmac immediately following data is best case */
if (!ctx->dec_auth && !ctx->auth_enc &&
-(unlikely(rte_pktmbuf_mtod(m, uint8_t *) +
-   mc_hash_off !=
-(uint8_t *)sym_op->auth.digest.data))) {
+(unlikely(rte_pktmbuf_mtod_offset(m, uint8_t 
*, mc_hash_off) !=
+  (uint8_t 
*)sym_op->auth.digest.data))) {
flags |= VALID_MAC_BUF;
fc_params.mac_buf.size =
sess_misc->mac_len;
-- 
2.39.2



[PATCH v3 12/14] crypto/caam_jr: use rte_pktmbuf_mtod_offset

2023-07-06 Thread Stephen Hemminger
Autogenerated with cocci/mtod-offset.cocci.

Signed-off-by: Stephen Hemminger 
---
 drivers/crypto/caam_jr/caam_jr.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/crypto/caam_jr/caam_jr.c b/drivers/crypto/caam_jr/caam_jr.c
index b55258689b49..9c96fd21a48d 100644
--- a/drivers/crypto/caam_jr/caam_jr.c
+++ b/drivers/crypto/caam_jr/caam_jr.c
@@ -631,15 +631,15 @@ hw_poll_job_ring(struct sec_job_ring_t *job_ring,
 
if (ctx->op->sym->m_dst) {
/*TODO check for ip header or other*/
-   ip4_hdr = (struct ip *)
-   rte_pktmbuf_mtod(ctx->op->sym->m_dst, char*);
+   ip4_hdr = rte_pktmbuf_mtod(ctx->op->sym->m_dst,
+  struct ip *);
ctx->op->sym->m_dst->pkt_len =
rte_be_to_cpu_16(ip4_hdr->ip_len);
ctx->op->sym->m_dst->data_len =
rte_be_to_cpu_16(ip4_hdr->ip_len);
} else {
-   ip4_hdr = (struct ip *)
-   rte_pktmbuf_mtod(ctx->op->sym->m_src, char*);
+   ip4_hdr = rte_pktmbuf_mtod(ctx->op->sym->m_src,
+  struct ip *);
ctx->op->sym->m_src->pkt_len =
rte_be_to_cpu_16(ip4_hdr->ip_len);
ctx->op->sym->m_src->data_len =
-- 
2.39.2



[PATCH v3 13/14] net/mlx4: use rte_pktmbuf_mtod_offset

2023-07-06 Thread Stephen Hemminger
Autogenerated with cocci/mtod-offset.cocci.

Signed-off-by: Stephen Hemminger 
---
 drivers/net/mlx4/mlx4_rxtx.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/mlx4/mlx4_rxtx.c b/drivers/net/mlx4/mlx4_rxtx.c
index 059e432a63fc..d5feeb7f7e6d 100644
--- a/drivers/net/mlx4/mlx4_rxtx.c
+++ b/drivers/net/mlx4/mlx4_rxtx.c
@@ -1014,9 +1014,9 @@ mlx4_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, 
uint16_t pkts_n)
 * loopback in eSwitch, so that VFs and PF can
 * communicate with each other.
 */
-   srcrb.flags16[0] = *(rte_pktmbuf_mtod(buf, uint16_t *));
-   ctrl->imm = *(rte_pktmbuf_mtod_offset(buf, uint32_t *,
- sizeof(uint16_t)));
+   srcrb.flags16[0] = *rte_pktmbuf_mtod(buf, uint16_t *);
+   ctrl->imm = *rte_pktmbuf_mtod_offset(buf, uint32_t *,
+sizeof(uint16_t));
} else {
ctrl->imm = 0;
}
-- 
2.39.2



[PATCH v3 14/14] baseband/fpga_5gnr: use rte_pktmbu_mtod_offset

2023-07-06 Thread Stephen Hemminger
Autogenerated with cocci/mtod-offset.cocci.

Signed-off-by: Stephen Hemminger 
---
 drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c 
b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
index f29565af8cca..465a65f3dca2 100644
--- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
+++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
@@ -1543,8 +1543,7 @@ fpga_harq_write_loopback(struct fpga_queue *q,
rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size\n");
}
 
-   input = (uint64_t *)rte_pktmbuf_mtod_offset(harq_input,
-   uint8_t *, in_offset);
+   input = rte_pktmbuf_mtod_offset(harq_input, uint64_t *, in_offset);
 
while (left_length > 0) {
if (fpga_reg_read_8(q->d->mmio_base,
@@ -1621,8 +1620,8 @@ fpga_harq_read_loopback(struct fpga_queue *q,
}
left_length = harq_in_length;
 
-   input = (uint64_t *)rte_pktmbuf_mtod_offset(harq_output,
-   uint8_t *, harq_out_offset);
+   input = rte_pktmbuf_mtod_offset(harq_output, uint64_t *,
+   harq_out_offset);
 
while (left_length > 0) {
fpga_reg_write_32(q->d->mmio_base,
-- 
2.39.2



Re: [PATCH v1] examples/l3fwd: fix for coverity scan

2023-07-06 Thread Stephen Hemminger
On Wed, 01 Feb 2023 18:28:44 +0100
Thomas Monjalon  wrote:

> 10/01/2023 15:56, Mohammad Iqbal Ahmad:
> > This patch fixes (Logically dead code) coverity issue.
> > This patch also fixes (Uninitialized scalar variable) coverity issue.
> > 
> > Coverity issue: 381687
> > Coverity issue: 381686
> > Fixes: 6a094e328598 ("examples/l3fwd: implement FIB lookup method")
> > 
> > Signed-off-by: Mohammad Iqbal Ahmad   
> 
> It seems you removed "if (nh != FIB_DEFAULT_HOP)"
> 
> Please could you explain what was the issue
> inside the commit message.
> It could help to find a better title as well.

Coverity is spotting that the same condition is evaluated first
in the if() and then again in the conditional expression. So yes, it is a bug.

Would prefer the title of
   examples/l3fwd: fix duplicate expression for default nexthop

I don't think the default nexthop path was ever tested. If it had been,
hops[i] would never have been updated; it probably just kept the previous
value, so it appeared to work.

Acked-by: Stephen Hemminger 
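
For illustration, a hypothetical sketch of the pattern Coverity flags and its
simplification (not the actual l3fwd code; FIB_DEFAULT_HOP and BAD_PORT stand
in for the real constants):

    #include <stdint.h>

    #define FIB_DEFAULT_HOP 999
    #define BAD_PORT        0xffff

    /* Dead code: inside the if-branch the condition is already known true,
     * so the conditional expression can never select BAD_PORT. */
    static void
    assign_hop_redundant(uint16_t *hops, uint32_t i, uint64_t nh)
    {
            if (nh != FIB_DEFAULT_HOP)
                    hops[i] = nh != FIB_DEFAULT_HOP ? (uint16_t)nh : BAD_PORT;
    }

    /* Simpler equivalent that also covers the default-hop case. */
    static void
    assign_hop_fixed(uint16_t *hops, uint32_t i, uint64_t nh)
    {
            hops[i] = (nh != FIB_DEFAULT_HOP) ? (uint16_t)nh : BAD_PORT;
    }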


RE: [EXT] [PATCH v2] app/crypto-perf: fix socket ID default value

2023-07-06 Thread Akhil Goyal
> Due to recent changes to the default device socket ID,
> before being used as an index for session mempool list,
> the socket ID should be set to 0 if unknown (-1).
> 
> Fixes: 7dcd73e37965 ("drivers/bus: set device NUMA node to unknown by
> default")
> Fixes: 64c469b9e7d8 ("app/crypto-perf: check range of socket id")
> Cc: bruce.richard...@intel.com
> Cc: olivier.m...@6wind.com
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Ciara Power 
> Acked-by: Kai Ji 
> ---
Acked-by: Akhil Goyal 

Applied to dpdk-next-crypto
Thanks.
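
For illustration, a minimal sketch of the clamp described in the commit message
above (hand-written, not the actual application code; the helper name is made
up):

    #include <rte_memory.h>      /* SOCKET_ID_ANY */
    #include <rte_cryptodev.h>   /* rte_cryptodev_socket_id() */

    /* Never use an unknown NUMA node (-1, i.e. SOCKET_ID_ANY) as an index
     * into a per-socket session mempool table. */
    static int
    sane_socket_id(uint8_t cdev_id)
    {
            int socket_id = rte_cryptodev_socket_id(cdev_id);

            if (socket_id == SOCKET_ID_ANY)
                    socket_id = 0;
            return socket_id;
    }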


RE: [PATCH v2] examples/ipsec-secgw: fix of socket id default value

2023-07-06 Thread Akhil Goyal
> > Subject: [PATCH v2] examples/ipsec-secgw: fix of socket id default value
> >
> > Due to recent changes to the default device socket ID, before being used as
> > an index for session mempool list, set socket ID to 0 if unknown (-1).
> >
> > Fixes: 7dcd73e37965 ("drivers/bus: set device NUMA node to unknown by
> > default")
> > Cc: olivier.m...@6wind.com
> > Cc: sta...@dpdk.org
> >
> > Signed-off-by: Kai Ji 
> > ---
> >  examples/ipsec-secgw/ipsec-secgw.c | 3 +++
> 
> Acked-by: Ciara Power 
Acked-by: Akhil Goyal 
Applied to dpdk-next-crypto
Thanks.



Re: [PATCH v3] usertools: add check for IOMMU support in dpdk-devbind

2023-07-06 Thread Stephen Hemminger
On Mon, 21 Mar 2022 17:27:27 +0500
Fidaullah Noonari  wrote:

> Binding with the vfio driver when IOMMU is disabled causes the program to
> crash. This patch adds a flag for noiommu-mode. When this is set, if IOMMU is
> disabled, it changes vfio into unsafe noiommu mode and prints a warning
> message.
> 
> Signed-off-by: Fidaullah Noonari 
> ---

Minor indentation issues reported by flake8 python checker:

./usertools/dpdk-devbind.py:489:27: E231 missing whitespace after ','
./usertools/dpdk-devbind.py:494:17: E128 continuation line under-indented for 
visual indent
./usertools/dpdk-devbind.py:507:17: E128 continuation line under-indented for 
visual indent


RE: [EXT] [PATCH v3 0/2] remove unused defines

2023-07-06 Thread Akhil Goyal
> This series removes some unused defines throughout common qat drivers
> and crypto ipsec mb drivers. It also removes some defines that should
> have been removed previously.
> 
> v3:
> Incorrect fixline
Series applied to dpdk-next-crypto
Thanks


RE: [EXT] [PATCH] test/ipsec: fix TAP default MAC address

2023-07-06 Thread Akhil Goyal
> The default TAP MAC address was changed in commit id
> c3006be2acab49c6b77ae9c9ef04b061e5dacbd6;
> reflect the change in the ipsec test scripts.
> 
> Fixes: c3006be2acab ("net/tap: set locally administered bit for fixed MAC
> address")
> Cc: d...@linux.vnet.ibm.com
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Vladimir Medvedkin 
Applied to dpdk-next-crypto



RE: [PATCH] doc: announce deprecation for security ops

2023-07-06 Thread Akhil Goyal
> > Subject: [PATCH] doc: announce deprecation for security ops
> >
> > Structures rte_security_ops and rte_security_ctx are meant to be used by
> > the rte_security library and the associated PMDs.
> > These will be moved to an internal header in the DPDK 23.11 release.
> >
> > Signed-off-by: Akhil Goyal 
> > ---
> 
> 
> Seems a reasonable change to me.
> 
> Acked-by: Ciara Power 
Applied to dpdk-next-crypto


Re: [dpdk-dev] [PATCH v5 2/4] eal: improve options usage text

2023-07-06 Thread Stephen Hemminger
On Mon,  5 Apr 2021 21:39:52 +0200
Thomas Monjalon  wrote:

> The description of the EAL options was printed before the application
> description provided via the hook.
> It is better to let the application print the global syntax
> and describe the details of the EAL options below.
> 
> Also, some useless lines are removed,
> and the alignment of a few options is fixed.
> 
> Signed-off-by: Thomas Monjalon 
> Acked-by: Bruce Richardson 
> Acked-by: Andrew Rybchenko 

The default EAL usage text is due for an overhaul: split it into sections
and only show a short summary. See git for an example.

But this part is fine; probably hold it for 23.11, where users might
expect more changes.

Acked-by: Stephen Hemminger 
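
For readers unfamiliar with the hook mentioned above: EAL lets an application
register a usage callback that is combined with the EAL help text. A minimal
sketch, assuming only the public rte_eal.h API (rte_set_application_usage_hook):

#include <stdio.h>

#include <rte_eal.h>

/* Application part of the help output; EAL prints its own options as well. */
static void
app_usage(const char *prgname)
{
    printf("Usage: %s [EAL options] -- [application options]\n", prgname);
}

int
main(int argc, char **argv)
{
    /* Register the hook before rte_eal_init() so --help includes it. */
    rte_set_application_usage_hook(app_usage);

    if (rte_eal_init(argc, argv) < 0)
        return -1;

    rte_eal_cleanup();
    return 0;
}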


Re: [dpdk-dev] [PATCH] interrupts: fix error log level

2023-07-06 Thread Stephen Hemminger
On Mon, 1 Nov 2021 23:14:47 +0530
Harman Kalra  wrote:

> Fix the error log level: the default level is currently set to DEBUG,
> so failures are not being captured.
> 
> Fixes: b7c984291611 ("interrupts: add allocator and accessors")
> 
> Signed-off-by: Harman Kalra 
> ---
>  lib/eal/common/eal_common_interrupts.c | 24 
>  1 file changed, 12 insertions(+), 12 deletions(-)
> 
> diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
> index 97b64fed58..96c7e9e40e 100644
> --- a/lib/eal/common/eal_common_interrupts.c
> +++ b/lib/eal/common/eal_common_interrupts.c
> @@ -15,7 +15,7 @@
>  /* Macros to check for valid interrupt handle */
>  #define CHECK_VALID_INTR_HANDLE(intr_handle) do { \
>   if (intr_handle == NULL) { \
> - RTE_LOG(DEBUG, EAL, "Interrupt instance unallocated\n"); \
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); \
>   rte_errno = EINVAL; \
>   goto fail; \
>   } \
> @@ -37,7 +37,7 @@ struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags)
>* defined flags.
>*/
>   if ((flags & ~RTE_INTR_INSTANCE_KNOWN_FLAGS) != 0) {
> - RTE_LOG(DEBUG, EAL, "Invalid alloc flag passed 0x%x\n", flags);
> + RTE_LOG(ERR, EAL, "Invalid alloc flag passed 0x%x\n", flags);
>   rte_errno = EINVAL;
>   return NULL;
>   }
> @@ -100,7 +100,7 @@ struct rte_intr_handle *rte_intr_instance_dup(const struct rte_intr_handle *src)
>   struct rte_intr_handle *intr_handle;
>  
>   if (src == NULL) {
> - RTE_LOG(DEBUG, EAL, "Source interrupt instance unallocated\n");
> + RTE_LOG(ERR, EAL, "Source interrupt instance unallocated\n");
>   rte_errno = EINVAL;
>   return NULL;
>   }
> @@ -129,7 +129,7 @@ int rte_intr_event_list_update(struct rte_intr_handle *intr_handle, int size)
>   CHECK_VALID_INTR_HANDLE(intr_handle);
>  
>   if (size == 0) {
> - RTE_LOG(DEBUG, EAL, "Size can't be zero\n");
> + RTE_LOG(ERR, EAL, "Size can't be zero\n");
>   rte_errno = EINVAL;
>   goto fail;
>   }
> @@ -253,7 +253,7 @@ int rte_intr_max_intr_set(struct rte_intr_handle *intr_handle,
>   CHECK_VALID_INTR_HANDLE(intr_handle);
>  
>   if (max_intr > intr_handle->nb_intr) {
> - RTE_LOG(DEBUG, EAL, "Maximum interrupt vector ID (%d) exceeds "
> + RTE_LOG(ERR, EAL, "Maximum interrupt vector ID (%d) exceeds "
>   "the number of available events (%d)\n", max_intr,
>   intr_handle->nb_intr);
>   rte_errno = ERANGE;
> @@ -332,7 +332,7 @@ int rte_intr_efds_index_get(const struct rte_intr_handle *intr_handle,
>   CHECK_VALID_INTR_HANDLE(intr_handle);
>  
>   if (index >= intr_handle->nb_intr) {
> - RTE_LOG(DEBUG, EAL, "Invalid index %d, max limit %d\n", index,
> + RTE_LOG(ERR, EAL, "Invalid index %d, max limit %d\n", index,
>   intr_handle->nb_intr);
>   rte_errno = EINVAL;
>   goto fail;
> @@ -349,7 +349,7 @@ int rte_intr_efds_index_set(struct rte_intr_handle *intr_handle,
>   CHECK_VALID_INTR_HANDLE(intr_handle);
>  
>   if (index >= intr_handle->nb_intr) {
> - RTE_LOG(DEBUG, EAL, "Invalid index %d, max limit %d\n", index,
> + RTE_LOG(ERR, EAL, "Invalid index %d, max limit %d\n", index,
>   intr_handle->nb_intr);
>   rte_errno = ERANGE;
>   goto fail;
> @@ -368,7 +368,7 @@ struct rte_epoll_event *rte_intr_elist_index_get(
>   CHECK_VALID_INTR_HANDLE(intr_handle);
>  
>   if (index >= intr_handle->nb_intr) {
> - RTE_LOG(DEBUG, EAL, "Invalid index %d, max limit %d\n", index,
> + RTE_LOG(ERR, EAL, "Invalid index %d, max limit %d\n", index,
>   intr_handle->nb_intr);
>   rte_errno = ERANGE;
>   goto fail;
> @@ -385,7 +385,7 @@ int rte_intr_elist_index_set(struct rte_intr_handle *intr_handle,
>   CHECK_VALID_INTR_HANDLE(intr_handle);
>  
>   if (index >= intr_handle->nb_intr) {
> - RTE_LOG(DEBUG, EAL, "Invalid index %d, max limit %d\n", index,
> + RTE_LOG(ERR, EAL, "Invalid index %d, max limit %d\n", index,
>   intr_handle->nb_intr);
>   rte_errno = ERANGE;
>   goto fail;
> @@ -408,7 +408,7 @@ int rte_intr_vec_list_alloc(struct rte_intr_handle *intr_handle,
>   return 0;
>  
>   if (size > intr_handle->nb_intr) {
> - RTE_LOG(DEBUG, EAL, "Invalid size %d, max limit %d\n", size,
> + RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", size,
>   intr_handle->nb_intr);
>   rte_errno = ERANGE;
>   goto fail;
> @@ -437,7 +437,7 @@ int rte_intr_vec_
