[PATCH] net/mlx5: enable flow aging action

2022-10-31 Thread Suanming Mou
Now that the queue-based aging API has been integrated[1], the flow aging
action support in the HWS steering code can be enabled.

[1]: https://patchwork.dpdk.org/project/dpdk/cover/20221026214943.3686635-1-michae...@nvidia.com/

Signed-off-by: Suanming Mou 
---
 drivers/net/mlx5/mlx5_flow.c    |  2 --
 drivers/net/mlx5/mlx5_flow_hw.c |  2 --
 drivers/net/mlx5/mlx5_hws_cnt.c | 11 -----------
 3 files changed, 15 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 8e7d649d15..e28587da8e 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1084,9 +1084,7 @@ static const struct rte_flow_ops mlx5_flow_ops = {
.isolate = mlx5_flow_isolate,
.query = mlx5_flow_query,
.dev_dump = mlx5_flow_dev_dump,
-#ifdef MLX5_HAVE_RTE_FLOW_Q_AGE
.get_q_aged_flows = mlx5_flow_get_q_aged_flows,
-#endif
.get_aged_flows = mlx5_flow_get_aged_flows,
.action_handle_create = mlx5_action_handle_create,
.action_handle_destroy = mlx5_action_handle_destroy,
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 2d275ad111..0e904e4dea 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -7032,10 +7032,8 @@ flow_hw_configure(struct rte_eth_dev *dev,
goto err;
if (_queue_attr)
mlx5_free(_queue_attr);
-#ifdef MLX5_HAVE_RTE_FLOW_Q_AGE
if (port_attr->flags & RTE_FLOW_PORT_FLAG_STRICT_QUEUE)
priv->hws_strict_queue = 1;
-#endif
return 0;
 err:
if (priv->hws_ctpool) {
diff --git a/drivers/net/mlx5/mlx5_hws_cnt.c b/drivers/net/mlx5/mlx5_hws_cnt.c
index b8ce69af57..534a4d76ce 100644
--- a/drivers/net/mlx5/mlx5_hws_cnt.c
+++ b/drivers/net/mlx5/mlx5_hws_cnt.c
@@ -846,7 +846,6 @@ int
 mlx5_hws_age_action_update(struct mlx5_priv *priv, uint32_t idx,
   const void *update, struct rte_flow_error *error)
 {
-#ifdef MLX5_HAVE_RTE_FLOW_Q_AGE
const struct rte_flow_update_age *update_ade = update;
struct mlx5_age_info *age_info = GET_PORT_AGE_INFO(priv);
struct mlx5_indexed_pool *ipool = age_info->ages_ipool;
@@ -899,14 +898,6 @@ mlx5_hws_age_action_update(struct mlx5_priv *priv, uint32_t idx,
 __ATOMIC_RELAXED);
}
return 0;
-#else
-   RTE_SET_USED(priv);
-   RTE_SET_USED(idx);
-   RTE_SET_USED(update);
-   return rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
- "update age action not supported");
-#endif
 }
 
 /**
@@ -1193,9 +1184,7 @@ mlx5_hws_age_pool_init(struct rte_eth_dev *dev,
uint32_t nb_ages_updated;
int ret;
 
-#ifdef MLX5_HAVE_RTE_FLOW_Q_AGE
strict_queue = !!(attr->flags & RTE_FLOW_PORT_FLAG_STRICT_QUEUE);
-#endif
MLX5_ASSERT(priv->hws_cpool);
nb_alloc_cnts = mlx5_hws_cnt_pool_get_size(priv->hws_cpool);
if (strict_queue) {
-- 
2.25.1



[Bug 1118] [dpdk-22.11.0rc1][meson test] driver-tests/link_bonding_autotest test failed core dumped

2022-10-31 Thread bugzilla
https://bugs.dpdk.org/show_bug.cgi?id=1118

Bug ID: 1118
   Summary: [dpdk-22.11.0rc1][meson test]
driver-tests/link_bonding_autotest test failed core
dumped
   Product: DPDK
   Version: 22.11
  Hardware: All
OS: All
Status: UNCONFIRMED
  Severity: normal
  Priority: Normal
 Component: meson
  Assignee: dev@dpdk.org
  Reporter: weiyuanx...@intel.com
  Target Milestone: ---

[Environment]

DPDK version: dpdk22.11.0rc1 (5976328d91c3616b1ad841a9181e1da23a2980bf)
OS: Ubuntu 22.04.1 LTS (Jammy Jellyfish)/5.15.0-27-generic
Compiler: gcc (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Hardware platform: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
NIC hardware: Ethernet Controller XXV710 for 25GbE SFP28 158b.
NIC firmware: 
driver: i40e
version: 5.15.0-27-generic
firmware-version: 9.00 0x8000cead 1.3179.0

[Test Setup]
Steps to reproduce

1. Use the following command to build DPDK: 
CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static
x86_64-native-linuxapp-gcc/ 
ninja -C x86_64-native-linuxapp-gcc/ 

2. Execute the following command in the dpdk directory.   

MALLOC_PERTURB_=219 DPDK_TEST=link_bonding_autotest
/root/dpdk/x86_64-native-linuxapp-gcc/app/test/dpdk-test -c 0xff

MALLOC_PERTURB_=219 DPDK_TEST=link_bonding_rssconf_autotest
/root/dpdk/x86_64-native-linuxapp-gcc/app/test/dpdk-test -c 0xff

MALLOC_PERTURB_=219 DPDK_TEST=link_bonding_mode4_autotest
/root/dpdk/x86_64-native-linuxapp-gcc/app/test/dpdk-test -c 0xff

Output:

root@dpdk-VF-dut247:~/dpdk# MALLOC_PERTURB_=219
DPDK_TEST=link_bonding_rssconf_autotest
/root/dpdk/x86_64-native-linuxapp-gcc/app/test/dpdk-test -c 0xff
EAL: Detected CPU lcores: 72
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
APP: HPET is not enabled, using TSC as default timer
RTE>>link_bonding_rssconf_autotest
 + --- +
 + Test Suite : RSS Dynamic Configuration for Bonding Unit Test Suite
 + --- +
Floating point exception (core dumped)

root@dpdk-VF-dut247:~/dpdk# MALLOC_PERTURB_=219 DPDK_TEST=link_bonding_autotest
/root/dpdk/x86_64-native-linuxapp-gcc/app/test/dpdk-test -c 0xff
EAL: Detected CPU lcores: 72
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
APP: HPET is not enabled, using TSC as default timer
RTE>>link_bonding_autotest
 + --- +
 + Test Suite : Link Bonding Unit Test Suite
 + --- +
Floating point exception (core dumped)

root@dpdk-VF-dut247:~/dpdk# MALLOC_PERTURB_=219
DPDK_TEST=link_bonding_mode4_autotest
/root/dpdk/x86_64-native-linuxapp-gcc/app/test/dpdk-test -c 0xff
EAL: Detected CPU lcores: 72
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
APP: HPET is not enabled, using TSC as default timer
RTE>>link_bonding_mode4_autotest
 + --- +
 + Test Suite : Link Bonding mode 4 Unit Test Suite
Floating point exception (core dumped)

[Expected Result]

Test ok.

[Regression]
Is this issue a regression: Y

d03c0e83cc0042dc35e37f984de15533b09e6ac9 is the first bad commit
commit d03c0e83cc0042dc35e37f984de15533b09e6ac9
Author: Ivan Malov 
Date:   Sun Sep 11 15:19:01 2022 +0300

    net/bonding: fix descriptor limit reporting

    Commit 5be3b40fea60 ("net/bonding: fix values of descriptor limits")
    breaks reporting of "nb_min" and "nb_align" values obtained from
    back-end devices' descriptor limits. This means that work done
    by eth_bond_slave_inherit_desc_lim_first() as well as
    eth_bond_slave_inherit_desc_lim_next() gets dismissed.

    Revert the offending commit and use proper workaround
    for the test case mentioned in the said commit.

    Meanwhile, the test case itself might be poorly constructed.
    It tries to run a bond with no back-end devices attached,
    but, according to [1] ("Requirements / Limitations"),
    at least one back-end device must be attached.

    [1] doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst

    Fixes: 5be3b40fea60 ("net/bonding: fix values of descriptor limits")
    Cc: sta...@dpdk.org

    Signed-off-by: Ivan Mal

[PATCH v18 00/18] add support for idpf PMD in DPDK

2022-10-31 Thread beilei . xing
From: Beilei Xing 

This patchset introduces the idpf (Infrastructure Data Path Function) PMD in
DPDK for the Intel® IPU E2000 (Device ID: 0x1452).
The Intel® IPU E2000 is designed to deliver high performance under real
workloads with security and isolation.
Please refer to
https://www.intel.com/content/www/us/en/products/network-io/infrastructure-processing-units/asic/e2000-asic.html
for more information.

Linux upstreaming is still in progress; for previous work, refer to
https://patchwork.ozlabs.org/project/intel-wired-lan/patch/20220128001009.721392-20-alan.br...@intel.com/.

v2-v4:
fixed some coding style issues and did some refactoring.

v5:
fixed typo.

v6-v9:
fixed build errors and coding style issues.

v11:
 - move shared code to common/idpf/base
 - Create one vport if there's no vport devargs
 - Refactor if conditions according to coding style
 - Refactor virtual channel return values
 - Refine dev_stop function
 - Refine RSS lut/key
 - Fix build error

v12:
 - Refine dev_configure
 - Fix coding style according to the comments
 - Re-order patch
 - Remove dev_supported_ptypes_get

v13:
 - refine dev_start/stop and queue_start/stop
 - fix timestamp offload

v14:
 - fix wrong position for rte_validate_tx_offload

v15:
 - refine the return value for ethdev ops.
 - remove forward static declarations.
 - refine get caps.
 - fix lock/unlock handling.

v16:
 - refine errno in shared code
 - remove the conditional compilation IDPF_RX_PTYPE_OFFLOAD

v17:
 - fix build error on FreeBSD

v18:
 - remove the conditional compilation IDPF_RX_PTYPE_OFFLOAD

Junfeng Guo (18):
  common/idpf: introduce common library
  net/idpf: add support for device initialization
  net/idpf: add Tx queue setup
  net/idpf: add Rx queue setup
  net/idpf: add support for device start and stop
  net/idpf: add support for queue start
  net/idpf: add support for queue stop
  net/idpf: add queue release
  net/idpf: add support for MTU configuration
  net/idpf: add support for basic Rx datapath
  net/idpf: add support for basic Tx datapath
  net/idpf: support parsing packet type
  net/idpf: add support for write back based on ITR expire
  net/idpf: add support for RSS
  net/idpf: add support for Rx offloading
  net/idpf: add support for Tx offloading
  net/idpf: add AVX512 data path for single queue model
  net/idpf: add support for timestamp offload

 MAINTAINERS   |9 +
 doc/guides/nics/features/idpf.ini |   17 +
 doc/guides/nics/idpf.rst  |   85 +
 doc/guides/nics/index.rst |1 +
 doc/guides/rel_notes/release_22_11.rst|6 +
 drivers/common/idpf/base/idpf_alloc.h |   22 +
 drivers/common/idpf/base/idpf_common.c|  364 +++
 drivers/common/idpf/base/idpf_controlq.c  |  691 
 drivers/common/idpf/base/idpf_controlq.h  |  224 ++
 drivers/common/idpf/base/idpf_controlq_api.h  |  207 ++
 .../common/idpf/base/idpf_controlq_setup.c|  179 +
 drivers/common/idpf/base/idpf_devids.h|   18 +
 drivers/common/idpf/base/idpf_lan_pf_regs.h   |  134 +
 drivers/common/idpf/base/idpf_lan_txrx.h  |  428 +++
 drivers/common/idpf/base/idpf_lan_vf_regs.h   |  114 +
 drivers/common/idpf/base/idpf_osdep.h |  364 +++
 drivers/common/idpf/base/idpf_prototype.h |   45 +
 drivers/common/idpf/base/idpf_type.h  |  106 +
 drivers/common/idpf/base/meson.build  |   14 +
 drivers/common/idpf/base/siov_regs.h  |   47 +
 drivers/common/idpf/base/virtchnl.h   | 2866 +
 drivers/common/idpf/base/virtchnl2.h  | 1462 +
 drivers/common/idpf/base/virtchnl2_lan_desc.h |  606 
 .../common/idpf/base/virtchnl_inline_ipsec.h  |  567 
 drivers/common/idpf/meson.build   |4 +
 drivers/common/idpf/version.map   |   12 +
 drivers/common/meson.build|1 +
 drivers/net/idpf/idpf_ethdev.c| 1293 
 drivers/net/idpf/idpf_ethdev.h|  252 ++
 drivers/net/idpf/idpf_logs.h  |   56 +
 drivers/net/idpf/idpf_rxtx.c  | 2308 +
 drivers/net/idpf/idpf_rxtx.h  |  291 ++
 drivers/net/idpf/idpf_rxtx_vec_avx512.c   |  857 +
 drivers/net/idpf/idpf_rxtx_vec_common.h   |  100 +
 drivers/net/idpf/idpf_vchnl.c | 1416 
 drivers/net/idpf/meson.build  |   44 +
 drivers/net/idpf/version.map  |3 +
 drivers/net/meson.build   |1 +
 38 files changed, 15214 insertions(+)
 create mode 100644 doc/guides/nics/features/idpf.ini
 create mode 100644 doc/guides/nics/idpf.rst
 create mode 100644 drivers/common/idpf/base/idpf_alloc.h
 create mode 100644 drivers/common/idpf/base/idpf_common.c
 create mode 100644 drivers/common/idpf/base/idpf_controlq.c
 create mode 100644 drivers/common/idpf/base/idpf_controlq.h
 create mode 100644 drivers/common/idpf/base/idpf_controlq_api.h
 create mode

[PATCH v18 02/18] net/idpf: add support for device initialization

2022-10-31 Thread beilei . xing
From: Junfeng Guo 

Support device init and add the following dev ops:
 - dev_configure
 - dev_close
 - dev_infos_get

Signed-off-by: Beilei Xing 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Xiao Wang 
Signed-off-by: Wenjun Wu 
Signed-off-by: Junfeng Guo 
---
 MAINTAINERS|   9 +
 doc/guides/nics/features/idpf.ini  |   9 +
 doc/guides/nics/idpf.rst   |  66 ++
 doc/guides/nics/index.rst  |   1 +
 doc/guides/rel_notes/release_22_11.rst |   6 +
 drivers/net/idpf/idpf_ethdev.c | 891 +
 drivers/net/idpf/idpf_ethdev.h | 189 ++
 drivers/net/idpf/idpf_logs.h   |  56 ++
 drivers/net/idpf/idpf_vchnl.c  | 416 
 drivers/net/idpf/meson.build   |  15 +
 drivers/net/idpf/version.map   |   3 +
 drivers/net/meson.build|   1 +
 12 files changed, 1662 insertions(+)
 create mode 100644 doc/guides/nics/features/idpf.ini
 create mode 100644 doc/guides/nics/idpf.rst
 create mode 100644 drivers/net/idpf/idpf_ethdev.c
 create mode 100644 drivers/net/idpf/idpf_ethdev.h
 create mode 100644 drivers/net/idpf/idpf_logs.h
 create mode 100644 drivers/net/idpf/idpf_vchnl.c
 create mode 100644 drivers/net/idpf/meson.build
 create mode 100644 drivers/net/idpf/version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index bdf233c9f8..cc66db25e8 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -770,6 +770,15 @@ F: drivers/net/ice/
 F: doc/guides/nics/ice.rst
 F: doc/guides/nics/features/ice.ini
 
+Intel idpf
+M: Jingjing Wu 
+M: Beilei Xing 
+T: git://dpdk.org/next/dpdk-next-net-intel
+F: drivers/net/idpf/
+F: drivers/common/idpf/
+F: doc/guides/nics/idpf.rst
+F: doc/guides/nics/features/idpf.ini
+
 Intel igc
 M: Junfeng Guo 
 M: Simei Su 
diff --git a/doc/guides/nics/features/idpf.ini b/doc/guides/nics/features/idpf.ini
new file mode 100644
index 00..46aab2eb61
--- /dev/null
+++ b/doc/guides/nics/features/idpf.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'idpf' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux= Y
+x86-32   = Y
+x86-64   = Y
diff --git a/doc/guides/nics/idpf.rst b/doc/guides/nics/idpf.rst
new file mode 100644
index 00..c1001d5d0c
--- /dev/null
+++ b/doc/guides/nics/idpf.rst
@@ -0,0 +1,66 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+Copyright(c) 2022 Intel Corporation.
+
+IDPF Poll Mode Driver
+=====================
+
+The [*EXPERIMENTAL*] idpf PMD (**librte_net_idpf**) provides poll mode driver support for
+Intel® Infrastructure Processing Unit (Intel® IPU) E2000.
+
+
+Linux Prerequisites
+-------------------
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux ` to set up the basic DPDK environment.
+
+- To get better performance on Intel platforms, please follow the "How to get best performance with NICs on Intel platforms"
+  section of the :ref:`Getting Started Guide for Linux `.
+
+
+Pre-Installation Configuration
+------------------------------
+
+Runtime Config Options
+~~~~~~~~~~~~~~~~~~~~~~
+
+- ``vport`` (default ``0``)
+
+  The IDPF PMD supports creation of multiple vports for one PCI device; each vport
+  corresponds to a single ethdev. Using the ``devargs`` parameter ``vport``, the user
+  can specify the vports with specific IDs to be created, for example::
+
+    -a ca:00.0,vport=[0,2,3]
+
+  Then the idpf PMD will create 3 vports (ethdevs) for device ca:00.0.
+  NOTE: If the parameter is not provided, vport 0 will be created by default.
+
+- ``rx_single`` (default ``0``)
+
+  There are two Rx queue modes supported by the Intel® IPU Ethernet ES2000 Series,
+  single queue mode and split queue mode. The user can choose the Rx queue mode via the
+  ``devargs`` parameter ``rx_single``::
+
+    -a ca:00.0,rx_single=1
+
+  Then the idpf PMD will configure the Rx queues in single queue mode. Otherwise, split queue
+  mode is chosen by default.
+
+- ``tx_single`` (default ``0``)
+
+  There are two Tx queue modes supported by the Intel® IPU Ethernet ES2000 Series,
+  single queue mode and split queue mode. The user can choose the Tx queue mode via the
+  ``devargs`` parameter ``tx_single``::
+
+    -a ca:00.0,tx_single=1
+
+  Then the idpf PMD will configure the Tx queues in single queue mode. Otherwise, split queue
+  mode is chosen by default.
+
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC `
+for details.
+
+
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 4d40ea29a3..12841ce407 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -34,6 +34,7 @@ Network Interface Controller Drivers
 hns3
 i40e
 ice
+idpf
 igb
 igc
 ionic
diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index 28812e092f..77674f2b06 100644
--- a/doc/guides/rel_notes/rele
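For reference, the devargs documented in the guide above combine on a normal EAL command line. A hypothetical dpdk-testpmd invocation (the PCI address, core list, and build path are placeholders, not taken from the patch):

```shell
# Hypothetical example: two vports on one idpf device, with Rx and Tx
# both forced into single queue mode. ca:00.0 and the path are placeholders.
./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 0-3 -n 4 \
    -a ca:00.0,vport=[0,1],rx_single=1,tx_single=1 \
    -- -i
```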

[PATCH v18 03/18] net/idpf: add Tx queue setup

2022-10-31 Thread beilei . xing
From: Junfeng Guo 

Add support for tx_queue_setup ops.

In the single queue model, the same descriptor queue is used by SW to
post buffer descriptors to HW and by HW to post completed descriptors
to SW.

In the split queue model, "RX buffer queues" are used to pass
descriptor buffers from SW to HW while Rx queues are used only to
pass the descriptor completions, that is, descriptors that point
to completed buffers, from HW to SW. This is contrary to the single
queue model in which Rx queues are used for both purposes.

Signed-off-by: Beilei Xing 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Junfeng Guo 
---
 drivers/net/idpf/idpf_ethdev.c |  13 ++
 drivers/net/idpf/idpf_rxtx.c   | 364 +
 drivers/net/idpf/idpf_rxtx.h   |  70 +++
 drivers/net/idpf/meson.build   |   1 +
 4 files changed, 448 insertions(+)
 create mode 100644 drivers/net/idpf/idpf_rxtx.c
 create mode 100644 drivers/net/idpf/idpf_rxtx.h

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 035f563275..54f20d30ca 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -11,6 +11,7 @@
 #include 
 
 #include "idpf_ethdev.h"
+#include "idpf_rxtx.h"
 
 #define IDPF_TX_SINGLE_Q   "tx_single"
 #define IDPF_RX_SINGLE_Q   "rx_single"
@@ -42,6 +43,17 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_mtu = dev_info->max_rx_pktlen - IDPF_ETH_OVERHEAD;
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+   dev_info->default_txconf = (struct rte_eth_txconf) {
+   .tx_free_thresh = IDPF_DEFAULT_TX_FREE_THRESH,
+   .tx_rs_thresh = IDPF_DEFAULT_TX_RS_THRESH,
+   };
+
+   dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+   .nb_max = IDPF_MAX_RING_DESC,
+   .nb_min = IDPF_MIN_RING_DESC,
+   .nb_align = IDPF_ALIGN_RING_DESC,
+   };
+
return 0;
 }
 
@@ -631,6 +643,7 @@ idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter *adapter)
 static const struct eth_dev_ops idpf_eth_dev_ops = {
.dev_configure  = idpf_dev_configure,
.dev_close  = idpf_dev_close,
+   .tx_queue_setup = idpf_tx_queue_setup,
.dev_infos_get  = idpf_dev_info_get,
 };
 
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
new file mode 100644
index 00..4afa0a2560
--- /dev/null
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -0,0 +1,364 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#include 
+#include 
+
+#include "idpf_ethdev.h"
+#include "idpf_rxtx.h"
+
+static int
+check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
+   uint16_t tx_free_thresh)
+{
+   /* TX descriptors will have their RS bit set after tx_rs_thresh
+* descriptors have been used. The TX descriptor ring will be cleaned
+* after tx_free_thresh descriptors are used or if the number of
+* descriptors required to transmit a packet is greater than the
+* number of free TX descriptors.
+*
+* The following constraints must be satisfied:
+*  - tx_rs_thresh must be less than the size of the ring minus 2.
+*  - tx_free_thresh must be less than the size of the ring minus 3.
+*  - tx_rs_thresh must be less than or equal to tx_free_thresh.
+*  - tx_rs_thresh must be a divisor of the ring size.
+*
+* One descriptor in the TX ring is used as a sentinel to avoid a H/W
+* race condition, hence the maximum threshold constraints. When set
+* to zero use default values.
+*/
+   if (tx_rs_thresh >= (nb_desc - 2)) {
+   PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be less than the "
+"number of TX descriptors (%u) minus 2",
+tx_rs_thresh, nb_desc);
+   return -EINVAL;
+   }
+   if (tx_free_thresh >= (nb_desc - 3)) {
+   PMD_INIT_LOG(ERR, "tx_free_thresh (%u) must be less than the "
+"number of TX descriptors (%u) minus 3.",
+tx_free_thresh, nb_desc);
+   return -EINVAL;
+   }
+   if (tx_rs_thresh > tx_free_thresh) {
+   PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be less than or "
+"equal to tx_free_thresh (%u).",
+tx_rs_thresh, tx_free_thresh);
+   return -EINVAL;
+   }
+   if ((nb_desc % tx_rs_thresh) != 0) {
+   PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be a divisor of the "
+"number of TX descriptors (%u).",
+tx_rs_thresh, nb_desc);
+   return -EINVAL;
+   }
+
+   return 0;
+}
+
+static void
+reset_split_tx_descq(struct idpf_tx_queue *txq)
+{
+   struct id

[PATCH v18 05/18] net/idpf: add support for device start and stop

2022-10-31 Thread beilei . xing
From: Junfeng Guo 

Add dev ops dev_start, dev_stop and link_update.

Signed-off-by: Beilei Xing 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Junfeng Guo 
---
 drivers/net/idpf/idpf_ethdev.c | 55 ++
 drivers/net/idpf/idpf_rxtx.c   | 20 +
 2 files changed, 75 insertions(+)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index fb5cd1b111..621bf9aad5 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -29,6 +29,22 @@ static const char * const idpf_valid_args[] = {
NULL
 };
 
+static int
+idpf_dev_link_update(struct rte_eth_dev *dev,
+__rte_unused int wait_to_complete)
+{
+   struct rte_eth_link new_link;
+
+   memset(&new_link, 0, sizeof(new_link));
+
+   new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+   new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+   new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
+ RTE_ETH_LINK_SPEED_FIXED);
+
+   return rte_eth_linkstatus_set(dev, &new_link);
+}
+
 static int
 idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 {
@@ -267,6 +283,42 @@ idpf_dev_configure(struct rte_eth_dev *dev)
return 0;
 }
 
+static int
+idpf_dev_start(struct rte_eth_dev *dev)
+{
+   struct idpf_vport *vport = dev->data->dev_private;
+   int ret;
+
+   if (dev->data->mtu > vport->max_mtu) {
+   PMD_DRV_LOG(ERR, "MTU should be less than %d", vport->max_mtu);
+   return -EINVAL;
+   }
+
+   vport->max_pkt_len = dev->data->mtu + IDPF_ETH_OVERHEAD;
+
+   /* TODO: start queues */
+
+   ret = idpf_vc_ena_dis_vport(vport, true);
+   if (ret != 0) {
+   PMD_DRV_LOG(ERR, "Failed to enable vport");
+   return ret;
+   }
+
+   return 0;
+}
+
+static int
+idpf_dev_stop(struct rte_eth_dev *dev)
+{
+   struct idpf_vport *vport = dev->data->dev_private;
+
+   idpf_vc_ena_dis_vport(vport, false);
+
+   /* TODO: stop queues */
+
+   return 0;
+}
+
 static int
 idpf_dev_close(struct rte_eth_dev *dev)
 {
@@ -656,6 +708,9 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
.rx_queue_setup = idpf_rx_queue_setup,
.tx_queue_setup = idpf_tx_queue_setup,
.dev_infos_get  = idpf_dev_info_get,
+   .dev_start  = idpf_dev_start,
+   .dev_stop   = idpf_dev_stop,
+   .link_update= idpf_dev_link_update,
 };
 
 static uint16_t
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 25dd5d85d5..3528d2f2c7 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -334,6 +334,11 @@ idpf_rx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
if (check_rx_thresh(nb_desc, rx_free_thresh) != 0)
return -EINVAL;
 
+   if (rx_conf->rx_deferred_start) {
+   PMD_INIT_LOG(ERR, "Queue start is not supported currently.");
+   return -EINVAL;
+   }
+
/* Setup Rx description queue */
rxq = rte_zmalloc_socket("idpf rxq",
 sizeof(struct idpf_rx_queue),
@@ -465,6 +470,11 @@ idpf_rx_single_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
if (check_rx_thresh(nb_desc, rx_free_thresh) != 0)
return -EINVAL;
 
+   if (rx_conf->rx_deferred_start) {
+   PMD_INIT_LOG(ERR, "Queue start is not supported currently.");
+   return -EINVAL;
+   }
+
/* Setup Rx description queue */
rxq = rte_zmalloc_socket("idpf rxq",
 sizeof(struct idpf_rx_queue),
@@ -569,6 +579,11 @@ idpf_tx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
return -EINVAL;
 
+   if (tx_conf->tx_deferred_start) {
+   PMD_INIT_LOG(ERR, "Queue start is not supported currently.");
+   return -EINVAL;
+   }
+
/* Allocate the TX queue data structure. */
txq = rte_zmalloc_socket("idpf split txq",
 sizeof(struct idpf_tx_queue),
@@ -691,6 +706,11 @@ idpf_tx_single_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
return -EINVAL;
 
+   if (tx_conf->tx_deferred_start) {
+   PMD_INIT_LOG(ERR, "Queue start is not supported currently.");
+   return -EINVAL;
+   }
+
/* Allocate the TX queue data structure. */
txq = rte_zmalloc_socket("idpf txq",
 sizeof(struct idpf_tx_queue),
-- 
2.26.2



[PATCH v18 04/18] net/idpf: add Rx queue setup

2022-10-31 Thread beilei . xing
From: Junfeng Guo 

Add support for rx_queue_setup ops.

Signed-off-by: Beilei Xing 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Junfeng Guo 
---
 drivers/net/idpf/idpf_ethdev.c |  11 +
 drivers/net/idpf/idpf_rxtx.c   | 400 +
 drivers/net/idpf/idpf_rxtx.h   |  46 
 3 files changed, 457 insertions(+)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 54f20d30ca..fb5cd1b111 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -48,12 +48,22 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
.tx_rs_thresh = IDPF_DEFAULT_TX_RS_THRESH,
};
 
+   dev_info->default_rxconf = (struct rte_eth_rxconf) {
+   .rx_free_thresh = IDPF_DEFAULT_RX_FREE_THRESH,
+   };
+
dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
.nb_max = IDPF_MAX_RING_DESC,
.nb_min = IDPF_MIN_RING_DESC,
.nb_align = IDPF_ALIGN_RING_DESC,
};
 
+   dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+   .nb_max = IDPF_MAX_RING_DESC,
+   .nb_min = IDPF_MIN_RING_DESC,
+   .nb_align = IDPF_ALIGN_RING_DESC,
+   };
+
return 0;
 }
 
@@ -643,6 +653,7 @@ idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter *adapter)
 static const struct eth_dev_ops idpf_eth_dev_ops = {
.dev_configure  = idpf_dev_configure,
.dev_close  = idpf_dev_close,
+   .rx_queue_setup = idpf_rx_queue_setup,
.tx_queue_setup = idpf_tx_queue_setup,
.dev_infos_get  = idpf_dev_info_get,
 };
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 4afa0a2560..25dd5d85d5 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -8,6 +8,21 @@
 #include "idpf_ethdev.h"
 #include "idpf_rxtx.h"
 
+static int
+check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
+{
+   /* The following constraints must be satisfied:
+*   thresh < rxq->nb_rx_desc
+*/
+   if (thresh >= nb_desc) {
+   PMD_INIT_LOG(ERR, "rx_free_thresh (%u) must be less than %u",
+thresh, nb_desc);
+   return -EINVAL;
+   }
+
+   return 0;
+}
+
 static int
 check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
uint16_t tx_free_thresh)
@@ -56,6 +71,87 @@ check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
return 0;
 }
 
+static void
+reset_split_rx_descq(struct idpf_rx_queue *rxq)
+{
+   uint16_t len;
+   uint32_t i;
+
+   if (rxq == NULL)
+   return;
+
+   len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
+
+   for (i = 0; i < len * sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3);
+i++)
+   ((volatile char *)rxq->rx_ring)[i] = 0;
+
+   rxq->rx_tail = 0;
+   rxq->expected_gen_id = 1;
+}
+
+static void
+reset_split_rx_bufq(struct idpf_rx_queue *rxq)
+{
+   uint16_t len;
+   uint32_t i;
+
+   if (rxq == NULL)
+   return;
+
+   len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
+
+   for (i = 0; i < len * sizeof(struct virtchnl2_splitq_rx_buf_desc);
+i++)
+   ((volatile char *)rxq->rx_ring)[i] = 0;
+
+   memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+
+   for (i = 0; i < IDPF_RX_MAX_BURST; i++)
+   rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
+
+   /* The next descriptor id which can be received. */
+   rxq->rx_next_avail = 0;
+
+   /* The next descriptor id which can be refilled. */
+   rxq->rx_tail = 0;
+   /* The number of descriptors which can be refilled. */
+   rxq->nb_rx_hold = rxq->nb_rx_desc - 1;
+
+   rxq->bufq1 = NULL;
+   rxq->bufq2 = NULL;
+}
+
+static void
+reset_single_rx_queue(struct idpf_rx_queue *rxq)
+{
+   uint16_t len;
+   uint32_t i;
+
+   if (rxq == NULL)
+   return;
+
+   len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
+
+   for (i = 0; i < len * sizeof(struct virtchnl2_singleq_rx_buf_desc);
+i++)
+   ((volatile char *)rxq->rx_ring)[i] = 0;
+
+   memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+
+   for (i = 0; i < IDPF_RX_MAX_BURST; i++)
+   rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
+
+   rxq->rx_tail = 0;
+   rxq->nb_rx_hold = 0;
+
+   if (rxq->pkt_first_seg != NULL)
+   rte_pktmbuf_free(rxq->pkt_first_seg);
+
+   rxq->pkt_first_seg = NULL;
+   rxq->pkt_last_seg = NULL;
+}
+
 static void
 reset_split_tx_descq(struct idpf_tx_queue *txq)
 {
@@ -145,6 +241,310 @@ reset_single_tx_queue(struct idpf_tx_queue *txq)
txq->next_rs = txq->rs_thresh - 1;
 }
 
+static int
+idpf_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *bufq,
+   

[PATCH v18 06/18] net/idpf: add support for queue start

2022-10-31 Thread beilei . xing
From: Junfeng Guo 

Add support for these device ops:
 - rx_queue_start
 - tx_queue_start

Signed-off-by: Beilei Xing 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Junfeng Guo 
---
 drivers/net/idpf/idpf_ethdev.c |  42 +++-
 drivers/net/idpf/idpf_ethdev.h |   9 +
 drivers/net/idpf/idpf_rxtx.c   | 237 +++--
 drivers/net/idpf/idpf_rxtx.h   |   6 +
 drivers/net/idpf/idpf_vchnl.c  | 447 +
 5 files changed, 720 insertions(+), 21 deletions(-)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 621bf9aad5..0400ed611f 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -283,6 +283,39 @@ idpf_dev_configure(struct rte_eth_dev *dev)
return 0;
 }
 
+static int
+idpf_start_queues(struct rte_eth_dev *dev)
+{
+   struct idpf_rx_queue *rxq;
+   struct idpf_tx_queue *txq;
+   int err = 0;
+   int i;
+
+   for (i = 0; i < dev->data->nb_tx_queues; i++) {
+   txq = dev->data->tx_queues[i];
+   if (txq == NULL || txq->tx_deferred_start)
+   continue;
+   err = idpf_tx_queue_start(dev, i);
+   if (err != 0) {
+   PMD_DRV_LOG(ERR, "Fail to start Tx queue %u", i);
+   return err;
+   }
+   }
+
+   for (i = 0; i < dev->data->nb_rx_queues; i++) {
+   rxq = dev->data->rx_queues[i];
+   if (rxq == NULL || rxq->rx_deferred_start)
+   continue;
+   err = idpf_rx_queue_start(dev, i);
+   if (err != 0) {
+   PMD_DRV_LOG(ERR, "Fail to start Rx queue %u", i);
+   return err;
+   }
+   }
+
+   return err;
+}
+
 static int
 idpf_dev_start(struct rte_eth_dev *dev)
 {
@@ -296,11 +329,16 @@ idpf_dev_start(struct rte_eth_dev *dev)
 
vport->max_pkt_len = dev->data->mtu + IDPF_ETH_OVERHEAD;
 
-   /* TODO: start queues */
+   ret = idpf_start_queues(dev);
+   if (ret != 0) {
+   PMD_DRV_LOG(ERR, "Failed to start queues");
+   return ret;
+   }
 
ret = idpf_vc_ena_dis_vport(vport, true);
if (ret != 0) {
PMD_DRV_LOG(ERR, "Failed to enable vport");
+   /* TODO: stop queues */
return ret;
}
 
@@ -711,6 +749,8 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
.dev_start  = idpf_dev_start,
.dev_stop   = idpf_dev_stop,
.link_update= idpf_dev_link_update,
+   .rx_queue_start = idpf_rx_queue_start,
+   .tx_queue_start = idpf_tx_queue_start,
 };
 
 static uint16_t
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 84ae6641e2..96c22009e9 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -24,7 +24,9 @@
 #define IDPF_DEFAULT_TXQ_NUM   16
 
 #define IDPF_INVALID_VPORT_IDX 0x
+#define IDPF_TXQ_PER_GRP   1
 #define IDPF_TX_COMPLQ_PER_GRP 1
+#define IDPF_RXQ_PER_GRP   1
 #define IDPF_RX_BUFQ_PER_GRP   2
 
 #define IDPF_CTLQ_ID   -1
@@ -182,6 +184,13 @@ int idpf_vc_check_api_version(struct idpf_adapter *adapter);
 int idpf_vc_get_caps(struct idpf_adapter *adapter);
 int idpf_vc_create_vport(struct idpf_adapter *adapter);
 int idpf_vc_destroy_vport(struct idpf_vport *vport);
+int idpf_vc_config_rxqs(struct idpf_vport *vport);
+int idpf_vc_config_rxq(struct idpf_vport *vport, uint16_t rxq_id);
+int idpf_vc_config_txqs(struct idpf_vport *vport);
+int idpf_vc_config_txq(struct idpf_vport *vport, uint16_t txq_id);
+int idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
+ bool rx, bool on);
+int idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable);
 int idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable);
 int idpf_read_one_msg(struct idpf_adapter *adapter, uint32_t ops,
  uint16_t buf_len, uint8_t *buf);
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 3528d2f2c7..6d954afd9d 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -334,11 +334,6 @@ idpf_rx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
if (check_rx_thresh(nb_desc, rx_free_thresh) != 0)
return -EINVAL;
 
-   if (rx_conf->rx_deferred_start) {
-   PMD_INIT_LOG(ERR, "Queue start is not supported currently.");
-   return -EINVAL;
-   }
-
/* Setup Rx description queue */
rxq = rte_zmalloc_socket("idpf rxq",
 sizeof(struct idpf_rx_queue),
@@ -354,6 +349,7 @@ idpf_rx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
rxq->rx_free_thresh = rx_free_thresh;
rxq->queue_id = vport->chunks_info.rx_start_qid + queue_idx;
rxq->port_id = dev-

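[Editorial sketch] The patch above replaces the "queue start is not supported" rejection with an idpf_start_queues() loop that silently skips NULL slots and queues flagged for deferred start. A minimal, self-contained model of that loop (demo_* names are stand-ins for the driver's structs, not DPDK API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for the driver's queue structs: only the
 * fields the start loop inspects are modeled here. */
struct demo_queue {
	bool deferred_start;
	bool started;
};

/* Mirrors the idpf_start_queues() pattern: start every queue except
 * NULL slots and queues marked deferred; bail out on first failure. */
static int demo_start_queues(struct demo_queue **qs, int nb, int *started)
{
	*started = 0;
	for (int i = 0; i < nb; i++) {
		if (qs[i] == NULL || qs[i]->deferred_start)
			continue;
		qs[i]->started = true; /* in the driver: idpf_rx/tx_queue_start() */
		(*started)++;
	}
	return 0;
}
```

Deferred queues stay stopped until the application starts them explicitly through the per-queue start ops registered in this patch.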
[PATCH v18 07/18] net/idpf: add support for queue stop

2022-10-31 Thread beilei . xing
From: Junfeng Guo 

Add support for these device ops:
 - rx_queue_stop
 - tx_queue_stop

Signed-off-by: Beilei Xing 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Junfeng Guo 
---
 drivers/net/idpf/idpf_ethdev.c |  17 ++--
 drivers/net/idpf/idpf_rxtx.c   | 148 +
 drivers/net/idpf/idpf_rxtx.h   |  13 +++
 drivers/net/idpf/idpf_vchnl.c  |  69 +++
 4 files changed, 242 insertions(+), 5 deletions(-)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 0400ed611f..9f1e1e6a18 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -324,7 +324,8 @@ idpf_dev_start(struct rte_eth_dev *dev)
 
if (dev->data->mtu > vport->max_mtu) {
PMD_DRV_LOG(ERR, "MTU should be less than %d", vport->max_mtu);
-   return -EINVAL;
+   ret = -EINVAL;
+   goto err_mtu;
}
 
vport->max_pkt_len = dev->data->mtu + IDPF_ETH_OVERHEAD;
@@ -332,17 +333,21 @@ idpf_dev_start(struct rte_eth_dev *dev)
ret = idpf_start_queues(dev);
if (ret != 0) {
PMD_DRV_LOG(ERR, "Failed to start queues");
-   return ret;
+   goto err_mtu;
}
 
ret = idpf_vc_ena_dis_vport(vport, true);
if (ret != 0) {
PMD_DRV_LOG(ERR, "Failed to enable vport");
-   /* TODO: stop queues */
-   return ret;
+   goto err_vport;
}
 
return 0;
+
+err_vport:
+   idpf_stop_queues(dev);
+err_mtu:
+   return ret;
 }
 
 static int
@@ -352,7 +357,7 @@ idpf_dev_stop(struct rte_eth_dev *dev)
 
idpf_vc_ena_dis_vport(vport, false);
 
-   /* TODO: stop queues */
+   idpf_stop_queues(dev);
 
return 0;
 }
@@ -751,6 +756,8 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
.link_update= idpf_dev_link_update,
.rx_queue_start = idpf_rx_queue_start,
.tx_queue_start = idpf_tx_queue_start,
+   .rx_queue_stop  = idpf_rx_queue_stop,
+   .tx_queue_stop  = idpf_tx_queue_stop,
 };
 
 static uint16_t
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 6d954afd9d..8d5ec41a1f 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -71,6 +71,55 @@ check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
return 0;
 }
 
+static void
+release_rxq_mbufs(struct idpf_rx_queue *rxq)
+{
+   uint16_t i;
+
+   if (rxq->sw_ring == NULL)
+   return;
+
+   for (i = 0; i < rxq->nb_rx_desc; i++) {
+   if (rxq->sw_ring[i] != NULL) {
+   rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+   rxq->sw_ring[i] = NULL;
+   }
+   }
+}
+
+static void
+release_txq_mbufs(struct idpf_tx_queue *txq)
+{
+   uint16_t nb_desc, i;
+
+   if (txq == NULL || txq->sw_ring == NULL) {
+   PMD_DRV_LOG(DEBUG, "Pointer to txq or sw_ring is NULL");
+   return;
+   }
+
+   if (txq->sw_nb_desc != 0) {
+   /* For split queue model, descriptor ring */
+   nb_desc = txq->sw_nb_desc;
+   } else {
+   /* For single queue model */
+   nb_desc = txq->nb_tx_desc;
+   }
+   for (i = 0; i < nb_desc; i++) {
+   if (txq->sw_ring[i].mbuf != NULL) {
+   rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+   txq->sw_ring[i].mbuf = NULL;
+   }
+   }
+}
+
+static const struct idpf_rxq_ops def_rxq_ops = {
+   .release_mbufs = release_rxq_mbufs,
+};
+
+static const struct idpf_txq_ops def_txq_ops = {
+   .release_mbufs = release_txq_mbufs,
+};
+
 static void
 reset_split_rx_descq(struct idpf_rx_queue *rxq)
 {
@@ -122,6 +171,14 @@ reset_split_rx_bufq(struct idpf_rx_queue *rxq)
rxq->bufq2 = NULL;
 }
 
+static inline void
+reset_split_rx_queue(struct idpf_rx_queue *rxq)
+{
+   reset_split_rx_descq(rxq);
+   reset_split_rx_bufq(rxq->bufq1);
+   reset_split_rx_bufq(rxq->bufq2);
+}
+
 static void
 reset_single_rx_queue(struct idpf_rx_queue *rxq)
 {
@@ -301,6 +358,7 @@ idpf_rx_split_bufq_setup(struct rte_eth_dev *dev, struct 
idpf_rx_queue *bufq,
bufq->q_set = true;
bufq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_buf_qtail_start +
 queue_idx * vport->chunks_info.rx_buf_qtail_spacing);
+   bufq->ops = &def_rxq_ops;
 
/* TODO: allow bulk or vec */
 
@@ -527,6 +585,7 @@ idpf_rx_single_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
dev->data->rx_queues[queue_idx] = rxq;
rxq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_qtail_start +
queue_idx * vport->chunks_info.rx_qtail_spacing);
+   rxq->ops = &def_rxq_ops;
 
return 0;
 }
@@ -621,6 +680,7 @@ idpf_tx_split_queue_setup(struct

[PATCH v18 09/18] net/idpf: add support for MTU configuration

2022-10-31 Thread beilei . xing
From: Junfeng Guo 

Add dev ops mtu_set.

Signed-off-by: Beilei Xing 
Signed-off-by: Junfeng Guo 
---
 doc/guides/nics/features/idpf.ini |  1 +
 drivers/net/idpf/idpf_ethdev.c| 13 +
 2 files changed, 14 insertions(+)

diff --git a/doc/guides/nics/features/idpf.ini b/doc/guides/nics/features/idpf.ini
index 46aab2eb61..d722c49fde 100644
--- a/doc/guides/nics/features/idpf.ini
+++ b/doc/guides/nics/features/idpf.ini
@@ -4,6 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+MTU update   = Y
 Linux= Y
 x86-32   = Y
 x86-64   = Y
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 1485f40e71..856f3d7266 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -83,6 +83,18 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
return 0;
 }
 
+static int
+idpf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
+{
+   /* MTU setting is forbidden while the port is started */
+   if (dev->data->dev_started) {
+   PMD_DRV_LOG(ERR, "port must be stopped before configuration");
+   return -EBUSY;
+   }
+
+   return 0;
+}
+
 static int
 idpf_init_vport_req_info(struct rte_eth_dev *dev)
 {
@@ -760,6 +772,7 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
.tx_queue_stop  = idpf_tx_queue_stop,
.rx_queue_release   = idpf_dev_rx_queue_release,
.tx_queue_release   = idpf_dev_tx_queue_release,
+   .mtu_set= idpf_dev_mtu_set,
 };
 
 static uint16_t
-- 
2.26.2


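[Editorial sketch] The mtu_set op above only guards against a started port; the usable packet length itself comes from the MTU plus the fixed L2 overhead defined earlier in this series (Ethernet header + CRC + two VLAN tags). A small self-contained check of that arithmetic, with DEMO_* constants mirroring the values shown in idpf_ethdev.h:

```c
#include <assert.h>
#include <stdint.h>

/* Constants mirrored from the driver headers in this series:
 * Ethernet header (14) + CRC (4) + two VLAN tags (2 * 4) = 26 bytes. */
#define DEMO_ETHER_HDR_LEN 14
#define DEMO_ETHER_CRC_LEN 4
#define DEMO_VLAN_TAG_SIZE 4
#define DEMO_ETH_OVERHEAD \
	(DEMO_ETHER_HDR_LEN + DEMO_ETHER_CRC_LEN + DEMO_VLAN_TAG_SIZE * 2)

/* idpf_dev_start() derives the max packet length from the MTU this way. */
static inline uint32_t demo_max_pkt_len(uint16_t mtu)
{
	return (uint32_t)mtu + DEMO_ETH_OVERHEAD;
}
```

So a standard 1500-byte MTU maps to a 1526-byte maximum frame on the wire.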

[PATCH v18 08/18] net/idpf: add queue release

2022-10-31 Thread beilei . xing
From: Junfeng Guo 

Add support for queue operations:
 - rx_queue_release
 - tx_queue_release

Signed-off-by: Beilei Xing 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Junfeng Guo 
---
 drivers/net/idpf/idpf_ethdev.c |  2 +
 drivers/net/idpf/idpf_rxtx.c   | 81 ++
 drivers/net/idpf/idpf_rxtx.h   |  3 ++
 3 files changed, 86 insertions(+)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 9f1e1e6a18..1485f40e71 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -758,6 +758,8 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
.tx_queue_start = idpf_tx_queue_start,
.rx_queue_stop  = idpf_rx_queue_stop,
.tx_queue_stop  = idpf_tx_queue_stop,
+   .rx_queue_release   = idpf_dev_rx_queue_release,
+   .tx_queue_release   = idpf_dev_tx_queue_release,
 };
 
 static uint16_t
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 8d5ec41a1f..053409b99a 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -171,6 +171,51 @@ reset_split_rx_bufq(struct idpf_rx_queue *rxq)
rxq->bufq2 = NULL;
 }
 
+static void
+idpf_rx_queue_release(void *rxq)
+{
+   struct idpf_rx_queue *q = rxq;
+
+   if (q == NULL)
+   return;
+
+   /* Split queue */
+   if (q->bufq1 != NULL && q->bufq2 != NULL) {
+   q->bufq1->ops->release_mbufs(q->bufq1);
+   rte_free(q->bufq1->sw_ring);
+   rte_memzone_free(q->bufq1->mz);
+   rte_free(q->bufq1);
+   q->bufq2->ops->release_mbufs(q->bufq2);
+   rte_free(q->bufq2->sw_ring);
+   rte_memzone_free(q->bufq2->mz);
+   rte_free(q->bufq2);
+   rte_memzone_free(q->mz);
+   rte_free(q);
+   return;
+   }
+
+   /* Single queue */
+   q->ops->release_mbufs(q);
+   rte_free(q->sw_ring);
+   rte_memzone_free(q->mz);
+   rte_free(q);
+}
+
+static void
+idpf_tx_queue_release(void *txq)
+{
+   struct idpf_tx_queue *q = txq;
+
+   if (q == NULL)
+   return;
+
+   rte_free(q->complq);
+   q->ops->release_mbufs(q);
+   rte_free(q->sw_ring);
+   rte_memzone_free(q->mz);
+   rte_free(q);
+}
+
 static inline void
 reset_split_rx_queue(struct idpf_rx_queue *rxq)
 {
@@ -392,6 +437,12 @@ idpf_rx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
if (check_rx_thresh(nb_desc, rx_free_thresh) != 0)
return -EINVAL;
 
+   /* Free memory if needed */
+   if (dev->data->rx_queues[queue_idx] != NULL) {
+   idpf_rx_queue_release(dev->data->rx_queues[queue_idx]);
+   dev->data->rx_queues[queue_idx] = NULL;
+   }
+
/* Setup Rx description queue */
rxq = rte_zmalloc_socket("idpf rxq",
 sizeof(struct idpf_rx_queue),
@@ -524,6 +575,12 @@ idpf_rx_single_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
if (check_rx_thresh(nb_desc, rx_free_thresh) != 0)
return -EINVAL;
 
+   /* Free memory if needed */
+   if (dev->data->rx_queues[queue_idx] != NULL) {
+   idpf_rx_queue_release(dev->data->rx_queues[queue_idx]);
+   dev->data->rx_queues[queue_idx] = NULL;
+   }
+
/* Setup Rx description queue */
rxq = rte_zmalloc_socket("idpf rxq",
 sizeof(struct idpf_rx_queue),
@@ -630,6 +687,12 @@ idpf_tx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
return -EINVAL;
 
+   /* Free memory if needed. */
+   if (dev->data->tx_queues[queue_idx] != NULL) {
+   idpf_tx_queue_release(dev->data->tx_queues[queue_idx]);
+   dev->data->tx_queues[queue_idx] = NULL;
+   }
+
/* Allocate the TX queue data structure. */
txq = rte_zmalloc_socket("idpf split txq",
 sizeof(struct idpf_tx_queue),
@@ -754,6 +817,12 @@ idpf_tx_single_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
return -EINVAL;
 
+   /* Free memory if needed. */
+   if (dev->data->tx_queues[queue_idx] != NULL) {
+   idpf_tx_queue_release(dev->data->tx_queues[queue_idx]);
+   dev->data->tx_queues[queue_idx] = NULL;
+   }
+
/* Allocate the TX queue data structure. */
txq = rte_zmalloc_socket("idpf txq",
 sizeof(struct idpf_tx_queue),
@@ -1102,6 +1171,18 @@ idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return 0;
 }
 
+void
+idpf_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+   idpf_rx_queue_r

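[Editorial sketch] In idpf_rx_queue_release() above, the split-queue model is detected purely by the presence of both buffer queues hanging off the descriptor queue; the single-queue path takes the fallthrough. That dispatch can be modeled in isolation (demo_rxq is a stand-in, not the driver struct):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Minimal stand-in for idpf_rx_queue: in the split model the Rx
 * descriptor queue owns two buffer queues; in the single model
 * both pointers are NULL. */
struct demo_rxq {
	struct demo_rxq *bufq1;
	struct demo_rxq *bufq2;
};

/* Mirrors the dispatch in idpf_rx_queue_release(): both buffer
 * queues present identifies the split-queue model, which must also
 * free each bufq's sw_ring, memzone, and the bufq itself. */
static bool demo_is_split(const struct demo_rxq *q)
{
	return q->bufq1 != NULL && q->bufq2 != NULL;
}
```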
[PATCH v18 10/18] net/idpf: add support for basic Rx datapath

2022-10-31 Thread beilei . xing
From: Junfeng Guo 

Add basic Rx support in split queue mode and single queue mode.

Signed-off-by: Beilei Xing 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Junfeng Guo 
---
 drivers/net/idpf/idpf_ethdev.c |   2 +
 drivers/net/idpf/idpf_rxtx.c   | 273 +
 drivers/net/idpf/idpf_rxtx.h   |   7 +-
 3 files changed, 281 insertions(+), 1 deletion(-)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 856f3d7266..2f1f95 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -348,6 +348,8 @@ idpf_dev_start(struct rte_eth_dev *dev)
goto err_mtu;
}
 
+   idpf_set_rx_function(dev);
+
ret = idpf_vc_ena_dis_vport(vport, true);
if (ret != 0) {
PMD_DRV_LOG(ERR, "Failed to enable vport");
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 053409b99a..ea499c4d37 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -1208,3 +1208,276 @@ idpf_stop_queues(struct rte_eth_dev *dev)
PMD_DRV_LOG(WARNING, "Fail to stop Tx queue %d", i);
}
 }
+
+static void
+idpf_split_rx_bufq_refill(struct idpf_rx_queue *rx_bufq)
+{
+   volatile struct virtchnl2_splitq_rx_buf_desc *rx_buf_ring;
+   volatile struct virtchnl2_splitq_rx_buf_desc *rx_buf_desc;
+   uint16_t nb_refill = rx_bufq->rx_free_thresh;
+   uint16_t nb_desc = rx_bufq->nb_rx_desc;
+   uint16_t next_avail = rx_bufq->rx_tail;
+   struct rte_mbuf *nmb[rx_bufq->rx_free_thresh];
+   struct rte_eth_dev *dev;
+   uint64_t dma_addr;
+   uint16_t delta;
+   int i;
+
+   if (rx_bufq->nb_rx_hold < rx_bufq->rx_free_thresh)
+   return;
+
+   rx_buf_ring = rx_bufq->rx_ring;
+   delta = nb_desc - next_avail;
+   if (unlikely(delta < nb_refill)) {
+   if (likely(rte_pktmbuf_alloc_bulk(rx_bufq->mp, nmb, delta) == 0)) {
+   for (i = 0; i < delta; i++) {
+   rx_buf_desc = &rx_buf_ring[next_avail + i];
+   rx_bufq->sw_ring[next_avail + i] = nmb[i];
+   dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
+   rx_buf_desc->hdr_addr = 0;
+   rx_buf_desc->pkt_addr = dma_addr;
+   }
+   nb_refill -= delta;
+   next_avail = 0;
+   rx_bufq->nb_rx_hold -= delta;
+   } else {
+   dev = &rte_eth_devices[rx_bufq->port_id];
+   dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail;
+   PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
+  rx_bufq->port_id, rx_bufq->queue_id);
+   return;
+   }
+   }
+
+   if (nb_desc - next_avail >= nb_refill) {
+   if (likely(rte_pktmbuf_alloc_bulk(rx_bufq->mp, nmb, nb_refill) == 0)) {
+   for (i = 0; i < nb_refill; i++) {
+   rx_buf_desc = &rx_buf_ring[next_avail + i];
+   rx_bufq->sw_ring[next_avail + i] = nmb[i];
+   dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
+   rx_buf_desc->hdr_addr = 0;
+   rx_buf_desc->pkt_addr = dma_addr;
+   }
+   next_avail += nb_refill;
+   rx_bufq->nb_rx_hold -= nb_refill;
+   } else {
+   dev = &rte_eth_devices[rx_bufq->port_id];
+   dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail;
+   PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
+  rx_bufq->port_id, rx_bufq->queue_id);
+   }
+   }
+
+   IDPF_PCI_REG_WRITE(rx_bufq->qrx_tail, next_avail);
+
+   rx_bufq->rx_tail = next_avail;
+}
+
+uint16_t
+idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+   volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc_ring;
+   volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc;
+   uint16_t pktlen_gen_bufq_id;
+   struct idpf_rx_queue *rxq;
+   struct rte_mbuf *rxm;
+   uint16_t rx_id_bufq1;
+   uint16_t rx_id_bufq2;
+   uint16_t pkt_len;
+   uint16_t bufq_id;
+   uint16_t gen_id;
+   uint16_t rx_id;
+   uint16_t nb_rx;
+
+   nb_rx = 0;
+   rxq = rx_queue;
+
+   if (unlikely(rxq == NULL) || unlikely(!rxq->q_started))
+   return nb_rx;
+
+   rx_id = rxq->rx_tail;
+   rx_id_bufq1 = rxq->bufq1->rx_next_avail;
+   rx_id_bufq2 = rxq->bufq2->rx_next_avail;
+   rx_desc_ring = rxq->rx_ring;
+
+  

[PATCH v18 11/18] net/idpf: add support for basic Tx datapath

2022-10-31 Thread beilei . xing
From: Junfeng Guo 

Add basic Tx support in split queue mode and single queue mode.

Signed-off-by: Beilei Xing 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Junfeng Guo 
---
 drivers/net/idpf/idpf_ethdev.c |   3 +
 drivers/net/idpf/idpf_ethdev.h |   1 +
 drivers/net/idpf/idpf_rxtx.c   | 357 +
 drivers/net/idpf/idpf_rxtx.h   |  10 +
 4 files changed, 371 insertions(+)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 2f1f95..f9f6fe1162 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -59,6 +59,8 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_mtu = dev_info->max_rx_pktlen - IDPF_ETH_OVERHEAD;
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+   dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+
dev_info->default_txconf = (struct rte_eth_txconf) {
.tx_free_thresh = IDPF_DEFAULT_TX_FREE_THRESH,
.tx_rs_thresh = IDPF_DEFAULT_TX_RS_THRESH,
@@ -349,6 +351,7 @@ idpf_dev_start(struct rte_eth_dev *dev)
}
 
idpf_set_rx_function(dev);
+   idpf_set_tx_function(dev);
 
ret = idpf_vc_ena_dis_vport(vport, true);
if (ret != 0) {
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 96c22009e9..af0a8e2970 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -35,6 +35,7 @@
 
 #define IDPF_MIN_BUF_SIZE  1024
 #define IDPF_MAX_FRAME_SIZE9728
+#define IDPF_MIN_FRAME_SIZE14
 
 #define IDPF_NUM_MACADDR_MAX   64
 
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index ea499c4d37..f55d2143b9 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -1365,6 +1365,148 @@ idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_rx;
 }
 
+static inline void
+idpf_split_tx_free(struct idpf_tx_queue *cq)
+{
+   volatile struct idpf_splitq_tx_compl_desc *compl_ring = cq->compl_ring;
+   volatile struct idpf_splitq_tx_compl_desc *txd;
+   uint16_t next = cq->tx_tail;
+   struct idpf_tx_entry *txe;
+   struct idpf_tx_queue *txq;
+   uint16_t gen, qid, q_head;
+   uint8_t ctype;
+
+   txd = &compl_ring[next];
+   gen = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
+   IDPF_TXD_COMPLQ_GEN_M) >> IDPF_TXD_COMPLQ_GEN_S;
+   if (gen != cq->expected_gen_id)
+   return;
+
+   ctype = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
+   IDPF_TXD_COMPLQ_COMPL_TYPE_M) >> IDPF_TXD_COMPLQ_COMPL_TYPE_S;
+   qid = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
+   IDPF_TXD_COMPLQ_QID_M) >> IDPF_TXD_COMPLQ_QID_S;
+   q_head = rte_le_to_cpu_16(txd->q_head_compl_tag.compl_tag);
+   txq = cq->txqs[qid - cq->tx_start_qid];
+
+   switch (ctype) {
+   case IDPF_TXD_COMPLT_RE:
+   if (q_head == 0)
+   txq->last_desc_cleaned = txq->nb_tx_desc - 1;
+   else
+   txq->last_desc_cleaned = q_head - 1;
+   if (unlikely((txq->last_desc_cleaned % 32) == 0)) {
+   PMD_DRV_LOG(ERR, "unexpected desc (head = %u) completion.",
+   q_head);
+   return;
+   }
+
+   break;
+   case IDPF_TXD_COMPLT_RS:
+   txq->nb_free++;
+   txq->nb_used--;
+   txe = &txq->sw_ring[q_head];
+   if (txe->mbuf != NULL) {
+   rte_pktmbuf_free_seg(txe->mbuf);
+   txe->mbuf = NULL;
+   }
+   break;
+   default:
+   PMD_DRV_LOG(ERR, "unknown completion type.");
+   return;
+   }
+
+   if (++next == cq->nb_tx_desc) {
+   next = 0;
+   cq->expected_gen_id ^= 1;
+   }
+
+   cq->tx_tail = next;
+}
+
+uint16_t
+idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts)
+{
+   struct idpf_tx_queue *txq = (struct idpf_tx_queue *)tx_queue;
+   volatile struct idpf_flex_tx_sched_desc *txr;
+   volatile struct idpf_flex_tx_sched_desc *txd;
+   struct idpf_tx_entry *sw_ring;
+   struct idpf_tx_entry *txe, *txn;
+   uint16_t nb_used, tx_id, sw_id;
+   struct rte_mbuf *tx_pkt;
+   uint16_t nb_to_clean;
+   uint16_t nb_tx = 0;
+
+   if (unlikely(txq == NULL) || unlikely(!txq->q_started))
+   return nb_tx;
+
+   txr = txq->desc_ring;
+   sw_ring = txq->sw_ring;
+   tx_id = txq->tx_tail;
+   sw_id = txq->sw_tail;
+   txe = &sw_ring[sw_id];
+
+   for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+   tx_pkt = tx_pkts[nb_tx];
+
+   if (txq->nb_free <= txq->free_thresh) {
+   /* TODO: Need to refine
+

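[Editorial sketch] idpf_split_tx_free() above only consumes a completion when its generation bit matches the expected one; when the tail wraps past the last descriptor, the expected generation toggles so stale entries from the previous lap are ignored. The wrap/toggle step in isolation (demo_cq is a stand-in struct, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors the completion-queue walk in idpf_split_tx_free():
 * advancing past the last descriptor wraps the tail to 0 and toggles
 * the generation bit expected on the next pass over the ring. */
struct demo_cq {
	uint16_t tail;
	uint16_t nb_desc;
	uint8_t expected_gen;
};

static void demo_cq_advance(struct demo_cq *cq)
{
	if (++cq->tail == cq->nb_desc) {
		cq->tail = 0;
		cq->expected_gen ^= 1;
	}
}
```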
[PATCH v18 12/18] net/idpf: support parsing packet type

2022-10-31 Thread beilei . xing
From: Junfeng Guo 

Parse packet type during receiving packets.

Signed-off-by: Wenjun Wu 
Signed-off-by: Junfeng Guo 
---
 drivers/net/idpf/idpf_ethdev.c |   6 +
 drivers/net/idpf/idpf_ethdev.h |   6 +
 drivers/net/idpf/idpf_rxtx.c   |  11 ++
 drivers/net/idpf/idpf_rxtx.h   |   5 +
 drivers/net/idpf/idpf_vchnl.c  | 240 +
 5 files changed, 268 insertions(+)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index f9f6fe1162..d0821ec3f3 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -686,6 +686,12 @@ idpf_adapter_init(struct rte_pci_device *pci_dev, struct 
idpf_adapter *adapter)
goto err_api;
}
 
+   ret = idpf_get_pkt_type(adapter);
+   if (ret != 0) {
+   PMD_INIT_LOG(ERR, "Failed to set ptype table");
+   goto err_api;
+   }
+
adapter->caps = rte_zmalloc("idpf_caps",
sizeof(struct virtchnl2_get_capabilities), 0);
if (adapter->caps == NULL) {
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index af0a8e2970..db9af58f72 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -39,6 +39,8 @@
 
 #define IDPF_NUM_MACADDR_MAX   64
 
+#define IDPF_MAX_PKT_TYPE  1024
+
 #define IDPF_VLAN_TAG_SIZE 4
 #define IDPF_ETH_OVERHEAD \
(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + IDPF_VLAN_TAG_SIZE * 2)
@@ -125,6 +127,8 @@ struct idpf_adapter {
/* Max config queue number per VC message */
uint32_t max_rxq_per_msg;
uint32_t max_txq_per_msg;
+
+   uint32_t ptype_tbl[IDPF_MAX_PKT_TYPE] __rte_cache_min_aligned;
 };
 
 TAILQ_HEAD(idpf_adapter_list, idpf_adapter);
@@ -182,6 +186,7 @@ atomic_set_cmd(struct idpf_adapter *adapter, enum virtchnl_ops ops)
 struct idpf_adapter *idpf_find_adapter(struct rte_pci_device *pci_dev);
 void idpf_handle_virtchnl_msg(struct rte_eth_dev *dev);
 int idpf_vc_check_api_version(struct idpf_adapter *adapter);
+int idpf_get_pkt_type(struct idpf_adapter *adapter);
 int idpf_vc_get_caps(struct idpf_adapter *adapter);
 int idpf_vc_create_vport(struct idpf_adapter *adapter);
 int idpf_vc_destroy_vport(struct idpf_vport *vport);
@@ -193,6 +198,7 @@ int idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
  bool rx, bool on);
 int idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable);
 int idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable);
+int idpf_vc_query_ptype_info(struct idpf_adapter *adapter);
 int idpf_read_one_msg(struct idpf_adapter *adapter, uint32_t ops,
  uint16_t buf_len, uint8_t *buf);
 
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index f55d2143b9..a980714060 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -1281,6 +1281,7 @@ idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc;
uint16_t pktlen_gen_bufq_id;
struct idpf_rx_queue *rxq;
+   const uint32_t *ptype_tbl;
struct rte_mbuf *rxm;
uint16_t rx_id_bufq1;
uint16_t rx_id_bufq2;
@@ -1300,6 +1301,7 @@ idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rx_id_bufq1 = rxq->bufq1->rx_next_avail;
rx_id_bufq2 = rxq->bufq2->rx_next_avail;
rx_desc_ring = rxq->rx_ring;
+   ptype_tbl = rxq->adapter->ptype_tbl;
 
while (nb_rx < nb_pkts) {
rx_desc = &rx_desc_ring[rx_id];
@@ -1347,6 +1349,10 @@ idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rxm->next = NULL;
rxm->nb_segs = 1;
rxm->port = rxq->port_id;
+   rxm->packet_type =
+   ptype_tbl[(rte_le_to_cpu_16(rx_desc->ptype_err_fflags0) &
+  VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M) >>
+ VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S];
 
rx_pkts[nb_rx++] = rxm;
}
@@ -1533,6 +1539,7 @@ idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
volatile union virtchnl2_rx_desc *rxdp;
union virtchnl2_rx_desc rxd;
struct idpf_rx_queue *rxq;
+   const uint32_t *ptype_tbl;
uint16_t rx_id, nb_hold;
struct rte_eth_dev *dev;
uint16_t rx_packet_len;
@@ -1551,6 +1558,7 @@ idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 
rx_id = rxq->rx_tail;
rx_ring = rxq->rx_ring;
+   ptype_tbl = rxq->adapter->ptype_tbl;
 
while (nb_rx < nb_pkts) {
rxdp = &rx_ring[rx_id];
@@ -1603,6 +1611,9 @@ idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rxm->pkt_len = rx_packet_len;
rxm->data_len = rx_packet_len;
rxm->port = rxq->port_id;
+   rxm->packet_type =
+   

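[Editorial sketch] The Rx path above fills rxm->packet_type by masking and shifting the ptype index out of the descriptor's ptype_err_fflags0 word, then indexing the adapter's 1024-entry ptype table. The lookup pattern, with DEMO_* mask/shift values as placeholders for the real VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M/_S constants:

```c
#include <assert.h>
#include <stdint.h>

/* DEMO_PTYPE_M/_S are illustrative placeholders for the virtchnl2
 * constants VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M/_S: a 10-bit field
 * indexing the driver's IDPF_MAX_PKT_TYPE (1024) entry table. */
#define DEMO_PTYPE_S 0
#define DEMO_PTYPE_M (0x3ffU << DEMO_PTYPE_S)	/* 10 bits -> 1024 ptypes */

/* Mirrors the lookup in the Rx burst functions: extract the ptype
 * index from the descriptor word, then translate it through the
 * adapter's ptype table into an rte_mbuf packet_type value. */
static uint32_t demo_ptype(const uint32_t *ptype_tbl, uint16_t desc_word)
{
	return ptype_tbl[(desc_word & DEMO_PTYPE_M) >> DEMO_PTYPE_S];
}
```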
[PATCH v18 13/18] net/idpf: add support for write back based on ITR expire

2022-10-31 Thread beilei . xing
From: Junfeng Guo 

Enable write back on ITR expire, then packets can be received one by
one.

Signed-off-by: Beilei Xing 
Signed-off-by: Junfeng Guo 
---
 drivers/net/idpf/idpf_ethdev.c | 120 +
 drivers/net/idpf/idpf_ethdev.h |  13 
 drivers/net/idpf/idpf_vchnl.c  | 113 +++
 3 files changed, 246 insertions(+)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index d0821ec3f3..957cc10616 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -297,6 +297,90 @@ idpf_dev_configure(struct rte_eth_dev *dev)
return 0;
 }
 
+static int
+idpf_config_rx_queues_irqs(struct rte_eth_dev *dev)
+{
+   struct idpf_vport *vport = dev->data->dev_private;
+   struct idpf_adapter *adapter = vport->adapter;
+   struct virtchnl2_queue_vector *qv_map;
+   struct idpf_hw *hw = &adapter->hw;
+   uint32_t dynctl_reg_start;
+   uint32_t itrn_reg_start;
+   uint32_t dynctl_val, itrn_val;
+   uint16_t i;
+
+   qv_map = rte_zmalloc("qv_map",
+   dev->data->nb_rx_queues *
+   sizeof(struct virtchnl2_queue_vector), 0);
+   if (qv_map == NULL) {
+   PMD_DRV_LOG(ERR, "Failed to allocate %d queue-vector map",
+   dev->data->nb_rx_queues);
+   goto qv_map_alloc_err;
+   }
+
+   /* Rx interrupt disabled, Map interrupt only for writeback */
+
+   /* The capability flags adapter->caps->other_caps should be
+* compared with bit VIRTCHNL2_CAP_WB_ON_ITR here. The if
+* condition should be updated when the FW can return the
+* correct flag bits.
+*/
+   dynctl_reg_start =
+   vport->recv_vectors->vchunks.vchunks->dynctl_reg_start;
+   itrn_reg_start =
+   vport->recv_vectors->vchunks.vchunks->itrn_reg_start;
+   dynctl_val = IDPF_READ_REG(hw, dynctl_reg_start);
+   PMD_DRV_LOG(DEBUG, "Value of dynctl_reg_start is 0x%x",
+   dynctl_val);
+   itrn_val = IDPF_READ_REG(hw, itrn_reg_start);
+   PMD_DRV_LOG(DEBUG, "Value of itrn_reg_start is 0x%x", itrn_val);
+   /* Force write-backs by setting WB_ON_ITR bit in DYN_CTL
+* register. WB_ON_ITR and INTENA are mutually exclusive
+* bits. Setting WB_ON_ITR bits means TX and RX Descs
+* are written back based on ITR expiration irrespective
+* of INTENA setting.
+*/
+   /* TBD: need to tune INTERVAL value for better performance. */
+   if (itrn_val != 0)
+   IDPF_WRITE_REG(hw,
+  dynctl_reg_start,
+  VIRTCHNL2_ITR_IDX_0  <<
+  PF_GLINT_DYN_CTL_ITR_INDX_S |
+  PF_GLINT_DYN_CTL_WB_ON_ITR_M |
+  itrn_val <<
+  PF_GLINT_DYN_CTL_INTERVAL_S);
+   else
+   IDPF_WRITE_REG(hw,
+  dynctl_reg_start,
+  VIRTCHNL2_ITR_IDX_0  <<
+  PF_GLINT_DYN_CTL_ITR_INDX_S |
+  PF_GLINT_DYN_CTL_WB_ON_ITR_M |
+  IDPF_DFLT_INTERVAL <<
+  PF_GLINT_DYN_CTL_INTERVAL_S);
+
+   for (i = 0; i < dev->data->nb_rx_queues; i++) {
+   /* map all queues to the same vector */
+   qv_map[i].queue_id = vport->chunks_info.rx_start_qid + i;
+   qv_map[i].vector_id =
+   vport->recv_vectors->vchunks.vchunks->start_vector_id;
+   }
+   vport->qv_map = qv_map;
+
+   if (idpf_vc_config_irq_map_unmap(vport, true) != 0) {
+   PMD_DRV_LOG(ERR, "config interrupt mapping failed");
+   goto config_irq_map_err;
+   }
+
+   return 0;
+
+config_irq_map_err:
+   rte_free(vport->qv_map);
+   vport->qv_map = NULL;
+
+qv_map_alloc_err:
+   return -1;
+}
+
 static int
 idpf_start_queues(struct rte_eth_dev *dev)
 {
@@ -334,6 +418,10 @@ static int
 idpf_dev_start(struct rte_eth_dev *dev)
 {
struct idpf_vport *vport = dev->data->dev_private;
+   struct idpf_adapter *adapter = vport->adapter;
+   uint16_t num_allocated_vectors =
+   adapter->caps->num_allocated_vectors;
+   uint16_t req_vecs_num;
int ret;
 
if (dev->data->mtu > vport->max_mtu) {
@@ -344,6 +432,27 @@ idpf_dev_start(struct rte_eth_dev *dev)
 
vport->max_pkt_len = dev->data->mtu + IDPF_ETH_OVERHEAD;
 
+   req_vecs_num = IDPF_DFLT_Q_VEC_NUM;
+   if (req_vecs_num + adapter->used_vecs_num > num_allocated_vectors) {
+   PMD_DRV_LOG(ERR, "The accumulated request vectors' number should be less than %d",
+   num_allocated_vectors);
+   ret = -EINVAL;
+   goto err_mtu;
+   }
+
+   ret = idpf_vc_alloc_vectors(vpo

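[Editorial sketch] idpf_config_rx_queues_irqs() above composes the DYN_CTL register value from an ITR index, the WB_ON_ITR bit, and an interval, OR'd together at their field offsets. The composition pattern only, with DEMO_* shifts and masks as placeholders (they are not the real PF_GLINT_DYN_CTL_* definitions):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative register composition only: the shift/mask values are
 * placeholders, not the real PF_GLINT_DYN_CTL_* register layout.
 * The pattern matches idpf_config_rx_queues_irqs(): select an ITR
 * index, force write-backs on ITR expiry, and program an interval. */
#define DEMO_ITR_INDX_S  3
#define DEMO_INTERVAL_S  5
#define DEMO_WB_ON_ITR_M (1U << 30)

static uint32_t demo_dynctl(uint32_t itr_idx, uint32_t interval)
{
	return (itr_idx << DEMO_ITR_INDX_S) |
	       DEMO_WB_ON_ITR_M |
	       (interval << DEMO_INTERVAL_S);
}
```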
[PATCH v18 14/18] net/idpf: add support for RSS

2022-10-31 Thread beilei . xing
From: Junfeng Guo 

Add RSS support.

Signed-off-by: Beilei Xing 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Junfeng Guo 
---
 drivers/net/idpf/idpf_ethdev.c | 120 -
 drivers/net/idpf/idpf_ethdev.h |  26 +++
 drivers/net/idpf/idpf_vchnl.c  | 113 +++
 3 files changed, 258 insertions(+), 1 deletion(-)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 957cc10616..58560ea404 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -59,6 +59,8 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_mtu = dev_info->max_rx_pktlen - IDPF_ETH_OVERHEAD;
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+   dev_info->flow_type_rss_offloads = IDPF_RSS_OFFLOAD_ALL;
+
dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
dev_info->default_txconf = (struct rte_eth_txconf) {
@@ -169,6 +171,8 @@ idpf_parse_devarg_id(char *name)
return val;
 }
 
+#define IDPF_RSS_KEY_LEN 52
+
 static int
 idpf_init_vport(struct rte_eth_dev *dev)
 {
@@ -189,6 +193,10 @@ idpf_init_vport(struct rte_eth_dev *dev)
vport->max_mtu = vport_info->max_mtu;
rte_memcpy(vport->default_mac_addr,
   vport_info->default_mac_addr, ETH_ALEN);
+   vport->rss_algorithm = vport_info->rss_algorithm;
+   vport->rss_key_size = RTE_MIN(IDPF_RSS_KEY_LEN,
+vport_info->rss_key_size);
+   vport->rss_lut_size = vport_info->rss_lut_size;
vport->sw_idx = idx;
 
for (i = 0; i < vport_info->chunks.num_chunks; i++) {
@@ -246,17 +254,110 @@ idpf_init_vport(struct rte_eth_dev *dev)
return 0;
 }
 
+static int
+idpf_config_rss(struct idpf_vport *vport)
+{
+   int ret;
+
+   ret = idpf_vc_set_rss_key(vport);
+   if (ret != 0) {
+   PMD_INIT_LOG(ERR, "Failed to configure RSS key");
+   return ret;
+   }
+
+   ret = idpf_vc_set_rss_lut(vport);
+   if (ret != 0) {
+   PMD_INIT_LOG(ERR, "Failed to configure RSS lut");
+   return ret;
+   }
+
+   ret = idpf_vc_set_rss_hash(vport);
+   if (ret != 0) {
+   PMD_INIT_LOG(ERR, "Failed to configure RSS hash");
+   return ret;
+   }
+
+   return ret;
+}
+
+static int
+idpf_init_rss(struct idpf_vport *vport)
+{
+   struct rte_eth_rss_conf *rss_conf;
+   uint16_t i, nb_q, lut_size;
+   int ret = 0;
+
+   rss_conf = &vport->dev_data->dev_conf.rx_adv_conf.rss_conf;
+   nb_q = vport->dev_data->nb_rx_queues;
+
+   vport->rss_key = rte_zmalloc("rss_key",
+vport->rss_key_size, 0);
+   if (vport->rss_key == NULL) {
+   PMD_INIT_LOG(ERR, "Failed to allocate RSS key");
+   ret = -ENOMEM;
+   goto err_alloc_key;
+   }
+
+   lut_size = vport->rss_lut_size;
+   vport->rss_lut = rte_zmalloc("rss_lut",
+sizeof(uint32_t) * lut_size, 0);
+   if (vport->rss_lut == NULL) {
+   PMD_INIT_LOG(ERR, "Failed to allocate RSS lut");
+   ret = -ENOMEM;
+   goto err_alloc_lut;
+   }
+
+   if (rss_conf->rss_key == NULL) {
+   for (i = 0; i < vport->rss_key_size; i++)
+   vport->rss_key[i] = (uint8_t)rte_rand();
+   } else if (rss_conf->rss_key_len != vport->rss_key_size) {
+   PMD_INIT_LOG(ERR, "Invalid RSS key length in RSS configuration, should be %d",
+vport->rss_key_size);
+   ret = -EINVAL;
+   goto err_cfg_key;
+   } else {
+   rte_memcpy(vport->rss_key, rss_conf->rss_key,
+  vport->rss_key_size);
+   }
+
+   for (i = 0; i < lut_size; i++)
+   vport->rss_lut[i] = i % nb_q;
+
+   vport->rss_hf = IDPF_DEFAULT_RSS_HASH_EXPANDED;
+
+   ret = idpf_config_rss(vport);
+   if (ret != 0) {
+   PMD_INIT_LOG(ERR, "Failed to configure RSS");
+   goto err_cfg_key;
+   }
+
+   return ret;
+
+err_cfg_key:
+   rte_free(vport->rss_lut);
+   vport->rss_lut = NULL;
+err_alloc_lut:
+   rte_free(vport->rss_key);
+   vport->rss_key = NULL;
+err_alloc_key:
+   return ret;
+}
+
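The default LUT population in idpf_init_rss() above simply spreads redirection-table entries round-robin across the configured Rx queues when the user supplies no LUT. A minimal standalone sketch of that step (the helper name is made up for illustration, not part of the driver):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper mirroring the default LUT setup in idpf_init_rss():
 * redirection-table entries are spread round-robin across the configured
 * Rx queues, so traffic is balanced before the user supplies a LUT. */
void
fill_default_rss_lut(uint32_t *lut, uint16_t lut_size, uint16_t nb_q)
{
	uint16_t i;

	/* Entry i steers hash bucket i to queue (i % nb_q). */
	for (i = 0; i < lut_size; i++)
		lut[i] = i % nb_q;
}
```

With lut_size = 256 and nb_q = 4, hash buckets map to queues 0, 1, 2, 3, 0, 1, 2, 3, and so on.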
 static int
 idpf_dev_configure(struct rte_eth_dev *dev)
 {
+   struct idpf_vport *vport = dev->data->dev_private;
struct rte_eth_conf *conf = &dev->data->dev_conf;
+   struct idpf_adapter *adapter = vport->adapter;
+   int ret;
 
if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
PMD_INIT_LOG(ERR, "Setting link speed is not supported");
return -ENOTSUP;
}
 
-   if (dev->data->nb_rx_queues == 1 && conf->rxmode.mq_mode != RTE_ETH_MQ_RX_NONE) {
+   if ((dev->data->nb_rx_queues == 1 && conf->rxmode.mq_mode != 
R

[PATCH v18 15/18] net/idpf: add support for Rx offloading

2022-10-31 Thread beilei . xing
From: Junfeng Guo 

Add Rx offloading support:
 - support CHKSUM and RSS offload for split queue model
 - support CHKSUM offload for single queue model

Signed-off-by: Beilei Xing 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Junfeng Guo 
---
 doc/guides/nics/features/idpf.ini |   5 ++
 drivers/net/idpf/idpf_ethdev.c|   6 ++
 drivers/net/idpf/idpf_rxtx.c  | 123 ++
 drivers/net/idpf/idpf_vchnl.c |  18 +
 4 files changed, 152 insertions(+)

diff --git a/doc/guides/nics/features/idpf.ini 
b/doc/guides/nics/features/idpf.ini
index d722c49fde..868571654f 100644
--- a/doc/guides/nics/features/idpf.ini
+++ b/doc/guides/nics/features/idpf.ini
@@ -3,8 +3,13 @@
 ;
 ; Refer to default.ini for the full list of available PMD features.
 ;
+; A feature with "P" indicates it is only supported when the non-vector path
+; is selected.
+;
 [Features]
 MTU update   = Y
+L3 checksum offload  = P
+L4 checksum offload  = P
 Linux= Y
 x86-32   = Y
 x86-64   = Y
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 58560ea404..a09f104425 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -61,6 +61,12 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
dev_info->flow_type_rss_offloads = IDPF_RSS_OFFLOAD_ALL;
 
+   dev_info->rx_offload_capa =
+   RTE_ETH_RX_OFFLOAD_IPV4_CKSUM   |
+   RTE_ETH_RX_OFFLOAD_UDP_CKSUM|
+   RTE_ETH_RX_OFFLOAD_TCP_CKSUM|
+   RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+
dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
dev_info->default_txconf = (struct rte_eth_txconf) {
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index a980714060..f15e61a785 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -1209,6 +1209,73 @@ idpf_stop_queues(struct rte_eth_dev *dev)
}
 }
 
+#define IDPF_RX_FLEX_DESC_ADV_STATUS0_XSUM_S   \
+   (RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S) | \
+RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S) | \
+RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S) |\
+RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S))
+
+static inline uint64_t
+idpf_splitq_rx_csum_offload(uint8_t err)
+{
+   uint64_t flags = 0;
+
+   if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L3L4P_S)) == 0))
+   return flags;
+
+   if (likely((err & IDPF_RX_FLEX_DESC_ADV_STATUS0_XSUM_S) == 0)) {
+   flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+ RTE_MBUF_F_RX_L4_CKSUM_GOOD);
+   return flags;
+   }
+
+   if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S)) != 0))
+   flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+   else
+   flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+
+   if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S)) != 0))
+   flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
+   else
+   flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+
+   if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S)) != 0))
+   flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
+
+   if (unlikely((err & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S)) != 0))
+   flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
+   else
+   flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
+
+   return flags;
+}
+
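The classification above has three tiers: if the descriptor's L3/L4-parsed bit is clear nothing is reported, if no error bit is set both checksums are marked good in one step (the likely fast path), and otherwise each error bit maps to its BAD flag. An illustrative re-implementation with made-up bit positions and flag values (the real driver uses the VIRTCHNL2_* descriptor bits and RTE_MBUF_F_RX_* flags):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical bit positions, for illustration only. */
#define PARSED	(1u << 0)	/* "L3/L4 parsed" status bit */
#define IPE	(1u << 1)	/* IP checksum error bit */
#define L4E	(1u << 2)	/* L4 checksum error bit */

/* Hypothetical result flags, standing in for RTE_MBUF_F_RX_*. */
#define IP_GOOD	(1u << 0)
#define IP_BAD	(1u << 1)
#define L4_GOOD	(1u << 2)
#define L4_BAD	(1u << 3)

uint32_t
classify_csum(uint32_t err)
{
	if ((err & PARSED) == 0)
		return 0;			/* checksums not parsed at all */
	if ((err & (IPE | L4E)) == 0)
		return IP_GOOD | L4_GOOD;	/* likely case: no errors */
	/* Slow path: map each error bit to its BAD/GOOD flag. */
	return ((err & IPE) ? IP_BAD : IP_GOOD) |
	       ((err & L4E) ? L4_BAD : L4_GOOD);
}
```

For example, classify_csum(PARSED | L4E) yields IP_GOOD | L4_BAD: the IP checksum passed while the L4 checksum failed.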
+#define IDPF_RX_FLEX_DESC_ADV_HASH1_S  0
+#define IDPF_RX_FLEX_DESC_ADV_HASH2_S  16
+#define IDPF_RX_FLEX_DESC_ADV_HASH3_S  24
+
+static inline uint64_t
+idpf_splitq_rx_rss_offload(struct rte_mbuf *mb,
+  volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc)
+{
+   uint8_t status_err0_qw0;
+   uint64_t flags = 0;
+
+   status_err0_qw0 = rx_desc->status_err0_qw0;
+
+   if ((status_err0_qw0 & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RSS_VALID_S)) != 0) {
+   flags |= RTE_MBUF_F_RX_RSS_HASH;
+   mb->hash.rss = (rte_le_to_cpu_16(rx_desc->hash1) <<
+   IDPF_RX_FLEX_DESC_ADV_HASH1_S) |
+   ((uint32_t)(rx_desc->ff2_mirrid_hash2.hash2) <<
+IDPF_RX_FLEX_DESC_ADV_HASH2_S) |
+   ((uint32_t)(rx_desc->hash3) <<
+IDPF_RX_FLEX_DESC_ADV_HASH3_S);
+   }
+
+   return flags;
+}
+
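The hash assembly above stitches the 32-bit RSS hash together from three descriptor fields: hash1 fills bits 0-15, hash2 bits 16-23 and hash3 bits 24-31, matching the IDPF_RX_FLEX_DESC_ADV_HASH*_S shifts. A standalone sketch of the composition (the function name is made up):

```c
#include <assert.h>
#include <stdint.h>

/* Reassemble the 32-bit RSS hash from the three split descriptor fields:
 * hash1 -> bits 0-15, hash2 -> bits 16-23, hash3 -> bits 24-31. */
uint32_t
compose_rss_hash(uint16_t hash1, uint8_t hash2, uint8_t hash3)
{
	return (uint32_t)hash1 |
	       ((uint32_t)hash2 << 16) |
	       ((uint32_t)hash3 << 24);
}
```

For example, hash1 = 0x1234, hash2 = 0x56, hash3 = 0x78 composes to 0x78561234.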
 static void
 idpf_split_rx_bufq_refill(struct idpf_rx_queue *rx_bufq)
 {
@@ -1282,9 +1349,11 @@ idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t pktlen_gen_bufq_id;
struct idpf_rx_queue *rxq;
const uint32_t *ptype_tbl;
+   uint8_t status_err0_qw1;
struct rte_mb

[PATCH v18 16/18] net/idpf: add support for Tx offloading

2022-10-31 Thread beilei . xing
From: Junfeng Guo 

Add Tx offloading support:
 - support TSO for single queue model and split queue model.

Signed-off-by: Beilei Xing 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Junfeng Guo 
---
 doc/guides/nics/features/idpf.ini |   1 +
 drivers/net/idpf/idpf_ethdev.c|   4 +-
 drivers/net/idpf/idpf_rxtx.c  | 128 +-
 drivers/net/idpf/idpf_rxtx.h  |  22 +
 4 files changed, 152 insertions(+), 3 deletions(-)

diff --git a/doc/guides/nics/features/idpf.ini 
b/doc/guides/nics/features/idpf.ini
index 868571654f..d82b4aa0ff 100644
--- a/doc/guides/nics/features/idpf.ini
+++ b/doc/guides/nics/features/idpf.ini
@@ -8,6 +8,7 @@
 ;
 [Features]
 MTU update   = Y
+TSO  = P
 L3 checksum offload  = P
 L4 checksum offload  = P
 Linux= Y
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index a09f104425..084426260c 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -67,7 +67,9 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
RTE_ETH_RX_OFFLOAD_TCP_CKSUM|
RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 
-   dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+   dev_info->tx_offload_capa =
+   RTE_ETH_TX_OFFLOAD_TCP_TSO  |
+   RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
dev_info->default_txconf = (struct rte_eth_txconf) {
.tx_free_thresh = IDPF_DEFAULT_TX_FREE_THRESH,
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index f15e61a785..cc296d7ab1 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -1506,6 +1506,49 @@ idpf_split_tx_free(struct idpf_tx_queue *cq)
cq->tx_tail = next;
 }
 
+/* Check if the context descriptor is needed for TX offloading */
+static inline uint16_t
+idpf_calc_context_desc(uint64_t flags)
+{
+   if ((flags & RTE_MBUF_F_TX_TCP_SEG) != 0)
+   return 1;
+
+   return 0;
+}
+
+/* Set the TSO context descriptor. */
+static inline void
+idpf_set_splitq_tso_ctx(struct rte_mbuf *mbuf,
+   union idpf_tx_offload tx_offload,
+   volatile union idpf_flex_tx_ctx_desc *ctx_desc)
+{
+   uint16_t cmd_dtype;
+   uint32_t tso_len;
+   uint8_t hdr_len;
+
+   if (tx_offload.l4_len == 0) {
+   PMD_TX_LOG(DEBUG, "L4 length set to 0");
+   return;
+   }
+
+   hdr_len = tx_offload.l2_len +
+   tx_offload.l3_len +
+   tx_offload.l4_len;
+   cmd_dtype = IDPF_TX_DESC_DTYPE_FLEX_TSO_CTX |
+   IDPF_TX_FLEX_CTX_DESC_CMD_TSO;
+   tso_len = mbuf->pkt_len - hdr_len;
+
+   ctx_desc->tso.qw1.cmd_dtype = rte_cpu_to_le_16(cmd_dtype);
+   ctx_desc->tso.qw0.hdr_len = hdr_len;
+   ctx_desc->tso.qw0.mss_rt =
+   rte_cpu_to_le_16((uint16_t)mbuf->tso_segsz &
+IDPF_TXD_FLEX_CTX_MSS_RT_M);
+   ctx_desc->tso.qw0.flex_tlen =
+   rte_cpu_to_le_32(tso_len &
+IDPF_TXD_FLEX_CTX_MSS_RT_M);
+}
+
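The header/payload split computed by idpf_set_splitq_tso_ctx() above is simple arithmetic: the TSO header length is l2_len + l3_len + l4_len, and the segmentable payload is everything after it. A sketch of that computation (struct and function names here are illustrative, not part of the driver):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative result pair: header length and segmentable payload length. */
struct tso_lens {
	uint8_t hdr_len;
	uint32_t tso_len;
};

/* Split a packet into TSO header (L2 + L3 + L4 headers) and payload. */
struct tso_lens
split_tso_lens(uint32_t pkt_len, uint8_t l2_len, uint8_t l3_len, uint8_t l4_len)
{
	struct tso_lens l;

	l.hdr_len = (uint8_t)(l2_len + l3_len + l4_len);
	l.tso_len = pkt_len - l.hdr_len;	/* bytes to be segmented */
	return l;
}
```

For a 1514-byte TCP/IPv4 frame (14-byte Ethernet + 20-byte IP + 20-byte TCP headers), this gives a 54-byte header and 1460 bytes of segmentable payload.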
 uint16_t
 idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
  uint16_t nb_pkts)
@@ -1514,11 +1557,14 @@ idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
volatile struct idpf_flex_tx_sched_desc *txr;
volatile struct idpf_flex_tx_sched_desc *txd;
struct idpf_tx_entry *sw_ring;
+   union idpf_tx_offload tx_offload = {0};
struct idpf_tx_entry *txe, *txn;
uint16_t nb_used, tx_id, sw_id;
struct rte_mbuf *tx_pkt;
uint16_t nb_to_clean;
uint16_t nb_tx = 0;
+   uint64_t ol_flags;
+   uint16_t nb_ctx;
 
if (unlikely(txq == NULL) || unlikely(!txq->q_started))
return nb_tx;
@@ -1548,7 +1594,29 @@ idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 
if (txq->nb_free < tx_pkt->nb_segs)
break;
-   nb_used = tx_pkt->nb_segs;
+
+   ol_flags = tx_pkt->ol_flags;
+   tx_offload.l2_len = tx_pkt->l2_len;
+   tx_offload.l3_len = tx_pkt->l3_len;
+   tx_offload.l4_len = tx_pkt->l4_len;
+   tx_offload.tso_segsz = tx_pkt->tso_segsz;
+   /* Calculate the number of context descriptors needed. */
+   nb_ctx = idpf_calc_context_desc(ol_flags);
+   nb_used = tx_pkt->nb_segs + nb_ctx;
+
+   /* context descriptor */
+   if (nb_ctx != 0) {
+   volatile union idpf_flex_tx_ctx_desc *ctx_desc =
+   (volatile union idpf_flex_tx_ctx_desc *)&txr[tx_id];
+
+   if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) != 0)
+   idpf_set_splitq_tso_ctx(tx_pkt, tx_offload,
+   

[PATCH v18 18/18] net/idpf: add support for timestamp offload

2022-10-31 Thread beilei . xing
From: Junfeng Guo 

Add support for timestamp offload.

Signed-off-by: Wenjing Qiao 
Signed-off-by: Junfeng Guo 
---
 doc/guides/nics/features/idpf.ini |  1 +
 drivers/net/idpf/idpf_ethdev.c|  5 +-
 drivers/net/idpf/idpf_ethdev.h|  3 ++
 drivers/net/idpf/idpf_rxtx.c  | 65 ++
 drivers/net/idpf/idpf_rxtx.h  | 90 +++
 5 files changed, 163 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/features/idpf.ini 
b/doc/guides/nics/features/idpf.ini
index d82b4aa0ff..099fd7f216 100644
--- a/doc/guides/nics/features/idpf.ini
+++ b/doc/guides/nics/features/idpf.ini
@@ -11,6 +11,7 @@ MTU update   = Y
 TSO  = P
 L3 checksum offload  = P
 L4 checksum offload  = P
+Timestamp offload= P
 Linux= Y
 x86-32   = Y
 x86-64   = Y
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index cd4ebcc2c6..50aac65daf 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -22,6 +22,8 @@ rte_spinlock_t idpf_adapter_lock;
 struct idpf_adapter_list idpf_adapter_list;
 bool idpf_adapter_list_init;
 
+uint64_t idpf_timestamp_dynflag;
+
 static const char * const idpf_valid_args[] = {
IDPF_TX_SINGLE_Q,
IDPF_RX_SINGLE_Q,
@@ -65,7 +67,8 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
RTE_ETH_RX_OFFLOAD_IPV4_CKSUM   |
RTE_ETH_RX_OFFLOAD_UDP_CKSUM|
RTE_ETH_RX_OFFLOAD_TCP_CKSUM|
-   RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+   RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+   RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
dev_info->tx_offload_capa =
RTE_ETH_TX_OFFLOAD_TCP_TSO  |
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 7d54e5db60..ccdf4abe40 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -167,6 +167,9 @@ struct idpf_adapter {
bool tx_vec_allowed;
bool rx_use_avx512;
bool tx_use_avx512;
+
+   /* For PTP */
+   uint64_t time_hw;
 };
 
 TAILQ_HEAD(idpf_adapter_list, idpf_adapter);
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 9e20f2b9d3..bafa007faf 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -10,6 +10,8 @@
 #include "idpf_rxtx.h"
 #include "idpf_rxtx_vec_common.h"
 
+static int idpf_timestamp_dynfield_offset = -1;
+
 static int
 check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
 {
@@ -900,6 +902,24 @@ idpf_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
return idpf_tx_split_queue_setup(dev, queue_idx, nb_desc,
 socket_id, tx_conf);
 }
+
+static int
+idpf_register_ts_mbuf(struct idpf_rx_queue *rxq)
+{
+   int err;
+
+   if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0) {
+   /* Register mbuf field and flag for Rx timestamp */
+   err = rte_mbuf_dyn_rx_timestamp_register(&idpf_timestamp_dynfield_offset,
+&idpf_timestamp_dynflag);
+   if (err != 0) {
+   PMD_DRV_LOG(ERR,
+   "Cannot register mbuf field/flag for timestamp");
+   return -EINVAL;
+   }
+   }
+   return 0;
+}
+
 static int
 idpf_alloc_single_rxq_mbufs(struct idpf_rx_queue *rxq)
 {
@@ -993,6 +1013,13 @@ idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
return -EINVAL;
}
 
+   err = idpf_register_ts_mbuf(rxq);
+   if (err != 0) {
+   PMD_DRV_LOG(ERR, "fail to register timestamp mbuf %u",
+   rx_queue_id);
+   return -EIO;
+   }
+
if (rxq->bufq1 == NULL) {
/* Single queue */
err = idpf_alloc_single_rxq_mbufs(rxq);
@@ -1354,6 +1381,7 @@ idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
struct idpf_rx_queue *rxq;
const uint32_t *ptype_tbl;
uint8_t status_err0_qw1;
+   struct idpf_adapter *ad;
struct rte_mbuf *rxm;
uint16_t rx_id_bufq1;
uint16_t rx_id_bufq2;
@@ -1363,9 +1391,11 @@ idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t gen_id;
uint16_t rx_id;
uint16_t nb_rx;
+   uint64_t ts_ns;
 
nb_rx = 0;
rxq = rx_queue;
+   ad = rxq->adapter;
 
if (unlikely(rxq == NULL) || unlikely(!rxq->q_started))
return nb_rx;
@@ -1376,6 +1406,9 @@ idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rx_desc_ring = rxq->rx_ring;
ptype_tbl = rxq->adapter->ptype_tbl;
 
+   if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
+   rxq->hw_register

[PATCH v18 17/18] net/idpf: add AVX512 data path for single queue model

2022-10-31 Thread beilei . xing
From: Junfeng Guo 

Add support of AVX512 vector data path for single queue model.

Signed-off-by: Wenjun Wu 
Signed-off-by: Junfeng Guo 
---
 doc/guides/nics/idpf.rst|  19 +
 drivers/net/idpf/idpf_ethdev.c  |   3 +-
 drivers/net/idpf/idpf_ethdev.h  |   5 +
 drivers/net/idpf/idpf_rxtx.c| 145 
 drivers/net/idpf/idpf_rxtx.h|  21 +
 drivers/net/idpf/idpf_rxtx_vec_avx512.c | 857 
 drivers/net/idpf/idpf_rxtx_vec_common.h | 100 +++
 drivers/net/idpf/meson.build|  28 +
 8 files changed, 1177 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/idpf/idpf_rxtx_vec_avx512.c
 create mode 100644 drivers/net/idpf/idpf_rxtx_vec_common.h

diff --git a/doc/guides/nics/idpf.rst b/doc/guides/nics/idpf.rst
index c1001d5d0c..3039c61748 100644
--- a/doc/guides/nics/idpf.rst
+++ b/doc/guides/nics/idpf.rst
@@ -64,3 +64,22 @@ Refer to the document :ref:`compiling and testing a PMD for a NIC
dev_info->tx_offload_capa =
RTE_ETH_TX_OFFLOAD_TCP_TSO  |
-   RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+   RTE_ETH_TX_OFFLOAD_MULTI_SEGS   |
+   RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
dev_info->default_txconf = (struct rte_eth_txconf) {
.tx_free_thresh = IDPF_DEFAULT_TX_FREE_THRESH,
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 8d0804f603..7d54e5db60 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -162,6 +162,11 @@ struct idpf_adapter {
uint32_t max_txq_per_msg;
 
uint32_t ptype_tbl[IDPF_MAX_PKT_TYPE] __rte_cache_min_aligned;
+
+   bool rx_vec_allowed;
+   bool tx_vec_allowed;
+   bool rx_use_avx512;
+   bool tx_use_avx512;
 };
 
 TAILQ_HEAD(idpf_adapter_list, idpf_adapter);
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index cc296d7ab1..9e20f2b9d3 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -4,9 +4,11 @@
 
 #include 
 #include 
+#include 
 
 #include "idpf_ethdev.h"
 #include "idpf_rxtx.h"
+#include "idpf_rxtx_vec_common.h"
 
 static int
 check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
@@ -252,6 +254,8 @@ reset_single_rx_queue(struct idpf_rx_queue *rxq)
 
rxq->pkt_first_seg = NULL;
rxq->pkt_last_seg = NULL;
+   rxq->rxrearm_start = 0;
+   rxq->rxrearm_nb = 0;
 }
 
 static void
@@ -2073,25 +2077,166 @@ idpf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
return i;
 }
 
+static void __rte_cold
+release_rxq_mbufs_vec(struct idpf_rx_queue *rxq)
+{
+   const uint16_t mask = rxq->nb_rx_desc - 1;
+   uint16_t i;
+
+   if (rxq->sw_ring == NULL || rxq->rxrearm_nb >= rxq->nb_rx_desc)
+   return;
+
+   /* free all mbufs that are valid in the ring */
+   if (rxq->rxrearm_nb == 0) {
+   for (i = 0; i < rxq->nb_rx_desc; i++) {
+   if (rxq->sw_ring[i] != NULL)
+   rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+   }
+   } else {
+   for (i = rxq->rx_tail; i != rxq->rxrearm_start; i = (i + 1) & mask) {
+   if (rxq->sw_ring[i] != NULL)
+   rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+   }
+   }
+
+   rxq->rxrearm_nb = rxq->nb_rx_desc;
+
+   /* set all entries to NULL */
+   memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
+}
+
+static const struct idpf_rxq_ops def_singleq_rx_ops_vec = {
+   .release_mbufs = release_rxq_mbufs_vec,
+};
+
+static inline int
+idpf_singleq_rx_vec_setup_default(struct idpf_rx_queue *rxq)
+{
+   uintptr_t p;
+   struct rte_mbuf mb_def = { .buf_addr = 0 }; /* zeroed mbuf */
+
+   mb_def.nb_segs = 1;
+   mb_def.data_off = RTE_PKTMBUF_HEADROOM;
+   mb_def.port = rxq->port_id;
+   rte_mbuf_refcnt_set(&mb_def, 1);
+
+   /* prevent compiler reordering: rearm_data covers previous fields */
+   rte_compiler_barrier();
+   p = (uintptr_t)&mb_def.rearm_data;
+   rxq->mbuf_initializer = *(uint64_t *)p;
+   return 0;
+}
+
+int __rte_cold
+idpf_singleq_rx_vec_setup(struct idpf_rx_queue *rxq)
+{
+   rxq->ops = &def_singleq_rx_ops_vec;
+   return idpf_singleq_rx_vec_setup_default(rxq);
+}
+
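The mbuf_initializer trick set up in idpf_singleq_rx_vec_setup_default() above builds a template mbuf once and caches the 8 bytes covering the hot rearm fields as a single uint64_t, so the vector Rx path can reset an mbuf with one 64-bit store instead of several field writes. An illustrative model of the pattern (the struct layout here is made up for the sketch, not the real rte_mbuf rearm_data layout):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Made-up 8-byte group of hot per-mbuf fields, standing in for the
 * rearm_data region of struct rte_mbuf. */
struct fake_rearm {
	uint16_t data_off;
	uint16_t refcnt;
	uint16_t nb_segs;
	uint16_t port;
};

/* Build the template once at queue setup and snapshot it as a uint64_t. */
uint64_t
build_initializer(uint16_t data_off, uint16_t port)
{
	struct fake_rearm tmpl = {
		.data_off = data_off, .refcnt = 1, .nb_segs = 1, .port = port,
	};
	uint64_t v;

	memcpy(&v, &tmpl, sizeof(v));	/* cache the 8-byte template */
	return v;
}

/* Fast path: re-arm an mbuf's hot fields with a single 64-bit store. */
void
rearm_mbuf(struct fake_rearm *m, uint64_t initializer)
{
	memcpy(m, &initializer, sizeof(*m));
}
```

The design point is that the per-packet cost drops to one store of a precomputed constant, which is what makes the vector rearm loop cheap.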
 void
 idpf_set_rx_function(struct rte_eth_dev *dev)
 {
struct idpf_vport *vport = dev->data->dev_private;
+#ifdef RTE_ARCH_X86
+   struct idpf_adapter *ad = vport->adapter;
+   struct idpf_rx_queue *rxq;
+   int i;
+
+   if (idpf_rx_vec_dev_check_default(dev) == IDPF_VECTOR_PATH &&
+   rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
+   ad->rx_vec_allowed = true;
+
+   if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
+#ifdef CC_AVX512_SUPPORT
+   if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
+  

meson test link bonding failed//RE: [PATCH] net/bonding: fix descriptor limit reporting

2022-10-31 Thread Li, WeiyuanX
Hi Ivan,

This patch was merged into dpdk 22.11.0-rc1. When we execute the meson test 
cases link_bonding_autotest, link_bonding_rssconf_autotest and 
link_bonding_mode4_autotest, the tests fail.
Could you please have a look at it? I have also submitted a Bugzilla ticket: 
https://bugs.dpdk.org/show_bug.cgi?id=1118

Regards,
Li, Weiyuan

> -Original Message-
> From: Ivan Malov 
> Sent: Sunday, September 11, 2022 8:19 PM
> To: dev@dpdk.org
> Cc: sta...@dpdk.org; Andrew Rybchenko
> ; Chas Williams ; Min
> Hu (Connor) ; Hari Kumar Vemula
> 
> Subject: [PATCH] net/bonding: fix descriptor limit reporting
> 
> Commit 5be3b40fea60 ("net/bonding: fix values of descriptor limits") breaks
> reporting of "nb_min" and "nb_align" values obtained from back-end
> devices' descriptor limits. This means that work done by
> eth_bond_slave_inherit_desc_lim_first() as well as
> eth_bond_slave_inherit_desc_lim_next() gets dismissed.
> 
> Revert the offending commit and use proper workaround for the test case
> mentioned in the said commit.
> 
> Meanwhile, the test case itself might be poorly constructed.
> It tries to run a bond with no back-end devices attached, but, according to 
> [1]
> ("Requirements / Limitations"), at least one back-end device must be
> attached.
> 
> [1] doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
> 
> Fixes: 5be3b40fea60 ("net/bonding: fix values of descriptor limits")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Ivan Malov 
> Reviewed-by: Andrew Rybchenko 
> ---


Renaming DTS

2022-10-31 Thread Juraj Linkeš
Hello DPDK devs,

As many of you are already aware, we're moving DTS (the testing framework along 
with tests) to DPDK. As part of this effort, we're doing a major 
refactoring/rewrite of DTS. What I feel is missing is a discussion of how we 
should name the rewritten DTS - not only are the changes big enough to warrant 
one, but it's also being moved into another community, and I think that 
community should have a say in how we name it. Of course, that discussion could 
have happened as part of the review process of the first patch [0], but that 
usually focuses on the code, and big-picture issues like this can get 
overlooked.

With that, here are some proposals:

* Change the name altogether, to something like

o   test_harness

o   test_framework

o   test(ing)

o   integration_test

* Use the existing DTS initialism, but change the meaning, possibly to

o   DPDK test system

o   DPDK test solution

o   DPDK test software

o   DPDK test scenarios

o   DPDK test framework/harness and suites. This is a stretch, as it adds a new 
word to the initialism

* Use a new initialism

o   DTFS, DTH - DPDK test framework and suites, DPDK test harness

* Use two directories, one for framework (libs) and the other for tests

* And of course, stay with DTS and its original meaning - DPDK test 
suite

DPDK test suite doesn't fully capture what DTS is (it isn't just tests, but 
also the framework (libs) that runs them). That is a minor point, but it's 
possible there's a better name. It's likely that it's not worth changing the 
name (or the meaning of the initialism) since people are familiar with it. If 
the topic doesn't get traction, we'll stay with the DPDK test suite. Or maybe 
change it to DPDK test suites, as there are many different test suites.

I like using DTS, but maybe changing the initialism to something that means 
tests + libs. I don't think my suggestions above capture quite that, but there 
could be something else that does. I actually like DPDK test suites the best.

Let us know whether you'd like to see a new directory named 'dts' in your 
repository or something completely different!

Thanks,
Juraj

[0] http://patches.dpdk.org/project/dpdk/list/?series=25207


[PATCH] app/testeventdev: fix limit names in error message

2022-10-31 Thread Volodymyr Fialko
Swap min and max values to match their labels.

Fixes: 2eaa37b8663 ("app/eventdev: add vector mode in pipeline test")

Signed-off-by: Volodymyr Fialko 
---
 app/test-eventdev/test_pipeline_common.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/app/test-eventdev/test_pipeline_common.c 
b/app/test-eventdev/test_pipeline_common.c
index ab39046ce7..5229d74fe0 100644
--- a/app/test-eventdev/test_pipeline_common.c
+++ b/app/test-eventdev/test_pipeline_common.c
@@ -534,8 +534,8 @@ pipeline_event_rx_adapter_setup(struct evt_options *opt, 
uint8_t stride,
if (opt->vector_size < limits.min_sz ||
opt->vector_size > limits.max_sz) {
evt_err("Vector size [%d] not within limits max[%d] min[%d]",
-   opt->vector_size, limits.min_sz,
-   limits.max_sz);
+   opt->vector_size, limits.max_sz,
+   limits.min_sz);
return -EINVAL;
}
 
-- 
2.25.1



[PATCH] drivers: remove unused build variable

2022-10-31 Thread Thomas Monjalon
The variable fmt_name was removed in DPDK 21.02-rc1.
Then some drivers were integrated in the same year still using this variable.
Of course it has no effect, so it is cleaned up.

Fixes: 832a4cf1d11d ("compress/mlx5: introduce PMD")
Fixes: a7c86884f150 ("crypto/mlx5: introduce Mellanox crypto driver")
Fixes: 5e7596ba7cb3 ("vdpa/sfc: introduce Xilinx vDPA driver")
Cc: sta...@dpdk.org

Signed-off-by: Thomas Monjalon 
---
 drivers/compress/mlx5/meson.build | 1 -
 drivers/crypto/mlx5/meson.build   | 1 -
 drivers/vdpa/sfc/meson.build  | 1 -
 3 files changed, 3 deletions(-)

diff --git a/drivers/compress/mlx5/meson.build 
b/drivers/compress/mlx5/meson.build
index 49ce3aff46..9e947244ee 100644
--- a/drivers/compress/mlx5/meson.build
+++ b/drivers/compress/mlx5/meson.build
@@ -7,7 +7,6 @@ if not is_linux
 subdir_done()
 endif
 
-fmt_name = 'mlx5_compress'
 deps += ['common_mlx5', 'eal', 'compressdev']
 if not ('mlx5' in common_drivers)
 # avoid referencing undefined variables from common/mlx5
diff --git a/drivers/crypto/mlx5/meson.build b/drivers/crypto/mlx5/meson.build
index 7521c4c671..20ee69636f 100644
--- a/drivers/crypto/mlx5/meson.build
+++ b/drivers/crypto/mlx5/meson.build
@@ -7,7 +7,6 @@ if not (is_linux or is_windows)
 subdir_done()
 endif
 
-fmt_name = 'mlx5_crypto'
 deps += ['common_mlx5', 'eal', 'cryptodev']
 if not ('mlx5' in common_drivers)
 # avoid referencing undefined variables from common/mlx5
diff --git a/drivers/vdpa/sfc/meson.build b/drivers/vdpa/sfc/meson.build
index b55f9cd691..933f3f18f3 100644
--- a/drivers/vdpa/sfc/meson.build
+++ b/drivers/vdpa/sfc/meson.build
@@ -8,7 +8,6 @@ if ((arch_subdir != 'x86' or not dpdk_conf.get('RTE_ARCH_64')) and
 reason = 'only supported on x86_64 and aarch64'
 endif
 
-fmt_name = 'sfc_vdpa'
 extra_flags = []
 
 foreach flag: extra_flags
-- 
2.36.1



RE: [PATCH] maintainers: change maintainer for event ethdev Rx/Tx adapters

2022-10-31 Thread Naga Harish K, S V


> -Original Message-
> From: Thomas Monjalon 
> Sent: Sunday, October 30, 2022 2:30 PM
> To: Naga Harish K, S V ; Jayatheerthan, Jay
> 
> Cc: dev@dpdk.org; jerinjac...@gmail.com; jer...@marvell.com;
> dev@dpdk.org; Mcnamara, John 
> Subject: Re: [PATCH] maintainers: change maintainer for event ethdev Rx/Tx
> adapters
> 
> 21/10/2022 13:35, Jay Jayatheerthan:
> > Harish is the new maintainer of Rx/Tx adapters due to role change of Jay
> >
> > Signed-off-by: Jay Jayatheerthan 
> 
> Please could we have an approval from the new maintainer?
> An ack would make things clear and accepted.

Acked-by: Naga Harish K S V 

> 
> 



[PATCH v2 2/3] mempool: include non-DPDK threads in statistics

2022-10-31 Thread Morten Brørup
Offset the stats array index by one, and count non-DPDK threads at index
zero.

This patch provides two benefits:
* Non-DPDK threads are also included in the statistics.
* A conditional in the fast path is removed. Static branch prediction was
  correct, so the performance improvement is negligible.

v2:
* New. No v1 of this patch in the series.

Suggested-by: Stephen Hemminger 
Signed-off-by: Morten Brørup 
---
 lib/mempool/rte_mempool.c |  2 +-
 lib/mempool/rte_mempool.h | 12 ++--
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 62d1ce764e..e6208125e0 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -1272,7 +1272,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 #ifdef RTE_LIBRTE_MEMPOOL_STATS
rte_mempool_ops_get_info(mp, &info);
memset(&sum, 0, sizeof(sum));
-   for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+   for (lcore_id = 0; lcore_id < RTE_MAX_LCORE + 1; lcore_id++) {
sum.put_bulk += mp->stats[lcore_id].put_bulk;
sum.put_objs += mp->stats[lcore_id].put_objs;
sum.put_common_pool_bulk += mp->stats[lcore_id].put_common_pool_bulk;
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 9c4bf5549f..16e7e62e3c 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -238,8 +238,11 @@ struct rte_mempool {
struct rte_mempool_memhdr_list mem_list; /**< List of memory chunks */
 
 #ifdef RTE_LIBRTE_MEMPOOL_STATS
-   /** Per-lcore statistics. */
-   struct rte_mempool_debug_stats stats[RTE_MAX_LCORE];
+   /** Per-lcore statistics.
+*
+* Offset by one, to include non-DPDK threads.
+*/
+   struct rte_mempool_debug_stats stats[RTE_MAX_LCORE + 1];
 #endif
 }  __rte_cache_aligned;
 
@@ -304,10 +307,7 @@ struct rte_mempool {
  */
 #ifdef RTE_LIBRTE_MEMPOOL_STATS
 #define RTE_MEMPOOL_STAT_ADD(mp, name, n) do {  \
-   unsigned __lcore_id = rte_lcore_id();   \
-   if (__lcore_id < RTE_MAX_LCORE) {   \
-   mp->stats[__lcore_id].name += n;\
-   }   \
+   (mp)->stats[rte_lcore_id() + 1].name += n;  \
} while (0)
 #else
 #define RTE_MEMPOOL_STAT_ADD(mp, name, n) do {} while (0)
-- 
2.17.1

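The removed bounds check works because of unsigned wraparound: rte_lcore_id() returns LCORE_ID_ANY (UINT32_MAX) on unregistered non-DPDK threads, so adding one wraps that value to array index 0, while real lcores 0..RTE_MAX_LCORE-1 land on indexes 1..RTE_MAX_LCORE. A small model of the index mapping (names here are stand-ins, not the EAL API):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for LCORE_ID_ANY, which rte_lcore_id() returns on threads
 * not registered with the EAL. */
#define MODEL_LCORE_ID_ANY UINT32_MAX

/* The offset-by-one stats index from the patch: unsigned arithmetic is
 * well-defined, so UINT32_MAX + 1 wraps to 0, making index 0 the shared
 * slot for all non-DPDK threads -- no conditional needed. */
uint32_t
stats_index(uint32_t lcore_id)
{
	return lcore_id + 1;
}
```

This is why the stats array grows by one element: the branch disappears from the fast path, and non-DPDK threads all accumulate into slot 0 (unsynchronized, like the per-lcore slots).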


[PATCH v2 1/3] mempool: split statistics from debug

2022-10-31 Thread Morten Brørup
Split statistics from debug, to make mempool statistics available without
the performance cost of continuously validating the cookies in the mempool
elements.

mempool_perf_autotest shows the following change in rate_persec.

When enabling mempool debug without this patch:
-28.1 % and -74.0 %, respectively without and with cache.

When enabling mempool stats (but not debug) with this patch:
-5.8 % and -21.2 %, respectively without and with cache.

v2:
* Fix checkpatch warning:
  Use C style comments in rte_include.h, not C++ style.
* Do not rename the rte_mempool_debug_stats structure.

Signed-off-by: Morten Brørup 
---
 config/rte_config.h   | 2 ++
 lib/mempool/rte_mempool.c | 6 +++---
 lib/mempool/rte_mempool.h | 6 +++---
 3 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/config/rte_config.h b/config/rte_config.h
index ae56a86394..3c4876d434 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -47,6 +47,8 @@
 
 /* mempool defines */
 #define RTE_MEMPOOL_CACHE_MAX_SIZE 512
+/* RTE_LIBRTE_MEMPOOL_STATS is not set */
+/* RTE_LIBRTE_MEMPOOL_DEBUG is not set */
 
 /* mbuf defines */
 #define RTE_MBUF_DEFAULT_MEMPOOL_OPS "ring_mp_mc"
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 21c94a2b9f..62d1ce764e 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -818,7 +818,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
  RTE_CACHE_LINE_MASK) != 0);
RTE_BUILD_BUG_ON((sizeof(struct rte_mempool_cache) &
  RTE_CACHE_LINE_MASK) != 0);
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+#ifdef RTE_LIBRTE_MEMPOOL_STATS
RTE_BUILD_BUG_ON((sizeof(struct rte_mempool_debug_stats) &
  RTE_CACHE_LINE_MASK) != 0);
RTE_BUILD_BUG_ON((offsetof(struct rte_mempool, stats) &
@@ -1221,7 +1221,7 @@ rte_mempool_audit(struct rte_mempool *mp)
 void
 rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 {
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+#ifdef RTE_LIBRTE_MEMPOOL_STATS
struct rte_mempool_info info;
struct rte_mempool_debug_stats sum;
unsigned lcore_id;
@@ -1269,7 +1269,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
fprintf(f, "  common_pool_count=%u\n", common_count);
 
/* sum and dump statistics */
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+#ifdef RTE_LIBRTE_MEMPOOL_STATS
rte_mempool_ops_get_info(mp, &info);
memset(&sum, 0, sizeof(sum));
for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 3725a72951..9c4bf5549f 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -56,7 +56,7 @@ extern "C" {
#define RTE_MEMPOOL_HEADER_COOKIE2  0xf2eef2eedadd2e55ULL /**< Header cookie. */
#define RTE_MEMPOOL_TRAILER_COOKIE  0xadd2e55badbadbadULL /**< Trailer cookie. */
 
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+#ifdef RTE_LIBRTE_MEMPOOL_STATS
 /**
  * A structure that stores the mempool statistics (per-lcore).
  * Note: Cache stats (put_cache_bulk/objs, get_cache_bulk/objs) are not
@@ -237,7 +237,7 @@ struct rte_mempool {
uint32_t nb_mem_chunks;  /**< Number of memory chunks */
struct rte_mempool_memhdr_list mem_list; /**< List of memory chunks */
 
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+#ifdef RTE_LIBRTE_MEMPOOL_STATS
/** Per-lcore statistics. */
struct rte_mempool_debug_stats stats[RTE_MAX_LCORE];
 #endif
@@ -302,7 +302,7 @@ struct rte_mempool {
  * @param n
  *   Number to add to the object-oriented statistics.
  */
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+#ifdef RTE_LIBRTE_MEMPOOL_STATS
 #define RTE_MEMPOOL_STAT_ADD(mp, name, n) do {  \
unsigned __lcore_id = rte_lcore_id();   \
if (__lcore_id < RTE_MAX_LCORE) {   \
-- 
2.17.1



[PATCH v2 3/3] mempool: use cache for frequently updated statistics

2022-10-31 Thread Morten Brørup
When built with statistics enabled (RTE_LIBRTE_MEMPOOL_STATS defined), the
performance of mempools with caches is improved as follows.

When accessing objects in the mempool, either the put_bulk and put_objs or
the get_success_bulk and get_success_objs statistics counters are likely
to be incremented.

By adding an alternative set of these counters to the mempool cache
structure, accessing the dedicated statistics structure is avoided in the
likely cases where these counters are incremented.

The trick here is that the cache line holding the mempool cache structure
is accessed anyway, in order to access the 'len' or 'flushthresh' fields.
Updating some statistics counters in the same cache line has lower
performance cost than accessing the statistics counters in the dedicated
statistics structure, which resides in another cache line.

mempool_perf_autotest with this patch shows the following change in
rate_persec.

Compared to only splitting statistics from debug:
+1.5 % and +14.4 %, respectively without and with cache.

Compared to not enabling mempool stats:
-4.4 % and -9.9 %, respectively without and with cache.

v2:
* Move the statistics counters into a stats structure.

Signed-off-by: Morten Brørup 
---
 lib/mempool/rte_mempool.c |  9 +
 lib/mempool/rte_mempool.h | 73 ---
 2 files changed, 69 insertions(+), 13 deletions(-)

diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index e6208125e0..a18e39af04 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -1286,6 +1286,15 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
sum.get_success_blks += mp->stats[lcore_id].get_success_blks;
sum.get_fail_blks += mp->stats[lcore_id].get_fail_blks;
}
+   if (mp->cache_size != 0) {
+   /* Add the statistics stored in the mempool caches. */
+   for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+   sum.put_bulk += 
mp->local_cache[lcore_id].stats.put_bulk;
+   sum.put_objs += 
mp->local_cache[lcore_id].stats.put_objs;
+   sum.get_success_bulk += 
mp->local_cache[lcore_id].stats.get_success_bulk;
+   sum.get_success_objs += 
mp->local_cache[lcore_id].stats.get_success_objs;
+   }
+   }
fprintf(f, "  stats:\n");
fprintf(f, "put_bulk=%"PRIu64"\n", sum.put_bulk);
fprintf(f, "put_objs=%"PRIu64"\n", sum.put_objs);
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 16e7e62e3c..5806e75609 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -86,6 +86,21 @@ struct rte_mempool_cache {
uint32_t size;/**< Size of the cache */
uint32_t flushthresh; /**< Threshold before we flush excess elements */
uint32_t len; /**< Current cache count */
+   uint32_t unused0;
+#ifdef RTE_LIBRTE_MEMPOOL_STATS
+   /*
+* Alternative location for the most frequently updated mempool 
statistics (per-lcore),
+* providing faster update access when using a mempool cache.
+*/
+   struct {
+   uint64_t put_bulk;  /**< Number of puts. */
+   uint64_t put_objs;  /**< Number of objects successfully 
put. */
+   uint64_t get_success_bulk;  /**< Successful allocation number. 
*/
+   uint64_t get_success_objs;  /**< Objects successfully 
allocated. */
+   } stats;/**< Statistics */
+#else
+   uint64_t unused1[4];
+#endif
/**
 * Cache objects
 *
@@ -296,14 +311,14 @@ struct rte_mempool {
| RTE_MEMPOOL_F_NO_IOVA_CONTIG \
)
 /**
- * @internal When debug is enabled, store some statistics.
+ * @internal When stats is enabled, store some statistics.
  *
  * @param mp
  *   Pointer to the memory pool.
  * @param name
  *   Name of the statistics field to increment in the memory pool.
  * @param n
- *   Number to add to the object-oriented statistics.
+ *   Number to add to the statistics.
  */
 #ifdef RTE_LIBRTE_MEMPOOL_STATS
 #define RTE_MEMPOOL_STAT_ADD(mp, name, n) do {  \
@@ -312,6 +327,23 @@ struct rte_mempool {
 #else
 #define RTE_MEMPOOL_STAT_ADD(mp, name, n) do {} while (0)
 #endif
+/**
+ * @internal When stats is enabled, store some statistics.
+ *
+ * @param cache
+ *   Pointer to the memory pool cache.
+ * @param name
+ *   Name of the statistics field to increment in the memory pool cache.
+ * @param n
+ *   Number to add to the statistics.
+ */
+#ifdef RTE_LIBRTE_MEMPOOL_STATS
+#define RTE_MEMPOOL_CACHE_STAT_ADD(cache, name, n) do { \
+   (cache)->stats.name += n;   \
+   } while (0)
+#else
+#define RTE_MEMPOOL_CACHE_STAT_ADD(cache, name, n) do {} while (0)
+#endif
 
 /**
  * @internal Calculate the size of the mempool header.
@@ -1327,13 +1359,17 @@ rte_mempool_do_generic_put(struct rt

[PATCH v4] doc: update Linux core isolation guide

2022-10-31 Thread pbhagavatula
From: Pavan Nikhilesh 

Update Linux core isolation guide to include isolation from
timers, RCU processing and IRQs.

Signed-off-by: Pavan Nikhilesh 
Acked-by: Jerin Jacob 
---
 v4 Changes:
 - Give names to Links to make them clickable. (Thomas)
 - Fix link formatting.

 v3 Changes:
 - Add additional information links for Cgroups.

 v2 Changes:
 - Add references to the parameters used.
 - Add note about Linux cgroups.

 doc/guides/linux_gsg/enable_func.rst | 32 +++-
 1 file changed, 27 insertions(+), 5 deletions(-)

diff --git a/doc/guides/linux_gsg/enable_func.rst 
b/doc/guides/linux_gsg/enable_func.rst
index b15bfb2f9f..b544d2e50b 100644
--- a/doc/guides/linux_gsg/enable_func.rst
+++ b/doc/guides/linux_gsg/enable_func.rst
@@ -126,16 +126,38 @@ Using Linux Core Isolation to Reduce Context Switches
 -

 While the threads used by a DPDK application are pinned to logical cores on 
the system,
-it is possible for the Linux scheduler to run other tasks on those cores also.
-To help prevent additional workloads from running on those cores,
-it is possible to use the ``isolcpus`` Linux kernel parameter to isolate them 
from the general Linux scheduler.
+it is possible for the Linux scheduler to run other tasks on those cores.
+To help prevent additional workloads, timers, rcu processing and IRQs from 
running on those cores, it is possible to use
+the Linux kernel parameters ``isolcpus``, ``nohz_full``, ``irqaffinity`` to 
isolate them from the general Linux scheduler tasks.

-For example, if DPDK applications are to run on logical cores 2, 4 and 6,
+For example, if a given CPU has 0-7 cores and DPDK applications are to run on 
logical cores 2, 4 and 6,
 the following should be added to the kernel parameter list:

 .. code-block:: console

-isolcpus=2,4,6
+isolcpus=2,4,6 nohz_full=2,4,6 irqaffinity=0,1,3,5,7
+
+.. Note::
+
+ | More detailed information about the above parameters can be found 
at:
+ | `NO_HZ `_
+ | `IRQs `_
+ | `Kernel parameters 
`_
+
+
+For more fine grained control over resource management and performance tuning 
one can look
+into ``Linux cgroups``.
+
+Cpusets using cgroups:
+   `CPUSETS 
`_
+
+Systemd (CPUAffinity):
+   `CPUAffinity 
`_
+
+Also, see:
+   | `CPUSET man pages `_
+   | `CPU isolation example 
`_
+   | `Systemd core isolation 
`_

 .. _High_Precision_Event_Timer:

--
2.25.1
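After rebooting with the parameters from the guide above, isolation can be verified from standard kernel interfaces. This is a hedged, boot-configuration-dependent check (the sysfs paths are standard, but the expected values assume the exact example parameters from the patch):

```shell
# Verify which cores the kernel considers isolated / tick-free.
cat /sys/devices/system/cpu/isolated    # expect: 2,4,6
cat /sys/devices/system/cpu/nohz_full   # expect: 2,4,6
# Confirm the parameters actually reached the kernel command line.
cat /proc/cmdline
```

If the files are empty, the parameters were not applied (e.g. the bootloader configuration was not regenerated before rebooting).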



Flow Bifurcation of splitting the traffic between kernel space and user space (DPDK)

2022-10-31 Thread Ramakrishnan G
Dear Aaron and DPDK Dev Team,

Thanks for the article about traffic flow bifurcation
between kernel space and user space (DPDK) (3. Flow Bifurcation How-to
Guide — Data Plane Development Kit 16.07.2 documentation (dpdk.org)
)

We are trying to test this functionality by sending only the SSH (port 22)
traffic to the kernel and all other traffic to user space (DPDK), assigning
the same IP to both virtual interfaces (one virtual interface owned by
DPDK and the other owned by the kernel).

Using the igb driver with the max_vfs setting, we were able to create a
virtual link and map it to user space (DPDK), and another link into kernel
space. We assigned different IP addresses, and both were reachable from
another host.

But when we are trying to configure the flow-type for port 22

Ubuntu# ethtool -K eth9 ntuple on
Ubuntu## ethtool -N eth9 flow-type ip4 dst-port 22 action 0
rmgr: Cannot insert RX class rule: Invalid argument
Ubuntu## ethtool -N eth9 flow-type ip4 dst-port 22 action 1
rmgr: Cannot insert RX class rule: Invalid argument
Ubuntu## ethtool -N eth9 flow-type ip4 dst-port 22 action 2
rmgr: Cannot insert RX class rule: Invalid argument
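One likely cause of the "Invalid argument" errors above: in ethtool's n-tuple syntax, `dst-port` is only a valid field for L4-specific flow types such as `tcp4` or `udp4`; the generic `ip4` flow-type has no port fields. A hedged variant to try (this assumes the NIC/driver supports TCP n-tuple filters at all, which for igb may still require the driver patch mentioned below):

```shell
# Hypothetical: match TCP destination port 22 and steer it to queue 0.
ethtool -N eth9 flow-type tcp4 dst-port 22 action 0
```

If the driver still rejects the rule with a correct flow-type, the limitation is in the driver's filter support rather than in the command syntax.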

We tried to apply the patch that was given in the following link,
(
https://patchwork.ozlabs.org/project/intel-wired-lan/patch/1451456399-13353-1-git-send-email-gangfeng.hu...@ni.com/#1236040
)

But we couldn't apply the patch to any of the recent igb drivers, so we
tried patching the 2016 igb driver instead.

Please help us by sharing information on where we can apply the patch to
the igb driver on Ubuntu.

Thanks,
Ram


[PATCH] net/bonding: set initial value of descriptor count alignment

2022-10-31 Thread Ivan Malov
The driver had once been broken by patch [1] looking to have
a non-zero "nb_max" value in a use case not involving adding
any back-end ports. That was addressed afterwards ([2]). But,
as per report [3], similar test cases exist which attempt to
set up Rx queues on an empty bond device before attaching any back-end
ports. Rx queue setup, in turn, involves device info get API
invocation, and one of the checks on received data causes an
exception (division by zero). The "nb_align" value is indeed
zero at that time, but, as explained in [2], such test cases
are totally incorrect since a bond device must have at least
one back-end port plugged before any ethdev APIs can be used.

Once again, to avoid any problems with fixing the test cases,
this patch adjusts the bond PMD itself to work around the bug.

[1] commit 5be3b40fea60 ("net/bonding: fix values of descriptor limits")
[2] commit d03c0e83cc00 ("net/bonding: fix descriptor limit reporting")
[3] https://bugs.dpdk.org/show_bug.cgi?id=1118

Fixes: d03c0e83cc00 ("net/bonding: fix descriptor limit reporting")
Cc: sta...@dpdk.org

Signed-off-by: Ivan Malov 
Reviewed-by: Andrew Rybchenko 
---
 drivers/net/bonding/rte_eth_bond_pmd.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c 
b/drivers/net/bonding/rte_eth_bond_pmd.c
index dc74852137..145cb7099f 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -3426,6 +3426,8 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
 */
internals->rx_desc_lim.nb_max = UINT16_MAX;
internals->tx_desc_lim.nb_max = UINT16_MAX;
+   internals->rx_desc_lim.nb_align = 1;
+   internals->tx_desc_lim.nb_align = 1;
 
memset(internals->active_slaves, 0, sizeof(internals->active_slaves));
memset(internals->slaves, 0, sizeof(internals->slaves));
-- 
2.30.2



Re: [PATCH v18 00/18] add support for idpf PMD in DPDK

2022-10-31 Thread Thomas Monjalon
31/10/2022 09:33, beilei.x...@intel.com:
> From: Beilei Xing 
> 
> This patchset introduced the idpf (Infrastructure Data Path Function) PMD in 
> DPDK for Intel® IPU E2000 (Device ID: 0x1452).
> The Intel® IPU E2000 targets to deliver high performance under real workloads 
> with security and isolation.
> Please refer to
> https://www.intel.com/content/www/us/en/products/network-io/infrastructure-processing-units/asic/e2000-asic.html
> for more information.
> 
> Linux upstream is still ongoing, previous work refers to 
> https://patchwork.ozlabs.org/project/intel-wired-lan/patch/20220128001009.721392-20-alan.br...@intel.com/.

I've fixed/improved doc and build files.
Applied, thanks.




Re: [PATCH] net/gve: fix pointers dereference before null check

2022-10-31 Thread Ferruh Yigit

On 10/31/2022 5:05 AM, Junfeng Guo wrote:



The pointers 'rxq' and 'txq' are dereferenced before the null check.
Fixed the logic in this patch.

Fixes: 4bec2d0b5572 ("net/gve: support queue operations")

Signed-off-by: Junfeng Guo 


Reviewed-by: Ferruh Yigit 

Applied to dpdk-next-net/main, thanks.



Re: meson test link bonding failed//RE: [PATCH] net/bonding: fix descriptor limit reporting

2022-10-31 Thread Ivan Malov

Hi,

Please see 
https://patches.dpdk.org/project/dpdk/patch/20221031131744.2340150-1-ivan.ma...@oktetlabs.ru/ 
.


Thank you.

On Mon, 31 Oct 2022, Li, WeiyuanX wrote:


Hi Ivan,

This patch was merged into DPDK 22.11.0-rc1. When we execute the meson test
cases link_bonding_autotest, link_bonding_rssconf_autotest and
link_bonding_mode4_autotest, the tests fail.
Could you please have a look at it? We have also submitted a Bugzilla ticket:
https://bugs.dpdk.org/show_bug.cgi?id=1118

Regards,
Li, Weiyuan


-Original Message-
From: Ivan Malov 
Sent: Sunday, September 11, 2022 8:19 PM
To: dev@dpdk.org
Cc: sta...@dpdk.org; Andrew Rybchenko
; Chas Williams ; Min
Hu (Connor) ; Hari Kumar Vemula

Subject: [PATCH] net/bonding: fix descriptor limit reporting

Commit 5be3b40fea60 ("net/bonding: fix values of descriptor limits") breaks
reporting of "nb_min" and "nb_align" values obtained from back-end
devices' descriptor limits. This means that work done by
eth_bond_slave_inherit_desc_lim_first() as well as
eth_bond_slave_inherit_desc_lim_next() gets dismissed.

Revert the offending commit and use proper workaround for the test case
mentioned in the said commit.

Meanwhile, the test case itself might be poorly constructed.
It tries to run a bond with no back-end devices attached, but, according to [1]
("Requirements / Limitations"), at least one back-end device must be
attached.

[1] doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst

Fixes: 5be3b40fea60 ("net/bonding: fix values of descriptor limits")
Cc: sta...@dpdk.org

Signed-off-by: Ivan Malov 
Reviewed-by: Andrew Rybchenko 
---




Re: [PATCH v3] dumpcap: fix interface parameter check.

2022-10-31 Thread Thomas Monjalon
15/09/2022 10:44, Pattan, Reshma:
> From: Arshdeep Kaur 
> > 
> > Correction in handling 'IF' condition for -i parameter.
> > Fixes: cbb44143be74 ("app/dumpcap: add new packet capture application")
> > Signed-off-by: Arshdeep Kaur 
> 
> Acked-by: Reshma Pattan 

Sorry for not noticing, it was merged in DPDK 22.11-rc1.





RE: [PATCH] net/mlx5: enable flow aging action

2022-10-31 Thread Matan Azrad



> As the queue-based aging API has been integrated[1], the flow aging action
> support in HWS steering code can be enabled now.
> 
> [1]:
> https://patchwork.dpdk.org/project/dpdk/cover/20221026214943.3686635-
> 1-michae...@nvidia.com/
> 
> Signed-off-by: Suanming Mou 
Acked-by: Matan Azrad 


Re: [PATCH v6 1/5] examples/l3fwd: fix port group mask generation

2022-10-31 Thread Thomas Monjalon
25/10/2022 18:05, pbhagavat...@marvell.com:
> From: Pavan Nikhilesh 
> 
> Fix port group mask generation in altivec, vec_any_eq returns
> 0 or 1 while port_groupx4 expects comparison mask result.
> 
> Fixes: 2193b7467f7a ("examples/l3fwd: optimize packet processing on powerpc")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Pavan Nikhilesh 
> Acked-by: Shijith Thotton 

Series applied, thanks.






Re: [PATCH v3] examples/distributor: update dynamic configuration

2022-10-31 Thread Thomas Monjalon
Hello,

Not a complete review, but few general comments to improve the patch below:

01/09/2022 16:09, Abdullah Ömer Yamaç:
> In this patch,
> * It is possible to switch the running mode of the distributor
> using the command line argument.
> * With "-c" parameter, you can run RX and Distributor
> on the same core.
> * Without "-c" parameter, you can run RX and Distributor
> on the different core.
> * Consecutive termination of the lcores fixed.
> The termination order was wrong, and you couldn't terminate the
> application while traffic was capturing. The current order is
> RX -> Distributor -> TX -> Workers
> * When "-c" parameter is active, the wasted distributor core is
> also deactivated in the main function.

Please could you make clear what was the issue,
and what was changed in the commit message?

> -#if 0

It's good to remove such thing.
Dead code should not exist.

> + /*
> +  * Swap the following two lines if you want the rx traffic
> +  * to go directly to tx, no distribution.
> +  */

In DPDK, it is preferred to use uppercase Rx and Tx.

> + struct rte_ring *out_ring = p->rx_dist_ring;
> + /* struct rte_ring *out_ring = p->dist_tx_ring; */

This line is dead code, please remove.

> + if (!pd)

It is preferred to not use boolean operator with pointer.
Explicit comparison is encouraged: pd == NULL

> - rte_free(pd);
> + if (pd)
> + rte_free(pd);

This check is useless because redundant with rte_free behaviour.




Re: [PATCH v3] usertools: telemetry pretty print in interactive mode

2022-10-31 Thread Thomas Monjalon
17/10/2022 11:15, Bruce Richardson:
> On Mon, Oct 17, 2022 at 07:41:02AM +, Chengwen Feng wrote:
> > Currently, the dpdk-telemetry.py show json in raw format under
> > interactive mode, which is not good for human reading.
> > 
> > E.g. The command '/ethdev/xstats,0' will output:
> > {"/ethdev/xstats": {"rx_good_packets": 0, "tx_good_packets": 0,
> > "rx_good_bytes": 0, "tx_good_bytes": 0, "rx_missed_errors": 0,
> > "rx_errors": 0, "tx_errors": 0, "rx_mbuf_allocation_errors": 0,
> > "rx_q0_packets": 0,...}}
> > 
> > This patch supports json pretty print by adding extra indent=2
> > parameter under interactive mode, so the same command will output:
> > {
> >   "/ethdev/xstats": {
> > "rx_good_packets": 0,
> > "tx_good_packets": 0,
> > "rx_good_bytes": 0,
> > "tx_good_bytes": 0,
> > "rx_missed_errors": 0,
> > "rx_errors": 0,
> > "rx_mbuf_allocation_errors": 0,
> > "rx_q0_packets": 0,
> > ...
> >   }
> > }
> > 
> > Note: the non-interactive mode is made machine-readable and remains the
> > original way (it means don't use indent to pretty print).
> > 
> > Signed-off-by: Chengwen Feng 
> > Acked-by: David Marchand 
> > Acked-by: Ciara Power 
> > 
> Tested-by: Bruce Richardson 

Applied, thanks.
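The pretty-printing change discussed in this thread boils down to the indent parameter of Python's json module; a minimal standalone sketch:

```python
import json

data = {"/ethdev/xstats": {"rx_good_packets": 0, "tx_good_packets": 0}}

# Non-interactive mode: compact, machine-readable (single line).
compact = json.dumps(data)
# Interactive mode: pretty-printed with indent=2 for human reading.
pretty = json.dumps(data, indent=2)

print(compact)
print(pretty)
```

Applying indent=2 only in interactive mode keeps scripted consumers of the non-interactive output on single-line JSON, as the patch notes.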




Re: [PATCH v3] doc: add removal note for power empty poll API

2022-10-31 Thread Thomas Monjalon
07/10/2022 15:40, Reshma Pattan:
> --- a/doc/guides/prog_guide/power_man.rst
> +++ b/doc/guides/prog_guide/power_man.rst
> @@ -192,6 +192,14 @@ User Cases
>  --
>  The mechanism can applied to any device which is based on polling. e.g. NIC, 
> FPGA.
>  
> +
> +Removal Note
> +
> +
> +The experimental empty poll APIs will be removed from the library in a future
> +DPDK release.

After more thoughts, I think it would be better highlighted if moved
at the beginning of the section "Empty Poll API".
It could a note block or a warning.

Also, could we explain how it is replaced?




Re: [RFC] doc: update required kernel version to 4.14

2022-10-31 Thread Thomas Monjalon
02/08/2022 18:28, Morten Brørup:
> > From: Stephen Hemminger [mailto:step...@networkplumber.org]
> > Sent: Tuesday, 2 August 2022 17.36
> > 
> > The 4.4 kernel was end of life in February 2022,
> > and the next LTS is 4.9 and it is reaching EOL in January 2023.
> > The main distro using 4.9 is Debian Stretch and it is no longer
> > supported. When DPDK 22.11 is released, the 4.9 kernel would
> > only be receiving fixes for three months; therefore
> > lets make the official version 4.14.
> 
> Makes very good sense to me.

Yes

> > As always, current major enterprise Linux releases will continue
> > to be supported, but those releases don't track regular kernel
> > version numbering.
> > 
> > For full details on kernel support see:
> > https://www.kernel.org/category/releases.html
> > https://en.wikipedia.org/wiki/Linux_kernel_version_history
> > 
> > Debian Stretch:
> > https://www.debian.org/releases/stretch/
> > 
> > Signed-off-by: Stephen Hemminger 

Applied, thanks.




RE: [PATCH v12 04/16] baseband/acc: introduce PMD for ACC200

2022-10-31 Thread Chautru, Nicolas
Hi Thomas, 

> -Original Message-
> From: Thomas Monjalon 
> Sent: Sunday, October 30, 2022 9:03 AM
> To: Chautru, Nicolas 
> Cc: dev@dpdk.org; gak...@marvell.com; maxime.coque...@redhat.com;
> t...@redhat.com; Richardson, Bruce ;
> hemant.agra...@nxp.com; david.march...@redhat.com;
> step...@networkplumber.org; Vargas, Hernan 
> Subject: Re: [PATCH v12 04/16] baseband/acc: introduce PMD for ACC200
> 
> 12/10/2022 19:59, Nicolas Chautru:
> > +Bind PF UIO driver(s)
> > +~
> > +
> > +Install the DPDK igb_uio driver, bind it with the PF PCI device ID
> > +and use ``lspci`` to confirm the PF device is under use by ``igb_uio`` DPDK
> UIO driver.
> 
> igb_uio is not recommended.
> Please focus on VFIO first.
> 
> > +The igb_uio driver may be bound to the PF PCI device using one of two
> > +methods for ACC200:
> > +
> > +
> > +1. PCI functions (physical or virtual, depending on the use case) can
> > +be bound to the UIO driver by repeating this command for every function.
> > +
> > +.. code-block:: console
> > +
> > +  cd 
> > +  insmod ./build/kmod/igb_uio.ko
> > +  echo "8086 57c0" > /sys/bus/pci/drivers/igb_uio/new_id
> > +  lspci -vd8086:57c0
> > +
> > +
> > +2. Another way to bind PF with DPDK UIO driver is by using the
> > +``dpdk-devbind.py`` tool
> > +
> > +.. code-block:: console
> > +
> > +  cd 
> > +  ./usertools/dpdk-devbind.py -b igb_uio :f7:00.0
> > +
> > +where the PCI device ID (example: :f7:00.0) is obtained using
> > +lspci -vd8086:57c0
> 
> This binding is not specific to the driver.
> It would be better to refer to the Linux guide instead of duplicating it again
> and again.
> 
> > +In a similar way the PF may be bound with vfio-pci as any PCIe device.
> 
> You could mention igb_uio here.
> Is there any advantage in using igb_uio?
> 

igb_uio is arguably easier to use, so new users, or users in specific 
ecosystems, tend to start with it. This is typically the entry point (no 
IOMMU, no FLR under the bonnet, no VFIO token...), hence it is good to have a 
bit of handholding with a couple of lines capturing how to easily run a few 
tests. I don't believe these few lines are too redundant, compared to the 
help they bring: the user does not have to second-guess their steps. 
More generally, there are a number of module/driver combinations that are 
supported based on different deployments. We don't document them in too much 
detail since that is not too ACC-specific, and there is more documentation in 
the pf_bb_config repo for using the PMD from the VF. 

Basically Thomas let us know more explicitly what you are suggesting as 
documentation update. You just want more emphasis on vfio-pci flow (which is 
fair, some of it documented on pf_bb_config including the vfio token passing 
but we can reproduce here as well) or something else? 

Thanks!
Nic




Re: [PATCH v2] devtools: check for supported git version

2022-10-31 Thread Thomas Monjalon
26/10/2022 10:34, Chaoyong He:
> > On 10/26/2022 7:24 AM, David Marchand wrote:
> > > On Tue, Oct 25, 2022 at 12:15 PM Ali Alnubani  wrote:
> > >>
> > >> The script devtools/parse-flow-support.sh uses the git-grep option
> > >> (-o, --only-matching), which is only supported from git version 2.19
> > >> and onwards.[1]
> > >>
> > >> The script now exits early providing a clear message to the user
> > >> about the required git version instead of showing the following error
> > >> messages multiple times:
> > >>error: unknown switch `o'
> > >>usage: git grep [] [-e]  [...] [[--] ...]
> > >>[..]
> > >>
> > >> [1]
> > >> https://github.com/git/git/blob/v2.19.0/Documentation/RelNotes/2.19.0
> > >> .txt
> > >>
> > >> Signed-off-by: Ali Alnubani 
> > >> Signed-off-by: Thomas Monjalon 
> > >
> > > I don't have a "non working" git, but the patch lgtm.
> > >
> > > Acked-by: David Marchand 
> > >
> > 
> > +Chaoyong,
> > 
> > He had observed the problem, perhaps he can help to test.
> 
> I test in my host, it does work.
> 
> $ git --version
> git version 2.18.5
> 
> Before this patch:
> $ ./devtools/check-doc-vs-code.sh
> error: unknown switch `o'
> usage: git grep [] [-e]  [...] [[--] ...]
> --cached  search in index instead of in the work tree
> ...
> repeat many times.
> 
> After this patch:
> $ ./devtools/check-doc-vs-code.sh
> git version >= 2.19 is required

Applied, thanks.






Re: [PATCH v12 04/16] baseband/acc: introduce PMD for ACC200

2022-10-31 Thread Thomas Monjalon
31/10/2022 16:43, Chautru, Nicolas:
> From: Thomas Monjalon 
> > 12/10/2022 19:59, Nicolas Chautru:
> > > +Bind PF UIO driver(s)
> > > +~
> > > +
> > > +Install the DPDK igb_uio driver, bind it with the PF PCI device ID
> > > +and use ``lspci`` to confirm the PF device is under use by ``igb_uio`` 
> > > DPDK
> > UIO driver.
> > 
> > igb_uio is not recommended.
> > Please focus on VFIO first.
> > 
> > > +The igb_uio driver may be bound to the PF PCI device using one of two
> > > +methods for ACC200:
> > > +
> > > +
> > > +1. PCI functions (physical or virtual, depending on the use case) can
> > > +be bound to the UIO driver by repeating this command for every function.
> > > +
> > > +.. code-block:: console
> > > +
> > > +  cd 
> > > +  insmod ./build/kmod/igb_uio.ko
> > > +  echo "8086 57c0" > /sys/bus/pci/drivers/igb_uio/new_id
> > > +  lspci -vd8086:57c0
> > > +
> > > +
> > > +2. Another way to bind PF with DPDK UIO driver is by using the
> > > +``dpdk-devbind.py`` tool
> > > +
> > > +.. code-block:: console
> > > +
> > > +  cd 
> > > +  ./usertools/dpdk-devbind.py -b igb_uio :f7:00.0
> > > +
> > > +where the PCI device ID (example: :f7:00.0) is obtained using
> > > +lspci -vd8086:57c0
> > 
> > This binding is not specific to the driver.
> > It would be better to refer to the Linux guide instead of duplicating it 
> > again
> > and again.
> > 
> > > +In a similar way the PF may be bound with vfio-pci as any PCIe device.
> > 
> > You could mention igb_uio here.
> > Is there any advantage in using igb_uio?
> > 
> 
> igb_uio is arguably easier to use, so new users, or users in specific 
> ecosystems, tend to start with it. This is typically the entry point (no 
> IOMMU, no FLR under the bonnet, no VFIO token...), hence it is good to have 
> a bit of handholding with a couple of lines capturing how to easily run a 
> few tests. I don't believe these few lines are too redundant, compared to 
> the help they bring: the user does not have to second-guess their steps. 
> More generally, there are a number of module/driver combinations that are 
> supported based on different deployments. We don't document them in too 
> much detail since that is not too ACC-specific, and there is more 
> documentation in the pf_bb_config repo for using the PMD from the VF. 
> 
> Basically Thomas let us know more explicitly what you are suggesting as 
> documentation update. You just want more emphasis on vfio-pci flow (which is 
> fair, some of it documented on pf_bb_config including the vfio token passing 
> but we can reproduce here as well) or something else? 

There are 2 things to change:
1/ igb_uio is going to be deprecated, so we must emphasize on VFIO
2/ for doc maintenance, it is better to have common steps described in one 
place.
If needed, you can change the common doc and refer to it.





[PATCH 0/5] net/mlx5: some counter fixes

2022-10-31 Thread Michael Baum
Some fixes for HW/SW steering counters.

Michael Baum (5):
  net/mlx5: fix race condition in counter pool resizing
  net/mlx5: fix accessing the wrong counter
  net/mlx5: fix missing counter elements copies in r2r cases
  net/mlx5: add assertions in counter get/put
  net/mlx5: assert for enough space in counter rings

 drivers/net/mlx5/mlx5.c|  28 ++-
 drivers/net/mlx5/mlx5.h|   7 +-
 drivers/net/mlx5/mlx5_flow.c   |  24 +++---
 drivers/net/mlx5/mlx5_flow_dv.c|  53 +++--
 drivers/net/mlx5/mlx5_flow_hw.c|   2 +-
 drivers/net/mlx5/mlx5_flow_verbs.c |  23 ++
 drivers/net/mlx5/mlx5_hws_cnt.c|  25 +++---
 drivers/net/mlx5/mlx5_hws_cnt.h| 117 -
 8 files changed, 131 insertions(+), 148 deletions(-)

-- 
2.25.1



[PATCH 1/5] net/mlx5: fix race condition in counter pool resizing

2022-10-31 Thread Michael Baum
The counter management structure has an array of counter pools. This array
is not valid at management structure initialization and grows on demand.

The resizing includes:
1. Allocate memory for the new size.
2. Copy the existing data to the new memory.
3. Move the pointer to the new memory.
4. Free the old memory.

The third step can be reordered before the second one, and the compiler
may do that, so another thread might read the pointer before the copying
has completed and read invalid data, or even crash.

This patch allocates memory for this array once in management structure
initialization and limits the number of counters to 16M.

Fixes: 3aa279157fa0 ("net/mlx5: synchronize flow counter pool creation")
Cc: suanmi...@nvidia.com
Cc: sta...@dpdk.org

Signed-off-by: Michael Baum 
Acked-by: Matan Azrad 
---
 drivers/net/mlx5/mlx5.c| 28 +---
 drivers/net/mlx5/mlx5.h|  7 ++--
 drivers/net/mlx5/mlx5_flow.c   | 24 +++---
 drivers/net/mlx5/mlx5_flow_dv.c| 53 +-
 drivers/net/mlx5/mlx5_flow_verbs.c | 23 +++--
 5 files changed, 52 insertions(+), 83 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 78234b116c..b85a56ec24 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -561,18 +561,34 @@ mlx5_flow_counter_mode_config(struct rte_eth_dev *dev 
__rte_unused)
  *
  * @param[in] sh
  *   Pointer to mlx5_dev_ctx_shared object to free
+ *
+ * @return
+ *   0 on success, otherwise negative errno value and rte_errno is set.
  */
-static void
+static int
 mlx5_flow_counters_mng_init(struct mlx5_dev_ctx_shared *sh)
 {
int i, j;
 
if (sh->config.dv_flow_en < 2) {
+   void *pools;
+
+   pools = mlx5_malloc(MLX5_MEM_ZERO,
+   sizeof(struct mlx5_flow_counter_pool *) *
+   MLX5_COUNTER_POOLS_MAX_NUM,
+   0, SOCKET_ID_ANY);
+   if (!pools) {
+   DRV_LOG(ERR,
+   "Counter management allocation was failed.");
+   rte_errno = ENOMEM;
+   return -rte_errno;
+   }
memset(&sh->sws_cmng, 0, sizeof(sh->sws_cmng));
TAILQ_INIT(&sh->sws_cmng.flow_counters);
sh->sws_cmng.min_id = MLX5_CNT_BATCH_OFFSET;
sh->sws_cmng.max_id = -1;
sh->sws_cmng.last_pool_idx = POOL_IDX_INVALID;
+   sh->sws_cmng.pools = pools;
rte_spinlock_init(&sh->sws_cmng.pool_update_sl);
for (i = 0; i < MLX5_COUNTER_TYPE_MAX; i++) {
TAILQ_INIT(&sh->sws_cmng.counters[i]);
@@ -598,6 +614,7 @@ mlx5_flow_counters_mng_init(struct mlx5_dev_ctx_shared *sh)
sh->hws_max_log_bulk_sz = log_dcs;
sh->hws_max_nb_counters = max_nb_cnts;
}
+   return 0;
 }
 
 /**
@@ -655,8 +672,7 @@ mlx5_flow_counters_mng_close(struct mlx5_dev_ctx_shared *sh)
claim_zero
 (mlx5_flow_os_destroy_flow_action
  (cnt->action));
-   if (fallback && MLX5_POOL_GET_CNT
-   (pool, j)->dcs_when_free)
+   if (fallback && cnt->dcs_when_free)
claim_zero(mlx5_devx_cmd_destroy
   (cnt->dcs_when_free));
}
@@ -1572,8 +1588,12 @@ mlx5_alloc_shared_dev_ctx(const struct 
mlx5_dev_spawn_data *spawn,
if (err)
goto error;
}
+   err = mlx5_flow_counters_mng_init(sh);
+   if (err) {
+   DRV_LOG(ERR, "Fail to initialize counters manage.");
+   goto error;
+   }
mlx5_flow_aging_init(sh);
-   mlx5_flow_counters_mng_init(sh);
mlx5_flow_ipool_create(sh);
/* Add context to the global device list. */
LIST_INSERT_HEAD(&mlx5_dev_ctx_list, sh, next);
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index c9fcb71b69..cbe2d88b9e 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -386,11 +386,10 @@ struct mlx5_hw_q {
 } __rte_cache_aligned;
 
 
-
-
+#define MLX5_COUNTER_POOLS_MAX_NUM (1 << 15)
 #define MLX5_COUNTERS_PER_POOL 512
 #define MLX5_MAX_PENDING_QUERIES 4
-#define MLX5_CNT_CONTAINER_RESIZE 64
+#define MLX5_CNT_MR_ALLOC_BULK 64
 #define MLX5_CNT_SHARED_OFFSET 0x8000
 #define IS_BATCH_CNT(cnt) (((cnt) & (MLX5_CNT_SHARED_OFFSET - 1)) >= \
   MLX5_CNT_BATCH_OFFSET)
@@ -549,7 +548,6 @@ TAILQ_HEAD(mlx5_counter_pools, mlx5_flow_counter_pool);
 /* Counter global management structure. */
 struct mlx5_flow_counter_mng {
volatile uint16_t n_valid; /* Number of valid pools. */
-   uint16_t n; /* Number of pool

[PATCH 2/5] net/mlx5: fix accessing the wrong counter

2022-10-31 Thread Michael Baum
The HWS counter has 2 different identifiers:
1. Type "cnt_id_t", which represents the counter inside caches and in
   the flow structure. This index cannot be zero and is mostly called
   "cnt_id".
2. Internal index, the index in the counters array, with type "uint32_t".
   It is mostly called "iidx".
function.

When a direct counter is allocated, if the queue cache is not empty, the
counter represented by cnt_id is popped from the cache. This counter may
be invalid according to the query_gen field. The "iidx" is parsed from
cnt_id and, if the counter is valid, it is used to update the fields of
the counter structure.
When this counter is invalid, the entire cache is flushed and new
counters are fetched into the cache. After fetching, another counter
represented by cnt_id is taken from the cache.
Unfortunately, for updating fields like "in_used" or "age_idx", the
function may wrongly use the old "iidx" coming from the invalid cnt_id.

Update the "iidx" in case of an invalid counter popped from the cache.
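The shape of the bug and fix can be sketched outside the driver. The names below (`iidx_from_cnt_id`, `counter_valid`, `ID_BASE`, and the toy validity rule) are illustrative stand-ins, not the real mlx5 code:

```c
#include <assert.h>
#include <stdbool.h>

#define ID_BASE 0x1000u

/* Stands in for mlx5_hws_cnt_iidx(): derive the internal index. */
static unsigned int iidx_from_cnt_id(unsigned int cnt_id)
{
	return cnt_id - ID_BASE;
}

/* Toy validity rule for the sketch (the real check uses query_gen). */
static bool counter_valid(unsigned int iidx)
{
	return (iidx & 1u) == 0;
}

/* Returns the internal index that must be used to update the counter
 * fields after a possible cache flush and refetch. */
static unsigned int counter_get_iidx(unsigned int first_cnt_id,
				     unsigned int refetched_cnt_id)
{
	unsigned int cnt_id = first_cnt_id;
	unsigned int iidx = iidx_from_cnt_id(cnt_id);

	if (!counter_valid(iidx)) {
		/* The cache is flushed and a new counter is popped... */
		cnt_id = refetched_cnt_id;
		/* ...so iidx must be re-derived; this line is the fix. */
		iidx = iidx_from_cnt_id(cnt_id);
	}
	return iidx;
}
```

Without the re-derivation inside the `if`, the stale index of the invalid counter would be returned.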

Fixes: 4d368e1da3a4 ("net/mlx5: support flow counter action for HWS")
Cc: jack...@nvidia.com

Signed-off-by: Michael Baum 
Acked-by: Matan Azrad 
Acked-by: Xiaoyu Min 
---
 drivers/net/mlx5/mlx5_hws_cnt.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/mlx5/mlx5_hws_cnt.h b/drivers/net/mlx5/mlx5_hws_cnt.h
index e311923f71..196604aded 100644
--- a/drivers/net/mlx5/mlx5_hws_cnt.h
+++ b/drivers/net/mlx5/mlx5_hws_cnt.h
@@ -506,6 +506,7 @@ mlx5_hws_cnt_pool_get(struct mlx5_hws_cnt_pool *cpool, 
uint32_t *queue,
rte_ring_dequeue_zc_burst_elem_start(qcache, sizeof(cnt_id_t),
1, &zcdc, NULL);
*cnt_id = *(cnt_id_t *)zcdc.ptr1;
+   iidx = mlx5_hws_cnt_iidx(cpool, *cnt_id);
}
__hws_cnt_query_raw(cpool, *cnt_id, &cpool->pool[iidx].reset.hits,
&cpool->pool[iidx].reset.bytes);
-- 
2.25.1



[PATCH 3/5] net/mlx5: fix missing counter elements copies in r2r cases

2022-10-31 Thread Michael Baum
The __hws_cnt_r2rcpy() function copies elements from one zero-copy ring
to another zero-copy ring in place.
This routine needs to consider the situation where the addresses given
by both the source and the destination rings could be wrapped.

It uses 4 different "n" local variables to manage it:
 - n:  Number of elements to copy in total.
 - n1: Number of elements to copy from ptr1; it is the minimum of the
   source/dest n1 fields.
 - n2: Number of elements to copy from src->ptr1 to dst->ptr2 or from
   src->ptr2 to dst->ptr1; this variable is 0 when the source and
   dest n1 fields are equal.
 - n3: Number of elements to copy from src->ptr2 to dst->ptr2.

The function copies the first n1 elements. If n2 isn't zero, it copies
more elements and then checks whether n3 is zero.
This logic is wrong since n3 may be bigger than zero even when n2 is
zero. This scenario commonly happens in counters when the internal
mlx5 service thread copies elements from the reset ring into the reuse
ring.

This patch changes the function to copy n3 regardless of the n2 value.
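The copy logic can be sketched with a simplified model; `zc_seg` and the `int` elements below are assumptions standing in for `rte_ring_zc_data` and `cnt_id_t`:

```c
#include <assert.h>
#include <string.h>

/* Toy model of a zero-copy ring window: up to two contiguous
 * segments (ptr1 with n1 elements, then ptr2 for the wrap). */
struct zc_seg {
	int *ptr1;
	unsigned int n1;
	int *ptr2;
};

static void r2rcpy(struct zc_seg *d, struct zc_seg *s, unsigned int n)
{
	unsigned int n1 = d->n1 < s->n1 ? d->n1 : s->n1;        /* common head */
	unsigned int n2 = (d->n1 > s->n1 ? d->n1 : s->n1) - n1; /* cross part */
	unsigned int n3 = n - n1 - n2;                          /* common tail */
	int *s2 = s->n1 > n1 ? s->ptr1 + n1 : s->ptr2;
	int *d2 = d->n1 > n1 ? d->ptr1 + n1 : d->ptr2;
	int *s3 = s->ptr2 + (s->n1 > n1 ? 0 : n2);
	int *d3 = d->ptr2 + (d->n1 > n1 ? 0 : n2);

	memcpy(d->ptr1, s->ptr1, n1 * sizeof(int));
	if (n2 != 0)
		memcpy(d2, s2, n2 * sizeof(int));
	if (n3 != 0)	/* checked independently of n2: this is the fix */
		memcpy(d3, s3, n3 * sizeof(int));
}
```

When both rings wrap at the same offset, n2 is 0 while n3 is positive; nesting the n3 copy under `n2 != 0` (the old code) would silently skip the tail.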

Fixes: 4d368e1da3a4 ("net/mlx5: support flow counter action for HWS")
Cc: jack...@nvidia.com

Signed-off-by: Michael Baum 
Acked-by: Matan Azrad 
Acked-by: Xiaoyu Min 
---
 drivers/net/mlx5/mlx5_hws_cnt.h | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_hws_cnt.h b/drivers/net/mlx5/mlx5_hws_cnt.h
index 196604aded..6e371f1929 100644
--- a/drivers/net/mlx5/mlx5_hws_cnt.h
+++ b/drivers/net/mlx5/mlx5_hws_cnt.h
@@ -281,11 +281,10 @@ __hws_cnt_r2rcpy(struct rte_ring_zc_data *zcdd, struct 
rte_ring_zc_data *zcds,
d3 = zcdd->ptr2;
}
memcpy(d1, s1, n1 * sizeof(cnt_id_t));
-   if (n2 != 0) {
+   if (n2 != 0)
memcpy(d2, s2, n2 * sizeof(cnt_id_t));
-   if (n3 != 0)
-   memcpy(d3, s3, n3 * sizeof(cnt_id_t));
-   }
+   if (n3 != 0)
+   memcpy(d3, s3, n3 * sizeof(cnt_id_t));
 }
 
 static __rte_always_inline int
-- 
2.25.1



[PATCH 4/5] net/mlx5: add assertions in counter get/put

2022-10-31 Thread Michael Baum
Add assertions to help debug in case of counter double alloc/free.
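A minimal sketch of the guard pattern, with a toy `struct counter` standing in for the real pool entry and plain `assert()` standing in for `MLX5_ASSERT()`:

```c
#include <assert.h>
#include <stdbool.h>

struct counter {
	bool in_used;
};

static void counter_get(struct counter *c)
{
	assert(!c->in_used);	/* trips on a double alloc in debug builds */
	c->in_used = true;
}

static void counter_put(struct counter *c)
{
	assert(c->in_used);	/* trips on a double free in debug builds */
	c->in_used = false;
}
```

The flag flips on every get/put, so any repeated operation on the same counter fails the assertion immediately instead of corrupting pool state later.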

Signed-off-by: Michael Baum 
Acked-by: Matan Azrad 
Acked-by: Xiaoyu Min 
---
 drivers/net/mlx5/mlx5_hws_cnt.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_hws_cnt.h b/drivers/net/mlx5/mlx5_hws_cnt.h
index 6e371f1929..338ee4d688 100644
--- a/drivers/net/mlx5/mlx5_hws_cnt.h
+++ b/drivers/net/mlx5/mlx5_hws_cnt.h
@@ -396,6 +396,7 @@ mlx5_hws_cnt_pool_put(struct mlx5_hws_cnt_pool *cpool,
uint32_t iidx;
 
iidx = mlx5_hws_cnt_iidx(cpool, *cnt_id);
+   MLX5_ASSERT(cpool->pool[iidx].in_used);
cpool->pool[iidx].in_used = false;
cpool->pool[iidx].query_gen_when_free =
__atomic_load_n(&cpool->query_gen, __ATOMIC_RELAXED);
@@ -475,6 +476,7 @@ mlx5_hws_cnt_pool_get(struct mlx5_hws_cnt_pool *cpool, 
uint32_t *queue,
__hws_cnt_query_raw(cpool, *cnt_id,
&cpool->pool[iidx].reset.hits,
&cpool->pool[iidx].reset.bytes);
+   MLX5_ASSERT(!cpool->pool[iidx].in_used);
cpool->pool[iidx].in_used = true;
cpool->pool[iidx].age_idx = age_idx;
return 0;
@@ -511,6 +513,7 @@ mlx5_hws_cnt_pool_get(struct mlx5_hws_cnt_pool *cpool, 
uint32_t *queue,
&cpool->pool[iidx].reset.bytes);
rte_ring_dequeue_zc_elem_finish(qcache, 1);
cpool->pool[iidx].share = 0;
+   MLX5_ASSERT(!cpool->pool[iidx].in_used);
cpool->pool[iidx].in_used = true;
cpool->pool[iidx].age_idx = age_idx;
return 0;
-- 
2.25.1



[PATCH 5/5] net/mlx5: assert for enough space in counter rings

2022-10-31 Thread Michael Baum
There is a by-design assumption in the code that the global counter
rings can contain all the port counters.
So, enqueuing to these global rings should always succeed.

Add assertions to help for debugging this assumption.

In addition, change the mlx5_hws_cnt_pool_put() function to return void,
following from this assumption.
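The invariant can be illustrated with a toy ring; `toy_ring` and `RING_CAP` below are assumptions for the sketch, not the rte_ring API:

```c
#include <assert.h>

#define RING_CAP 8 /* illustrative: sized to hold every port counter */

struct toy_ring {
	int buf[RING_CAP];
	unsigned int count;
};

/* Burst enqueue that reports how many elements it accepted, mirroring
 * the return-count semantics of the rte_ring burst APIs. */
static unsigned int ring_enqueue_burst(struct toy_ring *r,
				       const int *elems, unsigned int n)
{
	unsigned int space = RING_CAP - r->count;
	unsigned int todo = n < space ? n : space;
	unsigned int i;

	for (i = 0; i < todo; i++)
		r->buf[r->count++] = elems[i];
	return todo;
}
```

Because the ring is sized for all counters, a debug-only check such as `assert(ret == n)` after each burst documents the assumption without adding a release-build branch, which is exactly what the `MLX5_ASSERT()` calls in the patch do.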

Signed-off-by: Michael Baum 
Acked-by: Matan Azrad 
Acked-by: Xiaoyu Min 
---
 drivers/net/mlx5/mlx5_flow_hw.c |   2 +-
 drivers/net/mlx5/mlx5_hws_cnt.c |  25 
 drivers/net/mlx5/mlx5_hws_cnt.h | 106 +---
 3 files changed, 72 insertions(+), 61 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 2d275ad111..54a0afe45f 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -7874,7 +7874,7 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, 
uint32_t queue,
 * time to update the AGE.
 */
mlx5_hws_age_nb_cnt_decrease(priv, age_idx);
-   ret = mlx5_hws_cnt_shared_put(priv->hws_cpool, &act_idx);
+   mlx5_hws_cnt_shared_put(priv->hws_cpool, &act_idx);
break;
case MLX5_INDIRECT_ACTION_TYPE_CT:
ret = flow_hw_conntrack_destroy(dev, act_idx, error);
diff --git a/drivers/net/mlx5/mlx5_hws_cnt.c b/drivers/net/mlx5/mlx5_hws_cnt.c
index b8ce69af57..24c01eace0 100644
--- a/drivers/net/mlx5/mlx5_hws_cnt.c
+++ b/drivers/net/mlx5/mlx5_hws_cnt.c
@@ -58,13 +58,14 @@ __hws_cnt_id_load(struct mlx5_hws_cnt_pool *cpool)
 
 static void
 __mlx5_hws_cnt_svc(struct mlx5_dev_ctx_shared *sh,
-   struct mlx5_hws_cnt_pool *cpool)
+  struct mlx5_hws_cnt_pool *cpool)
 {
struct rte_ring *reset_list = cpool->wait_reset_list;
struct rte_ring *reuse_list = cpool->reuse_list;
uint32_t reset_cnt_num;
struct rte_ring_zc_data zcdr = {0};
struct rte_ring_zc_data zcdu = {0};
+   uint32_t ret __rte_unused;
 
reset_cnt_num = rte_ring_count(reset_list);
do {
@@ -72,17 +73,19 @@ __mlx5_hws_cnt_svc(struct mlx5_dev_ctx_shared *sh,
mlx5_aso_cnt_query(sh, cpool);
zcdr.n1 = 0;
zcdu.n1 = 0;
-   rte_ring_enqueue_zc_burst_elem_start(reuse_list,
-   sizeof(cnt_id_t), reset_cnt_num, &zcdu,
-   NULL);
-   rte_ring_dequeue_zc_burst_elem_start(reset_list,
-   sizeof(cnt_id_t), reset_cnt_num, &zcdr,
-   NULL);
+   ret = rte_ring_enqueue_zc_burst_elem_start(reuse_list,
+  sizeof(cnt_id_t),
+  reset_cnt_num, &zcdu,
+  NULL);
+   MLX5_ASSERT(ret == reset_cnt_num);
+   ret = rte_ring_dequeue_zc_burst_elem_start(reset_list,
+  sizeof(cnt_id_t),
+  reset_cnt_num, &zcdr,
+  NULL);
+   MLX5_ASSERT(ret == reset_cnt_num);
__hws_cnt_r2rcpy(&zcdu, &zcdr, reset_cnt_num);
-   rte_ring_dequeue_zc_elem_finish(reset_list,
-   reset_cnt_num);
-   rte_ring_enqueue_zc_elem_finish(reuse_list,
-   reset_cnt_num);
+   rte_ring_dequeue_zc_elem_finish(reset_list, reset_cnt_num);
+   rte_ring_enqueue_zc_elem_finish(reuse_list, reset_cnt_num);
reset_cnt_num = rte_ring_count(reset_list);
} while (reset_cnt_num > 0);
 }
diff --git a/drivers/net/mlx5/mlx5_hws_cnt.h b/drivers/net/mlx5/mlx5_hws_cnt.h
index 338ee4d688..030dcead86 100644
--- a/drivers/net/mlx5/mlx5_hws_cnt.h
+++ b/drivers/net/mlx5/mlx5_hws_cnt.h
@@ -116,7 +116,7 @@ enum {
HWS_AGE_CANDIDATE_INSIDE_RING,
/*
 * AGE assigned to flows but it still in ring. It was aged-out but the
-* timeout was changed, so it in ring but stiil candidate.
+* timeout was changed, so it in ring but still candidate.
 */
HWS_AGE_AGED_OUT_REPORTED,
/*
@@ -182,7 +182,7 @@ mlx5_hws_cnt_id_valid(cnt_id_t cnt_id)
  *
  * @param cpool
  *   The pointer to counter pool
- * @param index
+ * @param iidx
  *   The internal counter index.
  *
  * @return
@@ -231,32 +231,32 @@ __hws_cnt_query_raw(struct mlx5_hws_cnt_pool *cpool, 
cnt_id_t cnt_id,
 }
 
 /**
- * Copy elems from one zero-copy ring to zero-copy ring in place.
+ * Copy elements from one zero-copy ring to zero-copy ring in place.
  *
  * The input is a rte ring zero-copy data struct, which has two pointer.
  * in case of the wrapper happened, the ptr2 will be meaningful.
  *
- * So this rountin needs t

Re: [PATCH] maintainers: change maintainer for event ethdev Rx/Tx adapters

2022-10-31 Thread Thomas Monjalon
31/10/2022 12:05, Naga Harish K, S V:
> From: Thomas Monjalon 
> > 21/10/2022 13:35, Jay Jayatheerthan:
> > > Harish is the new maintainer of Rx/Tx adapters due to role change of Jay
> > >
> > > Signed-off-by: Jay Jayatheerthan 
> > 
> > Please could we have an approval from the new maintainer?
> > An ack would make things clear and accepted.
> 
> Acked by: Naga Harish K S V 

As a maintainer, you must approve with the exact reply (note the hyphen)
Acked-by: Naga Harish K S V 
so it will be recognized by the tooling (like patchwork).

Applied, thanks.




Re: unsubscribe

2022-10-31 Thread Thomas Monjalon
13/10/2022 12:14, Benjamin Demartin:
> Hello I would like to unsubscribe but I don’t know how

This is the link to do it yourself:
https://mails.dpdk.org/listinfo/dev

I've unsubscribed you from dev list.




Re: 回复: 回复: 回复: [PATCH v2 1/3] ethdev: add API for direct rearm mode

2022-10-31 Thread Konstantin Ananyev




Hi Feifei,



Add API for enabling direct rearm mode and for mapping RX and TX
queues. Currently, the API supports 1:1(txq : rxq) mapping.

Furthermore, to avoid the Rx path loading Tx data directly, add an API
called 'rte_eth_txq_data_get' to get the Tx sw_ring and its information.

Suggested-by: Honnappa Nagarahalli



Suggested-by: Ruifeng Wang 
Signed-off-by: Feifei Wang 
Reviewed-by: Ruifeng Wang 
Reviewed-by: Honnappa Nagarahalli



---
 lib/ethdev/ethdev_driver.h   |  9 
 lib/ethdev/ethdev_private.c  |  1 +
 lib/ethdev/rte_ethdev.c  | 37 ++
 lib/ethdev/rte_ethdev.h  | 95



 lib/ethdev/rte_ethdev_core.h |  5 ++
 lib/ethdev/version.map   |  4 ++
 6 files changed, 151 insertions(+)

diff --git a/lib/ethdev/ethdev_driver.h
b/lib/ethdev/ethdev_driver.h index 47a55a419e..14f52907c1 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -58,6 +58,8 @@ struct rte_eth_dev {
eth_rx_descriptor_status_t rx_descriptor_status;
/** Check the status of a Tx descriptor */
eth_tx_descriptor_status_t tx_descriptor_status;
+   /**  Use Tx mbufs for Rx to rearm */
+   eth_rx_direct_rearm_t rx_direct_rearm;

/**
 * Device data that is shared between primary and secondary processes
@@ -486,6 +488,11 @@ typedef int (*eth_rx_enable_intr_t)(struct rte_eth_dev *dev,
 typedef int (*eth_rx_disable_intr_t)(struct rte_eth_dev *dev,
 		uint16_t rx_queue_id);
 
+/**< @internal Get Tx information of a transmit queue of an Ethernet device. */
+typedef void (*eth_txq_data_get_t)(struct rte_eth_dev *dev,
+		uint16_t tx_queue_id,
+		struct rte_eth_txq_data *txq_data);
+
 /** @internal Release memory resources allocated by given Rx/Tx queue. */
 typedef void (*eth_queue_release_t)(struct rte_eth_dev *dev,
 		uint16_t queue_id);
@@ -1138,6 +1145,8 @@ struct eth_dev_ops {
eth_rxq_info_get_t rxq_info_get;
/** Retrieve Tx queue information */
eth_txq_info_get_t txq_info_get;
+   /** Get the address where Tx data is stored */
+   eth_txq_data_get_t txq_data_get;
	eth_burst_mode_get_t   rx_burst_mode_get; /**< Get Rx burst mode */
	eth_burst_mode_get_t   tx_burst_mode_get; /**< Get Tx burst mode */
	eth_fw_version_get_t   fw_version_get; /**< Get firmware version */

diff --git a/lib/ethdev/ethdev_private.c
b/lib/ethdev/ethdev_private.c index 48090c879a..bfe16c7d77 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -276,6 +276,7 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
fpo->rx_queue_count = dev->rx_queue_count;
fpo->rx_descriptor_status = dev->rx_descriptor_status;
fpo->tx_descriptor_status = dev->tx_descriptor_status;
+   fpo->rx_direct_rearm = dev->rx_direct_rearm;

fpo->rxq.data = dev->data->rx_queues;
fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 0c2c1088c0..0dccec2e4b 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -1648,6 +1648,43 @@ rte_eth_dev_is_removed(uint16_t port_id)
return ret;
 }

+int
+rte_eth_tx_queue_data_get(uint16_t port_id, uint16_t queue_id,
+   struct rte_eth_txq_data *txq_data)
+{
+   struct rte_eth_dev *dev;
+
+   RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+   dev = &rte_eth_devices[port_id];
+
+   if (queue_id >= dev->data->nb_tx_queues) {
+   RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u\n",

queue_id);

+   return -EINVAL;
+   }
+
+   if (txq_data == NULL) {
+   RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u Tx queue %u data to NULL\n",
+   port_id, queue_id);
+   return -EINVAL;
+   }
+
+   if (dev->data->tx_queues == NULL ||
+   dev->data->tx_queues[queue_id] == NULL) {
+   RTE_ETHDEV_LOG(ERR,
+  "Tx queue %"PRIu16" of device with port_id=%"
+  PRIu16" has not been setup\n",
+  queue_id, port_id);
+   return -EINVAL;
+   }
+
+   if (*dev->dev_ops->txq_data_get == NULL)
+   return -ENOTSUP;
+
+   dev->dev_ops->txq_data_get(dev, queue_id, txq_data);
+
+   return 0;
+}
+
 static int
 rte_eth_rx_queue_check_split(const struct rte_eth_rxseg_split *rx_seg,
 		uint16_t n_seg, uint32_t *mbp_buf_size,

diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 2e783536c1..daf7f05d62 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1949,6 +1949,23 @@ struct rte_eth_txq_info {
uint8_t queue

[PATCH v1] crypto/qat: fix reallocate OpenSSL version check

2022-10-31 Thread Brian Dooley
Move the ossl_legacy_provider_unload() into the right place for secure
protocol for QAT. Remove unnecessary unload from session destroy.

Fixes: 52d59b92b06d ("crypto/qat: enable OpenSSL legacy provider in session")
Cc: kai...@intel.com
CC: sta...@dpdk.org
Signed-off-by: Brian Dooley 
---
 drivers/crypto/qat/qat_sym_session.c | 32 ++--
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/drivers/crypto/qat/qat_sym_session.c 
b/drivers/crypto/qat/qat_sym_session.c
index 71fa595031..6872531d67 100644
--- a/drivers/crypto/qat/qat_sym_session.c
+++ b/drivers/crypto/qat/qat_sym_session.c
@@ -520,19 +520,19 @@ qat_sym_session_configure(struct rte_cryptodev *dev,
int ret;
 
 #if (OPENSSL_VERSION_NUMBER >= 0x3000L)
-   OSSL_PROVIDER * legacy;
-   OSSL_PROVIDER *deflt;
+   OSSL_PROVIDER * legacy;
+   OSSL_PROVIDER *deflt;
 
-   /* Load Multiple providers into the default (NULL) library context */
-   legacy = OSSL_PROVIDER_load(NULL, "legacy");
-   if (legacy == NULL)
-   return -EINVAL;
+   /* Load Multiple providers into the default (NULL) library context */
+   legacy = OSSL_PROVIDER_load(NULL, "legacy");
+   if (legacy == NULL)
+   return -EINVAL;
 
-   deflt = OSSL_PROVIDER_load(NULL, "default");
-   if (deflt == NULL) {
-   OSSL_PROVIDER_unload(legacy);
-   return -EINVAL;
-   }
+   deflt = OSSL_PROVIDER_load(NULL, "default");
+   if (deflt == NULL) {
+   OSSL_PROVIDER_unload(legacy);
+   return -EINVAL;
+   }
 #endif
ret = qat_sym_session_set_parameters(dev, xform,
CRYPTODEV_GET_SYM_SESS_PRIV(sess),
@@ -545,8 +545,8 @@ qat_sym_session_configure(struct rte_cryptodev *dev,
}
 
 # if (OPENSSL_VERSION_NUMBER >= 0x3000L)
-   OSSL_PROVIDER_unload(legacy);
-   OSSL_PROVIDER_unload(deflt);
+   OSSL_PROVIDER_unload(legacy);
+   OSSL_PROVIDER_unload(deflt);
 # endif
return 0;
 }
@@ -2668,6 +2668,9 @@ qat_security_session_create(void *dev,
return ret;
}
 
+#if (OPENSSL_VERSION_NUMBER >= 0x3000L)
+   ossl_legacy_provider_unload();
+#endif
return 0;
 }
 
@@ -2684,9 +2687,6 @@ qat_security_session_destroy(void *dev __rte_unused,
memset(s, 0, qat_sym_session_get_private_size(dev));
}
 
-# if (OPENSSL_VERSION_NUMBER >= 0x3000L)
-   ossl_legacy_provider_unload();
-# endif
return 0;
 }
 
-- 
2.25.1



RE: [PATCH 03/13] net/idpf: support device initialization

2022-10-31 Thread Ali Alnubani
> -Original Message-
> From: Junfeng Guo 
> Sent: Wednesday, August 3, 2022 2:31 PM
> To: qi.z.zh...@intel.com; jingjing...@intel.com; beilei.x...@intel.com
> Cc: dev@dpdk.org; junfeng@intel.com; Xiaoyun Li
> ; Xiao Wang 
> Subject: [PATCH 03/13] net/idpf: support device initialization
> 
> Support device init and the following dev ops:
>   - dev_configure
>   - dev_start
>   - dev_stop
>   - dev_close
> 
> Signed-off-by: Beilei Xing 
> Signed-off-by: Xiaoyun Li 
> Signed-off-by: Xiao Wang 
> Signed-off-by: Junfeng Guo 
> ---

Hello,

This patch is causing the following build failure in latest main (6a88cbc) with 
clang 3.4.2 in CentOS 7:
drivers/net/idpf/idpf_vchnl.c:141:13: error: comparison of constant 522 with 
expression of type 'enum virtchnl_ops' is always false 
[-Werror,-Wtautological-constant-out-of-range-compare]

Regards,
Ali


[PATCH] net/mlx5: fix the building with flexible array

2022-10-31 Thread Bing Zhao
With some newer GCC/Clang versions, it is not recommended to use a
structure with a trailing flexible array inside another structure.
Accessing this array may be considered a risk of corrupting the
following field, even if it is done intentionally.

The error below was observed:

  drivers/net/mlx5/linux/mlx5_ethdev_os.c: In function 
'mlx5_get_flag_dropless_rq':
  drivers/net/mlx5/linux/mlx5_ethdev_os.c:1679:42: error: invalid use of 
structure with flexible array member [-Werror=pedantic]
  1679 | struct ethtool_sset_info hdr;
   | ^~~

Changing it to the dynamic memory allocation method helps to get
rid of this complaint.
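A minimal sketch of the allocation pattern the patch switches to, using a hypothetical `sset_info_like` struct (not the real ethtool definition) and plain `calloc()` in place of `mlx5_malloc()`:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* A header ending in a flexible array member. Embedding such a struct
 * inside another struct is what newer compilers reject under -Wpedantic;
 * allocating header plus array on the heap avoids the warning. */
struct sset_info_like {
	uint32_t cmd;
	uint32_t reserved;
	uint64_t sset_mask;
	uint32_t data[]; /* flexible array member */
};

/* Allocate the header plus room for n_entries of the trailing array.
 * The caller owns the memory and must free() it. */
static struct sset_info_like *sset_alloc(size_t n_entries)
{
	return calloc(1, sizeof(struct sset_info_like) +
			n_entries * sizeof(uint32_t));
}
```

The driver patch follows the same shape: one allocation of `sizeof(struct ethtool_sset_info) + sizeof(uint32_t)`, with a matching `mlx5_free()` on the exit path.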

Fixes: e848218741ea ("net/mlx5: check delay drop settings in kernel driver")
Cc: sta...@dpdk.org

Signed-off-by: Bing Zhao 
---
 drivers/net/mlx5/linux/mlx5_ethdev_os.c | 22 +-
 1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_ethdev_os.c 
b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
index 661d362dc0..d8bb03b875 100644
--- a/drivers/net/mlx5/linux/mlx5_ethdev_os.c
+++ b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
@@ -1675,10 +1675,7 @@ mlx5_get_mac(struct rte_eth_dev *dev, uint8_t 
(*mac)[RTE_ETHER_ADDR_LEN])
  */
 int mlx5_get_flag_dropless_rq(struct rte_eth_dev *dev)
 {
-   struct {
-   struct ethtool_sset_info hdr;
-   uint32_t buf[1];
-   } sset_info;
+   struct ethtool_sset_info *sset_info = NULL;
struct ethtool_drvinfo drvinfo;
struct ifreq ifr;
struct ethtool_gstrings *strings = NULL;
@@ -1689,15 +1686,21 @@ int mlx5_get_flag_dropless_rq(struct rte_eth_dev *dev)
int32_t i;
int ret;
 
-   sset_info.hdr.cmd = ETHTOOL_GSSET_INFO;
-   sset_info.hdr.reserved = 0;
-   sset_info.hdr.sset_mask = 1ULL << ETH_SS_PRIV_FLAGS;
+   sset_info = mlx5_malloc(0, sizeof(struct ethtool_sset_info) +
+   sizeof(uint32_t), 0, SOCKET_ID_ANY);
+   if (sset_info == NULL) {
+   rte_errno = ENOMEM;
+   return -rte_errno;
+   }
+   sset_info->cmd = ETHTOOL_GSSET_INFO;
+   sset_info->reserved = 0;
+   sset_info->sset_mask = 1ULL << ETH_SS_PRIV_FLAGS;
ifr.ifr_data = (caddr_t)&sset_info;
ret = mlx5_ifreq(dev, SIOCETHTOOL, &ifr);
if (!ret) {
-   const uint32_t *sset_lengths = sset_info.hdr.data;
+   const uint32_t *sset_lengths = sset_info->data;
 
-   len = sset_info.hdr.sset_mask ? sset_lengths[0] : 0;
+   len = sset_info->sset_mask ? sset_lengths[0] : 0;
} else if (ret == -EOPNOTSUPP) {
drvinfo.cmd = ETHTOOL_GDRVINFO;
ifr.ifr_data = (caddr_t)&drvinfo;
@@ -1770,5 +1773,6 @@ int mlx5_get_flag_dropless_rq(struct rte_eth_dev *dev)
ret = !!(flags.data & (1U << i));
 exit:
mlx5_free(strings);
+   mlx5_free(sset_info);
return ret;
 }
-- 
2.21.0



Understanding RX_OFFLOAD_VLAN_EXTEND

2022-10-31 Thread Ivan Malov

Hi!

We have a hard time figuring out what the API contract of
RX_OFFLOAD_VLAN_EXTEND might be. The best educated guess
we can make is that the feature might have something to
do with identifying VLAN packets and extracting TCI
without stripping the tags from incoming packets.

Is this understanding correct?

You see, things aren't helped by the offload bit having
almost no commentary. Such could've shed light on its
meaning. Perhaps this gap in documentation should
be addressed somehow. Any opinions?

Thank you.


Re: [PATCH v6 00/10] dts: ssh connection to a node

2022-10-31 Thread Thomas Monjalon
I was about to merge this series,
and after long thoughts, it deserves a bit more changes.
I would like to work with you for a merge in 22.11-rc3.

13/10/2022 12:35, Juraj Linkeš:
> All the necessary code needed to connect to a node in a topology with
> a bit more, such as basic logging and some extra useful methods.

There is also some developer tooling,
and some documentation.

[...]
> There are configuration files with a README that help with setting up
> the execution/development environment.

I don't want to merge some doc which is not integrated
in the doc/ directory.
It should be in RST format in doc/guides/dts/
I can help with this conversion.

> The code only connects to a node. You'll see logs emitted to console
> saying where DTS connected.
> 
> There's only a bit of documentation, as there's not much to document.
> We'll add some real docs when there's enough functionality to document,
> when the HelloWorld testcases is in (point 4 in our roadmap below). What
> will be documented later is runtime dependencies and how to set up the DTS
> control node environment.
> 
[...]
>  .editorconfig |   2 +-
>  .gitignore|   9 +-

Updating general Python guidelines in these files
should be done separately to get broader agreement.

>  MAINTAINERS   |   5 +

You can update this file in the first patch.

>  devtools/python-checkpatch.sh |  39 ++

Let's postpone the integration of checkpatch.
It should be integrated with the existing checkpatch.

>  devtools/python-format.sh |  54 +++
>  devtools/python-lint.sh   |  26 ++

Let's postpone the integration of these tools.
We need to discuss what is specific to DTS or not.

>  doc/guides/contributing/coding_style.rst  |   4 +-

It is not specific to DTS.

>  dts/.devcontainer/devcontainer.json   |  30 ++
>  dts/Dockerfile|  39 ++

Not sure about Docker tied to some personal choices.

>  dts/README.md | 154 

As said above, it should in RST format in doc/guides/dts/

>  dts/conf.yaml |   6 +
>  dts/framework/__init__.py |   4 +
>  dts/framework/config/__init__.py  | 100 +
>  dts/framework/config/conf_yaml_schema.json|  65 
>  dts/framework/dts.py  |  68 
>  dts/framework/exception.py|  57 +++
>  dts/framework/logger.py   | 114 ++
>  dts/framework/remote_session/__init__.py  |  15 +
>  .../remote_session/remote_session.py  | 100 +
>  dts/framework/remote_session/ssh_session.py   | 185 +
>  dts/framework/settings.py | 119 ++
>  dts/framework/testbed_model/__init__.py   |   8 +
>  dts/framework/testbed_model/node.py   |  63 
>  dts/framework/utils.py|  31 ++
>  dts/main.py   |  24 ++
>  dts/poetry.lock   | 351 ++

A lot of dependencies look not useful in this first series for SSH connection.

>  dts/pyproject.toml|  55 +++
>  27 files changed, 1723 insertions(+), 4 deletions(-)





Re: [PATCH] net/mlx5: fix the building with flexible array

2022-10-31 Thread Thomas Monjalon
31/10/2022 19:24, Bing Zhao:
> With some higher GCC/CLANG version, it is not recommended to use a
> structure with a tailing flexible array inside another structure.
> Accessing this array may be considered as a risk to corrupt the
> following field even if it is by intention.
> 
> The error below was observed:
> 
>   drivers/net/mlx5/linux/mlx5_ethdev_os.c: In function 
> 'mlx5_get_flag_dropless_rq':
>   drivers/net/mlx5/linux/mlx5_ethdev_os.c:1679:42: error: invalid use of 
> structure with flexible array member [-Werror=pedantic]
>   1679 | struct ethtool_sset_info hdr;
>| ^~~
> 
> Changing it to memory dynamic allocation method will help to get
> rid of this complain.
> 
> Fixes: e848218741ea ("net/mlx5: check delay drop settings in kernel driver")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Bing Zhao 

Acked-by: Thomas Monjalon 

Applied, thanks.
For an unknown reason, our GitHub CI started to fail on Sunday
with Fedora 35. Looks like an update was done in Fedora 35.
Now it is fixed!




Re: Understanding RX_OFFLOAD_VLAN_EXTEND

2022-10-31 Thread Ferruh Yigit

On 10/31/2022 6:46 PM, Ivan Malov wrote:

Hi!

We have a hard time figuring out what the API contract of
RX_OFFLOAD_VLAN_EXTEND might be. The best educated guess
we can make is that the feature might have something to
do with identifying VLAN packets and extracting TCI
without stripping the tags from incoming packets.

Is this understanding correct?

You see, things aren't helped by the offload bit having
almost no commentary. Such could've shed light on its
meaning. Perhaps this gap in documentation should
be addressed somehow. Any opinions?

Thank you.



Hi Ivan,

It is legacy from the ixgbe driver; you can find more details in the 
ixgbe (82599) datasheet [1].

RX_OFFLOAD_VLAN_EXTEND is *like* QinQ, but not quite; that is why we 
have a separate 'QINQ_STRIP' offload.

RX_OFFLOAD_VLAN_EXTEND is more of a configuration option; briefly, you 
can ignore it.
In detail, it configures the device in a mode where it knows that 
received packets always have at least one VLAN tag. I assume it is for 
a case where some networking device in the middle inserts/requires 
VLAN tags. Optionally, a packet can have two VLAN tags. But as far as 
I can see, this is not for stripping the VLAN tag or filtering packets 
based on it; it is just to configure the device for this environment.




[1] copy/paste from a public datasheet 
(http://iommu.com/datasheets/ixgbe-datasheets/82599-datasheet-v3-4.pdf), 
not sure if this is up to date version, but I think it is OK for this 
context:


Double VLAN and Single VLAN Support
The 82599 supports a mode where all received and sent packets have at 
least one VLAN tag in addition to the regular tagging that might 
optionally be added. In this document, when a packet carries two VLAN 
headers, the first header is referred to as an outer VLAN and the second 
header as an inner VLAN header (as listed in the table that follows). 
This mode is used for systems where the near end switch adds the outer 
VLAN header containing switching information. This mode is enabled by 
the following configuration:
• This mode is activated by setting the DMATXCTL.GDV and the Extended 
VLAN bit in the CTRL_EXT register.
• The EtherType of the VLAN tag used for the additional VLAN is defined 
in the VET EXT field in the EXVET register.
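A minimal sketch of walking such a double-tagged frame follows. The EtherType values are the common ones (on the 82599 the outer TPID is configurable via EXVET), and `count_vlan_tags` is illustrative only:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define ETYPE_QINQ 0x88a8 /* common outer-tag EtherType (802.1ad) */
#define ETYPE_VLAN 0x8100 /* standard 802.1Q tag */

static uint16_t rd16(const uint8_t *p)
{
	return (uint16_t)((p[0] << 8) | p[1]);
}

/* Count the VLAN tags following the destination/source MAC pair:
 * an outer tag that is always present in this mode, plus an
 * optional inner tag. */
static int count_vlan_tags(const uint8_t *frame)
{
	size_t off = 12; /* skip the two 6-byte MAC addresses */
	int tags = 0;
	uint16_t et = rd16(frame + off);

	if (et == ETYPE_QINQ || et == ETYPE_VLAN) { /* outer tag */
		tags++;
		off += 4; /* TPID + TCI */
		et = rd16(frame + off);
		if (et == ETYPE_VLAN) /* optional inner tag */
			tags++;
	}
	return tags;
}
```

In the double-VLAN mode described above, the hardware assumes the outer tag is always present, so only the inner tag is truly optional from the host's point of view.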


[PATCH v7 0/1] baseband/acc100: changes for 22.11

2022-10-31 Thread Hernan Vargas
v7: Not including the dependency on the SDK introduced in v3 for now, due to
lack of consensus so far.
  Still detecting the known corner case and flagging it.
v6: Fix commit message typo.
v5: Fix compilation error and squash documentation changes.
v4: Rebased code to use the latest ACC common API and implemented review 
comment changes.
v3: Code refactor based on comments and grouping fixes at beginning of series.
v2: Rebased code to use ACC common API.
v1: Upstreaming ACC100 changes for 22.11.
This patch series is dependant on series:
https://patches.dpdk.org/project/dpdk/list/?series=25191

Hernan Vargas (1):
  baseband/acc100: add detection for deRM corner cases

 drivers/baseband/acc/acc_common.h |  8 
 drivers/baseband/acc/rte_acc100_pmd.c | 55 +--
 2 files changed, 60 insertions(+), 3 deletions(-)

-- 
2.37.1



[PATCH v7 1/1] baseband/acc100: add detection for deRM corner cases

2022-10-31 Thread Hernan Vargas
Add function to detect if de-ratematch pre-processing is recommended for
SW corner cases.
Some specific 5GUL FEC corner cases may cause unintended back pressure
and in some cases a potential stability issue on the ACC100.
The PMD can detect such code block configuration and issue an info
message to the user.

Signed-off-by: Hernan Vargas 
---
 drivers/baseband/acc/acc_common.h |  8 
 drivers/baseband/acc/rte_acc100_pmd.c | 55 +--
 2 files changed, 60 insertions(+), 3 deletions(-)

diff --git a/drivers/baseband/acc/acc_common.h 
b/drivers/baseband/acc/acc_common.h
index eae7eab4e9..6213b0b61e 100644
--- a/drivers/baseband/acc/acc_common.h
+++ b/drivers/baseband/acc/acc_common.h
@@ -123,6 +123,14 @@
 #define ACC_HARQ_ALIGN_64B  64
 #define ACC_MAX_ZC  384
 
+/* De-ratematch code rate limitation for recommended operation */
+#define ACC_LIM_03 2  /* 0.03 */
+#define ACC_LIM_09 6  /* 0.09 */
+#define ACC_LIM_14 9  /* 0.14 */
+#define ACC_LIM_21 14 /* 0.21 */
+#define ACC_LIM_31 20 /* 0.31 */
+#define ACC_MAX_E (128 * 1024 - 2)
+
 /* Helper macro for logging */
 #define rte_acc_log(level, fmt, ...) \
rte_log(RTE_LOG_ ## level, RTE_LOG_NOTICE, fmt "\n", \
diff --git a/drivers/baseband/acc/rte_acc100_pmd.c 
b/drivers/baseband/acc/rte_acc100_pmd.c
index 23bc5d25bb..47609f95b7 100644
--- a/drivers/baseband/acc/rte_acc100_pmd.c
+++ b/drivers/baseband/acc/rte_acc100_pmd.c
@@ -756,6 +756,14 @@ acc100_queue_setup(struct rte_bbdev *dev, uint16_t 
queue_id,
ret = -ENOMEM;
goto free_lb_out;
}
+   q->derm_buffer = rte_zmalloc_socket(dev->device->driver->name,
+   RTE_BBDEV_TURBO_MAX_CB_SIZE * 10,
+   RTE_CACHE_LINE_SIZE, conf->socket);
+   if (q->derm_buffer == NULL) {
+   rte_bbdev_log(ERR, "Failed to allocate derm_buffer memory");
+   ret = -ENOMEM;
+   goto free_companion_ring_addr;
+   }
 
/*
 * Software queue ring wraps synchronously with the HW when it reaches
@@ -776,7 +784,7 @@ acc100_queue_setup(struct rte_bbdev *dev, uint16_t queue_id,
q_idx = acc100_find_free_queue_idx(dev, conf);
if (q_idx == -1) {
ret = -EINVAL;
-   goto free_companion_ring_addr;
+   goto free_derm_buffer;
}
 
q->qgrp_id = (q_idx >> ACC100_GRP_ID_SHIFT) & 0xF;
@@ -804,6 +812,9 @@ acc100_queue_setup(struct rte_bbdev *dev, uint16_t queue_id,
dev->data->queues[queue_id].queue_private = q;
return 0;
 
+free_derm_buffer:
+   rte_free(q->derm_buffer);
+   q->derm_buffer = NULL;
 free_companion_ring_addr:
rte_free(q->companion_ring_addr);
q->companion_ring_addr = NULL;
@@ -890,6 +901,7 @@ acc100_queue_release(struct rte_bbdev *dev, uint16_t q_id)
/* Mark the Queue as un-assigned */
d->q_assigned_bit_map[q->qgrp_id] &= (0x -
(uint64_t) (1 << q->aq_id));
+   rte_free(q->derm_buffer);
rte_free(q->companion_ring_addr);
rte_free(q->lb_in);
rte_free(q->lb_out);
@@ -3111,10 +3123,41 @@ harq_loopback(struct acc_queue *q, struct 
rte_bbdev_dec_op *op,
return 1;
 }
 
+/* Assess whether a work around is recommended for the deRM corner cases */
+static inline bool
+derm_workaround_recommended(struct rte_bbdev_op_ldpc_dec *ldpc_dec, struct 
acc_queue *q)
+{
+   if (!is_acc100(q))
+   return false;
+   int32_t e = ldpc_dec->cb_params.e;
+   int q_m = ldpc_dec->q_m;
+   int z_c = ldpc_dec->z_c;
+   int K = (ldpc_dec->basegraph == 1 ? ACC_K_ZC_1 : ACC_K_ZC_2) * z_c;
+   bool recommended = false;
+
+   if (ldpc_dec->basegraph == 1) {
+   if ((q_m == 4) && (z_c >= 320) && (e * ACC_LIM_31 > K * 64))
+   recommended = true;
+   else if ((e * ACC_LIM_21 > K * 64))
+   recommended = true;
+   } else {
+   if (q_m <= 2) {
+   if ((z_c >= 208) && (e * ACC_LIM_09 > K * 64))
+   recommended = true;
+   else if ((z_c < 208) && (e * ACC_LIM_03 > K * 64))
+   recommended = true;
+   } else if (e * ACC_LIM_14 > K * 64)
+   recommended = true;
+   }
+
+   return recommended;
+}
+
 /** Enqueue one decode operations for ACC100 device in CB mode */
 static inline int
 enqueue_ldpc_dec_one_op_cb(struct acc_queue *q, struct rte_bbdev_dec_op *op,
-   uint16_t total_enqueued_cbs, bool same_op)
+   uint16_t total_enqueued_cbs, bool same_op,
+   struct rte_bbdev_queue_data *q_data)
 {
int ret;
if (unlikely(check_bit(op->ldpc_dec.op_flags,
@@ -3168,6 +3211,12 @@ enqueue_ldpc_dec_one_op_cb(struct acc_queue *q, struct rte_bbdev_de

RE: [PATCH 10/14] baseband/ark: introduce ark baseband driver

2022-10-31 Thread Chautru, Nicolas
Hi John, 

> From: John Miller  
> Sent: Monday, October 31, 2022 10:34 AM
> To: Chautru, Nicolas 
> Cc: dev@dpdk.org; ed.cz...@atomicrules.com; Shepard Siegel 
> ; Maxime Coquelin 
> Subject: Re: [PATCH 10/14] baseband/ark: introduce ark baseband driver
> 
> Hi Nicolas,
> 
> 
> 
> On Oct 26, 2022, at 7:11 PM, Chautru, Nicolas 
>  wrote:
> 
> Hi John,
> 
> General comment. I was a bit lost in the split in the commits 10 to 14. 
> First I would have expected it to build from the first commit. I don't 
> believe it makes sense to add 13 and 14 after the fact. 
> 
> I first introduced this patch set in 22.07 but we had to defer due other 
> company priorities.  I had 10 to 14 in the same commit but you asked me to 
> split it into smaller commits.  Perhaps I misunderstood what you were asking. 
>  I will put 10 thru 14 back into the same commit.
> 

This is about splitting these logically, not artificially; see examples from
other PMD contributions with incremental commits that still do not split
away doc/build.


> 
> Between 10 and 11 there was a bit of confusion for me as well.
> For instance, ark_bbdev_info_get is first referred to in 10, but the
> implementation is in 11.
> The LDPC decoding functions are also split between 10 and 11. As a
> consequence, commit 10 is arguably hard to review in one chunk.
> 
> This will be addressed when I put them all in the same commit.
> 

That is not the intent see above. 

> 
> 
> I would suggest to consider a split of commits that may be more logical and 
> incremental. For instance what was done recently for the acc driver just as 
> an imperfect example. 
> This way it would also provide more digestible chunks of code to be reviewed 
> incrementally. 
> 
> It would be nice to have some doc with the first commit matching the code.
> Notably, I had the impression the implementation doesn't fully match your
> cover letter (I will add some more comments on this for 11), but it is
> unclear to me whether this is intentional or not.
> 
> 
> We will address the doc to make sure it is accurate.
> 
> 
> We will address your other comments in a separate response.
> 
> Thank you
> -John
> 
> 
> 
> Thanks
> Nic
> 
> 
> 
> -Original Message-
> From: John Miller 
> Sent: Wednesday, October 26, 2022 12:46 PM
> To: Chautru, Nicolas 
> Cc: dev@dpdk.org; ed.cz...@atomicrules.com; Shepard Siegel
> ; John Miller
> 
> Subject: [PATCH 10/14] baseband/ark: introduce ark baseband driver
> 
> This patch introduces the Arkville baseband device driver.
> 
> Signed-off-by: John Miller 
> ---
> drivers/baseband/ark/ark_bbdev.c | 1127 ++
> drivers/baseband/ark/ark_bbext.h |  163 +
> 2 files changed, 1290 insertions(+)
> create mode 100644 drivers/baseband/ark/ark_bbdev.c
> create mode 100644 drivers/baseband/ark/ark_bbext.h
> 
> diff --git a/drivers/baseband/ark/ark_bbdev.c
> b/drivers/baseband/ark/ark_bbdev.c
> new file mode 100644
> index 00..8736d170d1
> --- /dev/null
> +++ b/drivers/baseband/ark/ark_bbdev.c
> @@ -0,0 +1,1127 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2016-2021 Atomic Rules LLC
> + */
> +
> +#include 
> +#include 
> +#include 
> +
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +
> +#include "ark_common.h"
> +#include "ark_bbdev_common.h"
> +#include "ark_bbdev_custom.h"
> +#include "ark_ddm.h"
> +#include "ark_mpu.h"
> +#include "ark_rqp.h"
> +#include "ark_udm.h"
> +#include "ark_bbext.h"
> +
> +#define DRIVER_NAME baseband_ark
> +
> +#define ARK_SYSCTRL_BASE  0x0
> +#define ARK_PKTGEN_BASE   0x1
> +#define ARK_MPU_RX_BASE   0x2
> +#define ARK_UDM_BASE  0x3
> +#define ARK_MPU_TX_BASE   0x4
> +#define ARK_DDM_BASE  0x6
> +#define ARK_PKTDIR_BASE   0xa
> +#define ARK_PKTCHKR_BASE  0x9
> +#define ARK_RCPACING_BASE 0xb
> +#define ARK_MPU_QOFFSET   0x00100
> +
> +#define BB_ARK_TX_Q_FACTOR 4
> +
> +#define ARK_RX_META_SIZE 32
> +#define ARK_RX_META_OFFSET (RTE_PKTMBUF_HEADROOM - ARK_RX_META_SIZE)
> +#define ARK_RX_MAX_NOCHAIN (RTE_MBUF_DEFAULT_DATAROOM)
> +
> +static_assert(sizeof(struct ark_rx_meta) == ARK_RX_META_SIZE,
> +	"Unexpected struct size ark_rx_meta");
> +static_assert(sizeof(union ark_tx_meta) == 8,
> +	"Unexpected struct size ark_tx_meta");
> +
> +static struct rte_pci_id pci_id_ark[] = {
> + {RTE_PCI_DEVICE(AR_VENDOR_ID, 0x1015)},
> + {RTE_PCI_DEVICE(AR_VENDOR_ID, 0x1016)},
> + {.device_id = 0},
> +};
> +
> +static const struct ark_dev_caps
> +ark_device_caps[] = {
> +  SET_DEV_CAPS(0x1015, true, false),
> +  SET_DEV_CAPS(0x1016, true, false),
> +  {.device_id = 0,}
> +};
> +
> +
> +/* F

RE: [PATCH v12 04/16] baseband/acc: introduce PMD for ACC200

2022-10-31 Thread Chautru, Nicolas
Hi Thomas, 

> -Original Message-
> From: Thomas Monjalon 
> 31/10/2022 16:43, Chautru, Nicolas:
> > From: Thomas Monjalon 
> > > 12/10/2022 19:59, Nicolas Chautru:
> > > > +Bind PF UIO driver(s)
> > > > +~
> > > > +
> > > > +Install the DPDK igb_uio driver, bind it with the PF PCI device
> > > > +ID and use ``lspci`` to confirm the PF device is under use by
> > > > +``igb_uio`` DPDK
> > > UIO driver.
> > >
> > > igb_uio is not recommended.
> > > Please focus on VFIO first.
> > >
> > > > +The igb_uio driver may be bound to the PF PCI device using one of
> > > > +two methods for ACC200:
> > > > +
> > > > +
> > > > +1. PCI functions (physical or virtual, depending on the use case)
> > > > +can be bound to the UIO driver by repeating this command for every
> function.
> > > > +
> > > > +.. code-block:: console
> > > > +
> > > > +  cd 
> > > > +  insmod ./build/kmod/igb_uio.ko
> > > > +  echo "8086 57c0" > /sys/bus/pci/drivers/igb_uio/new_id
> > > > +  lspci -vd8086:57c0
> > > > +
> > > > +
> > > > +2. Another way to bind PF with DPDK UIO driver is by using the
> > > > +``dpdk-devbind.py`` tool
> > > > +
> > > > +.. code-block:: console
> > > > +
> > > > +  cd 
> > > > +  ./usertools/dpdk-devbind.py -b igb_uio :f7:00.0
> > > > +
> > > > +where the PCI device ID (example: :f7:00.0) is obtained using
> > > > +lspci -vd8086:57c0
> > >
> > > This binding is not specific to the driver.
> > > It would be better to refer to the Linux guide instead of
> > > duplicating it again and again.
> > >
> > > > +In a similar way the PF may be bound with vfio-pci as any PCIe device.
> > >
> > > You could mention igb_uio here.
> > > Is there any advantage in using igb_uio?
> > >
> >
> > igb_uio is arguably easier to use, so new users or specific ecosystems
> > tend to start with it. It is typically the entry point (no IOMMU, no FLR
> > under the bonnet, no VFIO token...), hence it is good to have a bit of
> > handholding with a couple of lines capturing how to easily run a few
> > tests. I don't believe these few lines are too redundant compared to the
> > help they bring in sparing the user from second-guessing their steps.
> > More generally, there are a number of module/driver combinations that are
> > supported based on different deployments. We don't document these in too
> > much detail since that is not too ACC-specific, and there is more
> > documentation in the pf_bb_config repo for using the PMD from the VF.
> >
> > Basically, Thomas, let us know more explicitly what you are suggesting as
> > a documentation update. Do you just want more emphasis on the vfio-pci
> > flow (which is fair; some of it is documented in pf_bb_config, including
> > the VFIO token passing, but we can reproduce it here as well) or
> > something else?
> 
> There are 2 things to change:
> 1/ igb_uio is going to be deprecated, so we must emphasize on VFIO

Is there a date for deprecation? Do you mean to EOL the dpdk-kmods repository
itself; or something more specific for DPDK code, like removing
RTE_PCI_KDRV_IGB_UIO; or, lastly, just to take it out of the documentation?
It tends to be historical, but UIO has value, notably for ease of use.

> 2/ for doc maintenance, it is better to have common steps described in one
> place.
> If needed, you can change the common doc and refer to it.

Do you mean to remove these sections and just add a pointer to
https://doc.dpdk.org/guides/linux_gsg/linux_drivers.html instead in all these
bbdev PMDs?
Please kindly confirm. I see specific steps for binding in many other PMD docs
in DPDK; a bit redundant, but it provides simple steps specific to a PMD in
one place. I don't mind either way.

Thanks
Nic
 



Re: Understanding RX_OFFLOAD_VLAN_EXTEND

2022-10-31 Thread Ivan Malov

Thank you, Ferruh. You have been most helpful.

On Mon, 31 Oct 2022, Ferruh Yigit wrote:


On 10/31/2022 6:46 PM, Ivan Malov wrote:

Hi!

We have a hard time figuring out what the API contract of
RX_OFFLOAD_VLAN_EXTEND might be. The best educated guess
we can make is that the feature might have something to
do with identifying VLAN packets and extracting TCI
without stripping the tags from incoming packets.

Is this understanding correct?

You see, things aren't helped by the offload bit having
almost no commentary. Such could've shed light on its
meaning. Perhaps this gap in documentation should
be addressed somehow. Any opinions?

Thank you.



Hi Ivan,

It is legacy from ixgbe driver, you can find more details on the ixgbe 
(82599) datasheet [1].


RX_OFFLOAD_VLAN_EXTEND is *like* QinQ, but it is not QinQ; that is why we
have the separate 'QINQ_STRIP' offload.


And RX_OFFLOAD_VLAN_EXTEND is more of a configuration option; briefly, you
can ignore it.
In detail, it configures the device in a mode where it knows that received
packets always have at least one VLAN tag. I assume it is for a case where
some networking device in the middle inserts/requires VLAN tags; optionally,
a packet can have two VLAN tags. But as far as I can see, this is not for
stripping the VLAN tag or filtering packets based on it; it is just to
configure the device for this environment.




[1] copy/paste from a public datasheet
(http://iommu.com/datasheets/ixgbe-datasheets/82599-datasheet-v3-4.pdf); not
sure if this is the up-to-date version, but I think it is OK for this context:


Double VLAN and Single VLAN Support
The 82599 supports a mode where all received and sent packets have at least 
one VLAN tag in addition to the regular tagging that might optionally be 
added. In this document, when a packet carries two VLAN headers, the first 
header is referred to as an outer VLAN and the second header as an inner VLAN 
header (as listed in the table that follows). This mode is used for systems 
where the near end switch adds the outer VLAN header containing switching 
information. This mode is enabled by the following configuration:
• This mode is activated by setting the DMATXCTL.GDV and the Extended VLAN 
bit in the CTRL_EXT register.
• The EtherType of the VLAN tag used for the additional VLAN is defined in 
the VET EXT field in the EXVET register.


RE: [PATCH] net/iavf: fix Tx descriptors for IPSec

2022-10-31 Thread Zhang, Qi Z



> -Original Message-
> From: Zeng, ZhichaoX 
> Sent: Friday, October 28, 2022 5:43 PM
> To: dev@dpdk.org
> Cc: Yang, Qiming ; Zhou, YidingX
> ; Zeng, ZhichaoX ;
> Nicolau, Radu ; Xu, Ke1 ; Wu,
> Jingjing ; Xing, Beilei ; Zhang,
> Qi Z ; Peng Zhang 
> Subject: [PATCH] net/iavf: fix Tx descriptors for IPSec
> 
> This patch fixes the building of context and data descriptor on the scalar 
> path
> for IPSec.
> 
> Fixes: f7c8c36fdeb7 ("net/iavf: enable inner and outer Tx checksum offload")
> 
> Signed-off-by: Radu Nicolau 
> Signed-off-by: Zhichao Zeng 
> Tested-by: Ke Xu 

Applied to dpdk-next-net-intel.

Thanks
Qi



Re: Flow Bifurcation of splitting the traffic between kernel space and user space (DPDK)

2022-10-31 Thread Stephen Hemminger
On Sat, 29 Oct 2022 02:39:01 +0530
Ramakrishnan G  wrote:

> From: Ramakrishnan G 
> To: aaron.f.br...@intel.com, dev@dpdk.org, sarava...@gmail.com
> Subject: Flow Bifurcation of splitting the traffic between kernel space and  
> user space (DPDK)
> Date: Sat, 29 Oct 2022 02:39:01 +0530
> 
> Dear Aaron and DPDK Dev Team,
> 
> Thanks for the Article talks about the Traffic Flow bifurcation
> between kernel space and user space (DPDK) (3. Flow Bifurcation How-to
> Guide — Data Plane Development Kit 16.07.2 documentation (dpdk.org)
> )

That DPDK release is over 6 years old. That feature is no longer supported on
Intel NICs. You are better off using AF_XDP.

> 
> We are trying to test this functionality for sending only the SSH (port 22)
> traffic to kernel and all the other traffic to be transferred to the user
> space (DPDK) by assigning same IP for both the virtual interface (one
> virtual interface is owned by the DPDK and another virtual interface is
> owned by the DPDK )
> 
> Using the igb driver with max_vfs setting, we were able to create the
> virtual link and map it to user space (DPDK) and another link into kernel
> space. we assigned different IP addresses and we were able to reach from
> other host.
> 
> But when we are trying to configure the flow-type for port 22
> 
> Ubuntu# ethtool -K eth9 ntuple on
> Ubuntu## ethtool -N eth9 flow-type ip4 dst-port 22 action 0
> rmgr: Cannot insert RX class rule: Invalid argument
> Ubuntu## ethtool -N eth9 flow-type ip4 dst-port 22 action 1
> rmgr: Cannot insert RX class rule: Invalid argument
> Ubuntu## ethtool -N eth9 flow-type ip4 dst-port 22 action 2
> rmgr: Cannot insert RX class rule: Invalid argument
> 
> We tried to apply the patch that was given in the following link,
> (
> https://patchwork.ozlabs.org/project/intel-wired-lan/patch/1451456399-13353-1-git-send-email-gangfeng.hu...@ni.com/#1236040
> )
> 
> But we couldn't patch any of the latest igb driver and we tried to patch
> with the 2016 igb driver.
> 
> please help us in sharing the info where can we apply the patch for igb
> driver in Ubuntu.

The igb NIC does not have a flow director.
The bifurcation for Intel NICs is based on the kernel flow director.


release candidate 22.11-rc2

2022-10-31 Thread Thomas Monjalon
A new DPDK release candidate is ready for testing:
https://git.dpdk.org/dpdk/tag/?id=v22.11-rc2

There are 422 new patches in this snapshot.

Release notes:
https://doc.dpdk.org/guides/rel_notes/release_22_11.html

There were a lot of updates in drivers, including 3 new drivers:
- GVE (Google Virtual Ethernet)
- IDPF (Intel DataPlane Function or Infrastructure DataPath Function)
- UADK (User Space Accelerator Development Kit) supporting HiSilicon 
crypto
The driver features should be frozen now.

Please test and report issues on bugs.dpdk.org.

Thank you everyone




Re: [PATCH] net/bonding: set initial value of descriptor count alignment

2022-10-31 Thread humin (Q)

Acked-by: Min Hu (Connor) 

在 2022/10/31 21:17, Ivan Malov 写道:

The driver had once been broken by patch [1] looking to have
a non-zero "nb_max" value in a use case not involving adding
any back-end ports. That was addressed afterwards ([2]). But,
as per report [3], similar test cases exist which attempt to
setup Rx queues on a void bond before attaching any back-end
ports. Rx queue setup, in turn, involves device info get API
invocation, and one of the checks on received data causes an
exception (division by zero). The "nb_align" value is indeed
zero at that time, but, as explained in [2], such test cases
are totally incorrect since a bond device must have at least
one back-end port plugged before any ethdev APIs can be used.

Once again, to avoid any problems with fixing the test cases,
this patch adjusts the bond PMD itself to work around the bug.

[1] commit 5be3b40fea60 ("net/bonding: fix values of descriptor limits")
[2] commit d03c0e83cc00 ("net/bonding: fix descriptor limit reporting")
[3] https://bugs.dpdk.org/show_bug.cgi?id=1118

Fixes: d03c0e83cc00 ("net/bonding: fix descriptor limit reporting")
Cc: sta...@dpdk.org

Signed-off-by: Ivan Malov 
Reviewed-by: Andrew Rybchenko 
---
  drivers/net/bonding/rte_eth_bond_pmd.c | 2 ++
  1 file changed, 2 insertions(+)

diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index dc74852137..145cb7099f 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -3426,6 +3426,8 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
 */
internals->rx_desc_lim.nb_max = UINT16_MAX;
internals->tx_desc_lim.nb_max = UINT16_MAX;
+   internals->rx_desc_lim.nb_align = 1;
+   internals->tx_desc_lim.nb_align = 1;
  
  	memset(internals->active_slaves, 0, sizeof(internals->active_slaves));

memset(internals->slaves, 0, sizeof(internals->slaves));


[PATCH] net/idpf: fix compiling error in CentOS 7

2022-10-31 Thread beilei . xing
From: Beilei Xing 

There's a build error with clang 3.4.2 in CentOS 7:

drivers/net/idpf/idpf_vchnl.c:141:13: error: comparison of constant
522 with expression of type 'enum virtchnl_ops' is always false
[-Werror,-Wtautological-constant-out-of-range-compare]

This patch fixes the compilation error.

Fixes: 549343c25db8 ("net/idpf: support device initialization")

Signed-off-by: Beilei Xing 
---
 drivers/net/idpf/idpf_vchnl.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
index 00ac5b2a6b..ac6486d4ef 100644
--- a/drivers/net/idpf/idpf_vchnl.c
+++ b/drivers/net/idpf/idpf_vchnl.c
@@ -55,7 +55,7 @@ idpf_vc_clean(struct idpf_adapter *adapter)
 }
 
 static int
-idpf_send_vc_msg(struct idpf_adapter *adapter, enum virtchnl_ops op,
+idpf_send_vc_msg(struct idpf_adapter *adapter, uint32_t op,
 uint16_t msg_size, uint8_t *msg)
 {
struct idpf_ctlq_msg *ctlq_msg;
@@ -118,7 +118,7 @@ idpf_read_msg_from_cp(struct idpf_adapter *adapter, uint16_t buf_len,
struct idpf_ctlq_msg ctlq_msg;
struct idpf_dma_mem *dma_mem = NULL;
enum idpf_vc_result result = IDPF_MSG_NON;
-   enum virtchnl_ops opcode;
+   uint32_t opcode;
uint16_t pending = 1;
int ret;
 
@@ -132,7 +132,7 @@ idpf_read_msg_from_cp(struct idpf_adapter *adapter, uint16_t buf_len,
 
rte_memcpy(buf, ctlq_msg.ctx.indirect.payload->va, buf_len);
 
-   opcode = (enum virtchnl_ops)rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
+   opcode = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
adapter->cmd_retval =
(enum virtchnl_status_code)rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
 
-- 
2.26.2



Re: [PATCH] net/bonding: fix slave device Rx/Tx offload configuration

2022-10-31 Thread humin (Q)

Acked-by: Min Hu (Connor) 

在 2022/10/28 10:36, Huisong Li 写道:

Normally, the Rx/Tx offload capability of the bonding interface is
the intersection of the capabilities of all slave devices, and the
Rx/Tx offload configuration of a slave device comes from the bonding
interface. But currently there is a risk that a slave device retains
previous offload configurations that are not within the offload
configuration of the bond interface.

Fixes: 57b156540f51 ("net/bonding: fix offloading configuration")
Cc: sta...@dpdk.org

Signed-off-by: Huisong Li 
---
  drivers/net/bonding/rte_eth_bond_pmd.c | 17 -
  1 file changed, 4 insertions(+), 13 deletions(-)

diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index dc74852137..ca87490065 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1741,20 +1741,11 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
slave_eth_dev->data->dev_conf.link_speeds =
bonded_eth_dev->data->dev_conf.link_speeds;
  
-	slave_eth_dev->data->dev_conf.txmode.offloads |=

-   bonded_eth_dev->data->dev_conf.txmode.offloads;
-
-   slave_eth_dev->data->dev_conf.txmode.offloads &=
-   (bonded_eth_dev->data->dev_conf.txmode.offloads |
-   ~internals->tx_offload_capa);
-
-   slave_eth_dev->data->dev_conf.rxmode.offloads |=
-   bonded_eth_dev->data->dev_conf.rxmode.offloads;
-
-   slave_eth_dev->data->dev_conf.rxmode.offloads &=
-   (bonded_eth_dev->data->dev_conf.rxmode.offloads |
-   ~internals->rx_offload_capa);
+   slave_eth_dev->data->dev_conf.txmode.offloads =
+   bonded_eth_dev->data->dev_conf.txmode.offloads;
  
+	slave_eth_dev->data->dev_conf.rxmode.offloads =

+   bonded_eth_dev->data->dev_conf.rxmode.offloads;
  
  	nb_rx_queues = bonded_eth_dev->data->nb_rx_queues;

nb_tx_queues = bonded_eth_dev->data->nb_tx_queues;



RE: [PATCH v1] net/iavf: fix refine protocol header error

2022-10-31 Thread Zhang, Qi Z



> -Original Message-
> From: Steve Yang 
> Sent: Monday, October 31, 2022 2:43 PM
> To: dev@dpdk.org
> Cc: Wu, Jingjing ; Xing, Beilei
> ; Yang, SteveX 
> Subject: [PATCH v1] net/iavf: fix refine protocol header error
> 
> Protocol header count should be changed when tunnel level is larger than 1.
> 
> Fixes: 13a7dcddd8ee ("net/iavf: fix taninted scalar")
> 
> Signed-off-by: Steve Yang 

Acked-by: Qi Zhang 

Applied to dpdk-next-net-intel.

Thanks
Qi



[PATCH v2] net/idpf: fix compiling error in CentOS 7

2022-10-31 Thread beilei . xing
From: Beilei Xing 

There's a build error with clang 3.4.2 in CentOS 7:

drivers/net/idpf/idpf_vchnl.c:141:13: error: comparison of constant
522 with expression of type 'enum virtchnl_ops' is always false
[-Werror,-Wtautological-constant-out-of-range-compare]

This patch fixes the compilation error.

Fixes: 549343c25db8 ("net/idpf: support device initialization")

Signed-off-by: Beilei Xing 
---

v2 change: modify enum virtchnl_ops with uint32_t in header file.

 drivers/net/idpf/idpf_ethdev.h | 10 +-
 drivers/net/idpf/idpf_vchnl.c  |  6 +++---
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index ccdf4abe40..1efdfe4ce0 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -137,7 +137,7 @@ struct idpf_adapter {
struct virtchnl2_version_info virtchnl_version;
struct virtchnl2_get_capabilities *caps;
 
-   volatile enum virtchnl_ops pend_cmd; /* pending command not finished */
+   volatile uint32_t pend_cmd; /* pending command not finished */
uint32_t cmd_retval; /* return value of the cmd response from ipf */
uint8_t *mbx_resp; /* buffer to store the mailbox response from ipf */
 
@@ -195,7 +195,7 @@ notify_cmd(struct idpf_adapter *adapter, int msg_ret)
adapter->cmd_retval = msg_ret;
	/* Return value may be checked in anither thread, need to ensure the coherence. */
rte_wmb();
-   adapter->pend_cmd = VIRTCHNL_OP_UNKNOWN;
+   adapter->pend_cmd = VIRTCHNL2_OP_UNKNOWN;
 }
 
 /* clear current command. Only call in case execute
@@ -206,15 +206,15 @@ clear_cmd(struct idpf_adapter *adapter)
 {
	/* Return value may be checked in anither thread, need to ensure the coherence. */
rte_wmb();
-   adapter->pend_cmd = VIRTCHNL_OP_UNKNOWN;
+   adapter->pend_cmd = VIRTCHNL2_OP_UNKNOWN;
adapter->cmd_retval = VIRTCHNL_STATUS_SUCCESS;
 }
 
 /* Check there is pending cmd in execution. If none, set new command. */
 static inline bool
-atomic_set_cmd(struct idpf_adapter *adapter, enum virtchnl_ops ops)
+atomic_set_cmd(struct idpf_adapter *adapter, uint32_t ops)
 {
-   enum virtchnl_ops op_unk = VIRTCHNL_OP_UNKNOWN;
+   uint32_t op_unk = VIRTCHNL2_OP_UNKNOWN;
bool ret = __atomic_compare_exchange(&adapter->pend_cmd, &op_unk, &ops,
0, __ATOMIC_ACQUIRE, 
__ATOMIC_ACQUIRE);
 
diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
index 00ac5b2a6b..ac6486d4ef 100644
--- a/drivers/net/idpf/idpf_vchnl.c
+++ b/drivers/net/idpf/idpf_vchnl.c
@@ -55,7 +55,7 @@ idpf_vc_clean(struct idpf_adapter *adapter)
 }
 
 static int
-idpf_send_vc_msg(struct idpf_adapter *adapter, enum virtchnl_ops op,
+idpf_send_vc_msg(struct idpf_adapter *adapter, uint32_t op,
 uint16_t msg_size, uint8_t *msg)
 {
struct idpf_ctlq_msg *ctlq_msg;
@@ -118,7 +118,7 @@ idpf_read_msg_from_cp(struct idpf_adapter *adapter, uint16_t buf_len,
struct idpf_ctlq_msg ctlq_msg;
struct idpf_dma_mem *dma_mem = NULL;
enum idpf_vc_result result = IDPF_MSG_NON;
-   enum virtchnl_ops opcode;
+   uint32_t opcode;
uint16_t pending = 1;
int ret;
 
@@ -132,7 +132,7 @@ idpf_read_msg_from_cp(struct idpf_adapter *adapter, uint16_t buf_len,
 
rte_memcpy(buf, ctlq_msg.ctx.indirect.payload->va, buf_len);
 
-   opcode = (enum virtchnl_ops)rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
+   opcode = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
adapter->cmd_retval =
(enum virtchnl_status_code)rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
 
-- 
2.26.2



[PATCH] doc: support flow matching on representor ID

2022-10-31 Thread Sean Zhang
Add note for support of matching on port representor ID.

Signed-off-by: Sean Zhang 
---
 doc/guides/nics/mlx5.rst   | 1 +
 doc/guides/rel_notes/release_22_11.rst | 1 +
 2 files changed, 2 insertions(+)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index d5f9375a4e..b121ef059c 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -106,6 +106,7 @@ Features
 - Sub-Function representors.
 - Sub-Function.
 - Matching on represented port.
+- Matching on port representor ID.
 
 
 Limitations
diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index 70afb57f2b..42c6b3d0f7 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -185,6 +185,7 @@ New Features
 - Support of counter.
 - Support of meter.
 - Support of modify fields.
+  * Added support of matching on port representor ID.
 
 * **Updated NXP dpaa2 driver.**
 
-- 
2.34.1



Re: [PATCH V3] app/testpmd: update bond port configurations when add slave

2022-10-31 Thread humin (Q)

Reviewed-by: Min Hu (Connor) 

在 2022/10/29 11:50, Huisong Li 写道:

Some capabilities (like rx_offload_capa and tx_offload_capa) of a bonding
device in dev_info are zero when no slave is added, and its capabilities
are updated when a new slave device is added.

Capabilities that update dynamically may introduce some problems if not
handled properly. For example, reconfig() is called to initialize the
bonding port configuration when a bonding device is created. The global
tx_mode is assigned to dev_conf.txmode. DEV_TX_OFFLOAD_MBUF_FAST_FREE,
which is the default value of the global tx_mode.offloads in testpmd, is
removed from the bonding device configuration because of the zero
rx_offload_capa. As a result, this offload isn't set on the bonding device.

Generally, the port configuration of a bonding device must be within the
intersection of the capabilities of all slave devices. If the original port
configuration is used, the capabilities removed by adding a new slave may
cause a failure when re-initializing the bonding device.

So the port configuration of the bonding device also needs to be updated
because of the added and removed capabilities. In addition, this also helps
to ensure consistency between testpmd and the bonding device.

Signed-off-by: Huisong Li 
---
  - v3: fix code comment
  - v2: fix a spelling error in commit log
---
  app/test-pmd/testpmd.c| 40 +++
  app/test-pmd/testpmd.h|  3 +-
  drivers/net/bonding/bonding_testpmd.c |  2 ++
  3 files changed, 44 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 97adafacd0..7c9de07367 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2805,6 +2805,41 @@ fill_xstats_display_info(void)
fill_xstats_display_info_for_port(pi);
  }
  
+/*
+ * Some capabilities (like rx_offload_capa and tx_offload_capa) of a bonding
+ * device in dev_info are zero when no slave is added, and its capabilities
+ * are updated when a new slave device is added. So adding a slave device
+ * needs to update the port configuration of the bonding device.
+ */
+static void
+update_bonding_port_dev_conf(portid_t bond_pid)
+{
+#ifdef RTE_NET_BOND
+   struct rte_port *port = &ports[bond_pid];
+   uint16_t i;
+   int ret;
+
+   ret = eth_dev_info_get_print_err(bond_pid, &port->dev_info);
+   if (ret != 0) {
+   fprintf(stderr, "Failed to get dev info for port = %u\n",
+   bond_pid);
+   return;
+   }
+
+   if (port->dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+   port->dev_conf.txmode.offloads |=
+   RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
+   /* Apply Tx offloads configuration */
+   for (i = 0; i < port->dev_info.max_tx_queues; i++)
+   port->txq[i].conf.offloads = port->dev_conf.txmode.offloads;
+
+   port->dev_conf.rx_adv_conf.rss_conf.rss_hf &=
+   port->dev_info.flow_type_rss_offloads;
+#else
+   RTE_SET_USED(bond_pid);
+#endif
+}
+
  int
  start_port(portid_t pid)
  {
@@ -2869,6 +2904,11 @@ start_port(portid_t pid)
return -1;
}
  
			if (port->bond_flag == 1 && port->update_conf == 1) {
+   update_bonding_port_dev_conf(pi);
+   port->update_conf = 0;
+   }
+
/* configure port */
diag = eth_dev_configure_mp(pi, nb_rxq + nb_hairpinq,
 nb_txq + nb_hairpinq,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 7fef96f9b1..82714119e8 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -316,7 +316,8 @@ struct rte_port {
queueid_t   queue_nb; /**< nb. of queues for flow rules */
uint32_tqueue_sz; /**< size of a queue for flow rules */
uint8_t slave_flag : 1, /**< bonding slave port */
-   bond_flag : 1; /**< port is bond device */
+   bond_flag : 1, /**< port is bond device */
+   update_conf : 1; /**< need to update bonding device configuration */
struct port_template*pattern_templ_list; /**< Pattern templates. */
struct port_template*actions_templ_list; /**< Actions templates. */
struct port_table   *table_list; /**< Flow tables. */
diff --git a/drivers/net/bonding/bonding_testpmd.c b/drivers/net/bonding/bonding_testpmd.c
index 3941f4cf23..9529e16fb6 100644
--- a/drivers/net/bonding/bonding_testpmd.c
+++ b/drivers/net/bonding/bonding_testpmd.c
@@ -625,6 +625,7 @@ static void cmd_add_bonding_slave_parsed(void *parsed_result,
slave_port_id, master_port_id);
return;
}
+   ports[master_port_id].update_conf = 1;
init_port_confi

[PATCH] doc: correct product name for idpf

2022-10-31 Thread beilei . xing
From: Beilei Xing 

This patch corrects the product name for idpf PMD.

Fixes: 549343c25db8 ("net/idpf: support device initialization")

Signed-off-by: Beilei Xing 
---
 doc/guides/nics/idpf.rst   | 2 +-
 doc/guides/rel_notes/release_22_11.rst | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/idpf.rst b/doc/guides/nics/idpf.rst
index 15c0e58a2f..b5c3aa5763 100644
--- a/doc/guides/nics/idpf.rst
+++ b/doc/guides/nics/idpf.rst
@@ -7,7 +7,7 @@ IDPF Poll Mode Driver
 =
 
 The [*EXPERIMENTAL*] idpf PMD (**librte_net_idpf**) provides poll mode driver support
-for Intel\ |reg| Infrastructure Processing Unit (Intel\ |reg| IPU) E2000.
+for Intel\ |reg| Infrastructure Processing Unit (Intel\ |reg| IPU) E2100.
 
 
 Linux Prerequisites
diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index 61f7d4d0aa..699c1231fa 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -161,7 +161,7 @@ New Features
 * **Added Intel idpf driver.**
 
   Added the new ``idpf`` net driver
-  for Intel\ |reg| Infrastructure Processing Unit (Intel\ |reg| IPU) E2000.
+  for Intel\ |reg| Infrastructure Processing Unit (Intel\ |reg| IPU) E2100.
   See the :doc:`../nics/idpf` NIC guide for more details on this new driver.
 
 * **Updated Marvell cnxk driver.**
-- 
2.26.2



[PATCH] common/idpf: add README for base code

2022-10-31 Thread beilei . xing
From: Beilei Xing 

This patch adds README for idpf base code.

Signed-off-by: Beilei Xing 
---
 drivers/common/idpf/base/README | 21 +
 1 file changed, 21 insertions(+)
 create mode 100644 drivers/common/idpf/base/README

diff --git a/drivers/common/idpf/base/README b/drivers/common/idpf/base/README
new file mode 100644
index 00..257ad6c4b1
--- /dev/null
+++ b/drivers/common/idpf/base/README
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021-2022 Intel Corporation
+ */
+
+Intel® IDPF driver
+==
+
+This directory contains the source code of the BSD-3-Clause idpf driver,
+version 2022.09.13, released by the team that develops base drivers for
+Intel IPU. The base/ directory contains the original source package.
+This driver is valid for the product(s) listed below:
+
+* Intel® IPU E2100
+
+Updating the driver
+===
+
+NOTE: The source code in this directory should not be modified apart from
+the following file(s):
+
+idpf_osdep.h
\ No newline at end of file
-- 
2.26.2



RE: [PATCH 03/13] net/idpf: support device initialization

2022-10-31 Thread Xing, Beilei


> -Original Message-
> From: Ali Alnubani 
> Sent: Tuesday, November 1, 2022 2:01 AM
> To: Guo, Junfeng ; Zhang, Qi Z
> ; Wu, Jingjing ; Xing, Beilei
> 
> Cc: dev@dpdk.org; Li, Xiaoyun ; Wang, Xiao W
> ; NBU-Contact-Thomas Monjalon (EXTERNAL)
> 
> Subject: RE: [PATCH 03/13] net/idpf: support device initialization
> 
> > -Original Message-
> > From: Junfeng Guo 
> > Sent: Wednesday, August 3, 2022 2:31 PM
> > To: qi.z.zh...@intel.com; jingjing...@intel.com; beilei.x...@intel.com
> > Cc: dev@dpdk.org; junfeng@intel.com; Xiaoyun Li
> > ; Xiao Wang 
> > Subject: [PATCH 03/13] net/idpf: support device initialization
> >
> > Support device init and the following dev ops:
> > - dev_configure
> > - dev_start
> > - dev_stop
> > - dev_close
> >
> > Signed-off-by: Beilei Xing 
> > Signed-off-by: Xiaoyun Li 
> > Signed-off-by: Xiao Wang 
> > Signed-off-by: Junfeng Guo 
> > ---
> 
> Hello,
> 
> This patch is causing the following build failure in latest main (6a88cbc)
> with clang 3.4.2 in CentOS 7:
> drivers/net/idpf/idpf_vchnl.c:141:13: error: comparison of constant 522 with
> expression of type 'enum virtchnl_ops' is always false
> [-Werror,-Wtautological-constant-out-of-range-compare]
> 

Hi,

Thanks for reporting the issue. A fix patch has been sent:
https://patches.dpdk.org/project/dpdk/patch/20221101024350.105241-1-beilei.x...@intel.com/

> Regards,
> Ali