Re: [PATCH] ethdev: fix Tx queue mask endianness

2023-06-30 Thread David Marchand
On Thu, Jun 29, 2023 at 6:14 PM Ferruh Yigit  wrote:
>
> On 6/29/2023 4:42 PM, Thomas Monjalon wrote:
> > 29/06/2023 17:40, David Marchand:
> >> On Thu, Jun 29, 2023 at 5:31 PM Thomas Monjalon  
> >> wrote:
> >>> 29/06/2023 15:58, David Marchand:
>  - .tx_queue = RTE_BE16(0xffff),
>  + .tx_queue = 0xffff,
> >>>
> >>> As I said in an earlier comment about the same issue,
> >>> UINT16_MAX would be better.
> >>
> >> I don't mind updating (or maybe Ferruh can squash this directly ?) but
> >> there are lots of uint16_t fields initialised with 0xffff in this same
> >> file.
> >
> > It can be made in a separate patch for all occurences.
> > First I would like to get some comments, what do you prefer
> > between 0xffff and UINT16_MAX?
> >
>
> Both work, no strong opinion, I am OK with 0xffff,
>
> The variable we are setting is '*_mask', and the main point of the value
> used is to have all bits set, which the 0xff.. usage highlights.
>
> Not so much for UINT16_MAX, but for wider variables it is easier to make
> a mistake and put the wrong number of 'f's; using the 'UINTxx_MAX' macro
> can prevent this mistake, which is a benefit.
>
>
> And I think consistency matters more, so if you prefer 'UINTxx_MAX',
> let's stick to it.
>
> I can update above in next-net, but as far as I understand we can have a
> patch to fix all occurrences.

Given that we are considering unsigned integers, is there something
wrong with using (typeof(var)) -1 ?
We could define a new macro to hide this ugly detail.


-- 
David Marchand



Re: [PATCH] ethdev: fix Tx queue mask endianness

2023-06-30 Thread David Marchand
On Fri, Jun 30, 2023 at 9:00 AM David Marchand
 wrote:
>
> On Thu, Jun 29, 2023 at 6:14 PM Ferruh Yigit  wrote:
> >
> > On 6/29/2023 4:42 PM, Thomas Monjalon wrote:
> > > 29/06/2023 17:40, David Marchand:
> > >> On Thu, Jun 29, 2023 at 5:31 PM Thomas Monjalon  
> > >> wrote:
> > >>> 29/06/2023 15:58, David Marchand:
> >  - .tx_queue = RTE_BE16(0xffff),
> >  + .tx_queue = 0xffff,
> > >>>
> > >>> As I said in an earlier comment about the same issue,
> > >>> UINT16_MAX would be better.
> > >>
> > >> I don't mind updating (or maybe Ferruh can squash this directly ?) but
> > >> there are lots of uint16_t fields initialised with 0xffff in this same
> > >> file.
> > >
> > > It can be made in a separate patch for all occurences.
> > > First I would like to get some comments, what do you prefer
> > > between 0xffff and UINT16_MAX?
> > >
> >
> > Both work, no strong opinion, I am OK with 0xffff,
> >
> > The variable we are setting is '*_mask', and the main point of the value
> > used is to have all bits set, which the 0xff.. usage highlights.
> >
> > Not so much for UINT16_MAX, but for wider variables it is easier to make
> > a mistake and put the wrong number of 'f's; using the 'UINTxx_MAX' macro
> > can prevent this mistake, which is a benefit.
> >
> >
> > And I think consistency matters more, so if you prefer 'UINTxx_MAX',
> > lets stick to it.
> >
> > I can update above in next-net, but as far as I understand we can have a
> > patch to fix all occurrences.
>
> Given that we are considering unsigned integers, is there something
> wrong with using (typeof(var)) -1 ?

Or maybe get inspiration from what the Linux kernel does :-)
Like GENMASK().
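For reference, the kernel's GENMASK can be approximated in plain C like this (a 64-bit sketch; the name and form are illustrative, not an existing DPDK macro):

```c
#include <stdint.h>

/* Bits [h..l] set, everything else clear, as in the Linux kernel's
 * GENMASK_ULL(). E.g. GENMASK64(15, 0) == 0xffff, so an all-ones 16-bit
 * mask is spelled by its width rather than by counting 'f's. */
#define GENMASK64(h, l) \
	((~UINT64_C(0) << (l)) & (~UINT64_C(0) >> (63 - (h))))
```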


-- 
David Marchand



[PATCH] Updated to dpdk20.11

2023-06-30 Thread David Young
---
 doc/guides/freebsd_gsg/install_from_ports.rst | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/doc/guides/freebsd_gsg/install_from_ports.rst 
b/doc/guides/freebsd_gsg/install_from_ports.rst
index d946f3f3b2..ae866cd879 100644
--- a/doc/guides/freebsd_gsg/install_from_ports.rst
+++ b/doc/guides/freebsd_gsg/install_from_ports.rst
@@ -23,7 +23,7 @@ Installing the DPDK Package for FreeBSD
 
 DPDK can be installed on FreeBSD using the command::
 
-   pkg install dpdk
+   pkg install dpdk20.11
 
 After the installation of the DPDK package, instructions will be printed on
 how to install the kernel modules required to use the DPDK. A more
@@ -51,7 +51,7 @@ a pre-compiled binary package.
 On a system with the ports collection installed in ``/usr/ports``, the DPDK
 can be installed using the commands::
 
-cd /usr/ports/net/dpdk
+cd /usr/ports/net/dpdk20.11
 
 make install
 
@@ -123,3 +123,4 @@ via the contigmem module, and 4 NIC ports bound to the 
nic_uio module::
 
For an explanation of the command-line parameters that can be passed to an
DPDK application, see section :ref:`running_sample_app`.
+
-- 
2.41.0.windows.1



[PATCH] vhost: add notify reply ops to avoid message deadlock

2023-06-30 Thread Rma Ma
Since backend and frontend messages are handled synchronously in the same
thread, there is a chance of message deadlock.
Let each driver determine whether to wait for a response.
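A driver opting out of the synchronous reply could implement the new callback roughly as follows (a hypothetical sketch; the function name is made up, only the op signature comes from the patch):

```c
#include <stdbool.h>

/* Hypothetical vDPA driver callback for the new get_notify_reply_flag op.
 * A driver whose vhost-user message handling runs in the same thread that
 * would block waiting for the reply reports need_reply = false, so the
 * host-notifier message is sent without VHOST_USER_NEED_REPLY and the
 * deadlock described above cannot occur. */
static int
example_vdpa_get_notify_reply_flag(int vid, bool *need_reply)
{
	(void)vid;           /* per-device state could be looked up by vid */
	*need_reply = false; /* do not wait for a synchronous reply */
	return 0;
}
```

Drivers that do not implement the callback keep the old behaviour, since the patch defaults need_reply to true when the op is NULL.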

Signed-off-by: Rma Ma 
---
 lib/vhost/vdpa_driver.h |  3 +++
 lib/vhost/vhost_user.c  | 23 ++-
 2 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/lib/vhost/vdpa_driver.h b/lib/vhost/vdpa_driver.h
index 8db4ab9f4d..3d2ea3c90e 100644
--- a/lib/vhost/vdpa_driver.h
+++ b/lib/vhost/vdpa_driver.h
@@ -81,6 +81,9 @@ struct rte_vdpa_dev_ops {
 
/** get device type: net device, blk device... */
int (*get_dev_type)(struct rte_vdpa_device *dev, uint32_t *type);
+
+   /** Get the notify reply flag */
+   int (*get_notify_reply_flag)(int vid, bool *need_reply);
 };
 
 /**
diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
index 901a80bbaa..aa61992939 100644
--- a/lib/vhost/vhost_user.c
+++ b/lib/vhost/vhost_user.c
@@ -3365,13 +3365,14 @@ rte_vhost_backend_config_change(int vid, bool 
need_reply)
 static int vhost_user_backend_set_vring_host_notifier(struct virtio_net *dev,
int index, int fd,
uint64_t offset,
-   uint64_t size)
+   uint64_t size,
+   bool need_reply)
 {
int ret;
struct vhu_msg_context ctx = {
.msg = {
.request.backend = 
VHOST_USER_BACKEND_VRING_HOST_NOTIFIER_MSG,
-   .flags = VHOST_USER_VERSION | VHOST_USER_NEED_REPLY,
+   .flags = VHOST_USER_VERSION,
.size = sizeof(ctx.msg.payload.area),
.payload.area = {
.u64 = index & VHOST_USER_VRING_IDX_MASK,
@@ -3388,7 +3389,13 @@ static int 
vhost_user_backend_set_vring_host_notifier(struct virtio_net *dev,
ctx.fd_num = 1;
}
 
-   ret = send_vhost_backend_message_process_reply(dev, &ctx);
+   if (!need_reply)
+   ret = send_vhost_backend_message(dev, &ctx);
+   else {
+   ctx.msg.flags |= VHOST_USER_NEED_REPLY;
+   ret = send_vhost_backend_message_process_reply(dev, &ctx);
+   }
+
if (ret < 0)
VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to set host notifier 
(%d)\n", ret);
 
@@ -3402,6 +3409,7 @@ int rte_vhost_host_notifier_ctrl(int vid, uint16_t qid, 
bool enable)
int vfio_device_fd, ret = 0;
uint64_t offset, size;
unsigned int i, q_start, q_last;
+   bool need_reply;
 
dev = get_device(vid);
if (!dev)
@@ -3440,6 +3448,11 @@ int rte_vhost_host_notifier_ctrl(int vid, uint16_t qid, 
bool enable)
if (vfio_device_fd < 0)
return -ENOTSUP;
 
+   if (vdpa_dev->ops->get_notify_reply_flag == NULL)
+   need_reply = true;
+   else
+   vdpa_dev->ops->get_notify_reply_flag(vid, &need_reply);
+
if (enable) {
for (i = q_start; i <= q_last; i++) {
if (vdpa_dev->ops->get_notify_area(vid, i, &offset,
@@ -3449,7 +3462,7 @@ int rte_vhost_host_notifier_ctrl(int vid, uint16_t qid, 
bool enable)
}
 
if (vhost_user_backend_set_vring_host_notifier(dev, i,
-   vfio_device_fd, offset, size) < 0) {
+   vfio_device_fd, offset, size, 
need_reply) < 0) {
ret = -EFAULT;
goto disable;
}
@@ -3458,7 +3471,7 @@ int rte_vhost_host_notifier_ctrl(int vid, uint16_t qid, 
bool enable)
 disable:
for (i = q_start; i <= q_last; i++) {
vhost_user_backend_set_vring_host_notifier(dev, i, -1,
-   0, 0);
+   0, 0, need_reply);
}
}
 
-- 
2.17.1



[PATCH] net/mlx5: support symmetric RSS hash function

2023-06-30 Thread Xueming Li
This patch supports a symmetric hash function that creates the same
hash result for bi-directional traffic that has reversed source and
destination IP addresses and L4 ports.

Since the hash algorithm is different from the spec (XOR), leave a
warning in validation.
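The symmetric property being described can be illustrated with the spec's XOR folding; note this is exactly the algorithm the warning is about, shown here only to demonstrate direction independence:

```c
#include <stdint.h>

/* Illustration only: XOR folding of the 4-tuple is invariant under
 * swapping source and destination, so both directions of a flow hash to
 * the same value. The mlx5 device applies a different symmetric
 * algorithm, hence the validation warning mentioned above. */
static inline uint32_t
symmetric_xor_fold(uint32_t src_ip, uint32_t dst_ip,
		   uint16_t src_port, uint16_t dst_port)
{
	return (src_ip ^ dst_ip) ^ (uint32_t)(src_port ^ dst_port);
}
```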

Signed-off-by: Xueming Li 
---
 drivers/net/mlx5/mlx5.h |  3 +++
 drivers/net/mlx5/mlx5_devx.c| 11 ---
 drivers/net/mlx5/mlx5_flow.c| 10 --
 drivers/net/mlx5/mlx5_flow.h|  5 +
 drivers/net/mlx5/mlx5_flow_dv.c | 13 -
 drivers/net/mlx5/mlx5_flow_hw.c |  7 +++
 drivers/net/mlx5/mlx5_rx.h  |  2 +-
 drivers/net/mlx5/mlx5_rxq.c |  8 +---
 8 files changed, 49 insertions(+), 10 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 2a82348135..b7534933bc 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1509,6 +1509,7 @@ struct mlx5_mtr_config {
 
 /* RSS description. */
 struct mlx5_flow_rss_desc {
+   bool symmetric_hash_function; /**< Symmetric hash function */
uint32_t level;
uint32_t queue_num; /**< Number of entries in @p queue. */
uint64_t types; /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
@@ -1577,6 +1578,7 @@ struct mlx5_hrxq {
 #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H)
void *action; /* DV QP action pointer. */
 #endif
+   bool symmetric_hash_function; /* Symmetric hash function */
uint32_t hws_flags; /* Hw steering flags. */
uint64_t hash_fields; /* Verbs Hash fields. */
uint32_t rss_key_len; /* Hash key length in bytes. */
@@ -1648,6 +1650,7 @@ struct mlx5_obj_ops {
int (*hrxq_modify)(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq,
   const uint8_t *rss_key,
   uint64_t hash_fields,
+  bool symmetric_hash_function,
   const struct mlx5_ind_table_obj *ind_tbl);
void (*hrxq_destroy)(struct mlx5_hrxq *hrxq);
int (*drop_action_create)(struct rte_eth_dev *dev);
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 4369d2557e..f9d8dc6987 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -803,7 +803,8 @@ static void
 mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
   uint64_t hash_fields,
   const struct mlx5_ind_table_obj *ind_tbl,
-  int tunnel, struct mlx5_devx_tir_attr *tir_attr)
+  int tunnel, bool symmetric_hash_function,
+  struct mlx5_devx_tir_attr *tir_attr)
 {
struct mlx5_priv *priv = dev->data->dev_private;
bool is_hairpin;
@@ -834,6 +835,7 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const 
uint8_t *rss_key,
tir_attr->disp_type = MLX5_TIRC_DISP_TYPE_INDIRECT;
tir_attr->rx_hash_fn = MLX5_RX_HASH_FN_TOEPLITZ;
tir_attr->tunneled_offload_en = !!tunnel;
+   tir_attr->rx_hash_symmetric = symmetric_hash_function;
/* If needed, translate hash_fields bitmap to PRM format. */
if (hash_fields) {
struct mlx5_rx_hash_field_select *rx_hash_field_select =
@@ -902,7 +904,8 @@ mlx5_devx_hrxq_new(struct rte_eth_dev *dev, struct 
mlx5_hrxq *hrxq,
int err;
 
mlx5_devx_tir_attr_set(dev, hrxq->rss_key, hrxq->hash_fields,
-  hrxq->ind_table, tunnel, &tir_attr);
+  hrxq->ind_table, tunnel, 
hrxq->symmetric_hash_function,
+  &tir_attr);
hrxq->tir = mlx5_devx_cmd_create_tir(priv->sh->cdev->ctx, &tir_attr);
if (!hrxq->tir) {
DRV_LOG(ERR, "Port %u cannot create DevX TIR.",
@@ -969,13 +972,13 @@ static int
 mlx5_devx_hrxq_modify(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq,
   const uint8_t *rss_key,
   uint64_t hash_fields,
+  bool symmetric_hash_function,
   const struct mlx5_ind_table_obj *ind_tbl)
 {
struct mlx5_devx_modify_tir_attr modify_tir = {0};
 
/*
 * untested for modification fields:
-* - rx_hash_symmetric not set in hrxq_new(),
 * - rx_hash_fn set hard-coded in hrxq_new(),
 * - lro_xxx not set after rxq setup
 */
@@ -983,11 +986,13 @@ mlx5_devx_hrxq_modify(struct rte_eth_dev *dev, struct 
mlx5_hrxq *hrxq,
modify_tir.modify_bitmask |=
MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_INDIRECT_TABLE;
if (hash_fields != hrxq->hash_fields ||
+   symmetric_hash_function != 
hrxq->symmetric_hash_function ||
memcmp(hrxq->rss_key, rss_key, MLX5_RSS_HASH_KEY_LEN))
modify_tir.modify_bitmask |=
MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_HASH;
mlx5_devx_tir_attr_set(dev, rss_key, hash_fields, ind_tbl,
 

[PATCH 0/3] doc: fix some errors in hns3 guide

2023-06-30 Thread Dongdong Liu
This patchset is to fix some errors in hns3 guide doc.

Dongdong Liu (3):
  doc: fix invalid link in hns3 guide
  doc: fix wrong syntax in hns3 guide
  doc: fix wrong number of spaces in hns3 guide

 doc/guides/nics/hns3.rst | 33 +++--
 1 file changed, 19 insertions(+), 14 deletions(-)

--
2.22.0



[PATCH 1/3] doc: fix invalid link in hns3 guide

2023-06-30 Thread Dongdong Liu
LSC support in the VF driver depends on a patch to the kernel PF driver,
referenced by a link. But currently the link is invalid, so fix it.

Add a blank line after the link.

Fixes: 80006b598730 ("doc: add link status event requirements in hns3 guide")
Cc: sta...@dpdk.org

Signed-off-by: Dongdong Liu 
---
 doc/guides/nics/hns3.rst | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/hns3.rst b/doc/guides/nics/hns3.rst
index 45f9a99a89..a6f3a5eb9e 100644
--- a/doc/guides/nics/hns3.rst
+++ b/doc/guides/nics/hns3.rst
@@ -54,7 +54,8 @@ Firmware 1.8.0.0 and later versions support reporting link 
changes to the PF.
 Therefore, to use the LSC for the PF driver, ensure that the firmware version
 also supports reporting link changes.
 If the VF driver needs to support LSC, special patch must be added:
-``_.
+``_.
+
 Note: The patch has been uploaded to 5.13 of the Linux kernel mainline.
 
 
-- 
2.22.0



[PATCH 2/3] doc: fix wrong syntax in hns3 guide

2023-06-30 Thread Dongdong Liu
'::' doesn't produce pre-formatted text without an empty line after it,
so fix it.

Fixes: cdf6a5fbc540 ("doc: add runtime option examples to hns3 guide")
Cc: sta...@dpdk.org

Signed-off-by: Dongdong Liu 
---
 doc/guides/nics/hns3.rst | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/doc/guides/nics/hns3.rst b/doc/guides/nics/hns3.rst
index a6f3a5eb9e..644d520b64 100644
--- a/doc/guides/nics/hns3.rst
+++ b/doc/guides/nics/hns3.rst
@@ -92,6 +92,7 @@ Runtime Configuration
   ``common``.
 
   For example::
+
   -a :7d:00.0,rx_func_hint=simple
 
 - ``tx_func_hint`` (default ``none``)
@@ -112,6 +113,7 @@ Runtime Configuration
   ``common``.
 
   For example::
+
   -a :7d:00.0,tx_func_hint=common
 
 - ``dev_caps_mask`` (default ``0``)
@@ -124,6 +126,7 @@ Runtime Configuration
   Its main purpose is to debug and avoid problems.
 
   For example::
+
   -a :7d:00.0,dev_caps_mask=0xF
 
 - ``mbx_time_limit_ms`` (default ``500``)
-- 
2.22.0



[PATCH 3/3] doc: fix wrong number of spaces in hns3 guide

2023-06-30 Thread Dongdong Liu
The current description of 'mbx_time_limit_ms' has three spaces
at the beginning. Use two spaces to keep the same style as other
places, and add a blank line after '::'.

Fixes: 2fc3e696a7f1 ("net/hns3: add runtime config for mailbox limit time")
Cc: sta...@dpdk.org

Signed-off-by: Dongdong Liu 
---
 doc/guides/nics/hns3.rst | 27 ++-
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/doc/guides/nics/hns3.rst b/doc/guides/nics/hns3.rst
index 644d520b64..b9f7035300 100644
--- a/doc/guides/nics/hns3.rst
+++ b/doc/guides/nics/hns3.rst
@@ -130,19 +130,20 @@ Runtime Configuration
   -a :7d:00.0,dev_caps_mask=0xF
 
 - ``mbx_time_limit_ms`` (default ``500``)
-   Used to define the mailbox time limit by user.
-   Current, the max waiting time for MBX response is 500ms, but in
-   some scenarios, it is not enough. Since it depends on the response
-   of the kernel mode driver, and its response time is related to the
-   scheduling of the system. In this special scenario, most of the
-   cores are isolated, and only a few cores are used for system
-   scheduling. When a large number of services are started, the
-   scheduling of the system will be very busy, and the reply of the
-   mbx message will time out, which will cause our PMD initialization
-   to fail. So provide access to set mailbox time limit for user.
-
-   For example::
-   -a :7d:00.0,mbx_time_limit_ms=600
+  Used to define the mailbox time limit by user.
+  Current, the max waiting time for MBX response is 500ms, but in
+  some scenarios, it is not enough. Since it depends on the response
+  of the kernel mode driver, and its response time is related to the
+  scheduling of the system. In this special scenario, most of the
+  cores are isolated, and only a few cores are used for system
+  scheduling. When a large number of services are started, the
+  scheduling of the system will be very busy, and the reply of the
+  mbx message will time out, which will cause our PMD initialization
+  to fail. So provide access to set mailbox time limit for user.
+
+  For example::
+
+  -a :7d:00.0,mbx_time_limit_ms=600
 
 - ``fdir_vlan_match_mode`` (default ``strict``)
 
-- 
2.22.0



[PATCH v1] net/mlx5: support symmetric RSS hash function

2023-06-30 Thread Xueming Li
This patch supports a symmetric hash function that creates the same
hash result for bi-directional traffic that has reversed source and
destination IP addresses and L4 ports.

Since the hash algorithm is different from the spec (XOR), leave a
warning in validation.

Signed-off-by: Xueming Li 
---
 drivers/net/mlx5/mlx5.h |  3 +++
 drivers/net/mlx5/mlx5_devx.c| 11 ---
 drivers/net/mlx5/mlx5_flow.c| 10 --
 drivers/net/mlx5/mlx5_flow.h|  5 +
 drivers/net/mlx5/mlx5_flow_dv.c | 13 -
 drivers/net/mlx5/mlx5_flow_hw.c |  7 +++
 drivers/net/mlx5/mlx5_rx.h  |  2 +-
 drivers/net/mlx5/mlx5_rxq.c |  8 +---
 8 files changed, 49 insertions(+), 10 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 2a82348135..b7534933bc 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1509,6 +1509,7 @@ struct mlx5_mtr_config {
 
 /* RSS description. */
 struct mlx5_flow_rss_desc {
+   bool symmetric_hash_function; /**< Symmetric hash function */
uint32_t level;
uint32_t queue_num; /**< Number of entries in @p queue. */
uint64_t types; /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
@@ -1577,6 +1578,7 @@ struct mlx5_hrxq {
 #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H)
void *action; /* DV QP action pointer. */
 #endif
+   bool symmetric_hash_function; /* Symmetric hash function */
uint32_t hws_flags; /* Hw steering flags. */
uint64_t hash_fields; /* Verbs Hash fields. */
uint32_t rss_key_len; /* Hash key length in bytes. */
@@ -1648,6 +1650,7 @@ struct mlx5_obj_ops {
int (*hrxq_modify)(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq,
   const uint8_t *rss_key,
   uint64_t hash_fields,
+  bool symmetric_hash_function,
   const struct mlx5_ind_table_obj *ind_tbl);
void (*hrxq_destroy)(struct mlx5_hrxq *hrxq);
int (*drop_action_create)(struct rte_eth_dev *dev);
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 4369d2557e..f9d8dc6987 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -803,7 +803,8 @@ static void
 mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
   uint64_t hash_fields,
   const struct mlx5_ind_table_obj *ind_tbl,
-  int tunnel, struct mlx5_devx_tir_attr *tir_attr)
+  int tunnel, bool symmetric_hash_function,
+  struct mlx5_devx_tir_attr *tir_attr)
 {
struct mlx5_priv *priv = dev->data->dev_private;
bool is_hairpin;
@@ -834,6 +835,7 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const 
uint8_t *rss_key,
tir_attr->disp_type = MLX5_TIRC_DISP_TYPE_INDIRECT;
tir_attr->rx_hash_fn = MLX5_RX_HASH_FN_TOEPLITZ;
tir_attr->tunneled_offload_en = !!tunnel;
+   tir_attr->rx_hash_symmetric = symmetric_hash_function;
/* If needed, translate hash_fields bitmap to PRM format. */
if (hash_fields) {
struct mlx5_rx_hash_field_select *rx_hash_field_select =
@@ -902,7 +904,8 @@ mlx5_devx_hrxq_new(struct rte_eth_dev *dev, struct 
mlx5_hrxq *hrxq,
int err;
 
mlx5_devx_tir_attr_set(dev, hrxq->rss_key, hrxq->hash_fields,
-  hrxq->ind_table, tunnel, &tir_attr);
+  hrxq->ind_table, tunnel, 
hrxq->symmetric_hash_function,
+  &tir_attr);
hrxq->tir = mlx5_devx_cmd_create_tir(priv->sh->cdev->ctx, &tir_attr);
if (!hrxq->tir) {
DRV_LOG(ERR, "Port %u cannot create DevX TIR.",
@@ -969,13 +972,13 @@ static int
 mlx5_devx_hrxq_modify(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq,
   const uint8_t *rss_key,
   uint64_t hash_fields,
+  bool symmetric_hash_function,
   const struct mlx5_ind_table_obj *ind_tbl)
 {
struct mlx5_devx_modify_tir_attr modify_tir = {0};
 
/*
 * untested for modification fields:
-* - rx_hash_symmetric not set in hrxq_new(),
 * - rx_hash_fn set hard-coded in hrxq_new(),
 * - lro_xxx not set after rxq setup
 */
@@ -983,11 +986,13 @@ mlx5_devx_hrxq_modify(struct rte_eth_dev *dev, struct 
mlx5_hrxq *hrxq,
modify_tir.modify_bitmask |=
MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_INDIRECT_TABLE;
if (hash_fields != hrxq->hash_fields ||
+   symmetric_hash_function != 
hrxq->symmetric_hash_function ||
memcmp(hrxq->rss_key, rss_key, MLX5_RSS_HASH_KEY_LEN))
modify_tir.modify_bitmask |=
MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_HASH;
mlx5_devx_tir_attr_set(dev, rss_key, hash_fields, ind_tbl,
 

RE: [PATCH v2] net/iavf: fix duplicate reset done check with large VF

2023-06-30 Thread Zhang, Qi Z



> -Original Message-
> From: Zeng, ZhichaoX 
> Sent: Friday, June 30, 2023 2:06 PM
> To: dev@dpdk.org
> Cc: Zhang, Qi Z ; Jiale, SongX ;
> Zeng, ZhichaoX ; Wu, Jingjing
> ; Xing, Beilei 
> Subject: [PATCH v2] net/iavf: fix duplicate reset done check with large VF
> 
> When starting with a large VF, the VF needs to be reset to request queues;
> the reset process will execute VIRTCHNL commands to clean up resources.
> 
> The VF reset done check and the reset watchdog read the same global register,
> resulting in the NIC not responding to the VIRTCHNL command.
> 
> This patch turns off the watchdog when requesting queues, to avoid the
> VIRTCHNL command timeout error when starting with a large VF.
> 
> Fixes: af801b0374e3 ("net/iavf: add devargs to control watchdog")
> Fixes: 7a93cd3575eb ("net/iavf: add VF reset check")
> Signed-off-by: Zhichao Zeng 

Acked-by: Qi Zhang 

Applied to dpdk-next-net-intel.

Thanks
Qi


RE: [PATCH] net/idpf: refine idpf_dev_vport_init() function

2023-06-30 Thread Zhang, Qi Z



> -Original Message-
> From: Mingxia Liu 
> Sent: Monday, June 26, 2023 10:06 PM
> To: dev@dpdk.org
> Cc: Wu, Jingjing ; Xing, Beilei 
> ;
> Liu, Mingxia 
> Subject: [PATCH] net/idpf: refine idpf_dev_vport_init() function
> 
> This patch adds 'cur_vports' and 'cur_vport_nb' updates in the error path.
> 
> Signed-off-by: Mingxia Liu 
> ---
>  drivers/net/idpf/idpf_ethdev.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
> index 801da57472..3e66898aaf 100644
> --- a/drivers/net/idpf/idpf_ethdev.c
> +++ b/drivers/net/idpf/idpf_ethdev.c
> @@ -1300,6 +1300,8 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void
> *init_params)
>  err_mac_addrs:
>   adapter->vports[param->idx] = NULL;  /* reset */
>   idpf_vport_deinit(vport);
> + adapter->cur_vports &= ~RTE_BIT32(param->devarg_id);
> + adapter->cur_vport_nb--;

Can we move the two lines below to the end?

adapter->cur_vports |= RTE_BIT32(param->devarg_id);
adapter->cur_vport_nb++;

That way we don't need to revert them in the error handler.

>  err:
>   return ret;
>  }
> --
> 2.34.1
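The suggested reordering amounts to the following shape, sketched with stand-in types (not the real idpf structures): do all fallible work first and update the bookkeeping last, so the error path needs no rollback.

```c
#include <stdint.h>

/* Stand-in for the adapter bookkeeping, illustration only. */
struct demo_adapter {
	uint32_t cur_vports;   /* bitmap of configured vports */
	uint16_t cur_vport_nb; /* number of configured vports */
};

/* 'fail' stands in for any fallible init step (e.g. MAC address setup). */
static int
demo_vport_init(struct demo_adapter *ad, unsigned int devarg_id, int fail)
{
	if (fail)
		return -1; /* error path: counters were never touched */

	/* bookkeeping updated only after every fallible step succeeded */
	ad->cur_vports |= UINT32_C(1) << devarg_id;
	ad->cur_vport_nb++;
	return 0;
}
```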



RE: [PATCH] net/idpf: refine idpf_dev_vport_init() function

2023-06-30 Thread Zhang, Qi Z



> -Original Message-
> From: Zhang, Qi Z
> Sent: Friday, June 30, 2023 4:13 PM
> To: Mingxia Liu ; dev@dpdk.org
> Cc: Wu, Jingjing ; Xing, Beilei 
> ;
> Liu, Mingxia 
> Subject: RE: [PATCH] net/idpf: refine idpf_dev_vport_init() function
> 
> 
> 
> > -Original Message-
> > From: Mingxia Liu 
> > Sent: Monday, June 26, 2023 10:06 PM
> > To: dev@dpdk.org
> > Cc: Wu, Jingjing ; Xing, Beilei
> > ; Liu, Mingxia 
> > Subject: [PATCH] net/idpf: refine idpf_dev_vport_init() function
> >
> > This patch adds 'cur_vports' and 'cur_vport_nb' updates in the error path.
> >
> > Signed-off-by: Mingxia Liu 
> > ---
> >  drivers/net/idpf/idpf_ethdev.c | 2 ++
> >  1 file changed, 2 insertions(+)
> >
> > diff --git a/drivers/net/idpf/idpf_ethdev.c
> > b/drivers/net/idpf/idpf_ethdev.c index 801da57472..3e66898aaf 100644
> > --- a/drivers/net/idpf/idpf_ethdev.c
> > +++ b/drivers/net/idpf/idpf_ethdev.c
> > @@ -1300,6 +1300,8 @@ idpf_dev_vport_init(struct rte_eth_dev *dev,
> > void
> > *init_params)
> >  err_mac_addrs:
> > adapter->vports[param->idx] = NULL;  /* reset */
> > idpf_vport_deinit(vport);
> > +   adapter->cur_vports &= ~RTE_BIT32(param->devarg_id);
> > +   adapter->cur_vport_nb--;
> 
> Can we move the two lines below to the end?
> 
> adapter->cur_vports |= RTE_BIT32(param->devarg_id);
> adapter->cur_vport_nb++;
> 
> That way we don't need to revert them in the error handler.

Btw, this is a fix, please also add the Fixes line.
> 
> >  err:
> > return ret;
> >  }
> > --
> > 2.34.1



RE: [PATCH] net/idpf: refine dev_link_update function

2023-06-30 Thread Zhang, Qi Z



> -Original Message-
> From: Mingxia Liu 
> Sent: Monday, June 26, 2023 7:15 PM
> To: dev@dpdk.org
> Cc: Wu, Jingjing ; Xing, Beilei 
> ;
> Liu, Mingxia 
> Subject: [PATCH] net/idpf: refine dev_link_update function
> 
> This patch refines the idpf_dev_link_update callback function according to
> the CPFL PMD base code.
> 
> Signed-off-by: Mingxia Liu 
> ---
>  drivers/net/idpf/idpf_ethdev.c | 63 --
>  1 file changed, 30 insertions(+), 33 deletions(-)
> 
> diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
> index fb5965..bfdac92b95 100644
> --- a/drivers/net/idpf/idpf_ethdev.c
> +++ b/drivers/net/idpf/idpf_ethdev.c
> @@ -30,6 +30,23 @@ static const char * const idpf_valid_args[] = {
>   NULL
>  };
> 
> +uint32_t idpf_supported_speeds[] = {
> + RTE_ETH_SPEED_NUM_NONE,
> + RTE_ETH_SPEED_NUM_10M,
> + RTE_ETH_SPEED_NUM_100M,
> + RTE_ETH_SPEED_NUM_1G,
> + RTE_ETH_SPEED_NUM_2_5G,
> + RTE_ETH_SPEED_NUM_5G,
> + RTE_ETH_SPEED_NUM_10G,
> + RTE_ETH_SPEED_NUM_20G,
> + RTE_ETH_SPEED_NUM_25G,
> + RTE_ETH_SPEED_NUM_40G,
> + RTE_ETH_SPEED_NUM_50G,
> + RTE_ETH_SPEED_NUM_56G,
> + RTE_ETH_SPEED_NUM_100G,
> + RTE_ETH_SPEED_NUM_200G
> +};
> +
>  static const uint64_t idpf_map_hena_rss[] = {
>   [IDPF_HASH_NONF_UNICAST_IPV4_UDP] =
>   RTE_ETH_RSS_NONFRAG_IPV4_UDP,
> @@ -110,42 +127,22 @@ idpf_dev_link_update(struct rte_eth_dev *dev,  {
>   struct idpf_vport *vport = dev->data->dev_private;
>   struct rte_eth_link new_link;
> + unsigned int i;
> 
>   memset(&new_link, 0, sizeof(new_link));
> 
> - switch (vport->link_speed) {
> - case RTE_ETH_SPEED_NUM_10M:
> - new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
> - break;
> - case RTE_ETH_SPEED_NUM_100M:
> - new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
> - break;
> - case RTE_ETH_SPEED_NUM_1G:
> - new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
> - break;
> - case RTE_ETH_SPEED_NUM_10G:
> - new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
> - break;
> - case RTE_ETH_SPEED_NUM_20G:
> - new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
> - break;
> - case RTE_ETH_SPEED_NUM_25G:
> - new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
> - break;
> - case RTE_ETH_SPEED_NUM_40G:
> - new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
> - break;
> - case RTE_ETH_SPEED_NUM_50G:
> - new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
> - break;
> - case RTE_ETH_SPEED_NUM_100G:
> - new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
> - break;
> - case RTE_ETH_SPEED_NUM_200G:
> - new_link.link_speed = RTE_ETH_SPEED_NUM_200G;
> - break;
> - default:
> - new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
> + for (i = 0; i < RTE_DIM(idpf_supported_speeds); i++) {
> + if (vport->link_speed == idpf_supported_speeds[i]) {
> + new_link.link_speed = vport->link_speed;
> + break;
> + }
> + }
> +
> + if (i == RTE_DIM(idpf_supported_speeds)) {
> + if (vport->link_up)
> + new_link.link_speed =
> RTE_ETH_SPEED_NUM_UNKNOWN;
> + else
> + new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
>   }

What about

/* initialize with default value */
new_link.link_speed = vport->link_up ?  RTE_ETH_SPEED_NUM_UNKNOWN : 
RTE_ETH_SPEED_NUM_NONE

/ * update in case a match */
for (i = 0; i < RTE_DIM(idpf_supported_speeds); i++) {

}
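The shape being suggested, as a self-contained sketch with stand-in constants (the real code uses the RTE_ETH_SPEED_NUM_* values and struct idpf_vport):

```c
#include <stddef.h>
#include <stdint.h>

#define SPEED_NONE    0u          /* stand-in for RTE_ETH_SPEED_NUM_NONE */
#define SPEED_UNKNOWN 0xffffffffu /* stand-in for RTE_ETH_SPEED_NUM_UNKNOWN */

/* Stand-in for idpf_supported_speeds[], illustration only. */
static const uint32_t supported_speeds[] = { 10u, 100u, 1000u, 10000u };

/* Default first, override only on a match: no trailing if-check needed. */
static uint32_t
resolve_link_speed(uint32_t reported, int link_up)
{
	uint32_t speed = link_up ? SPEED_UNKNOWN : SPEED_NONE;
	size_t i;

	for (i = 0; i < sizeof(supported_speeds) / sizeof(supported_speeds[0]); i++) {
		if (reported == supported_speeds[i]) {
			speed = reported;
			break;
		}
	}
	return speed;
}
```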

> 
>   new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
> --
> 2.34.1



RE: [PATCH] net/idpf: optimize the code of IDPF PMD

2023-06-30 Thread Zhang, Qi Z



> -Original Message-
> From: Mingxia Liu 
> Sent: Monday, June 26, 2023 10:06 PM
> To: dev@dpdk.org
> Cc: Wu, Jingjing ; Xing, Beilei 
> ;
> Liu, Mingxia 
> Subject: [PATCH] net/idpf: optimize the code of IDPF PMD

net/idpf: reorder code.

> 
> This patch moves 'struct eth_dev_ops idpf_eth_dev_ops = {...}'
> block just after idpf_dev_close(), to group dev_ops related code together.
> 
> Signed-off-by: Mingxia Liu 

Acked-by: Qi Zhang 

Applied to dpdk-next-net-intel.

Thanks
Qi




[PATCH] drivers/ipsec_mb: fix aesni_mb set session ID

2023-06-30 Thread Ciara Power
In the multiprocess case, when the same session is used by both the
primary and secondary processes, the session ID will be the same.
However, the session pointers are not available to the secondary process,
so when the session was created by a different process ID, copy the
template session to the job again.

Fixes: 0fb4834e00af ("crypto/ipsec_mb: set and use session ID")
Cc: pablo.de.lara.gua...@intel.com

Signed-off-by: Ciara Power 
---
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c  | 8 +++-
 drivers/crypto/ipsec_mb/pmd_aesni_mb_priv.h | 2 ++
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c 
b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index f4322d9af4..555b59621d 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -2,6 +2,8 @@
  * Copyright(c) 2015-2021 Intel Corporation
  */
 
+#include <unistd.h>
+
 #include "pmd_aesni_mb_priv.h"
 
 struct aesni_mb_op_buf_data {
@@ -847,6 +849,7 @@ aesni_mb_session_configure(IMB_MGR *mb_mgr,
 
 #if IMB_VERSION(1, 3, 0) < IMB_VERSION_NUM
sess->session_id = imb_set_session(mb_mgr, &sess->template_job);
+   sess->pid = getpid();
 #endif
 
return 0;
@@ -1482,7 +1485,10 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
session->template_job.cipher_mode;
 
 #if IMB_VERSION(1, 3, 0) < IMB_VERSION_NUM
-   if (job->session_id != session->session_id)
+   if (session->pid != getpid()) {
+   memcpy(job, &session->template_job, sizeof(IMB_JOB));
+   imb_set_session(mb_mgr, job);
+   } else if (job->session_id != session->session_id)
 #endif
memcpy(job, &session->template_job, sizeof(IMB_JOB));
 
diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb_priv.h 
b/drivers/crypto/ipsec_mb/pmd_aesni_mb_priv.h
index 4ffbe4b282..3f6cf30c39 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb_priv.h
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb_priv.h
@@ -854,6 +854,8 @@ struct aesni_mb_session {
/*< Template job structure */
uint32_t session_id;
/*< IPSec MB session ID */
+   pid_t pid;
+   /*< Process ID that created session */
struct {
uint16_t offset;
} iv;
-- 
2.25.1



RE: [PATCH] net/idpf: refine RTE_PMD_REGISTER_PARAM_STRING of IDPF PMD

2023-06-30 Thread Zhang, Qi Z



> -Original Message-
> From: Mingxia Liu 
> Sent: Monday, June 26, 2023 10:06 PM
> To: dev@dpdk.org
> Cc: Wu, Jingjing ; Xing, Beilei 
> ;
> Liu, Mingxia 
> Subject: [PATCH] net/idpf: refine RTE_PMD_REGISTER_PARAM_STRING of IDPF
> PMD

No need to mention the PMD in the title as its duplicate with the prefix.

How about 

net/idpf: refine vport parameter string

> 
> This patch refines 'IDPF_VPORT' param string in
> 'RTE_PMD_REGISTER_PARAM_STRING'.
> 
> Signed-off-by: Mingxia Liu 
> ---
>  drivers/net/idpf/idpf_ethdev.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
> index 3e66898aaf..34ca5909f1 100644
> --- a/drivers/net/idpf/idpf_ethdev.c
> +++ b/drivers/net/idpf/idpf_ethdev.c
> @@ -1478,9 +1478,9 @@ RTE_PMD_REGISTER_PCI(net_idpf, rte_idpf_pmd);
> RTE_PMD_REGISTER_PCI_TABLE(net_idpf, pci_id_idpf_map);
> RTE_PMD_REGISTER_KMOD_DEP(net_idpf, "* igb_uio | vfio-pci");
> RTE_PMD_REGISTER_PARAM_STRING(net_idpf,
> -   IDPF_TX_SINGLE_Q "=<0|1> "
> -   IDPF_RX_SINGLE_Q "=<0|1> "
> -   IDPF_VPORT "=[vport_set0,[vport_set1],...]");
> + IDPF_TX_SINGLE_Q "=<0|1> "
> + IDPF_RX_SINGLE_Q "=<0|1> "
> + IDPF_VPORT
> +"=[vport0_begin[-vport0_end][,vport1_begin[-vport1_end]][,..]]");

Better to use "<>" to wrap a symbol
How about "<vport0_begin>[-<vport0_end>][,<vport1_begin>[-<vport1_end>]][, ... ]"?

> 
>  RTE_LOG_REGISTER_SUFFIX(idpf_logtype_init, init, NOTICE);
> RTE_LOG_REGISTER_SUFFIX(idpf_logtype_driver, driver, NOTICE);
> --
> 2.34.1



[PATCH v2] net/mlx5: fix the error set in Tx representor tagging

2023-06-30 Thread Bing Zhao
In the previous implementation, the error information was not set
when there was a failure during the initialization.

The pointer from the user should be passed to the called functions
to be set properly before returning.

Fixes: 483181f7b6dd ("net/mlx5: support device control of representor matching")
Cc: dsosnow...@nvidia.com
Cc: sta...@dpdk.org

Signed-off-by: Bing Zhao 
Acked-by: Ori Kam 
---
v2: CC stable
---
 drivers/net/mlx5/mlx5_flow_hw.c | 44 +++--
 1 file changed, 25 insertions(+), 19 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index ba2f1f7c92..6683bcbc7f 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -5961,12 +5961,14 @@ flow_hw_destroy_send_to_kernel_action(struct mlx5_priv 
*priv)
  *
  * @param dev
  *   Pointer to Ethernet device.
+ * @param[out] error
+ *   Pointer to error structure.
  *
  * @return
  *   Pointer to pattern template on success. NULL otherwise, and rte_errno is 
set.
  */
 static struct rte_flow_pattern_template *
-flow_hw_create_tx_repr_sq_pattern_tmpl(struct rte_eth_dev *dev)
+flow_hw_create_tx_repr_sq_pattern_tmpl(struct rte_eth_dev *dev, struct 
rte_flow_error *error)
 {
struct rte_flow_pattern_template_attr attr = {
.relaxed_matching = 0,
@@ -5985,7 +5987,7 @@ flow_hw_create_tx_repr_sq_pattern_tmpl(struct rte_eth_dev 
*dev)
},
};
 
-   return flow_hw_pattern_template_create(dev, &attr, items, NULL);
+   return flow_hw_pattern_template_create(dev, &attr, items, error);
 }
 
 static __rte_always_inline uint32_t
@@ -6043,12 +6045,15 @@ flow_hw_update_action_mask(struct rte_flow_action 
*action,
  *
  * @param dev
  *   Pointer to Ethernet device.
+ * @param[out] error
+ *   Pointer to error structure.
  *
  * @return
  *   Pointer to actions template on success. NULL otherwise, and rte_errno is 
set.
  */
 static struct rte_flow_actions_template *
-flow_hw_create_tx_repr_tag_jump_acts_tmpl(struct rte_eth_dev *dev)
+flow_hw_create_tx_repr_tag_jump_acts_tmpl(struct rte_eth_dev *dev,
+ struct rte_flow_error *error)
 {
uint32_t tag_mask = flow_hw_tx_tag_regc_mask(dev);
uint32_t tag_value = flow_hw_tx_tag_regc_value(dev);
@@ -6137,7 +6142,7 @@ flow_hw_create_tx_repr_tag_jump_acts_tmpl(struct 
rte_eth_dev *dev)
   NULL, NULL);
idx++;
MLX5_ASSERT(idx <= RTE_DIM(actions_v));
-   return flow_hw_actions_template_create(dev, &attr, actions_v, 
actions_m, NULL);
+   return flow_hw_actions_template_create(dev, &attr, actions_v, 
actions_m, error);
 }
 
 static void
@@ -6166,12 +6171,14 @@ flow_hw_cleanup_tx_repr_tagging(struct rte_eth_dev *dev)
  *
  * @param dev
  *   Pointer to Ethernet device.
+ * @param[out] error
+ *   Pointer to error structure.
  *
  * @return
  *   0 on success, negative errno value otherwise.
  */
 static int
-flow_hw_setup_tx_repr_tagging(struct rte_eth_dev *dev)
+flow_hw_setup_tx_repr_tagging(struct rte_eth_dev *dev, struct rte_flow_error 
*error)
 {
struct mlx5_priv *priv = dev->data->dev_private;
struct rte_flow_template_table_attr attr = {
@@ -6189,20 +6196,22 @@ flow_hw_setup_tx_repr_tagging(struct rte_eth_dev *dev)
 
MLX5_ASSERT(priv->sh->config.dv_esw_en);
MLX5_ASSERT(priv->sh->config.repr_matching);
-   priv->hw_tx_repr_tagging_pt = 
flow_hw_create_tx_repr_sq_pattern_tmpl(dev);
+   priv->hw_tx_repr_tagging_pt =
+   flow_hw_create_tx_repr_sq_pattern_tmpl(dev, error);
if (!priv->hw_tx_repr_tagging_pt)
-   goto error;
-   priv->hw_tx_repr_tagging_at = 
flow_hw_create_tx_repr_tag_jump_acts_tmpl(dev);
+   goto err;
+   priv->hw_tx_repr_tagging_at =
+   flow_hw_create_tx_repr_tag_jump_acts_tmpl(dev, error);
if (!priv->hw_tx_repr_tagging_at)
-   goto error;
+   goto err;
priv->hw_tx_repr_tagging_tbl = flow_hw_table_create(dev, &cfg,

&priv->hw_tx_repr_tagging_pt, 1,

&priv->hw_tx_repr_tagging_at, 1,
-   NULL);
+   error);
if (!priv->hw_tx_repr_tagging_tbl)
-   goto error;
+   goto err;
return 0;
-error:
+err:
flow_hw_cleanup_tx_repr_tagging(dev);
return -rte_errno;
 }
@@ -7634,8 +7643,7 @@ flow_hw_configure(struct rte_eth_dev *dev,
goto err;
}
 
-   memcpy(_queue_attr, queue_attr,
-  sizeof(void *) * nb_queue);
+   memcpy(_queue_attr, queue_attr, sizeof(void *) * nb_queue);
_queue_attr[nb_queue] = &ctrl_queue_attr;
priv->acts_ipool = mlx5_ipool_create(&cfg);
if (!priv->acts_ipool)
@@ -7

[PATCH v2] net/txgbe: fix blocking system events

2023-06-30 Thread Jiawen Wu
Refer to commit 819d0d1d57f1 ("net/ixgbe: fix blocking system events").
Fix the same issue as ixgbe.

The TXGBE link status task uses the rte alarm thread in the old
implementation. Sometimes the txgbe link status task takes up to
9 seconds, which severely affects the tasks that depend on the rte
alarm thread, like interrupt or hotplug events. So replace it with
an independent thread that has the same thread affinity settings
as the rte interrupt thread.

Fixes: 0c061eadec59 ("net/txgbe: add link status change")
Cc: sta...@dpdk.org

Signed-off-by: Jiawen Wu 
---
 drivers/net/txgbe/txgbe_ethdev.c| 70 ++---
 drivers/net/txgbe/txgbe_ethdev.h|  6 +++
 drivers/net/txgbe/txgbe_ethdev_vf.c |  6 ++-
 3 files changed, 74 insertions(+), 8 deletions(-)

diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 74765a469d..d942b542ea 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -546,6 +546,7 @@ txgbe_parse_devargs(struct txgbe_hw *hw, struct rte_devargs 
*devargs)
 static int
 eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 {
+   struct txgbe_adapter *ad = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
struct txgbe_hw *hw = TXGBE_DEV_HW(eth_dev);
struct txgbe_vfta *shadow_vfta = TXGBE_DEV_VFTA(eth_dev);
@@ -594,6 +595,7 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void 
*init_params __rte_unused)
return 0;
}
 
+   __atomic_clear(&ad->link_thread_running, __ATOMIC_SEQ_CST);
rte_eth_copy_pci_info(eth_dev, pci_dev);
 
hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
@@ -1680,7 +1682,7 @@ txgbe_dev_start(struct rte_eth_dev *dev)
 
/* Stop the link setup handler before resetting the HW. */
rte_eal_alarm_cancel(txgbe_dev_detect_sfp, dev);
-   rte_eal_alarm_cancel(txgbe_dev_setup_link_alarm_handler, dev);
+   txgbe_dev_wait_setup_link_complete(dev, 0);
 
/* disable uio/vfio intr/eventfd mapping */
rte_intr_disable(intr_handle);
@@ -1919,7 +1921,7 @@ txgbe_dev_stop(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
 
rte_eal_alarm_cancel(txgbe_dev_detect_sfp, dev);
-   rte_eal_alarm_cancel(txgbe_dev_setup_link_alarm_handler, dev);
+   txgbe_dev_wait_setup_link_complete(dev, 0);
 
/* disable interrupts */
txgbe_disable_intr(hw);
@@ -2803,11 +2805,52 @@ txgbe_dev_setup_link_alarm_handler(void *param)
intr->flags &= ~TXGBE_FLAG_NEED_LINK_CONFIG;
 }
 
+/*
+ * If @timeout_ms was 0, it means that it will not return until link complete.
+ * It returns 1 on complete, return 0 on timeout.
+ */
+int
+txgbe_dev_wait_setup_link_complete(struct rte_eth_dev *dev, uint32_t 
timeout_ms)
+{
+#define WARNING_TIMEOUT 9000 /* 9s in total */
+   struct txgbe_adapter *ad = TXGBE_DEV_ADAPTER(dev);
+   uint32_t timeout = timeout_ms ? timeout_ms : WARNING_TIMEOUT;
+
+   while (__atomic_load_n(&ad->link_thread_running, __ATOMIC_SEQ_CST)) {
+   msec_delay(1);
+   timeout--;
+
+   if (timeout_ms) {
+   if (!timeout)
+   return 0;
+   } else if (!timeout) {
+   /* It will not return until link complete */
+   timeout = WARNING_TIMEOUT;
+   PMD_DRV_LOG(ERR, "TXGBE link thread not complete too 
long time!");
+   }
+   }
+
+   return 1;
+}
+
+static uint32_t
+txgbe_dev_setup_link_thread_handler(void *param)
+{
+   struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
+   struct txgbe_adapter *ad = TXGBE_DEV_ADAPTER(dev);
+
+   rte_thread_detach(rte_thread_self());
+   txgbe_dev_setup_link_alarm_handler(dev);
+   __atomic_clear(&ad->link_thread_running, __ATOMIC_SEQ_CST);
+   return 0;
+}
+
 /* return 0 means link status changed, -1 means not changed */
 int
 txgbe_dev_link_update_share(struct rte_eth_dev *dev,
int wait_to_complete)
 {
+   struct txgbe_adapter *ad = TXGBE_DEV_ADAPTER(dev);
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct rte_eth_link link;
u32 link_speed = TXGBE_LINK_SPEED_UNKNOWN;
@@ -2844,10 +2887,25 @@ txgbe_dev_link_update_share(struct rte_eth_dev *dev,
if ((hw->subsystem_device_id & 0xFF) ==
TXGBE_DEV_ID_KR_KX_KX4) {
hw->mac.bp_down_event(hw);
-   } else if (hw->phy.media_type == txgbe_media_type_fiber) {
-   intr->flags |= TXGBE_FLAG_NEED_LINK_CONFIG;
-   rte_eal_alarm_set(10,
-   txgbe_dev_setup_link_alarm_handler, dev);
+   } else if (hw->phy.media_type == txgbe_media_type_fiber &&
+   dev->data->dev_conf.intr_conf.lsc != 0) {
+   txgbe_dev_w

[PATCH v2] net/mlx5: fix the return value of vport action

2023-06-30 Thread Bing Zhao
The "rte_flow_error" should be set to avoid an invalid pointer
access to the "message" member.

Fixes: 1939eb6f660c ("net/mlx5: support flow port action with HWS")
Cc: dsosnow...@nvidia.com
Cc: sta...@dpdk.org

Signed-off-by: Bing Zhao 
Acked-by: Ori Kam 
---
v2: add CC stable
---
 drivers/net/mlx5/mlx5_flow_hw.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 6683bcbc7f..87584c1e94 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -7830,7 +7830,8 @@ flow_hw_configure(struct rte_eth_dev *dev,
if (is_proxy) {
ret = flow_hw_create_vport_actions(priv);
if (ret) {
-   rte_errno = -ret;
+   rte_flow_error_set(error, -ret, 
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+  NULL, "Failed to create vport 
actions.");
goto err;
}
ret = flow_hw_create_ctrl_tables(dev);
-- 
2.34.1



[PATCH v2] net/mlx5: fix the error set in control tables create

2023-06-30 Thread Bing Zhao
When some failure occurs in the flow_hw_create_ctrl_tables(), the
"rte_flow_error" should be set properly with all needed information.
Then the rte_errno and the "message" will reflect the actual failure.

This will also solve the crash when trying to access the "message".

Fixes: 1939eb6f660c ("net/mlx5: support flow port action with HWS")
Fixes: 483181f7b6dd ("net/mlx5: support device control of representor matching")
Fixes: ddb68e47331e ("net/mlx5: add extended metadata mode for HWS")
Cc: dsosnow...@nvidia.com
Cc: sta...@dpdk.org

Signed-off-by: Bing Zhao 
Reviewed-by: Suanming Mou 
Acked-by: Ori Kam 
---
v2: add CC stable
---
 drivers/net/mlx5/mlx5_flow_hw.c | 176 +++-
 1 file changed, 106 insertions(+), 70 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 87584c1e94..4163fe23e6 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -5421,7 +5421,7 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev,
struct rte_flow_pattern_template *it;
struct rte_flow_item *copied_items = NULL;
const struct rte_flow_item *tmpl_items;
-   uint64_t orig_item_nb;
+   uint32_t orig_item_nb;
struct rte_flow_item port = {
.type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT,
.mask = &rte_flow_item_ethdev_mask,
@@ -6242,12 +6242,15 @@ flow_hw_esw_mgr_regc_marker(struct rte_eth_dev *dev)
  *
  * @param dev
  *   Pointer to Ethernet device.
+ * @param error
+ *   Pointer to error structure.
  *
  * @return
  *   Pointer to flow pattern template on success, NULL otherwise.
  */
 static struct rte_flow_pattern_template *
-flow_hw_create_ctrl_esw_mgr_pattern_template(struct rte_eth_dev *dev)
+flow_hw_create_ctrl_esw_mgr_pattern_template(struct rte_eth_dev *dev,
+struct rte_flow_error *error)
 {
struct rte_flow_pattern_template_attr attr = {
.relaxed_matching = 0,
@@ -6277,7 +6280,7 @@ flow_hw_create_ctrl_esw_mgr_pattern_template(struct 
rte_eth_dev *dev)
},
};
 
-   return flow_hw_pattern_template_create(dev, &attr, items, NULL);
+   return flow_hw_pattern_template_create(dev, &attr, items, error);
 }
 
 /**
@@ -6290,12 +6293,15 @@ flow_hw_create_ctrl_esw_mgr_pattern_template(struct 
rte_eth_dev *dev)
  *
  * @param dev
  *   Pointer to Ethernet device.
+ * @param error
+ *   Pointer to error structure.
  *
  * @return
  *   Pointer to flow pattern template on success, NULL otherwise.
  */
 static struct rte_flow_pattern_template *
-flow_hw_create_ctrl_regc_sq_pattern_template(struct rte_eth_dev *dev)
+flow_hw_create_ctrl_regc_sq_pattern_template(struct rte_eth_dev *dev,
+struct rte_flow_error *error)
 {
struct rte_flow_pattern_template_attr attr = {
.relaxed_matching = 0,
@@ -6328,7 +6334,7 @@ flow_hw_create_ctrl_regc_sq_pattern_template(struct 
rte_eth_dev *dev)
},
};
 
-   return flow_hw_pattern_template_create(dev, &attr, items, NULL);
+   return flow_hw_pattern_template_create(dev, &attr, items, error);
 }
 
 /**
@@ -6338,12 +6344,15 @@ flow_hw_create_ctrl_regc_sq_pattern_template(struct 
rte_eth_dev *dev)
  *
  * @param dev
  *   Pointer to Ethernet device.
+ * @param error
+ *   Pointer to error structure.
  *
  * @return
  *   Pointer to flow pattern template on success, NULL otherwise.
  */
 static struct rte_flow_pattern_template *
-flow_hw_create_ctrl_port_pattern_template(struct rte_eth_dev *dev)
+flow_hw_create_ctrl_port_pattern_template(struct rte_eth_dev *dev,
+ struct rte_flow_error *error)
 {
struct rte_flow_pattern_template_attr attr = {
.relaxed_matching = 0,
@@ -6362,7 +6371,7 @@ flow_hw_create_ctrl_port_pattern_template(struct 
rte_eth_dev *dev)
},
};
 
-   return flow_hw_pattern_template_create(dev, &attr, items, NULL);
+   return flow_hw_pattern_template_create(dev, &attr, items, error);
 }
 
 /*
@@ -6372,12 +6381,15 @@ flow_hw_create_ctrl_port_pattern_template(struct 
rte_eth_dev *dev)
  *
  * @param dev
  *   Pointer to Ethernet device.
+ * @param error
+ *   Pointer to error structure.
  *
  * @return
  *   Pointer to flow pattern template on success, NULL otherwise.
  */
 static struct rte_flow_pattern_template *
-flow_hw_create_tx_default_mreg_copy_pattern_template(struct rte_eth_dev *dev)
+flow_hw_create_tx_default_mreg_copy_pattern_template(struct rte_eth_dev *dev,
+struct rte_flow_error 
*error)
 {
struct rte_flow_pattern_template_attr tx_pa_attr = {
.relaxed_matching = 0,
@@ -6398,10 +6410,8 @@ 
flow_hw_create_tx_default_mreg_copy_pattern_template(struct rte_eth_dev *dev)
.type = RTE_FLOW_ITEM_TYPE_END,
},
};

Re: [PATCH 0/3] doc: fix some errors in hns3 guide

2023-06-30 Thread Ferruh Yigit
On 6/30/2023 8:37 AM, Dongdong Liu wrote:
> This patchset is to fix some errors in hns3 guide doc.
> 
> Dongdong Liu (3):
>   doc: fix invalid link in hns3 guide
>   doc: fix wrong syntax in hns3 guide
>   doc: fix wrong number of spaces in hns3 guide
>

Thanks for the fix.

Series applied to dpdk-next-net/main, thanks.


[PATCH v3] net/mlx5: fix the error set in control tables create

2023-06-30 Thread Bing Zhao
When some failure occurs in the flow_hw_create_ctrl_tables(), the
"rte_flow_error" should be set properly with all needed information.
Then the rte_errno and the "message" will reflect the actual failure.

This will also solve the crash when trying to access the "message".

Fixes: 1939eb6f660c ("net/mlx5: support flow port action with HWS")
Fixes: 483181f7b6dd ("net/mlx5: support device control of representor matching")
Fixes: ddb68e47331e ("net/mlx5: add extended metadata mode for HWS")
Cc: dsosnow...@nvidia.com
Cc: sta...@dpdk.org

Signed-off-by: Bing Zhao 
Reviewed-by: Suanming Mou 
Acked-by: Ori Kam 
---
v2: add CC stable
v3: fix the typo
---
 drivers/net/mlx5/mlx5_flow_hw.c | 176 +++-
 1 file changed, 106 insertions(+), 70 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 87584c1e94..4163fe23e6 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -5421,7 +5421,7 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev,
struct rte_flow_pattern_template *it;
struct rte_flow_item *copied_items = NULL;
const struct rte_flow_item *tmpl_items;
-   uint64_t orig_item_nb;
+   uint32_t orig_item_nb;
struct rte_flow_item port = {
.type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT,
.mask = &rte_flow_item_ethdev_mask,
@@ -6242,12 +6242,15 @@ flow_hw_esw_mgr_regc_marker(struct rte_eth_dev *dev)
  *
  * @param dev
  *   Pointer to Ethernet device.
+ * @param error
+ *   Pointer to error structure.
  *
  * @return
  *   Pointer to flow pattern template on success, NULL otherwise.
  */
 static struct rte_flow_pattern_template *
-flow_hw_create_ctrl_esw_mgr_pattern_template(struct rte_eth_dev *dev)
+flow_hw_create_ctrl_esw_mgr_pattern_template(struct rte_eth_dev *dev,
+struct rte_flow_error *error)
 {
struct rte_flow_pattern_template_attr attr = {
.relaxed_matching = 0,
@@ -6277,7 +6280,7 @@ flow_hw_create_ctrl_esw_mgr_pattern_template(struct 
rte_eth_dev *dev)
},
};
 
-   return flow_hw_pattern_template_create(dev, &attr, items, NULL);
+   return flow_hw_pattern_template_create(dev, &attr, items, error);
 }
 
 /**
@@ -6290,12 +6293,15 @@ flow_hw_create_ctrl_esw_mgr_pattern_template(struct 
rte_eth_dev *dev)
  *
  * @param dev
  *   Pointer to Ethernet device.
+ * @param error
+ *   Pointer to error structure.
  *
  * @return
  *   Pointer to flow pattern template on success, NULL otherwise.
  */
 static struct rte_flow_pattern_template *
-flow_hw_create_ctrl_regc_sq_pattern_template(struct rte_eth_dev *dev)
+flow_hw_create_ctrl_regc_sq_pattern_template(struct rte_eth_dev *dev,
+struct rte_flow_error *error)
 {
struct rte_flow_pattern_template_attr attr = {
.relaxed_matching = 0,
@@ -6328,7 +6334,7 @@ flow_hw_create_ctrl_regc_sq_pattern_template(struct 
rte_eth_dev *dev)
},
};
 
-   return flow_hw_pattern_template_create(dev, &attr, items, NULL);
+   return flow_hw_pattern_template_create(dev, &attr, items, error);
 }
 
 /**
@@ -6338,12 +6344,15 @@ flow_hw_create_ctrl_regc_sq_pattern_template(struct 
rte_eth_dev *dev)
  *
  * @param dev
  *   Pointer to Ethernet device.
+ * @param error
+ *   Pointer to error structure.
  *
  * @return
  *   Pointer to flow pattern template on success, NULL otherwise.
  */
 static struct rte_flow_pattern_template *
-flow_hw_create_ctrl_port_pattern_template(struct rte_eth_dev *dev)
+flow_hw_create_ctrl_port_pattern_template(struct rte_eth_dev *dev,
+ struct rte_flow_error *error)
 {
struct rte_flow_pattern_template_attr attr = {
.relaxed_matching = 0,
@@ -6362,7 +6371,7 @@ flow_hw_create_ctrl_port_pattern_template(struct 
rte_eth_dev *dev)
},
};
 
-   return flow_hw_pattern_template_create(dev, &attr, items, NULL);
+   return flow_hw_pattern_template_create(dev, &attr, items, error);
 }
 
 /*
@@ -6372,12 +6381,15 @@ flow_hw_create_ctrl_port_pattern_template(struct 
rte_eth_dev *dev)
  *
  * @param dev
  *   Pointer to Ethernet device.
+ * @param error
+ *   Pointer to error structure.
  *
  * @return
  *   Pointer to flow pattern template on success, NULL otherwise.
  */
 static struct rte_flow_pattern_template *
-flow_hw_create_tx_default_mreg_copy_pattern_template(struct rte_eth_dev *dev)
+flow_hw_create_tx_default_mreg_copy_pattern_template(struct rte_eth_dev *dev,
+struct rte_flow_error 
*error)
 {
struct rte_flow_pattern_template_attr tx_pa_attr = {
.relaxed_matching = 0,
@@ -6398,10 +6410,8 @@ 
flow_hw_create_tx_default_mreg_copy_pattern_template(struct rte_eth_dev *dev)
.type = RTE_FLOW_ITEM_TYPE_END,
  

Re: [PATCH v2] net/txgbe: fix blocking system events

2023-06-30 Thread Ferruh Yigit
On 6/30/2023 10:35 AM, Jiawen Wu wrote:
> Refer to commit 819d0d1d57f1 ("net/ixgbe: fix blocking system events").
> Fix the same issue as ixgbe.
> 
> TXGBE link status task uses rte alarm thread in old implementation.
> Sometime txgbe link status task takes up to 9 seconds. This will
> severely affect the rte-alarm-thread dependent tasks in the
> system, like interrupt or hotplug event. So replace with an
> independent thread which has the same thread affinity settings
> as rte interrupt.
> 
> Fixes: 0c061eadec59 ("net/txgbe: add link status change")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Jiawen Wu 
>

Applied to dpdk-next-net/main, thanks.


Re: [PATCH V2 2/2] app/testpmd: assign custom ID to flow rules

2023-06-30 Thread Ferruh Yigit
On 6/2/2023 9:19 PM, Ferruh Yigit wrote:
> On 3/16/2023 2:19 PM, Gregory Etelson wrote:
>> From: Eli Britstein 
>>
>> Upon creation of a flow, testpmd assigns it a flow ID. Later, the
>> flow ID is used for flow operations (query, destroy, dump).
>>
>> The testpmd application allows to manage flow rules with its IDs.
>> The flow ID is known only when the flow is created.
>> In order to prepare a complete sequence of testpmd commands to
>> copy/paste, the flow IDs must be predictable.
>>
>> Allow the user to provide an assigned ID.
>>
>> Example:
>> testpmd> flow create 0 ingress user_id 0x1234 pattern eth / end actions
>> count / drop / end
>> Flow rule #0 created, user-id 0x1234
>>
>> testpmd> flow query 0 0x1234 count user_id
>>
>> testpmd> flow dump 0 user_id rule 0x1234
>>
>> testpmd> flow destroy 0 rule 0x1234 user_id
>> Flow rule #0 destroyed, user-id 0x1234
>>
>> Here, "user_id" is a flag that signifies the "rule" ID is the user-id.
>>
>> The motivation is from OVS. OVS dumps its "rte_flow_create" calls to the
>> log in testpmd command syntax. As the flow ID testpmd would assign is
>> unknown, it cannot log valid "flow destroy" commands.
>>
>> With this enhancement, valid testpmd commands can be created in a
>> log to copy/paste to testpmd.
>> The application's flows sequence can then be played back in
>> testpmd, to enable enhanced dpdk debug capabilities of the
>> applications's flows in a controlled environment of testpmd
>> rather than a dynamic, more difficult to debug environment of the
>> application.
>>
>> Signed-off-by: Eli Britstein 
>> ---
>>  app/test-pmd/cmdline_flow.c | 72 +++--
>>  app/test-pmd/config.c   | 34 +++---
>>  app/test-pmd/testpmd.h  | 12 ++--
>>  doc/guides/testpmd_app_ug/testpmd_funcs.rst | 33 +++---
>>  4 files changed, 121 insertions(+), 30 deletions(-)
>>
> 
> Hi Ori,
> 
> Can you please help reviewing this patch?
> 

Reminder for review.



[PATCH v2] net/mlx5: fix the error set for age pool initialization

2023-06-30 Thread Bing Zhao
The rte_flow_error needs to be set when the age pool initialization
fails. Otherwise the application will crash due to the access of the
invalid "message" field.

Fixes: 04a4de756e14 ("net/mlx5: support flow age action with HWS")
Cc: michae...@nvidia.com
Cc: sta...@dpdk.org

Signed-off-by: Bing Zhao 
Acked-by: Ori Kam 
---
v2: add CC stable
---
 drivers/net/mlx5/mlx5_flow_hw.c | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 4163fe23e6..20941b4fc7 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -7913,8 +7913,11 @@ flow_hw_configure(struct rte_eth_dev *dev,
goto err;
}
ret = mlx5_hws_age_pool_init(dev, port_attr, nb_queue);
-   if (ret < 0)
+   if (ret < 0) {
+   rte_flow_error_set(error, -ret, 
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+  NULL, "Failed to init age pool.");
goto err;
+   }
}
ret = flow_hw_create_vlan(dev);
if (ret)
-- 
2.34.1



[PATCH v2] net/idpf: refine vport parameter string

2023-06-30 Thread Mingxia Liu
This patch refines 'IDPF_VPORT' param string in
'RTE_PMD_REGISTER_PARAM_STRING'.

Signed-off-by: Mingxia Liu 
---
 drivers/net/idpf/idpf_ethdev.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 6d9a53c94c..75b4f301ab 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -1472,9 +1472,9 @@ RTE_PMD_REGISTER_PCI(net_idpf, rte_idpf_pmd);
 RTE_PMD_REGISTER_PCI_TABLE(net_idpf, pci_id_idpf_map);
 RTE_PMD_REGISTER_KMOD_DEP(net_idpf, "* igb_uio | vfio-pci");
 RTE_PMD_REGISTER_PARAM_STRING(net_idpf,
- IDPF_TX_SINGLE_Q "=<0|1> "
- IDPF_RX_SINGLE_Q "=<0|1> "
- IDPF_VPORT "=[vport_set0,[vport_set1],...]");
+   IDPF_TX_SINGLE_Q "=<0|1> "
+   IDPF_RX_SINGLE_Q "=<0|1> "
+   IDPF_VPORT "=[<vport0_begin>[-<vport0_end>][,<vport1_begin>[-<vport1_end>]][, ... ]]");
 
 RTE_LOG_REGISTER_SUFFIX(idpf_logtype_init, init, NOTICE);
 RTE_LOG_REGISTER_SUFFIX(idpf_logtype_driver, driver, NOTICE);
-- 
2.34.1



[PATCH v2] net/idpf: fix error path processing

2023-06-30 Thread Mingxia Liu
This patch moves the vport info update lines to the end of the
function, to fix the missing revert in the error handling path.

Fixes: 5e0f60527e5b ("net/idpf: remove vport req and recv info from adapter")

Signed-off-by: Mingxia Liu 
---
 drivers/net/idpf/idpf_ethdev.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 4b7cc81550..6d9a53c94c 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -1277,10 +1277,6 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void 
*init_params)
goto err;
}
 
-   adapter->vports[param->idx] = vport;
-   adapter->cur_vports |= RTE_BIT32(param->devarg_id);
-   adapter->cur_vport_nb++;
-
dev->data->mac_addrs = rte_zmalloc(NULL, RTE_ETHER_ADDR_LEN, 0);
if (dev->data->mac_addrs == NULL) {
PMD_INIT_LOG(ERR, "Cannot allocate mac_addr memory.");
@@ -1291,6 +1287,10 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void 
*init_params)
rte_ether_addr_copy((struct rte_ether_addr *)vport->default_mac_addr,
&dev->data->mac_addrs[0]);
 
+   adapter->vports[param->idx] = vport;
+   adapter->cur_vports |= RTE_BIT32(param->devarg_id);
+   adapter->cur_vport_nb++;
+
return 0;
 
 err_mac_addrs:
-- 
2.34.1



[PATCH v2] net/idpf: refine dev_link_update function

2023-06-30 Thread Mingxia Liu
This patch refines the idpf_dev_link_update callback function
according to the CPFL PMD base code.

Signed-off-by: Mingxia Liu 
---
 drivers/net/idpf/idpf_ethdev.c | 63 +++---
 1 file changed, 28 insertions(+), 35 deletions(-)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index fb5965..4aa0a18cd8 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -30,6 +30,23 @@ static const char * const idpf_valid_args[] = {
NULL
 };
 
+uint32_t idpf_supported_speeds[] = {
+   RTE_ETH_SPEED_NUM_NONE,
+   RTE_ETH_SPEED_NUM_10M,
+   RTE_ETH_SPEED_NUM_100M,
+   RTE_ETH_SPEED_NUM_1G,
+   RTE_ETH_SPEED_NUM_2_5G,
+   RTE_ETH_SPEED_NUM_5G,
+   RTE_ETH_SPEED_NUM_10G,
+   RTE_ETH_SPEED_NUM_20G,
+   RTE_ETH_SPEED_NUM_25G,
+   RTE_ETH_SPEED_NUM_40G,
+   RTE_ETH_SPEED_NUM_50G,
+   RTE_ETH_SPEED_NUM_56G,
+   RTE_ETH_SPEED_NUM_100G,
+   RTE_ETH_SPEED_NUM_200G
+};
+
 static const uint64_t idpf_map_hena_rss[] = {
[IDPF_HASH_NONF_UNICAST_IPV4_UDP] =
RTE_ETH_RSS_NONFRAG_IPV4_UDP,
@@ -110,47 +127,23 @@ idpf_dev_link_update(struct rte_eth_dev *dev,
 {
struct idpf_vport *vport = dev->data->dev_private;
struct rte_eth_link new_link;
+   unsigned int i;
 
memset(&new_link, 0, sizeof(new_link));
 
-   switch (vport->link_speed) {
-   case RTE_ETH_SPEED_NUM_10M:
-   new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
-   break;
-   case RTE_ETH_SPEED_NUM_100M:
-   new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
-   break;
-   case RTE_ETH_SPEED_NUM_1G:
-   new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
-   break;
-   case RTE_ETH_SPEED_NUM_10G:
-   new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
-   break;
-   case RTE_ETH_SPEED_NUM_20G:
-   new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
-   break;
-   case RTE_ETH_SPEED_NUM_25G:
-   new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
-   break;
-   case RTE_ETH_SPEED_NUM_40G:
-   new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
-   break;
-   case RTE_ETH_SPEED_NUM_50G:
-   new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
-   break;
-   case RTE_ETH_SPEED_NUM_100G:
-   new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
-   break;
-   case RTE_ETH_SPEED_NUM_200G:
-   new_link.link_speed = RTE_ETH_SPEED_NUM_200G;
-   break;
-   default:
-   new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+   /* initialize with default value */
+   new_link.link_speed = vport->link_up ? RTE_ETH_SPEED_NUM_UNKNOWN : 
RTE_ETH_SPEED_NUM_NONE;
+
+   /* update in case a match */
+   for (i = 0; i < RTE_DIM(idpf_supported_speeds); i++) {
+   if (vport->link_speed == idpf_supported_speeds[i]) {
+   new_link.link_speed = vport->link_speed;
+   break;
+   }
}
 
new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
-   new_link.link_status = vport->link_up ? RTE_ETH_LINK_UP :
-   RTE_ETH_LINK_DOWN;
+   new_link.link_status = vport->link_up ? RTE_ETH_LINK_UP : 
RTE_ETH_LINK_DOWN;
new_link.link_autoneg = (dev->data->dev_conf.link_speeds & 
RTE_ETH_LINK_SPEED_FIXED) ?
 RTE_ETH_LINK_FIXED : RTE_ETH_LINK_AUTONEG;
 
-- 
2.34.1
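[Editor's note] The refactor above replaces a per-speed switch with a table scan plus a
default. A minimal self-contained sketch of the same pattern (the names and speed
values here are illustrative, not the driver's actual definitions):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define SPEED_NONE    0u
#define SPEED_UNKNOWN UINT32_MAX

static const uint32_t supported_speeds[] = {
	10, 100, 1000, 10000, 25000, 40000, 100000, 200000,
};

/* Return the reported speed if it is in the supported table; otherwise
 * fall back to UNKNOWN when the link is up and NONE when it is down. */
static uint32_t
resolve_link_speed(uint32_t reported, bool link_up)
{
	uint32_t speed = link_up ? SPEED_UNKNOWN : SPEED_NONE;
	size_t i;

	for (i = 0; i < sizeof(supported_speeds) / sizeof(supported_speeds[0]); i++) {
		if (reported == supported_speeds[i]) {
			speed = reported;
			break;
		}
	}
	return speed;
}
```

Adding a new speed then only touches the table, not the control flow, which is the
point of the refactor.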



RE: [PATCH] net/idpf: refine idpf_dev_vport_init() function

2023-06-30 Thread Liu, Mingxia



> -Original Message-
> From: Zhang, Qi Z 
> Sent: Friday, June 30, 2023 4:14 PM
> To: Liu, Mingxia ; dev@dpdk.org
> Cc: Wu, Jingjing ; Xing, Beilei 
> ;
> Liu, Mingxia 
> Subject: RE: [PATCH] net/idpf: refine idpf_dev_vport_init() function
> 
> 
> 
> > -Original Message-
> > From: Zhang, Qi Z
> > Sent: Friday, June 30, 2023 4:13 PM
> > To: Mingxia Liu ; dev@dpdk.org
> > Cc: Wu, Jingjing ; Xing, Beilei
> > ; Liu, Mingxia 
> > Subject: RE: [PATCH] net/idpf: refine idpf_dev_vport_init() function
> >
> >
> >
> > > -Original Message-
> > > From: Mingxia Liu 
> > > Sent: Monday, June 26, 2023 10:06 PM
> > > To: dev@dpdk.org
> > > Cc: Wu, Jingjing ; Xing, Beilei
> > > ; Liu, Mingxia 
> > > Subject: [PATCH] net/idpf: refine idpf_dev_vport_init() function
> > >
> > > This patch adds the update of 'cur_vports' and 'cur_vport_nb' in the error path.
> > >
> > > Signed-off-by: Mingxia Liu 
> > > ---
> > >  drivers/net/idpf/idpf_ethdev.c | 2 ++
> > >  1 file changed, 2 insertions(+)
> > >
> > > diff --git a/drivers/net/idpf/idpf_ethdev.c
> > > b/drivers/net/idpf/idpf_ethdev.c index 801da57472..3e66898aaf 100644
> > > --- a/drivers/net/idpf/idpf_ethdev.c
> > > +++ b/drivers/net/idpf/idpf_ethdev.c
> > > @@ -1300,6 +1300,8 @@ idpf_dev_vport_init(struct rte_eth_dev *dev,
> > > void
> > > *init_params)
> > >  err_mac_addrs:
> > >   adapter->vports[param->idx] = NULL;  /* reset */
> > >   idpf_vport_deinit(vport);
> > > + adapter->cur_vports &= ~RTE_BIT32(param->devarg_id);
> > > + adapter->cur_vport_nb--;
> >
> > Can we move below two lines to the last?
> >
> > adapter->cur_vports |= RTE_BIT32(param->devarg_id); cur_vport_nb++;
> >
> > so we don't need to revert them in error handle
> 
> Btw this is a fix, please also add the fixline.
> >
[Liu, Mingxia] Thanks, new patch has been sent.

> > >  err:
> > >   return ret;
> > >  }
> > > --
> > > 2.34.1
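[Editor's note] Qi's suggestion — update 'cur_vports'/'cur_vport_nb' only after every
fallible step has succeeded — is a general pattern: commit shared state last so the
error path has nothing to revert. A standalone sketch under that assumption (the
struct and the failure injection are illustrative, not the driver's code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct adapter {
	uint32_t cur_vports;	/* bitmap of active vport ids */
	uint16_t cur_vport_nb;
};

/* Fallible init step; 'ok' stands in for the real hardware setup. */
static int
vport_setup(bool ok)
{
	return ok ? 0 : -1;
}

static int
vport_init(struct adapter *ad, uint32_t id, bool setup_ok)
{
	if (vport_setup(setup_ok) != 0)
		return -1;	/* nothing to revert: state not touched yet */
	/* Commit bookkeeping only after all fallible steps succeeded. */
	ad->cur_vports |= (UINT32_C(1) << id);
	ad->cur_vport_nb++;
	return 0;
}
```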



RE: [PATCH] net/idpf: refine dev_link_update function

2023-06-30 Thread Liu, Mingxia



> -Original Message-
> From: Zhang, Qi Z 
> Sent: Friday, June 30, 2023 4:27 PM
> To: Liu, Mingxia ; dev@dpdk.org
> Cc: Wu, Jingjing ; Xing, Beilei 
> ;
> Liu, Mingxia 
> Subject: RE: [PATCH] net/idpf: refine dev_link_update function
> 
> 
> 
> > -Original Message-
> > From: Mingxia Liu 
> > Sent: Monday, June 26, 2023 7:15 PM
> > To: dev@dpdk.org
> > Cc: Wu, Jingjing ; Xing, Beilei
> > ; Liu, Mingxia 
> > Subject: [PATCH] net/idpf: refine dev_link_update function
> >
> > This patch refines idpf_dev_link_update callback function according to
> > CPFL PMD basic code.
> >
> > Signed-off-by: Mingxia Liu 
> > ---
> >  drivers/net/idpf/idpf_ethdev.c | 63
> > --
> >  1 file changed, 30 insertions(+), 33 deletions(-)
> >
> > diff --git a/drivers/net/idpf/idpf_ethdev.c
> > b/drivers/net/idpf/idpf_ethdev.c index fb5965..bfdac92b95 100644
> > --- a/drivers/net/idpf/idpf_ethdev.c
> > +++ b/drivers/net/idpf/idpf_ethdev.c
> > @@ -30,6 +30,23 @@ static const char * const idpf_valid_args[] = {
> > NULL
> >  };
> >
> > +uint32_t idpf_supported_speeds[] = {
> > +   RTE_ETH_SPEED_NUM_NONE,
> > +   RTE_ETH_SPEED_NUM_10M,
> > +   RTE_ETH_SPEED_NUM_100M,
> > +   RTE_ETH_SPEED_NUM_1G,
> > +   RTE_ETH_SPEED_NUM_2_5G,
> > +   RTE_ETH_SPEED_NUM_5G,
> > +   RTE_ETH_SPEED_NUM_10G,
> > +   RTE_ETH_SPEED_NUM_20G,
> > +   RTE_ETH_SPEED_NUM_25G,
> > +   RTE_ETH_SPEED_NUM_40G,
> > +   RTE_ETH_SPEED_NUM_50G,
> > +   RTE_ETH_SPEED_NUM_56G,
> > +   RTE_ETH_SPEED_NUM_100G,
> > +   RTE_ETH_SPEED_NUM_200G
> > +};
> > +
> >  static const uint64_t idpf_map_hena_rss[] = {
> > [IDPF_HASH_NONF_UNICAST_IPV4_UDP] =
> > RTE_ETH_RSS_NONFRAG_IPV4_UDP,
> > @@ -110,42 +127,22 @@ idpf_dev_link_update(struct rte_eth_dev *dev,  {
> > struct idpf_vport *vport = dev->data->dev_private;
> > struct rte_eth_link new_link;
> > +   unsigned int i;
> >
> > memset(&new_link, 0, sizeof(new_link));
> >
> > -   switch (vport->link_speed) {
> > -   case RTE_ETH_SPEED_NUM_10M:
> > -   new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
> > -   break;
> > -   case RTE_ETH_SPEED_NUM_100M:
> > -   new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
> > -   break;
> > -   case RTE_ETH_SPEED_NUM_1G:
> > -   new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
> > -   break;
> > -   case RTE_ETH_SPEED_NUM_10G:
> > -   new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
> > -   break;
> > -   case RTE_ETH_SPEED_NUM_20G:
> > -   new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
> > -   break;
> > -   case RTE_ETH_SPEED_NUM_25G:
> > -   new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
> > -   break;
> > -   case RTE_ETH_SPEED_NUM_40G:
> > -   new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
> > -   break;
> > -   case RTE_ETH_SPEED_NUM_50G:
> > -   new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
> > -   break;
> > -   case RTE_ETH_SPEED_NUM_100G:
> > -   new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
> > -   break;
> > -   case RTE_ETH_SPEED_NUM_200G:
> > -   new_link.link_speed = RTE_ETH_SPEED_NUM_200G;
> > -   break;
> > -   default:
> > -   new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
> > +   for (i = 0; i < RTE_DIM(idpf_supported_speeds); i++) {
> > +   if (vport->link_speed == idpf_supported_speeds[i]) {
> > +   new_link.link_speed = vport->link_speed;
> > +   break;
> > +   }
> > +   }
> > +
> > +   if (i == RTE_DIM(idpf_supported_speeds)) {
> > +   if (vport->link_up)
> > +   new_link.link_speed =
> > RTE_ETH_SPEED_NUM_UNKNOWN;
> > +   else
> > +   new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
> > }
> 
> What about
> 
> /* initialize with default value */
> new_link.link_speed = vport->link_up ?  RTE_ETH_SPEED_NUM_UNKNOWN :
> RTE_ETH_SPEED_NUM_NONE
> 
> /* update in case of a match */
> for (i = 0; i < RTE_DIM(idpf_supported_speeds); i++) {
>   
> }
> 
[Liu, Mingxia] Good idea, new patch has been sent.
> >
> > new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
> > --
> > 2.34.1



RE: [PATCH] net/idpf: refine RTE_PMD_REGISTER_PARAM_STRING of IDPF PMD

2023-06-30 Thread Liu, Mingxia



> -Original Message-
> From: Zhang, Qi Z 
> Sent: Friday, June 30, 2023 5:08 PM
> To: Liu, Mingxia ; dev@dpdk.org
> Cc: Wu, Jingjing ; Xing, Beilei 
> ;
> Liu, Mingxia 
> Subject: RE: [PATCH] net/idpf: refine RTE_PMD_REGISTER_PARAM_STRING of
> IDPF PMD
> 
> 
> 
> > -Original Message-
> > From: Mingxia Liu 
> > Sent: Monday, June 26, 2023 10:06 PM
> > To: dev@dpdk.org
> > Cc: Wu, Jingjing ; Xing, Beilei
> > ; Liu, Mingxia 
> > Subject: [PATCH] net/idpf: refine RTE_PMD_REGISTER_PARAM_STRING of
> > IDPF PMD
> 
> No need to mention the PMD in the title as it's a duplicate of the prefix.
> 
> How about
> 
> net/idpf: refine vport parameter string
> 
> >
> > This patch refines 'IDPF_VPORT' param string in
> > 'RTE_PMD_REGISTER_PARAM_STRING'.
> >
> > Signed-off-by: Mingxia Liu 
> > ---
> >  drivers/net/idpf/idpf_ethdev.c | 6 +++---
> >  1 file changed, 3 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/net/idpf/idpf_ethdev.c
> > b/drivers/net/idpf/idpf_ethdev.c index 3e66898aaf..34ca5909f1 100644
> > --- a/drivers/net/idpf/idpf_ethdev.c
> > +++ b/drivers/net/idpf/idpf_ethdev.c
> > @@ -1478,9 +1478,9 @@ RTE_PMD_REGISTER_PCI(net_idpf, rte_idpf_pmd);
> > RTE_PMD_REGISTER_PCI_TABLE(net_idpf, pci_id_idpf_map);
> > RTE_PMD_REGISTER_KMOD_DEP(net_idpf, "* igb_uio | vfio-pci");
> > RTE_PMD_REGISTER_PARAM_STRING(net_idpf,
> > - IDPF_TX_SINGLE_Q "=<0|1> "
> > - IDPF_RX_SINGLE_Q "=<0|1> "
> > - IDPF_VPORT "=[vport_set0,[vport_set1],...]");
> > +   IDPF_TX_SINGLE_Q "=<0|1> "
> > +   IDPF_RX_SINGLE_Q "=<0|1> "
> > +   IDPF_VPORT
> > +"=[vport0_begin[-vport0_end][,vport1_begin[-vport1_end]][,..]]");
> 
> Better to use "<>" to wrap a symbol
> How about " [[-][,[-]][, ... ]]"?
> 
[Liu, Mingxia] Thanks, new patch has been sent.
> >
> >  RTE_LOG_REGISTER_SUFFIX(idpf_logtype_init, init, NOTICE);
> > RTE_LOG_REGISTER_SUFFIX(idpf_logtype_driver, driver, NOTICE);
> > --
> > 2.34.1
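[Editor's note] The devarg string under discussion describes a comma-separated list of
vport indices and ranges, e.g. "[0,2-3]". A hypothetical parser for that shape —
illustrative only, the driver has its own parsing code and bounds:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Parse a vport list of the form "[begin[-end][,begin[-end]]...]" into a
 * bitmap of indices 0..31; return -1 on malformed input. */
static int
parse_vport_list(const char *s, uint32_t *bitmap)
{
	char *end;
	long lo, hi;

	*bitmap = 0;
	if (*s++ != '[')
		return -1;
	while (*s != ']') {
		lo = strtol(s, &end, 10);
		if (end == s || lo < 0 || lo > 31)
			return -1;
		hi = lo;
		if (*end == '-') {
			s = end + 1;
			hi = strtol(s, &end, 10);
			if (end == s || hi < lo || hi > 31)
				return -1;
		}
		while (lo <= hi)
			*bitmap |= (UINT32_C(1) << lo++);
		s = end;
		if (*s == ',')
			s++;
		else if (*s != ']')
			return -1;
	}
	return 0;
}
```

A documented grammar like the one Qi proposes makes this kind of parser (and its
error cases) unambiguous.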



[PATCH v2] net/mlx5: fix the error in VLAN actions creation

2023-06-30 Thread Bing Zhao
When a failure occurs during the VLAN actions creating, the value
of "rte_errno" is already set by the mlx5dr_action_create*. The
value can be returned directly to reflect the actual reason.

In the meanwhile, the "rte_flow_error" structure should also be set
with explict message.

Fixes: 773ca0e91ba1 ("net/mlx5: support VLAN push/pop/modify with HWS")
Cc: getel...@nvidia.com
Cc: sta...@dpdk.org

Signed-off-by: Bing Zhao 
Acked-by: Ori Kam 
---
v2: add CC stable
---
 drivers/net/mlx5/mlx5_flow_hw.c | 14 +-
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 20941b4fc7..36a7f0989c 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -7125,27 +7125,28 @@ flow_hw_create_vlan(struct rte_eth_dev *dev)
MLX5DR_ACTION_FLAG_HWS_FDB
};
 
+   /* rte_errno is set in the mlx5dr_action* functions. */
for (i = MLX5DR_TABLE_TYPE_NIC_RX; i <= MLX5DR_TABLE_TYPE_NIC_TX; i++) {
priv->hw_pop_vlan[i] =
mlx5dr_action_create_pop_vlan(priv->dr_ctx, flags[i]);
if (!priv->hw_pop_vlan[i])
-   return -ENOENT;
+   return -rte_errno;
priv->hw_push_vlan[i] =
mlx5dr_action_create_push_vlan(priv->dr_ctx, flags[i]);
if (!priv->hw_pop_vlan[i])
-   return -ENOENT;
+   return -rte_errno;
}
if (priv->sh->config.dv_esw_en && priv->master) {
priv->hw_pop_vlan[MLX5DR_TABLE_TYPE_FDB] =
mlx5dr_action_create_pop_vlan
(priv->dr_ctx, MLX5DR_ACTION_FLAG_HWS_FDB);
if (!priv->hw_pop_vlan[MLX5DR_TABLE_TYPE_FDB])
-   return -ENOENT;
+   return -rte_errno;
priv->hw_push_vlan[MLX5DR_TABLE_TYPE_FDB] =
mlx5dr_action_create_push_vlan
(priv->dr_ctx, MLX5DR_ACTION_FLAG_HWS_FDB);
if (!priv->hw_pop_vlan[MLX5DR_TABLE_TYPE_FDB])
-   return -ENOENT;
+   return -rte_errno;
}
return 0;
 }
@@ -7920,8 +7921,11 @@ flow_hw_configure(struct rte_eth_dev *dev,
}
}
ret = flow_hw_create_vlan(dev);
-   if (ret)
+   if (ret) {
+   rte_flow_error_set(error, -ret, RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+  NULL, "Failed to VLAN actions.");
goto err;
+   }
if (_queue_attr)
mlx5_free(_queue_attr);
if (port_attr->flags & RTE_FLOW_PORT_FLAG_STRICT_QUEUE)
-- 
2.34.1



[PATCH v3] net/mlx5: fix the error in VLAN actions creation

2023-06-30 Thread Bing Zhao
When a failure occurs during the VLAN actions creating, the value
of "rte_errno" is already set by the mlx5dr_action_create*. The
value can be returned directly to reflect the actual reason.

In the meanwhile, the "rte_flow_error" structure should also be set
with clear message explicitly.

Fixes: 773ca0e91ba1 ("net/mlx5: support VLAN push/pop/modify with HWS")
Cc: getel...@nvidia.com
Cc: sta...@dpdk.org

Signed-off-by: Bing Zhao 
Acked-by: Ori Kam 
---
v2: add CC stable
v3: fix typo
---
 drivers/net/mlx5/mlx5_flow_hw.c | 14 +-
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 20941b4fc7..36a7f0989c 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -7125,27 +7125,28 @@ flow_hw_create_vlan(struct rte_eth_dev *dev)
MLX5DR_ACTION_FLAG_HWS_FDB
};
 
+   /* rte_errno is set in the mlx5dr_action* functions. */
for (i = MLX5DR_TABLE_TYPE_NIC_RX; i <= MLX5DR_TABLE_TYPE_NIC_TX; i++) {
priv->hw_pop_vlan[i] =
mlx5dr_action_create_pop_vlan(priv->dr_ctx, flags[i]);
if (!priv->hw_pop_vlan[i])
-   return -ENOENT;
+   return -rte_errno;
priv->hw_push_vlan[i] =
mlx5dr_action_create_push_vlan(priv->dr_ctx, flags[i]);
if (!priv->hw_pop_vlan[i])
-   return -ENOENT;
+   return -rte_errno;
}
if (priv->sh->config.dv_esw_en && priv->master) {
priv->hw_pop_vlan[MLX5DR_TABLE_TYPE_FDB] =
mlx5dr_action_create_pop_vlan
(priv->dr_ctx, MLX5DR_ACTION_FLAG_HWS_FDB);
if (!priv->hw_pop_vlan[MLX5DR_TABLE_TYPE_FDB])
-   return -ENOENT;
+   return -rte_errno;
priv->hw_push_vlan[MLX5DR_TABLE_TYPE_FDB] =
mlx5dr_action_create_push_vlan
(priv->dr_ctx, MLX5DR_ACTION_FLAG_HWS_FDB);
if (!priv->hw_pop_vlan[MLX5DR_TABLE_TYPE_FDB])
-   return -ENOENT;
+   return -rte_errno;
}
return 0;
 }
@@ -7920,8 +7921,11 @@ flow_hw_configure(struct rte_eth_dev *dev,
}
}
ret = flow_hw_create_vlan(dev);
-   if (ret)
+   if (ret) {
+   rte_flow_error_set(error, -ret, RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+  NULL, "Failed to VLAN actions.");
goto err;
+   }
if (_queue_attr)
mlx5_free(_queue_attr);
if (port_attr->flags & RTE_FLOW_PORT_FLAG_STRICT_QUEUE)
-- 
2.34.1
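[Editor's note] The core of the fix is the error-propagation convention: the failing
helper records the precise reason in an errno-style variable, and the caller returns
the negated value instead of a generic -ENOENT. A standalone sketch using plain
errno as a stand-in for rte_errno (the function names are illustrative):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Stand-in for mlx5dr_action_create_*(): on failure it sets errno
 * (as the real helpers set rte_errno) and returns NULL. */
static void *
action_create(int fail_errno)
{
	if (fail_errno) {
		errno = fail_errno;
		return NULL;
	}
	return (void *)1;	/* dummy non-NULL handle */
}

/* Return the allocator's reason (-errno) instead of a fixed -ENOENT,
 * mirroring the change in the patch above. */
static int
create_vlan_actions(int fail_errno)
{
	if (action_create(fail_errno) == NULL)
		return -errno;
	return 0;
}
```

The caller can then hand the real cause (ENOMEM vs. EBUSY vs. ...) to
rte_flow_error_set() unchanged.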



[PATCH v1] crypto/qat: fix struct alignment

2023-06-30 Thread Brian Dooley
The qat_sym_session struct variable alignment was causing a segfault.
AES expansion keys require 16-byte alignment. Added __rte_aligned to
the expansion keys.

Fixes: ca0ba0e48129 ("crypto/qat: default to IPsec MB for computations")

Signed-off-by: Brian Dooley 
---
 drivers/crypto/qat/qat_sym_session.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/crypto/qat/qat_sym_session.h 
b/drivers/crypto/qat/qat_sym_session.h
index 68cf3eaaf4..674a62ee12 100644
--- a/drivers/crypto/qat/qat_sym_session.h
+++ b/drivers/crypto/qat/qat_sym_session.h
@@ -142,10 +142,10 @@ struct qat_sym_session {
qat_sym_build_request_t build_request[2];
 #ifndef RTE_QAT_OPENSSL
IMB_MGR *mb_mgr;
-#endif
-   uint64_t expkey[4*15];
-   uint32_t dust[4*15];
+   uint64_t expkey[4*15] __rte_aligned(16);
+   uint32_t dust[4*15] __rte_aligned(16);
uint8_t docsis_key_len;
+#endif
 };
 
 int
-- 
2.25.1
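[Editor's note] The patch relies on per-member alignment overriding whatever offset
the preceding members would otherwise produce. A self-contained sketch using C11
`alignas` (DPDK's `__rte_aligned(16)` expands to the equivalent compiler attribute;
the struct here is illustrative, not the real session layout):

```c
#include <assert.h>
#include <stdalign.h>
#include <stdint.h>

/* AES-NI style key schedules are loaded with 16-byte aligned moves, so
 * the expanded-key storage must be 16-byte aligned no matter where it
 * lands inside the enclosing session structure. */
struct session {
	uint8_t misc;			/* would otherwise skew the offset */
	alignas(16) uint64_t expkey[4 * 15];
	alignas(16) uint32_t dust[4 * 15];
};
```

The member alignment also raises the alignment of the whole struct, so any
correctly allocated `struct session` keeps the keys aligned.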



Re: [PATCH v5] gro : fix reordering of packets in GRO library

2023-06-30 Thread kumaraparameshwaran rathinavel
On Tue, Jun 20, 2023 at 1:06 PM Hu, Jiayu  wrote:

> Hi Kumara,
>
> Please see replies inline.
>
> Thanks,
> Jiayu
>
> > -Original Message-
> > From: Kumara Parameshwaran 
> > Sent: Tuesday, November 1, 2022 3:06 PM
> > To: Hu, Jiayu 
> > Cc: dev@dpdk.org; Kumara Parameshwaran
> > ; Kumara Parameshwaran
> > 
> > Subject: [PATCH v5] gro : fix reordering of packets in GRO library
> >
> > From: Kumara Parameshwaran 
> >
> > When a TCP packet contains flags like PSH, it is returned immediately
> > to the application even though there might be packets of the same flow
> > in the GRO table.
> > If the PSH flag is set on a segment, packets up to that segment should
> > be delivered immediately. But the current implementation delivers the
> > last arrived packet with the PSH flag set, causing re-ordering.
> >
> > With this patch, if a packet does not contain only the ACK flag and
> > there are no previous packets for the flow, the packet is returned
> > immediately; otherwise it is merged with the previous segment and the
> > flags of the last segment are applied to the entire segment.
> > This is the behaviour with linux stack as well.
> >
> > Signed-off-by: Kumara Parameshwaran 
> > Co-authored-by: Kumara Parameshwaran 
> > ---
> > v1:
> >   If the received packet is not a pure ACK packet, we check if
> >   there are any previous packets in the flow; if present, we include
> >   the received packet in the coalescing logic as well and apply the
> >   flags of the last received packet to the entire segment, which
> >   would avoid re-ordering.
> >
> >   Let's say P1(PSH), P2(ACK), P3(ACK) are received in burst mode.
> >   P1 contains the PSH flag, and since there are no prior packets in
> >   the flow, we copy it to unprocess_packets, while P2(ACK) and
> >   P3(ACK) are merged together.
> >   In the existing code, P2 and P3 would be delivered first as a
> >   single segment and the unprocess_packets would be copied later,
> >   which causes reordering. With the patch, the unprocessed packets
> >   are copied first and then the packets from the GRO table.
> >
> >   Testing done:
> >   The csum test-pmd was modified to support the following:
> >   a GET request of 10MB from client to server via test-pmd (static
> >   ARP entries added on client and server). GRO and TSO were enabled
> >   in test-pmd so that packets received from the client MAC are sent
> >   to the server MAC and vice versa.
> >   In the above testing, without the patch the client observed
> >   re-ordering of 25 packets, and with the patch no packet
> >   re-ordering was observed.
> >
> > v2:
> >   Fix warnings in commit and comment.
> >   Do not consider packet as candidate to merge if it contains SYN/RST
> > flag.
> >
> > v3:
> >   Fix warnings.
> >
> > v4:
> >   Rebase with master.
> >
> > v5:
> >   Adding co-author email
> >
> >  lib/gro/gro_tcp4.c | 45 +
> >  lib/gro/rte_gro.c  | 18 +-
> >  2 files changed, 46 insertions(+), 17 deletions(-)
> >
> > diff --git a/lib/gro/gro_tcp4.c b/lib/gro/gro_tcp4.c index
> > 0014096e63..7363c5d540 100644
> > --- a/lib/gro/gro_tcp4.c
> > +++ b/lib/gro/gro_tcp4.c
> > @@ -188,6 +188,19 @@ update_header(struct gro_tcp4_item *item)
> >   pkt->l2_len);
> >  }
> >
> > +static inline void
> > +update_tcp_hdr_flags(struct rte_tcp_hdr *tcp_hdr, struct rte_mbuf *pkt)
> > +{
> > + struct rte_ether_hdr *eth_hdr;
> > + struct rte_ipv4_hdr *ipv4_hdr;
> > + struct rte_tcp_hdr *merged_tcp_hdr;
> > +
> > + eth_hdr = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);
> > + ipv4_hdr = (struct rte_ipv4_hdr *)((char *)eth_hdr + pkt->l2_len);
> > + merged_tcp_hdr = (struct rte_tcp_hdr *)((char *)ipv4_hdr + pkt-
> > >l3_len);
> > + merged_tcp_hdr->tcp_flags |= tcp_hdr->tcp_flags; }
>
> The Linux kernel updates the TCP flag via "tcp_flag_word(th2) |= flags &
> (TCP_FLAG_FIN | TCP_FLAG_PSH)",
> which only adds FIN and PSH at most to the merge packet.
>
> > +
> >  int32_t
> >  gro_tcp4_reassemble(struct rte_mbuf *pkt,
> >   struct gro_tcp4_tbl *tbl,
> > @@ -206,6 +219,7 @@ gro_tcp4_reassemble(struct rte_mbuf *pkt,
> >   uint32_t i, max_flow_num, remaining_flow_num;
> >   int cmp;
> >   uint8_t find;
> > + uint32_t start_idx;
> >
> >   /*
> >* Don't process the packet whose TCP header length is greater @@ -
> > 219,13 +233,6 @@ gro_tcp4_reassemble(struct rte_mbuf *pkt,
> >   tcp_hdr = (struct rte_tcp_hdr *)((char *)ipv4_hdr + pkt->l3_len);
> >   hdr_len = pkt->l2_len + pkt->l3_len + pkt->l4_len;
> >
> > - /*
> > -  * Don't process the packet which has FIN, SYN, RST, PSH, URG, ECE
> > -  * or CWR set.
> > -  */
> > - if (tcp_hdr->tcp_flags != RTE_TCP_ACK_FLAG)
> > - return -1;
> > -
> >   /* trim the tail padding bytes */
> >   ip_tlen = rte_be
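[Editor's note] Jiayu's point above — the Linux kernel folds only FIN and PSH into
the merged segment's flags via "tcp_flag_word(th2) |= flags & (TCP_FLAG_FIN |
TCP_FLAG_PSH)" — can be sketched standalone (flag values per the TCP header
definition; the helper name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define TCP_FIN 0x01u
#define TCP_PSH 0x08u
#define TCP_ACK 0x10u
#define TCP_URG 0x20u

/* Fold the flags of a newly merged segment into the head segment,
 * propagating only FIN and PSH as the Linux GRO path does; other bits
 * (URG, ECE, ...) must not leak into the coalesced packet. */
static uint8_t
merge_tcp_flags(uint8_t head_flags, uint8_t new_flags)
{
	return head_flags | (new_flags & (TCP_FIN | TCP_PSH));
}
```

This is narrower than OR-ing the whole flags byte, which is what the review comment
is asking the patch to consider.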

Re: [PATCH v3 04/11] net/bnxt: update Truflow core

2023-06-30 Thread Ajit Khaparde
On Wed, Jun 28, 2023 at 9:30 PM Ajit Khaparde
 wrote:
>
> On Wed, Jun 28, 2023 at 12:07 PM Thomas Monjalon  wrote:
> >
> > 28/06/2023 18:35, Ajit Khaparde:
> > > On Sat, Jun 10, 2023 at 11:33 AM Thomas Monjalon  
> > > wrote:
> > > > More important, you are doing huge update of many different things
> > > > in one patch.
> > > > It looks like you don't want the community to follow what you are doing.
> > > Actually, no.
> > > As I mentioned above, most of the truflow files are auto generated.
> > > The reason for bundling some of the changes together was to avoid
> > > multiple patches hitting the mail server patch size limit.
> > > We thought it might be better to take the patch size hit on one patch
> > > instead of multiple patches.
> >
> > I don't see how it is better to have one huge patch
> > than multiple big ones.
> Well, it's debatable now, considering we are having this discussion.
> But as I said, the current design of the truflow generator scripts tends
> to make
> a lot of changes even for a small modification or adjustment to the code.
> We had patches which were moving around the same lines of code because
> of the script. That's why we decided to take this approach.
> We could try to split the patch with the template changes, but that may take
> time and we are closing in on rc3 date.

Patches applied to dpdk-next-net-brcm. Thanks


>
>
>
>
>
> >
> > >
> > > We are working on some design changes to the auto generation scripts
> > > which will avoid big churn in the template patches in the future.
> >
> >
> >




[PATCH] common/mlx5: fix obtaining IB device in LAG mode

2023-06-30 Thread Bing Zhao
In hardware LAG mode, both PFs are in the same E-Switch domain but
the VFs are in the other domains. Moreover, VF has its own dedicated
IB device.

When probing a VF created on the 1st PF, its PCIe address is usually
the same as the PF's except for the function part. Then there would
be wrong VF BDF matches on the IB "bond" device due to the incomplete
comparison (we do not compare the function part of the BDF for bonding
devices, in order to match all bonded PFs).

Adding one extra condition to check whether the current PCIe address
device is a VF will solve the incorrect IB device recognition. Thus
the full address comparison will be done.

Fixes: f956d3d4c33c ("net/mlx5: fix probing with secondary bonding member")
Cc: rongw...@nvidia.com
Cc: sta...@dpdk.org

Signed-off-by: Bing Zhao 
Acked-by: Viacheslav Ovsiienko 
---
 drivers/common/mlx5/linux/mlx5_common_os.c | 16 +---
 drivers/common/mlx5/mlx5_common.h  |  2 +-
 drivers/common/mlx5/mlx5_common_pci.c  |  2 +-
 3 files changed, 11 insertions(+), 9 deletions(-)

diff --git a/drivers/common/mlx5/linux/mlx5_common_os.c 
b/drivers/common/mlx5/linux/mlx5_common_os.c
index aafff60eeb..2ebb8ac8b6 100644
--- a/drivers/common/mlx5/linux/mlx5_common_os.c
+++ b/drivers/common/mlx5/linux/mlx5_common_os.c
@@ -555,7 +555,7 @@ mlx5_os_pd_prepare(struct mlx5_common_device *cdev)
 }
 
 static struct ibv_device *
-mlx5_os_get_ibv_device(const struct rte_pci_addr *addr)
+mlx5_os_get_ibv_device(const struct rte_pci_device *pci_dev)
 {
int n;
struct ibv_device **ibv_list = mlx5_glue->get_device_list(&n);
@@ -564,6 +564,8 @@ mlx5_os_get_ibv_device(const struct rte_pci_addr *addr)
uint8_t guid2[32] = {0};
int ret1, ret2 = -1;
struct rte_pci_addr paddr;
+   const struct rte_pci_addr *addr = &pci_dev->addr;
+   bool is_vf_dev = mlx5_dev_is_vf_pci(pci_dev);
 
if (ibv_list == NULL || !n) {
rte_errno = ENOSYS;
@@ -579,11 +581,11 @@ mlx5_os_get_ibv_device(const struct rte_pci_addr *addr)
if (ret1 > 0)
ret2 = mlx5_get_device_guid(&paddr, guid2, 
sizeof(guid2));
/* Bond device can bond secondary PCIe */
-   if ((strstr(ibv_list[n]->name, "bond") &&
-   ((ret1 > 0 && ret2 > 0 && !memcmp(guid1, guid2, 
sizeof(guid1))) ||
-   (addr->domain == paddr.domain && addr->bus == paddr.bus &&
-addr->devid == paddr.devid))) ||
-!rte_pci_addr_cmp(addr, &paddr)) {
+   if ((strstr(ibv_list[n]->name, "bond") && !is_vf_dev &&
+((ret1 > 0 && ret2 > 0 && !memcmp(guid1, guid2, 
sizeof(guid1))) ||
+ (addr->domain == paddr.domain && addr->bus == paddr.bus &&
+  addr->devid == paddr.devid))) ||
+   !rte_pci_addr_cmp(addr, &paddr)) {
ibv_match = ibv_list[n];
break;
}
@@ -697,7 +699,7 @@ mlx5_os_get_ibv_dev(const struct rte_device *dev)
struct ibv_device *ibv;
 
if (mlx5_dev_is_pci(dev))
-   ibv = mlx5_os_get_ibv_device(&RTE_DEV_TO_PCI_CONST(dev)->addr);
+   ibv = mlx5_os_get_ibv_device(RTE_DEV_TO_PCI_CONST(dev));
else
ibv = mlx5_get_aux_ibv_device(RTE_DEV_TO_AUXILIARY_CONST(dev));
if (ibv == NULL) {
diff --git a/drivers/common/mlx5/mlx5_common.h 
b/drivers/common/mlx5/mlx5_common.h
index 42d938776a..28f9f41528 100644
--- a/drivers/common/mlx5/mlx5_common.h
+++ b/drivers/common/mlx5/mlx5_common.h
@@ -600,7 +600,7 @@ mlx5_dev_is_pci(const struct rte_device *dev);
  */
 __rte_internal
 bool
-mlx5_dev_is_vf_pci(struct rte_pci_device *pci_dev);
+mlx5_dev_is_vf_pci(const struct rte_pci_device *pci_dev);
 
 __rte_internal
 int
diff --git a/drivers/common/mlx5/mlx5_common_pci.c 
b/drivers/common/mlx5/mlx5_common_pci.c
index 5122c596bc..04aad0963c 100644
--- a/drivers/common/mlx5/mlx5_common_pci.c
+++ b/drivers/common/mlx5/mlx5_common_pci.c
@@ -109,7 +109,7 @@ mlx5_dev_is_pci(const struct rte_device *dev)
 }
 
 bool
-mlx5_dev_is_vf_pci(struct rte_pci_device *pci_dev)
+mlx5_dev_is_vf_pci(const struct rte_pci_device *pci_dev)
 {
switch (pci_dev->id.device_id) {
case PCI_DEVICE_ID_MELLANOX_CONNECTX4VF:
-- 
2.34.1
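[Editor's note] The two comparison modes in the patch can be sketched side by side.
The struct and helpers below are illustrative stand-ins for rte_pci_addr and the
driver's matching logic, showing why the function-agnostic bond match must be
guarded by the VF check:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct pci_addr {
	uint32_t domain;
	uint8_t bus;
	uint8_t devid;
	uint8_t function;
};

/* Full BDF comparison, as used for ordinary (e.g. VF) devices. */
static bool
pci_addr_eq(const struct pci_addr *a, const struct pci_addr *b)
{
	return a->domain == b->domain && a->bus == b->bus &&
	       a->devid == b->devid && a->function == b->function;
}

/* Bond-device match deliberately ignores the function part so both
 * bonded PFs match; a VF on the same slot would wrongly match too,
 * which is why the fix only takes this path when the probed device
 * is not a VF. */
static bool
pci_addr_eq_no_func(const struct pci_addr *a, const struct pci_addr *b)
{
	return a->domain == b->domain && a->bus == b->bus &&
	       a->devid == b->devid;
}
```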



[PATCH] net/mlx5: fix the profile check of meter mark

2023-06-30 Thread Bing Zhao
When creating a meter_mark action, the profile should be specified.
Otherwise, there would be a crash if the pointer to the profile was not
set properly:
  1. creating an action template with only action mask set and using
 this template to create a table
  2. creating a flow rule without setting the profile if the action
 of meter_mark is not fixed

The check should be done inside the action allocation and an error
needs to be returned immediately.

Fixes: 48fbb0e93d06 ("net/mlx5: support flow meter mark indirect action with 
HWS")
Cc: akozy...@nvidia.com
Cc: sta...@dpdk.org

Signed-off-by: Bing Zhao 
Acked-by: Viacheslav Ovsiienko 
---
 drivers/net/mlx5/mlx5_flow_hw.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index b5137a822a..c64b260fea 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -1325,6 +1325,8 @@ flow_hw_meter_mark_alloc(struct rte_eth_dev *dev, 
uint32_t queue,
struct mlx5_flow_meter_info *fm;
uint32_t mtr_id;
 
+   if (meter_mark->profile == NULL)
+   return NULL;
aso_mtr = mlx5_ipool_malloc(priv->hws_mpool->idx_pool, &mtr_id);
if (!aso_mtr)
return NULL;
-- 
2.34.1
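[Editor's note] The fix validates the profile pointer before taking anything from
the pool, so the error path needs no rollback and no half-built object can escape.
A standalone sketch of that validate-before-allocate shape (the names are
illustrative, not the driver's):

```c
#include <assert.h>
#include <stddef.h>

struct meter_conf {
	const void *profile;	/* must be supplied by the caller */
};

static int pool_used;		/* stand-in for the driver's index pool */

/* Check every required input before acquiring any resource; the real
 * code allocates from an ipool only after this point. */
static void *
meter_alloc(const struct meter_conf *conf)
{
	if (conf == NULL || conf->profile == NULL)
		return NULL;
	pool_used++;			/* resource acquisition happens here */
	return (void *)&pool_used;	/* dummy handle */
}
```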



[PATCH] net/mlx5: reduce the counter pool name length

2023-06-30 Thread Bing Zhao
The name size of a rte_ring is RTE_MEMZONE_NAMESIZE with the value 32
by default. When creating a HWS counter pool cache, the final string
format was "RG_MLX5_HWS_CNT_POOL_%u_cache/%u", which could support
fewer than 1000 variants. For example, if the first %u, representing
the port ID, is 100, it takes all the remaining characters and the
second %u for the queue is discarded. If there was more than one
rule creation queue, the rte_ring could not be created.

By reducing the number of fixed characters and using hexadecimal
format, the issue can be overcome, with the assumption that not all
of the integer range of the queue index is used.

Fixes: 13ea6bdcc7ee ("net/mlx5: support counters in cross port shared mode")
Fixes: 4d368e1da3a4 ("net/mlx5: support flow counter action for HWS")
Cc: jack...@nvidia.com
Cc: sta...@dpdk.org

Signed-off-by: Bing Zhao 
Acked-by: Viacheslav Ovsiienko 
---
 drivers/net/mlx5/mlx5_hws_cnt.c | 9 +++--
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_hws_cnt.c b/drivers/net/mlx5/mlx5_hws_cnt.c
index d98df68f39..18d80f34ba 100644
--- a/drivers/net/mlx5/mlx5_hws_cnt.c
+++ b/drivers/net/mlx5/mlx5_hws_cnt.c
@@ -419,8 +419,7 @@ mlx5_hws_cnt_pool_init(struct mlx5_dev_ctx_shared *sh,
goto error;
}
for (qidx = 0; qidx < ccfg->q_num; qidx++) {
-   snprintf(mz_name, sizeof(mz_name), "%s_cache/%u", pcfg->name,
-   qidx);
+   snprintf(mz_name, sizeof(mz_name), "%s_qc/%x", pcfg->name, 
qidx);
cntp->cache->qcache[qidx] = rte_ring_create(mz_name, ccfg->size,
SOCKET_ID_ANY,
RING_F_SP_ENQ | RING_F_SC_DEQ |
@@ -612,12 +611,10 @@ mlx5_hws_cnt_pool_create(struct rte_eth_dev *dev,
int ret = 0;
size_t sz;
 
-   mp_name = mlx5_malloc(MLX5_MEM_ZERO, RTE_MEMZONE_NAMESIZE, 0,
-   SOCKET_ID_ANY);
+   mp_name = mlx5_malloc(MLX5_MEM_ZERO, RTE_MEMZONE_NAMESIZE, 0, 
SOCKET_ID_ANY);
if (mp_name == NULL)
goto error;
-   snprintf(mp_name, RTE_MEMZONE_NAMESIZE, "MLX5_HWS_CNT_POOL_%u",
-   dev->data->port_id);
+   snprintf(mp_name, RTE_MEMZONE_NAMESIZE, "MLX5_HWS_CNT_P_%x", 
dev->data->port_id);
pcfg.name = mp_name;
pcfg.request_num = pattr->nb_counters;
pcfg.alloc_factor = HWS_CNT_ALLOC_FACTOR_DEFAULT;
-- 
2.34.1
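[Editor's note] The overflow is easy to reproduce with plain snprintf: the return
value is the number of characters the full string needs (excluding the NUL), so a
result >= the buffer size means truncation. A self-contained check against a
32-byte limit in the style of RTE_MEMZONE_NAMESIZE (the helper name is
illustrative):

```c
#include <assert.h>
#include <stdio.h>

#define NAMESIZE 32	/* RTE_MEMZONE_NAMESIZE-style limit, NUL included */

/* Format the per-queue cache ring name with the old, long prefix;
 * return -1 when snprintf would truncate it. */
static int
format_ring_name(char *buf, unsigned int port, unsigned int queue)
{
	int n = snprintf(buf, NAMESIZE, "RG_MLX5_HWS_CNT_POOL_%u_cache/%u",
			 port, queue);

	return (n < 0 || n >= NAMESIZE) ? -1 : 0;
}
```

With port 100 the old format already needs 33 bytes, which is the failure the patch
avoids by shortening the fixed part and printing the IDs in hexadecimal.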



[PATCH] app/testpmd: fix the rule number parsing

2023-06-30 Thread Bing Zhao
When creating a template table, the object pointer of the
command line "struct context" was set with an offset from the
original out buffer if there is a template ID.

If the "rules_number" is specified after the template IDs, it
couldn't be set and passed to the API correctly. With this commit,
the pointer is reset before parsing the "rules_number" field.

Fixes: c4b38873346b ("app/testpmd: add flow table management")
Cc: akozy...@nvidia.com
Cc: sta...@dpdk.org

Signed-off-by: Bing Zhao 
---
 app/test-pmd/cmdline_flow.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 5771281125..bd626e2347 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -3369,6 +3369,7 @@ static const struct token token_list[] = {
 NEXT_ENTRY(COMMON_UNSIGNED)),
.args = ARGS(ARGS_ENTRY(struct buffer,
args.table.attr.nb_flows)),
+   .call = parse_table,
},
[TABLE_PATTERN_TEMPLATE] = {
.name = "pattern_template",
@@ -10157,6 +10158,11 @@ parse_table(struct context *ctx, const struct token 
*token,
return -1;
out->args.table.attr.specialize = 
RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_VPORT_ORIG;
return len;
+   case TABLE_RULES_NUMBER:
+   ctx->objdata = 0;
+   ctx->object = out;
+   ctx->objmask = NULL;
+   return len;
default:
return -1;
}
-- 
2.34.1



RE: [PATCH v1] examples/fips_validation: fix digest length in AES GCM

2023-06-30 Thread Dooley, Brian
Hey Samina,

> -Original Message-
> From: Arshad, Samina 
> Sent: Wednesday, June 28, 2023 3:39 PM
> To: Dooley, Brian ; Gowrishankar Muthukrishnan
> 
> Cc: dev@dpdk.org; sta...@dpdk.org; Arshad, Samina
> ; Kovacevic, Marko 
> Subject: [PATCH v1] examples/fips_validation: fix digest length in AES GCM
> 
> For AES GCM non JSON decrypt test cases the digest length is being set
> incorrectly. The digest length is not being cleared after test cases, causing 
> an
> issue when running tests individually without the --path-is-folder flag.
> This fix adds the digest length correctly to the decrypt cases and clears the
> digest length after each test file.
> 
> Fixes: 4aaad2995e13 ("examples/fips_validation: support GCM parsing")
> Cc: marko.kovace...@intel.com
> 
> Signed-off-by: Samina Arshad 
> ---
>  examples/fips_validation/main.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/examples/fips_validation/main.c
> b/examples/fips_validation/main.c index 4237224d9d..6518c959c4 100644
> --- a/examples/fips_validation/main.c
> +++ b/examples/fips_validation/main.c
> @@ -834,7 +834,7 @@ prepare_aead_op(void)
>   RTE_LOG(ERR, USER1, "Not enough memory\n");
>   return -ENOMEM;
>   }
> - env.digest_len = vec.cipher_auth.digest.len;
> + env.digest_len = vec.aead.digest.len;
> 
>   sym->aead.data.length = vec.pt.len;
>   sym->aead.digest.data = env.digest;
> @@ -843,7 +843,7 @@ prepare_aead_op(void)
>   ret = prepare_data_mbufs(&vec.ct);
>   if (ret < 0)
>   return ret;
> -
> + env.digest_len = vec.aead.digest.len;
>   sym->aead.data.length = vec.ct.len;
>   sym->aead.digest.data = vec.aead.digest.val;
>   sym->aead.digest.phys_addr = rte_malloc_virt2iova( @@ -
> 2618,6 +2618,7 @@ fips_test_one_file(void)
>   if (env.digest) {
>   rte_free(env.digest);
>   env.digest = NULL;
> + env.digest_len = 0;
>   }
>   rte_pktmbuf_free(env.mbuf);
> 
> --
> 2.25.1

Acked-by: Brian Dooley 



Re: [dpdk-dev] [PATCH v1] eal: update all buses default scan mode

2023-06-30 Thread Stephen Hemminger
On Sun, 28 Mar 2021 21:12:22 +0800
Xueming Li  wrote:

> When parsing EAL allowed or blocked device arguments, only the bus of
> the device being parsed got its default scan mode updated. If the
> devargs was a vdev, the PCI bus default scan mode was not touched, so
> all PCI bus devices got probed even when none appeared in the allowed
> list.
> 
> This patch updates the default scan mode of all buses when parsing the
> first devargs.
> 
> Signed-off-by: Xueming Li 

Looking back at this old patch, and wondering why it never got applied.
Probably because it wasn't clear the exact problem.

It does raise the issue that scan_mode is currently a property
of the bus, not global. This patch would cause setting of allowed list
for PCI to also impact other bus types. That doesn't follow current
practice.

If you want to resubmit, make it per bus.


Re: [dpdk-dev] [PATCH v2 01/11] eal: explain argv behaviour during init

2023-06-30 Thread Stephen Hemminger
On Wed, 10 Mar 2021 14:28:15 +0100
Thomas Monjalon  wrote:

> After argument parsing done by rte_eal_init(),
> the remaining arguments are to be parsed by the application
> by progressing in the argv array.
> In this context, the first string represented by argv[0] is still
> the same program name as the original argv[0],
> while the next strings are the application arguments.
> This is because rte_eal_init() manipulates the argv array
> after EAL parsing, before returning to the application.
> 
> This note was missing in the doxygen comment of the API.
> 
> Signed-off-by: Thomas Monjalon 

I would rather that rte_eal_init() treat the argv arguments
as immutable (i.e. const). Modifying input arguments is confusing
and can cause some issues.

Other functions (getopt, getopt_long, execv) in glibc use:
char *const argv[];

It would be good if EAL was the same.


[PATCH v2] docs: freebsd: Update to 20.11

2023-06-30 Thread David Young
This patch updates the installation instructions for DPDK on FreeBSD.
It specifies the explicit version of DPDK (20.11) to be installed.
This change is important as the 'dpdk' package is an alias and doesn't
always point to the latest version. By specifying the explicit version,
we make it clear which version is to be installed. The page previously
showed 'pkg install dpdk' without specifying the version.


Signed-off-by: David Young 

---
 doc/guides/freebsd_gsg/install_from_ports.rst | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/doc/guides/freebsd_gsg/install_from_ports.rst 
b/doc/guides/freebsd_gsg/install_from_ports.rst
index d946f3f3b2..ae866cd879 100644
--- a/doc/guides/freebsd_gsg/install_from_ports.rst
+++ b/doc/guides/freebsd_gsg/install_from_ports.rst
@@ -23,7 +23,7 @@ Installing the DPDK Package for FreeBSD
 
 DPDK can be installed on FreeBSD using the command::
 
-   pkg install dpdk
+   pkg install dpdk20.11
 
 After the installation of the DPDK package, instructions will be printed on
 how to install the kernel modules required to use the DPDK. A more
@@ -51,7 +51,7 @@ a pre-compiled binary package.
 On a system with the ports collection installed in ``/usr/ports``, the DPDK
 can be installed using the commands::
 
-cd /usr/ports/net/dpdk
+cd /usr/ports/net/dpdk20.11
 
 make install
 
@@ -123,3 +123,4 @@ via the contigmem module, and 4 NIC ports bound to the 
nic_uio module::
 
For an explanation of the command-line parameters that can be passed to an
DPDK application, see section :ref:`running_sample_app`.
+
-- 
2.41.0.windows.1



[PATCH v2 1/2] hash: fix reading unaligned bits implementation

2023-06-30 Thread Vladimir Medvedkin
Fixes: 28ebff11c2dc ("hash: add predictable RSS")
Cc: sta...@dpdk.org

Acked-by: Konstantin Ananyev 
Tested-by: Konstantin Ananyev 
Signed-off-by: Vladimir Medvedkin 
---
 lib/hash/rte_thash.c | 12 +++-
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/lib/hash/rte_thash.c b/lib/hash/rte_thash.c
index 0249883b8d..2228af576b 100644
--- a/lib/hash/rte_thash.c
+++ b/lib/hash/rte_thash.c
@@ -670,7 +670,7 @@ rte_thash_get_gfni_matrices(struct rte_thash_ctx *ctx)
 }
 
 static inline uint8_t
-read_unaligned_byte(uint8_t *ptr, unsigned int len, unsigned int offset)
+read_unaligned_byte(uint8_t *ptr, unsigned int offset)
 {
uint8_t ret = 0;
 
@@ -681,13 +681,14 @@ read_unaligned_byte(uint8_t *ptr, unsigned int len, 
unsigned int offset)
(CHAR_BIT - (offset % CHAR_BIT));
}
 
-   return ret >> (CHAR_BIT - len);
+   return ret;
 }
 
 static inline uint32_t
 read_unaligned_bits(uint8_t *ptr, int len, int offset)
 {
uint32_t ret = 0;
+   int shift;
 
len = RTE_MAX(len, 0);
len = RTE_MIN(len, (int)(sizeof(uint32_t) * CHAR_BIT));
@@ -695,13 +696,14 @@ read_unaligned_bits(uint8_t *ptr, int len, int offset)
while (len > 0) {
ret <<= CHAR_BIT;
 
-   ret |= read_unaligned_byte(ptr, RTE_MIN(len, CHAR_BIT),
-   offset);
+   ret |= read_unaligned_byte(ptr, offset);
offset += CHAR_BIT;
len -= CHAR_BIT;
}
 
-   return ret;
+   shift = (len == 0) ? 0 :
+   (CHAR_BIT - ((len + CHAR_BIT) % CHAR_BIT));
+   return ret >> shift;
 }
 
 /* returns mask for len bits with given offset inside byte */
-- 
2.25.1



[PATCH v2 2/2] test: add additional tests for thash library

2023-06-30 Thread Vladimir Medvedkin
Adds tests comparing the results of applying the output
of rte_thash_get_complement() to the tuple with the result
of calling rte_thash_adjust_tuple().

Suggested-by: Konstantin Ananyev 
Signed-off-by: Konstantin Ananyev 
Signed-off-by: Vladimir Medvedkin 
---
 app/test/test_thash.c | 132 ++
 1 file changed, 132 insertions(+)

diff --git a/app/test/test_thash.c b/app/test/test_thash.c
index 62ba4a9528..53d9611e18 100644
--- a/app/test/test_thash.c
+++ b/app/test/test_thash.c
@@ -804,6 +804,137 @@ test_adjust_tuple(void)
return TEST_SUCCESS;
 }
 
+static uint32_t
+calc_tuple_hash(const uint8_t tuple[TUPLE_SZ], const uint8_t *key)
+{
+   uint32_t i, hash;
+   uint32_t tmp[TUPLE_SZ / sizeof(uint32_t)];
+
+   for (i = 0; i < RTE_DIM(tmp); i++)
+   tmp[i] = rte_be_to_cpu_32(
+   *(const uint32_t *)&tuple[i * sizeof(uint32_t)]);
+
+   hash = rte_softrss(tmp, RTE_DIM(tmp), key);
+   return hash;
+}
+
+static int
+check_adj_tuple(const uint8_t tuple[TUPLE_SZ], const uint8_t *key,
+   uint32_t dhv, uint32_t ohv, uint32_t adjust, uint32_t reta_sz,
+   const char *prefix)
+{
+   uint32_t hash, hashlsb;
+
+   hash = calc_tuple_hash(tuple, key);
+   hashlsb = hash & HASH_MSK(reta_sz);
+
+   printf("%s(%s) for tuple:\n", __func__, prefix);
+   rte_memdump(stdout, NULL, tuple, TUPLE_SZ);
+   printf("\treta_sz: %u,\n"
+   "\torig hash: %#x,\n"
+   "\tdesired: %#x,\n"
+   "\tadjust: %#x,\n"
+   "\tactual: %#x,\n",
+  reta_sz, ohv, dhv, adjust, hashlsb);
+
+   if (dhv == hashlsb) {
+   printf("\t***Succeeded\n");
+   return 0;
+   }
+
+   printf("\t***Failed\n");
+   return -1;
+}
+
+static int
+test_adjust_tuple_mb(uint32_t reta_sz, uint32_t bofs)
+{
+   struct rte_thash_ctx *ctx;
+   struct rte_thash_subtuple_helper *h;
+   const int key_len = 40;
+   const uint8_t *new_key;
+   uint8_t orig_tuple[TUPLE_SZ];
+   uint8_t tuple_1[TUPLE_SZ];
+   uint8_t tuple_2[TUPLE_SZ];
+   uint32_t orig_hash;
+   int rc, ret;
+   uint32_t adj_bits;
+   unsigned int random = rte_rand();
+   unsigned int desired_value = random & HASH_MSK(reta_sz);
+
+   const uint32_t h_offset = offsetof(union rte_thash_tuple, v4.dport) * 
CHAR_BIT;
+   const uint32_t h_size = sizeof(uint16_t) * CHAR_BIT - bofs;
+
+   printf("===%s(reta_sz=%u,bofs=%u)===\n", __func__, reta_sz, bofs);
+
+   memset(orig_tuple, 0xab, sizeof(orig_tuple));
+
+   ctx = rte_thash_init_ctx("test", key_len, reta_sz, NULL, 0);
+   RTE_TEST_ASSERT(ctx != NULL, "can not create thash ctx\n");
+
+   ret = rte_thash_add_helper(ctx, "test", h_size, h_offset);
+   RTE_TEST_ASSERT(ret == 0, "can not add helper, ret %d\n", ret);
+
+   new_key = rte_thash_get_key(ctx);
+
+   h = rte_thash_get_helper(ctx, "test");
+
+   orig_hash = calc_tuple_hash(orig_tuple, new_key);
+
+   adj_bits = rte_thash_get_complement(h, orig_hash, desired_value);
+
+   /* use method #1, update tuple manually */
+   memcpy(tuple_1, orig_tuple, sizeof(tuple_1));
+   {
+   uint16_t nv, ov, *p;
+
+   p = (uint16_t *)(tuple_1 + h_offset / CHAR_BIT);
+   ov = p[0];
+   nv = ov ^ rte_cpu_to_be_16(adj_bits << bofs);
+   printf("%s#%d: ov=%#hx, nv=%#hx, adj=%#x;\n",
+   __func__, __LINE__, ov, nv, adj_bits);
+   p[0] = nv;
+   }
+
+   rc = check_adj_tuple(tuple_1, new_key, desired_value, orig_hash,
+   adj_bits, reta_sz, "method #1");
+   if (h_offset % CHAR_BIT == 0)
+   ret |= rc;
+
+   /* use method #2, use library function to adjust tuple */
+   memcpy(tuple_2, orig_tuple, sizeof(tuple_2));
+
+   rte_thash_adjust_tuple(ctx, h, tuple_2, sizeof(tuple_2),
+   desired_value, 1, NULL, NULL);
+   ret |= check_adj_tuple(tuple_2, new_key, desired_value, orig_hash,
+   adj_bits, reta_sz, "method #2");
+
+   rte_thash_free_ctx(ctx);
+
+   ret |= memcmp(tuple_1, tuple_2, sizeof(tuple_1));
+
+   printf("%s EXIT===\n", __func__);
+   return ret;
+}
+
+static int
+test_adjust_tuple_mult_reta(void)
+{
+   uint32_t i, j, np, nt;
+
+   nt = 0, np = 0;
+   for (i = 0; i < CHAR_BIT; i++) {
+   for (j = 6; j <= RTE_THASH_RETA_SZ_MAX - i; j++) {
+   np += (test_adjust_tuple_mb(j, i) == 0);
+   nt++;
+   }
+   }
+
+   printf("%s: tests executed: %u, test passed: %u\n", __func__, nt, np);
+   RTE_TEST_ASSERT(nt == np, "%u subtests failed", nt - np);
+   return TEST_SUCCESS;
+}
+
 static struct unit_test_suite thash_tests = {
.suite_name = "thash autotest",
.setup = NULL,
@@ -824,6 +955,7 @@ static struct u

[PATCH] fib: fix adding a default route

2023-06-30 Thread Vladimir Medvedkin
Fixed an issue that occurs when
adding a default route as the first route.

Bugzilla ID: 1160
Fixes: 7dc7868b200d ("fib: add DIR24-8 dataplane algorithm")
Cc: sta...@dpdk.org

Signed-off-by: Vladimir Medvedkin 
---
 lib/fib/dir24_8.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/fib/dir24_8.c b/lib/fib/dir24_8.c
index a8ba4f64ca..e77575d62c 100644
--- a/lib/fib/dir24_8.c
+++ b/lib/fib/dir24_8.c
@@ -390,7 +390,7 @@ modify_fib(struct dir24_8_tbl *dp, struct rte_rib *rib, 
uint32_t ip,
(uint32_t)(1ULL << (32 - tmp_depth));
} else {
redge = ip + (uint32_t)(1ULL << (32 - depth));
-   if (ledge == redge)
+   if ((ledge == redge) && (ledge != 0))
break;
ret = install_to_fib(dp, ledge, redge,
next_hop);
-- 
2.25.1



Re: [PATCH] fib: fix adding a default route

2023-06-30 Thread Stephen Hemminger
On Fri, 30 Jun 2023 17:10:35 +
Vladimir Medvedkin  wrote:

>   redge = ip + (uint32_t)(1ULL << (32 - depth));
> - if (ledge == redge)
> + if ((ledge == redge) && (ledge != 0))

Extra parentheses are not necessary here.


Re: [dpdk-dev] [PATCH 10/10] net/bonding: fix configuration assignment overflow

2023-06-30 Thread Stephen Hemminger
On Mon, 19 Apr 2021 21:34:49 +0800
"Min Hu (Connor)"  wrote:

> From: Chengchang Tang 
> 
> The expression may cause an overflow.
> 
> This patch fixes the codeDEX static check warning "INTEGER_OVERFLOW".
> 
> Fixes: 46fb43683679 ("bond: add mode 4")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Chengchang Tang 
> Signed-off-by: Min Hu (Connor) 
> ---
Is the codeDEX static checker publicly available?
Would be good to add to CI infrastructure.


Acked-by: Stephen Hemminger 


Re: [dpdk-dev] [PATCH 06/10] lib/librte_pipeline: fix the use of unsafe strcpy

2023-06-30 Thread Stephen Hemminger
On Mon, 19 Apr 2021 21:34:45 +0800
"Min Hu (Connor)"  wrote:

> From: HongBo Zheng 
> 
> 'strcpy' is called in rte_swx_ctl_table_info_get; this function
> is unsafe, so use 'strncpy' instead.
> 
> Fixes: 393b96e2aa2a ("pipeline: add SWX pipeline query API")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: HongBo Zheng 
> Signed-off-by: Min Hu (Connor) 
> ---
>  lib/librte_pipeline/rte_swx_pipeline.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/lib/librte_pipeline/rte_swx_pipeline.c 
> b/lib/librte_pipeline/rte_swx_pipeline.c
> index 4455d91..d4db4dd 100644
> --- a/lib/librte_pipeline/rte_swx_pipeline.c
> +++ b/lib/librte_pipeline/rte_swx_pipeline.c
> @@ -9447,8 +9447,8 @@ rte_swx_ctl_table_info_get(struct rte_swx_pipeline *p,
>   if (!t)
>   return -EINVAL;
>  
> - strcpy(table->name, t->name);
> - strcpy(table->args, t->args);
> + strncpy(table->name, t->name, RTE_SWX_CTL_NAME_SIZE);
> + strncpy(table->args, t->args, RTE_SWX_CTL_NAME_SIZE);
>   table->n_match_fields = t->n_fields;
>   table->n_actions = t->n_actions;
>   table->default_action_is_const = t->default_action_is_const;

This patch is unnecessary.
Both structures declare the same size for the name and args.
Therefore the strcpy is always safe as long as the table structure
is correctly set up with null-terminated strings. If not, there are worse bugs.


Re: [dpdk-dev] [PATCH 08/10] crypto/virtio: fix return values check error

2023-06-30 Thread Stephen Hemminger
On Mon, 19 Apr 2021 21:34:47 +0800
"Min Hu (Connor)"  wrote:

> From: HongBo Zheng 
> 
> In virtio_crypto_pkt_tx_burst, we check the return values of
> virtqueue_crypto_enqueue_xmit, which may return -ENOSPC/-EMSGSIZE,
> but we only check for ENOSPC/EMSGSIZE, so the result of the checks
> is always false.
> 
> This patch fixes this problem.
> 
> Fixes: 82adb12a1fce ("crypto/virtio: support burst enqueue/dequeue")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: HongBo Zheng 
> Signed-off-by: Min Hu (Connor) 

This patch looks correct.

Acked-by: Stephen Hemminger 


Re: [dpdk-dev] [PATCH 07/10] examples/l3fwd: add function return value check

2023-06-30 Thread Stephen Hemminger
On Mon, 19 Apr 2021 21:34:46 +0800
"Min Hu (Connor)"  wrote:

> From: HongBo Zheng 
> 
> The return value of the function 'rte_eth_macaddr_get' called at
> l3fwd_eth_dev_port_setup is not checked, but it is usually
> checked for this function.
> 
> This patch fixes this problem.
> 
> Fixes: a65bf3d724df ("examples/l3fwd: add ethdev setup based on eventdev")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: HongBo Zheng 
> Signed-off-by: Min Hu (Connor) 

This looks correct, but only a buggy driver would never set macaddr.

Acked-by: Stephen Hemminger 


RE: [PATCH v1] crypto/qat: fix struct alignment

2023-06-30 Thread Power, Ciara



> -Original Message-
> From: Brian Dooley 
> Sent: Friday 30 June 2023 12:31
> To: Ji, Kai 
> Cc: dev@dpdk.org; gak...@marvell.com; Dooley, Brian
> 
> Subject: [PATCH v1] crypto/qat: fix struct alignment
> 
> The qat_sym_session struct variable alignment was causing a segfault.
> AES expansion keys require 16-byte alignment. Added __rte_aligned to the
> expansion keys.
> 
> Fixes: ca0ba0e48129 ("crypto/qat: default to IPsec MB for computations")
> 
> Signed-off-by: Brian Dooley 
> ---

Acked-by: Ciara Power 


Re: [PATCH] mempool: fix rte primary program coredump

2023-06-30 Thread Stephen Hemminger
On Thu, 27 Jan 2022 11:06:56 +0100
Olivier Matz  wrote:

> > 
> >  this array in the primary program is different from the one in the
> >  secondary program. So when the secondary program calls
> >  rte_pktmbuf_pool_create_by_ops() with mempool name "ring_mp_mc",
> >  the primary program uses the "bucket" type to alloc rte_mbuf.
> > 
> >  So sort this array in both the primary and secondary programs when
> >  initializing the memzone.
> > 
> > Signed-off-by: Tianli Lai   
> 
> I think it is the same problem than the one described here:
> http://inbox.dpdk.org/dev/1583114253-15345-1-git-send-email-xiangxia.m@gmail.com/#r
> 
> To summarize what is said in the thread, sorting ops looks dangerous because it
> changes the index during the lifetime of the application. A new proposal was
> made to use a shared memory to ensure the indexes are the same in primary and
> secondaries, but it requires some changes in EAL to have init callbacks at a
> specific place.
> 
> I have a draft patchset that may fix this issue by using the vdev 
> infrastructure
> instead of a specific init, but it is not heavily tested. I can send it here 
> as
> a RFC if you want to try it.
> 
> One thing that is not clear to me is how do you trigger this issue? Why the
> mempool ops are not loaded in the same order in primary and secondary?
> 
> Thanks,
> Olivier

Agree with Olivier, a hard-coded sort is not the best way to fix this.
Some work is needed either to address the ordering or to communicate the
list between primary and secondary.


Re: [dpdk-dev] [PATCH v4] examples/l3fwd: ipv4 and udp/tcp cksum verification through software

2023-06-30 Thread Stephen Hemminger
On Thu, 4 Nov 2021 11:11:02 +
"Walsh, Conor"  wrote:

> > Checks if IPv4 and UDP/TCP cksum offload capability is available.
> > If not available, the cksum is verified through software.
> > If the cksum is corrupt, the packet is dropped; the rest of the
> > packets are forwarded back.
> > 
> > Bugzilla ID: 545
> > Signed-off-by: Usama Nadeem 
> > ---  
> 
> Hi Usama,
> 
> This should be done in a generic way that allows all the lookup methods to
> support it, not just LPM.
> check_software_cksum should go in a common file and be called from LPM, FIB 
> and possibly EM.
> 
> Thanks,
> Conor.

Agree.
This is a real bug in l3fwd-XXX examples.
It needs to be done in a more general way so that applications can use this
design pattern as a template.

Please submit a new version


Re: [dpdk-dev] [PATCH] flow_classify: remove experimental tag from the API

2023-06-30 Thread Stephen Hemminger
On Wed, 15 Sep 2021 16:16:35 +0100
Bernard Iremonger  wrote:

> This API was introduced in 17.11; removing the experimental tag
> to promote it to a stable state.
> 
> Signed-off-by: Bernard Iremonger 

The API is unmaintained and because of that likely to be deprecated in the future.
Marking it as stable at this point would not be a good idea.



[PATCH] app/testpmd: add IPv4 length field matching

2023-06-30 Thread Bing Zhao
Support for translating the IPv4 total length field is added to the
command line, passing the value through to the rte_flow API.

Signed-off-by: Bing Zhao 
---
 app/test-pmd/cmdline_flow.c | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index bd626e2347..738ecf2a40 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -313,6 +313,7 @@ enum index {
ITEM_IPV4,
ITEM_IPV4_VER_IHL,
ITEM_IPV4_TOS,
+   ITEM_IPV4_LENGTH,
ITEM_IPV4_ID,
ITEM_IPV4_FRAGMENT_OFFSET,
ITEM_IPV4_TTL,
@@ -1604,6 +1605,7 @@ static const enum index item_vlan[] = {
 static const enum index item_ipv4[] = {
ITEM_IPV4_VER_IHL,
ITEM_IPV4_TOS,
+   ITEM_IPV4_LENGTH,
ITEM_IPV4_ID,
ITEM_IPV4_FRAGMENT_OFFSET,
ITEM_IPV4_TTL,
@@ -4229,6 +4231,14 @@ static const struct token token_list[] = {
.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv4,
 hdr.type_of_service)),
},
+   [ITEM_IPV4_LENGTH] = {
+   .name = "length",
+   .help = "total length",
+   .next = NEXT(item_ipv4, NEXT_ENTRY(COMMON_UNSIGNED),
+item_param),
+   .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv4,
+hdr.total_length)),
+   },
[ITEM_IPV4_ID] = {
.name = "packet_id",
.help = "fragment packet id",
-- 
2.34.1
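Assuming the usual testpmd flow grammar, a rule using the new token might look like the following at the testpmd prompt (sketch; port id, queue index and length value are placeholders):

```
testpmd> flow create 0 ingress pattern eth / ipv4 length is 1500 / end actions queue index 1 / end
```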



[PATCH v2] app/pdump: exit if no device specified

2023-06-30 Thread Stephen Hemminger
A simpler version of an earlier patch which had a good idea but was
implemented with more code than necessary.
If no device is specified, don't start the capture loop.

Reported-by: usman.tanveer 
Signed-off-by: Stephen Hemminger 
---
 app/pdump/main.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/app/pdump/main.c b/app/pdump/main.c
index c94606275b28..7a1c7bdf6011 100644
--- a/app/pdump/main.c
+++ b/app/pdump/main.c
@@ -915,6 +915,9 @@ dump_packets(void)
int i;
unsigned int lcore_id = 0;
 
+   if (num_tuples == 0)
+   rte_exit(EXIT_FAILURE, "No device specified for capture\n");
+
if (!multiple_core_capture) {
printf(" core (%u), capture for (%d) tuples\n",
rte_lcore_id(), num_tuples);
-- 
2.39.2