Re: [PATCH v1] net/vhost: add queue status check

2022-01-31 Thread Maxime Coquelin




On 11/19/21 07:30, Li, Miao wrote:

Hi


-Original Message-
From: Maxime Coquelin 
Sent: Tuesday, November 16, 2021 5:36 PM
To: Li, Miao ; dev@dpdk.org
Cc: Xia, Chenbo 
Subject: Re: [PATCH v1] net/vhost: add queue status check



On 11/16/21 10:34, Maxime Coquelin wrote:



On 11/16/21 17:44, Miao Li wrote:

This patch adds queue status check to make sure that vhost monitor
address will not be got until the link between backend and frontend

s/got/gone/?

up and the packets are allowed to be queued.


It needs a fixes tag.


If we don't add this check, rte_vhost_get_monitor_addr will return -EINVAL when it 
checks whether dev is NULL. But before returning, get_device() will be called and will print 
the error log "device not found". So we want to add this check and return -EINVAL before calling 
rte_vhost_get_monitor_addr. Even without this check, the vhost monitor address will 
not be retrieved, but vhost will print the error log continuously. It has no functional impact, 
so I think it is not a fix.




Signed-off-by: Miao Li 
---
   drivers/net/vhost/rte_eth_vhost.c | 2 ++
   1 file changed, 2 insertions(+)

diff --git a/drivers/net/vhost/rte_eth_vhost.c
b/drivers/net/vhost/rte_eth_vhost.c
index 070f0e6dfd..9d600054d8 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -1415,6 +1415,8 @@ vhost_get_monitor_addr(void *rx_queue, struct
rte_power_monitor_cond *pmc)
   int ret;
   if (vq == NULL)
   return -EINVAL;
+    if (unlikely(rte_atomic32_read(&vq->allow_queuing) == 0))
+    return -EINVAL;


Also, EINVAL might not be the right return value here.


I don't know which return value would be better. Do you have any suggestions? 
Thanks!




How does it help?
What prevents allow_queuing from becoming zero between the check and the
call to rte_vhost_get_monitor_addr?


This check will prevent vhost from printing the error log continuously.


You mean, it will prevent it most of the time, as there is still a
window where it can happen, if allow_queuing is cleared between the check
and the call to rte_vhost_get_monitor_addr.



I think you need to implement the same logic as in eth_vhost_rx(), i.e.
check allow_queueing, set while_queueing, check allow_queueing, do your
stuff and clear while_queuing.
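For reference, the eth_vhost_rx()-style guard being discussed can be modeled with C11 atomics as below. This is a sketch only: the field and function names are assumed from the discussion, not taken from the actual driver code.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Model of the guard: check allow_queuing, set while_queuing, re-check
 * allow_queuing, do the work, clear while_queuing. The destroy path is
 * expected to clear allow_queuing and then wait for while_queuing to
 * drop, which closes the race window. */
struct vhost_queue_model {
	atomic_int allow_queuing;
	atomic_int while_queuing;
};

static bool
guarded_call(struct vhost_queue_model *vq, int (*op)(void), int *ret)
{
	if (atomic_load(&vq->allow_queuing) == 0)
		return false;
	atomic_store(&vq->while_queuing, 1);
	if (atomic_load(&vq->allow_queuing) == 0) {
		/* Cleared between the first check and here: back out. */
		atomic_store(&vq->while_queuing, 0);
		return false;
	}
	*ret = op();	/* stand-in for rte_vhost_get_monitor_addr() */
	atomic_store(&vq->while_queuing, 0);
	return true;
}

static int
sample_op(void)
{
	return 42;	/* placeholder for the real call */
}
```

The second load is what distinguishes this from the single check in the patch: after while_queuing is set, the destroy path can no longer complete underneath the caller.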


I think while_queuing is unnecessary because we only read the value in vq, 
and this API will only be called as an RX callback.

Thanks,
Miao




   ret = rte_vhost_get_monitor_addr(vq->vid, vq->virtqueue_id,
   &vhost_pmc);
   if (ret < 0)



Maxime






Re: [PATCH] doc: update recommended IOVA mode for async vhost

2022-01-31 Thread Maxime Coquelin




On 11/22/21 09:49, Xuan Ding wrote:

DPDK 21.11 adds vfio support for DMA devices in vhost. This patch
updates the recommended IOVA mode in the async datapath.

Signed-off-by: Xuan Ding 
---
  doc/guides/prog_guide/vhost_lib.rst | 9 +
  1 file changed, 9 insertions(+)

diff --git a/doc/guides/prog_guide/vhost_lib.rst 
b/doc/guides/prog_guide/vhost_lib.rst
index 76f5d303c9..f72ce75909 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -442,3 +442,12 @@ Finally, a set of device ops is defined for device 
specific operations:
  * ``get_notify_area``
  
Called to get the notify area info of the queue.

+
+Recommended IOVA mode in async datapath
+---------------------------------------
+
+When DMA devices are bound to vfio driver, VA mode is recommended.
+For PA mode, page by page mapping may exceed IOMMU's max capability,
+better to use 1G guest hugepage.
+
+For uio driver, any vfio related error message can be ignored.
\ No newline at end of file


Reviewed-by: Maxime Coquelin 

Thanks,
Maxime



Re: [PATCH] net/virtio: include ipv4 cksum to support cksum offload capability

2022-01-31 Thread Maxime Coquelin

Hi Harold,

On 1/7/22 12:53, Harold Huang wrote:

Device cksum offload capability usually includes ipv4, tcp, and udp
cksum offload. Applications such as OVS usually negotiate
with the driver like this to enable cksum offload.

Signed-off-by: Harold Huang 
---
  drivers/net/virtio/virtio_ethdev.c | 2 ++
  1 file changed, 2 insertions(+)

diff --git a/drivers/net/virtio/virtio_ethdev.c 
b/drivers/net/virtio/virtio_ethdev.c
index c2588369b2..65b03bf0e4 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -3041,6 +3041,7 @@ virtio_dev_info_get(struct rte_eth_dev *dev, struct 
rte_eth_dev_info *dev_info)
dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_SCATTER;
if (host_features & (1ULL << VIRTIO_NET_F_GUEST_CSUM)) {
dev_info->rx_offload_capa |=
+   RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
RTE_ETH_RX_OFFLOAD_UDP_CKSUM;
}
@@ -3055,6 +3056,7 @@ virtio_dev_info_get(struct rte_eth_dev *dev, struct 
rte_eth_dev_info *dev_info)
RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
if (host_features & (1ULL << VIRTIO_NET_F_CSUM)) {
dev_info->tx_offload_capa |=
+   RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
}


I'm not sure I understand why this is needed, as the Vhost lib will always
ensure the IP csum has been calculated. Could you please elaborate?

Thanks,
Maxime
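For what it's worth, the reason an application cares is that it typically enables checksum offload only when the full set of capability flags is advertised. The toy model below mirrors the patched logic with illustrative constants; the real RTE_ETH_RX_OFFLOAD_* and VIRTIO_NET_F_* values differ.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative bit values only, not the real DPDK constants. */
#define F_GUEST_CSUM		(1ULL << 10)
#define RX_OFFLOAD_IPV4_CKSUM	0x1u
#define RX_OFFLOAD_UDP_CKSUM	0x2u
#define RX_OFFLOAD_TCP_CKSUM	0x4u

/* Mirrors the patched driver logic: when the host offers GUEST_CSUM,
 * advertise the IPv4 checksum capability alongside TCP/UDP. */
static uint32_t
rx_caps_from_features(uint64_t host_features)
{
	uint32_t caps = 0;

	if (host_features & F_GUEST_CSUM)
		caps |= RX_OFFLOAD_IPV4_CKSUM |
			RX_OFFLOAD_TCP_CKSUM |
			RX_OFFLOAD_UDP_CKSUM;
	return caps;
}

/* Application side: enable offload only if every wanted flag is there. */
static int
can_enable_csum(uint32_t caps)
{
	const uint32_t wanted = RX_OFFLOAD_IPV4_CKSUM |
				RX_OFFLOAD_TCP_CKSUM |
				RX_OFFLOAD_UDP_CKSUM;

	return (caps & wanted) == wanted;
}
```

Without the IPv4 bit in the advertised set, the all-flags check on the application side fails and checksum offload stays disabled.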



[PATCH v2] common/cnxk: enable NIX Tx interrupts errata

2022-01-31 Thread Harman Kalra
An errata exists whereby NIX may incorrectly overwrite the value in
NIX_SQ_CTX_S[SQ_INT]. This may cause set interrupts to get cleared or
cause a QINT when no error is outstanding.
As a workaround, software should always read all SQ debug registers
and not just rely on NIX_SQINT_E bits set in NIX_SQ_CTX_S[SQ_INT].
Also, for detecting SQB faults, software must read the SQ context and
check if next_sqb is NULL.

Signed-off-by: Harman Kalra 
---
V2:
* Rebase on branch code

 drivers/common/cnxk/roc_nix_irq.c | 64 ++-
 1 file changed, 46 insertions(+), 18 deletions(-)

diff --git a/drivers/common/cnxk/roc_nix_irq.c 
b/drivers/common/cnxk/roc_nix_irq.c
index 7dcd533ea9..71971ef261 100644
--- a/drivers/common/cnxk/roc_nix_irq.c
+++ b/drivers/common/cnxk/roc_nix_irq.c
@@ -196,18 +196,42 @@ nix_lf_sq_irq_get_and_clear(struct nix *nix, uint16_t sq)
return nix_lf_q_irq_get_and_clear(nix, sq, NIX_LF_SQ_OP_INT, ~0x1ff00);
 }
 
-static inline void
+static inline bool
+nix_lf_is_sqb_null(struct dev *dev, int q)
+{
+   bool is_sqb_null = false;
+   volatile void *ctx;
+   int rc;
+
+   rc = nix_q_ctx_get(dev, NIX_AQ_CTYPE_SQ, q, &ctx);
+   if (rc) {
+   plt_err("Failed to get sq context");
+   } else {
+   is_sqb_null =
+   roc_model_is_cn9k() ?
+   (((__io struct nix_sq_ctx_s *)ctx)->next_sqb ==
+0) :
+   (((__io struct nix_cn10k_sq_ctx_s *)ctx)
+->next_sqb == 0);
+   }
+
+   return is_sqb_null;
+}
+
+static inline uint8_t
 nix_lf_sq_debug_reg(struct nix *nix, uint32_t off)
 {
+   uint8_t err = 0;
uint64_t reg;
 
reg = plt_read64(nix->base + off);
if (reg & BIT_ULL(44)) {
-   plt_err("SQ=%d err_code=0x%x", (int)((reg >> 8) & 0xf),
-   (uint8_t)(reg & 0xff));
+   err = reg & 0xff;
/* Clear valid bit */
plt_write64(BIT_ULL(44), nix->base + off);
}
+
+   return err;
 }
 
 static void
@@ -229,6 +253,7 @@ nix_lf_q_irq(void *param)
struct dev *dev = &nix->dev;
int q, cq, rq, sq;
uint64_t intr;
+   uint8_t rc;
 
intr = plt_read64(nix->base + NIX_LF_QINTX_INT(qintx));
if (intr == 0)
@@ -269,22 +294,25 @@ nix_lf_q_irq(void *param)
sq = q % nix->qints;
irq = nix_lf_sq_irq_get_and_clear(nix, sq);
 
-   if (irq & BIT_ULL(NIX_SQINT_LMT_ERR)) {
-   plt_err("SQ=%d NIX_SQINT_LMT_ERR", sq);
-   nix_lf_sq_debug_reg(nix, NIX_LF_SQ_OP_ERR_DBG);
-   }
-   if (irq & BIT_ULL(NIX_SQINT_MNQ_ERR)) {
-   plt_err("SQ=%d NIX_SQINT_MNQ_ERR", sq);
-   nix_lf_sq_debug_reg(nix, NIX_LF_MNQ_ERR_DBG);
-   }
-   if (irq & BIT_ULL(NIX_SQINT_SEND_ERR)) {
-   plt_err("SQ=%d NIX_SQINT_SEND_ERR", sq);
-   nix_lf_sq_debug_reg(nix, NIX_LF_SEND_ERR_DBG);
-   }
-   if (irq & BIT_ULL(NIX_SQINT_SQB_ALLOC_FAIL)) {
+   /* Detect LMT store error */
+   rc = nix_lf_sq_debug_reg(nix, NIX_LF_SQ_OP_ERR_DBG);
+   if (rc)
+   plt_err("SQ=%d NIX_SQINT_LMT_ERR, errcode %x", sq, rc);
+
+   /* Detect Meta-descriptor enqueue error */
+   rc = nix_lf_sq_debug_reg(nix, NIX_LF_MNQ_ERR_DBG);
+   if (rc)
+   plt_err("SQ=%d NIX_SQINT_MNQ_ERR, errcode %x", sq, rc);
+
+   /* Detect Send error */
+   rc = nix_lf_sq_debug_reg(nix, NIX_LF_SEND_ERR_DBG);
+   if (rc)
+   plt_err("SQ=%d NIX_SQINT_SEND_ERR, errcode %x", sq, rc);
+
+   /* Detect SQB fault, read SQ context to check SQB NULL case */
+   if (irq & BIT_ULL(NIX_SQINT_SQB_ALLOC_FAIL) ||
+   nix_lf_is_sqb_null(dev, q))
plt_err("SQ=%d NIX_SQINT_SQB_ALLOC_FAIL", sq);
-   nix_lf_sq_debug_reg(nix, NIX_LF_SEND_ERR_DBG);
-   }
}
 
/* Clear interrupt */
-- 
2.18.0
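The essence of the workaround is that the valid bit in each debug register, rather than the SQINT cause bits, now decides whether an error is reported. A simplified software model follows; the register layout is taken from the diff above and is not a hardware-accurate interface.

```c
#include <assert.h>
#include <stdint.h>

#define DBG_VALID_BIT	(1ULL << 44)	/* valid bit, per the diff above */

/* Model of nix_lf_sq_debug_reg(): if the valid bit is set, extract the
 * low 8-bit error code and clear the valid bit; otherwise report 0. */
static uint8_t
sq_debug_read(uint64_t *reg)
{
	uint8_t err = 0;

	if (*reg & DBG_VALID_BIT) {
		err = (uint8_t)(*reg & 0xff);
		*reg &= ~DBG_VALID_BIT;	/* hardware is write-1-to-clear;
					 * modeled here as a plain clear */
	}
	return err;
}
```

A second read after the clear returns 0, which is why the interrupt handler can poll every debug register unconditionally without double-reporting.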



[PATCH 0/5] Adding new features and improvements in cnxk crypto PMD

2022-01-31 Thread Tejasree Kondoj
This series adds ESN, antireplay support and improvements
to cnxk crypto PMDs.

Anoob Joseph (4):
  common/cnxk: add err ctl
  common/cnxk: add ROC cache line size constant
  crypto/cnxk: use unique cache line per inst
  crypto/cnxk: fix updation of number of descriptors

Tejasree Kondoj (1):
  crypto/cnxk: add ESN and antireplay support

 doc/guides/cryptodevs/cnxk.rst|  2 +
 doc/guides/rel_notes/release_22_03.rst|  1 +
 drivers/common/cnxk/cnxk_security.c   |  9 +++-
 drivers/common/cnxk/cnxk_security_ar.h|  2 +-
 drivers/common/cnxk/roc_constants.h   |  5 ++-
 drivers/common/cnxk/roc_cpt.c |  3 --
 drivers/common/cnxk/roc_ie_on.h   |  2 +
 drivers/common/cnxk/roc_ie_ot.h   | 17 +++-
 drivers/crypto/cnxk/cn10k_ipsec.c | 36 +++-
 drivers/crypto/cnxk/cn9k_ipsec.c  | 43 ++-
 drivers/crypto/cnxk/cn9k_ipsec_la_ops.h   | 16 ++-
 .../crypto/cnxk/cnxk_cryptodev_capabilities.c |  4 ++
 drivers/crypto/cnxk/cnxk_cryptodev_ops.c  |  8 +++-
 drivers/crypto/cnxk/cnxk_cryptodev_ops.h  |  4 +-
 14 files changed, 135 insertions(+), 17 deletions(-)

-- 
2.27.0



[PATCH 1/5] common/cnxk: add err ctl

2022-01-31 Thread Tejasree Kondoj
From: Anoob Joseph 

Add err ctl field in SA context.

Signed-off-by: Anoob Joseph 
---
 drivers/common/cnxk/cnxk_security.c |  6 --
 drivers/common/cnxk/roc_ie_ot.h | 17 -
 2 files changed, 20 insertions(+), 3 deletions(-)

diff --git a/drivers/common/cnxk/cnxk_security.c 
b/drivers/common/cnxk/cnxk_security.c
index 6ebf0846f5..035d61180a 100644
--- a/drivers/common/cnxk/cnxk_security.c
+++ b/drivers/common/cnxk/cnxk_security.c
@@ -500,8 +500,10 @@ cnxk_ot_ipsec_outb_sa_fill(struct roc_ot_ipsec_outb_sa *sa,
offset = offsetof(struct roc_ot_ipsec_outb_sa, ctx);
/* Word offset for HW managed SA field */
sa->w0.s.hw_ctx_off = offset / 8;
-   /* Context push size is up to hmac_opad_ipad */
-   sa->w0.s.ctx_push_size = sa->w0.s.hw_ctx_off;
+
+   /* Context push size is up to err ctl in HW ctx */
+   sa->w0.s.ctx_push_size = sa->w0.s.hw_ctx_off + 1;
+
/* Entire context size in 128B units */
offset = sizeof(struct roc_ot_ipsec_outb_sa);
sa->w0.s.ctx_size = (PLT_ALIGN_CEIL(offset, ROC_CTX_UNIT_128B) /
diff --git a/drivers/common/cnxk/roc_ie_ot.h b/drivers/common/cnxk/roc_ie_ot.h
index 923656f4a5..c502c7983f 100644
--- a/drivers/common/cnxk/roc_ie_ot.h
+++ b/drivers/common/cnxk/roc_ie_ot.h
@@ -153,6 +153,13 @@ enum {
ROC_IE_OT_REAS_STS_L3P_ERR = 8,
ROC_IE_OT_REAS_STS_MAX = 9
 };
+
+enum {
+   ROC_IE_OT_ERR_CTL_MODE_NONE = 0,
+   ROC_IE_OT_ERR_CTL_MODE_CLEAR = 1,
+   ROC_IE_OT_ERR_CTL_MODE_RING = 2,
+};
+
 /* Context units in bytes */
 #define ROC_CTX_UNIT_8B  8
 #define ROC_CTX_UNIT_128B128
@@ -235,7 +242,15 @@ union roc_ot_ipsec_outb_iv {
 };
 
 struct roc_ot_ipsec_outb_ctx_update_reg {
-   uint64_t rsvd;
+   union {
+   struct {
+   uint64_t reserved_0_2 : 3;
+   uint64_t address : 57;
+   uint64_t mode : 4;
+   } s;
+   uint64_t u64;
+   } err_ctl;
+
uint64_t esn_val;
uint64_t hard_life;
uint64_t soft_life;
-- 
2.27.0
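For readers decoding the new err_ctl word: the bitfield above packs three fields into 64 bits. An equivalent mask-and-shift view is sketched below; note that C bitfield ordering is implementation-defined, and this assumes the LSB-first layout the union's u64 view implies.

```c
#include <assert.h>
#include <stdint.h>

/* err_ctl layout per the struct above:
 *   bits  0-2 : reserved
 *   bits  3-59: address (57 bits)
 *   bits 60-63: mode */
static uint64_t
err_ctl_mode(uint64_t w)
{
	return w >> 60;
}

static uint64_t
err_ctl_address(uint64_t w)
{
	return (w >> 3) & ((1ULL << 57) - 1);
}

static uint64_t
err_ctl_make(uint64_t mode, uint64_t address)
{
	return (mode << 60) | (address << 3);
}
```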



[PATCH 2/5] crypto/cnxk: add ESN and antireplay support

2022-01-31 Thread Tejasree Kondoj
Adding lookaside IPsec ESN and anti-replay support
through security session update.

Signed-off-by: Tejasree Kondoj 
---
 doc/guides/cryptodevs/cnxk.rst|  2 +
 doc/guides/rel_notes/release_22_03.rst|  1 +
 drivers/common/cnxk/cnxk_security.c   |  3 ++
 drivers/common/cnxk/cnxk_security_ar.h|  2 +-
 drivers/common/cnxk/roc_ie_on.h   |  2 +
 drivers/crypto/cnxk/cn10k_ipsec.c | 36 +++-
 drivers/crypto/cnxk/cn9k_ipsec.c  | 43 ++-
 drivers/crypto/cnxk/cn9k_ipsec_la_ops.h   | 16 ++-
 .../crypto/cnxk/cnxk_cryptodev_capabilities.c |  4 ++
 9 files changed, 103 insertions(+), 6 deletions(-)

diff --git a/doc/guides/cryptodevs/cnxk.rst b/doc/guides/cryptodevs/cnxk.rst
index 3c585175e3..46431dd755 100644
--- a/doc/guides/cryptodevs/cnxk.rst
+++ b/doc/guides/cryptodevs/cnxk.rst
@@ -279,6 +279,8 @@ CN10XX Features supported
 
 * IPv4
 * ESP
+* ESN
+* Anti-replay
 * Tunnel mode
 * Transport mode
 * UDP Encapsulation
diff --git a/doc/guides/rel_notes/release_22_03.rst 
b/doc/guides/rel_notes/release_22_03.rst
index 3bc0630c7c..a992fe85f5 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -64,6 +64,7 @@ New Features
   * Added NULL cipher support in lookaside protocol (IPsec) for CN9K & CN10K.
   * Added AES-XCBC support in lookaside protocol (IPsec) for CN9K & CN10K.
   * Added AES-CMAC support in CN9K & CN10K.
+  * Added ESN and anti-replay support in lookaside protocol (IPsec) for CN10K.
 
 * **Added an API to retrieve event port id of ethdev Rx adapter.**
 
diff --git a/drivers/common/cnxk/cnxk_security.c 
b/drivers/common/cnxk/cnxk_security.c
index 035d61180a..718983d892 100644
--- a/drivers/common/cnxk/cnxk_security.c
+++ b/drivers/common/cnxk/cnxk_security.c
@@ -492,6 +492,9 @@ cnxk_ot_ipsec_outb_sa_fill(struct roc_ot_ipsec_outb_sa *sa,
/* ESN */
sa->w0.s.esn_en = !!ipsec_xfrm->options.esn;
 
+   if (ipsec_xfrm->esn.value)
+   sa->ctx.esn_val = ipsec_xfrm->esn.value - 1;
+
if (ipsec_xfrm->options.udp_encap) {
sa->w10.s.udp_src_port = 4500;
sa->w10.s.udp_dst_port = 4500;
diff --git a/drivers/common/cnxk/cnxk_security_ar.h 
b/drivers/common/cnxk/cnxk_security_ar.h
index 3ec4c296c2..deb38db0d0 100644
--- a/drivers/common/cnxk/cnxk_security_ar.h
+++ b/drivers/common/cnxk/cnxk_security_ar.h
@@ -13,7 +13,7 @@
 
 /* u64 array size to fit anti replay window bits */
 #define AR_WIN_ARR_SZ  
\
-   (PLT_ALIGN_CEIL(CNXK_ON_AR_WIN_SIZE_MAX, BITS_PER_LONG_LONG) /\
+   (PLT_ALIGN_CEIL(CNXK_ON_AR_WIN_SIZE_MAX + 1, BITS_PER_LONG_LONG) / \
 BITS_PER_LONG_LONG)
 
 #define WORD_SHIFT 6
diff --git a/drivers/common/cnxk/roc_ie_on.h b/drivers/common/cnxk/roc_ie_on.h
index aaad87243f..638b02062d 100644
--- a/drivers/common/cnxk/roc_ie_on.h
+++ b/drivers/common/cnxk/roc_ie_on.h
@@ -18,6 +18,8 @@ enum roc_ie_on_ucc_ipsec {
ROC_IE_ON_UCC_SUCCESS = 0,
ROC_IE_ON_AUTH_UNSUPPORTED = 0xB0,
ROC_IE_ON_ENCRYPT_UNSUPPORTED = 0xB1,
+   /* Software defined completion code for anti-replay failed packets */
+   ROC_IE_ON_SWCC_ANTI_REPLAY = 0xE7,
 };
 
 /* Helper macros */
diff --git a/drivers/crypto/cnxk/cn10k_ipsec.c 
b/drivers/crypto/cnxk/cn10k_ipsec.c
index 7f4ccaff99..c95c57a84d 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec.c
+++ b/drivers/crypto/cnxk/cn10k_ipsec.c
@@ -239,7 +239,7 @@ cn10k_ipsec_inb_sa_create(struct roc_cpt *roc_cpt, struct 
roc_cpt_lf *lf,
}
 
/* Trigger CTX flush so that data is written back to DRAM */
-   roc_cpt_lf_ctx_flush(lf, in_sa, false);
+   roc_cpt_lf_ctx_flush(lf, in_sa, true);
 
plt_atomic_thread_fence(__ATOMIC_SEQ_CST);
 
@@ -410,6 +410,39 @@ cn10k_sec_session_stats_get(void *device, struct 
rte_security_session *sess,
return 0;
 }
 
+static int
+cn10k_sec_session_update(void *device, struct rte_security_session *sess,
+struct rte_security_session_conf *conf)
+{
+   struct rte_cryptodev *crypto_dev = device;
+   struct cn10k_sec_session *priv;
+   struct roc_cpt *roc_cpt;
+   struct cnxk_cpt_qp *qp;
+   struct cnxk_cpt_vf *vf;
+   int ret;
+
+   priv = get_sec_session_private_data(sess);
+   if (priv == NULL)
+   return -EINVAL;
+
+   qp = crypto_dev->data->queue_pairs[0];
+   if (qp == NULL)
+   return -EINVAL;
+
+   if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
+   return -ENOTSUP;
+
+   ret = cnxk_ipsec_xform_verify(&conf->ipsec, conf->crypto_xform);
+   if (ret)
+   return ret;
+
+   vf = crypto_dev->data->dev_private;
+   roc_cpt = &vf->cpt;
+
+   return cn10k_ipsec_outb_sa_create(roc_cpt, &qp->lf, &conf->ipsec,
+ conf->crypto_xform, sess);
+}

[PATCH 3/5] common/cnxk: add ROC cache line size constant

2022-01-31 Thread Tejasree Kondoj
From: Anoob Joseph 

Add ROC cache line size constant.

Signed-off-by: Anoob Joseph 
---
 drivers/common/cnxk/roc_constants.h | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/common/cnxk/roc_constants.h 
b/drivers/common/cnxk/roc_constants.h
index 5f78823642..38e2087a26 100644
--- a/drivers/common/cnxk/roc_constants.h
+++ b/drivers/common/cnxk/roc_constants.h
@@ -4,8 +4,9 @@
 #ifndef _ROC_CONSTANTS_H_
 #define _ROC_CONSTANTS_H_
 
-/* Alignment */
-#define ROC_ALIGN 128
+/* ROC Cache */
+#define ROC_CACHE_LINE_SZ 128
+#define ROC_ALIGNROC_CACHE_LINE_SZ
 
 /* LMTST constants */
 /* [CN10K, .) */
-- 
2.27.0



[PATCH 4/5] crypto/cnxk: use unique cache line per inst

2022-01-31 Thread Tejasree Kondoj
From: Anoob Joseph 

CPT inflight request is used to track a request that is enqueued to
cryptodev. Having more than one inst use the same cache line can result
in serialization of CPT result memory writes, causing perf degradation.
Align the inflight request to the ROC cache line to ensure only one result is
written per cache line.

Signed-off-by: Anoob Joseph 
---
 drivers/crypto/cnxk/cnxk_cryptodev_ops.h | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h 
b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
index e521f07585..0656ba9675 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
@@ -40,7 +40,9 @@ struct cpt_inflight_req {
void *mdata;
uint8_t op_flags;
void *qp;
-} __rte_aligned(16);
+} __rte_aligned(ROC_ALIGN);
+
+PLT_STATIC_ASSERT(sizeof(struct cpt_inflight_req) == ROC_CACHE_LINE_SZ);
 
 struct pending_queue {
/** Array of pending requests */
-- 
2.27.0
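The effect of the change can be checked in isolation: aligning a small struct to the cache-line size also pads its sizeof up to one full line, so consecutive array elements never share a line. A standalone sketch using the generic GCC/Clang attribute rather than the DPDK macros:

```c
#include <assert.h>
#include <stddef.h>

#define CACHE_LINE_SZ 128	/* ROC cache line size, per patch 3/5 */

/* Aligning the struct to the cache line also pads sizeof() to a full
 * line, so each in-flight request owns its own line and concurrent CPT
 * result writes are not serialized on a shared line. */
struct inflight_req_model {
	void *mdata;
	unsigned char op_flags;
	void *qp;
} __attribute__((aligned(CACHE_LINE_SZ)));

_Static_assert(sizeof(struct inflight_req_model) == CACHE_LINE_SZ,
	       "one request per cache line");
```

The PLT_STATIC_ASSERT in the patch plays the same role as the _Static_assert here: it catches any future field addition that would push the struct past one line.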



[PATCH 5/5] crypto/cnxk: fix updation of number of descriptors

2022-01-31 Thread Tejasree Kondoj
From: Anoob Joseph 

The pending queue also needs to be adjusted while updating the number of
descriptors.

Fixes: a455fd869cd7 ("common/cnxk: align CPT queue depth to power of 2")
Cc: ano...@marvell.com

Signed-off-by: Anoob Joseph 
---
 drivers/common/cnxk/roc_cpt.c| 3 ---
 drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 8 ++--
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/drivers/common/cnxk/roc_cpt.c b/drivers/common/cnxk/roc_cpt.c
index 1bc7a29ef9..4e24850366 100644
--- a/drivers/common/cnxk/roc_cpt.c
+++ b/drivers/common/cnxk/roc_cpt.c
@@ -568,9 +568,6 @@ cpt_lf_init(struct roc_cpt_lf *lf)
if (lf->nb_desc == 0 || lf->nb_desc > CPT_LF_MAX_NB_DESC)
lf->nb_desc = CPT_LF_DEFAULT_NB_DESC;
 
-   /* Update nb_desc to next power of 2 to aid in pending queue checks */
-   lf->nb_desc = plt_align32pow2(lf->nb_desc);
-
/* Allocate memory for instruction queue for CPT LF. */
iq_mem = plt_zmalloc(cpt_lf_iq_mem_calc(lf->nb_desc), ROC_ALIGN);
if (iq_mem == NULL)
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c 
b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
index 67a2d9b08e..a5fb68da02 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
@@ -361,6 +361,7 @@ cnxk_cpt_queue_pair_setup(struct rte_cryptodev *dev, 
uint16_t qp_id,
struct roc_cpt *roc_cpt = &vf->cpt;
struct rte_pci_device *pci_dev;
struct cnxk_cpt_qp *qp;
+   uint32_t nb_desc;
int ret;
 
if (dev->data->queue_pairs[qp_id] != NULL)
@@ -373,14 +374,17 @@ cnxk_cpt_queue_pair_setup(struct rte_cryptodev *dev, 
uint16_t qp_id,
return -EIO;
}
 
-   qp = cnxk_cpt_qp_create(dev, qp_id, conf->nb_descriptors);
+   /* Update nb_desc to next power of 2 to aid in pending queue checks */
+   nb_desc = plt_align32pow2(conf->nb_descriptors);
+
+   qp = cnxk_cpt_qp_create(dev, qp_id, nb_desc);
if (qp == NULL) {
plt_err("Could not create queue pair %d", qp_id);
return -ENOMEM;
}
 
qp->lf.lf_id = qp_id;
-   qp->lf.nb_desc = conf->nb_descriptors;
+   qp->lf.nb_desc = nb_desc;
 
ret = roc_cpt_lf_init(roc_cpt, &qp->lf);
if (ret < 0) {
-- 
2.27.0
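Rounding nb_desc up to a power of two lets the pending-queue wrap checks use a mask instead of a modulo. A standalone equivalent of the alignment helper is below; it is assumed to match plt_align32pow2's behavior rather than copied from it.

```c
#include <assert.h>
#include <stdint.h>

/* Round v up to the next power of 2 (valid for v >= 1). The shifted
 * ORs smear the highest set bit of v-1 down into all lower positions,
 * so adding 1 lands exactly on the next power of two. */
static uint32_t
align32pow2(uint32_t v)
{
	v--;
	v |= v >> 1;
	v |= v >> 2;
	v |= v >> 4;
	v |= v >> 8;
	v |= v >> 16;
	return v + 1;
}
```

With a power-of-two depth, `idx & (nb_desc - 1)` replaces `idx % nb_desc` in the hot path, which is why both the LF and the pending queue must agree on the rounded value.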



[PATCH 0/2] Adding new cases to lookaside IPsec tests

2022-01-31 Thread Tejasree Kondoj
Adding new test cases to lookaside IPsec tests.
* Set and copy DSCP cases
* ESN and antireplay support

Anoob Joseph (1):
  test/crypto: add copy and set DSCP cases

Tejasree Kondoj (1):
  test/cryptodev: add ESN and Antireplay tests

 app/test/test_cryptodev.c | 352 +-
 app/test/test_cryptodev_security_ipsec.c  | 173 +++--
 app/test/test_cryptodev_security_ipsec.h  |  16 +-
 ...st_cryptodev_security_ipsec_test_vectors.h |   1 +
 doc/guides/rel_notes/release_22_03.rst|   5 +
 5 files changed, 518 insertions(+), 29 deletions(-)

-- 
2.27.0



[PATCH 1/2] test/crypto: add copy and set DSCP cases

2022-01-31 Thread Tejasree Kondoj
From: Anoob Joseph 

Add test cases to verify copy and set DSCP with IPv4 and IPv6 tunnels.

Signed-off-by: Anoob Joseph 
---
 app/test/test_cryptodev.c| 166 +++
 app/test/test_cryptodev_security_ipsec.c | 150 
 app/test/test_cryptodev_security_ipsec.h |  10 ++
 3 files changed, 301 insertions(+), 25 deletions(-)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index ec4a61bdf9..47ad991c31 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -9176,7 +9176,21 @@ test_ipsec_proto_process(const struct ipsec_test_data 
td[],
if (flags->df == TEST_IPSEC_SET_DF_1_INNER_0)
ipsec_xform.tunnel.ipv4.df = 1;
 
+   if (flags->dscp == TEST_IPSEC_SET_DSCP_0_INNER_1)
+   ipsec_xform.tunnel.ipv4.dscp = 0;
+
+   if (flags->dscp == TEST_IPSEC_SET_DSCP_1_INNER_0)
+   ipsec_xform.tunnel.ipv4.dscp =
+   TEST_IPSEC_DSCP_VAL;
+
} else {
+   if (flags->dscp == TEST_IPSEC_SET_DSCP_0_INNER_1)
+   ipsec_xform.tunnel.ipv6.dscp = 0;
+
+   if (flags->dscp == TEST_IPSEC_SET_DSCP_1_INNER_0)
+   ipsec_xform.tunnel.ipv6.dscp =
+   TEST_IPSEC_DSCP_VAL;
+
memcpy(&ipsec_xform.tunnel.ipv6.src_addr, &v6_src,
   sizeof(v6_src));
memcpy(&ipsec_xform.tunnel.ipv6.dst_addr, &v6_dst,
@@ -9761,6 +9775,126 @@ test_ipsec_proto_set_df_1_inner_0(const void *data 
__rte_unused)
return test_ipsec_proto_all(&flags);
 }
 
+static int
+test_ipsec_proto_ipv4_copy_dscp_inner_0(const void *data __rte_unused)
+{
+   struct ipsec_test_flags flags;
+
+   memset(&flags, 0, sizeof(flags));
+
+   flags.dscp = TEST_IPSEC_COPY_DSCP_INNER_0;
+
+   return test_ipsec_proto_all(&flags);
+}
+
+static int
+test_ipsec_proto_ipv4_copy_dscp_inner_1(const void *data __rte_unused)
+{
+   struct ipsec_test_flags flags;
+
+   memset(&flags, 0, sizeof(flags));
+
+   flags.dscp = TEST_IPSEC_COPY_DSCP_INNER_1;
+
+   return test_ipsec_proto_all(&flags);
+}
+
+static int
+test_ipsec_proto_ipv4_set_dscp_0_inner_1(const void *data __rte_unused)
+{
+   struct ipsec_test_flags flags;
+
+   if (gbl_driver_id == rte_cryptodev_driver_id_get(
+   RTE_STR(CRYPTODEV_NAME_CN9K_PMD)))
+   return TEST_SKIPPED;
+
+   memset(&flags, 0, sizeof(flags));
+
+   flags.dscp = TEST_IPSEC_SET_DSCP_0_INNER_1;
+
+   return test_ipsec_proto_all(&flags);
+}
+
+static int
+test_ipsec_proto_ipv4_set_dscp_1_inner_0(const void *data __rte_unused)
+{
+   struct ipsec_test_flags flags;
+
+   if (gbl_driver_id == rte_cryptodev_driver_id_get(
+   RTE_STR(CRYPTODEV_NAME_CN9K_PMD)))
+   return TEST_SKIPPED;
+
+   memset(&flags, 0, sizeof(flags));
+
+   flags.dscp = TEST_IPSEC_SET_DSCP_1_INNER_0;
+
+   return test_ipsec_proto_all(&flags);
+}
+
+static int
+test_ipsec_proto_ipv6_copy_dscp_inner_0(const void *data __rte_unused)
+{
+   struct ipsec_test_flags flags;
+
+   memset(&flags, 0, sizeof(flags));
+
+   flags.ipv6 = true;
+   flags.tunnel_ipv6 = true;
+   flags.dscp = TEST_IPSEC_COPY_DSCP_INNER_0;
+
+   return test_ipsec_proto_all(&flags);
+}
+
+static int
+test_ipsec_proto_ipv6_copy_dscp_inner_1(const void *data __rte_unused)
+{
+   struct ipsec_test_flags flags;
+
+   memset(&flags, 0, sizeof(flags));
+
+   flags.ipv6 = true;
+   flags.tunnel_ipv6 = true;
+   flags.dscp = TEST_IPSEC_COPY_DSCP_INNER_1;
+
+   return test_ipsec_proto_all(&flags);
+}
+
+static int
+test_ipsec_proto_ipv6_set_dscp_0_inner_1(const void *data __rte_unused)
+{
+   struct ipsec_test_flags flags;
+
+   if (gbl_driver_id == rte_cryptodev_driver_id_get(
+   RTE_STR(CRYPTODEV_NAME_CN9K_PMD)))
+   return TEST_SKIPPED;
+
+   memset(&flags, 0, sizeof(flags));
+
+   flags.ipv6 = true;
+   flags.tunnel_ipv6 = true;
+   flags.dscp = TEST_IPSEC_SET_DSCP_0_INNER_1;
+
+   return test_ipsec_proto_all(&flags);
+}
+
+static int
+test_ipsec_proto_ipv6_set_dscp_1_inner_0(const void *data __rte_unused)
+{
+   struct ipsec_test_flags flags;
+
+   if (gbl_driver_id == rte_cryptodev_driver_id_get(
+   RTE_STR(CRYPTODEV_NAME_CN9K_PMD)))
+   return TEST_SKIPPED;
+
+   memset(&flags, 0, sizeof(flags));
+
+   flags.ipv6 = true;
+   flags.tunnel_ipv6 = true;
+   flags.dscp = TEST_IPSEC_SET_DSCP_1_INNER_0;
+
+   return test_ipsec_proto_all(&flags);
+}
+
 static int
 test_PDCP_PROTO_all(void)
 {
@@ -14799,6 +14933,38 @@ static struct unit_test_suite ipse

[PATCH 2/2] test/cryptodev: add ESN and Antireplay tests

2022-01-31 Thread Tejasree Kondoj
Adding test cases for IPsec ESN and Antireplay.

Signed-off-by: Tejasree Kondoj 
---
 app/test/test_cryptodev.c | 186 +-
 app/test/test_cryptodev_security_ipsec.c  |  23 ++-
 app/test/test_cryptodev_security_ipsec.h  |   6 +-
 ...st_cryptodev_security_ipsec_test_vectors.h |   1 +
 doc/guides/rel_notes/release_22_03.rst|   5 +
 5 files changed, 217 insertions(+), 4 deletions(-)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 47ad991c31..3ee7bc8e0d 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -9292,6 +9292,18 @@ test_ipsec_proto_process(const struct ipsec_test_data 
td[],
return TEST_SKIPPED;
 
for (i = 0; i < nb_td; i++) {
+   if (flags->antireplay &&
+   (dir == RTE_SECURITY_IPSEC_SA_DIR_EGRESS)) {
+   sess_conf.ipsec.esn.value = td[i].ipsec_xform.esn.value;
+   ret = rte_security_session_update(ctx,
+   ut_params->sec_session, &sess_conf);
+   if (ret) {
+   printf("Could not update sequence number in "
+  "session\n");
+   return TEST_SKIPPED;
+   }
+   }
+
/* Setup source mbuf payload */
ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
memset(rte_pktmbuf_mtod(ut_params->ibuf, uint8_t *), 0,
@@ -9344,7 +9356,8 @@ test_ipsec_proto_process(const struct ipsec_test_data 
td[],
/* Process crypto operation */
process_crypto_request(dev_id, ut_params->op);
 
-   ret = test_ipsec_status_check(ut_params->op, flags, dir, i + 1);
+   ret = test_ipsec_status_check(&td[i], ut_params->op, flags, dir,
+ i + 1);
if (ret != TEST_SUCCESS)
goto crypto_op_free;
 
@@ -9895,6 +9908,150 @@ test_ipsec_proto_ipv6_set_dscp_1_inner_0(const void 
*data __rte_unused)
return test_ipsec_proto_all(&flags);
 }
 
+static int
+test_ipsec_pkt_replay(const void *test_data, const uint64_t esn[],
+ bool replayed_pkt[], uint32_t nb_pkts, bool esn_en,
+ uint64_t winsz)
+{
+   struct ipsec_test_data td_outb[IPSEC_TEST_PACKETS_MAX];
+   struct ipsec_test_data td_inb[IPSEC_TEST_PACKETS_MAX];
+   struct ipsec_test_flags flags;
+   uint32_t i = 0, ret = 0;
+
+   memset(&flags, 0, sizeof(flags));
+   flags.antireplay = true;
+
+   for (i = 0; i < nb_pkts; i++) {
+   memcpy(&td_outb[i], test_data, sizeof(td_outb[i]));
+   td_outb[i].ipsec_xform.options.iv_gen_disable = 1;
+   td_outb[i].ipsec_xform.replay_win_sz = winsz;
+   td_outb[i].ipsec_xform.options.esn = esn_en;
+   }
+
+   for (i = 0; i < nb_pkts; i++)
+   td_outb[i].ipsec_xform.esn.value = esn[i];
+
+   ret = test_ipsec_proto_process(td_outb, td_inb, nb_pkts, true,
+  &flags);
+   if (ret != TEST_SUCCESS)
+   return ret;
+
+   test_ipsec_td_update(td_inb, td_outb, nb_pkts, &flags);
+
+   for (i = 0; i < nb_pkts; i++) {
+   td_inb[i].ipsec_xform.options.esn = esn_en;
+   /* Set antireplay flag for packets to be dropped */
+   td_inb[i].ar_packet = replayed_pkt[i];
+   }
+
+   ret = test_ipsec_proto_process(td_inb, NULL, nb_pkts, true,
+  &flags);
+
+   return ret;
+}
+
+static int
+test_ipsec_proto_pkt_antireplay(const void *test_data, uint64_t winsz)
+{
+
+   uint32_t nb_pkts = 5;
+   bool replayed_pkt[5];
+   uint64_t esn[5];
+
+   /* 1. Advance the TOP of the window to WS * 2 */
+   esn[0] = winsz * 2;
+   /* 2. Test sequence number within the new window(WS + 1) */
+   esn[1] = winsz + 1;
+   /* 3. Test sequence number less than the window BOTTOM */
+   esn[2] = winsz;
+   /* 4. Test sequence number in the middle of the window */
+   esn[3] = winsz + (winsz / 2);
+   /* 5. Test replay of the packet in the middle of the window */
+   esn[4] = winsz + (winsz / 2);
+
+   replayed_pkt[0] = false;
+   replayed_pkt[1] = false;
+   replayed_pkt[2] = true;
+   replayed_pkt[3] = false;
+   replayed_pkt[4] = true;
+
+   return test_ipsec_pkt_replay(test_data, esn, replayed_pkt, nb_pkts,
+false, winsz);
+}
+
+static int
+test_ipsec_proto_pkt_antireplay1024(const void *test_data)
+{
+   return test_ipsec_proto_pkt_antireplay(test_data, 1024);
+}
+
+static int
+test_ipsec_proto_pkt_antireplay2048(const void *test_data)
+{
+   return test_ipsec_proto_pkt_antireplay(test_data, 2048);
+}
+
+static int
+test_ipsec_proto_pkt_antireplay4096(const 
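The five-step window walk in the test above exercises a standard sliding-window anti-replay check. A minimal model of that check follows, with a deliberately small window; it is illustrative only and not the PMD implementation.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define WIN_SZ 8	/* small illustrative window */

struct ar_window {
	uint64_t top;		/* highest sequence number accepted */
	bool seen[WIN_SZ];	/* seen[s % WIN_SZ] for in-window s */
};

/* Returns true if the packet is accepted, false if it is below the
 * window bottom or a replay within the window. */
static bool
ar_check_and_update(struct ar_window *w, uint64_t seq)
{
	if (seq > w->top) {
		/* Advance the window, clearing slots that slide out. */
		uint64_t adv = seq - w->top, i;

		for (i = 1; i <= adv && i <= WIN_SZ; i++)
			w->seen[(w->top + i) % WIN_SZ] = false;
		w->top = seq;
		w->seen[seq % WIN_SZ] = true;
		return true;
	}
	if (w->top >= WIN_SZ && seq <= w->top - WIN_SZ)
		return false;	/* below window bottom */
	if (w->seen[seq % WIN_SZ])
		return false;	/* replay within the window */
	w->seen[seq % WIN_SZ] = true;
	return true;
}
```

With WIN_SZ = 8, the same five-step sequence as the test (2*WS, WS+1, WS, WS+WS/2, WS+WS/2) gives accept, accept, drop, accept, drop, matching the replayed_pkt[] expectations above.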

Re: [PATCH v4 1/3] net/enic: add support for eCPRI matching

2022-01-31 Thread Ferruh Yigit

On 1/28/2022 5:58 PM, John Daley wrote:

eCPRI message can be over Ethernet layer (.1Q supported also) or over
UDP layer. Message header formats are the same in these two variants.

Only up though the first packet header in the PDU can be matched.
RSS on the eCPRI payload is not supported.

Signed-off-by: John Daley 
Reviewed-by: Hyong Youb Kim 


Series applied to dpdk-next-net/main, thanks.


RE: [EXT] Re: [dpdk-dev] [PATCH v2 2/2] app/testpmd: add queue based pfc CLI options

2022-01-31 Thread Sunil Kumar Kori
-Original Message-
From: Ajit Khaparde  
Sent: Thursday, January 27, 2022 10:27 PM
To: Ferruh Yigit 
Cc: Sunil Kumar Kori ; Jerin Jacob Kollanukkaran 
; dev@dpdk.org; Xiaoyun Li ; Aman 
Singh ; Yuying Zhang ; 
tho...@monjalon.net; abo...@pensando.io; andrew.rybche...@oktetlabs.ru; 
beilei.x...@intel.com; bruce.richard...@intel.com; ch...@att.com; 
chenbo@intel.com; ciara.lof...@intel.com; Devendra Singh Rawat 
; ed.cz...@atomicrules.com; evge...@amazon.com; 
gr...@u256.net; g.si...@nxp.com; zhouguoy...@huawei.com; haiyue.w...@intel.com; 
Harman Kalra ; heinrich.k...@corigine.com; 
hemant.agra...@nxp.com; hyon...@cisco.com; igo...@amazon.com; Igor Russkikh 
; jgraj...@cisco.com; jasvinder.si...@intel.com; 
jianw...@trustnetic.com; jiawe...@trustnetic.com; jingjing...@intel.com; 
johnd...@cisco.com; john.mil...@atomicrules.com; linvi...@tuxdriver.com; 
keith.wi...@intel.com; Kiran Kumar Kokkilagadda ; 
ouli...@huawei.com; Liron Himi ; lon...@microsoft.com; 
m...@semihalf.com; spin...@cesnet.cz; ma...@nvidia.com; 
matt.pet...@windriver.com; maxime.coque...@redhat.com; m...@semihalf.com; 
humi...@huawei.com; Pradeep Kumar Nalla ; Nithin Kumar 
Dabilpuram ; qiming.y...@intel.com; 
qi.z.zh...@intel.com; Radha Chintakuntla ; 
rahul.lakkire...@chelsio.com; Rasesh Mody ; 
rosen...@intel.com; sachin.sax...@oss.nxp.com; Satha Koteswara Rao Kottidi 
; Shahed Shaikh ; 
shaib...@amazon.com; shepard.sie...@atomicrules.com; asoma...@amd.com; 
somnath.ko...@broadcom.com; sthem...@microsoft.com; 
steven.webs...@windriver.com; mtetsu...@gmail.com; Veerasenareddy Burru 
; viachesl...@nvidia.com; xiao.w.w...@intel.com; 
cloud.wangxiao...@huawei.com; yisen.zhu...@huawei.com; yongw...@vmware.com; 
xuanziya...@huawei.com
Subject: Re: [EXT] Re: [dpdk-dev] [PATCH v2 2/2] app/testpmd: add queue based 
pfc CLI options

On Thu, Jan 27, 2022 at 2:40 AM Ferruh Yigit  wrote:
>
> On 1/27/2022 7:13 AM, Sunil Kumar Kori wrote:
> >
> >> -Original Message-
> >> From: Ferruh Yigit 
> >> Sent: Tuesday, January 25, 2022 11:07 PM
> >> To: Jerin Jacob Kollanukkaran ; dev@dpdk.org; 
> >> Xiaoyun Li ; Aman Singh 
> >> ; Yuying Zhang 
> >> Cc: tho...@monjalon.net; ajit.khapa...@broadcom.com; 
> >> abo...@pensando.io; andrew.rybche...@oktetlabs.ru; 
> >> beilei.x...@intel.com; bruce.richard...@intel.com; ch...@att.com; 
> >> chenbo@intel.com; ciara.lof...@intel.com; Devendra Singh Rawat 
> >> ; ed.cz...@atomicrules.com; 
> >> evge...@amazon.com; gr...@u256.net; g.si...@nxp.com; 
> >> zhouguoy...@huawei.com; haiyue.w...@intel.com; Harman Kalra 
> >> ; heinrich.k...@corigine.com; 
> >> hemant.agra...@nxp.com; hyon...@cisco.com; igo...@amazon.com; Igor 
> >> Russkikh ; jgraj...@cisco.com; 
> >> jasvinder.si...@intel.com; jianw...@trustnetic.com; 
> >> jiawe...@trustnetic.com; jingjing...@intel.com; johnd...@cisco.com; 
> >> john.mil...@atomicrules.com; linvi...@tuxdriver.com; 
> >> keith.wi...@intel.com; Kiran Kumar Kokkilagadda 
> >> ; ouli...@huawei.com; Liron Himi 
> >> ; lon...@microsoft.com; m...@semihalf.com; 
> >> spin...@cesnet.cz; ma...@nvidia.com; matt.pet...@windriver.com; 
> >> maxime.coque...@redhat.com; m...@semihalf.com; humi...@huawei.com; 
> >> Pradeep Kumar Nalla ; Nithin Kumar Dabilpuram 
> >> ; qiming.y...@intel.com; 
> >> qi.z.zh...@intel.com; Radha Chintakuntla ; 
> >> rahul.lakkire...@chelsio.com; Rasesh Mody ; 
> >> rosen...@intel.com; sachin.sax...@oss.nxp.com; Satha Koteswara Rao 
> >> Kottidi ; Shahed Shaikh 
> >> ; shaib...@amazon.com; 
> >> shepard.sie...@atomicrules.com; asoma...@amd.com; 
> >> somnath.ko...@broadcom.com; sthem...@microsoft.com; 
> >> steven.webs...@windriver.com; Sunil Kumar Kori ; 
> >> mtetsu...@gmail.com; Veerasenareddy Burru ; 
> >> viachesl...@nvidia.com; xiao.w.w...@intel.com; 
> >> cloud.wangxiao...@huawei.com; yisen.zhu...@huawei.com; 
> >> yongw...@vmware.com; xuanziya...@huawei.com
> >> Subject: [EXT] Re: [dpdk-dev] [PATCH v2 2/2] app/testpmd: add queue 
> >> based pfc CLI options
> >>
> >> External Email
> >>
> >> --
> >> On 1/13/2022 10:27 AM, jer...@marvell.com wrote:
> >>> From: Sunil Kumar Kori 
> >>>
> >>> Patch adds command line options to configure queue based priority 
> >>> flow control.
> >>>
> >>> - Syntax command is given as below:
> >>>
> >>> set pfc_queue_ctrl  rx\
> >>> tx
> >>>
> >>
> >> Isn't the order of the paramters odd, it is mixing Rx/Tx config, 
> >> what about ordering Rx and Tx paramters?
> >>
> > It's been kept like this to portray config for rx_pause and tx_pause 
> > separately i.e. mode and corresponding config.
> >
>
> What do you mean 'separately'? You need to provide all arguments anyway, 
> right?
>
> I was thinking first have the Rx arguments, later Tx, like:
>
> rxtx
I think this grouping is better.

>
> Am I missing something, is there a benefit of what you did in this patch?

Mentioned syntax takes input as per below config structure:
struct rte_eth

Re: [PATCH 1/6] net/hns3: fix fail to rollback the max packet size in PF

2022-01-31 Thread Ferruh Yigit

On 1/28/2022 2:07 AM, Min Hu (Connor) wrote:

From: Huisong Li 

The HNS3 PF driver uses hns->pf.mps to restore the MTU when a reset
occurs. If the user fails to configure the MTU, the MPS of the PF may
not be restored to its original value.

Fixes: 25fb790f7868 ("net/hns3: fix HW buffer size on MTU update")
Fixes: 1f5ca0b460cd ("net/hns3: support some device operations")
Fixes: d51867db65c1 ("net/hns3: add initialization")
Cc: sta...@dpdk.org

Signed-off-by: Huisong Li 
Signed-off-by: Min Hu (Connor) 


Series applied to dpdk-next-net/main, thanks.



Re: [PATCH 0/2] bugfix for bonding

2022-01-31 Thread Ferruh Yigit

On 1/28/2022 2:25 AM, Min Hu (Connor) wrote:

Two bugfixed for bonding.

Min Hu (Connor) (2):
   net/bonding: fix promiscuous and allmulticast state
   net/bonding: fix reference count on mbufs



Series applied to dpdk-next-net/main, thanks.


[PATCH v3 1/2] ethdev: define a function to get eth dev structure

2022-01-31 Thread Kumara Parameshwaran
From: Kumara Parameshwaran 

The PMDs would need a function to access the rte_eth_devices
global array

Cc: sta...@dpdk.org

Signed-off-by: Kumara Parameshwaran 
---
 lib/ethdev/ethdev_driver.h | 18 ++
 lib/ethdev/rte_ethdev.c| 11 +++
 lib/ethdev/version.map |  1 +
 3 files changed, 30 insertions(+)

diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index d95605a355..7d27781f7d 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -1629,6 +1629,24 @@ rte_eth_hairpin_queue_peer_bind(uint16_t cur_port, 
uint16_t cur_queue,
struct rte_hairpin_peer_info *peer_info,
uint32_t direction);
 
+/**
+ * @internal
+ * Get rte_eth_dev from device name. The device name should be specified
+ * as below:
+ * - PCIe address (Domain:Bus:Device.Function), for example- :2:00.0
+ * - SoC device name, for example- fsl-gmac0
+ * - vdev dpdk name, for example- net_[pcap0|null0|tap0]
+ *
+ * @param name
+ *  pci address or name of the device
+ * @return
+ *   - rte_eth_dev if successful
+ *   - NULL on failure
+ */
+__rte_internal
+struct rte_eth_dev*
+rte_eth_dev_get_by_name(const char *name);
+
 /**
  * @internal
  * Reset the current queue state and configuration to disconnect (unbind) it
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index a1d475a292..968475d107 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -894,6 +894,17 @@ rte_eth_dev_get_port_by_name(const char *name, uint16_t 
*port_id)
return -ENODEV;
 }
 
+struct rte_eth_dev *
+rte_eth_dev_get_by_name(const char *name)
+{
+   uint16_t pid;
+
+   if (rte_eth_dev_get_port_by_name(name, &pid))
+   return NULL;
+
+   return &rte_eth_devices[pid];
+}
+
 static int
 eth_err(uint16_t port_id, int ret)
 {
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index c2fb0669a4..1f7359c846 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -267,6 +267,7 @@ INTERNAL {
rte_eth_dev_callback_process;
rte_eth_dev_create;
rte_eth_dev_destroy;
+   rte_eth_dev_get_by_name;
rte_eth_dev_is_rx_hairpin_queue;
rte_eth_dev_is_tx_hairpin_queue;
rte_eth_dev_probing_finish;
-- 
2.17.1



[PATCH v3 2/2] net/tap: fix to populate fds in secondary process

2022-01-31 Thread Kumara Parameshwaran
From: Kumara Parameshwaran 

When a tap device is hotplugged to the primary process, which in turn
adds the device to all secondary processes, each secondary process does
a tap_mp_attach_queues. But the fds are not populated in the primary
during probe; they are populated during queue setup. Add a fix to sync
the queues during rte_eth_dev_start.

Fixes: 4852aa8f6e21 ("drivers/net: enable hotplug on secondary process")
Cc: sta...@dpdk.org

Signed-off-by: Kumara Parameshwaran 
---
v3:
* Retain tap_sync_queues to keep the attach path for secondary processes
* Fix coding convention for a function definition
* Renamed rte_get_eth_dev_by_name to rte_eth_dev_get_by_name and sorted
  it in version.map
* Removed unintended blank line

 drivers/net/tap/rte_eth_tap.c | 80 +++
 1 file changed, 80 insertions(+)

diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index f1b48cae82..d13baadbe7 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -67,6 +67,7 @@
 
 /* IPC key for queue fds sync */
 #define TAP_MP_KEY "tap_mp_sync_queues"
+#define TAP_MP_REQ_START_RXTX "tap_mp_req_start_rxtx"
 
 #define TAP_IOV_DEFAULT_MAX 1024
 
@@ -880,11 +881,49 @@ tap_link_set_up(struct rte_eth_dev *dev)
return tap_ioctl(pmd, SIOCSIFFLAGS, &ifr, 1, LOCAL_AND_REMOTE);
 }
 
+static int
+tap_mp_req_on_rxtx(struct rte_eth_dev *dev)
+{
+   struct rte_mp_msg msg;
+   struct ipc_queues *request_param = (struct ipc_queues *)msg.param;
+   int err;
+   int fd_iterator = 0;
+   struct pmd_process_private *process_private = dev->process_private;
+   int i;
+
+   memset(&msg, 0, sizeof(msg));
+   strlcpy(msg.name, TAP_MP_REQ_START_RXTX, sizeof(msg.name));
+   strlcpy(request_param->port_name, dev->data->name, 
sizeof(request_param->port_name));
+   msg.len_param = sizeof(*request_param);
+   for (i = 0; i < dev->data->nb_tx_queues; i++) {
+   msg.fds[fd_iterator++] = process_private->txq_fds[i];
+   msg.num_fds++;
+   request_param->txq_count++;
+   }
+   for (i = 0; i < dev->data->nb_rx_queues; i++) {
+   msg.fds[fd_iterator++] = process_private->rxq_fds[i];
+   msg.num_fds++;
+   request_param->rxq_count++;
+   }
+
+   err = rte_mp_sendmsg(&msg);
+   if (err < 0) {
+   TAP_LOG(ERR, "Failed to send start req to secondary %d",
+   rte_errno);
+   return -1;
+   }
+
+   return 0;
+}
+
 static int
 tap_dev_start(struct rte_eth_dev *dev)
 {
int err, i;
 
+   if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+   tap_mp_req_on_rxtx(dev);
+
err = tap_intr_handle_set(dev, 1);
if (err)
return err;
@@ -901,6 +940,34 @@ tap_dev_start(struct rte_eth_dev *dev)
return err;
 }
 
+static int
+tap_mp_req_start_rxtx(const struct rte_mp_msg *request, __rte_unused const 
void *peer)
+{
+   struct rte_eth_dev *dev;
+   const struct ipc_queues *request_param =
+   (const struct ipc_queues *)request->param;
+   int fd_iterator;
+   int queue;
+   struct pmd_process_private *process_private;
+
+   dev = rte_eth_dev_get_by_name(request_param->port_name);
+   if (!dev) {
+   TAP_LOG(ERR, "Failed to get dev for %s",
+   request_param->port_name);
+   return -1;
+   }
+   process_private = dev->process_private;
+   fd_iterator = 0;
+   TAP_LOG(DEBUG, "tap_attach rx_q:%d tx_q:%d\n", request_param->rxq_count,
+   request_param->txq_count);
+   for (queue = 0; queue < request_param->txq_count; queue++)
+   process_private->txq_fds[queue] = request->fds[fd_iterator++];
+   for (queue = 0; queue < request_param->rxq_count; queue++)
+   process_private->rxq_fds[queue] = request->fds[fd_iterator++];
+
+   return 0;
+}
+
 /* This function gets called when the current port gets stopped.
  */
 static int
@@ -1084,6 +1151,9 @@ tap_dev_close(struct rte_eth_dev *dev)
 
if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
rte_free(dev->process_private);
+   if (tap_devices_count == 1)
+   rte_mp_action_unregister(TAP_MP_REQ_START_RXTX);
+   tap_devices_count--;
return 0;
}
 
@@ -2445,6 +2515,16 @@ rte_pmd_tap_probe(struct rte_vdev_device *dev)
ret = tap_mp_attach_queues(name, eth_dev);
if (ret != 0)
return -1;
+
+   if (!tap_devices_count) {
+   ret = rte_mp_action_register(TAP_MP_REQ_START_RXTX, 
tap_mp_req_start_rxtx);
+   if (ret < 0 && rte_errno != ENOTSUP) {
+   TAP_LOG(ERR, "tap: Failed to register IPC 
callback: %s",
+   strerror(rte_errno));
+ 


RE: [EXT] [PATCH v2 4/4] crypto: modify return value for asym session create

2022-01-31 Thread Anoob Joseph
Hi Ciara,

Minor nits. Please see inline.

With the fixes,
Acked-by: Anoob Joseph 

Thanks,
Anoob

> -Original Message-
> From: Ciara Power 
> Sent: Monday, January 24, 2022 8:34 PM
> To: dev@dpdk.org
> Cc: roy.fan.zh...@intel.com; Akhil Goyal ; Anoob Joseph
> ; m...@ashroe.eu; Ciara Power
> ; Declan Doherty 
> Subject: [EXT] [PATCH v2 4/4] crypto: modify return value for asym session
> create
> 
> External Email
> 
> --
> Rather than the asym session create function returning a session on
> success, and a NULL value on error, it is modified to now return int
> values - 0 on success or -EINVAL/-ENOTSUP/-ENOMEM on failure.
> The session to be used is passed as input.
> 
> This adds clarity on the failure of the create function, which enables 
> treating the
> -ENOTSUP return as TEST_SKIPPED in test apps.
> 
> Signed-off-by: Ciara Power 
> ---

[snip]

> @@ -744,11 +746,13 @@ cperf_create_session(struct rte_mempool *sess_mp,
>   xform.modex.exponent.data = perf_mod_e;
>   xform.modex.exponent.length = sizeof(perf_mod_e);
> 
> - sess = (void *)rte_cryptodev_asym_session_create(sess_mp,
> dev_id, &xform);
> - if (sess == NULL)
> + ret = rte_cryptodev_asym_session_create(&asym_sess,
> + sess_mp, dev_id, &xform);
> + if (ret < 0) {
> + RTE_LOG(ERR, USER1, "Asym session create failed");

[Anoob] Don't we need \n at the end?

[snip]
 
> @@ -644,9 +645,9 @@ test_rsa_sign_verify(void)
>   struct crypto_testsuite_params_asym *ts_params = &testsuite_params;
>   struct rte_mempool *sess_mpool = ts_params->session_mpool;
>   uint8_t dev_id = ts_params->valid_devs[0];
> - void *sess;
> + void *sess = NULL;
>   struct rte_cryptodev_info dev_info;
> - int status = TEST_SUCCESS;
> + int status = TEST_SUCCESS, ret;

[Anoob] May be move status to the end? Here and in other places.
https://doc.dpdk.org/guides/contributing/coding_style.html#local-variables
 
> 
>   /* Test case supports op with exponent key only,
>* Check in PMD feature flag for RSA exponent key type support.
> @@ -659,12 +660,12 @@ test_rsa_sign_verify(void)
>   return TEST_SKIPPED;
>   }
> 
> - sess = rte_cryptodev_asym_session_create(sess_mpool, dev_id,
> &rsa_xform);
> -
> - if (!sess) {
> + ret = rte_cryptodev_asym_session_create(&sess, sess_mpool,
> + dev_id, &rsa_xform);
> + if (ret < 0) {
>   RTE_LOG(ERR, USER1, "Session creation failed for "
>   "sign_verify\n");
> - status = TEST_FAILED;
> + status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
>   goto error_exit;
>   }
> 

[snip]

> diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h 
> index
> 6a4d6d9934..89739def91 100644
> --- a/lib/cryptodev/rte_cryptodev.h
> +++ b/lib/cryptodev/rte_cryptodev.h
> @@ -990,18 +990,21 @@ rte_cryptodev_sym_session_create(struct
> rte_mempool *mempool);
>  /**
>   * Create asymmetric crypto session header (generic with no private data)
>   *
> + * @param   sessionvoid ** for session to be used
>   * @param   mempoolmempool to allocate asymmetric session
>   * objects from
>   * @param   dev_id   ID of device that we want the session to be used on
>   * @param   xforms   Asymmetric crypto transform operations to apply on flow
>   *   processed with this session
>   * @return
> - *  - On success return pointer to asym-session
> - *  - On failure returns NULL
> + *  - 0 on success.
> + *  - -EINVAL on invalid device ID, or invalid mempool.

[Anoob] PMD can also return an -EINVAL if some invalid configuration is 
requested. May be better to leave this open, like,

"- EINVAL Invalid arguments"

> + *  - -ENOMEM on memory error for session allocation.
> + *  - -ENOTSUP if device doesn't support session configuration.
>   */
>  __rte_experimental
> -void *
> -rte_cryptodev_asym_session_create(struct rte_mempool *mempool,
> +int
> +rte_cryptodev_asym_session_create(void **session, struct rte_mempool
> +*mempool,
>   uint8_t dev_id, struct rte_crypto_asym_xform *xforms);
> 
>  /**
> --
> 2.25.1



RE: [EXT] [PATCH v2 3/4] crypto: add asym session user data API

2022-01-31 Thread Anoob Joseph
Hi Ciara,

Minor nits inline.

Acked-by: Anoob Joseph 

Thanks,
Anoob

> -Original Message-
> From: Ciara Power 
> Sent: Monday, January 24, 2022 8:34 PM
> To: dev@dpdk.org
> Cc: roy.fan.zh...@intel.com; Akhil Goyal ; Anoob Joseph
> ; m...@ashroe.eu; Ciara Power
> ; Declan Doherty 
> Subject: [EXT] [PATCH v2 3/4] crypto: add asym session user data API
> 
> External Email
> 
> --
> A user data field is added to the asymmetric session structure.
> Relevant API added to get/set the field.
> 
> Signed-off-by: Ciara Power 
> 
> ---
> v2: Corrected order of version map entries.
> ---
>  app/test/test_cryptodev_asym.c  |  2 +-
>  lib/cryptodev/cryptodev_pmd.h   |  4 ++-
>  lib/cryptodev/rte_cryptodev.c   | 39 ++---
>  lib/cryptodev/rte_cryptodev.h   | 34 -
>  lib/cryptodev/rte_cryptodev_trace.h |  3 ++-
>  lib/cryptodev/version.map   |  2 ++
>  6 files changed, 76 insertions(+), 8 deletions(-)
> 
> diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
> index f93f39af42..a81d6292f6 100644
> --- a/app/test/test_cryptodev_asym.c
> +++ b/app/test/test_cryptodev_asym.c
> @@ -897,7 +897,7 @@ testsuite_setup(void)
>   }
> 
>   ts_params->session_mpool =
> rte_cryptodev_asym_session_pool_create(
> - "test_asym_sess_mp", TEST_NUM_SESSIONS * 2, 0,
> + "test_asym_sess_mp", TEST_NUM_SESSIONS * 2, 0, 0,
>   SOCKET_ID_ANY);
> 
>   TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
> diff --git a/lib/cryptodev/cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
> index 2d12505d3c..a0f7bb0c05 100644
> --- a/lib/cryptodev/cryptodev_pmd.h
> +++ b/lib/cryptodev/cryptodev_pmd.h
> @@ -636,7 +636,9 @@ __extension__ struct rte_cryptodev_asym_session {
>   /**< Session driver ID. */
>   uint8_t max_priv_session_sz;
>   /**< size of private session data used when creating mempool */
> - uint8_t padding[6];
> + uint16_t user_data_sz;
> + /**< session user data will be placed after sess_data */

[Anoob] The formatting of comments is slightly inconsistent here. Like "Session 
driver ID." v/s "session user data.." For the line you are adding do you mind 
making S capital? Same comment below as well. 
 
> + uint8_t padding[4];
>   uint8_t sess_private_data[0];
>  };
> 
> diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c 
> index
> c10b9bf05f..2a591930de 100644
> --- a/lib/cryptodev/rte_cryptodev.c
> +++ b/lib/cryptodev/rte_cryptodev.c
> @@ -210,6 +210,8 @@ struct rte_cryptodev_sym_session_pool_private_data {
> struct rte_cryptodev_asym_session_pool_private_data {
>   uint8_t max_priv_session_sz;
>   /**< size of private session data used when creating mempool */
> + uint16_t user_data_sz;
> + /**< session user data will be placed after sess_private_data */
>  };
> 
>  int
> @@ -1803,7 +1805,7 @@ rte_cryptodev_sym_session_pool_create(const char
> *name, uint32_t nb_elts,
> 


[PATCH v2 0/2] Adding new cases to lookaside IPsec tests

2022-01-31 Thread Tejasree Kondoj
Adding new test cases to lookaside IPsec tests.
* Set and copy DSCP cases
* ESN and antireplay support

Changes in v2:
* Fixed 32-bit build failure

Anoob Joseph (1):
  test/crypto: add copy and set DSCP cases

Tejasree Kondoj (1):
  test/cryptodev: add ESN and Antireplay tests

 app/test/test_cryptodev.c | 352 +-
 app/test/test_cryptodev_security_ipsec.c  | 173 +++--
 app/test/test_cryptodev_security_ipsec.h  |  16 +-
 ...st_cryptodev_security_ipsec_test_vectors.h |   1 +
 doc/guides/rel_notes/release_22_03.rst|   5 +
 5 files changed, 518 insertions(+), 29 deletions(-)

-- 
2.27.0



[PATCH v2 1/2] test/crypto: add copy and set DSCP cases

2022-01-31 Thread Tejasree Kondoj
From: Anoob Joseph 

Add test cases to verify copy and set DSCP with IPv4 and IPv6 tunnels.

Signed-off-by: Anoob Joseph 
---
 app/test/test_cryptodev.c| 166 +++
 app/test/test_cryptodev_security_ipsec.c | 150 
 app/test/test_cryptodev_security_ipsec.h |  10 ++
 3 files changed, 301 insertions(+), 25 deletions(-)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index ec4a61bdf9..47ad991c31 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -9176,7 +9176,21 @@ test_ipsec_proto_process(const struct ipsec_test_data 
td[],
if (flags->df == TEST_IPSEC_SET_DF_1_INNER_0)
ipsec_xform.tunnel.ipv4.df = 1;
 
+   if (flags->dscp == TEST_IPSEC_SET_DSCP_0_INNER_1)
+   ipsec_xform.tunnel.ipv4.dscp = 0;
+
+   if (flags->dscp == TEST_IPSEC_SET_DSCP_1_INNER_0)
+   ipsec_xform.tunnel.ipv4.dscp =
+   TEST_IPSEC_DSCP_VAL;
+
} else {
+   if (flags->dscp == TEST_IPSEC_SET_DSCP_0_INNER_1)
+   ipsec_xform.tunnel.ipv6.dscp = 0;
+
+   if (flags->dscp == TEST_IPSEC_SET_DSCP_1_INNER_0)
+   ipsec_xform.tunnel.ipv6.dscp =
+   TEST_IPSEC_DSCP_VAL;
+
memcpy(&ipsec_xform.tunnel.ipv6.src_addr, &v6_src,
   sizeof(v6_src));
memcpy(&ipsec_xform.tunnel.ipv6.dst_addr, &v6_dst,
@@ -9761,6 +9775,126 @@ test_ipsec_proto_set_df_1_inner_0(const void *data 
__rte_unused)
return test_ipsec_proto_all(&flags);
 }
 
+static int
+test_ipsec_proto_ipv4_copy_dscp_inner_0(const void *data __rte_unused)
+{
+   struct ipsec_test_flags flags;
+
+   memset(&flags, 0, sizeof(flags));
+
+   flags.dscp = TEST_IPSEC_COPY_DSCP_INNER_0;
+
+   return test_ipsec_proto_all(&flags);
+}
+
+static int
+test_ipsec_proto_ipv4_copy_dscp_inner_1(const void *data __rte_unused)
+{
+   struct ipsec_test_flags flags;
+
+   memset(&flags, 0, sizeof(flags));
+
+   flags.dscp = TEST_IPSEC_COPY_DSCP_INNER_1;
+
+   return test_ipsec_proto_all(&flags);
+}
+
+static int
+test_ipsec_proto_ipv4_set_dscp_0_inner_1(const void *data __rte_unused)
+{
+   struct ipsec_test_flags flags;
+
+   if (gbl_driver_id == rte_cryptodev_driver_id_get(
+   RTE_STR(CRYPTODEV_NAME_CN9K_PMD)))
+   return TEST_SKIPPED;
+
+   memset(&flags, 0, sizeof(flags));
+
+   flags.dscp = TEST_IPSEC_SET_DSCP_0_INNER_1;
+
+   return test_ipsec_proto_all(&flags);
+}
+
+static int
+test_ipsec_proto_ipv4_set_dscp_1_inner_0(const void *data __rte_unused)
+{
+   struct ipsec_test_flags flags;
+
+   if (gbl_driver_id == rte_cryptodev_driver_id_get(
+   RTE_STR(CRYPTODEV_NAME_CN9K_PMD)))
+   return TEST_SKIPPED;
+
+   memset(&flags, 0, sizeof(flags));
+
+   flags.dscp = TEST_IPSEC_SET_DSCP_1_INNER_0;
+
+   return test_ipsec_proto_all(&flags);
+}
+
+static int
+test_ipsec_proto_ipv6_copy_dscp_inner_0(const void *data __rte_unused)
+{
+   struct ipsec_test_flags flags;
+
+   memset(&flags, 0, sizeof(flags));
+
+   flags.ipv6 = true;
+   flags.tunnel_ipv6 = true;
+   flags.dscp = TEST_IPSEC_COPY_DSCP_INNER_0;
+
+   return test_ipsec_proto_all(&flags);
+}
+
+static int
+test_ipsec_proto_ipv6_copy_dscp_inner_1(const void *data __rte_unused)
+{
+   struct ipsec_test_flags flags;
+
+   memset(&flags, 0, sizeof(flags));
+
+   flags.ipv6 = true;
+   flags.tunnel_ipv6 = true;
+   flags.dscp = TEST_IPSEC_COPY_DSCP_INNER_1;
+
+   return test_ipsec_proto_all(&flags);
+}
+
+static int
+test_ipsec_proto_ipv6_set_dscp_0_inner_1(const void *data __rte_unused)
+{
+   struct ipsec_test_flags flags;
+
+   if (gbl_driver_id == rte_cryptodev_driver_id_get(
+   RTE_STR(CRYPTODEV_NAME_CN9K_PMD)))
+   return TEST_SKIPPED;
+
+   memset(&flags, 0, sizeof(flags));
+
+   flags.ipv6 = true;
+   flags.tunnel_ipv6 = true;
+   flags.dscp = TEST_IPSEC_SET_DSCP_0_INNER_1;
+
+   return test_ipsec_proto_all(&flags);
+}
+
+static int
+test_ipsec_proto_ipv6_set_dscp_1_inner_0(const void *data __rte_unused)
+{
+   struct ipsec_test_flags flags;
+
+   if (gbl_driver_id == rte_cryptodev_driver_id_get(
+   RTE_STR(CRYPTODEV_NAME_CN9K_PMD)))
+   return TEST_SKIPPED;
+
+   memset(&flags, 0, sizeof(flags));
+
+   flags.ipv6 = true;
+   flags.tunnel_ipv6 = true;
+   flags.dscp = TEST_IPSEC_SET_DSCP_1_INNER_0;
+
+   return test_ipsec_proto_all(&flags);
+}
+
 static int
 test_PDCP_PROTO_all(void)
 {
@@ -14799,6 +14933,38 @@ static struct unit_test_suite ipse

[PATCH v2 2/2] test/cryptodev: add ESN and Antireplay tests

2022-01-31 Thread Tejasree Kondoj
Adding test cases for IPsec ESN and Antireplay.

Signed-off-by: Tejasree Kondoj 
---
 app/test/test_cryptodev.c | 186 +-
 app/test/test_cryptodev_security_ipsec.c  |  23 ++-
 app/test/test_cryptodev_security_ipsec.h  |   6 +-
 ...st_cryptodev_security_ipsec_test_vectors.h |   1 +
 doc/guides/rel_notes/release_22_03.rst|   5 +
 5 files changed, 217 insertions(+), 4 deletions(-)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 47ad991c31..3536b65c52 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -9292,6 +9292,18 @@ test_ipsec_proto_process(const struct ipsec_test_data 
td[],
return TEST_SKIPPED;
 
for (i = 0; i < nb_td; i++) {
+   if (flags->antireplay &&
+   (dir == RTE_SECURITY_IPSEC_SA_DIR_EGRESS)) {
+   sess_conf.ipsec.esn.value = td[i].ipsec_xform.esn.value;
+   ret = rte_security_session_update(ctx,
+   ut_params->sec_session, &sess_conf);
+   if (ret) {
+   printf("Could not update sequence number in "
+  "session\n");
+   return TEST_SKIPPED;
+   }
+   }
+
/* Setup source mbuf payload */
ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
memset(rte_pktmbuf_mtod(ut_params->ibuf, uint8_t *), 0,
@@ -9344,7 +9356,8 @@ test_ipsec_proto_process(const struct ipsec_test_data 
td[],
/* Process crypto operation */
process_crypto_request(dev_id, ut_params->op);
 
-   ret = test_ipsec_status_check(ut_params->op, flags, dir, i + 1);
+   ret = test_ipsec_status_check(&td[i], ut_params->op, flags, dir,
+ i + 1);
if (ret != TEST_SUCCESS)
goto crypto_op_free;
 
@@ -9895,6 +9908,150 @@ test_ipsec_proto_ipv6_set_dscp_1_inner_0(const void 
*data __rte_unused)
return test_ipsec_proto_all(&flags);
 }
 
+static int
+test_ipsec_pkt_replay(const void *test_data, const uint64_t esn[],
+ bool replayed_pkt[], uint32_t nb_pkts, bool esn_en,
+ uint64_t winsz)
+{
+   struct ipsec_test_data td_outb[IPSEC_TEST_PACKETS_MAX];
+   struct ipsec_test_data td_inb[IPSEC_TEST_PACKETS_MAX];
+   struct ipsec_test_flags flags;
+   uint32_t i = 0, ret = 0;
+
+   memset(&flags, 0, sizeof(flags));
+   flags.antireplay = true;
+
+   for (i = 0; i < nb_pkts; i++) {
+   memcpy(&td_outb[i], test_data, sizeof(td_outb[i]));
+   td_outb[i].ipsec_xform.options.iv_gen_disable = 1;
+   td_outb[i].ipsec_xform.replay_win_sz = winsz;
+   td_outb[i].ipsec_xform.options.esn = esn_en;
+   }
+
+   for (i = 0; i < nb_pkts; i++)
+   td_outb[i].ipsec_xform.esn.value = esn[i];
+
+   ret = test_ipsec_proto_process(td_outb, td_inb, nb_pkts, true,
+  &flags);
+   if (ret != TEST_SUCCESS)
+   return ret;
+
+   test_ipsec_td_update(td_inb, td_outb, nb_pkts, &flags);
+
+   for (i = 0; i < nb_pkts; i++) {
+   td_inb[i].ipsec_xform.options.esn = esn_en;
+   /* Set antireplay flag for packets to be dropped */
+   td_inb[i].ar_packet = replayed_pkt[i];
+   }
+
+   ret = test_ipsec_proto_process(td_inb, NULL, nb_pkts, true,
+  &flags);
+
+   return ret;
+}
+
+static int
+test_ipsec_proto_pkt_antireplay(const void *test_data, uint64_t winsz)
+{
+
+   uint32_t nb_pkts = 5;
+   bool replayed_pkt[5];
+   uint64_t esn[5];
+
+   /* 1. Advance the TOP of the window to WS * 2 */
+   esn[0] = winsz * 2;
+   /* 2. Test sequence number within the new window(WS + 1) */
+   esn[1] = winsz + 1;
+   /* 3. Test sequence number less than the window BOTTOM */
+   esn[2] = winsz;
+   /* 4. Test sequence number in the middle of the window */
+   esn[3] = winsz + (winsz / 2);
+   /* 5. Test replay of the packet in the middle of the window */
+   esn[4] = winsz + (winsz / 2);
+
+   replayed_pkt[0] = false;
+   replayed_pkt[1] = false;
+   replayed_pkt[2] = true;
+   replayed_pkt[3] = false;
+   replayed_pkt[4] = true;
+
+   return test_ipsec_pkt_replay(test_data, esn, replayed_pkt, nb_pkts,
+false, winsz);
+}
+
+static int
+test_ipsec_proto_pkt_antireplay1024(const void *test_data)
+{
+   return test_ipsec_proto_pkt_antireplay(test_data, 1024);
+}
+
+static int
+test_ipsec_proto_pkt_antireplay2048(const void *test_data)
+{
+   return test_ipsec_proto_pkt_antireplay(test_data, 2048);
+}
+
+static int
+test_ipsec_proto_pkt_antireplay4096(const 

Re: [PATCH] app/testpmd: fix bonding mode set

2022-01-31 Thread Ferruh Yigit

On 1/28/2022 2:35 AM, Min Hu (Connor) wrote:

When starting testpmd and typing commands like the following, it leads to a
segmentation fault:

testpmd> create bonded device 4 0
testpmd> add bonding slave 0 2
testpmd> add bonding slave 1 2
testpmd> port start 2
testpmd> set bonding mode 0 2
testpmd> quit
Stopping port 0...
Stopping ports...
...
Bye...
Segmentation fault

The root cause of the bug is that the rte timer is not cancelled on quit.
That is, in 'bond_ethdev_start', resources are allocated according to the
bonding mode, and in 'bond_ethdev_stop', resources are freed according to
the corresponding mode.

For example, 'bond_ethdev_start' starts the bond_mode_8023ad_ext_periodic_cb
timer for bonding mode 4, and 'bond_ethdev_stop' cancels the timer only
when the current bonding mode is 4. If the bonding mode is changed and the
process quits directly, the timer is still running, freed memory is
accessed, and a segmentation fault follows.

'bonding mode'changed means resources changed, reallocate resources for


'bonding mode' changed ...


different mode should be done, that is, device should be restarted.

Fixes: 2950a769315e ("bond: testpmd support")
Cc: sta...@dpdk.org

Signed-off-by: Min Hu (Connor) 


Tested-by: Ferruh Yigit 

Applied to dpdk-next-net/main, thanks.


---
  app/test-pmd/cmdline.c | 13 +
  1 file changed, 13 insertions(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index e626b1c7d9..2c47ab0f18 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -5915,6 +5915,19 @@ static void cmd_set_bonding_mode_parsed(void 
*parsed_result,
  {
struct cmd_set_bonding_mode_result *res = parsed_result;
portid_t port_id = res->port_id;
+   struct rte_port *port = &ports[port_id];
+
+   /*
+* Bonding mode changed means resources of device changed, like whether
+* started rte timer or not. Device should be restarted when resources
+* of device changed.
+*/
+   if (port->port_status != RTE_PORT_STOPPED) {
+   fprintf(stderr,
+   "\t Error: Can't config bonding mode when port %d is not 
stopped\n",


Updated as "... Can't set bonding mode ..."


+   port_id);
+   return;
+   }
  
  	/* Set the bonding mode for the relevant port. */

if (0 != rte_eth_bond_mode_set(port_id, res->value))




Re: [PATCH] app/testpmd: fix GENEVE parsing in csum forward mode

2022-01-31 Thread Singh, Aman Deep


On 1/30/2022 4:48 PM, Raja Zidane wrote:

I didn't want to remove the default parsing of the tunnel as VXLAN because I
thought it might be used. Instead, I moved it to the end, which makes the code
detect all supported tunnels through udp_dst_port, and only if no tunnel was
matched does it default to VXLAN.
That was the reason GENEVE packets weren't detected and were parsed as VXLAN
instead, which is the bug I was trying to solve.


We can take help/input from the i40e maintainers on this.

Hi Beilei Xing,
For setting packet_type as tunnel, what criteria does the i40e driver use? Is
it only the UDP destination port, or other parameters as well?



-Original Message-
From: Singh, Aman Deep  
Sent: Thursday, January 20, 2022 12:47 PM

To: Matan Azrad; Ferruh Yigit; Raja 
Zidane;dev@dpdk.org
Cc:sta...@dpdk.org
Subject: Re: [PATCH] app/testpmd: fix GENEVE parsing in csum forward mode

External email: Use caution opening links or attachments


On 1/18/2022 6:49 PM, Matan Azrad wrote:
  app/test-pmd/csumonly.c | 16 ++--
  1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 2aeea243b6..fe810fecdd 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -254,7 +254,10 @@ parse_gtp(struct rte_udp_hdr *udp_hdr,
  info->l2_len += RTE_ETHER_GTP_HLEN;
  }

-/* Parse a vxlan header */
+/*
+ * Parse a vxlan header.
+ * If a tunnel is detected in 'pkt_type' it will be parsed by default as vxlan.
+ */
  static void
  parse_vxlan(struct rte_udp_hdr *udp_hdr,
  struct testpmd_offload_info *info, @@ -912,17
+915,18 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
  
RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE;
  goto tunnel_update;
  }
- parse_vxlan(udp_hdr, &info,
- m->packet_type);
+ parse_geneve(udp_hdr, &info);
  if (info.is_tunnel) {
  tx_ol_flags |=
- RTE_MBUF_F_TX_TUNNEL_VXLAN;
+
+ RTE_MBUF_F_TX_TUNNEL_GENEVE;
  goto tunnel_update;
  }
- parse_geneve(udp_hdr, &info);
+ /* Always keep last. */
+ parse_vxlan(udp_hdr, &info,
+ m->packet_type);
  if (info.is_tunnel) {
  tx_ol_flags |=
- RTE_MBUF_F_TX_TUNNEL_GENEVE;
+
+ RTE_MBUF_F_TX_TUNNEL_VXLAN;
  goto tunnel_update;
  }
  } else if (info.l4_proto ==
IPPROTO_GRE) {

-Original Message-
From: Ferruh Yigit
Sent: Tuesday, January 18, 2022 3:03 PM
To: Matan Azrad; Raja Zidane;
dev@dpdk.org
Cc:sta...@dpdk.org
Subject: Re: [PATCH] app/testpmd: fix GENEVE parsing in csum forward
mode

External email: Use caution opening links or attachments


On 1/18/2022 12:55 PM, Matan Azrad wrote:

-Original Message-
From: Ferruh Yigit
Sent: Tuesday, January 18, 2022 2:28 PM
To: Matan Azrad; Raja Zidane
;dev@dpdk.org
Cc:sta...@dpdk.org
Subject: Re: [PATCH] app/testpmd: fix GENEVE parsing in csum
forward mode

External email: Use caution opening links or attachments


On 1/18/2022 11:27 AM, Matan Azrad wrote:

-Original Message-
From: Ferruh Yigit
Sent: Tuesday, January 18, 2022 11:52 AM
To: Raja Zidane;dev@dpdk.org
Cc: Matan Azrad;sta...@dpdk.org
Subject: Re: [PATCH] app/testpmd: fix GENEVE parsing in csum
forward mode

External email: Use caution opening links or attachments


On 12/5/2021 3:44 AM, Raja Zidane wrote:

The csum FWD mode parses any received packet to set mbuf
offloads for the transmitting burst, mainly in the checksum/TSO areas.
In the case of a tunnel header, the csum FWD tries to detect
known tunnels by the standard definition using the header's data,
and falls back to checking the packet type in the mbuf to see if the
Rx port driver already flagged the packet as a tunnel.
In the fallback case, the csum FWD assumes the tunnel is VXLAN and
parses the tunnel as VXLAN.

As far as I can see there is a VXLAN port check in
'parse_vxlan()', why it is not helping?


The problem is not the VXLAN check but the tunnel type in the mbuf, which
caused the packet to be detected as VXLAN (the default) before checking the
GENEVE tunnel case.
The check is as follows:

if (udp_hdr->dst_port != _htons(RTE_VXLAN_DEFAULT_PORT) &&
RTE_ETH_IS_TUNNEL_PKT(pkt_type) == 0)
return;

Do you know what the intention of the
"RTE_ETH_IS_TUNNEL_PKT(pkt_type) == 0" check is?
Why doesn't VXLAN parsing stop when it is not the default port?

Maybe some driver

Re: [PATCH v3] net/af_xdp: use libxdp if available

2022-01-31 Thread Ferruh Yigit

On 1/28/2022 9:50 AM, Ciara Loftus wrote:

AF_XDP support is deprecated in libbpf since v0.7.0 [1]. The libxdp library
now provides the functionality which once was in libbpf and which the
AF_XDP PMD relies on. This commit updates the AF_XDP meson build to use the
libxdp library if a version >= v1.2.2 is available. If it is not available,
only versions of libbpf prior to v0.7.0 are allowed, as they still contain
the required AF_XDP functionality.

libbpf still remains a dependency even if libxdp is present, as we use
libbpf APIs for program loading.

The minimum required kernel version for libxdp for use with AF_XDP is v5.3.
For the library to be fully-featured, a kernel v5.10 or newer is
recommended. The full compatibility information can be found in the libxdp
README.

v1.2.2 of libxdp includes an important fix required for linking with DPDK
which is why this version or greater is required. Meson uses pkg-config to
verify the version of libxdp on the system, so it is necessary that the
library is discoverable using pkg-config in order for the PMD to use it. To
verify this, you can run: pkg-config --modversion libxdp

[1] https://github.com/libbpf/libbpf/commit/277846bc6c15

Signed-off-by: Ciara Loftus 


Tested build with combination of following, build looks good
libxdp 1.2.0, libxdp 1.2.2
libbpf 0.7.0, libbpf 0.4.0


But while running, testpmd can't find libxdp.so by default [1], although
setting 'LD_LIBRARY_PATH' works (LD_LIBRARY_PATH=/usr/local/lib64/ in my case).
This wasn't required for libbpf; just checking if this is expected?

Similarly for 'build/drivers/librte_net_af_xdp.so', ldd can find 'libbpf' but
not libxdp.so (although they are in same folder):
$ ldd build/drivers/librte_net_af_xdp.so
libxdp.so.1 => not found
libbpf.so.0 => /usr/local/lib64/libbpf.so.0 (0x7f2ceb86f000)


Again, 'LD_LIBRARY_PATH' works:
$ LD_LIBRARY_PATH=/usr/local/lib64/ ldd build/drivers/librte_net_af_xdp.so
libxdp.so.1 => /usr/local/lib64/libxdp.so.1 (0x7fefa792e000)
libbpf.so.0 => /usr/local/lib64/libbpf.so.0 (0x7fefa78dc000)


But same question, why 'LD_LIBRARY_PATH' is not required for libbpf, but
required for libxdp, any idea?



[1]
./build/app/dpdk-testpmd: error while loading shared libraries: libxdp.so.1: 
cannot open shared object file: No such file or directory


Re: [PATCH v3] net/af_xdp: use libxdp if available

2022-01-31 Thread Bruce Richardson
On Mon, Jan 31, 2022 at 05:59:53PM +, Ferruh Yigit wrote:
> On 1/28/2022 9:50 AM, Ciara Loftus wrote:
> > AF_XDP support is deprecated in libbpf since v0.7.0 [1]. The libxdp
> > library now provides the functionality which once was in libbpf and
> > which the AF_XDP PMD relies on. This commit updates the AF_XDP meson
> > build to use the libxdp library if a version >= v1.2.2 is available. If
> > it is not available, only versions of libbpf prior to v0.7.0 are
> > allowed, as they still contain the required AF_XDP functionality.
> > 
> > libbpf still remains a dependency even if libxdp is present, as we use
> > libbpf APIs for program loading.
> > 
> > The minimum required kernel version for libxdp for use with AF_XDP is
> > v5.3.  For the library to be fully-featured, a kernel v5.10 or newer is
> > recommended. The full compatibility information can be found in the
> > libxdp README.
> > 
> > v1.2.2 of libxdp includes an important fix required for linking with
> > DPDK which is why this version or greater is required. Meson uses
> > pkg-config to verify the version of libxdp on the system, so it is
> > necessary that the library is discoverable using pkg-config in order
> > for the PMD to use it. To verify this, you can run: pkg-config
> > --modversion libxdp
> > 
> > [1] https://github.com/libbpf/libbpf/commit/277846bc6c15
> > 
> > Signed-off-by: Ciara Loftus 
> 
> Tested build with combination of following, build looks good libxdp
> 1.2.0, libxdp 1.2.2 libbpf 0.7.0, libbpf 0.4.0
> 
> 
> But while running testpmd can't find the libxdp.so by default [1],
> although setting 'LD_LIBRARY_PATH' works
> (LD_LIBRARY_PATH=/usr/local/lib64/ for my case), this wasn't required for
> libbpf, just checking if this is expected?
> 
> Similarly for 'build/drivers/librte_net_af_xdp.so', ldd can find 'libbpf'
> but not libxdp.so (although they are in same folder): $ ldd
> build/drivers/librte_net_af_xdp.so libxdp.so.1 => not found libbpf.so.0
> => /usr/local/lib64/libbpf.so.0 (0x7f2ceb86f000) 
> 
> Again, 'LD_LIBRARY_PATH' works: $ LD_LIBRARY_PATH=/usr/local/lib64/ ldd
> build/drivers/librte_net_af_xdp.so libxdp.so.1 =>
> /usr/local/lib64/libxdp.so.1 (0x7fefa792e000) libbpf.so.0 =>
> /usr/local/lib64/libbpf.so.0 (0x7fefa78dc000)
> 
> 
> But same question, why 'LD_LIBRARY_PATH' is not required for libbpf, but
> required for libxdp, any idea?
> 
Did you rerun "ldconfig" to refresh the ldd cache after installing the
new library?


[dpdk-dev] [PATCH v3 1/2] ethdev: support queue-based priority flow control

2022-01-31 Thread jerinj
From: Jerin Jacob 

Based on device support and use-case needs, there are two different ways
to enable PFC. The first case is the port-level PFC configuration; in
this case, the rte_eth_dev_priority_flow_ctrl_set() API shall be used to
configure the PFC, and PFC frames will be generated based on the VLAN
TC value.

The second case is the queue-level PFC configuration; in this
case, any packet field content can be used to steer the packet to the
specific queue using rte_flow or RSS, and then
rte_eth_dev_priority_flow_ctrl_queue_configure() is used to configure the
TC mapping on each queue.
Based on the congestion detected on the specific queue, the configured TC
shall be used to generate PFC frames.

Signed-off-by: Jerin Jacob 
Signed-off-by: Sunil Kumar Kori 
---

v2..v1:
- Introduce rte_eth_dev_priority_flow_ctrl_queue_info_get() to
avoid updates to rte_eth_dev_info
- Removed devtools/libabigail.abignore changes
- Address the comment from Ferruh in
http://patches.dpdk.org/project/dpdk/patch/20220113102718.3167282-1-jer...@marvell.com/

 doc/guides/nics/features.rst   |   7 +-
 doc/guides/rel_notes/release_22_03.rst |   6 ++
 lib/ethdev/ethdev_driver.h |  12 ++-
 lib/ethdev/rte_ethdev.c| 132 +
 lib/ethdev/rte_ethdev.h|  89 +
 lib/ethdev/version.map |   4 +
 6 files changed, 247 insertions(+), 3 deletions(-)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 27be2d2576..1cacdc883a 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -379,9 +379,12 @@ Flow control
 Supports configuring link flow control.
 
 * **[implements] eth_dev_ops**: ``flow_ctrl_get``, ``flow_ctrl_set``,
-  ``priority_flow_ctrl_set``.
+  ``priority_flow_ctrl_set``, ``priority_flow_ctrl_queue_info_get``,
+  ``priority_flow_ctrl_queue_configure``
 * **[related]API**: ``rte_eth_dev_flow_ctrl_get()``, 
``rte_eth_dev_flow_ctrl_set()``,
-  ``rte_eth_dev_priority_flow_ctrl_set()``.
+  ``rte_eth_dev_priority_flow_ctrl_set()``,
+  ``rte_eth_dev_priority_flow_ctrl_queue_info_get()``,
+  ``rte_eth_dev_priority_flow_ctrl_queue_configure()``.
 
 
 .. _nic_features_rate_limitation:
diff --git a/doc/guides/rel_notes/release_22_03.rst 
b/doc/guides/rel_notes/release_22_03.rst
index 3bc0630c7c..e988c104e8 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -69,6 +69,12 @@ New Features
 
   The new API ``rte_event_eth_rx_adapter_event_port_get()`` was added.
 
+* **Added APIs to enable queue-based priority flow control (PFC).**
+
+  New APIs, ``rte_eth_dev_priority_flow_ctrl_queue_info_get()`` and
+  ``rte_eth_dev_priority_flow_ctrl_queue_configure()``, were added.
+
+
 
 Removed Items
 -
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index d95605a355..320a364766 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -533,6 +533,13 @@ typedef int (*flow_ctrl_set_t)(struct rte_eth_dev *dev,
 typedef int (*priority_flow_ctrl_set_t)(struct rte_eth_dev *dev,
struct rte_eth_pfc_conf *pfc_conf);
 
+/** @internal Get info for queue based PFC on an Ethernet device. */
+typedef int (*priority_flow_ctrl_queue_info_get_t)(
+   struct rte_eth_dev *dev, struct rte_eth_pfc_queue_info *pfc_queue_info);
+/** @internal Configure queue based PFC parameter on an Ethernet device. */
+typedef int (*priority_flow_ctrl_queue_config_t)(
+   struct rte_eth_dev *dev, struct rte_eth_pfc_queue_conf *pfc_queue_conf);
+
 /** @internal Update RSS redirection table on an Ethernet device. */
 typedef int (*reta_update_t)(struct rte_eth_dev *dev,
 struct rte_eth_rss_reta_entry64 *reta_conf,
@@ -1080,7 +1087,10 @@ struct eth_dev_ops {
flow_ctrl_set_tflow_ctrl_set; /**< Setup flow control */
/** Setup priority flow control */
priority_flow_ctrl_set_t   priority_flow_ctrl_set;
-
+   /** Priority flow control queue info get */
+   priority_flow_ctrl_queue_info_get_t priority_flow_ctrl_queue_info_get;
+   /** Priority flow control queue configure */
+   priority_flow_ctrl_queue_config_t priority_flow_ctrl_queue_config;
/** Set Unicast Table Array */
eth_uc_hash_table_set_tuc_hash_table_set;
/** Set Unicast hash bitmap */
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index a1d475a292..2ce38cd2c5 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -4022,6 +4022,138 @@ rte_eth_dev_priority_flow_ctrl_set(uint16_t port_id,
return -ENOTSUP;
 }
 
+static inline int
+validate_rx_pause_config(struct rte_eth_dev_info *dev_info, uint8_t tc_max,
+struct rte_eth_pfc_queue_conf *pfc_queue_conf)
+{
+   if ((pfc_queue_conf->mode == RTE_ETH_FC_RX_PAUSE) ||
+   (pfc_queue_conf->mode == RTE_ETH_FC_FULL)) {
+   if (pfc_queue_conf-

[dpdk-dev] [PATCH v3 2/2] app/testpmd: add queue based pfc CLI options

2022-01-31 Thread jerinj
From: Sunil Kumar Kori 

Patch adds command line options to configure queue based
priority flow control.

- Syntax command is given as below:

set pfc_queue_ctrl <port_id> rx <on|off> <tx_qid> <tx_tc>\
    tx <on|off> <rx_qid> <rx_tc> <pause_time>

- Example command to configure queue based priority flow control
  on rx and tx side for port 0, Rx queue 0, Tx queue 0 with pause
  time 2047

testpmd> set pfc_queue_ctrl 0 rx on 0 0 tx on 0 0 2047

Signed-off-by: Sunil Kumar Kori 
---
v2..v1
- Sync up the implementation to use new APIs

 app/test-pmd/cmdline.c  | 122 
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  22 
 2 files changed, 144 insertions(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index e626b1c7d9..1af0321af0 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -544,6 +544,11 @@ static void cmd_help_long_parsed(void *parsed_result,
"Set the priority flow control parameter on a"
" port.\n\n"
 
+   "set pfc_queue_ctrl (port_id) rx (on|off) (tx_qid)"
+   " (tx_tc) tx (on|off) (rx_qid) (rx_tc) (pause_time)\n"
+   "Set the queue priority flow control parameter on a"
+   " given Rx and Tx queues of a port.\n\n"
+
"set stat_qmap (tx|rx) (port_id) (queue_id) 
(qmapping)\n"
"Set statistics mapping (qmapping 0..15) for RX/TX"
" queue on port.\n"
@@ -7690,6 +7695,122 @@ cmdline_parse_inst_t cmd_priority_flow_control_set = {
},
 };
 
+struct cmd_queue_priority_flow_ctrl_set_result {
+   cmdline_fixed_string_t set;
+   cmdline_fixed_string_t pfc_queue_ctrl;
+   portid_t port_id;
+   cmdline_fixed_string_t rx;
+   cmdline_fixed_string_t rx_pfc_mode;
+   uint16_t tx_qid;
+   uint8_t  tx_tc;
+   cmdline_fixed_string_t tx;
+   cmdline_fixed_string_t tx_pfc_mode;
+   uint16_t rx_qid;
+   uint8_t  rx_tc;
+   uint16_t pause_time;
+};
+
+static void
+cmd_queue_priority_flow_ctrl_set_parsed(void *parsed_result,
+   __rte_unused struct cmdline *cl,
+   __rte_unused void *data)
+{
+   struct cmd_queue_priority_flow_ctrl_set_result *res = parsed_result;
+   struct rte_eth_pfc_queue_conf pfc_queue_conf;
+   int rx_fc_enable, tx_fc_enable;
+   int ret;
+
+   /*
+* Rx on/off, flow control is enabled/disabled on RX side. This can
+* indicate the RTE_ETH_FC_TX_PAUSE, Transmit pause frame at the Rx
+* side. Tx on/off, flow control is enabled/disabled on TX side. This
+* can indicate the RTE_ETH_FC_RX_PAUSE, Respond to the pause frame at
+* the Tx side.
+*/
+   static enum rte_eth_fc_mode rx_tx_onoff_2_mode[2][2] = {
+   {RTE_ETH_FC_NONE, RTE_ETH_FC_TX_PAUSE},
+   {RTE_ETH_FC_RX_PAUSE, RTE_ETH_FC_FULL}
+   };
+
+   memset(&pfc_queue_conf, 0, sizeof(struct rte_eth_pfc_queue_conf));
+   rx_fc_enable = (!strncmp(res->rx_pfc_mode, "on", 2)) ? 1 : 0;
+   tx_fc_enable = (!strncmp(res->tx_pfc_mode, "on", 2)) ? 1 : 0;
+   pfc_queue_conf.mode = rx_tx_onoff_2_mode[rx_fc_enable][tx_fc_enable];
+   pfc_queue_conf.rx_pause.tc  = res->tx_tc;
+   pfc_queue_conf.rx_pause.tx_qid = res->tx_qid;
+   pfc_queue_conf.tx_pause.tc  = res->rx_tc;
+   pfc_queue_conf.tx_pause.rx_qid  = res->rx_qid;
+   pfc_queue_conf.tx_pause.pause_time = res->pause_time;
+
+   ret = rte_eth_dev_priority_flow_ctrl_queue_configure(res->port_id,
+&pfc_queue_conf);
+   if (ret != 0) {
+   fprintf(stderr,
+   "bad queue priority flow control parameter, rc = %d\n",
+   ret);
+   }
+}
+
+cmdline_parse_token_string_t cmd_q_pfc_set_set =
+   TOKEN_STRING_INITIALIZER(struct cmd_queue_priority_flow_ctrl_set_result,
+   set, "set");
+cmdline_parse_token_string_t cmd_q_pfc_set_flow_ctrl =
+   TOKEN_STRING_INITIALIZER(struct cmd_queue_priority_flow_ctrl_set_result,
+   pfc_queue_ctrl, "pfc_queue_ctrl");
+cmdline_parse_token_num_t cmd_q_pfc_set_portid =
+   TOKEN_NUM_INITIALIZER(struct cmd_queue_priority_flow_ctrl_set_result,
+   port_id, RTE_UINT16);
+cmdline_parse_token_string_t cmd_q_pfc_set_rx =
+   TOKEN_STRING_INITIALIZER(struct cmd_queue_priority_flow_ctrl_set_result,
+   rx, "rx");
+cmdline_parse_token_string_t cmd_q_pfc_set_rx_mode =
+   TOKEN_STRING_INITIALIZER(struct cmd_queue_priority_flow_ctrl_set_result,
+   rx_pfc_mode, "on#off");
+cmdline_parse_token_num_t cmd_q_pfc_set_tx_qid =
+   TOKEN_NUM_INITIALIZER(struct cmd_queue_priority_flow_ctrl_set_result,
+   tx_qid, RTE_UINT16)

[PATCH] add missing file to meson build for installation

2022-01-31 Thread Martijn Bakker
Signed-off-by: Martijn Bakker 
---
 lib/eal/include/meson.build | 1 +
 1 file changed, 1 insertion(+)

diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index 86468d1a2b..9700494816 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -60,6 +60,7 @@ generic_headers = files(
 'generic/rte_mcslock.h',
 'generic/rte_memcpy.h',
 'generic/rte_pause.h',
+'generic/rte_pflock.h',
 'generic/rte_power_intrinsics.h',
 'generic/rte_prefetch.h',
 'generic/rte_rwlock.h',
-- 
2.25.1



RE: [PATCH v2 03/10] ethdev: bring in async queue-based flow rules

2022-01-31 Thread Ivan Malov

Hi all,

On Thu, 27 Jan 2022, Alexander Kozyrev wrote:


On Wednesday, January 26, 2022 13:54 Ajit Khaparde  
wrote:


On Tue, Jan 25, 2022 at 9:03 PM Alexander Kozyrev 
wrote:


On Monday, January 24, 2022 19:00 Ivan Malov 

wrote:

This series is very helpful as it draws attention to
the problem of making flow API efficient. That said,
there is much room for improvement, especially in
what comes to keeping things clear and concise.

In example, the following APIs

- rte_flow_q_flow_create()
- rte_flow_q_flow_destroy()
- rte_flow_q_action_handle_create()
- rte_flow_q_action_handle_destroy()
- rte_flow_q_action_handle_update()

should probably be transformed into a unified one

int
rte_flow_q_submit_task(uint16_t  port_id,
uint32_t  queue_id,
const struct rte_flow_q_ops_attr *q_ops_attr,
enum rte_flow_q_task_type task_type,
const void   *task_data,
rte_flow_q_task_cookie_t  cookie,
struct rte_flow_error*error);

with a handful of corresponding enum defines and data structures
for these 5 operations.

We were thinking about the unified function for all queue operations.


Good.


But it has too many drawbacks in our opinion:


Is that so?


1. Different operation return different results and unneeded parameters.
q_flow_create gives a flow handle, q_action_handle returns indirect action

handle.

destroy functions return the status. All these cases needs to be handled

differently.


Yes, all of these are to be handled differently, but this does not mean
that one cannot think of a unified handle format. The application can
remember which cookie corresponds to which operation type after all.


Also, the unified function is bloated with various parameters not needed

for all operations.


That is far-fetched.

Non-unified set of APIs is also bloated. Takes long to read. Many
duplicating comments. When one has added a new API for a different
type of task, they will have to duplicate many lines one more time.

In the case of unified API, one has to add a new enum type (one line),
specific (and thus concise) description for it, and the corresponding
structure for the task data. That's it. The rest is up to the PMDs.

Also, it should be possible to make the task data IN-OUT, to return
its task-specific handle (to address your above concern).


Both of these point results in hard to understand API and messy

documentation with

various structures on how to use it in every particular case.


The documentation for the method is always the same. Precise and concise.
The task data structures will have their own descriptions, yes. But that
does not make the documentation messy. Or am I missing something?


2. Performance consideration.
We aimed the new API with the insertion/deletion rate in mind.


Good.


Adding if conditions to distinguish between requested operation will cause

some degradation.


Some degradation.. - how serious would it be? What's for the "if"
conditions, well, I believe the compiler is smart enough to deal
with them efficiently. After all, the suggested approach is
a fixed set of operation (task) types. One can have a
static array of task-specific methods in the PMD.
And only one condition to check the value bounds.


It is preferred to have separate small functions that do one job and make it

efficient.


A noble idea.


Interfaces are still the same.


That is the major point of confusion. The application developer has to
be super-careful to tell the queue version of "flow_create" from the
regular one. The two set of APIs are indeed counterparts, and that's
might be ambiguous. Whilst in the unified approach, readers will
understand that this new API is just a task-submitter for
the asynchronous type of operation.


Glad I made it clearer. Ivan, what do you think about these considerations?


Well, I'm not pushing anyone to abandon the existing approach and switch
to the unified API design. But the above points might not be that
difficult to address. This deserves more discussions.

Any other opinions?






By the way, shouldn't this variety of operation types cover such
from the tunnel offload model? Getting PMD's opaque "tunnel
match" items and "tunnel set" actions - things like that.

Don't quite get the idea. Could you please elaborate more on this?


rte_flow_tunnel_decap_set(), rte_flow_tunnel_action_decap_release();
rte_flow_tunnel_match(), rte_flow_tunnel_item_release().




Also, I suggest that the attribute "drain"
be replaced by "postpone" (invert the meaning).
rte_flow_q_drain() should then be renamed to
rte_flow_q_push_postponed().

The rationale behind my suggestion is that "drain" tricks readers into
thinking that the enqueued operations are going to be completely

purged,

whilst the true intention of the API is to "push" them to the hardwar

RE: [EXT] Re: [PATCH v5 1/2] eal: add API for bus close

2022-01-31 Thread Rohit Raj



> -Original Message-
> From: Thomas Monjalon 
> Sent: Thursday, January 20, 2022 8:28 PM
> To: Rohit Raj 
> Cc: Bruce Richardson ; Ray Kinsella
> ; Dmitry Kozlyuk ; Narcisa Ana
> Maria Vasile ; Dmitry Malloy
> ; Pallavi Kadam ;
> dev@dpdk.org; Nipun Gupta ; Sachin Saxena
> ; Hemant Agrawal ;
> ferruh.yi...@intel.com; david.march...@redhat.com
> Subject: Re: [EXT] Re: [PATCH v5 1/2] eal: add API for bus close
> 
> Caution: EXT Email
> 
> 20/01/2022 15:51, Rohit Raj:
> > Hi Thomas,
> >
> > This "rte_bus_close" API is introduced to do the opposite of what
> "rte_bus_probe" does. Just like there are plug and unplug APIs for plugging 
> and
> unplugging a single device.
> >
> > The API you mentioned, "rte_dev_remove" supports only rte_device.  But we
> also need to close/remove private devices of dpaa and fslmc buses which are
> not exposed directly to user (eg: mempool device).
> > Note that these private devices/bus objects are not associated with a
> particular rte_device but they are available as a resource to be used by any 
> of
> the device based on these hardware specific buses.
> > So, to close these devices, we need a new API which can do this for us. 
> > That is
> why "rte_bus_close" is required.
> 
> You mean some resources are associated to a bus but not to a device?
> It looks very weird. A resource on a bus *is* a device.
> 
> PS: please avoid top-post

The FSLMC bus has hardware resources for memory pools, queues, and a hardware
access lock (called a portal).
These are common resources which can be associated with any device, so they
don't belong to a specific device.
E.g. a mempool resource can be used by both an eth and a crypto device, so we
cannot close the mempool while closing just one of the devices (this can
happen in multi-process applications). So, these resources should be
closed/freed with the bus instead of with a device.

There is no need to expose these devices to users, and their usage is limited
to other devices on the bus. There is no reason to create yet another type of
device in DPDK to expose these internal-only resources.


> > From: Thomas Monjalon 
> > > 10/01/2022 06:26, rohit@nxp.com:
> > > > From: Rohit Raj 
> > > >
> > > > As per the current code we have API for bus probe, but the bus
> > > > close API is missing. This breaks the multi process scenarios as
> > > > objects are not cleaned while terminating the secondary processes.
> > > >
> > > > This patch adds a new API rte_bus_close() for cleanup of bus
> > > > objects which were acquired during probe.
> > >
> > > I don't understand how closing all devices of a bus will help better
> > > than just closing all devices.
> > >
> > > As Ferruh already suggested in the past, we could force closing all
> > > devices in rte_eal_cleanup().
> > > And we already have the function rte_dev_remove().
> 
> 



RE: [PATCH] eal/windows: set pthread affinity

2022-01-31 Thread Tal Shnaiderman
> Subject: [PATCH] eal/windows: set pthread affinity
> 
> External email: Use caution opening links or attachments
> 
> 
> Sometimes the OS tries to switch the core, so bind the lcore thread to a
> fixed core.
> Implement affinity call on Windows similar to Linux.
> 
> Signed-off-by: Qiao Liu 
> Signed-off-by: Pallavi Kadam 
> ---
>  lib/eal/windows/eal.c | 4 
>  1 file changed, 4 insertions(+)
> 
> diff --git a/lib/eal/windows/eal.c b/lib/eal/windows/eal.c index
> 67db7f099a..ca3c41aaa7 100644
> --- a/lib/eal/windows/eal.c
> +++ b/lib/eal/windows/eal.c
> @@ -422,6 +422,10 @@ rte_eal_init(int argc, char **argv)
> /* create a thread for each lcore */
> if (eal_thread_create(&lcore_config[i].thread_id) != 0)
> rte_panic("Cannot create thread\n");
> +   ret = pthread_setaffinity_np(lcore_config[i].thread_id,
> +   sizeof(rte_cpuset_t), &lcore_config[i].cpuset);
> +   if (ret != 0)
> +   RTE_LOG(DEBUG, EAL, "Cannot set affinity\n");
> }
> 
> /* Initialize services so drivers can register services during probe. 
> */
> --
> 2.31.1.windows.1

Acked-by: Tal Shnaiderman 



RE: [EXT] Re: [dpdk-stable] [PATCH v2] test: avoid hang if queues are full and Tx fails

2022-01-31 Thread Rakesh Kudurumalla
ping

> -Original Message-
> From: Rakesh Kudurumalla
> Sent: Monday, January 10, 2022 2:35 PM
> To: Thomas Monjalon ; Jerin Jacob Kollanukkaran
> 
> Cc: sta...@dpdk.org; dev@dpdk.org; david.march...@redhat.com;
> ferruh.yi...@intel.com; andrew.rybche...@oktetlabs.ru;
> ajit.khapa...@broadcom.com
> Subject: RE: [EXT] Re: [dpdk-stable] [PATCH v2] test: avoid hang if queues are
> full and Tx fails
> 
> ping
> 
> > -Original Message-
> > From: Rakesh Kudurumalla
> > Sent: Monday, December 13, 2021 12:10 PM
> > To: Thomas Monjalon ; Jerin Jacob Kollanukkaran
> > 
> > Cc: sta...@dpdk.org; dev@dpdk.org; david.march...@redhat.com;
> > ferruh.yi...@intel.com; andrew.rybche...@oktetlabs.ru;
> > ajit.khapa...@broadcom.com
> > Subject: RE: [EXT] Re: [dpdk-stable] [PATCH v2] test: avoid hang if
> > queues are full and Tx fails
> >
> >
> >
> > > -Original Message-
> > > From: Thomas Monjalon 
> > > Sent: Monday, November 29, 2021 2:44 PM
> > > To: Rakesh Kudurumalla ; Jerin Jacob
> > > Kollanukkaran 
> > > Cc: sta...@dpdk.org; dev@dpdk.org; david.march...@redhat.com;
> > > ferruh.yi...@intel.com; andrew.rybche...@oktetlabs.ru;
> > > ajit.khapa...@broadcom.com
> > > Subject: Re: [EXT] Re: [dpdk-stable] [PATCH v2] test: avoid hang if
> > > queues are full and Tx fails
> > >
> > > 29/11/2021 09:52, Rakesh Kudurumalla:
> > > > From: Thomas Monjalon 
> > > > > 22/11/2021 08:59, Rakesh Kudurumalla:
> > > > > > From: Thomas Monjalon 
> > > > > > > 20/07/2021 18:50, Rakesh Kudurumalla:
> > > > > > > > Current pmd_perf_autotest() in continuous mode tries to
> > > > > > > > enqueue MAX_TRAFFIC_BURST completely before starting the
> test.
> > > > > > > > Some drivers cannot accept complete MAX_TRAFFIC_BURST even
> > > > > > > > though
> > > > > rx+tx
> > > > > > > > desc
> > > > > > > count
> > > > > > > > can fit it.
> > > > > > >
> > > > > > > Which driver is failing to do so?
> > > > > > > Why it cannot enqueue 32 packets?
> > > > > >
> > > > > > Octeontx2 driver is failing to enqueue because hardware
> > > > > > buffers are full
> > > > > before test.
> > >
> > > Aren't you stopping the support of octeontx2?
> > > Why do you care now?
> > >  Yes, we are not supporting octeontx2, but this issue is also observed in
> > > the cnxk driver; the current patch fixes the same.
> > > > >
> > > > > Why hardware buffers are full?
> > > > Hardware buffers are full because the number of descriptors
> > > > in continuous mode is less than MAX_TRAFFIC_BURST, so if enqueue
> > > > fails, there is no way hardware can drop the packets.
> > > > The pmd_perf_autotest application evaluates performance after enqueueing
> packets initially.
> > > > >
> > > > > > pmd_perf_autotest() in continuous mode tries to enqueue
> > > > > > MAX_TRAFFIC_BURST (2048) before starting the test.
> > > > > >
> > > > > > > > This patch changes behaviour to stop enqueuing after few
> retries.
> > > > > > >
> > > > > > > If there is a real limitation, there will be issues in more
> > > > > > > places than this test program.
> > > > > > > I feel it should be addressed either in the driver or at ethdev 
> > > > > > > level.
> > > > > > >
> > > > > > > [...]
> > > > > > > > @@ -480,10 +483,19 @@ main_loop(__rte_unused void *args)
> > > > > > > > nb_tx = RTE_MIN(MAX_PKT_BURST, num);
> > > > > > > > nb_tx = rte_eth_tx_burst(portid, 0,
> > > > > > > > &tx_burst[idx],
> > > nb_tx);
> > > > > > > > +   if (nb_tx == 0)
> > > > > > > > +   retry_cnt++;
> > > > > > > > num -= nb_tx;
> > > > > > > > idx += nb_tx;
> > > > > > > > +   if (retry_cnt == MAX_RETRY_COUNT) {
> > > > > > > > +   retry_cnt = 0;
> > > > > > > > +   break;
> > > > > > > > +   }
> > >
> > >



Re: [EXT] Re: [PATCH v5 1/2] eal: add API for bus close

2022-01-31 Thread Thomas Monjalon
01/02/2022 06:40, Rohit Raj:
> From: Thomas Monjalon 
> > 20/01/2022 15:51, Rohit Raj:
> > > Hi Thomas,
> > >
> > > This "rte_bus_close" API is introduced to do the opposite of what
> > "rte_bus_probe" does. Just like there are plug and unplug APIs for plugging 
> > and
> > unplugging a single device.
> > >
> > > The API you mentioned, "rte_dev_remove" supports only rte_device.  But we
> > also need to close/remove private devices of dpaa and fslmc buses which are
> > not exposed directly to user (eg: mempool device).
> > > Note that these private devices/bus objects are not associated with a
> > particular rte_device but they are available as a resource to be used by 
> > any of
> > the device based on these hardware specific buses.
> > > So, to close these devices, we need a new API which can do this for us. 
> > > That is
> > why "rte_bus_close" is required.
> > 
> > You mean some resources are associated with a bus but not with a device?
> > It looks very weird. A resource on a bus *is* a device.
> > 
> > PS: please avoid top-post
> 
> The FSLMC bus has hardware resources for memory pools, queues, and a hardware 
> access lock (called a portal). 
> These are common resources which can be associated with any device, so they 
> don't belong to a specific device. 
> E.g. a mempool resource can be used by both an eth and a crypto device, so we 
> cannot close the mempool while closing just one of the devices (this can 
> happen in multi-process applications). These resources should therefore be 
> closed/freed with the bus instead of with a device.
> 
> There is no need to expose these devices to users, and their usage is limited 
> to other devices on the bus. There is no reason to create yet another type of 
> device in DPDK to expose these internal-only resources.


OK I understand better now, thanks.

 
> > > From: Thomas Monjalon 
> > > > 10/01/2022 06:26, rohit@nxp.com:
> > > > > From: Rohit Raj 
> > > > >
> > > > > As per the current code we have an API for bus probe, but the bus
> > > > > close API is missing. This breaks multi-process scenarios, as
> > > > > objects are not cleaned up while terminating the secondary processes.
> > > > >
> > > > > This patch adds a new API rte_bus_close() for cleanup of bus
> > > > > objects which were acquired during probe.
> > > >
> > > > I don't understand how closing all devices of a bus will help better
> > > > than just closing all devices.
> > > >
> > > > As Ferruh already suggested in the past, we could force closing all
> > > > devices in rte_eal_cleanup().
> > > > And we already have the function rte_dev_remove().





Re: [EXT] Re: [dpdk-stable] [PATCH v2] test: avoid hang if queues are full and Tx fails

2022-01-31 Thread Thomas Monjalon
The octeontx2 driver is removed.
Can we close this patch?


01/02/2022 07:30, Rakesh Kudurumalla:
> ping
> 
> > -Original Message-
> > From: Rakesh Kudurumalla
> > Sent: Monday, January 10, 2022 2:35 PM
> > To: Thomas Monjalon ; Jerin Jacob Kollanukkaran
> > 
> > Cc: sta...@dpdk.org; dev@dpdk.org; david.march...@redhat.com;
> > ferruh.yi...@intel.com; andrew.rybche...@oktetlabs.ru;
> > ajit.khapa...@broadcom.com
> > Subject: RE: [EXT] Re: [dpdk-stable] [PATCH v2] test: avoid hang if queues 
> > are
> > full and Tx fails
> > 
> > ping
> > 
> > > -Original Message-
> > > From: Rakesh Kudurumalla
> > > Sent: Monday, December 13, 2021 12:10 PM
> > > To: Thomas Monjalon ; Jerin Jacob Kollanukkaran
> > > 
> > > Cc: sta...@dpdk.org; dev@dpdk.org; david.march...@redhat.com;
> > > ferruh.yi...@intel.com; andrew.rybche...@oktetlabs.ru;
> > > ajit.khapa...@broadcom.com
> > > Subject: RE: [EXT] Re: [dpdk-stable] [PATCH v2] test: avoid hang if
> > > queues are full and Tx fails
> > >
> > >
> > >
> > > > -Original Message-
> > > > From: Thomas Monjalon 
> > > > Sent: Monday, November 29, 2021 2:44 PM
> > > > To: Rakesh Kudurumalla ; Jerin Jacob
> > > > Kollanukkaran 
> > > > Cc: sta...@dpdk.org; dev@dpdk.org; david.march...@redhat.com;
> > > > ferruh.yi...@intel.com; andrew.rybche...@oktetlabs.ru;
> > > > ajit.khapa...@broadcom.com
> > > > Subject: Re: [EXT] Re: [dpdk-stable] [PATCH v2] test: avoid hang if
> > > > queues are full and Tx fails
> > > >
> > > > 29/11/2021 09:52, Rakesh Kudurumalla:
> > > > > From: Thomas Monjalon 
> > > > > > 22/11/2021 08:59, Rakesh Kudurumalla:
> > > > > > > From: Thomas Monjalon 
> > > > > > > > 20/07/2021 18:50, Rakesh Kudurumalla:
> > > > > > > > > Current pmd_perf_autotest() in continuous mode tries to
> > > > > > > > > enqueue MAX_TRAFFIC_BURST completely before starting the
> > test.
> > > > > > > > > Some drivers cannot accept complete MAX_TRAFFIC_BURST even
> > > > > > > > > though
> > > > > > rx+tx
> > > > > > > > > desc
> > > > > > > > count
> > > > > > > > > can fit it.
> > > > > > > >
> > > > > > > > Which driver is failing to do so?
> > > > > > > > Why it cannot enqueue 32 packets?
> > > > > > >
> > > > > > > Octeontx2 driver is failing to enqueue because hardware
> > > > > > > buffers are full
> > > > > > before test.
> > > >
> > > > Aren't you stopping the support of octeontx2?
> > > > Why do you care now?
> > > >  Yes, we are not supporting octeontx2, but this issue is also observed in
> > > > the cnxk driver; the current patch fixes the same.
> > > > > >
> > > > > > Why hardware buffers are full?
> > > > > Hardware buffers are full because the number of descriptors
> > > > > in continuous mode is less than MAX_TRAFFIC_BURST, so if enqueue
> > > > > fails, there is no way hardware can drop the packets.
> > > > > The pmd_perf_autotest application evaluates performance after enqueueing
> > packets initially.
> > > > > >
> > > > > > > pmd_perf_autotest() in continuous mode tries to enqueue
> > > > > > > MAX_TRAFFIC_BURST (2048) before starting the test.
> > > > > > >
> > > > > > > > > This patch changes behaviour to stop enqueuing after few
> > retries.
> > > > > > > >
> > > > > > > > If there is a real limitation, there will be issues in more
> > > > > > > > places than this test program.
> > > > > > > > I feel it should be addressed either in the driver or at ethdev 
> > > > > > > > level.
> > > > > > > >
> > > > > > > > [...]
> > > > > > > > > @@ -480,10 +483,19 @@ main_loop(__rte_unused void *args)
> > > > > > > > >   nb_tx = RTE_MIN(MAX_PKT_BURST, num);
> > > > > > > > >   nb_tx = rte_eth_tx_burst(portid, 0,
> > > > > > > > >   &tx_burst[idx],
> > > > nb_tx);
> > > > > > > > > + if (nb_tx == 0)
> > > > > > > > > + retry_cnt++;
> > > > > > > > >   num -= nb_tx;
> > > > > > > > >   idx += nb_tx;
> > > > > > > > > + if (retry_cnt == MAX_RETRY_COUNT) {
> > > > > > > > > + retry_cnt = 0;
> > > > > > > > > + break;
> > > > > > > > > + }
> > > >
> > > >
> 
>