RE: [PATCH] net/iavf:fix slow memory allocation

2022-11-18 Thread Jiale, SongX
> -Original Message-
> From: Kaisen You 
> Sent: Thursday, November 17, 2022 2:57 PM
> To: dev@dpdk.org
> Cc: sta...@dpdk.org; Yang, Qiming ; Zhou, YidingX
> ; You, KaisenX ; Wu,
> Jingjing ; Xing, Beilei ; Zhang,
> Qi Z 
> Subject: [PATCH] net/iavf:fix slow memory allocation
> 
> In some cases, DPDK does not allocate hugepage heap memory to some
> sockets due to user-supplied core parameters (e.g. -l 40-79, so socket 0
> has no memory).
> When the interrupt thread runs on a core belonging to such a socket,
> each allocation/release executes a whole set of heap allocation/release
> operations, resulting in poor performance.
> Instead, we call malloc() to get memory from the system's heap space to
> fix this problem.
> 
> Fixes: cb5c1b91f76f ("net/iavf: add thread for event callbacks")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Kaisen You 
> ---
Tested-by: Song Jiale 


[Bug 1131] [22.11-rc3] vmdq && kni meson build error with gcc12.2.1+debug on Fedora37

2022-11-18 Thread bugzilla
https://bugs.dpdk.org/show_bug.cgi?id=1131

Thomas Monjalon (tho...@monjalon.net) changed:

   What|Removed |Added

 Status|UNCONFIRMED |RESOLVED
 CC||tho...@monjalon.net
 Resolution|--- |INVALID

--- Comment #1 from Thomas Monjalon (tho...@monjalon.net) ---
Looks like the error is:
/usr/bin/ld: final link failed: No space left on device

-- 
You are receiving this mail because:
You are the assignee for the bug.

[PATCH 00/11] Fixes for clang 15

2022-11-18 Thread David Marchand
Fedora 37 has just been released with clang 15.
The latter seems more picky wrt unused variables.

Fixes have been tested in GHA with a simple patch I used in my own repo:
https://github.com/david-marchand/dpdk/commit/82cd57ae5490
https://github.com/david-marchand/dpdk/actions/runs/3495454457

-- 
David Marchand

David Marchand (11):
  service: fix build with clang 15
  vhost: fix build with clang 15
  bus/dpaa: fix build with clang 15
  net/atlantic: fix build with clang 15
  net/dpaa2: fix build with clang 15
  net/ice: fix build with clang 15
  app/testpmd: fix build with clang 15
  app/testpmd: fix build with clang 15 in flow code
  test/efd: fix build with clang 15
  test/member: fix build with clang 15
  test/event: fix build with clang 15

 app/test-pmd/config.c   | 14 --
 app/test-pmd/noisy_vnf.c|  2 +-
 app/test/test_efd_perf.c|  1 -
 app/test/test_event_timer_adapter.c |  2 --
 app/test/test_member.c  |  1 -
 app/test/test_member_perf.c |  1 -
 drivers/bus/dpaa/base/qbman/bman.h  |  4 +---
 drivers/net/atlantic/atl_rxtx.c |  5 ++---
 drivers/net/dpaa2/dpaa2_rxtx.c  |  4 +---
 drivers/net/ice/ice_ddp_package.c   |  3 ---
 lib/eal/common/rte_service.c|  2 --
 lib/vhost/virtio_net.c  |  2 --
 12 files changed, 5 insertions(+), 36 deletions(-)

-- 
2.38.1



[PATCH 02/11] vhost: fix build with clang 15

2022-11-18 Thread David Marchand
This variable is not used.

Fixes: abeb86525577 ("vhost: remove copy threshold for async path")
Cc: sta...@dpdk.org

Signed-off-by: David Marchand 
---
 lib/vhost/virtio_net.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 4358899718..9abf752f30 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -1877,7 +1877,6 @@ virtio_dev_rx_async_submit_packed(struct virtio_net *dev, 
struct vhost_virtqueue
struct rte_mbuf **pkts, uint32_t count, int16_t dma_id, uint16_t 
vchan_id)
 {
uint32_t pkt_idx = 0;
-   uint32_t remained = count;
uint16_t n_xfer;
uint16_t num_buffers;
uint16_t num_descs;
@@ -1903,7 +1902,6 @@ virtio_dev_rx_async_submit_packed(struct virtio_net *dev, 
struct vhost_virtqueue
pkts_info[slot_idx].mbuf = pkts[pkt_idx];
 
pkt_idx++;
-   remained--;
vq_inc_last_avail_packed(vq, num_descs);
} while (pkt_idx < count);
 
-- 
2.38.1



[PATCH 01/11] service: fix build with clang 15

2022-11-18 Thread David Marchand
This variable is not used.

Bugzilla ID: 1130
Fixes: 21698354c832 ("service: introduce service cores concept")
Cc: sta...@dpdk.org

Signed-off-by: David Marchand 
---
 lib/eal/common/rte_service.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/lib/eal/common/rte_service.c b/lib/eal/common/rte_service.c
index bcc2e19077..42ca1d001d 100644
--- a/lib/eal/common/rte_service.c
+++ b/lib/eal/common/rte_service.c
@@ -107,14 +107,12 @@ rte_service_init(void)
}
 
int i;
-   int count = 0;
struct rte_config *cfg = rte_eal_get_configuration();
for (i = 0; i < RTE_MAX_LCORE; i++) {
if (lcore_config[i].core_role == ROLE_SERVICE) {
if ((unsigned int)i == cfg->main_lcore)
continue;
rte_service_lcore_add(i);
-   count++;
}
}
 
-- 
2.38.1



[PATCH 03/11] bus/dpaa: fix build with clang 15

2022-11-18 Thread David Marchand
This variable is not used.

Fixes: f38f61e982f8 ("bus/dpaa: add BMAN hardware interfaces")
Cc: sta...@dpdk.org

Signed-off-by: David Marchand 
---
 drivers/bus/dpaa/base/qbman/bman.h | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/bus/dpaa/base/qbman/bman.h 
b/drivers/bus/dpaa/base/qbman/bman.h
index 21a6bee778..b2aa93e046 100644
--- a/drivers/bus/dpaa/base/qbman/bman.h
+++ b/drivers/bus/dpaa/base/qbman/bman.h
@@ -519,7 +519,6 @@ static inline int bm_shutdown_pool(struct bm_portal *p, u32 
bpid)
struct bm_mc_command *bm_cmd;
struct bm_mc_result *bm_res;
 
-   int aq_count = 0;
bool stop = false;
 
while (!stop) {
@@ -532,8 +531,7 @@ static inline int bm_shutdown_pool(struct bm_portal *p, u32 
bpid)
if (!(bm_res->verb & BM_MCR_VERB_ACQUIRE_BUFCOUNT)) {
/* Pool is empty */
stop = true;
-   } else
-   ++aq_count;
+   }
};
return 0;
 }
-- 
2.38.1



[PATCH 04/11] net/atlantic: fix build with clang 15

2022-11-18 Thread David Marchand
This variable is not used.

Fixes: 2b1472d7150c ("net/atlantic: implement Tx path")
Cc: sta...@dpdk.org

Signed-off-by: David Marchand 
---
 drivers/net/atlantic/atl_rxtx.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c
index aeb79bf5a2..cb6f8141a8 100644
--- a/drivers/net/atlantic/atl_rxtx.c
+++ b/drivers/net/atlantic/atl_rxtx.c
@@ -1127,10 +1127,9 @@ atl_xmit_cleanup(struct atl_tx_queue *txq)
if (txq != NULL) {
sw_ring = txq->sw_ring;
int head = txq->tx_head;
-   int cnt;
-   int i;
+   int cnt = head;
 
-   for (i = 0, cnt = head; ; i++) {
+   while (true) {
txd = &txq->hw_ring[cnt];
 
if (txd->dd)
-- 
2.38.1



[PATCH 05/11] net/dpaa2: fix build with clang 15

2022-11-18 Thread David Marchand
This variable is not used.

Fixes: 4690a6114ff6 ("net/dpaa2: enable error queues optionally")
Cc: sta...@dpdk.org

Signed-off-by: David Marchand 
---
 drivers/net/dpaa2/dpaa2_rxtx.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 5b02260e71..f60e78e1fd 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -620,7 +620,7 @@ dump_err_pkts(struct dpaa2_queue *dpaa2_q)
/* Function receive frames for a given device and VQ */
struct qbman_result *dq_storage;
uint32_t fqid = dpaa2_q->fqid;
-   int ret, num_rx = 0, num_pulled;
+   int ret, num_rx = 0;
uint8_t pending, status;
struct qbman_swp *swp;
const struct qbman_fd *fd;
@@ -660,7 +660,6 @@ dump_err_pkts(struct dpaa2_queue *dpaa2_q)
while (!qbman_check_command_complete(dq_storage))
;
 
-   num_pulled = 0;
pending = 1;
do {
/* Loop until the dq_storage is updated with
@@ -695,7 +694,6 @@ dump_err_pkts(struct dpaa2_queue *dpaa2_q)
 
dq_storage++;
num_rx++;
-   num_pulled++;
} while (pending);
 
dpaa2_q->err_pkts += num_rx;
-- 
2.38.1



[PATCH 06/11] net/ice: fix build with clang 15

2022-11-18 Thread David Marchand
This variable is not used.

Fixes: 0d8d7bd720ba ("net/ice: support DDP dump switch rule binary")
Cc: sta...@dpdk.org

Signed-off-by: David Marchand 
---
 drivers/net/ice/ice_ddp_package.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/net/ice/ice_ddp_package.c 
b/drivers/net/ice/ice_ddp_package.c
index a27a4a2da2..0aa19eb282 100644
--- a/drivers/net/ice/ice_ddp_package.c
+++ b/drivers/net/ice/ice_ddp_package.c
@@ -439,7 +439,6 @@ ice_dump_switch(struct rte_eth_dev *dev, uint8_t **buff2, 
uint32_t *size)
int i = 0;
uint16_t tbl_id = 0;
uint32_t tbl_idx = 0;
-   int tbl_cnt = 0;
uint8_t *buffer = *buff2;
 
hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -481,10 +480,8 @@ ice_dump_switch(struct rte_eth_dev *dev, uint8_t **buff2, 
uint32_t *size)
 
free(buff);
 
-   tbl_cnt++;
if (tbl_idx == 0x) {
tbl_idx = 0;
-   tbl_cnt = 0;
memset(buffer, '\n', sizeof(char));
buffer++;
offset = 0;
-- 
2.38.1



[PATCH 07/11] app/testpmd: fix build with clang 15

2022-11-18 Thread David Marchand
This variable is used to create some artificial load.

Fixes: 3c156061b938 ("app/testpmd: add noisy neighbour forwarding mode")
Cc: sta...@dpdk.org

Signed-off-by: David Marchand 
---
 app/test-pmd/noisy_vnf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/app/test-pmd/noisy_vnf.c b/app/test-pmd/noisy_vnf.c
index b17a1febd3..c65ec6f06a 100644
--- a/app/test-pmd/noisy_vnf.c
+++ b/app/test-pmd/noisy_vnf.c
@@ -57,8 +57,8 @@ do_write(char *vnf_mem)
 static inline void
 do_read(char *vnf_mem)
 {
+   uint64_t r __rte_unused;
uint64_t i = rte_rand();
-   uint64_t r;
 
r = vnf_mem[i % ((noisy_lkup_mem_sz * 1024 * 1024) /
RTE_CACHE_LINE_SIZE)];
-- 
2.38.1



[PATCH 08/11] app/testpmd: fix build with clang 15 in flow code

2022-11-18 Thread David Marchand
This variable is not used and has been copy/pasted in a lot of other
code.

Fixes: 938a184a1870 ("app/testpmd: implement basic support for flow API")
Fixes: 55509e3a49fb ("app/testpmd: support shared flow action")
Fixes: 04cc665fab38 ("app/testpmd: add flow template management")
Fixes: c4b38873346b ("app/testpmd: add flow table management")
Fixes: ecdc927b99f2 ("app/testpmd: add async flow create/destroy operations")
Fixes: d906fff51878 ("app/testpmd: add async indirect actions operations")
Cc: sta...@dpdk.org

Signed-off-by: David Marchand 
---
 app/test-pmd/config.c | 14 --
 1 file changed, 14 deletions(-)

diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 982549ffed..9103ba6c77 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1787,7 +1787,6 @@ port_action_handle_destroy(portid_t port_id,
 {
struct rte_port *port;
struct port_indirect_action **tmp;
-   uint32_t c = 0;
int ret = 0;
 
if (port_id_is_invalid(port_id, ENABLED_WARN) ||
@@ -1822,7 +1821,6 @@ port_action_handle_destroy(portid_t port_id,
}
if (i == n)
tmp = &(*tmp)->next;
-   ++c;
}
return ret;
 }
@@ -2251,7 +2249,6 @@ port_flow_pattern_template_destroy(portid_t port_id, 
uint32_t n,
 {
struct rte_port *port;
struct port_template **tmp;
-   uint32_t c = 0;
int ret = 0;
 
if (port_id_is_invalid(port_id, ENABLED_WARN) ||
@@ -2288,7 +2285,6 @@ port_flow_pattern_template_destroy(portid_t port_id, 
uint32_t n,
}
if (i == n)
tmp = &(*tmp)->next;
-   ++c;
}
return ret;
 }
@@ -2368,7 +2364,6 @@ port_flow_actions_template_destroy(portid_t port_id, 
uint32_t n,
 {
struct rte_port *port;
struct port_template **tmp;
-   uint32_t c = 0;
int ret = 0;
 
if (port_id_is_invalid(port_id, ENABLED_WARN) ||
@@ -2404,7 +2399,6 @@ port_flow_actions_template_destroy(portid_t port_id, 
uint32_t n,
}
if (i == n)
tmp = &(*tmp)->next;
-   ++c;
}
return ret;
 }
@@ -2534,7 +2528,6 @@ port_flow_template_table_destroy(portid_t port_id,
 {
struct rte_port *port;
struct port_table **tmp;
-   uint32_t c = 0;
int ret = 0;
 
if (port_id_is_invalid(port_id, ENABLED_WARN) ||
@@ -2571,7 +2564,6 @@ port_flow_template_table_destroy(portid_t port_id,
}
if (i == n)
tmp = &(*tmp)->next;
-   ++c;
}
return ret;
 }
@@ -2719,7 +2711,6 @@ port_queue_flow_destroy(portid_t port_id, queueid_t 
queue_id,
struct rte_flow_op_attr op_attr = { .postpone = postpone };
struct rte_port *port;
struct port_flow **tmp;
-   uint32_t c = 0;
int ret = 0;
struct queue_job *job;
 
@@ -2768,7 +2759,6 @@ port_queue_flow_destroy(portid_t port_id, queueid_t 
queue_id,
}
if (i == n)
tmp = &(*tmp)->next;
-   ++c;
}
return ret;
 }
@@ -2836,7 +2826,6 @@ port_queue_action_handle_destroy(portid_t port_id,
const struct rte_flow_op_attr attr = { .postpone = postpone};
struct rte_port *port;
struct port_indirect_action **tmp;
-   uint32_t c = 0;
int ret = 0;
struct queue_job *job;
 
@@ -2886,7 +2875,6 @@ port_queue_action_handle_destroy(portid_t port_id,
}
if (i == n)
tmp = &(*tmp)->next;
-   ++c;
}
return ret;
 }
@@ -3304,7 +3292,6 @@ port_flow_destroy(portid_t port_id, uint32_t n, const 
uint32_t *rule)
 {
struct rte_port *port;
struct port_flow **tmp;
-   uint32_t c = 0;
int ret = 0;
 
if (port_id_is_invalid(port_id, ENABLED_WARN) ||
@@ -3337,7 +3324,6 @@ port_flow_destroy(portid_t port_id, uint32_t n, const 
uint32_t *rule)
}
if (i == n)
tmp = &(*tmp)->next;
-   ++c;
}
return ret;
 }
-- 
2.38.1



[PATCH 09/11] test/efd: fix build with clang 15

2022-11-18 Thread David Marchand
This local variable hides the more global one.
The original intent was probably to use the global one.

Fixes: 0e925aef2779 ("app/test: add EFD functional and perf tests")
Cc: sta...@dpdk.org

Signed-off-by: David Marchand 
---
 app/test/test_efd_perf.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/app/test/test_efd_perf.c b/app/test/test_efd_perf.c
index d7f4d83549..4d04ed93e3 100644
--- a/app/test/test_efd_perf.c
+++ b/app/test/test_efd_perf.c
@@ -153,7 +153,6 @@ setup_keys_and_data(struct efd_perf_params *params, 
unsigned int cycle)
qsort(keys, KEYS_TO_ADD, MAX_KEYSIZE, key_compare);
 
/* Sift through the list of keys and look for duplicates */
-   int num_duplicates = 0;
for (i = 0; i < KEYS_TO_ADD - 1; i++) {
if (memcmp(keys[i], keys[i + 1], params->key_size) == 
0) {
/* This key already exists, try again */
-- 
2.38.1



[PATCH 10/11] test/member: fix build with clang 15

2022-11-18 Thread David Marchand
This local variable hides the more global one.
The original intent was probably to use the global one.

Fixes: 0cc67a96e486 ("test/member: add functional and perf tests")
Cc: sta...@dpdk.org

Signed-off-by: David Marchand 
---
 app/test/test_member.c  | 1 -
 app/test/test_member_perf.c | 1 -
 2 files changed, 2 deletions(-)

diff --git a/app/test/test_member.c b/app/test/test_member.c
index c1b6a7d8b9..4a93f8bff4 100644
--- a/app/test/test_member.c
+++ b/app/test/test_member.c
@@ -573,7 +573,6 @@ setup_keys_and_data(void)
qsort(generated_keys, MAX_ENTRIES, KEY_SIZE, key_compare);
 
/* Sift through the list of keys and look for duplicates */
-   int num_duplicates = 0;
for (i = 0; i < MAX_ENTRIES - 1; i++) {
if (memcmp(generated_keys[i], generated_keys[i + 1],
KEY_SIZE) == 0) {
diff --git a/app/test/test_member_perf.c b/app/test/test_member_perf.c
index 7b6adf913e..2f79888fbd 100644
--- a/app/test/test_member_perf.c
+++ b/app/test/test_member_perf.c
@@ -178,7 +178,6 @@ setup_keys_and_data(struct member_perf_params *params, 
unsigned int cycle,
qsort(keys, KEYS_TO_ADD, MAX_KEYSIZE, key_compare);
 
/* Sift through the list of keys and look for duplicates */
-   int num_duplicates = 0;
for (i = 0; i < KEYS_TO_ADD - 1; i++) {
if (memcmp(keys[i], keys[i + 1],
params->key_size) == 0) {
-- 
2.38.1



[PATCH 11/11] test/event: fix build with clang 15

2022-11-18 Thread David Marchand
This variable is not used.

Fixes: d1f3385d0076 ("test: add event timer adapter auto-test")
Cc: sta...@dpdk.org

Signed-off-by: David Marchand 
---
 app/test/test_event_timer_adapter.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/app/test/test_event_timer_adapter.c 
b/app/test/test_event_timer_adapter.c
index 654c412836..1a440dfd10 100644
--- a/app/test/test_event_timer_adapter.c
+++ b/app/test/test_event_timer_adapter.c
@@ -911,7 +911,6 @@ _cancel_thread(void *args)
 {
RTE_SET_USED(args);
struct rte_event_timer *ev_tim = NULL;
-   uint64_t cancel_count = 0;
uint16_t ret;
 
while (!arm_done || rte_ring_count(timer_producer_ring) > 0) {
@@ -921,7 +920,6 @@ _cancel_thread(void *args)
ret = rte_event_timer_cancel_burst(timdev, &ev_tim, 1);
TEST_ASSERT_EQUAL(ret, 1, "Failed to cancel timer");
rte_mempool_put(eventdev_test_mempool, (void *)ev_tim);
-   cancel_count++;
}
 
return TEST_SUCCESS;
-- 
2.38.1



RE: [PATCH] net/idpf: add supported ptypes get

2022-11-18 Thread Zhang, Qi Z



> -Original Message-
> From: Wu, Jingjing 
> Sent: Friday, November 18, 2022 2:43 PM
> To: Xing, Beilei 
> Cc: dev@dpdk.org; Peng, Yuan 
> Subject: RE: [PATCH] net/idpf: add supported ptypes get
> 
> 
> 
> > -Original Message-
> > From: Xing, Beilei 
> > Sent: Friday, November 18, 2022 11:51 AM
> > To: Wu, Jingjing 
> > Cc: dev@dpdk.org; Peng, Yuan ; Xing, Beilei
> > 
> > Subject: [PATCH] net/idpf: add supported ptypes get
> >
> > From: Beilei Xing 
> >
> > Failed to launch l3fwd, the log shows:
> > port 0 cannot parse packet type, please add --parse-ptype
> > This patch adds dev_supported_ptypes_get ops.
> >
> > Fixes: 549343c25db8 ("net/idpf: support device initialization")
> >
> > Signed-off-by: Beilei Xing 
> Reviewed-by: Jingjing Wu 

Applied to dpdk-next-net-intel.

Thanks
Qi



RE: [PATCH V1] doc: add tested Intel platforms with Intel NICs

2022-11-18 Thread Zhang, Qi Z



> -Original Message-
> From: Peng, Yuan 
> Sent: Thursday, November 17, 2022 1:38 PM
> To: Chen, LingliX ; Zhang, Qi Z
> ; dev@dpdk.org
> Cc: Peng, Yuan 
> Subject: RE: [PATCH V1] doc: add tested Intel platforms with Intel NICs
> 
> Acked-by: Peng, Yuan 
> 
> > -Original Message-
> > From: Chen, LingliX 
> > Sent: Thursday, November 17, 2022 12:36 PM
> > To: Zhang, Qi Z ; dev@dpdk.org
> > Cc: Peng, Yuan ; Chen, LingliX
> > 
> > Subject: [PATCH V1] doc: add tested Intel platforms with Intel NICs
> >
> > Add tested Intel platforms with Intel NICs to v22.11 release note.
> >
> > Signed-off-by: Lingli Chen 
> > ---

Applied to dpdk-next-net-intel.

Thanks
Qi


RE: [PATCH v3] net/idpf: fix crash when launching l3fwd

2022-11-18 Thread Zhang, Qi Z



> -Original Message-
> From: beilei.x...@intel.com 
> Sent: Friday, November 18, 2022 3:03 PM
> To: Wu, Jingjing 
> Cc: dev@dpdk.org; Peng, Yuan ; Xing, Beilei
> 
> Subject: [PATCH v3] net/idpf: fix crash when launching l3fwd
> 
> From: Beilei Xing 
> 
> There's a core dump when launching l3fwd with 1 queue and 1 core, because
> a NULL pointer is used if device configuration fails.
> This patch removes an incorrect check during device configuration, and
> checks for a NULL pointer when executing VIRTCHNL2_OP_DEALLOC_VECTORS.
> 
> Fixes: 549343c25db8 ("net/idpf: support device initialization")
> Fixes: 70675bcc3a57 ("net/idpf: support RSS")
> Fixes: 37291a68fd78 ("net/idpf: support write back based on ITR expire")
> 
> Signed-off-by: Beilei Xing 

Acked-by: Qi Zhang 

Applied to dpdk-next-net-intel.

Thanks
Qi



RE: [PATCH v3] net/idpf: fix crash when launching l3fwd

2022-11-18 Thread Peng, Yuan


> -Original Message-
> From: Xing, Beilei 
> Sent: Friday, November 18, 2022 3:03 PM
> To: Wu, Jingjing 
> Cc: dev@dpdk.org; Peng, Yuan ; Xing, Beilei
> 
> Subject: [PATCH v3] net/idpf: fix crash when launching l3fwd
> 
> From: Beilei Xing 
> 
> There's a core dump when launching l3fwd with 1 queue and 1 core, because
> a NULL pointer is used if device configuration fails.
> This patch removes an incorrect check during device configuration, and
> checks for a NULL pointer when executing VIRTCHNL2_OP_DEALLOC_VECTORS.
> 
> Fixes: 549343c25db8 ("net/idpf: support device initialization")
> Fixes: 70675bcc3a57 ("net/idpf: support RSS")
> Fixes: 37291a68fd78 ("net/idpf: support write back based on ITR expire")
> 
> Signed-off-by: Beilei Xing 

Tested-by: Peng, Yuan 


RE: [PATCH] net/idpf: add supported ptypes get

2022-11-18 Thread Peng, Yuan



> -Original Message-
> From: Xing, Beilei 
> Sent: Friday, November 18, 2022 11:51 AM
> To: Wu, Jingjing 
> Cc: dev@dpdk.org; Peng, Yuan ; Xing, Beilei
> 
> Subject: [PATCH] net/idpf: add supported ptypes get
> 
> From: Beilei Xing 
> 
> Failed to launch l3fwd, the log shows:
> port 0 cannot parse packet type, please add --parse-ptype
> This patch adds dev_supported_ptypes_get ops.
> 
> Fixes: 549343c25db8 ("net/idpf: support device initialization")
> 
> Signed-off-by: Beilei Xing 

Tested-by: Peng, Yuan 


Re: [PATCH] doc: fix max supported packet len for virtio driver

2022-11-18 Thread Zhang, Fan

Hi Yi,

Please add "Fixes: x" description to the commit message.

You may find more information in https://core.dpdk.org/contribute/.

Regards,

Fan

On 11/18/2022 1:26 AM, li...@chinatelecom.cn wrote:

From: Yi Li 

According to the VIRTIO_MAX_RX_PKTLEN macro definition, the packet size
currently supported by the virtio driver is 9728.

Signed-off-by: Yi Li 
---
  doc/guides/nics/virtio.rst | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/guides/nics/virtio.rst b/doc/guides/nics/virtio.rst
index aace780249..c422e7347a 100644
--- a/doc/guides/nics/virtio.rst
+++ b/doc/guides/nics/virtio.rst
@@ -43,7 +43,7 @@ Features and Limitations of virtio PMD
  In this release, the virtio PMD provides the basic functionality of packet 
reception and transmission.
  
  *   It supports merge-able buffers per packet when receiving packets and scattered buffer per packet

-when transmitting packets. The packet size supported is from 64 to 1518.
+when transmitting packets. The packet size supported is from 64 to 9728.
  
  *   It supports multicast packets and promiscuous mode.
  


Re: [PATCH 02/11] vhost: fix build with clang 15

2022-11-18 Thread Maxime Coquelin




On 11/18/22 09:53, David Marchand wrote:

This variable is not used.

Fixes: abeb86525577 ("vhost: remove copy threshold for async path")
Cc: sta...@dpdk.org

Signed-off-by: David Marchand 
---
  lib/vhost/virtio_net.c | 2 --
  1 file changed, 2 deletions(-)

diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 4358899718..9abf752f30 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -1877,7 +1877,6 @@ virtio_dev_rx_async_submit_packed(struct virtio_net *dev, 
struct vhost_virtqueue
struct rte_mbuf **pkts, uint32_t count, int16_t dma_id, uint16_t 
vchan_id)
  {
uint32_t pkt_idx = 0;
-   uint32_t remained = count;
uint16_t n_xfer;
uint16_t num_buffers;
uint16_t num_descs;
@@ -1903,7 +1902,6 @@ virtio_dev_rx_async_submit_packed(struct virtio_net *dev, 
struct vhost_virtqueue
pkts_info[slot_idx].mbuf = pkts[pkt_idx];
  
  		pkt_idx++;

-   remained--;
vq_inc_last_avail_packed(vq, num_descs);
} while (pkt_idx < count);
  


Reviewed-by: Maxime Coquelin 

Thanks,
Maxime



RE: Regarding User Data in DPDK ACL Library.

2022-11-18 Thread Konstantin Ananyev


> On Thu, 17 Nov 2022 19:28:12 +0530
> venkatesh bs  wrote:
> 
> > Hi DPDK Team,
> >
> > After an ACL match, the DPDK classification API returns the user data
> > of the highest-priority matched rule, as mentioned in the documentation:
> >
> > 53. Packet Classification and Access Control — Data Plane Development Kit
> > 22.11.0-rc2 documentation (dpdk.org)
> >
> >
> >- *userdata*: A user-defined value. For each category, a successful
> >match returns the userdata field of the highest priority matched
> >rule. When no rules match, the returned value is zero.
> >
> > I wonder why the user data support does not return 64-bit values.
 
As I remember, in the first version of the ACL code it was something about
space savings to improve performance...
Now I think it is more just a historical reason.
It would be good to change userdata to 64 bits, but I presume that would be
an ABI breakage.

> > It's always possible that the user data in an application is 64 bits
> > long, but since 64-bit user data can't be returned by the DPDK ACL
> > library, the application needs a conversion from 64 to 32 bits during
> > rule add, and the reverse after classification.
> >
> > I wonder if anyone has faced this issue. Please point out if my
> > understanding is wrong somewhere, or share a possible solution if
> > someone has already gone through this issue.
> >
> > Thanks In Advance.
> > Regards,
> > Venkatesh B Siddappa.
> 
> It looks like all users of this API use the userdata as an index
> into a table of application-specific rules.

Yes, that's the most common way.
Another one would be to always build/search ACL rules with two categories:
the rule for both categories would be identical, while the userdata differs
(low/high 32 bits), but that's a bit too awkward from my perspective.
 





Re: [PATCH] app/testpmd: fix action destruction memory leak

2022-11-18 Thread Ferruh Yigit
On 11/17/2022 8:55 AM, Suanming Mou wrote:
> In case action handle destroy fails, the job memory was not freed
> properly. This commit fixes the possible memory leak in the action
> handle destruction failed case.
> 
> Fixes: c9dc03840873 ("ethdev: add indirect action async query")
> 
> Signed-off-by: Suanming Mou 
> ---
>  app/test-pmd/config.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> index 982549ffed..719bdd4261 100644
> --- a/app/test-pmd/config.c
> +++ b/app/test-pmd/config.c
> @@ -2873,9 +2873,9 @@ port_queue_action_handle_destroy(portid_t port_id,
>   job->type = QUEUE_JOB_TYPE_ACTION_DESTROY;
>   job->pia = pia;
>  
> - if (pia->handle &&
> - rte_flow_async_action_handle_destroy(port_id,
> + if (rte_flow_async_action_handle_destroy(port_id,

Why was the 'pia->handle' check removed? Was it unnecessary to check it in
the first place?

>   queue_id, &attr, pia->handle, job, &error)) {
> + free(job);
>   ret = port_flow_complain(&error);
>   continue;
>   }

Just to double check: when this if branch is not taken, i.e. the
'rte_flow_async_action_handle_destroy()' call did not fail, testpmd's
'port_queue_flow_pull()' function frees the 'job', right?



[PATCH v3 1/1] app/testpmd: add valid check to verify multi mempool feature

2022-11-18 Thread Hanumanth Pothula
Validate the ethdev parameter 'max_rx_mempools' to know whether the
device supports the multi-mempool feature or not.

Bugzilla ID: 1128

Signed-off-by: Hanumanth Pothula 
v4:
 - updated if condition.
v3:
 - Simplified conditional check.
 - Corrected spell, whether.
v2:
 - Rebased on tip of next-net/main.
---
 app/test-pmd/testpmd.c | 10 --
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 4e25f77c6a..9fc14e6d6b 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2655,17 +2655,23 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
struct rte_mempool *mpx;
+   struct rte_eth_dev_info dev_info;
unsigned int i, mp_n;
uint32_t prev_hdrs = 0;
int ret;
 
+   ret = rte_eth_dev_info_get(port_id, &dev_info);
+   if (ret != 0)
+   return ret;
+
/* Verify Rx queue configuration is single pool and segment or
 * multiple pool/segment.
+* @see rte_eth_dev_info::max_rx_mempools
 * @see rte_eth_rxconf::rx_mempools
 * @see rte_eth_rxconf::rx_seg
 */
-   if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
-   ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
+   if ((dev_info.max_rx_mempools == 0) && (rx_pkt_nb_segs < 1 ||
+   ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0))) {
/* Single pool/segment configuration */
rx_conf->rx_seg = NULL;
rx_conf->rx_nseg = 0;
-- 
2.25.1



Re: [PATCH] net/nfp: fix issue of data len exceeds descriptor limitation

2022-11-18 Thread Ferruh Yigit
On 11/17/2022 8:33 AM, Chaoyong He wrote:
> From: Long Wu 
> 
> If dma_len is larger than NFDK_DESC_TX_DMA_LEN_HEAD, the value of
> dma_len bitwise-ANDed with NFDK_DESC_TX_DMA_LEN_HEAD may be less than
> the packet head length. Fill the maximum dma_len into the first Tx
> descriptor to make sure the whole head is included in the first descriptor.

I understand the problem, but the impact is not clear. I assume this causes
a Tx failure or the transmission of a corrupted packet, but can you please
explain in the commit log what observed problem is fixed?

> In addition,
> add some explanation to make the NFDK code more readable.
> 
> Fixes: c73dced48c8c ("net/nfp: add NFDk Tx")
> Cc: jin@corigine.com
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Long Wu 
> Reviewed-by: Niklas Söderlund 
> Reviewed-by: Chaoyong He 
> ---
>  drivers/net/nfp/nfp_rxtx.c | 27 ++-
>  1 file changed, 26 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
> index b8c874d315..ed88d740fa 100644
> --- a/drivers/net/nfp/nfp_rxtx.c
> +++ b/drivers/net/nfp/nfp_rxtx.c
> @@ -1064,6 +1064,7 @@ nfp_net_nfdk_tx_maybe_close_block(struct nfp_net_txq 
> *txq, struct rte_mbuf *pkt)
>   if (unlikely(n_descs > NFDK_TX_DESC_GATHER_MAX))
>   return -EINVAL;
>  
> + /* Under count by 1 (don't count meta) for the round down to work out */
>   n_descs += !!(pkt->ol_flags & RTE_MBUF_F_TX_TCP_SEG);
>  
>   if (round_down(txq->wr_p, NFDK_TX_DESC_BLOCK_CNT) !=
> @@ -1180,6 +1181,7 @@ nfp_net_nfdk_xmit_pkts(void *tx_queue, struct rte_mbuf 
> **tx_pkts, uint16_t nb_pk
>   /* Sending packets */
>   while ((npkts < nb_pkts) && free_descs) {
>   uint32_t type, dma_len, dlen_type, tmp_dlen;
> + uint32_t tmp_hlen;
>   int nop_descs, used_descs;
>  
>   pkt = *(tx_pkts + npkts);
> @@ -1218,8 +1220,23 @@ nfp_net_nfdk_xmit_pkts(void *tx_queue, struct rte_mbuf 
> **tx_pkts, uint16_t nb_pk
>   } else {
>   type = NFDK_DESC_TX_TYPE_GATHER;
>   }
> +
> + /* Implicitly truncates to chunk in below logic */
>   dma_len -= 1;
> - dlen_type = (NFDK_DESC_TX_DMA_LEN_HEAD & dma_len) |
> +
> + /*
> +  * We will do our best to pass as much data as we can in 
> descriptor
> +  * and we need to make sure the first descriptor includes whole
> +  * head since there is limitation in firmware side. Sometimes 
> the
> +  * value of dma_len bitwise and NFDK_DESC_TX_DMA_LEN_HEAD will 
> less

"bitwise and" is confusing while reading, because of 'and', can you
please use '&' instead, I think it is easier to understand that way.

> +  * than packet head len.
> +  */
> + if (dma_len > NFDK_DESC_TX_DMA_LEN_HEAD)
> + tmp_hlen = NFDK_DESC_TX_DMA_LEN_HEAD;
> + else
> + tmp_hlen = dma_len;
> +

What is the point of masking if you already have above check?
Why don't you use 'tmp_hlen' directly, instead of
"(NFDK_DESC_TX_DMA_LEN_HEAD & tmp_hlen)" after above check?

> + dlen_type = (NFDK_DESC_TX_DMA_LEN_HEAD & tmp_hlen) |

Since 'tmp_hlen' is only used one, you may prefer ternary operator to
get rid of temporary variable, but it is up to you based on readability:

dlen_type = (dma_len > NFDK_DESC_TX_DMA_LEN_HEAD ?
NFDK_DESC_TX_DMA_LEN_HEAD : dma_len) |

>   (NFDK_DESC_TX_TYPE_HEAD & (type << 12));
>   ktxds->dma_len_type = rte_cpu_to_le_16(dlen_type);
>   dma_addr = rte_mbuf_data_iova(pkt);
> @@ -1229,10 +1246,18 @@ nfp_net_nfdk_xmit_pkts(void *tx_queue, struct 
> rte_mbuf **tx_pkts, uint16_t nb_pk
>   ktxds->dma_addr_lo = rte_cpu_to_le_32(dma_addr & 0x);
>   ktxds++;
>  
> + /*
> +  * Preserve the original dlen_type, this way below the EOP logic
> +  * can use dlen_type.
> +  */
>   tmp_dlen = dlen_type & NFDK_DESC_TX_DMA_LEN_HEAD;
>   dma_len -= tmp_dlen;
>   dma_addr += tmp_dlen + 1;
>  
> + /*
> +  * The rest of the data (if any) will be in larger dma 
> descritors
> +  * and is handled with the dma_len loop.
> +  */
>   while (pkt) {
>   if (*lmbuf)
>   rte_pktmbuf_free_seg(*lmbuf);



RE: [PATCH v3 1/1] app/testpmd: add valid check to verify multi mempool feature

2022-11-18 Thread Hanumanth Reddy Pothula
Hi Yingya,
I have uploaded a new patch set; could you please help verify it?
https://patches.dpdk.org/project/dpdk/patch/20221118111358.3563760-1-hpoth...@marvell.com/

Regards,
Hanumanth


> -Original Message-
> From: Han, YingyaX 
> Sent: Friday, November 18, 2022 12:22 PM
> To: Ferruh Yigit ; Hanumanth Reddy Pothula
> ; Singh, Aman Deep
> ; Zhang, Yuying ;
> Jiang, YuX 
> Cc: dev@dpdk.org; andrew.rybche...@oktetlabs.ru;
> tho...@monjalon.net; Jerin Jacob Kollanukkaran ;
> Nithin Kumar Dabilpuram 
> Subject: [EXT] RE: [PATCH v3 1/1] app/testpmd: add valid check to verify
> multi mempool feature
> 
> External Email
> 
> --
> There is a new issue after applying the patch.
> Failed to configure buffer_split for a single queue, and the port can't come up.
> The test steps and logs are as follows:
> ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-9 -n 4  -a 31:00.0 --
> force-max-simd-bitwidth=64 -- -i --mbuf-size=2048,2048 --txq=4 --rxq=4
> testpmd> port stop all
> testpmd> port 0 rxq 2 rx_offload buffer_split on
> testpmd> show port 0 rx_offload configuration
> Rx Offloading Configuration of port 0 :
>   Port : RSS_HASH
>   Queue[ 0] : RSS_HASH
>   Queue[ 1] : RSS_HASH
>   Queue[ 2] : RSS_HASH BUFFER_SPLIT
>   Queue[ 3] : RSS_HASH
> testpmd> set rxhdrs eth
> testpmd> port start all
> Configuring Port 0 (socket 0)
> No Rx segmentation offload configured
> Fail to configure port 0 rx queues
> 
> BRs,
> Yingya
> 
> -Original Message-
> From: Ferruh Yigit 
> Sent: Friday, November 18, 2022 7:37 AM
> To: Hanumanth Pothula ; Singh, Aman Deep
> ; Zhang, Yuying ;
> Han, YingyaX ; Jiang, YuX 
> Cc: dev@dpdk.org; andrew.rybche...@oktetlabs.ru;
> tho...@monjalon.net; jer...@marvell.com; ndabilpu...@marvell.com
> Subject: Re: [PATCH v3 1/1] app/testpmd: add valid check to verify multi
> mempool feature
> 
> On 11/17/2022 4:03 PM, Hanumanth Pothula wrote:
> > Validate ethdev parameter 'max_rx_mempools' to know whether device
> > supports multi-mempool feature or not.
> >
> > Bugzilla ID: 1128
> >
> > Signed-off-by: Hanumanth Pothula 
> > v3:
> >  - Simplified conditional check.
> >  - Corrected spell, whether.
> > v2:
> >  - Rebased on tip of next-net/main.
> > ---
> >  app/test-pmd/testpmd.c | 8 +++-
> >  1 file changed, 7 insertions(+), 1 deletion(-)
> >
> > diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
> > 4e25f77c6a..6c3d0948ec 100644
> > --- a/app/test-pmd/testpmd.c
> > +++ b/app/test-pmd/testpmd.c
> > @@ -2655,16 +2655,22 @@ rx_queue_setup(uint16_t port_id, uint16_t
> rx_queue_id,
> > union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
> > struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
> > struct rte_mempool *mpx;
> > +   struct rte_eth_dev_info dev_info;
> > unsigned int i, mp_n;
> > uint32_t prev_hdrs = 0;
> > int ret;
> >
> > +   ret = rte_eth_dev_info_get(port_id, &dev_info);
> > +   if (ret != 0)
> > +   return ret;
> > +
> > /* Verify Rx queue configuration is single pool and segment or
> >  * multiple pool/segment.
> > +* @see rte_eth_dev_info::max_rx_mempools
> >  * @see rte_eth_rxconf::rx_mempools
> >  * @see rte_eth_rxconf::rx_seg
> >  */
> > -   if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
> > +   if ((dev_info.max_rx_mempools == 0) && !(rx_pkt_nb_segs > 1 ||
> > ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) !=
> 0))) {
> > /* Single pool/segment configuration */
> > rx_conf->rx_seg = NULL;
> 
> 
> Hi Yingya, Yu,
> 
> Can you please verify this patch?
> 
> Thanks,
> ferruh


RE: [PATCH v2 0/4] crypto/ccp cleanup

2022-11-18 Thread Uttarwar, Sunil Prakashrao
[AMD Official Use Only - General]

Hi David,

Please find the update below.

- only one DPDK application can use ccp crypto engines (PCI bus allow/blocklist 
is not respected, right?),
Yes, only one crypto device can be used in a DPDK application for the crypto
operations. This was introduced by the patch 'crypto/ccp: convert driver from
vdev to PCI', implemented as per the community suggestion.

- since only one crypto device is exposed, there is no way for the application 
to dedicate/decide how to distribute crypto operations over the different ccp 
crypto engines available on the system.

When no ccp device is passed to the dpdk-test-crypto-perf application, it
tries to probe all CCP devices present on the system, and only one device can
be used. This seems to be a bug in the patch 'crypto/ccp: convert driver from
vdev to PCI' and we are looking into it.

Thanks
Sunil

-Original Message-
From: David Marchand  
Sent: Thursday, November 3, 2022 6:39 PM
To: Uttarwar, Sunil Prakashrao 
Cc: Yigit, Ferruh ; Akhil Goyal ; 
Namburu, Chandu-babu ; Sebastian, Selwin 
; dev ; Thomas Monjalon 

Subject: Re: [PATCH v2 0/4] crypto/ccp cleanup

Caution: This message originated from an External Source. Use proper caution 
when opening attachments, clicking links, or responding.


Hello,

On Wed, Nov 2, 2022 at 2:54 PM Uttarwar, Sunil Prakashrao 
 wrote:
>
>
> Hi David,
>
> Please find below response.

Not sure why dev@ was dropped.
Adding it back.


>
> -Original Message-
> From: David Marchand 
> Sent: Wednesday, November 2, 2022 6:18 PM
> To: Uttarwar, Sunil Prakashrao 
> Cc: Yigit, Ferruh ; Akhil Goyal 
> ; Namburu, Chandu-babu 
> Subject: Re: [PATCH v2 0/4] crypto/ccp cleanup
>
>
>
> Hello,
>
> On Wed, Nov 2, 2022 at 11:26 AM Uttarwar, Sunil Prakashrao 
>  wrote:
> > As mentioned earlier, observing issues with "crypto/ccp: fix PCI probing" 
> > patch (Floating point exception). Please find the below backtrace .
> >
> > (gdb) r -l 0,4 -n 4 -- --ptest throughput --buffer-sz 64 --burst-sz 
> > 32 --total-ops 3000 --silent --devtype crypto_ccp --optype 
> > cipher-only --cipher-algo aes-cbc --cipher-op encrypt 
> > --cipher-key-sz 16 --cipher-iv-sz 16 Starting program: 
> > /home/cae/sunil/dpdk_main/dpdk/build/app/dpdk-test-crypto-perf -l 0,4 -n 4 
> > -- --ptest throughput --buffer-sz 64 --burst-sz 32 --total-ops 3000 
> > --silent --devtype crypto_ccp --optype cipher-only --cipher-algo aes-cbc 
> > --cipher-op encrypt --cipher-key-sz 16 --cipher-iv-sz 16 [Thread debugging 
> > using libthread_db enabled] Using host libthread_db library 
> > "/lib/x86_64-linux-gnu/libthread_db.so.1".
> > EAL: Detected CPU lcores: 24
> > EAL: Detected NUMA nodes: 2
> > EAL: Detected static linkage of DPDK [New Thread 0x76dc5400 (LWP 
> > 171350)]
> > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket [New Thread
> > 0x765c4400 (LWP 171351)]
> > EAL: Selected IOVA mode 'PA'
> > EAL: VFIO support initialized
> > [New Thread 0x75dc3400 (LWP 171352)]
> > EAL: Probe PCI driver: crypto_ccp (1022:1456) device: :04:00.2 
> > (socket 0)
> > PMD: Initialising :04:00.2 on NUMA node 0
> > PMD: Max number of queue pairs = 8
> > PMD: Authentication offload to CCP
> > CRYPTODEV: User specified device name = :04:00.2
> > CRYPTODEV: Creating cryptodev :04:00.2
> > CRYPTODEV: Initialisation parameters - name: :04:00.2,socket id:
> > 0, max queue pairs: 8
> > EAL: Probe PCI driver: crypto_ccp (1022:1468) device: :05:00.1 
> > (socket 0)
> > PMD: CCP PMD already initialized
> > EAL: Requested device :05:00.1 cannot be used
> > EAL: Probe PCI driver: crypto_ccp (1022:1456) device: :41:00.2 
> > (socket 1)
> > PMD: CCP PMD already initialized
> > EAL: Requested device :41:00.2 cannot be used
> > EAL: Probe PCI driver: crypto_ccp (1022:1468) device: :42:00.1 
> > (socket 1)
> > PMD: CCP PMD already initialized
> > EAL: Requested device :42:00.1 cannot be used [New Thread
> > 0x755c2400 (LWP 171353)]
> > TELEMETRY: No legacy callbacks, legacy socket not created Allocated 
> > pool "sess_mp_0" on socket 0
> >
> > Thread 4 "rte-worker-4" received signal SIGFPE, Arithmetic exception.
> > [Switching to Thread 0x75dc3400 (LWP 171352)] 0x5767397a 
> > in ccp_pmd_enqueue_burst (queue_pair=0x17fefe940, ops=0x75dbe6e0, 
> > nb_ops=32) at ../drivers/crypto/ccp/rte_ccp_pmd.c:97
> > 97  cur_ops = nb_ops / cryptodev_cnt + 
> > (nb_ops)%cryptodev_cnt;
> > (gdb) bt
>
> I have a hard time understanding the logic in this enqueue code...
>
> Is this driver exposing a single crypto device and will "balance"
> crypto operations across all pci devices on the system?
>
> Driver is exposing a single crypto device as physical device and only one 
> device can be used by

RE: [PATCH] app/testpmd: fix action destruction memory leak

2022-11-18 Thread Suanming Mou
Hi,

> -Original Message-
> From: Ferruh Yigit 
> Sent: Friday, November 18, 2022 6:40 PM
> To: Suanming Mou ; david.march...@redhat.com;
> Aman Singh ; Yuying Zhang
> 
> Cc: dev@dpdk.org
> Subject: Re: [PATCH] app/testpmd: fix action destruction memory leak
> 
> On 11/17/2022 8:55 AM, Suanming Mou wrote:
> > In case action handle destroy fails, the job memory was not freed
> > properly. This commit fixes the possible memory leak in the action
> > handle destruction failed case.
> >
> > Fixes: c9dc03840873 ("ethdev: add indirect action async query")
> >
> > Signed-off-by: Suanming Mou 
> > ---
> >  app/test-pmd/config.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index
> > 982549ffed..719bdd4261 100644
> > --- a/app/test-pmd/config.c
> > +++ b/app/test-pmd/config.c
> > @@ -2873,9 +2873,9 @@ port_queue_action_handle_destroy(portid_t
> port_id,
> > job->type = QUEUE_JOB_TYPE_ACTION_DESTROY;
> > job->pia = pia;
> >
> > -   if (pia->handle &&
> > -   rte_flow_async_action_handle_destroy(port_id,
> > +   if (rte_flow_async_action_handle_destroy(port_id,
> 
> Why 'pia->handle' check removed, was it unnecessary to check it at first 
> place?
> 
> > queue_id, &attr, pia->handle, job, &error)) {
> > +   free(job);
> > ret = port_flow_complain(&error);
> > continue;
> > }
> 
> Just to double check, when this if branch not taken,
> 'rte_flow_async_action_handle_destroy()' not failed case, testpmd
> 'port_queue_flow_pull()' functions frees the 'job', right?

Yes, port_queue_flow_pull() will free the 'job'.




Re: [PATCH] net/nfp: fix the problem of mask table free

2022-11-18 Thread Ferruh Yigit
On 11/15/2022 1:13 AM, Chaoyong He wrote:
> The free process of mask table has problem, should use
> 'rte_has_free()' rather than 'rte_free()'.

s/_has_/_hash_/

> 
> Fixes: ac09376096d8 ("net/nfp: add structures and functions for flow offload")
> 
> Signed-off-by: Chaoyong He 
> Reviewed-by: Niklas Söderlund 

Applied to dpdk-next-net/main, thanks.


[PATCH] bus/pci: fix bus info memleak during PCI scan

2022-11-18 Thread Tomasz Zawadzki
During pci_scan_one(), for devices that were already registered,
pci_common_set() is called to set some of the fields again.

This resulted in the bus_info allocation leaking, so this patch
ensures it is always freed beforehand.

Fixes: 8f4de2dba9b9 ("bus/pci: fill bus specific information")

Signed-off-by: Tomasz Zawadzki 
---
 drivers/bus/pci/pci_common.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/bus/pci/pci_common.c b/drivers/bus/pci/pci_common.c
index 9901c34f4e..9a866055e8 100644
--- a/drivers/bus/pci/pci_common.c
+++ b/drivers/bus/pci/pci_common.c
@@ -114,6 +114,7 @@ pci_common_set(struct rte_pci_device *dev)
/* Otherwise, it uses the internal, canonical form. */
dev->device.name = dev->name;
 
+   free(dev->bus_info);
if (asprintf(&dev->bus_info, "vendor_id=%"PRIx16", device_id=%"PRIx16,
dev->id.vendor_id, dev->id.device_id) != -1)
dev->device.bus_info = dev->bus_info;
-- 
2.38.1



Re: [PATCH 1/3] net/nfp: fix wrong increment of free list counter

2022-11-18 Thread Ferruh Yigit
On 11/18/2022 1:44 AM, Chaoyong He wrote:
> When receiving a packet that is larger than the mbuf size, the Rx
> function will break the receive loop and send a free list descriptor
> with a random DMA address.
> 
> Fix this by moving the increment of the free list descriptor counter
> to after the packet size has been checked and acted on.
> 

The issue seems to be that one of the Rx descriptors is not rearmed properly
and may hold a random DMA address, which can lead the HW to DMA to that
random address, so the implications can be dangerous.

I suggest updating patch title slightly to highlight the impact:
"net/nfp: fix Rx descriptor DMA address"

> Fixes: bb340f56fcb7 ("net/nfp: fix memory leak in Rx")
> Cc: long...@corigine.com
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Chaoyong He 
> Reviewed-by: Niklas Söderlund 

Series applied to dpdk-next-net/main, thanks.



[PATCH v4 1/1] app/testpmd: add valid check to verify multi mempool feature

2022-11-18 Thread Hanumanth Pothula
Validate ethdev parameter 'max_rx_mempools' to know whether
device supports multi-mempool feature or not.

Bugzilla ID: 1128

Signed-off-by: Hanumanth Pothula 
v4:
 - updated if condition.
v3:
 - Simplified conditional check.
 - Corrected spell, whether.
v2:
 - Rebased on tip of next-net/main.
---
 app/test-pmd/testpmd.c | 10 --
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 4e25f77c6a..c1b4dbd716 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2655,17 +2655,23 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
struct rte_mempool *mpx;
+   struct rte_eth_dev_info dev_info;
unsigned int i, mp_n;
uint32_t prev_hdrs = 0;
int ret;
 
+   ret = rte_eth_dev_info_get(port_id, &dev_info);
+   if (ret != 0)
+   return ret;
+
/* Verify Rx queue configuration is single pool and segment or
 * multiple pool/segment.
+* @see rte_eth_dev_info::max_rx_mempools
 * @see rte_eth_rxconf::rx_mempools
 * @see rte_eth_rxconf::rx_seg
 */
-   if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
-   ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
+   if ((dev_info.max_rx_mempools == 0) && (rx_pkt_nb_segs <= 1 ||
+   ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0))) {
/* Single pool/segment configuration */
rx_conf->rx_seg = NULL;
rx_conf->rx_nseg = 0;
-- 
2.25.1



RE: [PATCH 01/11] service: fix build with clang 15

2022-11-18 Thread Van Haaren, Harry
> -Original Message-
> From: David Marchand 
> Sent: Friday, November 18, 2022 8:53 AM
> To: dev@dpdk.org
> Cc: sta...@dpdk.org; Van Haaren, Harry ; Jerin 
> Jacob
> 
> Subject: [PATCH 01/11] service: fix build with clang 15
> 
> This variable is not used.
> 
> Bugzilla ID: 1130
> Fixes: 21698354c832 ("service: introduce service cores concept")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: David Marchand 

Acked-by: Harry van Haaren 


Re: [PATCH] app/testpmd: fix action destruction memory leak

2022-11-18 Thread Ferruh Yigit
On 11/18/2022 12:21 PM, Suanming Mou wrote:
> Hi,
> 
>> -Original Message-
>> From: Ferruh Yigit 
>> Sent: Friday, November 18, 2022 6:40 PM
>> To: Suanming Mou ; david.march...@redhat.com;
>> Aman Singh ; Yuying Zhang
>> 
>> Cc: dev@dpdk.org
>> Subject: Re: [PATCH] app/testpmd: fix action destruction memory leak
>>
>> On 11/17/2022 8:55 AM, Suanming Mou wrote:
>>> In case action handle destroy fails, the job memory was not freed
>>> properly. This commit fixes the possible memory leak in the action
>>> handle destruction failed case.
>>>
>>> Fixes: c9dc03840873 ("ethdev: add indirect action async query")
>>>
>>> Signed-off-by: Suanming Mou 
>>> ---
>>>  app/test-pmd/config.c | 4 ++--
>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index
>>> 982549ffed..719bdd4261 100644
>>> --- a/app/test-pmd/config.c
>>> +++ b/app/test-pmd/config.c
>>> @@ -2873,9 +2873,9 @@ port_queue_action_handle_destroy(portid_t
>> port_id,
>>> job->type = QUEUE_JOB_TYPE_ACTION_DESTROY;
>>> job->pia = pia;
>>>
>>> -   if (pia->handle &&
>>> -   rte_flow_async_action_handle_destroy(port_id,
>>> +   if (rte_flow_async_action_handle_destroy(port_id,
>>
>> Why 'pia->handle' check removed, was it unnecessary to check it at first 
>> place?

This seems to have already been discussed and agreed in the other thread, so
proceeding.

Applied to dpdk-next-net/main, thanks.


RE: [RFC v2] mempool: add API to return pointer to free space on per-core cache

2022-11-18 Thread Morten Brørup
> From: Kamalakshitha Aligeri [mailto:kamalakshitha.alig...@arm.com]
> Sent: Wednesday, 16 November 2022 18.25
> 
> Expose the pointer to free space in per core cache in PMD, so that the
> objects can be directly copied to cache without any temporary storage
> 
> Signed-off-by: Kamalakshitha Aligeri 
> ---

Please build your patch in continuation of my patch [1], and use 
rte_mempool_cache_zc_put_bulk() instead of rte_mempool_get_cache().

[1]: https://inbox.dpdk.org/dev/20221116180419.98937-1...@smartsharesystems.com/

Some initial comments follow inline below.


> v2: Integration of API in vector PMD
> v1: API to return pointer to free space on per-core cache  and
> integration of API in scalar PMD
> 
>  app/test/test_mempool.c | 140 
>  drivers/net/i40e/i40e_rxtx_vec_avx512.c |  46 +++-
>  drivers/net/i40e/i40e_rxtx_vec_common.h |  22 +++-
>  lib/mempool/rte_mempool.h   |  46 
>  4 files changed, 219 insertions(+), 35 deletions(-)
> 
> diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
> index 8e493eda47..a0160336dd 100644
> --- a/app/test/test_mempool.c
> +++ b/app/test/test_mempool.c
> @@ -187,6 +187,142 @@ test_mempool_basic(struct rte_mempool *mp, int
> use_external_cache)
>   return ret;
>  }
> 
> +/* basic tests (done on one core) */
> +static int
> +test_mempool_get_cache(struct rte_mempool *mp, int use_external_cache)
> +{
> + uint32_t *objnum;
> + void **objtable;
> + void *obj, *obj2;
> + char *obj_data;
> + int ret = 0;
> + unsigned int i, j;
> + int offset;
> + struct rte_mempool_cache *cache;
> + void **cache_objs;
> +
> + if (use_external_cache) {
> + /* Create a user-owned mempool cache. */
> + cache =
> rte_mempool_cache_create(RTE_MEMPOOL_CACHE_MAX_SIZE,
> +  SOCKET_ID_ANY);
> + if (cache == NULL)
> + RET_ERR();
> + } else {
> + /* May be NULL if cache is disabled. */
> + cache = rte_mempool_default_cache(mp, rte_lcore_id());
> + }
> +
> + /* dump the mempool status */
> + rte_mempool_dump(stdout, mp);
> +
> + printf("get an object\n");
> + if (rte_mempool_generic_get(mp, &obj, 1, cache) < 0)
> + GOTO_ERR(ret, out);
> + rte_mempool_dump(stdout, mp);
> +
> + /* tests that improve coverage */
> + printf("get object count\n");
> + /* We have to count the extra caches, one in this case. */
> + offset = use_external_cache ? 1 * cache->len : 0;
> + if (rte_mempool_avail_count(mp) + offset != MEMPOOL_SIZE - 1)
> + GOTO_ERR(ret, out);
> +
> + printf("get private data\n");
> + if (rte_mempool_get_priv(mp) != (char *)mp +
> + RTE_MEMPOOL_HEADER_SIZE(mp, mp->cache_size))
> + GOTO_ERR(ret, out);
> +
> +#ifndef RTE_EXEC_ENV_FREEBSD /* rte_mem_virt2iova() not supported on
> bsd */
> + printf("get physical address of an object\n");
> + if (rte_mempool_virt2iova(obj) != rte_mem_virt2iova(obj))
> + GOTO_ERR(ret, out);
> +#endif
> +
> +
> + printf("put the object back\n");
> + cache_objs = rte_mempool_get_cache(mp, 1);

Use rte_mempool_cache_zc_put_bulk() instead.

> + if (cache_objs != NULL)
> + rte_memcpy(cache_objs, &obj, sizeof(void *));
> + else
> + rte_mempool_ops_enqueue_bulk(mp, &obj, 1);

rte_mempool_ops_enqueue_bulk() is a mempool-internal function, and it lacks
proper instrumentation. Use this instead:

rte_mempool_generic_put(mp, &obj, 1, NULL);

> +
> + rte_mempool_dump(stdout, mp);
> +
> + printf("get 2 objects\n");
> + if (rte_mempool_generic_get(mp, &obj, 1, cache) < 0)
> + GOTO_ERR(ret, out);
> + if (rte_mempool_generic_get(mp, &obj2, 1, cache) < 0) {
> + rte_mempool_generic_put(mp, &obj, 1, cache);
> + GOTO_ERR(ret, out);
> + }
> + rte_mempool_dump(stdout, mp);
> +
> + printf("put the objects back\n");
> + cache_objs = rte_mempool_get_cache(mp, 1);

Use rte_mempool_cache_zc_put_bulk() instead.

> + if (cache_objs != NULL)
> + rte_memcpy(mp, &obj, sizeof(void *));
> + else
> + rte_mempool_ops_enqueue_bulk(mp, &obj, 1);

Use rte_mempool_generic_put() instead.

> +
> + cache_objs = rte_mempool_get_cache(mp, 1);

Use rte_mempool_cache_zc_put_bulk() instead.

> + if (cache_objs != NULL)
> + rte_memcpy(mp, &obj2, sizeof(void *));
> + else
> + rte_mempool_ops_enqueue_bulk(mp, &obj2, 1);

Use rte_mempool_generic_put() instead.

> + rte_mempool_dump(stdout, mp);
> +
> + /*
> +  * get many objects: we cannot get them all because the cache
> +  * on other cores may not be empty.
> +  */
> + objtable = malloc(MEMPOOL_SIZE * sizeof(void *));
> + if (objtable == NULL)
> + GOTO_ERR(ret, out);
> +
> + for (i = 0; i 

[PATCH] net/nfp: fix return path in TSO processing function

2022-11-18 Thread Niklas Söderlund
From: Fei Qin 

When TSO is enabled, nfp_net_nfdk_tx_tso() fills segment information in the
Tx descriptor. However, the return path for TSO is lost and the LSO-related
fields of the Tx descriptor are filled with zeros, which prevents packets
from being sent.

This patch fixes the return path in the TSO processing function to make sure
TSO works fine.

Fixes: c73dced48c8c ("net/nfp: add NFDk Tx")
Cc: sta...@dpdk.org

Signed-off-by: Fei Qin 
Reviewed-by: Niklas Söderlund 
Reviewed-by: Chaoyong He 
Signed-off-by: Niklas Söderlund 
---
 drivers/net/nfp/nfp_rxtx.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 38377ca2182e..01cffdfde0b4 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -1135,6 +1135,8 @@ nfp_net_nfdk_tx_tso(struct nfp_net_txq *txq, struct rte_mbuf *mb)
txd.lso_hdrlen = mb->l2_len + mb->l3_len + mb->l4_len;
txd.lso_totsegs = (mb->pkt_len + mb->tso_segsz) / mb->tso_segsz;
 
+   return txd.raw;
+
 clean_txd:
txd.l3_offset = 0;
txd.l4_offset = 0;
-- 
2.38.1



RE: [PATCH] failsafe: fix segfault on hotplug event

2022-11-18 Thread Konstantin Ananyev



Hi Luc,
 
> > Hi Konstantin,
> >
> > > It is not recommended way to update rte_eth_fp_ops[] contents directly.
> > > There are eth_dev_fp_ops_setup()/ eth_dev_fp_ops_reset() that supposed
> > > to be used for that.
> >
> > Good to know. I see another fix that was made in a different PMD that
> > does exactly the same thing:
> >
> > https://github.com/DPDK/dpdk/commit/bcd68b68415172815e55fc67cf3947c0433baf74
> >
> > CC'ing the authors for awareness.
> >
> > > About the fix itself - while it might help till some extent,
> > > I think it will not remove the problem completely.
> > > There still remain a race-condition between rte_eth_rx_burst() and 
> > > failsafe_eth_rmv_event_callback().
> > > Right now DPDK doesn't support switching PMD fast-ops functions (or 
> > > updating rxq/txq data)
> > > on the fly.
> >
> > Thanks for the information. This is very helpful.
> >
> > Are you saying that the previous code also had that same race
> > condition?

Yes, I believe so. 

> It was only updating the rte_eth_dev structure, but I
> > assume the problem would have been the same since rte_eth_rx_burst()
> > in DPDK versions <=20 use the function pointers in rte_eth_dev, not
> > rte_eth_fp_ops.
> >
> > Can you think of a possible solution to this problem? I'm happy to
> > provide a patch to properly fix the problem. Having your guidance
> > would be extremely helpful.
> >
> > Thanks!
> 
> Changing burst mode on a running device is not safe because
> of lack of locking and/or memory barriers.
> 
> Would have been better to not to do this optimization.
> Just have one rx_burst/tx_burst function and look at what
> ever conditions are present there.

I think Stephen is right - within DPDK it is just not possible to switch RX/TX
functions on the fly (without some external synchronization).
So the safe way is to always use the safe version of the RX/TX call.
I personally don't think a few extra checks will affect performance that
much.

As another nit: inside failsafe rx_burst functions it is probably better not to 
access dev->data->rx_queues directly,
but call rte_eth_rx_burst(sdev->sdev_port_id, ...); instead.
Same for TX.

Thanks
Konstantin

 




[PATCH v2] doc: update QAT device support

2022-11-18 Thread Brian Dooley
Update which drivers and devices are supported by the Asymmetric Crypto
Service on QAT.

Signed-off-by: Brian Dooley 
---
 doc/guides/cryptodevs/qat.rst | 17 ++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
index 2d895e61ac..76d8187298 100644
--- a/doc/guides/cryptodevs/qat.rst
+++ b/doc/guides/cryptodevs/qat.rst
@@ -168,8 +168,8 @@ poll mode crypto driver support for the following hardware accelerator devices:
 * ``Intel QuickAssist Technology C62x``
 * ``Intel QuickAssist Technology C3xxx``
 * ``Intel QuickAssist Technology D15xx``
-* ``Intel QuickAssist Technology C4xxx``
 * ``Intel QuickAssist Technology 4xxx``
+* ``Intel QuickAssist Technology 401xxx``
 
 The QAT ASYM PMD has support for:
 
@@ -393,9 +393,15 @@ to see the full table)
+-+-+-+-+--+---+---+++--+++
| Yes | No  | No  | 3   | C4xxx| p | qat_c4xxx | c4xxx  | 18a0   | 1| 18a1   | 128|
+-+-+-+-+--+---+---+++--+++
-   | Yes | No  | No  | 4   | 4xxx | N/A   | qat_4xxx  | 4xxx   | 4940   | 4| 4941   | 16 |
+   | Yes | Yes | No  | 4   | 4xxx | linux/5.11+   | qat_4xxx  | 4xxx   | 4940   | 4| 4941   | 16 |
+   +-+-+-+-+--+---+---+++--+++
+   | Yes | Yes | Yes | 4   | 4xxx | linux/5.17+   | qat_4xxx  | 4xxx   | 4940   | 4| 4941   | 16 |
+   +-+-+-+-+--+---+---+++--+++
+   | Yes | No  | No  | 4   | 4xxx | IDZ/ N/A  | qat_4xxx  | 4xxx   | 4940   | 4| 4941   | 16 |
+   +-+-+-+-+--+---+---+++--+++
+   | Yes | Yes | Yes | 4   | 401xxx   | linux/5.19+   | qat_401xxx| 4xxx   | 4942   | 2| 4943   | 16 |
+-+-+-+-+--+---+---+++--+++
-   | Yes | No  | No  | 4   | 401xxx   | N/A   | qat_401xxx| 4xxx   | 4942   | 2| 4943   | 16 |
+   | Yes | No  | No  | 4   | 401xxx   | IDZ/ N/A  | qat_401xxx| 4xxx   | 4942   | 2| 4943   | 16 |
+-+-+-+-+--+---+---+++--+++
 
 * Note: Symmetric mixed crypto algorithms feature on Gen 2 works only with IDZ 
driver version 4.9.0+
@@ -416,6 +422,11 @@ If you are running on a kernel which includes a driver for your device, see
 `Installation using IDZ QAT driver`_.
 
 
+.. Note::
+
+The Asymmetric service is not supported by DPDK QAT PMD for the Gen 3 platform.
+The actual Crypto services enabled on the system depend on QAT driver capabilities and hardware slice configuration.
+
 Installation using kernel.org driver
 
 
-- 
2.25.1



DPDK Release Status Meeting 2022-11-17

2022-11-18 Thread Mcnamara, John
Release status meeting minutes 2022-11-17
=

Agenda:
* Release Dates
* Subtrees
* Roadmaps
* LTS
* Defects
* Opens

Participants:
* ARM
* Canonical [No]
* Debian/Microsoft
* Intel
* Marvell
* Nvidia
* Red Hat
* Xilinx/AMD


Release Dates
-

The following are the proposed current dates for 22.11:

* V1 deadline: 24 August   2022
* RC1:  7 October  2022
* RC2: 28 October  2022
* RC3: 14 November 2022
* RC4: 21 November 2022
* Release: 23 November 2022

Subtrees


* next-net
  * RC3 is out.
  * Only fixes from now on.

* next-net-intel
  * Some iavf fixes required in RC4.

* next-net-mlx
  * Majority of code in RC3.
  * Looking at fixes and docs.

* next-net-mrvl
  * No update. Most code merged.

* next-eventdev
  * No update. Most code merged.

* next-virtio
  * Majority of code in RC3.
  * Looking at fixes and docs.

* next-crypto
  * Doc patches.
  * From next release next-crypto will be split with next-bbdev

* main
  * RC3 released.
  * RC4 on Monday 21 November.


LTS
---

Next releases will be:

* 21.11.3
  * In progress.
  * Backport patches have been sent
  * Looking at RC3 fixes

* 20.11.7
  * Backport patches have been sent
  * Some in progress

* 19.11.14
  * Backport patches have been sent
  * Some in progress

Defects
---

* Bugzilla links, 'Bugs', added for hosted projects
  * https://www.dpdk.org/hosted-projects/



Opens
-

* None


DPDK Release Status Meetings


The DPDK Release Status Meeting is intended for DPDK Committers to discuss the
status of the master tree and sub-trees, and for project managers to track
progress or milestone dates.

The meeting occurs on every Thursday at 9:30 UTC over Jitsi on 
https://meet.jit.si/DPDK

You don't need an invite to join the meeting but if you want a calendar 
reminder just
send an email to "John McNamara john.mcnam...@intel.com" for the invite.



Re: [PATCH v4 1/1] app/testpmd: add valid check to verify multi mempool feature

2022-11-18 Thread Ferruh Yigit
On 11/18/2022 2:13 PM, Hanumanth Pothula wrote:
> Validate ethdev parameter 'max_rx_mempools' to know whether
> device supports multi-mempool feature or not.
> 

My preference would be revert the testpmd patch [1] that adds this new
feature after -rc2, and add it back next release with new testpmd
argument and below mentioned changes in setup function.

@Andrew, @Thomas, @Jerin, what do you think?


[1]
4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx queue")

> Bugzilla ID: 1128
> 

Can you please add fixes line?

> Signed-off-by: Hanumanth Pothula 

Please put the changelog after '---', which than git will take it as note.

> v4:
>  - updated if condition.
> v3:
>  - Simplified conditional check.
>  - Corrected spell, whether.
> v2:
>  - Rebased on tip of next-net/main.
> ---
>  app/test-pmd/testpmd.c | 10 --
>  1 file changed, 8 insertions(+), 2 deletions(-)
> 
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 4e25f77c6a..c1b4dbd716 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -2655,17 +2655,23 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
>   union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
>   struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
>   struct rte_mempool *mpx;
> + struct rte_eth_dev_info dev_info;
>   unsigned int i, mp_n;
>   uint32_t prev_hdrs = 0;
>   int ret;
>  
> + ret = rte_eth_dev_info_get(port_id, &dev_info);
> + if (ret != 0)
> + return ret;
> +
>   /* Verify Rx queue configuration is single pool and segment or
>* multiple pool/segment.
> +  * @see rte_eth_dev_info::max_rx_mempools
>* @see rte_eth_rxconf::rx_mempools
>* @see rte_eth_rxconf::rx_seg
>*/
> - if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
> - ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
> + if ((dev_info.max_rx_mempools == 0) && (rx_pkt_nb_segs <= 1 ||

Using `dev_info.max_rx_mempools` for the check means that if the device
supports multiple mempools, multiple mempools will be configured
independently of the user configuration. But the user may prefer a single
mempool or buffer split.

Right now the only PMD that supports multiple mempools is 'cnxk', so this
doesn't impact others, but I think this is not correct.

Instead of re-using the testpmd "mbuf-size" parameter (it is already used
for two other features, and this is the reason for the defect), it would
be better to have an explicit parameter for the multiple mempool feature.


> + ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0))) {
>   /* Single pool/segment configuration */
>   rx_conf->rx_seg = NULL;
>   rx_conf->rx_nseg = 0;


Logic seems correct, although I have not tested.

Current functions tries to detect the requested feature and setup queues
accordingly, features are:
- single mempool
- packet split (to multiple mempool)
- multiple mempool (various size)

And the logic in the function is:
``
if ( (! multiple mempool) && (! packet split))
setup for single mempool
exit

if (packet split)
setup packet split
else
setup multiple mempool
``

What do you think to
a) simplify logic by making single mempool as fallback and last option,
instead of detecting non existence of other configs
b) have explicit check for multiple mempool

Like:

``
if (packet split)
setup packet split
exit
else if (multiple mempool)
setup multiple mempool
exit

setup for single mempool
``

I think this both solves the defect and simplifies the code.


[PATCH] ring: build with global includes

2022-11-18 Thread Tyler Retzlaff
Ring has no dependencies and should be able to be built standalone, but
cannot be since it cannot find rte_config.h. This change directs meson
to include the global_inc paths, just like is done with other libraries,
e.g. telemetry.

Tyler Retzlaff (1):
  ring: build with global includes

 lib/ring/meson.build | 2 ++
 1 file changed, 2 insertions(+)

-- 
1.8.3.1



[PATCH] ring: build with global includes

2022-11-18 Thread Tyler Retzlaff
Make meson include global_inc so that rte_config.h can be found in the
include path.

Signed-off-by: Tyler Retzlaff 
---
 lib/ring/meson.build | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/ring/meson.build b/lib/ring/meson.build
index c20685c..defd9da 100644
--- a/lib/ring/meson.build
+++ b/lib/ring/meson.build
@@ -1,6 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2017 Intel Corporation
 
+includes = [global_inc]
+
 sources = files('rte_ring.c')
 headers = files('rte_ring.h')
 # most sub-headers are not for direct inclusion
-- 
1.8.3.1
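
For comparison, a dependency-free library that already builds standalone
carries the same line near the top of its build file; a minimal sketch of
such a file (the file and source names here are illustrative, not copied
from the tree):

```meson
# SPDX-License-Identifier: BSD-3-Clause
# Sketch of a dependency-free DPDK library build file: global_inc is what
# makes the generated rte_config.h visible when building the library alone.
includes = [global_inc]
sources = files('rte_example.c')
headers = files('rte_example.h')
```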



RE: [EXT] Re: [PATCH v4 1/1] app/testpmd: add valid check to verify multi mempool feature

2022-11-18 Thread Hanumanth Reddy Pothula


> -Original Message-
> From: Ferruh Yigit 
> Sent: Saturday, November 19, 2022 2:26 AM
> To: Hanumanth Reddy Pothula ;
> tho...@monjalon.net; andrew.rybche...@oktetlabs.ru; Nithin Kumar
> Dabilpuram 
> Cc: dev@dpdk.org; yux.ji...@intel.com; Jerin Jacob Kollanukkaran
> ; Aman Singh ; Yuying
> Zhang 
> Subject: [EXT] Re: [PATCH v4 1/1] app/testpmd: add valid check to verify
> multi mempool feature
> 
> External Email
> 
> --
> On 11/18/2022 2:13 PM, Hanumanth Pothula wrote:
> > Validate ethdev parameter 'max_rx_mempools' to know whether device
> > supports multi-mempool feature or not.
> >
> 
> My preference would be revert the testpmd patch [1] that adds this new
> feature after -rc2, and add it back next release with new testpmd argument
> and below mentioned changes in setup function.
> 
> @Andrew, @Thomas, @Jerin, what do you think?
> 
> 
> [1]
> 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx queue")
> 
> > Bugzilla ID: 1128
> >
> 
> Can you please add fixes line?
> 
Ack
> > Signed-off-by: Hanumanth Pothula 
> 
> Please put the changelog after '---', which than git will take it as note.
> 
Ack
> > v4:
> >  - updated if condition.
> > v3:
> >  - Simplified conditional check.
> >  - Corrected spell, whether.
> > v2:
> >  - Rebased on tip of next-net/main.
> > ---
> >  app/test-pmd/testpmd.c | 10 --
> >  1 file changed, 8 insertions(+), 2 deletions(-)
> >
> > diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
> > 4e25f77c6a..c1b4dbd716 100644
> > --- a/app/test-pmd/testpmd.c
> > +++ b/app/test-pmd/testpmd.c
> > @@ -2655,17 +2655,23 @@ rx_queue_setup(uint16_t port_id, uint16_t
> rx_queue_id,
> > union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
> > struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
> > struct rte_mempool *mpx;
> > +   struct rte_eth_dev_info dev_info;
> > unsigned int i, mp_n;
> > uint32_t prev_hdrs = 0;
> > int ret;
> >
> > +   ret = rte_eth_dev_info_get(port_id, &dev_info);
> > +   if (ret != 0)
> > +   return ret;
> > +
> > /* Verify Rx queue configuration is single pool and segment or
> >  * multiple pool/segment.
> > +* @see rte_eth_dev_info::max_rx_mempools
> >  * @see rte_eth_rxconf::rx_mempools
> >  * @see rte_eth_rxconf::rx_seg
> >  */
> > -   if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
> > -   ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) !=
> 0))) {
> > +   if ((dev_info.max_rx_mempools == 0) && (rx_pkt_nb_segs <= 1 ||
> 
> Using `dev_info.max_rx_mempools` for check means if device supports
> multiple mempool, multiple mempool will be configured independent from
> user configuration. But the user may prefer a single mempool or buffer split.
> 
Please find my suggested logic.

> Right now the only PMD supporting multiple mempools is 'cnxk', so this
> doesn't impact others, but I think this is not correct.
> 
> Instead of re-using the testpmd "mbuf-size" parameter (it is already used
> for two other features, and this is the reason for the defect), it would
> be better to have an explicit parameter for the multiple mempool feature.
> 
> 
> > +   ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) ==
> 0))) {
> > /* Single pool/segment configuration */
> > rx_conf->rx_seg = NULL;
> > rx_conf->rx_nseg = 0;
> 
> 
> Logic seems correct, although I have not tested.
> 
> The current function tries to detect the requested feature and set up
> queues accordingly; the features are:
> - single mempool
> - packet split (to multiple mempool)
> - multiple mempool (various size)
> 
> And the logic in the function is:
> ``
> if ( (! multiple mempool) && (! packet split))
>   setup for single mempool
>   exit
> 
> if (packet split)
>   setup packet split
> else
>   setup multiple mempool
> ``
> 
> What do you think of the following:
> a) simplify the logic by making single mempool the fallback and last option,
> instead of detecting the non-existence of other configs
> b) have an explicit check for multiple mempool
> 
> Like:
> 
> ``
> if (packet split)
>   setup packet split
>   exit
> else if (multiple mempool)
>   setup multiple mempool
>   exit
> 
> setup for single mempool
> ``
> 
> I think this both solves the defect and simplifies the code.

Yes Ferruh, your suggested logic simplifies the code.

Along the lines of your proposed logic, the below if conditions might work
fine for all features (buffer split/multi-mempool) supported by the PMD and
the user preference:

if (rx_pkt_nb_segs > 1 ||
    rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
        /* multi-segment (buffer split) */
} else if (mbuf_data_size_n > 1 && dev_info.max_rx_mempools > 1) {
        /* multi-mempool */
} else {
        /* single pool and segment */
}

Alternatively, we could add a new Rx offload parameter for the multi-mempool
feature, but I think it is not required; using dev_info.max_rx_mempools
works fine.

if (rx_pkt_nb_s