https://bugs.dpdk.org/show_bug.cgi?id=1591
            Bug ID: 1591
           Summary: MLX5 Windows : Issue with Packet Loss When Setting
                    Descriptors Above 1<<14 on ConnectX6-DX
           Product: DPDK
           Version: 24.11
          Hardware: x86
                OS: Windows
            Status: UNCONFIRMED
          Severity: normal
          Priority: Normal
         Component: ethdev
          Assignee: dev@dpdk.org
          Reporter: a.polle...@deltacast.tv
  Target Milestone: ---

I am encountering an issue with the ConnectX6-DX on Windows. When I set the
number of RX descriptors to a value greater than 1<<14 (16384), all incoming
packets are dropped (counted as imissed), except for the first one.

The root cause is unclear. rte_eth_dev_info_get() reports 32768 as the
maximum number of descriptors (rx_desc_lim.nb_max), which I understand to
mean that any value at or below 32768 should be accepted. Yet the failing
values in the tests below (20480 and 32768) are both within that limit,
while 16384 (exactly 1<<14) works. A minimal sketch of how I read these
limits follows; the testpmd results that demonstrate the issue come after it.
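For reference, this is roughly how the limits are queried. It is only a
minimal sketch assuming an initialized EAL and a valid port_id; the helper
name check_rx_desc_limits is mine, not something from DPDK or testpmd:

  #include <stdio.h>
  #include <stdint.h>
  #include <rte_ethdev.h>

  /* Hypothetical helper (for illustration only): print the RX descriptor
   * limits the PMD advertises, then let ethdev clamp a requested count. */
  static int
  check_rx_desc_limits(uint16_t port_id, uint16_t requested_rxd)
  {
          struct rte_eth_dev_info dev_info;
          uint16_t nb_rxd = requested_rxd;
          uint16_t nb_txd = 4096; /* matches --txd in the testpmd runs below */
          int ret;

          ret = rte_eth_dev_info_get(port_id, &dev_info);
          if (ret != 0)
                  return ret;

          printf("rx_desc_lim: nb_max=%u nb_min=%u nb_align=%u\n",
                 dev_info.rx_desc_lim.nb_max,
                 dev_info.rx_desc_lim.nb_min,
                 dev_info.rx_desc_lim.nb_align);

          /* Bound the requested counts by nb_min/nb_max and align them to
           * nb_align. With nb_max reported as 32768 on this setup, a value
           * like 20480 presumably passes unchanged (depending on the
           * reported nb_align), yet the port then drops all packets. */
          ret = rte_eth_dev_adjust_nb_rx_tx_desc(port_id, &nb_rxd, &nb_txd);
          if (ret != 0)
                  return ret;

          printf("adjusted nb_rxd=%u (requested %u)\n", nb_rxd, requested_rxd);
          return 0;
  }

Since rte_eth_dev_adjust_nb_rx_tx_desc() only clamps and aligns against the
advertised limits, it gives the application no indication that anything above
1<<14 will fail on this device.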
Here are some test results using testpmd that demonstrate the issue.

TEST 1 : 4096 descriptors

./dpdk-testpmd -l 2-3 -n 4 -a 0000:03:00.0 --log-level=8 --log-level=pmd.common.mlx5:8 --log-level=pmd.net.mlx5:8 -- --socket-num=0 --burst=64 --txd=4096 --rxd=4096 --mbcache=512 --rxq=4 --txq=4 --nb-cores=1 --txpkts=1500 -i --forward-mode=rxonly --flow-isolate-all

testpmd> show port stats 0

  ######################## NIC statistics for port 0  ########################
  RX-packets: 1626632    RX-missed: 0          RX-bytes:  2152490218
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:       246876          Rx-bps:   2613496560
  Tx-pps:            0          Tx-bps:            0
  ############################################################################

TEST 2 : 16384 descriptors

./dpdk-testpmd -l 2-3 -n 4 -a 0000:03:00.0 --log-level=8 --log-level=pmd.common.mlx5:8 --log-level=pmd.net.mlx5:8 -- --socket-num=0 --burst=64 --txd=4096 --rxd=16384 --mbcache=512 --rxq=4 --txq=4 --nb-cores=1 --txpkts=1500 -i --forward-mode=rxonly --flow-isolate-all

testpmd> show port stats 0

  ######################## NIC statistics for port 0  ########################
  RX-packets: 2923021    RX-missed: 0          RX-bytes:  3867975188
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:       246881          Rx-bps:   2613540240
  Tx-pps:            0          Tx-bps:            0
  ############################################################################

TEST 3 : 20480 descriptors

./dpdk-testpmd -l 2-3 -n 4 -a 0000:03:00.0 --log-level=8 --log-level=pmd.common.mlx5:8 --log-level=pmd.net.mlx5:8 -- --socket-num=0 --burst=64 --txd=4096 --rxd=20480 --mbcache=512 --rxq=4 --txq=4 --nb-cores=1 --txpkts=1500 -i --forward-mode=rxonly --flow-isolate-all

testpmd> show port stats 0

  ######################## NIC statistics for port 0  ########################
  RX-packets: 1          RX-missed: 2732098    RX-bytes:  1328
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################

TEST 4 : 32768 descriptors

./dpdk-testpmd -l 2-3 -n 4 -a 0000:03:00.0 --log-level=8 --log-level=pmd.common.mlx5:8 --log-level=pmd.net.mlx5:8 -- --socket-num=0 --burst=64 --txd=4096 --rxd=32768 --mbcache=512 --rxq=4 --txq=4 --nb-cores=1 --txpkts=1500 -i --forward-mode=rxonly --flow-isolate-all

testpmd> show port stats 0

  ######################## NIC statistics for port 0  ########################
  RX-packets: 1          RX-missed: 1129806    RX-bytes:  1328
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################

I was able to reproduce this issue on DPDK versions 24.11 and 23.11, using
DevX version 24.10.26603.

Thank you in advance for your help. Please let me know if you need further
information.