I'm using a Mellanox ConnectX-5 @ 100Gb on a Supermicro system with DPDK
18.11.1 and pktgen 3.6.6 from the master branch (downloaded today).
When I configure pktgen to use multiple queues, the TX queues behave as
expected, but the RX side doesn't: pktgen reports the correct number of
RX queues during startup, yet the stats and xstats pages always show
traffic arriving on a single RX queue.
For example, here's how pktgen starts:
$ sudo -E LD_LIBRARY_PATH=/home/davec/src/dpdk/x86_64-native-linuxapp-gcc/lib \
    /home/davec/src/pktgen/app/x86_64-native-linuxapp-gcc/pktgen \
    -l 1,2-13 -w 04:00.0 -w 04:00.1 -n 3 -- -P -m "[2-7:2-7].0, [8-13:8-13].1"
Copyright (c) <2010-2019>, Intel Corporation. All rights reserved.
Powered by DPDK
EAL: Detected 56 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Probing VFIO support...
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL: probe driver: 15b3:1019 net_mlx5
EAL: PCI device 0000:04:00.1 on NUMA socket 0
EAL: probe driver: 15b3:1019 net_mlx5
Lua 5.3.5 Copyright (C) 1994-2018 Lua.org, PUC-Rio
*** Copyright (c) <2010-2019>, Intel Corporation. All rights reserved.
*** Pktgen created by: Keith Wiles -- >>> Powered by DPDK <<<
Initialize Port 0 -- TxQ 6, RxQ 6, Src MAC ec:0d:9a:ca:b4:98
Initialize Port 1 -- TxQ 6, RxQ 6, Src MAC ec:0d:9a:ca:b4:99
Port 0: Link Up - speed 100000 Mbps - full-duplex <Enable promiscuous mode>
Port 1: Link Up - speed 100000 Mbps - full-duplex <Enable promiscuous mode>
RX/TX processing lcore: 2 rx: 1 tx: 1
RX/TX processing lcore: 3 rx: 1 tx: 1
RX/TX processing lcore: 4 rx: 1 tx: 1
RX/TX processing lcore: 5 rx: 1 tx: 1
RX/TX processing lcore: 6 rx: 1 tx: 1
RX/TX processing lcore: 7 rx: 1 tx: 1
RX/TX processing lcore: 8 rx: 1 tx: 1
RX/TX processing lcore: 9 rx: 1 tx: 1
RX/TX processing lcore: 10 rx: 1 tx: 1
RX/TX processing lcore: 11 rx: 1 tx: 1
RX/TX processing lcore: 12 rx: 1 tx: 1
RX/TX processing lcore: 13 rx: 1 tx: 1
Note that the number of RX queues is correct.
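(For reference, this is roughly how I'd expect the configured queue counts to
show up through the ethdev API; a minimal sketch, not pktgen code, with the
port id assumed.)

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Sketch only: print the RX/TX queue counts actually configured on a
     * port, after rte_eal_init() and rte_eth_dev_configure() have run. */
    static void print_queue_counts(uint16_t port_id)
    {
            struct rte_eth_dev_info info;

            rte_eth_dev_info_get(port_id, &info);  /* void return in DPDK 18.11 */
            printf("port %u: %u RX queues, %u TX queues configured\n",
                   port_id, info.nb_rx_queues, info.nb_tx_queues);
    }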
I use the following commands to start generating traffic. (The link
partner is running DPDK 18.11.1 with the testpmd app configured for "io"
forwarding.)
set all size 64
set all rate 100
set all count 0
set all burst 16
range all src port 1 1 1023 1
range all dst ip 10.0.0.0 10.0.0.0 10.0.255.255 0.0.0.1
range all src ip 10.0.0.0 10.0.0.0 10.0.255.255 0.0.0.1
range 0 src mac 00:00:00:00:00:00 00:00:00:00:00:00 00:12:34:56:78:90 00:00:00:01:01:01
range 0 dst mac 00:20:00:00:00:00 00:20:00:00:00:00 00:98:76:54:32:10 00:00:00:01:01:01
range all size 64 64 64 0
enable all range
start all
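Since the range settings vary the source/destination IPs and the source port,
I'd expect RSS on the receiving port to spread the returned flows across its
RX queues. For context, this is my understanding of the kind of RSS setup a
DPDK 18.11 application needs for that; a sketch with assumed queue counts,
not pktgen's actual initialization code:

    #include <rte_ethdev.h>

    /* Sketch only (assumptions: 6/6 queues as in the mapping above, default
     * RSS key): configure a port for RSS over the IP/TCP/UDP fields so that
     * varied 5-tuples hash to different RX queues. */
    static int configure_rss(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
    {
            struct rte_eth_dev_info info;
            struct rte_eth_conf conf = {
                    .rxmode = { .mq_mode = ETH_MQ_RX_RSS },
                    .rx_adv_conf.rss_conf = {
                            .rss_key = NULL,  /* use the PMD's default key */
                            .rss_hf  = ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP,
                    },
            };

            rte_eth_dev_info_get(port_id, &info);
            /* Only request hash types the PMD supports, otherwise configure fails. */
            conf.rx_adv_conf.rss_conf.rss_hf &= info.flow_type_rss_offloads;

            return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
    }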
Later, when I'm running traffic, the statistics show something different:
Pktgen:/> page stats
|            <Real Port Stats Page>   Copyright (c) <2010-2019>, Intel Corporation
  Port 0       Pkts Rx/Tx              Rx Errors/Missed   Rate Rx/Tx           MAC Address
  0-04:00.0    542522040/2806635488    0/0                10182516/51993536    EC:0D:9A:CA:B4:98

          ipackets       opackets       ibytes          obytes          errors
  Q  0:   542522040      546551712      32551322400     32793102720     0
  Q  1:   0              451205888      0               27072353280     0
  Q  2:   0              457296176      0               27437770560     0
  Q  3:   0              455300832      0               27318049920     0
  Q  4:   0              442654816      0               26559288960     0
  Q  5:   0              453626064      0               27217563840     0
  Q  6:   0              0              0               0               0
  Q  7:   0              0              0               0               0
  Q  8:   0              0              0               0               0
  Q  9:   0              0              0               0               0
  Q 10:   0              0              0               0               0
  Q 11:   0              0              0               0               0
  Q 12:   0              0              0               0               0
  Q 13:   0              0              0               0               0
  Q 14:   0              0              0               0               0
  Q 15:   0              0              0               0               0
-- Pktgen Ver: 3.6.6 (DPDK 18.11.1) Powered by DPDK (pid:15485)
Traffic is only received on RX queue 0. Has anyone run into this? The link
partner shows traffic received and transmitted on all of its configured
queues (16 in this case), so I don't think the link partner is dropping
traffic in a way that would funnel everything it sends back into a single
RX queue on the SUT.
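For what it's worth, the per-queue counters can also be read straight from the
ethdev stats to cross-check what the pktgen stats page shows; a minimal sketch
(port id and EAL setup assumed):

    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_ethdev.h>

    /* Sketch only: dump per-queue RX packet counters for one port.
     * struct rte_eth_stats carries RTE_ETHDEV_QUEUE_STAT_CNTRS (16) per-queue
     * slots, which matches the 16 rows on the stats page above. */
    static void dump_rx_queue_counters(uint16_t port_id)
    {
            struct rte_eth_stats stats;
            uint16_t q;

            if (rte_eth_stats_get(port_id, &stats) != 0)
                    return;

            for (q = 0; q < RTE_ETHDEV_QUEUE_STAT_CNTRS; q++)
                    printf("port %u queue %u: ipackets=%" PRIu64 "\n",
                           port_id, q, stats.q_ipackets[q]);
    }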
Dave