https://bugs.dpdk.org/show_bug.cgi?id=1533

            Bug ID: 1533
           Summary: testpmd performance drops with Mellanox ConnectX6
                    when using 8 cores 8 queues
           Product: DPDK
           Version: 23.11
          Hardware: All
                OS: Linux
            Status: UNCONFIRMED
          Severity: normal
          Priority: Normal
         Component: other
          Assignee: dev@dpdk.org
          Reporter: wangliangx...@hygon.cn
  Target Milestone: ---

Created attachment 287
  --> https://bugs.dpdk.org/attachment.cgi?id=287&action=edit
mpps and packets stats of 8 cores 8 queues

Environment: Intel Cascade Lake server running CentOS 7.

The Mellanox ConnectX6 NIC and the cores used are on the same NUMA node.
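For anyone reproducing this, NIC/core NUMA locality can be checked via sysfs. This is a minimal sketch using the standard Linux sysfs path; the PCI address af:00.0 is taken from the testpmd commands in this report and must be substituted with your own device's address:

```shell
# PCI address of the NIC as passed to testpmd in this report
# (substitute your own device's full address).
pci=0000:af:00.0

# Linux exposes the device's NUMA node in sysfs; -1 means no NUMA
# information is available for this device.
node_file=/sys/bus/pci/devices/$pci/numa_node
if [ -r "$node_file" ]; then
    echo "NIC NUMA node: $(cat "$node_file")"
else
    echo "device $pci not present on this host"
fi

# Compare against the node that owns cores 24-32, e.g. with:
# numactl --hardware
```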

Input traffic is constant line rate at 100 Gbps, 64-byte packets, 256 flows.
Test duration is 30 seconds.

Run testpmd in io mode with 7 cores and 7 queues:
./dpdk-testpmd -l 24-32 -n 4 -a af:00.0 -- --nb-cores=7 --rxq=7 --txq=7 -i

Rx/Tx throughput is 91.6/91.6 Mpps, with no TX-dropped packets.

However, running testpmd in io mode with 8 cores and 8 queues:
./dpdk-testpmd -l 24-32 -n 4 -a af:00.0 -- --nb-cores=8 --rxq=8 --txq=8 -i

Rx/Tx throughput is 113.6/85.4 Mpps. The Tx rate is lower than in the 7-core
case, and there are many TX-dropped packets. Please refer to the attached
picture.
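For anyone reproducing the drop counters above, the per-port and per-queue statistics can be read from the testpmd prompt with the standard testpmd console commands below (whether the drops concentrate on particular queues depends on the NIC's RSS flow distribution):

```
testpmd> show port stats all
testpmd> show fwd stats all
```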

I noticed a similar issue on other x86 and aarch64 servers as well.
