https://bugs.dpdk.org/show_bug.cgi?id=568

            Bug ID: 568
           Summary: mlx5 flow match & drop performance  problem
           Product: DPDK
           Version: 19.11
          Hardware: x86
                OS: Linux
            Status: UNCONFIRMED
          Severity: major
          Priority: Normal
         Component: vhost/virtio
          Assignee: dev@dpdk.org
          Reporter: kangzy1...@qq.com
  Target Milestone: ---

Kernel ver: 5.4.x
FW ver: 5.1-2.3.7
CPU : Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz
NIC: Mellanox Technologies MT27800 Family [ConnectX-5]

DPDK ver: 19.11 & 20.05 & 20.08

testpmd :

testpmd --log-level=8 -c 0xfffffffffffe --socket-mem=5120,5120 -n 4 -r 2 -w
81:00.0 -- -i --rxq=16 --txq=16 --nb-cores=16 --forward-mode icmpecho 

without flows:

1> Ixia generates 20 Gbps of 64-byte TCP traffic plus 5 ICMP echo requests;
2> testpmd icmpecho drops all the TCP traffic and replies to all ICMP echo requests.


with these flows:
flow create 0 ingress pattern eth / ipv4 / tcp / end actions count / drop / end
flow create 0 ingress pattern eth / ipv4 / icmp / end actions queue index 3 /
mark id 3 / end
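
After installing the rules, the count action attached to the TCP drop rule can be queried from the testpmd prompt to confirm the NIC is really dropping in hardware (rule ID 0 assumed here, matching creation order; use `flow list 0` to check the actual IDs):

```
testpmd> flow list 0
testpmd> flow query 0 0 count
```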


1> Ixia generates 50 Gbps of 64-byte TCP traffic plus 5 ICMP echo requests;
2> testpmd icmpecho should receive only the ICMP echo request packets and reply to them;

Without the TCP drop flow, testpmd is OK!
But with the TCP drop flow installed, testpmd actually cannot receive all of the ICMP
echo request packets!
It seems that the NIC firmware causes ICMP packets to be lost!
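
For context on the load the drop rule has to absorb: at 50 Gbps with 64-byte frames the offered rate works out to roughly 74 Mpps. A quick sketch of the arithmetic, assuming standard Ethernet on-wire overhead (8 B preamble + 12 B inter-frame gap per frame):

```python
# Offered packet rate for small-frame line-rate traffic.
# On-wire cost per 64-byte frame: 64 B frame + 20 B overhead
# (8 B preamble + 12 B inter-frame gap) = 84 B = 672 bits.

def packet_rate_mpps(link_gbps: float, frame_bytes: int = 64) -> float:
    """Return the offered packet rate in Mpps for a given link speed."""
    wire_bits = (frame_bytes + 20) * 8
    return link_gbps * 1e9 / wire_bits / 1e6

print(round(packet_rate_mpps(50), 1))  # ~74.4 Mpps at 50 Gbps
print(round(packet_rate_mpps(20), 1))  # ~29.8 Mpps at 20 Gbps
```

At ~74 Mpps every packet has to traverse the flow tables, so a hardware drop path that cannot keep up could plausibly spill losses onto the ICMP rule as well.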
