Hardware: CX5/CX6 Dx + Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz
DPDK version: 19.11.8 / 20.11 / 21.05-rc1&rc2


testpmd test case:
testpmd> flow create 0 ingress pattern eth / ipv4 / udp dst is 53 / end actions count / drop / end
testpmd> flow create 0 ingress pattern eth / ipv4 / udp src is 53 / end actions count / drop / end
testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end actions count / drop / end

testpmd> flow list 0
ID      Group   Prio    Attr    Rule
0       0       0       i--     ETH IPV4 UDP => COUNT DROP
1       0       0       i--     ETH IPV4 UDP => COUNT DROP
2       0       0       i--     ETH IPV4 UDP => COUNT DROP
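For reference, the first rule above could also be created through the rte_flow C API instead of the testpmd command line. Below is a minimal sketch of a count + drop rule on UDP destination port 53; the helper name and the minimal error handling are my own illustration, not code from this report.

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_flow.h>
#include <rte_byteorder.h>

/* Hypothetical helper: "eth / ipv4 / udp dst is 53 => count / drop" on port_id. */
static struct rte_flow *
create_udp_dst53_drop(uint16_t port_id)
{
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item_udp udp_spec = {
                .hdr.dst_port = RTE_BE16(53),
        };
        struct rte_flow_item_udp udp_mask = {
                .hdr.dst_port = RTE_BE16(0xffff),   /* match destination port only */
        };
        struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_ETH },
                { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
                { .type = RTE_FLOW_ITEM_TYPE_UDP,
                  .spec = &udp_spec, .mask = &udp_mask },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_count count = { 0 };
        struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_COUNT, .conf = &count },
                { .type = RTE_FLOW_ACTION_TYPE_DROP },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        struct rte_flow_error error;
        struct rte_flow *flow = rte_flow_create(port_id, &attr, pattern,
                                                actions, &error);
        if (flow == NULL)
                printf("flow create failed: %s\n",
                       error.message ? error.message : "unknown");
        return flow;
}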

Alternatively, with an RSS action instead of drop:
testpmd> flow create 0 ingress pattern eth / ipv4 / udp dst is 53 / end actions count / rss / end
testpmd> flow create 0 ingress pattern eth / ipv4 / udp src is 53 / end actions count / rss / end
testpmd> flow create 0 ingress pattern eth / ipv4 / udp / end actions count / rss / end
testpmd> flow list 0
ID      Group   Prio    Attr    Rule
0       0       0       i--     ETH IPV4 UDP => COUNT RSS
1       0       0       i--     ETH IPV4 UDP => COUNT RSS
2       0       0       i--     ETH IPV4 UDP => COUNT RSS
testpmd> 
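In the C API the RSS variant only swaps the fate action. A sketch of the replacement actions[] array for the helper above, assuming a hypothetical 4-queue setup and the ETH_RSS_* flag names used up to DPDK 21.05 (none of these values come from the report):

        /* Drop-in replacement for actions[]: COUNT + RSS instead of COUNT + DROP. */
        uint16_t queues[] = { 0, 1, 2, 3 };              /* assumed Rx queues */
        struct rte_flow_action_rss rss = {
                .func = RTE_ETH_HASH_FUNCTION_DEFAULT,
                .level = 0,                              /* hash on the outer headers */
                .types = ETH_RSS_IP | ETH_RSS_UDP,
                .key_len = 0,                            /* keep the device default RSS key */
                .key = NULL,
                .queue_num = RTE_DIM(queues),
                .queue = queues,
        };
        struct rte_flow_action_count count = { 0 };
        struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_COUNT, .conf = &count },
                { .type = RTE_FLOW_ACTION_TYPE_RSS,   .conf = &rss },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };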



As soon as more than one flow is created, the CX5/CX6-Dx NIC starts incrementing
'rx_phy_discard_packets'. With only a single flow there is no problem.
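To watch the counter while reproducing this, 'show port xstats 0' in testpmd prints it, or it can be read programmatically through the xstats API. A minimal sketch (the helper name is mine; it only assumes the mlx5 counter name quoted above):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Print the "rx_phy_discard_packets" extended statistic for port_id. */
static void
print_phy_discards(uint16_t port_id)
{
        int n = rte_eth_xstats_get(port_id, NULL, 0);
        if (n <= 0)
                return;
        struct rte_eth_xstat *xstats = calloc(n, sizeof(*xstats));
        struct rte_eth_xstat_name *names = calloc(n, sizeof(*names));
        if (xstats != NULL && names != NULL &&
            rte_eth_xstats_get_names(port_id, names, n) == n &&
            rte_eth_xstats_get(port_id, xstats, n) == n) {
                for (int i = 0; i < n; i++)
                        if (strcmp(names[i].name, "rx_phy_discard_packets") == 0)
                                printf("%s = %" PRIu64 "\n",
                                       names[i].name, xstats[i].value);
        }
        free(xstats);
        free(names);
}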


Is this a CX5/CX6-Dx hardware issue, or is it a bug in the DPDK mlx5 PMD?


Best Regards!
KANG
