The rdma-core version is: rdma-core-52mlnx1-1.52104.x86_64



From: Asaf Penso <as...@nvidia.com>
Sent: 2021-04-28 12:22:32
To: "dev@dpdk.org" <dev@dpdk.org>, wenxu <we...@ucloud.cn>
Subject: Re: [dpdk-dev] nvgre inner rss problem in mlx5
Hello Wenxu,


Thank you for reaching out to us. I would like to know a few more details before I can 
provide assistance.

Can you share the version numbers for:

- rdma-core
- OFED
- OS


Regards,


Asaf Penso




From: dev <dev-boun...@dpdk.org> on behalf of wenxu <we...@ucloud.cn>
 Sent: Wednesday, April 28, 2021 6:47:45 AM
 To: dev@dpdk.org <dev@dpdk.org>
 Subject: [dpdk-dev] nvgre inner rss problem in mlx5 

Hi mlnx team,


 I tested upstream DPDK with the NVGRE inner RSS action using dpdk-testpmd:


 # ./dpdk-testpmd -c 0x1f  -n 4 -m 4096 -w "0000:19:00.1"  
--huge-dir=/mnt/ovsdpdk  -- --forward-mode=rxonly --rxq=4 --txq=4 --auto-start 
--nb-cores=4

 #  testpmd>> flow create 0 ingress pattern eth / ipv4 / nvgre / end actions 
rss level 2 types ip udp tcp end queues 0 1 2 3 end / end


 This adds an rte_flow rule for NVGRE with an inner (level 2) RSS action spreading 
traffic across queues 0, 1, 2 and 3; see the sketch below.
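
 For reference, a rough C sketch of the same rule via the rte_flow API is shown below 
(untested; the port_id/error arguments and the helper name create_nvgre_inner_rss are 
only for illustration, and the port is assumed to already have at least 4 Rx queues):

/* Untested sketch: C equivalent of the testpmd rule above. */
#include <rte_ethdev.h>
#include <rte_flow.h>

static struct rte_flow *
create_nvgre_inner_rss(uint16_t port_id, struct rte_flow_error *error)
{
        /* Spread traffic across Rx queues 0-3. */
        static const uint16_t queues[] = { 0, 1, 2, 3 };

        struct rte_flow_attr attr = { .ingress = 1 };

        /* Match: eth / ipv4 / nvgre, no specific field values. */
        struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_ETH },
                { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
                { .type = RTE_FLOW_ITEM_TYPE_NVGRE },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };

        /* RSS on the inner headers (level = 2) over IP/UDP/TCP fields.
         * Newer DPDK releases prefix these macros with RTE_ (RTE_ETH_RSS_*). */
        struct rte_flow_action_rss rss = {
                .level = 2,
                .types = ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP,
                .queue_num = 4,
                .queue = queues,
        };

        struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(port_id, &attr, pattern, actions, error);
}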


 I tested this with the same underlay tunnel but different inner IP 
addresses/UDP ports, yet only one queue receives the packets.




 When I test the VXLAN case, it works as expected:
 testpmd>> flow create 0 ingress pattern eth / ipv4 / udp / vxlan / end actions 
rss level 2 types ip udp tcp end queues 0 1 2 3 end / end
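
 For comparison, the only change needed in the sketch above for this VXLAN rule would be 
the pattern items (the RSS action stays the same):

        struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_ETH },
                { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
                { .type = RTE_FLOW_ITEM_TYPE_UDP },
                { .type = RTE_FLOW_ITEM_TYPE_VXLAN },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };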




 # lspci | grep Ether
 19:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
 19:00.1 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]



 The firmware version is 16.29.1016:
 # ethtool -i net3
 driver: mlx5_core
 version: 5.12.0-rc4+
 firmware-version: 16.29.1016 (MT_0000000080)


 Is there any problem with my test case?


 BR
 wenxu