Hello Wenxu,

We've integrated this fix - 
https://patchwork.dpdk.org/project/dpdk/patch/20210512102408.7501-1-jiaw...@nvidia.com/

Could you please confirm it resolves your issue?
BTW, have you opened a BZ ticket? If so, could you please send me the link?

Regards,
Asaf Penso
________________________________
From: wenxu <we...@ucloud.cn>
Sent: Tuesday, May 11, 2021 6:10:57 AM
To: Asaf Penso <as...@nvidia.com>
Cc: dev@dpdk.org <dev@dpdk.org>
Subject: Re:RE: Re:: [dpdk-dev] nvgre inner rss problem in mlx5

Will do. Thanks


BR
wenxu


From: Asaf Penso <as...@nvidia.com>
Sent: 2021-05-10 16:05:54
To: wenxu <we...@ucloud.cn>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: RE: Re:: [dpdk-dev] nvgre inner rss problem in mlx5

Hello Wenxu,



Can you please create a new BZ ticket?

It looks like this is not handled properly in our PMD; we'll handle it and 
update you.



Regards,

Asaf Penso



From: wenxu <we...@ucloud.cn>
Sent: Monday, May 10, 2021 7:54 AM
To: Asaf Penso <as...@nvidia.com>
Cc: dev@dpdk.org
Subject: Re:: [dpdk-dev] nvgre inner rss problem in mlx5





Hi Asaf,



Is there any progress on this case?



BR

wenxu

From: Asaf Penso <as...@nvidia.com>
Sent: 2021-04-29 17:06:52
To: wenxu <we...@ucloud.cn>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: RE: Re:RE: Re:Re: [dpdk-dev] nvgre inner rss problem in mlx5


Sure, let’s take it offline and come back here with updated results.



Regards,

Asaf Penso



From: wenxu <we...@ucloud.cn>
Sent: Thursday, April 29, 2021 11:30 AM
To: Asaf Penso <as...@nvidia.com>
Cc: dev@dpdk.org
Subject: Re:RE: Re:Re: [dpdk-dev] nvgre inner rss problem in mlx5



Hi Asaf,

I am using upstream DPDK and I see the same issue, so I think the problem I 
mentioned is not fixed.



Could you help us handle this?



Br

wenxu



From: Asaf Penso <as...@nvidia.com>
Sent: 2021-04-28 17:31:03
To: wenxu <we...@ucloud.cn>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: RE: Re:Re: [dpdk-dev] nvgre inner rss problem in mlx5

What DPDK version are you using?

Can you try using upstream? We had a fix for a similar issue recently.



Regards,

Asaf Penso



From: wenxu <we...@ucloud.cn>
Sent: Wednesday, April 28, 2021 8:48 AM
To: Asaf Penso <as...@nvidia.com>
Cc: dev@dpdk.org
Subject: Re:Re: [dpdk-dev] nvgre inner rss problem in mlx5



rdma-core version is: rdma-core-52mlnx1-1.52104.x86_64

From: Asaf Penso <as...@nvidia.com>
Sent: 2021-04-28 12:22:32
To: "dev@dpdk.org" <dev@dpdk.org>, wenxu <we...@ucloud.cn>
Subject: Re: [dpdk-dev] nvgre inner rss problem in mlx5

Hello Wenxu,

Thank you for reaching out to us. I would like to know a few more details 
before I can provide assistance.

Can you share the version numbers for:

rdma-core

OFED

OS

Regards,

Asaf Penso



________________________________

From: dev <dev-boun...@dpdk.org> on behalf of wenxu <we...@ucloud.cn>
Sent: Wednesday, April 28, 2021 6:47:45 AM
To: dev@dpdk.org <dev@dpdk.org>
Subject: [dpdk-dev] nvgre inner rss problem in mlx5



Hi mlnx teams,


I tested upstream DPDK with the NVGRE inner RSS action using dpdk-testpmd:


# ./dpdk-testpmd -c 0x1f -n 4 -m 4096 -w "0000:19:00.1" --huge-dir=/mnt/ovsdpdk -- --forward-mode=rxonly --rxq=4 --txq=4 --auto-start --nb-cores=4

# testpmd> flow create 0 ingress pattern eth / ipv4 / nvgre / end actions rss level 2 types ip udp tcp end queues 0 1 2 3 end / end


This adds an rte_flow rule for NVGRE with an inner (level 2) RSS action spreading to queues 0, 1, 2, 3.


I tested this with the same underlay tunnel but different inner IP 
addresses/UDP ports, yet only one queue receives the packets.




If I test the same scenario with the VXLAN case, it works as expected:
testpmd> flow create 0 ingress pattern eth / ipv4 / udp / vxlan / end actions rss level 2 types ip udp tcp end queues 0 1 2 3 end / end
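
For reference, here is a minimal sketch of the same NVGRE inner-RSS rule expressed through the rte_flow C API. This is not the exact code I run; it assumes DPDK 21.05-era macro names, and the port id and queue list are only for illustration:

/* Sketch: NVGRE inner-RSS rule equivalent to the testpmd CLI command above.
 * Assumes a DPDK 21.05-era header set; port_id and queues are illustrative. */
#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_flow.h>

static struct rte_flow *
create_nvgre_inner_rss(uint16_t port_id, struct rte_flow_error *error)
{
    /* Attribute: ingress rule only. */
    struct rte_flow_attr attr = { .ingress = 1 };

    /* Pattern: eth / ipv4 / nvgre / end (outer headers plus NVGRE tunnel). */
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_NVGRE },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };

    /* Action: rss level 2 types ip udp tcp end queues 0 1 2 3 end.
     * level = 2 requests hashing on the inner (encapsulated) headers. */
    static const uint16_t queues[] = { 0, 1, 2, 3 };
    struct rte_flow_action_rss rss = {
        .level = 2,
        .types = ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP,
        .queue_num = RTE_DIM(queues),
        .queue = queues,
    };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    return rte_flow_create(port_id, &attr, pattern, actions, error);
}

Since the identical RSS action spreads traffic correctly in the VXLAN case, the rule itself looks fine; the difference seems to be in how the PMD expands the inner hash fields for NVGRE.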




# lspci | grep Ether
19:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
19:00.1 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]



Fw version is 16.29.1016
# ethtool -i net3
driver: mlx5_core
version: 5.12.0-rc4+
firmware-version: 16.29.1016 (MT_0000000080)


Are there any problems with my test case?


BR
wenxu