Hello experts,

I am trying to set up VPP v22.02 with DPDK v21.11.0 on an Azure VM with
accelerated networking enabled (Mellanox ConnectX-3 SR-IOV, mlx4 driver).

I have made the following changes:

diff --git a/build-data/platforms/vpp.mk b/build-data/platforms/vpp.mk
index 0688ab1b3..3edfa3936 100644
--- a/build-data/platforms/vpp.mk
+++ b/build-data/platforms/vpp.mk
@@ -24,5 +24,3 @@ vpp_TAG_BUILD_TYPE = release
vpp_clang_TAG_BUILD_TYPE = release
vpp_gcov_TAG_BUILD_TYPE = gcov
vpp_coverity_TAG_BUILD_TYPE = coverity
-
-vpp_uses_dpdk_mlx4_pmd = yes
diff --git a/build/external/packages/dpdk.mk b/build/external/packages/dpdk.mk
index 720682618..c44648f8d 100644
--- a/build/external/packages/dpdk.mk
+++ b/build/external/packages/dpdk.mk
@@ -13,12 +13,12 @@

DPDK_PKTMBUF_HEADROOM        ?= 128
DPDK_USE_LIBBSD              ?= n
-DPDK_DEBUG                   ?= n
-DPDK_MLX4_PMD                ?= n
-DPDK_MLX5_PMD                ?= n
-DPDK_MLX5_COMMON_PMD         ?= n
-DPDK_TAP_PMD                 ?= n
-DPDK_FAILSAFE_PMD            ?= n
+DPDK_DEBUG                   ?= y
+DPDK_MLX4_PMD                ?= y
+DPDK_MLX5_PMD                ?= y
+DPDK_MLX5_COMMON_PMD         ?= y
+DPDK_TAP_PMD                 ?= y
+DPDK_FAILSAFE_PMD            ?= y
DPDK_MACHINE                 ?= default
DPDK_MLX_IBV_LINK            ?= static
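(To double-check that these flags actually took effect, I verify the generated DPDK configuration after rebuilding the external packages; the build path below is from my tree and may differ on yours:)

```
# Path is from my build tree -- adjust as needed.
grep -E 'RTE_NET_MLX[45]' \
  build-root/build-vpp-native/external/dpdk-21.11/config/rte_build_config.h

# The DPDK plugin also reports the version and compiled-in configuration at runtime:
sudo vppctl show dpdk version
```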

And I am using this dpdk VPP startup configuration:

dpdk {
  ## Whitelist specific interface by specifying PCI address
  dev a50a:00:02.0 {
    name wan
  }

  #vdev net_failsafe_vsc0,dev(a50a:00:02.0)
  vdev net_vdev_netvsc0,iface=eth1

  ## Blacklist specific device type by specifying PCI vendor:device
  ## Whitelist entries take precedence
  #blacklist f0c1:00:02.0
  #blacklist aab2:00:02.0
  #blacklist a50a:00:02.0
  #blacklist 4b14:00:02.0
}

Basically, I am trying to use the eth1 synthetic interface and the associated
VF interface enP42250s2 with VPP.

ubuntu@azure-edge-184-vEdge:~$ ifconfig eth1
eth1: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether 60:45:bd:b4:5b:0d  txqueuelen 1000  (Ethernet)
        RX packets 8296  bytes 577471 (577.4 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 50  bytes 4858 (4.8 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ubuntu@azure-edge-184-vEdge:~$ ifconfig enP42250s2
enP42250s2: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 60:45:bd:b4:5b:0d  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ubuntu@azure-edge-184-vEdge:~$ sudo lshw -class network -businfo
Bus info          Device       Class          Description
=========================================================
pci@4b14:00:02.0  enP19220s3   network        MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Functio
pci@a50a:00:02.0  enP42250s2   network        MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Functio
pci@aab2:00:02.0  enP43698s1   network        MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Functio
pci@f0c1:00:02.0  enP61633s4   network        MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Functio
                  eth0         network        Ethernet interface
                  eth1         network        Ethernet interface
                  eth2         network        Ethernet interface
                  eth3         network        Ethernet interface
                  vetheefeb9e  network        Ethernet interface
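(Since hv_netvsc enslaves the VF to the synthetic device, I also confirmed the pairing via sysfs; as far as I know, recent kernels expose the VF as a "lower" device of eth1:)

```
ls /sys/class/net/eth1/ | grep lower
# should list something like lower_enP42250s2 if the VF is enslaved
```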

TX traffic seems to be working, but RX traffic is always delivered to the
Linux synthetic interface (eth1). I can see its RX counters increasing even
when the interface is administratively down.
To check the TX path I am using static ARP entries.
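(For reference, the static neighbor entries are added in VPP like this; the MAC address here is just an example:)

```
vpp# set ip neighbor wan 10.10.21.129 00:11:22:33:44:55 static
vpp# set ip neighbor wan 10.10.21.133 00:11:22:33:44:55 static
```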
The ib_uverbs and mlx4_ib driver modules are loaded:
ubuntu@azure-edge-184-vEdge:~$ lsmod | grep mlx
mlx4_ib               196608  0
ib_uverbs             159744  8 mlx4_ib,rdma_ucm
ib_core               360448  9 rdma_cm,ib_ipoib,mlx4_ib,iw_cm,ib_iser,ib_umad,rdma_ucm,ib_uverbs,ib_cm
mlx4_en               118784  0
mlx4_core             323584  2 mlx4_ib,mlx4_en

Also, if I turn off the eth1 interface in Linux, the associated enP42250s2
interface is automatically shut down too, but starting VPP brings the
enP42250s2 VF interface link back up:

[81717.812508] mlx4_en: enP42250s2: Steering Mode 2
[81717.826388] mlx4_en: enP42250s2: Link Up
[81717.842831] hv_netvsc 6045bdb4-5b0d-6045-bdb4-5b0d6045bdb4 eth1: Data path switched to VF: enP42250s2
[81717.855660] enP42250s2: mtu greater than device maximum
This means VPP is able to control the interface, but there still seems to be
a problem:
vpp# show logging
..........
2022/07/05 12:16:09:032 notice     dpdk           EAL: Detected CPU lcores: 4
2022/07/05 12:16:09:032 notice     dpdk           EAL: Detected NUMA nodes: 1
2022/07/05 12:16:09:032 notice     dpdk           EAL: Detected static linkage of DPDK
2022/07/05 12:16:09:032 notice     dpdk           EAL: Selected IOVA mode 'PA'
2022/07/05 12:16:09:032 notice     dpdk           EAL: No available 1048576 kB hugepages reported
2022/07/05 12:16:09:032 notice     dpdk           EAL: No free 1048576 kB hugepages reported on node 0
2022/07/05 12:16:09:032 notice     dpdk           EAL: No available 1048576 kB hugepages reported
2022/07/05 12:16:09:032 notice     dpdk           EAL: VFIO support initialized
2022/07/05 12:16:09:032 notice     dpdk           EAL: Probe PCI driver: net_mlx4 (15b3:1004) device: a50a:00:02.0 (socket 0)
2022/07/05 12:16:09:032 notice     dpdk           EAL: failed to parse device "net_tap_vsc0"
2022/07/05 12:16:09:032 notice     dpdk           EAL: Driver cannot attach the device (net_failsafe_vsc0)
2022/07/05 12:16:09:032 notice     dpdk           EAL: Failed to attach device on primary process
2022/07/05 12:16:09:032 notice     dpdk           EAL: failed to parse device "net_tap_vsc0"
2022/07/05 12:16:09:032 notice     dpdk           vdev_probe(): failed to initialize net_failsafe_vsc0 device
2022/07/05 12:16:09:032 notice     dpdk           EAL: Bus (vdev) probe failed.
2022/07/05 12:16:09:032 notice     dpdk           EAL: VFIO support not initialized
2022/07/05 12:16:09:032 notice     dpdk           EAL: Couldn't map new region for DMA
2022/07/05 12:16:11:086 notice     ip/neighbor    add: wan, 10.10.21.129
2022/07/05 12:16:11:086 notice     ip/neighbor    add: wan, 10.10.21.133
2022/07/05 12:16:11:086 notice     dpdk           net_mlx4: port 0 unable to find virtually contiguous chunk for address (0x1000000000). rte_memseg_contig_walk() failed.
vpp# show hardware-interfaces
              Name                Idx   Link  Hardware
local0                             0    down  local0
  Link speed: unknown
  local
*format_dpdk_device:445: rte_eth_dev_rss_hash_conf_get returned -95*
wan                                1     up   wan
  Link speed: 40 Gbps
  RX Queues:
    queue thread         mode
    0     main (0)       polling
  Ethernet address 60:45:bd:b4:5b:0d
  *Mellanox ConnectX-3 Family*
    carrier up full duplex max-frame-size 0
    flags: admin-up maybe-multiseg tx-offload rx-ip4-cksum
    Devargs:
    rx: queues 1 (max 1024), desc 1024 (min 0 max 65535 align 1)
    tx: queues 1 (max 1024), desc 1024 (min 0 max 65535 align 1)
    pci: device 15b3:1004 subsystem 15b3:61b0 address a50a:00:02.00 numa 0
    max rx packet len: 65536
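(Since the EAL log complains about 1048576 kB hugepages and net_mlx4 cannot find a virtually contiguous chunk, here is how I check the hugepage situation on the VM; only 2 MB pages are configured on this instance, no 1 GB pages:)

```shell
# Show hugepage totals/free counts and the configured page size
grep -i huge /proc/meminfo
```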

Using a VPP configuration like this one instead:

dpdk {
  ## Whitelist specific interface by specifying PCI address
  #dev a50a:00:02.0 {
  #  name wan
  #}

  vdev net_failsafe_vsc0,dev(a50a:00:02.0)
  #vdev net_vdev_netvsc0,iface=eth1

  ## Blacklist specific device type by specifying PCI vendor:device
  ## Whitelist entries take precedence
  blacklist f0c1:00:02.0
  blacklist aab2:00:02.0
  blacklist a50a:00:02.0
  blacklist 4b14:00:02.0
}

the interface comes up as FailsafeEthernet0:
vpp# show hardware-interfaces
              Name                Idx   Link  Hardware
format_dpdk_device:445: rte_eth_dev_rss_hash_conf_get returned -95
FailsafeEthernet0                  1     up   FailsafeEthernet0
  Link speed: 40 Gbps
  RX Queues:
    queue thread         mode
    0     main (0)       polling
  Ethernet address 60:45:bd:b4:5b:0d
  *FailsafeEthernet*
    carrier up full duplex max-frame-size 0
    flags: admin-up maybe-multiseg tx-offload rx-ip4-cksum
    Devargs: fd(18),dev(net_tap_vsc0,remote=eth1)
    rx: queues 1 (max 1024), desc 1024 (min 0 max 65535 align 1)
    tx: queues 1 (max 1024), desc 1024 (min 0 max 65535 align 1)
And the VPP DPDK messages are:

2022/07/05 12:26:06:116 notice     dpdk           EAL: Detected CPU lcores: 4
2022/07/05 12:26:06:116 notice     dpdk           EAL: Detected NUMA nodes: 1
2022/07/05 12:26:06:116 notice     dpdk           EAL: Detected static linkage of DPDK
2022/07/05 12:26:06:116 notice     dpdk           EAL: Selected IOVA mode 'PA'
2022/07/05 12:26:06:116 notice     dpdk           EAL: No available 1048576 kB hugepages reported
2022/07/05 12:26:06:116 notice     dpdk           EAL: No free 1048576 kB hugepages reported on node 0
2022/07/05 12:26:06:116 notice     dpdk           EAL: No available 1048576 kB hugepages reported
2022/07/05 12:26:06:116 notice     dpdk           EAL: VFIO support initialized
2022/07/05 12:26:06:116 notice     dpdk           EAL: Probe PCI driver: net_mlx4 (15b3:1004) device: a50a:00:02.0 (socket 0)
2022/07/05 12:26:06:116 notice     dpdk           EAL: Failed to attach device on primary process
2022/07/05 12:26:06:116 notice     dpdk           EAL: VFIO support not initialized
2022/07/05 12:26:06:116 notice     dpdk           EAL: Couldn't map new region for DMA
2022/07/05 12:26:06:116 notice     dpdk           net_failsafe: Operation rte_eth_dev_set_mtu failed for sub_device 0 with error -22
2022/07/05 12:26:08:155 notice     ip/neighbor    add: FailsafeEthernet0, 10.10.21.129
2022/07/05 12:26:08:155 notice     ip/neighbor    add: FailsafeEthernet0, 10.10.21.133
2022/07/05 12:26:08:155 notice     dpdk           net_mlx4: port 1 unable to find virtually contiguous chunk for address (0x1000000000). rte_memseg_contig_walk() failed.

But the traffic behavior is the same: TX works fine, while RX keeps going to
the eth1 Linux synthetic interface.
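(One more data point I can collect: hv_netvsc exposes per-VF counters on the synthetic device, which should show whether any RX ever takes the VF datapath; I believe these counters are present on recent kernels:)

```
ethtool -S eth1 | grep -i vf_
# e.g. vf_rx_packets / vf_tx_packets should move if the VF datapath is used
```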

Can anyone see what I am missing here? Any suggestions on what I should try?
Thank you,
Gabi Florian
View/Reply Online (#21609): https://lists.fd.io/g/vpp-dev/message/21609