Hi Benoit,

Thanks, very unexpected, but it works.
I'd prefer to use the recommended way.
________________________________
From: Benoit Ganne (bganne) <bga...@cisco.com>
Sent: 22 March 2021 17:00
To: Юрий Иванов <format_...@outlook.com>; vpp-dev@lists.fd.io <vpp-dev@lists.fd.io>
Subject: RE: 40G Mellanox NIC not working

Hi,

If possible, the preferred way of using Mellanox NICs with VPP is with the 
native rdma driver instead of DPDK: 
https://docs.fd.io/vpp/21.06/df/d0e/rdma_doc.html
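For example, with the rdma plugin something like this should be enough to get the ports into VPP (just a sketch; enp1s0f0/enp1s0f1 are the interface names from your lshw output below, and rdma-0/rdma-1 are arbitrary names you choose). The rdma driver works on top of the mlx5_core kernel driver, so no binding to vfio-pci is needed:

  DBGvpp# create interface rdma host-if enp1s0f0 name rdma-0
  DBGvpp# create interface rdma host-if enp1s0f1 name rdma-1
  DBGvpp# set interface state rdma-0 up
  DBGvpp# set interface state rdma-1 up
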
Otherwise you can always try to rebuild DPDK with mlx5 support, see 
https://git.fd.io/vpp/tree/build/external/packages/dpdk.mk#n18
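If you go the DPDK route instead, the rough idea is the following (a sketch: I am assuming the DPDK_MLX5_PMD variable from dpdk.mk can be overridden from the environment; otherwise just set it to y in that file before rebuilding the external deps. The mlx5 PMD also needs the rdma-core / libibverbs development packages installed at build time):

  $ DPDK_MLX5_PMD=y make install-ext-deps
  $ make build

and then list the ports in the dpdk section of startup.conf, e.g.:

  dpdk {
    dev 0000:01:00.0
    dev 0000:01:00.1
  }

Like the native rdma driver, the mlx5 PMD keeps using the mlx5_core kernel driver, so there is no vfio-pci/uio binding step.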
The instructions you refer to are out of date.

Best
ben

> -----Original Message-----
> From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Юрий Иванов
> Sent: Friday, 19 March 2021 15:25
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] 40G Mellanox NIC not working
>
> Hi,
>
> VPP doesn't see the Mellanox 40G card.
> I tried to compile based on the summary instructions at
> https://lists.fd.io/g/vpp-dev/message/16211
> but after recompilation the NIC does not show up in show interface.
>
> The main problem is that there is no message in the log like the one for the Intel card:
> /usr/bin/vpp[394111]: pci: Skipping PCI device 0000:02:00.0 as host
> interface eno1 is up
>
>
> $ cat /etc/os-release
> NAME="Ubuntu"
> VERSION="20.04.2 LTS (Focal Fossa)"
>
> # lshw -c network -businfo
> Bus info          Device     Class          Description
> =======================================================
> pci@0000:01:00.0  enp1s0f0   network        MT27700 Family [ConnectX-4]
> pci@0000:01:00.1  enp1s0f1   network        MT27700 Family [ConnectX-4]
>
> # lsmod | grep mlx
> mlx5_ib               331776  0
> ib_uverbs             147456  1 mlx5_ib
> ib_core               352256  2 ib_uverbs,mlx5_ib
> mlx5_core            1105920  1 mlx5_ib
> pci_hyperv_intf        16384  1 mlx5_core
> mlxfw                  32768  1 mlx5_core
> tls                    90112  1 mlx5_core
>
> Building:
>
> # git clone https://gerrit.fd.io/r/vpp
> # make install-dep
>
>
> After building VPP, I try to run it:
> $ sudo ifconfig enp1s0f1 down
> $ sudo LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/login/origin/build-root/install-vpp_debug-native/external/lib/ \
>     ./build-root/build-vpp_debug-native/vpp/bin/vpp -c \
>     ./build-root/install-vpp_debug-native/vpp/etc/vpp/startup.conf
>
> Then I connect to the VPP CLI:
> $ sudo LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/login/origin/build-root/install-vpp_debug-native/external/lib/ \
>     ./build-root/build-vpp_debug-native/vpp/bin/vppctl
> DBGvpp# show interface
>               Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
> local0                            0     down          0/0/0/0
> DBGvpp# show pci
> Address      Sock VID:PID     Link Speed    Driver      Product Name                 Vital Product Data
> 0000:01:00.0   0  15b3:1013   8.0 GT/s x8   mlx5_core   CX414A - ConnectX-4 QSFP28   PN: MCX414A-GCAT
>                                                                                      EC: AG
>                                                                                      SN: MT2002X14876
>                                                                                      V0: 0x 50 43 49 65 47 65 6e 33 ...
>                                                                                      RV: 0x 28 00 00
> 0000:01:00.1   0  15b3:1013   8.0 GT/s x8   mlx5_core   CX414A - ConnectX-4 QSFP28   PN: MCX414A-GCAT
>                                                                                      EC: AG
>                                                                                      SN: MT2002X14876
>                                                                                      V0: 0x 50 43 49 65 47 65 6e 33 ...
>                                                                                      RV: 0x 28 00 00
> As you can see, there is no 40G interface in VPP.
> Maybe there are some specific prerequisites (OS or driver version)?
