If you want to use vfio-pci, you might want to check:
# dmesg | grep Virtualization
[    5.208330] DMAR: Intel(R) Virtualization Technology for Directed I/O
If you don’t see the line above, vfio-pci will not work; the fix is to enable
Intel VT-d in the BIOS.
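
On Intel systems the kernel typically also needs intel_iommu=on on its command
line for vfio-pci to work, in addition to the BIOS setting. A quick check, and
one way to set it on CentOS 7 (a sketch; adjust to your own boot setup):

# cat /proc/cmdline | grep -o intel_iommu=on
# grubby --update-kernel=ALL --args="intel_iommu=on iommu=pt"
# reboot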

Also, uio_pci_generic won’t work with i40e, so if you want to use UIO you have
to use igb_uio (built in on Ubuntu, and compiled as a kernel module (.ko) for
RHEL/CentOS).
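
If you do go the igb_uio route on CentOS, the rough sequence is something like
the sketch below, assuming igb_uio.ko has already been built against your
running kernel and your dpdk-devbind script is in the current directory; then
set "uio-driver igb_uio" in the dpdk section of startup.conf:

# modprobe uio
# insmod ./igb_uio.ko
# ./dpdk-devbind --bind=igb_uio 0000:3b:00.0 0000:3b:00.1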

Hope that helps.

Regards,
Yichen

From: <vpp-dev@lists.fd.io> on behalf of "steven luong via Lists.Fd.Io" 
<sluong=cisco....@lists.fd.io>
Reply-To: "Steven Luong (sluong)" <slu...@cisco.com>
Date: Monday, January 6, 2020 at 8:32 PM
To: Gencli Liu <18600640...@163.com>, "vpp-dev@lists.fd.io" 
<vpp-dev@lists.fd.io>
Cc: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] "vppctl show int" no NIC (just local0) #vpp #vnet

It is likely a resource problem – when VPP requests more descriptors and/or
TX/RX queues for the NIC than the firmware supports, DPDK fails to initialize
the interface. There are a few ways to figure out what the problem is.

  1.  Bypass VPP and run testpmd with the debug options turned on, something 
like this (see the testpmd sketch after this list):

--log-level=lib.eal,debug --log-level=pmd,debug

  2.  Reduce your RX/TX queues and descriptors to the minimum for the 
interface. What do you have in the dpdk section for the NIC, anyway?
  3.  Run VPP with a bare-minimum config:

unix { interactive }
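
Expanding on (1): a hedged sketch of a minimal testpmd run against one of the
X710 ports with the EAL/PMD debug logs turned on. The core list, memory-channel
count, and queue counts below are illustrative only:

# EAL options go before "--", testpmd application options after it
./testpmd -l 1,2 -n 4 -w 0000:3b:00.0 \
    --log-level=lib.eal,debug --log-level=pmd,debug \
    -- -i --rxq=1 --txq=1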

I would start with (3) since it is the easiest. If the interface is already 
bound to DPDK, I would expect DPDK to discover the NIC and show it in show 
hardware. If that is the case, you can proceed to check whether your 
startup.conf oversubscribes the descriptors and/or TX/RX queues. If (3) still 
fails, try (1). It is a bit more work, but I am sure you’ll figure out how to 
compile and run the testpmd app.
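
For (2), the knobs to look at are the per-device queue and descriptor settings
in the dpdk section of startup.conf. A conservative stanza for one of the X710
ports might look like the sketch below; the values are illustrative minimums,
not tuned recommendations:

dpdk {
  dev 0000:3b:00.0 {
    num-rx-queues 1
    num-tx-queues 1
    num-rx-desc 512
    num-tx-desc 512
  }
}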

Steven

From: <vpp-dev@lists.fd.io> on behalf of Gencli Liu <18600640...@163.com>
Date: Monday, January 6, 2020 at 7:22 PM
To: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] "vppctl show int" no NIC (just local0) #vpp #vnet

Hi Ezpeer:
    Thank you for your advice.
    I did a test updating the X710's driver (i40e) and the X710's firmware:
    i40e new version: 2.10.19.30
    Firmware new version: 6.80 (Intel pulled the NVM 7.0 and 7.1 releases 
because they introduced some serious errors).
    (I will try again when Intel republishes NVM 7.1.)
The uio-driver in use is vfio-pci.
Even so, the NICs still show no driver in "vppctl show pci".
I also switched the uio-driver to uio_pci_generic by modifying vpp.service and 
startup.conf; the result is slightly different, but still not OK.

This is my environment:
[root@localhost i40e]# cat /etc/redhat-release
CentOS Linux release 7.7.1908 (Core)

[root@localhost i40e]# uname -a
Linux localhost.localdomain 3.10.0-1062.4.1.el7.x86_64 #1 SMP Fri Oct 18 
17:15:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost i40e]# uname -r
3.10.0-1062.4.1.el7.x86_64

[root@localhost i40e]# modinfo i40e
filename:       /lib/modules/3.10.0-1062.4.1.el7.x86_64/updates/drivers/net/ethernet/intel/i40e/i40e.ko
version:        2.10.19.30
license:        GPL
description:    Intel(R) 40-10 Gigabit Ethernet Connection Network Driver
author:         Intel Corporation, <e1000-de...@lists.sourceforge.net>
retpoline:      Y
rhelversion:    7.7
srcversion:     9EB781BDF574D047F098566
alias:          pci:v00008086d0000158Bsv*sd*bc*sc*i*
alias:          pci:v00008086d0000158Asv*sd*bc*sc*i*
alias:          pci:v00008086d000037D3sv*sd*bc*sc*i*
alias:          pci:v00008086d000037D2sv*sd*bc*sc*i*
alias:          pci:v00008086d000037D1sv*sd*bc*sc*i*
alias:          pci:v00008086d000037D0sv*sd*bc*sc*i*
alias:          pci:v00008086d000037CFsv*sd*bc*sc*i*
alias:          pci:v00008086d000037CEsv*sd*bc*sc*i*
alias:          pci:v00008086d00000D58sv*sd*bc*sc*i*
alias:          pci:v00008086d00000CF8sv*sd*bc*sc*i*
alias:          pci:v00008086d00001588sv*sd*bc*sc*i*
alias:          pci:v00008086d00001587sv*sd*bc*sc*i*
alias:          pci:v00008086d0000104Fsv*sd*bc*sc*i*
alias:          pci:v00008086d0000104Esv*sd*bc*sc*i*
alias:          pci:v00008086d000015FFsv*sd*bc*sc*i*
alias:          pci:v00008086d00001589sv*sd*bc*sc*i*
alias:          pci:v00008086d00001586sv*sd*bc*sc*i*
alias:          pci:v00008086d00001585sv*sd*bc*sc*i*
alias:          pci:v00008086d00001584sv*sd*bc*sc*i*
alias:          pci:v00008086d00001583sv*sd*bc*sc*i*
alias:          pci:v00008086d00001581sv*sd*bc*sc*i*
alias:          pci:v00008086d00001580sv*sd*bc*sc*i*
alias:          pci:v00008086d00001574sv*sd*bc*sc*i*
alias:          pci:v00008086d00001572sv*sd*bc*sc*i*
depends:        ptp
vermagic:       3.10.0-1062.4.1.el7.x86_64 SMP mod_unload modversions
parm:           debug:Debug level (0=none,...,16=all) (int)

[root@localhost i40e]# ethtool -i p1p3
driver: i40e
version: 2.10.19.30
firmware-version: 6.80 0x80003c64 1.2007.0
expansion-rom-version:
bus-info: 0000:3b:00.2
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

This is vfio-pci error:
[root@localhost ~]# lsmod | grep vfio
vfio_pci               41412  0
vfio_iommu_type1       22440  0
vfio                   32657  3 vfio_iommu_type1,vfio_pci
irqbypass              13503  2 kvm,vfio_pci
[root@localhost ~]#
[root@localhost ~]# dmesg
[   41.670075] VFIO - User Level meta-driver version: 0.3
[   43.380387] i40e 0000:3b:00.0: removed PHC from p1p1
[   43.583958] vfio-pci: probe of 0000:3b:00.0 failed with error -22
[   43.595876] i40e 0000:3b:00.1: removed PHC from p1p2
[   43.811364] vfio-pci: probe of 0000:3b:00.1 failed with error -22

[root@localhost ~]# cat /usr/lib/systemd/system/vpp.service
[Unit]
Description=Vector Packet Processing Process
After=syslog.target network.target auditd.service

[Service]
ExecStartPre=-/bin/rm -f /dev/shm/db /dev/shm/global_vm /dev/shm/vpe-api
#ExecStartPre=-/sbin/modprobe uio_pci_generic
ExecStartPre=-/sbin/modprobe vfio-pci
ExecStartPre=-/sbin/ifconfig p1p1 down
ExecStartPre=-/sbin/ifconfig p1p2 down
ExecStart=/usr/bin/numactl --cpubind=0 --membind=0 /usr/bin/vpp -c /etc/vpp/startup.conf
# ExecStart=/usr/bin/vpp -c /etc/vpp/startup.conf
Type=simple
Restart=on-failure
RestartSec=5s
# Uncomment the following line to enable VPP coredumps on crash
# You still need to configure the rest of the system to collect them, see
# https://fdio-vpp.readthedocs.io/en/latest/troubleshooting/reportingissues/reportingissues.html#core-files
# for details
#LimitCORE=infinity

[Install]
WantedBy=multi-user.target

[root@localhost ~]# cat /etc/vpp/startup.conf
...
uio-driver vfio-pci
...

[root@localhost ~]# vppctl show pci | grep XL710
0000:3b:00.0   0  8086:1572   8.0 GT/s x8                  XL710 40GbE Controller          RV: 0x 86
0000:3b:00.1   0  8086:1572   8.0 GT/s x8                  XL710 40GbE Controller          RV: 0x 86
0000:3b:00.2   0  8086:1572   8.0 GT/s x8  i40e            XL710 40GbE Controller          RV: 0x 86
0000:3b:00.3   0  8086:1572   8.0 GT/s x8  i40e            XL710 40GbE Controller          RV: 0x 86
[root@localhost ~]# vppctl show int
              Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
local0                            0     down          0/0/0/0

[root@localhost dpdk]# ./dpdk-devbind --status

Network devices using kernel driver
===================================
0000:18:00.0 'NetXtreme BCM5720 2-port Gigabit Ethernet PCIe 165f' if=em1 drv=tg3 unused=vfio-pci *Active*
0000:18:00.1 'NetXtreme BCM5720 2-port Gigabit Ethernet PCIe 165f' if=em2 drv=tg3 unused=vfio-pci
0000:19:00.0 'NetXtreme BCM5720 2-port Gigabit Ethernet PCIe 165f' if=em3 drv=tg3 unused=vfio-pci
0000:19:00.1 'NetXtreme BCM5720 2-port Gigabit Ethernet PCIe 165f' if=em4 drv=tg3 unused=vfio-pci *Active*
0000:3b:00.2 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=p1p3 drv=i40e unused=vfio-pci
0000:3b:00.3 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=p1p4 drv=i40e unused=vfio-pci

Other Network devices
=====================
0000:3b:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572' unused=i40e,vfio-pci
0000:3b:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572' unused=i40e,vfio-pci

When using uio_pci_generic:
[root@localhost dpdk]# ./dpdk-devbind --status
Network devices using kernel driver
===================================
0000:18:00.0 'NetXtreme BCM5720 2-port Gigabit Ethernet PCIe 165f' if=em1 drv=tg3 unused=uio_pci_generic *Active*
0000:18:00.1 'NetXtreme BCM5720 2-port Gigabit Ethernet PCIe 165f' if=em2 drv=tg3 unused=uio_pci_generic
0000:19:00.0 'NetXtreme BCM5720 2-port Gigabit Ethernet PCIe 165f' if=em3 drv=tg3 unused=uio_pci_generic
0000:19:00.1 'NetXtreme BCM5720 2-port Gigabit Ethernet PCIe 165f' if=em4 drv=tg3 unused=uio_pci_generic *Active*
0000:3b:00.2 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=p1p3 drv=i40e unused=uio_pci_generic
0000:3b:00.3 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=p1p4 drv=i40e unused=uio_pci_generic

Other Network devices
=====================
0000:3b:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572' unused=i40e,uio_pci_generic
0000:3b:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572' unused=i40e,uio_pci_generic

[root@localhost dpdk]# dmesg
[   60.590981] Generic UIO driver for PCI 2.3 devices version: 0.01.0
[   61.820558] i40e 0000:3b:00.0: removed PHC from p1p1
[   62.037680] i40e 0000:3b:00.1: removed PHC from p1p2
[root@localhost dpdk]# dmesg | grep UIO
[   60.590981] Generic UIO driver for PCI 2.3 devices version: 0.01.0
[root@localhost dpdk]# dmesg | grep uio
[root@localhost dpdk]#

Thank you!
Regards,
gencli