Spent hours on it, and finally root-caused it: VPP works fine with vfio-pci outside of a container, while I was running it inside of a Docker container…

I am still investigating why running inside vs. outside the container makes any difference, but I would appreciate your help again if you happen to know any context on it.
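One data point from my digging so far: for vfio-pci to work inside a container, the /dev/vfio device nodes for the NIC's IOMMU groups and the hugepage mount have to be visible in the container. A rough sketch of what I believe the container needs, assuming Docker (the image name is a placeholder):

# on the host: one character device per IOMMU group should exist
ls -l /dev/vfio/

# expose the vfio nodes and hugepages to the container; --privileged
# exposes all host devices, or pass each group with --device instead
docker run -it --privileged \
  -v /dev/hugepages:/dev/hugepages \
  my-vpp-image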

Thanks very much!

Regards,
Yichen

From: Damjan Marion <dmarion.li...@gmail.com>
Date: Wednesday, January 25, 2017 at 10:39
To: "Yichen Wang (yicwang)" <yicw...@cisco.com>
Cc: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>, "Ian Wells (iawells)" 
<iawe...@cisco.com>
Subject: Re: [vpp-dev] VPP 17.01 on VFIO-PCI driver



On 25 Jan 2017, at 18:59, Yichen Wang (yicwang) <yicw...@cisco.com> wrote:

Here is the output; sorry for the long list, as I have a lot of Cisco vNICs and Intel NICs in my setup… I've highlighted the ones I am using for VPP.

vpp# show pci
Address      Socket VID:PID     Link Speed     Driver              Product Name
0000:01:00.0   0    8086:1521   5.0 GT/s x4    igb                 Cisco I350-TX 1Gig LOM
0000:01:00.1   0    8086:1521   5.0 GT/s x4    igb                 Cisco I350-TX 1Gig LOM
0000:09:00.0   0    1137:0043   5.0 GT/s x16   enic
0000:10:00.0   0    1137:0043   5.0 GT/s x16   enic
0000:11:00.0   0    1137:0043   5.0 GT/s x16   enic
0000:12:00.0   0    1137:0043   5.0 GT/s x16   enic
0000:81:00.0   1    8086:1572   8.0 GT/s x8    i40e                Cisco(R) Ethernet Converged NIC X710-DA4
0000:81:00.1   1    8086:1572   8.0 GT/s x8    vfio-pci            Cisco(R) Ethernet Converged NIC X710-DA4
0000:81:00.2   1    8086:1572   8.0 GT/s x8    i40e                Cisco(R) Ethernet Converged NIC X710-DA4
0000:81:00.3   1    8086:1572   8.0 GT/s x8    i40e                Cisco(R) Ethernet Converged NIC X710-DA4
0000:81:0a.0   1    8086:154c   unknown        pci-stub
0000:81:0a.1   1    8086:154c   unknown        pci-stub
0000:81:0a.2   1    8086:154c   unknown        pci-stub
0000:81:0a.3   1    8086:154c   unknown        pci-stub
0000:81:0a.4   1    8086:154c   unknown        pci-stub
0000:81:0a.5   1    8086:154c   unknown        pci-stub
0000:81:0a.6   1    8086:154c   unknown        pci-stub
0000:81:0a.7   1    8086:154c   unknown        pci-stub
0000:81:0b.0   1    8086:154c   unknown        pci-stub
0000:81:0b.1   1    8086:154c   unknown        pci-stub
0000:81:0b.2   1    8086:154c   unknown        pci-stub
0000:81:0b.3   1    8086:154c   unknown        pci-stub
0000:81:0b.4   1    8086:154c   unknown        pci-stub
0000:81:0b.5   1    8086:154c   unknown        pci-stub
0000:81:0b.6   1    8086:154c   unknown        pci-stub
0000:81:0b.7   1    8086:154c   unknown        pci-stub
0000:13:00.0   0    1137:0043   5.0 GT/s x16   enic
0000:14:00.0   0    1137:0043   5.0 GT/s x16   enic
0000:16:00.0   0    8086:1572   8.0 GT/s x8    i40e                Cisco(R) Ethernet Converged NIC X710-DA4
0000:16:00.1   0    8086:1572   8.0 GT/s x8    vfio-pci            Cisco(R) Ethernet Converged NIC X710-DA4
0000:16:00.2   0    8086:1572   8.0 GT/s x8    i40e                Cisco(R) Ethernet Converged NIC X710-DA4
0000:16:00.3   0    8086:1572   8.0 GT/s x8    i40e                Cisco(R) Ethernet Converged NIC X710-DA4
0000:16:0a.0   0    8086:154c   unknown        pci-stub
0000:16:0a.1   0    8086:154c   unknown        pci-stub
0000:16:0a.2   0    8086:154c   unknown        pci-stub
0000:16:0a.3   0    8086:154c   unknown        pci-stub
0000:16:0a.4   0    8086:154c   unknown        pci-stub
0000:16:0a.5   0    8086:154c   unknown        pci-stub
0000:16:0a.6   0    8086:154c   unknown        pci-stub
0000:16:0a.7   0    8086:154c   unknown        pci-stub
0000:16:0b.0   0    8086:154c   unknown        pci-stub
0000:16:0b.1   0    8086:154c   unknown        pci-stub
0000:16:0b.2   0    8086:154c   unknown        pci-stub
0000:16:0b.3   0    8086:154c   unknown        pci-stub
0000:16:0b.4   0    8086:154c   unknown        pci-stub
0000:16:0b.5   0    8086:154c   unknown        pci-stub
0000:16:0b.6   0    8086:154c   unknown        pci-stub
0000:16:0b.7   0    8086:154c   unknown        pci-stub
0000:0a:00.0   0    1137:0043   5.0 GT/s x16   enic
0000:0d:00.0   0    1137:0043   5.0 GT/s x16   enic
0000:0e:00.0   0    1137:0043   5.0 GT/s x16   enic
0000:0f:00.0   0    1137:0043   5.0 GT/s x16   enic

The VPP console comes up, but “show int” shows no interfaces.

This is an interesting problem. Have you tried testpmd?

The error message you're getting is coming straight from DPDK, and this device looks properly bound to vfio-pci.
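A quick sanity check, assuming the DPDK 16.11 testpmd that ships alongside VPP 17.01 (the coremask, channel count, and binary path are placeholders for your box):

# drive a single vfio-pci bound port straight from DPDK,
# whitelisting just that device
./testpmd -c 0x3 -n 4 -w 0000:16:00.1 -- -i
# then at the testpmd> prompt: "show port info 0" and "start"

If testpmd can't open the port either, the problem is below VPP.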


Thanks very much!

Regards,
Yichen

From: Damjan Marion <dmarion.li...@gmail.com>
Date: Wednesday, January 25, 2017 at 09:38
To: "Yichen Wang (yicwang)" <yicw...@cisco.com>
Cc: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>, "Ian Wells (iawells)" 
<iawe...@cisco.com>
Subject: Re: [vpp-dev] VPP 17.01 on VFIO-PCI driver


On 25 Jan 2017, at 18:03, Yichen Wang (yicwang) <yicw...@cisco.com> wrote:

Yes, I did!

Can you share the output of “show pci” from the VPP debug CLI?





Regards,
Yichen

On Jan 25, 2017, at 07:12, Damjan Marion <dmarion.li...@gmail.com> wrote:

On 25 Jan 2017, at 05:41, Yichen Wang (yicwang) <yicw...@cisco.com> wrote:

Hi, VPP guys,

I have a RHEL 7.3 setup with Intel X710, and want to bring up VPP 17.01 on top of it. Among the three DPDK drivers:
(1) uio_pci_generic is not supported on X710 (http://dpdk.org/dev/patchwork/patch/19820/), and the driver bind failed;
(2) igb_uio works perfectly, but does not ship with the RHEL 7.3 kernel; it has to be built from source (a sketch follows this list);
(3) vfio-pci, which is the only option left.
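For (2), this is roughly what building igb_uio from the DPDK 16.11 sources looks like (the target name is the usual DPDK default, not anything RHEL-specific):

# build DPDK; the kernel module lands in <target>/kmod/
make install T=x86_64-native-linuxapp-gcc
# igb_uio depends on the in-tree uio module
modprobe uio
insmod x86_64-native-linuxapp-gcc/kmod/igb_uio.ko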
According to https://wiki.fd.io/view/VPP/Command-line_Arguments#.22dpdk.22_parameters, vfio-pci should be supported. However, when I bring it up, VPP complains:
EAL: Detected 72 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: Initializing pmd_bond for eth_bond0
EAL: Create bonded device eth_bond0 on port 0 in mode 2 on socket 0.
EAL: PCI device 0000:16:00.1 on NUMA socket 0
EAL:   probe driver: 8086:1572 net_i40e
EAL:   0000:16:00.1 not managed by VFIO driver, skipping
EAL: PCI device 0000:81:00.1 on NUMA socket 1
EAL:   probe driver: 8086:1572 net_i40e
EAL:   0000:81:00.1 not managed by VFIO driver, skipping
DPDK physical memory layout:
Segment 0: phys:0x4b800000, len:534773760, virt:0x7f9c41a00000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: phys:0x5f7a800000, len:534773760, virt:0x7f5c6f200000, socket_id:1, hugepage_sz:2097152, nchannel:0, nrank:0
PMD: bond_ethdev_parse_slave_port_kvarg(142) - Invalid slave port value (0000:16:00.1) specified
EAL: Failed to parse slave ports for bonded device eth_bond0
Apparently VPP is not recognizing the interfaces bound to vfio-pci, so it couldn't set up the bonding afterwards. However, I do have those interfaces bound to vfio-pci already; here is the output from dpdk-devbind.py:
[root@sjc04-pod6-compute-4 tools]# ./dpdk-devbind.py --status
Network devices using DPDK-compatible driver
============================================
0000:16:00.1 'Ethernet Controller X710 for 10GbE SFP+' drv=vfio-pci unused=i40e
0000:81:00.1 'Ethernet Controller X710 for 10GbE SFP+' drv=vfio-pci unused=i40e
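For completeness, the binding itself was done with the stock script, along these lines:

modprobe vfio-pci
./dpdk-devbind.py --bind=vfio-pci 0000:16:00.1 0000:81:00.1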
Have we ever tested vfio-pci on X710 before, or did I miss anything? I appreciate your help!
Thanks very much!
Regards,
Yichen

Have you specified:

dpdk {
  uio-driver vfio-pci
}

in startup.conf?
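If you also want the bond created from startup.conf, a sketch of the full dpdk stanza would look something like this (PCI addresses taken from your log; mode 2 matches the "mode 2" in the EAL output):

dpdk {
  uio-driver vfio-pci
  dev 0000:16:00.1
  dev 0000:81:00.1
  vdev eth_bond0,mode=2,slave=0000:16:00.1,slave=0000:81:00.1
}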








_______________________________________________
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
