> On 25 Jan 2017, at 18:03, Yichen Wang (yicwang) <yicw...@cisco.com> wrote:
> 
> Yes, I did!

Can you share the output of “show pci” from the VPP debug CLI?
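
For reference, assuming vppctl is on the PATH, something like:

  vppctl show pci

(or just “show pci” at the vpp# prompt) should list the PCI devices VPP sees.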


> 
> Regards,
> Yichen
> 
> On Jan 25, 2017, at 07:12, Damjan Marion <dmarion.li...@gmail.com> wrote:
> 
>> 
>>> On 25 Jan 2017, at 05:41, Yichen Wang (yicwang) <yicw...@cisco.com> wrote:
>>> 
>>> Hi, VPP guys,
>>> I have a RHEL 7.3 setup with Intel X710, and want to bring VPP 17.01 on top 
>>> of it. Among the three DPDK drivers:
>>> (1) uio_pci_generic is not supported on X710 
>>> (http://dpdk.org/dev/patchwork/patch/19820/), and the driver bind failed;
>>> (2) igb_uio works perfectly, but does not come with the RHEL 7.3 kernel 
>>> directly; it has to be built from source;
>>> (3) vfio-pci, which is the only option left (see the setup sketch after 
>>> this list).
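>>> For (3), vfio-pci needs the IOMMU enabled; a minimal setup sketch, assuming 
>>> an Intel box (exact parameters may vary by distro):
>>> 
>>>   # add to the kernel command line, then reboot
>>>   intel_iommu=on iommu=pt
>>>   # load the vfio-pci module
>>>   modprobe vfio-pci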
>>> According to 
>>> https://wiki.fd.io/view/VPP/Command-line_Arguments#.22dpdk.22_parameters, 
>>> vfio-pci should be supported. However, when I bring it up, VPP complains:
>>> EAL: Detected 72 lcore(s)
>>> EAL: No free hugepages reported in hugepages-1048576kB
>>> EAL: Probing VFIO support...
>>> EAL: VFIO support initialized
>>> EAL: Initializing pmd_bond for eth_bond0
>>> EAL: Create bonded device eth_bond0 on port 0 in mode 2 on socket 0.
>>> EAL: PCI device 0000:16:00.1 on NUMA socket 0
>>> EAL:   probe driver: 8086:1572 net_i40e
>>> EAL:   0000:16:00.1 not managed by VFIO driver, skipping
>>> EAL: PCI device 0000:81:00.1 on NUMA socket 1
>>> EAL:   probe driver: 8086:1572 net_i40e
>>> EAL:   0000:81:00.1 not managed by VFIO driver, skipping
>>> DPDK physical memory layout:
>>> Segment 0: phys:0x4b800000, len:534773760, virt:0x7f9c41a00000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
>>> Segment 1: phys:0x5f7a800000, len:534773760, virt:0x7f5c6f200000, socket_id:1, hugepage_sz:2097152, nchannel:0, nrank:0
>>> PMD: bond_ethdev_parse_slave_port_kvarg(142) - Invalid slave port value (0000:16:00.1) specified
>>> EAL: Failed to parse slave ports for bonded device eth_bond0
>>> Apparently VPP is not recognizing the interfaces bound to vfio-pci, so it 
>>> cannot set up the bond afterwards. However, I do have those interfaces 
>>> bound to vfio-pci already; here is the output from dpdk-devbind.py:
>>> [root@sjc04-pod6-compute-4 tools]# ./dpdk-devbind.py --status
>>> Network devices using DPDK-compatible driver
>>> ============================================
>>> 0000:16:00.1 'Ethernet Controller X710 for 10GbE SFP+' drv=vfio-pci unused=i40e
>>> 0000:81:00.1 'Ethernet Controller X710 for 10GbE SFP+' drv=vfio-pci unused=i40e
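>>> For reference, the bind itself was done roughly like this (same 
>>> dpdk-devbind.py from the DPDK tools directory):
>>> 
>>>   ./dpdk-devbind.py --bind=vfio-pci 0000:16:00.1 0000:81:00.1
>>>   ./dpdk-devbind.py --status   # verify drv=vfio-pci as shown above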
>>> Has vfio-pci ever been tested on X710 before, or did I miss anything? 
>>> Appreciate your help!
>>> Thanks very much!
>>> Regards,
>>> Yichen
>> 
>> Have you specified:
>> 
>> dpdk {
>>   uio-driver vfio-pci
>> }
>> 
>> in startup.conf?
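>> 
>> For the eth_bond0 device in your log, the dpdk section would look roughly 
>> like this (a sketch; the vdev arguments are handed straight to DPDK's 
>> bonding PMD, so double-check the slave syntax against your DPDK version):
>> 
>> dpdk {
>>   uio-driver vfio-pci
>>   dev 0000:16:00.1
>>   dev 0000:81:00.1
>>   vdev eth_bond0,mode=2,slave=0000:16:00.1,slave=0000:81:00.1
>> }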
>> 
>> 
>> 

_______________________________________________
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
