Hi Tao,

On 23/05/2016 14:39, DING, TAO wrote:
> Hello dpdk dev,
>
> Do you know if vfio_pci can be bound to a network interface from within a 
> Red Hat virtual machine? I read in the docs that igb_uio should not be 
> used because it is not stable 
> (http://people.redhat.com/~pmatilai/dpdk-guide/index.html); however, I 
> cannot use the vfio_pci driver from inside the VM.
>
> Currently I am working on a project migrating a network packet capture 
> application into virtual machines so that it can be hosted in the cloud. 
> My intent is to use SR-IOV to ensure data is sent from the physical NIC 
> to the vNIC at line speed, and to use DPDK inside the VM to read data 
> from the vNIC for good performance, because libpcap does not perform 
> well inside a VM.
>
> Following the DPDK instructions, I was able to set up SR-IOV and bind 
> vfio_pci to the Virtual Functions on the host. Once the VM starts, the 
> Virtual Functions bind to vfio-pci automatically on the host. The 
> following is the output from the host.
> Option: 22
>
>
> Network devices using DPDK-compatible driver
> ============================================
> 0000:04:10.4 'X540 Ethernet Controller Virtual Function' drv=vfio-pci unused=
> 0000:04:10.6 'X540 Ethernet Controller Virtual Function' drv=vfio-pci unused=
> 0000:04:11.4 'X540 Ethernet Controller Virtual Function' drv=vfio-pci unused=
> 0000:04:11.6 'X540 Ethernet Controller Virtual Function' drv=vfio-pci unused=
>
> Network devices using kernel driver
> ===================================
> 0000:01:00.0 'NetXtreme BCM5720 Gigabit Ethernet PCIe' if=em1 drv=tg3 unused=vfio-pci
> 0000:01:00.1 'NetXtreme BCM5720 Gigabit Ethernet PCIe' if=em2 drv=tg3 unused=vfio-pci
> 0000:02:00.0 'NetXtreme BCM5720 Gigabit Ethernet PCIe' if=em3 drv=tg3 unused=vfio-pci
> 0000:02:00.1 'NetXtreme BCM5720 Gigabit Ethernet PCIe' if=em4 drv=tg3 unused=vfio-pci
> 0000:04:00.0 'Ethernet Controller 10-Gigabit X540-AT2' if=p3p1 drv=ixgbe unused=vfio-pci *Active*
> 0000:04:00.1 'Ethernet Controller 10-Gigabit X540-AT2' if=p3p2 drv=ixgbe unused=vfio-pci
> 0000:04:10.0 'X540 Ethernet Controller Virtual Function' if=p3p1_0 drv=ixgbevf unused=vfio-pci
> 0000:04:10.2 'X540 Ethernet Controller Virtual Function' if=p3p1_1 drv=ixgbevf unused=vfio-pci
> 0000:04:11.0 'X540 Ethernet Controller Virtual Function' if=p3p1_4 drv=ixgbevf unused=vfio-pci
> 0000:04:11.2 'X540 Ethernet Controller Virtual Function' if=p3p1_5 drv=ixgbevf unused=vfio-pci
>
> I repeated the same setup within the VM, which has 4 Virtual Functions 
> assigned to it, but I could not bind any of the network devices to 
> vfio-pci. I followed various suggestions from the web, but no luck. 
> (However, I was able to bind the UIO driver to the network devices 
> inside the VM.) One difference I noticed between the VM and the host is 
> the IOMMU state: on the host, /sys/kernel/iommu_groups/ is NOT empty, 
> but on the VM it is empty. I rebooted the VM several times; still no luck.

AFAIK VFIO is not supported in a guest.
https://lists.gnu.org/archive/html/qemu-devel/2015-11/msg04284.html
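
That matches what you are seeing: with no IOMMU exposed to the guest, 
/sys/kernel/iommu_groups/ stays empty, so vfio-pci has nothing to attach 
to. A quick check inside the VM (empty output means the guest kernel 
sees no IOMMU groups):

    find /sys/kernel/iommu_groups/ -mindepth 1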

So you are left with two options, VFIO no-IOMMU or igb_uio, neither of 
them safe.
If you have Linux kernel 4.5+ and DPDK 16.04+, you could use VFIO 
no-IOMMU inside the VM. Otherwise, you are left with igb_uio.
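
For reference, a minimal sketch of the no-IOMMU route inside the VM 
(assuming the guest kernel was built with CONFIG_VFIO_NOIOMMU; the PCI 
address is just an example, substitute one of your VFs):

    # Load vfio in unsafe no-IOMMU mode, then vfio-pci on top of it
    modprobe vfio enable_unsafe_noiommu_mode=1
    modprobe vfio-pci
    # Bind the VF with the script shipped in DPDK 16.04
    ./tools/dpdk_nic_bind.py --bind=vfio-pci 0000:00:04.0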
IMHO the main difference is that igb_uio is an out-of-tree kernel module.
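
For the igb_uio route, the usual sequence is roughly the following 
(again a sketch; the .ko path depends on your build target):

    modprobe uio
    insmod x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
    ./tools/dpdk_nic_bind.py --bind=igb_uio 0000:00:04.0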

Sergio
