On 10-Feb-18 5:53 PM, Ravi Kerur wrote:


On Sat, Feb 10, 2018 at 2:58 AM, Burakov, Anatoly <anatoly.bura...@intel.com> wrote:

    On 29-Jan-18 10:35 PM, Ravi Kerur wrote:

        Hi Burakov,

        When using vfio-pci on the host, both the VF and PF interfaces
        work fine with DPDK, i.e. I don't see DMAR fault messages anymore.
        However, when I attach a VF interface to a VM and start DPDK with
        vfio-pci inside the VM, I still see DMAR fault messages on the
        host. Both host and VM are booted with 'intel_iommu=on' on GRUB.
        Ping from the VM with DPDK/vfio-pci doesn't work (I think that's
        expected because of the DMAR faults); however, when the VF
        interface uses the ixgbevf driver, ping works.
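
        For reference, the vfio-pci binding on both host and VM was done
        along these lines (the PCI address below is the VF as seen inside
        the VM; adjust for your setup):

            modprobe vfio-pci
            dpdk-devbind --bind=vfio-pci 0000:00:07.0
            dpdk-devbind -s    # confirm the device shows under vfio-pci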

        Following are some details

        /*****************On VM***************/
        dpdk-devbind -s

        Network devices using DPDK-compatible driver
        ============================================
        0000:00:07.0 '82599 Ethernet Controller Virtual Function'
        drv=vfio-pci unused=ixgbevf

        Network devices using kernel driver
        ===================================
        0000:03:00.0 'Device 1041' if=eth0 drv=virtio-pci
        unused=vfio-pci *Active*
        0000:04:00.0 'Device 1041' if=eth1 drv=virtio-pci unused=vfio-pci
        0000:05:00.0 'Device 1041' if=eth2 drv=virtio-pci unused=vfio-pci

        Other network devices
        =====================
        <none>

        Crypto devices using DPDK-compatible driver
        ===========================================
        <none>

        Crypto devices using kernel driver
        ==================================
        <none>

        Other crypto devices
        ====================
        <none>


        00:07.0 Ethernet controller: Intel Corporation 82599 Ethernet
        Controller Virtual Function (rev 01)
                  Subsystem: Intel Corporation 82599 Ethernet Controller
        Virtual Function
                  Control: I/O- Mem+ BusMaster- SpecCycle- MemWINV-
        VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
                  Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast
         >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
                  Region 0: Memory at fda00000 (64-bit, prefetchable)
        [size=16K]
                  Region 3: Memory at fda04000 (64-bit, prefetchable)
        [size=16K]
                  Capabilities: [70] MSI-X: Enable+ Count=3 Masked-
                          Vector table: BAR=3 offset=00000000
                          PBA: BAR=3 offset=00002000
                  Capabilities: [a0] Express (v1) Root Complex
        Integrated Endpoint, MSI 00
                          DevCap: MaxPayload 128 bytes, PhantFunc 0
                                  ExtTag- RBE-
                          DevCtl: Report errors: Correctable- Non-Fatal-
        Fatal- Unsupported-
                                  RlxdOrd- ExtTag- PhantFunc- AuxPwr-
        NoSnoop-
                                  MaxPayload 128 bytes, MaxReadReq 128 bytes
                          DevSta: CorrErr- UncorrErr- FatalErr-
        UnsuppReq- AuxPwr- TransPend-
                  Capabilities: [100 v1] Advanced Error Reporting
                          UESta:  DLP- SDES- TLP- FCP- CmpltTO-
        CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                          UEMsk:  DLP- SDES- TLP- FCP- CmpltTO-
        CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                          UESvrt: DLP- SDES- TLP- FCP- CmpltTO-
        CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                          CESta:  RxErr- BadTLP- BadDLLP- Rollover-
        Timeout- NonFatalErr-
                          CEMsk:  RxErr- BadTLP- BadDLLP- Rollover-
        Timeout- NonFatalErr-
                          AERCap: First Error Pointer: 00, GenCap-
        CGenEn- ChkCap- ChkEn-
                  Kernel driver in use: vfio-pci
                  Kernel modules: ixgbevf

        /***************on Host*************/
        dmesg | grep DMAR
        ...
        [  978.268143] DMAR: DRHD: handling fault status reg 2
        [  978.268147] DMAR: [DMA Read] *Request device [04:10.0]* fault
        addr 33a128000 [fault reason 06] PTE Read access is not set
        [ 1286.677726] DMAR: DRHD: handling fault status reg 102
        [ 1286.677730] DMAR: [DMA Read] Request device [04:10.0] fault
        addr fb663000 [fault reason 06] PTE Read access is not set
        [ 1676.436145] DMAR: DRHD: handling fault status reg 202
        [ 1676.436149] DMAR: [DMA Read] Request device [04:10.0] fault
        addr 33a128000 [fault reason 06] PTE Read access is not set
        [ 1734.433649] DMAR: DRHD: handling fault status reg 302
        [ 1734.433652] DMAR: [DMA Read] Request device [04:10.0] fault
        addr 33a128000 [fault reason 06] PTE Read access is not set
        [ 2324.428938] DMAR: DRHD: handling fault status reg 402
        [ 2324.428942] DMAR: [DMA Read] Request device [04:10.0] fault
        addr 7770c000 [fault reason 06] PTE Read access is not set
        [ 2388.553640] DMAR: DRHD: handling fault status reg 502
        [ 2388.553643] DMAR: [DMA Read] *Request device [04:10.0]* fault
        addr 33a128000 [fault reason 06] PTE Read access is not set



    Going back to this, I would like to suggest running a few tests to
    ensure that we gather all the information we can.

    First of all, I'm assuming that you're using the native ixgbe Linux
    driver on the host, and that you're only passing through the VF
    device to the VM using VFIO. Is my understanding correct here?
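
    (For clarity, by "passing through with VFIO" I mean attaching the VF
    to the guest via QEMU's vfio-pci device, e.g. something along these
    lines, using the VF address from your dmesg output:

        -device vfio-pci,host=04:10.0

    or the libvirt equivalent of that.)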

    Now, let's forget about iommu=pt and igb_uio for a moment. Boot both
    your host and your VM with iommu=on and intel_iommu=on (or whatever
    command line enables full IOMMU support on both host and guest) and
    do the same tests you've done before. Do you still see your issues?
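
    (On most distributions this just means adding the parameters to
    GRUB_CMDLINE_LINUX in /etc/default/grub, for example

        GRUB_CMDLINE_LINUX="... intel_iommu=on iommu=on"

    and then regenerating the GRUB config and rebooting - e.g. update-grub
    on Debian/Ubuntu, or grub2-mkconfig -o /boot/grub2/grub.cfg on
    RHEL/CentOS - on both the host and the guest.)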

    It would also be very useful to try the native Linux kernel driver
    on the guest *with traffic forwarding* and see how it works in your
    VM. To that end, I would suggest you compile DPDK with PCAP support,
    bind your (VM) interface to the native Linux driver, and use the
    interface via our pcap driver (creating a vdev should do the trick -
    please refer to the PCAP PMD documentation [1]). A simple forwarding
    test should be enough - just make sure to pass traffic to and from
    DPDK in both cases, and check that it doesn't give you any DMAR
    errors.
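
    (A minimal sketch of what I mean, assuming the make-based build and
    assuming the VF shows up as, say, eth3 inside the VM once it's bound
    back to ixgbevf - please check the PCAP PMD guide for the exact vdev
    arguments:

        # enable the PCAP PMD in the DPDK build config
        CONFIG_RTE_LIBRTE_PMD_PCAP=y

        # run testpmd over the kernel interface via a pcap vdev
        testpmd -l 0-1 -n 4 --no-pci --vdev 'net_pcap0,iface=eth3' -- -i
        testpmd> start

    This keeps DPDK in the datapath while the kernel driver programs the
    device, which is exactly the comparison we want.)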

    We can go from there.


Let me just summarize what has been tested and the working/non-working scenarios; some of your questions might get answered as well. The test bed is very simple: two VFs are created under an ixgbe PF on the host, with one VF interface added to an OVS bridge on the host and the other VF interface given to the guest. Connectivity between the VFs is tested via ping.
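
For reference, the VFs and the bridge were set up roughly along these lines (the interface and bridge names below are placeholders, not the exact names from my setup):

    echo 2 > /sys/class/net/<pf-interface>/device/sriov_numvfs
    ovs-vsctl add-port <ovs-bridge> <host-vf-interface>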

Host and guest -- Kernel 4.9
Host -- Qemu 2.11.50 (tried both released 2.11 and tip of the git (2.11.50))
DPDK -- 17.05.1 on host and guest
Host and guest -- booted with GRUB intel_iommu=on (which enables the IOMMU). I have tried "iommu=on and intel_iommu=on" as well, but iommu=on is not needed when intel_iommu=on is set.

Test-scenario-1: Host -- ixgbevf driver, Guest -- ixgbevf driver: ping works
Test-scenario-2: Host -- DPDK vfio-pci driver, Guest -- ixgbevf driver: ping works
Test-scenario-3: Host -- DPDK vfio-pci driver, Guest -- DPDK vfio-pci driver: DMAR errors seen on host, ping doesn't work

OK, that makes it clearer, thanks. Does the third scenario work in other DPDK versions?


DPDK works fine on the host with vfio-pci; however, it has issues when used inside the guest. Please let me know if more information is needed.

Thanks,
Ravi

    [1] http://dpdk.org/doc/guides/nics/pcap_ring.html

    --
    Thanks,
    Anatoly




--
Thanks,
Anatoly
