> -----Original Message-----
> From: Knut Omang <knut.om...@oracle.com>
> Sent: Monday, April 1, 2019 5:24 PM
> To: Elijah Shakkour <elija...@mellanox.com>; Peter Xu <pet...@redhat.com>
> Cc: Michael S. Tsirkin <m...@redhat.com>; Alex Williamson
> <alex.william...@redhat.com>; Marcel Apfelbaum <marcel.apfelb...@gmail.com>;
> Stefan Hajnoczi <stefa...@gmail.com>; qemu-devel@nongnu.org
> Subject: Re: QEMU and vIOMMU support for emulated VF passthrough to
> nested (L2) VM
>
> On Mon, 2019-04-01 at 14:01 +0000, Elijah Shakkour wrote:
> >
> > > -----Original Message-----
> > > From: Peter Xu <pet...@redhat.com>
> > > Sent: Monday, April 1, 2019 1:25 PM
> > > To: Elijah Shakkour <elija...@mellanox.com>
> > > Cc: Knut Omang <knut.om...@oracle.com>; Michael S. Tsirkin
> > > <m...@redhat.com>; Alex Williamson <alex.william...@redhat.com>;
> > > Marcel Apfelbaum <marcel.apfelb...@gmail.com>; Stefan Hajnoczi
> > > <stefa...@gmail.com>; qemu-devel@nongnu.org
> > > Subject: Re: QEMU and vIOMMU support for emulated VF passthrough to
> > > nested (L2) VM
> > >
> > > On Mon, Apr 01, 2019 at 09:12:38AM +0000, Elijah Shakkour wrote:
> > > >
> > > > > -----Original Message-----
> > > > > From: Peter Xu <pet...@redhat.com>
> > > > > Sent: Monday, April 1, 2019 5:47 AM
> > > > > To: Elijah Shakkour <elija...@mellanox.com>
> > > > > Cc: Knut Omang <knut.om...@oracle.com>; Michael S. Tsirkin
> > > > > <m...@redhat.com>; Alex Williamson <alex.william...@redhat.com>;
> > > > > Marcel Apfelbaum <marcel.apfelb...@gmail.com>; Stefan Hajnoczi
> > > > > <stefa...@gmail.com>; qemu-devel@nongnu.org
> > > > > Subject: Re: QEMU and vIOMMU support for emulated VF passthrough to
> > > > > nested (L2) VM
> > > > >
> > > > > On Sun, Mar 31, 2019 at 11:15:00AM +0000, Elijah Shakkour wrote:
> > > > >
> > > > > [...]
> > > > >
> > > > > > I didn't have DMA nor MMIO read/write working with my old
> > > > > > command line.
> > > > > > But, when I removed all CPU flags and only provided "-cpu
> > > > > > host", I see that MMIO works.
> > > > > > Still, DMA read/write from the emulated device doesn't work for
> > > > > > the VF. For example:
> > > > > > The driver provides me a buffer pointer through an MMIO write;
> > > > > > this address (pointer) is an L2 GPA, and when I try to call
> > > > > > pci_dma_read() with this address I get:
> > > > > > "
> > > > > > Unassigned mem read 0000000000000000
> > > > > > "
> > > > >
> > > > > I don't know where this error log was dumped, but if it's during
> > > > > DMA then I agree it can probably be related to the vIOMMU.
> > > >
> > > > This log is dumped from:
> > > > memory.c: unassigned_mem_read()
> > > >
> > > > > > As I said, my problem now is in the translation of the L2 GPA
> > > > > > provided by the driver, when I call DMA read/write for this
> > > > > > address from the VF.
> > > > > > Any insights?
> > > > >
> > > > > I just noticed that you were using QEMU 2.12 [1]. If that's the
> > > > > case, please rebase to the latest QEMU, at least >=3.0, because
> > > > > there was a major refactor of the shadow logic during the 3.0
> > > > > devel cycle AFAICT.
> > > >
> > > > Rebased to QEMU 3.1.
> > > > Now I see the address I'm trying to read from in the log, but still
> > > > the same error:
> > > > "
> > > > Unassigned mem read 00000000f0481000
> > > > "
> > > > What do you suggest?
> > >
> > > Would you please answer the questions that Knut asked? Is it
> > > working for the L1 guest? How about the PF?
> >
> > Both VF and PF are working for the L1 guest.
> > I don't know how to pass through a PF to a nested VM in Hyper-V.
>
> On Linux, passing through VFs and PFs is the same.
> Maybe you can try passthrough with all Linux first (first PF, then VF)?
>
> > I don't invoke the VF manually in Hyper-V and pass it through to the
> > nested VM. I use Hyper-V Manager to configure and provide a VF for the
> > nested VM (I can see the VF only in the nested VM).
> >
> > Did anyone try to run an emulated device in Linux (RHEL) as the nested
> > L2 where L1 is Windows Hyper-V? Does DMA read/write work for this
> > emulated device in that case?
>
> I have never tried that; I have only used Linux as L2. Windows might be
> pickier about what it expects, so starting with Linux to rule that out is
> probably a good idea.
Will move to this solution after I/we give up 😊

> > > > > > You can also try to enable the VT-d device log by appending:
> > > > > >
> > > > > > -trace enable="vtd_*"
> > > > > >
> > > > > > In case it dumps anything useful for you.

Here is the relevant dump (dev 01:00.01 is my VF):
"
vtd_inv_desc_cc_device context invalidate device 01:00.01
vtd_ce_not_present Context entry bus 1 devfn 1 not present
vtd_switch_address_space Device 01:00.1 switching address space (iommu enabled=1)
vtd_ce_not_present Context entry bus 1 devfn 1 not present
vtd_err Detected invalid context entry when trying to sync shadow page table
vtd_iotlb_cc_update IOTLB context update bus 0x1 devfn 0x1 high 0x102 low 0x2d007003 gen 0 -> gen 2
vtd_err_dmar_slpte_resv_error iova 0xf08e7000 level 2 slpte 0x2a54008f7
vtd_fault_disabled Fault processing disabled for context entry
vtd_err_dmar_translate dev 01:00.01 iova 0x0
Unassigned mem read 00000000f08e7000
"
What do you conclude from this dump?

> > Is there a way to have those traces dumped to stdout/stderr on the
> > fly, instead of going through dtrace?
>
> It's up to you what tracer(s) to configure when you build QEMU - check out
> docs/devel/tracing.txt. There are a few trace events defined in the SR/IOV
> patch set; you might want to enable them as well.
>
> Knut
>
> > > --
> > > Peter Xu
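[Editor's note on the stdout/stderr question above: whether trace events reach stderr depends on the trace backend QEMU was built with, per docs/devel/tracing.txt. A minimal sketch for the QEMU 3.x era follows; the target list and the trailing `...` placeholders stand in for the rest of the VM configuration, which the thread does not show.]

```shell
# Build QEMU with the "log" trace backend: enabled trace points are then
# printed via qemu_log() (stderr by default), no dtrace needed.
./configure --target-list=x86_64-softmmu --enable-trace-backends=log
make -j"$(nproc)"

# Enable the Intel IOMMU trace points directly on the command line:
qemu-system-x86_64 -trace "enable=vtd_*" ...

# Or list event patterns in a file, and redirect the log output to a
# file with -D instead of stderr:
echo 'vtd_*' > /tmp/events
qemu-system-x86_64 -trace events=/tmp/events -D /tmp/qemu-trace.log ...
```

With the dtrace backend, by contrast, the events become static probes and have to be collected with an external tracer rather than appearing on stderr.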