Hi Alex, thanks for the detailed explanation, it clarifies things a lot for me. I read vfio_listener_region_add() more carefully. It seems to check every memory region section against the container's host windows, and for a Type1 (IOMMUv1) vfio device the host window is always the full 64-bit range (vfio_host_win_add(container, 0, (hwaddr)-1, info.iova_pgsizes); in vfio_connect_container()). So essentially every guest RAM region will be pinned and mapped into the host IOMMU. Is that understanding right?
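To check my reading, below is a tiny model of the check as I understand it. This is not the actual QEMU code; the names (host_win, section_fits, map_guest_ram, etc.) are made up for illustration, only the vfio_host_win_add() arguments are taken from vfio_connect_container().

/*
 * Simplified model (not QEMU's actual code) of the check I believe
 * vfio_listener_region_add() performs: the section's IOVA range must
 * fall inside one of the container's host windows before it is handed
 * to vfio_dma_map().
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct host_win {
    uint64_t min_iova;
    uint64_t max_iova;
};

/* For a Type1 (IOMMUv1) container QEMU adds a single full-range window:
 * vfio_host_win_add(container, 0, (hwaddr)-1, info.iova_pgsizes) */
static const struct host_win type1_win = { 0, UINT64_MAX };

static bool section_fits(const struct host_win *w,
                         uint64_t iova, uint64_t size)
{
    return iova >= w->min_iova && iova + size - 1 <= w->max_iova;
}

int main(void)
{
    /* Any guest RAM section, e.g. 2 MiB at guest-physical 0x100000,
     * trivially fits the full-range window, so it would be mapped
     * (and therefore pinned) up front. */
    uint64_t iova = 0x100000, size = 2 << 20;
    printf("fits: %d\n", section_fits(&type1_win, iova, size));
    return 0;
}

With min_iova = 0 and max_iova = (hwaddr)-1 every RAM section trivially fits, which is why I assume all of guest RAM ends up mapped (and pinned) up front.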
Thanks.

On Sun, Jul 12, 2020 at 6:49 AM Alex Williamson <alex.l.william...@gmail.com> wrote:
> vfio_dma_map() is the exclusive means that QEMU uses to insert
> translations for an assigned device. It is not only used by AMD vIOMMU, in
> fact that's probably one of the less tested use vectors, it's used when
> QEMU establishes any sort of memory mapping for the VM. Any mapping that
> could possibly be a DMA target for the device should filter through the
> MemoryListener and result in a call to vfio_dma_map(). This includes all
> memory that is considered RAM by the VM, as well as possibly direct mapped
> peer-to-peer DMA ranges. When the device is backed by an IOMMU,
> vfio_dma_map() will pin pages to establish a fixed IOMMU translation.
>
> vfio_pin_pages() comes into play when the device we're assigning is not
> backed by the IOMMU, which can be the case with a mediated device (mdev).
> Interactions with these devices are mediated by a vendor driver in the host
> kernel where the vendor driver provides device isolation and translation by
> acting as an interposer between the user and the device. The
> vfio_pin_pages() interface allows the vendor driver in the host kernel to
> request page pinning such that the mapping is fixed while DMA is performed
> by the physical backing device.
>
> Thanks,
>
> Alex
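For my own reference, this is roughly what I understand QEMU's vfio_dma_map() to reduce to at the Type1 UAPI level, a sketch only: error handling and the real QEMU plumbing are omitted, and map_guest_ram() plus its arguments are just illustrative names.

/*
 * Rough sketch of the mapping step for a Type1 container: a
 * VFIO_IOMMU_MAP_DMA ioctl on the container fd.  The kernel pins the
 * backing pages and programs the IOMMU with a fixed iova -> host-physical
 * translation.  container_fd, vaddr, iova and size are assumed to be set
 * up elsewhere.
 */
#include <linux/vfio.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>

static int map_guest_ram(int container_fd, void *vaddr,
                         uint64_t iova, uint64_t size)
{
    struct vfio_iommu_type1_dma_map map;

    memset(&map, 0, sizeof(map));
    map.argsz = sizeof(map);
    map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;
    map.vaddr = (uintptr_t)vaddr;   /* process VA backing the guest RAM */
    map.iova  = iova;               /* guest-physical address used as IOVA */
    map.size  = size;

    return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
}

If I read it right, it is the kernel side of this ioctl that does the pinning and establishes the fixed translation you mentioned.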