Hi Jason,

We had a discussion about dirty page tracking in VFIO when vIOMMU is
enabled:

https://lists.nongnu.org/archive/html/qemu-devel/2019-09/msg02690.html

It's actually a model similar to vhost: QEMU cannot interpose the
fast-path DMAs, so it relies on the kernel side to track and report
dirty page information. Currently QEMU tracks dirty pages at GFN
granularity, thus demanding a translation from IOVA to GPA. The open
question in our discussion is where this translation should happen.

Doing the translation in the kernel implies a device-iotlb flavor,
which is what vhost implements today. It requires potentially large
tracking structures in the host kernel, but it leverages the existing
log_sync flow in QEMU unchanged.
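To make the comparison concrete, below is a toy sketch of this
kernel-side option (all names and structures are hypothetical, not
actual vhost or VFIO code): the kernel keeps a device-iotlb-like table
of IOVA -> GFN entries, so a DMA write can be logged directly in GFN
space and userspace never needs to translate.

/* Toy sketch of kernel-side dirty logging in GFN space.
 * All names/structures here are hypothetical. */
#include <stdint.h>
#include <stddef.h>

#define PAGE_SHIFT 12

/* One cached translation; a real implementation would keep these in
 * an interval tree, which is where the "potentially large tracking
 * structures" concern comes from. */
struct iotlb_entry {
    uint64_t iova;
    uint64_t gfn;      /* guest frame number of the first page */
    uint64_t npages;
    struct iotlb_entry *next;
};

/* Dirty bitmap indexed by GFN, i.e. already in the space that QEMU's
 * existing log_sync flow expects. */
static void set_gfn_dirty(unsigned long *gfn_bitmap, uint64_t gfn)
{
    gfn_bitmap[gfn / (8 * sizeof(unsigned long))] |=
        1UL << (gfn % (8 * sizeof(unsigned long)));
}

/* On the DMA-write fast path: translate IOVA to GFN inside the kernel
 * and mark the page dirty. */
static int log_dma_write(struct iotlb_entry *tlb,
                         unsigned long *gfn_bitmap, uint64_t iova)
{
    for (struct iotlb_entry *e = tlb; e; e = e->next) {
        uint64_t end = e->iova + (e->npages << PAGE_SHIFT);
        if (iova >= e->iova && iova < end) {
            set_gfn_dirty(gfn_bitmap,
                          e->gfn + ((iova - e->iova) >> PAGE_SHIFT));
            return 0;
        }
    }
    return -1; /* no cached translation: would be an iotlb miss */
}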
On the other hand, QEMU may perform log_sync on every removal of an
IOVA mapping and then do the translation itself, thus avoiding GPA
awareness on the kernel side. It needs some changes to the current
QEMU log_sync flow, and it may bring more overhead if IOVA mappings
are frequently unmapped.

So we'd like to hear your opinions, especially about how you settled
on the current iotlb approach for vhost.

p.s. Alex's comment is also copied here from the original thread:

> So vhost must then be configuring a listener across system memory
> rather than only against the device AddressSpace like we do in vfio,
> such that it get's log_sync() callbacks for the actual GPA space rather
> than only the IOVA space. OTOH, QEMU could understand that the device
> AddressSpace has a translate function and apply the IOVA dirty bits to
> the system memory AddressSpace. Wouldn't it make more sense for QEMU
> to perform a log_sync() prior to removing a MemoryRegionSection within
> an AddressSpace and update the GPA rather than pushing GPA awareness
> and potentially large tracking structures into the host kernel?
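For reference, here is a rough sketch of the flow Alex describes
(hypothetical helper names, not actual QEMU code): before an IOVA
mapping is removed, sync the kernel's per-range dirty bitmap, then
translate each dirty IOVA page to its GPA in QEMU and mark the GPA
dirty.

/* Toy sketch of the QEMU-side option; all names are hypothetical. */
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SHIFT 12

/* One vIOMMU mapping as QEMU sees it. */
typedef struct {
    uint64_t iova;
    uint64_t gpa;
    uint64_t size;
} IOVAMapping;

/* Stand-in for the kernel interface that fills a one-bit-per-page
 * dirty bitmap for [iova, iova + size). */
extern void kernel_log_sync(uint64_t iova, uint64_t size,
                            unsigned long *bitmap);

/* Stand-in for updating QEMU's GPA-level dirty tracking. */
extern void mark_gpa_dirty(uint64_t gpa);

/* Must run before the IOVA mapping is torn down (e.g. on a vIOMMU
 * invalidation), otherwise the dirty info for this range is lost --
 * this is the change to the current log_sync flow, and the extra
 * cost when IOVAs are unmapped frequently. */
static void log_sync_before_unmap(const IOVAMapping *map,
                                  unsigned long *bitmap)
{
    uint64_t npages = map->size >> PAGE_SHIFT;

    kernel_log_sync(map->iova, map->size, bitmap);

    for (uint64_t i = 0; i < npages; i++) {
        bool dirty = bitmap[i / (8 * sizeof(unsigned long))] &
                     (1UL << (i % (8 * sizeof(unsigned long))));
        if (dirty) {
            /* Within one mapping the translation is a fixed offset;
             * real code would use the device AddressSpace's
             * translate function. */
            mark_gpa_dirty(map->gpa + (i << PAGE_SHIFT));
        }
    }
}

Thanks
Kevin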