On Fri, Jan 15, 2021 at 8:33 AM Nithin Dabilpuram <ndabilpu...@marvell.com> wrote:
>
> Partial DMA unmap is not supported by the VFIO type1 IOMMU
> in Linux. Though the return value is zero, the returned
> DMA unmap size is not the same as the expected size,
> so add a test case and a fix for both heap-triggered DMA
> mapping and user-triggered DMA mapping/unmapping.
>
> Refer to vfio_dma_do_unmap() in drivers/vfio/vfio_iommu_type1.c.
> A snippet of the comment is below:
>
> /*
>  * vfio-iommu-type1 (v1) - User mappings were coalesced together to
>  * avoid tracking individual mappings. This means that the granularity
>  * of the original mapping was lost and the user was allowed to attempt
>  * to unmap any range. Depending on the contiguousness of physical
>  * memory and page sizes supported by the IOMMU, arbitrary unmaps may
>  * or may not have worked. We only guaranteed unmap granularity
>  * matching the original mapping; even though it was untracked here,
>  * the original mappings are reflected in IOMMU mappings. This
>  * resulted in a couple unusual behaviors. First, if a range is not
>  * able to be unmapped, ex. a set of 4k pages that was mapped as a
>  * 2M hugepage into the IOMMU, the unmap ioctl returns success but with
>  * a zero sized unmap. Also, if an unmap request overlaps the first
>  * address of a hugepage, the IOMMU will unmap the entire hugepage.
>  * This also returns success and the returned unmap size reflects the
>  * actual size unmapped.
>  *
>  * We attempt to maintain compatibility with this "v1" interface, but
>  * we take control out of the hands of the IOMMU. Therefore, an unmap
>  * request offset from the beginning of the original mapping will
>  * return success with zero sized unmap. And an unmap request covering
>  * the first iova of mapping will unmap the entire range.
>  */
>
> This behavior can be verified by applying the first patch and adding a
> return check for dma_unmap.size != len in vfio_type1_dma_mem_map().
Series applied, thanks.


-- 
David Marchand