On Mon, 2015-07-06 at 15:57 +0100, Chris Wilson wrote:
> On Mon, Jul 06, 2015 at 05:50:37PM +0300, Imre Deak wrote:
> > We have 3 types of DMA mappings for GEM objects:
> > 1. physically contiguous for stolen and for objects needing
> >    contiguous memory
> > 2. DMA-buf mappings imported via a DMA-buf attach operation
> > 3. SG DMA mappings for shmem backed and userptr objects
> >
> > For 1. and 2. the lifetime of the DMA mapping matches the lifetime
> > of the corresponding backing pages and so in practice we
> > create/release the mapping in the object's get_pages/put_pages
> > callback.
> >
> > For 3. the lifetime of the mapping matches that of any existing GPU
> > binding of the object, so we'll create the mapping when the object
> > is bound to the first vma and release the mapping when the object is
> > unbound from its last vma.
> >
> > Since the object can be bound to multiple vmas, we can end up
> > creating a new DMA mapping in the 3. case even if the object already
> > had one. This is not allowed by the DMA API and can lead to leaked
> > mapping data and IOMMU memory space starvation in certain cases. For
> > example HW IOMMU drivers (intel_iommu) allocate a new range from
> > their memory space whenever a mapping is created, silently
> > overriding a pre-existing mapping.
> >
> > Fix this by adding new callbacks to create/release the DMA mapping.
> > This way we can use the has_dma_mapping flag for objects of the 3.
> > case also (so far the flag was only used for the 1. and 2. case) and
> > skip creating a new mapping if one exists already.
> >
> > Note that I also thought about simply creating/releasing the mapping
> > when get_pages/put_pages is called. However since creating a DMA
> > mapping may have associated resources (at least in case of HW IOMMU)
> > it does make sense to release these resources as early as possible.
> > We can release the DMA mapping as soon as the object is unbound from
> > the last vma, before we drop the backing pages, hence it's worth
> > keeping the two operations separate.
> >
> > I noticed this issue by enabling DMA debugging, which got disabled
> > after a while due to its internal mapping tables getting full. It
> > also reported errors in connection to random other drivers that did
> > a DMA mapping for an address that was previously mapped by i915 but
> > was never released. Besides these diagnostic messages and the memory
> > space starvation problem for IOMMUs, I'm not aware of this causing a
> > real issue.
>
> Nope, it is much much simpler. Since we only do the dma prepare/finish
> from inside get_pages/put_pages, we can put the calls there. The only
> caveat there is userptr worker, but that can be easily fixed up.
>
> http://cgit.freedesktop.org/~ickle/linux-2.6/commit/?h=nightly&id=f55727d7d6f76aeee687c1f2d31411662ff03b6f
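For reference, here is a minimal sketch of the guard both approaches rely
on. This is illustrative only, not the actual i915 code: the helper names
and the free-standing has_dma_mapping parameter are assumptions (in the
driver the flag sits on the GEM object). The point is that dma_map_sg()
is called only when no mapping exists yet, so binding the object to a
second vma cannot create a duplicate mapping, and dma_unmap_sg() releases
the IOMMU resources exactly once:

/*
 * Illustrative sketch only -- not the i915 patch. Helper names and the
 * bool *has_dma_mapping parameter are assumptions made for this example.
 */
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static int gem_dma_map_pages(struct device *dev, struct sg_table *st,
                             bool *has_dma_mapping)
{
        if (*has_dma_mapping)   /* already mapped: skip, don't remap */
                return 0;

        /* dma_map_sg() returns the number of mapped entries, 0 on error */
        if (!dma_map_sg(dev, st->sgl, st->nents, DMA_BIDIRECTIONAL))
                return -ENOSPC;

        *has_dma_mapping = true;
        return 0;
}

static void gem_dma_unmap_pages(struct device *dev, struct sg_table *st,
                                bool *has_dma_mapping)
{
        if (!*has_dma_mapping)  /* never mapped or already released */
                return;

        dma_unmap_sg(dev, st->sgl, st->nents, DMA_BIDIRECTIONAL);
        *has_dma_mapping = false;
}

Whether such helpers are called from dedicated prepare/finish callbacks on
vma bind/unbind (the original patch) or from get_pages/put_pages (the tree
linked above) is exactly the tradeoff discussed below.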
Yes, that's what I meant by creating/releasing the mapping in the
get_pages/put_pages callbacks. It does have the disadvantage of holding
on to IOMMU mapping resources longer than needed, as I described above.

> Nak.

Right. Your patch doesn't explicitly mention fixing the issues I tracked
down, but it does seem to fix them. It would make sense to add this fact
to the commit log.

--Imre