We need to iterate over the original sg_table entries here, pulling
out the struct page for each one so it can be remapped. However, the
code currently iterates over the final DMA mapped entries (nents),
which with the IOMMU enabled is likely just one gigantic sg entry. As
a result we only end up mapping the first struct page (plus any
physically contiguous pages following it), even though there is
potentially a lot more data to follow.
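
For reference, the relevant fields of struct sg_table (a rough sketch
of what lives in include/linux/scatterlist.h) are shown below; after
dma_map_sg() the nents field only describes the DMA mapped (and
possibly coalesced) entries, while orig_nents keeps the original
CPU-side entry count, which is what we need when walking the backing
struct pages:

	struct sg_table {
		struct scatterlist *sgl;	/* the list */
		unsigned int nents;		/* number of mapped entries */
		unsigned int orig_nents;	/* original size of list */
	};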

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/7306
Signed-off-by: Matthew Auld <matthew.a...@intel.com>
Cc: Lionel Landwerlin <lionel.g.landwer...@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursu...@linux.intel.com>
Cc: Ville Syrjälä <ville.syrj...@linux.intel.com>
Cc: Michael J. Ruhl <michael.j.r...@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
index 07eee1c09aaf..05ebbdfd3b3b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
@@ -40,13 +40,13 @@ static struct sg_table *i915_gem_map_dma_buf(struct dma_buf_attachment *attachme
                goto err;
        }
 
-       ret = sg_alloc_table(st, obj->mm.pages->nents, GFP_KERNEL);
+       ret = sg_alloc_table(st, obj->mm.pages->orig_nents, GFP_KERNEL);
        if (ret)
                goto err_free;
 
        src = obj->mm.pages->sgl;
        dst = st->sgl;
-       for (i = 0; i < obj->mm.pages->nents; i++) {
+       for (i = 0; i < obj->mm.pages->orig_nents; i++) {
                sg_set_page(dst, sg_page(src), src->length, 0);
                dst = sg_next(dst);
                src = sg_next(src);
-- 
2.37.3
