> -----Original Message-----
> From: Cavitt, Jonathan <jonathan.cav...@intel.com>
> Sent: 28 January 2025 02:45
> To: Brian Geffon <bgef...@google.com>; intel-...@lists.freedesktop.org
> Cc: Wilson, Chris P <chris.p.wil...@intel.com>; Saarinen, Jani
> <jani.saari...@intel.com>; Mistat, Tomasz <tomasz.mis...@intel.com>;
> Srinivas, Vidya <vidya.srini...@intel.com>; ville.syrj...@linux.intel.com;
> jani.nik...@linux.intel.com; linux-ker...@vger.kernel.org; dri-
> de...@lists.freedesktop.org; Joonas Lahtinen
> <joonas.lahti...@linux.intel.com>; sta...@vger.kernel.org; Tomasz Figa
> <tf...@google.com>; Cavitt, Jonathan <jonathan.cav...@intel.com>
> Subject: RE: [PATCH v3] drm/i915: Fix page cleanup on DMA remap failure
> 
> -----Original Message-----
> From: Intel-gfx <intel-gfx-boun...@lists.freedesktop.org> On Behalf Of Brian
> Geffon
> Sent: Monday, January 27, 2025 12:44 PM
> To: intel-...@lists.freedesktop.org
> Cc: Wilson, Chris P <chris.p.wil...@intel.com>; Saarinen, Jani
> <jani.saari...@intel.com>; Mistat, Tomasz <tomasz.mis...@intel.com>;
> Srinivas, Vidya <vidya.srini...@intel.com>; ville.syrj...@linux.intel.com;
> jani.nik...@linux.intel.com; linux-ker...@vger.kernel.org; dri-
> de...@lists.freedesktop.org; Joonas Lahtinen
> <joonas.lahti...@linux.intel.com>; Brian Geffon <bgef...@google.com>;
> sta...@vger.kernel.org; Tomasz Figa <tf...@google.com>
> Subject: [PATCH v3] drm/i915: Fix page cleanup on DMA remap failure
> >
> > When converting to folios, the cleanup path of shmem_get_pages() was
> > missed. When a DMA remap fails and the max segment size is greater
> > than PAGE_SIZE, the code retries the remap with a PAGE_SIZE segment
> > size. The cleanup code isn't using the folio APIs, and as a result it
> > doesn't handle compound pages correctly.
> >
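For anyone following along, here is a rough sketch of why the old per-page
cleanup goes wrong once the sg table holds compound pages: the allocation
side takes one reference per folio, but for_each_sgt_page() visits every
PAGE_SIZE page in each segment, so calling put_page() per page drops more
references than were taken. A folio-aware cleanup (roughly what
shmem_sg_free_table() does, minus the unevictable and folio_batch handling)
drops only one reference per folio. The helper name below is hypothetical,
and this is an illustration rather than the actual i915 code:

/*
 * Illustrative sketch only (helper name is made up); it assumes one
 * reference is held per folio, as taken on the allocation side.
 * Not the literal i915 implementation.
 */
static void sketch_free_sgt_folios(struct sg_table *st)
{
	struct sgt_iter sgt_iter;
	struct folio *last = NULL;
	struct page *page;

	for_each_sgt_page(page, sgt_iter, st) {
		struct folio *folio = page_folio(page);

		/*
		 * Every tail page of a compound page resolves to the same
		 * folio, so drop the reference only once per folio.
		 * Calling put_page() on each page here (the old cleanup)
		 * over-releases references.
		 */
		if (folio == last)
			continue;
		last = folio;

		folio_put(folio);
	}

	sg_free_table(st);
}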
> > v2 -> v3:
> > (Ville) Just use shmem_sg_free_table() as-is in the failure path of
> > shmem_get_pages(). shmem_sg_free_table() will clear the mapping's
> > unevictable flag, but the flag is set again when the retry goes through
> > shmem_sg_alloc_table().
> >
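For reference, the unevictable handling mentioned above boils down to a
clear/set handshake across the retry. A simplified illustration follows;
the wrapper function is hypothetical, only the two pagemap helpers are
real, and this is not the literal shmem_get_pages() code:

/*
 * Simplified illustration of the flag handshake described in the
 * v2 -> v3 note above.
 */
static void sketch_unevictable_handshake(struct address_space *mapping)
{
	/* Failure path: shmem_sg_free_table() clears the flag ... */
	mapping_clear_unevictable(mapping);

	/* ... and the PAGE_SIZE retry, via shmem_sg_alloc_table(),
	 * marks the mapping unevictable again. */
	mapping_set_unevictable(mapping);
}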
> > v1 -> v2:
> > (Ville) Fixed locations where we were not clearing mapping unevictable.
> >
> > Cc: sta...@vger.kernel.org
> > Cc: Ville Syrjala <ville.syrj...@linux.intel.com>
> > Cc: Vidya Srinivas <vidya.srini...@intel.com>
> > Link: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13487
> > Link: https://lore.kernel.org/lkml/20250116135636.410164-1-bgef...@google.com/
> > Fixes: 0b62af28f249 ("i915: convert shmem_sg_free_table() to use a folio_batch")
> > Signed-off-by: Brian Geffon <bgef...@google.com>
> > Suggested-by: Tomasz Figa <tf...@google.com>
> 
> Seems good to me.
> Reviewed-by: Jonathan Cavitt <jonathan.cav...@intel.com>
> -Jonathan Cavitt
> 

Thank you so much to all.
Tested-by: Vidya Srinivas <vidya.srini...@intel.com>

> > ---
> >  drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 6 +-----
> >  1 file changed, 1 insertion(+), 5 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
> > index fe69f2c8527d..ae3343c81a64 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
> > @@ -209,8 +209,6 @@ static int shmem_get_pages(struct drm_i915_gem_object *obj)
> >     struct address_space *mapping = obj->base.filp->f_mapping;
> >     unsigned int max_segment = i915_sg_segment_size(i915->drm.dev);
> >     struct sg_table *st;
> > -   struct sgt_iter sgt_iter;
> > -   struct page *page;
> >     int ret;
> >
> >     /*
> > @@ -239,9 +237,7 @@ static int shmem_get_pages(struct drm_i915_gem_object *obj)
> >              * for PAGE_SIZE chunks instead may be helpful.
> >              */
> >             if (max_segment > PAGE_SIZE) {
> > -                   for_each_sgt_page(page, sgt_iter, st)
> > -                           put_page(page);
> > -                   sg_free_table(st);
> > +                   shmem_sg_free_table(st, mapping, false, false);
> >                     kfree(st);
> >
> >                     max_segment = PAGE_SIZE;
> > --
> > 2.48.1.262.g85cc9f2d1e-goog
> >
> >
