On 8/30/21 2:09 PM, Maarten Lankhorst wrote:
When we implement delayed destroy, we may get a second
call to the delete_mem_notify() handler, while free_object()
should only be called once.

Move it to bo->destroy(), to ensure it's only called once.
This fixes some weird memory corruption issues with delayed
destroy when async eviction is used.

Signed-off-by: Maarten Lankhorst <maarten.lankho...@linux.intel.com>

Reviewed-by: Thomas Hellström <thomas.hellst...@linux.intel.com>

I wonder whether you could push this early with a Fixes: tag. I actually managed to hit this once without any vma or async modifications, but I'm a bit curious how.

Thanks,

Thomas

---
  drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 9 +++++----
  1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index 59ca53a3ef6a..eaf2ff29dd4a 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -355,11 +355,8 @@ static void i915_ttm_delete_mem_notify(struct ttm_buffer_object *bo)
 {
        struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo);
 
-       if (likely(obj)) {
-               /* This releases all gem object bindings to the backend. */
+       if (likely(obj))
                i915_ttm_free_cached_io_st(obj);
-               __i915_gem_free_object(obj);
-       }
 }
 
 static struct intel_memory_region *
@@ -886,8 +883,12 @@ void i915_ttm_bo_destroy(struct ttm_buffer_object *bo)
 {
        struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo);
 
+       /* This releases all gem object bindings to the backend. */
+       __i915_gem_free_object(obj);
+
        i915_gem_object_release_memory_region(obj);
        mutex_destroy(&obj->ttm.get_io_page.lock);
+
        if (obj->ttm.created)
                call_rcu(&obj->rcu, __i915_gem_free_object_rcu);
 }
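
[Editor's note] For readers following the thread: the hazard the patch avoids is that delete_mem_notify() may be reached more than once per object once delayed destroy / async eviction is in play, while the destroy() callback runs exactly once, so any one-shot teardown has to live in destroy(). Below is a minimal, self-contained sketch of that pattern. It is not the i915/TTM code; every name in it (fake_obj, notify_only_caches, destroy_frees, ...) is hypothetical.

/*
 * Hypothetical sketch, not i915/TTM code: a notify hook that the
 * framework may call more than once must not do one-shot cleanup;
 * the destroy callback, which runs exactly once, may.
 */
#include <stdio.h>
#include <stdlib.h>

struct fake_obj {
        void *backing;  /* stands in for the gem backend bindings */
        void (*delete_notify)(struct fake_obj *);
        void (*destroy)(struct fake_obj *);
};

/* Buggy variant: one-shot free in a hook that can run twice. */
static void notify_frees(struct fake_obj *o)
{
        free(o->backing);       /* a second invocation is a double free */
}

/* Safe variant: only drop state that may be dropped repeatedly. */
static void notify_only_caches(struct fake_obj *o)
{
        (void)o;                /* e.g. free a re-creatable cached table */
}

static void destroy_frees(struct fake_obj *o)
{
        free(o->backing);       /* exactly-once teardown lives here */
        o->backing = NULL;
}

int main(void)
{
        struct fake_obj o = {
                .backing = malloc(64),
                .delete_notify = notify_only_caches,
                .destroy = destroy_frees,
        };

        (void)notify_frees;     /* wiring this in instead would corrupt the heap */

        /*
         * With delayed destroy / async eviction the notify hook can fire
         * once on eviction and again on final release, before the single
         * destroy call.
         */
        o.delete_notify(&o);
        o.delete_notify(&o);
        o.destroy(&o);

        printf("backing freed exactly once\n");
        return 0;
}

The design point mirrors the patch: freeing the gem object bindings is one-shot work and moves into i915_ttm_bo_destroy(), while delete_mem_notify() keeps only the cached-io_st cleanup, which is safe to repeat.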
