On Mon, 30 Oct 2023 02:01:58 +0300
Dmitry Osipenko <dmitry.osipe...@collabora.com> wrote:

> @@ -238,6 +308,20 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
>       if (refcount_dec_not_one(&shmem->pages_use_count))
>               return;
>  
> +     /*
> +      * Destroying the object is a special case because acquiring
> +      * the obj lock can cause a locking order inversion between
> +      * reservation_ww_class_mutex and fs_reclaim.
> +      *
> +      * This deadlock is not actually possible, because no one should
> +      * be already holding the lock when GEM is released.  Unfortunately
> +      * lockdep is not aware of this detail.  So when the refcount drops
> +      * to zero, we pretend it is already locked.
> +      */
> +     if (!kref_read(&shmem->base.refcount) &&
> +         refcount_dec_and_test(&shmem->pages_use_count))
> +             return drm_gem_shmem_free_pages(shmem);

Uh, with get/put_pages() being moved to the create/free_gem()
hooks, we're back to a situation where pages_use_count > 0 when we
reach gem->refcount == 0, which is not nice. We really need to patch
drivers so they dissociate GEM creation from the backing storage
allocation/reservation + mapping of the BO in GPU VM space.
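
Something like the following is what I have in mind. This is only a
rough sketch: the mydrv_*() names and the VM map/unmap helpers are
made up, and it assumes drm_gem_shmem_get_pages()/put_pages() stay
exported the way this series has them. GEM creation only allocates
the wrapper object; the backing pages are acquired at VM-bind time
and dropped at unbind, so pages_use_count is back to zero well before
the last GEM reference goes away:

#include <drm/drm_gem_shmem_helper.h>

/* Object creation: no backing storage yet, just the GEM wrapper. */
static struct drm_gem_shmem_object *
mydrv_bo_create(struct drm_device *dev, size_t size)
{
        return drm_gem_shmem_create(dev, size);
}

/* Backing pages are only taken when the BO enters the GPU VM. */
static int mydrv_bo_bind(struct drm_gem_shmem_object *shmem, u64 gpu_va)
{
        int ret;

        ret = drm_gem_shmem_get_pages(shmem);
        if (ret)
                return ret;

        ret = mydrv_vm_map(shmem, gpu_va);      /* made-up mapping helper */
        if (ret)
                drm_gem_shmem_put_pages(shmem);

        return ret;
}

/* ... and dropped again on unbind. */
static void mydrv_bo_unbind(struct drm_gem_shmem_object *shmem, u64 gpu_va)
{
        mydrv_vm_unmap(shmem, gpu_va);          /* made-up unmapping helper */
        drm_gem_shmem_put_pages(shmem);
}

With that split, the kref_read() special case above shouldn't be
needed anymore, since put_pages() can no longer race with the final
GEM free.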
