On 22/01/18 01:42 AM, Chunming Zhou wrote:


On 2018/01/20 02:23, Tom St Denis wrote:
On 19/01/18 01:14 PM, Tom St Denis wrote:
Hi all,

In the function ttm_bo_cleanup_refs() it seems possible to reach line 551 without entering the block at line 516, which means you'd be unlocking a lock that was never taken.

Now, it might be that the way the API is used means this pattern can never actually occur, but that isn't clear from the function alone.


Looking further, it seems the behaviour depends on locking done in the parent callers.  That's kind of a no-no, right?  Ideally shouldn't the lock be taken and released in the same function?
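To make that concrete, here is a minimal sketch of the two patterns (hypothetical functions, not the actual ttm_bo.c code):

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(example_lru_lock);	/* stand-in for glob->lru_lock */

/* Unbalanced: the final unlock pairs with a spin_lock() in the
 * *caller*, so whether the lock is held here depends entirely on
 * code outside this function. */
static void cleanup_unbalanced(bool busy)
{
	/* the caller is assumed to hold example_lru_lock on entry */
	if (busy) {
		spin_unlock(&example_lru_lock);
		/* ... slow path that may sleep ... */
		spin_lock(&example_lru_lock);
	}
	/* ... cleanup work ... */
	spin_unlock(&example_lru_lock);	/* pairs with the caller's lock */
}

/* Balanced: lock and unlock live in the same function, so it can
 * be audited in isolation. */
static void cleanup_balanced(void)
{
	spin_lock(&example_lru_lock);
	/* ... cleanup work ... */
	spin_unlock(&example_lru_lock);
}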
Same feeling here.

Regards,
David Zhou

Attached is a patch that addresses this.

I can't see any obvious race in the functions that call ttm_bo_cleanup_refs() between the point where they drop the lock and the point where it is taken again inside the call.
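With the patch applied, the callers all follow roughly this shape (a sketch with a hypothetical helper name, not a copy of the real callers, which are ttm_bo_delayed_delete(), ttm_mem_evict_first() and ttm_bo_swapout(); ttm_bo_cleanup_refs() and ttm_bo_release_list are static to ttm_bo.c, so this would have to live there too).  The key point is that the bo is pinned with kref_get() before lru_lock is dropped, so it cannot be freed during the unlocked window, and ttm_bo_cleanup_refs() retakes the lock itself:

static int example_cleanup_caller(struct ttm_bo_global *glob,
				  struct ttm_buffer_object *bo,
				  struct ttm_operation_ctx *ctx,
				  bool locked)
{
	int ret;

	/* entered with glob->lru_lock held and bo on a ddestroy list */
	kref_get(&bo->list_kref);	/* pin bo across the unlocked window */
	spin_unlock(&glob->lru_lock);	/* the window opens here */

	/* ttm_bo_cleanup_refs() now takes (and drops) lru_lock itself */
	ret = ttm_bo_cleanup_refs(bo, ctx->interruptible,
				  ctx->no_wait_gpu, locked);

	kref_put(&bo->list_kref, ttm_bo_release_list);
	return ret;
}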

Running it on my system doesn't produce anything notable, though the KASAN issue with DRI_PRIME=1 is still there (this patch neither causes nor fixes it).

Tom
From 017ebff407e8d9313f09e4ddd109ac54c3a23fba Mon Sep 17 00:00:00 2001
From: Tom St Denis <tom.stde...@amd.com>
Date: Mon, 22 Jan 2018 06:54:21 -0500
Subject: [PATCH] drm/ttm: Force lock to be taken/released inside
 ttm_bo_cleanup_refs()

The behaviour of ttm_bo_cleanup_refs() was conditional on whether
the caller had taken the global LRU lock before calling it.

This patch changes the callers so the lock is always dropped before
ttm_bo_cleanup_refs() is called, and the function now takes the
lock itself.

Signed-off-by: Tom St Denis <tom.stde...@amd.com>
---
 drivers/gpu/drm/ttm/ttm_bo.c | 14 +++++---------
 1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index d33a6bb742a1..cbc4277eb76f 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -487,9 +487,6 @@ static void ttm_bo_cleanup_refs_or_queue(struct ttm_buffer_object *bo)
  * If bo idle, remove from delayed- and lru lists, and unref.
  * If not idle, do nothing.
  *
- * Must be called with lru_lock and reservation held, this function
- * will drop the lru lock and optionally the reservation lock before returning.
- *
  * @interruptible         Any sleeps should occur interruptibly.
  * @no_wait_gpu           Never wait for gpu. Return -EBUSY instead.
  * @unlock_resv           Unlock the reservation lock as well.
@@ -503,6 +500,8 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
 	struct reservation_object *resv;
 	int ret;
 
+	spin_lock(&glob->lru_lock);
+
 	if (unlikely(list_empty(&bo->ddestroy)))
 		resv = bo->resv;
 	else
@@ -586,17 +585,12 @@ static bool ttm_bo_delayed_delete(struct ttm_bo_device *bdev, bool remove_all)
 		kref_get(&bo->list_kref);
 		list_move_tail(&bo->ddestroy, &removed);
 
+		spin_unlock(&glob->lru_lock);
 		if (remove_all || bo->resv != &bo->ttm_resv) {
-			spin_unlock(&glob->lru_lock);
 			reservation_object_lock(bo->resv, NULL);
-
-			spin_lock(&glob->lru_lock);
 			ttm_bo_cleanup_refs(bo, false, !remove_all, true);
-
 		} else if (reservation_object_trylock(bo->resv)) {
 			ttm_bo_cleanup_refs(bo, false, !remove_all, true);
-		} else {
-			spin_unlock(&glob->lru_lock);
 		}
 
 		kref_put(&bo->list_kref, ttm_bo_release_list);
@@ -782,6 +776,7 @@ static int ttm_mem_evict_first(struct ttm_bo_device *bdev,
 	kref_get(&bo->list_kref);
 
 	if (!list_empty(&bo->ddestroy)) {
+		spin_unlock(&glob->lru_lock);
 		ret = ttm_bo_cleanup_refs(bo, ctx->interruptible,
 					  ctx->no_wait_gpu, locked);
 		kref_put(&bo->list_kref, ttm_bo_release_list);
@@ -1730,6 +1725,7 @@ int ttm_bo_swapout(struct ttm_bo_global *glob, struct ttm_operation_ctx *ctx)
 	kref_get(&bo->list_kref);
 
 	if (!list_empty(&bo->ddestroy)) {
+		spin_unlock(&glob->lru_lock);
 		ret = ttm_bo_cleanup_refs(bo, false, false, locked);
 		kref_put(&bo->list_kref, ttm_bo_release_list);
 		return ret;
-- 
2.14.3
