On 16/11/2016 15:27, Chris Wilson wrote:
Avoid requiring struct_mutex for exclusive access to the temporary
dfs_link inside the i915_dependency as not all callers may want to touch
struct_mutex. Rather than forcing them to take a highly contended
lock, introduce a local lock for the execlists schedule operation.

Reported-by: David Weinehall <david.weineh...@linux.intel.com>
Fixes: 9a151987d709 ("drm/i915: Add execution priority boosting for mmioflips")

Grumble grumble, sloppy review. :I

Signed-off-by: Chris Wilson <ch...@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursu...@intel.com>
Cc: David Weinehall <david.weineh...@linux.intel.com>
---
 drivers/gpu/drm/i915/intel_lrc.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index e23b6a2600fb..10e59ff0d8f1 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -694,6 +694,7 @@ pt_lock_engine(struct i915_priotree *pt, struct intel_engine_cs *locked)

 static void execlists_schedule(struct drm_i915_gem_request *request, int prio)
 {
+       static DEFINE_MUTEX(lock);

Good enough for one GPU. :) Consider improving it in the future though, since a driver-wide static mutex is not really in the spirit of the driver; see the rough sketch below the diff for what I have in mind.

        struct intel_engine_cs *engine = NULL;
        struct i915_dependency *dep, *p;
        struct i915_dependency stack;
@@ -702,8 +703,8 @@ static void execlists_schedule(struct drm_i915_gem_request *request, int prio)
        if (prio <= READ_ONCE(request->priotree.priority))
                return;

-       /* Need BKL in order to use the temporary link inside i915_dependency */
-       lockdep_assert_held(&request->i915->drm.struct_mutex);
+       /* Need global lock to use the temporary link inside i915_dependency */
+       mutex_lock(&lock);

        stack.signaler = &request->priotree;
        list_add(&stack.dfs_link, &dfs);
@@ -770,6 +771,8 @@ static void execlists_schedule(struct drm_i915_gem_request *request, int prio)
        if (engine)
                spin_unlock_irq(&engine->timeline->lock);

+       mutex_unlock(&lock);
+
        /* XXX Do we need to preempt to make room for us and our deps? */
 }
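
For reference, the future improvement I had in mind is keeping the lock in the per-device struct rather than in a file-static mutex, so two GPUs do not serialise against each other. Rough, untested sketch only - the priotree_lock name and its init site are my invention, not existing code:

	/* Hypothetical: a per-device lock in struct drm_i915_private. */
	struct drm_i915_private {
		...
		/* Protects the temporary dfs_link inside struct i915_dependency. */
		struct mutex priotree_lock;
		...
	};

	/* Initialised once per device, alongside the other early mutex_init() calls. */
	mutex_init(&dev_priv->priotree_lock);

	static void execlists_schedule(struct drm_i915_gem_request *request, int prio)
	{
		...
		/* Serialise per device instead of across every GPU in the system. */
		mutex_lock(&request->i915->priotree_lock);
		...
		mutex_unlock(&request->i915->priotree_lock);
	}

Not blocking for this patch, just noting it for later.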



Reviewed-by: Tvrtko Ursulin <tvrtko.ursu...@intel.com>

Regards,

Tvrtko