On Mon, Aug 16, 2021 at 06:51:22AM -0700, Matthew Brost wrote:
> If the context is reset as a result of a request cancellation, the
> context reset G2H is received after the schedule disable done G2H,
> which is likely the wrong order. The schedule disable done G2H
> releases the waiting request cancellation code, which resubmits the
> context. This races with the context reset G2H, which also wants to
> resubmit the context, but in that case the resubmit really should be
> a NOP as the request cancellation code owns the resubmit. Check the
> context state to seal this race until if / when the GuC firmware is
> fixed.
> 
> v2:
>  (Checkpatch)
>   - Fix typos
> 
> Fixes: 62eaf0ae217d ("drm/i915/guc: Support request cancellation")
> Signed-off-by: Matthew Brost <matthew.br...@intel.com>
> Cc: <sta...@vger.kernel.org>
> ---
>  .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 43 ++++++++++++++++---
>  1 file changed, 37 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> index 3cd2da6f5c03..c3b7bf7319dd 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> @@ -826,17 +826,35 @@ __unwind_incomplete_requests(struct intel_context *ce)
>  static void __guc_reset_context(struct intel_context *ce, bool stalled)
>  {
>       struct i915_request *rq;
> +     unsigned long flags;
>       u32 head;
> +     bool skip = false;
>  
>       intel_context_get(ce);
>  
>       /*
> -      * GuC will implicitly mark the context as non-schedulable
> -      * when it sends the reset notification. Make sure our state
> -      * reflects this change. The context will be marked enabled
> -      * on resubmission.
> +      * GuC will implicitly mark the context as non-schedulable when it sends
> +      * the reset notification. Make sure our state reflects this change. The
> +      * context will be marked enabled on resubmission.
> +      *
> +      * XXX: If the context is reset as a result of a request cancellation,
> +      * this G2H is received after the schedule disable complete G2H, which
> +      * is likely wrong as this creates a race between the request
> +      * cancellation code re-submitting the context and this G2H handler.
> +      * This should likely be fixed in the GuC, but until if / when that
> +      * happens we need to work around it here. Convert this function to
> +      * a NOP if a pending enable is in flight, as this indicates that a
> +      * request cancellation has occurred.
>        */
> -     clr_context_enabled(ce);
> +     spin_lock_irqsave(&ce->guc_state.lock, flags);
> +     if (likely(!context_pending_enable(ce))) {
> +             clr_context_enabled(ce);
> +     } else {
> +             skip = true;
> +     }
> +     spin_unlock_irqrestore(&ce->guc_state.lock, flags);
> +     if (unlikely(skip))
> +             goto out_put;
>  
>       rq = intel_context_find_active_request(ce);
>       if (!rq) {
> @@ -855,6 +873,7 @@ static void __guc_reset_context(struct intel_context *ce, bool stalled)
>  out_replay:
>       guc_reset_state(ce, head, stalled);
>       __unwind_incomplete_requests(ce);
> +out_put:
>       intel_context_put(ce);
>  }
>  
> @@ -1599,6 +1618,13 @@ static void guc_context_cancel_request(struct intel_context *ce,
>                       guc_reset_state(ce, intel_ring_wrap(ce->ring, rq->head),
>                                       true);
>               }
> +
> +             /*
> +              * XXX: Racy if context is reset, see comment in
> +              * __guc_reset_context().
> +              */
> +             flush_work(&ce_to_guc(ce)->ct.requests.worker);

This looks racy, and I think that holds in general for all the
flush_work calls you're adding: flush_work() only flushes the
processing of already-queued work, it doesn't stop anyone from
re-queueing it (as far as I can tell at least), which means it doesn't
do a whole lot.
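
To make that concrete, here is a hypothetical interleaving (the worker
name is taken from the patch; the rest, including the choice of
workqueue, is illustrative):

    /* cancellation path */
    flush_work(&guc->ct.requests.worker);  /* waits for the current run */

    /* meanwhile another G2H arrives and re-queues the worker: */
    queue_work(system_wq, &guc->ct.requests.worker);

    /* flush_work() has returned, but G2H processing is pending again,
     * so "everything up to this point has been handled" doesn't hold.
     */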

Worse, your worker re-queues itself because it only processes one item
at a time. That means flush_work() only flushes the first invocation,
it doesn't drain the re-queued ones. So even if you do prevent
re-queueing somehow, this isn't what you want. Two solutions:

- flush_work_sync, which flushes until self-requeues are all done too

- Or, preferably, make your worker a bit more standard for this stuff
  (see the sketch below): a) under the spinlock, take the entire list,
  not just the first entry, with list_move or similar to a local list,
  b) process that local list in a loop, and c) don't re-queue yourself.
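
Something like this minimal sketch (struct and function names are made
up for illustration; only the list and workqueue helpers are the real
kernel APIs):

    static void ct_requests_worker_func(struct work_struct *w)
    {
            struct ct_requests *reqs =
                    container_of(w, struct ct_requests, worker);
            struct ct_request *req, *next;
            unsigned long flags;
            LIST_HEAD(todo);

            /* a) under the spinlock, grab everything queued so far */
            spin_lock_irqsave(&reqs->lock, flags);
            list_splice_init(&reqs->list, &todo);
            spin_unlock_irqrestore(&reqs->lock, flags);

            /* b) process the local list in a loop, lock dropped */
            list_for_each_entry_safe(req, next, &todo, link) {
                    list_del(&req->link);
                    process_ct_request(req);
            }

            /* c) no self-requeue: whoever adds a new entry calls
             * queue_work() again, so flush_work() then really means
             * "everything queued before the flush has been handled".
             */
    }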

Cheers, Daniel
> +
>               guc_context_unblock(ce);
>       }
>  }
> @@ -2719,7 +2745,12 @@ static void guc_handle_context_reset(struct intel_guc *guc,
>  {
>       trace_intel_context_reset(ce);
>  
> -     if (likely(!intel_context_is_banned(ce))) {
> +     /*
> +      * XXX: Racy if request cancellation has occurred, see comment in
> +      * __guc_reset_context().
> +      */
> +     if (likely(!intel_context_is_banned(ce) &&
> +                !context_blocked(ce))) {
>               capture_error_state(guc, ce);
>               guc_context_replay(ce);
>       }
> -- 
> 2.32.0
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
