On Thu, Feb 23, 2017 at 08:08:26PM +0100, Michał Winiarski wrote:
> +static void __execlists_try_preempt(struct intel_engine_cs *engine,
> +                               int prio)
> +{
> +     struct drm_i915_gem_request *rq;
> +     int highest_prio = INT_MIN;
> +     int ret;
> +
> +     spin_lock_irq(&engine->timeline->lock);
> +
> +     /* Engine is idle */
> +     if (execlists_elsp_idle(engine))
> +             goto out_unlock;
> +
> +     if (engine->preempt_requested)
> +             goto out_unlock;
> +
> +     list_for_each_entry_reverse(rq, &engine->timeline->requests, link) {
> +             if (i915_gem_request_completed(rq))
> +                     break;
> +
> +             highest_prio = max(highest_prio, rq->priotree.priority);
> +     }
> +
> +     /* Bail out if our priority is lower than any of the inflight
> +      * requests (also if there are no requests at all) */
> +     if (highest_prio == INT_MIN || prio <= highest_prio)
> +             goto out_unlock;
> +
> +     engine->preempt_requested = true;

Here you are meant to unwind the already submitted requests and put them
back onto their rq->timelines.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre