On Wed, Mar 01, 2017 at 12:57:15PM +0000, Chris Wilson wrote:
> On Thu, Feb 23, 2017 at 08:14:15PM +0100, Michał Winiarski wrote:
> > +static void unsubmit_inflight_requests(struct intel_engine_cs *engine,
> > +				       struct list_head *resubmit)
> > +{
> > +	struct drm_i915_gem_request *rq, *prev;
> > +
> > +	assert_spin_locked(&engine->timeline->lock);
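[The quoted hunk is cut short in the archive. As a rough sketch only, and not the code from the patch, an unsubmit pass over the in-flight requests could look something like the following; the list, lock and helper names used here (engine->timeline->requests, __i915_gem_request_unsubmit, the rq->resubmit_link list head) are assumptions for illustration, not taken from the series.]

/*
 * Sketch only (not the patch): walk the engine's in-flight requests in
 * reverse submission order, take back everything that has not yet
 * completed, and queue it on @resubmit so it can be replayed once the
 * preemption has been handled.  Names marked "assumed" are placeholders.
 */
static void unsubmit_inflight_requests_sketch(struct intel_engine_cs *engine,
					      struct list_head *resubmit)
{
	struct drm_i915_gem_request *rq, *prev;

	assert_spin_locked(&engine->timeline->lock);	/* request list lock */

	list_for_each_entry_safe_reverse(rq, prev,
					 &engine->timeline->requests, /* assumed */
					 link) {
		if (i915_gem_request_completed(rq))
			break;	/* older requests already finished on the GPU */

		__i915_gem_request_unsubmit(rq);	/* assumed helper */
		list_add(&rq->resubmit_link, resubmit);	/* assumed list head */
	}
}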
We need to avoid sending new work while preemption is in progress.
Once it has finished, we need to identify and unsubmit the preempted
workload, submit the new workload (potentially the one responsible for
the preemption) and then resubmit the preempted workload.
Signed-off-by: Michał Winiarski
---
drivers/gpu
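[The diffstat is truncated in the archive. To illustrate the ordering described in the commit message, hold back new submissions while the GuC preemption is in flight, then unsubmit the preempted requests, submit the new work, and replay what was preempted. A minimal sketch of that ordering follows; every helper named below (guc_preemption_in_progress, unsubmit_inflight_requests, submit_pending_requests, resubmit_requests) is a placeholder for this illustration and does not claim to match the functions in the patch.]

/*
 * Illustration of the submission ordering only; all helpers here are
 * placeholders and do not correspond to functions in the patch.
 */
static void guc_preemption_done_sketch(struct intel_engine_cs *engine)
{
	LIST_HEAD(resubmit);

	spin_lock_irq(&engine->timeline->lock);

	/* 1. While preemption is still in progress, send nothing new. */
	if (guc_preemption_in_progress(engine)) {
		spin_unlock_irq(&engine->timeline->lock);
		return;
	}

	/* 2. Preemption finished: pull back the requests it displaced. */
	unsubmit_inflight_requests(engine, &resubmit);

	/*
	 * 3. Submit the new work (possibly the request that caused the
	 *    preemption) ahead of the displaced requests...
	 */
	submit_pending_requests(engine);

	/* 4. ...and then resubmit the preempted workload behind it. */
	resubmit_requests(engine, &resubmit);

	spin_unlock_irq(&engine->timeline->lock);
}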