First of all, you need to CC the scheduler maintainers; use the
get_maintainer.pl script to find them. Adding them on CC now.

On 10.07.25 08:36, Lin.Cao wrote:
> When application A submits jobs (a1, a2, a3) and application B submits
> job b1 with a dependency on a2's scheduler fence, killing application A
> before run_job(a1) causes drm_sched_entity_kill_jobs_work() to force
> signal all of A's jobs sequentially. However, because entity_kill_job_work()
> queues neither work_run_job nor work_free_job, the scheduler stays
> asleep and application B hangs.

Ah! That's because of the dependency optimization for submissions to the same 
scheduler in drm_sched_entity_add_dependency_cb().

Yeah that suddenly starts to make sense.

> Add a drm_sched_wakeup() call in entity_kill_job_work() to prevent the
> scheduler from sleeping and application B from hanging.
> 
> Signed-off-by: Lin.Cao <linca...@amd.com>
> ---
>  drivers/gpu/drm/scheduler/sched_entity.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
> index e671aa241720..a22b0f65558a 100644
> --- a/drivers/gpu/drm/scheduler/sched_entity.c
> +++ b/drivers/gpu/drm/scheduler/sched_entity.c
> @@ -180,6 +180,7 @@ static void drm_sched_entity_kill_jobs_work(struct work_struct *wrk)
>       drm_sched_fence_finished(job->s_fence, -ESRCH);
>       WARN_ON(job->s_fence->parent);
>       job->sched->ops->free_job(job);
> +     drm_sched_wakeup(job->sched);

That should probably be after drm_sched_fence_scheduled().

Alternatively, we could drop the optimization in 
drm_sched_entity_add_dependency_cb() altogether; scheduling the work item again 
has only minimal overhead.

Apart from that looks good to me.

Regards,
Christian.

>  }
>  
>  /* Signal the scheduler finished fence when the entity in question is killed. */
