On 6/4/25 10:16, Philipp Stanner wrote:
> struct drm_sched_init_args provides the possibility of letting the
> scheduler use user-controlled workqueues, instead of the scheduler
> creating its own workqueues. It's currently not documented who would
> want to use that.
> 
> Not sharing the submit_wq between driver and scheduler has the advantage
> that no negative interference between them can occur (e.g., MMU notifier
> callbacks waiting for fences to get signaled). A separate timeout_wq
> should rarely be necessary, since the worst that using the system_wq can
> do is delay a timeout.
> 
> Discourage the use of driver-provided workqueues in the documentation.
> 
> Suggested-by: Danilo Krummrich <d...@kernel.org>
> Signed-off-by: Philipp Stanner <pha...@kernel.org>
> ---
>  include/drm/gpu_scheduler.h | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
> index 81dcbfc8c223..11740d745223 100644
> --- a/include/drm/gpu_scheduler.h
> +++ b/include/drm/gpu_scheduler.h
> @@ -590,14 +590,17 @@ struct drm_gpu_scheduler {
>   *
>   * @ops: backend operations provided by the driver
>   * @submit_wq: workqueue to use for submission. If NULL, an ordered wq is
> - *          allocated and used.
> + *          allocated and used. It is recommended to pass NULL unless there
> + *          is a good reason not to.

Yeah, that's probably a good idea. I'm not sure why xe and nouveau actually 
wanted that.

>   * @num_rqs: Number of run-queues. This may be at most DRM_SCHED_PRIORITY_COUNT,
>   *        as there's usually one run-queue per priority, but may be less.
>   * @credit_limit: the number of credits this scheduler can hold from all jobs
>   * @hang_limit: number of times to allow a job to hang before dropping it.
>   *           This mechanism is DEPRECATED. Set it to 0.
>   * @timeout: timeout value in jiffies for submitted jobs.
> - * @timeout_wq: workqueue to use for timeout work. If NULL, the system_wq is used.
> + * @timeout_wq: workqueue to use for timeout work. If NULL, the system_wq is
> + *           used. It is recommended to pass NULL unless there is a good
> + *           reason not to.

Well, that's a rather bad idea.

Using the same single-threaded (ordered) work queue for the timeouts of multiple 
scheduler instances has the major advantage that their timeouts are handled 
sequentially.

In other words, multiple schedulers post their timeout work items to the same 
queue; the first one to run resets the specific HW block in question and cancels 
all pending timeouts and work items from the other schedulers which use the same 
HW block.
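
Purely as illustration (not taken from any particular driver), a rough sketch of 
that pattern could look like the code below. Everything with a foo_ prefix, 
FOO_NUM_RINGS and the reset helper are made-up placeholders; only the 
drm_sched_*(), workqueue and jiffies calls are real APIs:

/* One HW block driven by several scheduler instances (one per ring). */
struct foo_hw_block {
	struct workqueue_struct *timeout_wq;
	struct drm_gpu_scheduler sched[FOO_NUM_RINGS];
};

static int foo_hw_block_init(struct foo_hw_block *blk, struct device *dev,
			     const struct drm_sched_backend_ops *ops)
{
	struct drm_sched_init_args args = {
		.ops = ops,
		.submit_wq = NULL,	/* let each scheduler allocate its own */
		.num_rqs = DRM_SCHED_PRIORITY_COUNT,
		.credit_limit = 64,
		.hang_limit = 0,
		.timeout = msecs_to_jiffies(500),
		.name = "foo",
		.dev = dev,
	};
	int i, ret;

	blk->timeout_wq = alloc_ordered_workqueue("foo-tdr", 0);
	if (!blk->timeout_wq)
		return -ENOMEM;

	/* All schedulers of this HW block share the same ordered timeout_wq. */
	args.timeout_wq = blk->timeout_wq;

	for (i = 0; i < FOO_NUM_RINGS; i++) {
		ret = drm_sched_init(&blk->sched[i], &args);
		if (ret)
			return ret;
	}
	return 0;
}

static enum drm_gpu_sched_stat foo_timedout_job(struct drm_sched_job *job)
{
	struct foo_hw_block *blk = foo_job_to_hw_block(job);
	int i;

	/*
	 * Because timeout_wq is ordered, no other timeout handler of this
	 * HW block can run concurrently. Stop all schedulers (drm_sched_stop()
	 * also cancels their pending timeout work), reset the block, restart.
	 */
	for (i = 0; i < FOO_NUM_RINGS; i++)
		drm_sched_stop(&blk->sched[i],
			       i == foo_job_ring(job) ? job : NULL);

	foo_hw_block_reset(blk);

	for (i = 0; i < FOO_NUM_RINGS; i++)
		drm_sched_start(&blk->sched[i], 0);

	return DRM_GPU_SCHED_STAT_NOMINAL;
}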

Sima, I, and a few other people came up with this approach because both amdgpu 
and IIRC panthor had implemented it in their own specific ways and, as usual, 
got it wrong.

If I'm not completely mistaken, this approach is now used by amdgpu, panthor, xe 
and imagination, and it has proven to be rather flexible and reliable. It just 
looks like we never documented that you should do it this way.

Regards,
Christian.

>   * @score: score atomic shared with other schedulers. May be NULL.
>   * @name: name (typically the driver's name). Used for debugging
>   * @dev: associated device. Used for debugging
