On Mon, 11 Sep 2023 19:16:04 -0700
Matthew Brost <matthew.br...@intel.com> wrote:

> @@ -1071,6 +1063,7 @@ static int drm_sched_main(void *param)
>   *
>   * @sched: scheduler instance
>   * @ops: backend operations for this scheduler
> + * @submit_wq: workqueue to use for submission. If NULL, the system_wq is used
>   * @hw_submission: number of hw submissions that can be in flight
>   * @hang_limit: number of times to allow a job to hang before dropping it
>   * @timeout: timeout value in jiffies for the scheduler
> @@ -1084,14 +1077,16 @@ static int drm_sched_main(void *param)
>   */
>  int drm_sched_init(struct drm_gpu_scheduler *sched,
>                  const struct drm_sched_backend_ops *ops,
> +                struct workqueue_struct *submit_wq,
>                  unsigned hw_submission, unsigned hang_limit,
>                  long timeout, struct workqueue_struct *timeout_wq,
>                  atomic_t *score, const char *name, struct device *dev)
>  {
> -     int i, ret;
> +     int i;
>       sched->ops = ops;
>       sched->hw_submission_limit = hw_submission;
>       sched->name = name;
> +     sched->submit_wq = submit_wq ? : system_wq;

My understanding is that the new design is based on the idea of
splitting the drm_sched_main function into work items that can be
scheduled independently, so users/drivers can insert their own
steps/works without requiring changes to drm_sched. This approach
relies on the properties of ordered workqueues (one work executed at a
time, FIFO behavior) to guarantee that these steps are still executed
in order, and one at a time.

Given what you're trying to achieve, I think we should create an ordered
workqueue instead of using the system_wq when submit_wq is NULL;
otherwise you lose the ordering/serialization guarantee that both
the dedicated kthread and an ordered wq provide. It will probably work
for most drivers, but might lead to subtle/hard-to-spot ordering issues.
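To make that concrete, the fallback could look something like the sketch
below. This is only an illustration, not a drop-in replacement: the
`own_submit_wq` flag is hypothetical (some bookkeeping is needed so that
drm_sched_fini() knows whether to destroy the workqueue), and the flags
passed to alloc_ordered_workqueue() would need thought (e.g. whether
WQ_MEM_RECLAIM is required on the submission path).

```c
	/*
	 * Sketch only: instead of falling back to system_wq, allocate a
	 * per-scheduler ordered workqueue, preserving the one-work-at-a-time
	 * FIFO guarantee the dedicated kthread used to provide.
	 */
	if (submit_wq) {
		sched->submit_wq = submit_wq;
	} else {
		sched->submit_wq = alloc_ordered_workqueue("%s", 0, name);
		if (!sched->submit_wq)
			return -ENOMEM;
		/* hypothetical flag so drm_sched_fini() destroys it */
		sched->own_submit_wq = true;
	}
```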

>       sched->timeout = timeout;
>       sched->timeout_wq = timeout_wq ? : system_wq;
>       sched->hang_limit = hang_limit;
> @@ -1100,23 +1095,15 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
>       for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_COUNT; i++)
>               drm_sched_rq_init(sched, &sched->sched_rq[i]);
>  
> -     init_waitqueue_head(&sched->wake_up_worker);
>       init_waitqueue_head(&sched->job_scheduled);
>       INIT_LIST_HEAD(&sched->pending_list);
>       spin_lock_init(&sched->job_list_lock);
>       atomic_set(&sched->hw_rq_count, 0);
>       INIT_DELAYED_WORK(&sched->work_tdr, drm_sched_job_timedout);
> +     INIT_WORK(&sched->work_submit, drm_sched_main);
>       atomic_set(&sched->_score, 0);
>       atomic64_set(&sched->job_id_count, 0);
> -
> -     /* Each scheduler will run on a seperate kernel thread */
> -     sched->thread = kthread_run(drm_sched_main, sched, sched->name);
> -     if (IS_ERR(sched->thread)) {
> -             ret = PTR_ERR(sched->thread);
> -             sched->thread = NULL;
> -             DRM_DEV_ERROR(sched->dev, "Failed to create scheduler for %s.\n", name);
> -             return ret;
> -     }
> +     sched->pause_submit = false;
>  
>       sched->ready = true;
>       return 0;
