On Mon, Feb 05, 2024 at 09:44:56AM +0100, Christian König wrote:
> Am 02.02.24 um 22:58 schrieb Rodrigo Vivi:
> > On Tue, Jan 30, 2024 at 08:05:29AM +0100, Christian König wrote:
> > > Am 30.01.24 um 04:04 schrieb Matthew Brost:
> > > > Rather than loop over entities until one with a ready job is found,
> > > > re-queue the run job worker when drm_sched_entity_pop_job() returns 
> > > > NULL.
> > > > 
> > > > Fixes: 6dbd9004a55 ("drm/sched: Drain all entities in DRM sched run job worker")
> > First of all there's a small typo in this Fixes tag that needs to be fixed.
> > The correct one is:
> > 
> > Fixes: 66dbd9004a55 ("drm/sched: Drain all entities in DRM sched run job worker")

Cc: Dave Airlie <airl...@redhat.com>

> > 
> > But I couldn't apply this right now in any of our drm-tip trees because it
> > is not clear where this is coming from originally.
> > 
> > likely amd tree?!
> 
> No, this comes from Matthew's work on the DRM scheduler.
> 
> Matthew's patches were most likely merged through drm-misc.

The original is not there in drm-misc-next.
It looks like Dave took that one directly to drm-next.
So we either need the drm-misc maintainers to do a backmerge or
Dave to take this through drm-fixes directly.

> 
> Regards,
> Christian.
> 
> > 
> > > > Signed-off-by: Matthew Brost <matthew.br...@intel.com>
> > > Reviewed-by: Christian König <christian.koe...@amd.com>
> > Christian, if this came from the amd tree, could you please apply it there
> > and propagate it through your fixes flow?
> > 
> > Thanks,
> > Rodrigo.
> > 
> > > > ---
> > > >    drivers/gpu/drm/scheduler/sched_main.c | 15 +++++++++------
> > > >    1 file changed, 9 insertions(+), 6 deletions(-)
> > > > 
> > > > diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> > > > index 8acbef7ae53d..7e90c9f95611 100644
> > > > --- a/drivers/gpu/drm/scheduler/sched_main.c
> > > > +++ b/drivers/gpu/drm/scheduler/sched_main.c
> > > > @@ -1178,21 +1178,24 @@ static void drm_sched_run_job_work(struct work_struct *w)
> > > >         struct drm_sched_entity *entity;
> > > >         struct dma_fence *fence;
> > > >         struct drm_sched_fence *s_fence;
> > > > -       struct drm_sched_job *sched_job = NULL;
> > > > +       struct drm_sched_job *sched_job;
> > > >         int r;
> > > >         if (READ_ONCE(sched->pause_submit))
> > > >                 return;
> > > >         /* Find entity with a ready job */
> > > > -       while (!sched_job && (entity = drm_sched_select_entity(sched))) {
> > > > -               sched_job = drm_sched_entity_pop_job(entity);
> > > > -               if (!sched_job)
> > > > -                       complete_all(&entity->entity_idle);
> > > > -       }
> > > > +       entity = drm_sched_select_entity(sched);
> > > >         if (!entity)
> > > >                 return; /* No more work */
> > > > +       sched_job = drm_sched_entity_pop_job(entity);
> > > > +       if (!sched_job) {
> > > > +               complete_all(&entity->entity_idle);
> > > > +               drm_sched_run_job_queue(sched);
> > > > +               return;
> > > > +       }
> > > > +
> > > >         s_fence = sched_job->s_fence;
> > > >         atomic_add(sched_job->credits, &sched->credit_count);
> 
