On 06.09.24 20:06, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin <tvrtko.ursu...@igalia.com>
An entity's run queue can change during drm_sched_entity_push_job(), so make
sure to update the score consistently.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursu...@igalia.com>
Fixes: d41a39dda140 ("drm/scheduler: improve job distribution with multiple queues")
Good catch, that might explain some of the odd behavior we have seen for
load balancing.
Reviewed-by: Christian König <christian.koe...@amd.com>
Cc: Nirmoy Das <nirmoy....@amd.com>
Cc: Christian König <christian.koe...@amd.com>
Cc: Luben Tuikov <ltuiko...@gmail.com>
Cc: Matthew Brost <matthew.br...@intel.com>
Cc: David Airlie <airl...@gmail.com>
Cc: Daniel Vetter <dan...@ffwll.ch>
Cc: dri-devel@lists.freedesktop.org
Cc: <sta...@vger.kernel.org> # v5.9+
---
drivers/gpu/drm/scheduler/sched_entity.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
index 62b07ef7630a..2a910c1df072 100644
--- a/drivers/gpu/drm/scheduler/sched_entity.c
+++ b/drivers/gpu/drm/scheduler/sched_entity.c
@@ -586,7 +586,6 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
 	ktime_t submit_ts;
 
 	trace_drm_sched_job(sched_job, entity);
-	atomic_inc(entity->rq->sched->score);
 	WRITE_ONCE(entity->last_user, current->group_leader);
 
 	/*
@@ -612,6 +611,7 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
 		rq = entity->rq;
 
+		atomic_inc(rq->sched->score);
 		drm_sched_rq_add_entity(rq, entity);
 		spin_unlock(&entity->rq_lock);
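
For anyone following the thread without the file open, here is a rough
sketch of how the tail of drm_sched_entity_push_job() reads with this
applied. Everything outside the two hunks above is paraphrased from the
in-tree scheduler and may not match your exact kernel version; the FIFO
timestamp update and the scheduler wakeup at the end are elided:

void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
{
	struct drm_sched_entity *entity = sched_job->entity;
	bool first;
	ktime_t submit_ts;

	trace_drm_sched_job(sched_job, entity);
	/* No more unlocked entity->rq->sched->score bump here. */
	WRITE_ONCE(entity->last_user, current->group_leader);

	/*
	 * Set submit_ts before pushing; once queued the job may
	 * complete and be freed at any time.
	 */
	sched_job->submit_ts = submit_ts = ktime_get();
	first = spsc_queue_push(&entity->job_queue, &sched_job->queue_node);

	/* The first job wakes up the scheduler. */
	if (first) {
		struct drm_sched_rq *rq;

		spin_lock(&entity->rq_lock);
		if (entity->stopped) {
			spin_unlock(&entity->rq_lock);
			DRM_ERROR("Trying to push to a killed entity\n");
			return;
		}

		/*
		 * entity->rq is stable under rq_lock, so the score is
		 * now incremented on the same scheduler the entity is
		 * actually added to.
		 */
		rq = entity->rq;
		atomic_inc(rq->sched->score);
		drm_sched_rq_add_entity(rq, entity);
		spin_unlock(&entity->rq_lock);

		/* FIFO timestamp update and wakeup elided. */
	}
}

The point being: with the old unlocked increment, the entity could be
moved to a different scheduler between the atomic_inc() and
drm_sched_rq_add_entity(), so the increment and the matching decrement
taken when the entity leaves the run queue could land on different
schedulers' scores, slowly skewing the load metric that d41a39dda140
introduced for picking the least loaded scheduler.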