On 16/09/2025 08:41, Philipp Stanner wrote:
On Thu, 2025-09-11 at 15:55 +0100, Tvrtko Ursulin wrote:

On 11/09/2025 15:20, Philipp Stanner wrote:
On Wed, 2025-09-03 at 11:18 +0100, Tvrtko Ursulin wrote:
Move the code dealing with entities entering and exiting run queues to
helpers to logically separate it from jobs entering and exiting entities.

Sorry if I've asked this before, but does this strictly depend on the
preceding patches or could it be branched out?

There is no fundamental dependency so I could re-order and pull it ahead
if you are certain that is what you prefer?

Well, you know my opinion: If it's a general improvement not directly
necessary for a series, it should be sent separately.

For this patch, however, I'm not even sure whether it's really
improving the code base. The number of functions seems the same, just
with different names, and the code base gets even slightly larger.

There is actually one new function (pop). But one previously exported function (the rbtree update) gets hidden as an implementation detail.

Can you elaborate a bit on why you think this patch makes sense?

Before the patch, sched_main.c implements drm_sched_rq_remove_entity() as an interface operating on run queues and used by the entity code. It handles both the entity list and the FIFO rbtree. All good there.

The other two operations on those two data structures are add and pop.

But while the existing drm_sched_rq_add_entity() sounds, by name, analogous to drm_sched_rq_remove_entity(), in reality it isn't. It only handles the entity list part, while the FIFO rbtree handling is open coded in sched_entity.c.

Job pop is the same - rbtree handling is open coded in sched_entity.c.

The patch consolidates all of the entity run queue management into a contained API which is aligned with the above-mentioned existing drm_sched_rq_remove_entity().

drm_sched_rq_add_entity() gets conceptually aligned with drm_sched_rq_remove_entity(), with the rbtree operations removed from sched_entity.c.

drm_sched_rq_pop_entity() is added with the same effect.

The two new pieces of API are implemented next to drm_sched_rq_remove_entity() in sched_main.c. (Later I move all three, together with other bits relating to run queue management, to a new sched_rq.c.)

Regards,

Tvrtko

Signed-off-by: Tvrtko Ursulin <tvrtko.ursu...@igalia.com>
Cc: Christian König <christian.koe...@amd.com>
Cc: Danilo Krummrich <d...@kernel.org>
Cc: Matthew Brost <matthew.br...@intel.com>
Cc: Philipp Stanner <pha...@kernel.org>
---
   drivers/gpu/drm/scheduler/sched_entity.c   | 64 ++-------------
   drivers/gpu/drm/scheduler/sched_internal.h |  8 +-
   drivers/gpu/drm/scheduler/sched_main.c     | 95 +++++++++++++++++++---
   3 files changed, 91 insertions(+), 76 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
index 4852006f2308..7a0a52ba87bf 100644
--- a/drivers/gpu/drm/scheduler/sched_entity.c
+++ b/drivers/gpu/drm/scheduler/sched_entity.c
@@ -456,24 +456,9 @@ drm_sched_job_dependency(struct drm_sched_job *job,
        return NULL;
   }
-static ktime_t
-drm_sched_rq_get_rr_ts(struct drm_sched_rq *rq, struct drm_sched_entity *entity)
-{
-       ktime_t ts;
-
-       lockdep_assert_held(&entity->lock);
-       lockdep_assert_held(&rq->lock);
-
-       ts = ktime_add_ns(rq->rr_ts, 1);
-       entity->rr_ts = ts;
-       rq->rr_ts = ts;
-
-       return ts;
-}
-
 struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity)
   {
-       struct drm_sched_job *sched_job, *next_job;
+       struct drm_sched_job *sched_job;
   sched_job = drm_sched_entity_queue_peek(entity);
        if (!sched_job)
@@ -502,26 +487,7 @@ struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity)
 	spsc_queue_pop(&entity->job_queue);
 
-	/*
-        * Update the entity's location in the min heap according to
-        * the timestamp of the next job, if any.
-        */
-       next_job = drm_sched_entity_queue_peek(entity);
-       if (next_job) {
-               struct drm_sched_rq *rq;
-               ktime_t ts;
-
-               spin_lock(&entity->lock);
-               rq = entity->rq;
-               spin_lock(&rq->lock);
-               if (drm_sched_policy == DRM_SCHED_POLICY_FIFO)
-                       ts = next_job->submit_ts;
-               else
-                       ts = drm_sched_rq_get_rr_ts(rq, entity);
-               drm_sched_rq_update_fifo_locked(entity, rq, ts);
-               spin_unlock(&rq->lock);
-               spin_unlock(&entity->lock);
-       }
+       drm_sched_rq_pop_entity(entity);
   /* Jobs and entities might have different lifecycles. Since we're
         * removing the job from the entities queue, set the jobs entity pointer
@@ -611,30 +577,10 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
        /* first job wakes up scheduler */
        if (first) {
                struct drm_gpu_scheduler *sched;
-               struct drm_sched_rq *rq;
-
-		/* Add the entity to the run queue */
-               spin_lock(&entity->lock);
-               if (entity->stopped) {
-                       spin_unlock(&entity->lock);
-
-                       DRM_ERROR("Trying to push to a killed entity\n");
-                       return;
-               }
-
-               rq = entity->rq;
-               sched = rq->sched;
-
-               spin_lock(&rq->lock);
-               drm_sched_rq_add_entity(rq, entity);
-               if (drm_sched_policy == DRM_SCHED_POLICY_RR)
-                       submit_ts = entity->rr_ts;
-               drm_sched_rq_update_fifo_locked(entity, rq, submit_ts);
-
-               spin_unlock(&rq->lock);
-               spin_unlock(&entity->lock);
-
-               drm_sched_wakeup(sched);
+               sched = drm_sched_rq_add_entity(entity, submit_ts);
+               if (sched)
+                       drm_sched_wakeup(sched);
        }
   }
   EXPORT_SYMBOL(drm_sched_entity_push_job);
diff --git a/drivers/gpu/drm/scheduler/sched_internal.h b/drivers/gpu/drm/scheduler/sched_internal.h
index 7ea5a6736f98..8269c5392a82 100644
--- a/drivers/gpu/drm/scheduler/sched_internal.h
+++ b/drivers/gpu/drm/scheduler/sched_internal.h
@@ -12,13 +12,11 @@ extern int drm_sched_policy;
 void drm_sched_wakeup(struct drm_gpu_scheduler *sched);
 
-void drm_sched_rq_add_entity(struct drm_sched_rq *rq,
-                            struct drm_sched_entity *entity);
+struct drm_gpu_scheduler *
+drm_sched_rq_add_entity(struct drm_sched_entity *entity, ktime_t ts);
   void drm_sched_rq_remove_entity(struct drm_sched_rq *rq,
                                struct drm_sched_entity *entity);
-
-void drm_sched_rq_update_fifo_locked(struct drm_sched_entity *entity,
-                                    struct drm_sched_rq *rq, ktime_t ts);
+void drm_sched_rq_pop_entity(struct drm_sched_entity *entity);
  void drm_sched_entity_select_rq(struct drm_sched_entity *entity);
 struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity);
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 1db0a4aa1d46..c53931e63458 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -151,9 +151,9 @@ static void drm_sched_rq_remove_fifo_locked(struct drm_sched_entity *entity,
        }
   }
-void drm_sched_rq_update_fifo_locked(struct drm_sched_entity *entity,
-                                    struct drm_sched_rq *rq,
-                                    ktime_t ts)
+static void drm_sched_rq_update_fifo_locked(struct drm_sched_entity *entity,
+                                           struct drm_sched_rq *rq,
+                                           ktime_t ts)
   {
        /*
         * Both locks need to be grabbed, one to protect from entity->rq change
@@ -191,22 +191,45 @@ static void drm_sched_rq_init(struct drm_gpu_scheduler *sched,
   /**
    * drm_sched_rq_add_entity - add an entity
    *
- * @rq: scheduler run queue
    * @entity: scheduler entity
+ * @ts: submission timestamp
    *
    * Adds a scheduler entity to the run queue.
+ *
+ * Returns a DRM scheduler pre-selected to handle this entity.
    */
-void drm_sched_rq_add_entity(struct drm_sched_rq *rq,
-                            struct drm_sched_entity *entity)
+struct drm_gpu_scheduler *
+drm_sched_rq_add_entity(struct drm_sched_entity *entity, ktime_t ts)
   {
-       lockdep_assert_held(&entity->lock);
-       lockdep_assert_held(&rq->lock);
+       struct drm_gpu_scheduler *sched;
+       struct drm_sched_rq *rq;
 
-	if (!list_empty(&entity->list))
-               return;
+       /* Add the entity to the run queue */
+       spin_lock(&entity->lock);
+       if (entity->stopped) {
+               spin_unlock(&entity->lock);
 
-	atomic_inc(rq->sched->score);
-       list_add_tail(&entity->list, &rq->entities);
+               DRM_ERROR("Trying to push to a killed entity\n");
+               return NULL;
+       }
+
+       rq = entity->rq;
+       spin_lock(&rq->lock);
+       sched = rq->sched;
+
+       if (list_empty(&entity->list)) {
+               atomic_inc(sched->score);
+               list_add_tail(&entity->list, &rq->entities);
+       }
+
+       if (drm_sched_policy == DRM_SCHED_POLICY_RR)
+               ts = entity->rr_ts;
+       drm_sched_rq_update_fifo_locked(entity, rq, ts);
+
+       spin_unlock(&rq->lock);
+       spin_unlock(&entity->lock);
+
+       return sched;
   }
  /**
@@ -235,6 +258,54 @@ void drm_sched_rq_remove_entity(struct drm_sched_rq *rq,
        spin_unlock(&rq->lock);
   }
+static ktime_t
+drm_sched_rq_get_rr_ts(struct drm_sched_rq *rq, struct drm_sched_entity *entity)
+{
+       ktime_t ts;
+
+       lockdep_assert_held(&entity->lock);
+       lockdep_assert_held(&rq->lock);
+
+       ts = ktime_add_ns(rq->rr_ts, 1);
+       entity->rr_ts = ts;
+       rq->rr_ts = ts;
+
+       return ts;
+}
+
+/**
+ * drm_sched_rq_pop_entity - pops an entity
+ *
+ * @entity: scheduler entity
+ *
+ * To be called every time after a job is popped from the entity.
+ */
+void drm_sched_rq_pop_entity(struct drm_sched_entity *entity)
+{
+       struct drm_sched_job *next_job;
+       struct drm_sched_rq *rq;
+       ktime_t ts;
+
+       /*
+        * Update the entity's location in the min heap according to
+        * the timestamp of the next job, if any.
+        */
+       next_job = drm_sched_entity_queue_peek(entity);
+       if (!next_job)
+               return;
+
+       spin_lock(&entity->lock);
+       rq = entity->rq;
+       spin_lock(&rq->lock);
+       if (drm_sched_policy == DRM_SCHED_POLICY_FIFO)
+               ts = next_job->submit_ts;
+       else
+               ts = drm_sched_rq_get_rr_ts(rq, entity);
+       drm_sched_rq_update_fifo_locked(entity, rq, ts);
+       spin_unlock(&rq->lock);
+       spin_unlock(&entity->lock);
+}
+
   /**
    * drm_sched_rq_select_entity - Select an entity which provides a job to run
    *



