Hi Philipp,

On 08/05/2025 12:03, Philipp Stanner wrote:
On Thu, 2025-04-24 at 11:55 +0200, Philipp Stanner wrote:
The unit tests so far took care manually of avoiding memory leaks that
might have occurred when calling drm_sched_fini().

The scheduler now takes care by itself of avoiding memory leaks if the
driver provides the callback drm_sched_backend_ops.kill_fence_context().

Implement that callback for the unit tests. Remove the manual cleanup
code.
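
(For reference, the prototype of the new op as I read it from the diff
below, presumably invoked from drm_sched_fini():)

/*
 * New member of struct drm_sched_backend_ops as proposed by this
 * series: lets the driver kill its fence context and signal all
 * not-yet-signalled hardware fences so nothing leaks on teardown.
 */
void (*kill_fence_context)(struct drm_gpu_scheduler *sched);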

@Tvrtko: On a scale from 1-10, how much do you love this patch? :)

Specific patch aside, it is the series as a whole for which I would like to be sure there isn't a more elegant way to achieve the same end result.

Like that sketch of a counter-proposal I sent, for the reasons listed with it. Those were, AFAIR: to avoid needing to add more state machine, to avoid mandating that drivers keep an internal list, and to align better with the existing prototypes in the sched ops table (where everything operates on jobs). A rough sketch of what I mean follows below.
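
To be clear, this is only illustrative; the op name cancel_job and its exact signature are my assumption, not an existing interface. The idea is that drm_sched_fini() walks the scheduler's own pending list and asks the driver to cancel each job individually, so drivers need no list of their own:

/*
 * Hypothetical per-job cancel callback for the mock scheduler,
 * mirroring the existing job-based ops (run_job, timedout_job,
 * free_job). Per job it does exactly what
 * mock_sched_fence_context_kill() below does, but the iteration
 * over pending jobs would live in drm_sched_fini(), not in the
 * driver.
 */
static void mock_sched_cancel_job(struct drm_sched_job *sched_job)
{
	struct drm_mock_sched_job *job =
		container_of(sched_job, struct drm_mock_sched_job, base);
	unsigned long flags;

	spin_lock_irqsave(&job->lock, flags);
	if (!dma_fence_is_signaled_locked(&job->hw_fence)) {
		dma_fence_set_error(&job->hw_fence, -ECANCELED);
		dma_fence_signal_locked(&job->hw_fence);
	}
	complete(&job->done);
	spin_unlock_irqrestore(&job->lock, flags);
}

With something like that in the ops table there would be no new driver-side state to manage and no scheduler-wide callback whose shape differs from the existing job-centric ones.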

Regards,

Tvrtko

Signed-off-by: Philipp Stanner <pha...@kernel.org>
---
  .../gpu/drm/scheduler/tests/mock_scheduler.c  | 34 ++++++++++++-------
  1 file changed, 21 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/tests/mock_scheduler.c b/drivers/gpu/drm/scheduler/tests/mock_scheduler.c
index f999c8859cf7..a72d26ca8262 100644
--- a/drivers/gpu/drm/scheduler/tests/mock_scheduler.c
+++ b/drivers/gpu/drm/scheduler/tests/mock_scheduler.c
@@ -228,10 +228,30 @@ static void mock_sched_free_job(struct drm_sched_job *sched_job)
        /* Mock job itself is freed by the kunit framework. */
  }
+static void mock_sched_fence_context_kill(struct drm_gpu_scheduler *gpu_sched)
+{
+       struct drm_mock_scheduler *sched = drm_sched_to_mock_sched(gpu_sched);
+       struct drm_mock_sched_job *job;
+       unsigned long flags;
+
+       spin_lock_irqsave(&sched->lock, flags);
+       list_for_each_entry(job, &sched->job_list, link) {
+               spin_lock(&job->lock);
+               if (!dma_fence_is_signaled_locked(&job->hw_fence)) {
+                       dma_fence_set_error(&job->hw_fence, -ECANCELED);
+                       dma_fence_signal_locked(&job->hw_fence);
+               }
+               complete(&job->done);
+               spin_unlock(&job->lock);
+       }
+       spin_unlock_irqrestore(&sched->lock, flags);
+}
+
  static const struct drm_sched_backend_ops drm_mock_scheduler_ops = {
        .run_job = mock_sched_run_job,
        .timedout_job = mock_sched_timedout_job,
-       .free_job = mock_sched_free_job
+       .free_job = mock_sched_free_job,
+       .kill_fence_context = mock_sched_fence_context_kill,
  };
 /**
@@ -300,18 +320,6 @@ void drm_mock_sched_fini(struct drm_mock_scheduler *sched)
                drm_mock_sched_job_complete(job);
        spin_unlock_irqrestore(&sched->lock, flags);
-       /*
-        * Free completed jobs and jobs not yet processed by the DRM scheduler
-        * free worker.
-        */
-       spin_lock_irqsave(&sched->lock, flags);
-       list_for_each_entry_safe(job, next, &sched->done_list, link)
-               list_move_tail(&job->link, &list);
-       spin_unlock_irqrestore(&sched->lock, flags);
-
-       list_for_each_entry_safe(job, next, &list, link)
-               mock_sched_free_job(&job->base);
-
        drm_sched_fini(&sched->base);
  }

