On 22/04/2025 13:32, Danilo Krummrich wrote:
On Tue, Apr 22, 2025 at 01:07:47PM +0100, Tvrtko Ursulin wrote:

On 22/04/2025 12:13, Danilo Krummrich wrote:
On Tue, Apr 22, 2025 at 11:39:11AM +0100, Tvrtko Ursulin wrote:
The question I raised is whether there are other drivers which manage to clean up
everything correctly (like the mock scheduler does) but still trigger that
warning. Maybe there are not, and maybe the mock scheduler is the only false
positive.

So far the scheduler simply does not give any guideline on how to address the
problem, hence every driver does something of its own (or nothing, effectively
ignoring the problem). This is what we want to fix.

The mock scheduler keeps its own list of pending jobs and on teardown stops
the scheduler's workqueues, traverses its own list and eventually frees the
pending jobs without updating the scheduler's internal pending list.

So yes, it does avoid memory leaks, but it also leaves the scheduler's internal
structures in an invalid state, i.e. the scheduler's pending list still holds
pointers to already freed memory.

What if drm_sched_fini() starts touching the pending list? Then you'd end up
with UAF bugs with this implementation. We cannot invalidate the scheduler's
internal structures and yet call scheduler functions - e.g. drm_sched_fini() -
subsequently.
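
To illustrate the risk with a quick sketch (hypothetical code, nothing
drm_sched_fini() does today; pending_list, the job's list member and the
free_job() callback are real scheduler fields, the helper itself is made up):

static void drm_sched_reap_pending_list(struct drm_gpu_scheduler *sched)
{
	struct drm_sched_job *job, *next;

	/*
	 * Walks the scheduler's own bookkeeping. With the current mock
	 * scheduler teardown these entries point at memory the test has
	 * already freed, so the list_del_init() is a use-after-free and
	 * the free_job() call a double free.
	 */
	list_for_each_entry_safe(job, next, &sched->pending_list, list) {
		list_del_init(&job->list);
		sched->ops->free_job(job);
	}
}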

Hence, the current implementation of the mock scheduler is fundamentally flawed.
And so would be *every* driver that still has entries within the scheduler's
pending list.

This is not a false positive, it already caught a real bug -- in the mock
scheduler.

To avoid further splitting hairs on whether real bugs need to be able to
manifest or not, let's move past this with the conclusion that there are two
potential things to do here:

This is not about splitting hairs; it is about understanding that abusing
knowledge about the internals of a component to clean things up is *never* valid.

The first one is to either send separately, or include in this series, something
like:

diff --git a/drivers/gpu/drm/scheduler/tests/mock_scheduler.c b/drivers/gpu/drm/scheduler/tests/mock_scheduler.c
index f999c8859cf7..7c4df0e890ac 100644
--- a/drivers/gpu/drm/scheduler/tests/mock_scheduler.c
+++ b/drivers/gpu/drm/scheduler/tests/mock_scheduler.c
@@ -300,6 +300,8 @@ void drm_mock_sched_fini(struct drm_mock_scheduler *sched)
                 drm_mock_sched_job_complete(job);
         spin_unlock_irqrestore(&sched->lock, flags);

+       drm_sched_fini(&sched->base);
+
         /*
          * Free completed jobs and jobs not yet processed by the DRM scheduler
          * free worker.
@@ -311,8 +313,6 @@ void drm_mock_sched_fini(struct drm_mock_scheduler *sched)

         list_for_each_entry_safe(job, next, &list, link)
                 mock_sched_free_job(&job->base);
-
-       drm_sched_fini(&sched->base);
  }

  /**

That should satisfy the requirement to "clear" memory about to be freed and
be 100% compliant with drm_sched_fini() kerneldoc (guideline b).
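
For completeness, the same ordering as a generic driver-side pattern (rough
sketch, all foo_* names are invented for illustration; only drm_sched_fini(),
drm_sched_job_cleanup() and the list helpers are real API):

static void foo_device_fini(struct foo_device *fdev)
{
	struct foo_job *job, *next;

	/* Tear the scheduler down first so its workqueues are stopped
	 * and it will neither run nor free any further jobs.
	 */
	drm_sched_fini(&fdev->sched);

	/* Then free whatever jobs the driver still tracks itself, as
	 * per guideline b) of the drm_sched_fini() kerneldoc.
	 */
	list_for_each_entry_safe(job, next, &fdev->jobs, link) {
		list_del(&job->link);
		drm_sched_job_cleanup(&job->base);
		kfree(job);
	}
}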

But the new warning from 3/5 here will still be there AFAICT and would you
then agree it is a false positive?

No, I do not agree.

Even if a driver does what you describe, it is not the correct thing to do, and
having a warning call it out makes sense.

This way of cleaning things up relies entirely on knowledge of specific
scheduler internals and may fall apart if those internals change.

Secondly, the series should modify all drivers (including the unit tests)
which are known to trigger this false positive.

Again, there are no false positives. It is the scheduler that needs to call
free_job() and perform other potential cleanups. You can't just stop the
scheduler, leave it in an intermediate state and try to clean it up by hand,
relying on knowledge about its internals.
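
In other words, the driver should let the scheduler finish its free_job() work
before tearing down, along the lines of kerneldoc option a). A rough sketch
with made-up bar_* names (only drm_sched_job_cleanup() and drm_sched_fini()
are real API):

static void bar_free_job(struct drm_sched_job *sched_job)
{
	struct bar_job *job = container_of(sched_job, struct bar_job, base);
	struct bar_device *bdev = job->bdev;

	drm_sched_job_cleanup(sched_job);
	kfree(job);

	/* Last outstanding job has been reaped by the scheduler. */
	if (atomic_dec_and_test(&bdev->active_jobs))
		wake_up(&bdev->idle_wq);
}

static void bar_device_fini(struct bar_device *bdev)
{
	/* Only destroy the scheduler once free_job() has run for every
	 * submitted job, i.e. once the pending list is empty.
	 */
	wait_event(bdev->idle_wq, !atomic_read(&bdev->active_jobs));
	drm_sched_fini(&bdev->sched);
}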

Sorry, I don't see the argument for the claim that it relies on the internals
once the drm_sched_fini() call is re-positioned. In that case it is fully
compliant with:

/**
 * drm_sched_fini - Destroy a gpu scheduler
 *
 * @sched: scheduler instance
 *
 * Tears down and cleans up the scheduler.
 *
 * This stops submission of new jobs to the hardware through
 * drm_sched_backend_ops.run_job(). Consequently, drm_sched_backend_ops.free_job()
 * will not be called for all jobs still in drm_gpu_scheduler.pending_list.
 * There is no solution for this currently. Thus, it is up to the driver to make
 * sure that:
 *
 *  a) drm_sched_fini() is only called after for all submitted jobs
 *     drm_sched_backend_ops.free_job() has been called or that
 *  b) the jobs for which drm_sched_backend_ops.free_job() has not been called
 *     after drm_sched_fini() ran are freed manually.
 *
 * FIXME: Take care of the above problem and prevent this function from leaking
 * the jobs in drm_gpu_scheduler.pending_list under any circumstances.
 */

^^^ recommended solution b).

Consequently, when the pending list isn't empty by the time drm_sched_fini() is
called, it is *always* a bug.

I am simply arguing that a quick audit of the drivers should be done, to make
sure the dev_err is not added for drivers which clean up in compliance with the
drm_sched_fini() kerneldoc.

Starting to log errors from those would add work for many people in the
bug-handling chain. Sure, you can say let's add the dev_err and then we don't
have to look into the code base, just wait for bug reports to come our way.
That works too (on some level), but let's please state the intent clearly and
explicitly.

Regards,

Tvrtko
