On 07/03/2025 13:41, Philipp Stanner wrote:
Hi,

You forgot to put folks in CC as recipients for the cover letter :(


On Thu, 2025-03-06 at 17:05 +0000, Tvrtko Ursulin wrote:
Move some options out into a new debug specific kconfig file in order to
make things a bit cleaner.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursu...@igalia.com>
Cc: Christian König <christian.koe...@amd.com>
Cc: Danilo Krummrich <d...@kernel.org>
Cc: Matthew Brost <matthew.br...@intel.com>
Cc: Philipp Stanner <pha...@kernel.org>

We all have our individual workflows, so don't take this as lecturing
or anything; I just suspect that I was forgotten in the cover letter
because you Cc people by hand in the individual patches.

What I do is run get_maintainer and then put the individuals listed
there into the --to= field. That sends the entire series to all of
them.

Only sometimes, when there's a huge list of recipients or when the
patches of a series are very independent, do I deviate from that rule.

JFYI

Note that it was there in v3; I just forgot to paste it this time.

Anyway, we have a bigger problem with the entire series. I now tested
again with the same setup as yesterday and the faults are indeed gone,
so that's good.

But to be sure I then ran kmemleak and got a list of leaks more than
2000 lines long.

There is this comment for drm_sched_fini which ends with:

"""
...
 * This stops submission of new jobs to the hardware through
* drm_sched_backend_ops.run_job(). Consequently, drm_sched_backend_ops.free_job()
 * will not be called for all jobs still in drm_gpu_scheduler.pending_list.
* There is no solution for this currently. Thus, it is up to the driver to make
 * sure that:
 *
 *  a) drm_sched_fini() is only called after for all submitted jobs
 *     drm_sched_backend_ops.free_job() has been called or that
* b) the jobs for which drm_sched_backend_ops.free_job() has not been called
 *     after drm_sched_fini() ran are freed manually.
 *

* FIXME: Take care of the above problem and prevent this function from leaking
 * the jobs in drm_gpu_scheduler.pending_list under any circumstances.
"""

I got bitten by that. I keep forgetting how fragile this thing is... :(
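
For reference, a minimal sketch of workaround (b) from that comment,
i.e. freeing whatever is left on the pending list by hand once
drm_sched_fini() has returned. Apart from the drm_sched_* and
drm_gpu_scheduler names, everything here (my_driver_teardown,
my_free_job, doing the walk without locking) is an assumption about a
hypothetical driver teardown path, not something from this series:

#include <linux/list.h>
#include <drm/gpu_scheduler.h>

/* Hypothetical per-driver free routine, for illustration only. */
void my_free_job(struct drm_sched_job *job);

static void my_driver_teardown(struct drm_gpu_scheduler *sched)
{
	struct drm_sched_job *job, *tmp;

	/* Stops the scheduler; free_job() will not run for jobs still pending. */
	drm_sched_fini(sched);

	/*
	 * Release the leftover jobs manually. No locking here, on the
	 * assumption that nothing else touches the scheduler after
	 * drm_sched_fini() has returned.
	 */
	list_for_each_entry_safe(job, tmp, &sched->pending_list, list) {
		list_del_init(&job->list);
		my_free_job(job);
	}
}

Option (a), making sure free_job() has been called for every submitted
job before drm_sched_fini() runs, would of course avoid the manual walk
entirely.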

Regards,

Tvrtko
