On Wed, Jul 09, 2025 at 11:49:44AM +0100, Tvrtko Ursulin wrote:
>
> On 09/07/2025 05:45, Matthew Brost wrote:
> > On Tue, Jul 08, 2025 at 01:20:32PM +0100, Tvrtko Ursulin wrote:
> > > Currently the job free work item will lock sched->job_list_lock first time
> >
>
> Signed-off-by: Tvrtko Ursulin
> Cc: Christian König
> Cc: Danilo Krummrich
> Cc: Matthew Brost
The patch looks correct, but we do have a CI failure in a section which
stresses scheduling. It is probably just noise, though. Do you have Intel
hardware? If so, I suggest running xe_exec_threads.
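To put the quoted description in code: the double-lock pattern being removed
looks roughly like this (condensed sketch, not the exact upstream code):

static void drm_sched_free_job_work(struct work_struct *w)
{
	struct drm_gpu_scheduler *sched =
		container_of(w, struct drm_gpu_scheduler, work_free_job);
	struct drm_sched_job *job;

	/* First lock: pop one finished job off the pending list. */
	spin_lock(&sched->job_list_lock);
	job = list_first_entry_or_null(&sched->pending_list,
				       struct drm_sched_job, list);
	if (job && dma_fence_is_signaled(&job->s_fence->finished))
		list_del_init(&job->list);
	else
		job = NULL;
	spin_unlock(&sched->job_list_lock);

	if (!job)
		return;

	sched->ops->free_job(job);

	/* Second lock: taken again only to decide whether to re-queue
	 * ourselves -- this is the re-lock the patch avoids. */
	spin_lock(&sched->job_list_lock);
	if (!list_empty(&sched->pending_list))
		queue_work(sched->submit_wq, &sched->work_free_job);
	spin_unlock(&sched->job_list_lock);
}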
On Tue, Jul 08, 2025 at 01:21:21PM +0100, Tvrtko Ursulin wrote:
> Extract the two identical copies of code into a function epilogue to make
> the function smaller and more readable.
>
> Signed-off-by: Tvrtko Ursulin
> Cc: Christian König
> Cc: Danilo Krummrich
> Cc: Matthew Brost
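The shape of the cleanup, with made-up names, for anyone skimming the
archives: both return paths ended in the same statements, which now funnel
through a single epilogue label:

int foo(struct thing *t)
{
	int ret;

	ret = step_one(t);
	if (ret)
		goto out;

	ret = step_two(t);
out:
	/* previously duplicated at both return sites */
	cleanup(t);
	return ret;
}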
ueue approach. Another
> interesting thing would be C-state residencies and CPU power. But given
> that when the scheduler went from kthread to wq and lost the ability to
> queue more than one job, I don't think anyone measured this back then? In
> which case I suspect we even don
On Mon, Jul 07, 2025 at 02:38:07PM +0200, Christian König wrote:
> On 03.07.25 00:01, Matthew Brost wrote:
> >> diff --git a/drivers/gpu/drm/ttm/tests/ttm_bo_test.c b/drivers/gpu/drm/ttm/tests/ttm_bo_test.c
> >> index 6c77550c51af..5426b435f702 100644
> >
On Sun, Jul 06, 2025 at 04:44:27PM -0400, Mario Limonciello wrote:
> On 7/4/2025 6:12 AM, Samuel Zhang wrote:
> > This new API is used during hibernation to move GTT BOs to shmem after
> > VRAM eviction. The shmem will be flushed to the swap disk later, to
> > reduce system memory usage during hibernation.
>
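If it helps review, the general shape of such a copy-to-shmem step looks like
this (illustrative sketch only; bo_copy_to_shmem() and its signature are made
up here, not the API from the series):

static struct file *bo_copy_to_shmem(struct page **pages,
				     unsigned long npages)
{
	struct file *shmem;
	unsigned long i;

	shmem = shmem_file_setup("bo-hibernation",
				 (loff_t)npages << PAGE_SHIFT, 0);
	if (IS_ERR(shmem))
		return shmem;

	for (i = 0; i < npages; i++) {
		struct page *dst;
		void *vaddr;

		dst = shmem_read_mapping_page(shmem->f_mapping, i);
		if (IS_ERR(dst)) {
			fput(shmem);
			return ERR_CAST(dst);
		}

		/* assumes the source pages are kernel-mappable */
		vaddr = kmap_local_page(dst);
		memcpy(vaddr, page_address(pages[i]), PAGE_SIZE);
		kunmap_local(vaddr);

		set_page_dirty(dst);	/* so it reaches swap when flushed */
		put_page(dst);
	}

	return shmem;
}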
On Wed, Jul 02, 2025 at 01:00:27PM +0200, Christian König wrote:
> Give TTM BOs a separate cleanup function.
>
> This is the next step in removing the TTM BO reference counting and
> replacing it with the GEM object reference counting.
>
> Signed-off-by: Christian König
> ---
> drivers/gpu/drm/amd/amdg
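Sketch of where this is heading, as I read it: every TTM BO already embeds a
struct drm_gem_object, so once the TTM-side kref is gone, put simply becomes
the GEM put, and the separate cleanup function introduced here gets invoked
from the GEM free callback instead of a TTM refcount release (not the actual
patch, just the idea):

static inline void ttm_bo_put(struct ttm_buffer_object *bo)
{
	/* the GEM refcount becomes the only refcount */
	drm_gem_object_put(&bo->base);
}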
On Wed, Jul 02, 2025 at 01:00:26PM +0200, Christian König wrote:
> Hi everyone,
>
> v2 of this patch set. I've either pushed or removed the other
> patches from v1, so only two remain.
>
> Pretty straightforward conversion which shouldn't result in any visible
> technical difference.
>
> Please
On Mon, Jun 23, 2025 at 01:37:35PM +0100, Matthew Auld wrote:
> +Matt B who is adding clear-on-free support in xe. I'm not sure if we might
> also see something like this.
>
Thanks for the heads up.
> On 23/06/2025 06:52, Arunpravin Paneer Selvam wrote:
> > - Added a handler in DRM buddy manager
-by: Tvrtko Ursulin
This makes sense in the context of the series (e.g. assuming patch #9 lands).
With that:
Reviewed-by: Matthew Brost
> ---
> drivers/gpu/drm/xe/xe_guc_exec_queue_types.h | 2 ++
> drivers/gpu/drm/xe/xe_guc_submit.c | 7 ++-
> drivers/gpu/drm/xe
t from this series, to make the series completely about the
> new scheduling policy rather than other general improvements.
>
>
> P.
>
> >
> > Signed-off-by: Tvrtko Ursulin
> > Cc: Christian König
> > Cc: Danilo Krummrich
> > Cc: Matthew Brost
> > Cc: Philipp Stann
On Fri, May 09, 2025 at 04:33:40PM +0100, Tvrtko Ursulin wrote:
> Replace open-coded helper with the subsystem one.
>
You can probably just send this one by itself, as it is a good cleanup and
independent of the rest.
Reviewed-by: Matthew Brost
> Signed-off-by: Tvrtko Ursulin
> ---
> drivers/g
t. It's a large series, so I will put most people on Cc only in
> the cover letter as a ping of sorts. Whoever is interested can for now find
> the series in the archives.
>
> 1)
> https://lore.kernel.org/dri-devel/20231024160727.282960-1-tvrtko.ursu...@linux.intel.com/
>
with a container_of.
> > >
> > > This also allows us to remove duplicate definitions of
> > > to_drm_sched_job.
> > >
> > > Signed-off-by: Tvrtko Ursulin
> > > Cc: Christian König
> > > Cc: Danilo Krummrich
> > > Cc: Matthew
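For context, the helper being deduplicated is just a container_of() wrapper
over the entity queue node, along these lines:

#define to_drm_sched_job(sched_job)				\
	container_of((sched_job), struct drm_sched_job, queue_node)

with a per-file copy in more than one scheduler source file today; the series
gives it a single shared definition.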
due to
> > > > > > to_drm_sched_job
> > > > > > being implemented with a container_of.
> > > > > >
> > > > > > This also allows us to remove duplicate definitions of
> > > > > > to_drm_sched_job.
> > > >
misnaming in:
>
> commit 6f1cacf4eba7 ("drm/nouveau: Improve variable name in
> nouveau_sched_init()").
>
> Introduce a new struct for the scheduler init parameters and port all
> users.
>
> Signed-off-by: Philipp Stanner
For the Xe changes:
Acked-by: Matthew Brost
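For those who haven't read the series: the long parameter list is folded into
an init-args struct, roughly along these lines (abridged; see the patch for
the exact fields):

struct drm_sched_init_args {
	const struct drm_sched_backend_ops *ops;
	struct workqueue_struct *submit_wq;
	struct workqueue_struct *timeout_wq;
	u32 num_rqs;
	u32 credit_limit;
	unsigned int hang_limit;
	long timeout;
	atomic_t *score;
	const char *name;
	struct device *dev;
};

so a driver call site shrinks to something like (my_sched_ops is a made-up
name for the driver's callbacks):

	const struct drm_sched_init_args args = {
		.ops = &my_sched_ops,
		.num_rqs = DRM_SCHED_PRIORITY_COUNT,
		.credit_limit = 64,
		.timeout = HZ * 5,
		.name = "my-ring",
		.dev = dev,
	};

	ret = drm_sched_init(&sched, &args);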
On Wed, Jan 22, 2025 at 06:04:58PM +0100, Boris Brezillon wrote:
> On Wed, 22 Jan 2025 16:14:59 +
> Tvrtko Ursulin wrote:
>
> > On 22/01/2025 15:51, Boris Brezillon wrote:
> > > On Wed, 22 Jan 2025 15:08:20 +0100
> > > Philipp Stanner wrote:
> > >
> > >> --- a/drivers/gpu/drm/panthor/pant
On Wed, Jan 22, 2025 at 04:06:10PM +0100, Christian König wrote:
> Am 22.01.25 um 15:48 schrieb Philipp Stanner:
> > On Wed, 2025-01-22 at 15:34 +0100, Christian König wrote:
> > > Am 22.01.25 um 15:08 schrieb Philipp Stanner:
> > > > drm_sched_init() has a great many parameters and upcoming new
>
On Wed, Jan 22, 2025 at 03:48:54PM +0100, Philipp Stanner wrote:
> On Wed, 2025-01-22 at 15:34 +0100, Christian König wrote:
> > Am 22.01.25 um 15:08 schrieb Philipp Stanner:
> > > drm_sched_init() has a great many parameters and upcoming new
> > > functionality for the scheduler might add even mor
d.
>
> That was caused by this commit:
>
> commit 746ae46c11137ba21f0c0c68f082a9d8c1222c78
> Author: Matthew Brost
> Date: Wed Oct 23 16:59:17 2024 -0700
>
> drm/sched: Mark scheduler work queues with WQ_MEM_RECLAIM
>
> drm_gpu_scheduler.submit_wq is used to s
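In code terms that commit boils down to allocating the scheduler's internal
queue with the reclaim flag, sketched as:

	/* A workqueue that can run on the memory-reclaim path gets a rescuer
	 * thread so it can make forward progress without allocating memory.
	 * Flushing a WQ_MEM_RECLAIM queue from a non-reclaim one now trips
	 * the check_flush_dependency() warning, which is what surfaced here.
	 */
	sched->submit_wq = alloc_ordered_workqueue("%s", WQ_MEM_RECLAIM,
						   sched->name);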
Tvrtko's patch is
indeed accurate. For what it's worth, we encountered several similar
bugs in Xe that emerged once we added the correct work queue
annotations.
> > There is no need to use WQ_MEM_RECLAIM for the workqueue, or am I
> > missing something?
> >
> >
On Fri, Oct 04, 2024 at 04:28:29PM +0200, Thomas Hellström wrote:
> On Wed, 2024-10-02 at 14:54 +0200, Thomas Hellström wrote:
> > On Wed, 2024-10-02 at 14:45 +0200, Christian König wrote:
> > > Am 02.10.24 um 14:24 schrieb Thomas Hellström:
> > > > The ttm_device_init function uses multiple bool
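The shape of the fix being discussed, with illustrative (made-up) flag names:

enum ttm_device_flags {
	TTM_DEVICE_USE_DMA_ALLOC = BIT(0),
	TTM_DEVICE_USE_DMA32	 = BIT(1),
};

so that an unreadable call like

	ttm_device_init(bdev, funcs, dev, mapping, vma_mgr, true, false);

becomes self-documenting:

	ttm_device_init(bdev, funcs, dev, mapping, vma_mgr,
			TTM_DEVICE_USE_DMA_ALLOC);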
this series?
>
> To cut the long story short, the first three patches try to fix this race
> in the three places where I *think* it can manifest in different ways.
>
> Last patch is a trivial optimisation I spotted can be easily done.
>
> Cc: Christian König
> Cc: Alex Deucher
> Cc: Lub
and update all callers,
> apply the errno also to scheduler fences with hw fences
>
Seems reasonable to me, but all the callers pass an errno of zero to
drm_sched_start. Going to change that in a follow-up?
Anyway, LGTM, but I will leave the RB for a user of this interface.
Acked-by
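To make the calling convention concrete (every caller mentioned above passes
no error):

	drm_sched_start(sched, 0);

while a driver that knows the reset wiped a context could propagate that to
the unfinished fences instead, e.g.:

	drm_sched_start(sched, -ECANCELED);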
ched_start and probably should, as the TDR isn't restarted after we
do a device reset if the scheduler / entity didn't cause the reset.
Also I think we have a bug in drm_sched_job_begin too with respect to
restarting the TDR even when an existing job is running; will follow up on that too.
Anyways:
Reviewed-by: M
means that priority changes should now work (be able to change the
> > > selected run-queue) for all drivers and engines. In other words
> > > drm_sched_entity_set_priority() should now just work for all cases.
> > >
> > > To enable maintaining its own co
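For reference, the entry point this makes work for all drivers, e.g. to boost
an entity at runtime:

	drm_sched_entity_set_priority(entity, DRM_SCHED_PRIORITY_HIGH);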
On Thu, Apr 25, 2024 at 08:18:38AM +0200, Christian König wrote:
> Am 24.04.24 um 18:56 schrieb Friedrich Vock:
> > Make each buffer object aware of whether it has been evicted or not.
>
> That reverts some changes we made a couple of years ago.
>
> In general the idea is that eviction isn't some