On 6/3/25 15:00, Christoph Hellwig wrote:
> This is a really weird interface. No one has yet explained why dmabuf
> is so special that we can't support direct I/O to it when we can support
> it to otherwise exotic mappings like PCI P2P ones.
With udmabuf you can do direct I/O; it's just inefficient
it roughly the
> average number of pages across all pools, freeing more of the cached
> pages every time shrinker core invokes our callback.
>
> Signed-off-by: Tvrtko Ursulin
> Cc: Christian König
> Cc: Thomas Hellström
> ---
> drivers/gpu/drm/ttm/ttm_pool.c | 9
On 6/3/25 13:30, Tvrtko Ursulin wrote:
>
> On 02/06/2025 19:00, Christian König wrote:
>> On 6/2/25 17:25, Tvrtko Ursulin wrote:
>>>
>>> On 02/06/2025 15:42, Christian König wrote:
>>>> On 6/2/25 15:05, Tvrtko Ursulin wrote:
>>>>>
>>
On 6/3/25 11:52, wangtao wrote:
> First determine if dmabuf reads from or writes to the file.
> Then call exporter's rw_file callback function.
>
> Signed-off-by: wangtao
> ---
> drivers/dma-buf/dma-buf.c | 32
> include/linux/dma-buf.h | 16
On 6/3/25 09:52, David Airlie wrote:
> On Tue, Jun 3, 2025 at 5:47 PM Christian König
> wrote:
>>
>> On 6/2/25 22:40, Dave Airlie wrote:
>>> From: Dave Airlie
>>>
>>> Currently you can't see per-device numa aware pools properly.
>>
On 6/2/25 22:40, Dave Airlie wrote:
> From: Dave Airlie
>
> Currently you can't see per-device numa aware pools properly.
>
> Cc: Christian König
> Signed-off-by: Dave Airlie
Reviewed-by: Christian König
Any follow-up patch to wire this up in amdgpu?
Regards,
Chris
his by never freeing less than the shrinker core has requested.
>
> Signed-off-by: Tvrtko Ursulin
> Cc: Christian König
> Cc: Thomas Hellström
> ---
> drivers/gpu/drm/ttm/ttm_pool.c | 25 +
> 1 file changed, 17 insertions(+), 8 deletions(-)
>
> diff
On 6/2/25 17:25, Tvrtko Ursulin wrote:
>
> On 02/06/2025 15:42, Christian König wrote:
>> On 6/2/25 15:05, Tvrtko Ursulin wrote:
>>>
>>> Hi,
>>>
>>> On 15/05/2025 14:15, Christian König wrote:
>>>> Hey drm-misc maintainers,
>>
On 6/2/25 15:05, Tvrtko Ursulin wrote:
>
> Hi,
>
> On 15/05/2025 14:15, Christian König wrote:
>> Hey drm-misc maintainers,
>>
>> can you guys please backmerge drm-next into drm-misc-next?
>>
>> I want to push this patch here but it depends on chang
On 5/30/25 10:40, Herbert Xu wrote:
> Add forward declaration of struct seq_file before using it in
> function prototype.
>
> Fixes: a25efb3863d0 ("dma-buf: add dma_fence_describe and dma_resv_describe
> v2")
I've removed this Fixes tag since this is basically just a cleanup and not
really a bug fix.
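For illustration, the pattern the patch describes is a one-line forward declaration, which is all a prototype taking a pointer needs (dma_resv_describe() shown as the example):

  struct seq_file;      /* forward declaration; no #include needed */

  void dma_resv_describe(struct dma_resv *obj, struct seq_file *seq);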
On 6/1/25 22:50, Dave Airlie wrote:
> Hey,
>
> I've been playing a bit with nouveau on aarch64, and I noticed ttm
> translates ttm_uncached into pgprot_noncached which uses
> MT_DEVICE_nGnRnE. This is of course a device mapping which isn't
> appropriate for memory.
>
> For main memory we should b
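The snippet cuts off here, but the distinction at issue can be sketched as follows (arm64 semantics; an illustrative one-liner, not the eventual patch):

  /* pgprot_noncached() maps to MT_DEVICE_nGnRnE, a device memory type;
   * pgprot_writecombine() maps to MT_NORMAL_NC, i.e. Normal
   * Non-Cacheable, which is an appropriate type for RAM. */
  prot = pgprot_writecombine(prot);     /* rather than pgprot_noncached(prot) */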
On 5/29/25 01:20, Dave Chinner wrote:
> On Thu, May 29, 2025 at 07:53:55AM +1000, Dave Airlie wrote:
>> On Wed, 28 May 2025 at 17:20, Christian König
>> wrote:
>>>
>>> Hi guys,
>>>
>>> On 5/27/25 01:49, Dave Chinner wrote:
>>>> I d
On 5/28/25 14:30, Simona Vetter wrote:
>> Yup, I've seen that a few times. I think we, the DRM community, should
>> stop that. It's just not useful and makes the commit messages larger,
>> both for the human reader while scrolling and for the hard drive
>> regarding storage size
>
> I do occasiona
on the
scheduler side?
Regards,
Christian.
>
> Thanks,
> Pierre-Eric
>
> Le 28/05/2025 à 13:00, Christian König a écrit :
>> Adding some people who worked on the client name and client id fields.
>>
>> On 5/28/25 09:22, Sunil Khatri wrote:
>>> pid is not
On 5/28/25 14:39, Michel Dänzer wrote:
> On 2025-05-28 14:14, Paneer Selvam, Arunpravin wrote:
>> On 5/28/2025 2:59 PM, Natalie Vock wrote:
>>> On 5/28/25 09:07, Christian König wrote:
>>>>
>>>> But the problem rather seems to be that we sometimes don
On 5/28/25 09:43, Sunil Khatri wrote:
> Add client id to the drm_file_error API; the client id
> is a unique id for each drm fd and is quite useful
> for debugging.
>
> Signed-off-by: Sunil Khatri
> ---
> drivers/gpu/drm/drm_file.c | 6 --
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
> d
Adding some people who worked on the client name and client id fields.
On 5/28/25 09:22, Sunil Khatri wrote:
> pid is not always the right choice for an fd to track
> the caller, hence add the drm client-id to the
> print, which is unique for a drm client and can be
> used by drivers in debugging
>
On 5/28/25 11:29, Natalie Vock wrote:
> Hi,
>
> On 5/28/25 09:07, Christian König wrote:
>> On 5/27/25 21:43, Natalie Vock wrote:
>>> If we hand out cleared blocks to users, they are expected to write
>>> at least some non-zero values somewhere. If we keep t
Hi guys,
On 5/27/25 01:49, Dave Chinner wrote:
> I disagree - specifically ordered memcg traversal is not something
> that the list_lru implementation is currently doing, nor should it
> be doing.
I realized overnight that I didn't fully explore a way of getting both
advantages. And we actual
On 5/27/25 21:43, Natalie Vock wrote:
> If we hand out cleared blocks to users, they are expected to write
> at least some non-zero values somewhere. If we keep the CLEAR bit set on
> the block, amdgpu_fill_buffer will assume there is nothing to do and
> incorrectly skip clearing the block. Ultimat
On 5/27/25 16:35, wangtao wrote:
>> -Original Message-
>> From: Christian König
>> Sent: Thursday, May 22, 2025 7:58 PM
>> To: wangtao ; T.J. Mercier
>>
>> Cc: sumit.sem...@linaro.org; benjamin.gaign...@collabora.com;
>> brian.sta
On 5/26/25 22:13, Dave Airlie wrote:
> On Mon, 26 May 2025 at 18:19, Christian König
> wrote:
>>
>> For the HPC/ML use case this feature is completely irrelevant. ROCm, Cuda,
>> OpenCL, OpenMP etc... don't even expose something like this in their higher
>>
Hi Tejun,
On 5/23/25 19:06, Tejun Heo wrote:
> Hello, Christian.
>
> On Fri, May 23, 2025 at 09:58:58AM +0200, Christian König wrote:
> ...
>>> - There's a GPU workload which uses a sizable amount of system memory for
>>> the pool being discussed in thi
On 5/26/25 13:14, Danilo Krummrich wrote:
> (Cc: Matthew)
>
> Let's get this clarified to not work with assumptions. :)
>
> On Mon, May 26, 2025 at 12:59:41PM +0200, Christian König wrote:
>> On 5/24/25 13:17, Danilo Krummrich wrote:
>>> On Fri, May 23, 2025 at
On 5/26/25 11:34, Philipp Stanner wrote:
> On Mon, 2025-05-26 at 11:25 +0200, Christian König wrote:
>> On 5/23/25 16:16, Danilo Krummrich wrote:
>>> On Fri, May 23, 2025 at 04:11:39PM +0200, Danilo Krummrich wrote:
>>>> On Fri, May 23, 2025 at 02:56:40PM +0200, C
On 5/24/25 13:17, Danilo Krummrich wrote:
> On Fri, May 23, 2025 at 04:11:39PM +0200, Danilo Krummrich wrote:
>> On Fri, May 23, 2025 at 02:56:40PM +0200, Christian König wrote:
>>> + if (xas_nomem(&xas, GFP_KERNEL)) {
>>> + xa_lock(&job->dep
On 5/23/25 16:16, Danilo Krummrich wrote:
> On Fri, May 23, 2025 at 04:11:39PM +0200, Danilo Krummrich wrote:
>> On Fri, May 23, 2025 at 02:56:40PM +0200, Christian König wrote:
>>> It turned out that we can actually massively optimize here.
>>>
>>> The previous
ce is also dropped
on error.
Signed-off-by: Christian König
---
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 53 ++
1 file changed, 29 insertions(+), 24 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 82df06a
Just to exercise the functionality.
Signed-off-by: Christian König
---
drivers/gpu/drm/scheduler/tests/tests_basic.c | 56 ++-
1 file changed, 55 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/scheduler/tests/tests_basic.c
b/drivers/gpu/drm/scheduler/tests
NOMEM
can come later while adding dependencies.
Signed-off-by: Christian König
---
drivers/gpu/drm/scheduler/sched_main.c | 42 +-
include/drm/gpu_scheduler.h| 2 ++
2 files changed, 43 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/scheduler/sched_m
a bit more code, but should be much faster in the end.
Signed-off-by: Christian König
---
drivers/gpu/drm/scheduler/sched_main.c | 29 ++
1 file changed, 20 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/scheduler/sched_main.c
b/drivers/gpu/drm/scheduler/sch
Hi guys,
fifth try of those patches. I think I finally managed to understand
how xarray works.
There are the high level and the lower level APIs, and we can actually
save tons of CPU cycles when we switch to the lower level API for adding
the fences to the xarray.
Looks like this is working now, but I
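For reference, a minimal sketch of the lower level API being referred to, following the canonical xa_state pattern from the xarray documentation ("deps", "index" and "fence" are illustrative names, not the actual scheduler fields):

  #include <linux/xarray.h>

  /* Store "fence" at "index" with the advanced API; internal node
   * allocation is retried outside the lock via xas_nomem(). */
  XA_STATE(xas, &deps, index);

  do {
          xas_lock(&xas);
          xas_store(&xas, fence);
          xas_unlock(&xas);
  } while (xas_nomem(&xas, GFP_KERNEL));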
Hi Tejun,
first of all thanks to Johannes and you for the input; it took me quite some
time to actually get a grip on your concern here.
On 5/22/25 21:51, Tejun Heo wrote:
> Hello,
>
> On Sat, May 17, 2025 at 06:25:02AM +1000, Dave Airlie wrote:
>> I think this is where we have 2 options:
>> (a
On 5/22/25 16:27, Tvrtko Ursulin wrote:
>
> On 22/05/2025 14:41, Christian König wrote:
>> Since we already iterated over the xarray we know at which index the new
>> entry should be stored. So instead of using xa_alloc use xa_store and
>> write into the index direc
On 5/22/25 15:50, Danilo Krummrich wrote:
> On Thu, May 22, 2025 at 03:05:02PM +0200, Christian König wrote:
>> E.g. when you don't know the implementation side use the defined API and
>> don't mess with the internals. If you do know the implementation side then
>>
On 5/22/25 15:43, Philipp Stanner wrote:
>>
>> Well there is no need to implement it, but when it is implemented the
>> caller *must* call it when polling.
>
> I don't understand. Please elaborate on that a bit more. If there's no
> need to implement it, then why can't one have a
> __dma_fence_is_
On 5/22/25 15:25, Alex Deucher wrote:
> On Thu, May 15, 2025 at 4:58 AM Christian König
> wrote:
>>
>> Explicitly adding the scheduler maintainers.
>>
>> On 5/15/25 04:07, Lin.Cao wrote:
>>> Previously we only signaled finished fence which may cause som
NOMEM
can come later while adding dependencies.
Signed-off-by: Christian König
---
drivers/gpu/drm/scheduler/sched_main.c | 37 ++
include/drm/gpu_scheduler.h| 2 ++
2 files changed, 39 insertions(+)
diff --git a/drivers/gpu/drm/scheduler/sched_main.c
b/driver
ce is also dropped
on error.
Signed-off-by: Christian König
---
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 53 ++
1 file changed, 29 insertions(+), 24 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 82df06a
Just to exercise the functionality.
Signed-off-by: Christian König
---
drivers/gpu/drm/scheduler/tests/tests_basic.c | 56 ++-
1 file changed, 55 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/scheduler/tests/tests_basic.c
b/drivers/gpu/drm/scheduler/tests
Hi guys,
fourth revision of those patches.
Tvrtko got me onto another idea for how to avoid returning the index of
the reserved slot to the caller. That simplifies the handling quite a bit
and makes the code more resilient to errors.
Please take another look,
Christian.
Since we already iterated over the xarray we know at which index the new
entry should be stored. So instead of using xa_alloc use xa_store and
write into the index directly.
Signed-off-by: Christian König
---
drivers/gpu/drm/scheduler/sched_main.c | 12 ++--
1 file changed, 6 insertions
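A hedged sketch of the difference described in the commit message (illustrative names, not the actual patch):

  /* Before: ask xa_alloc() to search for a free slot. */
  ret = xa_alloc(&deps, &id, fence, xa_limit_32b, GFP_KERNEL);

  /* After: the preceding iteration already found the free index,
   * so store there directly and just check for errors. */
  ret = xa_err(xa_store(&deps, index, fence, GFP_KERNEL));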
On 5/22/25 15:16, Philipp Stanner wrote:
> On Thu, 2025-05-22 at 15:09 +0200, Christian König wrote:
>> On 5/22/25 14:59, Danilo Krummrich wrote:
>>> On Thu, May 22, 2025 at 02:34:33PM +0200, Christian König wrote:
>>>> See all the functions inside include/linux/d
On 5/22/25 14:57, Tvrtko Ursulin wrote:
>
> On 22/05/2025 13:34, Christian König wrote:
>> On 5/22/25 14:20, Philipp Stanner wrote:
>>> On Thu, 2025-05-22 at 14:06 +0200, Christian König wrote:
>>>> On 5/22/25 13:25, Philipp Stanner wrote:
>>>>>
On 5/22/25 14:59, Danilo Krummrich wrote:
> On Thu, May 22, 2025 at 02:34:33PM +0200, Christian König wrote:
>> See all the functions inside include/linux/dma-fence.h can be used by
>> everybody. It's basically the public interface of the dma_fence object.
>
> As y
On 5/22/25 14:42, Philipp Stanner wrote:
> On Thu, 2025-05-22 at 14:34 +0200, Christian König wrote:
>> On 5/22/25 14:20, Philipp Stanner wrote:
>>> On Thu, 2025-05-22 at 14:06 +0200, Christian König wrote:
>>>> On 5/22/25 13:25, Philipp Stanner wrote:
>>>
On 5/22/25 14:20, Philipp Stanner wrote:
> On Thu, 2025-05-22 at 14:06 +0200, Christian König wrote:
>> On 5/22/25 13:25, Philipp Stanner wrote:
>>> dma_fence_is_signaled_locked(), which is used in
>>> nouveau_fence_context_kill(), can signal fences below the sur
On 5/22/25 13:25, Philipp Stanner wrote:
> dma_fence_is_signaled_locked(), which is used in
> nouveau_fence_context_kill(), can signal fences below the surface
> through a callback.
>
> There is neither need for nor use in doing that when killing a fence
> context.
>
> Replace dma_fence_is_signal
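The quoted patch is truncated here; as a rough sketch of the distinction, a bare flag test observes the state without running the ->signaled() callback that dma_fence_is_signaled_locked() may invoke:

  /* Only observes; never signals the fence as a side effect. */
  if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
          return;       /* already signaled, nothing to kill */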
On 5/22/25 10:02, wangtao wrote:
>> -Original Message-
>> From: Christian König
>> Sent: Wednesday, May 21, 2025 7:57 PM
>> To: wangtao ; T.J. Mercier
>>
>> Cc: sumit.sem...@linaro.org; benjamin.gaign...@collabora.com;
>> brian.sta
On 5/22/25 08:56, Jens Wiklander wrote:
> On Wed, May 21, 2025 at 9:13 AM Christian König
> wrote:
>>
>> On 5/20/25 17:16, Jens Wiklander wrote:
>>> Export the dma-buf heap functions declared in .
>>
>> That is what this patch does and that should be ob
On 5/21/25 22:29, Lyude Paul wrote:
> From: Asahi Lina
>
> This is just for basic usage in the DRM shmem abstractions for implied
> locking, not intended as a full DMA Reservation abstraction yet.
Looks good in general, but my question is whether it wouldn't be better to export
the higher level drm_
On 5/21/25 16:06, David Francis wrote:
> amdgpu CRIU requires an amdgpu CRIU ioctl. This ioctl
> has a similar interface to the amdkfd CRIU ioctl.
>
> The objects that can be checkpointed and restored are bos and vm
> mappings. Because a single amdgpu bo can have multiple mappings,
> the mappings
On 5/21/25 16:06, David Francis wrote:
> CRIU restore of drm buffer objects requires the ability to create
> or import a buffer object with a specific gem handle.
>
> Add new drm ioctl DRM_IOCTL_GEM_CHANGE_HANDLE, which takes
> the gem handle of an object and moves that object to a
> specified new
On 5/21/25 12:25, wangtao wrote:
> [wangtao] I previously explained that read/sendfile/splice/copy_file_range
> syscalls can't achieve dmabuf direct IO zero-copy.
And why can't you work on improving those syscalls instead of creating a new
IOCTL?
> My focus is enabling dmabuf direct I/O for [reg
On 5/15/25 18:17, Tvrtko Ursulin wrote:
>
> On 15/05/2025 16:00, Christian König wrote:
>> Sometimes drivers need to be able to submit multiple jobs which depend on
>> each other to different schedulers at the same time, but using
>> drm_sched_job_add_dependency() can
Sorry for the delayed reply.
On 5/19/25 11:04, Philipp Stanner wrote:
>>>
>
> Also, if someone preallocates and does not consume the
> slot
> will that
> confuse the iteration in drm_sched_job_dependency()?
No it doesn't. The xarray is filtering NULL an
On 5/21/25 04:23, Dave Airlie wrote:
>>
>> So in the GPU case, you'd charge on allocation, free objects into a
>> cgroup-specific pool, and shrink using a cgroup-specific LRU
>> list. Freed objects can be reused by this cgroup, but nobody else.
>> They're reclaimed through memory pressure inside th
On 5/21/25 06:17, wangtao wrote:
>>> Reducing CPU overhead/power consumption is critical for mobile devices.
>>> We need simpler and more efficient dmabuf direct I/O support.
>>>
>>> As Christian evaluated sendfile performance based on your data, could
>>> you confirm whether the cache was cleared?
On 5/20/25 17:16, Jens Wiklander wrote:
> Export the dma-buf heap functions declared in .
That is what this patch does and that should be obvious by looking at it. You
need to explain why you do this.
Looking at the rest of the series it's most likely ok, but this commit message
should really b
On 5/16/25 21:33, David Francis wrote:
> CRIU restore of drm buffer objects requires the ability to create
> or import a buffer object with a specific gem handle.
>
> Add new drm ioctl DRM_IOCTL_PRIME_CHANGE_GEM_HANDLE, which takes
> the gem handle of an object and moves that object to a
> specifi
On 5/19/25 08:18, Dave Airlie wrote:
> On Mon, 19 May 2025 at 02:28, Christian König
> wrote:
>>
>> On 5/16/25 22:25, Dave Airlie wrote:
>>> On Sat, 17 May 2025 at 06:04, Johannes Weiner wrote:
>>>>> The memory properties are similar to what GFP_DMA o
On 5/19/25 06:08, wangtao wrote:
>
>
>> -Original Message-----
>> From: Christian König
>> Sent: Friday, May 16, 2025 6:29 PM
>> To: wangtao ; sumit.sem...@linaro.org;
>> benjamin.gaign...@collabora.com; brian.star...@arm.com;
>> jstu...@google
On 5/16/25 22:25, Dave Airlie wrote:
> On Sat, 17 May 2025 at 06:04, Johannes Weiner wrote:
>>> The memory properties are similar to what GFP_DMA or GFP_DMA32
>>> provide.
>>>
>>> The reasons we haven't moved this into the core memory management is
>>> because it is completely x86 specific and onl
On 5/16/25 18:41, Johannes Weiner wrote:
>>> Listen, none of this is even remotely new. This isn't the first cache
>>> we're tracking, and it's not the first consumer that can outlive the
>>> controlling cgroup.
>>
>> Yes, I knew about all of that and I find that extremely questionable
>> on existi
On 5/16/25 16:53, Johannes Weiner wrote:
> On Fri, May 16, 2025 at 08:53:07AM +0200, Christian König wrote:
>> On 5/15/25 18:08, Johannes Weiner wrote:
>>>> Stop for a second.
>>>>
>>>> As far as I can see the shrinker for the TTM pool should *not* be
&
On 5/16/25 15:41, Tvrtko Ursulin wrote:
>>> But because TTM shrinker does not currently update shrinkerctl->nr_scanned,
>>> shrinker core assumes TTM looked at full SHRINK_BATCH pages with every
>>> call, and adds and decrements that value to the counters it uses to
>>> determine when to stop tr
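A hedged sketch of the accounting being described (pool_free_one_page() is a hypothetical helper, not the real TTM code):

  static unsigned long pool_shrink_scan(struct shrinker *shrink,
                                        struct shrink_control *sc)
  {
          unsigned long scanned = 0, freed = 0;

          /* Free at most nr_to_scan pages from the cached pools. */
          while (scanned < sc->nr_to_scan && pool_free_one_page(&freed))
                  scanned++;

          /* Report the actual work so the core doesn't assume a full
           * SHRINK_BATCH was examined on every invocation. */
          sc->nr_scanned = scanned;
          return freed;
  }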
On 5/16/25 13:21, Tvrtko Ursulin wrote:
>
> On 16/05/2025 09:23, Christian König wrote:
>> On 5/15/25 22:57, Tvrtko Ursulin wrote:
>>> Currently the TTM pool shrinker ensures it frees at least something every
>>> time it is invoked, but it also lies to the core
Hi Thomas,
sorry for the delayed reply.
On 5/13/25 11:14, Hellstrom, Thomas wrote:
> Hi, Christian
>
> During eviction we want to be able to evict bos that share the VM's
> reservation object but that are currently not bound to the VM since
> they are not part of the current working set.
>
> TT
On 5/16/25 11:49, wangtao wrote:
Please try using udmabuf with sendfile() as confirmed to be working by
>> T.J.
>>> [wangtao] Using buffer IO with dmabuf file read/write requires one
>>> memory copy.
>>> Direct IO removes this copy to enable zero-copy. The sendfile system
>>> call reduces memor
Hi Thomas,
On 5/16/25 10:33, Thomas Hellström wrote:
> Hi!
>
> I previously discussed this with Simona on IRC but would like to get
> some feedback also from a wider audience:
>
> We're planning to share dma-bufs using a fast interconnect in a way
> similar to pcie-p2p:
>
> The rough plan is to
On 5/16/25 09:40, wangtao wrote:
>
>
>> -Original Message-----
>> From: Christian König
>> Sent: Thursday, May 15, 2025 10:26 PM
>> To: wangtao ; sumit.sem...@linaro.org;
>> benjamin.gaign...@collabora.com; brian.star...@arm.com;
>> jstu...@google
rching more
> possible pools on an average invocation.
>
> Signed-off-by: Tvrtko Ursulin
> Cc: Christian König
> Cc: Thomas Hellström
> ---
> drivers/gpu/drm/ttm/ttm_pool.c | 39 +-
> 1 file changed, 24 insertions(+), 15 deletions(-)
On 5/15/25 18:08, Johannes Weiner wrote:
>> Stop for a second.
>>
>> As far as I can see the shrinker for the TTM pool should *not* be
>> memcg aware. Background is that pages who enter the pool are
>> considered freed by the application.
>
> They're not free from a system POV until they're back i
Hi guys,
third revision of this patch set. I've re-worked the interface
completely this time since my previous assumptions on how the
reservation function of the xarray works weren't correct at all.
I also added a test case to make sure I've got it right this time.
Please review and comment,
Chri
On 5/15/25 17:04, Waiman Long wrote:
> On 5/15/25 4:55 AM, Christian König wrote:
>> On 5/15/25 05:02, Dave Airlie wrote:
>>>> I have to admit I'm pretty clueless about the gpu driver internals and
>>>> can't really judge how feasible this is. But from a
ce is also dropped
on error.
Signed-off-by: Christian König
---
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 49 +++---
1 file changed, 28 insertions(+), 21 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 82df06a
NOMEM
can come later while adding dependencies.
v2: rework implementation an documentation
v3: rework from scratch, use separate function to add preallocated deps
Signed-off-by: Christian König
---
drivers/gpu/drm/scheduler/sched_main.c | 45 ++
include/drm/gpu_schedu
Just to exercise the functionality.
Signed-off-by: Christian König
---
drivers/gpu/drm/scheduler/tests/tests_basic.c | 59 ++-
1 file changed, 58 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/scheduler/tests/tests_basic.c
b/drivers/gpu/drm/scheduler/tests
On 5/15/25 16:03, wangtao wrote:
> [wangtao] My Test Configuration (CPU 1GHz, 5-test average):
> Allocation: 32x32MB buffer creation
> - dmabuf 53ms vs. udmabuf 694ms (10X slower)
> - Note: shmem shows excessive allocation time
Yeah, that is something already noted by others as well. But that is o
> v2:
> * Streamlined init and added kerneldoc.
> * Rebase for amdgpu userq which landed since.
>
> Signed-off-by: Tvrtko Ursulin
> Reviewed-by: Christian König # v1
> ---
> drivers/dma-buf/dma-fence-chain.c | 5 +-
> drivers/dma-buf/dma-fence.c
On 5/15/25 11:05, Philipp Stanner wrote:
> On Thu, 2025-05-15 at 10:48 +0200, Christian König wrote:
>> Explicitly adding the scheduler maintainers.
>>
>> On 5/15/25 04:07, Lin.Cao wrote:
>>> Previously we only signaled finished fence which may cause some
>>
e.
>
> Signed-off-by: Lin.Cao
Reviewed-by: Christian König
Danilo & Philipp, can we quickly get an rb for that? I'm volunteering to push it
to drm-misc-fixes and add the necessary stable tags since this is a fix for a
rather ugly bug.
Regards,
Christian.
> ---
> drive
On 5/15/25 05:02, Dave Airlie wrote:
>> I have to admit I'm pretty clueless about the gpu driver internals and
>> can't really judge how feasible this is. But from a cgroup POV, if you
>> want proper memory isolation between groups, it seems to me that's the
>> direction you'd have to take this in.
On 5/14/25 19:07, Maarten Lankhorst wrote:
> Hey,
>
> On 2025-05-14 13:55, Christian König wrote:
>> On 5/14/25 13:41, Maarten Lankhorst wrote:
>>> Hi Dave,
>>>
>>> We've had a small discussion on irc, so I wanted to summarize it here:
>>>
&
I'm going to push patches #1-#6 to drm-misc-next.
They make sense as stand-alone cleanups anyway.
But that here needs a bit more documentation I think.
On 5/13/25 09:45, Tvrtko Ursulin wrote:
> Dma-fence objects currently suffer from a potential use after free problem
> where fences exported t
On 5/13/25 09:45, Tvrtko Ursulin wrote:
> Access the dma-fence internals via the previously added helpers.
>
> Drop the macro while at it, since the length is now more manageable.
>
> Signed-off-by: Tvrtko Ursulin
Reviewed-by: Christian König
> ---
> drive
On 5/13/25 04:06, Hyejeong Choi wrote:
> smp_store_mb() inserts a memory barrier after the store operation.
> That differs from what the comment originally intended, so a NULL
> pointer dereference can happen if the memory update is reordered.
>
> Signed-off-by: Hyejeong Choi
I've reviewed, add CC
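For context, the ordering in question, using generic kernel primitives (not the patched dma-resv code):

  /* smp_store_mb() orders the store against *later* accesses: */
  smp_store_mb(x, val);         /* WRITE_ONCE(x, val); smp_mb(); */

  /* To guarantee earlier writes are visible before the store
   * itself, a release store is the usual tool: */
  smp_store_release(&x, val);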
On 5/14/25 13:02, wangtao wrote:
>> -Original Message-
>> From: Christian König
>> Sent: Tuesday, May 13, 2025 9:18 PM
>> To: wangtao ; sumit.sem...@linaro.org;
>> benjamin.gaign...@collabora.com; brian.star...@arm.com;
>> jstu...@google.com;
On 5/14/25 13:41, Maarten Lankhorst wrote:
> Hi Dave,
>
> We've had a small discussion on irc, so I wanted to summarize it here:
>
> All memory allocated should be accounted, even memory that is being
> evicted from VRAM.
That sounds like a really bad idea to me.
> This may cause the process th
On 5/13/25 17:55, T.J. Mercier wrote:
> On Tue, May 13, 2025 at 4:31 AM Christian König
> wrote:
>>
>> On 5/13/25 11:27, wangtao wrote:
>>> Add DMA_BUF_IOCTL_RW_FILE to save/restore data from/to a dma-buf.
>>
>> Similar approach where rejected before in fav
On 5/12/25 08:12, Dave Airlie wrote:
> From: Dave Airlie
>
> Doing proper integration of TTM system memory allocations with
> memcg is a difficult ask, primarily due to difficulties around
> accounting for evictions properly.
>
> However there are systems where userspace will be allocating
> obj
On 5/12/25 08:12, Dave Airlie wrote:
> From: Dave Airlie
>
> This adds the memcg object for any user allocated objects,
> and uses the MEMCG placement flags in the correct places.
>
> Signed-off-by: Dave Airlie
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 5 -
> drivers/gpu/drm/am
On 5/13/25 14:30, wangtao wrote:
>> -Original Message-
>> From: Christian König
>> Sent: Tuesday, May 13, 2025 7:32 PM
>> To: wangtao ; sumit.sem...@linaro.org;
>> benjamin.gaign...@collabora.com; brian.star...@arm.com;
>> jstu...@google.com;
On 5/13/25 11:28, wangtao wrote:
> Support direct file I/O operations for system_heap dma-buf objects.
> Implementation includes:
> 1. Convert sg_table to bio_vec
That is usually illegal for DMA-bufs.
Regards,
Christian.
> 2. Set IOCB_DIRECT when O_DIRECT is supported
> 3. Invoke vfs_iocb_iter_r
On 5/13/25 11:27, wangtao wrote:
> Add DMA_BUF_IOCTL_RW_FILE to save/restore data from/to a dma-buf.
A similar approach was rejected before in favor of using udmabuf.
Is there any reason you can't use that approach as well?
Regards,
Christian.
>
> Signed-off-by: wangtao
> ---
> drivers/dma-
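As a hedged userspace sketch of the udmabuf route (error handling trimmed; the backing memfd must be page aligned and sealed with F_SEAL_SHRINK):

  #include <fcntl.h>
  #include <sys/ioctl.h>
  #include <linux/udmabuf.h>

  /* Wrap a sealed memfd as a dma-buf so that ordinary file I/O,
   * including O_DIRECT, reaches the pages the dma-buf exports. */
  int memfd_to_dmabuf(int memfd, __u64 size)
  {
          struct udmabuf_create create = {
                  .memfd  = memfd,
                  .flags  = UDMABUF_FLAGS_CLOEXEC,
                  .offset = 0,
                  .size   = size,
          };
          int dev = open("/dev/udmabuf", O_RDWR);

          if (dev < 0)
                  return -1;
          return ioctl(dev, UDMABUF_CREATE, &create); /* dma-buf fd */
  }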
On 5/12/25 11:14, Tvrtko Ursulin wrote:
>
> On 12/05/2025 09:19, Christian König wrote:
>> On 5/9/25 17:33, Tvrtko Ursulin wrote:
>>> With the goal of reducing the need for drivers to touch fence->ops, we
>>> add explicit flags for struct dma_fence_array and st
On 5/12/25 08:08, Krzysztof Karas wrote:
> Hi André,
>
> [...]
>
>> @@ -582,6 +584,14 @@ int drm_dev_wedged_event(struct drm_device *dev,
>> unsigned long method)
>> drm_info(dev, "device wedged, %s\n", method == DRM_WEDGE_RECOVERY_NONE ?
>> "but recovered through reset" :
On 5/12/25 13:12, Hyejeong Choi wrote:
> smp_store_mb() inserts a memory barrier after the store operation.
> That differs from what the comment originally intended, so a NULL
> pointer dereference can happen if the memory update is reordered.
>
> Signed-off-by: Hyejeong Choi
> ---
> drivers/dma-b
On 5/9/25 17:33, Tvrtko Ursulin wrote:
> Access the dma-fence internals via the previously added helpers.
>
> Signed-off-by: Tvrtko Ursulin
Reviewed-by: Christian König
> ---
> drivers/gpu/drm/i915/gt/intel_gt_requests.c | 4 ++--
> drivers/gpu/drm/i915/i915_req
On 5/9/25 17:33, Tvrtko Ursulin wrote:
> Access the dma-fence internals via the previously added helpers.
>
> Signed-off-by: Tvrtko Ursulin
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgp