From: Dmitry Baryshkov
Switch drm_dp_tunnel.c to use new set of DPCD read / write helpers.
Reviewed-by: Lyude Paul
Acked-by: Jani Nikula
Signed-off-by: Dmitry Baryshkov
---
drivers/gpu/drm/display/drm_dp_tunnel.c | 20 ++--
1 file changed, 10 insertions(+), 10 deletions(-)
Reviewed-by: Sunil Khatri
On 3/13/2025 8:11 PM, Alex Deucher wrote:
Plumb in support for disabling kernel queues.
v2: use ring counts per Felix's suggestion
v3: fix stream fault handler, enable EOP interrupts
v4: fix MEC interrupt offset (Sunil)
Signed-off-by: Alex Deucher
---
drivers/gpu/d
On 14.03.25 at 05:09, SRINIVASAN SHANMUGAM wrote:
>
> On 3/7/2025 7:18 PM, Christian König wrote:
>> We keep the gang submission fence around in adev, make sure that it
>> stays alive.
>>
>> v2: fix memory leak on retry
>>
>> Signed-off-by: Christian König
>> ---
>> drivers/gpu/drm/amd/amdgpu/a
Set per-process static sh_mem config only once during process
initialization. Move all static changes from update_qpd() which is
called each time a queue is created to set_cache_memory_policy() which
is called once during process initialization.
set_cache_memory_policy() is currently defined only
On 10.03.25 at 18:08, Natalie Vock wrote:
> PRT BOs may not have any backing store, so bo->tbo.resource will be
> NULL. Check for that before dereferencing.
>
> Fixes: 0cce5f285d9ae8 ("drm/amdkfd: Check correct memory types for is_system variable")
> Signed-off-by: Natalie Vock
Reviewed-by: C
No need to make the workload profile setup dependent
on the results of cancelling the delayed work thread.
We have all of the necessary checking in place for the
workload profile reference counting, so separate the
two. As it is now, we can theoretically end up with
the call from begin_use happeni
On Fri, Mar 14, 2025 at 8:42 PM Balbir Singh wrote:
>
> On 3/15/25 01:18, Bert Karwatzki wrote:
> > On Saturday, 15.03.2025 at 00:34 +1100, Balbir Singh wrote:
> >> On 3/14/25 17:14, Balbir Singh wrote:
> >>> On 3/14/25 09:22, Bert Karwatzki wrote:
> On Friday, 14.03.2025 at 08:54 +1
On 03/13, Alex Deucher wrote:
> This would be set by IPs which only accept submissions
> from the kernel, not userspace, such as when kernel
> queues are disabled. Don't expose the rings to userspace
> and reject any submissions in the CS IOCTL.
Just out of curiosity, is CS == Command Submission?
Add proper checks for disable_kq functionality in
gfx helper functions. Add special logic for families
that require the clear state setup.
v2: use ring count as per Felix's suggestion
v3: fix num_gfx_rings handling in amdgpu_gfx_graphics_queue_acquire()
Signed-off-by: Alex Deucher
---
drivers/gp
On Saturday, 15.03.2025 at 00:34 +1100, Balbir Singh wrote:
> On 3/14/25 17:14, Balbir Singh wrote:
> > On 3/14/25 09:22, Bert Karwatzki wrote:
> > > On Friday, 14.03.2025 at 08:54 +1100, Balbir Singh wrote:
> > > > On 3/14/25 05:12, Bert Karwatzki wrote:
> > > > > On Thursday, 13.0
Move the kfd suspend/resume code into the caller. That
is where the KFD is likely to detect a reset, so on the KFD
side there is no need to call them. Also add a mutex to
lock the actual reset sequence.
Fixes: bac38ca8c475 ("drm/amdkfd: implement per queue sdma reset for gfx 9.4+")
Signed-off-by:
PRT BOs may not have any backing store, so bo->tbo.resource will be
NULL. Check for that before dereferencing.
Fixes: 0cce5f285d9ae8 ("drm/amdkfd: Check correct memory types for is_system variable")
Signed-off-by: Natalie Vock
---
drivers/gpu/drm/amd/amdgpu/gmc_v12_0.c | 5 +++--
1 file changed
For the default policy, the driver will issue an RMA event when
the number of bad pages exceeds 8 physical rows, rather than when
it reaches 8 physical rows; don't rely on the configurable
threshold parameters in default mode.
Signed-off-by: Tao Zhou
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c | 6 ++
Applied. Thanks!
On Mon, Mar 10, 2025 at 8:18 AM SRINIVASAN SHANMUGAM
wrote:
>
>
> On 3/10/2025 4:17 PM, Dan Carpenter wrote:
>
> These lines are indented one tab too far. Delete the extra tabs.
>
> Signed-off-by: Dan Carpenter
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c | 4 ++--
> 1 fil
Shorten the gfx idle worker timeout to sync with DAL when
there is no activity on the screen. The original 1 second
cannot sync with DAL, so DAL cannot apply MALL when the
workload type is not the bootup default.
Signed-off-by: Kenneth Feng
---
drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h | 2 +-
On 10.03.25 at 18:29, Natalie Vock wrote:
> On 07.03.25 09:39, Christian König wrote:
>> On 06.03.25 at 18:01, Natalie Vock wrote:
>>> When userspace requests buffers to be placed into GTT | VRAM, it is
>>> requesting the buffer to be placed into either of these domains. If the
>>> buffer fits in
Reviewed-by: Hawking Zhang
Regards,
Hawking
-----Original Message-----
From: amd-gfx On Behalf Of Tao Zhou
Sent: Thursday, March 6, 2025 14:11
To: amd-gfx@lists.freedesktop.org
Cc: Zhou1, Tao
Subject: [PATCH] drm/amdgpu: increase RAS bad
Add proper checks for disable_kq functionality in
gfx helper functions. Add special logic for families
that require the clear state setup.
v2: use ring count as per Felix's suggestion
Signed-off-by: Alex Deucher
---
drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c | 3 +++
drivers/gpu/drm/amd/amdgpu/amdg
Thanks for the patch, but someone already fixed this. Thanks!
Alex
On Mon, Mar 10, 2025 at 6:47 AM Dan Carpenter wrote:
>
> This line has a seven space indent instead of a tab.
>
> Signed-off-by: Dan Carpenter
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c | 2 +-
> 1 file changed, 1 insert
On 3/14/2025 8:28 PM, Alex Deucher wrote:
> On Fri, Mar 14, 2025 at 10:53 AM Lazar, Lijo wrote:
>>
>>
>>
>> On 3/14/2025 7:17 PM, Alex Deucher wrote:
>>> No need to make the workload profile setup dependent
>>> on the results of cancelling the delayed work thread.
>>> We have all of the necessa
On 3/14/25 17:14, Balbir Singh wrote:
> On 3/14/25 09:22, Bert Karwatzki wrote:
>> Am Freitag, dem 14.03.2025 um 08:54 +1100 schrieb Balbir Singh:
>>> On 3/14/25 05:12, Bert Karwatzki wrote:
On Thursday, 13.03.2025 at 22:47 +1100, Balbir Singh wrote:
>
>
> Anyway, I think th