Alex wrote:
> Thinking about this more, I think the problem might be related to CPU
> access to "VRAM". APUs don't have dedicated VRAM, they use a reserved
> carve-out region at the top of system memory. For CPU access to this
> memory, we kmap the physical address of the carve-out region of [...]
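For readers following along, here is a minimal sketch of the kmap idea Alex
describes. This is not the actual amdgpu code path; the helper name and
parameters are illustrative only. The point is that the carve-out is ordinary
system memory, so a physical address inside it can be resolved to a struct
page and mapped for CPU access:

	#include <linux/highmem.h>
	#include <linux/pfn.h>
	#include <linux/mm.h>

	/* Hypothetical helper: map one page of the carve-out for CPU access. */
	static void *map_carveout_offset(phys_addr_t carveout_base,
					 unsigned long offset)
	{
		struct page *page = pfn_to_page(PFN_DOWN(carveout_base + offset));

		/* carve-out pages are regular RAM, so kmap() just works;
		 * pair with kunmap(page) when done */
		return kmap(page);
	}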
On 19.12.21 at 17:00, Yann Dirson replied, quoting Alex's message above, and
Christian replied in turn, quoting the same passage.
> -----Original Message-----
> From: amd-gfx On Behalf Of sashank saye
> Sent: Saturday, December 18, 2021 2:56 AM
> To: amd-gfx@lists.freedesktop.org
> Cc: Saye, Sashank
> Subject: [PATCH] drm/amdgpu: Send Message to SMU on aldebaran
> passthrough for sbr handling
> [...]
The sdma queue number is not correct, e.g. on vega20; this patch ensures the
setting stays the same after the code refactor. Additionally, improve the
code to use a switch-case listing IP versions to complete the kfd
device_info structure filling. This keeps consistency with the IP parsing
code in amdgpu_discovery.c.
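As a rough illustration of the switch-case pattern described here (the IP
version numbers and queue counts below are placeholders, not the values from
the actual patch):

	/* Fill kfd device_info based on the discovered IP version; this
	 * mirrors the per-IP-version dispatch style of amdgpu_discovery.c. */
	switch (sdma_version) {
	case IP_VERSION(4, 2, 0):	/* vega20-class, illustrative */
		kfd->device_info.num_sdma_queues_per_engine = 8;
		break;
	default:			/* other ASICs, illustrative */
		kfd->device_info.num_sdma_queues_per_engine = 2;
		break;
	}

The same switch can also select the IH version for device_info, which is
what a later revision of this patch mentions.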
A soft reminder: may I know if there are any comments on this patch? It is
just a minor warning fix.
Thanks,
Ray
On Mon, Dec 13, 2021 at 02:34:22PM +0800, Huang, Ray wrote:
> Use the __string(), __assign_str() and __get_str() helpers in
> TRACE_EVENT() instead of open-coded string definitions in the gpu
> scheduler trace.
>
> [ 1 [...]
> -----Original Message-----
> From: Chen, Guchun
> Sent: December 19, 2021 10:09 PM
> To: amd-gfx@lists.freedesktop.org; Deucher, Alexander; Sider, Graham;
> Kuehling, Felix; Kim, Jonathan
> Cc: Chen, Guchun
> Subject: [PATCH] drm/amdkfd: correct sdma queue number in kfd device init
This patch keeps the sdma queue number setting the same after the recent KFD
code refactor. Additionally, it improves the code to use a switch-case
listing IP versions to complete the kfd device_info structure filling for
the IH version assignment. This keeps consistency with the IP parsing code
in amdgpu_discovery.c.
> -----Original Message-----
> From: Kim, Jonathan
> Sent: Monday, December 20, 2021 12:44 AM
> To: Chen, Guchun; amd-gfx@lists.freedesktop.org; Deucher, Alexander;
> Sider, Graham; Kuehling, Felix
> Subject: RE: [PATCH] drm/amdkfd: correct sdma queue number in kfd device
> init
> sdma queue number is not correct like on vega20, this patch promises
> the [...]
I think you've also fixed Vega12 and Raven (they were being set to 8 before
rather than 2). No need to mention this in your description, just double
checking.

No, the sdma queue number on vega12 and Raven is co [...]
Emails crossed. :)
Graham, I have sent v3 for review, and will add you as another Reviewed-by
when pushing this patch. Thanks for the review from you and Jonathan.
Merry Xmas!
Regards,
Guchun

-----Original Message-----
From: Sider, Graham
Sent: Monday, December 20, 2021 2:19 PM
To: Ki [...]
On 17.12.21 at 23:27, Andrey Grodzovsky wrote:
Before we initialize the schedulers we must know which reset domain we are
in: for a single device there is a single domain per device, and so a
single wq per device. For XGMI, the reset domain spans the entire XGMI
hive, and so the reset wq is per hive.
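A minimal sketch of the reset-domain idea (struct and function names are
illustrative, not the exact ones from the patchset): each device points at a
reset domain holding an ordered workqueue, and all members of an XGMI hive
share one, so resets in the hive are serialized by construction:

	#include <linux/slab.h>
	#include <linux/workqueue.h>

	struct reset_domain_sketch {
		struct workqueue_struct *wq;	/* ordered => one reset at a time */
	};

	static struct reset_domain_sketch *reset_domain_create(const char *name)
	{
		struct reset_domain_sketch *domain;

		domain = kzalloc(sizeof(*domain), GFP_KERNEL);
		if (!domain)
			return NULL;

		/* an ordered workqueue has max_active == 1, so queued reset
		 * works execute strictly one after another */
		domain->wq = alloc_ordered_workqueue("%s", 0, name);
		if (!domain->wq) {
			kfree(domain);
			return NULL;
		}
		return domain;
	}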
On 17.12.21 at 23:27, Andrey Grodzovsky wrote:
Restrict job resubmission to the suspend case only, since the schedulers
are not initialised yet at probe time.
Signed-off-by: Andrey Grodzovsky
---
drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/dr [...]
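The diff itself is cut off above; a hypothetical shape of a one-line fix
like this (the field and function names are guesses for illustration, not
the actual patch content) would be guarding the resubmission on the suspend
flag:

	/* only resubmit jobs when coming back from suspend; on first
	 * probe the schedulers do not exist yet */
	if (adev->in_suspend)
		drm_sched_resubmit_jobs(&ring->sched);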
On 17.12.21 at 23:27, Andrey Grodzovsky wrote:
Use the reset domain wq also for non-TDR gpu recovery triggers such as
sysfs and RAS. We must serialize all possible GPU recoveries to guarantee
there is no concurrency there. For TDR, call the original recovery function
directly, since it's already executed from wi [...]
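A sketch of the serialization this describes (names are illustrative, and
do_gpu_recovery stands in for whatever the real recovery entry point is): a
non-TDR trigger wraps the recovery in a work item and pushes it onto the
reset domain's ordered workqueue instead of calling it directly:

	#include <linux/workqueue.h>

	struct recovery_work_sketch {
		struct work_struct base;
		struct my_device *dev;	/* hypothetical device type */
	};

	static void recovery_work_func(struct work_struct *work)
	{
		struct recovery_work_sketch *r =
			container_of(work, struct recovery_work_sketch, base);

		do_gpu_recovery(r->dev);	/* hypothetical entry point */
	}

	/* sysfs/RAS trigger path: queue instead of recovering inline, so
	 * concurrent triggers are serialized by the ordered wq */
	INIT_WORK(&r->base, recovery_work_func);
	queue_work(domain->wq, &r->base);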
On 17.12.21 at 23:27, Andrey Grodzovsky wrote:
This patchset is based on earlier work by Boris [1] that allowed having an
ordered workqueue at the driver level, to be used by the different
schedulers to queue their timeout work. On top of that I also serialized
any GPU reset we trigger fro [...]
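The Boris work referenced here added a timeout_wq parameter to
drm_sched_init(); passing every scheduler the same ordered workqueue makes
their timeout handlers run one at a time. Roughly (the argument values are
illustrative, and the exact signature depends on the kernel version in your
tree):

	r = drm_sched_init(&ring->sched, &amdgpu_sched_ops,
			   num_hw_submission, amdgpu_job_hang_limit,
			   msecs_to_jiffies(timeout_ms),
			   shared_ordered_wq,	/* timeout_wq: serializes TDR */
			   NULL, ring->name);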