This can suppress the error reported on driver loading. Also, these
are empty APIs, as Vega12/Vega20 have no performance levels.
Change-Id: Ifa322a0e57fe3be4bfd9503f26e8deb7daab096d
Signed-off-by: Evan Quan
---
drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c | 8
drivers/gpu/drm/amd/pow
Reviewed-by: Feifei Xu
-Original Message-
From: amd-gfx On Behalf Of Evan Quan
Sent: Monday, October 22, 2018 3:20 PM
To: amd-gfx@lists.freedesktop.org
Cc: Deucher, Alexander ; Xu, Feifei
; Quan, Evan
Subject: [PATCH] drm/amd/powerplay: commit get_performance_level API as DAL
needed
On 22.10.18 at 10:40, Sedat Dilek wrote:
> [SNIP]
>>> On 29.07.2018 at 15:52, Sedat Dilek wrote:
Hi,
when compiling with clang-7 and CONFIG_DRM_AMDGPU=m I see the following...
if [ "" = "-pg" ]; then if [ arch/x86/boot/compressed/misc.o !=
"scripts/mod/empty.o" ]
Mhm, good catch.
And yes using the paging queue when it is available sounds like a good
idea to me as well.
So far I've only used it for VM updates to actually test if it works as
expected.
Regards,
Christian.
On 19.10.18 at 21:53, Kuehling, Felix wrote:
> [+Christian]
>
> Should the buffer
On 19.10.18 at 22:52, Andrey Grodzovsky wrote:
> Problem:
> A particular scheduler may become unusable (underlying HW) after
> some event (e.g. GPU reset). If it's later chosen by
> the get free sched. policy a command will fail to be
> submitted.
>
> Fix:
> Add a driver specific callback to repor
On 19.10.18 at 22:52, Andrey Grodzovsky wrote:
> Start using drm_gpu_scheduler.ready instead.
Please drop all occurrences of setting sched.ready manually around the
ring tests.
Instead add a helper function into amdgpu_ring.c which does the ring
tests and sets ready depending on the result.
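The suggested refactoring can be sketched in plain C. This is a hypothetical stand-in, not the actual amdgpu API: the struct layout and helper name are assumptions, and only the shape of the change is shown (one helper runs the ring test and derives sched.ready from its result, instead of callers flipping the flag by hand).

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical miniatures of the kernel types; names and layout are
 * illustrative, not the real amdgpu/drm_sched structures. */
struct sched {
	bool ready;
};

struct ring {
	struct sched sched;
	int (*test_ring)(struct ring *ring); /* returns 0 on success */
};

/* Sketch of the proposed helper: run the ring test once and set
 * sched.ready from the outcome, so no caller touches the flag directly. */
static int ring_test_helper(struct ring *ring)
{
	int r = ring->test_ring(ring);

	ring->sched.ready = (r == 0);
	return r;
}

/* Stub ring tests standing in for the real HW tests. */
static int fake_test_ok(struct ring *ring)   { (void)ring; return 0; }
static int fake_test_fail(struct ring *ring) { (void)ring; return -1; }
```

With such a helper, the IP-specific files only call it after their ring tests and never assign sched.ready themselves, which is the cleanup being requested.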
R
The csa buffer is created per ctx; when the ctx is finished,
the csa buffer and va are released. So ctx_mgr fini needs to
be done before vm fini.
Signed-off-by: Rex Zhu
---
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/a
On 22.10.18 at 11:47, Rex Zhu wrote:
The csa buffer is created per ctx; when the ctx is finished,
the csa buffer and va are released. So ctx_mgr fini needs to
be done before vm fini.
Signed-off-by: Rex Zhu
Reviewed-by: Christian König
---
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 2 +-
1 file chan
On 22.10.18 at 08:45, Zhang, Jerry(Junwei) wrote:
A question in ttm_bo.c
[SNIP]
int ttm_bo_device_release(struct ttm_bo_device *bdev)
{
@@ -1623,18 +1620,25 @@ int ttm_bo_device_release(struct
ttm_bo_device *bdev)
drm_vma_offset_manager_destroy(&bdev->vma_manager);
+ if (!ret)
+
On Wed, Sep 19, 2018 at 11:47 AM Sedat Dilek wrote:
>
> On Sun, Jul 29, 2018 at 4:39 PM, Christian König
> wrote:
> >> Do you need further informations?
> >
> > No, that is a known issue.
> >
> > Regards,
> > Christian.
> >
>
> Hi Christian,
>
> is/was this issue fixed?
>
> Regards,
> - Sedat -
>
When the VA address is located in the last PD entry,
alloc_pts will fail.
This is caused by
"drm/amdgpu: add amdgpu_vm_entries_mask v2",
commit 72af632549b97ead9251bb155f08fefd1fb6f5c3.
Signed-off-by: Rex Zhu
---
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 34 +++---
1 file ch
This re-introduces a 64-bit division that is not handled correctly with
the % operator.
Alex
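For context on the objection: in the kernel, the % operator cannot be used directly on a 64-bit dividend on 32-bit architectures, and the usual pattern is do_div(), which divides in place and hands back the remainder. A rough userspace stand-in (the kernel's real do_div() is an arch-specific macro, so this sketch only mimics its contract):

```c
#include <assert.h>
#include <stdint.h>

/* Userspace stand-in for the kernel's do_div() contract: divide the
 * 64-bit dividend in place and return the 32-bit remainder, instead of
 * applying % to a u64 directly. In userspace % is fine; in the kernel
 * the arch code behind do_div() avoids pulling in libgcc helpers. */
static uint32_t do_div_sketch(uint64_t *n, uint32_t base)
{
	uint32_t rem = (uint32_t)(*n % base);

	*n /= base;
	return rem;
}
```

The point being made in the review is that a plain `x % y` on a 64-bit value reintroduces exactly the construct this pattern exists to avoid.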
From: amd-gfx on behalf of Rex Zhu
Sent: Monday, October 22, 2018 12:09:21 PM
To: amd-gfx@lists.freedesktop.org; Koenig, Christian
Cc: Zhu, Rex
Subject: [PATCH] drm/amdgp
The BIOS signature check does not guarantee integrity of the BIOS image
either way. As I understand it, the signature is just a magic number.
It's not a cryptographic signature. The check is just a sanity check.
Therefore this change doesn't add any meaningful protection against the
scenario you de
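To illustrate the "just a magic number" point: a ROM-image signature check of this kind is typically a fixed byte pattern, e.g. PCI expansion ROM images (which include VBIOS images) begin with 0x55 0xAA. This sketch is illustrative, not the amdgpu code; it only shows why matching such bytes is a sanity check, not an integrity guarantee:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* A "signature" check that is only a sanity check: the fixed PCI
 * expansion ROM header bytes 0x55 0xAA. Passing it means the blob
 * looks like a ROM image; it proves nothing about integrity or origin,
 * since anyone can prepend these two bytes to arbitrary data. */
static bool rom_magic_ok(const uint8_t *img, size_t len)
{
	return len >= 2 && img[0] == 0x55 && img[1] == 0xAA;
}
```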
On Mon 22-10-18 22:53:22, Arun KS wrote:
> Remove managed_page_count_lock spinlock and instead use atomic
> variables.
I assume this has been auto-generated. If yes, it would be better to
mention the script so that people can review it and regenerate for
comparison. Such a large change is hard to
Remove managed_page_count_lock spinlock and instead use atomic
variables.
Suggested-by: Michal Hocko
Suggested-by: Vlastimil Babka
Signed-off-by: Arun KS
---
As discussed here,
https://patchwork.kernel.org/patch/10627521/#22261253
---
---
arch/csky/mm/init.c | 4 +-
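The shape of the change can be sketched with C11 atomics. This is a minimal userspace illustration, not the kernel patch itself: a counter that previously required taking managed_page_count_lock becomes a lone atomic that readers and writers touch lock-free.

```c
#include <assert.h>
#include <stdatomic.h>

/* After the change: the page counter is an atomic, so updates and
 * reads need no spinlock. (Before, every adjustment had to take
 * managed_page_count_lock around a plain long.) */
static atomic_long managed_pages;

static void adjust_managed_pages(long count)
{
	/* atomic read-modify-write replaces lock/update/unlock */
	atomic_fetch_add(&managed_pages, count);
}

static long read_managed_pages(void)
{
	return atomic_load(&managed_pages);
}
```

The trade-off is the usual one: per-update atomicity is kept, but there is no longer a lock under which several counters could be updated as one consistent snapshot.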
The same statement is already done later in kgd_set_pasid_vmid_mapping(),
so there is no need to do it in set_pasid_vmid_mapping() again.
Change-Id: Iaf64b90c7dcb59944fb2012a58473dd063e73c60
Signed-off-by: Yong Zhao
---
drivers/gpu/drm/amd/amdkfd/cik_regs.h | 2 --
drivers/gpu/drm/amd/amdkfd/kf
The flags are added when calling amdgpu_gmc_pd_addr().
Change-Id: Idd85b1ac35d3d100154df8229ea20721d9a7045c
Signed-off-by: Yong Zhao
---
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c | 5 ++---
drivers/gpu/drm/amd/amdkfd/kfd_priv.h | 1 +
2 files changed, 3 insertions(+), 3 delet
Change-Id: I3dcd71955297c53b181f82e7078981230c642c01
Signed-off-by: Yong Zhao
---
drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 64 ---
drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h | 3 ++
2 files changed, 40 insertions(+), 27 deletions(-)
diff --git a/drivers/gpu/drm/amd/amd
Those register settings have been done in gfxhub_v1_0_program_invalidation()
and mmhub_v1_0_program_invalidation().
Change-Id: I9b9b44f17ac2a6ff0c9c78f91885665da75543d0
Signed-off-by: Yong Zhao
---
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c | 17 -
1 file changed, 17 delet
Change-Id: I306305e43d4b4032316909b3f4e3f9f5ca4520ae
Signed-off-by: Yong Zhao
---
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c | 32 +--
1 file changed, 1 insertion(+), 31 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
b/drivers/gpu/drm/amd/
Problem:
A particular scheduler may become unusable (underlying HW) after
some event (e.g. GPU reset). If it's later chosen by
the get free sched. policy a command will fail to be
submitted.
Fix:
Add a driver specific callback to report the sched status so
rq with bad sched can be avoided in favor
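The fix described above can be sketched as follows. This is a hypothetical miniature, not the drm_sched API: names and the "least-loaded" policy are illustrative, and the only point shown is that entity/run-queue selection skips schedulers whose ready flag was cleared after an event such as a failed GPU reset.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-in for a scheduler with a ready flag and a load
 * metric used by a "pick the least-loaded run queue" policy. */
struct sched {
	bool ready; /* cleared when the underlying HW goes bad */
	int  load;
};

/* Sketch of the fixed selection policy: a scheduler reported not-ready
 * is never chosen, so commands are not submitted to a dead queue. */
static struct sched *pick_sched(struct sched **list, size_t n)
{
	struct sched *best = NULL;

	for (size_t i = 0; i < n; i++) {
		if (!list[i]->ready)
			continue; /* avoid rq whose sched became unusable */
		if (!best || list[i]->load < best->load)
			best = list[i];
	}
	return best;
}
```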
Start using drm_gpu_scheduler.ready instead.
v3:
Add helper function to run ring test and set
sched.ready flag status accordingly, clean explicit
sched.ready sets from the IP specific files.
Signed-off-by: Andrey Grodzovsky
---
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c| 2 +-
drivers/
After testing, it looks like this subset of ASICs has GPU reset
working for the most part. Enable reset due to job timeout.
Signed-off-by: Andrey Grodzovsky
---
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 24 +++-
1 file changed, 19 insertions(+), 5 deletions(-)
diff --git a/dr
Use the appropriate mmhub and gfxhub headers rather than adding
them to the gmc9 header.
Signed-off-by: Alex Deucher
---
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c | 3 ++-
drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h | 2 ++
drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h | 6
On Mon, Oct 22, 2018 at 5:20 PM Andrey Grodzovsky
wrote:
>
> After testing, it looks like this subset of ASICs has GPU reset
> working for the most part. Enable reset due to job timeout.
>
> Signed-off-by: Andrey Grodzovsky
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 24 +++-
On 10/22/2018 08:35 PM, Christian König wrote:
On 22.10.18 at 08:45, Zhang, Jerry(Junwei) wrote:
A question in ttm_bo.c
[SNIP]
int ttm_bo_device_release(struct ttm_bo_device *bdev)
{
@@ -1623,18 +1620,25 @@ int ttm_bo_device_release(struct
ttm_bo_device *bdev)
drm_vma_offset_manager_de
On 10/23/2018 12:09 AM, Rex Zhu wrote:
When the VA address is located in the last PD entry,
Do you mean the root PD?
Maybe we need to round up the root PD in amdgpu_vm_entries_mask(), like
amdgpu_vm_num_entries() does.
BTW, it looks like amdgpu_vm_entries_mask() is going to replace
amdgpu_vm_num_entries()
Jer
On 10/22/2018 05:47 PM, Rex Zhu wrote:
The csa buffer is created per ctx; when the ctx is finished,
the csa buffer and va are released. So ctx_mgr fini needs to
be done before vm fini.
Signed-off-by: Rex Zhu
Reviewed-by: Junwei Zhang
---
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 2 +-
1 file changed,
When the VA address is located in the last PD entries,
alloc_pts will fail.
Use the right PD mask instead of a hardcoded value, as suggested
by jerry.zhang.
Signed-off-by: Rex Zhu
---
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/gp
Thanks, Jerry.
Good suggestion.
Using the right mask for the PD instead of a hardcoded value
means we don't need to revert the whole patch.
Best Regards
Rex
From: Zhang, Jerry
Sent: Tuesday, October 23, 2018 10:02 AM
To: Zhu, Rex; amd-gfx@lists.freedesktop.org; Koenig, Christian
On 10/23/2018 11:29 AM, Rex Zhu wrote:
When the VA address is located in the last PD entries,
alloc_pts will fail.
Use the right PD mask instead of a hardcoded value, as suggested
by jerry.zhang.
Signed-off-by: Rex Zhu
Thanks for verifying that.
Feel free to add
Reviewed-by: Junwei Zhang
Also like to ge
On 10/23/2018 01:12 PM, Zhang, Jerry(Junwei) wrote:
On 10/23/2018 11:29 AM, Rex Zhu wrote:
When the VA address is located in the last PD entries,
alloc_pts will fail.
Use the right PD mask instead of a hardcoded value, as suggested
by jerry.zhang.
Signed-off-by: Rex Zhu
Thanks for verifying that.
Feel fr
No, if the VM size is small, there may be only one root PD entry.
We need to make sure the mask >= 0.
Maybe this change reverts Christian's commit:
commit 72af632549b97ead9251bb155f08fefd1fb6f5c3
Author: Christian König
Date: Sat Sep 15 10:02:13 2018 +0200
drm/amdgpu: add amdgpu_vm_entries_
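The bug under discussion can be modeled in a few lines. This is a toy illustration, not amdgpu's real page-table geometry: interior page-directory levels always hold a full power-of-two number of entries (9 bits here), but the root PD holds whatever is left of the address space, possibly a single entry for small VM sizes, so its index mask must be derived from the actual entry count rather than hardcoded.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative per-level width; real amdgpu levels are also 9 bits,
 * but the specific numbers here are only for the toy model. */
#define LEVEL_BITS 9

/* Derive the index mask from the actual number of entries (>= 1),
 * instead of hardcoding (1 << LEVEL_BITS) - 1 for every level. For a
 * root PD with a single entry this yields mask 0, keeping the computed
 * index in range rather than letting alloc_pts walk off the table. */
static uint32_t entries_mask(uint32_t num_entries)
{
	return num_entries - 1;
}
```

The point raised in the thread is exactly the small-VM corner: with one root entry, a hardcoded 9-bit mask would admit indices 1..511 that do not exist.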
Hi Christian
On 19.10.18 at 18:41, Christian König wrote:
> As the name says we only need one global instance of ttm_mem_global.
>
> Drop all the driver initialization and just use a single exported
> instance which is initialized during BO global initialization.
>
> Signed-off-by: Christian Kö