[PATCH] drm/amd/powerplay: commit get_performance_level API as DAL needed

2018-10-22 Thread Evan Quan
This can suppress the error reported on driver loading. Also these are empty APIs, as Vega12/Vega20 have no performance levels. Change-Id: Ifa322a0e57fe3be4bfd9503f26e8deb7daab096d Signed-off-by: Evan Quan --- drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c | 8 drivers/gpu/drm/amd/pow

RE: [PATCH] drm/amd/powerplay: commit get_performance_level API as DAL needed

2018-10-22 Thread Xu, Feifei
Reviewed-by: Feifei Xu -Original Message- From: amd-gfx On Behalf Of Evan Quan Sent: Monday, October 22, 2018 3:20 PM To: amd-gfx@lists.freedesktop.org Cc: Deucher, Alexander ; Xu, Feifei ; Quan, Evan Subject: [PATCH] drm/amd/powerplay: commit get_performance_level API as DAL needed

Re: [Linux-v4.18-rc6] modpost-errors when compiling with clang-7 and CONFIG_DRM_AMDGPU=m

2018-10-22 Thread Koenig, Christian
Am 22.10.18 um 10:40 schrieb Sedat Dilek: > [SNIP] >>> Am 29.07.2018 um 15:52 schrieb Sedat Dilek: Hi, when compiling with clang-7 and CONFIG_DRM_AMDGPU=m I see the following... if [ "" = "-pg" ]; then if [ arch/x86/boot/compressed/misc.o != "scripts/mod/empty.o" ]

Re: [PATCH] drm/amdgpu: fix sdma v4 ring is disabled accidently

2018-10-22 Thread Koenig, Christian
Mhm, good catch. And yes using the paging queue when it is available sounds like a good idea to me as well. So far I've only used it for VM updates to actually test if it works as expected. Regards, Christian. Am 19.10.18 um 21:53 schrieb Kuehling, Felix: > [+Christian] > > Should the buffer

Re: [PATCH v2 1/2] drm/sched: Add boolean to mark if sched is ready to work v2

2018-10-22 Thread Koenig, Christian
Am 19.10.18 um 22:52 schrieb Andrey Grodzovsky: > Problem: > A particular scheduler may become unusable (underlying HW) after > some event (e.g. GPU reset). If it's later chosen by > the get free sched. policy a command will fail to be > submitted. > > Fix: > Add a driver specific callback to repor

Re: [PATCH v2 2/2] drm/amdgpu: Retire amdgpu_ring.ready flag.

2018-10-22 Thread Koenig, Christian
Am 19.10.18 um 22:52 schrieb Andrey Grodzovsky: > Start using drm_gpu_scheduler.ready instead. Please drop all occurrences of setting sched.ready manually around the ring tests. Instead add a helper function into amdgpu_ring.c which does the ring tests and sets ready depending on the result. R

[PATCH] drm/amdgpu: Reverse the sequence of ctx_mgr_fini and vm_fini in amdgpu_driver_postclose_kms

2018-10-22 Thread Rex Zhu
The csa buffer will be created per ctx; when the ctx is finalized, the csa buffer and va will be released, so we need to do ctx_mgr_fini before vm_fini. Signed-off-by: Rex Zhu --- drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/amd/amdgpu/a

Re: [PATCH] drm/amdgpu: Reverse the sequence of ctx_mgr_fini and vm_fini in amdgpu_driver_postclose_kms

2018-10-22 Thread Christian König
Am 22.10.18 um 11:47 schrieb Rex Zhu: The csa buffer will be created per ctx; when the ctx is finalized, the csa buffer and va will be released, so we need to do ctx_mgr_fini before vm_fini. Signed-off-by: Rex Zhu Reviewed-by: Christian König --- drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 2 +- 1 file chan

Re: [PATCH 4/5] drm/ttm: initialize globals during device init

2018-10-22 Thread Christian König
Am 22.10.18 um 08:45 schrieb Zhang, Jerry(Junwei): A question in ttm_bo.c [SNIP] int ttm_bo_device_release(struct ttm_bo_device *bdev) { @@ -1623,18 +1620,25 @@ int ttm_bo_device_release(struct ttm_bo_device *bdev) drm_vma_offset_manager_destroy(&bdev->vma_manager); + if (!ret) +

Re: [Linux-v4.18-rc6] modpost-errors when compiling with clang-7 and CONFIG_DRM_AMDGPU=m

2018-10-22 Thread Sedat Dilek
On Wed, Sep 19, 2018 at 11:47 AM Sedat Dilek wrote: > > On Sun, Jul 29, 2018 at 4:39 PM, Christian König > wrote: > >> Do you need further informations? > > > > No, that is a known issue. > > > > Regards, > > Christian. > > > > Hi Christian, > > is/was this issue fixed? > > Regards, > - Sedat - >

[PATCH] drm/amdgpu: Fix amdgpu_vm_alloc_pts failed

2018-10-22 Thread Rex Zhu
When the va address is located in the last pd entry, alloc_pts will fail. Caused by "drm/amdgpu: add amdgpu_vm_entries_mask v2", commit 72af632549b97ead9251bb155f08fefd1fb6f5c3. Signed-off-by: Rex Zhu --- drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 34 +++--- 1 file ch

Re: [PATCH] drm/amdgpu: Fix amdgpu_vm_alloc_pts failed

2018-10-22 Thread Deucher, Alexander
This re-introduces a 64-bit division that is not handled correctly with the % operator. Alex From: amd-gfx on behalf of Rex Zhu Sent: Monday, October 22, 2018 12:09:21 PM To: amd-gfx@lists.freedesktop.org; Koenig, Christian Cc: Zhu, Rex Subject: [PATCH] drm/amdgp

Re: [PATCH] drm/amdgpu: fix a missing-check bug

2018-10-22 Thread Kuehling, Felix
The BIOS signature check does not guarantee integrity of the BIOS image either way. As I understand it, the signature is just a magic number. It's not a cryptographic signature. The check is just a sanity check. Therefore this change doesn't add any meaningful protection against the scenario you de

Re: [PATCH] mm: convert totalram_pages, totalhigh_pages and managed_pages to atomic.

2018-10-22 Thread Michal Hocko
On Mon 22-10-18 22:53:22, Arun KS wrote: > Remove managed_page_count_lock spinlock and instead use atomic > variables. I assume this has been auto-generated. If yes, it would be better to mention the script so that people can review it and regenerate for comparison. Such a large change is hard to

[PATCH] mm: convert totalram_pages, totalhigh_pages and managed_pages to atomic.

2018-10-22 Thread Arun KS
Remove managed_page_count_lock spinlock and instead use atomic variables. Suggested-by: Michal Hocko Suggested-by: Vlastimil Babka Signed-off-by: Arun KS --- As discussed here, https://patchwork.kernel.org/patch/10627521/#22261253 --- --- arch/csky/mm/init.c | 4 +-

[PATCH 1/2] drm/amdkfd: Delete a duplicate statement in set_pasid_vmid_mapping()

2018-10-22 Thread Zhao, Yong
The same statement is later done in kgd_set_pasid_vmid_mapping() already, so there is no need to do it in set_pasid_vmid_mapping() again. Change-Id: Iaf64b90c7dcb59944fb2012a58473dd063e73c60 Signed-off-by: Yong Zhao --- drivers/gpu/drm/amd/amdkfd/cik_regs.h | 2 -- drivers/gpu/drm/amd/amdkfd/kf

[PATCH 2/2] drm/amdkfd: page_table_base already have the flags needed

2018-10-22 Thread Zhao, Yong
The flags are added when calling amdgpu_gmc_pd_addr(). Change-Id: Idd85b1ac35d3d100154df8229ea20721d9a7045c Signed-off-by: Yong Zhao --- drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c | 5 ++--- drivers/gpu/drm/amd/amdkfd/kfd_priv.h | 1 + 2 files changed, 3 insertions(+), 3 delet

[PATCH 2/3] drm/amdgpu: Expose gmc_v9_0_flush_gpu_tlb_helper() for kfd to use

2018-10-22 Thread Zhao, Yong
Change-Id: I3dcd71955297c53b181f82e7078981230c642c01 Signed-off-by: Yong Zhao --- drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 64 --- drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h | 3 ++ 2 files changed, 40 insertions(+), 27 deletions(-) diff --git a/drivers/gpu/drm/amd/amd

[PATCH 1/3] drm/amdkfd: Remove unnecessary register setting when invalidating tlb in kfd

2018-10-22 Thread Zhao, Yong
Those register settings have been done in gfxhub_v1_0_program_invalidation() and mmhub_v1_0_program_invalidation(). Change-Id: I9b9b44f17ac2a6ff0c9c78f91885665da75543d0 Signed-off-by: Yong Zhao --- drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c | 17 - 1 file changed, 17 delet

[PATCH 3/3] drm/amdkfd: Use functions from amdgpu to invalidate vmid in kfd

2018-10-22 Thread Zhao, Yong
Change-Id: I306305e43d4b4032316909b3f4e3f9f5ca4520ae Signed-off-by: Yong Zhao --- drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c | 32 +-- 1 file changed, 1 insertion(+), 31 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c b/drivers/gpu/drm/amd/

[PATCH v3 1/2] drm/sched: Add boolean to mark if sched is ready to work v2

2018-10-22 Thread Andrey Grodzovsky
Problem: A particular scheduler may become unusable (underlying HW) after some event (e.g. GPU reset). If it's later chosen by the get free sched. policy a command will fail to be submitted. Fix: Add a driver specific callback to report the sched status so rq with bad sched can be avoided in favor

[PATCH v3 2/2] drm/amdgpu: Retire amdgpu_ring.ready flag v3

2018-10-22 Thread Andrey Grodzovsky
Start using drm_gpu_scheduler.ready instead. v3: Add helper function to run ring test and set sched.ready flag status accordingly, clean explicit sched.ready sets from the IP specific files. Signed-off-by: Andrey Grodzovsky --- drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c| 2 +- drivers/

[PATCH] drm/amdgpu: Enable default GPU reset for dGPU on gfx8/9.

2018-10-22 Thread Andrey Grodzovsky
After testing, it looks like this subset of ASICs has GPU reset working for the most part. Enable reset due to job timeout. Signed-off-by: Andrey Grodzovsky --- drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 24 +++- 1 file changed, 19 insertions(+), 5 deletions(-) diff --git a/dr

[PATCH] drm/amdgpu/amdkfd: clean up mmhub and gfxhub includes

2018-10-22 Thread Alex Deucher
Use the appropriate mmhub and gfxhub headers rather than adding them to the gmc9 header. Signed-off-by: Alex Deucher --- drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c | 3 ++- drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h | 2 ++ drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h | 6

Re: [PATCH] drm/amdgpu: Enable default GPU reset for dGPU on gfx8/9.

2018-10-22 Thread Alex Deucher
On Mon, Oct 22, 2018 at 5:20 PM Andrey Grodzovsky wrote: > > After testing, it looks like this subset of ASICs has GPU reset > working for the most part. Enable reset due to job timeout. > > Signed-off-by: Andrey Grodzovsky > --- > drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 24 +++-

Re: [PATCH 4/5] drm/ttm: initialize globals during device init

2018-10-22 Thread Zhang, Jerry(Junwei)
On 10/22/2018 08:35 PM, Christian König wrote: Am 22.10.18 um 08:45 schrieb Zhang, Jerry(Junwei): A question in ttm_bo.c [SNIP] int ttm_bo_device_release(struct ttm_bo_device *bdev) { @@ -1623,18 +1620,25 @@ int ttm_bo_device_release(struct ttm_bo_device *bdev) drm_vma_offset_manager_de

Re: [PATCH] drm/amdgpu: Fix amdgpu_vm_alloc_pts failed

2018-10-22 Thread Zhang, Jerry(Junwei)
On 10/23/2018 12:09 AM, Rex Zhu wrote: When the va address is located in the last pd entry, Do you mean the root PD? Maybe we need to round up the root PD in amdgpu_vm_entries_mask() like amdgpu_vm_num_entries(). BTW, it looks like amdgpu_vm_entries_mask() is going to replace amdgpu_vm_num_entries() Jer

Re: [PATCH] drm/amdgpu: Reverse the sequence of ctx_mgr_fini and vm_fini in amdgpu_driver_postclose_kms

2018-10-22 Thread Zhang, Jerry(Junwei)
On 10/22/2018 05:47 PM, Rex Zhu wrote: The csa buffer will be created per ctx; when the ctx is finalized, the csa buffer and va will be released, so we need to do ctx_mgr_fini before vm_fini. Signed-off-by: Rex Zhu Reviewed-by: Junwei Zhang --- drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 2 +- 1 file changed,

[PATCH] drm/amdgpu: Fix amdgpu_vm_alloc_pts failed

2018-10-22 Thread Rex Zhu
When the VA address is located in the last PD entries, alloc_pts will fail. Use the right PD mask instead of hardcoding it, as suggested by jerry.zhang. Signed-off-by: Rex Zhu --- drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 5 - 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/drivers/gp

Re: [PATCH] drm/amdgpu: Fix amdgpu_vm_alloc_pts failed

2018-10-22 Thread Zhu, Rex
Thanks Jerry. Good suggestion. Use the right mask for the PD instead of hardcoding it, so we don't need to revert the whole patch. Best Regards Rex From: Zhang, Jerry Sent: Tuesday, October 23, 2018 10:02 AM To: Zhu, Rex; amd-gfx@lists.freedesktop.org; Koenig, Christian

Re: [PATCH] drm/amdgpu: Fix amdgpu_vm_alloc_pts failed

2018-10-22 Thread Zhang, Jerry(Junwei)
On 10/23/2018 11:29 AM, Rex Zhu wrote: When the VA address is located in the last PD entries, alloc_pts will fail. Use the right PD mask instead of hardcoding it, as suggested by jerry.zhang. Signed-off-by: Rex Zhu Thanks for verifying that. Feel free to add Reviewed-by: Junwei Zhang Also like to ge

Re: [PATCH] drm/amdgpu: Fix amdgpu_vm_alloc_pts failed

2018-10-22 Thread Zhang, Jerry(Junwei)
On 10/23/2018 01:12 PM, Zhang, Jerry(Junwei) wrote: On 10/23/2018 11:29 AM, Rex Zhu wrote: When the VA address is located in the last PD entries, alloc_pts will fail. Use the right PD mask instead of hardcoding it, as suggested by jerry.zhang. Signed-off-by: Rex Zhu Thanks for verifying that. Feel fr

Re: [PATCH] drm/amdgpu: Fix amdgpu_vm_alloc_pts failed

2018-10-22 Thread Zhu, Rex
No, if the vm size is small, there may be only one root pd entry. We need to make sure the mask >= 0. Maybe this change reverts Christian's commit: commit 72af632549b97ead9251bb155f08fefd1fb6f5c3 Author: Christian König Date: Sat Sep 15 10:02:13 2018 +0200 drm/amdgpu: add amdgpu_vm_entries_

Re: [PATCH 1/5] drm/ttm: use a static ttm_mem_global instance

2018-10-22 Thread Thomas Zimmermann
Hi Christian Am 19.10.18 um 18:41 schrieb Christian König: > As the name says we only need one global instance of ttm_mem_global. > > Drop all the driver initialization and just use a single exported > instance which is initialized during BO global initialization. > > Signed-off-by: Christian Kö