On 27.02.2018 at 19:36, Amber Lin wrote:
When using the CPU to update page tables, we need to kmap all the PDs/PTs after
they are allocated, and that requires a TLB shootdown on each CPU, which is
quite heavy.
Instead, we map the whole visible VRAM to a kernel address at once. Pages
can be obtained from the offset.
On 27.02.2018 at 17:45, Alex Deucher wrote:
Some were missing the closing parens around options.
Signed-off-by: Alex Deucher
Reviewed-by: Christian König
---
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/am
Drop it, will send V2 patch
-Original Message-
From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf Of Monk
Liu
Sent: February 28, 2018 15:21
To: amd-gfx@lists.freedesktop.org
Cc: Liu, Monk
Subject: [PATCH 4/4] drm/amdgpu: try again kiq access if not in IRQ
sometimes GPU is s
sometimes the GPU is switched to other VFs and won't switch
back soon, so the KIQ reg access will not signal within
a short period; instead of busy waiting a long time (MAX_KIQ_REG_WAIT)
and returning TMO, we can instead sleep 5ms and try again
later (non-IRQ context)
And since the waiting in kiq_rreg/wreg is
sometimes the GPU is switched to other VFs and won't switch
back soon, so the KIQ reg access will not signal within
a short period; instead of returning a TMO error we in fact
should sleep 1ms and try again later (when not in interrupt
context).
And since there is a retry scheme provided, the MAX_KIQ_REG
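A minimal sketch of such a retry scheme, not the actual amdgpu patch: the retry budget, the fence_signaled callback and the sleep value are illustrative assumptions.

#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/errno.h>
#include <linux/types.h>

#define MAX_KIQ_REG_TRY 20	/* assumed retry budget, not taken from the patch */

/* Wait for a KIQ register-access fence; sleep and retry when the GPU is
 * currently serving another VF, but only if we are not in IRQ context. */
static int kiq_wait_with_retry(bool (*fence_signaled)(void *ctx), void *ctx)
{
	int i;

	for (i = 0; i < MAX_KIQ_REG_TRY; i++) {
		if (fence_signaled(ctx))
			return 0;		/* register value is now valid */
		if (in_interrupt())
			return -ETIME;		/* cannot sleep here, give up */
		msleep(5);			/* GPU may be switched to another VF */
	}
	return -ETIME;
}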
because this time SDMA may be under GPU RESET, so its ring->ready
may not be true; keep going and the GPU scheduler will reschedule
this job if it fails.
Give a warning in copy_buffer when going through direct_submit
while ring->ready is false
Change-Id: Ife6cd55e0e843d99900e5bed5418499e88633685
Signed-off-by
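A minimal sketch of the warning described above; the wrapper function and its parameters are illustrative, only the "direct submit while ring->ready is false" condition comes from the patch description.

#include <linux/bug.h>
#include <linux/types.h>

/* During GPU reset ring->ready may legitimately be false; warn on a direct
 * submission but do not bail out -- the GPU scheduler will reschedule the
 * job if it fails. */
static void warn_direct_submit_on_unready_ring(bool direct_submit, bool ring_ready)
{
	WARN_ONCE(direct_submit && !ring_ready,
		  "copy_buffer: direct submit while ring is not ready\n");
}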
1) create a routine "handle_vram_lost" to do the VRAM
recovery, and put it into amdgpu_device_reset/reset_sriov;
this way there is no need for the extra parameter to hold the
VRAM LOST information, and the related macros can be removed.
3) show vram_recover failure if it times out, and set the TMO equal to
lockup_timeo
found that recover_vram_from_shadow sometimes gets executed
in parallel with the SDMA scheduler; we should stop all
schedulers before doing GPU reset/recover
Change-Id: Ibaef3e3c015f3cf88f84b2eaf95cda95ae1a64e3
Signed-off-by: Monk Liu
---
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 40 +++
Drop this patch.
As Eric pointed out,
power containment is disabled only on Fiji and in the compute
power profile. It violates the PCIe spec and may cause the power
supply to fail. Enabling it will fix the issue, even though the
fix will drop performance in some compute tests.
Best Regards
Rex
Good. I will follow this in the new power profile implementation.
Best Regards
Rex
From: amd-gfx on behalf of Felix
Kuehling
Sent: Wednesday, February 28, 2018 5:01 AM
To: amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH] drm/amd/powerplay: fix power over limit on
Good day,
I have problems with graphics -- from time to time the kernel shows errors.
Kernel 4.14.20
Log attached.
Sincerely,
Anton Kashcheev.
[0.00] Linux version 4.14.20_1 (void-buildslave@build) (gcc version 7.3.0 (GCC)) #1 SMP PREEMPT Thu Feb 22 14:45:56 UTC 2018
[0.00] Command li
Reviewed-by: Felix Kuehling
On 2018-02-27 03:57 PM, Eric Huang wrote:
> power containment is disabled only on Fiji and in the compute
> power profile. It violates the PCIe spec and may cause the power
> supply to fail. Enabling it will fix the issue, even though the
> fix will drop performance in some compute tests.
>
>
power containment is disabled only on Fiji and in the compute
power profile. It violates the PCIe spec and may cause the power
supply to fail. Enabling it will fix the issue, even though the
fix will drop performance in some compute tests.
Signed-off-by: Eric Huang
---
drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c | 7
On 2018-02-27 06:27 AM, Rex Zhu wrote:
> avoid build error:
>
> drivers/gpu/drm/amd/amdgpu/../powerplay/inc/smu9_driver_if.h:342:3: error:
> redeclaration of enumerator ‘WM_COUNT’
>WM_COUNT,
>^
> In file included from
> drivers/gpu/drm/amd/amdgpu/../display/dc/dm_services_types.h:32:0,
>
On Tue, Feb 27, 2018 at 6:27 AM, Rex Zhu wrote:
>
>
> Rex Zhu (7):
drm/amd/pp: Simplify powerplay create code
drm/amd/dc: Use forward declaration instead of include header file
> drm/amd/pp: Refine powerplay instance
> drm/amdgpu: Notify sbios device ready before send request
> drm/amd
On Tue, Feb 27, 2018 at 6:27 AM, Rex Zhu wrote:
> Change-Id: Id7f674c7faba8d51864353a3a7ea96ba471a60d8
> Signed-off-by: Rex Zhu
Please include a description of why this is needed in the patch
description, as we discussed offline. With that fixed:
Reviewed-by: Alex Deucher
> ---
> drivers/gpu/
To answer this question, we haven't tested anything intentionally on the farm.
We don't really have a test to test profiles aside from ensuring that sysfs
returns what we set, and even then that's not part of automation.
Kent
-Original Message-
From: Kuehling, Felix
Sent: Tuesday, Feb
On 2018-02-27 11:27 AM, Alex Deucher wrote:
> On Tue, Feb 27, 2018 at 11:22 AM, Eric Huang wrote:
>> As I mentioned in the code review for the new power profile, the old gfx/compute power
>> profile has two scenarios for auto switching. One is
>> gfx->compute(default)->gfx and the other is gfx->compute(custom)->
When using the CPU to update page tables, we need to kmap all the PDs/PTs after
they are allocated, and that requires a TLB shootdown on each CPU, which is
quite heavy.
Instead, we map the whole visible VRAM to a kernel address at once. Pages
can be obtained from the offset.
v2: move the mapping base f
Some were missing the closing parens around options.
Signed-off-by: Alex Deucher
---
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 4dcddb3d0973
On 2018-02-27 05:16 PM, Harry Wentland wrote:
> On 2018-02-27 11:08 AM, Michel Dänzer wrote:
>> On 2018-02-27 04:54 PM, Leo Li wrote:
>>> On 2018-02-27 05:34 AM, Michel Dänzer wrote:
On 2018-02-26 09:15 PM, Harry Wentland wrote:
> From: "Leo (Sunpeng) Li"
>
> Non-legacy LUT size s
Yes. That is a solution.
Eric
On 2018-02-27 11:27 AM, Alex Deucher wrote:
On Tue, Feb 27, 2018 at 11:22 AM, Eric Huang wrote:
As I mentioned in the code review for the new power profile, the old gfx/compute power
profile has two scenarios for auto switching. One is
gfx->compute(default)->gfx and the other
On Tue, Feb 27, 2018 at 11:22 AM, Eric Huang wrote:
> As I mentioned in the code review for the new power profile, the old gfx/compute power
> profile has two scenarios for auto switching. One is
> gfx->compute(default)->gfx and the other is gfx->compute(custom)->gfx. The new power
> profile only satisfies the first one
As I mentioned in the code review for the new power profile, the old gfx/compute
power profile has two scenarios for auto switching. One is
gfx->compute(default)->gfx and the other is gfx->compute(custom)->gfx. The new
power profile only satisfies the first one, but in the second one, for user
debugging, the user setting of p
On 2018-02-27 11:08 AM, Michel Dänzer wrote:
> On 2018-02-27 04:54 PM, Leo Li wrote:
>> On 2018-02-27 05:34 AM, Michel Dänzer wrote:
>>> On 2018-02-26 09:15 PM, Harry Wentland wrote:
From: "Leo (Sunpeng) Li"
Non-legacy LUT size should reflect hw capability. Change size from 256
On 2018-02-27 04:54 PM, Leo Li wrote:
> On 2018-02-27 05:34 AM, Michel Dänzer wrote:
>> On 2018-02-26 09:15 PM, Harry Wentland wrote:
>>> From: "Leo (Sunpeng) Li"
>>>
>>> Non-legacy LUT size should reflect hw capability. Change size from 256
>>> to 4096.
>>>
>>> However, X doesn't seem to play wit
On 2018-02-27 05:34 AM, Michel Dänzer wrote:
On 2018-02-26 09:15 PM, Harry Wentland wrote:
From: "Leo (Sunpeng) Li"
Non-legacy LUT size should reflect hw capability. Change size from 256
to 4096.
However, X doesn't seem to play with legacy LUTs of such size.
Therefore, check for legacy lut
[+Eric]
Compute profile switching code as well as KFD compute support for most
GPUs is not upstream yet. As such, there is probably no requirement
(yet) to keep the compute profile API, which we added specifically
for KFD, stable. Once we are upstream that will change.
If you change it now, we'll h
+ Kent and Felix for comment
On Tue, Feb 27, 2018 at 6:21 AM, Rex Zhu wrote:
> The gfx/compute profiling mode switch is only for internal
> testing. It is not a complete solution and was unexpectedly upstreamed,
> so revert it.
>
> Change-Id: I1af1b64a63b6fc12c24cf73df03b083b3661ca02
> Signed-off-by: Rex Zhu
>
On 27.02.2018 at 16:22, Amber Lin wrote:
When using the CPU to update page tables, we need to kmap all the PDs/PTs after
they are allocated, and that requires a TLB shootdown on each CPU, which is
quite heavy.
Instead, we map the whole visible VRAM to a kernel address at once. Pages
can be obtained from the offset.
When using the CPU to update page tables, we need to kmap all the PDs/PTs after
they are allocated, and that requires a TLB shootdown on each CPU, which is
quite heavy.
Instead, we map the whole visible VRAM to a kernel address at once. Pages
can be obtained from the offset.
Change-Id: I56574bd544dae27
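A minimal sketch of the mapping idea, assuming the visible-VRAM aperture base and size are already known; the struct and helper names are illustrative, not the actual amdgpu change.

#include <linux/io.h>
#include <linux/bug.h>
#include <linux/errno.h>

struct vram_cpu_map {
	void __iomem	*base;	/* one mapping of the whole visible VRAM */
	resource_size_t	 size;
};

/* Map the CPU-visible VRAM aperture once, instead of kmap()ing every PD/PT. */
static int vram_cpu_map_init(struct vram_cpu_map *map,
			     resource_size_t aper_base, resource_size_t aper_size)
{
	map->base = ioremap_wc(aper_base, aper_size);
	if (!map->base)
		return -ENOMEM;
	map->size = aper_size;
	return 0;
}

/* CPU address of a page table that lives at @vram_offset inside visible VRAM. */
static void __iomem *vram_cpu_addr(struct vram_cpu_map *map, u64 vram_offset)
{
	if (WARN_ON(vram_offset >= map->size))
		return NULL;
	return map->base + vram_offset;
}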
Oh, wait a second this is for the encode ring, isn't it? I only fixed
the decode ring.
In this case the patch is Reviewed-by: Christian König
Regards,
Christian.
On 27.02.2018 at 16:13, Christian König wrote:
You are using outdated code; that has already been fixed on
amd-staging-drm-next.
You are using outdated code; that has already been fixed on
amd-staging-drm-next.
Christian.
On 27.02.2018 at 16:06, James Zhu wrote:
The emit frame size should match the corresponding function;
uvd_v6_0_enc_ring_emit_vm_flush has 5 amdgpu_ring_write calls
Signed-off-by: James Zhu
---
drivers/gpu/dr
The emit frame size should match the corresponding function;
uvd_v6_0_enc_ring_emit_vm_flush has 5 amdgpu_ring_write calls
Signed-off-by: James Zhu
---
drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
b/driver
Please add a comment above the CP_PQ_WPTR_POLL_CNTL write to explain that it's
to disable the polling. With that fixed:
Acked-by: Alex Deucher
From: amd-gfx on behalf of Liu, Monk
Sent: Tuesday, February 27, 2018 12:31:35 AM
To: Alex Deucher
Cc: amd-gfx list
Hi guys,
at least on amdgpu and radeon the page array allocated by
ttm_dma_tt_init is completely unused in the case of DMA-buf sharing. So
I'm trying to get rid of that by only allocating the DMA address array.
Now the only other user of DMA-buf together with ttm_dma_tt_init is
Nouveau. So m
We don't need the page array for prime shared BOs, stop allocating it.
Signed-off-by: Christian König
---
drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 5 +++--
2 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu
Unpin the GEM object only after freeing the sg table.
Signed-off-by: Christian König
---
drivers/gpu/drm/drm_prime.c | 32
1 file changed, 16 insertions(+), 16 deletions(-)
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index e82a976f0fba
Most of the time we only need the dma addresses.
Signed-off-by: Christian König
---
drivers/gpu/drm/drm_prime.c | 20 ++--
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index c38dacda6119..7856a9b3f8a8 10
Let's stop mangling everything in a single header and create one header
per object instead.
Signed-off-by: Christian König
---
drivers/gpu/drm/ttm/ttm_tt.c    |   6 -
include/drm/ttm/ttm_bo_driver.h | 237 +-
include/drm/ttm/ttm_tt.h        | 272
This allows drivers to only allocate dma addresses, but not a page
array.
Signed-off-by: Christian König
---
drivers/gpu/drm/ttm/ttm_tt.c | 54
include/drm/ttm/ttm_tt.h | 2 ++
2 files changed, 47 insertions(+), 9 deletions(-)
diff --git a/drive
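A minimal sketch of the dma-address-only idea using made-up names, since the real change lives inside TTM: for SG/prime-backed buffers only the DMA address array is allocated and the struct page array is skipped.

#include <linux/mm.h>
#include <linux/dma-mapping.h>
#include <linux/errno.h>

struct example_tt {
	struct page	**pages;	/* stays NULL for SG/prime-backed BOs */
	dma_addr_t	 *dma_address;
	unsigned long	  num_pages;
};

static int example_tt_alloc_dma_only(struct example_tt *tt, unsigned long num_pages)
{
	tt->num_pages = num_pages;
	tt->pages = NULL;		/* deliberately not allocated */
	tt->dma_address = kvmalloc_array(num_pages, sizeof(*tt->dma_address),
					 GFP_KERNEL | __GFP_ZERO);
	if (!tt->dma_address)
		return -ENOMEM;
	return 0;
}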
Get gpu info through adev directly in powerplay
Change-Id: I9cefcc4ecc46124e9136830d1d30ad8f337f941a
Signed-off-by: Rex Zhu
---
drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c| 59 --
drivers/gpu/drm/amd/include/cgs_common.h | 34 -
drivers/gpu/drm/a
On 27.02.2018 at 12:21, Rex Zhu wrote:
The gfx/compute profiling mode switch is only for internal
testing. It is not a complete solution and was unexpectedly upstreamed,
so revert it.
Change-Id: I1af1b64a63b6fc12c24cf73df03b083b3661ca02
Signed-off-by: Rex Zhu
Patch is Acked-by: Christian König .
Let's hop
Change-Id: I982a5d4dd82273465f28fc9009c3ed2461329ca8
Signed-off-by: Rex Zhu
---
drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c | 212 --
drivers/gpu/drm/amd/include/cgs_common.h | 44 --
drivers/gpu/drm/amd/powerplay/hwmgr/Makefile | 2 +-
drivers/gpu/drm/amd/p
Change-Id: Ided33c27ff6d1f1f62d41428039b133e96de1dbd
Signed-off-by: Rex Zhu
---
drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c| 4 +---
drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c | 7 +++
drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c | 1 -
3 files changed, 4 insertions(+),
Change-Id: Id7f674c7faba8d51864353a3a7ea96ba471a60d8
Signed-off-by: Rex Zhu
---
drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
index 57afad7..8fa850a 100644
--- a/
Include adev in the powerplay instance directly,
so we can remove the cgs interface.
Change-Id: Ia2f0a82dc176b616e727d1879f592391442859ee
Signed-off-by: Rex Zhu
---
drivers/gpu/drm/amd/powerplay/amd_powerplay.c | 6 ++
drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c | 7 ---
drivers/gpu/drm/am
avoid build error:
drivers/gpu/drm/amd/amdgpu/../powerplay/inc/smu9_driver_if.h:342:3: error:
redeclaration of enumerator ‘WM_COUNT’
WM_COUNT,
^
In file included from
drivers/gpu/drm/amd/amdgpu/../display/dc/dm_services_types.h:32:0,
from drivers/gpu/drm/amd/amdgpu/../disp
use adev as the input parameter to create the powerplay handle
directly; remove cgs support from powerplay create.
Signed-off-by: Rex Zhu
Change-Id: Ie3b91e8f67b8e8307ec678df7e8b6d6a6e0c52ae
---
drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c | 22 -
drivers/gpu/drm/amd/amdgpu/amdgpu_po
Rex Zhu (7):
drm/amd/pp: Simplify powerplay create code
drm/amd/dc: Use forward declaration instead of include header file
drm/amd/pp: Refine powerplay instance
drm/amdgpu: Notify sbios device ready before send request
drm/amd/pp: Use amdgpu acpi helper functions in powerplay
drm/amd/pp
The gfx/compute profiling mode switch is only for internal
testing. It is not a complete solution and was unexpectedly upstreamed,
so revert it.
Change-Id: I1af1b64a63b6fc12c24cf73df03b083b3661ca02
Signed-off-by: Rex Zhu
---
drivers/gpu/drm/amd/amdgpu/amdgpu_dpm.h| 8 -
drivers/gpu/drm/amd/amdgpu
On 2018-02-26 09:15 PM, Harry Wentland wrote:
> From: "Leo (Sunpeng) Li"
>
> Non-legacy LUT size should reflect hw capability. Change size from 256
> to 4096.
>
> However, X doesn't seem to play with legacy LUTs of such size.
> Therefore, check for legacy lut when updating DC states, and update
On 27.02.2018 at 11:19, Liu, Monk wrote:
So returning true only means the work is on the TODO list, but the work will
*never* get executed after this cancel_delayed_work_sync() returns, right?
Correct, yes.
Christian.
-Original Message-
From: Koenig, Christian
Sent: February 27, 2018 17:53
T
On 27.02.2018 at 09:47, Monk Liu wrote:
should use bo_create_kernel instead of splitting into two
functions that create and pin the SA bo
issue:
before this patch, there are DMAR read errors on the host
side when running the SRIOV test; the DMAR address falls
in the range of the SA bo.
fix:
after these cleanups
On 27.02.2018 at 09:47, Monk Liu wrote:
From: Emily Deng
the original method will change the wptr value in wb.
v2:
further cleanup
Change-Id: I984fabca35d9dcf1f5fa8ef7779b2afb7f7d7370
Signed-off-by: Emily Deng
Signed-off-by: Monk Liu
---
drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 17 +++
On 27.02.2018 at 09:47, Monk Liu wrote:
issue:
sometimes the GFX/MM IB test hits a timeout under the SRIOV env; the root cause
is that the engine doesn't come back soon enough, so the current
IB test is considered as timed out.
fix:
for SRIOV the GFX IB test wait time needs to be expanded a lot during
SRIOV runtime mode s
So returning true only means the work is on the TODO list, but the work will
*never* get executed after this cancel_delayed_work_sync() returns, right?
-Original Message-
From: Koenig, Christian
Sent: February 27, 2018 17:53
To: Liu, Monk ; amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH 09/22]
On 27.02.2018 at 09:47, Monk Liu wrote:
SRIOV doesn't give the VF the cg/pg feature, so the MM's idle_work
is skipped for SR-IOV
v2:
remove superfluous changes;
since idle_work is not scheduled for SR-IOV, the condition
check for SR-IOV inside idle_work can also be dropped
Change-Id: I6dd7ea48d23b0fee
The cause is found:
In amdgpu_pci_probe(), we set support_atomic to true no matter whether SRIOV or BM,
because "adev" is not created yet, so we cannot judge SRIOV or not at that time ...
But in amdgpu_fbdev_init(), we check whether we need to call that
disable_unused_function() by amdgpu_device_has_dc_support(), which
Is my understanding wrong ?...
The first thing that cancel_delayed_work_sync() does is take the
work item off the TODO list.
Then it makes sure that the work is currently not running on another CPU.
The return value just indicates if the work was still on the TODO list
or not, e.g. when f
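A minimal sketch of the semantics described above, assuming a delayed work item like adev->late_init_work; the surrounding function and handler are illustrative.

#include <linux/workqueue.h>

static void late_init_handler(struct work_struct *work) { }	/* placeholder handler */
static DECLARE_DELAYED_WORK(late_init_work, late_init_handler);	/* stands in for adev->late_init_work */

static void example_fini(void)
{
	bool was_pending;

	/* Takes the work off the pending list (if queued) and waits for any
	 * execution already running on another CPU to finish. */
	was_pending = cancel_delayed_work_sync(&late_init_work);

	/* The return value only says whether it was still queued; either way
	 * the handler cannot run again after this point, so no loop is needed. */
	(void)was_pending;
}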
On 2018-02-27 10:26 AM, Liu, Monk wrote:
> I'm not familiar with the DC stuff; I found that without this patch DRM always
> reports errors like this:
[...]
> [ 89.936514] [drm:drm_helper_disable_unused_functions [drm_kms_helper]]
> *ERROR* Called for atomic driver, this is not what you want.
"atomi
Okay, I think maybe the problem is SR-IOV related, I'll prepare another one
-Original Message-
From: Liu, Monk
Sent: February 27, 2018 17:27
To: 'Michel Dänzer'
Cc: amd-gfx@lists.freedesktop.org
Subject: RE: [PATCH 3/6] drm/amdgpu: should call drm_helper with dc support
I'm not familiar with
I'm not familiar with the DC stuff; I found that without this patch DRM always
reports errors like this:
[ 89.783235] [drm] Found UVD firmware Version: 1.87 Family ID: 17
[ 89.783285] [drm] PSP loading UVD firmware
[ 89.784657] [drm] MM table gpu addr = 0xcfc000, cpu addr = 5ec858a9.
[
I know that in theory this issue should be equal whether SRIOV or not,
but since the GPU performance under SRIOV is only one quarter compared with
bare-metal (assume 4 VFs),
and the CPU's performance is also limited, there is a chance that the fence
is signaled not as quickly as on bare-metal,
and that leads to
On 2018-02-27 09:47 AM, Monk Liu wrote:
> otherwise DRM keeps reporting complaints about ATOMIC flags
> during kmd reloading
>
> Change-Id: I835b96e6d61c7995bbd5dd5478d056671dde9192
> Signed-off-by: Monk Liu
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c | 2 +-
> 1 file changed, 1 insertion(+), 1 del
> Never use schedule() in a while loop as long as you don't work on core
> locking primitives or interrupt handling or stuff like that. In this
> particular case a single call to cancel_delayed_work_sync() should be
> sufficient.
I thought that only one call to cancel_delayed_work_sync() cannot
should use bo_create_kernel instead of splitting into two
functions that create and pin the SA bo
issue:
before this patch, there are DMAR read errors on the host
side when running the SRIOV test; the DMAR address falls
in the range of the SA bo.
fix:
after these cleanups of SA init and fini, the above DMAR
errors are gone.
From: Emily Deng
the original method will change the wptr value in wb.
v2:
further cleanup
Change-Id: I984fabca35d9dcf1f5fa8ef7779b2afb7f7d7370
Signed-off-by: Emily Deng
Signed-off-by: Monk Liu
---
drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 17 -
1 file changed, 8 insertions(+),
otherwise DRM keeps reporting complaints about ATOMIC flags
during kmd reloading
Change-Id: I835b96e6d61c7995bbd5dd5478d056671dde9192
Signed-off-by: Monk Liu
---
drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu
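A minimal sketch of the guard being discussed, reusing amdgpu's existing amdgpu_device_has_dc_support() and adev->ddev; the wrapper function is illustrative, not the actual one-line hunk.

#include <drm/drm_crtc_helper.h>
#include "amdgpu.h"		/* driver-internal header, for illustration only */

static void example_fbdev_init_tail(struct amdgpu_device *adev)
{
	/* The legacy helper complains when called on an atomic (DC) driver,
	 * so only run it on the non-DC path. */
	if (!amdgpu_device_has_dc_support(adev))
		drm_helper_disable_unused_functions(adev->ddev);
}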
issue:
sometimes the GFX/MM IB test hits a timeout under the SRIOV env; the root cause
is that the engine doesn't come back soon enough, so the current
IB test is considered as timed out.
fix:
for SRIOV the GFX IB test wait time needs to be expanded a lot during
SRIOV runtime mode since it couldn't really begin before GFX en
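A minimal sketch of the timeout adjustment, with an assumed default and multiplier; the helper is illustrative, only the "much longer wait under SR-IOV" part comes from the description.

#include <linux/jiffies.h>
#include "amdgpu.h"		/* driver-internal header, for illustration only */

#define IB_TEST_TIMEOUT			msecs_to_jiffies(1000)	/* assumed default */
#define IB_TEST_TIMEOUT_SRIOV_MULT	10			/* assumed factor */

static long example_ib_test_timeout(struct amdgpu_device *adev)
{
	long tmo = IB_TEST_TIMEOUT;

	/* Under SR-IOV the engine may still be serving another VF, so give
	 * the IB test much more time before declaring it timed out. */
	if (amdgpu_sriov_vf(adev))
		tmo *= IB_TEST_TIMEOUT_SRIOV_MULT;

	return tmo;
}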
otherwise a page fault is hit in the work if the driver
finishes too quickly
Change-Id: I4fd47ccd836441b1b3f426dec5d364d9df02e23d
Signed-off-by: Monk Liu
---
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_d
SRIOV doesn't give the VF the cg/pg feature, so the MM's idle_work
is skipped for SR-IOV
v2:
remove superfluous changes;
since idle_work is not scheduled for SR-IOV, the condition
check for SR-IOV inside idle_work can also be dropped
Change-Id: I6dd7ea48d23b0fee74ecb9e93b53bfe36b0e8164
Signed-off-by: Mon
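A minimal sketch of where the check goes, reusing amdgpu's existing amdgpu_sriov_vf() and idle_work names; the wrapper function and the locally defined timeout are illustrative.

#include <linux/workqueue.h>
#include <linux/jiffies.h>
#include "amdgpu.h"		/* driver-internal header, for illustration only */

#define UVD_IDLE_TIMEOUT msecs_to_jiffies(1000)	/* assumed value, defined locally for this sketch */

static void example_uvd_end_use(struct amdgpu_device *adev)
{
	/* The VF has no cg/pg control under SR-IOV, so never queue the idle
	 * work there; the SR-IOV check inside the handler can then go away. */
	if (amdgpu_sriov_vf(adev))
		return;

	schedule_delayed_work(&adev->uvd.idle_work, UVD_IDLE_TIMEOUT);
}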
*** some bug fixes for the SRIOV case ***
Emily Deng (1):
drm/amdgpu: Correct sdma_v4 get_wptr(v2)
Monk Liu (5):
drm/amdgpu: don't use MM idle_work for SRIOV(v2)
drm/amdgpu: block till late_init_work done in dev_fini
drm/amdgpu: should call drm_helper with dc support
drm/amdgpu: adjust ti
On 27.02.2018 at 06:26, Liu, Monk wrote:
I would rather avoid calling the function in the first place.
I already did it in patch 08, and you also rejected this patch,
so I'll consider patch 08 still valid, and drop this one.
Well, a good part of patch 08 is still valid. I just rejected that
On 27.02.2018 at 05:45, Liu, Monk wrote:
In this case I think it would be much better to wait for the idle work before
trying to unload the driver.
I already did it:
+ if (!amdgpu_sriov_vf(adev))
+ while (cancel_delayed_work_sync(&adev->late_init_work))
+
On 27.02.2018 at 04:36, Liu, Monk wrote:
Well, then there is something else broken, and I actually think that this is the
root cause here.
What if this VM submits four different jobs on four different rings, e.g.
gfx/compute1/compute2/vce?
So the fences for those jobs are from different contexts,