Re: [PATCH v2 1/2] drm/amdgpu: parameterize ttm BO destroy callback

2021-06-15 Thread Christian König
On 15.06.21 at 11:23, Nirmoy Das wrote: Make provision to pass a different ttm BO destroy callback while creating an amdgpu_bo. v2: remove whitespace; call amdgpu_bo_destroy_base() at the end for cleaner code. Signed-off-by: Nirmoy Das --- drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 4

Re: [PATCH v2 1/2] drm/amdgpu: parameterize ttm BO destroy callback

2021-06-15 Thread Christian König
On 14.06.21 at 21:26, Nirmoy Das wrote: Make provision to pass a different ttm BO destroy callback while creating an amdgpu_bo. v2: pass destroy callback using amdgpu_bo_param. Signed-off-by: Nirmoy Das --- drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 52 +- drivers/gpu/dr

Re: [PATCH 2/2] drm/amdgpu: move shadow_list to amdgpu_bo_vm

2021-06-15 Thread Christian König
On 14.06.21 at 21:26, Nirmoy Das wrote: Move shadow_list to struct amdgpu_bo_vm as shadow BOs are part of PT/PD BOs. Signed-off-by: Nirmoy Das --- drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 5 +++-- drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 13 +++-- drivers/gpu/drm/amd/amdgp

[PATCH v2] drm/amd/amdgpu: Use IP discovery data to determine VCN enablement instead of MMSCH

2021-06-15 Thread Peng Ju Zhou
From: Bokun Zhang In the past, we used MMSCH to determine whether a VCN is enabled or not. This is not reliable since, after an FLR, MMSCH may report junk data. It is better to use IP discovery data. Signed-off-by: Bokun Zhang Signed-off-by: Peng Ju Zhou --- drivers/gpu/drm/amd/amdgpu/amdgpu_di

Re: [RFC PATCH] drm/ttm: Do page counting after populate callback succeed

2021-06-15 Thread Christian König
On 15.06.21 at 13:57, xinhui pan wrote: Amdgpu sets the SG flag in the populate callback, so TTM still counts pages in SG BOs. It's probably better to fix this instead. E.g. why does amdgpu modify the SG flag during populate and not during initial creation? That doesn't seem to make sense. Christian.

[PATCH] drm/amdkfd: Fix a race between queue destroy and process termination

2021-06-15 Thread xinhui pan
We call free_mqd without the dqm lock held, which causes a double free of mqd_mem_obj. Fix it by using a tmp pointer. We need to walk through the queues_list with the dqm lock held, otherwise we hit list corruption. Signed-off-by: xinhui pan --- .../drm/amd/amdkfd/kfd_device_queue_manager.c | 17 +-

Re: [PATCH v2 1/2] drm/amdgpu: parameterize ttm BO destroy callback

2021-06-15 Thread Das, Nirmoy
On 6/15/2021 8:53 AM, Christian König wrote: On 14.06.21 at 21:26, Nirmoy Das wrote: Make provision to pass a different ttm BO destroy callback while creating an amdgpu_bo. v2: pass destroy callback using amdgpu_bo_param. Signed-off-by: Nirmoy Das --- drivers/gpu/drm/amd/amdgpu/amdgpu_object

Re: [PATCH v2 1/2] drm/amdgpu: parameterize ttm BO destroy callback

2021-06-15 Thread Das, Nirmoy
On 6/14/2021 10:00 PM, Alex Deucher wrote: On Mon, Jun 14, 2021 at 3:27 PM Nirmoy Das wrote: Make provision to pass a different ttm BO destroy callback while creating an amdgpu_bo. v2: pass destroy callback using amdgpu_bo_param. Signed-off-by: Nirmoy Das --- drivers/gpu/drm/amd/amdg

Re: [PATCH v2 1/1] drm/amdgpu: remove amdgpu_vm_pt

2021-06-15 Thread Das, Nirmoy
ping. On 6/14/2021 2:31 PM, Nirmoy Das wrote: Page table entries are now embedded in the VM BO, so we do not need struct amdgpu_vm_pt. This patch replaces struct amdgpu_vm_pt with struct amdgpu_vm_bo_base. v2: change "!(cursor->level < AMDGPU_VM_PTB)" --> "(cursor->level == AMDGPU_VM_PTB)" Sig

[PATCH] drm: display: Fix duplicate field initialization in dcn31

2021-06-15 Thread Wan Jiabing
Fix the following coccicheck warning: drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c:917:56-57: pstate_enabled: first occurrence line 935, second occurrence line 937 Signed-off-by: Wan Jiabing --- drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c | 1 - 1 file changed, 1 deletion(-)

[PATCH v2 1/2] drm/amdgpu: parameterize ttm BO destroy callback

2021-06-15 Thread Nirmoy Das
Make provision to pass a different ttm BO destroy callback while creating an amdgpu_bo. v2: remove whitespace; call amdgpu_bo_destroy_base() at the end for cleaner code. Signed-off-by: Nirmoy Das --- drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 48 -- drivers/gpu/drm/amd/am

[PATCH 2/2] drm/amdgpu: move shadow_list to amdgpu_bo_vm

2021-06-15 Thread Nirmoy Das
Move shadow_list to struct amdgpu_bo_vm as shadow BOs are part of PT/PD BOs. Signed-off-by: Nirmoy Das --- drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 5 +++-- drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 14 -- drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 6 ++ drivers/gpu/dr

Re: [PATCH v2 1/1] drm/amdgpu: remove amdgpu_vm_pt

2021-06-15 Thread Christian König
On 14.06.21 at 14:31, Nirmoy Das wrote: Page table entries are now embedded in the VM BO, so we do not need struct amdgpu_vm_pt. This patch replaces struct amdgpu_vm_pt with struct amdgpu_vm_bo_base. v2: change "!(cursor->level < AMDGPU_VM_PTB)" --> "(cursor->level == AMDGPU_VM_PTB)" Signed-of

Re: [PATCH 1/2] drm/amdgpu/gfx9: fix the doorbell missing when in CGPG issue.

2021-06-15 Thread Alex Deucher
Series is: Reviewed-by: Alex Deucher On Tue, Jun 15, 2021 at 6:04 AM Yifan Zhang wrote: > > If GC has entered CGPG, ringing doorbell > first page doesn't wakeup GC. > Enlarge CP_MEC_DOORBELL_RANGE_UPPER to workaround this issue. > > Signed-off-by: Yifan Zhang > Reviewed-by: Felix Kuehling > --

[PATCH 1/2] drm/amdgpu/gfx9: fix the doorbell missing when in CGPG issue.

2021-06-15 Thread Yifan Zhang
If GC has entered CGPG, ringing a doorbell > first page doesn't wake up GC. Enlarge CP_MEC_DOORBELL_RANGE_UPPER to work around this issue. Signed-off-by: Yifan Zhang Reviewed-by: Felix Kuehling --- drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 6 +- 1 file changed, 5 insertions(+), 1 deletion(-) dif

[PATCH 2/2] drm/amdgpu/gfx10: enlarge CP_MEC_DOORBELL_RANGE_UPPER to cover full doorbell.

2021-06-15 Thread Yifan Zhang
If GC has entered CGPG, ringing a doorbell > first page doesn't wake up GC. Enlarge CP_MEC_DOORBELL_RANGE_UPPER to work around this issue. Signed-off-by: Yifan Zhang Reviewed-by: Felix Kuehling --- drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 6 +- 1 file changed, 5 insertions(+), 1 deletion(-) di

[PATCH v3 00/14] New uAPI drm properties for color management

2021-06-15 Thread Werner Sembach
I started work on my proposal for better color handling in Linux display drivers: https://lkml.org/lkml/2021/5/12/764 This 3rd revision includes everything except the generalised Broadcast RGB implementation. However, I have not yet included everything suggested in the feedback for v1 and v2.

[PATCH v3 01/14] drm/amd/display: Remove unnecessary SIGNAL_TYPE_HDMI_TYPE_A check

2021-06-15 Thread Werner Sembach
Remove the unnecessary SIGNAL_TYPE_HDMI_TYPE_A check that was performed in the drm_mode_is_420_only() case but not in the drm_mode_is_420_also() && force_yuv420_output case. Without further knowledge of whether YCbCr 4:2:0 is supported outside of HDMI, there is no reason to use RGB when the display reports d

[PATCH v3 06/14] drm/uAPI: Add "active color format" drm property as feedback for userspace

2021-06-15 Thread Werner Sembach
Add a new general drm property "active color format" which can be used by graphics drivers to report the used color format back to userspace. There was no way to check which color format actually got used on a given monitor. To surely predict this, one must know the exact capabilities of the monito

[PATCH v3 04/14] drm/amd/display: Add handling for new "active bpc" property

2021-06-15 Thread Werner Sembach
This commit implements the "active bpc" drm property for the AMD GPU driver. Signed-off-by: Werner Sembach --- .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 19 ++- .../display/amdgpu_dm/amdgpu_dm_mst_types.c | 4 2 files changed, 22 insertions(+), 1 deletion(-) diff -

[PATCH v3 03/14] drm/uAPI: Add "active bpc" as feedback channel for "max bpc" drm property

2021-06-15 Thread Werner Sembach
Add a new general drm property "active bpc" which can be used by graphics drivers to report the applied bit depth per pixel back to userspace. While "max bpc" can be used to change the color depth, there was no way to check which one actually got used. While in theory the driver chooses the best/hi

[PATCH v3 05/14] drm/i915/display: Add handling for new "active bpc" property

2021-06-15 Thread Werner Sembach
This commit implements the "active bpc" drm property for the Intel GPU driver. Signed-off-by: Werner Sembach --- drivers/gpu/drm/i915/display/intel_display.c | 14 ++ drivers/gpu/drm/i915/display/intel_dp.c | 8 ++-- drivers/gpu/drm/i915/display/intel_dp_mst.c | 5 +

[PATCH v3 02/14] drm/amd/display: Add missing cases convert_dc_color_depth_into_bpc

2021-06-15 Thread Werner Sembach
convert_dc_color_depth_into_bpc(), which converts the enum dc_color_depth to an integer, was missing the cases for COLOR_DEPTH_999 and COLOR_DEPTH_11. Signed-off-by: Werner Sembach --- drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 4 1 file changed, 4 insertions(+) diff --git a/dri

[PATCH v3 10/14] drm/amd/display: Add handling for new "active color range" property

2021-06-15 Thread Werner Sembach
This commit implements the "active color range" drm property for the AMD GPU driver. Signed-off-by: Werner Sembach --- .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 32 +++ .../display/amdgpu_dm/amdgpu_dm_mst_types.c | 4 +++ 2 files changed, 36 insertions(+) diff --git a/d

[PATCH v3 08/14] drm/i915/display: Add handling for new "active color format" property

2021-06-15 Thread Werner Sembach
This commit implements the "active color format" drm property for the Intel GPU driver. Signed-off-by: Werner Sembach --- drivers/gpu/drm/i915/display/intel_display.c | 21 +++- drivers/gpu/drm/i915/display/intel_dp.c | 2 ++ drivers/gpu/drm/i915/display/intel_dp_mst.c |

[PATCH v3 07/14] drm/amd/display: Add handling for new "active color format" property

2021-06-15 Thread Werner Sembach
This commit implements the "active color format" drm property for the AMD GPU driver. Signed-off-by: Werner Sembach --- .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 28 +-- .../display/amdgpu_dm/amdgpu_dm_mst_types.c | 4 +++ 2 files changed, 30 insertions(+), 2 deletions(-

[PATCH v3 11/14] drm/i915/display: Add handling for new "active color range" property

2021-06-15 Thread Werner Sembach
This commit implements the "active color range" drm property for the Intel GPU driver. Signed-off-by: Werner Sembach --- drivers/gpu/drm/i915/display/intel_display.c | 4 drivers/gpu/drm/i915/display/intel_dp.c | 2 ++ drivers/gpu/drm/i915/display/intel_dp_mst.c | 5 + drivers/gpu

[PATCH v3 13/14] drm/amd/display: Add handling for new "preferred color format" property

2021-06-15 Thread Werner Sembach
This commit implements the "preferred color format" drm property for the AMD GPU driver. Signed-off-by: Werner Sembach --- .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 24 ++- .../display/amdgpu_dm/amdgpu_dm_mst_types.c | 4 2 files changed, 22 insertions(+), 6 deletio

[PATCH v3 09/14] drm/uAPI: Add "active color range" drm property as feedback for userspace

2021-06-15 Thread Werner Sembach
Add a new general drm property "active color range" which can be used by graphics drivers to report the used color range back to userspace. There was no way to check which color range actually got used on a given monitor. To surely predict this, one must know the exact capabilities of the monitor a

[PATCH v3 12/14] drm/uAPI: Add "preferred color format" drm property as setting for userspace

2021-06-15 Thread Werner Sembach
Add a new general drm property "preferred color format" which can be used by userspace to tell the graphics drivers which color format to use. Possible options are: - auto (default/current behaviour) - rgb - ycbcr444 - ycbcr422 (supported by neither amdgpu nor i915) - ycbcr4

[PATCH v3 14/14] drm/i915/display: Add handling for new "preferred color format" property

2021-06-15 Thread Werner Sembach
This commit implements the "preferred color format" drm property for the Intel GPU driver. Signed-off-by: Werner Sembach --- drivers/gpu/drm/i915/display/intel_dp.c | 7 ++- drivers/gpu/drm/i915/display/intel_dp_mst.c | 5 + drivers/gpu/drm/i915/display/intel_hdmi.c | 5 + 3 f

[PATCH 1/1] drm/amdkfd: remove unused variable

2021-06-15 Thread Nirmoy Das
Remove it. CC: jonathan@amd.com CC: felix.kuehl...@amd.com Fixes: d7b132507384c("drm/amdkfd: fix circular locking on get_wave_state") Signed-off-by: Nirmoy Das --- drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/driv

Re: [PATCH 1/1] drm/amdkfd: remove unused variable

2021-06-15 Thread Das, Nirmoy
On 6/15/2021 12:33 PM, Nirmoy Das wrote: Remove it. CC: jonathan@amd.com CC: felix.kuehl...@amd.com Fixes: d7b132507384c("drm/amdkfd: fix circular locking on get_wave_state") Signed-off-by: Nirmoy Das --- drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c | 3 +-- 1 file changed, 1

RE: [PATCH 1/1] drm/amdkfd: remove unused variable

2021-06-15 Thread Kim, Jonathan
[AMD Official Use Only] Thanks for the catch. Reviewed-by: Jonathan Kim > -Original Message- > From: Das, Nirmoy > Sent: Tuesday, June 15, 2021 6:35 AM > To: amd-gfx@lists.freedesktop.org > Cc: Kim, Jonathan ; Kuehling, Felix > > Subject: Re: [PATCH 1/1] drm/amdkfd: remove unused varia

Re: [PATCH v2 1/2] drm/amdgpu: parameterize ttm BO destroy callback

2021-06-15 Thread Das, Nirmoy
On 6/15/2021 12:48 PM, Christian König wrote: On 15.06.21 at 11:23, Nirmoy Das wrote: Make provision to pass a different ttm BO destroy callback while creating an amdgpu_bo. v2: remove whitespace; call amdgpu_bo_destroy_base() at the end for cleaner code. Signed-off-by: Nirmoy Das ---

[PATCH AUTOSEL 5.12 28/33] radeon: use memcpy_to/fromio for UVD fw upload

2021-06-15 Thread Sasha Levin
From: Chen Li [ Upstream commit ab8363d3875a83f4901eb1cc00ce8afd24de6c85 ] I met a gpu addr bug recently, and the kernel log tells me the pc is in memcpy/memset and the link register is in radeon_uvd_resume. As we know, on some architectures, optimized memcpy/memset may not work well on device memory. Tri

[PATCH AUTOSEL 5.10 26/30] radeon: use memcpy_to/fromio for UVD fw upload

2021-06-15 Thread Sasha Levin
From: Chen Li [ Upstream commit ab8363d3875a83f4901eb1cc00ce8afd24de6c85 ] I met a gpu addr bug recently, and the kernel log tells me the pc is in memcpy/memset and the link register is in radeon_uvd_resume. As we know, on some architectures, optimized memcpy/memset may not work well on device memory. Tri

[PATCH AUTOSEL 5.4 12/15] radeon: use memcpy_to/fromio for UVD fw upload

2021-06-15 Thread Sasha Levin
From: Chen Li [ Upstream commit ab8363d3875a83f4901eb1cc00ce8afd24de6c85 ] I met a gpu addr bug recently, and the kernel log tells me the pc is in memcpy/memset and the link register is in radeon_uvd_resume. As we know, on some architectures, optimized memcpy/memset may not work well on device memory. Tri

[PATCH AUTOSEL 4.19 09/12] radeon: use memcpy_to/fromio for UVD fw upload

2021-06-15 Thread Sasha Levin
From: Chen Li [ Upstream commit ab8363d3875a83f4901eb1cc00ce8afd24de6c85 ] I met a gpu addr bug recently, and the kernel log tells me the pc is in memcpy/memset and the link register is in radeon_uvd_resume. As we know, on some architectures, optimized memcpy/memset may not work well on device memory. Tri

[PATCH AUTOSEL 4.14 5/8] radeon: use memcpy_to/fromio for UVD fw upload

2021-06-15 Thread Sasha Levin
From: Chen Li [ Upstream commit ab8363d3875a83f4901eb1cc00ce8afd24de6c85 ] I met a gpu addr bug recently, and the kernel log tells me the pc is in memcpy/memset and the link register is in radeon_uvd_resume. As we know, on some architectures, optimized memcpy/memset may not work well on device memory. Tri

[PATCH AUTOSEL 4.9 5/5] radeon: use memcpy_to/fromio for UVD fw upload

2021-06-15 Thread Sasha Levin
From: Chen Li [ Upstream commit ab8363d3875a83f4901eb1cc00ce8afd24de6c85 ] I met a gpu addr bug recently, and the kernel log tells me the pc is in memcpy/memset and the link register is in radeon_uvd_resume. As we know, on some architectures, optimized memcpy/memset may not work well on device memory. Tri

[PATCH AUTOSEL 4.4 3/3] radeon: use memcpy_to/fromio for UVD fw upload

2021-06-15 Thread Sasha Levin
From: Chen Li [ Upstream commit ab8363d3875a83f4901eb1cc00ce8afd24de6c85 ] I met a gpu addr bug recently, and the kernel log tells me the pc is in memcpy/memset and the link register is in radeon_uvd_resume. As we know, on some architectures, optimized memcpy/memset may not work well on device memory. Tri

[PATCH 2/2] drm/amdgpu: move shadow_list to amdgpu_bo_vm

2021-06-15 Thread Nirmoy Das
Move shadow_list to struct amdgpu_bo_vm as shadow BOs are part of PT/PD BOs. Signed-off-by: Nirmoy Das --- drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 5 +++-- drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 14 -- drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 6 ++ drivers/gpu/dr

[PATCH v3 1/2] drm/amdgpu: parameterize ttm BO destroy callback

2021-06-15 Thread Nirmoy Das
Make provision to pass a different ttm BO destroy callback while creating an amdgpu_bo. v3: remove unnecessary amdgpu_bo_destroy_base. v2: remove whitespace; call amdgpu_bo_destroy_base() at the end for cleaner code. Signed-off-by: Nirmoy Das --- drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 41

Re: [PATCH 2/2] drm/amdgpu: move shadow_list to amdgpu_bo_vm

2021-06-15 Thread Christian König
On 15.06.21 at 13:51, Nirmoy Das wrote: Move shadow_list to struct amdgpu_bo_vm as shadow BOs are part of PT/PD BOs. Signed-off-by: Nirmoy Das --- drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 5 +++-- drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 14 -- drivers/gpu/drm/amd/am

[RFC PATCH] drm/ttm: Do page counting after populate callback succeed

2021-06-15 Thread xinhui pan
Amdgpu sets the SG flag in the populate callback, so TTM still counts pages in SG BOs. One easy way to fix this is to count pages after the populate callback. We hit an issue where amdgpu allocates many SG BOs, but TTM tries to swap again and again even though swapout does not swap SG BOs at all. Signed-off-by: xinh

Re: [PATCH 2/2] drm/amdgpu: move shadow_list to amdgpu_bo_vm

2021-06-15 Thread Das, Nirmoy
On 6/15/2021 1:57 PM, Christian König wrote: On 15.06.21 at 13:51, Nirmoy Das wrote: Move shadow_list to struct amdgpu_bo_vm as shadow BOs are part of PT/PD BOs. Signed-off-by: Nirmoy Das --- drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 5 +++-- drivers/gpu/drm/amd/amdgpu/amdgpu_object

Re: [RFC PATCH] drm/ttm: Do page counting after populate callback succeed

2021-06-15 Thread Pan, Xinhui
> On 15 June 2021 at 20:01, Christian König wrote: > > On 15.06.21 at 13:57, xinhui pan wrote: >> Amdgpu sets the SG flag in the populate callback, so TTM still counts pages in SG >> BOs. > > It's probably better to fix this instead. E.g. why does amdgpu modify the SG > flag during populate and not during initial

Re: [RFC PATCH] drm/ttm: Do page counting after populate callback succeed

2021-06-15 Thread Christian König
On 15.06.21 at 14:11, Pan, Xinhui wrote: On 15 June 2021 at 20:01, Christian König wrote: On 15.06.21 at 13:57, xinhui pan wrote: Amdgpu sets the SG flag in the populate callback, so TTM still counts pages in SG BOs. It's probably better to fix this instead. E.g. why does amdgpu modify the SG flag during populat

Re: [PATCH] drm: display: Fix duplicate field initialization in dcn31

2021-06-15 Thread Rodrigo Siqueira
On 06/15, Wan Jiabing wrote: > Fix the following coccicheck warning: > drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c:917:56-57: > pstate_enabled: first occurrence line 935, second occurrence line 937 > > Signed-off-by: Wan Jiabing > --- > drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resour

Re: [PATCH v3 03/14] drm/uAPI: Add "active bpc" as feedback channel for "max bpc" drm property

2021-06-15 Thread Werner Sembach
On 15.06.21 at 16:14, Werner Sembach wrote: Add a new general drm property "active bpc" which can be used by graphics drivers to report the applied bit depth per pixel back to userspace. While "max bpc" can be used to change the color depth, there was no way to check which one actually got used.

Re: [PATCH 1/1] drm/amdgpu: Use spinlock_irqsave for pasid_lock

2021-06-15 Thread Zeng, Oak
Reviewed-by: Oak Zeng Regards, Oak On 2021-06-14, 6:07 PM, "amd-gfx on behalf of Felix Kuehling" wrote: This should fix a kernel LOCKDEP warning on Vega10: [ 149.416604] [ 149.420877] WARNING: inconsistent lock state [ 149.425152] 5.11

Re: [PATCH 40/40] drm/amdgpu: Correctly disable the I2C IP block

2021-06-15 Thread Alex Deucher
On Mon, Jun 14, 2021 at 1:47 PM Luben Tuikov wrote: > > On long transfers to the EEPROM device, > i.e. write, it is observed that the driver aborts > the transfer. > > The reason for this is that the driver isn't > patient enough--the IC_STATUS register's contents > is 0x27, which is MST_ACTIVITY

Re: [PATCH 20/40] drm/amdgpu: EEPROM respects I2C quirks

2021-06-15 Thread Alex Deucher
On Mon, Jun 14, 2021 at 1:47 PM Luben Tuikov wrote: > > Consult the i2c_adapter.quirks table for > the maximum read/write data length per bus > transaction. Do not exceed this transaction > limit. > > Cc: Jean Delvare > Cc: Alexander Deucher > Cc: Andrey Grodzovsky > Cc: Lijo Lazar > Cc: Stanl

Re: [PATCH 36/40] drm/amdgpu: Optimizations to EEPROM RAS table I/O

2021-06-15 Thread Alex Deucher
On Mon, Jun 14, 2021 at 1:47 PM Luben Tuikov wrote: > > Read and write the table in one go, then use a > separate stage to decode or encode the data, > as opposed to doing it on the fly, > which keeps the I2C bus busy. Use a single > read/write to read/write the table or at

Re: [RFC PATCH] drm/ttm: Do page counting after populate callback succeed

2021-06-15 Thread Felix Kuehling
On 2021-06-15 at 8:18 a.m., Christian König wrote: > On 15.06.21 at 14:11, Pan, Xinhui wrote: >>> On 15 June 2021 at 20:01, Christian König >>> wrote: >>> >>> On 15.06.21 at 13:57, xinhui pan wrote: Amdgpu sets the SG flag in the populate callback, so TTM still counts pages in SG BOs. >>> It's probably

Re: [PATCH v2] drm/amd/amdgpu: Use IP discovery data to determine VCN enablement instead of MMSCH

2021-06-15 Thread Alex Deucher
On Tue, Jun 15, 2021 at 3:46 AM Peng Ju Zhou wrote: > > From: Bokun Zhang > > In the past, we used MMSCH to determine whether a VCN is enabled or not. > This is not reliable since, after an FLR, MMSCH may report junk data. > > It is better to use IP discovery data. > > Signed-off-by: Bokun Zhang >

Re: [PATCH] drm/amdkfd: Fix a race between queue destroy and process termination

2021-06-15 Thread Felix Kuehling
[+Amber, moving amd-gfx to BCC] Amber worked on a related problem on an NPI branch recently in the nocpsch version of this code. We should port that fix to amd-staging-drm-next. Then let's come up with a common solution for the cpsch code path as well. See one comment inline. On 2021-06-15 at 4

Re: [PATCH] drm/amdkfd: Fix a race between queue destroy and process termination

2021-06-15 Thread Felix Kuehling
On 2021-06-15 at 11:32 a.m., Felix Kuehling wrote: > [+Amber, moving amd-gfx to BCC] Actually, I didn't move it to BCC. But let's not name that NPI branch in public. ;) Thanks, Felix > > Amber worked on a related problem on an NPI branch recently in the > nocpsch version of this code. We should

Re: [PATCH] drm: display: Fix duplicate field initialization in dcn31

2021-06-15 Thread Alex Deucher
Applied. Thanks! On Tue, Jun 15, 2021 at 8:54 AM Rodrigo Siqueira wrote: > > On 06/15, Wan Jiabing wrote: > > Fix the following coccicheck warning: > > drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c:917:56-57: > > pstate_enabled: first occurrence line 935, second occurrence line 937 > >

Re: [RFC PATCH] drm/ttm: Do page counting after populate callback succeed

2021-06-15 Thread Christian König
On 15.06.21 at 17:06, Felix Kuehling wrote: On 2021-06-15 at 8:18 a.m., Christian König wrote: On 15.06.21 at 14:11, Pan, Xinhui wrote: On 15 June 2021 at 20:01, Christian König wrote: On 15.06.21 at 13:57, xinhui pan wrote: Amdgpu sets the SG flag in the populate callback, so TTM still counts pages in SG B

[PATCH] drm/amdkfd: Fix circular lock in nocpsch path

2021-06-15 Thread Amber Lin
Calling free_mqd inside of destroy_queue_nocpsch_locked can cause a circular lock. destroy_queue_nocpsch_locked is called under a DQM lock, which is taken in MMU notifiers, potentially in FS reclaim context. Taking another lock, which is the BO reservation lock from free_mqd, while causing an FS reclai

Re: [PATCH] drm/amdkfd: Fix circular lock in nocpsch path

2021-06-15 Thread Felix Kuehling
[+Xinhui] On 2021-06-15 at 1:50 p.m., Amber Lin wrote: > Calling free_mqd inside of destroy_queue_nocpsch_locked can cause a > circular lock. destroy_queue_nocpsch_locked is called under a DQM lock, > which is taken in MMU notifiers, potentially in FS reclaim context. > Taking another lock, whic

[PATCH] drm/amdgpu/vcn3: drop extraneous Beige Goby hunk

2021-06-15 Thread Alex Deucher
Probably a rebase leftover. This doesn't apply to SR-IOV, and the non-SR-IOV code below it already handles this properly. Signed-off-by: Alex Deucher --- drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c | 5 - 1 file changed, 5 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c b/driver

Re: [PATCH] drm/amdgpu/vcn3: drop extraneous Beige Goby hunk

2021-06-15 Thread Zhu, James
[AMD Official Use Only] This patch is Reviewed-by: James Zhu Thanks & Best Regards! James Zhu From: amd-gfx on behalf of Alex Deucher Sent: Tuesday, June 15, 2021 5:32 PM To: amd-gfx@lists.freedesktop.org Cc: Deucher, Alexander Subject: [PATCH] drm/amdgp