On 13.04.21 at 22:57, Nirmoy Das wrote:
Use bo->tbo.base.size instead of calculating it from num_pages.
Those don't clash with the two I sent out yesterday, do they?
Signed-off-by: Nirmoy Das
Reviewed-by: Christian König
---
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 2 +-
On 14.04.21 at 08:48, Felix Kuehling wrote:
Pages in SG BOs were not allocated by TTM. So don't count them against
TTM's pages limit.
Signed-off-by: Felix Kuehling
Reviewed-by: Christian König
Going to pick that one up for inclusion in drm-misc-next.
Regards,
Christian.
---
drivers/gp
On 13.04.21 at 23:19, Mikhail Gavrilov wrote:
On Tue, 13 Apr 2021 at 12:29, Christian König wrote:
Hi Mikhail,
the crash is a known issue and should be fixed by:
commit f63da9ae7584280582cbc834b20cc18bfb203b14
Author: Philip Yang
Date: Thu Apr 1 00:22:23 2021 -0400
drm/amdgpu: re
amdgpu_ttm_tt_unpopulate can be called during bo_destroy. The dmabuf->resv
must not be held by the caller or dma_buf_detach will deadlock. This is
probably not the right fix. I get a recursive lock warning with the
reservation held in ttm_bo_release. Should unmap_attachment move to
backend_unbind i
DMA map kfd_mem_attachments in update_gpuvm_pte. This function is called
with the BO and page tables reserved, so we can safely update the DMA
mapping.
DMA unmap when a BO is unmapped from a GPU and before updating mappings
in restore workers.
Signed-off-by: Felix Kuehling
---
.../gpu/drm/amd/a
This is needed to avoid deadlocks with DMA buf import in the next patch.
Also move PT/PD validation out of kfd_mem_attach; that way the caller
can do this unconditionally.
Signed-off-by: Felix Kuehling
---
.../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 72 +++
1 file changed, 42
Pages in SG BOs were not allocated by TTM. So don't count them against
TTM's pages limit.
Signed-off-by: Felix Kuehling
---
drivers/gpu/drm/ttm/ttm_tt.c | 27 ++-
1 file changed, 18 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm
Use DMABufs with dynamic attachment to DMA-map GTT BOs on other GPUs.
Signed-off-by: Felix Kuehling
---
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h| 2 +
.../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 74 ++-
2 files changed, 75 insertions(+), 1 deletion(-)
diff --git a/dri
Do AQL queue double-mapping with a single attach call. That will make it
easier to create per-GPU BOs later, to be shared between the two BO VA
mappings on the same GPU.
Freeing the attachments is not necessary if map_to_gpu fails. These will be
cleaned up when the kfd_mem object is destroyed in
a
This patch series fixes DMA-mappings of system memory (GTT and userptr)
for KFD running on multi-GPU systems with IOMMU enabled. One SG-BO per
GPU is needed to maintain the DMA mappings of each BO.
I ran into some reservation issues when unmapping or freeing DMA-buf
imports. There are a few FIXME
This name is more fitting, especially for the changes coming next to
support multi-GPU systems with proper DMA mappings. Cleaned up the code
and renamed some related functions and variables to improve readability.
Signed-off-by: Felix Kuehling
---
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h|
Add BO-type specific helper functions to DMA-map and unmap
kfd_mem_attachments. Implement this functionality for userptrs by creating
one SG BO per GPU and filling it with a DMA mapping of the pages from the
original mem->bo.
Signed-off-by: Felix Kuehling
---
drivers/gpu/drm/amd/amdgpu/amdgpu_a
For now they all reference the same BO. For correct DMA mappings they will
refer to different BOs per-GPU.
Signed-off-by: Felix Kuehling
---
.../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 22 ++-
1 file changed, 17 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/amd/
On 13.04.21 at 20:26, Ramesh Errabolu wrote:
Extend current implementation of SG_TABLE construction method to
allow exportation of sub-buffers of a VRAM BO. This capability will
enable logical partitioning of a VRAM BO into multiple non-overlapping
sub-buffers. One example of this use case is to
On 08.04.21 at 01:12, Felix Kuehling wrote:
DRM allows access automatically when it creates a GEM handle for a BO.
KFD BOs don't have GEM handles, so KFD needs to manage access manually.
Ok, double-checking the code, that makes sense.
Signed-off-by: Felix Kuehling
Acked-by: Christian König
On Wed, Apr 14, 2021 at 02:20:10PM +0800, Du, Xiaojian wrote:
> This patch is to remove the "set" function of pp_dpm_mclk for vangogh.
> For vangogh, mclk is bonded with fclk; they lock each other
> to the same performance level. But according to the smu message from pmfw,
> only fclk is allowed to
This patch is to remove the "set" function of pp_dpm_mclk for vangogh.
For vangogh, mclk is bonded with fclk; they lock each other
to the same performance level. But according to the smu message from pmfw,
only fclk is allowed to set value manually, so remove the unnecessary
code of "set" function
Du, Xiaojian would like to recall the message, "[PATCH] drm/amd/pm: remove the
"set" function of pp_dpm_mclk for vangogh".
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
We should return -EINVAL instead of success if the "limit" is too high.
Fixes: e098bc9612c2 ("drm/amd/pm: optimize the power related source code
layout")
Signed-off-by: Dan Carpenter
---
drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/gpu/dr
If the kmemdup() fails then this should return a negative error code
but it currently returns success
Fixes: b4a7db71ea06 ("drm/amdgpu: add per device user friendly xgmi events for
vega20")
Signed-off-by: Dan Carpenter
---
v2: I sent this patch in Feb but I accidentally added an unrelated
hunk f
[AMD Official Use Only - Internal Distribution Only]
Forward to community for review.
-Original Message-
From: Du, Xiaojian
Sent: April 13, 2021 16:04
To: brahma_sw_dev
Cc: Huang, Ray ; Quan, Evan ; Wang,
Kevin(Yang) ; Lazar, Lijo ; Du,
Xiaojian
Subject: [PATCH] drm/amd/pm: remove the "s
On Tue, Apr 13, 2021 at 11:04 PM Kenneth Feng wrote:
>
> enable ASPM on navi1x and vega series
Please split this patch into two, one for vega and one for navi1x.
With that fixed, the series is:
Reviewed-by: Alex Deucher
>
> Signed-off-by: Kenneth Feng
> ---
> drivers/gpu/drm/amd/amdgpu/nbio_v
add ASPM support on polaris
Signed-off-by: Kenneth Feng
---
drivers/gpu/drm/amd/amdgpu/vi.c | 193 +++-
1 file changed, 191 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/vi.c b/drivers/gpu/drm/amd/amdgpu/vi.c
index ea338de5818a..735ebbd1148f 1
enable ASPM on navi1x and vega series
Signed-off-by: Kenneth Feng
---
drivers/gpu/drm/amd/amdgpu/nbio_v6_1.c | 128 +
drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c | 125
drivers/gpu/drm/amd/amdgpu/nv.c| 10 +-
drivers/gpu/drm/amd/amdgpu/soc15.c
Our driver supports overlay planes and, as expected, some userspace
compositors take advantage of these features. If userspace is not
enabling the cursor, it can use multiple planes as it pleases.
Nevertheless, we start to have constraints when userspace tries to
enable hardware cursor with
On 2021-04-13 5:24 p.m., Mikhail Gavrilov wrote:
On Tue, 13 Apr 2021 at 04:55, Leo Liu wrote:
It is curious why ffmpeg does not cause such issues.
For example, this command does not cause a kernel panic:
$ ffmpeg -f x11grab -framerate 60 -video_size 3840x2160 -i :0.0 -vf
'format=nv12,hwupload' -vaapi_de
On Tue, 13 Apr 2021 at 04:55, Leo Liu wrote:
>
> >It is curious why ffmpeg does not cause such issues.
> >For example, this command does not cause a kernel panic:
> >$ ffmpeg -f x11grab -framerate 60 -video_size 3840x2160 -i :0.0 -vf
> >'format=nv12,hwupload' -vaapi_device /dev/dri/renderD128 -vcodec
> >h264
On Tue, 13 Apr 2021 at 12:29, Christian König wrote:
>
> Hi Mikhail,
>
> the crash is a known issue and should be fixed by:
>
> commit f63da9ae7584280582cbc834b20cc18bfb203b14
> Author: Philip Yang
> Date: Thu Apr 1 00:22:23 2021 -0400
>
> drm/amdgpu: reserve fence slot to update page tabl
On Mon, Apr 12, 2021 at 06:07:32PM -0400, Alex Deucher wrote:
> Hi Dave, Daniel,
>
> Same PR as last week plus a few accumulated fixes, rebased on drm-next
> to resolve the dependencies between ttm and scheduler with changes in amdgpu.
>
> The following changes since commit c103b850721e4a79ff9578
On 4/13/21 10:50 PM, Nirmoy Das wrote:
Use bo->tbo.base.size instead of bo->tbo.mem.num_pages << PAGE_SHIFT.
Ignore this please, pressed send-email too quickly!
Signed-off-by: Nirmoy Das
---
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c|
Use bo->tbo.base.size instead of calculating it from num_pages.
Signed-off-by: Nirmoy Das
---
drivers/gpu/drm/radeon/radeon_object.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/radeon/radeon_object.c
b/drivers/gpu/drm/radeon/radeon_object.c
index cee11c55
Use bo->tbo.base.size instead of calculating it from num_pages.
Signed-off-by: Nirmoy Das
---
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c| 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_obj
Use bo->tbo.base.size instead of bo->tbo.mem.num_pages << PAGE_SHIFT.
Signed-off-by: Nirmoy Das
---
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c| 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdg
Use bo->tbo.base.size instead of bo->tbo.mem.num_pages << PAGE_SHIFT.
Signed-off-by: Nirmoy Das
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index
On Tue, Apr 13, 2021 at 11:13 AM Li, Dennis wrote:
>
> [AMD Official Use Only - Internal Distribution Only]
>
> Hi, Christian and Andrey,
> Maybe we can try to implement the "wait" callback function of dma_fence_ops:
> when GPU reset or unplug happens, make this callback return -ENODEV, to
> notif
On Tue, Apr 13, 2021 at 9:10 AM Christian König
wrote:
>
> On 12.04.21 at 22:01, Andrey Grodzovsky wrote:
> >
> > On 2021-04-12 3:18 p.m., Christian König wrote:
> >> On 12.04.21 at 21:12, Andrey Grodzovsky wrote:
> >>> [SNIP]
> >
> > So what's the right approach ? How we guarantee that
On 2021-04-13 2:25 p.m., Christian König wrote:
On 13.04.21 at 20:18, Andrey Grodzovsky wrote:
On 2021-04-13 2:03 p.m., Christian König wrote:
On 13.04.21 at 17:12, Andrey Grodzovsky wrote:
On 2021-04-13 3:10 a.m., Christian König wrote:
On 12.04.21 at 22:01, Andrey Grodzovsky wrote:
Extend current implementation of SG_TABLE construction method to
allow exportation of sub-buffers of a VRAM BO. This capability will
enable logical partitioning of a VRAM BO into multiple non-overlapping
sub-buffers. One example of this use case is to partition a VRAM BO
into two sub-buffers, one f
On 13.04.21 at 20:18, Andrey Grodzovsky wrote:
On 2021-04-13 2:03 p.m., Christian König wrote:
On 13.04.21 at 17:12, Andrey Grodzovsky wrote:
On 2021-04-13 3:10 a.m., Christian König wrote:
On 12.04.21 at 22:01, Andrey Grodzovsky wrote:
On 2021-04-12 3:18 p.m., Christian König wrote:
On 2021-04-13 2:03 p.m., Christian König wrote:
On 13.04.21 at 17:12, Andrey Grodzovsky wrote:
On 2021-04-13 3:10 a.m., Christian König wrote:
On 12.04.21 at 22:01, Andrey Grodzovsky wrote:
On 2021-04-12 3:18 p.m., Christian König wrote:
On 12.04.21 at 21:12, Andrey Grodzovsky wrote:
[
On 13.04.21 at 19:17, Ramesh Errabolu wrote:
Extend current implementation of SG_TABLE construction method to
allow exportation of sub-buffers of a VRAM BO. This capability will
enable logical partitioning of a VRAM BO into multiple non-overlapping
sub-buffers. One example of this use case is to
On 13.04.21 at 17:12, Andrey Grodzovsky wrote:
On 2021-04-13 3:10 a.m., Christian König wrote:
On 12.04.21 at 22:01, Andrey Grodzovsky wrote:
On 2021-04-12 3:18 p.m., Christian König wrote:
On 12.04.21 at 21:12, Andrey Grodzovsky wrote:
[SNIP]
So what's the right approach ? How we guar
Extend current implementation of SG_TABLE construction method to
allow exportation of sub-buffers of a VRAM BO. This capability will
enable logical partitioning of a VRAM BO into multiple non-overlapping
sub-buffers. One example of this use case is to partition a VRAM BO
into two sub-buffers, one f
On 2021-04-13 3:10 a.m., Christian König wrote:
On 12.04.21 at 22:01, Andrey Grodzovsky wrote:
On 2021-04-12 3:18 p.m., Christian König wrote:
On 12.04.21 at 21:12, Andrey Grodzovsky wrote:
[SNIP]
So what's the right approach ? How we guarantee that when running
amdgpu_fence_driver_forc
On Mon, Apr 12, 2021 at 8:15 AM Roy Sun wrote:
>
> Add interface to get the mm clock, temperature and memory load
>
> Signed-off-by: Roy Sun
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 50 +
> include/uapi/drm/amdgpu_drm.h | 12 ++
> 2 files changed, 6
The value of ret is set but then overwritten, so just delete it.
Signed-off-by: Tian Tao
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c | 8 +++-
1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
index 1fb2a91
All the drivers that implement HDR output call pretty much the same
function to initialise the hdr_output_metadata property, and while the
creation of that property is in a helper, every driver uses the same
code to attach it.
Provide a helper for it as well
Reviewed-by: Harry Wentland
Reviewed-
Our driver while supporting HDR didn't send the proper colorimetry info
in the AVI infoframe.
Let's add the property needed so that the userspace can let us know what
the colorspace is supposed to be.
Signed-off-by: Maxime Ripard
---
Changes from v1:
- New patch
---
drivers/gpu/drm/vc4/vc4_
The intel driver uses the same logic to attach the Colorspace property
in multiple places and we'll need it in vc4 too. Let's move that common
code in a helper.
Signed-off-by: Maxime Ripard
---
Changes from v1:
- New patch
---
drivers/gpu/drm/drm_connector.c | 20 ++
From: Dave Stevenson
Now that we can export deeper colour depths, add in the signalling
for HDR metadata.
Signed-off-by: Dave Stevenson
Signed-off-by: Maxime Ripard
---
Changes from v1:
- Rebased on latest drm-misc-next tag
---
drivers/gpu/drm/vc4/vc4_hdmi.c | 53 +
All the drivers that support the HDR metadata property have a similar
function to compare the metadata from one connector state to the next,
and force a mode change if they differ.
All these functions run pretty much the same code, so let's turn it into
a helper that can be shared across those dr
[AMD Public Use]
Series is:
Reviewed-by: Alex Deucher
From: Zhang, Hawking
Sent: Tuesday, April 13, 2021 9:04 AM
To: Tuikov, Luben ; amd-gfx@lists.freedesktop.org
Cc: Deucher, Alexander
Subject: RE: [PATCH 4/4] drm/amdgpu: Fix kernel-doc for the RAS sysfs inte
[AMD Public Use]
Reviewed-by: Alex Deucher
From: amd-gfx on behalf of Kevin Wang
Sent: Tuesday, April 13, 2021 7:51 AM
To: amd-gfx@lists.freedesktop.org
Cc: Wang, Kevin(Yang)
Subject: [PATCH] drm/amdgpu: correction of ucode fw_size calculation errors
correct
[AMD Official Use Only - Internal Distribution Only]
Shouldn't we do something similar for sdma 5.0 as well?
Alex
From: Su, Jinzhou (Joe)
Sent: Tuesday, April 13, 2021 2:23 AM
To: amd-gfx@lists.freedesktop.org
Cc: Huang, Ray ; Deucher, Alexander
; Koenig, Chri
On Mon, Apr 12, 2021 at 7:28 PM Ramesh Errabolu wrote:
>
> Extend current implementation of SG_TABLE construction method to
> allow exportation of sub-buffers of a VRAM BO. This capability will
> enable logical partitioning of a VRAM BO into multiple non-overlapping
> sub-buffers. One example of t
On 13.04.21 at 15:09, Sun, Roy wrote:
[AMD Official Use Only - Internal Distribution Only]
ping
-Original Message-
From: Roy Sun
Sent: Monday, April 12, 2021 8:15 PM
To: amd-gfx@lists.freedesktop.org
Cc: Sun, Roy
Subject: [PATCH] drm/amd/amdgpu: Expose some power info through AMDG
[AMD Official Use Only - Internal Distribution Only]
ping
-Original Message-
From: Roy Sun
Sent: Monday, April 12, 2021 8:15 PM
To: amd-gfx@lists.freedesktop.org
Cc: Sun, Roy
Subject: [PATCH] drm/amd/amdgpu: Expose some power info through AMDGPU_INFO
Add interface to get the mm clock,
[AMD Public Use]
Series is
Reviewed-by: Hawking Zhang
Regards,
Hawking
-Original Message-
From: Tuikov, Luben
Sent: Tuesday, April 13, 2021 20:56
To: amd-gfx@lists.freedesktop.org
Cc: Tuikov, Luben ; Deucher, Alexander
; Zhang, Hawking
Subject: [PATCH 4/4] drm/amdgpu: Fix kernel-doc
Improve the kernel-doc for the RAS sysfs
interface. Fix the grammar, fix the context.
Cc: Alexander Deucher
Cc: Hawking Zhang
Signed-off-by: Luben Tuikov
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c | 47 +
1 file changed, 24 insertions(+), 23 deletions(-)
diff --git a/
Add bad_page_cnt_threshold to debugfs, an optional
file system used for debugging, for reporting
purposes only--it usually matches the size of
EEPROM but may be different depending on the
"bad_page_threshold" kernel module option.
The "bad_page_cnt_threshold" is a dynamically
computed value. It de
Fix if (ret) --> if (!ret), a bug in
"retire_page", which caused the kernel to call
the method again with *pos == end of file, and that
bounced back with an error. On the first run, we
advanced *pos but returned 0 back to the fs layer,
also a bug.
Fix the logic of the check of the result of
amdgpu_reserve
Remove the double-sscanf that scans for %llu and
0x%llx, as that is not going to work!
The %llu will consume the "0" in "0x" of your
input, and the hex value you think you're entering
will always be 0. That is, a valid hex value can
never be consumed.
On the other hand, just entering a hex number
withou
On 13.04.21 at 14:14, Roy Sun wrote:
Tracking devices, process info and fence info using
/proc/pid/fdinfo
Signed-off-by: David M Nieto
Signed-off-by: Roy Sun
---
drivers/gpu/drm/amd/amdgpu/Makefile| 2 +
drivers/gpu/drm/amd/amdgpu/amdgpu.h| 1 +
drivers/gpu/drm/amd/amd
Tracking devices, process info and fence info using
/proc/pid/fdinfo
Signed-off-by: David M Nieto
Signed-off-by: Roy Sun
---
drivers/gpu/drm/amd/amdgpu/Makefile| 2 +
drivers/gpu/drm/amd/amdgpu/amdgpu.h| 1 +
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c| 59
drivers
Update the timestamp of scheduled fence on HW
completion of the previous fences
This allows more accurate tracking of fence
execution in HW.
Signed-off-by: David M Nieto
Signed-off-by: Roy Sun
---
drivers/gpu/drm/scheduler/sched_main.c | 11 +--
1 file changed, 9 insertions(+), 2 del
Correct big- and little-endian problems on different platforms.
Signed-off-by: Kevin Wang
---
drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 8
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 2 +-
drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c | 2 +-
drivers/gpu/drm/amd/pm/swsmu/smu
Hello John Clements,
The patch cbb8f989d5a0: "drm/amdgpu: page retire over debugfs
mechanism" from Apr 9, 2021, leads to the following static checker
warning:
drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c:377
amdgpu_ras_debugfs_ctrl_write()
info: return a literal instead of 'ret'
driv
Hi Dennis,
yeah, that just has the same downside of a lot of additional overhead
as the is_signaled callback.
Bouncing cache lines on the CPU isn't funny at all.
Christian.
On 13.04.21 at 11:13, Li, Dennis wrote:
[AMD Official Use Only - Internal Distribution Only]
Hi, Christian and Andr
[AMD Official Use Only - Internal Distribution Only]
Hi, Christian and Andrey,
Maybe we can try to implement the "wait" callback function of dma_fence_ops: when
GPU reset or unplug happens, make this callback return -ENODEV to notify the
caller that the device is lost.
* Must return -ERESTARTSYS i
The values of pipe_id and queue_id are not used under certain
circumstances, so just delete them.
Signed-off-by: Tian Tao
---
drivers/gpu/drm/radeon/cik.c | 4
1 file changed, 4 deletions(-)
diff --git a/drivers/gpu/drm/radeon/cik.c b/drivers/gpu/drm/radeon/cik.c
index 8b7a4f7..42a8afa 100644
-
Yeah, agreed, a bit more commit text would be nice to have.
Apart from that feel free to add an Acked-by: Christian König
as well.
Christian.
On 13.04.21 at 08:41, Huang Rui wrote:
On Tue, Apr 13, 2021 at 02:23:00PM +0800, Su, Jinzhou (Joe) wrote:
Add emit mem sync callback for sdma_v5_2
Hi Mikhail,
the crash is a known issue and should be fixed by:
commit f63da9ae7584280582cbc834b20cc18bfb203b14
Author: Philip Yang
Date: Thu Apr 1 00:22:23 2021 -0400
drm/amdgpu: reserve fence slot to update page table
But that a userspace application can cause a page fault is perfectl
On 12.04.21 at 22:01, Andrey Grodzovsky wrote:
On 2021-04-12 3:18 p.m., Christian König wrote:
On 12.04.21 at 21:12, Andrey Grodzovsky wrote:
[SNIP]
So what's the right approach ? How we guarantee that when running
amdgpu_fence_driver_force_completion we will signal all the HW
fences and
On 13.04.21 at 07:36, Andrey Grodzovsky wrote:
[SNIP]
emit_fence(fence);
/* We can't wait forever as the HW might be gone at any point */
dma_fence_wait_timeout(old_fence, 5S);
You can pretty much ignore this wait here. It is only as a last
resort so that we never overwrite th