On 2021-04-07 7:12 p.m., Felix Kuehling wrote:
ROCm user mode has acquired VMs from DRM file descriptors for as long
as it supported the upstream KFD. Legacy code to support older versions
of ROCm is not needed any more.
Reviewed-by: Philip Yang
On 2021-04-07 7:12 p.m., Felix Kuehling wrote:
amdgpu_amdkfd_gpuvm_alloc_memory_of_gpu needs the drm_priv to allow mmap
to access the BO through the corresponding file descriptor.
Signed-off-by: Felix Kuehling
---
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
On 2021-04-07 7:12 p.m., Felix Kuehling wrote:
DRM allows access automatically when it creates a GEM handle for a BO.
KFD BOs don't have GEM handles, so KFD needs to manage access manually.
After reading the drm vma manager code, I understand it uses an rbtree
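For reference, a minimal sketch of how a driver can manage per-file mmap access manually with the drm_vma_manager helpers (which track the allowed files in that rbtree). The kfd_mem_* wrapper names are hypothetical, not the actual KFD functions:

#include <drm/drm_vma_manager.h>
#include <drm/drm_file.h>
#include <drm/drm_gem.h>

/* Hypothetical wrappers: grant and revoke mmap access to a KFD-owned
 * BO that has no GEM handle. */
static int kfd_mem_allow_mmap(struct drm_gem_object *gobj,
			      struct drm_file *drm_priv)
{
	/* Add drm_priv to the rbtree of files allowed to mmap this
	 * BO's offset node; DRM does this automatically only when a
	 * GEM handle is created. */
	return drm_vma_node_allow(&gobj->vma_node, drm_priv);
}

static void kfd_mem_revoke_mmap(struct drm_gem_object *gobj,
				struct drm_file *drm_priv)
{
	/* After this, an mmap through drm_priv fails the
	 * drm_vma_node_is_allowed() check. */
	drm_vma_node_revoke(&gobj->vma_node, drm_priv);
}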
On 2021-04-07 7:12 p.m., Felix Kuehling wrote:
This shortcut is no longer needed with access managed properly by KFD.
Reviewed-by: Philip Yang
Signed-off-by: Felix Kuehling
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 7 ---
1 file changed, 7 deletions(-)
parameter in the kfd2kgd interface.
This series is Reviewed-by: Philip Yang
Signed-off-by: Felix Kuehling
---
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h | 14 ++--
.../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 69 +++
drivers/gpu/drm/amd/amdkfd
Reviewed-by: Philip Yang for the series.
On 2021-02-12 1:40 a.m., Felix Kuehling wrote:
If init_cwsr_apu fails, we currently leave the kfd_process structure in
place anyway. The next kfd_open will then succeed, using the existing
kfd_process structure. Fix that
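A minimal sketch of the intended unwinding, with hypothetical teardown helpers (the real patch unwinds the actual create_process steps):

static struct kfd_process *create_process(const struct task_struct *thread)
{
	struct kfd_process *process;
	int err;

	process = kzalloc(sizeof(*process), GFP_KERNEL);
	if (!process)
		return ERR_PTR(-ENOMEM);

	/* ... earlier initialization steps ... */

	err = init_cwsr_apu(process);	/* signature assumed */
	if (err)
		goto out_destroy;

	return process;

out_destroy:
	/* Remove the half-constructed process so the next kfd_open
	 * does not find and reuse it. */
	hash_del_rcu(&process->kfd_processes);
	kfd_process_destroy(process);	/* hypothetical teardown helper */
	return ERR_PTR(err);
}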
("mm: track mmu notifiers in fs_reclaim_acquire/release")
CC: Daniel Vetter
Signed-off-by: Felix Kuehling
Reviewed-by: Philip Yang
---
drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drive
d4ebc2007040a0aff01bfe1b194085d3867328fd
Author: Philip Yang
Date: Tue Jun 22 00:12:32 2021 -0400
drm/amdkfd: implement counters for vm fault and migration
The analysis is as follows:
2397 int
2398 svm_range_restore_pages(struct amdgpu_device *adev, unsigned int pasid,
2399
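For context, a sketch of the counter scheme that commit introduces, with hypothetical names (the real patch adds per-process-per-device atomics, bumped on the fault and migration paths such as svm_range_restore_pages, and exposes them via sysfs):

#include <linux/atomic.h>

struct pdd_counters {			/* hypothetical name */
	atomic64_t faults;		/* GPU recoverable page faults */
	atomic64_t page_in;		/* pages migrated to VRAM */
	atomic64_t page_out;		/* pages migrated out of VRAM */
};

static void pdd_account_fault(struct pdd_counters *c)
{
	atomic64_inc(&c->faults);
}

static void pdd_account_migration(struct pdd_counters *c, bool to_vram,
				  unsigned long npages)
{
	atomic64_add(npages, to_vram ? &c->page_in : &c->page_out);
}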
Thanks. Philip, see inline
On 2022-03-17 11:13 a.m., Lee Jones wrote:
On Thu, 17 Mar 2022, Felix Kuehling wrote:
On 2022-03-17 at 11:00, Lee Jones wrote:
Good afternoon Felix,
Thanks for your review.
On 2022-03-17 at 09:1
viewed the bug.
Builds with CONFIG_DRM_AMDGPU=m, CONFIG_HSA_AMD=y, and
CONFIG_HSA_AMD_SVM=y show no new warnings, and our static analyzer no
longer warns about this code.
Fixes: 42de677f7999 ("drm/amdkfd: register svm range")
Signed-off-by: Zhou Qingyang
Reviewed-by: Philip Yang
On 2021-12-15 3:52 a.m., cgel@gmail.com wrote:
From: Changcheng Deng
Use max() and min() in order to make code cleaner.
Reported-by: Zeal Robot
Signed-off-by: Changcheng Deng
Reviewed-by: Philip Yang
Applied, thanks
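An illustrative before/after of the pattern, using a hypothetical helper (not the code this patch touches):

#include <linux/minmax.h>

static void clamp_range(unsigned long *start, unsigned long *end,
			unsigned long lo, unsigned long hi)
{
	*start = max(*start, lo);	/* was: *start > lo ? *start : lo */
	*end = min(*end, hi);		/* was: *end < hi ? *end : hi */
}

Besides being shorter, min()/max() type-check both arguments at compile time, which the open-coded ternaries do not.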
On 2021-12-22 7:37 p.m., Rajneesh Bhardwaj wrote:
During the CRIU restore phase, the VMAs for the virtual address ranges are
not at their final location yet, so in this stage, only cache the data
required to successfully resume the svm ranges during an imminent CR
On 2021-12-22 7:37 p.m., Rajneesh Bhardwaj wrote:
A KFD process may contain a number of virtual address ranges for shared
virtual memory management and each such range can have many SVM
attributes spanning across various nodes within the process boundary.
Thi
On 2021-12-22 7:37 p.m., Rajneesh
Bhardwaj wrote:
Recoverable page faults are represented by the xnack mode setting inside
a kfd process and are used to represent the device page faults. For CR,
we don't consider negative values which are typically used for q
On 2022-01-10 7:10 p.m., Felix Kuehling wrote:
On 2022-01-05 10:22 a.m., philip yang wrote:
On 2021-12-22 7:37 p.m., Rajneesh Bhardwaj wrote:
Recoverable page faults are represented by the
On 2022-01-10 6:58 p.m., Felix Kuehling wrote:
On 2022-01-05 9:43 a.m., philip yang wrote:
On 2021-12-22 7:37 p.m., Rajneesh Bhardwaj wrote:
During CRIU restore phase, the VMAs for the virtual
On 2022-02-15 7:38 p.m., Felix Kuehling wrote:
Reference:
https://www.kernel.org/doc/html/latest/process/deprecated.html#zero-length-and-one-element-arrays
CC: Changcheng Deng
Signed-off-by: Felix Kuehling
Reviewed-by: Philip Yang
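An illustrative conversion following the linked deprecation guide; the struct is hypothetical, not the one this patch touches:

#include <linux/overflow.h>
#include <linux/slab.h>

struct event_list {
	u32 count;
	u32 ids[];		/* was: u32 ids[1]; (deprecated) */
};

/* struct_size() computes sizeof(struct event_list) + n * sizeof(u32)
 * with overflow checking, replacing error-prone manual math: */
static struct event_list *event_list_alloc(u32 n)
{
	struct event_list *el;

	el = kzalloc(struct_size(el, ids, n), GFP_KERNEL);
	return el;
}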
lock that
hmm provides correctly, it can still be converted over to use the
mmu_interval_notifier api instead of hmm_mirror without too much trouble.
This also deletes another place where a driver is associating additional
data (struct amdgpu_mn) with an mmu_struct.
Signed-off-by: Philip Yang
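For reference, a minimal sketch of the mmu_interval_notifier pattern the conversion moves to; the struct and ops names here are hypothetical:

#include <linux/mmu_notifier.h>

struct my_range {			/* hypothetical per-range state */
	struct mmu_interval_notifier notifier;
};

static bool my_range_invalidate(struct mmu_interval_notifier *mni,
				const struct mmu_notifier_range *range,
				unsigned long cur_seq)
{
	/* Record the new sequence so a concurrent pager retry sees
	 * that the range changed under it. */
	mmu_interval_set_seq(mni, cur_seq);
	/* ... invalidate the GPU mapping for [range->start, range->end) ... */
	return true;
}

static const struct mmu_interval_notifier_ops my_range_ops = {
	.invalidate = my_range_invalidate,
};

/* One notifier per tracked VA range; no per-mm driver struct needed:
 * mmu_interval_notifier_insert(&r->notifier, current->mm, start, length,
 *				&my_range_ops);
 */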
This was to fix long application event wait latency: when the app
shader generates lots of event interrupts in a short period, the
scheduled work has no chance to execute on the same CPU core, so
events cannot be posted/returned to the app threads that are waiting
on the event. To s
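The message is cut off before the fix is described; one plausible approach along these lines is to queue the event work on another CPU so the interrupt-flooded core cannot starve it. A hedged sketch (the helper is hypothetical, not the actual KFD change):

#include <linux/workqueue.h>
#include <linux/cpumask.h>
#include <linux/smp.h>

/* Hypothetical helper: prefer a CPU other than the current one, which
 * may be saturated by the interrupt handler. */
static void kfd_queue_event_work(struct work_struct *work)
{
	int cpu = cpumask_any_but(cpu_online_mask, raw_smp_processor_id());

	if (cpu < nr_cpu_ids)
		queue_work_on(cpu, system_wq, work);
	else			/* only one CPU online */
		queue_work(system_wq, work);
}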
Without the unsigned long typecast, the size is passed in as zero if the
page array size >= 4GB, nr_pages >= 0x100000; the converted sg list will
then have the first and the last chunk lost.
Signed-off-by: Philip Yang
---
drivers/gpu/drm/drm_prime.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
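The underlying arithmetic, as a sketch: nr_pages is an unsigned int, so without the cast the shift is evaluated in 32 bits and wraps to zero at 4GB (with 4KiB pages, nr_pages >= 0x100000):

#include <linux/mm.h>		/* PAGE_SHIFT */

static inline size_t pages_to_bytes(unsigned int nr_pages)
{
	/* Broken: nr_pages << PAGE_SHIFT is a 32-bit shift; for
	 * nr_pages >= 0x100000 the result wraps to 0. */
	return (unsigned long)nr_pages << PAGE_SHIFT;	/* 64-bit on LP64 */
}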
On 2023-08-22 05:43, Christian König wrote:
On 2023-08-21 at 22:02, Philip Yang wrote:
Without unsigned long typecast, the size is passed in as zero if page
array size >= 4GB, nr_pages >= 0x100000, t
On 2020-07-23 7:02 p.m., Felix Kuehling wrote:
On 2020-07-23 at 5:00 a.m., Christian König wrote:
We can't pipeline that during eviction because the memory needs
to be available immediately.
Signed-off-by: Christian König
---
drivers/gpu/drm/ttm/ttm_bo.c | 12 ++--
1 file changed,
drm-next.
Reviewed-by: Philip Yang
---
PS: When I try to compile this file, there is an error:
drivers/gpu/drm/amd/amdkfd/kfd_migrate.c:28:10: fatal error: amdgpu_sync.h:
No such file or directory.
Maybe there are some steps I missed, or does this place need to be corrected?