From: Christian König <christian.koe...@amd.com>
Sent: Tuesday, February 27, 2024 1:54 AM
To: Zeng, Oak <oak.z...@intel.com>; Danilo Krummrich <d...@redhat.com>; Dave 
Airlie <airl...@redhat.com>; Daniel Vetter <dan...@ffwll.ch>; Felix Kuehling 
<felix.kuehl...@amd.com>; jgli...@redhat.com
Cc: Welty, Brian <brian.we...@intel.com>; dri-devel@lists.freedesktop.org; 
intel...@lists.freedesktop.org; Bommu, Krishnaiah <krishnaiah.bo...@intel.com>; 
Ghimiray, Himal Prasad <himal.prasad.ghimi...@intel.com>; 
thomas.hellst...@linux.intel.com; Vishwanathapura, Niranjana 
<niranjana.vishwanathap...@intel.com>; Brost, Matthew 
<matthew.br...@intel.com>; Gupta, saurabhg <saurabhg.gu...@intel.com>
Subject: Re: Making drm_gpuvm work across gpu devices

Hi Oak,
On 23.02.24 at 21:12, Zeng, Oak wrote:
Hi Christian,

I'm going back to this old email to ask a question.

Sorry, I totally missed that one.



Quote from your email:
“Those ranges can then be used to implement the SVM feature required for higher 
level APIs and not something you need at the UAPI or even inside the low level 
kernel memory management.”
“SVM is a high level concept of OpenCL, Cuda, ROCm etc.. This should not have 
any influence on the design of the kernel UAPI.”

There are two categories of SVM:

  1.  driver svm allocator: this is implemented in user space, e.g., 
cudaMallocManaged (CUDA), zeMemAllocShared (L0) or clSVMAlloc (OpenCL). Intel 
already has gem_create/vm_bind in xekmd, and our UMD implements clSVMAlloc and 
zeMemAllocShared on top of gem_create/vm_bind. A range A..B of the process 
address space is mapped into a range C..D of the GPU address space, exactly as 
you said.
  2.  system svm allocator: this doesn’t introduce any extra driver API for 
memory allocation. Any valid CPU virtual address can be used directly and 
transparently in a GPU program without any extra driver API call. Quoting kernel 
Documentation/vm/hmm.rst: “Any application memory region (private anonymous, 
shared memory, or regular file backed memory) can be used by a device 
transparently” and “to share the address space by duplicating the CPU page 
table in the device page table so the same address points to the same physical 
memory for any valid main memory address in the process address space”. With 
the system svm allocator, we don’t need that A..B -> C..D mapping. (A rough 
sketch contrasting the two flows follows below.)
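
To make the contrast concrete, here is a rough userspace-level sketch of the 
two flows. All names in it (umd_MemAllocShared(), launch_kernel(), ctx) are 
placeholders for illustration only, not the real CUDA/L0/OpenCL or xe 
interfaces:

/* 1) driver svm allocator: the UMD allocates and binds explicitly.
 * Under the hood this is gem_create() plus vm_bind(), i.e. range A..B of
 * the process address space ends up mapped to range C..D on the GPU. */
void *p = umd_MemAllocShared(ctx, size);   /* placeholder UMD entry point */
launch_kernel(ctx, p);

/* 2) system svm allocator: no allocation API at all.  Any valid CPU
 * pointer works; the kernel driver faults in / migrates the pages on
 * demand via HMM, and no A..B -> C..D mapping is set up by the UMD. */
void *q = malloc(size);
launch_kernel(ctx, q);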

It looks like you were talking about 1). Were you?

No, even when you fully mirror the whole address space from a process into the 
GPU, you still need to enable this somehow with an IOCTL.

And while enabling this, you absolutely should specify to which part of the 
address space this mirroring applies and where it maps to.
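
A minimal sketch of what the payload of such an IOCTL could look like (struct 
and field names here are purely illustrative, not a concrete uAPI proposal):

/* Hypothetical IOCTL payload, for illustration only. */
struct drm_xxx_svm_enable {
        __u64 cpu_va_start;   /* start of the CPU range to mirror (A)    */
        __u64 cpu_va_end;     /* end of the CPU range to mirror (B)      */
        __u64 gpu_va_start;   /* where that range maps to on the GPU (C) */
        __u32 vm_id;          /* the GPU VM this mirroring applies to    */
        __u32 flags;
};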


Let's say we have a hardware platform where both the CPU and the GPU support a 
57-bit virtual address range. How do you decide “which part of the address 
space this mirroring applies”? You would have to mirror the whole address space 
(0~2^57-1), wouldn't you? As you designed it, the gigantic vm_bind/mirroring 
happens at process initialization time, and at that time you don’t know which 
part of the address space will be used by the GPU program.


I see the system svm allocator as just a special case of the driver allocator, 
where no fully backed buffer objects are allocated, but rather sparse ones 
which are filled and migrated on demand.

The above statement is true to me. We don’t have a BO for the system svm 
allocator. It is a sparse one, as we don’t map the whole vma to the GPU. Our 
migration policy decides which pages/how much of the vma is migrated/mapped to 
the GPU page table.
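
A condensed sketch of that on-demand fill path in a GPU page fault handler, 
assuming the usual hmm_range_fault() flow (the xxx_* names and the chunk size 
are placeholders; migration to VRAM, the retry on mmu notifier invalidation 
and the actual page table update are left out):

/* Only the chunk around the faulting address is faulted in and mapped,
 * never the whole mirrored range.  xxx_* names are placeholders. */
static int xxx_svm_handle_gpu_fault(struct xxx_vm *vm, u64 fault_addr)
{
        unsigned long pfns[SZ_64K / PAGE_SIZE];
        struct hmm_range range = {
                .notifier      = &vm->notifier,
                .start         = ALIGN_DOWN(fault_addr, SZ_64K),
                .end           = ALIGN_DOWN(fault_addr, SZ_64K) + SZ_64K,
                .hmm_pfns      = pfns,
                .default_flags = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE,
        };
        int ret;

        range.notifier_seq = mmu_interval_read_begin(range.notifier);
        mmap_read_lock(vm->mm);
        ret = hmm_range_fault(&range);          /* populate the CPU pages */
        mmap_read_unlock(vm->mm);
        if (ret)
                return ret;

        /* the migration policy may move this chunk to VRAM here; then only
         * this sub-range is written into the GPU page table */
        return xxx_vm_map_pages(vm, range.start, range.end, pfns);
}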

The difference between your view and mine is that you want a gigantic vma 
(created during the gigantic vm_bind) to be sparsely populated to the GPU, 
while I thought of a vma (xe_vma in the xekmd code) as a place to store memory 
attributes (such as caching, user-preferred placement, etc.). All those memory 
attributes are range based, i.e., the user can specify that range1 is cached 
while range2 is uncached. So I don’t see how you can manage that with one 
gigantic vma.
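
To illustrate, those range-based attributes could be set through something 
madvise-like, e.g. (hypothetical struct, for illustration only, not a concrete 
uAPI proposal):

/* Hypothetical madvise-style IOCTL payload: an attribute is attached to a
 * sub-range of the address space, not to one gigantic vma. */
struct drm_xxx_vm_set_attr {
        __u64 start;    /* start of the range the attribute applies to */
        __u64 size;     /* size of that range                          */
        __u32 attr;     /* e.g. caching mode or preferred placement    */
        __u32 value;    /* attribute-specific value                    */
};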

Regards,
Oak


Regards,
Christian.


