On 30.01.24 at 01:21, Zeng, Oak wrote:

The example you used to prove that KFD is a design failure is an argument against *any* design which utilizes a system allocator and hmm. Having one proxy process running on the host to handle many guest processes doesn't fit into the concept of a "shared address space b/t cpu and gpu". The shared address space has to be within one process. Your proxy process represents many guest processes. It is a fundamental conflict.

Also, your userptr proposal doesn't solve this problem either:

Imagine you have guest process 1 mapping CPU address range A…B to GPU address range C…D.

And you have guest process 2 mapping CPU address range A…B to GPU address range C…D. Since process 1 and 2 are two different processes, it is legal for process 2 to do the exact same mapping.

Now when a gpu shader accesses address C…D, a gpu page fault happens. What does your proxy process do? Which guest process will this fault be directed to and handled by? Unless you have extra information/APIs to tell the proxy process and the GPU HW, there is no way to figure it out.


Well yes, as far as I can see, the fundamental design issue in the KFD is that it ties together the CPU and GPU address space. That came from the implementation using the ATS/PRI feature to access the CPU address space from the GPU.

If you don't do ATS/PRI then you don't have that restriction and you can do as many GPU address spaces per CPU process as you want. This is just how the hw works.

So in your example above, when you have multiple mappings for the range A..B you also have multiple GPU address spaces, and so you can distinguish where the page fault is coming from just by looking at its source. All you then need is userfaultfd() to forward the fault to the client and you are pretty much done.
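To make that concrete, here is a minimal userspace sketch of such a fault-forwarding loop. The userfaultfd(2) calls are the stock kernel API; forward_to_client() is a hypothetical hook standing in for however the proxy routes the fault back to the owning guest process:

#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Register one range the proxy mapped on behalf of a guest. */
static int uffd_register_range(void *start, size_t len)
{
    int uffd = syscall(SYS_userfaultfd, O_CLOEXEC);
    if (uffd < 0)
        return -1;

    struct uffdio_api api = { .api = UFFD_API };
    if (ioctl(uffd, UFFDIO_API, &api) < 0)
        return -1;

    struct uffdio_register reg = {
        .range = { .start = (unsigned long)start, .len = len },
        .mode  = UFFDIO_REGISTER_MODE_MISSING,
    };
    if (ioctl(uffd, UFFDIO_REGISTER, &reg) < 0)
        return -1;

    return uffd;
}

/* Fault loop: each uffd_msg carries the faulting address, so the proxy
 * can resolve which guest owns the containing range and forward it. */
static void fault_loop(int uffd)
{
    struct uffd_msg msg;

    while (read(uffd, &msg, sizeof(msg)) == sizeof(msg)) {
        if (msg.event != UFFD_EVENT_PAGEFAULT)
            continue;
        /* forward_to_client() would go here -- hypothetical. */
        printf("fault at %#llx\n",
               (unsigned long long)msg.arg.pagefault.address);
    }
}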

Compared to the shared virtual address space concept of HMM, the userptr design is nothing new, except that it allows CPU and GPU to use different addresses to access the same object. If you replace the C…D above with A…B, the above description becomes a description of the "problem" of the HMM/shared virtual address design.

Both designs have the same difficulty with your example of the special virtualization environment setup.

As said, we spent effort scoping the userptr solution some time ago. The problems we found when enabling userptr with migration were:

 1. The user interface of userptr is not as convenient as the system
    allocator. With the userptr solution, the user needs to call
    userptr_ioctl and vm_bind for *every* single cpu pointer that they
    want to use in a gpu program. With the system allocator, the
    programmer just uses any cpu pointer directly in the gpu program
    without any extra driver ioctls.


And I think exactly that is questionable. Why not at least call it for the whole address space once during initialization?
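Just to illustrate what that could look like from userspace, a sketch under the assumption of a single bind covering the whole user VA range. DRV_IOCTL_USERPTR_BIND and its struct are made-up names for illustration, not any existing UAPI:

#include <stdint.h>

struct drv_userptr_bind {           /* hypothetical UAPI struct */
    uint64_t cpu_start;             /* start of the mirrored CPU range */
    uint64_t length;                /* size of the range */
    uint64_t gpu_va;                /* where it lands in the GPU VM */
};

/* One call at init instead of one ioctl per CPU pointer: afterwards any
 * CPU pointer inside the range is usable by the GPU program directly. */
static void bind_whole_user_va(int drm_fd)
{
    struct drv_userptr_bind bind = {
        .cpu_start = 0,
        .length    = 1ULL << 47,    /* e.g. a 47-bit user VA half */
        .gpu_va    = 0,             /* identity mapping here, but free */
    };
    /* ioctl(drm_fd, DRV_IOCTL_USERPTR_BIND, &bind); -- hypothetical */
    (void)drm_fd;
    (void)bind;
}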

>     We don’t see the real benefit of using a different GPU address C…D rather than A..B, unless you can prove my above reasoning is wrong. In most use cases you can make GPU C…D == CPU A…B, so why bother then?

Because there are cases where this isn't true. We just recently ran into exactly that use case with a customer. It might be that you will never need this, but again the approach should generally be that the kernel exposes the hardware features and as far as I can see the hardware can do this.

And apart from those use cases there is also another good reason for this: CPUs are moving towards 5 levels of page tables and GPUs are lagging behind. It's not unrealistic to run into cases where you can only mirror parts of the CPU address space into the GPU address space because of hardware restrictions, and in that case you absolutely do want the flexibility to have different address ranges.
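For illustration: x86-64 with 5-level paging (LA57) extends virtual addresses to 57 bits, while many GPU VMs still top out at 48 bits or fewer. The exact limits vary by hardware, so treat the constants below as assumptions; the point is that a CPU buffer above the GPU's VA limit simply cannot be mirrored at the same address:

#include <stdbool.h>
#include <stdint.h>

#define CPU_VA_BITS 57  /* x86-64 with 5-level paging (LA57) */
#define GPU_VA_BITS 48  /* assumed GPU VM limit, hardware dependent */

/* A range that fits the CPU VA space may still be unreachable 1:1 on
 * the GPU; the driver then has to place it at some other GPU VA. */
static bool can_mirror_in_place(uint64_t cpu_addr, uint64_t len)
{
    return cpu_addr + len <= (1ULL << GPU_VA_BITS);
}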


>     Looking into implementation details: since hmm fundamentally assumes a shared virtual address space b/t cpu and device, for the userptr solution to leverage hmm you need to perform an address space conversion every time you call into hmm functions.

Correct, but that is trivial. I mean, we do nothing different with VMAs mapping the address space of files on the CPU either.
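To spell out how trivial it is, a sketch with made-up struct and field names; the conversion is just a fixed offset, conceptually the same as the file offset stored in a CPU VMA:

#include <stdint.h>

/* Hypothetical mirror mapping: CPU range A..B shows up at GPU range
 * C..D. The range bases are all the conversion needs. */
struct mirror_vma {
    uint64_t cpu_start;  /* A */
    uint64_t gpu_start;  /* C */
    uint64_t length;     /* B - A == D - C */
};

static uint64_t gpu_to_cpu(const struct mirror_vma *v, uint64_t gpu_va)
{
    /* Caller has already looked up the vma containing gpu_va. */
    return v->cpu_start + (gpu_va - v->gpu_start);
}

static uint64_t cpu_to_gpu(const struct mirror_vma *v, uint64_t cpu_va)
{
    return v->gpu_start + (cpu_va - v->cpu_start);
}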

Which is by the way a good analogy. The CPU address space consists of anonymous memory and file mappings, where the latter covers both real files on a file system as well as devices.

The struct address_space in the Linux kernel, for example, describes a file address space and not the CPU address space, because the latter is just a technical tool to form an execution environment which can access the former.

With GPUs it's pretty much the same. You have mappings which can be backed by CPU address space using functionalities like HMM as well as buffer objects created directly through device drivers.
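Expressed as a data structure sketch (hypothetical types, just to carry the analogy over):

#include <stdint.h>

struct drm_gem_object;  /* stand-in for a driver buffer object */

enum gpu_vma_backing {
    BACKING_HMM_MIRROR,  /* backed by a CPU address range via HMM */
    BACKING_BO,          /* backed by a driver-created buffer object */
};

/* Like a CPU VMA, a GPU VMA just records what backs the range. */
struct gpu_vma {
    uint64_t gpu_start;
    uint64_t length;
    enum gpu_vma_backing backing;
    union {
        uint64_t cpu_start;         /* for BACKING_HMM_MIRROR */
        struct drm_gem_object *bo;  /* for BACKING_BO */
    };
};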

In summary, a GPU device is just a piece of HW to accelerate your CPU program.


Well, exactly that's not how I see it. CPU accelerators are extensions like SSE, AVX, FPUs etc... GPUs are accelerators attached as I/O devices.

And the fact that GPUs are separate from the CPU is a benefit which gives them an advantage over CPU-based acceleration approaches.

This obviously makes GPUs harder to program and SVM is a method to counter this, but that doesn't make SVM a good design pattern for kernel or device driver interfaces.

If the HW allows, it is more convenient to use a shared address space b/t cpu and GPU. On old HW (for example, no gpu page fault support, or the gpu only has a very limited address space), we can disable the system allocator/SVM. If you use a different address space on a modern GPU, why don't you use different address spaces on different CPU cores?


Quite simply, modern CPUs are homogeneous. From the application point of view they still look more or less the same as they did 40 years ago.

GPUs on the other hand look quite a bit different. SVM is now a tool to reduce this difference but it doesn't make the differences in execution environment go away.

And I can only repeat myself that this is actually a good thing, because otherwise GPUs would lose some of their advantage over CPUs.

Regards,
Christian.

Regards,

Oak

*From:* dri-devel <dri-devel-boun...@lists.freedesktop.org> *On Behalf Of* Christian König
*Sent:* Monday, January 29, 2024 5:20 AM
*To:* Zeng, Oak <oak.z...@intel.com>; Thomas Hellström <thomas.hellst...@linux.intel.com>; Daniel Vetter <dan...@ffwll.ch>; Dave Airlie <airl...@redhat.com>
*Cc:* Brost, Matthew <matthew.br...@intel.com>; Felix Kuehling <felix.kuehl...@amd.com>; Welty, Brian <brian.we...@intel.com>; dri-devel@lists.freedesktop.org; Ghimiray, Himal Prasad <himal.prasad.ghimi...@intel.com>; Bommu, Krishnaiah <krishnaiah.bo...@intel.com>; Gupta, saurabhg <saurabhg.gu...@intel.com>; Vishwanathapura, Niranjana <niranjana.vishwanathap...@intel.com>; intel...@lists.freedesktop.org; Danilo Krummrich <d...@redhat.com>
*Subject:* Re: Making drm_gpuvm work across gpu devices

Well Daniel and Dave noted it as well, so I'm just repeating it: Your design choices are not an argument to get something upstream.

It's the job of the maintainers, and in the end of Linus, to judge whether something is acceptable or not.

As far as I can see, a good part of this idea has been exercised at length with KFD, and it turned out not to be the best approach.

So from what I've seen the design you outlined is extremely unlikely to go upstream.

Regards,
Christian.

On 27.01.24 at 03:21, Zeng, Oak wrote:

    Regarding the idea of expanding userptr to support migration, we
    explored this idea a long time ago. It provides similar functionality
    to the system allocator, but its interface is not as convenient.
    Besides the shared virtual address space, another benefit of a system
    allocator is that you can offload a cpu program to the gpu more
    easily; you don't need to call driver-specific APIs (such as
    register_userptr and vm_bind in this case) for memory allocation.

    We also scoped the implementation. It turned out to be big, and not
    as beautiful as hmm. That is why we gave up this approach.

    *From:* Christian König <christian.koe...@amd.com>
    *Sent:* Friday, January 26, 2024 7:52 AM
    *To:* Thomas Hellström <thomas.hellst...@linux.intel.com>; Daniel Vetter <dan...@ffwll.ch>
    *Cc:* Brost, Matthew <matthew.br...@intel.com>; Felix Kuehling <felix.kuehl...@amd.com>; Welty, Brian <brian.we...@intel.com>; Ghimiray, Himal Prasad <himal.prasad.ghimi...@intel.com>; Zeng, Oak <oak.z...@intel.com>; Gupta, saurabhg <saurabhg.gu...@intel.com>; Danilo Krummrich <d...@redhat.com>; dri-devel@lists.freedesktop.org; Bommu, Krishnaiah <krishnaiah.bo...@intel.com>; Dave Airlie <airl...@redhat.com>; Vishwanathapura, Niranjana <niranjana.vishwanathap...@intel.com>; intel...@lists.freedesktop.org
    *Subject:* Re: Making drm_gpuvm work across gpu devices

    On 26.01.24 at 09:21, Thomas Hellström wrote:


        Hi, all

        On Thu, 2024-01-25 at 19:32 +0100, Daniel Vetter wrote:

            On Wed, Jan 24, 2024 at 09:33:12AM +0100, Christian König wrote:

                On 23.01.24 at 20:37, Zeng, Oak wrote:

                    [SNIP]

                    Yes, most APIs are per device based.

                    One exception I know is actually the kfd SVM API. If
                    you look at the svm_ioctl function, it is per-process
                    based. Each kfd_process represents a process across N
                    gpu devices.

                Yeah, and that was a big mistake in my opinion. We should
                really not do that ever again.

                    Need to say, kfd SVM represents a shared virtual
                    address space across the CPU and all GPU devices on
                    the system. This is by the definition of SVM (shared
                    virtual memory). This is very different from our
                    legacy gpu *device* driver which works for only one
                    device (i.e., if you want one device to access
                    another device's memory, you will have to use dma-buf
                    export/import etc).

                Exactly that thinking is what we have currently found as
                a blocker for virtualization projects. Having SVM as a
                device-independent feature which somehow ties to the
                process address space turned out to be an extremely bad
                idea.

                The background is that this only works for some use cases
                but not all of them.

                What's working much better is to just have a mirror
                functionality which says that a range A..B of the process
                address space is mapped into a range C..D of the GPU
                address space.

                Those ranges can then be used to implement the SVM
                feature required for higher level APIs and are not
                something you need at the UAPI or even inside the low
                level kernel memory management.

                When you talk about migrating memory to a device you also
                do this on a per-device basis and *not* tied to the
                process address space. If you then get crappy performance
                because userspace gave contradicting information about
                where to migrate memory, then that's a bug in userspace
                and not something the kernel should try to prevent
                somehow.

                [SNIP]

                        I think if you start using the same drm_gpuvm for
                        multiple devices you will sooner or later start
                        to run into the same mess we have seen with KFD,
                        where we moved more and more functionality from
                        the KFD to the DRM render node because we found
                        that a lot of the stuff simply doesn't work
                        correctly with a single object to maintain the
                        state.

                    As I understand it, KFD is designed to work across
                    devices. A single pseudo /dev/kfd device represents
                    all hardware gpu devices. That is why during kfd
                    open, many pdd (process device data) are created,
                    each for one hardware device for this process.

                Yes, I'm perfectly aware of that. And I can only repeat
                myself that I see this design as a rather extreme
                failure. And I think it's one of the reasons why NVidia
                is so dominant with Cuda.

                This whole approach KFD takes was designed with the idea
                of extending the CPU process into the GPUs, but this idea
                only works for a few use cases and is not something we
                should apply to drivers in general.

                A very good example is the virtualization use cases where
                you end up with CPU address != GPU address because the
                VAs are actually coming from the guest VM and not the
                host process.

                SVM is a high level concept of OpenCL, Cuda, ROCm etc.
                This should not have any influence on the design of the
                kernel UAPI.

                If you want to do something similar to KFD for Xe, I
                think you need to get explicit permission to do this from
                Dave and Daniel and maybe even Linus.

            I think the one and only one exception where an SVM uapi like
            in kfd makes sense is if the _hardware_ itself, not the
            software stack defined semantics that you've happened to
            build on top of that hw, enforces a 1:1 mapping with the cpu
            process address space.

            Which means your hardware is using PASID, IOMMU based
            translation, PCI-ATS (address translation services) or
            whatever your hw calls it and has _no_ device-side pagetables
            on top. Which from what I've seen all devices with
            device-memory have, simply because they need some place to
            store whether that memory is currently in device memory or
            should be translated using PASID. Currently there's no gpu
            that works with PASID only, but there are some on-cpu-die
            accelerator things that do work like that.

            Maybe in the future there will be some accelerators that are
            fully cpu cache coherent (including atomics) with something
            like CXL, and the on-device memory is managed as normal
            system memory with struct page as ZONE_DEVICE and accelerator
            va -> physical address translation is only done with PASID
            ... but for now I haven't seen that, definitely not in
            upstream drivers.

            And the moment you have some per-device pagetables or
            per-device memory management of some sort (like using gpuva
            mgr) then I'm 100% agreeing with Christian that the kfd SVM
            model is too strict and not a great idea.

            Cheers, Sima

        I'm trying to digest all the comments here. The end goal is to be
        able to support something similar to this here:

        https://developer.nvidia.com/blog/simplifying-gpu-application-development-with-heterogeneous-memory-management/

        Christian, if I understand you correctly, you're strongly
        suggesting not to try to manage a common virtual address space
        across different devices in the kernel, but merely to provide
        building blocks to do so, like for example a generalized userptr
        with migration support using HMM; that way each "mirror" of the
        CPU mm would be per device and inserted into the gpu_vm just like
        any other gpu_vma, and user-space would dictate the A..B -> C..D
        mapping by choosing the GPU_VA for the vma.


    Exactly that, yes.



        Sima, it sounds like you're suggesting to shy away from hmm and
        not even attempt to support this except where it can be done
        using IOMMU sva on selected hardware?


    I think that comment goes more into the direction of: If you have
    ATS/ATC/PRI capable hardware which exposes the functionality to
    make memory reads and writes directly into the address space of
    the CPU then yes an SVM only interface is ok because the hardware
    can't do anything else. But as long as you have something like
    GPUVM then please don't restrict yourself.

    Which I totally agree with as well. The ATS/ATC/PRI combination
    doesn't allow using separate page tables for the device and the CPU,
    and so also does not allow separate VAs.

    This was one of the reasons why we stopped using this approach for
    AMD GPUs.

    Regards,
    Christian.



        Could you clarify a bit?

        Thanks,

        Thomas
