Hi Danilo and all,

While working on Intel's SVM code, we came up with the idea of making drm_gpuvm 
work across multiple gpu devices. See some discussion here: 
https://lore.kernel.org/dri-devel/ph7pr11mb70049e7e6a2f40bf6282ecc292...@ph7pr11mb7004.namprd11.prod.outlook.com/

The reason we want to do this is that, for an SVM (shared virtual memory across 
the cpu program and all gpu programs on all gpu devices) process, the address 
space has to span all gpu devices. So if we make drm_gpuvm work across devices, 
our SVM code can leverage drm_gpuvm as well.

At first look, this seems feasible because drm_gpuvm doesn't really use the 
drm_device *drm pointer much: it is used only for printing/warning. So I think 
maybe we can delete this drm field from drm_gpuvm.

This way, on a multi-gpu system, each process can have a single drm_gpuvm 
instance instead of multiple drm_gpuvm instances (one per gpu device).

What do you think?

Thanks,
Oak
