On Tue, Jan 27, 2026 at 02:37:01PM -0400, Jason Gunthorpe wrote:
> On Tue, Jan 27, 2026 at 08:54:22AM -0800, Matthew Brost wrote:
> 
> > That will likely work for dma-buf, let me see if I can convert our
> > dma-buf flows to use this helper. But it won't work for things like SVM,
> > so it would be desirable to figure out to have an API drivers can use to
> > iova alloc/link/sync/unlink/free for multi-device or just agree we trust
> > drivers enough to use the existing API.
> 
> SVM should be driven with HMM and there is a helper in
> hmm_dma_map_pfn() for this.
> 

Ok. I'm not sure that helper will fit exactly with how our SVM code is
structured, though.
> Yonatan posted a series to expand it to work with ZONE_DEVICE PRIVATE
> pages but it needs a refresh
> 
> https://lore.kernel.org/linux-rdma/[email protected]/
> 

From a brief look, this isn't all that far off from ideas we have in
DRM, where we add ops to the pagemap (DRM pagemap) to handle P2P
mappings. We are also looking forward to supporting not just DMA
connections but high-speed fabrics too. The thinking there is to make
the high-speed fabric API look like the dma-mapping iova
alloc/link/sync/unlink/free flow, let a DRM pagemap op pick between
the dma-mapping API and the high-speed fabric API based on the
connection type, and wrap everything in a DRM common layer (GPU SVM)
to map the pages.


> If there are other cases it would be reasonable to discuss enhancing
> hmm_dma_map_pfn().
> 

Let me wrap my head around this one and get back to you. Something to
think about.

Matt

> Jason
