On Thu, Mar 12, 2026 at 09:19:37AM -0300, Jason Gunthorpe wrote:
> On Wed, Mar 11, 2026 at 09:08:47PM +0200, Leon Romanovsky wrote:
> > From: Leon Romanovsky <[email protected]>
> > 
> > Buffers mapped with this attribute require a DMA coherent system. Such
> > mappings must not take the SWIOTLB bounce path, may overlap within a CPU
> > cache line, and do not perform cache flushing.
> > 
> > Signed-off-by: Leon Romanovsky <[email protected]>
> > ---
> >  Documentation/core-api/dma-attributes.rst | 12 ++++++++++++
> >  include/linux/dma-mapping.h               |  7 +++++++
> >  include/trace/events/dma.h                |  3 ++-
> >  kernel/dma/debug.c                        |  3 ++-
> >  kernel/dma/mapping.c                      |  6 ++++++
> >  5 files changed, 29 insertions(+), 2 deletions(-)
> > 
> > diff --git a/Documentation/core-api/dma-attributes.rst b/Documentation/core-api/dma-attributes.rst
> > index 48cfe86cc06d7..69d094f144c70 100644
> > --- a/Documentation/core-api/dma-attributes.rst
> > +++ b/Documentation/core-api/dma-attributes.rst
> > @@ -163,3 +163,15 @@ data corruption.
> >  
> >  All mappings that share a cache line must set this attribute to suppress DMA
> >  debug warnings about overlapping mappings.
> > +
> > +DMA_ATTR_REQUIRE_COHERENT
> > +-------------------------
> > +
> > +Buffers mapped with this attribute require a DMA coherent system. Such
> > +mappings must not take the SWIOTLB bounce path, may overlap within a CPU
> > +cache line, and do not perform cache flushing.
> 
> DMA mapping requests with the DMA_ATTR_REQUIRE_COHERENT fail on any
> system where SWIOTLB or cache management is required. This should only
> be used to support uAPI designs that require continuous HW DMA
> coherence with userspace processes, for example RDMA and DRM. At a
> minimum the memory being mapped must be userspace memory from
> pin_user_pages() or similar.
> 
> Drivers should consider using dma_mmap_pages() instead of this
> interface when building their uAPIs, when possible.
> 
> It must never be used in an in-kernel driver that only works with
> kernel memory.
> 
> > @@ -164,6 +164,9 @@ dma_addr_t dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
> >     if (WARN_ON_ONCE(!dev->dma_mask))
> >             return DMA_MAPPING_ERROR;
> >  
> > +   if (!dev_is_dma_coherent(dev) && (attrs & DMA_ATTR_REQUIRE_COHERENT))
> > +           return DMA_MAPPING_ERROR;
> 
> This doesn't capture enough conditions.. is_swiotlb_force_bounce(),
> dma_kmalloc_needs_bounce(), dma_capable(), etc all need to be blocked
> too

Those checks already exist in dma_direct_map_phys(); the check added here
is the common one shared by the direct and IOMMU paths.

Thanks

> 
> So check it inside swiotlb_map() too, and maybe shift the above
> into the existing branches:
> 
>         if (!dev_is_dma_coherent(dev) &&
>             !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO)))
>                 arch_sync_dma_for_device(phys, size, dir);
> 
> Jason
