On Mon, Sep 01, 2025 at 11:47:59PM +0200, Marek Szyprowski wrote:
> I would like to give those patches a try in linux-next, but in the 
> meantime I tested them on my test farm and found a regression in 
> dma_map_resource() handling. Namely, dma_map_resource() no longer 
> works with a size that is not kmalloc()-aligned, as 
> dma_direct_map_phys() calls dma_kmalloc_needs_bounce(),

Hmm, it's this bit:

        capable = dma_capable(dev, dma_addr, size, !(attrs & DMA_ATTR_MMIO));
        if (unlikely(!capable) || dma_kmalloc_needs_bounce(dev, size, dir)) {
                if (is_swiotlb_active(dev) && !(attrs & DMA_ATTR_MMIO))
                        return swiotlb_map(dev, phys, size, dir, attrs);

                goto err_overflow;
        }

We shouldn't be checking dma_kmalloc_needs_bounce() for MMIO: there is
no cache flushing involved, so the "DMA safe alignment" requirement for
non-coherent DMA does not apply.
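
Concretely, a minimal sketch (my restatement, not necessarily the exact
patch) would be to gate the check on the attr:

        capable = dma_capable(dev, dma_addr, size, !(attrs & DMA_ATTR_MMIO));
        if (unlikely(!capable) ||
            (!(attrs & DMA_ATTR_MMIO) &&
             dma_kmalloc_needs_bounce(dev, size, dir))) {
                if (is_swiotlb_active(dev) && !(attrs & DMA_ATTR_MMIO))
                        return swiotlb_map(dev, phys, size, dir, attrs);

                goto err_overflow;
        }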

Like you say, this looks good to me, and more of the surrounding code
can be pulled in too; there is no sense in repeating the boolean logic:

        if (attrs & DMA_ATTR_MMIO) {
                dma_addr = phys;
                if (unlikely(!dma_capable(dev, dma_addr, size, false)))
                        goto err_overflow;
        } else {
                dma_addr = phys_to_dma(dev, phys);
                if (unlikely(!dma_capable(dev, dma_addr, size, true)) ||
                    dma_kmalloc_needs_bounce(dev, size, dir)) {
                        if (is_swiotlb_active(dev))
                                return swiotlb_map(dev, phys, size, dir, attrs);

                        goto err_overflow;
                }
                if (!dev_is_dma_coherent(dev) &&
                    !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
                        arch_sync_dma_for_device(phys, size, dir);
        }

Jason
