On Mon, Aug 04, 2025 at 03:42:39PM +0300, Leon Romanovsky wrote:
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 399838c17b705..11c5d5f8c0981 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -1190,11 +1190,9 @@ static inline size_t iova_unaligned(struct iova_domain *iovad, phys_addr_t phys,
>  	return iova_offset(iovad, phys | size);
>  }
>  
> -dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
> -		unsigned long offset, size_t size, enum dma_data_direction dir,
> -		unsigned long attrs)
> +dma_addr_t iommu_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
> +		enum dma_data_direction dir, unsigned long attrs)
>  {
> -	phys_addr_t phys = page_to_phys(page) + offset;
>  	bool coherent = dev_is_dma_coherent(dev);
>  	int prot = dma_info_to_prot(dir, coherent, attrs);
>  	struct iommu_domain *domain = iommu_get_dma_domain(dev);
No issue with pushing the page_to_phys() up to what looks like the two
callers.

It is worth pointing out, though, that today if the struct page was a
MEMORY_DEVICE_PCI_P2PDMA page then it is illegal to call the swiotlb
functions a few lines below this:

	phys = iommu_dma_map_swiotlb(dev, phys, size, dir, attrs);

ie struct page alone as a type has not been sufficient to make this
function safe for a long time now.

So I would add some explanation in the commit message of how this will
be situated in the final call chains, and maybe leave behind a comment
that attrs may not have ATTR_MMIO in this function. I think the answer
is that iommu_dma_map_phys() is only called for !ATTR_MMIO addresses,
and that iommu_dma_map_resource() will be called for ATTR_MMIO?
Something along the lines of the sketch below is what I have in mind
for that comment.

Jason
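A rough sketch of the kind of comment/guard discussed above, assuming
the DMA_ATTR_MMIO attribute name introduced by this series; this is
illustrative only, not the actual patch:

dma_addr_t iommu_dma_map_phys(struct device *dev, phys_addr_t phys,
		size_t size, enum dma_data_direction dir,
		unsigned long attrs)
{
	/*
	 * Only normal RAM may come through here. MMIO, including
	 * MEMORY_DEVICE_PCI_P2PDMA BAR space, cannot be fed to the
	 * swiotlb bounce path below and is expected to be mapped via
	 * iommu_dma_map_resource() instead.
	 */
	if (WARN_ON_ONCE(attrs & DMA_ATTR_MMIO))
		return DMA_MAPPING_ERROR;

	/* ... existing body: swiotlb bounce, iommu_map(), etc ... */
}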