On 2021-04-06 04:57, Will Deacon wrote:
On Mon, Apr 05, 2021 at 12:11:03PM -0700, Isaac J. Manjarres wrote:
Mapping memory into io-pgtables follows the same semantics
that unmapping memory used to follow (i.e. a buffer will be
mapped one page block per call to the io-pgtable code). This
means that it can be optimized in the same way that unmapping
memory was, so add a map_pages() callback to the io-pgtable
ops structure, so that a range of pages of the same size
can be mapped within the same call.

Signed-off-by: Isaac J. Manjarres <isa...@codeaurora.org>
Suggested-by: Will Deacon <w...@kernel.org>
---
 include/linux/io-pgtable.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/include/linux/io-pgtable.h b/include/linux/io-pgtable.h
index 2ed0c057d9e7..019149b204b8 100644
--- a/include/linux/io-pgtable.h
+++ b/include/linux/io-pgtable.h
@@ -143,6 +143,7 @@ struct io_pgtable_cfg {
* struct io_pgtable_ops - Page table manipulation API for IOMMU drivers.
  *
  * @map:          Map a physically contiguous memory region.
+ * @map_pages:    Map a physically contiguous range of pages of the same size.
  * @unmap:        Unmap a physically contiguous memory region.
  * @unmap_pages:  Unmap a range of virtually contiguous pages of the same size.
  * @iova_to_phys: Translate iova to physical address.
@@ -153,6 +154,9 @@ struct io_pgtable_cfg {
 struct io_pgtable_ops {
        int (*map)(struct io_pgtable_ops *ops, unsigned long iova,
                   phys_addr_t paddr, size_t size, int prot, gfp_t gfp);
+       int (*map_pages)(struct io_pgtable_ops *ops, unsigned long iova,
+                        phys_addr_t paddr, size_t pgsize, size_t pgcount,
+                        int prot, gfp_t gfp, size_t *mapped);

How about returning 'size_t' and using IS_ERR_VALUE() instead of adding
the extra 'mapped' argument (i.e. return the size of the region mapped
or an error code)? I don't think we realistically need to care about map
sizes that overlap with the error region.
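
Just to make sure I'm reading the suggestion the same way, I take it the op would look roughly like this (a sketch only, not a real patch):

    /*
     * Hypothetical signature per the suggestion: return the number of
     * bytes mapped, or an error value checked with IS_ERR_VALUE(),
     * instead of taking a 'mapped' out-parameter.
     */
    size_t (*map_pages)(struct io_pgtable_ops *ops, unsigned long iova,
                        phys_addr_t paddr, size_t pgsize, size_t pgcount,
                        int prot, gfp_t gfp);

    /* Caller side: */
    size_t mapped = ops->map_pages(ops, iova, paddr, pgsize, pgcount,
                                   prot, gfp);
    if (IS_ERR_VALUE(mapped))
        return (int)mapped;    /* error code, but the partial size is gone */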

I'd given that a shot before, but the problem I kept running into was that, if I return only an error code when something goes wrong partway through, I don't know how much memory was actually mapped, so I can't invoke iommu_unmap() from __iommu_map() with that size to
undo the partial mappings from the map_pages() call.

Returning the amount of memory that was mapped on error (which will be less than the size that was requested) has the opposite problem: we lose the information about why the error happened, since the error code won't be returned. That's why I went with the current approach. Do you have any other ideas about how to handle this?
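
Concretely, the caller-side logic I need is roughly the following (a sketch of what __iommu_map() would do with the 'mapped' argument, not the actual iommu.c change; variable names are illustrative):

    size_t mapped = 0;
    int ret;

    /* Hypothetical call site: map 'pgcount' pages of size 'pgsize'. */
    ret = ops->map_pages(ops, iova, paddr, pgsize, pgcount, prot, gfp,
                         &mapped);
    if (ret) {
        /* Undo only the part of the range that did get mapped... */
        if (mapped)
            iommu_unmap(domain, orig_iova, mapped);
        /* ...while still reporting why the mapping failed. */
        return ret;
    }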

I'd also thought of having arm_lpae_map_pages() invoke arm_lpae_unmap_pages() to clean up after itself, but TLB maintenance was a problem there, since we wouldn't invoke iommu_iotlb_sync().

Thanks,
Isaac
Will
