On Tue, Jul 29, 2014 at 06:21:48PM +0100, Olav Haugan wrote:
> On 7/29/2014 2:25 AM, Will Deacon wrote:
> > I agree that we can't handle IOMMUs that have a minimum page size larger
> > than the CPU page size, but we should be able to handle the case where the
> > maximum supported page size on the IOMMU is smaller than the CPU page size.
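
The splitting direction is the tractable one: as long as the hardware
advertises at least one page size no bigger than PAGE_SIZE, a CPU page can
always be carved into several smaller IOMMU mappings. A rough sketch of that
size selection, modelled loosely on the page-size logic in the IOMMU core;
the name and details below are illustrative, not code from this series:

#include <linux/bitops.h>
#include <linux/kernel.h>
#include <linux/types.h>

/*
 * Pick the largest IOMMU page size that the hardware supports, that fits
 * in 'size' and that the alignment of iova/paddr allows. If the bitmap
 * contains any size <= PAGE_SIZE, a CPU page can always be mapped as a
 * series of smaller IOMMU pages.
 */
static size_t pick_iommu_pgsize(unsigned long pgsize_bitmap,
				unsigned long iova, phys_addr_t paddr,
				size_t size)
{
	unsigned long addr_merge = iova | paddr;
	unsigned int pgsize_idx;
	unsigned long pgsize;

	/* Largest power of two that still fits in 'size' */
	pgsize_idx = __fls(size);

	/* ...and that the iova/paddr alignment allows */
	if (addr_merge) {
		unsigned int align_idx = __ffs(addr_merge);

		pgsize_idx = min(pgsize_idx, align_idx);
	}

	/* Mask of all sizes up to and including 2^pgsize_idx... */
	pgsize = (1UL << (pgsize_idx + 1)) - 1;

	/* ...then keep only what the hardware supports */
	pgsize &= pgsize_bitmap;

	/* Highest remaining bit is the page size to use */
	return pgsize ? 1UL << __fls(pgsize) : 0;
}
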
Hi Olav,
On Tue, Jul 29, 2014 at 01:50:08AM +0100, Olav Haugan wrote:
> On 7/28/2014 12:11 PM, Will Deacon wrote:
> > On Mon, Jul 28, 2014 at 07:38:51PM +0100, Olav Haugan wrote:
> >> +int iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
> >> +                 struct scatterlist *sg,
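
The signature is cut off here in the archive. Assuming the remaining
parameters are an entry count and a prot value (as in the iommu_map_sg()
that later landed upstream), the no-optimization baseline is simply a loop
over iommu_map(), one call per scatterlist entry. A rough sketch, not the
patch itself:

#include <linux/iommu.h>
#include <linux/scatterlist.h>

/*
 * Naive fallback: one iommu_map() call per scatterlist entry, so no
 * batching and no deferred TLB maintenance. Assumes each entry's address
 * and length are aligned to a page size the IOMMU supports (the real
 * helper would have to check this).
 */
static int naive_map_sg(struct iommu_domain *domain, unsigned long iova,
			struct scatterlist *sg, unsigned int nents, int prot)
{
	struct scatterlist *s;
	size_t mapped = 0;
	unsigned int i;
	int ret;

	for_each_sg(sg, s, nents, i) {
		phys_addr_t phys = sg_phys(s);

		ret = iommu_map(domain, iova + mapped, phys, s->length, prot);
		if (ret)
			goto err;

		mapped += s->length;
	}

	return 0;

err:
	/* Roll back whatever was already mapped */
	iommu_unmap(domain, iova, mapped);
	return ret;
}
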
Mapping and unmapping are more often than not in the critical path.
map_sg and unmap_sg allow IOMMU driver implementations to optimize
the process of mapping and unmapping buffers into the IOMMU page tables.
Instead of mapping a buffer one page at a time and requiring potentially
expensive TLB operations for each page, this allows the driver to map
all pages in one go and defer TLB maintenance until after all pages
have been mapped.
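
The driver-side idea, very roughly: install all the page table entries for
the scatterlist first, then do the TLB maintenance once at the end instead
of once per page. The helpers below (my_pgtable_install(),
my_pgtable_remove(), my_tlb_flush_all()) are made-up placeholders for
whatever hooks a driver actually has, not code from this series:

#include <linux/iommu.h>
#include <linux/scatterlist.h>

/* Placeholders for the driver's real page-table and TLB hooks */
int my_pgtable_install(struct iommu_domain *domain, unsigned long iova,
		       phys_addr_t paddr, size_t size, int prot);
void my_pgtable_remove(struct iommu_domain *domain, unsigned long iova,
		       size_t size);
void my_tlb_flush_all(struct iommu_domain *domain);

static int my_iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
			   struct scatterlist *sg, unsigned int nents,
			   int prot)
{
	struct scatterlist *s;
	unsigned long cur = iova;
	unsigned int i;
	int ret;

	for_each_sg(sg, s, nents, i) {
		/* Install PTEs without flushing the TLB yet */
		ret = my_pgtable_install(domain, cur, sg_phys(s),
					 s->length, prot);
		if (ret) {
			/* Tear down what was installed, then flush once */
			my_pgtable_remove(domain, iova, cur - iova);
			my_tlb_flush_all(domain);
			return ret;
		}
		cur += s->length;
	}

	/* A single TLB operation covers the whole buffer */
	my_tlb_flush_all(domain);
	return 0;
}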