On 2018/5/31 22:25, Robin Murphy wrote:
> On 31/05/18 14:49, Hanjun Guo wrote:
>> Hi Robin,
>>
>> On 2018/5/31 19:24, Robin Murphy wrote:
>>> On 31/05/18 08:42, Zhen Lei wrote:
In general, an IOMMU unmap operation follows the steps below:
1. remove the mapping in the page table of the specified iova range
Hi Christoph,
On Fri, May 25, 2018 at 11:20:52AM +0200, Christoph Hellwig wrote:
> swiotlb_dma_supported will always return true for a mask
> large enough to be covered by wired up physical address, so this
> function is pointless.
Shouldn't this be "large enough to cover all wired up physical addresses"?
Argument "page_size" passing to function "fetch_pte" could be uninitialized
if the function returns NULL. The caller "iommu_unmap_page" checks the
return value but the page_size is used outside the if block.
Signed-off-by: yzhai...@ucr.edu
---
drivers/iommu/amd_iommu.c | 1 +
1 file changed, 1 insertion(+)
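[Editor's note: the bug pattern is easy to demonstrate outside the kernel. Below is a minimal, runnable user-space sketch; the names fetch_pte/iommu_unmap_page mirror the AMD IOMMU driver, but the bodies are invented stand-ins showing why an out-parameter must be initialized when the callee can return early.]

#include <stddef.h>
#include <stdio.h>

/* Stand-in for the driver's fetch_pte(): returns NULL when no PTE
 * exists, and on that path never writes to *page_size. */
static unsigned long *fetch_pte(unsigned long iova, unsigned long *page_size)
{
    static unsigned long pte = 0x42;

    if (iova & 1)          /* pretend odd iovas have no mapping */
        return NULL;       /* *page_size left untouched here */
    *page_size = 4096;
    return &pte;
}

static unsigned long iommu_unmap_page(unsigned long iova)
{
    unsigned long unmap_size = 0;  /* the fix: initialize before the call */
    unsigned long *pte = fetch_pte(iova, &unmap_size);

    if (!pte)
        return 0;
    /* unmap_size is also read outside the check in the real driver,
     * which is why leaving it uninitialized is dangerous */
    return unmap_size;
}

int main(void)
{
    printf("%lu\n", iommu_unmap_page(1));  /* no PTE: prints 0, not junk */
    printf("%lu\n", iommu_unmap_page(2));  /* mapped: prints 4096 */
    return 0;
}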
Yes, thank you for your advice. The new patch has been sent.
On Thu, May 31, 2018 at 2:44 AM, Joerg Roedel wrote:
> Hi Yizhuo Zhai,
>
> thanks for your patch, but I think there is a better way to fix that.
> Please see below.
>
> On Wed, May 30, 2018 at 11:02:54PM -0700, Yizhuo Zhai wrote:
> > Variable "unmap_size" is supposed to be initialized in function fetch_pte.
On 30/05/18 15:06, Thierry Reding wrote:
From: Thierry Reding
Depending on the kernel configuration, early ARM architecture setup code
may have attached the GPU to a DMA/IOMMU mapping that transparently uses
the IOMMU to back the DMA API. Tegra requires special handling for IOMMU
backed buffers
On 30/05/18 15:06, Thierry Reding wrote:
From: Thierry Reding
Instead of setting the DMA ops pointer to NULL, set the correct,
non-IOMMU ops depending on the device's coherency setting.
It looks like it's probably been 4 or 5 years since that became subtly
wrong by virtue of the landscape ch
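[Editor's note: for reference, the idea presumably reduces to something like the following kernel-style sketch. set_dma_ops(), arm_dma_ops and arm_coherent_dma_ops are real ARM kernel symbols of that era; the helper name and call site are made up here.]

#include <linux/dma-mapping.h>

/* Hypothetical helper: rather than set_dma_ops(dev, NULL), install the
 * non-IOMMU ops that match the device's coherency setting. */
static void set_default_dma_ops(struct device *dev, bool coherent)
{
    set_dma_ops(dev, coherent ? &arm_coherent_dma_ops : &arm_dma_ops);
}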
On 31/05/18 06:55, Baolin Wang wrote:
The device coherent memory code uses the bitmap helper functions, which take
an order of PAGE_SIZE; that means the size of the allocation region is always
a power of 2 pages. For example, allocating 33 MB from a 33 MB dma_mem region
requires 64 MB of free memory in that region
On 31/05/18 06:55, Baolin Wang wrote:
It is incorrect to use mem->size to validate whether there is enough coherent
memory to allocate in __dma_alloc_from_coherent(), since some devices may have
marked some coherent memory as occupied via dma_mark_declared_memory_occupied().
So we can introduce an 'avail' parameter
On Wed, May 30, 2018 at 04:25:52PM -0700, Paul Burton wrote:
> > +static const struct octeon_dma_map_ops octeon_gen2_ops = {
> > + .phys_to_dma = octeon_hole_phys_to_dma,
> > + .dma_to_phys = octeon_hole_dma_to_phys,
> > +};
>
> These are pointers to functions of the wrong type, right? p
> +#if IS_ENABLED(CONFIG_ARM_DMA_USE_IOMMU)
> + if (dev->archdata.mapping) {
> + struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
> +
> + arm_iommu_detach_device(dev);
> + arm_iommu_release_mapping(mapping);
> + }
> +#endif
Having this hidd
On 31/05/18 14:49, Hanjun Guo wrote:
Hi Robin,
On 2018/5/31 19:24, Robin Murphy wrote:
On 31/05/18 08:42, Zhen Lei wrote:
In general, an IOMMU unmap operation follows the steps below:
1. remove the mapping in the page table of the specified iova range
2. execute a TLBI command to invalidate the mapping cached in the TLB
Hi Robin,
On 2018/5/31 19:24, Robin Murphy wrote:
> On 31/05/18 08:42, Zhen Lei wrote:
>> In general, an IOMMU unmap operation follows the steps below:
>> 1. remove the mapping in the page table of the specified iova range
>> 2. execute a TLBI command to invalidate the mapping cached in the TLB
>> 3. wait for the above TLBI operation to finish
On 31/05/18 08:42, Zhen Lei wrote:
1. Save the related domain pointer in struct iommu_dma_cookie, making the
iovad capable of calling domain->ops->flush_iotlb_all to flush the TLB.
2. Define a new IOMMU capability, IOMMU_CAP_NON_STRICT, used to indicate
that the iommu domain supports non-strict mode.
3. During the iommu domain initialization
On 31/05/18 08:42, Zhen Lei wrote:
Although the mapping has already been removed from the page table, it may
still exist in the TLB. If the freed IOVAs are reused by others before the
flush operation completes, the new user cannot correctly access its
memory.
This change seems reasonable i
On 31/05/18 08:42, Zhen Lei wrote:
In general, an IOMMU unmap operation follows the steps below:
1. remove the mapping in the page table of the specified iova range
2. execute a TLBI command to invalidate the mapping cached in the TLB
3. wait for the above TLBI operation to finish
4. free the IOVA resource
On 31/05/18 08:42, Zhen Lei wrote:
The static function iova_reserve_iommu_regions is only called by
iommu_dma_init_domain, and the 'if (!dev)' check in iommu_dma_init_domain
affects it only, so we can safely move the check into it. I think it looks
more natural.
As before, I disagree -
On 31/05/18 08:42, Zhen Lei wrote:
In general, an IOMMU unmap operation follows the steps below:
1. remove the mapping in the page table of the specified iova range
2. execute a TLBI command to invalidate the mapping cached in the TLB
3. wait for the above TLBI operation to finish
4. free the IOVA resource
>
> At the moment, the SMMUv3 driver offers only one stage-1 or stage-2
> address space to each device. SMMUv3 allows multiple address spaces to be
> associated with a device. In addition to the Stream ID (SID), which
> identifies a device, we can now have Substream IDs (SSID) identifying an
> address space
Hi Yizhuo Zhai,
thanks for your patch, but I think there is a better way to fix that.
Please see below.
On Wed, May 30, 2018 at 11:02:54PM -0700, Yizhuo Zhai wrote:
> Variable "unmap_size" is supposed to be initialized in function fetch_pte.
> However, it's uninitialized if fetch_pte returns NULL
On 30/05/18 20:52, Jacob Pan wrote:
>> However I think the model number should be added to
>> pasid_table_config. For one thing it gives us a simple sanity-check,
>> but it also tells us which other fields are valid in pasid_table_config.
>> Arm-smmu-v3 needs at least two additional 8-bit fields descr
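[Editor's note: to make the suggestion concrete, here is an illustrative user-space mock-up of such a model-tagged config structure; every name is invented for illustration, not the proposed uapi.]

#include <stdint.h>

enum pasid_table_model_sketch {
    PASID_TABLE_MODEL_INTEL_VTD,
    PASID_TABLE_MODEL_ARM_SMMU_V3,
};

struct pasid_table_config_sketch {
    uint32_t model;      /* sanity check; selects which fields below are valid */
    uint64_t base_ptr;   /* PASID table base */
    uint8_t  pasid_bits;
    union {
        struct {
            uint8_t s1fmt;   /* e.g. table format */
            uint8_t s1dss;   /* e.g. default substream behaviour */
        } smmu_v3;           /* possible shape of the extra 8-bit fields */
    };
};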
To support non-strict mode, only issue TLBI and sync in strict mode;
for the non-leaf case, however, always follow strict mode.
Signed-off-by: Zhen Lei
---
drivers/iommu/io-pgtable-arm.c | 22 ++++++++++++----------
1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
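[Editor's note: the patch body is cut off above; under the stated description, the change presumably has roughly the following shape. A simplified sketch against the kernel-internal drivers/iommu/io-pgtable.h helpers; the wrapper function itself is invented.]

/* Sketch: defer TLBI+sync for leaf entries in non-strict mode, but a
 * non-leaf (table) entry always gets the strict treatment, since the
 * freed page-table page could be reallocated immediately. */
static void unmap_tlb_flush(struct io_pgtable *iop, unsigned long iova,
                            size_t size, bool leaf, bool non_strict)
{
    if (leaf && non_strict)
        return;    /* batched and flushed later by the caller */

    io_pgtable_tlb_add_flush(iop, iova, size, size, leaf);
    io_pgtable_tlb_sync(iop);
}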
1. Add the IOMMU_CAP_NON_STRICT capability.
2. Dynamically choose strict or non-strict mode based on the iommu domain type.
Signed-off-by: Zhen Lei
---
drivers/iommu/arm-smmu-v3.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
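[Editor's note: the body is truncated here too; point 2 presumably boils down to a check of the domain type, along these lines (sketch, invented helper name).]

#include <linux/iommu.h>

/* Only DMA-API managed domains would get non-strict mode;
 * unmanaged/identity domains stay strict. */
static bool domain_wants_non_strict(struct iommu_domain *domain)
{
    return domain->type == IOMMU_DOMAIN_DMA;
}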
Although the mapping has already been removed from the page table, it may
still exist in the TLB. If the freed IOVAs are reused by others before the
flush operation completes, the new user cannot correctly access its
memory.
Signed-off-by: Zhen Lei
---
drivers/iommu/amd_iommu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
1. Save the related domain pointer in struct iommu_dma_cookie, making the
iovad capable of calling domain->ops->flush_iotlb_all to flush the TLB.
2. Define a new IOMMU capability, IOMMU_CAP_NON_STRICT, used to indicate
that the iommu domain supports non-strict mode.
3. During the iommu domain initialization
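[Editor's note: a kernel-style sketch of point 1, with an invented struct name and field layout; the real iommu_dma_cookie in drivers/iommu/dma-iommu.c differs in detail.]

#include <linux/iommu.h>
#include <linux/iova.h>

struct iommu_dma_cookie_sketch {
    struct iova_domain   iovad;
    struct iommu_domain  *domain;   /* new back-pointer to the owner */
};

/* With the back-pointer, the IOVA layer can reach the domain's
 * flush_iotlb_all callback in iommu_ops. */
static void cookie_flush_iotlb_all(struct iommu_dma_cookie_sketch *cookie)
{
    struct iommu_domain *domain = cookie->domain;

    if (domain->ops->flush_iotlb_all)
        domain->ops->flush_iotlb_all(domain);
}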
In general, an IOMMU unmap operation follows the steps below:
1. remove the mapping in the page table of the specified iova range
2. execute a TLBI command to invalidate the mapping cached in the TLB
3. wait for the above TLBI operation to finish
4. free the IOVA resource
5. free the physical memory
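[Editor's note: the ordering of these steps is the whole point of the strict/non-strict debate, so here is a runnable user-space toy where every step is a logging stub (all names invented). In strict mode steps 2-3 complete before step 4; in non-strict mode the IOVA is recycled while a stale TLB entry may still exist.]

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

static void clear_page_table(unsigned long iova, size_t sz) { printf("1. clear PTEs %#lx+%zu\n", iova, sz); }
static void issue_tlbi(unsigned long iova, size_t sz)       { printf("2. tlbi %#lx+%zu\n", iova, sz); }
static void wait_tlbi_complete(void)                        { printf("3. tlbi sync\n"); }
static void free_iova(unsigned long iova, size_t sz)        { printf("4. free IOVA %#lx+%zu\n", iova, sz); }
static void free_phys_pages(unsigned long iova, size_t sz)  { printf("5. free pages %#lx+%zu\n", iova, sz); }

static void iommu_unmap_flow(unsigned long iova, size_t size, bool strict)
{
    clear_page_table(iova, size);
    if (strict) {
        issue_tlbi(iova, size);      /* steps 2-3 happen inline... */
        wait_tlbi_complete();
    }
    /* ...in non-strict mode they are deferred and batched, so the IOVA
     * below is handed back while a stale TLB entry may still exist */
    free_iova(iova, size);
    free_phys_pages(iova, size);
}

int main(void)
{
    iommu_unmap_flow(0x1000, 4096, true);
    iommu_unmap_flow(0x2000, 4096, false);
    return 0;
}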
The static function iova_reserve_iommu_regions is only called by
iommu_dma_init_domain, and the 'if (!dev)' check in iommu_dma_init_domain
affects it only, so we can safely move the check into it. I think it looks
more natural.
In addition, the local variable 'ret' is only assigned in the
.flush_iotlb_all cannot just wait for previous TLBI operations to complete;
it should also invalidate all TLB entries of the related domain.
Signed-off-by: Zhen Lei
---
drivers/iommu/arm-smmu-v3.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
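[Editor's note: for context, the fixed callback presumably looks roughly like the 4.17-era arm_smmu_tlb_inv_context(): emit a full-ASID/VMID invalidation command and then sync, rather than only syncing. A simplified, untested sketch; names follow the driver of that era but may differ across kernel versions.]

static void arm_smmu_flush_iotlb_all_sketch(struct arm_smmu_domain *smmu_domain)
{
    struct arm_smmu_device *smmu = smmu_domain->smmu;
    struct arm_smmu_cmdq_ent cmd = {};

    if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
        cmd.opcode    = CMDQ_OP_TLBI_NH_ASID;    /* whole ASID */
        cmd.tlbi.asid = smmu_domain->s1_cfg.cd.asid;
    } else {
        cmd.opcode    = CMDQ_OP_TLBI_S12_VMALL;  /* whole VMID */
        cmd.tlbi.vmid = smmu_domain->s2_cfg.vmid;
    }
    arm_smmu_cmdq_issue_cmd(smmu, &cmd);
    arm_smmu_cmdq_issue_sync(smmu);              /* wait for completion */
}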
In general, an IOMMU unmap operation follows the steps below:
1. remove the mapping in the page table of the specified iova range
2. execute a TLBI command to invalidate the mapping cached in the TLB
3. wait for the above TLBI operation to finish
4. free the IOVA resource
5. free the physical memory
Variable "unmap_size" is supposed to be initialized in function fetch_pte.
However, it's uninitialized if fetch_pte returns NULL. And "unmap_size" is
used outside the return check.
>From 377ccb647d3c6c6747f20a242b035bafc775c3be Mon Sep 17 00:00:00 2001
Signed-off-by: From: "yzhai...@ucr.edu"
---
The device coherent memory code uses the bitmap helper functions, which take
an order of PAGE_SIZE; that means the size of the allocation region is always
a power of 2 pages. For example, allocating 33 MB from a 33 MB dma_mem region
requires 64 MB of free memory in that region.
Thus we can change to use bi
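[Editor's note: the 33 MB -> 64 MB rounding is just get_order() arithmetic, which this runnable user-space snippet reproduces; get_order() is re-implemented here and PAGE_SHIFT is assumed to be 12.]

#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* user-space re-implementation of the kernel's get_order() */
static int get_order(unsigned long size)
{
    int order = 0;

    size = (size - 1) >> PAGE_SHIFT;
    while (size) {
        order++;
        size >>= 1;
    }
    return order;
}

int main(void)
{
    unsigned long request = 33UL << 20;   /* 33 MB */
    int order = get_order(request);

    printf("pages needed  : %lu\n", request / PAGE_SIZE);   /* 8448 */
    printf("order         : %d\n", order);                  /* 14 */
    printf("pages reserved: %lu (%lu MB)\n",
           1UL << order, (PAGE_SIZE << order) >> 20);       /* 16384, 64 MB */
    return 0;
}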
It is incorrect to use mem->size to validate whether there is enough coherent
memory to allocate in __dma_alloc_from_coherent(), since some devices may have
marked some coherent memory as occupied via dma_mark_declared_memory_occupied().
So we can introduce an 'avail' parameter to save the available device
coherent
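[Editor's note: a runnable user-space sketch of the 'avail' accounting; the struct and function names are ours, the real code lives in the kernel's dma-coherent implementation.]

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct coherent_mem_sketch {
    size_t size;    /* total pages in the region */
    size_t avail;   /* pages neither allocated nor marked occupied */
};

/* analogue of dma_mark_declared_memory_occupied(): reserve up front */
static void mark_occupied(struct coherent_mem_sketch *mem, size_t pages)
{
    mem->avail -= pages;
}

static bool can_alloc(const struct coherent_mem_sketch *mem, size_t pages)
{
    /* old check was pages <= mem->size: wrong once anything is occupied */
    return pages <= mem->avail;
}

int main(void)
{
    struct coherent_mem_sketch mem = { .size = 1024, .avail = 1024 };

    mark_occupied(&mem, 512);
    printf("alloc 768 pages: %s\n", can_alloc(&mem, 768) ? "ok" : "refused");
    return 0;
}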