Hi,
On 9/10/19 11:15 PM, Konrad Rzeszutek Wilk wrote:
On Fri, Sep 06, 2019 at 02:14:48PM +0800, Lu Baolu wrote:
This splits the size parameter to swiotlb_tbl_map_single() and
swiotlb_tbl_unmap_single() into an alloc_size and a mapping_size
parameter, where the latter one is rounded up to the iommu page size.
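For a concrete picture, a bounce-map call site passing both sizes could look roughly like the sketch below (an illustration, not a hunk from the patch; the tbl_dma_addr/paddr/size/dir/attrs names are assumed caller context, and VTD_PAGE_SIZE stands for the iommu page size being rounded to):

        /*
         * mapping_size is the length the caller asked to map; alloc_size is
         * that length rounded up to the IOMMU page size so a whole bounce
         * slot is reserved and its unused tail stays away from the device.
         */
        size_t aligned_size = ALIGN(size, VTD_PAGE_SIZE);       /* alloc_size */
        phys_addr_t tlb_addr;

        tlb_addr = swiotlb_tbl_map_single(dev, tbl_dma_addr, paddr,
                                          size,         /* mapping_size */
                                          aligned_size, /* alloc_size   */
                                          dir, attrs);

        /* ... program the device with tlb_addr and let it DMA ... */

        swiotlb_tbl_unmap_single(dev, tlb_addr, size, aligned_size, dir, attrs);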
__unmap_single makes several calls to __domain_flush_pages, which
traverses the device list that is protected by the domain lock.
Take the domain lock around these calls, as is already done in
__attach_device and __detach_device.
Also, this is in line with the comment on top of __unmap_single, which
says that the domain lock should be held when calling.
Signed-off-by: Filippo Sironi
__map_single makes several calls to __domain_flush_pages, which
traverses the device list that is protected by the domain lock.
Also, this is in line with the comment on top of __map_single, which
says that the domain lock should be held when calling.
Signed-off-by: Filippo Sironi
---
drivers/i
This patch series introduces patches to take the domain lock whenever we call
functions that end up calling __domain_flush_pages. Holding the domain lock is
necessary since __domain_flush_pages traverses the device list, which is
protected by the domain lock.
The first patch in the series adds a c
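Put differently, the pattern the series applies is roughly the following (an illustrative sketch rather than a literal hunk; "domain" is the protection domain the mapping belongs to):

        unsigned long flags;

        spin_lock_irqsave(&domain->lock, flags);

        /*
         * Anything from here on may end up in __domain_flush_pages(), which
         * walks domain->dev_list; that list is only stable while domain->lock
         * is held, since __attach_device()/__detach_device() modify it.
         */
        __unmap_single(dma_dom, dma_addr, size, dir);

        spin_unlock_irqrestore(&domain->lock, flags);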
From: Wei Wang
domain_flush_tlb[_pde] traverses the device list, which is protected by
the domain lock.
Signed-off-by: Wei Wang
Signed-off-by: Filippo Sironi
---
drivers/iommu/amd_iommu.c | 9 +
1 file changed, 9 insertions(+)
diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/am
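For reference, the traversal that makes the lock necessary sits in __domain_flush_pages(); trimmed down, it looks roughly like this (simplified sketch of the existing code, details elided):

static void __domain_flush_pages(struct protection_domain *domain,
                                 u64 address, size_t size, int pde)
{
        struct iommu_dev_data *dev_data;

        /* ... flush the IOMMU TLBs mapping this domain ... */

        list_for_each_entry(dev_data, &domain->dev_list, list) {
                if (!dev_data->ats.enabled)
                        continue;

                /*
                 * Walking dev_list races with __attach_device() and
                 * __detach_device() unless domain->lock is held, which is
                 * what this series enforces.
                 */
                /* ... queue an ATS invalidation for this device ... */
        }
}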
Signed-off-by: Filippo Sironi
---
drivers/iommu/amd_iommu.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index 61de81965c44..f026a8c2b218 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -2169,6 +2169,8 @@ sta
iommu_map_page calls into __domain_flush_pages, which traverses the device
list protected by the domain lock, so the lock must be held around the call.
Signed-off-by: Filippo Sironi
---
drivers/iommu/amd_iommu.c | 5 +
1 file changed, 5 insertions(+)
diff --git a/drivers/iommu/amd_iommu.c b/drivers/iom
On Tue, 2019-09-10 at 10:08 +0200, Joerg Roedel wrote:
> > + "Unknown", "Unknown", "Unknown", "Unknown", "Unknown",
> "Unknown", "Unknown", /* 0x49-0x4F */
>
> Maybe add the number (0x49-0x4f) to the respective "Unknown" fields? If
> we can't give a reason we should give the number for easier
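In concrete terms, the suggestion amounts to something like this for the reserved codes (illustrative only, not a hunk from the patch):

        "Unknown (0x49)", "Unknown (0x4A)", "Unknown (0x4B)", "Unknown (0x4C)",
        "Unknown (0x4D)", "Unknown (0x4E)", "Unknown (0x4F)", /* 0x49-0x4F */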
On Tue, Sep 10, 2019 at 9:34 AM Robin Murphy wrote:
>
> On 06/09/2019 22:44, Rob Clark wrote:
> > From: Rob Clark
> >
> > One of the challenges we face in enabling the aarch64 laptops upstream
> > is dealing with the fact that the bootloader enables the display and
> > takes the corresponding SMMU context-bank out of BYPASS.
On 10/09/2019 16:34, Rob Clark wrote:
On Tue, Sep 10, 2019 at 1:14 AM Joerg Roedel wrote:
On Fri, Sep 06, 2019 at 02:44:01PM -0700, Rob Clark wrote:
@@ -674,7 +674,7 @@ int iommu_group_add_device(struct iommu_group *group,
struct device *dev)
mutex_lock(&group->mutex);
list_add_tail(&device->list, &group->devices);
On 06/09/2019 22:44, Rob Clark wrote:
From: Rob Clark
One of the challenges we face in enabling the aarch64 laptops upstream
is dealing with the fact that the bootloader enables the display and
takes the corresponding SMMU context-bank out of BYPASS. Unfortunately,
currently, the IOMMU framework
On Tue, Sep 10, 2019 at 8:01 AM Robin Murphy wrote:
>
> On 07/09/2019 18:50, Rob Clark wrote:
> > From: Rob Clark
> >
> > When games, browser, or anything using a lot of GPU buffers exits, there
> > can be many hundreds or thousands of buffers to unmap and free. If the
> > GPU is otherwise suspended, this can cause arm-smmu to resume/suspend for each buffer.
On Tue, Sep 10, 2019 at 1:14 AM Joerg Roedel wrote:
>
> On Fri, Sep 06, 2019 at 02:44:01PM -0700, Rob Clark wrote:
> > @@ -674,7 +674,7 @@ int iommu_group_add_device(struct iommu_group *group,
> > struct device *dev)
> >
> > mutex_lock(&group->mutex);
> > list_add_tail(&device->list, &group->devices);
On Tue, Sep 10, 2019 at 04:53:23PM +0200, Joerg Roedel wrote:
> On Fri, Sep 06, 2019 at 02:14:47PM +0800, Lu Baolu wrote:
> > Lu Baolu (5):
> > swiotlb: Split size parameter to map/unmap APIs
> > iommu/vt-d: Check whether device requires bounce buffer
> > iommu/vt-d: Don't switch off swiotlb if bounce page is used
On Fri, Sep 06, 2019 at 02:14:48PM +0800, Lu Baolu wrote:
> This splits the size parameter to swiotlb_tbl_map_single() and
> swiotlb_tbl_unmap_single() into an alloc_size and a mapping_size
> parameter, where the latter one is rounded up to the iommu page
> size.
It does a bit more too. You have t
On 07/09/2019 18:50, Rob Clark wrote:
From: Rob Clark
When games, browser, or anything using a lot of GPU buffers exits, there
can be many hundreds or thousands of buffers to unmap and free. If the
GPU is otherwise suspended, this can cause arm-smmu to resume/suspend
for each buffer, resulting
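For context, the usual way to avoid a full resume/suspend cycle per buffer is runtime-PM autosuspend, which keeps the device powered for a short grace period after its last user goes away. A minimal sketch (the delay value and the placement are illustrative, not necessarily what this series ends up doing):

        /* At probe time: let the SMMU linger after its last user goes away. */
        pm_runtime_set_autosuspend_delay(dev, 20);      /* delay in ms, illustrative */
        pm_runtime_use_autosuspend(dev);

        /* On the put path: defer the suspend instead of triggering it now. */
        pm_runtime_mark_last_busy(dev);
        pm_runtime_put_autosuspend(dev);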
On Fri, Sep 06, 2019 at 02:14:47PM +0800, Lu Baolu wrote:
> Lu Baolu (5):
> swiotlb: Split size parameter to map/unmap APIs
> iommu/vt-d: Check whether device requires bounce buffer
> iommu/vt-d: Don't switch off swiotlb if bounce page is used
> iommu/vt-d: Add trace events for device dma m
On Tue, Sep 10, 2019 at 09:06:56AM -0400, Qian Cai wrote:
> On Tue, 2019-09-10 at 10:15 +0200, Joerg Roedel wrote:
> > On Sat, Sep 07, 2019 at 04:49:33PM +1000, Adam Zerella wrote:
> > > drivers/iommu/intel-iommu.c | 6 +++---
> > > 1 file changed, 3 insertions(+), 3 deletions(-)
> >
> > Applied, thanks.
On 23/08/2019 07:32, Vivek Gautam wrote:
Add reset hook for sdm845 based platforms to turn off
the wait-for-safe sequence.
Understanding how wait-for-safe logic affects USB and UFS performance
on MTP845 and DB845 boards:
Qcom's implementation of arm,mmu-500 adds a WAIT-FOR-SAFE logic
to address
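As a rough sketch of what such a reset hook could look like (the function and firmware-helper names here are assumptions for illustration; the real patch goes through Qcom's SCM interface):

static int qcom_sdm845_smmu500_reset(struct arm_smmu_device *smmu)
{
        int ret;

        /*
         * Assumed firmware call: ask the secure world to turn the
         * wait-for-safe handshake off so TLB invalidations are no longer
         * throttled behind the display/camera SAFE signals.
         */
        ret = qcom_scm_qsmmu500_wait_safe_toggle(false);
        if (ret)
                dev_warn(smmu->dev, "Failed to disable the wait-for-safe logic\n");

        return ret;
}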
Hi Dongchun,
On Mon, Sep 9, 2019 at 6:27 PM Dongchun Zhu wrote:
>
> Hi Tomasz,
>
> On Fri, 2019-08-23 at 19:01 +0900, Tomasz Figa wrote:
> > Hi Dongchun,
> >
> > On Thu, Aug 08, 2019 at 05:22:15PM +0800, dongchun@mediatek.com wrote:
[snip]
> > > +
> > > /* vertical-timings from sensor */
> >
On Tue, 2019-09-10 at 10:15 +0200, Joerg Roedel wrote:
> On Sat, Sep 07, 2019 at 04:49:33PM +1000, Adam Zerella wrote:
> > drivers/iommu/intel-iommu.c | 6 +++---
> > 1 file changed, 3 insertions(+), 3 deletions(-)
>
> Applied, thanks.
Joerg, not sure if you saw the reply from Lu,
https://lore.
On Sat, Sep 07, 2019 at 04:58:12PM +1000, Adam Zerella wrote:
> There were some simple Sparse warnings related to making some
> signatures static.
And unapplied both of your patches as they cause build failures:
arch/x86/events/amd/iommu.o: In function `perf_iommu_read':
iommu.c:(.text+0xba): und
On Sat, Sep 07, 2019 at 04:58:12PM +1000, Adam Zerella wrote:
> drivers/iommu/amd_iommu.c | 4 ++--
> drivers/iommu/amd_iommu_init.c | 12 ++--
> 2 files changed, 8 insertions(+), 8 deletions(-)
Applied, thanks.
On Sat, Sep 07, 2019 at 04:49:33PM +1000, Adam Zerella wrote:
> drivers/iommu/intel-iommu.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
Applied, thanks.
On Fri, Sep 06, 2019 at 02:44:01PM -0700, Rob Clark wrote:
> @@ -674,7 +674,7 @@ int iommu_group_add_device(struct iommu_group *group,
> struct device *dev)
>
> mutex_lock(&group->mutex);
> list_add_tail(&device->list, &group->devices);
> - if (group->domain)
> + if (group->d
On Fri, Sep 06, 2019 at 11:14:02AM -0700, Kyung Min Park wrote:
> Intel VT-d specification revision 3 added support for Scalable Mode
> Translation for DMA remapping. Add the Scalable Mode fault reasons to
> show detailed fault reasons when the translation fault happens.
>
> Link:
> https://softw
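For readers following along, the fault-report path simply indexes a reason-string table with the code taken from the fault record. A simplified sketch of how scalable-mode reasons could slot in (string contents, function name, and the base offset are illustrative, not copied from the patch):

static const char * const dma_remap_sm_fault_reasons[] = {
        "SM: Invalid Root Table Address",
        "SM: Non-zero reserved field set in the Root Entry",
        "SM: Invalid Context Entry",
        /* ... remaining codes, with numbered "Unknown" entries for holes ... */
};

static const char *sm_fault_reason(u8 reason, u8 sm_base)
{
        /* sm_base: first reason code reserved for scalable-mode faults */
        if (reason >= sm_base &&
            reason - sm_base < ARRAY_SIZE(dma_remap_sm_fault_reasons))
                return dma_remap_sm_fault_reasons[reason - sm_base];

        return "Unknown";
}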