On Mon, Jun 3, 2019 at 11:52 AM Joerg Roedel wrote:
>
> Hi Tom,
>
> On Mon, May 06, 2019 at 07:52:02PM +0100, Tom Murphy wrote:
> > Convert the AMD iommu driver to the dma-iommu api. Remove the iova
> > handling and reserve region code from the AMD iommu driver.
>
> Thank you for your work on this
Use the dev->coherent_dma_mask when allocating in the dma-iommu ops api.
Signed-off-by: Tom Murphy
---
drivers/iommu/dma-iommu.c | 16 +---
1 file changed, 9 insertions(+), 7 deletions(-)
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index b383498e2dc3..2a968afd
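
A minimal sketch of the idea (the helper below is made up purely to show which mask should bound the IOVA allocation; it is not part of the patch):

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/types.h>

/*
 * Illustrative only: the IOVA allocator in the dma-iommu path takes an
 * upper limit, and coherent allocations should be bounded by
 * dev->coherent_dma_mask rather than the streaming mask returned by
 * dma_get_mask().
 */
static u64 example_dma_limit(struct device *dev, bool coherent)
{
        return coherent ? dev->coherent_dma_mask : dma_get_mask(dev);
}
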
Convert the AMD iommu driver to the dma-iommu api. Remove the iova
handling and reserve region code from the AMD iommu driver.
Change-log:
v3:
-rename dma_limit to dma_mask
-exit handle_deferred_device early if (!is_kdump_kernel())
-remove pointless calls to handle_deferred_device
v2:
-Rebase on top of this series:
 http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/dma-iommu-ops.3
Add a gfp_t parameter to the iommu_ops::map function.
Remove the needless locking in the AMD iommu driver.
The iommu_ops::map function (or the iommu_map function which calls it)
was always supposed to be sleepable (according to Joerg's comment in
this thread: https://lore.kernel.org/patchwork/patch/977520/ )
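
Roughly, the interface change being described looks like this (a sketch; the struct name is made up and only the map callback is shown):

#include <linux/iommu.h>
#include <linux/types.h>

/*
 * The driver's map callback grows a gfp_t, so callers that cannot
 * sleep can pass GFP_ATOMIC while the common path keeps GFP_KERNEL
 * and is allowed to sleep.
 */
struct example_map_ops {
        int (*map)(struct iommu_domain *domain, unsigned long iova,
                   phys_addr_t paddr, size_t size, int prot, gfp_t gfp);
};
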
Convert the AMD iommu driver to the dma-iommu api. Remove the iova
handling and reserve region code from the AMD iommu driver.
Signed-off-by: Tom Murphy
---
drivers/iommu/Kconfig | 1 +
drivers/iommu/amd_iommu.c | 680 --
2 files changed, 70 insertions(+
Handle devices which defer their attach to the iommu in the dma-iommu api
Signed-off-by: Tom Murphy
---
drivers/iommu/dma-iommu.c | 28 +++-
1 file changed, 27 insertions(+), 1 deletion(-)
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 7a96c2c8f
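
The shape of the deferred-attach handling, as a rough sketch (the function name and the attach call are approximations, not the exact patch):

#include <linux/crash_dump.h>
#include <linux/device.h>
#include <linux/iommu.h>

/*
 * In a kdump kernel a device may still be attached to the domain
 * inherited from the crashed kernel, so the DMA mapping entry points
 * first make sure the device really is attached to its DMA domain.
 * Outside of kdump nothing is ever deferred, so this is a fast no-op.
 */
static int example_handle_deferred_device(struct device *dev,
                                          struct iommu_domain *domain)
{
        if (!is_kdump_kernel())
                return 0;

        /* the real patch goes through internal attach helpers */
        return iommu_attach_device(domain, dev);
}
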
Just to make this clear, I won't apply Christoph's patch (the one in
this email thread) and instead the only change I will make is to
rename dma_limit to dma_mask.
On Tue, Apr 30, 2019 at 1:05 PM Robin Murphy wrote:
>
> On 30/04/2019 12:32, Christoph Hellwig wrote:
> > On Tue, Apr 30, 2019 at 12:
The AMD driver already solves this problem and uses the generic
iommu_request_dm_for_dev function. It seems like both drivers have the
same problem and could use the same solution. Is there any reason we
can't use the same solution for the intel and amd drivers?
Could we just copy the impleme
On Mon, May 6, 2019 at 2:48 AM Lu Baolu wrote:
>
> Hi,
>
> On 5/4/19 9:23 PM, Tom Murphy wrote:
> > Set the dma_ops per device so we can remove the iommu_no_mapping code.
> >
> > Signed-off-by: Tom Murphy
> > ---
> > drivers/iommu/intel-iommu.c | 85 +++--
> > 1
It looks like there is a bug in this code.
The behavior before this patch in __intel_map_single was that
iommu_no_mapping would remove the attached si_domain for 32 bit
devices (in the dmar_remove_one_dev_info(dev) call in
iommu_no_mapping) and then allocate a new domain in
get_valid_domain_for_dev()
On Sun, May 5, 2019 at 3:44 AM Lu Baolu wrote:
>
> Hi,
>
> On 5/4/19 9:23 PM, Tom Murphy wrote:
> > static int intel_iommu_add_device(struct device *dev)
> > {
> > + struct dmar_domain *dmar_domain;
> > + struct iommu_domain *domain;
> > struct intel_iommu *iommu;
> > struct
Add a wrapper for iommu_dma_free_cpu_cached_iovas in the dma-iommu api
path to help with the intel-iommu driver conversion to the dma-iommu api
path
Signed-off-by: Tom Murphy
---
drivers/iommu/dma-iommu.c | 9 +
include/linux/dma-iommu.h | 3 +++
2 files changed, 12 insertions(+)
diff -
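
The wrapper is essentially a pass-through to the IOVA rcache flush; an approximate sketch (in dma-iommu.c the iova_domain sits inside the private cookie, so the real wrapper takes an iommu_domain and digs the iovad out of it):

#include <linux/iova.h>

/*
 * Illustrative only: expose the per-CPU IOVA rcache flush so
 * intel-iommu can keep calling it after handing IOVA management over
 * to dma-iommu.
 */
static void example_free_cpu_cached_iovas(unsigned int cpu,
                                          struct iova_domain *iovad)
{
        free_cpu_cached_iovas(cpu, iovad);
}
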
Convert the intel iommu driver to the dma-iommu api to allow us to
remove the iova handling code and the reserved region code
Signed-off-by: Tom Murphy
---
drivers/iommu/Kconfig | 1 +
drivers/iommu/intel-iommu.c | 405 ++--
include/linux/intel-iommu.h |
To match the dma-ops api path the DMA_PTE_READ should be set if ZLR
isn't supported in the iommu
Signed-off-by: Tom Murphy
---
drivers/iommu/intel-iommu.c | 13 -
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
i
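
A sketch of the prot computation in question (simplified from intel-iommu.c; the helper name is made up):

#include <linux/iommu.h>
#include <linux/intel-iommu.h>

/*
 * Hardware without Zero Length Read support must have DMA_PTE_READ set
 * on every mapping, even ones requested as write-only, which is what
 * the old dma-ops path already did.
 */
static int example_intel_prot(struct intel_iommu *iommu, int iommu_prot)
{
        int prot = 0;

        if ((iommu_prot & IOMMU_READ) || !cap_zlr(iommu->cap))
                prot |= DMA_PTE_READ;
        if (iommu_prot & IOMMU_WRITE)
                prot |= DMA_PTE_WRITE;

        return prot;
}
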
Add a new iommu_ops::flush_iotlb_range function which allows us to flush
the entire range of an iommu_unmap and implement it for the amd and
intel iommu drivers.
Remove the iotlb_range_add callback because it isn't used anywhere.
Signed-off-by: Tom Murphy
---
drivers/iommu/amd_iommu.c | 14 +---
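
A hypothetical shape for the proposed callback (the op is introduced by this series and never settled upstream, so the exact signature here is an assumption):

#include <linux/iommu.h>

/*
 * Flush the IOTLB for one contiguous range after an unmap, replacing
 * the piecewise iotlb_range_add()/iotlb_sync() pattern.
 */
struct example_flush_ops {
        void (*flush_iotlb_range)(struct iommu_domain *domain,
                                  unsigned long iova, size_t size);
};
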
There is no reason to keep track of the iovas in the non-dma ops path.
All this code seems to be pointless and can be removed.
Signed-off-by: Tom Murphy
---
drivers/iommu/intel-iommu.c | 94 +
1 file changed, 33 insertions(+), 61 deletions(-)
diff --git a/dri
Currently the iova flush queue implementation in the dma-iommu api path
doesn't handle freelists. Change the unmap_fast code to allow it to
return any freelists which need to be handled.
Signed-off-by: Tom Murphy
---
drivers/iommu/dma-iommu.c | 39 +++--
drivers
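
Illustratively, the fast unmap path would hand back something like the following (types and field names are placeholders, not the patch's actual interface):

#include <linux/mm_types.h>
#include <linux/types.h>

/*
 * The fast unmap path returns the page-table pages it freed so the
 * deferred flush queue can release them after the IOTLB flush
 * completes, instead of forcing a synchronous flush on every unmap.
 */
struct example_unmap_result {
        size_t unmapped;        /* bytes actually unmapped */
        struct page *freelist;  /* page-table pages to free after the flush */
};
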
Convert the intel iommu driver to the dma-ops api so that we can remove a bunch
of repeated code.
This patchset depends on the "iommu/vt-d: Delegate DMA domain to generic iommu"
and "iommu/amd: Convert the AMD iommu driver to the dma-iommu api" patch sets,
which haven't been merged yet, so this is jus
Set the dma_ops per device so we can remove the iommu_no_mapping code.
Signed-off-by: Tom Murphy
---
drivers/iommu/intel-iommu.c | 85 +++--
1 file changed, 6 insertions(+), 79 deletions(-)
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
in
On Tue, Apr 30, 2019 at 2:42 PM Robin Murphy wrote:
>
> On 30/04/2019 01:29, Tom Murphy wrote:
> > Handle devices which defer their attach to the iommu in the dma-iommu api
>
> I've just spent a while trying to understand what this is about...
>
> AFAICS it's a kdump thing where the regular defaul
Use the dev->coherent_dma_mask when allocating in the dma-iommu ops api.
Signed-off-by: Tom Murphy
---
drivers/iommu/dma-iommu.c | 16 +---
1 file changed, 9 insertions(+), 7 deletions(-)
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index c18f74ad1e8b..df031049
Convert the AMD iommu driver to the dma-iommu api. Remove the iova
handling and reserve region code from the AMD iommu driver.
Signed-off-by: Tom Murphy
---
drivers/iommu/Kconfig | 1 +
drivers/iommu/amd_iommu.c | 680 --
2 files changed, 70 insertions(+
Add a gfp_t parameter to the iommu_ops::map function.
Remove the needless locking in the AMD iommu driver.
The iommu_ops::map function (or the iommu_map function which calls it)
was always supposed to be sleepable (according to Joerg's comment in
this thread: https://lore.kernel.org/patchwork/patch/977520/ )
Convert the AMD iommu driver to the dma-iommu api. Remove the iova
handling and reserve region code from the AMD iommu driver.
Change-log:
v2:
-Rebase on top of this series:
http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/dma-iommu-ops.3
-Add a gfp_t parameter to the iommu_ops::map function
Handle devices which defer their attach to the iommu in the dma-iommu api
Signed-off-by: Tom Murphy
---
drivers/iommu/dma-iommu.c | 30 ++
1 file changed, 30 insertions(+)
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 7a96c2c8f56b..c18f74ad
On Mon, Apr 29, 2019 at 12:59 PM Christoph Hellwig wrote:
>
> On Sat, Apr 27, 2019 at 03:20:35PM +0100, Tom Murphy wrote:
> > I am working on another patch to improve the intel iotlb flushing in
> > the iommu ops patch which should cover this too.
>
> So are you looking into converting the intel-i
Check if there is a not-present cache and flush it if there is.
Signed-off-by: Tom Murphy
---
drivers/iommu/amd_iommu.c | 19 +++
1 file changed, 15 insertions(+), 4 deletions(-)
diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index f7cdd2ab7f11..ebd06
> The iommu_map_page function is called once per physical page that is
> mapped, so in the worst case for every 4k mapping established. So it is
> not the right place to put this check in.
Ah, you're right, that was careless of me.
> From a quick glance this check belongs into the map_sg() and th
I can see two potential problems with these patches that should be addressed:
The default domain of a group can be changed to type
IOMMU_DOMAIN_IDENTITY via the command line. With these patches we are
returning the si_domain for type IOMMU_DOMAIN_IDENTITY. There's a
chance the shared si_domain cou
Check if there is a not-present cache and flush it if there is.
Signed-off-by: Tom Murphy
---
drivers/iommu/amd_iommu.c | 5 +
1 file changed, 5 insertions(+)
diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index f7cdd2ab7f11..91fe5cb10f50 100644
--- a/drivers/iom
On Wed, Apr 24, 2019 at 4:55 PM Joerg Roedel wrote:
>
> On Wed, Apr 24, 2019 at 07:58:19AM -0700, Christoph Hellwig wrote:
> > I'd be tempted to do that. But lets just ask Joerg if he has
> > any opinion..
>
> The reason was that it is an unlikely path, as hardware implementations
> are not allow
PM Christoph Hellwig wrote:
>
> On Wed, Apr 24, 2019 at 03:18:59PM +0100, Tom Murphy via iommu wrote:
> > check if there is a not-present cache present and flush it if there is.
> >
> > Signed-off-by: Tom Murphy
> > ---
> > drivers/iommu/amd_iommu.c | 6 ++
Check if there is a not-present cache and flush it if there is.
Signed-off-by: Tom Murphy
---
drivers/iommu/amd_iommu.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index f7cdd2ab7f11..8ef43224aae0 100644
--- a/drivers/io
These checks were intended to handle devices not mapped by the IOMMU.
Since the AMD IOMMU driver uses per-device dma_ops these functions can
no longer be called by direct mapped devices. So these checks aren't
needed anymore.
Signed-off-by: Tom Murphy
---
drivers/iommu/amd_iommu.c | 10 ++---
> That said, I've now gone and looked and AFAICS both the Intel...
Ah, I missed that, you're right.
> ...and AMD
It doesn't look like it. On AMD the cache is flushed during
iommu_ops::map only if there are page table pages to free (if
we're allocating a large page and freeing the sub pages), rig
I hoped this could be an exception; it's easier to grok without the
line break and isn't crazy long. But because you mentioned it, I'll fix it.
On Mon, Apr 15, 2019 at 7:31 AM Christoph Hellwig wrote:
>
> On Thu, Apr 11, 2019 at 07:47:32PM +0100, Tom M
This is a cut and paste from the current amd_iommu driver. I really
have no idea if it's a good idea or not. It looks like
joerg.roe...@amd.com might be the person to ask.
@Joerg Roedel should we keep this?
On Mon, Apr 15, 2019 at 7:33 AM Christoph Hellwig wrote:
>
> > +static void amd_iommu_flu
dma_ops_domain_free() expects domain to be in a global list.
Arguably, it could be called before protection_domain_init().
Signed-off-by: Dmitry Safonov
Signed-off-by: Tom Murphy
---
drivers/iommu/amd_iommu.c | 10 ++
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/drivers/io
> This seems like a fix to the existing code and should probably go out first.
I'll send this patch out on its own now.
On Mon, Apr 15, 2019 at 7:23 AM Christoph Hellwig wrote:
>
> On Thu, Apr 11, 2019 at 07:47:38PM +0100, Tom Murphy via iommu wrote:
> > dma_ops_domain
Now that we are using the dma-iommu api we have a lot of unused code.
This patch removes all that unused code.
Signed-off-by: Tom Murphy
---
drivers/iommu/amd_iommu.c | 209 --
1 file changed, 209 deletions(-)
diff --git a/drivers/iommu/amd_iommu.c b/drivers/
dma_ops_domain_free() expects domain to be in a global list.
Arguably, it could be called before protection_domain_init().
Signed-off-by: Dmitry Safonov
Signed-off-by: Tom Murphy
---
drivers/iommu/amd_iommu.c | 10 ++
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/drivers/io
Add an iommu_dma_map_page_coherent function to allow mapping pages through
the dma-iommu api using the dev->coherent_dma_mask instead of the
dev->dma_mask
Signed-off-by: Tom Murphy
---
drivers/iommu/dma-iommu.c | 25 -
include/linux/dma-iommu.h | 3 +++
2 files ch
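
An approximate prototype for the helper (illustrative only):

#include <linux/device.h>
#include <linux/dma-direction.h>
#include <linux/mm_types.h>
#include <linux/types.h>

/*
 * Behaves like iommu_dma_map_page() except that the allocated IOVA is
 * bounded by dev->coherent_dma_mask rather than dev->dma_mask.
 */
dma_addr_t example_iommu_dma_map_page_coherent(struct device *dev,
                struct page *page, unsigned long offset, size_t size,
                enum dma_data_direction dir, unsigned long attrs);
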
To convert the AMD iommu driver to the dma-iommu api we need to wrap some of
the iova reserve functions.
Signed-off-by: Tom Murphy
---
drivers/iommu/dma-iommu.c | 27 +++
include/linux/dma-iommu.h | 7 +++
2 files changed, 34 insertions(+)
diff --git a/drivers/iommu/dma
Implement flush_np_cache for the AMD iommu driver. This allows the AMD
iommu not-present cache to be flushed if amd_iommu_np_cache is set.
Signed-off-by: Tom Murphy
---
drivers/iommu/amd_iommu.c | 13 +
1 file changed, 13 insertions(+)
diff --git a/drivers/iommu/amd_iommu.c b/driver
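
A sketch of what the AMD side amounts to (illustrative: domain_flush_pages()/domain_flush_complete() are static helpers inside amd_iommu.c, and the op itself is only proposed by this series):

#include "amd_iommu_types.h"    /* driver-local header, for illustration */

/* Only flush when the hardware reports a not-present cache. */
static void example_amd_flush_np_cache(struct protection_domain *domain,
                                       unsigned long iova, size_t size)
{
        if (unlikely(amd_iommu_np_cache)) {
                domain_flush_pages(domain, iova, size);
                domain_flush_complete(domain);
        }
}
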
Convert the AMD iommu driver to use the dma-iommu api.
Signed-off-by: Tom Murphy
---
drivers/iommu/Kconfig | 1 +
drivers/iommu/amd_iommu.c | 217 +-
2 files changed, 77 insertions(+), 141 deletions(-)
diff --git a/drivers/iommu/Kconfig b/drivers/iommu/
Both the AMD and Intel drivers can cache not-present IOTLB entries. To
convert these drivers to the dma-iommu api we need a generic way to
flush the NP cache. IOMMU drivers which have a NP cache can implement
the .flush_np_cache function in the iommu ops struct. I will implement
.flush_np_cache for
Instead of using a spin lock, I removed the mutex lock from both the
amd_iommu_map and amd_iommu_unmap paths as well. iommu_map doesn't lock
while mapping and so if iommu_map is called by two different threads on
the same iova region it results in a race condition even with the locks.
So the locking
The iommu ops .map function (or the iommu_map function which calls it)
was always supposed to be sleepable (according to Joerg's comment in
this thread: https://lore.kernel.org/patchwork/patch/977520/ ) and so
should probably have had a "might_sleep()" since it was written. However
currently the dm
Convert the AMD iommu driver to the dma-iommu api and remove the iova
handling code from the AMD iommu driver.
Tom Murphy (9):
iommu/dma-iommu: Add iommu_map_atomic
iommu/dma-iommu: Add function to flush any cached not present IOTLB
entries
iommu/dma-iommu: Add iommu_dma_copy_reserved_io