On 4/29/19 10:23 AM, Joerg Roedel wrote:
> On Fri, Apr 26, 2019 at 11:55:12AM -0400, Qian Cai wrote:
>> https://git.sr.ht/~cai/linux-debug/blob/master/dmesg
>
> Thanks, I can't see any definitions for unity ranges or exclusion ranges
> in the IVRS table dump, which makes it even more weird.
>
On Tue, Apr 30, 2019 at 09:11:07PM +, Paul Burton wrote:
> Right but dma_direct_alloc_pages() already checks for the PageHighMem
> case & returns before ever calling arch_dma_prep_coherent(), no?
True. And of course it can't be remapped into the uncached segment
anyway. So yes, we should dro
Hi Christoph,
On Tue, Apr 30, 2019 at 10:29:47PM +0200, Christoph Hellwig wrote:
> On Tue, Apr 30, 2019 at 08:10:43PM +, Paul Burton wrote:
> > This series looks like a nice cleanup to me - the one thing that puzzles
> > me is the !PageHighMem check above.
> >
> > As far as I can see arch_dma
Hi Jacob,
On 4/30/19 8:01 PM, Jacob Pan wrote:
> On Tue, 30 Apr 2019 09:29:40 +0200
> Eric Auger wrote:
>
>> Extended Capability Register PSS field (PASID Size Supported)
>> corresponds to the PASID bit size -1.
>>
>> "A value of N in this field indicates hardware supports PASID
>> field of N+1
On Tue, Apr 30, 2019 at 08:10:43PM +, Paul Burton wrote:
> This series looks like a nice cleanup to me - the one thing that puzzles
> me is the !PageHighMem check above.
>
> As far as I can see arch_dma_prep_coherent() should never be called with
> a highmem page, so would it make more sense t
On Thu, 25 Apr 2019 11:41:05 +0100
Jean-Philippe Brucker wrote:
> On 25/04/2019 11:17, Auger Eric wrote:
> >> +/**
> >> + * ioasid_alloc - Allocate an IOASID
> >> + * @set: the IOASID set
> >> + * @min: the minimum ID (inclusive)
> >> + * @max: the maximum ID (exclusive)
> >> + * @private: data p
Hi Christoph,
On Tue, Apr 30, 2019 at 07:00:30AM -0400, Christoph Hellwig wrote:
> diff --git a/arch/mips/mm/dma-noncoherent.c b/arch/mips/mm/dma-noncoherent.c
> index f9549d2fbea3..f739f42c9d3c 100644
> --- a/arch/mips/mm/dma-noncoherent.c
> +++ b/arch/mips/mm/dma-noncoherent.c
> @@ -44,33 +44,26
On Tue, Apr 30, 2019 at 05:18:33PM +0200, Christoph Hellwig wrote:
> On Tue, Apr 30, 2019 at 01:37:54PM +0100, Robin Murphy wrote:
> > On 30/04/2019 11:56, Christoph Hellwig wrote:
> >> So while I really, really like this cleanup it turns out it isn't
> >> actually safe for arm :( arm remaps the C
On Tue, 30 Apr 2019 09:29:40 +0200
Eric Auger wrote:
> Extended Capability Register PSS field (PASID Size Supported)
> corresponds to the PASID bit size -1.
>
> "A value of N in this field indicates hardware supports PASID
> field of N+1 bits (For example, value of 7 in this field,
> indicates 8
On Tue, 30 Apr 2019 09:05:01 +0200
Auger Eric wrote:
> On 4/29/19 5:25 PM, Jacob Pan wrote:
> > On Fri, 26 Apr 2019 18:15:27 +0200
> > Auger Eric wrote:
> >
> >> Hi Jacob,
> >>
> >> On 4/24/19 1:31 AM, Jacob Pan wrote:
> >>> When supporting guest SVA with emulated IOMMU, the guest PASID
> >
Hi Jacob,
On 4/30/19 7:15 PM, Jacob Pan wrote:
> On Tue, 30 Apr 2019 06:41:13 +0200
> Auger Eric wrote:
>
>> Hi Jacob,
>>
>> On 4/29/19 11:29 PM, Jacob Pan wrote:
>>> On Sat, 27 Apr 2019 11:04:04 +0200
>>> Auger Eric wrote:
>>>
Hi Jacob,
On 4/24/19 1:31 AM, Jacob Pan wrote:
Hi Jacob,
On 4/30/19 7:22 PM, Jacob Pan wrote:
> On Tue, 30 Apr 2019 08:57:30 +0200
> Auger Eric wrote:
>
>> On 4/30/19 12:41 AM, Jacob Pan wrote:
>>> On Fri, 26 Apr 2019 19:23:03 +0200
>>> Auger Eric wrote:
>>>
Hi Jacob,
On 4/24/19 1:31 AM, Jacob Pan wrote:
> When Shared Vir
On Tue, 30 Apr 2019 08:57:30 +0200
Auger Eric wrote:
> On 4/30/19 12:41 AM, Jacob Pan wrote:
> > On Fri, 26 Apr 2019 19:23:03 +0200
> > Auger Eric wrote:
> >
> >> Hi Jacob,
> >> On 4/24/19 1:31 AM, Jacob Pan wrote:
> >>> When Shared Virtual Address (SVA) is enabled for a guest OS via
> >>>
On Tue, 30 Apr 2019 06:41:13 +0200
Auger Eric wrote:
> Hi Jacob,
>
> On 4/29/19 11:29 PM, Jacob Pan wrote:
> > On Sat, 27 Apr 2019 11:04:04 +0200
> > Auger Eric wrote:
> >
> >> Hi Jacob,
> >>
> >> On 4/24/19 1:31 AM, Jacob Pan wrote:
> >>> When Shared Virtual Memory is exposed to a guest v
(catching up on email)
On Wed, Apr 24, 2019 at 09:26:52PM +0200, Christoph Hellwig wrote:
> On Wed, Apr 24, 2019 at 11:33:11AM -0700, Nicolin Chen wrote:
> > I feel it's similar to my previous set, which did most of these
> > internally except the renaming part. But Catalin had a concern
> > that
On Tue, Apr 30, 2019 at 01:52:26PM +0100, Robin Murphy wrote:
> As Catalin pointed out before, many of the users above may still have
> implicit assumptions about the default CMA area, i.e. that this won't
> return something above the limit they originally passed to
> dma_contiguous_reserve(). I
On Tue, Apr 30, 2019 at 01:37:54PM +0100, Robin Murphy wrote:
> On 30/04/2019 11:56, Christoph Hellwig wrote:
>> So while I really, really like this cleanup it turns out it isn't
>> actually safe for arm :( arm remaps the CMA allocation in place
>> instead of using a new mapping, which can be done
On Tue, Apr 30, 2019 at 2:42 PM Robin Murphy wrote:
>
> On 30/04/2019 01:29, Tom Murphy wrote:
> > Handle devices which defer their attach to the iommu in the dma-iommu api
>
> I've just spent a while trying to understand what this is about...
>
> AFAICS it's a kdump thing where the regular defaul
On 30/04/2019 01:29, Tom Murphy wrote:
Handle devices which defer their attach to the iommu in the dma-iommu api
I've just spent a while trying to understand what this is about...
AFAICS it's a kdump thing where the regular default domain attachment
may lead to ongoing DMA traffic from the cr
Hi Julien,
On 4/29/19 4:44 PM, Julien Grall wrote:
> A recent change split iommu_dma_map_msi_msg() in two new functions. The
> function was still implemented to avoid modifying all the callers at
> once.
>
> Now that all the callers have been reworked, iommu_dma_map_msi_msg() can
> be removed.
>
Hi Julien,
On 4/29/19 4:44 PM, Julien Grall wrote:
> On RT, iommu_dma_map_msi_msg() may be called from non-preemptible
> context. This will lead to a splat with CONFIG_DEBUG_ATOMIC_SLEEP as
> the function is using spin_lock (they can sleep on RT).
>
> iommu_dma_map_msi_msg() is used to map the MS
Hi
On 4/29/19 4:44 PM, Julien Grall wrote:
> When an MSI doorbell is located downstream of an IOMMU, it is required
> to swizzle the physical address with an appropriately-mapped IOVA for any
> device attached to one of our DMA ops domains.
>
> At the moment, the allocation of the mapping may be d
On 30/04/2019 02:55, Nicolin Chen wrote:
Both dma_alloc_from_contiguous() and dma_release_from_contiguous()
are implemented very simply, but require callers to pass certain
parameters like count and align, and to supply a boolean derived from
__GFP_NOWARN in the allocation flags. So every fu
On 30/04/2019 11:56, Christoph Hellwig wrote:
So while I really, really like this cleanup it turns out it isn't
actually safe for arm :( arm remaps the CMA allocation in place
instead of using a new mapping, which can be done because they don't
share PMDs with the kernel.
So we'll probably need
Hi Julien,
On 4/29/19 4:44 PM, Julien Grall wrote:
> its_irq_compose_msi_msg() may be called from non-preemptible context.
> However, on RT, iommu_dma_map_msi_msg() must be called from a
> preemptible context.
>
> A recent change split iommu_dma_map_msi_msg() in two new functions:
> one that
Hi Julien,
On 4/29/19 4:44 PM, Julien Grall wrote:
> gicv2m_compose_msi_msg() may be called from non-preemptible context.
> However, on RT, iommu_dma_map_msi_msg() must be called from a
> preemptible context.
>
> A recent change split iommu_dma_map_msi_msg() in two new functions:
> one tha
On 30/04/2019 12:32, Christoph Hellwig wrote:
On Tue, Apr 30, 2019 at 12:27:02PM +0100, Robin Murphy wrote:
Hmm, I don't think we need the DMA mask for the MSI mapping, this
should probably always use a 64-bit mask.
If that were true then we wouldn't need DMA masks for regular mappings
either.
On 29/04/2019 20:01, Christoph Hellwig wrote:
On Mon, Apr 29, 2019 at 01:35:46PM +0100, Robin Murphy wrote:
On 22/04/2019 18:59, Christoph Hellwig wrote:
The nr_pages checks should be done for all mmap requests, not just those
using remap_pfn_range.
I think it probably makes sense now to just
On Tue, Apr 30, 2019 at 12:27:02PM +0100, Robin Murphy wrote:
> > Hmm, I don't think we need the DMA mask for the MSI mapping, this
> > should probably always use a 64-bit mask.
>
> If that were true then we wouldn't need DMA masks for regular mappings
> either. If we have to map the MSI doorbell
On 30/04/2019 12:12, Christoph Hellwig wrote:
static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
- size_t size, int prot, struct iommu_domain *domain)
+ size_t size, int prot, struct iommu_domain *domain,
+ dma_addr_t dma_limit)
C
The arm/arm64 symbol for big endian builds is CONFIG_CPU_BIG_ENDIAN,
not CONFIG_BIG_ENDIAN.
Signed-off-by: Christoph Hellwig
---
drivers/iommu/qcom_iommu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/iommu/qcom_iommu.c b/drivers/iommu/qcom_iommu.c
index 8cdd3f0595
> static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
> - size_t size, int prot, struct iommu_domain *domain)
> + size_t size, int prot, struct iommu_domain *domain,
> + dma_addr_t dma_limit)
Can we just call this dma_mask?
> static void i
This export is not used in modular code, which is a good thing as
everyone should use the proper DMA API instead.
Signed-off-by: Christoph Hellwig
---
arch/mips/mm/cache.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/arch/mips/mm/cache.c b/arch/mips/mm/cache.c
index 3da216988672..33b4093
Stop providing our own arch alloc/free hooks and just expose the segment
offset and use the generic dma-direct allocator.
Signed-off-by: Christoph Hellwig
---
arch/nios2/Kconfig| 1 +
arch/nios2/include/asm/page.h | 6 --
arch/nios2/mm/dma-mapping.c | 34 +++--
Stop providing our arch alloc/free hooks and just expose the segment
offset instead.
Signed-off-by: Christoph Hellwig
---
arch/mips/Kconfig | 1 +
arch/mips/include/asm/page.h | 3 ---
arch/mips/jazz/jazzdma.c | 6 --
arch/mips/mm/dma-noncoherent.c | 27 ++
With most of the previous functionality now elsewhere a lot of the
headers included in this file are not needed.
Signed-off-by: Christoph Hellwig
---
arch/arm64/mm/dma-mapping.c | 10 --
1 file changed, 10 deletions(-)
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping
Stop providing our own arch alloc/free hooks for nommu platforms and
just expose the segment offset and use the generic dma-direct
allocator.
Signed-off-by: Christoph Hellwig
---
arch/microblaze/Kconfig | 2 +
arch/microblaze/mm/consistent.c | 97 +++--
2 fil
Add a Kconfig symbol that indicates an architecture provides an
arch_dma_prep_coherent implementation, and provide a stub otherwise.
This will allow the generic dma-iommu code to use it while still
allowing to be built for cache coherent architectures.
Signed-off-by: Christoph Hellwig
Reviewed-by
A few architectures support uncached kernel segments. In that case we get
an uncached mapping for a given physical address by using an offset in the
uncached segment. Implement support for this scheme in the generic
dma-direct code instead of duplicating it in arch hooks.
Signed-off-by: Christop
Hi all,
can you take a look at this series? It lifts the support for mips-style
uncached segments to the dma-direct layer, thus removing the need
to have arch_dma_alloc/free routines for these architectures.
Signed-off-by: Christoph Hellwig
Acked-by: Robin Murphy
Reviewed-by: Mukesh Ojha
---
arch/arm64/mm/dma-mapping.c | 15 +--
1 file changed, 1 insertion(+), 14 deletions(-)
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index d1661f78eb4d..184ef9ccd69d 100644
Virtual addresses returned from dma(m)_alloc_coherent are opaque in what
backs them, and drivers must not poke into them. Switch the driver
to use the generic DMA API mmap helper to avoid these games.
Signed-off-by: Christoph Hellwig
---
drivers/video/fbdev/au1100fb.c | 24 ---
So while I really, really like this cleanup it turns out it isn't
actually safe for arm :( arm remaps the CMA allocation in place
instead of using a new mapping, which can be done because they don't
share PMDs with the kernel.
So we'll probably need a __dma_alloc_from_contiguous version with
an a
Inline __iommu_dma_mmap_pfn into the main function, and use the
fact that __iommu_dma_get_pages returns NULL for remapped contiguous
allocations to simplify the code flow a bit.
Signed-off-by: Christoph Hellwig
Reviewed-by: Robin Murphy
---
drivers/iommu/dma-iommu.c | 46 ++---
For entirely dma coherent architectures there is no requirement to ever
remap dma coherent allocation. Move all the remap and pool code under
IS_ENABLED() checks and drop the Kconfig dependency.
Signed-off-by: Christoph Hellwig
Reviewed-by: Robin Murphy
---
drivers/iommu/Kconfig | 1 -
dr
Inline __iommu_dma_get_sgtable_page into the main function, and use the
fact that __iommu_dma_get_pages returns NULL for remapped contiguous
allocations to simplify the code flow a bit.
Signed-off-by: Christoph Hellwig
Reviewed-by: Robin Murphy
---
drivers/iommu/dma-iommu.c | 45 +++--
Signed-off-by: Christoph Hellwig
Acked-by: Robin Murphy
---
drivers/iommu/dma-iommu.c | 13 +
include/linux/dma-iommu.h | 13 +
2 files changed, 2 insertions(+), 24 deletions(-)
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index bbd475be567a..58c35b
From: Robin Murphy
Most of it can double up to serve the failure cleanup path for
iommu_dma_alloc().
Signed-off-by: Robin Murphy
---
drivers/iommu/dma-iommu.c | 12
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
From: Robin Murphy
Shuffle around the self-contained atomic and non-contiguous cases to
return early and get out of the way of the CMA case that we're about to
work on next.
Signed-off-by: Robin Murphy
[hch: slight changes to the code flow]
Signed-off-by: Christoph Hellwig
---
drivers/iommu/d
From: Robin Murphy
Always remapping CMA allocations was largely a bodge to keep the freeing
logic manageable when it was split between here and an arch wrapper. Now
that it's all together and streamlined, we can relax that limitation.
Signed-off-by: Robin Murphy
Signed-off-by: Christoph Hellwig
From: Robin Murphy
Most importantly clear up the size / iosize confusion. Also rename addr
to cpu_addr to match the surrounding code and make the intention a little
more clear.
Signed-off-by: Robin Murphy
[hch: split from a larger patch]
Signed-off-by: Christoph Hellwig
---
drivers/iommu/dma
All the logic in iommu_dma_alloc that deals with page allocation from
the CMA or page allocators can be split into a self-contained helper,
and we can then map the result of that or the atomic pool allocation
with the iommu later. This also allows reusing __iommu_dma_free to
tear down the allocati
From: Robin Murphy
The freeing logic was made particularly horrible by part of it being
opaque to the arch wrapper, which led to a lot of convoluted repetition
to ensure each path did everything in the right order. Now that it's
all private, we can pick apart and consolidate the logically-distinc
Instead of having a separate code path for the non-blocking alloc_pages
and CMA allocations paths merge them into one. There is a slight
behavior change here in that we try the page allocator if CMA fails.
This matches what dma-direct and other iommu drivers do and will be
needed to use the dma-io
Move the call to dma_common_pages_remap into __iommu_dma_alloc and
rename it to iommu_dma_alloc_remap. This creates a self-contained
helper for remapped pages allocation and mapping.
Signed-off-by: Christoph Hellwig
Reviewed-by: Robin Murphy
---
drivers/iommu/dma-iommu.c | 54 +
We only have a single caller of this function left, so open code it there.
Signed-off-by: Christoph Hellwig
Reviewed-by: Robin Murphy
---
drivers/iommu/dma-iommu.c | 21 ++---
1 file changed, 2 insertions(+), 19 deletions(-)
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iomm
From: Robin Murphy
Since we duplicate the find_vm_area() logic a few times in places where
we only care about the pages, factor out a helper to abstract it.
Signed-off-by: Robin Murphy
[hch: don't warn when not finding a region, as we'll rely on that later]
Signed-off-by: Christoph Hellwig
--
From: Robin Murphy
The remaining internal callsites don't care about having prototypes
compatible with the relevant dma_map_ops callbacks, so the extra
level of indirection just wastes space and complicates things.
Signed-off-by: Robin Murphy
Signed-off-by: Christoph Hellwig
---
drivers/iommu
From: Robin Murphy
Most of the callers don't care, and the couple that do already have the
domain to hand for other reasons are in slow paths where the (trivial)
overhead of a repeated lookup will be utterly immaterial.
Signed-off-by: Robin Murphy
[hch: dropped the hunk touching iommu_dma_get_m
No need for a __KERNEL__ guard outside uapi and add a missing comment
describing the #else cpp statement. Last but not least include
instead of the asm version, which is frowned upon.
Signed-off-by: Christoph Hellwig
Reviewed-by: Robin Murphy
---
include/linux/dma-iommu.h | 6 ++
1 file c
arch_dma_prep_coherent can handle physically contiguous ranges larger
than PAGE_SIZE just fine, which means we don't need a page-based
iterator.
Signed-off-by: Christoph Hellwig
Reviewed-by: Robin Murphy
---
drivers/iommu/dma-iommu.c | 14 +-
1 file changed, 5 insertions(+), 9 delet
There is nothing really arm64 specific in the iommu_dma_ops
implementation, so move it to dma-iommu.c and keep a lot of symbols
self-contained. Note the implementation does depend on the
DMA_DIRECT_REMAP infrastructure for now, so we'll have to make the
DMA_IOMMU support depend on it, but this wil
Moving this function up to its unmap counterpart helps to keep related
code together for the following changes.
Signed-off-by: Christoph Hellwig
Reviewed-by: Robin Murphy
---
drivers/iommu/dma-iommu.c | 46 +++
1 file changed, 23 insertions(+), 23 deletions(-
We now have an arch_dma_prep_coherent architecture hook that is used
for the generic DMA remap allocator, and we should use the same
interface for the dma-iommu code.
Signed-off-by: Christoph Hellwig
Reviewed-by: Robin Murphy
---
arch/arm64/mm/dma-mapping.c | 8 +---
drivers/iommu/dma-iommu.
Add a Kconfig symbol that indicates an architecture provides an
arch_dma_prep_coherent implementation, and provide a stub otherwise.
This will allow the generic dma-iommu code to use it while still
allowing to be built for cache coherent architectures.
Signed-off-by: Christoph Hellwig
Reviewed-by
Hi Robin,
please take a look at this series, which implements a completely generic
set of dma_map_ops for IOMMU drivers. This is done by taking the
existing arm64 code, moving it to drivers/iommu and then massaging it
so that it can also work for architectures with DMA remapping. This
should hel
DMA allocations that can't sleep may return non-remapped addresses, but
we do not properly handle them in the mmap and get_sgtable methods.
Resolve non-vmalloc addresses using virt_to_page to handle this corner
case.
Signed-off-by: Christoph Hellwig
Reviewed-by: Robin Murphy
---
arch/arm64/mm/d
Hi Srinath,
On 4/12/19 5:13 AM, Srinath Mannam wrote:
> IPROC host has the limitation that it can use only those address ranges
> given by dma-ranges property as inbound address. So that the memory
> address holes in dma-ranges should be reserved to allocate as DMA address.
>
> Inbound address of
On 30/04/2019 03:02, Lu Baolu wrote:
Hi Robin,
On 4/29/19 7:06 PM, Robin Murphy wrote:
On 29/04/2019 06:10, Lu Baolu wrote:
Hi Christoph,
On 4/26/19 11:04 PM, Christoph Hellwig wrote:
On Thu, Apr 25, 2019 at 10:07:19AM +0800, Lu Baolu wrote:
This is not VT-d specific. It's just how generic
If alloc_pages_node() fails, pasid_table is leaked. Free it.
Fixes: cc580e41260db ("iommu/vt-d: Per PCI device pasid table interfaces")
Signed-off-by: Eric Auger
---
drivers/iommu/intel-pasid.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/iommu/intel-pasid.c b/
Extended Capability Register PSS field (PASID Size Supported)
corresponds to the PASID bit size -1.
"A value of N in this field indicates hardware supports PASID
field of N+1 bits (For example, value of 7 in this field,
indicates 8-bit PASIDs are supported)".
Fix the computation of intel_pasid_ma
Hi Robin,
On 4/8/19 2:18 PM, Eric Auger wrote:
> This series allows a virtualizer to program the nested stage mode.
> This is useful when both the host and the guest are exposed with
> an SMMUv3 and a PCI device is assigned to the guest using VFIO.
>
> In this mode, the physical IOMMU must be pro
On 4/29/19 5:25 PM, Jacob Pan wrote:
> On Fri, 26 Apr 2019 18:15:27 +0200
> Auger Eric wrote:
>
>> Hi Jacob,
>>
>> On 4/24/19 1:31 AM, Jacob Pan wrote:
>>> When supporting guest SVA with emulated IOMMU, the guest PASID
>>> table is shadowed in VMM. Updates to guest vIOMMU PASID table
>>> will