By default, for performance reasons, the Intel IOMMU
driver won't flush the IOTLB immediately after a buffer is
unmapped. Instead, it schedules a thread and flushes the IOTLB
in batched mode. This isn't suitable for untrusted devices,
since such a device can still access the memory even if it
isn't supposed to do so.
Cc: A
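The deferred-flush policy described above can be modeled in a few lines of ordinary C. This is a hypothetical user-space sketch of the policy only, not the driver's actual code: names such as flush_queue, hw_flush, and unmap_iova are illustrative.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative model of the two flush policies: trusted devices batch
 * IOTLB invalidations, untrusted devices flush synchronously. */
#define BATCH_MAX 16

struct flush_queue {
    unsigned long pending[BATCH_MAX];
    size_t n;            /* entries waiting to be flushed */
    unsigned flushes;    /* hardware flushes issued so far */
};

/* Stand-in for the actual IOTLB invalidation command. */
static void hw_flush(struct flush_queue *q)
{
    q->n = 0;
    q->flushes++;
}

/* Unmap one IOVA. Trusted devices defer the flush until the queue
 * fills; untrusted devices flush immediately so the stale mapping can
 * never be used again. */
static void unmap_iova(struct flush_queue *q, unsigned long iova,
                       bool untrusted)
{
    q->pending[q->n++] = iova;
    if (untrusted || q->n == BATCH_MAX)
        hw_flush(q);
}
```

With this model, sixteen trusted unmaps cost one hardware flush, while every untrusted unmap pays for its own flush, which is exactly the performance-versus-safety trade-off the commit message describes.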
This adds support for determining the isolation type
of a mediated device group by checking whether it has
an iommu device. If an iommu device exists, an iommu
domain will be allocated and then attached to the iommu
device. Otherwise, the existing behavior is kept unchanged.
Cc: Ashok Raj
Cc: Jacob Pan
Hi,
The Mediated Device framework enables fine-grained physical device
sharing across isolated domains. Currently the mdev framework
is designed to be independent of the platform IOMMU support. As a
result, DMA isolation relies on the mdev parent device in a
vendor-specific way.
There a
A parent device might create different types of mediated
devices. For example, a mediated device could be created
by the parent device with full isolation and protection
provided by the IOMMU. One use case can be found on
Intel platforms, where a mediated device is an assignable
subset of a PCI,
This adds helpers to attach or detach a domain to or from a
group. They will replace iommu_attach_group(), which
only works for non-mdev devices.
If a domain is attached to a group which includes
mediated devices, it should attach to the iommu device
(a PCI device which represents the mdev in iommu sc
The IPROC host has the limitation that it can use only the address
ranges given by the dma-ranges property as inbound addresses, so the
memory address holes in dma-ranges should be reserved rather than
allocated as DMA addresses. An inbound address of the host accessed by
PCIe devices will not be translated before it c
A few SoCs have the limitation that their PCIe host can't allow certain
inbound address ranges. The allowed inbound address ranges are listed in
the dma-ranges DT property, and these address ranges are required to do
IOVA mapping. The remaining address ranges have to be reserved in the
IOVA mapping.
The PCIe host driver of those
Add a dma_ranges field to the PCI host bridge structure to hold a list
of resource entries for the memory regions, in sorted order, given
through the dma-ranges DT property.
While initializing the IOMMU domain of PCI EPs connected to that host
bridge, this list of resources will be processed and IOVAs for the address hole
The dma_ranges field of the PCI host bridge structure has resource
entries, in sorted order of address range, given through the dma-ranges
DT property. This list is the accessible DMA address range, so this
resource list will be processed to reserve IOVA addresses for the
inaccessible address holes in the lis
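The reservation walk described in the last two fragments amounts to finding the gaps between consecutive entries of the sorted allowed-window list. A hedged sketch in plain C; the struct and function names are illustrative, not the kernel's resource API:

```c
#include <assert.h>
#include <stddef.h>

/* One allowed inbound window from the sorted dma-ranges list. */
struct dma_window {
    unsigned long long start;
    unsigned long long end;   /* inclusive */
};

/* Walk the sorted allowed windows and report each gap between
 * consecutive windows, so the caller can reserve the gap in the IOVA
 * domain. Records up to max_holes gaps; returns the number found. */
static size_t find_iova_holes(const struct dma_window *allowed, size_t n,
                              struct dma_window *holes, size_t max_holes)
{
    size_t count = 0;

    for (size_t i = 1; i < n; i++) {
        unsigned long long gap_start = allowed[i - 1].end + 1;

        if (gap_start < allowed[i].start) {
            if (count < max_holes) {
                holes[count].start = gap_start;
                holes[count].end = allowed[i].start - 1;
            }
            count++;
        }
    }
    return count;
}
```

For example, with allowed windows [0x0, 0xfff], [0x4000, 0x7fff], and [0x8000, 0xffff], only the range 0x1000-0x3fff is a hole (the last two windows are adjacent), and it is the one that must be reserved from IOVA allocation.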
Hi Joerg,
On 4/11/19 11:19 PM, Joerg Roedel wrote:
Hi Lu Baolu,
thanks for these patches!
On Mon, Mar 25, 2019 at 09:30:27AM +0800, Lu Baolu wrote:
Lu Baolu (9):
iommu: Add APIs for multiple domains per device
iommu/vt-d: Make intel_iommu_enable_pasid() more generic
iommu/vt-d: Add p
Thanks for the reply Robin.
On Wed, Apr 10, 2019 at 10:20:38AM +0100, Robin Murphy wrote:
> On 09/04/2019 23:47, Nicolin Chen wrote:
> > According to the routine of iommu_dma_alloc(), it allocates an iova
> > then does iommu_map() to map the iova to a physical address of new
> > allocated pages. H
Now that we are using the dma-iommu api we have a lot of unused code.
This patch removes all that unused code.
Signed-off-by: Tom Murphy
---
drivers/iommu/amd_iommu.c | 209 --
1 file changed, 209 deletions(-)
diff --git a/drivers/iommu/amd_iommu.c b/drivers/
dma_ops_domain_free() expects domain to be in a global list.
Arguably, could be called before protection_domain_init().
Signed-off-by: Dmitry Safonov
Signed-off-by: Tom Murphy
---
drivers/iommu/amd_iommu.c | 10 ++
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/drivers/io
Add an iommu_dma_map_page_coherent function to allow mapping pages
through the dma-iommu api using the dev->coherent_dma_mask instead of
the dev->dma_mask.
Signed-off-by: Tom Murphy
---
drivers/iommu/dma-iommu.c | 25 -
include/linux/dma-iommu.h | 3 +++
2 files ch
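The only difference between the proposed helper and the existing one is which mask bounds the IOVA allocation. A minimal illustrative sketch, where fake_dev is a made-up stand-in for struct device rather than any real kernel type:

```c
#include <assert.h>
#include <stdbool.h>

typedef unsigned long long u64;

/* Illustrative stand-in for struct device: a device carries separate
 * limits for streaming and coherent DMA. */
struct fake_dev {
    u64 dma_mask;           /* streaming DMA addressing limit */
    u64 coherent_dma_mask;  /* coherent allocation addressing limit */
};

/* Returns the highest IOVA the allocator may hand out for this
 * mapping; coherent mappings honor coherent_dma_mask instead of
 * dma_mask. */
static u64 iova_limit(const struct fake_dev *dev, bool coherent)
{
    return coherent ? dev->coherent_dma_mask : dev->dma_mask;
}
```

A device whose coherent mask is narrower than its streaming mask (say 32-bit versus 48-bit) would thus get its coherent allocations placed below 4 GiB while streaming mappings may use the full range.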
To convert the AMD iommu driver to the dma-iommu api we need to wrap
some of the iova reserve functions.
Signed-off-by: Tom Murphy
---
drivers/iommu/dma-iommu.c | 27 +++
include/linux/dma-iommu.h | 7 +++
2 files changed, 34 insertions(+)
diff --git a/drivers/iommu/dma
Implement flush_np_cache for the AMD iommu driver. This allows the AMD
iommu non-present cache to be flushed if amd_iommu_np_cache is set.
Signed-off-by: Tom Murphy
---
drivers/iommu/amd_iommu.c | 13 +
1 file changed, 13 insertions(+)
diff --git a/drivers/iommu/amd_iommu.c b/driver
Convert the AMD iommu driver to use the dma-iommu api.
Signed-off-by: Tom Murphy
---
drivers/iommu/Kconfig | 1 +
drivers/iommu/amd_iommu.c | 217 +-
2 files changed, 77 insertions(+), 141 deletions(-)
diff --git a/drivers/iommu/Kconfig b/drivers/iommu/
Both the AMD and Intel drivers can cache not-present IOTLB entries. To
convert these drivers to the dma-iommu api we need a generic way to
flush the NP cache. IOMMU drivers which have an NP cache can implement
the .flush_np_cache function in the iommu ops struct. I will implement
.flush_np_cache for
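The optional-callback design sketched above can be modeled as follows. This is a hypothetical user-space sketch of the proposal; the structs are illustrative, not the kernel's actual iommu_ops or domain types:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative domain: just counts NP-cache invalidations. */
struct np_domain {
    unsigned np_flushes;
};

/* Drivers whose IOMMU caches not-present entries provide the optional
 * .flush_np_cache callback; drivers without an NP cache leave it NULL. */
struct np_iommu_ops {
    void (*flush_np_cache)(struct np_domain *dom,
                           unsigned long iova, size_t size);
};

/* Example driver implementation (stand-in for the real invalidation). */
static void amd_flush_np_cache(struct np_domain *dom,
                               unsigned long iova, size_t size)
{
    (void)iova;
    (void)size;
    dom->np_flushes++;
}

/* Generic dma-iommu path, run after mapping a previously not-present
 * IOVA: call the driver hook only when the driver provides one. */
static void iommu_map_notify(const struct np_iommu_ops *ops,
                             struct np_domain *dom,
                             unsigned long iova, size_t size)
{
    if (ops->flush_np_cache)
        ops->flush_np_cache(dom, iova, size);
}
```

The NULL check in the generic path is what makes the op optional: drivers without an NP cache pay nothing, which matches the cover letter's goal of keeping the generic dma-iommu code driver-agnostic.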
Instead of using a spin lock I removed the mutex lock from both the
amd_iommu_map and amd_iommu_unmap paths as well. iommu_map doesn't lock
while mapping, so if iommu_map is called by two different threads on
the same iova region it results in a race condition even with the locks.
So the locking
The iommu ops .map function (or the iommu_map function which calls it)
was always supposed to be sleepable (according to Joerg's comment in
this thread: https://lore.kernel.org/patchwork/patch/977520/ ) and so
should probably have had a "might_sleep()" since it was written. However
currently the dm
Convert the AMD iommu driver to the dma-iommu api and remove the iova
handling code from the AMD iommu driver.
Tom Murphy (9):
iommu/dma-iommu: Add iommu_map_atomic
iommu/dma-iommu: Add function to flush any cached not present IOTLB
entries
iommu/dma-iommu: Add iommu_dma_copy_reserved_io
On Fri, Apr 05, 2019 at 09:15:25AM +0800, Dongli Zhang wrote:
> So far the kernel only prints the requested size if the swiotlb buffer is full.
> It is not possible to know whether it is simply an out of buffer, or it is
> because swiotlb cannot allocate buffer with the requested size due to
> fragment
On Thu, Apr 11, 2019 at 01:36:02PM -0400, Steven Rostedt wrote:
> I guess the issue is when you get a 41 patch series, and there's only
> one patch you need to look at. There's times I get Cc'd on patch sets
> that I have no idea why I'm on the Cc. If I skim the patch set and
> don't see a relevanc
On Thu, 11 Apr 2019 19:21:30 +0200
Christoph Hellwig wrote:
> > Bah. People complain about overly broad cc-lists and the context is on
> > lkml. But sure, I just bounced it to you.
>
> People should stop complaining about that. Deleting a mail is a single
> keystroke. Finding all the patches
On Thu, Apr 11, 2019 at 07:26:58PM +0200, Christoph Hellwig wrote:
> Thomas just posted a major rework in this area. I think you are
> best off rebasing it on top of that series and feeding it to him.
Actually.. Given that this series hasn't been merged yet and given
how trivial this change is I
Thomas just posted a major rework in this area. I think you are
best off rebasing it on top of that series and feeding it to him.
Otherwise this looks good to me:
Reviewed-by: Christoph Hellwig
___
iommu mailing list
iommu@lists.linux-foundation.org
h
On Wed, Apr 10, 2019 at 02:08:19PM +0200, Thomas Gleixner wrote:
> On Wed, 10 Apr 2019, Christoph Hellwig wrote:
>
> > On Wed, Apr 10, 2019 at 12:28:22PM +0200, Thomas Gleixner wrote:
> > > Replace the indirection through struct stack_trace with an invocation of
> > > the storage array based inter
On Wed, Apr 10, 2019 at 06:14:05PM +0200, Christoph Hellwig wrote:
> below are three relatively simple patches to clean up the
> bypass case and make it share more code with the dma-direct
> implementation.
These look simple and straightforward to me, applied.
Can you please also Cc LKML on iommu
On Wed, Apr 10, 2019 at 06:50:14PM +0200, Christoph Hellwig wrote:
> The AMD iommu dma_ops are only attached on a per-device basis when an
> actual translation is needed. Remove the leftover bypass support which
> in parts was already broken (e.g. it always returns 0 from ->map_sg).
>
> Use the o
On Wed, Apr 10, 2019 at 04:21:08PM +0100, Jean-Philippe Brucker wrote:
> Commit e5567f5f6762 ("PCI/ATS: Add pci_prg_resp_pasid_required()
> interface.") added a common interface to check the PASID bit in the PRI
> capability. Use it in the AMD driver.
>
> Signed-off-by: Jean-Philippe Brucker
> --
On Wed, Apr 10, 2019 at 04:15:16PM +0100, Jean-Philippe Brucker wrote:
> drivers/iommu/iommu.c | 104 ++
> include/linux/iommu.h | 70
> 2 files changed, 174 insertions(+)
Applied to the api-features branch for now, thanks Jean
Hi Lu Baolu,
thanks for these patches!
On Mon, Mar 25, 2019 at 09:30:27AM +0800, Lu Baolu wrote:
> Lu Baolu (9):
> iommu: Add APIs for multiple domains per device
> iommu/vt-d: Make intel_iommu_enable_pasid() more generic
> iommu/vt-d: Add per-device IOMMU feature ops entries
> iommu/vt-d
+robh, +mrutland for DT
On 01/04/2019 17:44, Marc Gonzalez wrote:
> Unused at the moment, just future-proofing the DTS.
>
> Signed-off-by: Marc Gonzalez
> ---
> Documentation/devicetree/bindings/iommu/arm,smmu.txt | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/Documentation/devicetr
+robh, +mrutland for DT
On 01/04/2019 17:40, Marc Gonzalez wrote:
> The MSM8998 ANOC1(*) SMMU services BLSP2, PCIe, UFS, and USB.
> (*) Aggregate Network-on-Chip #1
>
> Based on the following DTS downstream:
> https://source.codeaurora.org/quic/la/kernel/msm-4.4/tree/arch/arm/boot/dts/qcom/msm-a
On Thu, Apr 11, 2019 at 11:00:56AM +0200, Ulf Hansson wrote:
> > blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, mq->queue);
> > if (mmc_can_erase(card))
> > mmc_queue_setup_discard(mq->queue, card);
> >
> > - blk_queue_bounce_limit(mq->queue, limit);
> > + i
On Tue, Apr 09, 2019 at 05:03:52PM +0300, David Woodhouse wrote:
> On Tue, 2019-04-09 at 15:59 +0200, Christoph Hellwig wrote:
> > Hi David and Joerg,
> >
> > do you remember a good reason why intel-iommu is not using per-device
> > dma_map_ops like the AMD iommu or the various ARM iommus?
> >
>
On Wed, Apr 03, 2019 at 04:35:21PM +0800, Shaokun Zhang wrote:
> From: Jinyu Qi
>
> In struct iova_domain, there are three atomic variables; the former two
> are about TLB flush counters which use the atomic_add operation, another is
> used by the flush timer, which uses the cmpxchg operation.
> These variables
On Mon, Apr 01, 2019 at 08:11:00PM +0100, Robin Murphy wrote:
> With the diff below squashed in to address my outstanding style nits,
>
> Acked-by: Robin Murphy
>
> I don't foresee any conflicting io-pgtable changes to prevent this going via
> DRM, but I'll leave the final say up to Joerg.
No o
On Wed, Apr 03, 2019 at 08:14:18AM +0300, Dmitry Osipenko wrote:
> Joerg, could you please apply this series?
Applied, thanks.
On 11/04/2019 12:01, Robin Murphy wrote:
The crash occurs for the same reason.
In this case, on the really_probe() failure path, we are still clearing
the DMA ops prior to releasing the device's managed memories.
This patch fixes this issue by reordering the DMA ops teardown and the
call to de
On 11/04/2019 09:50, John Garry wrote:
On 04/04/2019 12:17, John Garry wrote:
On 03/04/2019 10:20, John Garry wrote:
On 03/04/2019 09:14, Greg KH wrote:
On Wed, Apr 03, 2019 at 09:02:36AM +0100, John Garry wrote:
On 28/03/2019 10:08, John Garry wrote:
In commit 376991db4b64 ("driver core: Po
On Wed, Apr 10, 2019 at 02:21:31PM -0700, Jacob Pan wrote:
> On Tue, 9 Apr 2019 20:37:55 +0300
> Andriy Shevchenko wrote:
> > On Tue, Apr 09, 2019 at 09:43:28AM -0700, Jacob Pan wrote:
> > > On Tue, 9 Apr 2019 13:07:18 +0300
> > > Andriy Shevchenko wrote:
> > > > On Mon, Apr 08, 2019 at 04:59:2
On 11/04/2019 11:09, Stanimir Varbanov wrote:
> On 4/11/19 11:44 AM, Marc Gonzalez wrote:
>
>> Since we just want to map 0x100, we don't need an iommu-map-mask.
>
> Do you see warnings during boot about missing property?
Absent iommu-map-mask property is expected. No warning:
https://elixir.bo
Hi Marc,
On 4/11/19 11:44 AM, Marc Gonzalez wrote:
> On 10/04/2019 17:32, Stanimir Varbanov wrote:
>
>> Few comments inline.
>
> I'll send v3.
>
> Changes:
> - Move all X-names props *after* corresponding X(s) prop
> - Drop comments
>
>
>>> + iommu-map = <0x100 &
Hi Christoph,
On Thu, 11 Apr 2019 at 09:10, Christoph Hellwig wrote:
>
> Just like we do for all other block drivers. Especially as the limit
> imposed at the moment might be way too pessimistic for iommus.
I would appreciate some information in the changelog, as it's quite
unclear what this
On 04/04/2019 12:17, John Garry wrote:
On 03/04/2019 10:20, John Garry wrote:
On 03/04/2019 09:14, Greg KH wrote:
On Wed, Apr 03, 2019 at 09:02:36AM +0100, John Garry wrote:
On 28/03/2019 10:08, John Garry wrote:
In commit 376991db4b64 ("driver core: Postpone DMA tear-down until after devres
On 10/04/2019 17:32, Stanimir Varbanov wrote:
> Few comments inline.
I'll send v3.
Changes:
- Move all X-names props *after* corresponding X(s) prop
- Drop comments
>> +iommu-map = <0x100 &anoc1_smmu 0x1480 1>;
>
> iommu-map-mask? It is optional but I had t
On Wed, Apr 03, 2019 at 08:21:48PM +0200, Geert Uytterhoeven wrote:
> During PSCI system suspend, R-Car Gen3 SoCs are powered down, and all
> IPMMU state is lost. Hence after s2ram, devices wired behind an IPMMU,
> and configured to use it, will see their DMA operations hang.
>
> To fix this, res
On Thu, Apr 11, 2019 at 10:32:40AM +0200, Simon Horman wrote:
> On Wed, Apr 03, 2019 at 08:21:46PM +0200, Geert Uytterhoeven wrote:
> > The maximum number of micro-TLBs per IPMMU instance is not fixed, but
> > depends on the SoC type. Hence move it from struct ipmmu_vmsa_device to
> > struct ipmmu
On Wed, Apr 03, 2019 at 08:21:47PM +0200, Geert Uytterhoeven wrote:
> ipmmu_domain_init_context() takes care of (1) initializing the software
> domain, and (2) initializing the hardware context for the domain.
>
> Extract the code to initialize the hardware context into a new subroutine
> ipmmu_do
On Wed, Apr 03, 2019 at 08:21:46PM +0200, Geert Uytterhoeven wrote:
> The maximum number of micro-TLBs per IPMMU instance is not fixed, but
> depends on the SoC type. Hence move it from struct ipmmu_vmsa_device to
> struct ipmmu_features, and set up the correct value for both R-Car Gen2
> and Gen3
On Wed, Apr 03, 2019 at 08:21:45PM +0200, Geert Uytterhoeven wrote:
> Make the IPMMU_CTX_MAX constant unsigned, to match the type of
> ipmmu_features.number_of_contexts.
>
> This allows to use plain min() instead of type-casting min_t().
>
> Signed-off-by: Geert Uytterhoeven
> Reviewed-by: Laure
On Wed, Apr 03, 2019 at 08:21:44PM +0200, Geert Uytterhoeven wrote:
> On R-Car Gen3, the faulting virtual address is a 40-bit address, and
> comprised of two registers. Read the upper address part, and combine
> both parts, when running on a 64-bit system.
>
> Signed-off-by: Geert Uytterhoeven
>
On Wed, Apr 03, 2019 at 08:21:43PM +0200, Geert Uytterhoeven wrote:
> As of commit 7af9a5fdb9e0ca33 ("iommu/ipmmu-vmsa: Use
> iommu_device_sysfs_add()/remove()"), IOMMU devices show up under
> /sys/class/iommus/, but their "devices" subdirectories are empty.
Hi Geert,
Should the path be /sys/clas
On Thu, Apr 11, 2019 at 10:10:28AM +0200, Simon Horman wrote:
> On Wed, Apr 03, 2019 at 08:21:43PM +0200, Geert Uytterhoeven wrote:
> > As of commit 7af9a5fdb9e0ca33 ("iommu/ipmmu-vmsa: Use
> > iommu_device_sysfs_add()/remove()"), IOMMU devices show up under
> > /sys/class/iommus/, but their "devic
Just drop two pointless _attrs prefixes to make the code a little
more grep-able.
Signed-off-by: Christoph Hellwig
---
drivers/xen/swiotlb-xen.c | 17 +++--
1 file changed, 7 insertions(+), 10 deletions(-)
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 877b
We can simply loop over the segments and map them, removing lots of
duplicate code.
Signed-off-by: Christoph Hellwig
---
drivers/xen/swiotlb-xen.c | 68 ++-
1 file changed, 10 insertions(+), 58 deletions(-)
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen
Hi all,
below are a couple of cleanups for swiotlb-xen.c. They were done in
preparation of eventually using the dma-noncoherent.h cache flushing
hooks, but that final goal will need some major work to the arm32 code
first. Until then I think these patches might be better in mainline
than in my l
Get rid of the grand multiplexer and implement the sync_single_for_cpu
and sync_single_for_device methods directly, and then loop over them
for the scatterlist based variants.
Note that this also loses a few comments related to highlevel DMA API
concepts, which have nothing to do with the swiotlb-
Refactor the code a bit to make further changes easier.
Signed-off-by: Christoph Hellwig
---
drivers/xen/swiotlb-xen.c | 31 ---
1 file changed, 16 insertions(+), 15 deletions(-)
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 9a951504dc12..5
The comments are spot on and should be near the central API, not just
near a single implementation.
Signed-off-by: Christoph Hellwig
---
arch/arm/mm/dma-mapping.c | 11 ---
kernel/dma/mapping.c | 11 +++
2 files changed, 11 insertions(+), 11 deletions(-)
diff --git a/arch/a
Hi everyone,
I thought I got rid of all non-highmem, non-ISA block layer bounce
buffering a while ago, but I missed the MMC case. While I still plan to
also kill off the highmem bouncing there I won't get to it this merge
window, so for now I'd like to make some progress and move MMC to the
DMA la
Just like we do for all other block drivers. Especially as the limit
imposed at the moment might be way too pessimistic for iommus.
Signed-off-by: Christoph Hellwig
---
drivers/mmc/core/queue.c | 7 ++-
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/drivers/mmc/core/queue.c b/
These days the DMA mapping code must bounce buffer for any unsupported
address, and if the driver needs to optimize for natively supported
ranges it should use dma_get_required_mask.
Signed-off-by: Christoph Hellwig
---
arch/arm/include/asm/dma-mapping.h | 7 ---
include/linux/dma-mapping