On Wed, Mar 27, 2019 at 02:34:56PM +0800, Lu Baolu wrote:
> - During the v1 review cycle, we discussed the possibility
> of reusing the swiotlb code to avoid code duplication, but
> we found the swiotlb implementation is not ready for use
> with a bounce page pool.
> https://lkml.o
By default, for performance reasons, the Intel IOMMU
driver does not flush the IOTLB immediately after a
buffer is unmapped. It schedules a thread and flushes
the IOTLB in batched mode. This isn't suitable for an
untrusted device, since the device can still access the
memory even though it isn't supposed to.
Cc: Ashok Raj
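A minimal sketch of the trade-off described above; dev->untrusted is the PCI core's flag, while flush_iotlb_now() and queue_deferred_flush() are hypothetical stand-ins for the driver's real flush paths:

	/*
	 * Sketch only: unmap paths flush the IOTLB immediately for an
	 * untrusted device, and keep the cheaper batched flush otherwise.
	 * flush_iotlb_now() and queue_deferred_flush() are hypothetical.
	 */
	static void unmap_and_flush(struct device *dev, unsigned long iova,
				    size_t size)
	{
		bool untrusted = dev_is_pci(dev) && to_pci_dev(dev)->untrusted;

		if (untrusted)
			flush_iotlb_now(iova, size);	/* window closes right away */
		else
			queue_deferred_flush(iova, size); /* batched, faster */
	}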
The Intel VT-d hardware uses paging for DMA remapping,
so the minimum mapped window is one page. Device drivers
may map buffers that don't fill a whole IOMMU window,
which allows the device to access possibly unrelated
memory; a malicious device could exploit this to
perform DMA attacks. To
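For example (simple arithmetic, not from the patch): a 100-byte buffer still occupies a full 4 KiB IOMMU page, so the mapping exposes the rest of that page to the device:

	size_t requested = 100;
	size_t mapped    = ALIGN(requested, PAGE_SIZE);	/* 4096 on 4 KiB pages */
	size_t exposed   = mapped - requested;		/* 3996 bystander bytes */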
This adds the DMA sync ops for DMA buffers used by any
untrusted device. Such buffers need to be synced because
they might have been mapped with bounce pages.
Cc: Ashok Raj
Cc: Jacob Pan
Signed-off-by: Lu Baolu
Tested-by: Xu Pengfei
Tested-by: Mika Westerberg
---
drivers/iommu/intel-iommu.c |
This adds a helper to check whether a device needs to
use a bounce buffer. It also provides a boot-time option
to disable the bounce buffer, which users can set to
trade this protection away for performance.
Cc: Ashok Raj
Cc: Jacob Pan
Signed-off-by: Lu Baolu
Tested-by: Xu Pengfei
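A sketch of how such a helper and kill switch could look; the "nobounce" parameter name and both helpers are illustrative, not necessarily what the patch uses:

	static bool intel_no_bounce;	/* boot-time kill switch (illustrative) */

	static int __init setup_no_bounce(char *str)
	{
		intel_no_bounce = true;
		return 0;
	}
	early_param("nobounce", setup_no_bounce);

	static bool device_needs_bounce(struct device *dev)
	{
		if (intel_no_bounce)
			return false;	/* user opted out for performance */
		return dev_is_pci(dev) && to_pci_dev(dev)->untrusted;
	}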
This adds two helpers to map or unmap a physically
contiguous memory region in the page table of an
iommu domain.
Cc: Ashok Raj
Cc: Jacob Pan
Signed-off-by: Lu Baolu
Tested-by: Xu Pengfei
Tested-by: Mika Westerberg
---
drivers/iommu/intel-iommu.c | 35 +++
inc
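The diffstat is cut off above; as a rough sketch, the two helpers could take shapes like these (names and argument lists are guesses, not the patch's):

	/* Map a physically contiguous range into the domain's page table. */
	static int domain_map_range(struct dmar_domain *domain, unsigned long iova,
				    phys_addr_t paddr, size_t size, int prot);
	/* Tear the same range back out of the page table. */
	static void domain_unmap_range(struct dmar_domain *domain,
				       unsigned long iova, size_t size);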
This adds the APIs for the bounce-buffer-specific DMA
sync ops.
Cc: Ashok Raj
Cc: Jacob Pan
Signed-off-by: Lu Baolu
Tested-by: Xu Pengfei
Tested-by: Mika Westerberg
---
drivers/iommu/intel-pgtable.c | 44 +++
include/linux/intel-iommu.h | 4
2 files chang
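The core of any such sync op is a directional copy between the original buffer and its bounce page; a minimal sketch (not the patch's code):

	static void bounce_sync(void *orig, void *bounce, size_t size,
				enum dma_data_direction dir, bool for_cpu)
	{
		if (for_cpu && dir != DMA_TO_DEVICE)
			memcpy(orig, bounce, size);	/* device writes -> CPU visible */
		else if (!for_cpu && dir != DMA_FROM_DEVICE)
			memcpy(bounce, orig, size);	/* CPU writes -> device visible */
	}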
This adds a helper that walks a contiguous DMA address
range and divides it into up to three parts: a partial
start page, middle full pages, and a partial end page,
calling a callback for each part of the range.
Cc: Ashok Raj
Cc: Jacob Pan
Signed-off-by: Lu Baolu
Tested-by: Xu Pengfei
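A self-contained sketch of such a walker, with illustrative names; the callback fires at most once each for the head, the middle, and the tail:

	typedef int (*range_fn)(dma_addr_t addr, size_t len, void *data);

	static int walk_dma_range(dma_addr_t addr, size_t size,
				  range_fn fn, void *data)
	{
		size_t len;
		int ret;

		/* Head: from addr up to the next page boundary, if unaligned. */
		if (!IS_ALIGNED(addr, PAGE_SIZE)) {
			len = min_t(size_t, size, PAGE_ALIGN(addr) - addr);
			ret = fn(addr, len, data);
			if (ret)
				return ret;
			addr += len;
			size -= len;
		}

		/* Middle: whole pages. */
		if (size >= PAGE_SIZE) {
			len = round_down(size, PAGE_SIZE);
			ret = fn(addr, len, data);
			if (ret)
				return ret;
			addr += len;
			size -= len;
		}

		/* Tail: remaining partial page, if any. */
		return size ? fn(addr, size, data) : 0;
	}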
This adds trace support for the Intel IOMMU driver. It
also declares events that can be used to trace when an
IOVA is mapped or unmapped in a domain.
Cc: Ashok Raj
Cc: Jacob Pan
Signed-off-by: Mika Westerberg
Signed-off-by: Lu Baolu
---
drivers/iommu/Makefile
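A map event in the usual TRACE_EVENT() style might look like the sketch below (the surrounding trace-header boilerplate is omitted); the event and field names are illustrative, not necessarily the patch's:

	TRACE_EVENT(iova_map,
		TP_PROTO(struct device *dev, dma_addr_t iova, phys_addr_t phys,
			 size_t size),
		TP_ARGS(dev, iova, phys, size),
		TP_STRUCT__entry(
			__string(name, dev_name(dev))
			__field(dma_addr_t, iova)
			__field(phys_addr_t, phys)
			__field(size_t, size)
		),
		TP_fast_assign(
			__assign_str(name, dev_name(dev));
			__entry->iova = iova;
			__entry->phys = phys;
			__entry->size = size;
		),
		TP_printk("dev=%s iova=0x%llx phys=0x%llx size=%zu",
			  __get_str(name), (u64)__entry->iova,
			  (u64)__entry->phys, __entry->size)
	);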
This consolidates the code with a helper.
Signed-off-by: Lu Baolu
---
drivers/iommu/intel-iommu.c | 21 +++--
include/linux/intel-iommu.h | 20
2 files changed, 23 insertions(+), 18 deletions(-)
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-
This adds the APIs for bounce-buffer-specific domain
map() and unmap(). The partial start and end pages will
be mapped with bounce pages instead. This enhances the
security of DMA buffers by isolating them from DMA
attacks by malicious devices.
Cc: Ashok Raj
Cc: Jacob Pan
Signed-off-by: Lu Baolu
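A sketch of that policy, written as a callback for a range walker like the one sketched earlier; bounce_ctx, map_in_place() and map_via_bounce_page() are hypothetical:

	static int bounce_map_part(dma_addr_t addr, size_t len, void *data)
	{
		struct bounce_ctx *ctx = data;	/* hypothetical per-mapping context */

		/* Page-aligned middle chunks are safe to map in place. */
		if (IS_ALIGNED(addr, PAGE_SIZE) && IS_ALIGNED(len, PAGE_SIZE))
			return map_in_place(ctx, addr, len);

		/* Partial head/tail pages go through fresh bounce pages. */
		return map_via_bounce_page(ctx, addr, len);
	}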
The Thunderbolt vulnerabilities are public and nowadays
go by the name Thunderclap [1] [3]. This patch series
aims to mitigate those concerns.
An external PCI device is a PCI peripheral device connected
to the system through an external bus, such as Thunderbolt.
What makes it different is tha
This series of patches tries to spare the CMA area from
single-page allocations by bypassing them and allocating
normal pages instead, since all addresses within a single
page are contiguous anyway.
We had once applied PATCH-5 but reverted it, as actually
not all the callers handled the fallback a
The CMA allocation will skip allocations of single pages
to save CMA resources. This requires its callers to redirect
those page allocations to the normal area. So this patch
adds fallback routines.
Signed-off-by: Nicolin Chen
---
arch/arm/mm/dma-mapping.c | 13 ++---
1 file changed, 10 inse
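The fallback shape being asked of callers, assuming the 2019-era dma_alloc_from_contiguous() signature and a surrounding mapping routine that supplies dev, count, size and gfp (a sketch, not any one patch verbatim):

	struct page *page = NULL;
	unsigned int order = get_order(size);

	if (gfpflags_allow_blocking(gfp))
		page = dma_alloc_from_contiguous(dev, count, order,
						 gfp & __GFP_NOWARN);
	if (!page)
		page = alloc_pages(gfp, order);	/* normal area fallback */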
The CMA allocation will skip allocations of single pages
to save CMA resources. This requires its callers to redirect
those page allocations to the normal area.
So this patch moves the alloc_pages() call to the fallback routines.
Signed-off-by: Nicolin Chen
---
Changelog
v1->v2:
* PATCH-2: Initializ
The addresses within a single page are always contiguous,
so there is little need to allocate a single page from the
CMA area. Since the CMA area has a limited, predefined
size, it may run out of space in heavy use cases, where
quite a lot of CMA pages might be allocated for
The CMA allocation will skip allocations of single pages
to save CMA resources. This requires its callers to redirect
those page allocations to the normal area. So this patch
adds fallback routines.
Signed-off-by: Nicolin Chen
---
arch/arm64/mm/dma-mapping.c | 19 ---
1 file changed,
The CMA allocation will skip allocations of single pages
to save CMA resources. This requires its callers to redirect
those page allocations to the normal area. So this patch
adds fallback routines.
Note: the amd_iommu driver uses dma_alloc_from_contiguous()
as a fallback allocation and uses alloc_p
On Tue, Mar 26, 2019 at 03:49:56PM -0700, Nicolin Chen wrote:
> @@ -116,7 +116,7 @@ int __init dma_atomic_pool_init(gfp_t gfp, pgprot_t prot)
> 	if (dev_get_cma_area(NULL))
> 		page = dma_alloc_from_contiguous(NULL, nr_pages,
> 						 pool_size_order, false);
The CMA allocation will skip allocations of single pages
to save CMA resources. This requires its callers to redirect
those page allocations to the normal area.
So this patch moves the alloc_pages() call to the fallback routines.
Signed-off-by: Nicolin Chen
---
kernel/dma/remap.c | 2 +-
1 file cha
On Mon, 25 Mar 2019 09:30:36 +0800
Lu Baolu wrote:
> This adds support to determine the isolation type
> of a mediated device group by checking whether it has
> an iommu device. If an iommu device exists, an iommu
> domain will be allocated and then attached to the iommu
> device. Otherwise,
On Mon, 25 Mar 2019 09:30:35 +0800
Lu Baolu wrote:
> This adds helpers to attach a domain to, or detach it
> from, a group. This will replace iommu_attach_group(),
> which only works for non-mdev devices.
>
> If a domain is attaching to a group which includes the
> mediated devices, it should attach to the
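One possible shape for such a helper, built on the existing iommu_group_for_each_dev(); the mdev redirection itself is elided and the names are illustrative:

	static int attach_one(struct device *dev, void *data)
	{
		struct iommu_domain *domain = data;

		/* An mdev would be redirected to its parent iommu device here. */
		return iommu_attach_device(domain, dev);
	}

	static int my_attach_group(struct iommu_domain *domain,
				   struct iommu_group *group)
	{
		return iommu_group_for_each_dev(group, domain, attach_one);
	}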
On Mon, 25 Mar 2019 09:30:34 +0800
Lu Baolu wrote:
> A parent device might create different types of mediated
> devices. For example, a mediated device could be created
> by the parent device with full isolation and protection
> provided by the IOMMU. One use case can be found on
> Intel plat
On 26/03/2019 12:31, Geert Uytterhoeven wrote:
> Hi John,
> CC robh
> On Tue, Mar 26, 2019 at 12:42 PM John Garry wrote:
> > Memory is incorrectly freed using the direct ops, as dma_map_ops = NULL.
> > Oops...
> >
> > After reversing the order of the calls to arch_teardown_dma_ops() and
> > devres_release_all(), dma_
On Wed, Jan 30, 2019 at 08:44:27AM +0100, Christoph Hellwig wrote:
> On Tue, Jan 29, 2019 at 09:36:08PM -0500, Michael S. Tsirkin wrote:
> > This has been discussed ad nauseam. virtio is all about compatibility.
> > Losing a couple of lines of code isn't worth breaking working setups.
> > People th
Hi John,
CC robh
On Tue, Mar 26, 2019 at 12:42 PM John Garry wrote:
> > Memory is incorrectly freed using the direct ops, as dma_map_ops = NULL.
> > Oops...
> >
> > After reversing the order of the calls to arch_teardown_dma_ops() and
> > devres_release_all(), dma_map_ops is still valid, and the
Memory is incorrectly freed using the direct ops, as dma_map_ops = NULL.
Oops...
After reversing the order of the calls to arch_teardown_dma_ops() and
devres_release_all(), dma_map_ops is still valid, and the DMA memory is
now released using __iommu_free_attrs():
+sata_rcar ee30.sata: d
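A sketch of the reordering under discussion (not the exact patch): release devres-managed DMA memory while dma_map_ops is still valid, and only then tear the ops down.

	/* Unbind path, after the reorder (illustrative): */
	devres_release_all(dev);	/* managed DMA memory freed via valid ops */
	arch_teardown_dma_ops(dev);	/* only now may dma_map_ops go away */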
On 3/25/2019 7:00 AM, Lu Baolu wrote:
> This adds support to determine the isolation type
> of a mediated device group by checking whether it has
> an iommu device. If an iommu device exists, an iommu
> domain will be allocated and then attached to the iommu
> device. Otherwise, keep the sam
On 2/22/2019 7:49 AM, Lu Baolu wrote:
> This adds helpers to attach a domain to, or detach it
> from, a group. This will replace iommu_attach_group(),
> which only works for non-mdev devices.
>
> If a domain is attaching to a group which includes the
> mediated devices, it should attach to the iommu devic
On 3/25/2019 7:00 AM, Lu Baolu wrote:
> A parent device might create different types of mediated
> devices. For example, a mediated device could be created
> by the parent device with full isolation and protection
> provided by the IOMMU. One use case can be found on
> Intel platforms where
On 3/26/2019 2:39 AM, Bjorn Andersson wrote:
> On Sun 09 Sep 23:25 PDT 2018, Vivek Gautam wrote:
> > There are scenarios where drivers are required to make an
> > scm call in atomic context, such as in one of qcom's
> > arm-smmu-500 errata [1].
> > [1] ("https://source.codeaurora.org/quic/la/kernel/msm-4.