From: Christoph Hellwig Sent: Sunday, February 27, 2022 6:31 AM
>
> Pass a bool to indicate whether swiotlb needs to be enabled based on the
> addressing needs, and replace the verbose argument with a set of
> flags, including one to force-enable bounce buffering.
>
> Note that this patch removes the possibility to force xen-swiotlb use
> using swiotlb=force on the command line on x86
On 2/18/22 8:55 AM, Lu Baolu wrote:
v6:
- Refine comments and commit messages.
- Rename iommu_group_set_dma_owner() to iommu_group_claim_dma_owner().
- Rename iommu_device_use/unuse_kernel_dma() to
iommu_device_use/unuse_default_domain().
- Remove unnecessary EXPORT_SYMBOL_GPL.
The iommu group changes notifier is no longer referenced in the tree. Remove
it to avoid dead code.
Suggested-by: Christoph Hellwig
Signed-off-by: Lu Baolu
Reviewed-by: Jason Gunthorpe
---
include/linux/iommu.h | 23 -
drivers/iommu/iommu.c | 75 ---
The iommu core and driver core have been enhanced to avoid unsafe driver
binding to a live group after iommu_group_set_dma_owner(PRIVATE_USER)
has been called. There's no need to register an iommu group notifier any
more. This removes the iommu group notifier, which contains BUG_ON() and
WARN().
Signed-off-by:
From: Jason Gunthorpe
commit 60720a0fc646 ("vfio: Add device tracking during unbind") added the
unbound list to plug a problem with KVM where KVM_DEV_VFIO_GROUP_DEL
relied on vfio_group_get_external_user() succeeding to return the
vfio_group from a group file descriptor. The unbound list allowed
As DMA ownership is claimed for the iommu group when a VFIO group is
added to a VFIO container, the VFIO group viability is guaranteed as long
as group->container_users > 0. Remove the now-unnecessary group viability
checks, which are only reached when group->container_users is not zero.
The only remaini
Claim group dma ownership when an IOMMU group is set to a container,
and release the dma ownership once the iommu group is unset from the
container.
This change prevents some unsafe bridge drivers from binding to non-ACS
bridges while devices under them are assigned to user space. This is an
intention
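Roughly, the pairing looks like this (a sketch only: the example_group
structure and call sites below are illustrative stand-ins for VFIO's
internals, while iommu_group_claim_dma_owner() and
iommu_group_release_dma_owner() are the interfaces this series adds):

#include <linux/iommu.h>

/* Illustrative stand-in for VFIO's internal group bookkeeping. */
struct example_group {
        struct iommu_group *iommu_group;
};

static int example_group_set_container(struct example_group *group,
                                       void *container)
{
        int ret;

        /* Claim DMA ownership when the group is set to a container ... */
        ret = iommu_group_claim_dma_owner(group->iommu_group, container);
        if (ret)
                return ret;

        /* ... then set up the user space IOMMU domain as before. */
        return 0;
}

static void example_group_unset_container(struct example_group *group)
{
        /* Tear down the domain first ... */

        /* ... then release ownership when the group leaves the container. */
        iommu_group_release_dma_owner(group->iommu_group);
}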
If a switch lacks ACS P2P Request Redirect, a device below the switch can
bypass the IOMMU and DMA directly to other devices below the switch, so
all the downstream devices must be in the same IOMMU group as the switch
itself.
The existing VFIO framework allows the portdrv driver to be bound to th
The current VFIO implementation allows the pci-stub driver to be bound to
a PCI device with other devices in the same IOMMU group being assigned
to userspace. The pci-stub driver has no dependencies on DMA or the
IOVA mapping of the device, but it does prevent the user from having
direct access to the
The devices on the platform/amba/fsl-mc/PCI buses can be bound to drivers
where the device DMA is managed either by kernel drivers or by user-space
applications.
Unfortunately, multiple devices may be placed in the same IOMMU group
because they cannot be isolated from each other. The DMA on these devices
must eith
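For drivers that manage the device DMA themselves (VFIO-style drivers, for
instance), the series adds an opt-out flag on the driver structure. A
hedged sketch, assuming a driver_managed_dma field along the lines the
merged series uses, with otherwise hypothetical names and device IDs:

#include <linux/module.h>
#include <linux/pci.h>

static const struct pci_device_id example_ids[] = {
        { PCI_DEVICE(0x1234, 0x5678) }, /* placeholder vendor/device IDs */
        { }
};

static int example_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
        /* hand the device to user space instead of doing kernel DMA */
        return 0;
}

static struct pci_driver example_user_dma_driver = {
        .name                   = "example-user-dma",
        .id_table               = example_ids,
        .probe                  = example_probe,
        /* tell the core that this driver, not the kernel DMA API,
         * is responsible for the device's DMA */
        .driver_managed_dma     = true,
};
module_pci_driver(example_user_dma_driver);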
Stop sharing the platform_dma_configure() helper, as the buses that use it
are about to get their own dma_configure callbacks.
Signed-off-by: Lu Baolu
Reviewed-by: Jason Gunthorpe
---
include/linux/platform_device.h | 2 --
drivers/amba/bus.c | 19 ++-
drivers/base/platform.c
The bus_type structure defines a dma_configure() callback for bus drivers
to configure DMA on the devices. This adds the paired dma_cleanup()
callback and calls it during driver unbinding so that bus drivers can do
some cleanup work.
One use case for these paired DMA callbacks is for the bus driver t
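A minimal sketch of how a bus might wire the two hooks together, assuming
the dma_cleanup() callback sits next to dma_configure() in struct bus_type
as described above (the example bus itself is hypothetical):

#include <linux/device.h>

static int example_bus_dma_configure(struct device *dev)
{
        /* set up DMA / IOMMU state while a driver is being bound */
        return 0;
}

static void example_bus_dma_cleanup(struct device *dev)
{
        /* undo dma_configure(); called by the driver core at unbind time */
}

static struct bus_type example_bus_type = {
        .name           = "example",
        .dma_configure  = example_bus_dma_configure,
        .dma_cleanup    = example_bus_dma_cleanup,
};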
Multiple devices may be placed in the same IOMMU group because they
cannot be isolated from each other. These devices must either be
entirely under kernel control or userspace control, never a mixture.
This adds dma ownership management in the iommu core and exposes several
interfaces for the device d
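For orientation, the interfaces referred to here look roughly like this
(prototypes as used by this series; iommu_group_release_dma_owner() and
iommu_group_dma_owner_claimed() are assumed counterparts, so check
include/linux/iommu.h in the final tree for the authoritative signatures):

/* claimed by a user-space driver framework (e.g. VFIO) for a whole group */
int iommu_group_claim_dma_owner(struct iommu_group *group, void *owner);
void iommu_group_release_dma_owner(struct iommu_group *group);
bool iommu_group_dma_owner_claimed(struct iommu_group *group);

/* used by the driver core / bus code around binding a kernel DMA driver */
int iommu_device_use_default_domain(struct device *dev);
void iommu_device_unuse_default_domain(struct device *dev);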
Hi folks,
The iommu group is the minimal isolation boundary for DMA. Devices in
a group can access each other's MMIO registers via peer-to-peer DMA
and also need to share the same I/O address space.
Once the I/O address space is assigned to user control, it is no longer
available to the dma_map* API,
The pull request you sent on Sun, 27 Feb 2022 19:12:02 +0100:
> git://git.infradead.org/users/hch/dma-mapping.git tags/dma-mapping-5.17-1
has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/98f3e84f8df66f1ac9d04b6d8093993c9bfd69e6
Thank you!
--
Deet-doot-dot, I am a bot.
On 27/02/2022 at 15:30, Christoph Hellwig wrote:
> Pass a bool to indicate whether swiotlb needs to be enabled based on the
> addressing needs, and replace the verbose argument with a set of
> flags, including one to force-enable bounce buffering.
>
> Note that this patch removes the possibility to force xen-swiotlb use
> using swiotlb=force on the command line on x86
The following changes since commit 754e0b0e35608ed5206d6a67a791563c631cec07:
Linux 5.17-rc4 (2022-02-13 12:13:30 -0800)
are available in the Git repository at:
git://git.infradead.org/users/hch/dma-mapping.git tags/dma-mapping-5.17-1
for you to fetch changes up to ddbd89deb7d32b1fbb879f48d6
CONFIG_DMA_REMAP is used to build a few helpers around the core
vmalloc code, to use them in case there is a highmem page in
dma-direct, and to let dma coherent allocations use non-contiguous
page allocations for DMA in the dma-iommu layer.
Right now it needs to be exp
<asm/dma-mapping.h> gets pulled in by all drivers using the DMA API.
Remove x86 internal variables and unnecessary includes from it.
Signed-off-by: Christoph Hellwig
---
arch/x86/include/asm/dma-mapping.h | 11 ---
arch/x86/include/asm/iommu.h | 2 ++
2 files changed, 2 insertions(+), 11 deletions(-
Allow passing a remap argument to the swiotlb initialization functions
to handle the Xen/x86 remap case. ARM/ARM64 never did any remapping
from xen_swiotlb_fixup, so we don't even need that quirk.
Signed-off-by: Christoph Hellwig
---
arch/arm/xen/mm.c | 23 +++---
arch/x86/includ
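From an architecture's point of view the remap hook looks roughly like the
sketch below. The swiotlb_init_remap() name and the callback prototype
follow this series; treat them as assumptions here, and the function names
as placeholders:

#include <linux/init.h>
#include <linux/swiotlb.h>

/* A nonzero return asks swiotlb to retry with a smaller buffer, which is
 * what the Xen/x86 fixup relies on. */
static int example_remap(void *tlb, unsigned long nslabs)
{
        /* e.g. rewrite the buffer so it is machine-contiguous under Xen */
        return 0;
}

void __init example_arch_mem_init(void)
{
        /* only the Xen/x86 case needs a remap callback; everyone else can
         * pass NULL here or just call swiotlb_init() */
        swiotlb_init_remap(true, SWIOTLB_VERBOSE, example_remap);
}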
Power SVM wants to allocate a swiotlb buffer that is not restricted to
low memory for the trusted hypervisor scheme. Consolidate the support
for this into the swiotlb_init interface by adding a new flag.
Signed-off-by: Christoph Hellwig
---
arch/powerpc/include/asm/svm.h | 4
arch/p
Pass a bool to indicate whether swiotlb needs to be enabled based on the
addressing needs, and replace the verbose argument with a set of
flags, including one to force-enable bounce buffering.
Note that this patch removes the possibility to force xen-swiotlb
use using swiotlb=force on the command line on x86
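As a concrete illustration of the new calling convention (a sketch; the
SWIOTLB_VERBOSE/SWIOTLB_FORCE flag names follow the series, and the helper
below is hypothetical):

#include <linux/init.h>
#include <linux/swiotlb.h>

void __init example_platform_dma_init(bool has_memory_beyond_dma_mask,
                                      bool must_bounce)
{
        unsigned int flags = SWIOTLB_VERBOSE;

        /* e.g. memory encryption schemes need to force bounce buffering */
        if (must_bounce)
                flags |= SWIOTLB_FORCE;

        /* the bool states whether swiotlb is needed for addressing at all */
        swiotlb_init(has_memory_beyond_dma_mask || must_bounce, flags);
}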
The IOMMU table tries to separate the different IOMMUs into different
backends, but actually requires various cross calls.
Rewrite the code to do the generic swiotlb/swiotlb-xen setup directly
in pci-dma.c and then just call into the IOMMU drivers.
Signed-off-by: Christoph Hellwig
---
arch/ia64
Use the generic swiotlb initialization helper instead of open coding it.
Signed-off-by: Christoph Hellwig
---
arch/mips/cavium-octeon/dma-octeon.c | 15 ++-
arch/mips/pci/pci-octeon.c | 2 +-
2 files changed, 3 insertions(+), 14 deletions(-)
diff --git a/arch/mips/cavium-
Let the caller choose a zone to allocate from. This will be used
later on by the xen-swiotlb initialization on arm.
Signed-off-by: Christoph Hellwig
Reviewed-by: Anshuman Khandual
---
arch/x86/pci/sta2x11-fixup.c | 2 +-
include/linux/swiotlb.h | 2 +-
kernel/dma/swiotlb.c | 4 ++--
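As of this patch the late-init prototype would be roughly
"int swiotlb_init_late(size_t size, gfp_t gfp_mask)" (later patches in the
series may extend it further); a hypothetical caller restricted to a
32-bit-reachable zone might then do:

#include <linux/gfp.h>
#include <linux/sizes.h>
#include <linux/swiotlb.h>

static int example_late_swiotlb_setup(void)
{
        /* 8 MiB bounce buffer; the gfp_mask argument is what lets the
         * caller pick the zone to allocate from */
        return swiotlb_init_late(SZ_8M, GFP_DMA32);
}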
Remove the bogus Xen override that was usually larger than the actual
size and just calculate the value on demand. Note that
swiotlb_max_segment still doesn't make sense as an interface and should
eventually be removed.
Signed-off-by: Christoph Hellwig
Reviewed-by: Anshuman Khandual
---
driver
swiotlb_late_init_with_default_size is an overly verbose name that
doesn't even capture what the function is doing, given that the size is
not just a default but the actual requested size.
Rename it to swiotlb_init_late.
Signed-off-by: Christoph Hellwig
Reviewed-by: Anshuman Khandual
---
arch/x8
Use the more specific is_swiotlb_active check instead of checking the
global swiotlb_force variable.
Signed-off-by: Christoph Hellwig
Reviewed-by: Anshuman Khandual
---
kernel/dma/direct.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/dma/direct.h b/kernel/dma/direc
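A minimal sketch of the kind of check meant here, assuming the per-device
is_swiotlb_active() helper (the real change is a one-line substitution in
kernel/dma/direct.h):

#include <linux/swiotlb.h>

/* Ask the per-device question "is there an active bounce buffer for this
 * device?" instead of consulting the global swiotlb_force setting. */
static inline bool example_can_bounce(struct device *dev)
{
        return is_swiotlb_active(dev);
}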
Hi all,
this series tries to clean up the swiotlb initialization, including
that of swiotlb-xen. To get there it also removes the x86 iommu table
infrastructure that massively obfuscates the initialization path.
Git tree:
git://git.infradead.org/users/hch/misc.git swiotlb-init-cleanup
Gitweb:
If force bouncing is enabled we can't release the buffers.
Signed-off-by: Christoph Hellwig
Reviewed-by: Anshuman Khandual
---
kernel/dma/swiotlb.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index f1e7ea160b433..36fbf1181d285 100644
--- a/
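A sketch of the guard those three added lines amount to; the flag below is
a hypothetical stand-in for however the force-bounce state is tracked:

#include <linux/types.h>

/* hypothetical: set when swiotlb=force (or an equivalent flag) is in effect */
static bool example_force_bounce;

static void example_swiotlb_release(void)
{
        /* If force bouncing is enabled we can't release the buffers: every
         * DMA mapping still has to go through them. */
        if (example_force_bounce)
                return;

        /* ... otherwise free the bounce buffer as before ... */
}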