On Wed, 2022-06-15 at 18:14 +0100, Robin Murphy wrote:
> On 2022-06-15 17:12, yf.wang--- via iommu wrote:
> > From: Yunfei Wang
> >
> > Rename MTK_IOMMU_TLB_ADDR to MTK_IOMMU_ADDR, and update the
> > MTK_IOMMU_ADDR definition for better generality.
> >
> > Signed-off-by: Ning Li
> > Signed-off-
> From: Nicolin Chen
> Sent: Thursday, June 16, 2022 8:03 AM
>
> All devices in emulated_iommu_groups have pinned_page_dirty_scope
> set, so the update_dirty_scope in the first list_for_each_entry
> is always false. Clean it up, and move the "if update_dirty_scope"
> part from the detach_group_don
> From: Nicolin Chen
> Sent: Thursday, June 16, 2022 8:03 AM
>
> The domain->ops validation was added, as a precaution, for mixed-driver
> systems. However, at this moment only one iommu driver is possible. So
> remove it.
It's true on a physical platform. But I'm not sure whether a virtual plat
On Wed, 2022-06-15 at 18:03 +0100, Robin Murphy wrote:
> On 2022-06-15 17:12, yf.w...@mediatek.com wrote:
> >
> > static phys_addr_t iopte_to_paddr(arm_v7s_iopte pte, int lvl,
> > struct io_pgtable_cfg *cfg)
> > {
> > @@ -240,10 +245,17 @@ static void *__arm_v7s_a
On Mon, 2022-06-13 at 10:13 +0200, AngeloGioacchino Del Regno wrote:
> Il 13/06/22 07:32, Yong Wu ha scritto:
> > On Thu, 2022-06-09 at 12:08 +0200, AngeloGioacchino Del Regno
> > wrote:
> > > On some SoCs (of which only MT8195 is supported at the time of
> > > writing),
> > > the "R" and "W" (I/O)
> From: Nicolin Chen
> Sent: Thursday, June 16, 2022 8:03 AM
>
> From: Jason Gunthorpe
>
> The KVM mechanism for controlling wbinvd is based on the OR of the
> coherency properties of all devices attached to a guest, regardless of
> whether those devices are attached to a single domain or multiple domains.
>
>
> From: Nicolin Chen
> Sent: Thursday, June 16, 2022 8:03 AM
>
> Cases like VFIO wish to attach a device to an existing domain that was
> not allocated specifically from the device. This raises a condition
> where the IOMMU driver can fail the domain attach because the domain and
> device are inc
Just remove an unused variable that is only for mtk_iommu_v1.
Fixes: 9485a04a5bb9 ("iommu/mediatek: Separate mtk_iommu_data for v1 and v2")
Signed-off-by: Yong Wu
---
drivers/iommu/mtk_iommu.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iomm
No functional change. Just improve safety from dts.
All the larbs that connect to one IOMMU must connect with the same
smi-common. This patch checks the mediatek,smi property for each
larb; if their mediatek,smi differ, it returns failure.
Also avoid the case where there is no available smi-larb nod
From: Guenter Roeck
Fix the smatch warnings:
drivers/iommu/mtk_iommu.c:878 mtk_iommu_mm_dts_parse() error: uninitialized
symbol 'larbnode'.
If someone abuses the dtsi node (doesn't follow the definition of the
dt-binding), for example "mediatek,larbs" is provided as a boolean property,
"larb_nr" will be z
mtk_iommu_mm_dts_parse() parses the smi larb nodes. If parsing the
(i+1)th larb fails (returns -EINVAL), we should call of_node_put() for
larbs 0..i. In the failure path, one of_node_put() matches the
of_parse_phandle() inside it.
Fixes: d2e9a1102cfc ("iommu/mediatek: Contain MM IOMMU flow with the MM TYPE")
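For illustration, a minimal sketch of the refcount rule described above
(function and variable names are hypothetical, not the actual mtk_iommu code):

static int parse_larbs(struct device_node *np, struct device_node **larbs,
		       int larb_nr)
{
	int i;

	for (i = 0; i < larb_nr; i++) {
		larbs[i] = of_parse_phandle(np, "mediatek,larbs", i);
		if (!larbs[i])
			goto err_put;	/* release the 0..i-1 nodes */
	}
	return 0;

err_put:
	while (--i >= 0)
		of_node_put(larbs[i]);
	return -EINVAL;
}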
Mute the probe defer log:
[2.654806] mtk-iommu 14018000.iommu: mm dts parse fail(-517).
[2.656168] mtk-iommu 1c01f000.iommu: mm dts parse fail(-517).
Fixes: d2e9a1102cfc ("iommu/mediatek: Contain MM IOMMU flow with the MM TYPE")
Signed-off-by: Yong Wu
Reviewed-by: AngeloGioacchino Del Re
This patchset contains misc improve patches:
[1/5] In mt8195 v7, I added an error log for dts parse failure, but it
doesn't ignore the probe_defer case. (v6 doesn't have this error log.)
[2/5] Add an error path for MM dts parse.
[3/5][4/5] Improve safety from dts. Based on this:
https://lore.kernel.org/li
Hi,
Thank you for the patch! Yet something to improve:
[auto build test ERROR on joro-iommu/next]
[also build test ERROR on linus/master v5.19-rc2 next-20220615]
[cannot apply to arm-perf/for-next/perf]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting
Hi, Jean
On 2022/6/15 11:16 PM, Jean-Philippe Brucker wrote:
Hi,
On Fri, Jun 10, 2022 at 08:34:23PM +0800, Zhangfei Gao wrote:
The uacce parent's module can be removed while uacce is working,
which may cause trouble.
If rmmod/uacce_remove happens just after fops_open: bind_queue,
the uacce_rem
> From: Baolu Lu
> Sent: Wednesday, June 15, 2022 9:10 PM
>
> On 2022/6/15 14:22, Tian, Kevin wrote:
> >> From: Baolu Lu
> >> Sent: Tuesday, June 14, 2022 3:21 PM
> >>
> >> On 2022/6/14 14:49, Tian, Kevin wrote:
> From: Lu Baolu
> Sent: Tuesday, June 14, 2022 10:51 AM
>
> The
On Thu, Jun 16, 2022 at 10:09:49AM +0800, Baolu Lu wrote:
> On 2022/6/16 08:03, Nicolin Chen wrote:
> > diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
> > index 44016594831d..0dd13330fe12 100644
> > --- a/drive
On 2022/6/15 23:40, Jason Gunthorpe wrote:
On Fri, Jun 10, 2022 at 01:37:20PM +0800, Baolu Lu wrote:
On 2022/6/9 20:49, Jason Gunthorpe wrote:
+void iommu_free_pgtbl_pages(struct iommu_domain *domain,
+ struct list_head *pages)
+{
+ struct page *page, *next;
+
+
On 2022/6/16 08:03, Nicolin Chen wrote:
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 44016594831d..0dd13330fe12 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -4323,7 +4323,7 @@ static int prepare_domain_attach_device(struct
iommu_do
From: Brijesh Singh
To support SNP, the IOMMU must be enabled, and IOMMU configurations
where DTE[Mode]=0 are prohibited, which means SNP cannot be supported
with the IOMMU passthrough domain (a.k.a. IOMMU_DOMAIN_IDENTITY),
or when the AMD IOMMU driver is configured not to use the IOMMU host (v1)
page table. O
The ACPI IVRS table can contain multiple IVHD blocks. Each block contains
information used to initialize each IOMMU instance.
Currently, init_iommu_all sequentially processes each IVHD block and
initializes the IOMMU instances one by one. However, certain features require all IOMMUs
to be configured in the sam
The IOMMUv2 APIs (for supporting shared virtual memory with PASID)
configure the domain with the IOMMU v2 page table, and set DTE[Mode]=0.
This configuration cannot be supported on an SNP-enabled system.
Signed-off-by: Suravee Suthikulpanit
---
drivers/iommu/amd/init.c | 7 ++-
1 file changed, 6
Once SNP is enabled (by executing the SNP_INIT command), the IOMMU can no
longer support the passthrough domain (i.e. IOMMU_DOMAIN_IDENTITY).
The SNP_INIT command is called early in the boot process, and would fail
if the kernel is configured to default to passthrough mode.
After the system is already boo
EFR[SNPSup] needs to be checked early in the boot process, since it is
used to determine how the IOMMU driver configures other IOMMU features
and data structures. This check can be done as soon as the IOMMU driver
finishes parsing IVHDs.
Introduce a variable for tracking the SNP support status, which
On an AMD system with SNP enabled, the IOMMU hardware checks the host
translation valid (TV) and guest translation valid (GV) bits in the device
table entry (DTE) before accessing the corresponding page tables.
However, the current IOMMU driver sets the TV bit for all devices regardless
of whether the host pa
An SNP-enabled system requires the IOMMU v1 page table to be configured
with a non-zero DTE[Mode] for DMA-capable devices. This affects a number of
use cases such as IOMMU pass-through mode and the AMD IOMMUv2 APIs for
binding/unbinding a PASID.
The series introduces a global variable to check the SNP-enabled state
du
The function check_feature_on_all_iommus() checks whether an IOMMU
feature support bit is set in the Extended Feature Register (EFR).
The current logic iterates through all IOMMUs and returns false when it
finds the first unset bit.
To provide more thorough checking, modify the logic to iterate t
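A sketch of the "iterate them all" variant, assuming the AMD driver's
for_each_iommu() and iommu_feature() helpers (details may differ from the
actual patch):

static bool check_feature_on_all_iommus(u64 mask)
{
	struct amd_iommu *iommu;
	bool ret = true;

	for_each_iommu(iommu) {
		if (!iommu_feature(iommu, mask)) {
			pr_warn("Feature %#llx missing on IOMMU %d\n",
				mask, iommu->index);
			ret = false;	/* keep going, report them all */
		}
	}
	return ret;
}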
Un-inline the domain specific logic from the attach/detach_group ops into
two paired functions vfio_iommu_alloc_attach_domain() and
vfio_iommu_detach_destroy_domain() that strictly deal with creating and
destroying struct vfio_domains.
Add the logic to check for EMEDIUMTYPE return code of iommu_at
The domain->ops validation was added, as a precaution, for mixed-driver
systems. However, at this moment only one iommu driver is possible. So
remove it.
Per discussion with Robin, in future when many can be permitted we will
rely on the IOMMU core code to check the domain->ops:
https://lore.kerne
All devices in emulated_iommu_groups have pinned_page_dirty_scope
set, so the update_dirty_scope in the first list_for_each_entry
is always false. Clean it up, and move the "if update_dirty_scope"
part from the detach_group_done routine to the domain_list part.
Rename the "detach_group_done" goto
Cases like VFIO wish to attach a device to an existing domain that was
not allocated specifically from the device. This raises a condition
where the IOMMU driver can fail the domain attach because the domain and
device are incompatible with each other.
This is a soft failure that can be resolved b
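A minimal sketch of what such a caller could do with the new error code
(hypothetical caller with domain, group, bus and ret assumed in scope; not
the actual VFIO patch):

	ret = iommu_attach_group(domain, group);
	if (ret == -EMEDIUMTYPE) {
		/* Incompatible, not fatal: retry with a fresh domain. */
		new_domain = iommu_domain_alloc(bus);
		if (!new_domain)
			return -ENOMEM;
		ret = iommu_attach_group(new_domain, group);
	}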
From: Jason Gunthorpe
The KVM mechanism for controlling wbinvd is based on the OR of the coherency
properties of all devices attached to a guest, regardless of whether those
devices are attached to a single domain or multiple domains.
So, there is no value in trying to push a device that could do enforced
cache c
This is a preparatory series for IOMMUFD v2 patches. It enforces error
code -EMEDIUMTYPE in iommu_attach_device() and iommu_attach_group() when
an IOMMU domain and a device/group are incompatible. It also drops the
useless domain->ops check since it won't fail in current environment.
These allow V
On Wed, Jun 15, 2022 at 07:35:00AM +0000, Tian, Kevin wrote:
> > From: Nicolin Chen
> > Sent: Wednesday, June 15, 2022 4:45 AM
> >
> > Hi Kevin,
> >
> > > On Wed, Jun 08, 2022 at 11:48:27PM +0000, Tian, Kevin wrote:
> > > > > > The KVM
On Wed, Jun 15, 2022 at 01:36:50PM -0500, Steve Wahl wrote:
> To support up to 64 sockets with 10 DMAR units each (640), make the
> value of DMAR_UNITS_SUPPORTED adjustable by a config variable,
> CONFIG_DMAR_UNITS_SUPPORTED, and make its default 1024 when MAXSMP is
> set.
>
> If the available ha
To support up to 64 sockets with 10 DMAR units each (640), make the
value of DMAR_UNITS_SUPPORTED adjustable by a config variable,
CONFIG_DMAR_UNITS_SUPPORTED, and make its default 1024 when MAXSMP is
set.
If the available hardware exceeds DMAR_UNITS_SUPPORTED (previously set
to MAX_IO_APICS, or
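The header side of such a change could look roughly like this (a sketch
under the naming described above, not the exact patch):

/* include/linux/dmar.h */
#ifdef CONFIG_DMAR_UNITS_SUPPORTED
#define DMAR_UNITS_SUPPORTED	CONFIG_DMAR_UNITS_SUPPORTED
#else
#define DMAR_UNITS_SUPPORTED	64
#endif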
On 2022-06-15 17:12, yf.wang--- via iommu wrote:
From: Yunfei Wang
The single memory zone feature will remove ZONE_DMA32 and ZONE_DMA. So add
the quirk IO_PGTABLE_QUIRK_ARM_MTK_TTBR_EXT to let the level 1 and level 2
pgtables support at most a 35-bit PA.
I'm not sure how this works in practice, given tha
On 2022-06-15 17:12, yf.wang--- via iommu wrote:
From: Yunfei Wang
Rename MTK_IOMMU_TLB_ADDR to MTK_IOMMU_ADDR, and update the MTK_IOMMU_ADDR
definition for better generality.
Signed-off-by: Ning Li
Signed-off-by: Yunfei Wang
---
drivers/iommu/mtk_iommu.c | 8
1 file changed, 4 inser
On 2022-06-15 17:12, yf.w...@mediatek.com wrote:
From: Yunfei Wang
The single memory zone feature will remove ZONE_DMA32 and ZONE_DMA and
cause the pgtable PA size to be larger than 32 bits.
Since MediaTek IOMMU hardware supports at most a 35-bit PA in the pgtable,
add a quirk to allow the PA of pgtables to support u
From: Yunfei Wang
The single memory zone feature will remove ZONE_DMA32 and ZONE_DMA and
cause the pgtable PA size to be larger than 32 bits.
Since MediaTek IOMMU hardware supports at most a 35-bit PA in the pgtable,
add a quirk to allow the PA of pgtables to support up to bit 35.
Signed-off-by: Ning Li
Signed-off-by:
From: Yunfei Wang
The single memory zone feature will remove ZONE_DMA32 and ZONE_DMA. So add
the quirk IO_PGTABLE_QUIRK_ARM_MTK_TTBR_EXT to let the level 1 and level 2
pgtables support at most a 35-bit PA.
Signed-off-by: Ning Li
Signed-off-by: Yunfei Wang
---
drivers/iommu/mtk_iommu.c | 14 +-
From: Yunfei Wang
Rename MTK_IOMMU_TLB_ADDR to MTK_IOMMU_ADDR, and update the MTK_IOMMU_ADDR
definition for better generality.
Signed-off-by: Ning Li
Signed-off-by: Yunfei Wang
---
drivers/iommu/mtk_iommu.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/iommu/
Add an EREMOTEIO error return to dma_map_sgtable(), which will be used
by .map_sg() implementations that detect P2PDMA pages that the
underlying DMA device cannot access.
Signed-off-by: Logan Gunthorpe
Reviewed-by: Jason Gunthorpe
---
kernel/dma/mapping.c | 4 +++-
1 file changed, 3 insertions(+),
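A hedged example of how a caller might differentiate the new error code
(disable_p2pdma_path() is a hypothetical fallback, purely illustrative):

	ret = dma_map_sgtable(dev, sgt, DMA_TO_DEVICE, 0);
	if (ret) {
		if (ret == -EREMOTEIO)
			/* P2PDMA pages this device cannot reach */
			disable_p2pdma_path();
		return ret;
	}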
Add iov_iter_get_pages_flags() and iov_iter_get_pages_alloc_flags()
which take a flags argument that is passed to get_user_pages_fast().
This is so that FOLL_PCI_P2PDMA can be passed when appropriate.
Signed-off-by: Logan Gunthorpe
---
include/linux/uio.h | 6 ++
lib/iov_iter.c | 25 +
pci_p2pdma_map_type() will be needed by the dma-iommu map_sg
implementation because it needs to determine the mapping type
ahead of actually creating the IOMMU mapping.
Prototypes for this helper are added to dma-map-ops.h as they are only
useful to dma map implementa
Introduce a supports_pci_p2pdma() operation in nvme_ctrl_ops to
replace the fixed NVME_F_PCI_P2PDMA flag such that the dma_map_ops
flags can be checked for PCI P2PDMA support.
Signed-off-by: Logan Gunthorpe
Reviewed-by: Chaitanya Kulkarni
---
drivers/nvme/host/core.c | 3 ++-
drivers/nvme/host
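For the PCI transport, the op might reduce to a sketch like this (assuming
the dma_pci_p2pdma_supported() helper introduced later in this series):

static bool nvme_pci_supports_pci_p2pdma(struct nvme_ctrl *ctrl)
{
	struct nvme_dev *dev = to_nvme_dev(ctrl);

	return dma_pci_p2pdma_supported(dev->dev);
}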
The dma_map operations now support P2PDMA pages directly. So remove
the calls to pci_p2pdma_[un]map_sg_attrs() and replace them with calls
to dma_map_sgtable().
dma_map_sgtable() returns more complete error codes than dma_map_sg()
and allows differentiating EREMOTEIO errors in case an unsupported
When a PCI P2PDMA page is seen, set the IOVA length of the segment
to zero so that it is not mapped into the IOVA. Then, in finalise_sg(),
apply the appropriate bus address to the segment. The IOVA is not
created if the scatterlist only consists of P2PDMA pages.
A P2PDMA page may have three possib
Consecutive zone device pages should not be merged into the same sgl
or bvec segment with other types of pages or if they belong to different
pgmaps. Otherwise getting the pgmap of a given segment is not possible
without scanning the entire segment. This helper returns true if either
both pages are
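A sketch of such a helper based on the description above (exact field and
function names are assumptions):

static inline bool zone_device_pages_have_same_pgmap(const struct page *a,
						     const struct page *b)
{
	if (is_zone_device_page(a) != is_zone_device_page(b))
		return false;	/* mixed types never share a segment */

	if (!is_zone_device_page(a))
		return true;	/* two normal pages are always fine */

	return a->pgmap == b->pgmap;
}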
Add PCI P2PDMA support for dma_direct_map_sg() so that it can map
PCI P2PDMA pages directly without a hack in the callers. This allows
for heterogeneous SGLs that contain both P2PDMA and regular pages.
A P2PDMA page may have three possible outcomes when being mapped:
1) If the data path between
Make use of the third free LSB in scatterlist's page_link on 64-bit systems.
The extra bit will be used by dma_[un]map_sg_p2pdma() to determine when a
given SGL segment's dma_address points to a PCI bus address.
dma_unmap_sg_p2pdma() will need to perform different cleanup when a
segment is marked as
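With bits 0 and 1 of page_link already taken by SG_CHAIN and SG_END, the
new flag might look like this sketch (macro names are assumptions):

#define SG_DMA_BUS_ADDRESS	0x04UL	/* third LSB, 64-bit only */

static inline bool sg_is_dma_bus_address(struct scatterlist *sg)
{
	return sg->page_link & SG_DMA_BUS_ADDRESS;
}

static inline void sg_dma_mark_bus_address(struct scatterlist *sg)
{
	sg->page_link |= SG_DMA_BUS_ADDRESS;
}

static inline void sg_dma_unmark_bus_address(struct scatterlist *sg)
{
	sg->page_link &= ~SG_DMA_BUS_ADDRESS;
}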
Add pci_p2pdma_map_segment() as a helper for simple dma_map_sg()
implementations. It takes an scatterlist segment that must point to a
pci_p2pdma struct page and will map it if the mapping requires a bus
address.
The return value indicates whether the mapping required a bus address
or whether the
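A hedged sketch of how a simple .map_sg loop could consume the helper
(enum values as described in this series; surrounding variables assumed):

	for_each_sg(sgl, sg, nents, i) {
		if (is_pci_p2pdma_page(sg_page(sg))) {
			switch (pci_p2pdma_map_segment(&p2pdma_state, dev,
						       sg)) {
			case PCI_P2PDMA_MAP_BUS_ADDR:
				continue; /* dma_address already set */
			case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
				break;	  /* map as a normal page */
			default:
				ret = -EREMOTEIO;
				goto out_unmap;
			}
		}
		/* ... normal per-segment mapping ... */
	}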
When a bio's queue supports PCI P2PDMA, set FOLL_PCI_P2PDMA for
iov_iter_get_pages_flags(). This allows PCI P2PDMA pages to be
passed from userspace and enables the NVMe passthru requests to
use P2PDMA pages.
Signed-off-by: Logan Gunthorpe
---
block/blk-map.c | 7 ++-
1 file changed, 6 inser
GUP callers that expect PCI P2PDMA pages can now set FOLL_PCI_P2PDMA to
allow obtaining P2PDMA pages. If GUP is called without the flag and a
P2PDMA page is found, it will return an error.
FOLL_PCI_P2PDMA cannot be set if FOLL_LONGTERM is set.
Signed-off-by: Logan Gunthorpe
---
include/linux/mm
This interface is superseded by support in dma_map_sg() which now supports
heterogeneous scatterlists. There are no longer any users, so remove it.
Signed-off-by: Logan Gunthorpe
Acked-by: Bjorn Helgaas
Reviewed-by: Jason Gunthorpe
Reviewed-by: Max Gurtovoy
---
drivers/pci/p2pdma.c | 66
Introduce pci_mmap_p2pmem() which is a helper to allocate and mmap
a hunk of p2pmem into userspace.
Pages are allocated from the genalloc in bulk with their reference
count set to one. They are returned to the genalloc when the page is put
through p2pdma_page_free() (the reference count is once ag
Attempt to find the mapping type for P2PDMA pages on the first
DMA map attempt if it has not been done ahead of time.
Previously, the mapping type was expected to be calculated ahead of
time, but if pages are to come from userspace then there's no
way to ensure the path was checked ahead of time.
Hi,
This patchset continues my work to add userspace P2PDMA access using
O_DIRECT NVMe devices. This posting cleans up the way the pages are
stored in the VMA and relies on proper reference counting that was
fixed up recently in the kernel. The new method uses vm_insert_page()
in pci_mmap_p2pmem()
Add a flags member to the dma_map_ops structure with one flag to
indicate support for PCI P2PDMA.
Also, add a helper to check if a device supports PCI P2PDMA.
Signed-off-by: Logan Gunthorpe
Reviewed-by: Jason Gunthorpe
---
include/linux/dma-map-ops.h | 10 ++
include/linux/dma-mapping.
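A sketch of the flag and helper (names per this series' description; treat
this as an outline rather than the exact patch):

#define DMA_F_PCI_P2PDMA_SUPPORTED	(1 << 0)

struct dma_map_ops {
	unsigned int flags;
	/* ... existing callbacks ... */
};

static inline bool dma_pci_p2pdma_supported(struct device *dev)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);

	/* NULL ops means dma-direct, which handles P2PDMA itself. */
	return !ops || (ops->flags & DMA_F_PCI_P2PDMA_SUPPORTED);
}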
Introduce the helper function ib_dma_pci_p2p_dma_supported() to check
if a given ib_device can be used in P2PDMA transfers. This ensures
the ib_device is not using virt_dma and also that the underlying
dma_device supports P2PDMA.
Use the new helper in nvme-rdma to replace the existing check for
ib
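The helper might reduce to a sketch like this (assuming the
dma_pci_p2pdma_supported() helper added earlier in the series):

static inline bool ib_dma_pci_p2p_dma_supported(struct ib_device *dev)
{
	/* Devices using virt DMA cannot do P2PDMA transfers at all. */
	if (ib_uses_virt_dma(dev))
		return false;

	return dma_pci_p2pdma_supported(dev->dma_device);
}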
Allow userspace to obtain CMB memory by mmapping the controller's
char device. The mmap call allocates and returns a hunk of CMB memory,
(the offset is ignored) so userspace does not have control over the
address within the CMB.
A VMA allocated in this way will only be usable by drivers that set
FO
When a bio's queue supports PCI P2PDMA, set FOLL_PCI_P2PDMA for
iov_iter_get_pages_flags(). This allows PCI P2PDMA pages to be passed
from userspace and enables the O_DIRECT path in iomap based filesystems
and direct to block devices.
Signed-off-by: Logan Gunthorpe
---
block/bio.c | 8 +++-
dma_map_sg() now supports the use of P2PDMA pages so pci_p2pdma_map_sg()
is no longer necessary and may be dropped. This means the
rdma_rw_[un]map_sg() helpers are no longer necessary. Remove it all.
Signed-off-by: Logan Gunthorpe
Reviewed-by: Jason Gunthorpe
---
drivers/infiniband/core/rw.c |
On Fri, Jun 10, 2022 at 01:37:20PM +0800, Baolu Lu wrote:
> On 2022/6/9 20:49, Jason Gunthorpe wrote:
> > > +void iommu_free_pgtbl_pages(struct iommu_domain *domain,
> > > + struct list_head *pages)
> > > +{
> > > + struct page *page, *next;
> > > +
> > > + if (!domain->concurre
Hi,
On Fri, Jun 10, 2022 at 08:34:23PM +0800, Zhangfei Gao wrote:
> The uacce parent's module can be removed while uacce is working,
> which may cause trouble.
>
> If rmmod/uacce_remove happens just after fops_open: bind_queue,
> the uacce_remove cannot remove the bound queue since it is not
> a
On 15/06/2022 11:57, Robin Murphy wrote:
> On 2022-06-15 10:53, Steven Price wrote:
>> On 18/04/2022 01:49, Lu Baolu wrote:
>>> Multiple devices may be placed in the same IOMMU group because they
>>> cannot be isolated from each other. These devices must either be
>>> entirely under kernel control
On Wed, Jun 15, 2022 at 09:38:35AM +0800, Baolu Lu wrote:
> On 2022/6/15 05:12, Steve Wahl wrote:
> > On Tue, Jun 14, 2022 at 12:01:45PM -0700, Jerry Snitselaar wrote:
> > > On Tue, Jun 14, 2022 at 11:45:35AM -0500, Steve Wahl wrote:
> > > > On Tue, Jun 14, 2022 at 10:21:29AM +0800, Baolu Lu wrote:
On 15 June 2022 15:17:00 CEST, Christoph Hellwig wrote:
>On Wed, Jun 15, 2022 at 02:15:33PM +0100, Robin Murphy wrote:
>> Put simply, if you want to call dma_map_single() on a buffer, then that
>> buffer needs to be allocated with kmalloc() (or technically alloc_pages(),
>> but then dma_map_pa
On Tue, 2022-06-14 at 13:56 +0100, Will Deacon wrote:
> Hi,
>
> For some reason, this series has landed in my spam folder so
> apologies
> for the delay :/
>
>
> > +static arm_v7s_iopte paddr_to_iopte(phys_addr_t paddr, int lvl,
> > + struct io_pgtable_cfg *cfg)
> >
On Wed, Jun 15, 2022 at 02:15:33PM +0100, Robin Murphy wrote:
> Put simply, if you want to call dma_map_single() on a buffer, then that
> buffer needs to be allocated with kmalloc() (or technically alloc_pages(),
> but then dma_map_page() would make more sense when dealing with entire
> pages.
On 2022-06-15 13:11, Frank Wunderlich wrote:
Hi,
I have upported a wifi driver (mt6625l for armhf) for some time and now
(at least on 5.18) run into the
"rejecting DMA map of vmalloc memory" error [1].
Maybe somebody here can guide me on how to nail it down and maybe fix it.
As far as I have debug
On 2022/6/15 14:22, Tian, Kevin wrote:
From: Baolu Lu
Sent: Tuesday, June 14, 2022 3:21 PM
On 2022/6/14 14:49, Tian, Kevin wrote:
From: Lu Baolu
Sent: Tuesday, June 14, 2022 10:51 AM
The disable_dmar_iommu() is called when IOMMU initialization fails or
the IOMMU is hot-removed from the system
On 2022/6/15 14:13, Tian, Kevin wrote:
From: Baolu Lu
Sent: Wednesday, June 15, 2022 9:54 AM
On 2022/6/14 14:43, Tian, Kevin wrote:
From: Lu Baolu
Sent: Tuesday, June 14, 2022 10:51 AM
The domain_translation_struct debugfs node is used to dump the DMAR
page tables for the PCI devices. It poten
Il 15/06/22 14:09, Matthias Brugger ha scritto:
On 09/06/2022 12:08, AngeloGioacchino Del Regno wrote:
On some SoCs (of which only MT8195 is supported at the time of writing),
the "R" and "W" (I/O) enable bits for the IOMMUs are in the pericfg_ao
register space and not in the IOMMU space: as i
Hi,
I have upported a wifi driver (mt6625l for armhf) for some time and now
(at least on 5.18) run into the
"rejecting DMA map of vmalloc memory" error [1].
Maybe somebody here can guide me on how to nail it down and maybe fix it.
As far as I have debug
On 09/06/2022 12:08, AngeloGioacchino Del Regno wrote:
On some SoCs (of which only MT8195 is supported at the time of writing),
the "R" and "W" (I/O) enable bits for the IOMMUs are in the pericfg_ao
register space and not in the IOMMU space: as it happened already with
infracfg, it is expected
On 2022-06-15 10:53, Steven Price wrote:
On 18/04/2022 01:49, Lu Baolu wrote:
Multiple devices may be placed in the same IOMMU group because they
cannot be isolated from each other. These devices must either be
entirely under kernel control or userspace control, never a mixture.
This adds dma o
On Wed, 15 Jun 2022 at 02:01, Emma Anholt wrote:
>
> Required for turning on per-process page tables for the GPU.
>
> Signed-off-by: Emma Anholt
Reviewed-by: Dmitry Baryshkov
> ---
>
> drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/drivers/
On Wed, 15 Jun 2022 at 02:01, Emma Anholt wrote:
>
> This is an SMMU for the adreno gpu, and adding this compatible lets
> the driver use per-fd page tables, which are required for security
> between GPU clients.
>
> Signed-off-by: Emma Anholt
> ---
>
> Tested with a full deqp-vk run on RB5, whic
From: Jon Nettleton
Check if there is any RMR info associated with the devices behind
the SMMU and if any, install bypass SMRs for them. This is to
keep any ongoing traffic associated with these devices alive
when we enable/reset SMMU during probe().
Signed-off-by: Jon Nettleton
Signed-off-by:
Check if there is any RMR info associated with the devices behind
the SMMUv3 and if any, install bypass STEs for them. This is to
keep any ongoing traffic associated with these devices alive
when we enable/reset SMMUv3 during probe().
Tested-by: Hanjun Guo
Signed-off-by: Shameer Kolothum
---
dr
By default, the disable_bypass flag is set and any device without
an IOMMU domain installs an STE with CFG_ABORT during
arm_smmu_init_bypass_stes(). Introduce a "force" flag and
move the STE update logic to arm_smmu_init_bypass_stes()
so that we can force it to install a CFG_BYPASS STE for specific
SIDs.
This
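A sketch of the reworked initializer (close to, but not necessarily
identical to, the actual patch):

static void arm_smmu_init_bypass_stes(__le64 *strtab, unsigned int nent,
				      bool force)
{
	unsigned int i;
	u64 val = STRTAB_STE_0_V;

	if (disable_bypass && !force)
		val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_ABORT);
	else
		val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_BYPASS);

	for (i = 0; i < nent; ++i) {
		strtab[0] = cpu_to_le64(val);
		strtab[1] = 0;
		strtab += STRTAB_STE_DWORDS;
	}
}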
This will provide a way for SMMU drivers to retrieve StreamIDs
associated with IORT RMR nodes and use that to set bypass settings
for those IDs.
Tested-by: Steven Price
Tested-by: Laurentiu Tudor
Tested-by: Hanjun Guo
Reviewed-by: Hanjun Guo
Signed-off-by: Shameer Kolothum
---
drivers/acpi/a
Introduce a helper to check the SID range and to init the l2 strtab
entries (bypass). This will be useful when we have to initialize the
l2 strtab with bypass for RMR SIDs.
Tested-by: Hanjun Guo
Acked-by: Will Deacon
Signed-off-by: Shameer Kolothum
---
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.
Parse through the IORT RMR nodes and populate the reserve region list
corresponding to a given IOMMU and device (optional). Also, go through
the ID mappings of the RMR node and retrieve all the SIDs associated
with it.
Reviewed-by: Lorenzo Pieralisi
Tested-by: Steven Price
Tested-by: Laurentiu Tu
Currently IORT provides a helper to retrieve HW MSI reserve regions.
Change this to a generic helper to retrieve any IORT related reserve
regions. This will be useful when we add support for RMR nodes in
subsequent patches.
[Lorenzo: For ACPI IORT]
Reviewed-by: Lorenzo Pieralisi
Reviewed-by: Chri
At present, iort_iommu_msi_get_resv_regions() returns the number of
MSI reserved regions on success, but there are no users of this value.
The reserved region list will get populated anyway for platforms
that require the HW MSI region reservation. Hence, change the
function to return void instead.
Review
A callback is introduced to struct iommu_resv_region to free memory
allocations associated with the reserved region. This will be useful
when we introduce support for IORT RMR based reserved regions.
Reviewed-by: Christoph Hellwig
Tested-by: Steven Price
Tested-by: Laurentiu Tudor
Tested-by: Ha
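The struct change is small; a sketch (existing field layout per mainline,
with the new callback as described above):

struct iommu_resv_region {
	struct list_head	list;
	phys_addr_t		start;
	size_t			length;
	int			prot;
	enum iommu_resv_type	type;
	void (*free)(struct device *dev, struct iommu_resv_region *region);
};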
Hi
v12 --> v13
-No changes. Rebased to 5.19-rc1.
-Picked up tags received from Laurentiu, Hanjun and Will. Thanks!
Thanks,
Shameer
From old:
We have faced issues with 3408iMR RAID controller cards which
fail to boot when SMMU is enabled. This is because these
controllers make use of host me
On 18/04/2022 01:49, Lu Baolu wrote:
> Multiple devices may be placed in the same IOMMU group because they
> cannot be isolated from each other. These devices must either be
> entirely under kernel control or userspace control, never a mixture.
>
> This adds dma ownership management in iommu core
> From: Nicolin Chen
> Sent: Wednesday, June 15, 2022 4:45 AM
>
> Hi Kevin,
>
> > > > > On Wed, Jun 08, 2022 at 11:48:27PM +0000, Tian, Kevin wrote:
> > > > > The KVM mechanism for controlling wbinvd is only triggered during
> > > > > kvm_vfio_group_add(), meaning it is a one-shot test done once the
> >