Add mt8186 iommu support.
Signed-off-by: Anan Sun
Signed-off-by: Yong Wu
---
drivers/iommu/mtk_iommu.c | 17 +
1 file changed, 17 insertions(+)
diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
index d9ca9ffe404c..174a2f3bd68a 100644
--- a/drivers/iommu/mtk_io
Add mt8186 iommu binding. "-mm" means the iommu is for Multimedia.
Signed-off-by: Yong Wu
Acked-by: Krzysztof Kozlowski
Reviewed-by: Rob Herring
---
.../bindings/iommu/mediatek,iommu.yaml| 4 +
.../dt-bindings/memory/mt8186-memory-port.h | 217 ++
2 files changed, 2
This patchset adds mt8186 iommu support.
Base on v5.17-rc1 and mt8195 iommu v5[1].
[1]
https://lore.kernel.org/linux-mediatek/20220217113453.13658-1-yong...@mediatek.com/
Change note:
v2: a) Add a comment "mm: m4u" in the code for readability.
v1:
https://lore.kernel.org/linux-mediatek/202201250
The VT-d spec requires (10.4.4 Global Command Register, TE
field) that:
Hardware implementations supporting DMA draining must drain
any in-flight DMA read/write requests queued within the
Root-Complex before completing the translation enable
command and reflecting the status of the command through
On Tue, Feb 22, 2022 at 09:30:30PM +, Robin Murphy wrote:
> > But the pattern that this copies in arm_smmu_bus_init is really
> > ugly. I think we need to figure out a way to do that without having
> > to export all the low-level bus types.
>
> Yup, as it happens that was the first step on my
On 2/23/22 7:53 AM, Jason Gunthorpe wrote:
To spell it out, the scheme I'm proposing looks like this:
Well, I already got this, it is what is in driver_or_DMA_API_token()
that matters
I think you are suggesting to do something like:
if (!READ_ONCE(dev->driver) || ???)
return NULL;
From: Zi Yan
has_unmovable_pages() is only used in mm/page_isolation.c. Move it from
mm/page_alloc.c to mm/page_isolation.c and make it static.
Signed-off-by: Zi Yan
Reviewed-by: Oscar Salvador
Reviewed-by: Mike Rapoport
---
include/linux/page-isolation.h | 2 -
mm/page_alloc.c| 119 -
From: Zi Yan
alloc_contig_range() now only needs to be aligned to pageblock_order,
so drop the virtio_mem size requirement that it be the max of
pageblock_order and MAX_ORDER.
Signed-off-by: Zi Yan
---
drivers/virtio/virtio_mem.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
d
From: Zi Yan
alloc_contig_range() worked at MAX_ORDER-1 granularity to avoid merging
pageblocks with different migratetypes. It might unnecessarily convert
extra pageblocks at the beginning and at the end of the range. Change
alloc_contig_range() to work at pageblock granularity.
Special handlin
From: Zi Yan
Enable set_migratetype_isolate() to check specified sub-range for
unmovable pages during isolation. Page isolation is done
at max(MAX_ORDER_NR_PAGES, pageblock_nr_pages) granularity, but not all
pages within that granularity are intended to be isolated. For example,
alloc_contig_rang
From: Zi Yan
Hi all,
This patchset tries to remove the MAX_ORDER-1 alignment requirement for CMA
and alloc_contig_range(). It prepares for my upcoming changes to make
MAX_ORDER adjustable at boot time[1]. It is on top of mmotm-2022-02-14-17-46.
Changelog
===
V6
---
1. Resolved compilation error
From: Zi Yan
Now alloc_contig_range() works at pageblock granularity. Change CMA
allocation, which uses alloc_contig_range(), to use pageblock_order
alignment.
Signed-off-by: Zi Yan
---
include/linux/cma.h| 4 ++--
include/linux/mmzone.h | 5 +
mm/page_alloc.c| 4 ++--
3 files
On Tue, Feb 22, 2022 at 09:18:23PM +, Robin Murphy wrote:
> > Still not sure I see what you are thinking though..
>
> What part of "How hard is it to hold group->mutex when reading or writing
> group->owner?" sounded like "complex lockless algorithm", exactly?
group->owner is not the issue,
On 2022-02-22 16:21, Christoph Hellwig wrote:
On Fri, Feb 18, 2022 at 01:39:45PM +0200, Mikko Perttunen wrote:
The context bus is a "dummy" bus that contains struct devices that
correspond to IOMMU contexts assigned through Host1x to processes.
Even when host1x itself is built as a module, the
On 2022-02-22 15:16, Jason Gunthorpe wrote:
On Tue, Feb 22, 2022 at 10:58:37AM +, Robin Murphy wrote:
On 2022-02-21 23:48, Jason Gunthorpe wrote:
On Mon, Feb 21, 2022 at 08:43:33PM +, Robin Murphy wrote:
On 2022-02-19 07:32, Christoph Hellwig wrote:
So we are back to the callback madn
dmar_ats_supported() is defined in and only used by iommu.c,
so declare it as a static function and move the
code accordingly.
Signed-off-by: Yian Chen
---
drivers/iommu/intel/iommu.c | 164 ++--
include/linux/intel-iommu.h | 1 -
2 files changed, 82 inserti
The devices in the BIOS SATC (SoC Integrated Address Translation Cache)
table are all trusted to use ATS. This patch set enables
ATS for them.
---
v2:
- Use dmar_find_matched_satc_unit() to avoid hard coded
return value.
- add static declaration for dmar_ats_supported()
(the functi
Starting from Intel VT-d v3.2, Intel platform BIOS can provide
additional SATC table structure. SATC table includes a list of
SoC integrated devices that support ATC (Address translation
cache).
Enabling ATC (via ATS capability) can be a functional requirement
for SATC device operation or an optio
On Fri, Feb 18, 2022 at 01:39:46PM +0200, Mikko Perttunen wrote:
> +
> +/*
> + * Due to an issue with T194 NVENC, only 38 bits can be used.
> + * Anyway, 256GiB of IOVA ought to be enough for anyone.
> + */
> +static dma_addr_t context_device_dma_mask = DMA_BIT_MASK(38);
You need a mask per device
On Fri, Feb 18, 2022 at 01:39:45PM +0200, Mikko Perttunen wrote:
> The context bus is a "dummy" bus that contains struct devices that
> correspond to IOMMU contexts assigned through Host1x to processes.
>
> Even when host1x itself is built as a module, the bus is registered
> in built-in code so t
On Tue, Feb 22, 2022 at 11:07:19PM +0800, Tianyu Lan wrote:
> Thanks for your comment. That means we need to expose an
> swiotlb_device_init() interface to allocate bounce buffer and initialize
> io tlb mem entry in the DMA API. Current rmem_swiotlb_device_init() only works
> for platforms with device tr
gets pulled in by all drivers using the DMA API.
Remove x86 internal variables and unnecessary includes from it.
Signed-off-by: Christoph Hellwig
---
arch/x86/include/asm/dma-mapping.h | 11 ---
arch/x86/include/asm/iommu.h | 2 ++
2 files changed, 2 insertions(+), 11 deletions(-
Allow passing a remap argument to the swiotlb initialization functions
to handle the Xen/x86 remap case. ARM/ARM64 never did any remapping
from xen_swiotlb_fixup, so we don't even need that quirk.
Signed-off-by: Christoph Hellwig
---
arch/arm/xen/mm.c | 23 +++---
arch/x86/includ
Power SVM wants to allocate a swiotlb buffer that is not restricted to
low memory for the trusted hypervisor scheme. Consolidate the support
for this into the swiotlb_init interface by adding a new flag.
Signed-off-by: Christoph Hellwig
---
arch/powerpc/include/asm/svm.h | 4
arch/p
Pass a bool to indicate whether swiotlb needs to be enabled based on the
addressing needs and replace the verbose argument with a set of
flags, including one to force enable bounce buffering.
Note that this patch removes the possibility to force xen-swiotlb
use with swiotlb=force on the command line on x8
The IOMMU table tries to separate the different IOMMUs into different
backends, but actually requires various cross calls.
Rewrite the code to do the generic swiotlb/swiotlb-xen setup directly
in pci-dma.c and then just call into the IOMMU drivers.
Signed-off-by: Christoph Hellwig
---
arch/ia64
Use the generic swiotlb initialization helper instead of open coding it.
Signed-off-by: Christoph Hellwig
---
arch/mips/cavium-octeon/dma-octeon.c | 15 ++-
arch/mips/pci/pci-octeon.c | 2 +-
2 files changed, 3 insertions(+), 14 deletions(-)
diff --git a/arch/mips/cavium-
Let the caller choose a zone to allocate from.
Signed-off-by: Christoph Hellwig
---
arch/x86/pci/sta2x11-fixup.c | 2 +-
include/linux/swiotlb.h | 2 +-
kernel/dma/swiotlb.c | 4 ++--
3 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/x86/pci/sta2x11-fixup.c b/arch/x
swiotlb_late_init_with_default_size is an overly verbose name that
doesn't even catch what the function is doing, given that the size is
not just a default but the actual requested size.
Rename it to swiotlb_init_late.
Signed-off-by: Christoph Hellwig
---
arch/x86/pci/sta2x11-fixup.c | 2 +-
in
Remove the bogus Xen override that was usually larger than the actual
size and just calculate the value on demand. Note that
swiotlb_max_segment still doesn't make sense as an interface and should
eventually be removed.
Signed-off-by: Christoph Hellwig
---
drivers/xen/swiotlb-xen.c | 2 --
inc
If force bouncing is enabled, we can't release the buffers.
Signed-off-by: Christoph Hellwig
---
kernel/dma/swiotlb.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index f1e7ea160b433..36fbf1181d285 100644
--- a/kernel/dma/swiotlb.c
+++ b/kern
Hi all,
this series tries to clean up the swiotlb initialization, including
that of swiotlb-xen. To get there it also removes the x86 iommu table
infrastructure that massively obfuscates the initialization path.
Git tree:
git://git.infradead.org/users/hch/misc.git swiotlb-init-cleanup
Gitw
Use the more specific is_swiotlb_active check instead of checking the
global swiotlb_force variable.
Signed-off-by: Christoph Hellwig
---
kernel/dma/direct.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 4632b0f4f72eb..4dc16e0
On Tue, Feb 22, 2022 at 10:58:37AM +, Robin Murphy wrote:
> On 2022-02-21 23:48, Jason Gunthorpe wrote:
> > On Mon, Feb 21, 2022 at 08:43:33PM +, Robin Murphy wrote:
> > > On 2022-02-19 07:32, Christoph Hellwig wrote:
> > > > So we are back to the callback madness instead of the nice and si
On 2/22/2022 4:05 PM, Christoph Hellwig wrote:
On Mon, Feb 21, 2022 at 11:14:58PM +0800, Tianyu Lan wrote:
Sorry. The boot failure is not related with these patches and the issue
has been fixed in the latest upstream code.
There is a performance bottleneck due to io tlb mem's spin lock durin
+
static irqreturn_t hisi_ptt_irq(int irq, void *context)
{
struct hisi_ptt *hisi_ptt = context;
@@ -169,7 +233,7 @@ static irqreturn_t hisi_ptt_irq(int irq, void *context)
if (!(status & HISI_PTT_TRACE_INT_STAT_MASK))
return IRQ_NONE;
- return IRQ_HANDLED
On 21/02/2022 08:43, Yicong Yang wrote:
HiSilicon PCIe tune and trace device (PTT) is a PCIe Root Complex
integrated Endpoint (RCiEP) device, providing the capability
to dynamically monitor and tune the PCIe traffic, and trace
the TLP headers.
Add the driver for the device to enable the trace func
22.02.2022 13:54, Mikko Perttunen writes:
> On 2/22/22 12:46, Dmitry Osipenko wrote:
>> 22.02.2022 11:27, Mikko Perttunen writes:
>>> On 2/21/22 22:10, Dmitry Osipenko wrote:
21.02.2022 14:44, Mikko Perttunen writes:
> On 2/19/22 20:54, Dmitry Osipenko wrote:
>> 19.02.2022 21:49, Dmitry O
On 2022-02-21 23:48, Jason Gunthorpe wrote:
On Mon, Feb 21, 2022 at 08:43:33PM +, Robin Murphy wrote:
On 2022-02-19 07:32, Christoph Hellwig wrote:
So we are back to the callback madness instead of the nice and simple
flag? Sigh.
TBH, I *think* this part could be a fair bit simpler. It l
On 2/22/22 12:46, Dmitry Osipenko wrote:
22.02.2022 11:27, Mikko Perttunen writes:
On 2/21/22 22:10, Dmitry Osipenko wrote:
21.02.2022 14:44, Mikko Perttunen writes:
On 2/19/22 20:54, Dmitry Osipenko wrote:
19.02.2022 21:49, Dmitry Osipenko writes:
18.02.2022 14:39, Mikko Perttunen writes:
+sta
22.02.2022 11:27, Mikko Perttunen writes:
> On 2/21/22 22:10, Dmitry Osipenko wrote:
>> 21.02.2022 14:44, Mikko Perttunen writes:
>>> On 2/19/22 20:54, Dmitry Osipenko wrote:
19.02.2022 21:49, Dmitry Osipenko writes:
> 18.02.2022 14:39, Mikko Perttunen writes:
>> +static int vic_get_stream
On 2/21/22 22:02, Dmitry Osipenko wrote:
21.02.2022 15:06, Mikko Perttunen writes:
On 2/19/22 20:35, Dmitry Osipenko wrote:
18.02.2022 14:39, Mikko Perttunen writes:
+ if (context->memory_context &&
context->client->ops->get_streamid_offset) {
^^^
+ int offset =
context
On 2/21/22 22:10, Dmitry Osipenko wrote:
21.02.2022 14:44, Mikko Perttunen writes:
On 2/19/22 20:54, Dmitry Osipenko wrote:
19.02.2022 21:49, Dmitry Osipenko writes:
18.02.2022 14:39, Mikko Perttunen writes:
+static int vic_get_streamid_offset(struct tegra_drm_client *client)
+{
+ struct vic
On Mon, Feb 21, 2022 at 11:14:58PM +0800, Tianyu Lan wrote:
> Sorry. The boot failure is not related with these patches and the issue
> has been fixed in the latest upstream code.
>
> There is a performance bottleneck due to io tlb mem's spin lock during
> performance test. All devices' io queues us