On Mon, 2021-02-08 at 07:10 +0100, Lukas Bulwahn wrote:
> Commit 6af4873852c4 ("MAINTAINERS: Add entry for MediaTek IOMMU") mentions
> the pattern 'drivers/iommu/mtk-iommu*', but the files are actually named
> with an underscore, not with a hyphen.
>
> Hence, ./scripts/get_maintainer.pl --self-tes
Commit 6af4873852c4 ("MAINTAINERS: Add entry for MediaTek IOMMU") mentions
the pattern 'drivers/iommu/mtk-iommu*', but the files are actually named
with an underscore, not with a hyphen.
Hence, ./scripts/get_maintainer.pl --self-test=patterns complains:
warning: no file matches F:drivers/i
> -Original Message-
> From: David Rientjes [mailto:rient...@google.com]
> Sent: Monday, February 8, 2021 3:18 PM
> To: Song Bao Hua (Barry Song)
> Cc: Matthew Wilcox ; Wangzhou (B)
> ; linux-ker...@vger.kernel.org;
> iommu@lists.linux-foundation.org; linux...@kvack.org;
> linux-arm-ker
On 2/4/21 12:31 PM, Anshuman Khandual wrote:
> The following warning gets triggered while trying to boot a kernel built
> with 64K page size and without THP on an arm64 platform.
>
> WARNING: CPU: 5 PID: 124 at mm/vmstat.c:1080 __fragmentation_index+0xa4/0xc0
> Modules linked in:
> CPU: 5 PID: 124 Comm: k
> -Original Message-
> From: owner-linux...@kvack.org [mailto:owner-linux...@kvack.org] On Behalf Of
> Matthew Wilcox
> Sent: Monday, February 8, 2021 2:31 PM
> To: Song Bao Hua (Barry Song)
> Cc: Wangzhou (B) ; linux-ker...@vger.kernel.org;
> iommu@lists.linux-foundation.org; linux...@
On Sun, 7 Feb 2021, Song Bao Hua (Barry Song) wrote:
> The NUMA balancer is just one of many reasons for page migration. Even a
> simple alloc_pages() can cause memory migration within a single NUMA
> node or on a UMA system.
>
> The other reasons for page migration include but are not limited to:
> * me
On Sun, Feb 07, 2021 at 10:24:28PM +, Song Bao Hua (Barry Song) wrote:
> > > In high-performance I/O cases, accelerators might want to perform
> > > I/O on memory without IO page faults, which can result in dramatically
> > > increased latency. Current memory-related APIs could not achieve thi
On 2021/2/5 3:52, Robin Murphy wrote:
> On 2021-01-28 15:17, Keqian Zhu wrote:
>> From: jiangkunkun
>>
>> During dirty log tracking, the user will try to retrieve the dirty log from
>> the iommu if it supports hardware dirty logging. This adds a new interface
[...]
>> static void arm_lpae_restrict_pgsizes(s
> On Feb 7, 2021, at 12:31 AM, Zhou Wang wrote:
>
> SVA (shared virtual addressing) offers a way for a device to share a process's
> virtual address space safely, which makes user-space device driver coding
> more convenient. However, IO page faults may happen when doing DMA
> operations. As the late
> -Original Message-
> From: Matthew Wilcox [mailto:wi...@infradead.org]
> Sent: Monday, February 8, 2021 10:34 AM
> To: Wangzhou (B)
> Cc: linux-ker...@vger.kernel.org; iommu@lists.linux-foundation.org;
> linux...@kvack.org; linux-arm-ker...@lists.infradead.org;
> linux-...@vger.kernel
On Sun, Feb 7, 2021 at 9:18 AM Zhou Wang wrote:
> diff --git a/arch/arm64/include/asm/unistd32.h
> b/arch/arm64/include/asm/unistd32.h
> index cccfbbe..3f49529 100644
> --- a/arch/arm64/include/asm/unistd32.h
> +++ b/arch/arm64/include/asm/unistd32.h
> @@ -891,6 +891,8 @@ __SYSCALL(__NR_faccessa
On Sun, Feb 07, 2021 at 04:18:03PM +0800, Zhou Wang wrote:
> SVA (shared virtual addressing) offers a way for a device to share a process's
> virtual address space safely, which makes user-space device driver coding
> more convenient. However, IO page faults may happen when doing DMA
> operations. As the
The pull request you sent on Sun, 7 Feb 2021 17:26:05 +0100:
> git://git.infradead.org/users/hch/dma-mapping.git tags/dma-mapping-5.11-2
has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/ff92acb220c506f14aea384a07b130b87ac1489a
Thank you!
--
Deet-doot-dot, I am a bot.
Any comments?
On Tue, Feb 02, 2021 at 10:51:03AM +0100, Christoph Hellwig wrote:
> Hi all,
>
> this series adds the new noncontiguous DMA allocation API requested by
> various media driver maintainers.
>
> Changes since v1:
> - document that flush_kernel_vmap_range and invalidate_kernel_vmap_ra
On 06/02/2021 04:02, Suravee Suthikulpanit wrote:
> Tj,
>
> I have posted RFCv3 in the BZ
> https://bugzilla.kernel.org/show_bug.cgi?id=201753.
>
> The RFCv3 patch adds the logic to retry checking after a 20msec wait for each
> retry loop, since I have found that certain platforms take about 10msec
>
The following changes since commit dd86e7fa07a3ec33c92c957ea7b642c4702516a0:
Merge tag 'pci-v5.11-fixes-2' of
git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci (2021-02-04 16:05:40
-0800)
are available in the Git repository at:
git://git.infradead.org/users/hch/dma-mapping.git tags
Lift the double initialization protection from xen-swiotlb to the core
code to avoid exposing too many swiotlb internals. Also upgrade the
check to a warning as it should not happen.
Signed-off-by: Christoph Hellwig
---
drivers/xen/swiotlb-xen.c | 7 ---
kernel/dma/swiotlb.c | 8 ++
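A minimal sketch of the kind of guard the description implies, placed at the top of the core init path (the exact placement and return value are assumptions, not taken from the patch):

    /* sketch: bail out if the bounce buffer was already initialized */
    if (WARN_ON_ONCE(io_tlb_start))
            return -ENOMEM;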
Split xen_swiotlb_init into a normal and an early case. That makes both
much simpler and more readable, and also allows marking the early
code as __init and x86-only.
Signed-off-by: Christoph Hellwig
---
arch/arm/xen/mm.c | 2 +-
arch/x86/xen/pci-swiotlb-xen.c | 4 +-
drivers/xe
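A rough skeleton of the split described above; the function names and bodies are assumptions based on the text, not copied from the diff:

    /* sketch: boot-time variant, can be __init and x86-only */
    void __init xen_swiotlb_init_early(void)
    {
            /* grab the bounce buffer from memblock and register it */
    }

    /* sketch: late variant, usable after boot (e.g. on Arm) */
    int xen_swiotlb_init(void)
    {
            /* allocate the bounce buffer with the page allocator */
            return 0;
    }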
Use the local variable that is passed to swiotlb_init_with_tbl for
freeing the memory in the failure case to isolate the code a little
better from swiotlb internals.
Signed-off-by: Christoph Hellwig
---
arch/powerpc/platforms/pseries/svm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
d
Use the existing variable that holds the physical address for
xen_io_tlb_end to simplify xen_swiotlb_dma_supported a bit, and remove
the otherwise unused xen_io_tlb_end variable and the xen_virt_to_bus
helper.
Signed-off-by: Christoph Hellwig
---
drivers/xen/swiotlb-xen.c | 10 ++
1 file
The xen_io_tlb_start and xen_io_tlb_nslabs variables are now only used in
xen_swiotlb_init, so replace them with local variables.
Signed-off-by: Christoph Hellwig
---
drivers/xen/swiotlb-xen.c | 57 +--
1 file changed, 25 insertions(+), 32 deletions(-)
diff --
Use the is_swiotlb_buffer helper to check if a physical address is
a swiotlb buffer. This works because xen-swiotlb does use the
same buffer as the main swiotlb code, and xen_io_tlb_{start,end}
are just the addresses for it that went through phys_to_virt.
Signed-off-by: Christoph Hellwig
---
drivers/x
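As a sketch, the address check in xen-swiotlb could then collapse to something like this (the dma-to-phys translation helper used here is an assumption):

    /* sketch: the buffers are shared, so the generic helper is enough */
    static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
    {
            return is_swiotlb_buffer(xen_dma_to_phys(dev, dma_addr));
    }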
The xen_set_nslabs function is a little weird, as it has just one
caller; that caller passes a global variable as the argument,
which is then overridden in the function, and a derivative of it is
returned. Just add a cpp symbol for the default size using a readable
constant and open code the remaining
Signed-off-by: Christoph Hellwig
---
drivers/xen/swiotlb-xen.c | 7 +++
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index b2d9e77059bf5a..621a20c1143597 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen
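The "readable constant" could plausibly be spelled like this (the name and the 64MB figure are assumptions based on swiotlb's historical default size):

    /* sketch: default number of bounce buffer slabs, 64MB worth */
    #define DEFAULT_NSLABS  ALIGN(SZ_64M >> IO_TLB_SHIFT, IO_TLB_SEGSIZE)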
Hi Konrad,
this series contains a bunch of swiotlb cleanups, mostly to reduce the
amount of internals exposed to code outside of swiotlb.c, which should
help to prepare for supporting multiple different bounce buffer pools.
From: Jianxiong Gao
The PRP addressing scheme requires all PRP entries except for the
first one to have a zero offset into the NVMe controller pages (which
can be different from the Linux PAGE_SIZE). Use the min_align_mask
device parameter to ensure that swiotlb does not change the address
of th
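On the driver side this presumably boils down to declaring the controller page size as the minimum alignment, roughly as below (the setter comes from the rest of the series; the exact call site in the NVMe driver is an assumption):

    /* sketch: preserve the low bits below the NVMe controller page size
     * when swiotlb bounces a buffer */
    dma_set_min_align_mask(dev, NVME_CTRL_PAGE_SIZE - 1);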
Respect the min_align_mask in struct device_dma_parameters in swiotlb.
There are two parts to it:
1) for the lower bits of the alignment inside the io tlb slot, just
extend the size of the allocation and leave the start of the slot
empty
2) for the high bits ensure we find a slot that m
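A minimal sketch of part 1, assuming the series also provides a dma_get_min_align_mask() accessor; the variable names are illustrative only:

    /* sketch: low bits that must survive the bounce, limited to one slot */
    unsigned int offset = orig_addr & dma_get_min_align_mask(dev) &
                          (IO_TLB_SIZE - 1);

    /* grow the allocation and shift the returned address by that offset */
    alloc_size = size + offset;
    tlb_addr = slot_addr + offset;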
swiotlb_tbl_map_single currently never sets a tlb_addr that is not
aligned to the tlb bucket size. But we're going to add such a case
soon, for which this adjustment would be bogus.
Signed-off-by: Christoph Hellwig
---
kernel/dma/swiotlb.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/ker
Split out a bunch of self-contained helpers to make the function easier
to follow.
Signed-off-by: Christoph Hellwig
---
kernel/dma/swiotlb.c | 179 +--
1 file changed, 89 insertions(+), 90 deletions(-)
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swio
Remove a layer of pointless indentation and replace a hard-to-follow
ternary expression with a plain if/else.
Signed-off-by: Christoph Hellwig
---
kernel/dma/swiotlb.c | 41 +
1 file changed, 21 insertions(+), 20 deletions(-)
diff --git a/kernel/dma/swiotlb.
Factor out a helper to find the number of slots for a given size.
Signed-off-by: Christoph Hellwig
---
kernel/dma/swiotlb.c | 13 +
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 7705821dcdbd27..9492219b0743ae 100644
--
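Such a helper is presumably a one-liner along these lines (name and body are assumptions based on the description):

    /* sketch: number of IO TLB slots needed to cover "val" bytes */
    static inline unsigned long nr_slots(u64 val)
    {
            return DIV_ROUND_UP(val, IO_TLB_SIZE);
    }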
Hi all,
this series makes NVMe happy when running with swiotlb. This caters
to completely broken NVMe controllers that ignore the
specification (hello to the biggest cloud provider on the planet!),
to crappy SoCs that have addressing limitations, or "secure"
virtualization that forces bounce
Add a new IO_TLB_SIZE define instead of open coding it using
IO_TLB_SHIFT all over.
Signed-off-by: Christoph Hellwig
---
include/linux/swiotlb.h | 1 +
kernel/dma/swiotlb.c| 14 +++---
2 files changed, 8 insertions(+), 7 deletions(-)
diff --git a/include/linux/swiotlb.h b/include/linu
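The new define is presumably just the existing shift spelled out as a size, something like:

    /* sketch: one IO TLB slot is 2^IO_TLB_SHIFT bytes (2KB with the
     * historical IO_TLB_SHIFT of 11) */
    #define IO_TLB_SIZE     (1 << IO_TLB_SHIFT)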
From: Jianxiong Gao
Some devices rely on the address offset in a page to function
correctly (NVMe driver as an example). These devices may use
a different page size than the Linux kernel. The address offset
has to be preserved upon mapping, and in order to do so, we
need to record the page_offset
Replace the very generically named OFFSET macro with a little inline
helper that hardcodes the alignment to the only value ever passed.
Signed-off-by: Christoph Hellwig
---
kernel/dma/swiotlb.c | 20 +---
1 file changed, 13 insertions(+), 7 deletions(-)
diff --git a/kernel/dma/swi
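A before/after sketch of that change; the helper name is an assumption, and only the idea of hardcoding the IO_TLB_SIZE alignment comes from the description:

    /* before: generic macro, only ever called with IO_TLB_SIZE */
    #define OFFSET(val, align)      ((val) & ((align) - 1))

    /* after (sketch): inline helper with the alignment hardcoded */
    static inline unsigned int swiotlb_align_offset(u64 val)
    {
            return val & (IO_TLB_SIZE - 1);
    }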
On Thu, Feb 04, 2021 at 09:40:23AM +0100, Christoph Hellwig wrote:
> So one thing that has been on my mind for a while: I'd really like
> to kill the separate dma ops in Xen swiotlb. If we compare xen-swiotlb
> to swiotlb the main difference seems to be:
>
> - additional reasons to bounce I/O v
Hi Robin,
On 2021/2/5 3:52, Robin Murphy wrote:
> On 2021-01-28 15:17, Keqian Zhu wrote:
>> From: jiangkunkun
>>
>> During dirty log tracking, the user will try to retrieve the dirty log from
>> the iommu if it supports hardware dirty logging. This adds a new interface
>> named sync_dirty_log in iommu layer and
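Purely for illustration, such a callback in the iommu layer might have a shape like the one below; the signature is hypothetical and not taken from the patch:

    /* hypothetical sketch of a dirty-log sync callback in iommu_ops */
    int (*sync_dirty_log)(struct iommu_domain *domain,
                          unsigned long iova, size_t size,
                          unsigned long *bitmap, unsigned long base_iova,
                          unsigned long bitmap_pgshift);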
Hi Robin,
On 2021/2/5 3:52, Robin Murphy wrote:
> On 2021-01-28 15:17, Keqian Zhu wrote:
>> From: jiangkunkun
>>
>> When dirty log tracking stops, we need to recover all block descriptors
>> which were split when dirty log tracking started. This adds a new
>> interface named merge_page in the iommu lay
Hi Yi,
On 2021/2/7 17:56, Yi Sun wrote:
> Hi,
>
> On 21-01-28 23:17:41, Keqian Zhu wrote:
>
> [...]
>
>> +static void vfio_dma_dirty_log_start(struct vfio_iommu *iommu,
>> + struct vfio_dma *dma)
>> +{
>> +struct vfio_domain *d;
>> +
>> +list_for_each_ent
Hi,
On 21-01-28 23:17:41, Keqian Zhu wrote:
[...]
> +static void vfio_dma_dirty_log_start(struct vfio_iommu *iommu,
> + struct vfio_dma *dma)
> +{
> + struct vfio_domain *d;
> +
> + list_for_each_entry(d, &iommu->domain_list, next) {
> + /* Go
SVA (shared virtual addressing) offers a way for a device to share a process's
virtual address space safely, which makes user-space device driver coding
more convenient. However, IO page faults may happen when doing DMA
operations. As the latency of an IO page fault is relatively high, DMA
performance will be
This series adds a new mempinfd syscall to offer a common way to pin/unpin
memory.
Patch 1/2 is about mempinfd codes.
Patch 2/2 adds a simple test tool about mempinfd.
Change logs:
v2 -> v3:
- Follow suggestions from Greg and Kevin, add a new syscall.
- Add input check.
- Use xa_i
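Purely as an illustration of the intended flow, userspace might drive such an fd roughly as below; the syscall number macro, command names, and range struct are hypothetical, not the RFC's actual ABI:

    /* hypothetical userspace sketch: pin a buffer before issuing DMA/SVA work */
    struct pin_range { unsigned long addr; unsigned long size; };

    int fd = syscall(__NR_mempinfd);              /* new syscall from patch 1/2 */
    struct pin_range r = { (unsigned long)buf, len };

    ioctl(fd, MEM_CMD_PIN, &r);                   /* no IO page faults on buf now */
    /* ... submit DMA / SVA work that touches buf ... */
    ioctl(fd, MEM_CMD_UNPIN, &r);
    close(fd);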
This test gets an fd from the new mempinfd syscall and creates multiple threads
to do pin/unpin memory.
Signed-off-by: Zhou Wang
Suggested-by: Barry Song
---
tools/testing/selftests/vm/Makefile | 1 +
tools/testing/selftests/vm/mempinfd.c | 131 ++
2 files changed
Hi Robin,
On 2021/2/5 3:51, Robin Murphy wrote:
> On 2021-01-28 15:17, Keqian Zhu wrote:
>> From: jiangkunkun
>>
>> A block descriptor is not a proper granule for dirty log tracking. This
>> adds a new interface named split_block in the iommu layer, and arm smmuv3
>> implements it, which splits block de