Normally free_pgtables needs to lock affected VMAs except for the case
when VMAs were isolated under VMA write-lock. munmap() does just that,
isolating while holding appropriate locks and then downgrading mmap_lock
and dropping per-VMA locks before freeing page tables.
Add a parameter to free_pgtables to handle such a scenario.
While unmapping VMAs, adjacent VMAs might be able to grow into the area
being unmapped. In such cases write-lock adjacent VMAs to prevent this
growth.
Signed-off-by: Suren Baghdasaryan
---
mm/mmap.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/mm/mmap.c b/mm/mmap.c
Assert there are no holders of VMA lock for reading when it is about to be
destroyed.
Signed-off-by: Suren Baghdasaryan
---
kernel/fork.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/kernel/fork.c b/kernel/fork.c
index 9141427a98b2..a08cc0e2bfde 100644
--- a/kernel/fork.c
+++ b/kernel/
Page fault handlers might need to fire MMU notifications while a new
notifier is being registered. Modify mm_take_all_locks to write-lock all
VMAs and prevent this race with page fault handlers that would hold VMA
locks. VMAs are locked before i_mmap_rwsem and anon_vma to keep the same
locking order.
The per-VMA locking mechanism will search for a VMA under RCU protection
and then, after locking it, has to ensure it was not removed from the VMA
tree after being found. To make this check efficient, introduce a
vma->detached flag to mark VMAs which were removed from the VMA tree.
Signed-off-by: Suren Baghdasaryan
Introduce the lock_vma_under_rcu function to look up and lock a VMA during
page fault handling. When the VMA is not found, cannot be locked, or changes
after being locked, the function returns NULL. The lookup is performed
under RCU protection to prevent the found VMA from being destroyed before
the VMA lock is acquired.
When vma->anon_vma is not set, the page fault handler will set it by either
reusing the anon_vma of an adjacent VMA if the VMAs are compatible or by
allocating a new one. find_mergeable_anon_vma() walks the VMA tree to find
a compatible adjacent VMA, and that requires not only the faulting VMA
to be stable but also the VMA tree structure around it.
Add a new flag to distinguish page faults handled under protection of
per-vma lock.
Signed-off-by: Suren Baghdasaryan
Reviewed-by: Laurent Dufour
---
include/linux/mm.h | 3 ++-
include/linux/mm_types.h | 1 +
2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/include/linux/mm.
Due to the possibility of do_swap_page dropping mmap_lock, abort fault
handling under VMA lock and retry holding mmap_lock. This can be handled
more gracefully in the future.
Signed-off-by: Suren Baghdasaryan
Reviewed-by: Laurent Dufour
---
mm/memory.c | 5 +++++
1 file changed, 5 insertions(+)
Due to the possibility of handle_userfault dropping mmap_lock, avoid fault
handling under VMA lock and retry holding mmap_lock. This can be handled
more gracefully in the future.
Signed-off-by: Suren Baghdasaryan
Suggested-by: Peter Xu
---
mm/memory.c | 9 +++++++++
1 file changed, 9 insertions(+)
Add a new CONFIG_PER_VMA_LOCK_STATS config option to dump extra
statistics about handling page fault under VMA lock.
Signed-off-by: Suren Baghdasaryan
---
include/linux/vm_event_item.h | 6 ++++++
include/linux/vmstat.h | 6 ++++++
mm/Kconfig.debug | 6 ++++++
mm/memory.c
Attempt VMA lock-based page fault handling first, and fall back to the
existing mmap_lock-based handling if that fails.
Signed-off-by: Suren Baghdasaryan
---
arch/x86/Kconfig | 1 +
arch/x86/mm/fault.c | 36 ++++++++++++++++++++++++++++++++++++
2 files changed, 37 insertions(+)
diff --git a
Attempt VMA lock-based page fault handling first, and fall back to the
existing mmap_lock-based handling if that fails.
Signed-off-by: Suren Baghdasaryan
---
arch/arm64/Kconfig | 1 +
arch/arm64/mm/fault.c | 36 ++++++++++++++++++++++++++++++++++++
2 files changed, 37 insertions(+)
diff --g
From: Laurent Dufour
Attempt VMA lock-based page fault handling first, and fall back to the
existing mmap_lock-based handling if that fails.
Copied from "x86/mm: try VMA lock-based page fault handling first"
Signed-off-by: Laurent Dufour
Signed-off-by: Suren Baghdasaryan
---
arch/powerpc/mm/f
call_rcu() can take a long time when callback offloading is enabled.
Its use in the vm_area_free can cause regressions in the exit path when
multiple VMAs are being freed.
Because exit_mmap() is called only after the last mm user drops its
refcount, the page fault handlers can't be racing with it.

vma->lock being part of the vm_area_struct causes performance regressions
during page faults: under contention its count and owner fields are
constantly updated, and having other vm_area_struct fields used during
page fault handling next to them causes constant cache line bouncing.
Fix that by moving vma->lock out of the vm_area_struct and allocating it
separately.
On Thu, 2023-02-16 at 00:55 +0100, Erhard F. wrote:
> Just noticed a build failure on 6.2-rc7 for my Talos 2 (.config
> attached):
>
> # make
> CALL scripts/checksyscalls.sh
> UPD include/generated/utsversion.h
> CC init/version-timestamp.o
> LD .tmp_vmlinux.kallsyms1
> l
Power10 Performance Monitoring Unit (PMU) provides events
to understand stall cycles of different pipeline stages.
These events, along with completed instructions, provide
useful metrics for application tuning.
The patch implements the json changes to collect counter statistics
to present the high-level metrics.
On 16/02/2023 at 00:55, Erhard F. wrote:
> Just noticed a build failure on 6.2-rc7 for my Talos 2 (.config attached):
>
> # make
> CALL scripts/checksyscalls.sh
> UPD include/generated/utsversion.h
> CC init/version-timestamp.o
> LD .tmp_vmlinux.kallsyms1
> ld: l
On 15/02/2023 at 23:44, Leo Li wrote:
>
>
>> -----Original Message-----
>> From: Herve Codina
>> Sent: Thursday, January 26, 2023 2:32 AM
>> To: Herve Codina ; Leo Li
>> ; Rob Herring ; Krzysztof
>> Kozlowski ; Liam Girdwood
>> ; Mark Brown ; Christophe
>> Leroy ; Michael Ellerman
>> ; Nicho
tree/branch: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git
next
branch HEAD: b0ae5b6f3c298a005b73556740526c0e24a5633c powerpc/kexec_file:
print error string on usable memory property update failure
elapsed time: 1071m
configs tested: 85
configs skipped: 3
The following con
When a user updates a variable through the PLPKS secvar interface, we take
the first 8 bytes of the data written to the update attribute to pass
through to the H_PKS_SIGNED_UPDATE hcall as flags. These bytes are always
written in big-endian format.
Currently, the flags bytes are memcpy()ed into a
On 16/02/2023 at 06:09, Rohan McLure wrote:
> KCSAN instruments calls to atomic builtins, and will in turn call these
> builtins itself. As such, architectures supporting KCSAN must have
> compiler support for these atomic primitives.
>
> Since 32-bit systems are unlikely to have 64-bit compiler support
On 16/02/2023 at 06:09, Rohan McLure wrote:
> Enable HAVE_ARCH_KCSAN on all powerpc platforms, permitting use of the
> kernel concurrency sanitiser through the CONFIG_KCSAN_* kconfig options.
>
> Boots and passes selftests on 32-bit and 64-bit platforms. See
> documentation in Documentation/de