As said in commit f2c2cbcc35d4 ("powerpc: Use pr_warn instead of
pr_warning"), remove pr_warning so that all logging messages use a
consistent _warn style. Let's do it.
Cc: Benjamin Herrenschmidt
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Kefeng Wang
---
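For illustration, a minimal before/after sketch of the conversion this
series applies; the surrounding function and message are hypothetical:

#include <linux/printk.h>

static int example_probe(int id)
{
	if (id < 0) {
		/* old style, removed by this series:
		 *	pr_warning("invalid id %d\n", id);
		 * new style, consistent with pr_err()/pr_info():
		 */
		pr_warn("invalid id %d\n", id);
		return -EINVAL;
	}
	return 0;
}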
As said in commit f2c2cbcc35d4 ("powerpc: Use pr_warn instead of
pr_warning"), remove pr_warning so that all logging messages use a
consistent _warn style. Let's do it.
Cc: Benjamin Herrenschmidt
Cc: linuxppc-dev@lists.ozlabs.org
Reviewed-by: Sergey Senozhatsky
Signed-off-by: Kefeng Wang
v2:
- Collect Acked-by/Reviewed-by tags
- Fix some indentation and alignment
- Split patches in drivers/platform/x86/ as suggested by Andy Shevchenko
- Drop staging changes which were already merged
Kefeng Wang (33):
alpha: Use pr_warn instead of pr_warning
arm64: Use pr_warn instead of pr_warning
Cc: "David S. Miller"
Cc: Stanislav Kinsburskii
Signed-off-by: Kefeng Wang
---
arch/alpha/include/asm/io.h   | 6 --
arch/arm/include/asm/io.h     | 6 --
arch/hexagon/include/asm/io.h | 6 --
arch/m68k/include/asm/io_mm.h | 6 --
arch/mips/include/asm/io.h    | 7 ---
On 2023/11/20 3:34, Geert Uytterhoeven wrote:
On Sat, Nov 18, 2023 at 11:09 AM Kefeng Wang wrote:
The asm-generic/io.h already has a default definition, so remove the
unnecessary arch-specific definitions.
Cc: Richard Henderson
Cc: Ivan Kokshaysky
Cc: Russell King
Cc: Brian Cain
Cc: "
quot;David S. Miller"
Cc: Stanislav Kinsburskii
Reviewed-by: Geert Uytterhoeven [m68k]
Acked-by: Geert Uytterhoeven [m68k]
Reviewed-by: Geert Uytterhoeven [sh]
Signed-off-by: Kefeng Wang
---
v2:
- remove mips change, since it needs more extra work to enable
On 2023/11/20 14:40, Arnd Bergmann wrote:
On Mon, Nov 20, 2023, at 01:39, Kefeng Wang wrote:
On 2023/11/20 3:34, Geert Uytterhoeven wrote:
On Sat, Nov 18, 2023 at 11:09 AM Kefeng Wang wrote:
-/*
- * Convert a physical pointer to a virtual kernel pointer for /dev/mem
- * access
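For reference, asm-generic/io.h supplies a default along these lines,
which is why the per-arch copies (and this /dev/mem comment) can go; a
simplified sketch, not copied verbatim from the tree:

#ifndef xlate_dev_mem_ptr
#define xlate_dev_mem_ptr xlate_dev_mem_ptr
static inline void *xlate_dev_mem_ptr(phys_addr_t addr)
{
	/* default: /dev/mem accesses go through the linear mapping */
	return __va(addr);
}
#endif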
Directly use show_mem() instead of show_free_areas(0, NULL), then
make show_free_areas() a static function.
Signed-off-by: Kefeng Wang
---
arch/sparc/kernel/setup_32.c | 2 +-
include/linux/mm.h           | 12
mm/internal.h                | 6 ++
mm/nommu.c
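A sketch of the caller-side pattern; the call site below is
illustrative, only the two function names come from the commit text:

	/* before: callers dumped memory state via */
	show_free_areas(0, NULL);

	/* after: they go through the higher-level helper */
	show_mem();

With no external callers left, show_free_areas() can then become
static inside mm/, per the diffstat above.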
Directly call __show_mem(0, NULL, MAX_NR_ZONES - 1) in show_mem()
to remove the arguments of show_mem().
Signed-off-by: Kefeng Wang
---
arch/powerpc/xmon/xmon.c  | 2 +-
drivers/tty/sysrq.c       | 2 +-
drivers/tty/vt/keyboard.c | 2 +-
include/linux/mm.h        | 4 ++--
init/initramfs.c
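The resulting helper, following the commit text; a sketch of the mm.h
side (the extern declaration is assumed):

/* include/linux/mm.h (sketch) */
extern void __show_mem(unsigned int flags, nodemask_t *nodemask,
		       int max_zone_idx);

static inline void show_mem(void)
{
	__show_mem(0, NULL, MAX_NR_ZONES - 1);
}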
On 2023/6/29 23:17, Matthew Wilcox wrote:
On Thu, Jun 29, 2023 at 06:43:56PM +0800, Kefeng Wang wrote:
Directly call __show_mem(0, NULL, MAX_NR_ZONES - 1) in show_mem()
to remove the arguments of show_mem().
Do you mean, "All callers of show_mem() pass 0 and NULL, so we can
remove th
Thanks,
On 2023/6/29 23:00, kernel test robot wrote:
Hi Kefeng,
kernel test robot noticed the following build errors:
[auto build test ERROR on akpm-mm/mm-everything]
url:
https://github.com/intel-lab-lkp/linux/commits/Kefeng-Wang/mm-make-show_free_areas-static/20230629-182958
base
All callers of show_free_areas() pass 0 and NULL, so we can directly
use show_mem() instead of show_free_areas(0, NULL), which could make
show_free_areas() a static function.
Signed-off-by: Kefeng Wang
---
v2: update commit log and fix a missing show_free_areas() conversion
arch/sparc/kernel
All callers of show_mem() pass 0 and NULL, so we can remove the two
arguments by directly calling __show_mem(0, NULL, MAX_NR_ZONES - 1)
in show_mem().
Signed-off-by: Kefeng Wang
---
v2: update commit log
arch/powerpc/xmon/xmon.c | 2 +-
drivers/tty/sysrq.c | 2 +-
drivers/tty/vt
out commit 38b3aec8e8d2 "mm: drop per-VMA
lock when returning VM_FAULT_RETRY or VM_FAULT_COMPLETED", and in the end,
we enable this feature on ARM32/Loongarch too.
This is based on next-20230713, only build tested (no loongarch
compiler, so loongarch is excluded).
Kefeng Wang (10):
mm: add a generic
Use new try_vma_locked_page_fault() helper to simplify code.
No functional change intended.
Signed-off-by: Kefeng Wang
---
arch/x86/mm/fault.c | 39 +++
1 file changed, 15 insertions(+), 24 deletions(-)
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
Use new try_vma_locked_page_fault() helper to simplify code.
No functional change intended.
Signed-off-by: Kefeng Wang
---
arch/riscv/mm/fault.c | 38 +++---
1 file changed, 15 insertions(+), 23 deletions(-)
diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm
Use new try_vma_locked_page_fault() helper to simplify code.
No functional change intended.
Signed-off-by: Kefeng Wang
---
arch/powerpc/mm/fault.c | 54 +
1 file changed, 22 insertions(+), 32 deletions(-)
diff --git a/arch/powerpc/mm/fault.c b/arch
Clean up __do_page_fault() by reusing the bad_area_nosemaphore and
bad_area labels.
Signed-off-by: Kefeng Wang
---
arch/loongarch/mm/fault.c | 36 +++-
1 file changed, 11 insertions(+), 25 deletions(-)
diff --git a/arch/loongarch/mm/fault.c b/arch/loongarch/mm/fault.c
Add access_error() to check whether a vma is accessible or not; it
will be used in __do_page_fault() and later in the vma lock-based page
fault handling.
Signed-off-by: Kefeng Wang
---
arch/loongarch/mm/fault.c | 30 --
1 file changed, 20 insertions(+), 10 deletions(-)
diff
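A sketch of what such a helper typically looks like; the checks below
are illustrative, not copied from the loongarch patch:

static inline bool access_error(unsigned int flags, struct pt_regs *regs,
				unsigned long addr, struct vm_area_struct *vma)
{
	/* a write fault needs a writable mapping */
	if (flags & FAULT_FLAG_WRITE)
		return !(vma->vm_flags & VM_WRITE);

	/* otherwise the vma must be readable or executable */
	return !(vma->vm_flags & (VM_READ | VM_EXEC));
}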
Attempt VMA lock-based page fault handling first, and fall back
to the existing mmap_lock-based handling if that fails.
Signed-off-by: Kefeng Wang
---
arch/loongarch/Kconfig| 1 +
arch/loongarch/mm/fault.c | 26 ++
2 files changed, 27 insertions(+)
diff --git a
out commit 38b3aec8e8d2 "mm: drop per-VMA
lock when returning VM_FAULT_RETRY or VM_FAULT_COMPLETED", and in the end,
we enable this feature on ARM32/Loongarch too.
This is based on next-20230713, only build tested (no loongarch
compiler, so loongarch is excluded).
Kefeng Wang (10):
mm: add a generic
easy to support this feature on new architectures.
Signed-off-by: Kefeng Wang
---
include/linux/mm.h | 28
mm/memory.c        | 42 ++
2 files changed, 70 insertions(+)
diff --git a/include/linux/mm.h b/include/linux/mm.h
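Based on the helper's signature quoted later in the thread,
int try_vma_locked_page_fault(struct vm_locked_fault *vmlf, vm_fault_t *ret),
an arch conversion is expected to look roughly like this; the field
names and labels are hypothetical:

	struct vm_locked_fault vmlf;
	vm_fault_t fault;

	/* hypothetical init; the real patch fills in mm/addr/flags */
	vmlf.mm = mm;
	vmlf.address = addr;
	vmlf.fault_flags = flags;

	if (try_vma_locked_page_fault(&vmlf, &fault))
		goto lock_mmap;	/* VMA lock unusable, take mmap_lock */
	if (!(fault & VM_FAULT_RETRY))
		goto done;	/* handled fully under the per-VMA lock */
	/* otherwise fall back to the mmap_lock-based path and retry */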
Use new try_vma_locked_page_fault() helper to simplify code.
No functional change intended.
Signed-off-by: Kefeng Wang
---
arch/arm64/mm/fault.c | 28 +---
1 file changed, 5 insertions(+), 23 deletions(-)
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index
Use new try_vma_locked_page_fault() helper to simplify code.
No functional change intended.
Signed-off-by: Kefeng Wang
---
arch/s390/mm/fault.c | 23 ++-
1 file changed, 6 insertions(+), 17 deletions(-)
diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
index
Attempt VMA lock-based page fault handling first, and fall back
to the existing mmap_lock-based handling if that fails.
Signed-off-by: Kefeng Wang
---
arch/arm/Kconfig| 1 +
arch/arm/mm/fault.c | 15 ++-
2 files changed, 15 insertions(+), 1 deletion(-)
diff --git a/arch/arm
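The one-line Kconfig change implied by the diffstat is presumably the
usual opt-in select; a sketch, with the other selects omitted:

# arch/arm/Kconfig (sketch)
config ARM
	select ARCH_SUPPORTS_PER_VMA_LOCK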
Please ignore this one...
On 2023/7/13 17:51, Kefeng Wang wrote:
Add a generic VMA lock-based page fault handler in mm core, and convert
architectures to use it, which eliminates the architectures' duplicated
code. With it, we can avoid multiple changes in architectures' code if
On 2023/7/14 4:12, Suren Baghdasaryan wrote:
On Thu, Jul 13, 2023 at 9:15 AM Matthew Wilcox wrote:
+int try_vma_locked_page_fault(struct vm_locked_fault *vmlf, vm_fault_t *ret)
+{
+ struct vm_area_struct *vma;
+ vm_fault_t fault;
On Thu, Jul 13, 2023 at 05:53:29PM +0800, Kefeng
On 2023/7/14 9:52, Kefeng Wang wrote:
On 2023/7/14 4:12, Suren Baghdasaryan wrote:
On Thu, Jul 13, 2023 at 9:15 AM Matthew Wilcox
wrote:
+int try_vma_locked_page_fault(struct vm_locked_fault *vmlf,
vm_fault_t *ret)
+{
+ struct vm_area_struct *vma;
+ vm_fault_t fault;
On Thu
Attempt VMA lock-based page fault handling first, and fall back
to the existing mmap_lock-based handling if that fails.
Signed-off-by: Kefeng Wang
---
arch/loongarch/Kconfig| 1 +
arch/loongarch/mm/fault.c | 37 +++--
2 files changed, 32 insertions(+), 6
could re-use struct vm_fault to record and check whether the vma is
accessible in each arch's own implementation.
Signed-off-by: Kefeng Wang
---
include/linux/mm.h | 17 +
include/linux/mm_types.h | 2 ++
mm/memory.c | 39 +++
3 files c
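As the v2 changelog below notes, pt_regs, the fault error code, and vm
flags move into struct vm_fault for use in arch_vma_access_error(); a
sketch of such a hook, entirely illustrative:

bool arch_vma_access_error(struct vm_area_struct *vma, struct vm_fault *vmf)
{
	/* a write fault against a mapping with no write permission */
	if ((vmf->flags & FAULT_FLAG_WRITE) && !(vma->vm_flags & VM_WRITE))
		return true;

	/* otherwise require some access permission on the vma */
	return !(vma->vm_flags & (VM_READ | VM_WRITE | VM_EXEC));
}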
Use new try_vma_locked_page_fault() helper to simplify code.
No functional change intended.
Signed-off-by: Kefeng Wang
---
arch/riscv/mm/fault.c | 58 ++-
1 file changed, 24 insertions(+), 34 deletions(-)
diff --git a/arch/riscv/mm/fault.c b/arch/riscv
Use new try_vma_locked_page_fault() helper to simplify code.
No functional change intended.
Signed-off-by: Kefeng Wang
---
arch/powerpc/mm/fault.c | 66 -
1 file changed, 32 insertions(+), 34 deletions(-)
diff --git a/arch/powerpc/mm/fault.c b/arch
scv/loongarch)
header file.
- re-use struct vm_fault instead of adding new struct vm_locked_fault,
per Matthew Wilcox, add necessary pt_regs/fault error code/vm flags
into vm_fault since they could be used in arch_vma_access_error()
- add special VM_FAULT_NONE and make try_vma_locked_page_fault()
Add access_error() to check whether a vma is accessible or not; it
will be used in __do_page_fault() and later in the vma lock-based page
fault handling.
Signed-off-by: Kefeng Wang
---
arch/loongarch/mm/fault.c | 30 --
1 file changed, 20 insertions(+), 10 deletions(-)
diff
Clean up __do_page_fault() by reusing the bad_area_nosemaphore and
bad_area labels.
Signed-off-by: Kefeng Wang
---
arch/loongarch/mm/fault.c | 48 +--
1 file changed, 16 insertions(+), 32 deletions(-)
diff --git a/arch/loongarch/mm/fault.c b/arch/loongarch/mm
Attempt VMA lock-based page fault handling first, and fall back
to the existing mmap_lock-based handling if that fails.
Signed-off-by: Kefeng Wang
---
arch/arm/Kconfig| 1 +
arch/arm/mm/fault.c | 35 +--
2 files changed, 26 insertions(+), 10 deletions
Use new try_vma_locked_page_fault() helper to simplify code.
No functional change intended.
Signed-off-by: Kefeng Wang
---
arch/x86/mm/fault.c | 55 +++--
1 file changed, 23 insertions(+), 32 deletions(-)
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm
Use new try_vma_locked_page_fault() helper to simplify code.
No functional change intended.
Signed-off-by: Kefeng Wang
---
arch/s390/mm/fault.c | 66 ++--
1 file changed, 27 insertions(+), 39 deletions(-)
diff --git a/arch/s390/mm/fault.c b/arch/s390/mm
Use new try_vma_locked_page_fault() helper to simplify code, also
pass struct vmf to __do_page_fault() directly instead of each
independent variable. No functional change intended.
Signed-off-by: Kefeng Wang
---
arch/arm64/mm/fault.c | 60 ---
1 file
On 2023/8/22 17:38, Christophe Leroy wrote:
Le 21/08/2023 à 14:30, Kefeng Wang a écrit :
Use new try_vma_locked_page_fault() helper to simplify code.
No functional change intended.
Does it really simplify the code? It's 32 insertions versus 34
deletions, so it only removes 2 lines.
On 2023/8/24 15:12, Alexander Gordeev wrote:
On Mon, Aug 21, 2023 at 08:30:47PM +0800, Kefeng Wang wrote:
Hi Kefeng,
ARCH_SUPPORTS_PER_VMA_LOCK is enabled by more and more architectures,
e.g. x86, arm64, powerpc, s390 and riscv; those implementations are very
similar, which results
On 2023/8/24 16:32, Heiko Carstens wrote:
On Thu, Aug 24, 2023 at 10:16:33AM +0200, Alexander Gordeev wrote:
On Mon, Aug 21, 2023 at 08:30:50PM +0800, Kefeng Wang wrote:
Use new try_vma_locked_page_fault() helper to simplify code.
No functional change intended.
Signed-off-by: Kefeng Wang
valid() completely.
Signed-off-by: Kefeng Wang
---
arch/alpha/include/asm/pgtable.h          | 2 -
arch/arc/include/asm/pgtable-bits-arcv2.h | 2 -
arch/arm/include/asm/pgtable-nommu.h      | 2 -
arch/arm/include/asm/pgtable.h            | 4 --
arch/arm64/include/asm/pgtable.h          | 2 -
On 2023/3/25 14:08, Mike Rapoport wrote:
From: "Mike Rapoport (IBM)"
It is enough to keep the default values for base and huge pages without
letting users override ARCH_FORCE_MAX_ORDER.
Drop the prompt to make the option invisible in *config.
Acked-by: Kirill A. Shutemov
Reviewed-by: Zi Yan
cations of changing MAX_ORDER before actually amending it and
ranges don't help here.
Drop ranges in definition of ARCH_FORCE_MAX_ORDER and make its prompt
visible only if EXPERT=y
Acked-by: Kirill A. Shutemov
Reviewed-by: Zi Yan
Signed-off-by: Mike Rapoport (IBM)
Reviewed-by: Kefeng Wang
Signed-off-by: Mike Rapoport (IBM)
Reviewed-by: Kefeng Wang
---
arch/arm64/Kconfig | 24
1 file changed, 12 insertions(+), 12 deletions(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7324032af859..cc11cdcf5a00 100644
--- a/arch/arm64/Kconfig
The vm_flags of the vma have already been checked under the per-VMA
lock; if it is a bad access, directly set fault to VM_FAULT_BADACCESS
and handle the error. There is no need to lock_mm_and_find_vma() and
check vm_flags again, and the latency of lmbench 'lat_sig -P 1 prot
lat_sig' is reduced by 34%.
Signed-off-by: K
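The core of the change, as the commit text describes it; a simplified
sketch of the arm64 per-VMA lock path:

	vma = lock_vma_under_rcu(mm, addr);
	if (!vma)
		goto lock_mmap;

	if (!(vma->vm_flags & vm_flags)) {
		vma_end_read(vma);
		/* was: goto lock_mmap; now handle the error directly */
		fault = VM_FAULT_BADACCESS;
		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
		goto done;
	}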
The vm_flags of the vma have already been checked under the per-VMA
lock; if it is a bad access, directly set fault to VM_FAULT_BADACCESS
and handle the error, so there is no need to lock_mm_and_find_vma()
and check vm_flags again.
Signed-off-by: Kefeng Wang
---
arch/arm/mm/fault.c | 4 +++-
1 file changed, 3 insertions
The vm_flags of the vma have already been checked under the per-VMA
lock; if it is a bad access, directly handle the error and return,
there is no need to lock_mm_and_find_vma() and check vm_flags again.
Signed-off-by: Kefeng Wang
---
arch/powerpc/mm/fault.c | 33 -
1 file changed, 20
The vm_flags of the vma have already been checked under the per-VMA
lock; if it is a bad access, directly handle the error and return,
there is no need to lock_mm_and_find_vma() and check vm_flags again.
Signed-off-by: Kefeng Wang
---
arch/riscv/mm/fault.c | 5 -
1 file changed, 4 insertions(+), 1 deletion
-> 0.19198
Only build tested on archs other than arm64.
Kefeng Wang (7):
arm64: mm: cleanup __do_page_fault()
arm64: mm: accelerate pagefault when VM_FAULT_BADACCESS
arm: mm: accelerate pagefault when VM_FAULT_BADACCESS
powerpc: mm: accelerate pagefault when badaccess
riscv: mm: acceler
The vm_flags of the vma have already been checked under the per-VMA
lock; if it is a bad access, directly handle the error and return,
there is no need to lock_mm_and_find_vma() and check vm_flags again.
Signed-off-by: Kefeng Wang
---
arch/s390/mm/fault.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff
__do_page_fault() only checks vma->flags and calls handle_mm_fault(),
and is only called by do_page_fault(); let's squash it into
do_page_fault() to clean up the code.
Signed-off-by: Kefeng Wang
---
arch/arm64/mm/fault.c | 27 +++
1 file changed, 7 insertions(+), 20 d
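After the squash, the wrapper's two steps live directly in
do_page_fault(); a simplified sketch (the real code also masks the
address):

	if (!(vma->vm_flags & vm_flags))
		fault = VM_FAULT_BADACCESS;
	else
		fault = handle_mm_fault(vma, addr, mm_flags, regs);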
The vm_flags of the vma have already been checked under the per-VMA
lock; if it is a bad access, directly handle the error and return,
there is no need to lock_mm_and_find_vma() and check vm_flags again.
Signed-off-by: Kefeng Wang
---
arch/x86/mm/fault.c | 23 ++-
1 file changed, 14 insertions
On 2024/4/3 13:30, Suren Baghdasaryan wrote:
On Tue, Apr 2, 2024 at 10:19 PM Suren Baghdasaryan wrote:
On Tue, Apr 2, 2024 at 12:53 AM Kefeng Wang wrote:
The vm_flags of the vma have already been checked under the per-VMA
lock; if it is a bad access, directly set fault to VM_FAULT_BADACCESS and handle
On 2024/4/3 13:59, Suren Baghdasaryan wrote:
On Tue, Apr 2, 2024 at 12:53 AM Kefeng Wang wrote:
The vm_flags of the vma have already been checked under the per-VMA
lock; if it is a bad access, directly handle the error and return,
there is no need to lock_mm_and_find_vma() and check vm_flags again.
Signed-off
__do_page_fault() only calls handle_mm_fault() after vm_flags is
checked, and it is only called by do_page_fault(); let's squash it
into do_page_fault() to clean up the code.
Reviewed-by: Suren Baghdasaryan
Signed-off-by: Kefeng Wang
---
arch/arm64/mm/fault.c | 27 +++--
dled under per-VMA lock, count it as a vma lock
event with VMA_LOCK_SUCCESS.
Reviewed-by: Suren Baghdasaryan
Signed-off-by: Kefeng Wang
---
arch/arm64/mm/fault.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 9b
-> 0.19198
Only build tested on archs other than arm64.
v2:
- a better changelog, and describe the counting changes, suggested by
Suren Baghdasaryan
- add RB
Kefeng Wang (7):
arm64: mm: cleanup __do_page_fault()
arm64: mm: accelerate pagefault when VM_FAULT_BADACCESS
arm: mm: acceler
mmap_lock. Since the page fault is handled under the per-VMA lock,
count it as a vma lock event with VMA_LOCK_SUCCESS.
Signed-off-by: Kefeng Wang
---
arch/powerpc/mm/fault.c | 33 -
1 file changed, 20 insertions(+), 13 deletions(-)
diff --git a/arch/powerpc/mm/fault.c b/arch
: Suren Baghdasaryan
Signed-off-by: Kefeng Wang
---
arch/arm/mm/fault.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index 439dc6a26bb9..5c4b417e24f9 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -294,7 +294,9
The vm_flags of the vma have already been checked under the per-VMA
lock; if it is a bad access, directly handle the error, no need to retry
with mmap_lock again. Since the page fault is handled under the per-VMA
lock, count it as a vma lock event with VMA_LOCK_SUCCESS.
Signed-off-by: Kefeng Wang
---
arch/s390/mm/fault.c
: Kefeng Wang
---
arch/riscv/mm/fault.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index 3ba1d4dde5dd..b3fcf7d67efb 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -292,7 +292,10 @@ void handle_page_fault
page fault is handled under per-VMA lock, count it
as a vma lock event with VMA_LOCK_SUCCESS.
Reviewed-by: Suren Baghdasaryan
Signed-off-by: Kefeng Wang
---
arch/x86/mm/fault.c | 23 ++-
1 file changed, 14 insertions(+), 9 deletions(-)
diff --git a/arch/x86/mm/fault.c b/arch
On 2024/4/4 4:45, Andrew Morton wrote:
On Wed, 3 Apr 2024 16:37:58 +0800 Kefeng Wang
wrote:
After VMA lock-based page fault handling is enabled, if a bad access is
met under the per-vma lock, it will fall back to mmap_lock-based
handling, so it leads to an unnecessary mmap lock and vma find again. A test
On 2024/4/10 15:32, Alexandre Ghiti wrote:
Hi Kefeng,
On 03/04/2024 10:38, Kefeng Wang wrote:
The access_error() of the vma has already been checked under the
per-VMA lock; if it is a bad access, directly handle the error, no need
to retry with mmap_lock again. Since the page fault is handled under the per-VMA lock
On 2024/4/11 1:28, Alexandre Ghiti wrote:
On 10/04/2024 10:07, Kefeng Wang wrote:
On 2024/4/10 15:32, Alexandre Ghiti wrote:
Hi Kefeng,
On 03/04/2024 10:38, Kefeng Wang wrote:
The access_error() of the vma has already been checked under the
per-VMA lock; if it is a bad access, directly handle the error, no
mem_init_print_info() is called in mem_init() on each architecture and
is passed a NULL argument; clean it up by using a void argument and
move it into mm_init().
Signed-off-by: Kefeng Wang
---
arch/alpha/mm/init.c | 1 -
arch/arc/mm/init.c | 1 -
arch/arm/mm/init.c
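The shape of the cleanup; a sketch (mm_init() in init/main.c does more
than shown):

/* before: each architecture's mem_init() ended with */
	mem_init_print_info(NULL);

/* after: void argument, called once from mm_init() */
static void __init mm_init(void)
{
	mem_init();
	mem_init_print_info();
}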
On 2021/3/16 22:47, Christophe Leroy wrote:
Le 16/03/2021 à 15:26, Kefeng Wang a écrit :
mem_init_print_info() is called in mem_init() on each architecture and
is passed a NULL argument; clean it up by using a void argument and
move it into mm_init().
Signed-off-by: Kefeng Wang
---
arch/alpha/mm
mem_init_print_info() is called in mem_init() on each architecture and
is passed a NULL argument, so use a void argument and move it into mm_init().
Signed-off-by: Kefeng Wang
---
Resend with 'str' line cleanup, and only test on ARM64 qemu.
arch/alpha/mm/init.c | 1 -
arch/arc
mem_init_print_info() is called in mem_init() on each architecture and
is passed a NULL argument, so use a void argument and move it into mm_init().
Acked-by: Dave Hansen
Signed-off-by: Kefeng Wang
---
v2:
- Cleanup 'str' line suggested by Christophe and ACK
arch/alpha/mm/init.c
On 2021/3/17 13:48, Christophe Leroy wrote:
Le 17/03/2021 à 02:52, Kefeng Wang a écrit :
mem_init_print_info() is called in mem_init() on each architecture and
is passed a NULL argument, so use a void argument and move it into
mm_init().
Acked-by: Dave Hansen
Signed-off-by: Kefeng Wang
On 2021/3/18 2:48, Dave Hansen wrote:
On 3/16/21 6:52 PM, Kefeng Wang wrote:
mem_init_print_info() is called in mem_init() on each architecture and
is passed a NULL argument, so use a void argument and move it into mm_init().
Acked-by: Dave Hansen
It's not a big deal but you might want t
Hi Andrew, kindly ping
On 2021/3/17 9:52, Kefeng Wang wrote:
mem_init_print_info() is called in mem_init() on each architecture and
is passed a NULL argument, so use a void argument and move it into mm_init().
Acked-by: Dave Hansen
Signed-off-by: Kefeng Wang
---
v2:
- Cleanup 'str' line
ssell King
Signed-off-by: Kefeng Wang
---
arch/alpha/Kconfig | 5 +
arch/arm/Kconfig | 3 ---
arch/arm64/Kconfig | 9 +
arch/ia64/Kconfig | 4 +---
arch/m68k/Kconfig | 5 +
arch/
Acked-by: Catalin Marinas # for arm64
Acked-by: Geert Uytterhoeven # for m68k
Acked-by: Mike Rapoport
Signed-off-by: Kefeng Wang
---
v2:
- i386 can't enable ZONE_DMA32, fix it.
- make ZONE_DMA default y on X86 as before.
- collect ACKs
arch/alpha/Kconfig | 5 +
a
Use setup_initial_init_mm() helper to simplify code.
Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Kefeng Wang
---
arch/powerpc/kernel/setup-common.c | 5 +
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/arch/powerpc/kernel
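For context, a sketch of the helper being used here, consistent with
the series description (a v2 note elsewhere in the thread says the
arguments became "void *"):

void setup_initial_init_mm(void *start_code, void *end_code,
			   void *end_data, void *brk)
{
	init_mm.start_code = (unsigned long)start_code;
	init_mm.end_code = (unsigned long)end_code;
	init_mm.end_data = (unsigned long)end_data;
	init_mm.brk = (unsigned long)brk;
}

An illustrative powerpc call site after conversion:

	setup_initial_init_mm(_stext, _etext, _edata, _end);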
On 2021/5/29 23:22, Christophe Leroy wrote:
Santosh Sivaraj a écrit :
Kefeng Wang writes:
Use setup_initial_init_mm() helper to simplify code.
I only got that patch, and patchwork as well
(https://patchwork.ozlabs.org/project/linuxppc-dev/list/?series=246315)
Can you tell where I
On 2021/5/30 0:16, Christophe Leroy wrote:
Kefeng Wang a écrit :
Use setup_initial_init_mm() helper to simplify code.
Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Kefeng Wang
---
arch/powerpc/kernel/setup-common.c | 5 +
1 file
: linuxppc-dev@lists.ozlabs.org
Cc: linux-ri...@lists.infradead.org
Cc: linux...@vger.kernel.org
Cc: linux-s...@vger.kernel.org
Cc: x...@kernel.org
Signed-off-by: Kefeng Wang
---
include/linux/mm_types.h | 8
1 file changed, 8 insertions(+)
diff --git a/include/linux/mm_types.h b/include
kernel.org
Cc: linux-s...@vger.kernel.org
Kefeng Wang (15):
mm: add setup_initial_init_mm() helper
arc: convert to setup_initial_init_mm()
arm: convert to setup_initial_init_mm()
arm64: convert to setup_initial_init_mm()
csky: convert to setup_initial_init_mm()
h8300: convert to s
Use setup_initial_init_mm() helper to simplify code.
Note that klimit is (unsigned long)_end; with the new helper,
_end will be used directly.
Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Kefeng Wang
---
arch/powerpc/kernel/setup-common.c | 5 +
1
On 2021/6/4 17:57, Christophe Leroy wrote:
klimit is a global variable initialised at build time with the
value of _end.
This variable is never modified, so the _end symbol can be used directly.
Remove klimit.
Reviewed-by: Kefeng Wang
Signed-off-by: Christophe Leroy
Cc: Kefeng Wang
On 2021/6/7 5:29, Mike Rapoport wrote:
Hello Kefeng,
On Fri, Jun 04, 2021 at 03:06:18PM +0800, Kefeng Wang wrote:
Add setup_initial_init_mm() helper, then use it
to clean up the text, data and brk setup code.
v2:
- change argument from "char *" to "void *" s
On 2021/6/7 5:31, Mike Rapoport wrote:
Hello Kefeng,
On Fri, Jun 04, 2021 at 03:06:19PM +0800, Kefeng Wang wrote:
Add setup_initial_init_mm() helper to set up kernel text,
data and brk.
Cc: linux-snps-...@lists.infradead.org
Cc: linux-arm-ker...@lists.infradead.org
Cc: linux-c
: linuxppc-dev@lists.ozlabs.org
Cc: linux-ri...@lists.infradead.org
Cc: linux...@vger.kernel.org
Cc: linux-s...@vger.kernel.org
Cc: x...@kernel.org
Signed-off-by: Kefeng Wang
---
v3: declaration in mm.h, implementation in init-mm.c
include/linux/mm.h | 3 +++
mm/init-mm.c | 9 +
2 files
On 2021/6/7 13:48, Christophe Leroy wrote:
Hi Kefeng,
Le 07/06/2021 à 02:55, Kefeng Wang a écrit :
On 2021/6/7 5:29, Mike Rapoport wrote:
Hello Kefeng,
On Fri, Jun 04, 2021 at 03:06:18PM +0800, Kefeng Wang wrote:
Add setup_initial_init_mm() helper, then use it
to cleanup the text, data
: linuxppc-dev@lists.ozlabs.org
Cc: linux-ri...@lists.infradead.org
Cc: linux...@vger.kernel.org
Cc: linux-s...@vger.kernel.org
Cc: x...@kernel.org
Signed-off-by: Kefeng Wang
---
include/linux/mm.h | 3 +++
mm/init-mm.c | 9 +
2 files changed, 12 insertions(+)
diff --git a/include
Use setup_initial_init_mm() helper to simplify code.
Note that klimit is (unsigned long)_end; with the new helper,
_end will be used directly.
Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Kefeng Wang
---
arch/powerpc/kernel/setup-common.c | 5 +
1
ext() naming for x86 special
check.
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: x...@kernel.org
Signed-off-by: Kefeng Wang
---
arch/x86/mm/init_32.c | 14 +-
1 file changed, 5 insertions(+), 9 deletions(-)
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index bd90b8fe81e4..52374
After commit 4ba66a976072 ("arch: remove blackfin port"), there is
no need for the arch-specific text/data check.
Cc: Arnd Bergmann
Signed-off-by: Kefeng Wang
---
include/asm-generic/sections.h | 16
include/linux/kallsyms.h | 3 +--
kernel/locking/lockdep.c | 3 --
: Steven Rostedt (VMware)
Acked-by: Sergey Senozhatsky
Signed-off-by: Kefeng Wang
---
include/linux/kallsyms.h | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/include/linux/kallsyms.h b/include/linux/kallsyms.h
index 2a241e3f063f..b016c62f30a6 100644
--- a/include/linux
ernel.org/linux-arch/20210626073439.150586-1-wangkefeng.w...@huawei.com
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-a...@vger.kernel.org
Cc: b...@vger.kernel.org
Kefeng Wang (9):
kallsyms: Remove arch specific text and data check
kallsyms: Fix address-checks for kernel related range
secti
Directly use is_kernel() helper in kernel_or_module_addr().
Cc: Andrey Ryabinin
Cc: Alexander Potapenko
Cc: Andrey Konovalov
Cc: Dmitry Vyukov
Signed-off-by: Kefeng Wang
---
mm/kasan/report.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/kasan/report.c b/mm/kasan
core_kernel_text() should check the gate area, as it is part
of the kernel text range.
Cc: Steven Rostedt
Signed-off-by: Kefeng Wang
---
kernel/extable.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/kernel/extable.c b/kernel/extable.c
index 98ca627ac5ef..0ba383d850ff
Use the is_kernel_text() and is_kernel_inittext() helpers to simplify
the code, and also drop the etext, _stext, _sinittext, _einittext
declarations which are already declared in sections.h.
Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Kefeng
An internal __is_kernel() helper which only checks the kernel address
ranges, and an internal __is_kernel_text() helper which only checks the
text section ranges.
Signed-off-by: Kefeng Wang
---
include/asm-generic/sections.h | 29 +
include/linux/kallsyms.h | 4
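A sketch of the two internal helpers; simplified, since the real ones
also have to account for where init text and gate sections sit:

static inline bool __is_kernel_text(unsigned long addr)
{
	return addr >= (unsigned long)_stext &&
	       addr < (unsigned long)_etext;
}

static inline bool __is_kernel(unsigned long addr)
{
	return addr >= (unsigned long)_stext &&
	       addr < (unsigned long)_end;
}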
Move core_kernel_data() into sections.h and rename it to
is_kernel_core_data(); also make it return a bool value, then
update all the callers.
Cc: Arnd Bergmann
Cc: Steven Rostedt
Cc: Ingo Molnar
Cc: "David S. Miller"
Signed-off-by: Kefeng Wang
---
include/asm-generic/secti
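A sketch of the moved helper; the bounds are illustrative (the real
version may also need to cover bss), and the bool return follows the
commit text:

static inline bool is_kernel_core_data(unsigned long addr)
{
	return addr >= (unsigned long)_sdata &&
	       addr < (unsigned long)_edata;
}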
is_kernel_inittext() and init_kernel_text() have the same
functionality; let's just keep is_kernel_inittext(), move it into
sections.h, then update all the callers.
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc: Arnd Bergmann
Cc: x...@kernel.org
Signed-off-by: Kefeng Wang
---
arch/x86/k
On 2021/9/29 1:51, Christophe Leroy wrote:
Le 26/09/2021 à 09:20, Kefeng Wang a écrit :
Use the is_kernel_text() and is_kernel_inittext() helpers to simplify
the code, and also drop the etext, _stext, _sinittext, _einittext
declarations which are already declared in sections.h.
Cc: Michael Ellerman
Cc