[PATCH 1/8] powerpc/mm: Make slice specific to book3s/64
Since commit 555904d07eef ("powerpc/8xx: MM_SLICE is not needed anymore") only book3s/64 selects CONFIG_PPC_MM_SLICES. Move slice.c into mm/book3s64/ Signed-off-by: Christophe Leroy --- arch/powerpc/mm/Makefile | 1 - arch/powerpc/mm/book3s64/Makefile | 1 + arch/powerpc/mm/{ => book3s64}/slice.c | 0 arch/powerpc/mm/nohash/mmu_context.c | 2 -- arch/powerpc/mm/nohash/tlb.c | 4 5 files changed, 1 insertion(+), 7 deletions(-) rename arch/powerpc/mm/{ => book3s64}/slice.c (100%) diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile index df8172da2301..d4c20484dad9 100644 --- a/arch/powerpc/mm/Makefile +++ b/arch/powerpc/mm/Makefile @@ -14,7 +14,6 @@ obj-$(CONFIG_PPC_MMU_NOHASH) += nohash/ obj-$(CONFIG_PPC_BOOK3S_32)+= book3s32/ obj-$(CONFIG_PPC_BOOK3S_64)+= book3s64/ obj-$(CONFIG_NUMA) += numa.o -obj-$(CONFIG_PPC_MM_SLICES)+= slice.o obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o obj-$(CONFIG_NOT_COHERENT_CACHE) += dma-noncoherent.o obj-$(CONFIG_PPC_COPRO_BASE) += copro_fault.o diff --git a/arch/powerpc/mm/book3s64/Makefile b/arch/powerpc/mm/book3s64/Makefile index 1b56d3af47d4..30951668c684 100644 --- a/arch/powerpc/mm/book3s64/Makefile +++ b/arch/powerpc/mm/book3s64/Makefile @@ -18,6 +18,7 @@ obj-$(CONFIG_TRANSPARENT_HUGEPAGE) += hash_hugepage.o obj-$(CONFIG_PPC_SUBPAGE_PROT) += subpage_prot.o obj-$(CONFIG_SPAPR_TCE_IOMMU) += iommu_api.o obj-$(CONFIG_PPC_PKEY) += pkeys.o +obj-$(CONFIG_PPC_MM_SLICES)+= slice.o # Instrumenting the SLB fault path can lead to duplicate SLB entries KCOV_INSTRUMENT_slb.o := n diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/book3s64/slice.c similarity index 100% rename from arch/powerpc/mm/slice.c rename to arch/powerpc/mm/book3s64/slice.c diff --git a/arch/powerpc/mm/nohash/mmu_context.c b/arch/powerpc/mm/nohash/mmu_context.c index 44b2b5e7cabe..b8dfe66bdf18 100644 --- a/arch/powerpc/mm/nohash/mmu_context.c +++ b/arch/powerpc/mm/nohash/mmu_context.c @@ -320,8 +320,6 @@ int init_new_context(struct task_struct *t, struct mm_struct *mm) * have id == 0) and don't alter context slice inherited via fork (which * will have id != 0). */ - if (mm->context.id == 0) - slice_init_new_context_exec(mm); mm->context.id = MMU_NO_CONTEXT; mm->context.active = 0; pte_frag_set(&mm->context, NULL); diff --git a/arch/powerpc/mm/nohash/tlb.c b/arch/powerpc/mm/nohash/tlb.c index 89353d4f5604..4822dfd6c246 100644 --- a/arch/powerpc/mm/nohash/tlb.c +++ b/arch/powerpc/mm/nohash/tlb.c @@ -782,9 +782,5 @@ void __init early_init_mmu(void) #ifdef CONFIG_PPC_47x early_init_mmu_47x(); #endif - -#ifdef CONFIG_PPC_MM_SLICES - mm_ctx_set_slb_addr_limit(&init_mm.context, SLB_ADDR_LIMIT_DEFAULT); -#endif } #endif /* CONFIG_PPC64 */ -- 2.33.1
[PATCH 8/8] powerpc/mm: Properly randomise mmap with slices
Now that powerpc switched to default topdown mmap layout, mm->mmap_base is properly randomised. However slice_find_area_bottomup() doesn't use mm->mmap_base but uses the fixed TASK_UNMAPPED_BASE instead. slice_find_area_bottomup() being used as a fallback to slice_find_area_topdown(), it can't use mm->mmap_base directly. Instead of always using TASK_UNMAPPED_BASE as base address, leave it to the caller. When called from slice_find_area_topdown() TASK_UNMAPPED_BASE is used. Otherwise mm->mmap_base is used. Signed-off-by: Christophe Leroy --- arch/powerpc/mm/book3s64/slice.c | 18 +++--- 1 file changed, 7 insertions(+), 11 deletions(-) diff --git a/arch/powerpc/mm/book3s64/slice.c b/arch/powerpc/mm/book3s64/slice.c index 8327a43d29cb..0fef63763e6d 100644 --- a/arch/powerpc/mm/book3s64/slice.c +++ b/arch/powerpc/mm/book3s64/slice.c @@ -276,20 +276,18 @@ static bool slice_scan_available(unsigned long addr, } static unsigned long slice_find_area_bottomup(struct mm_struct *mm, - unsigned long len, + unsigned long addr, unsigned long len, const struct slice_mask *available, int psize, unsigned long high_limit) { int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT); - unsigned long addr, found, next_end; + unsigned long found, next_end; struct vm_unmapped_area_info info; info.flags = 0; info.length = len; info.align_mask = PAGE_MASK & ((1ul << pshift) - 1); info.align_offset = 0; - - addr = TASK_UNMAPPED_BASE; /* * Check till the allow max value for this mmap request */ @@ -322,12 +320,12 @@ static unsigned long slice_find_area_bottomup(struct mm_struct *mm, } static unsigned long slice_find_area_topdown(struct mm_struct *mm, -unsigned long len, +unsigned long addr, unsigned long len, const struct slice_mask *available, int psize, unsigned long high_limit) { int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT); - unsigned long addr, found, prev; + unsigned long found, prev; struct vm_unmapped_area_info info; unsigned long min_addr = max(PAGE_SIZE, mmap_min_addr); @@ -335,8 +333,6 @@ static unsigned long slice_find_area_topdown(struct mm_struct *mm, info.length = len; info.align_mask = PAGE_MASK & ((1ul << pshift) - 1); info.align_offset = 0; - - addr = mm->mmap_base; /* * If we are trying to allocate above DEFAULT_MAP_WINDOW * Add the different to the mmap_base. @@ -377,7 +373,7 @@ static unsigned long slice_find_area_topdown(struct mm_struct *mm, * can happen with large stack limits and large mmap() * allocations. */ - return slice_find_area_bottomup(mm, len, available, psize, high_limit); + return slice_find_area_bottomup(mm, TASK_UNMAPPED_BASE, len, available, psize, high_limit); } @@ -386,9 +382,9 @@ static unsigned long slice_find_area(struct mm_struct *mm, unsigned long len, int topdown, unsigned long high_limit) { if (topdown) - return slice_find_area_topdown(mm, len, mask, psize, high_limit); + return slice_find_area_topdown(mm, mm->mmap_base, len, mask, psize, high_limit); else - return slice_find_area_bottomup(mm, len, mask, psize, high_limit); + return slice_find_area_bottomup(mm, mm->mmap_base, len, mask, psize, high_limit); } static inline void slice_copy_mask(struct slice_mask *dst, -- 2.33.1
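The shape of the change, reduced to a small standalone sketch: only the calling convention mirrors the patch, the types and the "search" itself are stand-ins, not the real slice code.

#include <stdio.h>

/* Toy model of the patch: the bottom-up search takes its start address
 * from the caller instead of hard-coding TASK_UNMAPPED_BASE. */
#define TASK_UNMAPPED_BASE	0x10000000UL

struct toy_mm { unsigned long mmap_base; };

static unsigned long find_area_bottomup(struct toy_mm *mm, unsigned long addr,
					unsigned long len)
{
	(void)mm;
	return addr + len;		/* stand-in for the real gap search */
}

static unsigned long find_area_topdown(struct toy_mm *mm, unsigned long len)
{
	/* Top-down search failed: fall back bottom-up from the historical
	 * fixed base, exactly as slice_find_area_topdown() now does. */
	return find_area_bottomup(mm, TASK_UNMAPPED_BASE, len);
}

int main(void)
{
	struct toy_mm mm = { .mmap_base = 0x7f0000000UL };	/* randomised */

	/* The direct bottom-up path now honours the randomised base... */
	printf("%#lx\n", find_area_bottomup(&mm, mm.mmap_base, 0x1000));
	/* ...while the fallback keeps the old fixed starting point. */
	printf("%#lx\n", find_area_topdown(&mm, 0x1000));
	return 0;
}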
[PATCH 3/8] powerpc/mm: Remove asm/slice.h
Move necessary stuff in asm/book3s/64/slice.h and remove asm/slice.h Signed-off-by: Christophe Leroy --- arch/powerpc/include/asm/book3s/64/hash.h | 3 ++ arch/powerpc/include/asm/book3s/64/mmu-hash.h | 1 + arch/powerpc/include/asm/book3s/64/slice.h| 18 + arch/powerpc/include/asm/page.h | 1 - arch/powerpc/include/asm/slice.h | 37 --- 5 files changed, 22 insertions(+), 38 deletions(-) delete mode 100644 arch/powerpc/include/asm/slice.h diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h index 25f8e90985eb..27be22e6f848 100644 --- a/arch/powerpc/include/asm/book3s/64/hash.h +++ b/arch/powerpc/include/asm/book3s/64/hash.h @@ -99,6 +99,9 @@ * Defines the address of the vmemap area, in its own region on * hash table CPUs. */ +#ifdef CONFIG_HUGETLB_PAGE +#define HAVE_ARCH_HUGETLB_UNMAPPED_AREA +#endif #define HAVE_ARCH_UNMAPPED_AREA #define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.h b/arch/powerpc/include/asm/book3s/64/mmu-hash.h index 3004f3323144..b4b2ca111f75 100644 --- a/arch/powerpc/include/asm/book3s/64/mmu-hash.h +++ b/arch/powerpc/include/asm/book3s/64/mmu-hash.h @@ -18,6 +18,7 @@ * complete pgtable.h but only a portion of it. */ #include +#include #include #include diff --git a/arch/powerpc/include/asm/book3s/64/slice.h b/arch/powerpc/include/asm/book3s/64/slice.h index f0d3194ba41b..5b0f7105bc8b 100644 --- a/arch/powerpc/include/asm/book3s/64/slice.h +++ b/arch/powerpc/include/asm/book3s/64/slice.h @@ -2,6 +2,8 @@ #ifndef _ASM_POWERPC_BOOK3S_64_SLICE_H #define _ASM_POWERPC_BOOK3S_64_SLICE_H +#ifndef __ASSEMBLY__ + #define SLICE_LOW_SHIFT28 #define SLICE_LOW_TOP (0x1ul) #define SLICE_NUM_LOW (SLICE_LOW_TOP >> SLICE_LOW_SHIFT) @@ -13,4 +15,20 @@ #define SLB_ADDR_LIMIT_DEFAULT DEFAULT_MAP_WINDOW_USER64 +struct mm_struct; + +unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len, + unsigned long flags, unsigned int psize, + int topdown); + +unsigned int get_slice_psize(struct mm_struct *mm, unsigned long addr); + +void slice_set_range_psize(struct mm_struct *mm, unsigned long start, + unsigned long len, unsigned int psize); + +void slice_init_new_context_exec(struct mm_struct *mm); +void slice_setup_new_exec(void); + +#endif /* __ASSEMBLY__ */ + #endif /* _ASM_POWERPC_BOOK3S_64_SLICE_H */ diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h index 254687258f42..62e0c6f12869 100644 --- a/arch/powerpc/include/asm/page.h +++ b/arch/powerpc/include/asm/page.h @@ -329,6 +329,5 @@ static inline unsigned long kaslr_offset(void) #include #endif /* __ASSEMBLY__ */ -#include #endif /* _ASM_POWERPC_PAGE_H */ diff --git a/arch/powerpc/include/asm/slice.h b/arch/powerpc/include/asm/slice.h deleted file mode 100644 index be4acc52e8ec.. 
--- a/arch/powerpc/include/asm/slice.h +++ /dev/null @@ -1,37 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#ifndef _ASM_POWERPC_SLICE_H -#define _ASM_POWERPC_SLICE_H - -#ifdef CONFIG_PPC_BOOK3S_64 -#include -#endif - -#ifndef __ASSEMBLY__ - -struct mm_struct; - -#ifdef CONFIG_PPC_BOOK3S_64 - -#ifdef CONFIG_HUGETLB_PAGE -#define HAVE_ARCH_HUGETLB_UNMAPPED_AREA -#endif -#define HAVE_ARCH_UNMAPPED_AREA -#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN - -unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len, - unsigned long flags, unsigned int psize, - int topdown); - -unsigned int get_slice_psize(struct mm_struct *mm, unsigned long addr); - -void slice_set_range_psize(struct mm_struct *mm, unsigned long start, - unsigned long len, unsigned int psize); - -void slice_init_new_context_exec(struct mm_struct *mm); -void slice_setup_new_exec(void); - -#endif /* CONFIG_PPC_BOOK3S_64 */ - -#endif /* __ASSEMBLY__ */ - -#endif /* _ASM_POWERPC_SLICE_H */ -- 2.33.1
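The #ifndef __ASSEMBLY__ guard added around the moved declarations follows the usual pattern for headers that may be pulled in from assembly as well as C: preprocessor constants stay visible everywhere, while prototypes and struct declarations are hidden from the assembler. A hypothetical header laid out the same way (the names are taken from the patch, the file itself is illustrative):

/* example_slice.h -- hypothetical header, shown only for the layout */
#ifndef _EXAMPLE_SLICE_H
#define _EXAMPLE_SLICE_H

#define SLICE_LOW_SHIFT		28	/* usable from C and assembly */

#ifndef __ASSEMBLY__			/* C-only declarations below */
struct mm_struct;

unsigned int get_slice_psize(struct mm_struct *mm, unsigned long addr);
void slice_setup_new_exec(void);
#endif /* __ASSEMBLY__ */

#endif /* _EXAMPLE_SLICE_H */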
[PATCH 5/8] powerpc/mm: Call radix__arch_get_unmapped_area() from arch_get_unmapped_area()
Instead of setting mm->get_unmapped_area() to either arch_get_unmapped_area() or radix__arch_get_unmapped_area(), always set it to arch_get_unmapped_area() and call radix__arch_get_unmapped_area() from there when radix is enabled. To keep radix__arch_get_unmapped_area() static, move it to slice.c Do the same with radix__arch_get_unmapped_area_topdown() Signed-off-by: Christophe Leroy --- arch/powerpc/mm/book3s64/slice.c | 104 ++ arch/powerpc/mm/mmap.c | 123 --- 2 files changed, 104 insertions(+), 123 deletions(-) diff --git a/arch/powerpc/mm/book3s64/slice.c b/arch/powerpc/mm/book3s64/slice.c index 62848c5fa2d6..8327a43d29cb 100644 --- a/arch/powerpc/mm/book3s64/slice.c +++ b/arch/powerpc/mm/book3s64/slice.c @@ -639,12 +639,113 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len, } EXPORT_SYMBOL_GPL(slice_get_unmapped_area); +/* + * Same function as generic code used only for radix, because we don't need to overload + * the generic one. But we will have to duplicate, because hash select + * HAVE_ARCH_UNMAPPED_AREA + */ +static unsigned long +radix__arch_get_unmapped_area(struct file *filp, unsigned long addr, unsigned long len, + unsigned long pgoff, unsigned long flags) +{ + struct mm_struct *mm = current->mm; + struct vm_area_struct *vma; + int fixed = (flags & MAP_FIXED); + unsigned long high_limit; + struct vm_unmapped_area_info info; + + high_limit = DEFAULT_MAP_WINDOW; + if (addr >= high_limit || (fixed && (addr + len > high_limit))) + high_limit = TASK_SIZE; + + if (len > high_limit) + return -ENOMEM; + + if (fixed) { + if (addr > high_limit - len) + return -ENOMEM; + return addr; + } + + if (addr) { + addr = PAGE_ALIGN(addr); + vma = find_vma(mm, addr); + if (high_limit - len >= addr && addr >= mmap_min_addr && + (!vma || addr + len <= vm_start_gap(vma))) + return addr; + } + + info.flags = 0; + info.length = len; + info.low_limit = mm->mmap_base; + info.high_limit = high_limit; + info.align_mask = 0; + + return vm_unmapped_area(&info); +} + +static unsigned long +radix__arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0, + const unsigned long len, const unsigned long pgoff, + const unsigned long flags) +{ + struct vm_area_struct *vma; + struct mm_struct *mm = current->mm; + unsigned long addr = addr0; + int fixed = (flags & MAP_FIXED); + unsigned long high_limit; + struct vm_unmapped_area_info info; + + high_limit = DEFAULT_MAP_WINDOW; + if (addr >= high_limit || (fixed && (addr + len > high_limit))) + high_limit = TASK_SIZE; + + if (len > high_limit) + return -ENOMEM; + + if (fixed) { + if (addr > high_limit - len) + return -ENOMEM; + return addr; + } + + if (addr) { + addr = PAGE_ALIGN(addr); + vma = find_vma(mm, addr); + if (high_limit - len >= addr && addr >= mmap_min_addr && + (!vma || addr + len <= vm_start_gap(vma))) + return addr; + } + + info.flags = VM_UNMAPPED_AREA_TOPDOWN; + info.length = len; + info.low_limit = max(PAGE_SIZE, mmap_min_addr); + info.high_limit = mm->mmap_base + (high_limit - DEFAULT_MAP_WINDOW); + info.align_mask = 0; + + addr = vm_unmapped_area(&info); + if (!(addr & ~PAGE_MASK)) + return addr; + VM_BUG_ON(addr != -ENOMEM); + + /* +* A failed mmap() very likely causes application failure, +* so fall back to the bottom-up function here. This scenario +* can happen with large stack limits and large mmap() +* allocations. 
+*/ + return radix__arch_get_unmapped_area(filp, addr0, len, pgoff, flags); +} + unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsigned long len, unsigned long pgoff, unsigned long flags) { + if (radix_enabled()) + return radix__arch_get_unmapped_area(filp, addr, len, pgoff, flags); + return slice_get_unmapped_area(addr, len, flags, mm_ctx_user_psize(¤t->mm->context), 0); } @@ -655,6 +756,9 @@ unsigned long arch_get_unmapped_area_topdown(struct file *filp, const unsigned long pgoff, c
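The resulting control flow is a plain run-time dispatch on the MMU type, instead of installing different function pointers per mm. A standalone sketch of that shape (names shortened, bodies are placeholders, radix_enabled() stands in for the real feature test):

#include <stdbool.h>
#include <stdio.h>

static bool radix_enabled(void)
{
	return true;			/* pretend we booted with radix */
}

static unsigned long radix_get_area(unsigned long len)
{
	return 0x10000;			/* placeholder radix policy */
}

static unsigned long slice_get_area(unsigned long len)
{
	return 0x20000;			/* placeholder hash/slice policy */
}

unsigned long arch_get_unmapped_area(unsigned long len)
{
	/* Both helpers can stay static in a single file, and nothing needs
	 * to patch a per-mm get_unmapped_area pointer any more. */
	if (radix_enabled())
		return radix_get_area(len);
	return slice_get_area(len);
}

int main(void)
{
	printf("%#lx\n", arch_get_unmapped_area(0x1000));
	return 0;
}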
[PATCH 6/8] mm: Allow arch specific arch_randomize_brk() with CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
Commit e7142bf5d231 ("arm64, mm: make randomization selected by generic topdown mmap layout") introduced a default version of arch_randomize_brk() provided when CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT is selected. powerpc could select CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT but needs to provide its own arch_randomize_brk(). In order to allow that, don't make CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT select CONFIG_ARCH_HAS_ELF_RANDOMIZE. Instead, ensure that selecting CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT and selecting CONFIG_ARCH_HAS_ELF_RANDOMIZE has the same effect. Then only provide the default arch_randomize_brk() when the architecture has not selected CONFIG_ARCH_HAS_ELF_RANDOMIZE. Cc: Alexandre Ghiti Signed-off-by: Christophe Leroy --- arch/Kconfig | 1 - fs/binfmt_elf.c | 3 ++- include/linux/elf-randomize.h | 3 ++- mm/util.c | 2 ++ 4 files changed, 6 insertions(+), 3 deletions(-) diff --git a/arch/Kconfig b/arch/Kconfig index 26b8ed11639d..ef3ce947b7a1 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -1000,7 +1000,6 @@ config HAVE_ARCH_COMPAT_MMAP_BASES config ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT bool depends on MMU - select ARCH_HAS_ELF_RANDOMIZE config HAVE_STACK_VALIDATION bool diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c index f8c7f26f1fbb..28968a189a91 100644 --- a/fs/binfmt_elf.c +++ b/fs/binfmt_elf.c @@ -1287,7 +1287,8 @@ static int load_elf_binary(struct linux_binprm *bprm) * (since it grows up, and may collide early with the stack * growing down), and into the unused ELF_ET_DYN_BASE region. */ - if (IS_ENABLED(CONFIG_ARCH_HAS_ELF_RANDOMIZE) && + if ((IS_ENABLED(CONFIG_ARCH_HAS_ELF_RANDOMIZE) || +IS_ENABLED(CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT)) && elf_ex->e_type == ET_DYN && !interpreter) { mm->brk = mm->start_brk = ELF_ET_DYN_BASE; } diff --git a/include/linux/elf-randomize.h b/include/linux/elf-randomize.h index da0dbb7b6be3..1e471ca7caaf 100644 --- a/include/linux/elf-randomize.h +++ b/include/linux/elf-randomize.h @@ -4,7 +4,8 @@ struct mm_struct; -#ifndef CONFIG_ARCH_HAS_ELF_RANDOMIZE +#if !defined(CONFIG_ARCH_HAS_ELF_RANDOMIZE) && \ + !defined(CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT) static inline unsigned long arch_mmap_rnd(void) { return 0; } # if defined(arch_randomize_brk) && defined(CONFIG_COMPAT_BRK) # define compat_brk_randomized diff --git a/mm/util.c b/mm/util.c index e58151a61255..edb9e94cceb5 100644 --- a/mm/util.c +++ b/mm/util.c @@ -344,6 +344,7 @@ unsigned long randomize_stack_top(unsigned long stack_top) } #ifdef CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT +#ifndef CONFIG_ARCH_HAS_ELF_RANDOMIZE unsigned long arch_randomize_brk(struct mm_struct *mm) { /* Is the current task 32bit ? */ @@ -352,6 +353,7 @@ unsigned long arch_randomize_brk(struct mm_struct *mm) return randomize_page(mm->brk, SZ_1G); } +#endif unsigned long arch_mmap_rnd(void) { -- 2.33.1
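The preprocessor logic above boils down to "provide the generic definition only when the architecture has not claimed its own". A self-contained toy model of that default-unless-overridden pattern; WANT_DEFAULT and HAS_OWN are made-up stand-ins for the two config symbols, not real kernel options:

#include <stdio.h>

#define WANT_DEFAULT	1
/* #define HAS_OWN	1 */	/* an arch providing its own implementation */

#if defined(WANT_DEFAULT) && !defined(HAS_OWN)
/* generic default, as mm/util.c provides it after this patch */
static const char *arch_randomize_brk(void) { return "generic default"; }
#endif

#ifdef HAS_OWN
/* the architecture's own definition wins when both symbols are set */
static const char *arch_randomize_brk(void) { return "arch-specific"; }
#endif

int main(void)
{
	puts(arch_randomize_brk());
	return 0;
}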
[PATCH 7/8] powerpc/mm: Convert to default topdown mmap layout
Select CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT and remove arch/powerpc/mm/mmap.c This change provides standard randomisation of mmaps. See commit 8b8addf891de ("x86/mm/32: Enable full randomization on i386 and X86_32") for all the benefits of mmap randomisation. Signed-off-by: Christophe Leroy --- arch/powerpc/Kconfig | 1 + arch/powerpc/include/asm/processor.h | 2 - arch/powerpc/mm/Makefile | 2 +- arch/powerpc/mm/mmap.c | 105 --- 4 files changed, 2 insertions(+), 108 deletions(-) delete mode 100644 arch/powerpc/mm/mmap.c diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig index dea74d7717c0..05ddcf99cb34 100644 --- a/arch/powerpc/Kconfig +++ b/arch/powerpc/Kconfig @@ -158,6 +158,7 @@ config PPC select ARCH_USE_MEMTEST select ARCH_USE_QUEUED_RWLOCKS if PPC_QUEUED_SPINLOCKS select ARCH_USE_QUEUED_SPINLOCKSif PPC_QUEUED_SPINLOCKS + select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT select ARCH_WANT_IPC_PARSE_VERSION select ARCH_WANT_IRQS_OFF_ACTIVATE_MM select ARCH_WANT_LD_ORPHAN_WARN diff --git a/arch/powerpc/include/asm/processor.h b/arch/powerpc/include/asm/processor.h index e39bd0ff69f3..d906b14dd599 100644 --- a/arch/powerpc/include/asm/processor.h +++ b/arch/powerpc/include/asm/processor.h @@ -378,8 +378,6 @@ static inline void prefetchw(const void *x) #define spin_lock_prefetch(x) prefetchw(x) -#define HAVE_ARCH_PICK_MMAP_LAYOUT - /* asm stubs */ extern unsigned long isa300_idle_stop_noloss(unsigned long psscr_val); extern unsigned long isa300_idle_stop_mayloss(unsigned long psscr_val); diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile index d4c20484dad9..503a6e249940 100644 --- a/arch/powerpc/mm/Makefile +++ b/arch/powerpc/mm/Makefile @@ -5,7 +5,7 @@ ccflags-$(CONFIG_PPC64):= $(NO_MINIMAL_TOC) -obj-y := fault.o mem.o pgtable.o mmap.o maccess.o pageattr.o \ +obj-y := fault.o mem.o pgtable.o maccess.o pageattr.o \ init_$(BITS).o pgtable_$(BITS).o \ pgtable-frag.o ioremap.o ioremap_$(BITS).o \ init-common.o mmu_context.o drmem.o \ diff --git a/arch/powerpc/mm/mmap.c b/arch/powerpc/mm/mmap.c deleted file mode 100644 index 5972d619d274.. --- a/arch/powerpc/mm/mmap.c +++ /dev/null @@ -1,105 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* - * flexible mmap layout support - * - * Copyright 2003-2004 Red Hat Inc., Durham, North Carolina. - * All Rights Reserved. - * - * Started by Ingo Molnar - */ - -#include -#include -#include -#include -#include -#include -#include -#include - -/* - * Top of mmap area (just below the process stack). - * - * Leave at least a ~128 MB hole. 
- */ -#define MIN_GAP (128*1024*1024) -#define MAX_GAP (TASK_SIZE/6*5) - -static inline int mmap_is_legacy(struct rlimit *rlim_stack) -{ - if (current->personality & ADDR_COMPAT_LAYOUT) - return 1; - - if (rlim_stack->rlim_cur == RLIM_INFINITY) - return 1; - - return sysctl_legacy_va_layout; -} - -unsigned long arch_mmap_rnd(void) -{ - unsigned long shift, rnd; - - shift = mmap_rnd_bits; -#ifdef CONFIG_COMPAT - if (is_32bit_task()) - shift = mmap_rnd_compat_bits; -#endif - rnd = get_random_long() % (1ul << shift); - - return rnd << PAGE_SHIFT; -} - -static inline unsigned long stack_maxrandom_size(void) -{ - if (!(current->flags & PF_RANDOMIZE)) - return 0; - - /* 8MB for 32bit, 1GB for 64bit */ - if (is_32bit_task()) - return (1<<23); - else - return (1<<30); -} - -static inline unsigned long mmap_base(unsigned long rnd, - struct rlimit *rlim_stack) -{ - unsigned long gap = rlim_stack->rlim_cur; - unsigned long pad = stack_maxrandom_size() + stack_guard_gap; - - /* Values close to RLIM_INFINITY can overflow. */ - if (gap + pad > gap) - gap += pad; - - if (gap < MIN_GAP) - gap = MIN_GAP; - else if (gap > MAX_GAP) - gap = MAX_GAP; - - return PAGE_ALIGN(DEFAULT_MAP_WINDOW - gap - rnd); -} - -/* - * This function, called very early during the creation of a new - * process VM image, sets up which VM layout function to use: - */ -void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack) -{ - unsigned long random_factor = 0UL; - - if (current->flags & PF_RANDOMIZE) - random_factor = arch_mmap_rnd(); - - /* -* Fall back to the standard layout if the personality -* bit is set, or if the expected stack growth is unlimited: -*/ - if (mmap_is_legacy(rlim_stack)) { - mm->mmap_base = TASK_UNMAPPED_BASE; -
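For reference, the deleted powerpc file computed mmap_base from a page-aligned random offset clamped by the stack rlimit; the generic topdown layout the arch now relies on does essentially the same job. A standalone rendering of that arithmetic, with simplified constants and without the stack-padding and compat details of the original:

#include <stdio.h>
#include <stdlib.h>

#define PAGE_SHIFT		16UL			/* e.g. 64K pages */
#define PAGE_SIZE		(1UL << PAGE_SHIFT)
#define PAGE_ALIGN(x)		(((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))
#define DEFAULT_MAP_WINDOW	(1UL << 47)
#define MIN_GAP			(128UL * 1024 * 1024)
#define MAX_GAP			(DEFAULT_MAP_WINDOW / 6 * 5)

static unsigned long mmap_rnd(unsigned int rnd_bits)
{
	/* random page-aligned offset, as arch_mmap_rnd() computed it */
	return ((unsigned long)rand() % (1UL << rnd_bits)) << PAGE_SHIFT;
}

static unsigned long mmap_base(unsigned long stack_limit, unsigned long rnd)
{
	unsigned long gap = stack_limit;

	if (gap < MIN_GAP)
		gap = MIN_GAP;
	else if (gap > MAX_GAP)
		gap = MAX_GAP;

	return PAGE_ALIGN(DEFAULT_MAP_WINDOW - gap - rnd);
}

int main(void)
{
	unsigned long rnd = mmap_rnd(14);	/* e.g. mmap_rnd_bits */

	printf("rnd=%#lx mmap_base=%#lx\n", rnd, mmap_base(8UL << 20, rnd));
	return 0;
}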
[PATCH 0/8] Convert powerpc to default topdown mmap layout
This series converts powerpc to default topdown mmap layout. powerpc provides its own arch_get_unmapped_area() only when slices are needed, which is only for book3s/64. First part of the series moves slices into book3s/64 specific directories and cleans up other subarchitectures. Then a small modification is done to core mm to allow powerpc to still provide its own arch_randomize_brk() Last part converts to default topdown mmap layout. Christophe Leroy (8): powerpc/mm: Make slice specific to book3s/64 powerpc/mm: Remove CONFIG_PPC_MM_SLICES powerpc/mm: Remove asm/slice.h powerpc/mm: Move vma_mmu_pagesize() and hugetlb_get_unmapped_area() to slice.c powerpc/mm: Call radix__arch_get_unmapped_area() from arch_get_unmapped_area() mm: Allow arch specific arch_randomize_brk() with CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT powerpc/mm: Convert to default topdown mmap layout powerpc/mm: Properly randomise mmap with slices arch/Kconfig | 1 - arch/powerpc/Kconfig | 1 + arch/powerpc/include/asm/book3s/64/hash.h | 5 +- arch/powerpc/include/asm/book3s/64/mmu-hash.h | 1 + arch/powerpc/include/asm/book3s/64/slice.h| 18 ++ arch/powerpc/include/asm/hugetlb.h| 2 +- arch/powerpc/include/asm/paca.h | 5 - arch/powerpc/include/asm/page.h | 1 - arch/powerpc/include/asm/processor.h | 2 - arch/powerpc/include/asm/slice.h | 46 arch/powerpc/kernel/paca.c| 5 - arch/powerpc/mm/Makefile | 3 +- arch/powerpc/mm/book3s64/Makefile | 2 +- arch/powerpc/mm/book3s64/hash_utils.c | 14 -- arch/powerpc/mm/{ => book3s64}/slice.c| 144 ++- arch/powerpc/mm/hugetlbpage.c | 28 --- arch/powerpc/mm/mmap.c| 228 -- arch/powerpc/mm/nohash/mmu_context.c | 2 - arch/powerpc/mm/nohash/tlb.c | 4 - arch/powerpc/platforms/Kconfig.cputype| 4 - fs/binfmt_elf.c | 3 +- include/linux/elf-randomize.h | 3 +- mm/util.c | 2 + 23 files changed, 165 insertions(+), 359 deletions(-) delete mode 100644 arch/powerpc/include/asm/slice.h rename arch/powerpc/mm/{ => book3s64}/slice.c (84%) delete mode 100644 arch/powerpc/mm/mmap.c -- 2.33.1
[PATCH 2/8] powerpc/mm: Remove CONFIG_PPC_MM_SLICES
CONFIG_PPC_MM_SLICES is always selected by book3s/64. CONFIG_PPC_MM_SLICES is never selected by other platforms. Remove it. Signed-off-by: Christophe Leroy --- arch/powerpc/include/asm/book3s/64/hash.h | 2 -- arch/powerpc/include/asm/hugetlb.h| 2 +- arch/powerpc/include/asm/paca.h | 5 - arch/powerpc/include/asm/slice.h | 13 ++--- arch/powerpc/kernel/paca.c| 5 - arch/powerpc/mm/book3s64/Makefile | 3 +-- arch/powerpc/mm/book3s64/hash_utils.c | 14 -- arch/powerpc/mm/hugetlbpage.c | 4 ++-- arch/powerpc/platforms/Kconfig.cputype| 4 9 files changed, 6 insertions(+), 46 deletions(-) diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h index 674fe0e890dc..25f8e90985eb 100644 --- a/arch/powerpc/include/asm/book3s/64/hash.h +++ b/arch/powerpc/include/asm/book3s/64/hash.h @@ -99,10 +99,8 @@ * Defines the address of the vmemap area, in its own region on * hash table CPUs. */ -#ifdef CONFIG_PPC_MM_SLICES #define HAVE_ARCH_UNMAPPED_AREA #define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN -#endif /* CONFIG_PPC_MM_SLICES */ /* PTEIDX nibble */ #define _PTEIDX_SECONDARY 0x8 diff --git a/arch/powerpc/include/asm/hugetlb.h b/arch/powerpc/include/asm/hugetlb.h index f18c543bc01d..83f067d4d2f3 100644 --- a/arch/powerpc/include/asm/hugetlb.h +++ b/arch/powerpc/include/asm/hugetlb.h @@ -24,7 +24,7 @@ static inline int is_hugepage_only_range(struct mm_struct *mm, unsigned long addr, unsigned long len) { - if (IS_ENABLED(CONFIG_PPC_MM_SLICES) && !radix_enabled()) + if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) && !radix_enabled()) return slice_is_hugepage_only_range(mm, addr, len); return 0; } diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h index dc05a862e72a..20bef2e8533b 100644 --- a/arch/powerpc/include/asm/paca.h +++ b/arch/powerpc/include/asm/paca.h @@ -149,13 +149,8 @@ struct paca_struct { #endif /* CONFIG_PPC_BOOK3E */ #ifdef CONFIG_PPC_BOOK3S -#ifdef CONFIG_PPC_MM_SLICES unsigned char mm_ctx_low_slices_psize[BITS_PER_LONG / BITS_PER_BYTE]; unsigned char mm_ctx_high_slices_psize[SLICE_ARRAY_SIZE]; -#else - u16 mm_ctx_user_psize; - u16 mm_ctx_sllp; -#endif #endif /* diff --git a/arch/powerpc/include/asm/slice.h b/arch/powerpc/include/asm/slice.h index 0bdd9c62eca0..be4acc52e8ec 100644 --- a/arch/powerpc/include/asm/slice.h +++ b/arch/powerpc/include/asm/slice.h @@ -10,7 +10,7 @@ struct mm_struct; -#ifdef CONFIG_PPC_MM_SLICES +#ifdef CONFIG_PPC_BOOK3S_64 #ifdef CONFIG_HUGETLB_PAGE #define HAVE_ARCH_HUGETLB_UNMAPPED_AREA @@ -30,16 +30,7 @@ void slice_set_range_psize(struct mm_struct *mm, unsigned long start, void slice_init_new_context_exec(struct mm_struct *mm); void slice_setup_new_exec(void); -#else /* CONFIG_PPC_MM_SLICES */ - -static inline void slice_init_new_context_exec(struct mm_struct *mm) {} - -static inline unsigned int get_slice_psize(struct mm_struct *mm, unsigned long addr) -{ - return 0; -} - -#endif /* CONFIG_PPC_MM_SLICES */ +#endif /* CONFIG_PPC_BOOK3S_64 */ #endif /* __ASSEMBLY__ */ diff --git a/arch/powerpc/kernel/paca.c b/arch/powerpc/kernel/paca.c index 4208b4044d12..a61f6fdcfb00 100644 --- a/arch/powerpc/kernel/paca.c +++ b/arch/powerpc/kernel/paca.c @@ -346,16 +346,11 @@ void copy_mm_to_paca(struct mm_struct *mm) #ifdef CONFIG_PPC_BOOK3S mm_context_t *context = &mm->context; -#ifdef CONFIG_PPC_MM_SLICES VM_BUG_ON(!mm_ctx_slb_addr_limit(context)); memcpy(&get_paca()->mm_ctx_low_slices_psize, mm_ctx_low_slices(context), LOW_SLICE_ARRAY_SZ); memcpy(&get_paca()->mm_ctx_high_slices_psize, mm_ctx_high_slices(context), 
TASK_SLICE_ARRAY_SZ(context)); -#else /* CONFIG_PPC_MM_SLICES */ - get_paca()->mm_ctx_user_psize = context->user_psize; - get_paca()->mm_ctx_sllp = context->sllp; -#endif #else /* !CONFIG_PPC_BOOK3S */ return; #endif diff --git a/arch/powerpc/mm/book3s64/Makefile b/arch/powerpc/mm/book3s64/Makefile index 30951668c684..f8562c79c59f 100644 --- a/arch/powerpc/mm/book3s64/Makefile +++ b/arch/powerpc/mm/book3s64/Makefile @@ -4,7 +4,7 @@ ccflags-y := $(NO_MINIMAL_TOC) CFLAGS_REMOVE_slb.o = $(CC_FLAGS_FTRACE) -obj-y += hash_pgtable.o hash_utils.o slb.o \ +obj-y += hash_pgtable.o hash_utils.o slb.o slice.o \ mmu_context.o pgtable.o hash_tlb.o obj-$(CONFIG_PPC_NATIVE) += hash_native.o obj-$(CONFIG_PPC_RADIX_MMU)+= radix_pgtable.o radix_tlb.o @@ -18,7 +18,6 @@ obj-$(CONFIG_TRANSPARENT_HUGEPAGE) += hash_hugepage.o obj-$(CONFIG_PPC_SUBPAGE_PROT) += subpage_prot.o obj-$(CONFIG_SPAPR_TCE_IOMMU) += iommu_api.o obj-$(CONFIG_PPC_PKE
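The is_hugepage_only_range() hunk keeps the IS_ENABLED() style rather than switching to #ifdef: the slice branch is still parsed and type-checked on every platform and then discarded as dead code where CONFIG_PPC_BOOK3S_64 is off. A tiny standalone illustration of that idiom; the macro below is a plain 0/1 stand-in, not the real kernel helper:

#define IS_ENABLED_BOOK3S_64	0	/* flip to 1 to model book3s/64 */

static int slice_is_hugepage_only_range(void)
{
	return 1;			/* placeholder for the slice check */
}

int is_hugepage_only_range(void)
{
	/* The condition is a compile-time constant, so the compiler keeps
	 * the call only when the "option" is enabled, while the code stays
	 * visible and buildable in every configuration. */
	if (IS_ENABLED_BOOK3S_64)
		return slice_is_hugepage_only_range();
	return 0;
}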
[PATCH 4/8] powerpc/mm: Move vma_mmu_pagesize() and hugetlb_get_unmapped_area() to slice.c
vma_mmu_pagesize() is only required for slices, otherwise there is a generic weak version. hugetlb_get_unmapped_area() is dedicated to slices. Move them to slice.c Signed-off-by: Christophe Leroy --- arch/powerpc/mm/book3s64/slice.c | 22 ++ arch/powerpc/mm/hugetlbpage.c| 28 2 files changed, 22 insertions(+), 28 deletions(-) diff --git a/arch/powerpc/mm/book3s64/slice.c b/arch/powerpc/mm/book3s64/slice.c index 82b45b1cb973..62848c5fa2d6 100644 --- a/arch/powerpc/mm/book3s64/slice.c +++ b/arch/powerpc/mm/book3s64/slice.c @@ -779,4 +779,26 @@ int slice_is_hugepage_only_range(struct mm_struct *mm, unsigned long addr, return !slice_check_range_fits(mm, maskp, addr, len); } + +unsigned long vma_mmu_pagesize(struct vm_area_struct *vma) +{ + /* With radix we don't use slice, so derive it from vma*/ + if (radix_enabled()) + return vma_kernel_pagesize(vma); + + return 1UL << mmu_psize_to_shift(get_slice_psize(vma->vm_mm, vma->vm_start)); +} + +unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr, + unsigned long len, unsigned long pgoff, + unsigned long flags) +{ + struct hstate *hstate = hstate_file(file); + int mmu_psize = shift_to_mmu_psize(huge_page_shift(hstate)); + + if (radix_enabled()) + return radix__hugetlb_get_unmapped_area(file, addr, len, pgoff, flags); + + return slice_get_unmapped_area(addr, len, flags, mmu_psize, 1); +} #endif diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c index 10c3b2b8e9d8..eb9de09e49a3 100644 --- a/arch/powerpc/mm/hugetlbpage.c +++ b/arch/powerpc/mm/hugetlbpage.c @@ -542,34 +542,6 @@ struct page *follow_huge_pd(struct vm_area_struct *vma, return page; } -#ifdef CONFIG_PPC_BOOK3S_64 -unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr, - unsigned long len, unsigned long pgoff, - unsigned long flags) -{ - struct hstate *hstate = hstate_file(file); - int mmu_psize = shift_to_mmu_psize(huge_page_shift(hstate)); - -#ifdef CONFIG_PPC_RADIX_MMU - if (radix_enabled()) - return radix__hugetlb_get_unmapped_area(file, addr, len, - pgoff, flags); -#endif - return slice_get_unmapped_area(addr, len, flags, mmu_psize, 1); -} -#endif - -unsigned long vma_mmu_pagesize(struct vm_area_struct *vma) -{ - /* With radix we don't use slice, so derive it from vma*/ - if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) && !radix_enabled()) { - unsigned int psize = get_slice_psize(vma->vm_mm, vma->vm_start); - - return 1UL << mmu_psize_to_shift(psize); - } - return vma_kernel_pagesize(vma); -} - bool __init arch_hugetlb_valid_size(unsigned long size) { int shift = __ffs(size); -- 2.33.1
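On the hash side, the page size comes from the slice's page-size index via 1UL << mmu_psize_to_shift(psize). A minimal model of that lookup; the indices and shifts below are the usual powerpc values (4K, 64K, 16M), shown purely for illustration:

#include <stdio.h>

enum { MMU_PAGE_4K, MMU_PAGE_64K, MMU_PAGE_16M, MMU_PAGE_COUNT };

/* toy stand-in for mmu_psize_defs[].shift */
static const unsigned int psize_shift[MMU_PAGE_COUNT] = {
	[MMU_PAGE_4K]	= 12,
	[MMU_PAGE_64K]	= 16,
	[MMU_PAGE_16M]	= 24,
};

static unsigned long psize_to_bytes(int psize)
{
	return 1UL << psize_shift[psize];
}

int main(void)
{
	/* what vma_mmu_pagesize() reports for a slice backed by 64K pages */
	printf("%lu bytes\n", psize_to_bytes(MMU_PAGE_64K));
	return 0;
}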
Re: [PATCH 6/8] mm: Allow arch specific arch_randomize_brk() with CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
Hi Christophe, Le 22/11/2021 à 09:48, Christophe Leroy a écrit : Commit e7142bf5d231 ("arm64, mm: make randomization selected by generic topdown mmap layout") introduced a default version of arch_randomize_brk() provided when CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT is selected. powerpc could select CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT but needs to provide its own arch_randomize_brk(). In order to allow that, don't make CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT select CONFIG_ARCH_HAS_ELF_RANDOMIZE. Instead, ensure that selecting CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT and selecting CONFIG_ARCH_HAS_ELF_RANDOMIZE has the same effect. This feels weird to me since if CONFIG_ARCH_HAS_ELF_RANDOMIZE is used somewhere else at some point, it is not natural to add CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT: can't we use a __weak function or a new CONFIG_ARCH_HAS_RANDOMIZE_BRK? Thanks, Alex Then only provide the default arch_randomize_brk() when the architecture has not selected CONFIG_ARCH_HAS_ELF_RANDOMIZE. Cc: Alexandre Ghiti Signed-off-by: Christophe Leroy --- arch/Kconfig | 1 - fs/binfmt_elf.c | 3 ++- include/linux/elf-randomize.h | 3 ++- mm/util.c | 2 ++ 4 files changed, 6 insertions(+), 3 deletions(-) diff --git a/arch/Kconfig b/arch/Kconfig index 26b8ed11639d..ef3ce947b7a1 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -1000,7 +1000,6 @@ config HAVE_ARCH_COMPAT_MMAP_BASES config ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT bool depends on MMU - select ARCH_HAS_ELF_RANDOMIZE config HAVE_STACK_VALIDATION bool diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c index f8c7f26f1fbb..28968a189a91 100644 --- a/fs/binfmt_elf.c +++ b/fs/binfmt_elf.c @@ -1287,7 +1287,8 @@ static int load_elf_binary(struct linux_binprm *bprm) * (since it grows up, and may collide early with the stack * growing down), and into the unused ELF_ET_DYN_BASE region. */ - if (IS_ENABLED(CONFIG_ARCH_HAS_ELF_RANDOMIZE) && + if ((IS_ENABLED(CONFIG_ARCH_HAS_ELF_RANDOMIZE) || +IS_ENABLED(CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT)) && elf_ex->e_type == ET_DYN && !interpreter) { mm->brk = mm->start_brk = ELF_ET_DYN_BASE; } diff --git a/include/linux/elf-randomize.h b/include/linux/elf-randomize.h index da0dbb7b6be3..1e471ca7caaf 100644 --- a/include/linux/elf-randomize.h +++ b/include/linux/elf-randomize.h @@ -4,7 +4,8 @@ struct mm_struct; -#ifndef CONFIG_ARCH_HAS_ELF_RANDOMIZE +#if !defined(CONFIG_ARCH_HAS_ELF_RANDOMIZE) && \ + !defined(CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT) static inline unsigned long arch_mmap_rnd(void) { return 0; } # if defined(arch_randomize_brk) && defined(CONFIG_COMPAT_BRK) # define compat_brk_randomized diff --git a/mm/util.c b/mm/util.c index e58151a61255..edb9e94cceb5 100644 --- a/mm/util.c +++ b/mm/util.c @@ -344,6 +344,7 @@ unsigned long randomize_stack_top(unsigned long stack_top) } #ifdef CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT +#ifndef CONFIG_ARCH_HAS_ELF_RANDOMIZE unsigned long arch_randomize_brk(struct mm_struct *mm) { /* Is the current task 32bit ? */ @@ -352,6 +353,7 @@ unsigned long arch_randomize_brk(struct mm_struct *mm) return randomize_page(mm->brk, SZ_1G); } +#endif unsigned long arch_mmap_rnd(void) {
Re: [PATCH 6/8] mm: Allow arch specific arch_randomize_brk() with CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
On 22/11/2021 at 12:22, Alex Ghiti wrote:
> Hi Christophe,
>
> On 22/11/2021 at 09:48, Christophe Leroy wrote:
>> Commit e7142bf5d231 ("arm64, mm: make randomization selected by generic
>> topdown mmap layout") introduced a default version of
>> arch_randomize_brk() provided when
>> CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT is selected.
>>
>> powerpc could select CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT but
>> needs to provide its own arch_randomize_brk().
>>
>> In order to allow that, don't make
>> CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT select
>> CONFIG_ARCH_HAS_ELF_RANDOMIZE. Instead, ensure that selecting
>> CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT and selecting
>> CONFIG_ARCH_HAS_ELF_RANDOMIZE has the same effect.
>
> This feels weird to me since if CONFIG_ARCH_HAS_ELF_RANDOMIZE is used
> somewhere else at some point, it is not natural to add
> CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT: can't we use a __weak
> function or a new CONFIG_ARCH_HAS_RANDOMIZE_BRK?

Yes, I also found things a bit weird. CONFIG_ARCH_HAS_RANDOMIZE_BRK could
be an idea, but how different would it be from
CONFIG_ARCH_HAS_ELF_RANDOMIZE? In fact I find it weird that
CONFIG_ARCH_HAS_ELF_RANDOMIZE is selected by
CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT and not by the arch itself.

On the other hand, CONFIG_ARCH_HAS_ELF_RANDOMIZE also covers
arch_mmap_rnd(), and here we are talking about arch_randomize_brk() only.

In the beginning I was thinking about adding a
CONFIG_ARCH_WANT_DEFAULT_RANDOMIZE_BRK, but that would have meant adding
it to the few other arches that select
CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT.

So I think I will go for the __weak function option.

Thanks
Christophe
Re: [PATCH 6/8] mm: Allow arch specific arch_randomize_brk() with CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
On 11/22/21 12:47, Christophe Leroy wrote:
> On 22/11/2021 at 12:22, Alex Ghiti wrote:
>> Hi Christophe,
>>
>> On 22/11/2021 at 09:48, Christophe Leroy wrote:
>>> Commit e7142bf5d231 ("arm64, mm: make randomization selected by generic
>>> topdown mmap layout") introduced a default version of
>>> arch_randomize_brk() provided when
>>> CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT is selected.
>>>
>>> powerpc could select CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT but
>>> needs to provide its own arch_randomize_brk().
>>>
>>> In order to allow that, don't make
>>> CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT select
>>> CONFIG_ARCH_HAS_ELF_RANDOMIZE. Instead, ensure that selecting
>>> CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT and selecting
>>> CONFIG_ARCH_HAS_ELF_RANDOMIZE has the same effect.
>>
>> This feels weird to me since if CONFIG_ARCH_HAS_ELF_RANDOMIZE is used
>> somewhere else at some point, it is not natural to add
>> CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT: can't we use a __weak
>> function or a new CONFIG_ARCH_HAS_RANDOMIZE_BRK?
>
> Yes, I also found things a bit weird. CONFIG_ARCH_HAS_RANDOMIZE_BRK could
> be an idea, but how different would it be from
> CONFIG_ARCH_HAS_ELF_RANDOMIZE? In fact I find it weird that
> CONFIG_ARCH_HAS_ELF_RANDOMIZE is selected by
> CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT and not by the arch itself.

IIRC, this was a request from Kees Cook, who wanted to enforce this
security measure.

> On the other hand, CONFIG_ARCH_HAS_ELF_RANDOMIZE also covers
> arch_mmap_rnd(), and here we are talking about arch_randomize_brk() only.
>
> In the beginning I was thinking about adding a
> CONFIG_ARCH_WANT_DEFAULT_RANDOMIZE_BRK, but that would have meant adding
> it to the few other arches that select
> CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT.
>
> So I think I will go for the __weak function option.

Ok, thanks.

Alex

> Thanks
> Christophe
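The __weak option the thread converges on is the standard weak-symbol override: the generic code keeps a weak default and an architecture simply provides a strong definition. A standalone demonstration of the linking behaviour; the names and numbers are illustrative only, not the eventual kernel patch:

/* util_demo.c -- build with:  cc -o demo util_demo.c  [arch_demo.c] */
#include <stdio.h>

#define __weak __attribute__((weak))		/* GCC/Clang attribute */

/* weak default, playing the role of the generic arch_randomize_brk() */
__weak unsigned long arch_randomize_brk(unsigned long brk)
{
	return brk + 0x1000;		/* stand-in for randomize_page() */
}

int main(void)
{
	printf("%#lx\n", arch_randomize_brk(0x100000));
	return 0;
}

/*
 * arch_demo.c -- an "architecture" override, i.e. a strong definition:
 *
 *	unsigned long arch_randomize_brk(unsigned long brk)
 *	{
 *		return brk + 0x2000;	// arch-specific policy
 *	}
 *
 * Linking util_demo.c alone prints 0x101000; linking both files makes the
 * strong symbol win and prints 0x102000, with no Kconfig symbol involved.
 */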
Re: Build regressions/improvements in v5.16-rc2
On Mon, Nov 22, 2021 at 12:28 PM Geert Uytterhoeven wrote: > Below is the list of build error/warning regressions/improvements in > v5.16-rc2[1] compared to v5.15[2]. > > Summarized: > - build errors: +13/-12 > - build warnings: +3/-26 > > JFYI, when comparing v5.16-rc2[1] to v5.16-rc1[3], the summaries are: > - build errors: +6/-12 + /kisskb/src/drivers/mtd/nand/raw/mpc5121_nfc.c: error: unused variable 'mtd' [-Werror=unused-variable]: => 294:19 ppc32_allmodconfig (patch sent) + /kisskb/src/drivers/video/fbdev/nvidia/nvidia.c: error: passing argument 1 of 'iounmap' discards 'volatile' qualifier from pointer target type [-Werror=discarded-qualifiers]: => 1439:10, 1414:10 + /kisskb/src/drivers/video/fbdev/riva/fbdev.c: error: passing argument 1 of 'iounmap' discards 'volatile' qualifier from pointer target type [-Werror=discarded-qualifiers]: => 2095:11, 2062:11 um-all{mod,yes}config + /kisskb/src/fs/netfs/read_helper.c: error: implicit declaration of function 'flush_dcache_folio' [-Werror=implicit-function-declaration]: => 435:4 sparc-allmodconfig sparc64-allmodconfig + /kisskb/src/fs/ntfs/aops.c: error: the frame size of 2192 bytes is larger than 2048 bytes [-Werror=frame-larger-than=]: => 1311:1 ppc64le_allmodconfig + /kisskb/src/fs/ntfs/aops.c: error: the frame size of 2256 bytes is larger than 2048 bytes [-Werror=frame-larger-than=]: => 1311:1 powerpc-allyesconfig > Thanks to the linux-next team for providing the build service. > > [1] > http://kisskb.ellerman.id.au/kisskb/branch/linus/head/136057256686de39cc3a07c2e39ef6bc43003ff6/ > (all 90 configs) > [3] > http://kisskb.ellerman.id.au/kisskb/branch/linus/head/fa55b7dcdc43c1aa1ba12bca9d2dd4318c2a0dbf/ > (all 90 configs) Gr{oetje,eeting}s, Geert -- Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- ge...@linux-m68k.org In personal conversations with technical people, I call myself a hacker. But when I'm talking to journalists I just say "programmer" or something like that. -- Linus Torvalds
[powerpc:fixes-test] BUILD SUCCESS f01ad0b9f2dcfd006e1fa77104fdf989980ff20f
tree/branch: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git fixes-test branch HEAD: f01ad0b9f2dcfd006e1fa77104fdf989980ff20f powerpc/32: Fix hardlockup on vmap stack overflow elapsed time: 801m configs tested: 168 configs skipped: 125 The following configs have been built successfully. More configs may be tested in the coming days. gcc tested configs: arm defconfig arm64allyesconfig arm64 defconfig arm allyesconfig arm allmodconfig i386 randconfig-c001-20211122 mips randconfig-c004-20211122 parisc alldefconfig m68k m5208evb_defconfig armpleb_defconfig xtensa nommu_kc705_defconfig powerpc mpc885_ads_defconfig sh se7705_defconfig mips tb0219_defconfig mips bmips_be_defconfig powerpc mpc83xx_defconfig sh shx3_defconfig mips cavium_octeon_defconfig sh se7750_defconfig mipsmalta_qemu_32r6_defconfig ia64generic_defconfig mips ci20_defconfig powerpc mpc5200_defconfig powerpc bluestone_defconfig arm at91_dt_defconfig arm stm32_defconfig pariscgeneric-32bit_defconfig powerpc mpc8540_ads_defconfig archsdk_defconfig m68k amcore_defconfig alpha defconfig armneponset_defconfig shhp6xx_defconfig powerpc arches_defconfig arm tegra_defconfig sh sh7710voipgw_defconfig shsh7785lcr_defconfig sh sh03_defconfig shdreamcast_defconfig openrisc simple_smp_defconfig armtrizeps4_defconfig sh se7206_defconfig sh sh7724_generic_defconfig powerpc mpc8313_rdb_defconfig m68km5272c3_defconfig xtensa virt_defconfig powerpc mpc834x_itxgp_defconfig ia64 alldefconfig arm lpd270_defconfig arm orion5x_defconfig powerpc pasemi_defconfig powerpc ep8248e_defconfig m68k bvme6000_defconfig arm am200epdkit_defconfig armspear6xx_defconfig riscv allnoconfig nios2 defconfig mips ip28_defconfig arm rpc_defconfig mipse55_defconfig arc defconfig m68k m5475evb_defconfig powerpc mgcoge_defconfig powerpc tqm8548_defconfig sh rts7751r2dplus_defconfig i386 allyesconfig powerpcge_imp3a_defconfig sh ap325rxa_defconfig openrisc or1klitex_defconfig armxcep_defconfig x86_64 defconfig sh defconfig mips cu1830-neo_defconfig m68k m5275evb_defconfig arm vf610m4_defconfig shedosk7705_defconfig mips rt305x_defconfig arm imx_v4_v5_defconfig arm corgi_defconfig powerpc pq2fads_defconfig mipsar7_defconfig arc axs103_defconfig sh rsk7264_defconfig mips xway_defconfig arm nhk8815_defconfig powerpc mpc836x_rdk_defconfig m68kmvme16x_defconfig powerpcmpc7448_hpc2_defconfig xtensa alldefconfig sh urquell_defconfig arm tct_hammer_defconfig sh apsh4a3a_defconfig powerpc asp8347_defconfig powerpc makalu_defconfig arm randconfig-c002-20211122 ia64 allmodconfig ia64defconfig ia64 allyesconfig m68k allmodconfig m68kdefconfig m68k
[powerpc:next-test] BUILD SUCCESS b92d1aabe9aace6ffd3399cae2ba52b6a927f7d7
tree/branch: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next-test branch HEAD: b92d1aabe9aace6ffd3399cae2ba52b6a927f7d7 powerpc/watchdog: read TB close to where it is used elapsed time: 795m configs tested: 203 configs skipped: 16 The following configs have been built successfully. More configs may be tested in the coming days. gcc tested configs: arm defconfig arm64allyesconfig arm64 defconfig arm allyesconfig arm allmodconfig i386 randconfig-c001-20211122 mips randconfig-c004-20211122 nds32alldefconfig powerpc microwatt_defconfig mipsvocore2_defconfig armmulti_v7_defconfig parisc alldefconfig m68k m5208evb_defconfig armpleb_defconfig xtensa nommu_kc705_defconfig h8300 edosk2674_defconfig powerpc mpc885_ads_defconfig sh se7705_defconfig mips tb0219_defconfig mips bmips_be_defconfig powerpc mpc83xx_defconfig sh shx3_defconfig sh se7722_defconfig arm iop32x_defconfig powerpc ppc40x_defconfig mips cavium_octeon_defconfig sh se7750_defconfig mipsmalta_qemu_32r6_defconfig ia64generic_defconfig mips ci20_defconfig powerpc mpc5200_defconfig powerpc bluestone_defconfig arm at91_dt_defconfig arm stm32_defconfig pariscgeneric-32bit_defconfig arm axm55xx_defconfig i386 alldefconfig armmmp2_defconfig arm u8500_defconfig arc haps_hs_defconfig sh sdk7786_defconfig powerpc ppc44x_defconfig m68k multi_defconfig powerpc mpc8540_ads_defconfig archsdk_defconfig m68k amcore_defconfig alpha defconfig armneponset_defconfig shhp6xx_defconfig powerpc arches_defconfig arm tegra_defconfig sh sh7710voipgw_defconfig shsh7785lcr_defconfig sh sh03_defconfig shdreamcast_defconfig openrisc simple_smp_defconfig armtrizeps4_defconfig sh se7206_defconfig sh sh7724_generic_defconfig powerpc mpc8313_rdb_defconfig m68km5272c3_defconfig xtensa virt_defconfig powerpc mpc834x_itxgp_defconfig ia64 alldefconfig arm lpd270_defconfig arm orion5x_defconfig m68kdefconfig powerpc pasemi_defconfig powerpc ep8248e_defconfig m68k bvme6000_defconfig arm am200epdkit_defconfig armspear6xx_defconfig riscv allnoconfig nios2 3c120_defconfig mips loongson1c_defconfig shapsh4ad0a_defconfig arm versatile_defconfig mipse55_defconfig armoxnas_v6_defconfig nios2 defconfig mips ip28_defconfig arm rpc_defconfig arc defconfig m68k m5475evb_defconfig powerpc mgcoge_defconfig powerpc tqm8548_defconfig sh rts7751r2dplus_defconfig armzeus_defconfig mips tb0287_defconfig armdove_defconfig xtensa defconfig powerpc mpc8315_rdb_defconfig i386 allyesconfig powerpcge_imp3a_defconfig sh ap325rxa_defconfig openrisc or1klitex_defconfig armxcep_defconfig x86_64 defconfig sh defconfig mips cu1830-neo_defconfig m68k
Re: [RFC patch 2/5] ASoC: tlv320aic31xx: Add support for pll_r coefficient
On Fri, Nov 19, 2021 at 12:32:45PM -0300, Ariel D'Alessandro wrote: > When the clock used by the codec is BCLK, the operation parameters need > to be calculated from input sample rate and format. Low frequency rates > required different r multipliers, in order to achieve a higher PLL > output frequency. > > Signed-off-by: Michael Trimarchi > Signed-off-by: Ariel D'Alessandro Did Michael write this code (in which case there should be a From: from him) or did he work on the code with you? The signoffs are a little confusing. signature.asc Description: PGP signature
Re: [RFC patch 2/5] ASoC: tlv320aic31xx: Add support for pll_r coefficient
Hi Mark On Mon, Nov 22, 2021 at 3:22 PM Mark Brown wrote: > > On Fri, Nov 19, 2021 at 12:32:45PM -0300, Ariel D'Alessandro wrote: > > When the clock used by the codec is BCLK, the operation parameters need > > to be calculated from input sample rate and format. Low frequency rates > > required different r multipliers, in order to achieve a higher PLL > > output frequency. > > > > Signed-off-by: Michael Trimarchi > > Signed-off-by: Ariel D'Alessandro > > Did Michael write this code (in which case there should be a From: from > him) or did he work on the code with you? The signoffs are a little > confusing. It's fine. We are working together Michael
Re: [RFC patch 2/5] ASoC: tlv320aic31xx: Add support for pll_r coefficient
On Mon, Nov 22, 2021 at 03:24:42PM +0100, Michael Nazzareno Trimarchi wrote:
> On Mon, Nov 22, 2021 at 3:22 PM Mark Brown wrote:
> > On Fri, Nov 19, 2021 at 12:32:45PM -0300, Ariel D'Alessandro wrote:
> > > When the clock used by the codec is BCLK, the operation parameters need
> > > to be calculated from input sample rate and format. Low frequency rates
> > > required different r multipliers, in order to achieve a higher PLL
> > > output frequency.

> > > Signed-off-by: Michael Trimarchi
> > > Signed-off-by: Ariel D'Alessandro

> > Did Michael write this code (in which case there should be a From: from
> > him) or did he work on the code with you? The signoffs are a little
> > confusing.

> It's fine. We are working together

In such situations it's best to include a Co-developed-by tag so that the
nature of the collaboration is clear.
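For reference, the tag block described in Documentation/process/submitting-patches.rst pairs each Co-developed-by: with the co-author's own Signed-off-by:, with the submitter's sign-off last; the addresses below are placeholders, not real ones:

    Co-developed-by: Michael Trimarchi <michael@example.com>
    Signed-off-by: Michael Trimarchi <michael@example.com>
    Signed-off-by: Ariel D'Alessandro <ariel@example.com>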
Re: [PATCH 1/8] powerpc/mm: Make slice specific to book3s/64
Hi Christophe, I love your patch! Perhaps something to improve: [auto build test WARNING on powerpc/next] [also build test WARNING on hnaz-mm/master linus/master v5.16-rc2 next-2028] [If your patch is applied to the wrong git tree, kindly drop us a note. And when submitting patch, we suggest to use '--base' as documented in https://git-scm.com/docs/git-format-patch] url: https://github.com/0day-ci/linux/commits/Christophe-Leroy/Convert-powerpc-to-default-topdown-mmap-layout/20211122-165115 base: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next config: powerpc64-randconfig-s031-20211122 (attached as .config) compiler: powerpc64-linux-gcc (GCC) 11.2.0 reproduce: wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross chmod +x ~/bin/make.cross # apt-get install sparse # sparse version: v0.6.4-dirty # https://github.com/0day-ci/linux/commit/1d0b7cc86d08f25f595b52d8c39ba9ca1d29a30a git remote add linux-review https://github.com/0day-ci/linux git fetch --no-tags linux-review Christophe-Leroy/Convert-powerpc-to-default-topdown-mmap-layout/20211122-165115 git checkout 1d0b7cc86d08f25f595b52d8c39ba9ca1d29a30a # save the attached .config to linux build tree COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' ARCH=powerpc64 If you fix the issue, kindly add following tag as appropriate Reported-by: kernel test robot All warnings (new ones prefixed by >>): arch/powerpc/mm/book3s64/slice.c: In function 'slice_get_unmapped_area': >> arch/powerpc/mm/book3s64/slice.c:639:1: warning: the frame size of 1040 >> bytes is larger than 1024 bytes [-Wframe-larger-than=] 639 | } | ^ vim +639 arch/powerpc/mm/book3s64/slice.c 3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 428 d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 429 unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len, d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 430 unsigned long flags, unsigned int psize, 34d07177b802e9 arch/powerpc/mm/slice.c Michel Lespinasse 2013-04-29 431 int topdown) d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 432 { d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 433 struct slice_mask good_mask; f3207c124e7aa8 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2017-03-22 434 struct slice_mask potential_mask; d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin2018-03-07 435 const struct slice_mask *maskp; d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin2018-03-07 436 const struct slice_mask *compat_maskp = NULL; d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 437 int fixed = (flags & MAP_FIXED); d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 438 int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT); 6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin2017-11-10 439 unsigned long page_size = 1UL << pshift; d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 440 struct mm_struct *mm = current->mm; 3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 441 unsigned long newaddr; f4ea6dcb08ea2c arch/powerpc/mm/slice.c Aneesh Kumar K.V 2017-03-30 442 unsigned long high_limit; d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 443 6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin2017-11-10 444 high_limit = DEFAULT_MAP_WINDOW; 35602f82d0c765 
arch/powerpc/mm/slice.c Nicholas Piggin2017-11-10 445 if (addr >= high_limit || (fixed && (addr + len > high_limit))) 6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin2017-11-10 446 high_limit = TASK_SIZE; 6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin2017-11-10 447 6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin2017-11-10 448 if (len > high_limit) 6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin2017-11-10 449 return -ENOMEM; 6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin2017-11-10 450 if (len & (page_size - 1)) 6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin2017-11-10 451 return -EINVAL; 6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin2017-11-10 452 if (fixed) { 6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin2017-11-10 453 if (addr & (page_size - 1)) 6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin2017-11-10 454 return
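The warning itself comes from gcc's -Wframe-larger-than= check: it fires when a function's local variables (here the struct slice_mask copies visible in the excerpt above) push the stack frame past the configured limit, 1024 bytes in this config. A self-contained reproduction with arbitrary sizes:

/* frame_demo.c -- try:  gcc -c -Wframe-larger-than=1024 frame_demo.c */
struct big {
	unsigned long words[200];	/* 1600 bytes on a 64-bit target */
};

unsigned long sum_big(void)
{
	struct big b = { { 1 } };	/* lives in this function's frame */
	unsigned long s = 0;

	for (unsigned int i = 0; i < 200; i++)
		s += b.words[i];
	return s;			/* frame > 1024 bytes -> warning */
}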
[powerpc:merge] BUILD SUCCESS 95c6ab13ec7e63e5e8628e237082431779d270f3
tree/branch: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git merge branch HEAD: 95c6ab13ec7e63e5e8628e237082431779d270f3 Automatic merge of 'master' into merge (2021-11-22 10:52) elapsed time: 921m configs tested: 137 configs skipped: 3 The following configs have been built successfully. More configs may be tested in the coming days. gcc tested configs: arm64 defconfig arm64allyesconfig arm allmodconfig arm allyesconfig arm defconfig i386 randconfig-c001-20211122 mips randconfig-c004-20211122 m68k m5208evb_defconfig armpleb_defconfig xtensa nommu_kc705_defconfig parisc alldefconfig armqcom_defconfig arc axs103_smp_defconfig m68k sun3x_defconfig mips ci20_defconfig openrisc alldefconfig sh kfr2r09_defconfig mips mtx1_defconfig ia64 tiger_defconfig arm iop32x_defconfig sh magicpanelr2_defconfig powerpc mpc8540_ads_defconfig archsdk_defconfig m68k amcore_defconfig alpha defconfig armneponset_defconfig shhp6xx_defconfig arm at91_dt_defconfig mips decstation_r4k_defconfig sh rts7751r2d1_defconfig powerpc motionpro_defconfig sh microdev_defconfig powerpc microwatt_defconfig arm collie_defconfig sh se7721_defconfig powerpc tqm5200_defconfig powerpc pasemi_defconfig powerpc ep8248e_defconfig m68k bvme6000_defconfig arm am200epdkit_defconfig nios2 defconfig mips ip28_defconfig arm rpc_defconfig mipse55_defconfig i386 allyesconfig powerpcge_imp3a_defconfig sh ap325rxa_defconfig openrisc or1klitex_defconfig armxcep_defconfig x86_64 defconfig arm pxa255-idp_defconfig arm mv78xx0_defconfig powerpc tqm8540_defconfig microblaze defconfig m68k apollo_defconfig arm aspeed_g5_defconfig shmigor_defconfig microblaze mmu_defconfig mipsmalta_qemu_32r6_defconfig arm randconfig-c002-20211122 ia64 allmodconfig ia64defconfig ia64 allyesconfig m68kdefconfig m68k allyesconfig m68k allmodconfig arc allyesconfig nds32 allnoconfig cskydefconfig alphaallyesconfig nds32 defconfig nios2allyesconfig arc defconfig sh allmodconfig h8300allyesconfig xtensa allyesconfig parisc defconfig s390 allyesconfig s390 allmodconfig parisc allyesconfig s390defconfig i386defconfig i386 debian-10.3 sparcallyesconfig sparc defconfig mips allmodconfig mips allyesconfig powerpc allnoconfig powerpc allmodconfig powerpc allyesconfig x86_64 randconfig-a015-20211122 x86_64 randconfig-a016-20211122 i386 randconfig-a015-20211122 i386 randconfig-a012-20211122 i386 randconfig-a013-20211122 i386 randconfig-a014-20211122 i386 randconfig-a011-20211122 i386 randconfig-a016-20211122 x86_64 randconfig-a001-20211121 x86_64 randconfig-a003-20211
Re: [PATCH v2 2/5] preempt/dynamic: Introduce preempt mode accessors
On 16/11/21 14:29, Christophe Leroy wrote: > Le 10/11/2021 à 21:24, Valentin Schneider a écrit : >> CONFIG_PREEMPT{_NONE, _VOLUNTARY} designate either: >> o The build-time preemption model when !PREEMPT_DYNAMIC >> o The default boot-time preemption model when PREEMPT_DYNAMIC >> >> IOW, using those on PREEMPT_DYNAMIC kernels is meaningless - the actual >> model could have been set to something else by the "preempt=foo" cmdline >> parameter. >> >> Introduce a set of helpers to determine the actual preemption mode used by >> the live kernel. >> >> Suggested-by: Marco Elver >> Signed-off-by: Valentin Schneider >> --- >> include/linux/sched.h | 16 >> kernel/sched/core.c | 11 +++ >> 2 files changed, 27 insertions(+) >> >> diff --git a/include/linux/sched.h b/include/linux/sched.h >> index 5f8db54226af..0640d5622496 100644 >> --- a/include/linux/sched.h >> +++ b/include/linux/sched.h >> @@ -2073,6 +2073,22 @@ static inline void cond_resched_rcu(void) >> #endif >> } >> >> +#ifdef CONFIG_PREEMPT_DYNAMIC >> + >> +extern bool is_preempt_none(void); >> +extern bool is_preempt_voluntary(void); >> +extern bool is_preempt_full(void); > > Those are trivial tests supposed to be used in fast pathes. They should > be static inlines in order to minimise the overhead. > >> + >> +#else >> + >> +#define is_preempt_none() IS_ENABLED(CONFIG_PREEMPT_NONE) >> +#define is_preempt_voluntary() IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY) >> +#define is_preempt_full() IS_ENABLED(CONFIG_PREEMPT) > > Would be better to use static inlines here as well instead of macros. > I realize I stripped all ppc folks from the cclist after dropping the ppc snippet, but you guys might still be interested - my bad. That's done in v3: https://lore.kernel.org/lkml/2022185203.280040-1-valentin.schnei...@arm.com/ >> + >> +#endif >> + >> +#define is_preempt_rt() IS_ENABLED(CONFIG_PREEMPT_RT) >> + >> /* >>* Does a critical section need to be broken due to another >>* task waiting?: (technically does not depend on CONFIG_PREEMPTION, >> diff --git a/kernel/sched/core.c b/kernel/sched/core.c >> index 97047aa7b6c2..9db7f77e53c3 100644 >> --- a/kernel/sched/core.c >> +++ b/kernel/sched/core.c >> @@ -6638,6 +6638,17 @@ static void __init preempt_dynamic_init(void) >> } >> } >> >> +#define PREEMPT_MODE_ACCESSOR(mode) \ >> +bool is_preempt_##mode(void) >> \ >> +{ >> \ >> +WARN_ON_ONCE(preempt_dynamic_mode == >> preempt_dynamic_undefined); \ > > Not sure using WARN_ON is a good idea here, as it may be called very > early, see comment on powerpc patch. Bah, I was gonna say that you *don't* want users of is_preempt_*() to be called before the "final" preemption model is set up (such users would need to make use of static_calls), but I realize there's a debug interface to flip the preemption model at will... Say an initcall sees is_preempt_voluntary() and sets things up accordingly, and then the debug knob switches to preempt_full. I don't think there's much we can really do here though :/ > >> +return preempt_dynamic_mode == preempt_dynamic_##mode; >> \ >> +} > > I'm not sure that's worth a macro. You only have 3 accessors, 2 lines of > code each. Just define all 3 in plain text. > > CONFIG_PREEMPT_DYNAMIC is based on using strategies like static_calls in > order to minimise the overhead. For those accessors you should use the > same kind of approach and use things like jump_labels in order to not > redo the test at each time and minimise overhead as much as possible. 
> That's a valid point, though the few paths that need patching up and don't make use of static calls already (AFAICT the ppc irq path I was touching in v2 needs to make use of irqentry_exit_cond_resched()) really seem like slow-paths. >> + >> +PREEMPT_MODE_ACCESSOR(none) >> +PREEMPT_MODE_ACCESSOR(voluntary) >> +PREEMPT_MODE_ACCESSOR(full) >> + >> #else /* !CONFIG_PREEMPT_DYNAMIC */ >> >> static inline void preempt_dynamic_init(void) { } >>
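For illustration, the !PREEMPT_DYNAMIC fallbacks Christophe is asking for could be written as plain static inlines around IS_ENABLED() -- a minimal sketch of that suggestion (meant for the same #else branch of linux/sched.h), not the code that eventually landed in v3:

static inline bool is_preempt_none(void)
{
	return IS_ENABLED(CONFIG_PREEMPT_NONE);
}

static inline bool is_preempt_voluntary(void)
{
	return IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY);
}

static inline bool is_preempt_full(void)
{
	return IS_ENABLED(CONFIG_PREEMPT);
}

Written this way they still fold to compile-time constants, but with type checking and without macro-expansion surprises.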
Re: [PATCH v2 3/5] powerpc: Use preemption model accessors
On 16/11/21 14:41, Christophe Leroy wrote: > Le 10/11/2021 à 21:24, Valentin Schneider a écrit : >> Per PREEMPT_DYNAMIC, checking CONFIG_PREEMPT doesn't tell you the actual >> preemption model of the live kernel. Use the newly-introduced accessors >> instead. > > Is that change worth it for now ? As far as I can see powerpc doesn't > have DYNAMIC PREEMPT, a lot of work needs to be done before being able > to use it: > - Implement GENERIC_ENTRY > - Implement STATIC_CALLS (already done on PPC32, to be done on PPC64) > You're right, I ditched this patch for v3 - AFAICT the change wasn't even valid as the preempt_schedule_irq() call needs to be replaced with irqentry_exit_cond_resched() (IOW this needs to make use of the generic entry code). >> >> sched_init() -> preempt_dynamic_init() happens way before IRQs are set up, >> so this should be fine. > > It looks like you are mixing up interrupts and IRQs (also known as > "external interrupts"). > > ISI (Instruction Storage Interrupt) and DSI (Data Storage Interrupt) for > instance are also interrupts. They happen everytime there is a page > fault so may happen pretty early. > > Traps generated by WARN_ON() are also interrupts that may happen at any > time. > Michael pointed this out and indeed triggering a WARN_ON() there is not super smart. Thanks for teaching me a bit of what I'm putting my grubby hands in :)
Re: [PATCH v1] KVM: PPC: Book3S HV: Prevent POWER7/8 TLB flush flushing SLB
Nicholas Piggin writes: > The POWER9 ERAT flush instruction is a SLBIA with IH=7, which is a > reserved value on POWER7/8. On POWER8 this invalidates the SLB entries > above index 0, similarly to SLBIA IH=0. > > If the SLB entries are invalidated, and then the guest is bypassed, the > host SLB does not get re-loaded, so the bolted entries above 0 will be > lost. This can result in kernel stack access causing a SLB fault. > > Kernel stack access causing a SLB fault was responsible for the infamous > mega bug (search "Fix SLB reload bug"). Although since commit > 48e7b7695745 ("powerpc/64s/hash: Convert SLB miss handlers to C") that > starts using the kernel stack in the SLB miss handler, it might only > result in an infinite loop of SLB faults. In any case it's a bug. > > Fix this by only executing the instruction on >= POWER9 where IH=7 is > defined not to invalidate the SLB. POWER7/8 don't require this ERAT > flush. > > Fixes: 5008711259201 ("KVM: PPC: Book3S HV: Invalidate ERAT when flushing > guest TLB entries") > Signed-off-by: Nicholas Piggin Reviewed-by: Fabiano Rosas > --- > arch/powerpc/kvm/book3s_hv_builtin.c | 5 - > 1 file changed, 4 insertions(+), 1 deletion(-) > > diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c > b/arch/powerpc/kvm/book3s_hv_builtin.c > index fcf4760a3a0e..70b7a8f97153 100644 > --- a/arch/powerpc/kvm/book3s_hv_builtin.c > +++ b/arch/powerpc/kvm/book3s_hv_builtin.c > @@ -695,6 +695,7 @@ static void flush_guest_tlb(struct kvm *kvm) > "r" (0) : "memory"); > } > asm volatile("ptesync": : :"memory"); > + // POWER9 congruence-class TLBIEL leaves ERAT. Flush it now. > asm volatile(PPC_RADIX_INVALIDATE_ERAT_GUEST : : :"memory"); > } else { > for (set = 0; set < kvm->arch.tlb_sets; ++set) { > @@ -705,7 +706,9 @@ static void flush_guest_tlb(struct kvm *kvm) > rb += PPC_BIT(51); /* increment set number */ > } > asm volatile("ptesync": : :"memory"); > - asm volatile(PPC_ISA_3_0_INVALIDATE_ERAT : : :"memory"); > + // POWER9 congruence-class TLBIEL leaves ERAT. Flush it now. > + if (cpu_has_feature(CPU_FTR_ARCH_300)) > + asm volatile(PPC_ISA_3_0_INVALIDATE_ERAT : : :"memory"); > } > }
Re: [PATCH] powerpc/signal32: Use struct_group() to zero spe regs
On Mon, Nov 22, 2021 at 04:43:36PM +1100, Michael Ellerman wrote: > LEROY Christophe writes: > > Le 18/11/2021 à 21:36, Kees Cook a écrit : > >> In preparation for FORTIFY_SOURCE performing compile-time and run-time > >> field bounds checking for memset(), avoid intentionally writing across > >> neighboring fields. > >> > >> Add a struct_group() for the spe registers so that memset() can correctly > >> reason > >> about the size: > >> > >> In function 'fortify_memset_chk', > >> inlined from 'restore_user_regs.part.0' at > >> arch/powerpc/kernel/signal_32.c:539:3: > >> >> include/linux/fortify-string.h:195:4: error: call to > >> '__write_overflow_field' declared with attribute warning: detected write > >> beyond size of field (1st parameter); maybe use struct_group()? > >> [-Werror=attribute-warning] > >> 195 |__write_overflow_field(); > >> |^~~~ > >> > >> Reported-by: kernel test robot > >> Signed-off-by: Kees Cook > > > > Reviewed-by: Christophe Leroy > > Acked-by: Michael Ellerman Thanks! Should I take this via my tree, or do you want to take it via ppc? -Kees -- Kees Cook
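For readers who have not met struct_group() yet, here is a minimal, self-contained sketch of the idea; the structure and field names below are illustrative only and do not claim to match the real powerpc SPE register layout in the patch:

#include <linux/stddef.h>	/* struct_group() */
#include <linux/string.h>

struct demo_regs {
	unsigned long gpr[32];
	struct_group(spe,		/* names the span the memset() is allowed to cover */
		unsigned long evr[32];
		unsigned long acc;
		unsigned long spefscr;
	);
};

static void demo_clear_spe(struct demo_regs *regs)
{
	/* the write is now bounded by the group, so fortified memset() stops warning */
	memset(&regs->spe, 0, sizeof(regs->spe));
}

The individual fields stay directly accessible (regs->acc and so on); the named group only gives FORTIFY_SOURCE a destination whose size spans the whole region being zeroed.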
Re: [PATCH 0/2] of: remove reserved regions count restriction
On Fri, Nov 19, 2021 at 03:58:17PM +0800, Calvin Zhang wrote: > The count of reserved regions in /reserved-memory was limited because > the struct reserved_mem array was defined statically. This series sorts > out reserved memory code and allocates that array from early allocator. > > Note: reserved region with fixed location must be reserved before any > memory allocation. While struct reserved_mem array should be allocated > after allocator is activated. We make early_init_fdt_scan_reserved_mem() > do reservation only and add another call to initialize reserved memory. > So arch code have to change for it. I think much simpler would be to use the same constant for sizing memblock.reserved and reserved_mem arrays. If there are too many reserved regions in the device tree, reserving them in memblock will fail anyway because memblock also starts with static array for memblock.reserved, so doing one pass with memblock_reserve() and another to set up reserved_mem wouldn't help anyway. > I'm only familiar with arm and arm64 architectures. Approvals from arch > maintainers are required. Thank you all. > > Calvin Zhang (2): > of: Sort reserved_mem related code > of: reserved_mem: Remove reserved regions count restriction > > arch/arc/mm/init.c | 3 + > arch/arm/kernel/setup.c| 2 + > arch/arm64/kernel/setup.c | 3 + > arch/csky/kernel/setup.c | 3 + > arch/h8300/kernel/setup.c | 2 + > arch/mips/kernel/setup.c | 3 + > arch/nds32/kernel/setup.c | 3 + > arch/nios2/kernel/setup.c | 2 + > arch/openrisc/kernel/setup.c | 3 + > arch/powerpc/kernel/setup-common.c | 3 + > arch/riscv/kernel/setup.c | 2 + > arch/sh/kernel/setup.c | 3 + > arch/xtensa/kernel/setup.c | 2 + > drivers/of/fdt.c | 107 +--- > drivers/of/of_private.h| 12 +- > drivers/of/of_reserved_mem.c | 189 - > include/linux/of_reserved_mem.h| 4 + > 17 files changed, 207 insertions(+), 139 deletions(-) > > -- > 2.30.2 > -- Sincerely yours, Mike.
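A rough sketch of the "same constant" alternative being suggested. Note the assumption: INIT_MEMBLOCK_RESERVED_REGIONS is currently private to mm/memblock.c, so it would have to move into a shared header for this to compile as written.

#include <linux/of_reserved_mem.h>

/* drivers/of/of_reserved_mem.c, illustrative only */
#define MAX_RESERVED_REGIONS	INIT_MEMBLOCK_RESERVED_REGIONS

static struct reserved_mem reserved_mem[MAX_RESERVED_REGIONS];
static int reserved_mem_count;

That way the /reserved-memory scan can never outgrow reserved_mem[] without memblock_reserve() having run into the same static limit first.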
Re: [PATCH 0/2] of: remove reserved regions count restriction
On Sun, Nov 21, 2021 at 08:43:47AM +0200, Mike Rapoport wrote: >On Fri, Nov 19, 2021 at 03:58:17PM +0800, Calvin Zhang wrote: >> The count of reserved regions in /reserved-memory was limited because >> the struct reserved_mem array was defined statically. This series sorts >> out reserved memory code and allocates that array from early allocator. >> >> Note: reserved region with fixed location must be reserved before any >> memory allocation. While struct reserved_mem array should be allocated >> after allocator is activated. We make early_init_fdt_scan_reserved_mem() >> do reservation only and add another call to initialize reserved memory. >> So arch code have to change for it. > >I think much simpler would be to use the same constant for sizing >memblock.reserved and reserved_mem arrays. > >If there is too much reserved regions in the device tree, reserving them in >memblock will fail anyway because memblock also starts with static array >for memblock.reserved, so doing one pass with memblock_reserve() and >another to set up reserved_mem wouldn't help anyway. Yes. This happens only if there are too many fixed reserved regions. memblock.reserved can be resized after paging. I also found another problem: initializing a dynamic reservation after paging would fail to mark it no-map, because the no-map flag only takes effect when the direct mapping is set up. This seems to be a circular dependency. Thank You, Calvin
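To make the ordering problem concrete, a small sketch of what a fixed no-map reservation involves; the helper name is made up, not something from the series, and the point is only that the NOMAP flag must be set before the arch builds its linear map, since that code typically skips MEMBLOCK_NOMAP ranges:

#include <linux/init.h>
#include <linux/memblock.h>

/* hypothetical helper, for illustration only */
static int __init reserve_nomap_region(phys_addr_t base, phys_addr_t size)
{
	int err = memblock_reserve(base, size);

	if (!err)
		err = memblock_mark_nomap(base, size);	/* must precede linear-map creation */

	return err;
}

If the reserved_mem[] array itself can only be allocated once the early allocator is up, any no-map region processed after that point has already missed this window -- which is the circular dependency Calvin describes.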
Re: [PATCH 1/8] powerpc/mm: Make slice specific to book3s/64
Hi Christophe, I love your patch! Yet something to improve: [auto build test ERROR on powerpc/next] [also build test ERROR on hnaz-mm/master linus/master v5.16-rc2 next-2028] [If your patch is applied to the wrong git tree, kindly drop us a note. And when submitting patch, we suggest to use '--base' as documented in https://git-scm.com/docs/git-format-patch] url: https://github.com/0day-ci/linux/commits/Christophe-Leroy/Convert-powerpc-to-default-topdown-mmap-layout/20211122-165115 base: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next config: powerpc64-randconfig-r021-20211122 (attached as .config) compiler: powerpc64-linux-gcc (GCC) 11.2.0 reproduce (this is a W=1 build): wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross chmod +x ~/bin/make.cross # https://github.com/0day-ci/linux/commit/1d0b7cc86d08f25f595b52d8c39ba9ca1d29a30a git remote add linux-review https://github.com/0day-ci/linux git fetch --no-tags linux-review Christophe-Leroy/Convert-powerpc-to-default-topdown-mmap-layout/20211122-165115 git checkout 1d0b7cc86d08f25f595b52d8c39ba9ca1d29a30a # save the attached .config to linux build tree COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross ARCH=powerpc If you fix the issue, kindly add following tag as appropriate Reported-by: kernel test robot All errors (new ones prefixed by >>): arch/powerpc/mm/book3s64/slice.c: In function 'slice_get_unmapped_area': >> arch/powerpc/mm/book3s64/slice.c:639:1: error: the frame size of 1056 bytes >> is larger than 1024 bytes [-Werror=frame-larger-than=] 639 | } | ^ cc1: all warnings being treated as errors vim +639 arch/powerpc/mm/book3s64/slice.c 3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 428 d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 429 unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len, d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 430 unsigned long flags, unsigned int psize, 34d07177b802e9 arch/powerpc/mm/slice.c Michel Lespinasse 2013-04-29 431 int topdown) d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 432 { d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 433 struct slice_mask good_mask; f3207c124e7aa8 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2017-03-22 434 struct slice_mask potential_mask; d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin2018-03-07 435 const struct slice_mask *maskp; d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin2018-03-07 436 const struct slice_mask *compat_maskp = NULL; d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 437 int fixed = (flags & MAP_FIXED); d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 438 int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT); 6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin2017-11-10 439 unsigned long page_size = 1UL << pshift; d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 440 struct mm_struct *mm = current->mm; 3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 441 unsigned long newaddr; f4ea6dcb08ea2c arch/powerpc/mm/slice.c Aneesh Kumar K.V 2017-03-30 442 unsigned long high_limit; d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 443 6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin2017-11-10 444 high_limit = DEFAULT_MAP_WINDOW; 35602f82d0c765 arch/powerpc/mm/slice.c Nicholas Piggin2017-11-10 445 if 
(addr >= high_limit || (fixed && (addr + len > high_limit))) 6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin2017-11-10 446 high_limit = TASK_SIZE; 6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin2017-11-10 447 6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin2017-11-10 448 if (len > high_limit) 6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin2017-11-10 449 return -ENOMEM; 6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin2017-11-10 450 if (len & (page_size - 1)) 6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin2017-11-10 451 return -EINVAL; 6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin2017-11-10 452 if (fixed) { 6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin2017-11-10 453 if (addr & (page_size - 1)) 6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin2017-11-10 454 return -EINVAL; 6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin
Re: [RFC patch 0/5] Support BCLK input clock in tlv320aic31xx
On Fri, 19 Nov 2021 12:32:43 -0300, Ariel D'Alessandro wrote: > The tlv320aic31xx codec allows using BCLK as the input clock for PLL, > deriving all the frequencies through a set of divisors. > > In this case, codec sysclk is determined by the hwparams sample > rate/format. So its frequency must be updated from the codec itself when > these are changed. > > [...] Applied to https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git for-next Thanks! [1/5] ASoC: tlv320aic31xx: Fix typo in BCLK clock name commit: 7016fd940adf2f4d86032339b546c6ecd737062f [2/5] ASoC: tlv320aic31xx: Add support for pll_r coefficient commit: 2664b24a8c51c21b24c2b37b7f10d6485c35b7c1 [3/5] ASoC: tlv320aic31xx: Add divs for bclk as clk_in commit: 6e6752a9c78738e27bde6da5cefa393b589276bb [4/5] ASoC: tlv320aic31xx: Handle BCLK set as PLL input configuration commit: c5d22d5e12e776fee4e346dc098fe51d00c2f983 [5/5] ASoC: fsl-asoc-card: Support fsl,imx-audio-tlv320aic31xx codec commit: 8c9b9cfb7724685ce705f511b882f30597596536 All being well this means that it will be integrated into the linux-next tree (usually sometime in the next 24 hours) and sent to Linus during the next merge window (or sooner if it is a bug fix), however if problems are discovered then the patch may be dropped or reverted. You may get further e-mails resulting from automated or manual testing and review of the tree, please engage with people reporting problems and send followup patches addressing any issues that are reported if needed. If any updates are required or you are submitting further changes they should be sent as incremental updates against current git, existing patches will not be replaced. Please add any relevant lists and maintainers to the CCs when replying to this mail. Thanks, Mark
Re: [PATCH 6/8] mm: Allow arch specific arch_randomize_brk() with CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
Hi Christophe, I love your patch! Yet something to improve: [auto build test ERROR on powerpc/next] [also build test ERROR on hnaz-mm/master linus/master v5.16-rc2 next-2028] [If your patch is applied to the wrong git tree, kindly drop us a note. And when submitting patch, we suggest to use '--base' as documented in https://git-scm.com/docs/git-format-patch] url: https://github.com/0day-ci/linux/commits/Christophe-Leroy/Convert-powerpc-to-default-topdown-mmap-layout/20211122-165115 base: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next config: arm-randconfig-r005-20211122 (attached as .config) compiler: arm-linux-gnueabi-gcc (GCC) 11.2.0 reproduce (this is a W=1 build): wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross chmod +x ~/bin/make.cross # https://github.com/0day-ci/linux/commit/e5949ff1a8e5cae8e9ac2ec3a39849bf2e73eb34 git remote add linux-review https://github.com/0day-ci/linux git fetch --no-tags linux-review Christophe-Leroy/Convert-powerpc-to-default-topdown-mmap-layout/20211122-165115 git checkout e5949ff1a8e5cae8e9ac2ec3a39849bf2e73eb34 # save the attached .config to linux build tree mkdir build_dir COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross O=build_dir ARCH=arm SHELL=/bin/bash If you fix the issue, kindly add following tag as appropriate Reported-by: kernel test robot All errors (new ones prefixed by >>): arm-linux-gnueabi-ld: fs/binfmt_elf.o: in function `load_elf_binary': >> binfmt_elf.c:(.text+0x16d8): undefined reference to `arch_randomize_brk' --- 0-DAY CI Kernel Test Service, Intel Corporation https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org .config.gz Description: application/gzip
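The link failure suggests this config ends up declaring arch_randomize_brk() without any definition being built in. As an illustration only, the sort of definition an architecture provides looks roughly like the sketch below; the 32MB randomisation range is an arbitrary assumption for the example, not the value any particular architecture uses:

#include <linux/mm.h>
#include <linux/sizes.h>

unsigned long arch_randomize_brk(struct mm_struct *mm)
{
	/* randomize_page() returns a page-aligned address in [brk, brk + 32M) */
	return randomize_page(mm->brk, SZ_32M);
}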
Re: [PATCH v2] powerpc/64s: introduce CONFIG_MAXSMP to test very large SMP
Michael Ellerman writes: > Christophe Leroy writes: >> Le 09/11/2021 à 07:51, Nicholas Piggin a écrit : > ... >>> diff --git a/arch/powerpc/platforms/Kconfig.cputype >>> b/arch/powerpc/platforms/Kconfig.cputype >>> index a208997ade88..14c275e0ff93 100644 >>> --- a/arch/powerpc/platforms/Kconfig.cputype >>> +++ b/arch/powerpc/platforms/Kconfig.cputype >>> @@ -475,9 +475,14 @@ config SMP >>> >>> If you don't know what to do here, say N. >>> >>> +# MAXSMP sets 8192 if COMPILE_TEST because that's what x86 has flushed out. >>> +# Exceeding that will cause a lot of compile errors. Have to deal with >>> those >>> +# first. >>> config NR_CPUS >>> - int "Maximum number of CPUs (2-8192)" if SMP >>> - range 2 8192 if SMP >>> + int "Maximum number of CPUs (2-8192)" if SMP && !MAXSMP >>> + range 2 16384 if SMP >>> + default 16384 if MAXSMP && !COMPILE_TEST >>> + default 8192 if MAXSMP && COMPILE_TEST >> >> You can do less complex. First hit becomes the default, so you can do: >> >> default 8192 if MAXSMP && COMPILE_TEST >> default 16384 if MAXSMP > > I did that when applying. But I'll have to drop it, it breaks the allyesconfig build: In file included from /home/michael/linux/arch/powerpc/include/asm/paravirt.h:15, from /home/michael/linux/arch/powerpc/include/asm/qspinlock.h:6, from /home/michael/linux/arch/powerpc/include/asm/spinlock.h:7, from /home/michael/linux/include/linux/spinlock.h:93, from /home/michael/linux/include/linux/mmzone.h:8, from /home/michael/linux/include/linux/gfp.h:6, from /home/michael/linux/include/linux/mm.h:10, from /home/michael/linux/arch/powerpc/platforms/powernv/idle.c:9: /home/michael/linux/arch/powerpc/include/asm/cputhreads.h: In function ‘cpu_thread_mask_to_cores.constprop’: /home/michael/linux/arch/powerpc/include/asm/cputhreads.h:61:1: error: the frame size of 2064 bytes is larger than 2048 bytes [-Werror=frame-larger-than=] 61 | } | ^ /home/michael/linux/arch/powerpc/platforms/powernv/idle.c: In function ‘store_fastsleep_workaround_applyonce’: /home/michael/linux/arch/powerpc/platforms/powernv/idle.c:220:1: error: the frame size of 2080 bytes is larger than 2048 bytes [-Werror=frame-larger-than=] 220 | } | ^ cc1: all warnings being treated as errors make[4]: *** [/home/michael/linux/scripts/Makefile.build:287: arch/powerpc/platforms/powernv/idle.o] Error 1 make[4]: *** Waiting for unfinished jobs make[3]: *** [/home/michael/linux/scripts/Makefile.build:549: arch/powerpc/platforms/powernv] Error 2 make[3]: *** Waiting for unfinished jobs /home/michael/linux/arch/powerpc/kvm/book3s_hv_interrupts.S: Assembler messages: /home/michael/linux/arch/powerpc/kvm/book3s_hv_interrupts.S:66: Error: operand out of range (0x00010440 is not between 0x8000 and 0x7ffc) make[3]: *** [/home/michael/linux/scripts/Makefile.build:388: arch/powerpc/kvm/book3s_hv_interrupts.o] Error 1 make[3]: *** Waiting for unfinished jobs make[2]: *** [/home/michael/linux/scripts/Makefile.build:549: arch/powerpc/platforms] Error 2 make[2]: *** Waiting for unfinished jobs make[2]: *** [/home/michael/linux/scripts/Makefile.build:549: arch/powerpc/kvm] Error 2 make[1]: *** [/home/michael/linux/Makefile:1846: arch/powerpc] Error 2 make[1]: *** Waiting for unfinished jobs make: *** [Makefile:219: __sub-make] Error 2 cheers
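Frame-size errors like the cputhreads.h one typically come from on-stack cpumask_t locals: a cpumask_t is NR_CPUS/8 bytes, i.e. 1KB at 8192 CPUs and 2KB at 16384, so one or two of them blow the 2048-byte frame limit. The usual pattern for keeping such masks off the stack (and what x86's MAXSMP leans on via CPUMASK_OFFSTACK) is cpumask_var_t. A generic sketch of the pattern, not a drop-in fix for cpu_thread_mask_to_cores():

#include <linux/cpumask.h>
#include <linux/slab.h>

/* illustrative helper only */
static int count_thread_cores(const struct cpumask *threads)
{
	cpumask_var_t tmp;
	int cores;

	if (!zalloc_cpumask_var(&tmp, GFP_KERNEL))
		return -ENOMEM;

	cpumask_copy(tmp, threads);
	/* ... per-core processing would walk tmp here instead of an on-stack cpumask_t ... */
	cores = cpumask_weight(tmp);

	free_cpumask_var(tmp);
	return cores;
}

Note that cpumask_var_t only moves the mask off the stack when CONFIG_CPUMASK_OFFSTACK is enabled; without it, the sketch above still places NR_CPUS bits in the frame.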
Re: [PATCH v2] powerpc/64s: introduce CONFIG_MAXSMP to test very large SMP
Excerpts from Michael Ellerman's message of November 23, 2021 11:01 am: > Michael Ellerman writes: >> Christophe Leroy writes: >>> Le 09/11/2021 à 07:51, Nicholas Piggin a écrit : >> ... diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype index a208997ade88..14c275e0ff93 100644 --- a/arch/powerpc/platforms/Kconfig.cputype +++ b/arch/powerpc/platforms/Kconfig.cputype @@ -475,9 +475,14 @@ config SMP If you don't know what to do here, say N. +# MAXSMP sets 8192 if COMPILE_TEST because that's what x86 has flushed out. +# Exceeding that will cause a lot of compile errors. Have to deal with those +# first. config NR_CPUS - int "Maximum number of CPUs (2-8192)" if SMP - range 2 8192 if SMP + int "Maximum number of CPUs (2-8192)" if SMP && !MAXSMP + range 2 16384 if SMP + default 16384 if MAXSMP && !COMPILE_TEST + default 8192 if MAXSMP && COMPILE_TEST >>> >>> You can do less complex. First hit becomes the default, so you can do: >>> >>> default 8192 if MAXSMP && COMPILE_TEST >>> default 16384 if MAXSMP >> >> I did that when applying. > > But I'll have to drop it, it breaks the allyesconfig build: Ah, you still need patch 1/2 sorry I confused things by only re-sending this one. https://patchwork.ozlabs.org/project/linuxppc-dev/patch/20211105035042.1398309-1-npig...@gmail.com/ Thanks, Nick > > In file included from > /home/michael/linux/arch/powerpc/include/asm/paravirt.h:15, >from > /home/michael/linux/arch/powerpc/include/asm/qspinlock.h:6, >from > /home/michael/linux/arch/powerpc/include/asm/spinlock.h:7, >from /home/michael/linux/include/linux/spinlock.h:93, >from /home/michael/linux/include/linux/mmzone.h:8, >from /home/michael/linux/include/linux/gfp.h:6, >from /home/michael/linux/include/linux/mm.h:10, >from > /home/michael/linux/arch/powerpc/platforms/powernv/idle.c:9: > /home/michael/linux/arch/powerpc/include/asm/cputhreads.h: In function > ‘cpu_thread_mask_to_cores.constprop’: > /home/michael/linux/arch/powerpc/include/asm/cputhreads.h:61:1: error: the > frame size of 2064 bytes is larger than 2048 bytes > [-Werror=frame-larger-than=] > 61 | } > | ^ > /home/michael/linux/arch/powerpc/platforms/powernv/idle.c: In function > ‘store_fastsleep_workaround_applyonce’: > /home/michael/linux/arch/powerpc/platforms/powernv/idle.c:220:1: error: the > frame size of 2080 bytes is larger than 2048 bytes > [-Werror=frame-larger-than=] > 220 | } > | ^ > cc1: all warnings being treated as errors > make[4]: *** [/home/michael/linux/scripts/Makefile.build:287: > arch/powerpc/platforms/powernv/idle.o] Error 1 > make[4]: *** Waiting for unfinished jobs > make[3]: *** [/home/michael/linux/scripts/Makefile.build:549: > arch/powerpc/platforms/powernv] Error 2 > make[3]: *** Waiting for unfinished jobs > /home/michael/linux/arch/powerpc/kvm/book3s_hv_interrupts.S: Assembler > messages: > /home/michael/linux/arch/powerpc/kvm/book3s_hv_interrupts.S:66: Error: > operand out of range (0x00010440 is not between 0x8000 > and 0x7ffc) > make[3]: *** [/home/michael/linux/scripts/Makefile.build:388: > arch/powerpc/kvm/book3s_hv_interrupts.o] Error 1 > make[3]: *** Waiting for unfinished jobs > make[2]: *** [/home/michael/linux/scripts/Makefile.build:549: > arch/powerpc/platforms] Error 2 > make[2]: *** Waiting for unfinished jobs > make[2]: *** [/home/michael/linux/scripts/Makefile.build:549: > arch/powerpc/kvm] Error 2 > make[1]: *** [/home/michael/linux/Makefile:1846: arch/powerpc] Error 2 > make[1]: *** Waiting for unfinished jobs > make: *** [Makefile:219: __sub-make] Error 2 
> > cheers >
Re: [PATCH v2] powerpc/64s: introduce CONFIG_MAXSMP to test very large SMP
Excerpts from Nicholas Piggin's message of November 23, 2021 3:14 pm: > Excerpts from Michael Ellerman's message of November 23, 2021 11:01 am: >> Michael Ellerman writes: >>> Christophe Leroy writes: Le 09/11/2021 à 07:51, Nicholas Piggin a écrit : >>> ... > diff --git a/arch/powerpc/platforms/Kconfig.cputype > b/arch/powerpc/platforms/Kconfig.cputype > index a208997ade88..14c275e0ff93 100644 > --- a/arch/powerpc/platforms/Kconfig.cputype > +++ b/arch/powerpc/platforms/Kconfig.cputype > @@ -475,9 +475,14 @@ config SMP > > If you don't know what to do here, say N. > > +# MAXSMP sets 8192 if COMPILE_TEST because that's what x86 has flushed > out. > +# Exceeding that will cause a lot of compile errors. Have to deal with > those > +# first. > config NR_CPUS > - int "Maximum number of CPUs (2-8192)" if SMP > - range 2 8192 if SMP > + int "Maximum number of CPUs (2-8192)" if SMP && !MAXSMP > + range 2 16384 if SMP > + default 16384 if MAXSMP && !COMPILE_TEST > + default 8192 if MAXSMP && COMPILE_TEST You can do less complex. First hit becomes the default, so you can do: default 8192 if MAXSMP && COMPILE_TEST default 16384 if MAXSMP >>> >>> I did that when applying. >> >> But I'll have to drop it, it breaks the allyesconfig build: > > Ah, you still need patch 1/2 sorry I confused things by only re-sending > this one. > > https://patchwork.ozlabs.org/project/linuxppc-dev/patch/20211105035042.1398309-1-npig...@gmail.com/ Actually KVM will also be broken, I sent a patch for it but there is some discussion of fixing it a different way. So maybe leave out the maxsmp patch for now (or make it depend on BROKEN?). I can re-send maybe next merge window if the other pieces are in place. If you could still take that ^^ patch for now would be good though. Thanks, Nick
Re: [PATCH V4 0/1] powerpc/perf: Clear pending PMI in ppmu callbacks
Excerpts from Athira Rajeev's message of November 20, 2021 12:36 am: > > >> On 21-Jul-2021, at 11:18 AM, Athira Rajeev >> wrote: >> >> Running perf fuzzer testsuite popped up below messages >> in the dmesg logs: >> >> "Can't find PMC that caused IRQ" >> >> This means a PMU exception happened, but none of the PMC's (Performance >> Monitor Counter) were found to be overflown. Perf interrupt handler checks >> the PMC's to see which PMC has overflown and if none of the PMCs are >> overflown ( counter value not >= 0x8000 ), it throws warning: >> "Can't find PMC that caused IRQ". >> >> Powerpc has capability to mask and replay a performance monitoring >> interrupt (PMI). In case of replayed PMI, there are some corner cases >> that clears the PMCs after masking. In such cases, the perf interrupt >> handler will not find the active PMC values that had caused the overflow >> and thus leading to this message. This patchset attempts to fix those >> corner cases. >> >> However there is one more case in PowerNV where these messages are >> emitted during system wide profiling or when a specific CPU is monitored >> for an event. That is, when a counter overflow just before entering idle >> and a PMI gets triggered after wakeup from idle. Since PMCs >> are not saved in the idle path, perf interrupt handler will not >> find overflown counter value and emits the "Can't find PMC" messages. >> This patch documents this race condition in powerpc core-book3s. >> >> Patch fixes the ppmu callbacks to disable pending interrupt before clearing >> the overflown PMC and documents the race condition in idle path. >> >> Changelog: >> changes from v3 -> v4 >> Addressed review comments from Nicholas Piggin >> - Added comment explaining the need to clear MMCR0 PMXE bit in >> pmu disable callback. >> - Added a check to display warning if there is a PMI pending >> bit set in Paca without any overflown PMC. >> - Removed the condition check before clearing pending PMI >> in 'clear_pmi_irq_pending' function. >> - Added reviewed by from Nicholas Piggin. >> >> Changes from v2 -> v3 >> Addressed review comments from Nicholas Piggin >> - Moved the clearing of PMI bit to power_pmu_disable. >> In previous versions, this was done in power_pmu_del, >> power_pmu_stop/enable callbacks before clearing of PMC's. >> - power_pmu_disable is called before any event gets deleted >> or stopped. If more than one event is running in the PMU, >> we may clear the PMI bit for an event which is not going >> to be deleted/stopped. Hence introduced check in >> power_pmu_enable to set back PMI to avoid dropping of valid >> samples in such cases. >> - Disable MMCR0 PMXE bit in pmu disable callback which otherwise >> could trigger PMI when PMU is getting disabled. >> Changes from v1 -> v2 >> Addressed review comments from Nicholas Piggin >> - Moved the PMI pending check and clearing function >> to arch/powerpc/include/asm/hw_irq.h and renamed >> function to "get_clear_pmi_irq_pending" >> - Along with checking for pending PMI bit in Paca, >> look for PMAO bit in MMCR0 register to decide on >> pending PMI interrupt. >> >> Athira Rajeev (1): >> powerpc/perf: Fix PMU callbacks to clear pending PMI before resetting >>an overflown PMC > > Hi, > > Please let me know if there are any review comments for this patch. > > Thanks > Athira It seems good to me. It already has my R-B. Would be good if we can get this one merged. Thanks, Nick
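For context, the helper the changelog keeps referring to is small. A loose sketch of the idea follows (assuming asm/paca.h and asm/hw_irq.h context); the real clear_pmi_irq_pending() is in the patch itself, in arch/powerpc/include/asm/hw_irq.h, and carries extra sanity checks:

static inline void clear_pmi_irq_pending(void)
{
	/*
	 * A PMI that fired while soft-masked is recorded in the PACA for
	 * later replay; drop that record once the overflown PMC has been
	 * cleared, so the replayed interrupt doesn't find nothing to do.
	 */
	get_paca()->irq_happened &= ~PACA_IRQ_PMI;
}

Per the v4 changelog, the version in the patch also warns if the pending bit is found set with no overflown PMC, as requested in Nick's review.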