On 04/11/2022 at 18:27, Andrew Donnellan wrote:
> Enable CONFIG_VMAP_STACK for book3s64.
> 
> To do this, we need to make some slight adjustments to set the stack SLB
> entry up for vmalloc rather than linear.
> 
> For now, only enable if KVM_BOOK3S_64_HV is disabled (there's some real mode
> handlers we need to fix there).
There is one missing point: with VMAP_STACK, a stack overflow will generate a
page fault. You have to handle it at interrupt entry, before going back to
virtual mode, otherwise it will fault forever. See how it is done in
arch/powerpc/kernel/head_32.h, in macro EXCEPTION_PROLOG_1.

> 
> Signed-off-by: Andrew Donnellan <a...@linux.ibm.com>
> ---
>  arch/powerpc/kernel/process.c          |  4 ++++
>  arch/powerpc/mm/book3s64/slb.c         | 11 +++++++++--
>  arch/powerpc/platforms/Kconfig.cputype |  1 +
>  3 files changed, 14 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
> index 07917726c629..cadf2db5a2a8 100644
> --- a/arch/powerpc/kernel/process.c
> +++ b/arch/powerpc/kernel/process.c
> @@ -1685,7 +1685,11 @@ static void setup_ksp_vsid(struct task_struct *p, unsigned long sp)
>  {
>  #ifdef CONFIG_PPC_64S_HASH_MMU
>  	unsigned long sp_vsid;
> +#ifdef CONFIG_VMAP_STACK
> +	unsigned long llp = mmu_psize_defs[mmu_vmalloc_psize].sllp;
> +#else /* CONFIG_VMAP_STACK */
>  	unsigned long llp = mmu_psize_defs[mmu_linear_psize].sllp;
> +#endif /* CONFIG_VMAP_STACK */

I think you could use IS_ENABLED() instead of an ifdef:

	unsigned long llp;

	if (IS_ENABLED(CONFIG_VMAP_STACK))
		llp = mmu_psize_defs[mmu_vmalloc_psize].sllp;
	else
		llp = mmu_psize_defs[mmu_linear_psize].sllp;

> 
>  	if (radix_enabled())
>  		return;
> diff --git a/arch/powerpc/mm/book3s64/slb.c b/arch/powerpc/mm/book3s64/slb.c
> index 6956f637a38c..0e21f0eaa7bb 100644
> --- a/arch/powerpc/mm/book3s64/slb.c
> +++ b/arch/powerpc/mm/book3s64/slb.c
> @@ -541,7 +541,7 @@ void slb_set_size(u16 size)
>  void slb_initialize(void)
>  {
>  	unsigned long linear_llp, vmalloc_llp, io_llp;
> -	unsigned long lflags;
> +	unsigned long lflags, kstack_flags;
>  	static int slb_encoding_inited;
>  #ifdef CONFIG_SPARSEMEM_VMEMMAP
>  	unsigned long vmemmap_llp;
> @@ -582,11 +582,18 @@ void slb_initialize(void)
>  	 * get_paca()->kstack hasn't been initialized yet.
>  	 * For secondary cpus, we need to bolt the kernel stack entry now.
>  	 */
> +
> +#ifdef CONFIG_VMAP_STACK
> +	kstack_flags = SLB_VSID_KERNEL | vmalloc_llp;
> +#else
> +	kstack_flags = SLB_VSID_KERNEL | linear_llp;
> +#endif

Same here, this should be:

	if (IS_ENABLED(CONFIG_VMAP_STACK))
		kstack_flags = SLB_VSID_KERNEL | vmalloc_llp;
	else
		kstack_flags = SLB_VSID_KERNEL | linear_llp;

>  	slb_shadow_clear(KSTACK_INDEX);
>  	if (raw_smp_processor_id() != boot_cpuid &&
>  	    (get_paca()->kstack & slb_esid_mask(mmu_kernel_ssize)) > PAGE_OFFSET)
>  		create_shadowed_slbe(get_paca()->kstack,
> -				     mmu_kernel_ssize, lflags, KSTACK_INDEX);
> +				     mmu_kernel_ssize, kstack_flags,
> +				     KSTACK_INDEX);
> 
>  	asm volatile("isync":::"memory");
>  }
> diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
> index 0c4eed9aea80..998317257797 100644
> --- a/arch/powerpc/platforms/Kconfig.cputype
> +++ b/arch/powerpc/platforms/Kconfig.cputype
> @@ -104,6 +104,7 @@ config PPC_BOOK3S_64
>  	select IRQ_WORK
>  	select PPC_64S_HASH_MMU if !PPC_RADIX_MMU
>  	select KASAN_VMALLOC if KASAN
> +	select HAVE_ARCH_VMAP_STACK if KVM_BOOK3S_64_HV = n

Is it different from

	select HAVE_ARCH_VMAP_STACK if !KVM_BOOK3S_64_HV

?

> 
>  config PPC_BOOK3E_64
>  	bool "Embedded processors"
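
For illustration, a rough, untested sketch of how the slb_initialize() hunk
might read with the IS_ENABLED() suggestion folded in; this just combines the
hunk above with the proposed change (plus a comment), so take it as a sketch,
not the final patch:

	/*
	 * With CONFIG_VMAP_STACK the kernel stack lives in vmalloc space, so
	 * the bolted stack SLB entry needs the vmalloc page size encodings
	 * rather than the linear mapping ones.
	 */
	if (IS_ENABLED(CONFIG_VMAP_STACK))
		kstack_flags = SLB_VSID_KERNEL | vmalloc_llp;
	else
		kstack_flags = SLB_VSID_KERNEL | linear_llp;

	slb_shadow_clear(KSTACK_INDEX);
	if (raw_smp_processor_id() != boot_cpuid &&
	    (get_paca()->kstack & slb_esid_mask(mmu_kernel_ssize)) > PAGE_OFFSET)
		create_shadowed_slbe(get_paca()->kstack,
				     mmu_kernel_ssize, kstack_flags,
				     KSTACK_INDEX);

A nice side effect of IS_ENABLED() over #ifdef is that both branches are
always seen by the compiler, so either configuration gets build coverage of
the other branch, while the dead branch is still optimised away because
IS_ENABLED() is a compile-time constant.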