On 2024-11-08 15:48:53 [+0530], Shrikanth Hegde wrote:
> Define the preempt lazy bit for Powerpc. Use bit 9, which is free and
> within the 16-bit range of NEED_RESCHED, so the compiler can issue a
> single andi.
>
> Since Powerpc doesn't use the generic entry/exit code, add the lazy check
> on exit to user. CONFIG_PREEMPTION is defined for lazy/full/rt, so use it
> for the return-to-kernel path.
>
> Ran a few benchmarks and a db workload on Power10. Performance is close
> to preempt=none/voluntary. It is possible that some patterns would differ
> under lazy [2]. More details on preempt lazy are in [1].
>
> Since Powerpc systems can have large core counts and large memory,
> preempt lazy is going to be helpful in avoiding soft lockup issues.
>
> [1]: https://lore.kernel.org/lkml/20241007074609.447006...@infradead.org/
> [2]: https://lore.kernel.org/all/1a973dda-c79e-4d95-935b-e4b93eb07...@linux.ibm.com/
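For readers following the series, here is a minimal sketch of the bit
definition and the exit-to-user check the commit message describes, assuming
the bit lands next to the other TIF_* bits in
arch/powerpc/include/asm/thread_info.h; names and placement are inferred from
the description above, not quoted from the patch:

	/*
	 * Sketch only: preempt lazy bit for powerpc. Bit 9 is free and sits
	 * in the low 16 bits together with TIF_NEED_RESCHED, so a combined
	 * flag test compiles to a single andi. instruction.
	 */
	#define TIF_NEED_RESCHED_LAZY	9	/* lazy rescheduling requested */
	#define _TIF_NEED_RESCHED_LAZY	(1 << TIF_NEED_RESCHED_LAZY)

	/*
	 * Sketch of the exit-to-user side: treat the lazy bit like
	 * NEED_RESCHED at the user/kernel boundary, so a lazy request is
	 * honoured at the latest on return to userspace.
	 */
	if (read_thread_flags() & (_TIF_NEED_RESCHED | _TIF_NEED_RESCHED_LAZY))
		schedule();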
The lazy bits are only in tip.

Reviewed-by: Sebastian Andrzej Siewior <bige...@linutronix.de>

> Signed-off-by: Shrikanth Hegde <sshe...@linux.ibm.com>
> ---
> diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
> index af62ec974b97..8f4acc55407b 100644
> --- a/arch/powerpc/kernel/interrupt.c
> +++ b/arch/powerpc/kernel/interrupt.c
> @@ -396,7 +396,7 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs)
>  	/* Returning to a kernel context with local irqs enabled. */
>  	WARN_ON_ONCE(!(regs->msr & MSR_EE));
>  again:
> -	if (IS_ENABLED(CONFIG_PREEMPT)) {
> +	if (IS_ENABLED(CONFIG_PREEMPTION)) {
>  		/* Return to preemptible kernel context */
>  		if (unlikely(read_thread_flags() & _TIF_NEED_RESCHED)) {
>  			if (preempt_count() == 0)

Shouldn't exit_vmx_usercopy() also get this s@CONFIG_PREEMPT@CONFIG_PREEMPTION@
change? (A rough sketch of the suggested substitution follows below.)

Sebastian
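A rough sketch of the substitution in question, assuming exit_vmx_usercopy()
in arch/powerpc/lib/vmx-helper.c guards its reschedule handling with an
IS_ENABLED(CONFIG_PREEMPT)-style check; the surrounding condition and body
are assumptions, only the symbol swap is the point:

	/*
	 * exit_vmx_usercopy(), arch/powerpc/lib/vmx-helper.c -- sketch only.
	 * CONFIG_PREEMPTION is selected for full, RT and lazy preemption,
	 * while CONFIG_PREEMPT only covers the full model, so the guard
	 * would otherwise be skipped on PREEMPT_LAZY builds.
	 */
	if (IS_ENABLED(CONFIG_PREEMPTION) && need_resched()) {
		/* existing reschedule handling, unchanged */
	}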