On 9/16/25 9:46 AM, Shrikanth Hegde wrote:
On 9/9/25 2:32 AM, Mukesh Kumar Chaurasiya wrote:
Enable generic entry/exit path for ppc irq.
Signed-off-by: Mukesh Kumar Chaurasiya <mchau...@linux.ibm.com>
---
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/entry-common.h | 93 ++---
arch/powerpc/include/asm/interrupt.h | 492 +++---------------------
arch/powerpc/kernel/interrupt.c | 9 +-
arch/powerpc/kernel/interrupt_64.S | 2 -
5 files changed, 92 insertions(+), 505 deletions(-)
diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
index f53d432f60870..7bb8a31b24ea7 100644
--- a/arch/powerpc/kernel/interrupt.c
+++ b/arch/powerpc/kernel/interrupt.c
@@ -297,13 +297,8 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs)
/* Returning to a kernel context with local irqs enabled. */
WARN_ON_ONCE(!(regs->msr & MSR_EE));
again:
- if (need_irq_preemption()) {
- /* Return to preemptible kernel context */
- if (unlikely(read_thread_flags() & _TIF_NEED_RESCHED)) {
- if (preempt_count() == 0)
- preempt_schedule_irq();
- }
- }
+ if (need_irq_preemption())
+ irqentry_exit_cond_resched();
irqentry_exit_cond_resched() is also called in irqentry_exit(). It would
be better if we could find a way to avoid calling it twice.
I see a loop here, but the comment says it does not enable irqs again, so
the loop is bounded. So it might be okay to remove the cond_resched here. Do
run the preemptirq and irq tracers to ensure that is the case.
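For reference, the kernel-return branch of the generic path already has this
hook. Roughly like below (paraphrased from memory of kernel/entry/common.c,
not verbatim; the RCU, lockdep and tracing details are elided):

noinstr void irqentry_exit(struct pt_regs *regs, irqentry_state_t state)
{
	/* Paraphrased sketch, not the actual kernel source. */
	if (user_mode(regs)) {
		irqentry_exit_to_user_mode(regs);
	} else if (!regs_irqs_disabled(regs)) {
		/* Returning to a kernel context with IRQs enabled. */
		if (IS_ENABLED(CONFIG_PREEMPTION))
			irqentry_exit_cond_resched();
	}
	/* The IRQs-disabled kernel return path is elided here. */
}

So if interrupt_exit_kernel_prepare() also calls irqentry_exit_cond_resched(),
the same interrupt return can end up rescheduling twice.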
Sure.
Also, what is this "soft_interrupts"?
You mean soft-masked interrupts?
It's a mechanism to buffer interrupts without clearing the MSR[EE] bit so
that we can replay those interrupts later.
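To make that concrete, here is a tiny userspace toy model of the
buffer-and-replay idea (purely illustrative; all names below are made up for
the example, the real mechanism is the soft-mask/replay state kept in the
paca and handled at exception entry/exit):

#include <stdbool.h>
#include <stdio.h>

static bool soft_masked;        /* models the per-CPU soft-mask state */
static unsigned int pending;    /* models the "interrupt happened" bits */

static void handle_irq(unsigned int irq)
{
	printf("handling irq %u\n", irq);
}

/* An interrupt that arrives while soft-masked is latched, not handled. */
static void irq_arrives(unsigned int irq)
{
	if (soft_masked) {
		pending |= 1u << irq;
		return;
	}
	handle_irq(irq);
}

static void soft_irq_disable(void)
{
	soft_masked = true;     /* the real code leaves MSR[EE] set here */
}

/* Re-enabling replays whatever was latched while masked. */
static void soft_irq_enable(void)
{
	soft_masked = false;
	for (unsigned int irq = 0; pending != 0; irq++) {
		if (pending & (1u << irq)) {
			pending &= ~(1u << irq);
			handle_irq(irq);
		}
	}
}

int main(void)
{
	soft_irq_disable();
	irq_arrives(3);         /* latched */
	irq_arrives(5);         /* latched */
	soft_irq_enable();      /* both replayed here */
	return 0;
}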
check_return_regs_valid(regs);
diff --git a/arch/powerpc/kernel/interrupt_64.S b/arch/powerpc/kernel/interrupt_64.S
index 1ad059a9e2fef..6aa88fe91fb6a 100644
--- a/arch/powerpc/kernel/interrupt_64.S
+++ b/arch/powerpc/kernel/interrupt_64.S
@@ -418,8 +418,6 @@ _ASM_NOKPROBE_SYMBOL(interrupt_return_\srr\())
beq interrupt_return_\srr\()_kernel
interrupt_return_\srr\()_user: /* make backtraces match the _kernel variant */
_ASM_NOKPROBE_SYMBOL(interrupt_return_\srr\()_user)
- addi r3,r1,STACK_INT_FRAME_REGS
- bl CFUNC(interrupt_exit_user_prepare)
#ifndef CONFIG_INTERRUPT_SANITIZE_REGISTERS
cmpdi r3,0
bne- .Lrestore_nvgprs_\srr