Prevent interrupt restore from allowing racing hard interrupts to go ahead
of previous soft-pending ones, by using the soft-masked restart handler to
allow a store to clear the soft-mask while it is known that nothing is
soft-pending.
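In C pseudo-code the new sequence and its race window look roughly like
this (a sketch only; the real implementation is the inline asm in the
patch below, and IRQS_ENABLED is 0, so the zero just loaded from
irq_happened doubles as the value stored to clear the mask):

	if (local_paca->irq_happened)		/* lbz ; cmpwi */
		goto happened;			/* bne %l[happened] */
	/*
	 * A soft-masked interrupt arriving here would be recorded in
	 * irq_happened, yet the store below would still clear the mask,
	 * letting a later hard interrupt run ahead of it. The
	 * RESTART_TABLE(1b, 2b, 1b) entry (range 1b..2b, restart at 1b)
	 * instead makes the soft-masked interrupt return restart the
	 * sequence at the load, so the check and the store are atomic
	 * with respect to soft-masked interrupts.
	 */
	local_paca->irq_soft_mask = IRQS_ENABLED;	/* stb */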
This probably doesn't matter much in practice, but it's a simple
demonstrator / test case to exercise the restart table logic.

Signed-off-by: Nicholas Piggin <npig...@gmail.com>
---
 arch/powerpc/kernel/irq.c | 81 ++++++++++++++++++++++++---------------
 1 file changed, 51 insertions(+), 30 deletions(-)

diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index 08a747b92735..a032701e81be 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -224,54 +224,75 @@ notrace void arch_local_irq_restore(unsigned long mask)
 {
 	unsigned char irq_happened;
 
-	/* Write the new soft-enabled value */
-	irq_soft_mask_set(mask);
-	if (mask)
+	/* Write the new soft-enabled value if it is a disable */
+	if (mask) {
+		irq_soft_mask_set(mask);
 		return;
+	}
 
 	/*
-	 * From this point onward, we can take interrupts, preempt,
-	 * etc... unless we got hard-disabled. We check if an event
-	 * happened. If none happened, we know we can just return.
-	 *
-	 * We may have preempted before the check below, in which case
-	 * we are checking the "new" CPU instead of the old one. This
-	 * is only a problem if an event happened on the "old" CPU.
+	 * After the stb, interrupts are unmasked and there are no interrupts
+	 * pending replay. The restart sequence makes this atomic with
+	 * respect to soft-masked interrupts. If this was just a simple code
+	 * sequence, a soft-masked interrupt could become pending right after
+	 * the comparison and before the stb.
 	 *
-	 * External interrupt events will have caused interrupts to
-	 * be hard-disabled, so there is no problem, we
-	 * cannot have preempted.
+	 * This allows interrupts to be unmasked without hard disabling, and
+	 * also without new hard interrupts coming in ahead of pending ones.
 	 */
+	asm_volatile_goto(
+"1:					\n"
+"		lbz	9,%0(13)	\n"
+"		cmpwi	9,0		\n"
+"		bne	%l[happened]	\n"
+"		stb	9,%1(13)	\n"
+"2:					\n"
+		RESTART_TABLE(1b, 2b, 1b)
+	: : "i" (offsetof(struct paca_struct, irq_happened)),
+	    "i" (offsetof(struct paca_struct, irq_soft_mask))
+	: "cr0", "r9"
+	: happened);
+
+	if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
+		WARN_ON_ONCE(!(mfmsr() & MSR_EE));
+
+	return;
+
+happened:
 	irq_happened = get_irq_happened();
-	if (!irq_happened) {
+	if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
+		WARN_ON_ONCE(!irq_happened);
+
+	if (irq_happened == PACA_IRQ_HARD_DIS) {
 		if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
-			WARN_ON_ONCE(!(mfmsr() & MSR_EE));
+			WARN_ON_ONCE(mfmsr() & MSR_EE);
+		irq_soft_mask_set(IRQS_ENABLED);
+		local_paca->irq_happened = 0;
+		__hard_irq_enable();
 		return;
 	}
 
-	/* We need to hard disable to replay. */
+	/* Have interrupts to replay, need to hard disable first */
 	if (!(irq_happened & PACA_IRQ_HARD_DIS)) {
-		if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
-			WARN_ON_ONCE(!(mfmsr() & MSR_EE));
+		if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG)) {
+			if (!(mfmsr() & MSR_EE)) {
+				/*
+				 * An interrupt could have come in and cleared
+				 * MSR[EE] and set IRQ_HARD_DIS, so check
+				 * IRQ_HARD_DIS again and warn if it is still
+				 * clear.
+				 */
+				irq_happened = get_irq_happened();
+				WARN_ON_ONCE(!(irq_happened & PACA_IRQ_HARD_DIS));
+			}
+		}
 		__hard_irq_disable();
 		local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
 	} else {
-		/*
-		 * We should already be hard disabled here. We had bugs
-		 * where that wasn't the case so let's dbl check it and
-		 * warn if we are wrong. Only do that when IRQ tracing
-		 * is enabled as mfmsr() can be costly.
-		 */
 		if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG)) {
 			if (WARN_ON_ONCE(mfmsr() & MSR_EE))
 				__hard_irq_disable();
 		}
-
-		if (irq_happened == PACA_IRQ_HARD_DIS) {
-			local_paca->irq_happened = 0;
-			__hard_irq_enable();
-			return;
-		}
 	}
 
 	/*
-- 
2.23.0