Hello, Frederic! I don't see the following commit in mainline, but figured I should check with you guys to see if the problem got solved in some other way. Unless I hear otherwise, I will continue to carry this patch in -rcu and will send it along for the v5.13 merge window.
							Thanx, Paul

------------------------------------------------------------------------

commit 650c433b46ca9601378c9d170d5dc0e24dd42822
Author: Frederic Weisbecker <frede...@kernel.org>
Date:   Fri Jan 8 13:50:12 2021 +0100

    timer: Report ignored local enqueue in nohz mode
    
    Enqueuing a local timer after the tick has been stopped will result in
    the timer being ignored until the next random interrupt.
    
    Perform sanity checks to report these situations.
    
    Cc: Peter Zijlstra <pet...@infradead.org>
    Cc: Thomas Gleixner <t...@linutronix.de>
    Cc: Ingo Molnar <mi...@kernel.org>
    Cc: Rafael J. Wysocki <rafael.j.wyso...@intel.com>
    Signed-off-by: Frederic Weisbecker <frede...@kernel.org>
    Signed-off-by: Paul E. McKenney <paul...@kernel.org>

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ca2bb62..4822371 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -674,6 +674,26 @@ int get_nohz_timer_target(void)
 	return cpu;
 }
 
+static void wake_idle_assert_possible(void)
+{
+#ifdef CONFIG_SCHED_DEBUG
+	/* Timers are re-evaluated after idle IRQs */
+	if (in_hardirq())
+		return;
+	/*
+	 * Same as hardirqs, assuming they are executing
+	 * on IRQ tail. Ksoftirqd shouldn't reach here
+	 * as the timer base wouldn't be idle. And inline
+	 * softirq processing after a call to local_bh_enable()
+	 * within idle loop sound too fun to be considered here.
+	 */
+	if (in_serving_softirq())
+		return;
+
+	WARN_ON_ONCE("Late timer enqueue may be ignored\n");
+#endif
+}
+
 /*
  * When add_timer_on() enqueues a timer into the timer wheel of an
  * idle CPU then this timer might expire before the next timer event
@@ -688,8 +708,10 @@ static void wake_up_idle_cpu(int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
 
-	if (cpu == smp_processor_id())
+	if (cpu == smp_processor_id()) {
+		wake_idle_assert_possible();
 		return;
+	}
 
 	if (set_nr_and_not_polling(rq->idle))
 		smp_send_reschedule(cpu);
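
For anyone reading along, here is a minimal sketch of the sort of late local
enqueue this check is intended to flag. It is illustrative only, not part of
the patch: the function and timer names are made up, and it assumes
CONFIG_SCHED_DEBUG=y plus a caller sitting in the idle loop after
tick_nohz_idle_stop_tick(), so that the local timer base is already idle:

	#include <linux/timer.h>
	#include <linux/jiffies.h>
	#include <linux/printk.h>

	static struct timer_list demo_timer;	/* hypothetical */

	static void demo_timer_fn(struct timer_list *t)
	{
		pr_info("demo timer fired\n");
	}

	/*
	 * Hypothetical helper invoked from the idle loop after the tick
	 * has been stopped.  The enqueue does not reprogram the stopped
	 * tick, so the timer just sits in the wheel until the next
	 * unrelated interrupt re-evaluates the timer base.
	 */
	static void demo_arm_from_idle(void)
	{
		timer_setup(&demo_timer, demo_timer_fn, TIMER_PINNED);
		/*
		 * Base is idle: with the patch applied, the enqueue path
		 * should end up in wake_up_idle_cpu() on the local CPU,
		 * and since this is neither hardirq nor softirq context,
		 * wake_idle_assert_possible() emits the WARN_ON_ONCE().
		 */
		mod_timer(&demo_timer, jiffies + HZ);
	}

Without the warning, the only visible symptom is the timer expiring late,
whenever some other interrupt happens to wake the CPU, which is exactly the
silent failure the sanity check is meant to surface.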