On Mon, Jun 09, 2025 at 02:01:24PM -0400, Joel Fernandes wrote:
> During rcu_read_unlock_special(), if this happens during irq_exit(), we
> can lock up if an IPI is issued. This is because the IPI itself triggers
> the irq_exit() path, causing a recursive lockup.
> 
> This is precisely what Xiongfeng found when invoking a BPF program on
> the trace_tick_stop() tracepoint, as shown in the trace below. Fix by
> using context tracking to tell us if we're still in an IRQ:
> context tracking keeps track of the IRQ until after the tracepoint, so
> it cures the issue.
> 
> irq_exit()
>   __irq_exit_rcu()
>     /* in_hardirq() returns false after this */
>     preempt_count_sub(HARDIRQ_OFFSET)
>     tick_irq_exit()
@Frederic, while we are at it, what's the purpose of in_hardirq() in
tick_irq_exit()? For nested interrupt detection?

Regards,
Boqun

>       tick_nohz_irq_exit()
>         tick_nohz_stop_sched_tick()
>           trace_tick_stop() /* a bpf prog is hooked on this tracepoint */
>             __bpf_trace_tick_stop()
>               bpf_trace_run2()
>                 rcu_read_unlock_special()
>                   /* will send an IPI to itself */
>                   irq_work_queue_on(&rdp->defer_qs_iw, rdp->cpu);
> 
> A simple reproducer can also be obtained by doing the following in
> tick_irq_exit(). It will hang on boot without the patch:
> 
>  static inline void tick_irq_exit(void)
>  {
> +	rcu_read_lock();
> +	WRITE_ONCE(current->rcu_read_unlock_special.b.need_qs, true);
> +	rcu_read_unlock();
> +
> 
> While at it, add some comments to this code.
> 
> Reported-by: Xiongfeng Wang <wangxiongfe...@huawei.com>
> Closes: https://lore.kernel.org/all/9acd5f9f-6732-7701-6880-4b51190aa...@huawei.com/
> Tested-by: Xiongfeng Wang <wangxiongfe...@huawei.com>
> Signed-off-by: Joel Fernandes <joelagn...@nvidia.com>

[...]