On Mon, May 07, 2018 at 02:21:17PM -0400, Steven Rostedt wrote:
> Peter, Ingo or Thomas,
> 
> Can you queue this up? Or would you prefer that I take it?

This patch is a bug fix; could it be queued for the v4.17-rc cycle or for
linux-next?

I have included it again below:

thanks,

 - Joel

---8<-----------------------

From: "Joel Fernandes (Google)" <j...@joelfernandes.org>
Date: Tue, 1 May 2018 18:44:39 -0700
Subject: [PATCH] softirq: reorder trace_softirqs_on to prevent lockdep splat

I'm able to reproduce a lockdep splat with config options:
CONFIG_PROVE_LOCKING=y,
CONFIG_DEBUG_LOCK_ALLOC=y and
CONFIG_PREEMPTIRQ_EVENTS=y

$ echo 1 > /d/tracing/events/preemptirq/preempt_enable/enable

[   26.112609] DEBUG_LOCKS_WARN_ON(current->softirqs_enabled)
[   26.112636] WARNING: CPU: 0 PID: 118 at kernel/locking/lockdep.c:3854
[...]
[   26.144229] Call Trace:
[   26.144926]  <IRQ>
[   26.145506]  lock_acquire+0x55/0x1b0
[   26.146499]  ? __do_softirq+0x46f/0x4d9
[   26.147571]  ? __do_softirq+0x46f/0x4d9
[   26.148646]  trace_preempt_on+0x8f/0x240
[   26.149744]  ? trace_preempt_on+0x4d/0x240
[   26.150862]  ? __do_softirq+0x46f/0x4d9
[   26.151930]  preempt_count_sub+0x18a/0x1a0
[   26.152985]  __do_softirq+0x46f/0x4d9
[   26.153937]  irq_exit+0x68/0xe0
[   26.154755]  smp_apic_timer_interrupt+0x271/0x280
[   26.156056]  apic_timer_interrupt+0xf/0x20
[   26.157105]  </IRQ>

The issue was this:

preempt_count = 1 << SOFTIRQ_SHIFT

        __local_bh_enable(cnt = 1 << SOFTIRQ_SHIFT) {
                if (softirq_count() == (cnt & SOFTIRQ_MASK)) {
                        trace_softirqs_on() {
                                current->softirqs_enabled = 1;
                        }
                }
                preempt_count_sub(cnt) {
                        trace_preempt_on() {
                                tracepoint() {
                                        rcu_read_lock_sched() {
                                                // jumps into lockdep

At this point preempt_count still shows softirqs as disabled, but
current->softirqs_enabled has already been set to true, so lockdep's
consistency check trips and we get the splat.
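
For illustration only, and not part of the patch below: here is a minimal
standalone userspace C sketch of the ordering problem. All names in it
(preempt_count_val, softirqs_enabled, lockdep_check, trace_preempt_on_model,
local_bh_enable_before/after) are simplified stand-ins invented for this
sketch, not the kernel's symbols, and the SOFTIRQ_SHIFT/MASK values are
simplified too.

/* splat_model.c - standalone sketch; build with: gcc -o splat_model splat_model.c */
#include <assert.h>
#include <stdio.h>

#define SOFTIRQ_SHIFT   8                       /* simplified, not the kernel's exact layout */
#define SOFTIRQ_MASK    (0xffu << SOFTIRQ_SHIFT)

static unsigned int preempt_count_val;          /* stand-in for preempt_count()           */
static int softirqs_enabled;                    /* stand-in for current->softirqs_enabled */

static unsigned int softirq_count(void)
{
        return preempt_count_val & SOFTIRQ_MASK;
}

/* Stand-in for the lockdep consistency check that produced the splat:
 * if the softirq count is non-zero, the flag must still say "disabled". */
static void lockdep_check(void)
{
        if (softirq_count())
                assert(!softirqs_enabled);      /* DEBUG_LOCKS_WARN_ON(current->softirqs_enabled) */
}

/* Stand-in for the preempt_enable trace event: the tracepoint takes
 * rcu_read_lock_sched(), which ends up in lockdep. */
static void trace_preempt_on_model(void)
{
        lockdep_check();
}

/* Old ordering: the flag flips first, then preempt_count_sub() traces
 * while the softirq count is still held -> inconsistent state. */
static void local_bh_enable_before(unsigned int cnt)
{
        if (softirq_count() == (cnt & SOFTIRQ_MASK))
                softirqs_enabled = 1;           /* trace_softirqs_on()         */
        if (preempt_count_val == cnt)           /* inside preempt_count_sub()  */
                trace_preempt_on_model();       /* splat                       */
        preempt_count_val -= cnt;
}

/* New ordering (what the patch does): trace first, then flip the flag,
 * then do a raw decrement that never traces. */
static void local_bh_enable_after(unsigned int cnt)
{
        if (preempt_count_val == cnt)
                trace_preempt_on_model();       /* flag still 0: consistent    */
        if (softirq_count() == (cnt & SOFTIRQ_MASK))
                softirqs_enabled = 1;           /* trace_softirqs_on()         */
        preempt_count_val -= cnt;               /* like __preempt_count_sub()  */
}

int main(void)
{
        preempt_count_val = 1u << SOFTIRQ_SHIFT;
        softirqs_enabled = 0;
        local_bh_enable_after(1u << SOFTIRQ_SHIFT);
        printf("new ordering: consistency check passed\n");

        preempt_count_val = 1u << SOFTIRQ_SHIFT;
        softirqs_enabled = 0;
        local_bh_enable_before(1u << SOFTIRQ_SHIFT);    /* aborts on the assert, like the splat */
        return 0;
}

The new ordering avoids the splat because the only path into lockdep (the
preempt_enable tracepoint) now runs before current->softirqs_enabled flips,
and the final decrement is the raw, non-tracing variant.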

Cc: sta...@vger.kernel.org
Reviewed-by: Steven Rostedt (VMware) <rost...@goodmis.org>
Fixes: d59158162e032 ("tracing: Add support for preempt and irq enable/disable events")
Signed-off-by: Joel Fernandes (Google) <j...@joelfernandes.org>
---
 kernel/softirq.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/kernel/softirq.c b/kernel/softirq.c
index 177de3640c78..8a040bcaa033 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -139,9 +139,13 @@ static void __local_bh_enable(unsigned int cnt)
 {
        lockdep_assert_irqs_disabled();
 
+       if (preempt_count() == cnt)
+               trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
+
        if (softirq_count() == (cnt & SOFTIRQ_MASK))
                trace_softirqs_on(_RET_IP_);
-       preempt_count_sub(cnt);
+
+       __preempt_count_sub(cnt);
 }
 
 /*
-- 
2.17.0.921.gf22659ad46-goog
