On 06/16/2015 07:45 AM, Alexei Starovoitov wrote:
> On 6/15/15 7:14 PM, Paul E. McKenney wrote:
>>
>> Why do you believe that it is better to fix it within call_rcu()?
>
> found it:
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 8cf7304b2867..a3be09d482ae 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -935,9 +935,9 @@ bool notrace rcu_is_watching(void)
>  {
>  	bool ret;
>
> -	preempt_disable();
> +	preempt_disable_notrace();
>  	ret = __rcu_is_watching();
> -	preempt_enable();
> +	preempt_enable_notrace();
>  	return ret;
>  }
>
> rcu_is_watching() and __rcu_is_watching() are already marked
> notrace, so imo it's a good 'fix'.
> What was happening is that the above preempt_enable() was triggering
> a recursive call_rcu() that was corrupting the 'rdp' prepared by
> __call_rcu() before __call_rcu_core() could use it.
>
> btw, I also noticed that the local_irq_save() done by note_gp_changes()
> is partially redundant: in the __call_rcu_core() path irqs are
> already disabled.
>
>> Perhaps you are self-deadlocking within __call_rcu_core().  If you have
>> not already done so, please try running with CONFIG_PROVE_LOCKING=y.
>
> yes, I had CONFIG_PROVE_LOCKING on.
>
>> I suspect that your problem may range quite a bit further than just
>> call_rcu().  For example, in your stack trace, you have a recursive
>> call to debug_object_activate(), which might not be such good thing.
>
> nope :) recursive debug_object_activate() is fine.
> With the above 'fix' the trace.patch is now passing.
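To make the recursion concrete: a kprobe handler that itself ends up
calling call_rcu() will re-enter RCU whenever the probe sits anywhere on
the call_rcu() path. A minimal sketch of such a module follows; all of
the demo_* names, the payload, and the choice of probed symbol are
hypothetical and for illustration only (this is not the actual test
program), and whether a given symbol can really be probed also depends
on the kernel's kprobe blacklist.

#include <linux/module.h>
#include <linux/kprobes.h>
#include <linux/slab.h>
#include <linux/rcupdate.h>

/* Toy object freed via RCU from inside a probe handler. */
struct demo_obj {
	struct rcu_head rcu;
	int payload;
};

static void demo_free_rcu(struct rcu_head *head)
{
	kfree(container_of(head, struct demo_obj, rcu));
}

/* Pre-handler: runs in the middle of whatever function we probed. */
static int demo_pre(struct kprobe *kp, struct pt_regs *regs)
{
	struct demo_obj *obj = kzalloc(sizeof(*obj), GFP_ATOMIC);

	if (obj) {
		/* Re-enters RCU while RCU itself may be mid-operation. */
		call_rcu(&obj->rcu, demo_free_rcu);
	}
	return 0;
}

static struct kprobe demo_kp = {
	/* Probing a function on call_rcu()'s own path is what recurses. */
	.symbol_name	= "rcu_is_watching",
	.pre_handler	= demo_pre,
};

static int __init demo_init(void)
{
	return register_kprobe(&demo_kp);
}

static void __exit demo_exit(void)
{
	unregister_kprobe(&demo_kp);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");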
It still crashes for me with the original test program:

[ 145.908013] [<ffffffff810d1da1>] ? __rcu_reclaim+0x101/0x3d0
[ 145.908013] [<ffffffff810d1ca0>] ? rcu_barrier_func+0x250/0x250
[ 145.908013] [<ffffffff810abc03>] ? trace_hardirqs_on_caller+0xf3/0x240
[ 145.908013] [<ffffffff810d9afa>] rcu_do_batch+0x2ea/0x6b0
[ 145.908013] [<ffffffff8151a803>] ? __this_cpu_preempt_check+0x13/0x20
[ 145.908013] [<ffffffff810abc03>] ? trace_hardirqs_on_caller+0xf3/0x240
[ 145.921092] [<ffffffff81b6f072>] ? _raw_spin_unlock_irqrestore+0x42/0x80
[ 145.921092] [<ffffffff810d2794>] ? rcu_report_qs_rnp+0x1b4/0x3f0
[ 145.921092] [<ffffffff8151a803>] ? __this_cpu_preempt_check+0x13/0x20
[ 145.921092] [<ffffffff810d9f96>] rcu_process_callbacks+0xd6/0x6a0
[ 145.921092] [<ffffffff81060042>] __do_softirq+0xe2/0x670
[ 145.921092] [<ffffffff810605ef>] run_ksoftirqd+0x1f/0x60
[ 145.921092] [<ffffffff81081843>] smpboot_thread_fn+0x193/0x2a0
[ 145.921092] [<ffffffff810816b0>] ? sort_range+0x30/0x30
[ 145.921092] [<ffffffff8107da12>] kthread+0xf2/0x110
[ 145.921092] [<ffffffff81b6a523>] ? wait_for_completion+0xc3/0x120
[ 145.921092] [<ffffffff8108a77b>] ? preempt_count_sub+0xab/0xf0
[ 145.921092] [<ffffffff8107d920>] ? kthread_create_on_node+0x240/0x240
[ 145.921092] [<ffffffff81b6ff02>] ret_from_fork+0x42/0x70
[ 145.921092] [<ffffffff8107d920>] ? kthread_create_on_node+0x240/0x240

> Why am I digging into all of this? Well, to find out when it's safe
> to finally do call_rcu(). If I use the deferred-kfree approach in
> bpf maps, I need to know when it's safe to finally call_rcu(), and
> that's not an easy answer.
> kprobes can potentially be placed anywhere in the call_rcu() stack,
> so things can get messy quickly. So it helps to understand the
> call_rcu() logic well enough to come up with a good solution.
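For reference, the deferred-kfree pattern being discussed for bpf map
elements would look roughly like this; the map_elem layout and helper
names below are a hypothetical sketch, not the actual bpf map code.

/* Element carries its own rcu_head so it can be freed after a GP. */
struct map_elem {
	struct rcu_head rcu;
	/* key and value storage would follow */
};

static void map_elem_free_rcu(struct rcu_head *head)
{
	kfree(container_of(head, struct map_elem, rcu));
}

/* Called after the element has been unlinked from the map. */
static void map_elem_delete(struct map_elem *elem)
{
	/*
	 * Readers inside rcu_read_lock() sections may still see elem;
	 * call_rcu() defers the kfree() until a grace period has
	 * elapsed, after which no reader can hold a reference.
	 */
	call_rcu(&elem->rcu, map_elem_free_rcu);
}

For a simple kfree-only callback like this, kfree_rcu(elem, rcu) is the
usual shorthand. The catch raised above is that if a kprobe fires inside
call_rcu() itself and the attached program ends up on this deletion
path, the call_rcu() machinery is re-entered, which is exactly the
recursion being debugged in this thread.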