Le Wed, Oct 01, 2025 at 07:48:13AM -0700, Paul E. McKenney a écrit :
> This commit saves more than 500 lines of RCU code by re-implementing
> RCU Tasks Trace in terms of SRCU-fast.  Follow-up work will remove
> more code that does not cause problems by its presence, but that is no
> longer required.
> 
> This variant places smp_mb() in rcu_read_{,un}lock_trace(), which will
> be removed on common-case architectures in a later commit.

The changelog doesn't mention what this is ordering :-)

> 
> [ paulmck: Apply kernel test robot, Boqun Feng, and Zqiang feedback. ]
> [ paulmck: Split out Tiny SRCU fixes per Andrii Nakryiko feedback. ]
> 
> Signed-off-by: Paul E. McKenney <paul...@kernel.org>
> Tested-by: kernel test robot <oliver.s...@intel.com>
> Cc: Andrii Nakryiko <and...@kernel.org>
> Cc: Alexei Starovoitov <a...@kernel.org>
> Cc: Peter Zijlstra <pet...@infradead.org>
> Cc: <b...@vger.kernel.org>
> ---
[...]
> @@ -50,12 +50,14 @@ static inline void rcu_read_lock_trace(void)
>  {
>       struct task_struct *t = current;
>  
> -     WRITE_ONCE(t->trc_reader_nesting, READ_ONCE(t->trc_reader_nesting) + 1);
> -     barrier();
> -     if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB) &&
> -         t->trc_reader_special.b.need_mb)
> -             smp_mb(); // Pairs with update-side barriers
> -     rcu_lock_acquire(&rcu_trace_lock_map);
> +     if (t->trc_reader_nesting++) {
> +             // In case we interrupted a Tasks Trace RCU reader.
> +             rcu_try_lock_acquire(&rcu_tasks_trace_srcu_struct.dep_map);
> +             return;
> +     }
> +     barrier();  // nesting before scp to protect against interrupt handler.
> +     t->trc_reader_scp = srcu_read_lock_fast(&rcu_tasks_trace_srcu_struct);
> +     smp_mb(); // Placeholder for more selective ordering

Mysterious :-)

>  }
>  
>  /**
> @@ -69,26 +71,75 @@ static inline void rcu_read_lock_trace(void)
>   */
>  static inline void rcu_read_unlock_trace(void)
>  {
> -     int nesting;
> +     struct srcu_ctr __percpu *scp;
>       struct task_struct *t = current;
>  
> -     rcu_lock_release(&rcu_trace_lock_map);
> -     nesting = READ_ONCE(t->trc_reader_nesting) - 1;
> -     barrier(); // Critical section before disabling.
> -     // Disable IPI-based setting of .need_qs.
> -     WRITE_ONCE(t->trc_reader_nesting, INT_MIN + nesting);
> -     if (likely(!READ_ONCE(t->trc_reader_special.s)) || nesting) {
> -             WRITE_ONCE(t->trc_reader_nesting, nesting);
> -             return;  // We assume shallow reader nesting.
> -     }
> -     WARN_ON_ONCE(nesting != 0);
> -     rcu_read_unlock_trace_special(t);
> +     smp_mb(); // Placeholder for more selective ordering

Bizarre :-)

> +     scp = t->trc_reader_scp;
> +     barrier();  // scp before nesting to protect against interrupt handler.

What is it protecting against interrupt?

> +     if (!--t->trc_reader_nesting)
> +             srcu_read_unlock_fast(&rcu_tasks_trace_srcu_struct, scp);
> +     else
> +             srcu_lock_release(&rcu_tasks_trace_srcu_struct.dep_map);
> +}

Thanks (very happy to see all the rest of the code going away!)

-- 
Frederic Weisbecker
SUSE Labs

Reply via email to