On Thu, Oct 2, 2025 at 6:38 AM Paul E. McKenney <paul...@kernel.org> wrote:
>
> On Wed, Oct 01, 2025 at 06:37:33PM -0700, Alexei Starovoitov wrote:
> > On Wed, Oct 1, 2025 at 7:48 AM Paul E. McKenney <paul...@kernel.org> wrote:
> > >
> > > +static inline struct srcu_ctr __percpu *rcu_read_lock_tasks_trace(void)
> > > +{
> > > +	struct srcu_ctr __percpu *ret = __srcu_read_lock_fast(&rcu_tasks_trace_srcu_struct);
> > > +
> > > +	rcu_try_lock_acquire(&rcu_tasks_trace_srcu_struct.dep_map);
> > > +	if (!IS_ENABLED(CONFIG_TASKS_TRACE_RCU_NO_MB))
> > > +		smp_mb(); // Provide ordering on noinstr-incomplete architectures.
> > > +	return ret;
> > > +}
> >
> > ...
> >
> > > @@ -50,14 +97,15 @@ static inline void rcu_read_lock_trace(void)
> > >  {
> > >  	struct task_struct *t = current;
> > >
> > > +	rcu_try_lock_acquire(&rcu_tasks_trace_srcu_struct.dep_map);
> > >  	if (t->trc_reader_nesting++) {
> > >  		// In case we interrupted a Tasks Trace RCU reader.
> > > -		rcu_try_lock_acquire(&rcu_tasks_trace_srcu_struct.dep_map);
> > >  		return;
> > >  	}
> > >  	barrier();  // nesting before scp to protect against interrupt handler.
> > > -	t->trc_reader_scp = srcu_read_lock_fast(&rcu_tasks_trace_srcu_struct);
> > > -	smp_mb(); // Placeholder for more selective ordering
> > > +	t->trc_reader_scp = __srcu_read_lock_fast(&rcu_tasks_trace_srcu_struct);
> > > +	if (!IS_ENABLED(CONFIG_TASKS_TRACE_RCU_NO_MB))
> > > +		smp_mb(); // Placeholder for more selective ordering
> > >  }
> >
> > Since srcu_fast() __percpu pointers must be incremented/decremented
> > within the same task, should we expose "raw" rcu_read_lock_tasks_trace()
> > at all?
> > rcu_read_lock_trace() stashes that pointer within a task,
> > so implementation guarantees that unlock will happen within the same task,
> > while _tasks_trace() requires the user not to do stupid things.
> >
> > I guess it's fine to have both versions and the amount of copy paste
> > seems justified, but I keep wondering.
> > Especially since _tasks_trace() needs more work on bpf trampoline
> > side to pass this pointer around from lock to unlock.
> > We can add extra 8 bytes to struct bpf_tramp_run_ctx and save it there,
> > but set/reset run_ctx operates on current anyway, so it's not clear
> > which version will be faster. I suspect _trace() will be good enough.
> > Especially since trc_reader_nesting is kinda an optimization.
>
> The idea is to convert callers and get rid of rcu_read_lock_trace()
> in favor of rcu_read_lock_tasks_trace(), the reason being the slow
> task_struct access on x86.  But if the extra storage is an issue for
> some use cases, we can keep both.  In that case, I would of course
> reduce the copy-pasta in a future patch.
slow task_struct access on x86? That's news to me. Why is it slow?

static __always_inline struct task_struct *get_current(void)
{
	if (IS_ENABLED(CONFIG_USE_X86_SEG_SUPPORT))
		return this_cpu_read_const(const_current_task);

	return this_cpu_read_stable(current_task);
}

The former is used with gcc 14+ while the latter is used with clang.
I don't understand the difference between the two.
I'm guessing the gcc 14+ variant can be optimized better within the
function, but both look plenty fast.
We need current access anyway for run_ctx.
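
For what it's worth, my (unverified) reading of why neither path
should be slow: both variants compile down to a single %gs-relative
load, and both tell the compiler the value is stable, so repeated uses
of current within one function can reuse a single load. A minimal
sketch; the codegen comments are what I would expect, not measured
output, and touch() is just a placeholder callee:

	static noinline void touch(struct task_struct *t);

	void example(void)
	{
		touch(current);	/* mov %gs:current_task, %rdi */
		touch(current);	/* the compiler may reuse the cached
				 * pointer here, since both
				 * this_cpu_read_const() and
				 * this_cpu_read_stable() mark the value
				 * as invariant within the function */
	}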
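
And to make the earlier pointer-passing question concrete, a
hypothetical sketch of the "extra 8 bytes in struct bpf_tramp_run_ctx"
idea, i.e. carrying the srcu_ctr pointer from lock to unlock without
going through current. The struct, the field name, and the
rcu_read_unlock_tasks_trace() signature are my assumptions for
illustration, not from the patch:

	/* Hypothetical: extend the trampoline run context by one pointer. */
	struct bpf_tramp_run_ctx_sketch {
		/* ... existing run_ctx state ... */
		struct srcu_ctr __percpu *tasks_trace_scp;	/* +8 bytes */
	};

	static void tramp_enter(struct bpf_tramp_run_ctx_sketch *run_ctx)
	{
		/* The lock hands back a per-CPU counter pointer ... */
		run_ctx->tasks_trace_scp = rcu_read_lock_tasks_trace();
	}

	static void tramp_exit(struct bpf_tramp_run_ctx_sketch *run_ctx)
	{
		/* ... which the same task must hand back to the unlock,
		 * satisfying the srcu_fast same-task requirement without
		 * touching current->trc_reader_scp. */
		rcu_read_unlock_tasks_trace(run_ctx->tasks_trace_scp);
	}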