On Wed, Mar 04, 2026 at 11:02:23PM +0100, Peter Zijlstra wrote:
> On Wed, Mar 04, 2026 at 04:56:16PM +0000, Dmitry Ilvokhin wrote:
> > Move the percpu_up_read() slowpath out of the inline function into a new
> > __percpu_up_read_slowpath() to avoid binary size increase from adding a
> > tracepoint to an inlined function.
> >
> > Signed-off-by: Dmitry Ilvokhin <[email protected]>
> > ---
> >  include/linux/percpu-rwsem.h  | 15 +++------------
> >  kernel/locking/percpu-rwsem.c | 18 ++++++++++++++++++
> >  2 files changed, 21 insertions(+), 12 deletions(-)
> > 
> > diff --git a/include/linux/percpu-rwsem.h b/include/linux/percpu-rwsem.h
> > index c8cb010d655e..89506895365c 100644
> > --- a/include/linux/percpu-rwsem.h
> > +++ b/include/linux/percpu-rwsem.h
> > @@ -107,6 +107,8 @@ static inline bool percpu_down_read_trylock(struct percpu_rw_semaphore *sem)
> >     return ret;
> >  }
> >  
> > +void __percpu_up_read_slowpath(struct percpu_rw_semaphore *sem);
> > +
> 
> extern for consistency with all the other declarations in this header.
> 
> s/_slowpath//, the corresponding down function also doesn't have
> _slowpath on.

Thanks for the feedback, Peter. Applied both suggestions locally.
