On Wed, Mar 04, 2026 at 04:56:16PM +0000, Dmitry Ilvokhin wrote:
> Move the percpu_up_read() slowpath out of the inline function into a new
> __percpu_up_read_slowpath() to avoid binary size increase from adding a
> tracepoint to an inlined function.
>
> Signed-off-by: Dmitry Ilvokhin <[email protected]>
> ---
>  include/linux/percpu-rwsem.h  | 15 +++------------
>  kernel/locking/percpu-rwsem.c | 18 ++++++++++++++++++
>  2 files changed, 21 insertions(+), 12 deletions(-)
> 
> diff --git a/include/linux/percpu-rwsem.h b/include/linux/percpu-rwsem.h
> index c8cb010d655e..89506895365c 100644
> --- a/include/linux/percpu-rwsem.h
> +++ b/include/linux/percpu-rwsem.h
> @@ -107,6 +107,8 @@ static inline bool percpu_down_read_trylock(struct percpu_rw_semaphore *sem)
>       return ret;
>  }
>  
> +void __percpu_up_read_slowpath(struct percpu_rw_semaphore *sem);
> +

Please add extern here, for consistency with all the other declarations in
this header.

Also s/_slowpath//: the corresponding down function, __percpu_down_read(),
doesn't carry a _slowpath suffix either.

>  static inline void percpu_up_read(struct percpu_rw_semaphore *sem)
>  {
>       rwsem_release(&sem->dep_map, _RET_IP_);
