From: Uros Bizjak <ubiz...@gmail.com>

[ Upstream commit eb887c4567d1b0e7684c026fe7df44afa96589e6 ]

Use atomic64_inc_return(&ref) instead of atomic64_add_return(1, &ref)
to use the optimized implementation and ease register pressure around
the primitive for targets that implement an optimized variant.

Cc: Steven Rostedt <rost...@goodmis.org>
Cc: Masami Hiramatsu <mhira...@kernel.org>
Cc: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
Link: https://lore.kernel.org/20241007085651.48544-1-ubiz...@gmail.com
Signed-off-by: Uros Bizjak <ubiz...@gmail.com>
Signed-off-by: Steven Rostedt (Google) <rost...@goodmis.org>
Signed-off-by: Sasha Levin <sas...@kernel.org>
---
 kernel/trace/trace_clock.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/trace/trace_clock.c b/kernel/trace/trace_clock.c
index 4702efb00ff21..4cb2ebc439be6 100644
--- a/kernel/trace/trace_clock.c
+++ b/kernel/trace/trace_clock.c
@@ -154,5 +154,5 @@ static atomic64_t trace_counter;
  */
 u64 notrace trace_clock_counter(void)
 {
-	return atomic64_add_return(1, &trace_counter);
+	return atomic64_inc_return(&trace_counter);
 }
-- 
2.43.0
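
A minimal user-space sketch of why this substitution is behavior-preserving, assuming a C11 stdatomic analogue rather than the kernel atomic64 API: both forms return the counter value after the increment. The add_return()/inc_return() helpers below are hypothetical stand-ins that only mirror the kernel primitives' return-new-value semantics.

#include <assert.h>
#include <inttypes.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

static _Atomic int64_t counter;

/* Stand-in for atomic64_add_return(): add i, return the new value. */
static int64_t add_return(int64_t i, _Atomic int64_t *v)
{
	return atomic_fetch_add(v, i) + i;
}

/* Stand-in for atomic64_inc_return(): increment, return the new value. */
static int64_t inc_return(_Atomic int64_t *v)
{
	return atomic_fetch_add(v, 1) + 1;
}

int main(void)
{
	int64_t a = add_return(1, &counter);	/* old form: add_return(1, &ref) */
	int64_t b = inc_return(&counter);	/* new form: inc_return(&ref) */

	/* Each call returns the post-increment value; b follows a by one. */
	assert(b == a + 1);
	printf("add_return -> %" PRId64 ", inc_return -> %" PRId64 "\n", a, b);
	return 0;
}

The kernel-side benefit is purely in code generation: targets that provide an optimized atomic64_inc_return() variant avoid materializing the constant 1, easing register pressure around the call site, as the commit message notes.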