Annotate arch_atomic_add_return() to avoid signed overflow
instrumentation. Its addition is expected to wrap around.
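
For context, a minimal sketch of what the annotation is assumed to
expand to, along the lines of the __signed_wrap definition in
include/linux/compiler_types.h (illustrative only; the in-tree
definition is authoritative):

    /*
     * Tell the signed-integer-overflow sanitizer that wrapping is
     * intentional in the annotated function, so it is not instrumented.
     */
    #ifdef CONFIG_UBSAN_SIGNED_WRAP
    # define __signed_wrap __attribute__((no_sanitize("signed-integer-overflow")))
    #else
    # define __signed_wrap
    #endif

With this applied, the wrapping addition in arch_atomic_add_return()
(i + xadd(&v->counter, i)) is left uninstrumented rather than being
reported as signed overflow.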

Cc: Will Deacon <w...@kernel.org>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Boqun Feng <boqun.f...@gmail.com>
Cc: Mark Rutland <mark.rutl...@arm.com>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Borislav Petkov <b...@alien8.de>
Cc: Dave Hansen <dave.han...@linux.intel.com>
Cc: x...@kernel.org
Cc: "H. Peter Anvin" <h...@zytor.com>
Signed-off-by: Kees Cook <keesc...@chromium.org>
---
 arch/x86/include/asm/atomic.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h
index 55a55ec04350..4120cdd87da8 100644
--- a/arch/x86/include/asm/atomic.h
+++ b/arch/x86/include/asm/atomic.h
@@ -80,7 +80,7 @@ static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
 }
 #define arch_atomic_add_negative arch_atomic_add_negative
 
-static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
+static __always_inline __signed_wrap int arch_atomic_add_return(int i, atomic_t *v)
 {
        return i + xadd(&v->counter, i);
 }
-- 
2.34.1

