On Mon, Jan 22, 2024 at 04:26:45PM -0800, Kees Cook wrote:
> Annotate atomic_add_return() to avoid signed overflow instrumentation.
> It is expected to wrap around.
> 
> Cc: Will Deacon <w...@kernel.org>
> Cc: Peter Zijlstra <pet...@infradead.org>
> Cc: Boqun Feng <boqun.f...@gmail.com>
> Cc: Mark Rutland <mark.rutl...@arm.com>
> Cc: Thomas Gleixner <t...@linutronix.de>
> Cc: Ingo Molnar <mi...@redhat.com>
> Cc: Borislav Petkov <b...@alien8.de>
> Cc: Dave Hansen <dave.han...@linux.intel.com>
> Cc: x...@kernel.org
> Cc: "H. Peter Anvin" <h...@zytor.com>
> Signed-off-by: Kees Cook <keesc...@chromium.org>
> ---
>  arch/x86/include/asm/atomic.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h
> index 55a55ec04350..4120cdd87da8 100644
> --- a/arch/x86/include/asm/atomic.h
> +++ b/arch/x86/include/asm/atomic.h
> @@ -80,7 +80,7 @@ static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
>  }
>  #define arch_atomic_add_negative arch_atomic_add_negative
>  
> -static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
> +static __always_inline __signed_wrap int arch_atomic_add_return(int i, atomic_t *v)
>  {
>       return i + xadd(&v->counter, i);
>  }

I think that here (and in the arm64 patch) it'd be better to use add_wrap() on
the specific statement, i.e. have:

static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
{
        return add_wrap(i, xadd(&v->counter, i));
}

... since otherwise the annotation could apply to the '+' or something else
(e.g. if the xadd() part is a special macro), and the annotation might
unexpectedly hide things if we add other statements here in future.

Mark.

> -- 
> 2.34.1
> 