On Fri, Apr 26, 2024 at 08:40:50AM +0100, David Howells wrote:
> Kees Cook wrote:
>
> > - return i + xadd(&v->counter, i);
> > + return wrapping_add(int, i, xadd(&v->counter, i));
>
> Ewww. Can't you just mark the variable as wrapping in some way, either by:
>
> 	unsigned int __cyclic	counter;
Kees Cook wrote:
> - return i + xadd(&v->counter, i);
> + return wrapping_add(int, i, xadd(&v->counter, i));
Ewww. Can't you just mark the variable as wrapping in some way, either by:
unsigned int __cyclic counter;
or, to use rxrpc packet sequence numbers as an example:
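The example itself is truncated by the archive. A minimal sketch of the
shape such a type-based annotation could take; "__cyclic" is hypothetical
here, and the comparison helper is the same wraparound-safe idiom TCP's
before()/after() already use for sequence numbers:

    #include <stdbool.h>

    /* Hypothetical marker: expands to nothing, exists only so that
     * tooling can see that values of this type wrap by design. */
    #define __cyclic

    typedef unsigned int __cyclic rxrpc_seq_t;

    /* Wraparound-safe ordering, valid while the two sequence numbers
     * are within 2^31 of each other. */
    static inline bool rxrpc_seq_before(rxrpc_seq_t a, rxrpc_seq_t b)
    {
            return (int)(a - b) < 0;
    }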
On Thu, Apr 25, 2024 at 11:17:52AM +0200, Peter Zijlstra wrote:
> On Wed, Apr 24, 2024 at 04:20:20PM -0700, Kees Cook wrote:
>
> > > This is arse-about-face. Signed stuff wraps per -fno-strict-overflow.
> > > We've been writing code for years under that assumption.
> >
> > Right, which is why this is going to take time to roll out. :) [...]
On Thu, Apr 25, 2024 at 11:15:17AM +0100, Mark Rutland wrote:
> To be clear, I dislike the function annotation because then it applies to
> *everything* within the function, which is overly broad and the intent becomes
> unclear. That makes it painful to refactor the code (since e.g. if we want to [...]
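For contrast, a sketch of the function-scope annotation being objected
to, assuming Clang's no_sanitize attribute: it exempts every expression
in the function, not just the one addition that is meant to wrap.

    /* Function-wide exemption: ALL signed arithmetic here is excluded
     * from overflow instrumentation, so an unrelated, genuinely buggy
     * overflow added during a later refactor would go unnoticed. */
    __attribute__((no_sanitize("signed-integer-overflow")))
    static int seq_advance(int seq, int delta)
    {
            return seq + delta;     /* the one wrap we actually intend */
    }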
On Thu, Apr 25, 2024 at 11:28:12AM +0200, Peter Zijlstra wrote:
> On Wed, Apr 24, 2024 at 04:30:50PM -0700, Kees Cook wrote:
>
> > > That is, anything that actively warns about signed overflow when built
> > > with -fno-strict-overflow is a bug. If you want this warning you have to
> > > explicitly mark things.
On Wed, Apr 24, 2024 at 03:45:07PM -0700, Kees Cook wrote:
> On Thu, Apr 25, 2024 at 12:41:41AM +0200, Peter Zijlstra wrote:
> > On Wed, Apr 24, 2024 at 12:17:34PM -0700, Kees Cook wrote:
> >
> > > @@ -82,7 +83,7 @@ static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
>
On Wed, Apr 24, 2024 at 04:30:50PM -0700, Kees Cook wrote:
> > That is, anything that actively warns about signed overflow when built
> > with -fno-strict-overflow is a bug. If you want this warning you have to
> > explicitly mark things.
>
> This is confusing UB with "overflow detection". We're [...]
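The distinction being drawn can be sketched with plain compiler builtins
(GCC and Clang both provide __builtin_add_overflow): under
-fno-strict-overflow the wrap is merely well-defined at the language
level, while detecting it is a separate, explicit act.

    #include <stdbool.h>

    /* Well-defined wrap under -fno-strict-overflow: no report, the
     * value silently goes from INT_MAX to INT_MIN. */
    static int add_allows_wrap(int a, int b)
    {
            return a + b;
    }

    /* Explicit detection: the wrapped sum is still produced, but the
     * caller is told whether it wrapped. */
    static bool add_checked(int a, int b, int *sum)
    {
            return !__builtin_add_overflow(a, b, sum);
    }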
On Wed, Apr 24, 2024 at 04:20:20PM -0700, Kees Cook wrote:
> > This is arse-about-face. Signed stuff wraps per -fno-strict-overflow.
> > We've been writing code for years under that assumption.
>
> Right, which is why this is going to take time to roll out. :) What we
> were really doing with -fno-strict-overflow [...]
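The sentence is cut off, but what the flag changes is documented
GCC/Clang behavior and easy to sketch:

    /* Without -fno-strict-overflow the compiler may assume signed
     * overflow cannot happen and fold this to "return 1"; with the
     * flag (i.e. -fwrapv semantics), a == INT_MAX makes a + 1 wrap
     * to INT_MIN and the function honestly returns 0. */
    static int plus_one_is_bigger(int a)
    {
            return a + 1 > a;
    }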
On Thu, Apr 25, 2024 at 01:05:00AM +0200, Peter Zijlstra wrote:
> On Thu, Apr 25, 2024 at 12:54:36AM +0200, Peter Zijlstra wrote:
> > On Wed, Apr 24, 2024 at 03:45:07PM -0700, Kees Cook wrote:
> > > On Thu, Apr 25, 2024 at 12:41:41AM +0200, Peter Zijlstra wrote:
> > > > On Wed, Apr 24, 2024 at 12:17:34PM -0700, Kees Cook wrote:
On Thu, Apr 25, 2024 at 12:54:36AM +0200, Peter Zijlstra wrote:
> On Wed, Apr 24, 2024 at 03:45:07PM -0700, Kees Cook wrote:
> > On Thu, Apr 25, 2024 at 12:41:41AM +0200, Peter Zijlstra wrote:
> > > On Wed, Apr 24, 2024 at 12:17:34PM -0700, Kees Cook wrote:
> > >
> > > > @@ -82,7 +83,7 @@ static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
On Thu, Apr 25, 2024 at 12:41:41AM +0200, Peter Zijlstra wrote:
> On Wed, Apr 24, 2024 at 12:17:34PM -0700, Kees Cook wrote:
>
> > @@ -82,7 +83,7 @@ static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
> >
> > static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
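For context on why this particular addition wraps by design (a sketch in
kernel context, not code from the thread): x86's xadd() returns the
counter value from before the atomic add, so the open-coded "i + old"
recomputes the new value and must be allowed to wrap.

    #include <linux/atomic.h>
    #include <linux/limits.h>

    static void wrap_example(void)
    {
            atomic_t v = ATOMIC_INIT(INT_MAX);

            /* xadd() leaves the wrapped INT_MIN in v.counter and hands
             * back the old INT_MAX; "i + old" then recomputes INT_MIN.
             * The wrap is intentional, not something to warn about. */
            int newval = arch_atomic_add_return(1, &v);

            (void)newval;
    }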
On Wed, Apr 24, 2024 at 12:17:34PM -0700, Kees Cook wrote:
> @@ -82,7 +83,7 @@ static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
>
> static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
> {
> - return i + xadd(&v->counter, i);
> + return wrapping_add(int, i, xadd(&v->counter, i));
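A plausible shape for the helper, as an assumption rather than a quote
from the series: __builtin_add_overflow() always stores the
two's-complement-wrapped sum, so discarding its overflow flag yields an
addition that is explicitly marked as expected to wrap and that
instrumentation can skip.

    #define wrapping_add(type, a, b)                                \
            ({                                                      \
                    type __sum;                                     \
                    /* flag deliberately ignored: wrap expected */  \
                    __builtin_add_overflow(a, b, &__sum);           \
                    __sum;                                          \
            })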
Use add_wrap() to annotate the addition in atomic_add_return() as
expecting to wrap around.
Signed-off-by: Kees Cook
---
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Mark Rutland
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: x...@kernel.org
Cc: "H. P