Hi Paul,
Please find inline comments below.
On Fri, 19 Apr 2019 05:47:20 -0700, Paul E. McKenney wrote:
> On Fri, Apr 19, 2019 at 02:53:02AM +0200, Andrea Parri wrote:
>>> Are you saying that on x86, atomic_inc() acts as a full memory barrier
>>> but not as a compiler barrier, and vice versa for smp_mb__after_atomic()?
>>> Or that neither atomic_inc() nor smp_mb__after_atomic() implements a full
>>> memory barrier?
> > + (1) The compiler can reorder the load from a to precede the
> > + atomic_dec(), (2) Because x86 smp_mb__before_atomic() is only a
> > + compiler barrier, the CPU can reorder the preceding store to
> > + obj->dead with the later load from a.
> > +
> > + This could be avoided [...]
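For reference, here is a minimal sketch of the kind of sequence the quoted patch text appears to describe; the exact example is not included in this excerpt, and the names (obj->ref_count, the variable a, the local r1) are assumptions made for illustration:

	/* Hedged reconstruction, not the patch's actual example. */
	obj->dead = 1;			/* store named in (2) */
	smp_mb__before_atomic();	/* on x86: compiler barrier only */
	atomic_dec(&obj->ref_count);	/* on x86: LOCK'd RMW, no "memory" clobber */
	r1 = a;				/* plain load named in (1) */

Reordering (1) is possible because, absent a "memory" clobber, nothing stops the compiler from hoisting the plain load of a above the atomic_dec(); once that has happened, only the compiler-level smp_mb__before_atomic() separates the store to obj->dead from the load, so per (2) the CPU's store buffering can reorder the two.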
> Are you saying that on x86, atomic_inc() acts as a full memory barrier
> but not as a compiler barrier, and vice versa for
> smp_mb__after_atomic()? Or that neither atomic_inc() nor
> smp_mb__after_atomic() implements a full memory barrier?
I'd say the former; AFAICT, these boil down to:
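The rest of that message is cut off in this excerpt. For context, the x86 definitions being referred to were, at the time, roughly as follows (reconstructed from memory of the v5.x sources rather than quoted from the message, so treat the details as approximate):

	/* arch/x86/include/asm/atomic.h (approximate): a LOCK'd RMW, hence
	 * a full CPU barrier, but with no "memory" clobber, so the compiler
	 * may still move unrelated accesses across it. */
	static __always_inline void arch_atomic_inc(atomic_t *v)
	{
		asm volatile(LOCK_PREFIX "incl %0"
			     : "+m" (v->counter));
	}

	/* arch/x86/include/asm/barrier.h (approximate): atomics are already
	 * serializing on x86, so only the compiler needs restraining here. */
	#define __smp_mb__before_atomic()	barrier()
	#define __smp_mb__after_atomic()	barrier()

That is, atomic_inc() orders the CPU but not the compiler, while smp_mb__{before,after}_atomic() order the compiler but not the CPU.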
> Another question is "should the kernel permit smp_mb__{before,after}*()
> anywhere other than immediately before or after the primitive being
> strengthened?"
Mmh, I do think that keeping these barriers "immediately before or after
the primitive being strengthened" is a good practice (readability [...]
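As an illustration of the placement question (the obj->ref_count and update_some_statistics() names below are made up for the example, not taken from the thread):

	/* Adjacent placement: the pairing is obvious at a glance. */
	smp_mb__before_atomic();
	atomic_dec(&obj->ref_count);

	/* Separated placement -- the pattern the question above asks about:
	 * the barrier still "belongs" to the atomic_dec(), but unrelated
	 * work sits in between, so a reader (or an automated checker) can
	 * no longer tell which primitive is being strengthened. */
	smp_mb__before_atomic();
	update_some_statistics();	/* hypothetical unrelated work */
	atomic_dec(&obj->ref_count);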
On Mon, 15 Apr 2019, Paul E. McKenney wrote:
> Another question is "should the kernel permit smp_mb__{before,after}*()
> anywhere other than immediately before or after the primitive being
> strengthened?"
>
> Thoughts?
How would the kernel forbid other constructions?
Alan
> > The formula was more along the lines of "do not assume either of these
> > cases to hold; use barrier() if you need an unconditional barrier..."
> > AFAICT, all current implementations of smp_mb__{before,after}_atomic()
> > provide a compiler barrier with either barrier() or a "memory" clobber.
>
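For readers less familiar with the terminology, the two constructs mentioned look roughly like this; the barrier() definition matches the usual GCC-style one in the kernel's compiler headers, while the second macro is purely illustrative and is not any architecture's actual smp_mb__{before,after}_atomic():

	/* Compiler barrier: emits no instruction, but the "memory" clobber
	 * keeps the compiler from reordering or caching memory accesses
	 * across it. */
	#define barrier()	__asm__ __volatile__("" : : : "memory")

	/* A real fence carrying the same clobber restrains the CPU as well
	 * as the compiler (illustrative x86 flavour only). */
	#define example_full_fence()	__asm__ __volatile__("mfence" : : : "memory")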
> > I'd have:
> >
> > *x = 1; /* A */
> > smp_mb__before_atomic();
> > r0 = xchg_relaxed(x, 2); /* B (read or write part) */
> >
> > => (A ->barrier B)
>
> Perhaps so. It wasn't clear initially how these should be treated, so
> I just ignored them. For example, what should happen [...]
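A litmus-shaped sketch of how one might probe this with herd7 and the proposed model is below; the test name, the second thread, and the exists clause are assumptions added for illustration and are not part of the original exchange:

	C before-atomic-sketch

	{}

	P0(int *x)
	{
		int r0;

		*x = 1;				/* A: plain store */
		smp_mb__before_atomic();
		r0 = xchg_relaxed(x, 2);	/* B */
	}

	P1(int *x)
	{
		int r1;

		r1 = READ_ONCE(*x);
	}

	exists (1:r1=1)

Whether the plain store A is ordered before the xchg_relaxed() B (the A ->barrier B edge in question) is exactly the sort of detail the reply above says was initially left unspecified.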
LKMM maintainers and other interested parties:
Here is my latest proposal for extending the Linux Kernel Memory Model
to handle plain accesses and data races. The document below explains
the rationale behind the proposal, and a patch (based on the dev
branch of Paul's linux-rcu tree) is attached [...]
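For readers new to the topic, a minimal sketch of the kind of data race the proposal is concerned with (purely illustrative; not an example taken from the proposal itself):

	int x;

	/* Thread 1: plain (unmarked) store. */
	void writer(void)
	{
		x = 1;
	}

	/* Thread 2: marked read.  With no synchronization between the two
	 * threads, the plain store above can run concurrently with this
	 * load, which is the situation the proposal classifies as a data
	 * race. */
	void reader(void)
	{
		int r1 = READ_ONCE(x);
		(void)r1;
	}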