On Sat, Apr 27, 2019 at 10:17:38AM +0200, Andrea Parri wrote:
> On Tue, Apr 23, 2019 at 06:30:10AM -0700, Paul E. McKenney wrote:
> > On Tue, Apr 23, 2019 at 02:32:09PM +0200, Peter Zijlstra wrote:
> > > On Sat, Apr 20, 2019 at 01:54:40AM -0700, Paul E. McKenney wrote:
> > > >         And atomic_set(): set_preempt_state().  This fails
> > > >         on x86, s390, and TSO friends, does it not?  Or is
> > > >         this ARM-only?  Still, why not just smp_mb() before and
> > > >         after?  Same issue in __kernfs_new_node(), bio_cnt_set(),
> > > >         sbitmap_queue_update_wake_batch(), ...
> > > > 
> > > >         Ditto for atomic64_set() in __ceph_dir_set_complete().
> > > > 
> > > >         Ditto for atomic_read() in rvt_qp_is_avail().  This function
> > > >         has a couple of other oddly placed smp_mb__before_atomic().
> > > 
> > > Those are just straight-up bugs. The atomic_t.txt file clearly specifies
> > > that the barriers only apply to RmW ops, and both _set() and _read() are
> > > specified not to be RmW.
> > 
> > Agreed.  The "Ditto" covers my atomic_set() consternation.  ;-)
> 
> I was working on some of these before the Easter break [1, 2]; the plan
> is to continue next week, addressing the remaining cases conservatively
> with a s/that barrier/smp_mb()/ substitution at first; unless you have
> other plans?
> 
>   Andrea
> 
> [1] http://lkml.kernel.org/r/1555417031-27356-1-git-send-email-andrea.pa...@amarulasolutions.com
> [2] http://lkml.kernel.org/r/[email protected]

Sounds good to me!  ;-)

                                                                Thanx, Paul
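
For readers following along, here is a minimal sketch of the pattern under
discussion and of the conservative fix Andrea describes.  The names below
(struct foo, foo_publish_*) are hypothetical, invented for illustration;
they are not the call sites listed above:

    #include <linux/atomic.h>

    struct foo {
            int data;
            atomic_t state;
    };

    /*
     * Buggy pattern: per Documentation/atomic_t.txt, the
     * smp_mb__{before,after}_atomic() helpers order only against
     * RmW atomic ops.  atomic_set() is a plain store, not RmW, so
     * on x86, s390, and other TSO-friendly architectures (where
     * the helper is just a compiler barrier()) nothing orders the
     * ->data store against the ->state store.
     */
    static void foo_publish_buggy(struct foo *f)
    {
            f->data = 42;
            smp_mb__before_atomic();   /* no effect on atomic_set() */
            atomic_set(&f->state, 1);
    }

    /*
     * Conservative fix (the s/that barrier/smp_mb()/ substitution
     * mentioned above): a full memory barrier orders the plain
     * ->data store before the atomic_set().
     */
    static void foo_publish_fixed(struct foo *f)
    {
            f->data = 42;
            smp_mb();
            atomic_set(&f->state, 1);
    }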
