On Wed, Jul 11, 2018 at 06:34:56PM +0200, Peter Zijlstra wrote:
> On Wed, Jul 11, 2018 at 10:43:45AM +0100, Will Deacon wrote:
> > Hi Alan,
> >
> > On Tue, Jul 10, 2018 at 02:18:13PM -0400, Alan Stern wrote:
> > > More than one kernel developer has expressed the opinion that the LKMM
> > > should enforce ordering of writes by locking.  In other words, given
> > > the following code:
> > >
> > > 	WRITE_ONCE(x, 1);
> > > 	spin_unlock(&s);
> > > 	spin_lock(&s);
> > > 	WRITE_ONCE(y, 1);
> > >
> > > the stores to x and y should be propagated in order to all other CPUs,
> > > even though those other CPUs might not access the lock s.  In terms of
> > > the memory model, this means expanding the cumul-fence relation.
> > >
> > > Locks should also provide read-read (and read-write) ordering in a
> > > similar way.  Given:
> > >
> > > 	READ_ONCE(x);
> > > 	spin_unlock(&s);
> > > 	spin_lock(&s);
> > > 	READ_ONCE(y);		// or WRITE_ONCE(y, 1);
> > >
> > > the load of x should be executed before the load of (or store to) y.
> > > The LKMM already provides this ordering, but it provides it even in
> > > the case where the two accesses are separated by a release/acquire
> > > pair of fences rather than unlock/lock.  This would prevent
> > > architectures from using weakly ordered implementations of release and
> > > acquire, which seems like an unnecessary restriction.  The patch
> > > therefore removes the ordering requirement from the LKMM for that
> > > case.
> > >
> > > All the architectures supported by the Linux kernel (including RISC-V)
> > > do provide this ordering for locks, albeit for varying reasons.
> > > Therefore this patch changes the model in accordance with the
> > > developers' wishes.
> > >
> > > Signed-off-by: Alan Stern <st...@rowland.harvard.edu>
> >
> > Thanks, I'm happy with this version of the patch:
> >
> > Reviewed-by: Will Deacon <will.dea...@arm.com>
>
> Me too!  Thanks Alan.
>
> Acked-by: Peter Zijlstra (Intel) <pet...@infradead.org>
And I applied your ack as well, thank you!

							Thanx, Paul
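
For illustration, here is a minimal litmus-test sketch of the write-write
ordering example quoted above, in the format used under
tools/memory-model/litmus-tests/.  The test name and the observer process
P1 are hypothetical (not taken from the thread or the tree), and the
lock/unlock calls on P0 are balanced as herd7 expects; P1 reads y and then
x without ever touching the lock s.  With the strengthened model, the
"exists" clause should never be satisfied, because the unlock/lock pair on
P0 forces the two stores to propagate to P1 in order.

	C MP+unlocklockonce+rmbonce

	(*
	 * Hypothetical sketch: the store to x sits before an unlock, the
	 * store to y after the following lock, and P1 checks whether it
	 * can observe them out of order.
	 *)

	{}

	P0(int *x, int *y, spinlock_t *s)
	{
		spin_lock(s);
		WRITE_ONCE(*x, 1);
		spin_unlock(s);
		spin_lock(s);
		WRITE_ONCE(*y, 1);
		spin_unlock(s);
	}

	P1(int *x, int *y)
	{
		int r0;
		int r1;

		r0 = READ_ONCE(*y);
		smp_rmb();
		r1 = READ_ONCE(*x);
	}

	exists (1:r0=1 /\ 1:r1=0)

Running this through herd7 against the patched model (e.g.
"herd7 -conf linux-kernel.cfg MP+unlocklockonce+rmbonce.litmus" from
tools/memory-model/) should report the exists clause as never satisfied;
a model that did not extend cumul-fence across unlock/lock could allow it.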