On 12/10, Paul E. McKenney wrote:
>
> On Tue, Dec 10, 2013 at 05:44:37PM +0100, Oleg Nesterov wrote:
> >
> > Well, but smp_mb__before_spinlock + LOCK is not wmb... But it is not
> > the full barrier. It should guarantee that, say,
> >
> >	CONDITION = true;		// 1
> >
> >	// try_to_wake_up
> >	smp_mb__before_spinlock();
> >	spin_lock(&task->pi_lock);
> >
> >	if (!(p->state & state))	// 2
> >		return;
> >
> > can't race with set_current_state() + check(CONDITION); this means
> > that 1 and 2 above must not be reordered.
> >
> > But a LOAD before spin_lock() can leak into the critical section.
> >
> > Perhaps this should be clarified somehow, or perhaps it should actually
> > imply mb (if combined with LOCK).
>
> If we leave the implementation the same, does the following capture the
> constraints?
>
>	Memory operations issued before the LOCK may be completed after
>	the LOCK operation has completed.  An smp_mb__before_spinlock(),
>	combined with a following LOCK, orders prior loads against
>	subsequent stores
prior stores against subsequent loads ;)

Otherwise - thanks!

Oleg.
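As a minimal sketch of the store-buffering pattern at issue here, the
following user-space C11 analogue may help. The names (cond, task_state,
sleeper, waker) are hypothetical, and seq_cst fences stand in for the
kernel's full barriers: set_current_state() on the sleeper side,
smp_mb__before_spinlock() + LOCK on the waker side. With both fences, at
least one thread must see the other's store, so the lost-wakeup outcome
(sleeper misses CONDITION and waker misses ->state) is excluded; remove
the waker's fence, so that its store at 1 can be reordered past its load
at 2, and both can miss.

	/*
	 * Sketch only: seq_cst fences model the kernel's smp_mb();
	 * relaxed accesses model plain loads/stores.
	 */
	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>
	#include <threads.h>

	#define TASK_RUNNING		0
	#define TASK_INTERRUPTIBLE	1

	static atomic_bool cond;		/* CONDITION above */
	static atomic_int task_state = TASK_RUNNING;

	static int sleeper(void *arg)
	{
		/* set_current_state(): store ->state, then full mb. */
		atomic_store_explicit(&task_state, TASK_INTERRUPTIBLE,
				      memory_order_relaxed);
		atomic_thread_fence(memory_order_seq_cst);

		/* check(CONDITION): sleep only if it is not yet set. */
		if (!atomic_load_explicit(&cond, memory_order_relaxed))
			printf("sleeper: would call schedule()\n");
		return 0;
	}

	static int waker(void *arg)
	{
		/* 1: make the condition true before waking. */
		atomic_store_explicit(&cond, true, memory_order_relaxed);

		/*
		 * smp_mb__before_spinlock() + spin_lock(): must order
		 * the store at 1 against the load at 2 (store->load),
		 * which a plain LOCK does not guarantee.
		 */
		atomic_thread_fence(memory_order_seq_cst);

		/* 2: try_to_wake_up() checks the task state. */
		if (!(atomic_load_explicit(&task_state,
					   memory_order_relaxed) &
		      TASK_INTERRUPTIBLE))
			printf("waker: task not sleeping, skip wakeup\n");
		else
			printf("waker: would wake the task\n");
		return 0;
	}

	int main(void)
	{
		thrd_t a, b;

		thrd_create(&a, sleeper, NULL);
		thrd_create(&b, waker, NULL);
		thrd_join(a, NULL);
		thrd_join(b, NULL);
		return 0;
	}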