On 12/09, Paul E. McKenney wrote:
>
> @@ -1626,7 +1626,10 @@ for each construct.  These operations all imply 
> certain barriers:
>       operation has completed.
>
>       Memory operations issued before the LOCK may be completed after the LOCK
> -     operation has completed.
> +     operation has completed.  An smp_mb__before_spinlock(), combined
> +     with a following LOCK, acts as an smp_wmb().  Note the "w",
> +     this is smp_wmb(), not smp_mb().

Well, smp_mb__before_spinlock() + LOCK is not just a wmb... but it is
not a full barrier either. It should guarantee that, say,

        CONDITION = true;               // 1

        // try_to_wake_up
        smp_mb__before_spinlock();
        spin_lock(&task->pi_lock);

        if (!(p->state & state))        // 2
                return;         

can't race with set_current_state() + check(CONDITION); this means
that 1 and 2 above must not be reordered.
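To spell out the pairing: the waker side above has to interact with
the standard sleep-side sequence, roughly (a sketch; CONDITION stands
for whatever flag the waiter checks):

        // waiter
        set_current_state(TASK_INTERRUPTIBLE);  // implies smp_mb()
        if (!CONDITION)                         // this load must not move
                schedule();                     // before the ->state store

If the waker's store to CONDITION (1) could pass its load of ->state (2),
both sides could see the "old" value and the wakeup would be lost.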

But a LOAD before spin_lock() can leak into the critical section.
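For example (X is a hypothetical variable; LOCK only has ACQUIRE
semantics, so nothing stops this reordering):

        r = X;                          // can be reordered to after
        smp_mb__before_spinlock();      // the spin_lock() below,
        spin_lock(&task->pi_lock);      // i.e. into the critical section

so the combination orders prior STOREs against later memory operations,
but not prior LOADs, which is why it is neither smp_wmb() nor smp_mb().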

Perhaps this should be clarified somehow, or perhaps it should
actually imply mb() (when combined with LOCK).

Oleg
