On 12/09, Paul E. McKenney wrote:
>
> This commit therefore adds a smp_mb__after_unlock_lock(), which may be
> placed after a LOCK primitive to restore the full-memory-barrier semantic.
> All definitions are currently no-ops, but will be upgraded for some
> architectures when queued locks arrive.

I am wondering whether smp_mb__after_unlock() perhaps makes more sense?

Note that it already has a potential user:

        --- x/kernel/sched/wait.c
        +++ x/kernel/sched/wait.c
        @@ -176,8 +176,9 @@ prepare_to_wait(wait_queue_head_t *q, wa
                spin_lock_irqsave(&q->lock, flags);
                if (list_empty(&wait->task_list))
                        __add_wait_queue(q, wait);
        -       set_current_state(state);
        +       __set_current_state(state);
                spin_unlock_irqrestore(&q->lock, flags);
        +       smp_mb__after_unlock();
         }
         EXPORT_SYMBOL(prepare_to_wait);
         
        @@ -190,8 +191,9 @@ prepare_to_wait_exclusive(wait_queue_hea
                spin_lock_irqsave(&q->lock, flags);
                if (list_empty(&wait->task_list))
                        __add_wait_queue_tail(q, wait);
        -       set_current_state(state);
        +       __set_current_state(state);
                spin_unlock_irqrestore(&q->lock, flags);
        +       smp_mb__after_unlock();
         }
         EXPORT_SYMBOL(prepare_to_wait_exclusive);
         

This assumes it can also be used "later", after another LOCK, as in
your example in 5/7.

Oleg.
