----- Original Message -----
> 
> [ Removed npig...@kernel.dk as I keep getting bounces from that addr ]

Yep, me too. However, this is the address that shows up in the
MAINTAINERS file. Weird.

> 
> On Tue, 17 Mar 2015 01:45:25 +0000 (UTC)
> Mathieu Desnoyers <mathieu.desnoy...@efficios.com> wrote:
> 
[...]
> 
> Can you please fix your mail client to not include the entire header in
> your replies?

Done, thanks for pointing it out!

> 
> > Let's consider the following memory barrier scenario performed in
> > user-space on an architecture with very relaxed ordering. PowerPC comes
> > to mind.
> > 
> > https://lwn.net/Articles/573436/
> > scenario 12:
> > 
> > CPU 0                   CPU 1
> > CAO(x) = 1;             r3 = CAO(y);
> > cmm_smp_wmb();          cmm_smp_rmb();
> > CAO(y) = 1;             r4 = CAO(x);
> > 
> > BUG_ON(r3 == 1 && r4 == 0)
> > 
> > 
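For anyone without the LWN article at hand, here is a minimal C11 sketch of
scenario 12. It assumes CAO() is an ACCESS_ONCE-style volatile access and that
cmm_smp_wmb()/cmm_smp_rmb() can be approximated by release/acquire fences; it
is not the liburcu code, just an illustration:

#include <stdatomic.h>

static atomic_int x, y;

void cpu0(void)
{
	/* CAO(x) = 1; */
	atomic_store_explicit(&x, 1, memory_order_relaxed);
	/* cmm_smp_wmb(): order the first store before the second */
	atomic_thread_fence(memory_order_release);
	/* CAO(y) = 1; */
	atomic_store_explicit(&y, 1, memory_order_relaxed);
}

void cpu1(int *r3, int *r4)
{
	/* r3 = CAO(y); */
	*r3 = atomic_load_explicit(&y, memory_order_relaxed);
	/* cmm_smp_rmb(): order the first load before the second */
	atomic_thread_fence(memory_order_acquire);
	/* r4 = CAO(x); */
	*r4 = atomic_load_explicit(&x, memory_order_relaxed);
}

With the fences paired like this, the r3 == 1 && r4 == 0 outcome is forbidden.
The point of the tweak below is to keep that guarantee while removing the
hardware barrier from CPU 0's fast path.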
> > We tweak it to use sys_membarrier on CPU 1, and a simple compiler
> > barrier() on CPU 0:
> > 
> > CPU 0                   CPU 1
> > CAO(x) = 1;             r3 = CAO(y);
> > barrier();              sys_membarrier();
> > CAO(y) = 1;             r4 = CAO(x);
> > 
> > BUG_ON(r3 == 1 && r4 == 0)
> > 
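To make the CPU 1 side concrete, the call would go through syscall(2) roughly
as below. The syscall number and the command value are assumptions here (they
depend on the patch revision and architecture), and barrier() is the usual
compiler-only barrier used on CPU 0:

#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

/* Compiler-only barrier for the CPU 0 side. */
#define barrier()	__asm__ __volatile__("" ::: "memory")

/*
 * Hypothetical wrapper for the proposed syscall.  "cmd" selects the
 * system-wide ("shared") barrier as defined by the patch series; the
 * call is expected to act as a full smp_mb() on all running threads
 * before it returns.
 */
static inline int sys_membarrier(int cmd)
{
	return syscall(__NR_membarrier, cmd, 0);
}

CPU 0 then only pays the cost of barrier(), while CPU 1 pays the syscall; the
question discussed below is what the kernel must do so that the syscall really
is equivalent to the smp_mb() removed from CPU 0.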
> > Now if CPU 1 executes sys_membarrier while CPU 0 is preempted after both
> > stores, we have:
> > 
> > CPU 0                           CPU 1
> > CAO(x) = 1;
> >   [1st store is slow to
> >    reach other cores]
> > CAO(y) = 1;
> >   [2nd store reaches other
> >    cores more quickly]
> > [preempted]
> >                                 r3 = CAO(y)
> >                                   (may see y = 1)
> >                                 sys_membarrier()
> > Scheduler changes rq->curr.
> >                                 skips CPU 0, because rq->curr has
> >                                   been updated.
> >                                 [return to userspace]
> >                                 r4 = CAO(x)
> >                                   (may see x = 0)
> >                                 BUG_ON(r3 == 1 && r4 == 0) -> fails.
> > load_cr3, with implied
> >   memory barrier, comes
> >   after CPU 1 has read "x".
> > 
> > The only way to make this scenario work is if a memory barrier is added
> > before updating rq->curr. (We could also construct a similar scenario
> > showing the need for a barrier after the store to rq->curr.)
> 
> Hmm, I wonder if anything were to break if rq->curr was updated after
> the context_switch() call?
> 
> Would that help?
> 
>       this_cpu_write(saved_next, next);
>       rq = context_switch(rq, prev, next);
>       rq->curr = this_cpu_read(saved_next);

Assuming there is a full memory barrier (e.g. load_cr3) within
context_switch, it would help order memory accesses performed prior to
the preemption, but not memory accesses performed immediately after
returning to userspace from preemption.
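
In other words, the rq->curr update that the sys_membarrier scan relies on
needs ordering on both sides. A sketch of the requirement (not actual
scheduler code):

/*
 * Sketch only: user-space accesses done before preemption must be
 * visible before the rq->curr store that sys_membarrier() reads, and
 * that store must be visible before any user-space access performed
 * after the return to userspace.
 */
smp_mb();		/* order prior user-space accesses ...          */
rq->curr = next;	/* ... before the store sys_membarrier() scans  */
smp_mb();		/* ... and that store before post-return accesses */

Whether those barriers come from explicit smp_mb() calls or from barriers
already implied by the switch path (e.g. load_cr3) is what needs auditing.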

Thanks,

Mathieu

> 
> As I recently found out that this_cpu_read/write() is not that nice on
> all architectures, something else may need to be updated. Or we can add
> a temp variable on the rq.
> 
>       rq->saved_next = next;
>       rq = context_switch(rq, prev, next);
>       rq->curr = rq->saved_next;
> 
> -- Steve
> 
> 

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com