On Wed, Sep 02, 2015 at 12:10:58PM -0400, Chris Metcalf wrote:
> On 09/02/2015 08:55 AM, Peter Zijlstra wrote:
> >So here goes..
> >
> >Chris, I'm awfully sorry, but I seem to be Tile challenged.
> >
> >TileGX seems to define:
> >
> >#define smp_mb__before_atomic()      smp_mb()
> >#define smp_mb__after_atomic()       smp_mb()
> >
> >However, its atomic_add_return() implementation looks like:
> >
> >static inline int atomic_add_return(int i, atomic_t *v)
> >{
> >     int val;
> >     smp_mb();  /* barrier for proper semantics */
> >     val = __insn_fetchadd4((void *)&v->counter, i) + i;
> >     barrier();  /* the "+ i" above will wait on memory */
> >     return val;
> >}
> >
> >Which leaves me confused on smp_mb__after_atomic().
> 
> Are you concerned about whether it has proper memory
> barrier semantics already, i.e. full barriers before and after?
> In fact we do have a full barrier before, but then because of the
> "+ i" / "barrier()", we know that the only other operation since
> the previous mb(), namely the read of v->counter, has
> completed after the atomic operation.  As a result we can
> omit explicitly having a second barrier.
> 
> It does seem like all the current memory-order semantics are
> correct, unless I'm missing something!

So I'm reading that code like:

        MB
 [RmW]  ret = *val += i


So what is stopping later memory ops like:

   [R]  a = *foo
   [S]  *bar = b

From getting reordered with the RmW, like:

        MB

   [R]  a = *foo
   [S]  *bar = b

 [RmW]  ret = *val += i

Are you saying Tile does not reorder things like that? If so, why then
is smp_mb__after_atomic() a full mb()? If it does, I don't see how your
add_return is correct.

Alternatively I'm just confused..
--