[PATCH] powerpc: Fix comment typos in arch/powerpc/include/asm/bitops.h

2014-11-10 Thread Boqun Feng
B of the second word should be bit 32 rather than 31, because bit 31 is already in the first word. This patch fixes these typos. Signed-off-by: Boqun Feng --- arch/powerpc/include/asm/bitops.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/powerpc/include/asm/bit
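The diff itself is truncated in this listing; as context, a hedged reconstruction of the kind of bit-numbering comment the patch corrects (assuming 32-bit words, as the snippet implies; not the exact hunk) would look like:

/*
 * Illustrative layout comment: with 32 bits per word, the second word
 * covers bits 63..32, so listing it as "63....31" double-counts bit 31.
 *
 *    |31.....0|63....32|95....64|127...96|
 */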

Re: [PATCH] arch/powerpc: use BUILD_BUG() when detect unfit {cmp}xchg, size

2016-02-23 Thread Boqun Feng
+ BUILD_BUG(); Maybe we can use BUILD_BUG_ON_MSG(1, "Unsupported size for xchg"), which could provide more information. With or without this verbosity: Acked-by: Boqun Feng Regards, Boqun > return x; > } > > @@ -124,7 +119,7 @@ __xchg_local(volatile void *ptr
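For reference, the suggestion concerns the unreachable default path of a __xchg()-style size dispatcher. A minimal sketch, assuming the usual switch-on-size structure and the powerpc __xchg_u32/__xchg_u64 helpers (the surrounding code is truncated above):

#include <linux/bug.h>

static __always_inline unsigned long
__xchg(volatile void *ptr, unsigned long x, unsigned int size)
{
        switch (size) {
        case 4:
                return __xchg_u32(ptr, x);
#ifdef CONFIG_PPC64
        case 8:
                return __xchg_u64(ptr, x);
#endif
        }
        /* Trip a compile-time error instead of silently returning x. */
        BUILD_BUG_ON_MSG(1, "Unsupported size for xchg");
        return x;
}

As with BUILD_BUG(), this relies on the default path being eliminated when size is a compile-time constant; the _ON_MSG form just names the problem in the error output.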

Re: [PATCH V2] powerpc: Implement {cmp}xchg for u8 and u16

2016-04-19 Thread Boqun Feng
Hi Xinhui, On Tue, Apr 19, 2016 at 02:29:34PM +0800, Pan Xinhui wrote: > From: Pan Xinhui > > Implement xchg{u8,u16}{local,relaxed}, and > cmpxchg{u8,u16}{,local,acquire,relaxed}. > > It works on all ppc. > Nice work! AFAICT, your work doesn't depend on anything that's ppc-specific, right? So
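The general idea (independent of ppc, which is the point of the question) is to emulate a sub-word exchange by looping a cmpxchg on the aligned 32-bit word containing the byte or halfword. A rough sketch, not the patch's lwarx/stwcx. implementation, assuming an existing 32-bit cmpxchg_relaxed(); the function name is hypothetical:

static inline u8 xchg_u8_relaxed_sketch(u8 *p, u8 new)
{
        u32 *wp = (u32 *)((unsigned long)p & ~0x3UL);   /* containing aligned word */
        int shift = ((unsigned long)p & 0x3UL) * BITS_PER_BYTE; /* little-endian placement */
        u32 mask = 0xffu << shift;
        u32 old, tmp;

        do {
                old = READ_ONCE(*wp);
                tmp = (old & ~mask) | ((u32)new << shift);
        } while (cmpxchg_relaxed(wp, old, tmp) != old);

        return (old & mask) >> shift;
}

On big-endian the shift would be computed from the opposite end of the word; the actual patch does the equivalent directly with an ll/sc loop.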

[PATCH powerpc/next RESEND] powerpc: spinlock: Fix spin_unlock_wait()

2016-04-19 Thread Boqun Feng
barriers in loops and consolidating the implementations for PPC32 and PPC64 into one. Suggested-by: "Paul E. McKenney" Signed-off-by: Boqun Feng Reviewed-by: "Paul E. McKenney" --- arch/powerpc/include/asm/spinlock.h | 48 - arch/powerpc/l
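The shape of the fix, as I read the truncated description: full barriers on both sides of the polling loop, shared between PPC32 and PPC64. A simplified sketch, omitting the shared-processor yielding details:

static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
{
        smp_mb();       /* order prior accesses before sampling the lock word */

        while (!arch_spin_value_unlocked(*lock))
                cpu_relax();

        smp_mb();       /* order the lock-word reads before subsequent accesses */
}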

Re: [PATCH V3] powerpc: Implement {cmp}xchg for u8 and u16

2016-04-21 Thread Boqun Feng
On Thu, Apr 21, 2016 at 11:35:07PM +0800, Pan Xinhui wrote: > On 2016-04-20 22:24, Peter Zijlstra wrote: > > On Wed, Apr 20, 2016 at 09:24:00PM +0800, Pan Xinhui wrote: > > > >> +#define __XCHG_GEN(cmp, type, sfx, skip, v) > >> \ > >> +static __always_inline unsigne

Re: [PATCH V3] powerpc: Implement {cmp}xchg for u8 and u16

2016-04-21 Thread Boqun Feng
On Fri, Apr 22, 2016 at 09:59:22AM +0800, Pan Xinhui wrote: > On 2016-04-21 23:52, Boqun Feng wrote: > > On Thu, Apr 21, 2016 at 11:35:07PM +0800, Pan Xinhui wrote: > >> On 2016-04-20 22:24, Peter Zijlstra wrote: > >>> On Wed, Apr 20, 2016 at 09:2

Re: [PATCH V4] powerpc: Implement {cmp}xchg for u8 and u16

2016-04-27 Thread Boqun Feng
On Wed, Apr 27, 2016 at 05:16:45PM +0800, Pan Xinhui wrote: > From: Pan Xinhui > > Implement xchg{u8,u16}{local,relaxed}, and > cmpxchg{u8,u16}{,local,acquire,relaxed}. > > It works on all ppc. > > remove volatile of first parameter in __cmpxchg_local and __cmpxchg > > Suggested-by: Peter Zijl

Re: [PATCH V4] powerpc: Implement {cmp}xchg for u8 and u16

2016-04-27 Thread Boqun Feng
On Wed, Apr 27, 2016 at 09:58:17PM +0800, Boqun Feng wrote: > On Wed, Apr 27, 2016 at 05:16:45PM +0800, Pan Xinhui wrote: > > From: Pan Xinhui > > > > Implement xchg{u8,u16}{local,relaxed}, and > > cmpxchg{u8,u16}{,local,acquire,relaxed}. > > > > It works

Re: [PATCH V4] powerpc: Implement {cmp}xchg for u8 and u16

2016-04-27 Thread Boqun Feng
On Wed, Apr 27, 2016 at 09:58:17PM +0800, Boqun Feng wrote: > On Wed, Apr 27, 2016 at 05:16:45PM +0800, Pan Xinhui wrote: > > From: Pan Xinhui > > > > Implement xchg{u8,u16}{local,relaxed}, and > > cmpxchg{u8,u16}{,local,acquire,relaxed}. > > > > It works

Re: [PATCH V4] powerpc: Implement {cmp}xchg for u8 and u16

2016-04-27 Thread Boqun Feng
On Wed, Apr 27, 2016 at 10:50:34PM +0800, Boqun Feng wrote: > > Sorry, my bad, we can't implement cmpxchg like this.. please ignore > this, I should really go to bed soon... > > But still, we can save the "tmp" for xchg() I think. > No.. we can't. Sorr

[RFC 0/5] atomics: powerpc: implement relaxed/acquire/release variants of some atomics

2015-08-27 Thread Boqun Feng
Relaxed/acquire/release variants of atomic operations {add,sub}_return and {cmp,}xchg are introduced by commit: "atomics: add acquire/release/relaxed variants of some atomic operations" which is now on locking/core branch of tip tree. By default, the generic code will implement relaxed variants

[RFC 1/5] atomics: add test for atomic operations with _relaxed variants

2015-08-27 Thread Boqun Feng
that we can examine their assembly code. Signed-off-by: Boqun Feng --- lib/atomic64_test.c | 91 ++--- 1 file changed, 59 insertions(+), 32 deletions(-) diff --git a/lib/atomic64_test.c b/lib/atomic64_test.c index 83c33a5b..0484437 100644 --- a/lib

[RFC 2/5] atomics: introduce arch_atomic_op_{acquire, release, fence} helpers

2015-08-27 Thread Boqun Feng
variants based on _relaxed variants. Signed-off-by: Boqun Feng --- include/linux/atomic.h | 16 1 file changed, 16 insertions(+) diff --git a/include/linux/atomic.h b/include/linux/atomic.h index 00a5763..622255b 100644 --- a/include/linux/atomic.h +++ b/include/linux/atomic.h
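These helpers let an architecture supply only the _relaxed primitive and have the acquire/release/fence forms generated around it. A sketch of the generic fallbacks, close to what include/linux/atomic.h ended up carrying (the RFC spells them arch_atomic_op_*):

#define __atomic_op_acquire(op, args...)                                \
({                                                                      \
        typeof(op##_relaxed(args)) __ret = op##_relaxed(args);         \
        smp_mb__after_atomic();                                         \
        __ret;                                                          \
})

#define __atomic_op_release(op, args...)                                \
({                                                                      \
        smp_mb__before_atomic();                                        \
        op##_relaxed(args);                                             \
})

#define __atomic_op_fence(op, args...)                                  \
({                                                                      \
        typeof(op##_relaxed(args)) __ret;                               \
        smp_mb__before_atomic();                                        \
        __ret = op##_relaxed(args);                                     \
        smp_mb__after_atomic();                                         \
})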

[RFC 3/5] powerpc: atomic: implement atomic{, 64}_{add, sub}_return_* variants

2015-08-27 Thread Boqun Feng
ot; otherwise. For fully ordered semantics, like the original ones, smp_lwsync() is put before relaxed variants and smp_mb__after_atomic() is put after. Signed-off-by: Boqun Feng --- arch/powerpc/include/asm/atomic.h | 88 --- 1 file changed, 64 insertions(+), 24 deletions(-)
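For reference, a relaxed variant on powerpc is just the bare ll/sc loop with no leading or trailing barrier. A simplified sketch of atomic_add_return_relaxed (the patch generates these with a macro and also handles PPC405 errata):

static inline int atomic_add_return_relaxed(int a, atomic_t *v)
{
        int t;

        __asm__ __volatile__(
"1:     lwarx   %0,0,%3\n"      /* load-reserve v->counter */
"       add     %0,%2,%0\n"     /* t = a + t */
"       stwcx.  %0,0,%3\n"      /* store-conditional */
"       bne-    1b\n"           /* retry if the reservation was lost */
        : "=&r" (t), "+m" (v->counter)
        : "r" (a), "r" (&v->counter)
        : "cc");

        return t;
}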

[RFC 4/5] powerpc: atomic: implement xchg_* and atomic{, 64}_xchg_* variants

2015-08-27 Thread Boqun Feng
Implement xchg_relaxed and define atomic{,64}_xchg_* as xchg_relaxed, based on these _relaxed variants, release/acquire variants can be built. Note that xchg_relaxed and atomic_{,64}_xchg_relaxed are not compiler barriers. Signed-off-by: Boqun Feng --- arch/powerpc/include/asm/atomic.h | 2
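A sketch of what such an xchg_relaxed looks like, simplified from the patch; note the clobber list has no "memory", which is why the changelog warns these are not compiler barriers:

static __always_inline unsigned long
__xchg_u32_relaxed(u32 *p, unsigned long val)
{
        unsigned long prev;

        __asm__ __volatile__(
"1:     lwarx   %0,0,%2\n"
"       stwcx.  %3,0,%2\n"
"       bne-    1b\n"
        : "=&r" (prev), "+m" (*p)
        : "r" (p), "r" (val)
        : "cc");        /* no "memory" clobber: not a compiler barrier */

        return prev;
}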

[RFC 5/5] powerpc: atomic: implement cmpxchg{, 64}_* and atomic{, 64}_cmpxchg_* variants

2015-08-27 Thread Boqun Feng
Unlike other atomic operation variants, cmpxchg{,64}_acquire and atomic{,64}_cmpxchg_acquire don't have acquire semantics if the cmp part fails, so we need to implement these using assembly. Note cmpxchg{,64}_relaxed and atomic{,64}_cmpxchg_relaxed are not compiler barriers. Signed-off-by:
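The reason assembly is needed: the acquire barrier must sit on the success path only, so a failed compare branches around it. Roughly (simplified; PPC_ACQUIRE_BARRIER comes from asm/synch.h):

static __always_inline unsigned long
__cmpxchg_u32_acquire(u32 *p, unsigned long old, unsigned long new)
{
        unsigned long prev;

        __asm__ __volatile__(
"1:     lwarx   %0,0,%2\n"
"       cmpw    0,%0,%3\n"
"       bne-    2f\n"           /* cmp failed: skip the acquire barrier */
"       stwcx.  %4,0,%2\n"
"       bne-    1b\n"
        PPC_ACQUIRE_BARRIER
        "\n"
"2:"
        : "=&r" (prev), "+m" (*p)
        : "r" (p), "r" (old), "r" (new)
        : "cc", "memory");

        return prev;
}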

Re: [RFC 2/5] atomics: introduce arch_atomic_op_{acquire,release,fence} helpers

2015-08-28 Thread Boqun Feng
Hi Peter, On Fri, Aug 28, 2015 at 01:36:14PM +0200, Peter Zijlstra wrote: > On Fri, Aug 28, 2015 at 10:48:16AM +0800, Boqun Feng wrote: > > Some architectures may have their special barriers for acquire, release > > and fence semantics, general memory barriers(smp_mb_

Re: [RFC 3/5] powerpc: atomic: implement atomic{,64}_{add,sub}_return_* variants

2015-08-28 Thread Boqun Feng
Hi Peter, On Fri, Aug 28, 2015 at 12:48:54PM +0200, Peter Zijlstra wrote: > On Fri, Aug 28, 2015 at 10:48:17AM +0800, Boqun Feng wrote: > > +/* > > + * Since {add,sub}_return_relaxed and xchg_relaxed are implemented with > > + * a "bne-" instruction at the end, so

Re: [RFC 3/5] powerpc: atomic: implement atomic{,64}_{add,sub}_return_* variants

2015-08-28 Thread Boqun Feng
On Fri, Aug 28, 2015 at 08:06:14PM +0800, Boqun Feng wrote: > Hi Peter, > > On Fri, Aug 28, 2015 at 12:48:54PM +0200, Peter Zijlstra wrote: > > On Fri, Aug 28, 2015 at 10:48:17AM +0800, Boqun Feng wrote: > > > +/* > > > + * Since {add,sub}_return_relaxed and

Re: [RFC 3/5] powerpc: atomic: implement atomic{,64}_{add,sub}_return_* variants

2015-08-28 Thread Boqun Feng
On Fri, Aug 28, 2015 at 05:39:21PM +0200, Peter Zijlstra wrote: > On Fri, Aug 28, 2015 at 10:16:02PM +0800, Boqun Feng wrote: > > > > Ah.. just read through the thread you mentioned, I might misunderstand > > you, probably because I didn't understand RCpc well.. > &g

[Question] Is little endian supported on all the platforms?

2015-08-30 Thread Boqun Feng
Hi all, I hit a strange build error on v4.2, when I try to build a LE kernel with a slight modification of the ppc64_defconfig. What I did is just make ppc64_defconfig and make menuconfig to set CPU_LITTLE_ENDIAN=y, and then build the kernel. I did a little research myself, and found out the er

Re: [Question] Is little endian supported on all the platforms?

2015-08-31 Thread Boqun Feng
On Mon, Aug 31, 2015 at 04:52:38PM +1000, Benjamin Herrenschmidt wrote: > On Mon, 2015-08-31 at 14:44 +0800, Boqun Feng wrote: > > Hi all, > > > > I hit a strange build error on v4.2, when I try to build a LE kernel > > with a slightly modification of the ppc64_de

Re: [Question] Is little endian supported on all the platforms?

2015-08-31 Thread Boqun Feng
On Mon, Aug 31, 2015 at 09:19:26PM +1000, Michael Ellerman wrote: > On Mon, 2015-08-31 at 15:53 +0800, Boqun Feng wrote: > > On Mon, Aug 31, 2015 at 04:52:38PM +1000, Benjamin Herrenschmidt wrote: > > > On Mon, 2015-08-31 at 14:44 +0800, Boqun Feng wrote: > > > > Hi

Re: [Question] Is little endian supported on all the platforms?

2015-09-01 Thread Boqun Feng
emove them from being built for a LE kernel. For 32bit only platforms, nothing needs to be done, because LE depends on PPC64. For 64bit supported platforms, add CPU_BIG_ENDIAN to dependencies explicitly [Suggested-by: Cédric Le Goater ]. Signed-off-by: Boqun Feng --- arch/powerpc/platforms/ce

[PATCH] powerpc: Kconfig: remove BE-only platforms from LE kernel build

2015-09-06 Thread Boqun Feng
[Suggested-by: Cédric Le Goater ]. Signed-off-by: Boqun Feng --- arch/powerpc/platforms/cell/Kconfig | 4 ++-- arch/powerpc/platforms/maple/Kconfig| 2 +- arch/powerpc/platforms/pasemi/Kconfig | 2 +- arch/powerpc/platforms/powermac/Kconfig | 2 +- arch/powerpc/platforms/ps3/Kconfig | 2

Re: [PATCH] powerpc: Kconfig: remove BE-only platforms from LE kernel build

2015-09-08 Thread Boqun Feng
Hi Michael, On Wed, Sep 09, 2015 at 12:26:44PM +1000, Michael Ellerman wrote: > On Mon, 2015-09-07 at 07:58 +0800, Boqun Feng wrote: > > diff --git a/arch/powerpc/platforms/cell/Kconfig > > b/arch/powerpc/platforms/cell/Kconfig > > index 2f23133..808a904 100644 > > -

[RFC v2 0/7] atomics: powerpc: Implement relaxed/acquire/release variants of some atomics

2015-09-16 Thread Boqun Feng
Link for v1: https://lkml.org/lkml/2015/8/27/798 Changes since v1: * avoid to introduce macro arch_atomic_op_*() * also fix the problem that cmpxchg, xchg and their atomic_ versions are not full barriers in PPC implementation. * rebase on v4.3-rc1 Relaxed/acquire/re

[RFC v2 3/7] powerpc: atomic: Implement atomic{, 64}_{add, sub}_return_* variants

2015-09-16 Thread Boqun Feng
omic_op_fence is defined as smp_lwsync() + _relaxed + smp_mb__after_atomic() to guarantee full ordering. Implement atomic{,64}_{add,sub}_return_relaxed, and build other variants with these helpers. Signed-off-by: Boqun Feng --- arch/powerpc/include/asm/atomic.h | 88 ++

[RFC v2 1/7] atomics: Add test for atomic operations with _relaxed variants

2015-09-16 Thread Boqun Feng
that we can examine their assembly code. Signed-off-by: Boqun Feng --- lib/atomic64_test.c | 91 ++--- 1 file changed, 59 insertions(+), 32 deletions(-) diff --git a/lib/atomic64_test.c b/lib/atomic64_test.c index 83c33a5b..0484437 100644 --- a/lib

[RFC v2 4/7] powerpc: atomic: Implement xchg_* and atomic{, 64}_xchg_* variants

2015-09-16 Thread Boqun Feng
Implement xchg_relaxed and define atomic{,64}_xchg_* as xchg_relaxed, based on these _relaxed variants, release/acquire variants can be built. Note that xchg_relaxed and atomic_{,64}_xchg_relaxed are not compiler barriers. Signed-off-by: Boqun Feng --- arch/powerpc/include/asm/atomic.h | 2

[RFC v2 5/7] powerpc: atomic: Implement cmpxchg{, 64}_* and atomic{, 64}_cmpxchg_* variants

2015-09-16 Thread Boqun Feng
Unlike other atomic operation variants, cmpxchg{,64}_acquire and atomic{,64}_cmpxchg_acquire don't have acquire semantics if the cmp part fails, so we need to implement these using assembly. Note cmpxchg{,64}_relaxed and atomic{,64}_cmpxchg_relaxed are not compiler barriers. Signed-off-by:

[RFC v2 2/7] atomics: Allow architectures to define their own __atomic_op_* helpers

2015-09-16 Thread Boqun Feng
-by: Boqun Feng --- include/linux/atomic.h | 10 ++ 1 file changed, 10 insertions(+) diff --git a/include/linux/atomic.h b/include/linux/atomic.h index 00a5763..590c023 100644 --- a/include/linux/atomic.h +++ b/include/linux/atomic.h @@ -34,20 +34,29 @@ * The idea here is to build

[RFC v2 6/7] powerpc: atomic: Make atomic{, 64}_xchg and xchg a full barrier

2015-09-16 Thread Boqun Feng
-off-by: Boqun Feng --- arch/powerpc/include/asm/cmpxchg.h | 64 -- 1 file changed, 64 deletions(-) diff --git a/arch/powerpc/include/asm/cmpxchg.h b/arch/powerpc/include/asm/cmpxchg.h index f40f295..9f0379a 100644 --- a/arch/powerpc/include/asm/cmpxchg.h +++ b

[RFC v2 7/7] powerpc: atomic: Make atomic{, 64}_cmpxchg and cmpxchg a full barrier

2015-09-16 Thread Boqun Feng
__cmpxchg_{u32,u64} respectively to guarantee full-barrier semantics of atomic{,64}_cmpxchg() and cmpxchg(). Signed-off-by: Boqun Feng --- arch/powerpc/include/asm/cmpxchg.h | 8 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/arch/powerpc/include/asm/cmpxchg.h b/arch/powerpc

Re: [PATCH] powerpc: Kconfig: remove BE-only platforms from LE kernel build

2015-09-17 Thread Boqun Feng
Ping ;-) Regards, Boqun On Mon, Sep 07, 2015 at 07:58:00AM +0800, Boqun Feng wrote: > Currently, little endian is only supported on powernv and pseries; > however, Kconfigs still allow us to include other platforms in a LE > kernel, which may result in wasted space or even build erro

Re: [PATCH] powerpc: Kconfig: remove BE-only platforms from LE kernel build

2015-09-18 Thread Boqun Feng
On Fri, Sep 18, 2015 at 07:49:56PM +1000, Michael Ellerman wrote: > On Fri, 2015-09-18 at 08:22 +0800, Boqun Feng wrote: > > Ping ;-) > > Hi Boqun, > Hello, > We keep track of patches in patchwork: > > http://patchwork.ozlabs.org/project/linuxppc-dev/list/?submi

Re: [RFC v2 3/7] powerpc: atomic: Implement atomic{,64}_{add,sub}_return_* variants

2015-09-19 Thread Boqun Feng
Hi Will, On Fri, Sep 18, 2015 at 05:59:02PM +0100, Will Deacon wrote: > On Wed, Sep 16, 2015 at 04:49:31PM +0100, Boqun Feng wrote: > > On powerpc, we don't need a general memory barrier to achieve acquire and > > release semantics, so __atomic_op_{acquire,release} can be i

Re: [RFC v2 3/7] powerpc: atomic: Implement atomic{,64}_{add,sub}_return_* variants

2015-09-20 Thread Boqun Feng
On Sat, Sep 19, 2015 at 11:33:10PM +0800, Boqun Feng wrote: > Hi Will, > > On Fri, Sep 18, 2015 at 05:59:02PM +0100, Will Deacon wrote: > > On Wed, Sep 16, 2015 at 04:49:31PM +0100, Boqun Feng wrote: > > > On powerpc, we don't need a general memory barrier to achi

Re: [RFC v2 3/7] powerpc: atomic: Implement atomic{,64}_{add,sub}_return_* variants

2015-09-21 Thread Boqun Feng
On Mon, Sep 21, 2015 at 11:24:27PM +0100, Will Deacon wrote: > Hi Boqun, > > On Sun, Sep 20, 2015 at 09:23:03AM +0100, Boqun Feng wrote: > > On Sat, Sep 19, 2015 at 11:33:10PM +0800, Boqun Feng wrote: > > > On Fri, Sep 18, 2015 at 05:59:02PM +0100, Will Deacon wrote: >

Re: [RFC v2 3/7] powerpc: atomic: Implement atomic{,64}_{add,sub}_return_* variants

2015-09-21 Thread Boqun Feng
On Tue, Sep 22, 2015 at 07:26:56AM +0800, Boqun Feng wrote: > On Mon, Sep 21, 2015 at 11:24:27PM +0100, Will Deacon wrote: > > Hi Boqun, > > > > On Sun, Sep 20, 2015 at 09:23:03AM +0100, Boqun Feng wrote: > > > On Sat, Sep 19, 2015 at 11:33:10PM +0800, Boqun Feng

Re: [RFC v2 3/7] powerpc: atomic: Implement atomic{,64}_{add,sub}_return_* variants

2015-09-22 Thread Boqun Feng
On Tue, Sep 22, 2015 at 08:25:40AM -0700, Paul E. McKenney wrote: > On Tue, Sep 22, 2015 at 07:37:04AM +0800, Boqun Feng wrote: > > On Tue, Sep 22, 2015 at 07:26:56AM +0800, Boqun Feng wrote: > > > On Mon, Sep 21, 2015 at 11:24:27PM +0100, Will Deacon wrote: > > > >

Re: [RFC v2 3/7] powerpc: atomic: Implement atomic{,64}_{add,sub}_return_* variants

2015-09-25 Thread Boqun Feng
On Fri, Sep 25, 2015 at 02:29:04PM -0700, Paul E. McKenney wrote: > On Wed, Sep 23, 2015 at 08:07:55AM +0800, Boqun Feng wrote: > > On Tue, Sep 22, 2015 at 08:25:40AM -0700, Paul E. McKenney wrote: > > > On Tue, Sep 22, 2015 at 07:37:04AM +0800, Boqun Feng wrote: > > > &

Re: [RFC v2 6/7] powerpc: atomic: Make atomic{,64}_xchg and xchg a full barrier

2015-10-01 Thread Boqun Feng
Hi Peter, Please forgive me for the format of my reply. I'm travelling, and replying from my phone. On Oct 1, 2015 at 7:28 PM, "Peter Zijlstra" wrote: > > On Wed, Sep 16, 2015 at 11:49:34PM +0800, Boqun Feng wrote: > > According to memory-barriers.txt, xchg and its atomic{,64}_ v

Re: [RFC v2 5/7] powerpc: atomic: Implement cmpxchg{,64}_* and atomic{,64}_cmpxchg_* variants

2015-10-09 Thread Boqun Feng
Hi Peter, Sorry for replying late. On Thu, Oct 01, 2015 at 02:27:16PM +0200, Peter Zijlstra wrote: > On Wed, Sep 16, 2015 at 11:49:33PM +0800, Boqun Feng wrote: > > Unlike other atomic operation variants, cmpxchg{,64}_acquire and > > atomic{,64}_cmpxchg_acquire don't have

Re: [RFC v2 5/7] powerpc: atomic: Implement cmpxchg{,64}_* and atomic{,64}_cmpxchg_* variants

2015-10-11 Thread Boqun Feng
On Sat, Oct 10, 2015 at 09:58:05AM +0800, Boqun Feng wrote: > Hi Peter, > > Sorry for replying late. > > On Thu, Oct 01, 2015 at 02:27:16PM +0200, Peter Zijlstra wrote: > > On Wed, Sep 16, 2015 at 11:49:33PM +0800, Boqun Feng wrote: > > > Unlike other atomic o

Re: [RFC v2 4/7] powerpc: atomic: Implement xchg_* and atomic{,64}_xchg_* variants

2015-10-11 Thread Boqun Feng
Hi Paul, On Thu, Oct 01, 2015 at 11:03:01AM -0700, Paul E. McKenney wrote: > On Thu, Oct 01, 2015 at 07:13:04PM +0200, Peter Zijlstra wrote: > > On Thu, Oct 01, 2015 at 08:09:09AM -0700, Paul E. McKenney wrote: > > > On Thu, Oct 01, 2015 at 02:24:40PM +0200, Peter Zijlstra wrote: > > > > > > I mu

Re: [RFC v2 5/7] powerpc: atomic: Implement cmpxchg{,64}_* and atomic{,64}_cmpxchg_* variants

2015-10-12 Thread Boqun Feng
On Mon, Oct 12, 2015 at 08:46:21AM +0200, Peter Zijlstra wrote: > On Sun, Oct 11, 2015 at 06:25:20PM +0800, Boqun Feng wrote: > > On Sat, Oct 10, 2015 at 09:58:05AM +0800, Boqun Feng wrote: > > > Hi Peter, > > > > > > Sorry for replying late. > > > &

Re: [RFC v2 1/7] atomics: Add test for atomic operations with _relaxed variants

2015-10-12 Thread Boqun Feng
On Mon, Oct 12, 2015 at 10:30:34AM +0100, Will Deacon wrote: > On Wed, Sep 16, 2015 at 04:49:29PM +0100, Boqun Feng wrote: > > Some atomic operations now have _{relaxed, acquire, release} variants, > > this patch then adds some trivial tests for two purpose: > > > > 1.

[PATCH v3 0/6] atomics: powerpc: Implement relaxed/acquire/release variants of some atomics

2015-10-12 Thread Boqun Feng
Hi, This is v3 of the series. Link for v1: https://lkml.org/lkml/2015/8/27/798 Link for v2: https://lkml.org/lkml/2015/9/16/527 Paul, Peter and Will, thank you all for the comments and suggestions, that's really a lot of fun to discuss these with you and very enlightening to me ;-) Changes sinc

[PATCH v3 1/6] powerpc: atomic: Make *xchg and *cmpxchg a full barrier

2015-10-12 Thread Boqun Feng
PPC_ATOMIC_EXIT_BARRIER in __{cmp,}xchg_{u32,u64} respectively to guarantee a full barrier semantics of atomic{,64}_{cmp,}xchg() and {cmp,}xchg(). This patch is a complement of commit b97021f85517 ("powerpc: Fix atomic_xxx_return barrier semantics"). Cc: sta...@vger.kernel.org # 3.4.y- Signed-off-by:
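The shape of the change, as far as the truncated changelog shows: swap the lighter acquire/release barrier pair around the ll/sc loop for the full-barrier entry/exit macros. An illustrative __xchg_u32 (simplified sketch, not the exact hunk):

static __always_inline unsigned long
__xchg_u32(volatile void *p, unsigned long val)
{
        unsigned long prev;

        __asm__ __volatile__(
        PPC_ATOMIC_ENTRY_BARRIER        /* previously the lighter release barrier */
"1:     lwarx   %0,0,%2\n"
"       stwcx.  %3,0,%2\n"
"       bne-    1b\n"
        PPC_ATOMIC_EXIT_BARRIER         /* previously the lighter acquire barrier */
        : "=&r" (prev), "+m" (*(volatile unsigned int *)p)
        : "r" (p), "r" (val)
        : "cc", "memory");

        return prev;
}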

[PATCH v3 2/6] atomics: Add test for atomic operations with _relaxed variants

2015-10-12 Thread Boqun Feng
that we can examine their assembly code. Signed-off-by: Boqun Feng --- lib/atomic64_test.c | 120 ++-- 1 file changed, 79 insertions(+), 41 deletions(-) diff --git a/lib/atomic64_test.c b/lib/atomic64_test.c index 83c33a5b..18e422b 100644 --- a/lib

[PATCH v3 3/6] atomics: Allow architectures to define their own __atomic_op_* helpers

2015-10-12 Thread Boqun Feng
-by: Boqun Feng --- include/linux/atomic.h | 10 ++ 1 file changed, 10 insertions(+) diff --git a/include/linux/atomic.h b/include/linux/atomic.h index 27e580d..947c1dc 100644 --- a/include/linux/atomic.h +++ b/include/linux/atomic.h @@ -43,20 +43,29 @@ static inline int atomic_read_ctrl

[PATCH v3 4/6] powerpc: atomic: Implement atomic{, 64}_*_return_* variants

2015-10-12 Thread Boqun Feng
__atomic_op_fence is defined as smp_lwsync() + _relaxed + smp_mb__after_atomic() to guarantee a full barrier. Implement atomic{,64}_{add,sub,inc,dec}_return_relaxed, and build other variants with these helpers. Signed-off-by: Boqun Feng --- arch/powerpc/include/asm/atomic.h | 122 ++

[PATCH v3 5/6] powerpc: atomic: Implement xchg_* and atomic{, 64}_xchg_* variants

2015-10-12 Thread Boqun Feng
Implement xchg_relaxed and atomic{,64}_xchg_relaxed, based on these _relaxed variants, release/acquire variants and fully ordered versions can be built. Note that xchg_relaxed and atomic_{,64}_xchg_relaxed are not compiler barriers. Signed-off-by: Boqun Feng --- arch/powerpc/include/asm

[PATCH v3 6/6] powerpc: atomic: Implement cmpxchg{, 64}_* and atomic{, 64}_cmpxchg_* variants

2015-10-12 Thread Boqun Feng
, we keep the assembly implementation of fully ordered cmpxchg operations. Note cmpxchg{,64}_relaxed and atomic{,64}_cmpxchg_relaxed are not compiler barriers. Signed-off-by: Boqun Feng --- arch/powerpc/include/asm/atomic.h | 10 +++ arch/powerpc/include/asm/cmpxchg.h | 141

Re: [PATCH v3 1/6] powerpc: atomic: Make *xchg and *cmpxchg a full barrier

2015-10-12 Thread Boqun Feng
Oops.. sorry. I will resend this one with correct address list. On Mon, Oct 12, 2015 at 10:14:01PM +0800, Boqun Feng wrote: > According to memory-barriers.txt, xchg, cmpxchg and their atomic{,64}_ > versions all need to imply a full barrier, however they are now just > RELEASE+ACQUIRE,

[PATCH RESEND v3 1/6] powerpc: atomic: Make *xchg and *cmpxchg a full barrier

2015-10-12 Thread Boqun Feng
PPC_ATOMIC_EXIT_BARRIER in __{cmp,}xchg_{u32,u64} respectively to guarantee a full barrier semantics of atomic{,64}_{cmp,}xchg() and {cmp,}xchg(). This patch is a complement of commit b97021f85517 ("powerpc: Fix atomic_xxx_return barrier semantics"). Cc: # 3.4.y- Signed-off-by: Boqun Feng --- arch/power

Re: [PATCH v3 4/6] powerpc: atomic: Implement atomic{, 64}_*_return_* variants

2015-10-13 Thread Boqun Feng
On Tue, Oct 13, 2015 at 02:21:32PM +0100, Will Deacon wrote: > On Mon, Oct 12, 2015 at 10:14:04PM +0800, Boqun Feng wrote: [snip] > > +/* > > + * Since {add,sub}_return_relaxed and xchg_relaxed are implemented with > > + * a "bne-" instruction at the end, so

Re: [PATCH v3 6/6] powerpc: atomic: Implement cmpxchg{,64}_* and atomic{,64}_cmpxchg_* variants

2015-10-13 Thread Boqun Feng
On Tue, Oct 13, 2015 at 02:24:04PM +0100, Will Deacon wrote: > On Mon, Oct 12, 2015 at 10:14:06PM +0800, Boqun Feng wrote: > > Implement cmpxchg{,64}_relaxed and atomic{,64}_cmpxchg_relaxed, based on > > which _release variants can be built. > > > > To avoid super

Re: [PATCH v3 6/6] powerpc: atomic: Implement cmpxchg{,64}_* and atomic{,64}_cmpxchg_* variants

2015-10-13 Thread Boqun Feng
On Tue, Oct 13, 2015 at 10:32:59PM +0800, Boqun Feng wrote: > On Tue, Oct 13, 2015 at 02:24:04PM +0100, Will Deacon wrote: > > On Mon, Oct 12, 2015 at 10:14:06PM +0800, Boqun Feng wrote: > > > Implement cmpxchg{,64}_relaxed and atomic{,64}_cmpxchg_relaxed, based on > > &

Re: [PATCH v3 6/6] powerpc: atomic: Implement cmpxchg{,64}_* and atomic{,64}_cmpxchg_* variants

2015-10-13 Thread Boqun Feng
On Tue, Oct 13, 2015 at 03:43:33PM +0100, Will Deacon wrote: > On Tue, Oct 13, 2015 at 10:32:59PM +0800, Boqun Feng wrote: [snip] > > > > Mostly because of the comments in include/linux/atomic.h: > > > > * For compound atomics performing both a load and a store, ACQ

Re: [PATCH v3 6/6] powerpc: atomic: Implement cmpxchg{,64}_* and atomic{,64}_cmpxchg_* variants

2015-10-13 Thread Boqun Feng
On Tue, Oct 13, 2015 at 04:04:27PM +0100, Will Deacon wrote: > On Tue, Oct 13, 2015 at 10:58:30PM +0800, Boqun Feng wrote: > > On Tue, Oct 13, 2015 at 03:43:33PM +0100, Will Deacon wrote: > > > Putting a barrier in the middle of that critical section is probably a > > >

Re: [PATCH RESEND v3 1/6] powerpc: atomic: Make *xchg and *cmpxchg a full barrier

2015-10-13 Thread Boqun Feng
On Wed, Oct 14, 2015 at 11:10:00AM +1100, Michael Ellerman wrote: > On Mon, 2015-10-12 at 22:30 +0800, Boqun Feng wrote: > > According to memory-barriers.txt, xchg, cmpxchg and their atomic{,64}_ > > versions all need to imply a full barrier, however they are now just > > RELE

Re: [PATCH v3 4/6] powerpc: atomic: Implement atomic{, 64}_*_return_* variants

2015-10-13 Thread Boqun Feng
On Tue, Oct 13, 2015 at 09:35:54PM +0800, Boqun Feng wrote: > On Tue, Oct 13, 2015 at 02:21:32PM +0100, Will Deacon wrote: > > On Mon, Oct 12, 2015 at 10:14:04PM +0800, Boqun Feng wrote: > [snip] > > > +/* > > > + * Since {add,sub}_return_relaxed and xchg_relaxed ar

Re: [PATCH v3 6/6] powerpc: atomic: Implement cmpxchg{,64}_* and atomic{,64}_cmpxchg_* variants

2015-10-13 Thread Boqun Feng
On Tue, Oct 13, 2015 at 04:04:27PM +0100, Will Deacon wrote: > On Tue, Oct 13, 2015 at 10:58:30PM +0800, Boqun Feng wrote: > > On Tue, Oct 13, 2015 at 03:43:33PM +0100, Will Deacon wrote: > > > Putting a barrier in the middle of that critical section is probably a > > >

Re: [PATCH RESEND v3 1/6] powerpc: atomic: Make *xchg and *cmpxchg a full barrier

2015-10-14 Thread Boqun Feng
On Wed, Oct 14, 2015 at 10:06:13AM +0200, Peter Zijlstra wrote: > On Wed, Oct 14, 2015 at 08:51:34AM +0800, Boqun Feng wrote: > > On Wed, Oct 14, 2015 at 11:10:00AM +1100, Michael Ellerman wrote: > > > > Thanks for fixing this. In future you should send a patch like this as a

[PATCH tip/locking/core v4 0/6] atomics: powerpc: Implement relaxed/acquire/release variants of some atomics

2015-10-14 Thread Boqun Feng
Hi all, This is v4 of the series. Link for v1: https://lkml.org/lkml/2015/8/27/798 Link for v2: https://lkml.org/lkml/2015/9/16/527 Link for v3: https://lkml.org/lkml/2015/10/12/368 Changes since v3: * avoid to introduce smp_acquire_barrier__after_atomic() (Will Deacon) * e

[PATCH tip/locking/core v4 1/6] powerpc: atomic: Make *xchg and *cmpxchg a full barrier

2015-10-14 Thread Boqun Feng
PPC_ATOMIC_EXIT_BARRIER in __{cmp,}xchg_{u32,u64} respectively to guarantee a full barrier semantics of atomic{,64}_{cmp,}xchg() and {cmp,}xchg(). This patch is a complement of commit b97021f85517 ("powerpc: Fix atomic_xxx_return barrier semantics"). Acked-by: Michael Ellerman Cc: # 3.4+ Signed-off-by:

[PATCH tip/locking/core v4 2/6] atomics: Add test for atomic operations with _relaxed variants

2015-10-14 Thread Boqun Feng
that we can examine their assembly code. Signed-off-by: Boqun Feng --- lib/atomic64_test.c | 120 ++-- 1 file changed, 79 insertions(+), 41 deletions(-) diff --git a/lib/atomic64_test.c b/lib/atomic64_test.c index 83c33a5b..18e422b 100644 --- a/lib

[PATCH tip/locking/core v4 3/6] atomics: Allow architectures to define their own __atomic_op_* helpers

2015-10-14 Thread Boqun Feng
-by: Boqun Feng --- include/linux/atomic.h | 10 ++ 1 file changed, 10 insertions(+) diff --git a/include/linux/atomic.h b/include/linux/atomic.h index 27e580d..947c1dc 100644 --- a/include/linux/atomic.h +++ b/include/linux/atomic.h @@ -43,20 +43,29 @@ static inline int atomic_read_ctrl

[PATCH tip/locking/core v4 4/6] powerpc: atomic: Implement atomic{, 64}_*_return_* variants

2015-10-14 Thread Boqun Feng
defined as smp_lwsync() + _relaxed + smp_mb__after_atomic() to guarantee a full barrier. Implement atomic{,64}_{add,sub,inc,dec}_return_relaxed, and build other variants with these helpers. Signed-off-by: Boqun Feng --- arch/powerpc/include/asm/atomic.h | 116 ---

[PATCH tip/locking/core v4 5/6] powerpc: atomic: Implement xchg_* and atomic{, 64}_xchg_* variants

2015-10-14 Thread Boqun Feng
Implement xchg_relaxed and atomic{,64}_xchg_relaxed, based on these _relaxed variants, release/acquire variants and fully ordered versions can be built. Note that xchg_relaxed and atomic_{,64}_xchg_relaxed are not compiler barriers. Signed-off-by: Boqun Feng --- arch/powerpc/include/asm

[PATCH tip/locking/core v4 6/6] powerpc: atomic: Implement cmpxchg{, 64}_* and atomic{, 64}_cmpxchg_* variants

2015-10-14 Thread Boqun Feng
piler barriers. Signed-off-by: Boqun Feng --- arch/powerpc/include/asm/atomic.h | 10 +++ arch/powerpc/include/asm/cmpxchg.h | 149 - 2 files changed, 158 insertions(+), 1 deletion(-) diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/includ

Re: [PATCH tip/locking/core v4 1/6] powerpc: atomic: Make *xchg and *cmpxchg a full barrier

2015-10-14 Thread Boqun Feng
On Wed, Oct 14, 2015 at 02:44:53PM -0700, Paul E. McKenney wrote: > On Wed, Oct 14, 2015 at 11:04:19PM +0200, Peter Zijlstra wrote: > > On Wed, Oct 14, 2015 at 01:19:17PM -0700, Paul E. McKenney wrote: > > > Suppose we have something like the following, where "a" and "x" are both > > > initially ze
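Paul's litmus test is truncated here; as a stand-in, a store-buffering-style scenario (illustrative only, both locations initially zero) shows what full-barrier semantics of xchg() must forbid:

int a, x;       /* both initially zero */

void cpu0(void)
{
        int r1;

        WRITE_ONCE(x, 1);
        r1 = xchg(&a, 2);       /* must behave as if smp_mb() on both sides */
        (void)r1;
}

void cpu1(void)
{
        int r2;

        WRITE_ONCE(a, 1);
        smp_mb();
        r2 = READ_ONCE(x);
        (void)r2;
}

/* Forbidden if xchg() is a full barrier: r1 == 0 && r2 == 0.  With only
 * a leading lwsync, which does not order store->load, that outcome is
 * the kind of thing this thread is worrying about. */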

Re: [PATCH tip/locking/core v4 1/6] powerpc: atomic: Make *xchg and *cmpxchg a full barrier

2015-10-14 Thread Boqun Feng
On Thu, Oct 15, 2015 at 08:53:21AM +0800, Boqun Feng wrote: > On Wed, Oct 14, 2015 at 02:44:53PM -0700, Paul E. McKenney wrote: > > On Wed, Oct 14, 2015 at 11:04:19PM +0200, Peter Zijlstra wrote: > > > On Wed, Oct 14, 2015 at 01:19:17PM -0700, Paul E. McKenney wrote: >

Re: [PATCH tip/locking/core v4 1/6] powerpc: atomic: Make *xchg and *cmpxchg a full barrier

2015-10-14 Thread Boqun Feng
Hi Paul, On Thu, Oct 15, 2015 at 08:53:21AM +0800, Boqun Feng wrote: > On Wed, Oct 14, 2015 at 02:44:53PM -0700, Paul E. McKenney wrote: [snip] > > To that end, the herd tool can make a diagram of what it thought > > happened, and I have attached it. I used this diagram to try and

Re: [PATCH tip/locking/core v4 1/6] powerpc: atomic: Make *xchg and *cmpxchg a full barrier

2015-10-14 Thread Boqun Feng
On Wed, Oct 14, 2015 at 08:07:05PM -0700, Paul E. McKenney wrote: > On Thu, Oct 15, 2015 at 08:53:21AM +0800, Boqun Feng wrote: [snip] > > > > I'm afraid more than that, the above litmus also shows that > > > > CPU 0

Re: [PATCH tip/locking/core v4 1/6] powerpc: atomic: Make *xchg and *cmpxchg a full barrier

2015-10-15 Thread Boqun Feng
On Thu, Oct 15, 2015 at 11:35:44AM +0100, Will Deacon wrote: > > So arm64 is ok. Doesn't lwsync order store->store observability for PPC? > I did some litmus and put the result here. My understanding might be wrong, and I think Paul can explain the lwsync and store->store order better ;-) When

Re: [PATCH tip/locking/core v4 1/6] powerpc: atomic: Make *xchg and *cmpxchg a full barrier

2015-10-15 Thread Boqun Feng
On Wed, Oct 14, 2015 at 01:19:17PM -0700, Paul E. McKenney wrote: > On Wed, Oct 14, 2015 at 11:55:56PM +0800, Boqun Feng wrote: > > According to memory-barriers.txt, xchg, cmpxchg and their atomic{,64}_ > > versions all need to imply a full barrier, however they are now just >

Re: [PATCH tip/locking/core v4 1/6] powerpc: atomic: Make *xchg and *cmpxchg a full barrier

2015-10-18 Thread Boqun Feng
On Thu, Oct 15, 2015 at 09:30:40AM -0700, Paul E. McKenney wrote: > On Thu, Oct 15, 2015 at 12:48:03PM +0800, Boqun Feng wrote: > > On Wed, Oct 14, 2015 at 08:07:05PM -0700, Paul E. McKenney wrote: [snip] > > > > > Why not try creating a longer litmus test that requires P0

Re: [PATCH v2] barriers: introduce smp_mb__release_acquire and update documentation

2015-10-18 Thread Boqun Feng
On Fri, Oct 09, 2015 at 10:40:39AM +0100, Will Deacon wrote: > On Fri, Oct 09, 2015 at 10:31:38AM +0200, Peter Zijlstra wrote: [snip] > > > > So lots of little confusions added up to complete fail :-{ > > > > Mostly I think it was the UNLOCK x + LOCK x are fully ordered (where I > > forgot: but n

Re: [PATCH tip/locking/core v4 1/6] powerpc: atomic: Make *xchg and *cmpxchg a full barrier

2015-10-20 Thread Boqun Feng
On Wed, Oct 14, 2015 at 01:19:17PM -0700, Paul E. McKenney wrote: > > Am I missing something here? If not, it seems to me that you need > the leading lwsync to instead be a sync. > > Of course, if I am not missing something, then this applies also to the > value-returning RMW atomic operations t

Re: [PATCH v2] barriers: introduce smp_mb__release_acquire and update documentation

2015-10-20 Thread Boqun Feng
On Mon, Oct 19, 2015 at 12:23:24PM +0200, Peter Zijlstra wrote: > On Mon, Oct 19, 2015 at 09:17:18AM +0800, Boqun Feng wrote: > > This is confusing me right now. ;-) > > > > Let's use a simple example for only one primitive, as I understand it, > > if we say a pr

Re: [PATCH v2] barriers: introduce smp_mb__release_acquire and update documentation

2015-10-20 Thread Boqun Feng
On Mon, Oct 12, 2015 at 04:30:48PM -0700, Paul E. McKenney wrote: > On Fri, Oct 09, 2015 at 07:33:28PM +0100, Will Deacon wrote: > > On Fri, Oct 09, 2015 at 10:43:27AM -0700, Paul E. McKenney wrote: > > > On Fri, Oct 09, 2015 at 10:51:29AM +0100, Will Deacon wrote: [snip] > > > > > We could also i

Re: [PATCH tip/locking/core v4 1/6] powerpc: atomic: Make *xchg and *cmpxchg a full barrier

2015-10-21 Thread Boqun Feng
On Tue, Oct 20, 2015 at 02:28:35PM -0700, Paul E. McKenney wrote: > On Tue, Oct 20, 2015 at 11:21:47AM +0200, Peter Zijlstra wrote: > > On Tue, Oct 20, 2015 at 03:15:32PM +0800, Boqun Feng wrote: > > > On Wed, Oct 14, 2015 at 01:19:17PM -0700, Paul E. McKenney wrote: > > &

Re: [PATCH tip/locking/core v4 1/6] powerpc: atomic: Make *xchg and *cmpxchg a full barrier

2015-10-22 Thread Boqun Feng
On Wed, Oct 21, 2015 at 09:48:25PM +0200, Peter Zijlstra wrote: > On Wed, Oct 21, 2015 at 12:35:23PM -0700, Paul E. McKenney wrote: > > > > > > I ask this because I recall Peter once bought up a discussion: > > > > > > > > > > > > https://lkml.org/lkml/2015/8/26/596 > > > > So a full barrier on o

Re: [PATCH tip/locking/core v4 1/6] powerpc: atomic: Make *xchg and *cmpxchg a full barrier

2015-10-24 Thread Boqun Feng
On Sat, Oct 24, 2015 at 12:26:27PM +0200, Peter Zijlstra wrote: > On Thu, Oct 22, 2015 at 08:07:16PM +0800, Boqun Feng wrote: > > On Wed, Oct 21, 2015 at 09:48:25PM +0200, Peter Zijlstra wrote: > > > On Wed, Oct 21, 2015 at 12:35:23PM -0700, Paul E. McKenney wrote: > >

Re: [PATCH tip/locking/core v4 1/6] powerpc: atomic: Make *xchg and *cmpxchg a full barrier

2015-10-25 Thread Boqun Feng
On Sat, Oct 24, 2015 at 07:53:56PM +0800, Boqun Feng wrote: > On Sat, Oct 24, 2015 at 12:26:27PM +0200, Peter Zijlstra wrote: > > > > Right, futexes are a pain; and I think we all agreed we didn't want to > > go rely on implementation details unless we absolutely _

Re: [PATCH tip/locking/core v4 1/6] powerpc: atomic: Make *xchg and *cmpxchg a full barrier

2015-10-25 Thread Boqun Feng
On Wed, Oct 21, 2015 at 12:36:38PM -0700, Paul E. McKenney wrote: > On Wed, Oct 21, 2015 at 10:18:33AM +0200, Peter Zijlstra wrote: > > On Tue, Oct 20, 2015 at 02:28:35PM -0700, Paul E. McKenney wrote: > > > I am not seeing a sync there, but I really have to defer to the > > > maintainers on this o

Re: [PATCH tip/locking/core v4 1/6] powerpc: atomic: Make *xchg and *cmpxchg a full barrier

2015-10-26 Thread Boqun Feng
On Mon, Oct 26, 2015 at 11:20:01AM +0900, Michael Ellerman wrote: > > Sorry guys, these threads are so long I tend not to read them very actively :} > > Looking at the system call path, the straight line path does not include any > barriers. I can't see any hidden in macros either. > > We also h

Re: [PATCH tip/locking/core v4 1/6] powerpc: atomic: Make *xchg and *cmpxchg a full barrier

2015-10-26 Thread Boqun Feng
On Mon, Oct 26, 2015 at 02:20:21PM +1100, Paul Mackerras wrote: > On Wed, Oct 21, 2015 at 10:18:33AM +0200, Peter Zijlstra wrote: > > On Tue, Oct 20, 2015 at 02:28:35PM -0700, Paul E. McKenney wrote: > > > I am not seeing a sync there, but I really have to defer to the > > > maintainers on this one

[PATCH tip/locking/core v5 0/6] atomics: powerpc: Implement relaxed/acquire/release variants of some atomics

2015-10-26 Thread Boqun Feng
Hi all, This is v5 of the series. Link for v1: https://lkml.org/lkml/2015/8/27/798 Link for v2: https://lkml.org/lkml/2015/9/16/527 Link for v3: https://lkml.org/lkml/2015/10/12/368 Link for v4: https://lkml.org/lkml/2015/10/14/670 Changes since v4: * define PPC_ATOMIC_ENTRY_BARRIER as "s

[PATCH tip/locking/core v5 1/6] powerpc: atomic: Make _return atomics and *{cmp}xchg fully ordered

2015-10-26 Thread Boqun Feng
RIER and PPC_ATOMIC_EXIT_BARRIER in __{cmp,}xchg_{u32,u64} respectively to guarantee fully ordered semantics of atomic{,64}_{cmp,}xchg() and {cmp,}xchg(), as a complement of commit b97021f85517 ("powerpc: Fix atomic_xxx_return barrier semantics"). Cc: # 3.4+ Signed-off-by: Boqun Feng ---
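The v5 change mentioned in the (truncated) cover letter strengthens the entry barrier. A simplified view of the resulting asm/synch.h macros on SMP, as I understand it (the real definitions go through stringify_in_c() and UP conditionals):

#ifdef CONFIG_SMP
#define PPC_ATOMIC_ENTRY_BARRIER        "\nsync\n"      /* was lwsync before the fix */
#define PPC_ATOMIC_EXIT_BARRIER         "\nsync\n"
#else
#define PPC_ATOMIC_ENTRY_BARRIER
#define PPC_ATOMIC_EXIT_BARRIER
#endif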

[PATCH tip/locking/core v5 2/6] atomics: Add test for atomic operations with _relaxed variants

2015-10-26 Thread Boqun Feng
that we can examine their assembly code. Signed-off-by: Boqun Feng --- lib/atomic64_test.c | 120 ++-- 1 file changed, 79 insertions(+), 41 deletions(-) diff --git a/lib/atomic64_test.c b/lib/atomic64_test.c index 83c33a5b..18e422b 100644 --- a/lib

[PATCH tip/locking/core v5 3/6] atomics: Allow architectures to define their own __atomic_op_* helpers

2015-10-26 Thread Boqun Feng
-by: Boqun Feng --- include/linux/atomic.h | 10 ++ 1 file changed, 10 insertions(+) diff --git a/include/linux/atomic.h b/include/linux/atomic.h index 27e580d..947c1dc 100644 --- a/include/linux/atomic.h +++ b/include/linux/atomic.h @@ -43,20 +43,29 @@ static inline int atomic_read_ctrl

[PATCH tip/locking/core v5 4/6] powerpc: atomic: Implement atomic{, 64}_*_return_* variants

2015-10-26 Thread Boqun Feng
on the platform without "lwsync", we can use "isync" rather than "sync" as an acquire barrier. Therefore in __atomic_op_acquire() we use PPC_ACQUIRE_BARRIER, which is barrier() on UP, "lwsync" if available and "isync" otherwise. Implement atomic{
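Putting that together, the powerpc overrides of the generic helpers pair the _relaxed primitive with the arch barriers. A sketch close to what the series adds to asm/atomic.h:

#define __atomic_op_acquire(op, args...)                                \
({                                                                      \
        typeof(op##_relaxed(args)) __ret = op##_relaxed(args);         \
        __asm__ __volatile__(PPC_ACQUIRE_BARRIER "" : : : "memory");   \
        __ret;                                                          \
})

#define __atomic_op_release(op, args...)                                \
({                                                                      \
        __asm__ __volatile__(PPC_RELEASE_BARRIER "" : : : "memory");   \
        op##_relaxed(args);                                             \
})

The fully ordered versions are not built this way; as the other postings in the series note, they keep their sync-bracketed assembly implementations.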

[PATCH tip/locking/core v5 5/6] powerpc: atomic: Implement xchg_* and atomic{, 64}_xchg_* variants

2015-10-26 Thread Boqun Feng
Implement xchg_relaxed and atomic{,64}_xchg_relaxed, based on these _relaxed variants, release/acquire variants and fully ordered versions can be built. Note that xchg_relaxed and atomic_{,64}_xchg_relaxed are not compiler barriers. Signed-off-by: Boqun Feng --- arch/powerpc/include/asm

[PATCH tip/locking/core v5 6/6] powerpc: atomic: Implement cmpxchg{, 64}_* and atomic{, 64}_cmpxchg_* variants

2015-10-26 Thread Boqun Feng
piler barriers. Signed-off-by: Boqun Feng --- arch/powerpc/include/asm/atomic.h | 10 +++ arch/powerpc/include/asm/cmpxchg.h | 149 - 2 files changed, 158 insertions(+), 1 deletion(-) diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/includ

Re: [PATCH tip/locking/core v5 1/6] powerpc: atomic: Make _return atomics and *{cmp}xchg fully ordered

2015-10-26 Thread Boqun Feng
On Mon, Oct 26, 2015 at 05:50:52PM +0800, Boqun Feng wrote: > This patch fixes two problems to make value-returning atomics and > {cmp}xchg fully ordered on PPC. > > According to memory-barriers.txt: > > > Any atomic operation that modifies some state in memory and returns

[PATCH RESEND tip/locking/core v5 1/6] powerpc: atomic: Make _return atomics and *{cmp}xchg fully ordered

2015-10-26 Thread Boqun Feng
RIER and PPC_ATOMIC_EXIT_BARRIER in __{cmp,}xchg_{u32,u64} respectively to guarantee fully ordered semantics of atomic{,64}_{cmp,}xchg() and {cmp,}xchg(), as a complement of commit b97021f85517 ("powerpc: Fix atomic_xxx_return barrier semantics"). Cc: # 3.4+ Signed-off-by: Boqun Feng ---
