B of the second word should be bit 32 rather
than 31, because bit 31 is already in the first word.
This patch fixes these typos.
Signed-off-by: Boqun Feng
---
arch/powerpc/include/asm/bitops.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/include/asm/bit
+ BUILD_BUG();
Maybe we can use BUILD_BUG_ON_MSG(1, "Unsupported size for xchg"), which
could provide more information.
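A minimal sketch of what that would look like (the surrounding __xchg()
shape is assumed from the powerpc code, not copied from the patch):

static __always_inline unsigned long
__xchg(volatile void *ptr, unsigned long x, unsigned int size)
{
	switch (size) {
	case 4:
		return __xchg_u32(ptr, x);
	case 8:
		return __xchg_u64(ptr, x);
	}
	/* Trips at compile time on any unsupported size. */
	BUILD_BUG_ON_MSG(1, "Unsupported size for xchg");
	return x;
}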
With or without this verbosity:
Acked-by: Boqun Feng
Regards,
Boqun
> return x;
> }
>
> @@ -124,7 +119,7 @@ __xchg_local(volatile void *ptr
Hi Xinhui,
On Tue, Apr 19, 2016 at 02:29:34PM +0800, Pan Xinhui wrote:
> From: Pan Xinhui
>
> Implement xchg{u8,u16}{local,relaxed}, and
> cmpxchg{u8,u16}{,local,acquire,relaxed}.
>
> It works on all ppc.
>
Nice work!
AFAICT, your work doesn't depend on anything that's ppc-specific, right?
So
barriers in loops and consolidating the implementations for PPC32 and
PPC64 into one.
Suggested-by: "Paul E. McKenney"
Signed-off-by: Boqun Feng
Reviewed-by: "Paul E. McKenney"
---
arch/powerpc/include/asm/spinlock.h | 48 -
arch/powerpc/l
On Thu, Apr 21, 2016 at 11:35:07PM +0800, Pan Xinhui wrote:
> > On 2016-04-20 22:24, Peter Zijlstra wrote:
> > On Wed, Apr 20, 2016 at 09:24:00PM +0800, Pan Xinhui wrote:
> >
> >> +#define __XCHG_GEN(cmp, type, sfx, skip, v)				\
> >> +static __always_inline unsigne
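For context, the technique a macro like __XCHG_GEN generates is: emulate a
1- or 2-byte xchg with a read-modify-write loop on the containing aligned
32-bit word. A rough open-coded sketch of the idea (not Pan Xinhui's exact
macro; the byte offset below is for little-endian, big-endian needs the
mirrored shift):

static inline u8 __xchg_u8_sketch(volatile u8 *p, u8 new)
{
	volatile u32 *w = (volatile u32 *)((unsigned long)p & ~0x3UL);
	unsigned int shift = ((unsigned long)p & 0x3UL) * BITS_PER_BYTE;
	u32 mask = 0xffU << shift;
	u32 old32, new32;

	do {
		old32 = *w;
		new32 = (old32 & ~mask) | ((u32)new << shift);
	} while (cmpxchg(w, old32, new32) != old32);

	return (old32 & mask) >> shift;
}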
On Fri, Apr 22, 2016 at 09:59:22AM +0800, Pan Xinhui wrote:
> On 2016-04-21 23:52, Boqun Feng wrote:
> > On Thu, Apr 21, 2016 at 11:35:07PM +0800, Pan Xinhui wrote:
> >> On 2016-04-20 22:24, Peter Zijlstra wrote:
> >>> On Wed, Apr 20, 2016 at 09:2
On Wed, Apr 27, 2016 at 05:16:45PM +0800, Pan Xinhui wrote:
> From: Pan Xinhui
>
> Implement xchg{u8,u16}{local,relaxed}, and
> cmpxchg{u8,u16}{,local,acquire,relaxed}.
>
> It works on all ppc.
>
> remove the volatile qualifier from the first parameter of __cmpxchg_local and __cmpxchg
>
> Suggested-by: Peter Zijl
On Wed, Apr 27, 2016 at 09:58:17PM +0800, Boqun Feng wrote:
> On Wed, Apr 27, 2016 at 05:16:45PM +0800, Pan Xinhui wrote:
> > From: Pan Xinhui
> >
> > Implement xchg{u8,u16}{local,relaxed}, and
> > cmpxchg{u8,u16}{,local,acquire,relaxed}.
> >
> > It works
On Wed, Apr 27, 2016 at 09:58:17PM +0800, Boqun Feng wrote:
> On Wed, Apr 27, 2016 at 05:16:45PM +0800, Pan Xinhui wrote:
> > From: Pan Xinhui
> >
> > Implement xchg{u8,u16}{local,relaxed}, and
> > cmpxchg{u8,u16}{,local,acquire,relaxed}.
> >
> > It works
On Wed, Apr 27, 2016 at 10:50:34PM +0800, Boqun Feng wrote:
>
> Sorry, my bad, we can't implement cmpxchg like this.. please ignore
> this, I should really go to bed soon...
>
> But still, we can save the "tmp" for xchg() I think.
>
No.. we can't. Sorry
Relaxed/acquire/release variants of atomic operations {add,sub}_return and
{cmp,}xchg are introduced by commit:
"atomics: add acquire/release/relaxed variants of some atomic operations"
which is now on locking/core branch of tip tree.
By default, the generic code will implement relaxed variants
that we can examine their assembly code.
Signed-off-by: Boqun Feng
---
lib/atomic64_test.c | 91 ++---
1 file changed, 59 insertions(+), 32 deletions(-)
diff --git a/lib/atomic64_test.c b/lib/atomic64_test.c
index 83c33a5b..0484437 100644
--- a/lib
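The kind of trivial check being added looks roughly like this (a sketch
using the new variant names; the test function name is made up, the real
patch extends the TEST()-style macros in lib/atomic64_test.c):

static __init void test_relaxed_variants_sketch(void)
{
	atomic_t v = ATOMIC_INIT(0);
	int r;

	r = atomic_add_return_relaxed(5, &v);	/* no ordering implied */
	BUG_ON(r != 5);
	r = atomic_sub_return_relaxed(2, &v);
	BUG_ON(r != 3);
	r = atomic_xchg_relaxed(&v, 10);	/* returns the old value */
	BUG_ON(r != 3);
	r = atomic_cmpxchg_relaxed(&v, 10, 0);	/* returns the old value */
	BUG_ON(r != 10);
}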
variants based on _relaxed variants.
Signed-off-by: Boqun Feng
---
include/linux/atomic.h | 16
1 file changed, 16 insertions(+)
diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 00a5763..622255b 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
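The generic building blocks referred to here have roughly the following
shape in include/linux/atomic.h (reproduced from memory, treat as a
sketch): an acquire variant is the _relaxed one followed by a barrier, a
release variant is a barrier followed by the _relaxed one, and a fully
ordered variant is bracketed by both.

#define __atomic_op_acquire(op, args...)				\
({									\
	typeof(op##_relaxed(args)) __ret = op##_relaxed(args);		\
	smp_mb__after_atomic();						\
	__ret;								\
})

#define __atomic_op_release(op, args...)				\
({									\
	smp_mb__before_atomic();					\
	op##_relaxed(args);						\
})

#define __atomic_op_fence(op, args...)					\
({									\
	typeof(op##_relaxed(args)) __ret;				\
	smp_mb__before_atomic();					\
	__ret = op##_relaxed(args);					\
	smp_mb__after_atomic();						\
	__ret;								\
})

With these, atomic_add_return_acquire() can be defined simply as
__atomic_op_acquire(atomic_add_return, ...).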
"isync" otherwise.
For fully ordered semantics, like the original ones, smp_lwsync() is put
before relaxed variants and smp_mb__after_atomic() is put after.
Signed-off-by: Boqun Feng
---
arch/powerpc/include/asm/atomic.h | 88 ---
1 file changed, 64 insertions(+), 24 deletions(-)
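The powerpc _relaxed implementations described here are bare ll/sc loops
with no entry/exit barrier; a sketch in the shape of the final code:

#define ATOMIC_OP_RETURN_RELAXED(op, asm_op)				\
static inline int atomic_##op##_return_relaxed(int a, atomic_t *v)	\
{									\
	int t;								\
									\
	__asm__ __volatile__(						\
"1:	lwarx	%0,0,%3		# atomic_" #op "_return_relaxed\n"	\
	#asm_op " %0,%2,%0\n"						\
"	stwcx.	%0,0,%3\n"						\
"	bne-	1b\n"							\
	: "=&r" (t), "+m" (v->counter)					\
	: "r" (a), "r" (&v->counter)					\
	: "cc");							\
									\
	return t;							\
}

The fully ordered versions are then built by wrapping this in the
smp_lwsync()/smp_mb__after_atomic() pair described above.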
Implement xchg_relaxed and define atomic{,64}_xchg_* as xchg_relaxed;
based on these _relaxed variants, release/acquire variants can be built.
Note that xchg_relaxed and atomic_{,64}_xchg_relaxed are not compiler
barriers.
Signed-off-by: Boqun Feng
---
arch/powerpc/include/asm/atomic.h | 2
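A sketch of the u32 case (the deliberate absence of both a barrier and a
"memory" clobber is exactly why this is not a compiler barrier):

static __always_inline unsigned long
__xchg_u32_relaxed(u32 *p, unsigned long val)
{
	unsigned long prev;

	__asm__ __volatile__(
"1:	lwarx	%0,0,%2\n"
"	stwcx.	%3,0,%2\n"
"	bne-	1b"
	: "=&r" (prev), "+m" (*p)
	: "r" (p), "r" (val)
	: "cc");

	return prev;
}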
Unlike other atomic operation variants, cmpxchg{,64}_acquire and
atomic{,64}_cmpxchg_acquire don't have acquire semantics if the cmp part
fails, so we need to implement these using assembly.
Note cmpxchg{,64}_relaxed and atomic{,64}_cmpxchg_relaxed are not
compiler barriers.
Signed-off-by:
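The point about the failing cmp is visible in the assembly shape: the
acquire barrier sits on the success path only, after the store-conditional,
and the failure path branches around it. A sketch:

static __always_inline unsigned long
__cmpxchg_u32_acquire(u32 *p, unsigned long old, unsigned long new)
{
	unsigned long prev;

	__asm__ __volatile__(
"1:	lwarx	%0,0,%2		# __cmpxchg_u32_acquire\n"
"	cmpw	0,%0,%3\n"
"	bne-	2f\n"			/* cmp failed: skip the barrier */
"	stwcx.	%4,0,%2\n"
"	bne-	1b\n"
	PPC_ACQUIRE_BARRIER
	"\n"
"2:"
	: "=&r" (prev), "+m" (*p)
	: "r" (p), "r" (old), "r" (new)
	: "cc", "memory");

	return prev;
}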
Hi Peter,
On Fri, Aug 28, 2015 at 01:36:14PM +0200, Peter Zijlstra wrote:
> On Fri, Aug 28, 2015 at 10:48:16AM +0800, Boqun Feng wrote:
> > Some architectures may have their special barriers for acquire, release
> > and fence semantics, general memory barriers(smp_mb_
Hi Peter,
On Fri, Aug 28, 2015 at 12:48:54PM +0200, Peter Zijlstra wrote:
> On Fri, Aug 28, 2015 at 10:48:17AM +0800, Boqun Feng wrote:
> > +/*
> > + * Since {add,sub}_return_relaxed and xchg_relaxed are implemented with
> > + * a "bne-" instruction at the end, so
On Fri, Aug 28, 2015 at 08:06:14PM +0800, Boqun Feng wrote:
> Hi Peter,
>
> On Fri, Aug 28, 2015 at 12:48:54PM +0200, Peter Zijlstra wrote:
> > On Fri, Aug 28, 2015 at 10:48:17AM +0800, Boqun Feng wrote:
> > > +/*
> > > + * Since {add,sub}_return_relaxed and
On Fri, Aug 28, 2015 at 05:39:21PM +0200, Peter Zijlstra wrote:
> On Fri, Aug 28, 2015 at 10:16:02PM +0800, Boqun Feng wrote:
> >
> > Ah.. just read through the thread you mentioned, I might misunderstand
> > you, probably because I didn't understand RCpc well..
> >
Hi all,
I hit a strange build error on v4.2, when I tried to build a LE kernel
with a slight modification of the ppc64_defconfig. What I did was just
run make ppc64_defconfig, then make menuconfig to set CPU_LITTLE_ENDIAN=y,
and then build the kernel.
I did a little research myself, and found out the er
On Mon, Aug 31, 2015 at 04:52:38PM +1000, Benjamin Herrenschmidt wrote:
> On Mon, 2015-08-31 at 14:44 +0800, Boqun Feng wrote:
> > Hi all,
> >
> > I hit a strange build error on v4.2, when I tried to build a LE kernel
> > with a slight modification of the ppc64_de
On Mon, Aug 31, 2015 at 09:19:26PM +1000, Michael Ellerman wrote:
> On Mon, 2015-08-31 at 15:53 +0800, Boqun Feng wrote:
> > On Mon, Aug 31, 2015 at 04:52:38PM +1000, Benjamin Herrenschmidt wrote:
> > > On Mon, 2015-08-31 at 14:44 +0800, Boqun Feng wrote:
> > > > Hi
remove them from being built
for a LE kernel.
For 32-bit-only platforms, nothing needs to be done, because LE depends on
PPC64. For platforms that support 64-bit, add CPU_BIG_ENDIAN to the
dependencies explicitly [Suggested-by: Cédric Le Goater ].
Signed-off-by: Boqun Feng
---
arch/powerpc/platforms/ce
[Suggested-by: Cédric Le Goater ].
Signed-off-by: Boqun Feng
---
arch/powerpc/platforms/cell/Kconfig | 4 ++--
arch/powerpc/platforms/maple/Kconfig | 2 +-
arch/powerpc/platforms/pasemi/Kconfig | 2 +-
arch/powerpc/platforms/powermac/Kconfig | 2 +-
arch/powerpc/platforms/ps3/Kconfig | 2
Hi Michael,
On Wed, Sep 09, 2015 at 12:26:44PM +1000, Michael Ellerman wrote:
> On Mon, 2015-09-07 at 07:58 +0800, Boqun Feng wrote:
> > diff --git a/arch/powerpc/platforms/cell/Kconfig
> > b/arch/powerpc/platforms/cell/Kconfig
> > index 2f23133..808a904 100644
> > -
Link for v1: https://lkml.org/lkml/2015/8/27/798
Changes since v1:
* avoid introducing the macro arch_atomic_op_*()
* also fix the problem that cmpxchg, xchg and their atomic_
versions are not full barriers in the PPC implementation.
* rebase on v4.3-rc1
Relaxed/acquire/re
__atomic_op_fence is defined as smp_lwsync() + _relaxed +
smp_mb__after_atomic() to guarantee full ordering.
Implement atomic{,64}_{add,sub}_return_relaxed, and build other variants
with these helpers.
Signed-off-by: Boqun Feng
---
arch/powerpc/include/asm/atomic.h | 88 ++
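Concretely, the fence wrapper described here would look like the sketch
below (smp_lwsync() was powerpc's lwsync-on-SMP helper at the time):

#define __atomic_op_fence(op, args...)					\
({									\
	typeof(op##_relaxed(args)) __ret;				\
	smp_lwsync();							\
	__ret = op##_relaxed(args);					\
	smp_mb__after_atomic();						\
	__ret;								\
})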
that we can examine their assembly code.
Signed-off-by: Boqun Feng
---
lib/atomic64_test.c | 91 ++---
1 file changed, 59 insertions(+), 32 deletions(-)
diff --git a/lib/atomic64_test.c b/lib/atomic64_test.c
index 83c33a5b..0484437 100644
--- a/lib
Implement xchg_relaxed and define atomic{,64}_xchg_* as xchg_relaxed;
based on these _relaxed variants, release/acquire variants can be built.
Note that xchg_relaxed and atomic_{,64}_xchg_relaxed are not compiler
barriers.
Signed-off-by: Boqun Feng
---
arch/powerpc/include/asm/atomic.h | 2
Unlike other atomic operation variants, cmpxchg{,64}_acquire and
atomic{,64}_cmpxchg_acquire don't have acquire semantics if the cmp part
fails, so we need to implement these using assembly.
Note cmpxchg{,64}_relaxed and atomic{,64}_cmpxchg_relaxed are not
compiler barriers.
Signed-off-by:
-by: Boqun Feng
---
include/linux/atomic.h | 10 ++
1 file changed, 10 insertions(+)
diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 00a5763..590c023 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -34,20 +34,29 @@
* The idea here is to build
-off-by: Boqun Feng
---
arch/powerpc/include/asm/cmpxchg.h | 64 --
1 file changed, 64 deletions(-)
diff --git a/arch/powerpc/include/asm/cmpxchg.h
b/arch/powerpc/include/asm/cmpxchg.h
index f40f295..9f0379a 100644
--- a/arch/powerpc/include/asm/cmpxchg.h
+++ b
__cmpxchg_{u32,u64} respectively to guarantee full-barrier semantics
of atomic{,64}_cmpxchg() and cmpxchg().
Signed-off-by: Boqun Feng
---
arch/powerpc/include/asm/cmpxchg.h | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/include/asm/cmpxchg.h
b/arch/powerpc
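The fix amounts to bracketing the ll/sc loop with the entry/exit barrier
macros. A sketch of the resulting u32 xchg shape (the cmpxchg hunks follow
the same pattern):

static __always_inline unsigned long
__xchg_u32(volatile void *p, unsigned long val)
{
	unsigned long prev;

	__asm__ __volatile__(
	PPC_ATOMIC_ENTRY_BARRIER
"1:	lwarx	%0,0,%2\n"
"	stwcx.	%3,0,%2\n"
"	bne-	1b"
	PPC_ATOMIC_EXIT_BARRIER
	: "=&r" (prev), "+m" (*(volatile unsigned int *)p)
	: "r" (p), "r" (val)
	: "cc", "memory");

	return prev;
}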
Ping ;-)
Regards,
Boqun
On Mon, Sep 07, 2015 at 07:58:00AM +0800, Boqun Feng wrote:
> Currently, little endian is only supported on powernv and pseries;
> however, Kconfigs still allow us to include other platforms in a LE
> kernel, which may result in wasted space or even build errors
On Fri, Sep 18, 2015 at 07:49:56PM +1000, Michael Ellerman wrote:
> On Fri, 2015-09-18 at 08:22 +0800, Boqun Feng wrote:
> > Ping ;-)
>
> Hi Boqun,
>
Hello,
> We keep track of patches in patchwork:
>
> http://patchwork.ozlabs.org/project/linuxppc-dev/list/?submi
Hi Will,
On Fri, Sep 18, 2015 at 05:59:02PM +0100, Will Deacon wrote:
> On Wed, Sep 16, 2015 at 04:49:31PM +0100, Boqun Feng wrote:
> > On powerpc, we don't need a general memory barrier to achieve acquire and
> > release semantics, so __atomic_op_{acquire,release} can be i
On Sat, Sep 19, 2015 at 11:33:10PM +0800, Boqun Feng wrote:
> Hi Will,
>
> On Fri, Sep 18, 2015 at 05:59:02PM +0100, Will Deacon wrote:
> > On Wed, Sep 16, 2015 at 04:49:31PM +0100, Boqun Feng wrote:
> > > On powerpc, we don't need a general memory barrier to achi
On Mon, Sep 21, 2015 at 11:24:27PM +0100, Will Deacon wrote:
> Hi Boqun,
>
> On Sun, Sep 20, 2015 at 09:23:03AM +0100, Boqun Feng wrote:
> > On Sat, Sep 19, 2015 at 11:33:10PM +0800, Boqun Feng wrote:
> > > On Fri, Sep 18, 2015 at 05:59:02PM +0100, Will Deacon wrote:
>
On Tue, Sep 22, 2015 at 07:26:56AM +0800, Boqun Feng wrote:
> On Mon, Sep 21, 2015 at 11:24:27PM +0100, Will Deacon wrote:
> > Hi Boqun,
> >
> > On Sun, Sep 20, 2015 at 09:23:03AM +0100, Boqun Feng wrote:
> > > On Sat, Sep 19, 2015 at 11:33:10PM +0800, Boqun Feng
On Tue, Sep 22, 2015 at 08:25:40AM -0700, Paul E. McKenney wrote:
> On Tue, Sep 22, 2015 at 07:37:04AM +0800, Boqun Feng wrote:
> > On Tue, Sep 22, 2015 at 07:26:56AM +0800, Boqun Feng wrote:
> > > On Mon, Sep 21, 2015 at 11:24:27PM +0100, Will Deacon wrote:
> > > >
On Fri, Sep 25, 2015 at 02:29:04PM -0700, Paul E. McKenney wrote:
> On Wed, Sep 23, 2015 at 08:07:55AM +0800, Boqun Feng wrote:
> > On Tue, Sep 22, 2015 at 08:25:40AM -0700, Paul E. McKenney wrote:
> > > On Tue, Sep 22, 2015 at 07:37:04AM +0800, Boqun Feng wrote:
> > > >
Hi Peter,
Please forgive me for the format of my reply. I'm travelling,
and replying from my phone.
On Oct 1, 2015, 7:28 PM, "Peter Zijlstra" wrote:
>
> On Wed, Sep 16, 2015 at 11:49:34PM +0800, Boqun Feng wrote:
> > According to memory-barriers.txt, xchg and its atomic{,64}_ v
Hi Peter,
Sorry for replying late.
On Thu, Oct 01, 2015 at 02:27:16PM +0200, Peter Zijlstra wrote:
> On Wed, Sep 16, 2015 at 11:49:33PM +0800, Boqun Feng wrote:
> > Unlike other atomic operation variants, cmpxchg{,64}_acquire and
> > atomic{,64}_cmpxchg_acquire don't have
On Sat, Oct 10, 2015 at 09:58:05AM +0800, Boqun Feng wrote:
> Hi Peter,
>
> Sorry for replying late.
>
> On Thu, Oct 01, 2015 at 02:27:16PM +0200, Peter Zijlstra wrote:
> > On Wed, Sep 16, 2015 at 11:49:33PM +0800, Boqun Feng wrote:
> > > Unlike other atomic o
Hi Paul,
On Thu, Oct 01, 2015 at 11:03:01AM -0700, Paul E. McKenney wrote:
> On Thu, Oct 01, 2015 at 07:13:04PM +0200, Peter Zijlstra wrote:
> > On Thu, Oct 01, 2015 at 08:09:09AM -0700, Paul E. McKenney wrote:
> > > On Thu, Oct 01, 2015 at 02:24:40PM +0200, Peter Zijlstra wrote:
> >
> > > > I mu
On Mon, Oct 12, 2015 at 08:46:21AM +0200, Peter Zijlstra wrote:
> On Sun, Oct 11, 2015 at 06:25:20PM +0800, Boqun Feng wrote:
> > On Sat, Oct 10, 2015 at 09:58:05AM +0800, Boqun Feng wrote:
> > > Hi Peter,
> > >
> > > Sorry for replying late.
> > >
>
On Mon, Oct 12, 2015 at 10:30:34AM +0100, Will Deacon wrote:
> On Wed, Sep 16, 2015 at 04:49:29PM +0100, Boqun Feng wrote:
> > Some atomic operations now have _{relaxed, acquire, release} variants,
> > this patch then adds some trivial tests for two purpose:
> >
> > 1.
Hi,
This is v3 of the series.
Link for v1: https://lkml.org/lkml/2015/8/27/798
Link for v2: https://lkml.org/lkml/2015/9/16/527
Paul, Peter and Will, thank you all for the comments and suggestions;
it's really a lot of fun to discuss these things with you, and very
enlightening to me ;-)
Changes sinc
PPC_ATOMIC_EXIT_BARRIER in
__{cmp,}xchg_{u32,u64} respectively to guarantee full-barrier
semantics of atomic{,64}_{cmp,}xchg() and {cmp,}xchg().
This patch is a complement of commit b97021f85517 ("powerpc: Fix
atomic_xxx_return barrier semantics").
Cc: sta...@vger.kernel.org # 3.4.y-
Signed-off-by:
that we can examine their assembly code.
Signed-off-by: Boqun Feng
---
lib/atomic64_test.c | 120 ++--
1 file changed, 79 insertions(+), 41 deletions(-)
diff --git a/lib/atomic64_test.c b/lib/atomic64_test.c
index 83c33a5b..18e422b 100644
--- a/lib
-by: Boqun Feng
---
include/linux/atomic.h | 10 ++
1 file changed, 10 insertions(+)
diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 27e580d..947c1dc 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -43,20 +43,29 @@ static inline int atomic_read_ctrl
__atomic_op_fence is defined as smp_lwsync() + _relaxed +
smp_mb__after_atomic() to guarantee a full barrier.
Implement atomic{,64}_{add,sub,inc,dec}_return_relaxed, and build other
variants with these helpers.
Signed-off-by: Boqun Feng
---
arch/powerpc/include/asm/atomic.h | 122 ++
Implement xchg_relaxed and atomic{,64}_xchg_relaxed; based on these
_relaxed variants, release/acquire variants and fully ordered versions
can be built.
Note that xchg_relaxed and atomic_{,64}_xchg_relaxed are not compiler
barriers.
Signed-off-by: Boqun Feng
---
arch/powerpc/include/asm
, we keep the assembly implementation of fully
ordered cmpxchg operations.
Note cmpxchg{,64}_relaxed and atomic{,64}_cmpxchg_relaxed are not
compiler barriers.
Signed-off-by: Boqun Feng
---
arch/powerpc/include/asm/atomic.h | 10 +++
arch/powerpc/include/asm/cmpxchg.h | 141
Oops.. sorry. I will resend this one with the correct address list.
On Mon, Oct 12, 2015 at 10:14:01PM +0800, Boqun Feng wrote:
> According to memory-barriers.txt, xchg, cmpxchg and their atomic{,64}_
> versions all need to imply a full barrier, however they are now just
> RELEASE+ACQUIRE,
PPC_ATOMIC_EXIT_BARRIER in
__{cmp,}xchg_{u32,u64} respectively to guarantee full-barrier
semantics of atomic{,64}_{cmp,}xchg() and {cmp,}xchg().
This patch is a complement of commit b97021f85517 ("powerpc: Fix
atomic_xxx_return barrier semantics").
Cc: # 3.4.y-
Signed-off-by: Boqun Feng
---
arch/power
On Tue, Oct 13, 2015 at 02:21:32PM +0100, Will Deacon wrote:
> On Mon, Oct 12, 2015 at 10:14:04PM +0800, Boqun Feng wrote:
[snip]
> > +/*
> > + * Since {add,sub}_return_relaxed and xchg_relaxed are implemented with
> > + * a "bne-" instruction at the end, so
On Tue, Oct 13, 2015 at 02:24:04PM +0100, Will Deacon wrote:
> On Mon, Oct 12, 2015 at 10:14:06PM +0800, Boqun Feng wrote:
> > Implement cmpxchg{,64}_relaxed and atomic{,64}_cmpxchg_relaxed, based on
> > which _release variants can be built.
> >
> > To avoid super
On Tue, Oct 13, 2015 at 10:32:59PM +0800, Boqun Feng wrote:
> On Tue, Oct 13, 2015 at 02:24:04PM +0100, Will Deacon wrote:
> > On Mon, Oct 12, 2015 at 10:14:06PM +0800, Boqun Feng wrote:
> > > Implement cmpxchg{,64}_relaxed and atomic{,64}_cmpxchg_relaxed, based on
> > >
On Tue, Oct 13, 2015 at 03:43:33PM +0100, Will Deacon wrote:
> On Tue, Oct 13, 2015 at 10:32:59PM +0800, Boqun Feng wrote:
[snip]
> >
> > Mostly because of the comments in include/linux/atomic.h:
> >
> > * For compound atomics performing both a load and a store, ACQ
On Tue, Oct 13, 2015 at 04:04:27PM +0100, Will Deacon wrote:
> On Tue, Oct 13, 2015 at 10:58:30PM +0800, Boqun Feng wrote:
> > On Tue, Oct 13, 2015 at 03:43:33PM +0100, Will Deacon wrote:
> > > Putting a barrier in the middle of that critical section is probably a
> > >
On Wed, Oct 14, 2015 at 11:10:00AM +1100, Michael Ellerman wrote:
> On Mon, 2015-10-12 at 22:30 +0800, Boqun Feng wrote:
> > According to memory-barriers.txt, xchg, cmpxchg and their atomic{,64}_
> > versions all need to imply a full barrier, however they are now just
> > RELE
On Tue, Oct 13, 2015 at 09:35:54PM +0800, Boqun Feng wrote:
> On Tue, Oct 13, 2015 at 02:21:32PM +0100, Will Deacon wrote:
> > On Mon, Oct 12, 2015 at 10:14:04PM +0800, Boqun Feng wrote:
> [snip]
> > > +/*
> > > + * Since {add,sub}_return_relaxed and xchg_relaxed ar
On Tue, Oct 13, 2015 at 04:04:27PM +0100, Will Deacon wrote:
> On Tue, Oct 13, 2015 at 10:58:30PM +0800, Boqun Feng wrote:
> > On Tue, Oct 13, 2015 at 03:43:33PM +0100, Will Deacon wrote:
> > > Putting a barrier in the middle of that critical section is probably a
> > >
On Wed, Oct 14, 2015 at 10:06:13AM +0200, Peter Zijlstra wrote:
> On Wed, Oct 14, 2015 at 08:51:34AM +0800, Boqun Feng wrote:
> > On Wed, Oct 14, 2015 at 11:10:00AM +1100, Michael Ellerman wrote:
>
> > > Thanks for fixing this. In future you should send a patch like this as a
Hi all,
This is v4 of the series.
Link for v1: https://lkml.org/lkml/2015/8/27/798
Link for v2: https://lkml.org/lkml/2015/9/16/527
Link for v3: https://lkml.org/lkml/2015/10/12/368
Changes since v3:
* avoid introducing smp_acquire_barrier__after_atomic()
(Will Deacon)
* e
PPC_ATOMIC_EXIT_BARRIER in
__{cmp,}xchg_{u32,u64} respectively to guarantee full-barrier
semantics of atomic{,64}_{cmp,}xchg() and {cmp,}xchg().
This patch is a complement of commit b97021f85517 ("powerpc: Fix
atomic_xxx_return barrier semantics").
Acked-by: Michael Ellerman
Cc: # 3.4+
Signed-off-by:
that we can examine their assembly code.
Signed-off-by: Boqun Feng
---
lib/atomic64_test.c | 120 ++--
1 file changed, 79 insertions(+), 41 deletions(-)
diff --git a/lib/atomic64_test.c b/lib/atomic64_test.c
index 83c33a5b..18e422b 100644
--- a/lib
-by: Boqun Feng
---
include/linux/atomic.h | 10 ++
1 file changed, 10 insertions(+)
diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 27e580d..947c1dc 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -43,20 +43,29 @@ static inline int atomic_read_ctrl
defined as smp_lwsync() + _relaxed +
smp_mb__after_atomic() to guarantee a full barrier.
Implement atomic{,64}_{add,sub,inc,dec}_return_relaxed, and build other
variants with these helpers.
Signed-off-by: Boqun Feng
---
arch/powerpc/include/asm/atomic.h | 116 ---
Implement xchg_relaxed and atomic{,64}_xchg_relaxed; based on these
_relaxed variants, release/acquire variants and fully ordered versions
can be built.
Note that xchg_relaxed and atomic_{,64}_xchg_relaxed are not compiler
barriers.
Signed-off-by: Boqun Feng
---
arch/powerpc/include/asm
compiler barriers.
Signed-off-by: Boqun Feng
---
arch/powerpc/include/asm/atomic.h | 10 +++
arch/powerpc/include/asm/cmpxchg.h | 149 -
2 files changed, 158 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/include/asm/atomic.h
b/arch/powerpc/includ
On Wed, Oct 14, 2015 at 02:44:53PM -0700, Paul E. McKenney wrote:
> On Wed, Oct 14, 2015 at 11:04:19PM +0200, Peter Zijlstra wrote:
> > On Wed, Oct 14, 2015 at 01:19:17PM -0700, Paul E. McKenney wrote:
> > > Suppose we have something like the following, where "a" and "x" are both
> > > initially ze
On Thu, Oct 15, 2015 at 08:53:21AM +0800, Boqun Feng wrote:
> On Wed, Oct 14, 2015 at 02:44:53PM -0700, Paul E. McKenney wrote:
> > On Wed, Oct 14, 2015 at 11:04:19PM +0200, Peter Zijlstra wrote:
> > > On Wed, Oct 14, 2015 at 01:19:17PM -0700, Paul E. McKenney wrote:
>
Hi Paul,
On Thu, Oct 15, 2015 at 08:53:21AM +0800, Boqun Feng wrote:
> On Wed, Oct 14, 2015 at 02:44:53PM -0700, Paul E. McKenney wrote:
[snip]
> > To that end, the herd tool can make a diagram of what it thought
> > happened, and I have attached it. I used this diagram to try and
On Wed, Oct 14, 2015 at 08:07:05PM -0700, Paul E. McKenney wrote:
> On Thu, Oct 15, 2015 at 08:53:21AM +0800, Boqun Feng wrote:
[snip]
> >
> > I'm afraid more than that, the above litmus also shows that
> >
> > CPU 0
On Thu, Oct 15, 2015 at 11:35:44AM +0100, Will Deacon wrote:
>
> So arm64 is ok. Doesn't lwsync order store->store observability for PPC?
>
I ran some litmus tests and put the results here. My understanding might be
wrong, and I think Paul can explain lwsync and store->store ordering
better ;-)
When
On Wed, Oct 14, 2015 at 01:19:17PM -0700, Paul E. McKenney wrote:
> On Wed, Oct 14, 2015 at 11:55:56PM +0800, Boqun Feng wrote:
> > According to memory-barriers.txt, xchg, cmpxchg and their atomic{,64}_
> > versions all need to imply a full barrier, however they are now just
>
On Thu, Oct 15, 2015 at 09:30:40AM -0700, Paul E. McKenney wrote:
> On Thu, Oct 15, 2015 at 12:48:03PM +0800, Boqun Feng wrote:
> > On Wed, Oct 14, 2015 at 08:07:05PM -0700, Paul E. McKenney wrote:
[snip]
> >
> > > Why not try creating a longer litmus test that requires P0
On Fri, Oct 09, 2015 at 10:40:39AM +0100, Will Deacon wrote:
> On Fri, Oct 09, 2015 at 10:31:38AM +0200, Peter Zijlstra wrote:
[snip]
> >
> > So lots of little confusions added up to complete fail :-{
> >
> > Mostly I think it was the UNLOCK x + LOCK x are fully ordered (where I
> > forgot: but n
On Wed, Oct 14, 2015 at 01:19:17PM -0700, Paul E. McKenney wrote:
>
> Am I missing something here? If not, it seems to me that you need
> the leading lwsync to instead be a sync.
>
> Of course, if I am not missing something, then this applies also to the
> value-returning RMW atomic operations t
On Mon, Oct 19, 2015 at 12:23:24PM +0200, Peter Zijlstra wrote:
> On Mon, Oct 19, 2015 at 09:17:18AM +0800, Boqun Feng wrote:
> > This is confusing me right now. ;-)
> >
> > Let's use a simple example for only one primitive, as I understand it,
> > if we say a pr
On Mon, Oct 12, 2015 at 04:30:48PM -0700, Paul E. McKenney wrote:
> On Fri, Oct 09, 2015 at 07:33:28PM +0100, Will Deacon wrote:
> > On Fri, Oct 09, 2015 at 10:43:27AM -0700, Paul E. McKenney wrote:
> > > On Fri, Oct 09, 2015 at 10:51:29AM +0100, Will Deacon wrote:
[snip]
>
> > > > We could also i
On Tue, Oct 20, 2015 at 02:28:35PM -0700, Paul E. McKenney wrote:
> On Tue, Oct 20, 2015 at 11:21:47AM +0200, Peter Zijlstra wrote:
> > On Tue, Oct 20, 2015 at 03:15:32PM +0800, Boqun Feng wrote:
> > > On Wed, Oct 14, 2015 at 01:19:17PM -0700, Paul E. McKenney wrote:
> > &
On Wed, Oct 21, 2015 at 09:48:25PM +0200, Peter Zijlstra wrote:
> On Wed, Oct 21, 2015 at 12:35:23PM -0700, Paul E. McKenney wrote:
> > > > > > I ask this because I recall Peter once bought up a discussion:
> > > > > >
> > > > > > https://lkml.org/lkml/2015/8/26/596
>
> > > So a full barrier on o
On Sat, Oct 24, 2015 at 12:26:27PM +0200, Peter Zijlstra wrote:
> On Thu, Oct 22, 2015 at 08:07:16PM +0800, Boqun Feng wrote:
> > On Wed, Oct 21, 2015 at 09:48:25PM +0200, Peter Zijlstra wrote:
> > > On Wed, Oct 21, 2015 at 12:35:23PM -0700, Paul E. McKenney wrote:
> >
On Sat, Oct 24, 2015 at 07:53:56PM +0800, Boqun Feng wrote:
> On Sat, Oct 24, 2015 at 12:26:27PM +0200, Peter Zijlstra wrote:
> >
> > Right, futexes are a pain; and I think we all agreed we didn't want to
> > go rely on implementation details unless we absolutely _
On Wed, Oct 21, 2015 at 12:36:38PM -0700, Paul E. McKenney wrote:
> On Wed, Oct 21, 2015 at 10:18:33AM +0200, Peter Zijlstra wrote:
> > On Tue, Oct 20, 2015 at 02:28:35PM -0700, Paul E. McKenney wrote:
> > > I am not seeing a sync there, but I really have to defer to the
> > > maintainers on this o
On Mon, Oct 26, 2015 at 11:20:01AM +0900, Michael Ellerman wrote:
>
> Sorry guys, these threads are so long I tend not to read them very actively :}
>
> Looking at the system call path, the straight line path does not include any
> barriers. I can't see any hidden in macros either.
>
> We also h
On Mon, Oct 26, 2015 at 02:20:21PM +1100, Paul Mackerras wrote:
> On Wed, Oct 21, 2015 at 10:18:33AM +0200, Peter Zijlstra wrote:
> > On Tue, Oct 20, 2015 at 02:28:35PM -0700, Paul E. McKenney wrote:
> > > I am not seeing a sync there, but I really have to defer to the
> > > maintainers on this one
Hi all,
This is v5 of the series.
Link for v1: https://lkml.org/lkml/2015/8/27/798
Link for v2: https://lkml.org/lkml/2015/9/16/527
Link for v3: https://lkml.org/lkml/2015/10/12/368
Link for v4: https://lkml.org/lkml/2015/10/14/670
Changes since v4:
* define PPC_ATOMIC_ENTRY_BARRIER as "sync"
PPC_ATOMIC_ENTRY_BARRIER and PPC_ATOMIC_EXIT_BARRIER in
__{cmp,}xchg_{u32,u64} respectively to guarantee fully ordered semantics
of atomic{,64}_{cmp,}xchg() and {cmp,}xchg(), as a complement of commit
b97021f85517 ("powerpc: Fix atomic_xxx_return barrier semantics").
Cc: # 3.4+
Signed-off-by: Boqun Feng
---
that we can examine their assembly code.
Signed-off-by: Boqun Feng
---
lib/atomic64_test.c | 120 ++--
1 file changed, 79 insertions(+), 41 deletions(-)
diff --git a/lib/atomic64_test.c b/lib/atomic64_test.c
index 83c33a5b..18e422b 100644
--- a/lib
-by: Boqun Feng
---
include/linux/atomic.h | 10 ++
1 file changed, 10 insertions(+)
diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 27e580d..947c1dc 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -43,20 +43,29 @@ static inline int atomic_read_ctrl
on the platform without "lwsync", we can use "isync"
rather than "sync" as an acquire barrier. Therefore in
__atomic_op_acquire() we use PPC_ACQUIRE_BARRIER, which is barrier() on
UP, "lwsync" if available and "isync" otherwise.
Implement atomic{
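In code form, the acquire wrapper described here is roughly:

#define __atomic_op_acquire(op, args...)				\
({									\
	typeof(op##_relaxed(args)) __ret = op##_relaxed(args);		\
	__asm__ __volatile__(PPC_ACQUIRE_BARRIER "" : : : "memory");	\
	__ret;								\
})

The asm with a "memory" clobber degenerates to a plain compiler barrier()
on UP, and PPC_ACQUIRE_BARRIER emits "lwsync" where available and "isync"
otherwise, matching the description above.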
Implement xchg_relaxed and atomic{,64}_xchg_relaxed; based on these
_relaxed variants, release/acquire variants and fully ordered versions
can be built.
Note that xchg_relaxed and atomic_{,64}_xchg_relaxed are not compiler
barriers.
Signed-off-by: Boqun Feng
---
arch/powerpc/include/asm
compiler barriers.
Signed-off-by: Boqun Feng
---
arch/powerpc/include/asm/atomic.h | 10 +++
arch/powerpc/include/asm/cmpxchg.h | 149 -
2 files changed, 158 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/include/asm/atomic.h
b/arch/powerpc/includ
On Mon, Oct 26, 2015 at 05:50:52PM +0800, Boqun Feng wrote:
> This patch fixes two problems to make value-returning atomics and
> {cmp,}xchg fully ordered on PPC.
>
> According to memory-barriers.txt:
>
> > Any atomic operation that modifies some state in memory and returns
PPC_ATOMIC_ENTRY_BARRIER and PPC_ATOMIC_EXIT_BARRIER in
__{cmp,}xchg_{u32,u64} respectively to guarantee fully ordered semantics
of atomic{,64}_{cmp,}xchg() and {cmp,}xchg(), as a complement of commit
b97021f85517 ("powerpc: Fix atomic_xxx_return barrier semantics").
Cc: # 3.4+
Signed-off-by: Boqun Feng
---