On Tue, Oct 27, 2015 at 01:33:47PM +1100, Michael Ellerman wrote:
> On Mon, 2015-26-10 at 10:15:36 UTC, Boqun Feng wrote:
> > This patch fixes two problems to make value-returning atomics and
> > {cmp}xchg fully ordered on PPC.
>
> Hi Boqun,
>
> Can you please split
On Tue, Oct 27, 2015 at 11:06:52AM +0800, Boqun Feng wrote:
> On Tue, Oct 27, 2015 at 01:33:47PM +1100, Michael Ellerman wrote:
> > On Mon, 2015-26-10 at 10:15:36 UTC, Boqun Feng wrote:
> > > This patch fixes two problems to make value-returning atomics and
> > > {cm
On Fri, Oct 30, 2015 at 08:56:33AM +0800, Boqun Feng wrote:
> On Tue, Oct 27, 2015 at 11:06:52AM +0800, Boqun Feng wrote:
> > On Tue, Oct 27, 2015 at 01:33:47PM +1100, Michael Ellerman wrote:
> > > On Mon, 2015-26-10 at 10:15:36 UTC, Boqun Feng wrote:
> > > > This pa
To fix this, we define PPC_ATOMIC_ENTRY_BARRIER as "sync" to guarantee
the fully-ordered semantics.
This also makes futex atomics fully ordered, which can avoid possible
memory ordering problems if userspace code relies on the futex system call
for fully ordered semantics.
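As an illustration of what the entry/exit barriers do, here is a minimal
sketch (not the exact kernel source) of a value-returning atomic with both
barriers defined as "sync" on SMP:

#ifdef CONFIG_SMP
#define PPC_ATOMIC_ENTRY_BARRIER	"\nsync\n"	/* was a weaker barrier before this fix */
#define PPC_ATOMIC_EXIT_BARRIER		"\nsync\n"
#else
#define PPC_ATOMIC_ENTRY_BARRIER
#define PPC_ATOMIC_EXIT_BARRIER
#endif

static inline int atomic_add_return(int a, atomic_t *v)
{
	int t;

	__asm__ __volatile__(
	PPC_ATOMIC_ENTRY_BARRIER	/* order earlier accesses against the op */
"1:	lwarx	%0,0,%2\n"		/* load-reserve v->counter */
"	add	%0,%1,%0\n"
"	stwcx.	%0,0,%2\n"		/* store-conditional, retry if reservation lost */
"	bne-	1b\n"
	PPC_ATOMIC_EXIT_BARRIER		/* order the op against later accesses */
	: "=&r" (t)
	: "r" (a), "r" (&v->counter)
	: "cc", "memory");

	return t;
}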
Cc: # 3.4+
Signed-off-b
ly
ordered" for PPC_ATOMIC_ENTRY_BARRIER definition.
Cc: # 3.4+
Signed-off-by: Boqun Feng
---
arch/powerpc/include/asm/cmpxchg.h | 16
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/arch/powerpc/include/asm/cmpxchg.h b/arch/powerpc/include/asm/cmpxchg.h
in
On Mon, Nov 02, 2015 at 09:22:40AM +0800, Boqun Feng wrote:
> > On Tue, Oct 27, 2015 at 11:06:52AM +0800, Boqun Feng wrote:
> > > To summarize:
> > >
> > > patch 1 (split into two), 3, 4 (remove inc/dec implementation), 5, 6 sent as
> > > powerpc patches for
Hi all,
This is v6 of the series.
Link for v1: https://lkml.org/lkml/2015/8/27/798
Link for v2: https://lkml.org/lkml/2015/9/16/527
Link for v3: https://lkml.org/lkml/2015/10/12/368
Link for v4: https://lkml.org/lkml/2015/10/14/670
Link for v5: https://lkml.org/lkml/2015/10/26/141
Changes since
-by: Boqun Feng
---
include/linux/atomic.h | 10 ++
1 file changed, 10 insertions(+)
diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 301de78..5f3ee5a 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -34,20 +34,29 @@
* The idea here is to build
on the platform without "lwsync", we can use "isync"
rather than "sync" as an acquire barrier. Therefore in
__atomic_op_acquire() we use PPC_ACQUIRE_BARRIER, which is barrier() on
UP, "lwsync" if available and "isync" otherwise.
Implement atomic{
Implement xchg{,64}_relaxed and atomic{,64}_xchg_relaxed; based on these
_relaxed variants, release/acquire variants and fully ordered versions
can be built.
Note that xchg{,64}_relaxed and atomic{,64}_xchg_relaxed are not
compiler barriers.
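For the shape of such a primitive, a minimal sketch of a relaxed 32-bit
exchange (illustrative names and constraints, not the exact kernel source):
there are no entry/exit barriers and no "memory" clobber, which is why it is
not a compiler barrier; the "+m" output still tells the compiler that *p
changes.

static inline u32 __xchg_u32_relaxed(u32 *p, u32 val)
{
	u32 prev;

	__asm__ __volatile__(
"1:	lwarx	%0,0,%3\n"	/* load-reserve the old value */
"	stwcx.	%2,0,%3\n"	/* try to store the new value */
"	bne-	1b\n"		/* reservation lost, retry */
	: "=&r" (prev), "+m" (*p)
	: "r" (val), "r" (p)
	: "cc");

	return prev;
}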
Signed-off-by: Boqun Feng
---
arch/powerpc/include
piler barriers.
Signed-off-by: Boqun Feng
---
arch/powerpc/include/asm/atomic.h | 10 +++
arch/powerpc/include/asm/cmpxchg.h | 149 -
2 files changed, 158 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/include/asm/atomic.h
b/arch/powerpc/includ
On Fri, Dec 18, 2015 at 09:12:50AM -0800, Davidlohr Bueso wrote:
> I've left this series testing overnight on a power7 box and so far so good,
> nothing has broken.
Davidlohr, thank you for your testing!
Regards,
Boqun
On Wed, Dec 23, 2015 at 01:40:05PM +1100, Michael Ellerman wrote:
> On Tue, 2015-12-15 at 22:24 +0800, Boqun Feng wrote:
>
> > Hi all,
> >
> > This is v6 of the series.
> >
> > Link for v1: https://lkml.org/lkml/2015/8/27/798
> > Link for v2: https://l
On Sun, Dec 27, 2015 at 06:53:39PM +1100, Michael Ellerman wrote:
> On Wed, 2015-12-23 at 18:54 +0800, Boqun Feng wrote:
> > On Wed, Dec 23, 2015 at 01:40:05PM +1100, Michael Ellerman wrote:
> > > On Tue, 2015-12-15 at 22:24 +0800, Boqun Feng wrote:
> > > > Hi all,
Hi Michael,
On Thu, Dec 31, 2015 at 09:07:42PM +0200, Michael S. Tsirkin wrote:
> This defines __smp_xxx barriers for powerpc
> for use by virtualization.
>
> smp_xxx barriers are removed as they are
> defined correctly by asm-generic/barriers.h
>
> This reduces the amount of arch-specific boile
On Tue, Jan 05, 2016 at 10:51:17AM +0200, Michael S. Tsirkin wrote:
> On Tue, Jan 05, 2016 at 09:36:55AM +0800, Boqun Feng wrote:
> > Hi Michael,
> >
> > On Thu, Dec 31, 2015 at 09:07:42PM +0200, Michael S. Tsirkin wrote:
> > > This defines __smp_xxx barrier
On Tue, Jan 05, 2016 at 06:16:48PM +0200, Michael S. Tsirkin wrote:
[snip]
> > > > Another thing is that smp_lwsync() may have a third user(other than
> > > > smp_load_acquire() and smp_store_release()):
> > > >
> > > > http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> > > >
> > > >
Hi all,
I will resend this one to avoid a potential conflict with:
http://article.gmane.org/gmane.linux.kernel/2116880
by open coding smp_lwsync() with:
__asm__ __volatile__(PPC_ACQUIRE_BARRIER "" : : : "memory");
Regards,
Boqun
therwise.
Implement atomic{,64}_{add,sub,inc,dec}_return_relaxed, and build other
variants with these helpers.
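A sketch of the helper-macro pattern this refers to (assumed shape, not
necessarily the exact patch): one template stamps out the _return_relaxed
variants, and the ordered versions are then built by the generic
__atomic_op_*() wrappers.

#define ATOMIC_OP_RETURN_RELAXED(op, asm_op)				\
static inline int atomic_##op##_return_relaxed(int a, atomic_t *v)	\
{									\
	int t;								\
									\
	__asm__ __volatile__(						\
"1:	lwarx	%0,0,%3\n"		/* load-reserve v->counter */	\
"	" #asm_op " %0,%2,%0\n"						\
"	stwcx.	%0,0,%3\n"		/* store-conditional */		\
"	bne-	1b\n"							\
	: "=&r" (t), "+m" (v->counter)					\
	: "r" (a), "r" (&v->counter)					\
	: "cc");							\
									\
	return t;							\
}

ATOMIC_OP_RETURN_RELAXED(add, add)	/* atomic_add_return_relaxed() */
ATOMIC_OP_RETURN_RELAXED(sub, subf)	/* atomic_sub_return_relaxed() */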
Signed-off-by: Boqun Feng
---
arch/powerpc/include/asm/atomic.h | 147 ++
1 file changed, 85 insertions(+), 62 deletions(-)
diff --git a/arch/powerpc/incl
elease().
> >
> >
> > But I think removing smp_lwsync() is a good idea and actually I think we
> > can go further to remove __smp_lwsync() and let __smp_load_acquire and
> > __smp_store_release call __lwsync() directly, but that is another thing.
> >
> > Anyway, I will
Hi Will,
On Tue, Jan 26, 2016 at 12:16:09PM +, Will Deacon wrote:
> On Mon, Jan 25, 2016 at 10:03:22PM -0800, Paul E. McKenney wrote:
> > On Mon, Jan 25, 2016 at 04:42:43PM +, Will Deacon wrote:
> > > On Fri, Jan 15, 2016 at 01:58:53PM -0800, Paul E. McKenney wrote:
> > > > PPC Overlapping
Hi Paul,
On Mon, Jan 18, 2016 at 07:46:29AM -0800, Paul E. McKenney wrote:
> On Mon, Jan 18, 2016 at 04:19:29PM +0800, Herbert Xu wrote:
> > Paul E. McKenney wrote:
> > >
> > > You could use SYNC_ACQUIRE() to implement read_barrier_depends() and
> > > smp_read_barrier_depends(), but SYNC_RMB prob
On Tue, Jan 26, 2016 at 03:29:21PM -0800, Paul E. McKenney wrote:
> On Tue, Jan 26, 2016 at 02:33:40PM -0800, Linus Torvalds wrote:
> > On Tue, Jan 26, 2016 at 2:15 PM, Linus Torvalds wrote:
> > >
> > > You might as well just write it as
> > >
> > > struct foo x = READ_ONCE(*ptr);
> > >
On Sat, Apr 22, 2023 at 09:28:39PM +0200, Joel Fernandes wrote:
> On Sat, Apr 22, 2023 at 2:47 PM Zhouyi Zhou wrote:
> >
> > Dear PowerPC and RCU developers:
> > During the RCU torture test on mainline (on the VM of Opensource Lab
> > of Oregon State University), SRCU-P failed with __stack_chk_fai
On Mon, Apr 24, 2023 at 10:13:51AM -0500, Segher Boessenkool wrote:
> Hi!
>
> On Mon, Apr 24, 2023 at 11:14:00PM +1000, Michael Ellerman wrote:
> > Boqun Feng writes:
> > > On Sat, Apr 22, 2023 at 09:28:39PM +0200, Joel Fernandes wrote:
> > >> On Sat, Apr 22
On Mon, Apr 24, 2023 at 12:29:00PM -0500, Segher Boessenkool wrote:
> On Mon, Apr 24, 2023 at 08:28:55AM -0700, Boqun Feng wrote:
> > On Mon, Apr 24, 2023 at 10:13:51AM -0500, Segher Boessenkool wrote:
> > > At what points can r13 change? Only when some particular function
Hi,
On Wed, Mar 31, 2021 at 02:30:32PM +, guo...@kernel.org wrote:
> From: Guo Ren
>
> Some architectures don't have sub-word swap atomic instruction,
> they only have the full word's one.
>
> The sub-word swap only improve the performance when:
> NR_CPUS < 16K
> * 0- 7: locked byte
> *
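When the tail cannot be swapped with a sub-word operation (NR_CPUS >= 16K,
or an architecture without sub-word atomics), the generic code falls back to
a full-word cmpxchg loop, roughly like the sketch below (the shape of
kernel/locking/qspinlock.c, not the exact patch under discussion):

static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
{
	u32 old, new, val = atomic_read(&lock->val);

	for (;;) {
		/* keep the locked/pending bits, replace only the tail */
		new = (val & _Q_LOCKED_PENDING_MASK) | tail;
		old = atomic_cmpxchg_relaxed(&lock->val, val, new);
		if (old == val)
			break;
		val = old;
	}
	return old;	/* caller extracts the previous tail from this */
}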
Hi Nicholas,
On Wed, Nov 11, 2020 at 09:07:23PM +1000, Nicholas Piggin wrote:
> All the cool kids are doing it.
>
> Signed-off-by: Nicholas Piggin
> ---
> arch/powerpc/include/asm/atomic.h | 681 ++---
> arch/powerpc/include/asm/cmpxchg.h | 62 +--
> 2 files changed, 2
On Tue, Dec 22, 2020 at 01:52:50PM +1000, Nicholas Piggin wrote:
> Excerpts from Boqun Feng's message of November 14, 2020 1:30 am:
> > Hi Nicholas,
> >
> > On Wed, Nov 11, 2020 at 09:07:23PM +1000, Nicholas Piggin wrote:
> >> All the cool kids are doing it.
> >>
> >> Signed-off-by: Nicholas Pigg
arch-specific
> arch_spin_unlock_wait().
>
> Signed-off-by: Paul E. McKenney
> Cc: Benjamin Herrenschmidt
> Cc: Paul Mackerras
> Cc: Michael Ellerman
> Cc:
> Cc: Will Deacon
> Cc: Peter Zijlstra
> Cc: Alan Stern
> Cc: Andrea Parri
> Cc: Linus Torvalds
Acked-by:
On Mon, Dec 05, 2016 at 10:19:21AM -0500, Pan Xinhui wrote:
> This patch adds basic code to enable qspinlock on powerpc. qspinlock is
> one kind of fair lock implementation, and we have seen some performance
> improvement under some scenarios.
>
> queued_spin_unlock() releases the lock by just one write of N
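The single write referred to is a store-release of just the locked byte; in
today's generic qspinlock code it looks roughly like the sketch below (the
powerpc version in this patch may differ in detail):

static __always_inline void queued_spin_unlock(struct qspinlock *lock)
{
	/* unlock() needs release semantics; clear only the locked byte */
	smp_store_release(&lock->locked, 0);
}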
On Mon, Dec 05, 2016 at 10:19:22AM -0500, Pan Xinhui wrote:
> pSeries/powerNV will use qspinlock from now on.
>
> Signed-off-by: Pan Xinhui
> ---
> arch/powerpc/platforms/pseries/Kconfig | 8
> 1 file changed, 8 insertions(+)
>
> diff --git a/arch/powerpc/platforms/pseries/Kconfig
> b
On Mon, Dec 05, 2016 at 10:19:23AM -0500, Pan Xinhui wrote:
> Add two corresponding helper functions to support pv-qspinlock.
>
> For normal use, __spin_yield_cpu will confer current vcpu slices to the
> target vcpu (say, a lock holder). If target vcpu is not specified or it
> is in running state,
On Wed, May 16, 2018 at 04:13:16PM -0400, Mathieu Desnoyers wrote:
> - On May 16, 2018, at 12:18 PM, Peter Zijlstra pet...@infradead.org wrote:
>
> > On Mon, Apr 30, 2018 at 06:44:26PM -0400, Mathieu Desnoyers wrote:
> >> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> >> index c32a
On Thu, May 17, 2018, at 11:28 PM, Mathieu Desnoyers wrote:
> - On May 16, 2018, at 9:19 PM, Boqun Feng boqun.f...@gmail.com wrote:
>
> > On Wed, May 16, 2018 at 04:13:16PM -0400, Mathieu Desnoyers wrote:
> >> - On May 16, 2018, at 12:18 PM, Peter Zijlstra
On Fri, May 18, 2018 at 02:17:17PM -0400, Mathieu Desnoyers wrote:
> - On May 17, 2018, at 7:50 PM, Boqun Feng boqun.f...@gmail.com wrote:
> [...]
> >> > I think you're right. So we have to introduce callsite to rseq_syscall()
> >> > in syscall path, somethin
Hi Xinhui,
On Mon, Sep 19, 2016 at 05:23:55AM -0400, Pan Xinhui wrote:
> Add two corresponding helper functions to support pv-qspinlock.
>
> For normal use, __spin_yield_cpu will confer current vcpu slices to the
> target vcpu (say, a lock holder). If target vcpu is not specified or it
> is in run
On Thu, Oct 20, 2016 at 05:27:54PM -0400, Pan Xinhui wrote:
> Commit ("x86, kvm: support vcpu preempted check") add one field "__u8
> preempted" into struct kvm_steal_time. This field tells if one vcpu is
> running or not.
>
> It is zero if 1) some old KVM does not support this field. 2) the vcpu
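For reference, the layout this field ended up with in the mainline ABI
header (the version in the patch under review may differ slightly):

struct kvm_steal_time {
	__u64 steal;
	__u32 version;
	__u32 flags;
	__u8  preempted;	/* nonzero while the vcpu is scheduled out */
	__u8  u8_pad[3];
	__u32 pad[11];
};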
for percpu symbols. Usage of "lp" is similar to "ls", except
that we can add a cpu number to choose which cpu's variable we
want to look up. If no cpu number is given, the current cpu is used.
Signed-off-by: Boqun Feng
---
arch/powerpc/xmon/xmon.c | 28 ++
for percpu symbols. Usage of "lp" is similar to "ls", except
that we can add a cpu number to choose which cpu's variable we
want to look up. If no cpu number is given, the current cpu is used.
Signed-off-by: Boqun Feng
---
v1 --> v2:
o Using per_cpu_pt
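The address translation this relies on is just the percpu offset for the
chosen cpu; a hypothetical helper (illustrative only, not the actual xmon
code) would look like:

static unsigned long xmon_percpu_addr(unsigned long symbol_addr, int cpu)
{
	if (cpu < 0)				/* no cpu given: use the current one */
		cpu = raw_smp_processor_id();

	return (unsigned long)per_cpu_ptr((void *)symbol_addr, cpu);
}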
I'm certainly not an expert in qemu or bash programming, so there
may be something I am missing in those patches. Tests and comments
are welcome ;-)
Regards,
Boqun
Boqun Feng (4):
rcutorture/doc: Add a new way to create initrd using dracut
rcutorture: Use vmlinux as the fallback kernel
tions where host's initramfs couldn't be used.
Signed-off-by: Boqun Feng
---
tools/testing/selftests/rcutorture/doc/initrd.txt | 22 ++
1 file changed, 22 insertions(+)
diff --git a/tools/testing/selftests/rcutorture/doc/initrd.txt
b/tools/testing/selftests
${TORTURE_BOOT_IMAGE} is not set on non-x86 architectures, also fixes
several places that hard-code "bzImage" as $KERNEL.
This also fixes a problem that PPC doesn't have a bzImage file as build
results.
Signed-off-by: Boqun Feng
---
tools/testing/selftests/rcutorture/bin/functio
The option "-soundhw pcspk" gives me a error on PPC as follow:
qemu-system-ppc64: ISA bus not available for pcspk
, which means this option doesn't work on ppc by default. So simply make
this an x86-specific option via identify_qemu_args().
Signed-off-by: Boqun Feng
---
Do not restrict the cpu type to POWER7 for QEMU as we have POWER8 now.
Signed-off-by: Boqun Feng
---
tools/testing/selftests/rcutorture/bin/functions.sh | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/rcutorture/bin/functions.sh
b/tools/testing
_locked_sync()
-> spin_unlock() -> spin_lock()
This patch therefore fixes the issue and also cleans the
arch_spin_unlock_wait() a little bit by removing superfluous memory
barriers in loops and consolidating the implementations for PPC32 and
PPC64 into one.
Suggested-by
On Mon, Jun 06, 2016 at 02:52:05PM +1000, Michael Ellerman wrote:
> On Fri, 2016-03-06 at 03:49:48 UTC, Boqun Feng wrote:
> > There is an ordering issue with spin_unlock_wait() on powerpc, because
> > the spin_lock primitive is an ACQUIRE and an ACQUIRE is only ordering
> >
On Thu, Jun 09, 2016 at 10:23:28PM +1000, Michael Ellerman wrote:
> On Wed, 2016-06-08 at 15:59 +0200, Peter Zijlstra wrote:
> > On Wed, Jun 08, 2016 at 11:49:20PM +1000, Michael Ellerman wrote:
> >
> > > > Ok; what tree does this go in? I have this dependent series which I'd
> > > > like to get so
On Fri, Jun 10, 2016 at 01:25:03AM +0800, Boqun Feng wrote:
> On Thu, Jun 09, 2016 at 10:23:28PM +1000, Michael Ellerman wrote:
> > On Wed, 2016-06-08 at 15:59 +0200, Peter Zijlstra wrote:
> > > On Wed, Jun 08, 2016 at 11:49:20PM +1000, Michael Ellerman wrote:
> > >
t; spin_lock()
This patch therefore fixes the issue and also cleans the
arch_spin_unlock_wait() a little bit by removing superfluous memory
barriers in loops and consolidating the implementations for PPC32 and
PPC64 into one.
Suggested-by: "Paul E. McKenney"
Signed-off-by: Boqun F
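A deliberately simplified sketch of the resulting shape (assumed, not the
exact patch; the real version also runs a load-reserve/store-conditional
pass over the lock word so the observation orders against a concurrent
acquisition, and yields on shared processors):

static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
{
	smp_mb();			/* order prior accesses against the lock reads */

	while (READ_ONCE(lock->slock))	/* no barriers needed inside the loop */
		cpu_relax();

	smp_mb();			/* order the "unlocked" observation against later accesses */
}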
On Mon, Jun 27, 2016 at 01:41:28PM -0400, Pan Xinhui wrote:
> this supports fixing the lock holder preemption issue when running as a guest
>
> for kernel users, we could use bool vcpu_is_preempted(int cpu) to detect
> if one vcpu is preempted or not.
>
> The default implementation is a macro defined by f
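The fallback in question simply reports "not preempted" whenever an
architecture provides nothing better, roughly:

/* Default when the architecture does not override it. */
#ifndef vcpu_is_preempted
#define vcpu_is_preempted(cpu)	false
#endif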
and dedicated mode.
> So add lppaca_dedicated_proc macro in lppaca.h
>
> Suggested-by: Boqun Feng
> Signed-off-by: Pan Xinhui
> ---
> arch/powerpc/include/asm/lppaca.h | 6 ++
> arch/powerpc/include/asm/spinlock.h | 15 +++
> 2 files changed, 21 insertions
On Tue, Jun 28, 2016 at 11:39:18AM +0800, xinhui wrote:
[snip]
> > > +{
> > > + struct lppaca *lp = &lppaca_of(cpu);
> > > +
> > > + if (unlikely(!(lppaca_shared_proc(lp) ||
> > > + lppaca_dedicated_proc(lp
> >
> > Do you want to detect whether we are running in a guest(ie. pse
ss than nr_cpu_ids in the sibling CPU iteration code.
Signed-off-by: Boqun Feng
---
arch/powerpc/kernel/smp.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 25a39052bf6b..9c6f3fd58059 100644
--- a/arch/powerpc/ke
swait_active() check in swake_up()
and swake_up_all().
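With the lockless check gone, the wakeup path takes the queue lock
unconditionally; a sketch of the post-patch shape, using the swake_up()
name of that era:

void swake_up(struct swait_queue_head *q)
{
	unsigned long flags;

	/* no racy swait_active() fast path: always take the lock */
	raw_spin_lock_irqsave(&q->lock, flags);
	swake_up_locked(q);
	raw_spin_unlock_irqrestore(&q->lock, flags);
}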
Reported-by: Steven Rostedt
Signed-off-by: Boqun Feng
---
kernel/sched/swait.c | 6 --
1 file changed, 6 deletions(-)
diff --git a/kernel/sched/swait.c b/kernel/sched/swait.c
index 3d5610dcce11..2227e183e202 100644
--- a/kernel/sched/swai
On Fri, Jul 28, 2017 at 11:41:29AM -0700, Paul E. McKenney wrote:
> On Fri, Jul 28, 2017 at 07:55:30AM -0700, Paul E. McKenney wrote:
> > On Fri, Jul 28, 2017 at 08:54:16PM +0800, Boqun Feng wrote:
> > > Hi Jonathan,
> > >
> > > FWIW, there is wakeup-missing
On Fri, Jul 28, 2017 at 12:09:56PM -0700, Paul E. McKenney wrote:
> On Fri, Jul 28, 2017 at 11:41:29AM -0700, Paul E. McKenney wrote:
> > On Fri, Jul 28, 2017 at 07:55:30AM -0700, Paul E. McKenney wrote:
> > > On Fri, Jul 28, 2017 at 08:54:16PM +0800, Boqun Feng wrote:
>
>