Add a lock contention tracepoint in the queued spinlock slowpath.
Also add the __lockfunc annotation so that in_lock_functions()
works as expected.
Signed-off-by: Nysal Jan K.A.
---
arch/powerpc/lib/qspinlock.c | 19 ++-
1 file changed, 10 insertions(+), 9 deletions(-)
diff
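The tracepoint placement described above can be modelled in userspace. This is a minimal sketch, not the actual patch: counters stand in for the real trace_contention_begin/end tracepoints, and the lock is a plain C11 atomic rather than the powerpc queued spinlock.

```c
#include <assert.h>
#include <stdatomic.h>

/* Counters model the contention tracepoints fired by the slowpath. */
static int contention_begin_hits, contention_end_hits;

static void trace_contention_begin(void *lock) { (void)lock; contention_begin_hits++; }
static void trace_contention_end(void *lock)   { (void)lock; contention_end_hits++; }

struct qspinlock_model { atomic_int val; };

/* Slowpath model: emit the begin event before spinning, and the end
 * event once the lock has actually been acquired. */
static void queued_spin_lock_slowpath_model(struct qspinlock_model *lock)
{
	int expected = 0;

	trace_contention_begin(lock);
	while (!atomic_compare_exchange_weak(&lock->val, &expected, 1))
		expected = 0;	/* cmpxchg updated 'expected'; reset and retry */
	trace_contention_end(lock);
}
```

The point of pairing the events around the whole spin loop is that the delta between them measures time spent contending, which is what lock-contention tracing tools aggregate.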
On Wed, Jul 30, 2025 at 08:46:28AM +0200, Christophe Leroy wrote:
>
>
> > On 25/07/2025 at 10:14, Nysal Jan K.A. wrote:
> > @@ -718,16 +720,17 @@ void queued_spin_lock_slowpath(struct qspinlock *lock)
> > if (IS_ENABLED(CONFIG_PARAVIRT_SPINLOCKS) &
Add a lock contention tracepoint in the queued spinlock slowpath.
Also add the __lockfunc annotation so that in_lock_functions()
works as expected.
Signed-off-by: Nysal Jan K.A.
---
arch/powerpc/lib/qspinlock.c | 13 -
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a
-418f-a84c-9c6360dc5...@linux.ibm.com
Signed-off-by: Nysal Jan K.A.
---
The "Fixes:" SHA1 points to the commit in mm-nonmm-unstable and will need
updating
kernel/watchdog.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/kernel/watchdog.c b/kernel/watchdog.c
index 80d1
7ffc)
Fix the issue by manually adding nop instructions using the preprocessor.
Fixes: 46036188ea1f5 ("selftests/mm: build with -O2")
Reported-by: Madhavan Srinivasan
Signed-off-by: Nysal Jan K.A.
---
tools/testing/selftests/mm/pkey-powerpc.h | 12 +++-
1 file changed, 11 inser
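The "manually adding nop instructions using the preprocessor" technique is the familiar doubling-macro pattern. The real patch emits powerpc `nop` instructions via inline asm; the sketch below keeps only the preprocessor-repetition idea and counts statements instead of emitting instructions, so it stays portable. All names here are illustrative.

```c
#include <assert.h>

/* A counter stands in for an emitted "nop"; the doubling macros are the
 * actual technique: each level repeats the previous one four times. */
static int nop_count;

#define NOP1	do { nop_count++; } while (0);
#define NOP4	NOP1 NOP1 NOP1 NOP1
#define NOP16	NOP4 NOP4 NOP4 NOP4

static void emit_nops(void)
{
	NOP16	/* expands to 16 independent statements */
}
```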
50910acd6f615 ("selftests/mm: use sys_pkey helpers consistently")
Signed-off-by: Madhavan Srinivasan
Signed-off-by: Nysal Jan K.A.
---
tools/testing/selftests/mm/pkey-powerpc.h | 2 ++
tools/testing/selftests/mm/pkey_util.c | 1 +

2 files changed, 3 insertions(+)
diff --git a/tool
c/boot/Makefile
> @@ -33,6 +33,7 @@ else
> endif
>
> ifdef CONFIG_PPC64_BOOT_WRAPPER
> +BOOTTARGETFLAGS += -std=gnu11
> BOOTTARGETFLAGS += -m64
> BOOTTARGETFLAGS += -mabi=elfv2
> ifdef CONFIG_PPC64_ELF_ABI_V2
> --
> 2.47.1
>
>
Reviewed-by: Nysal Jan K.A.
356 +4
e843419@0b02_d7e7_408 8 - -8
e843419@01bb_21d2_868 8 - -8
finish_task_switch.isra 592 548 -44
Signed-off-by: Nysal Jan K.A.
Reviewed-by: Mathieu Desnoyers
Reviewed-by: Michael Ellerman
Revi
356 +4
e843419@0b02_d7e7_408 8 - -8
e843419@01bb_21d2_868 8 - -8
finish_task_switch.isra 592 548 -44
Signed-off-by: Nysal Jan K.A.
---
V1 -> V2:
- Add results for aarch64
- Add a comment describing t
On Fri, Oct 25, 2024 at 11:29:38AM +1100, Michael Ellerman wrote:
> [To += Mathieu]
>
> "Nysal Jan K.A." writes:
> > From: "Nysal Jan K.A"
> >
> > On architectures where ARCH_HAS_SYNC_CORE_BEFORE_USERMODE
> > is not sel
From: "Nysal Jan K.A"
On architectures where ARCH_HAS_SYNC_CORE_BEFORE_USERMODE
is not selected, sync_core_before_usermode() is a no-op.
In membarrier_mm_sync_core_before_usermode() the compiler does not
eliminate redundant branches and the load of mm->membarrier_state
for thi
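The optimisation being described can be modelled in userspace: when the whole guard collapses to a compile-time-false constant, the compiler can drop both the branch and the load of mm->membarrier_state. The names below mirror the kernel's, but this is a sketch with a hand-rolled config macro, not the real implementation.

```c
#include <assert.h>

/* Stand-in for the Kconfig symbol; 0 models an architecture that does
 * not select ARCH_HAS_SYNC_CORE_BEFORE_USERMODE. */
#define CONFIG_ARCH_HAS_SYNC_CORE_BEFORE_USERMODE 0

struct mm_model { int membarrier_state; };

static int sync_core_calls;
static void sync_core_before_usermode(void) { sync_core_calls++; }

static void membarrier_mm_sync_core_before_usermode(struct mm_model *mm)
{
	/* Compile-time constant: everything below is dead code on
	 * architectures without the config option, so no load of
	 * mm->membarrier_state and no branch is emitted. */
	if (!CONFIG_ARCH_HAS_SYNC_CORE_BEFORE_USERMODE)
		return;
	if (mm->membarrier_state & 1)
		sync_core_before_usermode();
}
```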
...@vger.kernel.org # v6.2+
Reported-by: Geetika Moolchandani
Reported-by: Vaishnavi Bhat
Reported-by: Jijo Varghese
Signed-off-by: Nysal Jan K.A.
Reviewed-by: Nicholas Piggin
---
arch/powerpc/lib/qspinlock.c | 10 +-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/arch
On Wed, Aug 28, 2024 at 01:52:33PM GMT, Michael Ellerman wrote:
> "Nysal Jan K.A." writes:
> > If an interrupt occurs in queued_spin_lock_slowpath() after we increment
> > qnodesp->count and before node->lock is initialized, another CPU might
> > see stale lo
On Wed, Aug 28, 2024 at 01:19:46PM GMT, Nicholas Piggin wrote:
> What probably makes it really difficult to hit is that I think both
> locks A and B need contention from other sources to push them into
> queueing slow path. I guess that's omitted for brevity in the flow
> above, which is fine.
>
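The window being discussed can be sketched in miniature: a per-CPU node array is indexed by a count, so the node must be fully initialized before the slot is published by the increment, or a nested user of the next slot can observe stale contents. Field names and the fix shown (initialize, then publish) are illustrative of the hazard, not a claim about the exact upstream patch.

```c
#include <assert.h>
#include <stddef.h>

struct qnode_model {
	void *lock;
	int locked;
};

struct qnodes_model {
	int count;
	struct qnode_model nodes[4];
};

static struct qnode_model *grab_node(struct qnodes_model *qs, void *lock)
{
	struct qnode_model *node = &qs->nodes[qs->count];

	/* Initialize the node first ... */
	node->lock = lock;
	node->locked = 0;
	/* ... then publish the slot. An interrupt arriving after this
	 * increment sees consistent contents rather than stale values
	 * left over from a previous use of the slot. */
	qs->count++;
	return node;
}
```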
tika Moolchandani
Reported-by: Vaishnavi Bhat
Reported-by: Jijo Varghese
Signed-off-by: Nysal Jan K.A.
---
arch/powerpc/lib/qspinlock.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/arch/powerpc/lib/qspinlock.c b/arch/powerpc/lib/qspinlock.c
index 5de4dd549f6e..59861c665cef 100644
--- a
include/asm/dtl.h
> +++ b/arch/powerpc/include/asm/dtl.h
> @@ -1,6 +1,7 @@
> #ifndef _ASM_POWERPC_DTL_H
> #define _ASM_POWERPC_DTL_H
>
> +#include
> #include
> #include
The above include is redundant now and can be removed.
Reviewed-by: Nysal Jan K
From: "Nysal Jan K.A"
topology_is_core_online() checks if the core a CPU belongs to
is online. The core is online if at least one of the sibling
CPUs is online. The first CPU of an online core is also online
in the common case, so this should be fairly quick.
Signed-off-by: Nys
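The check described above can be sketched in userspace: a core is online if at least one sibling thread is online, and scanning from the first sibling usually terminates immediately in the common case. The CPU numbering and threads-per-core layout below are assumptions of this model, not the kernel's actual topology representation.

```c
#include <assert.h>
#include <stdbool.h>

#define THREADS_PER_CORE 4
#define NR_CPUS 16

static bool cpu_online_map[NR_CPUS];

/* True if any sibling thread of 'cpu's core is online. */
static bool topology_is_core_online(unsigned int cpu)
{
	unsigned int first = cpu - (cpu % THREADS_PER_CORE);

	for (unsigned int i = first; i < first + THREADS_PER_CORE; i++) {
		if (cpu_online_map[i])
			return true;	/* first sibling usually hits here */
	}
	return false;
}
```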
From: "Nysal Jan K.A"
After the addition of HOTPLUG_SMT support for PowerPC [1] there was a
regression reported [2] when enabling SMT. On a system with at least
one offline core, when enabling SMT, the expectation is that no CPUs
of offline cores are made online.
On a POWER9 system wi
From: "Nysal Jan K.A"
If a core is offline then enabling SMT should not online CPUs of
this core. By enabling SMT, what is intended is either changing the SMT
value from "off" to "on" or setting the SMT level (threads per core) from a
lower to higher value.
On Pow
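The intended behaviour can be sketched as follows: raising the SMT level onlines additional threads only on cores that already have at least one thread online, leaving fully offline cores untouched. The layout constants and helper names are assumptions of this toy model.

```c
#include <assert.h>
#include <stdbool.h>

#define THREADS_PER_CORE 4
#define NR_CPUS 8

static bool online[NR_CPUS];

static bool core_has_online_thread(unsigned int cpu)
{
	unsigned int first = cpu - (cpu % THREADS_PER_CORE);

	for (unsigned int i = first; i < first + THREADS_PER_CORE; i++)
		if (online[i])
			return true;
	return false;
}

/* Online the first 'threads' threads of each core, but only on cores
 * that are already online — offline cores stay fully offline. */
static void set_smt_level(unsigned int threads)
{
	for (unsigned int cpu = 0; cpu < NR_CPUS; cpu++) {
		if ((cpu % THREADS_PER_CORE) < threads &&
		    core_has_online_thread(cpu))
			online[cpu] = true;
	}
}
```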
On Tue, Jun 25, 2024 at 12:36:33AM GMT, Shrikanth Hegde wrote:
> > --- a/arch/powerpc/include/asm/topology.h
> > +++ b/arch/powerpc/include/asm/topology.h
> > @@ -145,6 +145,7 @@ static inline int cpu_to_coregroup_id(int cpu)
> >
> > #ifdef CONFIG_HOTPLUG_SMT
> > #include
> > +#include
>
> I
On Thu, Jun 13, 2024 at 09:34:10PM GMT, Michael Ellerman wrote:
> "Nysal Jan K.A." writes:
> > From: "Nysal Jan K.A"
> >
> > After the addition of HOTPLUG_SMT support for PowerPC [1] there was a
> > regression reported [2] when enabling SMT.
>
c | 119 +++
> 1 file changed, 52 insertions(+), 67 deletions(-)
>
> --
> 2.42.0
>
Just a minor comment regarding patch 2.
For the series:
Reviewed-by: Nysal Jan K.A
On Mon, Oct 16, 2023 at 10:43:01PM +1000, Nicholas Piggin wrote:
> If a queued waiter notices the lock owner or the previous waiter has
> been preempted, it attempts to mark the lock sleepy, but it does this
> as a try-set operation using the original lock value it got when
> queueing, which will b
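The failure mode quoted above — a try-set cmpxchg using the stale lock value captured at queueing time — can be demonstrated with C11 atomics. The lock-word encoding and flag value below are assumptions of this sketch, not the powerpc qspinlock's actual layout.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

#define SLEEPY_FLAG 0x80000000u

/* Try-set with a stale snapshot: fails whenever the lock word has
 * changed since the snapshot was taken. */
static bool try_set_sleepy_stale(atomic_uint *lock, unsigned int old_snapshot)
{
	unsigned int expected = old_snapshot;

	return atomic_compare_exchange_strong(lock, &expected,
					      old_snapshot | SLEEPY_FLAG);
}

/* Re-read the current value first, then retry until the flag sticks. */
static bool set_sleepy_reread(atomic_uint *lock)
{
	unsigned int cur = atomic_load(lock);

	while (!atomic_compare_exchange_weak(lock, &cur, cur | SLEEPY_FLAG))
		;	/* 'cur' was refreshed by the failed cmpxchg */
	return true;
}
```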
Michael,
Any comments on this one?
On Fri, Feb 24, 2023 at 11:02:31AM +, Christophe Leroy wrote:
>
>
> > On 24/02/2023 at 11:39, Nysal Jan K.A wrote:
> > [You don't often get email from ny...@linux.ibm.com. Learn why this is impor
Remove the arch_atomic_try_cmpxchg_lock() function as it is no longer used
since commit 9f61521c7a28 ("powerpc/qspinlock: powerpc qspinlock
implementation").
Signed-off-by: Nysal Jan K.A
---
arch/powerpc/include/asm/atomic.h | 29 -
1 file changed, 29 deletion