p irq before calling schedule, indicates that IRQ must
be enabled while calling it.
Reviewed-by: Shrikanth Hegde
On 8/5/25 11:13, Srikar Dronamraju wrote:
* Shrikanth Hegde [2025-08-01 19:27:22]:
Could you please add a link to patch on power utils on how it is being consumed?
I am not sure I understood your query; it looks a bit ambiguous.
If your query is on how lparcfg data is being consumed
Jan K.A.
Tried the patch and am able to see the tracepoints.
Reviewed-by: Shrikanth Hegde
---
arch/powerpc/lib/qspinlock.c | 19 ++-
1 file changed, 10 insertions(+), 9 deletions(-)
diff --git a/arch/powerpc/lib/qspinlock.c b/arch/powerpc/lib/qspinlock.c
index bcc7e4dff8c3
On 7/16/25 16:15, Srikar Dronamraju wrote:
Systems can now be partitioned into resource groups. By default, all
systems will be part of the default resource group. Once a resource group is
created and resources are allocated to it, those resources
will be removed from the default resour
moved.
This is tested on PowerVM with PREEMPT_DYNAMIC=y/n and on arm64 by Mark
v1->v2:
- Rebase to 6.16-rc6
- Collected the tags
Ingo, Peter,
Can this go via tip sched tree?
Both arm64 and powerpc have acked the changes, and they have been tested.
Shrikanth Hegde (1):
sched: preempt: Move dynamic keys i
: Shrikanth Hegde
Acked-by: Mark Rutland
Acked-by: Will Deacon
Acked-by: Madhavan Srinivasan
---
arch/arm64/include/asm/preempt.h | 1 -
arch/arm64/kernel/entry-common.c | 8
arch/powerpc/include/asm/preempt.h | 16
arch/powerpc/kernel/interrupt.c| 4
Hi Greg, Thanks for looking into the patches.
On Thu, Jun 26, 2025 at 12:41:07AM +0530, Shrikanth Hegde wrote:
Add a sysfs file called "avoid" which prints the CPUs currently
marked as avoid.
This could be used by userspace components or tools such as irqbalance.
/sys/devices/
On 6/26/25 03:25, Yury Norov wrote:
On Thu, Jun 26, 2025 at 12:40:59AM +0530, Shrikanth Hegde wrote:
This is a followup version of [1] with a few additions. This is still an RFC
and I would like to get feedback on the idea and suggestions for improvement.
v1->v2:
- Renamed to cpu_avoid_mask
Hi Yury, Thanks for taking a look at this.
On Thu, Jun 26, 2025 at 12:41:08AM +0530, Shrikanth Hegde wrote:
Reference patch for how an architecture can make use of this infra.
This is not meant to be merged. Instead, the vp_manual_hint should either
come from hardware or be derived
On 6/26/25 05:32, Yury Norov wrote:
On Thu, Jun 26, 2025 at 12:41:03AM +0530, Shrikanth Hegde wrote:
The load balancer shouldn't spread CFS tasks onto a CPU marked as avoid.
Remove those CPUs from load balancing decisions.
At wakeup, don't select a CPU marked as avoid.
Signed-off-by:
static key and set those CPUs as avoid.
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/include/asm/paravirt.h | 2 ++
arch/powerpc/kernel/smp.c | 50 +
2 files changed, 52 insertions(+)
diff --git a/arch/powerpc/include/asm/paravirt.h
b/arch/powerpc
Checking if a CPU is marked as avoid can add a slight overhead and should be
done only when necessary.
Add a static key check which makes it almost a nop when the key is false.
Arch needs to set the key when it decides to. Refer to debug patch
for example.
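A minimal sketch of what such a static-key gate could look like; the names are assumptions based on the cover letter, not the actual patch:

#include <linux/jump_label.h>
#include <linux/cpumask.h>

static DEFINE_STATIC_KEY_FALSE(cpu_avoid_key);
static struct cpumask cpu_avoid_cpus;

static inline bool cpu_is_avoid(int cpu)
{
	/* Patched-out branch while the key is false, so almost a nop */
	if (!static_branch_unlikely(&cpu_avoid_key))
		return false;
	return cpumask_test_cpu(cpu, &cpu_avoid_cpus);
}

The architecture would then flip the key with static_branch_enable(&cpu_avoid_key) once it actually starts marking CPUs as avoid.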
Signed-off-by: Shrikanth Hegde
---
This method
- At wakeup, don't select the CPU if it is marked as avoid.
- Don't pull a task if the CPU is marked as avoid.
- Don't push a task to a CPU marked as avoid.
Signed-off-by: Shrikanth Hegde
---
kernel/sched/rt.c | 9 +++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff
The load balancer shouldn't spread CFS tasks onto a CPU marked as avoid.
Remove those CPUs from load balancing decisions.
At wakeup, don't select a CPU marked as avoid.
Signed-off-by: Shrikanth Hegde
---
While testing, I didn't see the cpu being marked as avoid while new_cpu is.
May
Add a sysfs file called "avoid" which prints the CPUs currently
marked as avoid.
This could be used by userspace components or tools such as irqbalance.
/sys/devices/system/cpu # cat avoid
70-479
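A sketch of how such a read-only attribute could be wired up in drivers/base/cpu.c style; get_cpu_avoid_mask() is a hypothetical accessor standing in for however the series exposes the mask:

#include <linux/cpumask.h>
#include <linux/device.h>

extern const struct cpumask *get_cpu_avoid_mask(void);

static ssize_t print_cpus_avoid(struct device *dev,
				struct device_attribute *attr, char *buf)
{
	/* Prints a CPU list like "70-479" */
	return sysfs_emit(buf, "%*pbl\n", cpumask_pr_args(get_cpu_avoid_mask()));
}
static DEVICE_ATTR(avoid, 0444, print_cpus_avoid, NULL);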
Signed-off-by: Shrikanth Hegde
---
drivers/base/cpu.c | 8
1 file
Don't allow the CPU marked as avoid. This is used when a task is pushed out
of a CPU marked as avoid in select_fallback_rq.
Signed-off-by: Shrikanth Hegde
---
kernel/sched/core.c | 4
1 file changed, 4 insertions(+)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 0e3a00e
Introduce cpu_avoid_mask and get/set routines for it.
By having the mask, it is easier for other kernel subsystems to consume
it as well. One could quickly know which CPUs are currently marked as
avoid.
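A rough sketch of how the mask and its get/set routines might look; the names and layout are guesses from this description, not the actual patch:

#include <linux/cpumask.h>

static struct cpumask __cpu_avoid_mask;

const struct cpumask *get_cpu_avoid_mask(void)
{
	return &__cpu_avoid_mask;
}

void set_cpu_avoid(unsigned int cpu, bool avoid)
{
	if (avoid)
		cpumask_set_cpu(cpu, &__cpu_avoid_mask);
	else
		cpumask_clear_cpu(cpu, &__cpu_avoid_mask);
}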
Signed-off-by: Shrikanth Hegde
---
There is a sysfs patch later in the series which prints
.
Signed-off-by: Shrikanth Hegde
---
kernel/sched/core.c | 44
kernel/sched/sched.h | 1 +
2 files changed, 45 insertions(+)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 13e44d7a0b90..aea4232e3ec4 100644
--- a/kernel/sched/core.c
+++ b
push task related
changes. It is currently spread across rt, dl and fair. Maybe some
consolidation can be done, but which tasks to push/pull still remains in
the class.
6. cpu_avoid_mask may need some sort of locking to ensure read/write is
correct.
[1]: https://lore.kernel.org/all/20250523181448.
This describes what an avoid CPU means and what the scheduler aims to do
when a CPU is marked as avoid.
Signed-off-by: Shrikanth Hegde
---
Documentation/scheduler/sched-arch.rst | 25 +
1 file changed, 25 insertions(+)
diff --git a/Documentation/scheduler/sched-arch.rst
b
On 5/8/25 11:23, Sourabh Jain wrote:
Hi Sourabh.
On 05/05/25 13:23, Shrikanth Hegde wrote:
use scoped_guard for scope based resource management of mutex.
This would make the code simpler and easier to maintain.
More details on lock guards can be found at
https://lore.kernel.org/all
On 5/5/25 13:23, Shrikanth Hegde wrote:
This is an effort to make the code simpler by making use of lock
guards which were introduced in [1], which works by using __cleanup
attributes. More details in v1 cover letter
compile/boot tested on PowerNV(P9). Also ran eeh selftests.
No
use guard(mutex) for scope based resource management of mutex
This would make the code simpler and easier to maintain.
More details on lock guards can be found at
https://lore.kernel.org/all/20230612093537.614161...@infradead.org/T/#u
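For readers unfamiliar with the helper, a minimal sketch of the conversion pattern used across this series; foo_mutex and foo_update() are hypothetical, while guard() comes from linux/cleanup.h and releases the mutex automatically when it goes out of scope:

#include <linux/cleanup.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(foo_mutex);

static int foo_update(int val)
{
	guard(mutex)(&foo_mutex);	/* unlocked automatically on every return path */

	if (val < 0)
		return -EINVAL;		/* no explicit mutex_unlock() needed */

	/* ... work protected by foo_mutex ... */
	return 0;
}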
Reviewed-by: Srikar Dronamraju
Signed-off-by: Shrikanth
-by: Srikar Dronamraju
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/platforms/book3s/vas-api.c | 32 ++---
1 file changed, 13 insertions(+), 19 deletions(-)
diff --git a/arch/powerpc/platforms/book3s/vas-api.c
b/arch/powerpc/platforms/book3s/vas-api.c
index 0b6365d85d11
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/platforms/powernv/ocxl.c | 12 +++-
1 file changed, 3 insertions(+), 9 deletions(-)
diff --git a/arch/powerpc/platforms/powernv/ocxl.c
b/arch/powerpc/platforms/powernv/ocxl.c
index 64a9c7125c29..f8139948348e 100644
--- a/arch/powerpc
use scoped_guard for scope based resource management of mutex.
This would make the code simpler and easier to maintain.
More details on lock guards can be found at
https://lore.kernel.org/all/20230612093537.614161...@infradead.org/T/#u
Reviewed-by: Srikar Dronamraju
Signed-off-by: Shrikanth
use guard(mutex) for scope based resource management of mutex.
This would make the code simpler and easier to maintain.
More details on lock guards can be found at
https://lore.kernel.org/all/20230612093537.614161...@infradead.org/T/#u
Reviewed-by: Srikar Dronamraju
Signed-off-by: Shrikanth
use guard(mutex) for scope based resource management of mutex.
This would make the code simpler and easier to maintain.
More details on lock guards can be found at
https://lore.kernel.org/all/20230612093537.614161...@infradead.org/T/#u
Reviewed-by: Srikar Dronamraju
Signed-off-by: Shrikanth
.com/#t
v2: https://lore.kernel.org/all/20250314114502.2083434-1-sshe...@linux.ibm.com/
Shrikanth Hegde (6):
powerpc: eeh: use lock guard for mutex
powerpc: rtas: use lock guard for mutex
powerpc: fadump: use lock guard for mutex
powerpc: book3s: vas: use lock guard for mutex
powerpc: pow
Hi
On 4/30/25 18:40, Srikar Dronamraju wrote:
* Shrikanth Hegde [2025-03-14 17:15:02]:
Hi Srikar.
use guard(mutex) for scope based resource management of mutex
This would make the code simpler and easier to maintain.
More details on lock guards can be found at
https
On 4/25/25 19:01, Sebastian Andrzej Siewior wrote:
On 2025-04-25 16:49:19 [+0530], Shrikanth Hegde wrote:
On 4/25/25 00:08, Sebastian Andrzej Siewior wrote:
On 2025-04-24 21:27:59 [+0530], Shrikanth Hegde wrote:
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index
On 4/28/25 20:52, Mukesh Kumar Chaurasiya wrote:
Enable the syscall entry and exit path from generic framework.
Signed-off-by: Mukesh Kumar Chaurasiya
Hi Mukesh. Thanks for working on this. Trying to go through it.
---
arch/powerpc/Kconfig| 1 +
arch/powerpc/kernel/i
On 4/25/25 19:01, Sebastian Andrzej Siewior wrote:
On 2025-04-25 16:49:19 [+0530], Shrikanth Hegde wrote:
On 4/25/25 00:08, Sebastian Andrzej Siewior wrote:
On 2025-04-24 21:27:59 [+0530], Shrikanth Hegde wrote:
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index
On 4/25/25 00:08, Sebastian Andrzej Siewior wrote:
On 2025-04-24 21:27:59 [+0530], Shrikanth Hegde wrote:
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 19f4d298d..123539642 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
On 4/24/25 20:12, Sebastian Andrzej Siewior wrote:
Thanks Sebastian for taking a look.
On 2025-04-21 15:58:36 [+0530], Shrikanth Hegde wrote:
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 19f4d298d..123539642 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b
On 4/21/25 15:58, Shrikanth Hegde wrote:
From: Gautam Menghani
I made a mistake while generating the patch. Sorry about that. I will
fix it up in the next version.
Please consider the above as:
From: Shrikanth Hegde
This is an effort to use the generic kvm infra which handles check for
.
This is based on tip/master
Shrikanth Hegde (2):
powerpc: kvm: use generic transfer to guest mode work
powerpc: enable to run posix cpu timers in task context
arch/powerpc/Kconfig | 2 ++
arch/powerpc/kvm/book3s_hv.c | 13 +++--
arch/powerpc/kvm/powerpc.c | 22
.
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 83807ae44..f42fa4181 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -277,6 +277,7 @@ config PPC
select
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/Kconfig | 1 +
arch/powerpc/kvm/book3s_hv.c | 13 +++--
arch/powerpc/kvm/powerpc.c | 22 --
3 files changed, 16 insertions(+), 20 deletions(-)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 6722
ick.
It makes sense since base_slice is the only tunable available under EEVDF.
This would allow the users to make use of it.
Reviewed-by: Shrikanth Hegde
By increasing CONFIG_HZ to 1000 (1ms tick), base_slice is properly honored,
and user-defined slices work as expected. Benchmark results su
-by: Shrikanth Hegde
---
arch/powerpc/platforms/book3s/vas-api.c | 32 ++---
1 file changed, 13 insertions(+), 19 deletions(-)
diff --git a/arch/powerpc/platforms/book3s/vas-api.c
b/arch/powerpc/platforms/book3s/vas-api.c
index 0b6365d85d11..d7462c16d828 100644
--- a/arch
On 3/14/25 13:52, Peter Zijlstra wrote:
Thanks Peter for taking a look.
On Fri, Mar 14, 2025 at 11:15:41AM +0530, Shrikanth Hegde wrote:
use guard(mutex) for scope based resource management of mutex.
This would make the code simpler and easier to maintain.
More details on lock guards can
On 3/14/25 15:00, Shrikanth Hegde wrote:
On 3/14/25 11:15, Shrikanth Hegde wrote:
use guard(mutex) for scope based resource management of mutex.
This would make the code simpler and easier to maintain.
More details on lock guards can be found at
https://lore.kernel.org/all
use guard(mutex) for scope based resource management of mutex
This would make the code simpler and easier to maintain.
More details on lock guards can be found at
https://lore.kernel.org/all/20230612093537.614161...@infradead.org/T/#u
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/sysdev
On 3/14/25 13:55, Peter Zijlstra wrote:
On Fri, Mar 14, 2025 at 11:15:42AM +0530, Shrikanth Hegde wrote:
use guard(mutex) for scope based resource management of mutex.
This would make the code simpler and easier to maintain.
More details on lock guards can be found at
https
use guard(mutex) for scope based resource management of mutex.
This would make the code simpler and easier to maintain.
More details on lock guards can be found at
https://lore.kernel.org/all/20230612093537.614161...@infradead.org/T/#u
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/kernel
use scoped_guard for scope based resource management of mutex.
This would make the code simpler and easier to maintain.
More details on lock guards can be found at
https://lore.kernel.org/all/20230612093537.614161...@infradead.org/T/#u
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/kernel
use guard(mutex) for scope based resource management of mutex.
This would make the code simpler and easier to maintain.
More details on lock guards can be found at
https://lore.kernel.org/all/20230612093537.614161...@infradead.org/T/#u
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/platforms
use guard(mutex) for scope based resource management of mutex.
This would make the code simpler and easier to maintain.
More details on lock guards can be found at
https://lore.kernel.org/all/20230612093537.614161...@infradead.org/T/#u
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/kernel
use scoped_guard in a couple of places to avoid holding the mutex
unnecessarily (Peter Zijlstra)
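A quick sketch of that narrower-scope variant; bar_mutex, bar_count and bar_bump() are made up for illustration, and only the block under scoped_guard() holds the lock:

#include <linux/mutex.h>

static DEFINE_MUTEX(bar_mutex);
static int bar_count;

static void bar_bump(void)
{
	scoped_guard(mutex, &bar_mutex) {
		bar_count++;		/* lock held only for this block */
	}
	/* bar_mutex is already released here; slower work can run unlocked */
}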
Shrikanth Hegde (6):
powerpc: eeh: use lock guard for mutex
powerpc: rtas: use lock guard for mutex
powerpc: fadump: use lock guard for mutex
powerpc: book3s: vas: use lock guard for mutex
powe
On 3/14/25 11:15, Shrikanth Hegde wrote:
use guard(mutex) for scope based resource management of mutex.
This would make the code simpler and easier to maintain.
More details on lock guards can be found at
https://lore.kernel.org/all/20230612093537.614161...@infradead.org/T/#u
Signed-off-by
On 3/14/25 11:36, Andrew Donnellan wrote:
On Fri, 2025-03-14 at 11:15 +0530, Shrikanth Hegde wrote:
use guard(mutex) for scope based resource management of mutex.
This would make the code simpler and easier to maintain.
More details on lock guards can be found at
https://lore.kernel.org/all
use guard(mutex) for scope based resource management of mutex.
This would make the code simpler and easier to maintain.
More details on lock guards can be found at
https://lore.kernel.org/all/20230612093537.614161...@infradead.org/T/#u
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/platforms
use guard(mutex) for scope based resource management of mutex.
This would make the code simpler and easier to maintain.
More details on lock guards can be found at
https://lore.kernel.org/all/20230612093537.614161...@infradead.org/T/#u
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/kernel
use guard(mutex) for scope based resource management of mutex
This would make the code simpler and easier to maintain.
More details on lock guards can be found at
https://lore.kernel.org/all/20230612093537.614161...@infradead.org/T/#u
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/sysdev
-by: Shrikanth Hegde
---
arch/powerpc/platforms/book3s/vas-api.c | 19 ++-
1 file changed, 6 insertions(+), 13 deletions(-)
diff --git a/arch/powerpc/platforms/book3s/vas-api.c
b/arch/powerpc/platforms/book3s/vas-api.c
index 0b6365d85d11..eb1a97271afb 100644
--- a/arch/powerpc
use guard(mutex) for scope based resource management of mutex.
This would make the code simpler and easier to maintain.
More details on lock guards can be found at
https://lore.kernel.org/all/20230612093537.614161...@infradead.org/T/#u
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/kernel
use guard(mutex) for scope based resource management of mutex.
This would make the code simpler and easier to maintain.
More details on lock guards can be found at
https://lore.kernel.org/all/20230612093537.614161...@infradead.org/T/#u
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/kernel
bisect. Let me know if they should
be combined into one. The commit message is the same for all.
Shrikanth Hegde (6):
powerpc: eeh: use lock guard for mutex
powerpc: rtas: use lock guard for mutex
powerpc: fadump: use lock guard for mutex
powerpc: book3s: vas: use lock guard for mutex
powerpc: powen
On 2/20/25 16:25, Tobias Huschle wrote:
On 18/02/2025 06:58, Shrikanth Hegde wrote:
[...]
There are a couple of issues and corner cases which need further
considerations:
- rt & dl: Realtime and deadline scheduling require some additional
attention.
I think we
On 2/17/25 17:02, Tobias Huschle wrote:
Changes to v1
parked vs idle
- parked CPUs are now never considered to be idle
- a scheduler group is now considered parked iff there are parked CPUs
and there are no idle CPUs, i.e. all non parked CPUs are busy or there
are only parked CPUs. A sc
Hi Tobias.
On 2/17/25 17:02, Tobias Huschle wrote:
A parked CPU is considered to be flagged as unsuitable to process
workload at the moment, but might become usable anytime, depending on
the necessity for additional computation power and/or available capacity
of the underlying hardware.
A sc
l.org/all/20250106051919.55020-1-sshe...@linux.ibm.com/
Shrikanth Hegde (1):
powerpc: enable dynamic preemption
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/preempt.h | 16
arch/powerpc/kernel/interrupt.c| 6 +-
arch/powerpc/lib/vmx-helper.c | 2 +-
luntary (full) lazy
perf stat -e probe:__cond_resched -a sleep 1
Performance counter stats for 'system wide':
0 probe:__cond_resched
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/preempt.h | 16 ++
On 2/10/25 19:53, Sebastian Andrzej Siewior wrote:
On 2025-02-10 11:59:50 [+0100], To Shrikanth Hegde wrote:
Thank you for noticing. I did remove it on other architectures; I
somehow missed it here. Will remove it from the arch code.
This is what I have for powerpc now. I'm goi
On 2/8/25 23:25, Christophe Leroy wrote:
Le 08/02/2025 à 14:42, Shrikanth Hegde a écrit :
On 2/8/25 18:25, Christophe Leroy wrote:
Le 08/02/2025 à 08:35, Shrikanth Hegde a écrit :
On 2/4/25 13:52, Sebastian Andrzej Siewior wrote:
Use preempt_model_str() instead of manually
On 2/8/25 18:25, Christophe Leroy wrote:
Le 08/02/2025 à 08:35, Shrikanth Hegde a écrit :
On 2/4/25 13:52, Sebastian Andrzej Siewior wrote:
Use preempt_model_str() instead of manually conducting the preemption
model. Use pr_emerg() instead of printk() to pass a loglevel.
even on
On 2/4/25 13:52, Sebastian Andrzej Siewior wrote:
Use preempt_model_str() instead of manually conducting the preemption
model. Use pr_emerg() instead of printk() to pass a loglevel.
Even on powerpc, I see __die ends up calling show_regs_print_info().
Why print it twice?
Cc: Madhavan Srini
On 1/31/25 11:39, Christophe Leroy wrote:
Le 30/01/2025 à 21:26, Sebastian Andrzej Siewior a écrit :
On 2025-01-30 22:27:07 [+0530], Shrikanth Hegde wrote:
| #define need_irq_preemption() \
|	(static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched))
On 1/30/25 20:24, Sebastian Andrzej Siewior wrote:
On 2025-01-06 10:49:19 [+0530], Shrikanth Hegde wrote:
--- a/arch/powerpc/kernel/interrupt.c
Thanks for taking a look.
+
#ifdef CONFIG_PPC_BOOK3S_64
DEFINE_STATIC_KEY_FALSE(interrupt_exit_not_reentrant);
static inline bool
On 1/6/25 10:49, Shrikanth Hegde wrote:
Now that preempt=lazy patches[1] are in powerpc-next tree, sending out the
patch to support dynamic preemption based on DYNAMIC_KEY.
base: powerpc-next
+ankur, sebastian; sorry for not cc'ing earlier.
Once the arch supports static inline call
other method to test it out, please let me know.
So for powerpc bits:
Tested-by: Shrikanth Hegde
diff --git a/arch/powerpc/kernel/trace/ftrace.c
b/arch/powerpc/kernel/trace/ftrace.c
index 5ccd791761e8f..558d7f4e4bea6 100644
--- a/arch/powerpc/kernel/trace/ftrace.c
+++ b/arch/powerpc/kernel/
d_resched -a sleep 1
Performance counter stats for 'system wide':
0 probe:__cond_resched
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/preempt.h | 12
arch/powerpc/kernel/interrupt.c| 6
2315-1-sshe...@linux.ibm.com/
v2: https://lore.kernel.org/all/20250102191856.499424-1-sshe...@linux.ibm.com/
[1]:
https://lore.kernel.org/all/173572211264.1875638.9927288574435880962.b4...@linux.ibm.com/
Shrikanth Hegde (1):
powerpc: Enable dynamic preemption
arch/powerpc/Kconfig
:__cond_resched -a sleep 1
Performance counter stats for 'system wide':
0 probe:__cond_resched
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/preempt.h | 11 +++
arch/powerpc/kernel/interrupt.c| 6
all/20241125042212.1522315-1-sshe...@linux.ibm.com/
[1]:
https://lore.kernel.org/all/173572211264.1875638.9927288574435880962.b4...@linux.ibm.com/
Shrikanth Hegde (1):
powerpc: Enable dynamic preemption
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/preempt.h | 11 +++
arch/powe
On 12/26/24 17:20, Yicong Yang wrote:
On 2024/12/26 17:23, Shrikanth Hegde wrote:
On 12/20/24 13:23, Yicong Yang wrote:
From: Yicong Yang
The core CPU control framework supports runtime SMT control which
is not yet supported on arm64. Besides the general vulnerabilities
concerns we want
On 12/20/24 13:23, Yicong Yang wrote:
From: Yicong Yang
The core CPU control framework supports runtime SMT control which
is not yet supported on arm64. Besides the general vulnerabilities
concerns we want this runtime control on our arm64 server for:
- better single CPU performance in some
On 12/20/24 13:23, Yicong Yang wrote:
From: Yicong Yang
Currently if architectures want to support HOTPLUG_SMT they need to
provide a topology_is_primary_thread() telling the framework which
thread in the SMT cannot offline. However arm64 doesn't have a
restriction on which thread in the SMT
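For illustration only, one way an architecture without a hardware restriction might define the hook, treating the first sibling as the primary thread; this is an assumption, not the actual arm64 change:

#include <linux/cpumask.h>
#include <linux/topology.h>

static inline bool topology_is_primary_thread(unsigned int cpu)
{
	/* First thread of the SMT sibling mask is the one kept online */
	return cpu == cpumask_first(topology_sibling_cpumask(cpu));
}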
On 12/9/24 13:35, Tobias Huschle wrote:
[...]
So I gave it a try using a debugfs-based hint to say which CPUs
are parked.
It is a hack to try it out. The patch is below so one could try something
similar in their archs
and see if it helps if they have a use case.
Notes:
1. Arch shouldn't
On 11/17/24 00:53, Shrikanth Hegde wrote:
preempt=lazy has been merged into tip[1]. Let's enable it for PowerPC.
This has been very lightly tested and, as Michael suggested, could go
through a test cycle. If needed, patches can be merged. I have kept it
separate for easier bisect.
Lazy
On 12/4/24 16:51, Tobias Huschle wrote:
A parked CPU is considered to be flagged as unsuitable to process
workload at the moment, but might become usable anytime, depending on
the necessity for additional computation power and/or available capacity
of the underlying hardware.
A scheduler g
On 12/4/24 16:51, Tobias Huschle wrote:
In this simplified example, vertical low CPUs are parked generally.
This will later be adjusted by making the parked state dependent
on the overall utilization on the underlying hypervisor.
Vertical lows are always bound to the highest CPU IDs. This imp
group type simplifies from implementation perspective.
So for the idea of using this,
Acked-by: Shrikanth Hegde
Some architectures (e.g. s390) provide virtualization on a firmware
level. This implies, that Linux kernels running on such architectures
run on virtualized CPUs.
Like in other
On 12/2/24 23:47, Christophe Leroy wrote:
Le 02/12/2024 à 15:05, Shrikanth Hegde a écrit :
On 11/27/24 12:07, Christophe Leroy wrote:
Le 25/11/2024 à 05:22, Shrikanth Hegde a écrit :
PowerPC uses asm-generic preempt definitions as of now.
Copy that into arch/asm so that arch specific
On 11/27/24 12:07, Christophe Leroy wrote:
Le 25/11/2024 à 05:22, Shrikanth Hegde a écrit :
PowerPC uses asm-generic preempt definitions as of now.
Copy that into arch/asm so that arch specific changes can be done.
This would help the next patch for enabling dynamic preemption.
The
On 11/27/24 12:14, Christophe Leroy wrote:
Le 25/11/2024 à 05:22, Shrikanth Hegde a écrit :
Once the lazy preemption is supported, it would be desirable to change
the preemption models at runtime. So this change adds support for dynamic
preemption using DYNAMIC_KEY.
In irq-exit to kernel
On 11/26/24 16:23, Christophe Leroy wrote:
Le 16/11/2024 à 20:23, Shrikanth Hegde a écrit :
Define preempt lazy bit for Powerpc. Use bit 9 which is free and within
16 bit range of NEED_RESCHED, so compiler can issue single andi.
Since Powerpc doesn't use the generic entry/exit, add
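For reference, a minimal sketch of the flag layout described in the quoted text; the exact names and surrounding thread_info definitions are assumptions:

#define TIF_NEED_RESCHED_LAZY	9	/* free bit, within the low 16 bits */
#define _TIF_NEED_RESCHED_LAZY	(1 << TIF_NEED_RESCHED_LAZY)

Keeping both NEED_RESCHED bits in the low 16 bits is what lets the compiler test them with a single andi. instruction.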
On 11/26/24 16:18, Christophe Leroy wrote:
Hi Christophe, Thanks for taking a look at this.
Le 25/11/2024 à 05:22, Shrikanth Hegde a écrit :
Once the lazy preemption is supported, it would be desirable to change
the preemption models at runtime. So this change adds support for dynamic
is a plan to move the preempt count to the paca for 64-bit
systems, as the idea was discussed in [2]
[1] https://lore.kernel.org/all/20241116192306.88217-1-sshe...@linux.ibm.com/#t
[2]
https://lore.kernel.org/all/14d4584d-a087-4674-9e2b-810e96078...@linux.ibm.com/
Shrikanth Hegde (3):
powerpc: copy
PowerPC uses asm-generic preempt definitions as of now.
Copy that into arch/asm so that arch specific changes can be done.
This would help the next patch for enabling dynamic preemption.
No functional changes intended.
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/include/asm/preempt.h | 100
kernel/debug/sched/preempt
none voluntary full (lazy)
perf stat -e probe:__cond_resched -a sleep 1
Performance counter stats for 'system wide':
0 probe:__cond_resched
[1]:
https://lore.kernel.org/all/1a973dda-c79e-4d95-935b-e4b93eb07...@linux.ibm.com/
Signed-off-by: S
Preemption models can change at runtime with dynamic preemption in
place. So we need to use the right methods instead of relying on
CONFIG_PREEMPT to decide whether it is full preemption or not.
While there, fix it to print the preemption model correctly.
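As an illustration of the runtime helpers in question (a sketch, not the actual patch; preempt_model_full() and preempt_model_str() are the helpers discussed in this thread):

#include <linux/preempt.h>
#include <linux/printk.h>

static void report_preempt_model(void)
{
	/* Query the model at runtime instead of #ifdef CONFIG_PREEMPT */
	if (preempt_model_full())
		pr_info("kernel is fully preemptible\n");

	pr_info("Preemption model: %s\n", preempt_model_str());
}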
Signed-off-by: Shrikanth Hegde
---
arch
On 11/20/24 13:30, Sebastian Andrzej Siewior wrote:
On 2024-11-17 00:53:06 [+0530], Shrikanth Hegde wrote:
Large user copy_to/from (more than 16 bytes) uses vmx instructions to
speed things up. Once the copy is done, it makes sense to try to schedule
as soon as possible for preemptible kernels
On 11/20/24 13:33, Sebastian Andrzej Siewior wrote:
On 2024-11-19 13:08:31 [-0800], Ankur Arora wrote:
Shrikanth Hegde writes:
Thanks Ankur and Sebastian for taking a look.
Large user copy_to/from (more than 16 bytes) uses vmx instructions to
speed things up. Once the copy is done, it
done.
Refs:
[1]: https://lore.kernel.org/lkml/20241007074609.447006...@infradead.org/
v1: https://lore.kernel.org/all/20241108101853.277808-1-sshe...@linux.ibm.com/
Changes since v1:
- Change for vmx copy as suggested by Sebastian.
- Add rwb tags
Shrikanth Hegde (2):
powerpc: Add preempt lazy
unnecessary
context switches.
Suggested-by: Sebastian Andrzej Siewior
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/lib/vmx-helper.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/lib/vmx-helper.c b/arch/powerpc/lib/vmx-helper.c
index d491da8d1838..58ed6bd613a6
Arora
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/thread_info.h | 9 ++---
arch/powerpc/kernel/interrupt.c| 4 ++--
3 files changed, 9 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
On 11/9/24 22:24, Shrikanth Hegde wrote:
On 11/9/24 00:36, Ankur Arora wrote:
Shrikanth Hegde writes:
Define preempt lazy bit for Powerpc. Use bit 9 which is free and within
16 bit range of NEED_RESCHED, so compiler can issue single andi.
Since Powerpc doesn't use the generic
On 11/14/24 07:31, Michael Ellerman wrote:
Shrikanth Hegde writes:
Thank you Sebastian for taking a look and rwb tag.
On 2024-11-08 15:48:53 [+0530], Shrikanth Hegde wrote:
Define preempt lazy bit for Powerpc. Use bit 9 which is free and within
16 bit range of NEED_RESCHED, so compiler