p; 1)
> - return true;
> + /*
> + * None of the threads in this thread group are running but none of
> + * them were preempted too. Hence assume the thread to be
> + * non-preempted.
> + */
That comment is a bit confusing. Instead of threads it would be better to say CPUs:
"None of the CPUs in this Big Core are running but none of them were preempted
too. Hence assume the CPU to be non-preempted."
> return false;
> }
>
Otherwise LGTM
Reviewed-by: Shrikanth Hegde
ve scenario with baseline.
With this patch series applied, hard lockup was NOT SEEN in each of
the above scenarios.
So,
Tested-by: Shrikanth Hegde
> Thanks,
> Nick
>
> Nicholas Piggin (6):
> powerpc/qspinlock: Fix stale propagated yield_cpu
> powerpc/qspinlock: stop que
On 10/18/23 10:07 PM, Srikar Dronamraju wrote:
> The ability to detect if the system is running in a shared processor
> mode is helpful in a few more generic cases, not just in
> paravirtualization.
> For example: At boot time, different scheduler/ topology flags may be
> set based on the processor
On 10/18/23 10:07 PM, Srikar Dronamraju wrote:
> If there are shared processor LPARs, underlying Hypervisor can have more
> virtual cores to handle than actual physical cores.
>
> Starting with Power 9, a core has 2 nearly independent thread groups.
> On shared processor LPARs, it helps to p
On 11/14/23 12:42 PM, Aneesh Kumar K.V wrote:
> No functional change in this patch. A helper is added to find if
> vcpu is dispatched by hypervisor. Use that instead of opencoding.
> Also clarify some of the comments.
>
> Signed-off-by: Aneesh Kumar K.V
> ---
> arch/powerpc/include/asm/paravi
On 5/15/23 5:16 PM, Tobias Huschle wrote:
> The current load balancer implementation implies that scheduler groups,
> within the same domain, all host the same number of CPUs. This is
> reflected in the condition, that a scheduler group, which is load
> balancing and classified as having spare c
On 7/7/23 1:14 PM, Tobias Huschle wrote:
> On 2023-07-05 09:52, Vincent Guittot wrote:
>> Le lundi 05 juin 2023 à 10:07:16 (+0200), Tobias Huschle a écrit :
>>> On 2023-05-16 15:36, Vincent Guittot wrote:
>>> > On Mon, 15 May 2023 at 13:46, Tobias Huschle
>>> > wrote:
>>> > >
>>> > > The curren
On 7/7/23 9:29 PM, Tobias Huschle wrote:
> On 2023-07-07 16:33, Shrikanth Hegde wrote:
>> On 7/7/23 1:14 PM, Tobias Huschle wrote:
>>> On 2023-07-05 09:52, Vincent Guittot wrote:
>>>> Le lundi 05 juin 2023 à 10:07:16 (+0200), Tobias Huschle a écrit :
>>>
On 7/12/23 8:32 PM, Valentin Schneider wrote:
> On 12/07/23 16:10, Peter Zijlstra wrote:
>> Hi
>>
>> Thomas just tripped over the x86 topology setup creating a 'DIE' domain
>> for the package mask :-)
>>
>> Since these names are SCHED_DEBUG only, rename them.
>> I don't think anybody *should* be
On 11/26/22 3:29 PM, Nicholas Piggin wrote:
> This replaces the generic queued spinlock code (like s390 does) with
> our own implementation. There is an extra shim patch 1a to get the
> series to apply.
>
> Generic PV qspinlock code is causing latency / starvation regressions on
> large systems
and not needed.
Plus a minor comment update to reflect the else case.
No functional change is intended here. It only aims to improve code
readability.
Signed-off-by: Shrikanth Hegde
---
kernel/sched/core.c | 4 +---
kernel/sched/fair.c | 2 --
2 files changed, 1 insertion(+), 5 deletions(-)
diff
defined back to back. Merged the two
ifdefs.
No functional change is intended here. It only aims to improve code
readability.
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/include/asm/paca.h | 4
arch/powerpc/kernel/asm-offsets.c | 2 --
arch/powerpc/platforms/powermac
ifdefs_list.append(last_word_splits[1])
if last_word_splits[0] == "#endif":
    ifdefs_list.pop()
i = i + 1
if __name__ == "__main__":
    args = parse_args()
    parseFiles(args)
-
Shrikanth
improve code
readability.
Signed-off-by: Shrikanth Hegde
---
fs/ntfs/inode.c| 2 --
fs/xfs/xfs_sysfs.c | 4
2 files changed, 6 deletions(-)
diff --git a/fs/ntfs/inode.c b/fs/ntfs/inode.c
index aba1e22db4e9..d2c8622d53d1 100644
--- a/fs/ntfs/inode.c
+++ b/fs/ntfs/inode.c
@@ -2859,11
On 1/22/24 6:20 PM, Chandan Babu R wrote:
> On Thu, Jan 18, 2024 at 01:33:25 PM +0530, Shrikanth Hegde wrote:
>> When an ifdef is used in the below manner, the second one could be
>> considered a duplicate.
>>
>> ifdef DEFINE_A
>> ...code block...
>> ifde
and not needed.
Plus a minor comment update to reflect the else case.
No functional change is intended here. It only aims to improve code
readability.
Signed-off-by: Shrikanth Hegde
---
kernel/sched/core.c | 4 +---
kernel/sched/fair.c | 2 --
2 files changed, 1 insertion(+), 5 deletions(-)
diff
nge into two patches as suggested by Chandan Babu R.
v1: https://lore.kernel.org/all/20240118080326.13137-1-sshe...@linux.ibm.com/
Shrikanth Hegde (4):
sched: remove duplicate ifdefs
xfs: remove duplicate ifdefs
ntfs: remove duplicate ifdefs
arch/powerpc: remove duplicate ifdefs
arch/pow
only aims to improve code
readability.
Reviewed-by: Darrick J. Wong
Signed-off-by: Shrikanth Hegde
---
fs/xfs/xfs_sysfs.c | 4
1 file changed, 4 deletions(-)
diff --git a/fs/xfs/xfs_sysfs.c b/fs/xfs/xfs_sysfs.c
index 17485666b672..d2391eec37fe 100644
--- a/fs/xfs/xfs_sysfs.c
+++ b/fs/xfs
here. It only aims to improve code
readability.
Signed-off-by: Shrikanth Hegde
---
fs/ntfs/inode.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/fs/ntfs/inode.c b/fs/ntfs/inode.c
index aba1e22db4e9..d2c8622d53d1 100644
--- a/fs/ntfs/inode.c
+++ b/fs/ntfs/inode.c
@@ -2859,11 +2859,9 @@ int
defined back to back. Merged the two
ifdefs.
No functional change is intended here. It only aims to improve code
readability.
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/include/asm/paca.h | 4
arch/powerpc/kernel/asm-offsets.c | 2 --
arch/powerpc/platforms/powermac
/36VP 37.369.2
12EC/48VP 38.578.3
Fixes: 0e8a63132800 ("powerpc/pseries: Implement
CONFIG_PARAVIRT_TIME_ACCOUNTING")
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/platforms/pseries/lpar.c | 8 ++--
1 file changed, 6 insertions(+), 2
nge is intended here. It only aims to improve code
readability.
[1] https://lore.kernel.org/all/20240118080326.13137-1-sshe...@linux.ibm.com/
Signed-off-by: Shrikanth Hegde
---
Changes from v2:
- Converted from series to individual patches.
- Dropped RFC tag.
- Added more context on each hunk for
patch in powerpc-utils tree.
Signed-off-by: Shrikanth Hegde
---
Note:
This patch needs to merged first in the kernel for the powerpc-utils
patches to work. powerpc-utils patches will be posted to its mailing
list and link would be found in the reply to this patch if available.
arch/powerpc/platforms
On 4/5/24 3:43 PM, Shrikanth Hegde wrote:
> When there are no options specified for lparstat, it is expected to
> give reports since LPAR(Logical Partition) boot. App is an indicator
> for available processor pool in a Shared Processor LPAR (SPLPAR). App is
> derived using pool_idl
On 4/5/24 6:19 PM, Nathan Lynch wrote:
> Shrikanth Hegde writes:
Hi Nathan, Thanks for reviewing this.
>> When there are no options specified for lparstat, it is expected to
>> give reports since LPAR(Logical Partition) boot. App is an indicator
>> for available proces
for h_get_mpp, h_get_ppp calls as well.
v1: https://lore.kernel.org/all/20240405101340.149171-1-sshe...@linux.ibm.com/
Shrikanth Hegde (2):
powerpc/pseries: Add pool idle time at LPAR boot
powerpc/pseries: Add fail related checks for h_get_mpp and h_get_ppp
arch/powerpc/include/asm/hvcal
A couple of minor fixes:
- hcall return values are long. Fix that for h_get_mpp, h_get_ppp and
parse_ppp_data
- If hcall fails, values set should be at least zero. They shouldn't be
uninitialized values. Fix that for h_get_mpp and h_get_ppp
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/in
a
separate patch in powerpc-utils tree.
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/platforms/pseries/lparcfg.c | 39 ++--
1 file changed, 30 insertions(+), 9 deletions(-)
diff --git a/arch/powerpc/platforms/pseries/lparcfg.c
b/arch/powerpc/platforms/pseries/lparcfg.c
On 4/12/24 2:50 PM, Shrikanth Hegde wrote:
> Currently lparstat reports which shows since LPAR boot are wrong for
> some fields. There is a need for storing the PIC(Pool Idle Count) at
> boot for accurate reporting. PATCH 1 Does that.
>
> While there, it was noticed that hcall
On 6/13/24 12:20 AM, Nysal Jan K.A. wrote:
> From: "Nysal Jan K.A"
>
> topology_is_core_online() checks if the core a CPU belongs to
> is online. The core is online if at least one of the sibling
> CPUs is online. The first CPU of an online core is also online
> in the common case, so this sho
On 6/24/24 1:44 AM, Thomas Gleixner wrote:
> Michael!
>
> On Thu, Jun 13 2024 at 21:34, Michael Ellerman wrote:
>> IIUC the regression was in the ppc64_cpu userspace tool, which switched
>> to using the new kernel interface without taking into account the way it
>> behaves.
>>
>> Or are you sa
On 6/25/24 2:54 AM, Thomas Gleixner wrote:
> On Tue, Jun 25 2024 at 00:41, Shrikanth Hegde wrote:
>> On 6/24/24 1:44 AM, Thomas Gleixner wrote:
>>> Right. So changing it not to online a thread when the full core is
>>> offline should not really break stuff.
>>&
line topology_is_core_online
> +static inline bool topology_is_core_online(unsigned int cpu)
> +{
> + int i, first_cpu = cpu_first_thread_sibling(cpu);
> +
> + for (i = first_cpu; i < first_cpu + threads_per_core; ++i) {
> + if (cpu_online(i))
> + return true;
> + }
> + return false;
> +}
> #endif
>
> #endif /* __KERNEL__ */
Reviewed-by: Shrikanth Hegde
int cpu, ret = 0;
> @@ -2699,7 +2709,7 @@ int cpuhp_smt_enable(void)
> /* Skip online CPUs and CPUs on offline nodes */
> if (cpu_online(cpu) || !node_online(cpu_to_node(cpu)))
> continue;
> - if (!cpu_smt_thread_allowed(cpu))
> + if (!cpu_smt_thread_allowed(cpu) ||
> !topology_is_core_online(cpu))
> continue;
> ret = _cpu_up(cpu, 0, CPUHP_ONLINE);
> if (ret)
Reviewed-by: Shrikanth Hegde
On 8/16/24 01:23, Michal Suchánek wrote:
On Fri, Apr 12, 2024 at 02:50:47PM +0530, Shrikanth Hegde wrote:
Couple of Minor fixes:
- hcall return values are long. Fix that for h_get_mpp, h_get_ppp and
parse_ppp_data
- If hcall fails, values set should be at-least zero. It shouldn'
kernel/debug/sched/preempt
none voluntary full (lazy)
perf stat -e probe:__cond_resched -a sleep 1
Performance counter stats for 'system wide':
0 probe:__cond_resched
[1]:
https://lore.kernel.org/all/1a973dda-c79e-4d95-935b-e4b93eb07...@linux.ibm.com/
Signed-off-by: S
Preemption models can change at runtime with dynamic preemption in
place. So need to use the right methods instead of relying on
CONFIG_PREEMPT to decide whether its full preemption or not.
While there, fix it to print preemption model correctly.
Signed-off-by: Shrikanth Hegde
---
arch
PowerPC uses asm-generic preempt definitions as of now.
Copy that into arch/asm so that arch specific changes can be done.
This would help the next patch for enabling dynamic preemption.
No functional changes intended.
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/include/asm/preempt.h | 100
is plan to move preempt count to paca for 64
bit systems as idea was discussed in [2]
[1] https://lore.kernel.org/all/20241116192306.88217-1-sshe...@linux.ibm.com/#t
[2]
https://lore.kernel.org/all/14d4584d-a087-4674-9e2b-810e96078...@linux.ibm.com/
Shrikanth Hegde (3):
powerpc: copy
Arora
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/thread_info.h | 9 ++---
arch/powerpc/kernel/interrupt.c| 4 ++--
3 files changed, 9 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
unnecessary
context switches.
Suggested-by: Sebastian Andrzej Siewior
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/lib/vmx-helper.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/lib/vmx-helper.c b/arch/powerpc/lib/vmx-helper.c
index d491da8d1838..58ed6bd613a6
done.
Refs:
[1]: https://lore.kernel.org/lkml/20241007074609.447006...@infradead.org/
v1: https://lore.kernel.org/all/20241108101853.277808-1-sshe...@linux.ibm.com/
Changes since v1:
- Change for vmx copy as suggested by Sebastian.
- Add rwb tags
Shrikanth Hegde (2):
powerpc: Add preempt lazy
On 11/20/24 13:33, Sebastian Andrzej Siewior wrote:
On 2024-11-19 13:08:31 [-0800], Ankur Arora wrote:
Shrikanth Hegde writes:
Thanks Ankur and Sebastian for taking a look.
Large user copy_to/from (more than 16 bytes) uses vmx instructions to
speed things up. Once the copy is done, it
On 11/20/24 13:30, Sebastian Andrzej Siewior wrote:
On 2024-11-17 00:53:06 [+0530], Shrikanth Hegde wrote:
Large user copy_to/from (more than 16 bytes) uses vmx instructions to
speed things up. Once the copy is done, it makes sense to try schedule
as soon as possible for preemptible kernels
Thank you Sebastian for taking a look and rwb tag.
On 2024-11-08 15:48:53 [+0530], Shrikanth Hegde wrote:
Define preempt lazy bit for Powerpc. Use bit 9 which is free and within
16 bit range of NEED_RESCHED, so compiler can issue single andi.
Since Powerpc doesn't use the generic
On 11/9/24 00:36, Ankur Arora wrote:
Shrikanth Hegde writes:
Define preempt lazy bit for Powerpc. Use bit 9 which is free and within
16 bit range of NEED_RESCHED, so compiler can issue single andi.
Since Powerpc doesn't use the generic entry/exit, add lazy check at exit
to
to be helpful in avoiding soft lockup issues.
[1]: https://lore.kernel.org/lkml/20241007074609.447006...@infradead.org/
[2]:
https://lore.kernel.org/all/1a973dda-c79e-4d95-935b-e4b93eb07...@linux.ibm.com/
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/Kconfig | 1 +
arch/po
On 11/14/24 07:31, Michael Ellerman wrote:
Shrikanth Hegde writes:
Thank you Sebastian for taking a look and rwb tag.
On 2024-11-08 15:48:53 [+0530], Shrikanth Hegde wrote:
Define preempt lazy bit for Powerpc. Use bit 9 which is free and within
16 bit range of NEED_RESCHED, so compiler
On 11/9/24 22:24, Shrikanth Hegde wrote:
On 11/9/24 00:36, Ankur Arora wrote:
Shrikanth Hegde writes:
Define preempt lazy bit for Powerpc. Use bit 9 which is free and within
16 bit range of NEED_RESCHED, so compiler can issue single andi.
Since Powerpc doesn't use the generic
On 11/27/24 12:07, Christophe Leroy wrote:
Le 25/11/2024 à 05:22, Shrikanth Hegde a écrit :
PowerPC uses asm-generic preempt definitions as of now.
Copy that into arch/asm so that arch specific changes can be done.
This would help the next patch for enabling dynamic preemption.
The
On 11/26/24 16:18, Christophe Leroy wrote:
Hi Christophe, Thanks for taking a look at this.
Le 25/11/2024 à 05:22, Shrikanth Hegde a écrit :
Once the lazy preemption is supported, it would be desirable to change
the preemption models at runtime. So this change adds support for dynamic
On 11/26/24 16:23, Christophe Leroy wrote:
Le 16/11/2024 à 20:23, Shrikanth Hegde a écrit :
Define preempt lazy bit for Powerpc. Use bit 9 which is free and within
16 bit range of NEED_RESCHED, so compiler can issue single andi.
Since Powerpc doesn't use the generic entry/exit, add
On 11/27/24 12:14, Christophe Leroy wrote:
Le 25/11/2024 à 05:22, Shrikanth Hegde a écrit :
Once the lazy preemption is supported, it would be desirable to change
the preemption models at runtime. So this change adds support for dynamic
preemption using DYNAMIC_KEY.
In irq-exit to kernel
On 12/2/24 23:47, Christophe Leroy wrote:
Le 02/12/2024 à 15:05, Shrikanth Hegde a écrit :
On 11/27/24 12:07, Christophe Leroy wrote:
Le 25/11/2024 à 05:22, Shrikanth Hegde a écrit :
PowerPC uses asm-generic preempt definitions as of now.
Copy that into arch/asm so that arch specific
group type simplifies from implementation perspective.
So for the idea of using this,
Acked-by: Shrikanth Hegde
Some architectures (e.g. s390) provide virtualization on a firmware
level. This implies, that Linux kernels running on such architectures
run on virtualized CPUs.
Like in other
On 12/4/24 16:51, Tobias Huschle wrote:
In this simplified example, vertical low CPUs are parked generally.
This will later be adjusted by making the parked state dependent
on the overall utilization on the underlying hypervisor.
Vertical lows are always bound to the highest CPU IDs. This imp
On 12/4/24 16:51, Tobias Huschle wrote:
A parked CPU is considered to be flagged as unsuitable to process
workload at the moment, but might become usable at any time, depending on
the necessity for additional computation power and/or available capacity
of the underlying hardware.
A scheduler g
2315-1-sshe...@linux.ibm.com/
v2: https://lore.kernel.org/all/20250102191856.499424-1-sshe...@linux.ibm.com/
[1]:
https://lore.kernel.org/all/173572211264.1875638.9927288574435880962.b4...@linux.ibm.com/
Shrikanth Hegde (1):
powerpc: Enable dynamic preemption
arch/powerpc/Kconfig
d_resched -a sleep 1
Performance counter stats for 'system wide':
0 probe:__cond_resched
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/preempt.h | 12
arch/powerpc/kernel/interrupt.c| 6
On 12/20/24 13:23, Yicong Yang wrote:
From: Yicong Yang
Currently if architectures want to support HOTPLUG_SMT they need to
provide a topology_is_primary_thread() telling the framework which
thread in the SMT cannot be offlined. However arm64 doesn't have a
restriction on which thread in the SMT
On 12/20/24 13:23, Yicong Yang wrote:
From: Yicong Yang
The core CPU control framework supports runtime SMT control which
is not yet supported on arm64. Besides the general vulnerabilities
concerns we want this runtime control on our arm64 server for:
- better single CPU performance in some
On 12/26/24 17:20, Yicong Yang wrote:
On 2024/12/26 17:23, Shrikanth Hegde wrote:
On 12/20/24 13:23, Yicong Yang wrote:
From: Yicong Yang
The core CPU control framework supports runtime SMT control which
is not yet supported on arm64. Besides the general vulnerabilities
concerns we want
On 1/31/25 11:39, Christophe Leroy wrote:
Le 30/01/2025 à 21:26, Sebastian Andrzej Siewior a écrit :
On 2025-01-30 22:27:07 [+0530], Shrikanth Hegde wrote:
| #DEFINE need_irq_preemption() \
|
(static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched))
|
|
On 2/8/25 18:25, Christophe Leroy wrote:
Le 08/02/2025 à 08:35, Shrikanth Hegde a écrit :
On 2/4/25 13:52, Sebastian Andrzej Siewior wrote:
Use preempt_model_str() instead of manually conducting the preemption
model. Use pr_emerg() instead of printk() to pass a loglevel.
even on
On 2/4/25 13:52, Sebastian Andrzej Siewior wrote:
Use preempt_model_str() instead of manually conducting the preemption
model. Use pr_emerg() instead of printk() to pass a loglevel.
even on powerpc, I see __die ends up calling show_regs_print_info().
Why print it twice?
Cc: Madhavan Srini
On 2/10/25 19:53, Sebastian Andrzej Siewior wrote:
On 2025-02-10 11:59:50 [+0100], To Shrikanth Hegde wrote:
Thank you for noticing. I did remove it on other architectures, I
somehow missed it here. Will remove it from from the arch code.
This is what I have for powerpc now. I'm goi
luntary (full) lazy
perf stat -e probe:__cond_resched -a sleep 1
Performance counter stats for 'system wide':
0 probe:__cond_resched
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/preempt.h | 16 ++
l.org/all/20250106051919.55020-1-sshe...@linux.ibm.com/
Shrikanth Hegde (1):
powerpc: enable dynamic preemption
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/preempt.h | 16
arch/powerpc/kernel/interrupt.c| 6 +-
arch/powerpc/lib/vmx-helper.c | 2 +-
On 1/30/25 20:24, Sebastian Andrzej Siewior wrote:
On 2025-01-06 10:49:19 [+0530], Shrikanth Hegde wrote:
--- a/arch/powerpc/kernel/interrupt.c
Thanks for taking a look.
+
#ifdef CONFIG_PPC_BOOK3S_64
DEFINE_STATIC_KEY_FALSE(interrupt_exit_not_reentrant);
static inline bool
On 12/9/24 13:35, Tobias Huschle wrote:
[...]
So I gave it a try with using a debugfs based hint to say which CPUs
are parked.
It is a hack to try it out. patch is below so one could try something
similar is their archs
and see if it help if they have a use case.
Notes:
1. Arch shouldn't
On 11/17/24 00:53, Shrikanth Hegde wrote:
preempt=lazy has been merged into tip[1]. Let's enable it for PowerPC.
This has been very lightly tested and, as Michael suggested, could go
through a test cycle. If needed, patches can be merged. I have kept it
separate for easier bisect.
Lazy
On 1/6/25 10:49, Shrikanth Hegde wrote:
Now that preempt=lazy patches[1] are in powerpc-next tree, sending out the
patch to support dynamic preemption based on DYNAMIC_KEY.
base: powerpc-next
+ankur, sebastian; sorry for not cc'ing earlier.
Once the arch supports static inline call
other method to test it out, please let me know.
So for powerpc bits:
Tested-by: Shrikanth Hegde
diff --git a/arch/powerpc/kernel/trace/ftrace.c
b/arch/powerpc/kernel/trace/ftrace.c
index 5ccd791761e8f..558d7f4e4bea6 100644
--- a/arch/powerpc/kernel/trace/ftrace.c
+++ b/arch/powerpc/kernel/
all/20241125042212.1522315-1-sshe...@linux.ibm.com/
[1]:
https://lore.kernel.org/all/173572211264.1875638.9927288574435880962.b4...@linux.ibm.com/
Shrikanth Hegde (1):
powerpc: Enable dynamic preemption
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/preempt.h | 11 +++
arch/powe
:__cond_resched -a sleep 1
Performance counter stats for 'system wide':
0 probe:__cond_resched
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/preempt.h | 11 +++
arch/powerpc/kernel/interrupt.c| 6
On 2/8/25 23:25, Christophe Leroy wrote:
Le 08/02/2025 à 14:42, Shrikanth Hegde a écrit :
On 2/8/25 18:25, Christophe Leroy wrote:
Le 08/02/2025 à 08:35, Shrikanth Hegde a écrit :
On 2/4/25 13:52, Sebastian Andrzej Siewior wrote:
Use preempt_model_str() instead of manually
On 2/20/25 16:25, Tobias Huschle wrote:
On 18/02/2025 06:58, Shrikanth Hegde wrote:
[...]
There are a couple of issues and corner cases which need further
considerations:
- rt & dl: Realtime and deadline scheduling require some additional
attention.
I think we
On 2/17/25 17:02, Tobias Huschle wrote:
Changes to v1
parked vs idle
- parked CPUs are now never considered to be idle
- a scheduler group is now considered parked iff there are parked CPUs
and there are no idle CPUs, i.e. all non parked CPUs are busy or there
are only parked CPUs. A sc
Hi Tobias.
On 2/17/25 17:02, Tobias Huschle wrote:
A parked CPU is considered to be flagged as unsuitable to process
workload at the moment, but might become usable at any time, depending on
the necessity for additional computation power and/or available capacity
of the underlying hardware.
A sc
use scoped_guard in a couple of places to avoid holding the mutex
unnecessarily (Peter Zijlstra)
Shrikanth Hegde (6):
powerpc: eeh: use lock guard for mutex
powerpc: rtas: use lock guard for mutex
powerpc: fadump: use lock guard for mutex
powerpc: book3s: vas: use lock guard for mutex
powe
use guard(mutex) for scope based resource management of mutex.
This would make the code simpler and easier to maintain.
More details on lock guards can be found at
https://lore.kernel.org/all/20230612093537.614161...@infradead.org/T/#u
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/platforms
use guard(mutex) for scope based resource management of mutex.
This would make the code simpler and easier to maintain.
More details on lock guards can be found at
https://lore.kernel.org/all/20230612093537.614161...@infradead.org/T/#u
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/kernel
use guard(mutex) for scope based resource management of mutex.
This would make the code simpler and easier to maintain.
More details on lock guards can be found at
https://lore.kernel.org/all/20230612093537.614161...@infradead.org/T/#u
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/platforms
On 3/14/25 11:36, Andrew Donnellan wrote:
On Fri, 2025-03-14 at 11:15 +0530, Shrikanth Hegde wrote:
use guard(mutex) for scope based resource management of mutex.
This would make the code simpler and easier to maintain.
More details on lock guards can be found at
https://lore.kernel.org/all
On 3/14/25 11:15, Shrikanth Hegde wrote:
use guard(mutex) for scope based resource management of mutex.
This would make the code simpler and easier to maintain.
More details on lock guards can be found at
https://lore.kernel.org/all/20230612093537.614161...@infradead.org/T/#u
Signed-off-by
use scoped_guard for scope based resource management of mutex.
This would make the code simpler and easier to maintain.
More details on lock guards can be found at
https://lore.kernel.org/all/20230612093537.614161...@infradead.org/T/#u
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/kernel
use guard(mutex) for scope based resource management of mutex.
This would make the code simpler and easier to maintain.
More details on lock guards can be found at
https://lore.kernel.org/all/20230612093537.614161...@infradead.org/T/#u
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/kernel
-by: Shrikanth Hegde
---
arch/powerpc/platforms/book3s/vas-api.c | 32 ++---
1 file changed, 13 insertions(+), 19 deletions(-)
diff --git a/arch/powerpc/platforms/book3s/vas-api.c
b/arch/powerpc/platforms/book3s/vas-api.c
index 0b6365d85d11..d7462c16d828 100644
--- a/arch
use guard(mutex) for scope based resource management of mutex
This would make the code simpler and easier to maintain.
More details on lock guards can be found at
https://lore.kernel.org/all/20230612093537.614161...@infradead.org/T/#u
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/sysdev
On 3/14/25 15:00, Shrikanth Hegde wrote:
On 3/14/25 11:15, Shrikanth Hegde wrote:
use guard(mutex) for scope based resource management of mutex.
This would make the code simpler and easier to maintain.
More details on lock guards can be found at
https://lore.kernel.org/all
use guard(mutex) for scope based resource management of mutex
This would make the code simpler and easier to maintain.
More details on lock guards can be found at
https://lore.kernel.org/all/20230612093537.614161...@infradead.org/T/#u
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/sysdev
use guard(mutex) for scope based resource management of mutex.
This would make the code simpler and easier to maintain.
More details on lock guards can be found at
https://lore.kernel.org/all/20230612093537.614161...@infradead.org/T/#u
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/kernel
-by: Shrikanth Hegde
---
arch/powerpc/platforms/book3s/vas-api.c | 19 ++-
1 file changed, 6 insertions(+), 13 deletions(-)
diff --git a/arch/powerpc/platforms/book3s/vas-api.c
b/arch/powerpc/platforms/book3s/vas-api.c
index 0b6365d85d11..eb1a97271afb 100644
--- a/arch/powerpc
use guard(mutex) for scope based resource management of mutex.
This would make the code simpler and easier to maintain.
More details on lock guards can be found at
https://lore.kernel.org/all/20230612093537.614161...@infradead.org/T/#u
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/kernel
bisect. Let me know if they should
be combined into one. The commit message is the same for all.
Shrikanth Hegde (6):
powerpc: eeh: use lock guard for mutex
powerpc: rtas: use lock guard for mutex
powerpc: fadump: use lock guard for mutex
powerpc: book3s: vas: use lock guard for mutex
powerpc: powen
On 3/14/25 13:55, Peter Zijlstra wrote:
On Fri, Mar 14, 2025 at 11:15:42AM +0530, Shrikanth Hegde wrote:
use guard(mutex) for scope based resource management of mutex.
This would make the code simpler and easier to maintain.
More details on lock guards can be found at
https
use guard(mutex) for scope based resource management of mutex.
This would make the code simpler and easier to maintain.
More details on lock guards can be found at
https://lore.kernel.org/all/20230612093537.614161...@infradead.org/T/#u
Signed-off-by: Shrikanth Hegde
---
arch/powerpc/kernel
On 3/14/25 13:52, Peter Zijlstra wrote:
Thanks Peter for taking a look.
On Fri, Mar 14, 2025 at 11:15:41AM +0530, Shrikanth Hegde wrote:
use guard(mutex) for scope based resource management of mutex.
This would make the code simpler and easier to maintain.
More details on lock guards can
98 matches