On 2023-05-16 18:35, Dietmar Eggemann wrote:
On 15/05/2023 13:46, Tobias Huschle wrote:
The current load balancer implementation implies that scheduler groups,
within the same scheduler domain, all host the same number of CPUs.
This appears to be valid for non-s390 architectures. Nevertheless
On 2023-07-05 09:52, Vincent Guittot wrote:
Le lundi 05 juin 2023 à 10:07:16 (+0200), Tobias Huschle a écrit :
On 2023-05-16 15:36, Vincent Guittot wrote:
> On Mon, 15 May 2023 at 13:46, Tobias Huschle
> wrote:
> >
> > The current load balancer implementation implies that sc
On 2023-07-04 15:40, Peter Zijlstra wrote:
On Mon, May 15, 2023 at 01:46:01PM +0200, Tobias Huschle wrote:
The current load balancer implementation implies that scheduler groups,
within the same domain, all host the same number of CPUs. This is
reflected in the condition, that a scheduler
On 2023-07-06 19:19, Shrikanth Hegde wrote:
On 5/15/23 5:16 PM, Tobias Huschle wrote:
The current load balancer implementation implies that scheduler groups,
within the same domain, all host the same number of CPUs. This is
reflected in the condition, that a scheduler group, which is load
On 2023-07-07 16:33, Shrikanth Hegde wrote:
On 7/7/23 1:14 PM, Tobias Huschle wrote:
On 2023-07-05 09:52, Vincent Guittot wrote:
Le lundi 05 juin 2023 à 10:07:16 (+0200), Tobias Huschle a écrit :
On 2023-05-16 15:36, Vincent Guittot wrote:
> On Mon, 15 May 2023 at 13:46, Tobias Husc
ones that are
about to run into SMT.
Feedback would be greatly appreciated.
Tobias Huschle (1):
sched/fair: Consider asymmetric scheduler groups in load balancer
kernel/sched/fair.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
--
2.34.1
the scheduler groups into account, ensures that
a load balancing CPU within a smaller group will not try to pull tasks
from a bigger group while the bigger group still has idle CPUs
available.
Signed-off-by: Tobias Huschle
---
kernel/sched/fair.c | 3 ++-
1 file changed, 2 insertions(+), 1
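The balancing rule described above (a CPU in a smaller group should not pull tasks from a bigger group while the bigger group still has idle CPUs) can be sketched as a simplified userspace model. This is not the actual kernel patch; struct fields and function names are illustrative stand-ins for the scheduler's internal statistics.

```c
#include <stdbool.h>

/* Illustrative model of the asymmetric-group condition described above. */
struct sched_group_stats {
	unsigned int group_weight;   /* number of CPUs in the group */
	unsigned int sum_nr_running; /* runnable tasks across the group */
};

/* A group still has an idle CPU while it runs fewer tasks than it has CPUs. */
static bool group_has_idle_cpu(const struct sched_group_stats *g)
{
	return g->sum_nr_running < g->group_weight;
}

/* A smaller local group refrains from pulling while the bigger busiest
 * group can still spread its tasks over idle CPUs of its own. */
static bool should_pull(const struct sched_group_stats *local,
			const struct sched_group_stats *busiest)
{
	if (local->group_weight < busiest->group_weight &&
	    group_has_idle_cpu(busiest))
		return false;
	return true;
}
```

With groups of equal size the check never blocks a pull, so non-asymmetric topologies behave as before.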
On 2023-05-16 15:36, Vincent Guittot wrote:
On Mon, 15 May 2023 at 13:46, Tobias Huschle
wrote:
The current load balancer implementation implies that scheduler groups,
within the same domain, all host the same number of CPUs. This is
reflected in the condition, that a scheduler group, which
grant the kworker time to execute?
In the vhost case, this is currently attempted through a cond_resched
which is not doing anything because the need_resched flag is not set.
Feedback and opinions would be highly appreciated.
Signed-off-by: Tobias Huschle
---
kernel/sched/fair.c | 5
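Why the cond_resched() mentioned above does nothing can be sketched in a minimal userspace model: it only yields when the need_resched flag was already set. The names and the flag handling here are illustrative, not the kernel's actual implementation.

```c
#include <stdbool.h>

/* Illustrative stand-in for the per-task TIF_NEED_RESCHED flag. */
static bool need_resched_flag;

/* Simplified model of cond_resched(): without the flag set beforehand,
 * the call is a no-op, which is the situation described in the mail. */
static int model_cond_resched(void)
{
	if (!need_resched_flag)
		return 0;          /* flag not set: nothing happens */
	need_resched_flag = false;
	return 1;                  /* a reschedule would occur here */
}
```

So in the vhost scenario the kworker never runs, because nothing ever sets the flag before the cond_resched() call.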
opinions would be highly appreciated.
Signed-off-by: Tobias Huschle
---
kernel/sched/fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 61c4ef20a2f8..e9733ef7964a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@
On Thu, Feb 29, 2024 at 09:06:16AM +0530, K Prateek Nayak wrote:
> (+ Xuewen Yan, Ke Wang)
>
> Hello Tobias,
>
<...>
> >
> > Questions:
> > 1. The kworker getting its negative lag occurs in the following scenario
> >- kworker and a cgroup are supposed to execute on the same CPU
> >- one
On Fri, Mar 08, 2024 at 03:11:38PM +, Luis Machado wrote:
> On 2/28/24 16:10, Tobias Huschle wrote:
> >
> > Questions:
> > 1. The kworker getting its negative lag occurs in the following scenario
> >- kworker and a cgroup are supposed to execute on the same C
On 2024-03-18 15:45, Luis Machado wrote:
On 3/14/24 13:45, Tobias Huschle wrote:
On Fri, Mar 08, 2024 at 03:11:38PM +, Luis Machado wrote:
On 2/28/24 16:10, Tobias Huschle wrote:
Questions:
1. The kworker getting its negative lag occurs in the following
scenario
- kworker and a
On Tue, Mar 19, 2024 at 02:41:14PM +0100, Vincent Guittot wrote:
> On Tue, 19 Mar 2024 at 10:08, Tobias Huschle wrote:
> >
> > On 2024-03-18 15:45, Luis Machado wrote:
> > > On 3/14/24 13:45, Tobias Huschle wrote:
> > >> On Fri, Mar 08, 2024 at 03:11:38PM +00
On Wed, Mar 20, 2024 at 02:51:00PM +0100, Vincent Guittot wrote:
> On Wed, 20 Mar 2024 at 08:04, Tobias Huschle wrote:
> > There was no guarantee of course. place_entity was reducing the vruntime of
> > woken up tasks though, giving them a slight boost, right? For the scenario
>
On Fri, Mar 22, 2024 at 06:02:05PM +0100, Vincent Guittot wrote:
> and then
> se->vruntime = max_vruntime(se->vruntime, vruntime)
>
First things first, I was wrong to assume a "boost" in the CFS code. So I
dug a bit deeper and tried to pinpoint what the difference between CFS and
EEVDF actual
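The max_vruntime() clamp quoted above can be sketched as a simplified model of the pre-EEVDF place_entity() wakeup path: a woken task is placed slightly before min_vruntime, but the clamp prevents a task that slept only briefly from gaining credit. Constants and function names here are illustrative.

```c
typedef unsigned long long u64;

/* Wrap-safe comparison, modeled on the kernel's max_vruntime(). */
static u64 max_vruntime(u64 a, u64 b)
{
	return (long long)(a - b) > 0 ? a : b;
}

/* Simplified model of the wakeup placement: target a vruntime slightly
 * before min_vruntime, then apply the clamp quoted in the mail:
 *   se->vruntime = max_vruntime(se->vruntime, vruntime)
 */
static u64 place_wakeup_vruntime(u64 se_vruntime, u64 min_vruntime,
				 u64 half_latency)
{
	u64 target = min_vruntime - half_latency;
	return max_vruntime(se_vruntime, target);
}
```

A long sleeper gets pulled up to the target (the mild "boost" discussed above), while a short sleeper keeps its own, already larger, vruntime.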
polarized CPUs are always clustered by ID.
This has the following implications:
- There can be scheduler domains consisting of only vertical highs
- There can be scheduler domains consisting of only vertical lows
Signed-off-by: Tobias Huschle
---
arch/s390/include/asm/topology.h | 3 +++
arch/s390
h serves as a simplified implementation example.
Tobias Huschle (2):
sched/fair: introduce new scheduler group type group_parked
s390/topology: Add initial implementation for selection of parked CPUs
arch/s390/include/asm/topology.h | 3 +
arch/s390/kernel/topology.c | 5 ++
includ
whether a CPU is parked is architecture specific.
For architectures not relying on this feature, the check is a NOP.
This is more efficient and non-disruptive compared to CPU hotplug in
environments where such changes can be necessary on a frequent basis.
Signed-off-by: Tobias Huschle
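The "NOP for other architectures" behaviour described above is typically achieved with an overridable default hook. The sketch below is an assumption about the shape of such a hook, not the series' exact header; the actual definition lives in the patch set.

```c
#include <stdbool.h>

/* Illustrative default for the architecture hook: unless an architecture
 * provides its own arch_cpu_parked(), no CPU is ever considered parked,
 * so the check compiles down to a constant and costs nothing. */
#ifndef arch_cpu_parked
static inline bool arch_cpu_parked(int cpu)
{
	(void)cpu;
	return false;   /* default: no CPU is ever parked */
}
#endif
```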
On 05/12/2024 19:12, Shrikanth Hegde wrote:
On 12/4/24 16:51, Tobias Huschle wrote:
In this simplified example, vertical low CPUs are generally parked.
This will later be adjusted by making the parked state dependent
on the overall utilization of the underlying hypervisor.
Vertical lows
On 05/12/2024 15:48, Shrikanth Hegde wrote:
On 12/4/24 16:51, Tobias Huschle wrote:
Adding a new scheduler group type which allows removing all tasks
from certain CPUs through load balancing can help in scenarios where
such CPUs are currently unfavorable to use, for example in a
On 05/12/2024 19:04, Shrikanth Hegde wrote:
On 12/4/24 16:51, Tobias Huschle wrote:
A parked CPU is considered to be flagged as unsuitable to process
workload at the moment, but might become usable anytime. Depending on
the necessity for additional computation power and/or available
On 10/12/2024 21:24, Shrikanth Hegde wrote:
On 12/9/24 13:35, Tobias Huschle wrote:
[...]
So I gave it a try using a debugfs-based hint to say which CPUs
are parked.
It is a hack to try it out. The patch is below, so one could try something
similar on their archs
and see if it helps if
On 10/12/2024 21:24, Shrikanth Hegde wrote:
On 12/9/24 13:35, Tobias Huschle wrote:
[...]
It was happening with the 100% stress-ng case. I was wondering, since I
don't have nohz_full enabled.
I found out the reason why, and one way to do it is to trigger active load
balance if there are any
always clustered by ID.
This has the following implications:
- There might be scheduler domains consisting of only vertical highs
- There might be scheduler domains consisting of only vertical lows
Signed-off-by: Tobias Huschle
---
arch/s390/include/asm/smp.h | 2 ++
arch/s390/kernel/smp.c
non-disruptive compared to CPU hotplug in
environments where such changes can be necessary on a frequent basis.
Signed-off-by: Tobias Huschle
---
include/linux/sched/topology.h | 19
kernel/sched/core.c | 13 -
kernel/sched/fair.c | 86
to have a larger weight.
The same consideration holds true for the CPU capacities of such groups.
A group of parked CPUs should not be considered to have any capacity.
Signed-off-by: Tobias Huschle
---
kernel/sched/fair.c | 18 ++
1 file changed, 14 insertions(+), 4 deletions
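The weight and capacity adjustment described above (a group of parked CPUs should contribute no capacity) can be sketched as a small userspace model. Struct fields and the aggregation function are simplified stand-ins, not the kernel's actual data structures.

```c
#include <stdbool.h>

/* Illustrative per-CPU state for the model. */
struct cpu_stat {
	bool parked;
	unsigned long capacity;
};

/* Simplified aggregation: parked CPUs contribute neither weight nor
 * capacity to their scheduler group, so the load balancer sees the
 * group as smaller and weaker while those CPUs are parked. */
static void aggregate_group(const struct cpu_stat *cpus, int nr,
			    unsigned int *group_weight,
			    unsigned long *group_capacity)
{
	*group_weight = 0;
	*group_capacity = 0;
	for (int i = 0; i < nr; i++) {
		if (cpus[i].parked)
			continue;   /* parked: adds no weight or capacity */
		(*group_weight)++;
		*group_capacity += cpus[i].capacity;
	}
}
```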
d implementation example.
Tobias Huschle (3):
sched/fair: introduce new scheduler group type group_parked
sched/fair: adapt scheduler group weight and capacity for parked CPUs
s390/topology: Add initial implementation for selection of parked CPUs
arch/s390/include/asm/smp.h | 2 +
On 18/02/2025 06:44, Shrikanth Hegde wrote:
[...]
@@ -1352,6 +1352,9 @@ bool sched_can_stop_tick(struct rq *rq)
if (rq->cfs.h_nr_queued > 1)
return false;
+ if (rq->cfs.nr_running > 0 && arch_cpu_parked(cpu_of(rq)))
+ return false;
+
you mean rq->cfs.h_nr_queued or
tion example.
Gave it a try on powerpc with the debugfs file. It works for
sched_normal tasks.
That's great to hear!
Tobias Huschle (3):
sched/fair: introduce new scheduler group type group_parked
sched/fair: adapt scheduler group weight and capacity for parked CPUs
s390/topo