On 02/07/2020 16:42, Vincent Guittot wrote:
> task_h_load() can return 0 in some situations, like running stress-ng
> mmapfork, which forks thousands of threads, in a sched group on a
> 224-core system. The load balancer doesn't handle this correctly because
I guess the issue here is that 'cfs_rq->
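For illustration, a minimal sketch of one way to guard the consumer side in
detach_tasks() (kernel/sched/fair.c); whether this matches the fix that was
eventually merged is not visible in this excerpt:

	case migrate_load:
		/*
		 * A deep enough task_group nesting can make task_h_load()
		 * underflow to 0; make sure a detached task contributes
		 * at least 1 to the moved load.
		 */
		load = max_t(unsigned long, task_h_load(p), 1);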
+ Patrick Bellasi
+ Qais Yousef
On 02.10.20 07:38, Yun Hsiang wrote:
> On Wed, Sep 30, 2020 at 03:12:51PM +0200, Dietmar Eggemann wrote:
[...]
>> On 28/09/2020 10:26, Yun Hsiang wrote:
>>> If the user wants to release the util clamp and let cgroup control it,
>>>
Hi Yun,
On 28/09/2020 10:26, Yun Hsiang wrote:
> If the user wants to release the util clamp and let cgroup control it,
> we need a method to reset.
>
> So if the user set the task uclamp to the default value (0 for UCLAMP_MIN
> and 1024 for UCLAMP_MAX), reset the user_defined flag to release
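A userspace sketch of the "reset via default values" idea proposed above
(flag names per include/uapi/linux/sched.h; glibc has no sched_setattr()
wrapper, hence the raw syscall; the reset semantics themselves are the
proposal under discussion, not settled API):

	#define _GNU_SOURCE
	#include <unistd.h>
	#include <sys/syscall.h>
	#include <linux/sched.h>		/* SCHED_FLAG_* */
	#include <linux/sched/types.h>		/* struct sched_attr */

	static int uclamp_set_defaults(pid_t pid)
	{
		struct sched_attr attr = {
			.size		= sizeof(attr),
			.sched_flags	= SCHED_FLAG_KEEP_ALL |
					  SCHED_FLAG_UTIL_CLAMP_MIN |
					  SCHED_FLAG_UTIL_CLAMP_MAX,
			.sched_util_min	= 0,	/* default for UCLAMP_MIN */
			.sched_util_max	= 1024,	/* default for UCLAMP_MAX */
		};

		return syscall(SYS_sched_setattr, pid, &attr, 0);
	}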
On 25/09/2020 19:49, Valentin Schneider wrote:
>
> On 25/09/20 13:19, Valentin Schneider wrote:
>> On 25/09/20 12:58, Dietmar Eggemann wrote:
>>> With Valentin's print_rq() inspired test snippet I always see one of the
>>> RT user tasks as the second guy
On 25/09/2020 21:10, Hui Su wrote:
> Macro for_each_leaf_cfs_rq_safe() uses list_for_each_entry_safe(),
> which is safe against removal of list entries, but we only
> print the cfs_rq data and don't remove any list entry in
> print_cfs_stats().
>
> Thus, add macro for_each_leaf_cfs_rq() based on
> list_f
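A sketch of the proposed macro, mirroring the existing _safe variant
(assuming rq->leaf_cfs_rq_list and the leaf_cfs_rq_list member as used in
kernel/sched/fair.c):

	#define for_each_leaf_cfs_rq(rq, cfs_rq)			\
		list_for_each_entry(cfs_rq, &(rq)->leaf_cfs_rq_list,	\
				    leaf_cfs_rq_list)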
On 05/10/2020 16:57, Peter Zijlstra wrote:
> Since we now migrate tasks away before DYING, we should also move
> bandwidth unthrottle, otherwise we can gain tasks from unthrottle
> after we expect all tasks to be gone already.
>
> Also; it looks like the RT balancers don't respect cpu_active() and
On 05/10/2020 16:57, Peter Zijlstra wrote:
[...]
> --- a/kernel/sched/rt.c
> +++ b/kernel/sched/rt.c
> @@ -1859,7 +1859,7 @@ static struct task_struct *pick_next_pus
> * running task can migrate over to a CPU that is running a task
> * of lesser priority.
> */
> -static int push_rt_task(str
On 12/10/2020 13:28, Peter Zijlstra wrote:
> On Mon, Oct 12, 2020 at 11:56:09AM +0200, Dietmar Eggemann wrote:
>> On 05/10/2020 16:57, Peter Zijlstra wrote:
>>
>> [...]
>>
>>> --- a/kernel/sched/rt.c
>>> +++ b/kernel/sched/rt.c
>>> @@ -1859
On 12/10/2020 15:18, Peter Zijlstra wrote:
> On Mon, Oct 12, 2020 at 02:52:00PM +0200, Peter Zijlstra wrote:
>> On Fri, Oct 09, 2020 at 10:41:11PM +0200, Dietmar Eggemann wrote:
>>> On 05/10/2020 16:57, Peter Zijlstra wrote:
>>>> Since we now migrate tasks away bef
Hi Yun,
On 12/10/2020 18:31, Yun Hsiang wrote:
> If the user wants to stop controlling uclamp and let the task inherit
> the value from the group, we need a method to reset.
>
> Add SCHED_FLAG_UTIL_CLAMP_RESET flag to allow the user to reset uclamp via
> sched_setattr syscall.
before we decide o
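For context, a sketch of how the proposed flag would be used from userspace
(SCHED_FLAG_UTIL_CLAMP_RESET and its value are the patch's proposal, not a
mainline API; 0x80 is assumed here from the existing flag layout):

	#define SCHED_FLAG_UTIL_CLAMP_RESET	0x80	/* assumed value */

	struct sched_attr attr = {
		.size		= sizeof(attr),
		.sched_flags	= SCHED_FLAG_UTIL_CLAMP_RESET | SCHED_FLAG_KEEP_ALL,
	};
	/* task falls back to group/system clamps */
	syscall(SYS_sched_setattr, pid, &attr, 0);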
On 25/09/2020 12:10, Peter Zijlstra wrote:
> On Fri, Sep 25, 2020 at 11:12:09AM +0200, Dietmar Eggemann wrote:
>
>> I get this when running 6 (periodic) RT50 tasks with CPU hp stress on my
>> 6 CPU JUNO board (!CONFIG_PREEMPT_RT).
>>
>> [ 55.
On 21/09/2020 18:36, Peter Zijlstra wrote:
[...]
> This replaces the unlikely(rq->balance_callbacks) test at the tail of
> context_switch with an unlikely(rq->balance_work), the fast path is
While looking for why BALANCE_WORK is needed:
Shouldn't this be unlikely(rq->balance_callback) and
unlik
On 25/09/2020 15:59, Quentin Perret wrote:
> Hey Ionela,
>
> On Thursday 24 Sep 2020 at 17:10:02 (+0100), Ionela Voinescu wrote:
>> I'm not sure what is a good way of fixing this.. I could add more info
>> to the warning to suggest it might be temporary ("Disabling EAS:
>> frequency-invariant load
On 28/05/2020 20:29, Peter Zijlstra wrote:
> On Thu, May 28, 2020 at 05:51:31PM +0100, Qais Yousef wrote:
>
>> In my head, the simpler version of
>>
>> if (rt_task(p) && !uc->user_defined)
>> // update_uclamp_min
>>
>> Is a single branch and write to cache, so should be fast. I'm
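A sketch of that simple version spelled out (helper and sysctl names assumed
to match what later landed in kernel/sched/core.c):

	static void uclamp_sync_util_min_rt_default(struct task_struct *p)
	{
		struct uclamp_se *uc_se = &p->uclamp_req[UCLAMP_MIN];

		/*
		 * Single branch plus one write: only refresh UCLAMP_MIN for RT
		 * tasks whose clamp was never set explicitly by the user.
		 */
		if (rt_task(p) && !uc_se->user_defined)
			uclamp_se_set(uc_se, sysctl_sched_uclamp_util_min_rt_default,
				      false);
	}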
On 08/05/2020 19:02, Tao Zhou wrote:
> On Fri, May 08, 2020 at 05:27:44PM +0200, Vincent Guittot wrote:
>> On Fri, 8 May 2020 at 17:12, Tao Zhou wrote:
>>>
>>> Hi Phil,
>>>
>>> On Thu, May 07, 2020 at 04:36:12PM -0400, Phil Auld wrote:
sched/fair: Fix enqueue_task_fair warning some more
[...
--
>
> V2: Added "requested" prefix (suggested by Valentin)
Reviewed-by: Dietmar Eggemann
>
> kernel/sched/debug.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
> index a562df5..77eceb
On 11/05/2020 11:36, Vincent Guittot wrote:
> On Mon, 11 May 2020 at 10:40, Dietmar Eggemann
> wrote:
>>
>> On 08/05/2020 19:02, Tao Zhou wrote:
>>> On Fri, May 08, 2020 at 05:27:44PM +0200, Vincent Guittot wrote:
>>>> On Fri, 8 May 2020 at 17:1
On 28/02/2020 10:07, Parth Shah wrote:
> Introduce the latency_nice attribute to sched_attr and provide a
> mechanism to change the value with the use of sched_setattr/sched_getattr
> syscall.
>
> Also add new flag "SCHED_FLAG_LATENCY_NICE" to hint the change in
> latency_nice of the task on every
On 11/05/2020 14:12, Vincent Guittot wrote:
> On Mon, 11 May 2020 at 12:39, Dietmar Eggemann
> wrote:
>>
>> On 11/05/2020 11:36, Vincent Guittot wrote:
>>> On Mon, 11 May 2020 at 10:40, Dietmar Eggemann
>>> wrote:
>>>>
>>>> On 08/05/
Hi Tao,
On 11/05/2020 17:44, Tao Zhou wrote:
> Hi Dietmar,
[...]
> On Mon, May 11, 2020 at 12:39:52PM +0200, Dietmar Eggemann wrote:
>> On 11/05/2020 11:36, Vincent Guittot wrote:
>>> On Mon, 11 May 2020 at 10:40, Dietmar Eggemann
>>> wrote:
>>>>
On 04/05/2020 05:58, Pavan Kondeti wrote:
> On Fri, May 01, 2020 at 06:12:07PM +0200, Dietmar Eggemann wrote:
>> On 30/04/2020 15:10, Pavan Kondeti wrote:
>>> On Mon, Apr 27, 2020 at 10:37:08AM +0200, Dietmar Eggemann wrote:
>>>> From: Luca Abeni
>>
>
On 29.05.20 12:08, Mel Gorman wrote:
> On Thu, May 28, 2020 at 06:11:12PM +0200, Peter Zijlstra wrote:
>>> FWIW, I think you're referring to Mel's notice in OSPM regarding the
>>> overhead.
>>> Trying to see what goes on in there.
>>
>> Indeed, that one. The fact that regular distros cannot enable
Remove redundant functions, parameters and macros from the task
scheduler code.
Dietmar Eggemann (4):
sched/pelt: Remove redundant cap_scale() definition
sched/core: Remove redundant 'preempt' param from
sched_class->yield_to_task()
sched/idle,stop: Remove .get_rr_
Commit 6d1cafd8b56e ("sched: Resched proper CPU on yield_to()") moved
the code to resched the CPU from yield_to_task_fair() to yield_to()
making the preempt parameter in sched_class->yield_to_task()
unnecessary. Remove it. No other sched_class implements yield_to_task().
Signed-of
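The callback before and after, as a sketch of the change (sched_class member
declarations only):

	/* before */
	bool (*yield_to_task)(struct rq *rq, struct task_struct *p, bool preempt);
	/* after */
	bool (*yield_to_task)(struct rq *rq, struct task_struct *p);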
Besides PELT, cap_scale() is also used in the deadline scheduler class for
scale-invariant bandwidth enforcement.
Remove the cap_scale() definition in kernel/sched/pelt.c and keep the
one in kernel/sched/sched.h.
Signed-off-by: Dietmar Eggemann
---
kernel/sched/pelt.c | 2 --
1 file changed, 2
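For reference, the definition that survives in kernel/sched/sched.h:

	#define cap_scale(v, s)	((v)*(s) >> SCHED_CAPACITY_SHIFT)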
Commit a57beec5d427
("sched: Make sched_class::get_rr_interval() optional") introduced
the default time-slice=0 for sched classes which do not provide this
function.
So .get_rr_interval for the idle and stop sched classes can be removed to
shrink the code a little.
Signed-off-by: Dietmar Eggemann
-
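The removed implementations were pure boilerplate, e.g. for the idle class
(a sketch; function name assumed per kernel/sched/idle.c):

	static unsigned int get_rr_interval_idle(struct rq *rq, struct task_struct *task)
	{
		return 0;	/* now the core default for absent .get_rr_interval */
	}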
Since commit 8ec59c0f5f49 ("sched/topology: Remove unused 'sd'
parameter from arch_scale_cpu_capacity()") it is no longer needed.
Signed-off-by: Dietmar Eggemann
---
kernel/sched/fair.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/f
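Since that commit the interface takes only a CPU number; the generic default
(include/linux/sched/topology.h) is simply a sketch like:

	static inline unsigned long arch_scale_cpu_capacity(int cpu)
	{
		return SCHED_CAPACITY_SCALE;
	}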
On 28/08/2020 12:27, Qais Yousef wrote:
> On 08/28/20 10:00, vincent.donnef...@arm.com wrote:
>> From: Vincent Donnefort
>>
>> rq->cpu_capacity is a key element in several scheduler parts, such as EAS
>> task placement and load balancing. Tracking this value enables testing
>> and/or debugging by
+ Phil Auld
On 28/08/2020 19:26, Qais Yousef wrote:
> On 08/28/20 19:10, Dietmar Eggemann wrote:
>> On 28/08/2020 12:27, Qais Yousef wrote:
>>> On 08/28/20 10:00, vincent.donnef...@arm.com wrote:
>>>> From: Vincent Donnefort
[...]
>> Can you remind me wh
race_*() helper functions can be coded in a tp-2-te converter.
Remove them from kernel/sched/fair.c.
Signed-off-by: Dietmar Eggemann
---
include/linux/sched.h | 13 ---
kernel/sched/fair.c | 86 ---
2 files changed, 99 deletions(-)
diff --git a/include/
: Dietmar Eggemann
---
kernel/sched/autogroup.c | 8
kernel/sched/autogroup.h | 8 +++-
2 files changed, 7 insertions(+), 9 deletions(-)
diff --git a/kernel/sched/autogroup.c b/kernel/sched/autogroup.c
index 2067080bb235..3c6c78d909dd 100644
--- a/kernel/sched/autogroup.c
+++ b/kernel
I
in relation to internal scheduler structures.
Dietmar Eggemann (3):
sched/fair: Remove sched_trace_*() helper functions
sched/fair: Remove cfs_rq_tg_path()
sched/autogroup: Change autogroup_path() into a static inline function
include/linux/sched.h| 13 -
kernel/sched/autogroup.
erter.
Remove it from kernel/sched/fair.c.
Signed-off-by: Dietmar Eggemann
---
kernel/sched/fair.c | 19 ---
kernel/sched/sched.h | 3 ---
2 files changed, 22 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f7640af1dcaa..7b9b5ed3c506 100644
--- a/kernel/
On 14/08/2020 01:55, benbjiang(蒋彪) wrote:
> Hi,
>
>> On Aug 13, 2020, at 2:39 AM, Dietmar Eggemann
>> wrote:
>>
>> On 12/08/2020 05:19, benbjiang(蒋彪) wrote:
>>> Hi,
>>>
>>>> On Aug 11, 2020, at 11:54 PM, Dietmar Eggemann
>>>
dequeue_load_avg(cfs_rq, se);
>
> @@ -3102,7 +3102,7 @@ static void reweight_entity(struct cfs_rq *cfs_rq,
> struct sched_entity *se,
>
> enqueue_load_avg(cfs_rq, se);
> if (se->on_rq)
> - account_entity_enqueue(cfs_rq, se);
> + update_load_add(&cfs_rq->load, se->load.weight);
>
> }
Reviewed-by: Dietmar Eggemann
On 07/09/2020 16:51, Qais Yousef wrote:
> On 09/07/20 13:13, pet...@infradead.org wrote:
>> On Mon, Sep 07, 2020 at 11:48:45AM +0100, Qais Yousef wrote:
>>> IMHO the above is a hack. Out-of-tree modules should rely on public headers
>>> and
>>> exported functions only. What you propose means that
On 08/09/2020 17:17, Qais Yousef wrote:
> On 09/08/20 13:17, Dietmar Eggemann wrote:
>> On 07/09/2020 16:51, Qais Yousef wrote:
>>> On 09/07/20 13:13, pet...@infradead.org wrote:
>>>> On Mon, Sep 07, 2020 at 11:48:45AM +0100, Qais Yousef wrote:
>>>>>
On 14/10/2020 16:50, Patrick Bellasi wrote:
>
> On Tue, Oct 13, 2020 at 22:25:48 +0200, Dietmar Eggemann
> wrote...
[...]
>> On 12/10/2020 18:31, Yun Hsiang wrote:
[...]
> Not sure what's the specific use-case Yun is after, but I have at least
> one in my mind.
>
On 14/10/2020 17:00, Yun Hsiang wrote:
> On Tue, Oct 13, 2020 at 10:25:48PM +0200, Dietmar Eggemann wrote:
>> Hi Yun,
>>
>> On 12/10/2020 18:31, Yun Hsiang wrote:
[...]
> The tg uclamp value may also change. If top-app's cpu.uclamp.min changes
> to 50 (~500), th
On 27/10/2020 04:32, Xuewen Yan wrote:
> highest_flag_domain() is meant to return the highest sched_domain
> containing the flag, but if a lower sched_domain doesn't contain
> the flag while a higher sched_domain does, the
> function will return NULL instead of the higher sched_domain.
>
>
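For reference, a sketch of the helper as it stands (per kernel/sched/sched.h),
where the early break is what causes the behavior described above:

	static inline struct sched_domain *highest_flag_domain(int cpu, int flag)
	{
		struct sched_domain *sd, *hsd = NULL;

		for_each_domain(cpu, sd) {
			if (!(sd->flags & flag))
				break;	/* stops at the first level without the flag */
			hsd = sd;
		}

		return hsd;
	}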
On 20/10/2020 09:37, Peter Zijlstra wrote:
> On Mon, Oct 19, 2020 at 04:15:01PM +0200, Dietmar Eggemann wrote:
>> On 14/10/2020 21:54, Peter Zijlstra wrote:
[...]
> Maybe I've not had enough wake-up juice, but I can't seem to locate
> this.
Sorry, I was commenting on my own debug code ;-)
On 22/10/2020 17:33, Vincent Guittot wrote:
> On Thu, 22 Oct 2020 at 16:53, Valentin Schneider
> wrote:
>>
>>
>> Hi Vincent,
>>
>> On 22/10/20 14:43, Vincent Guittot wrote:
[...]
>>> static int
>>> -select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int
>>> target)
>>> +selec
On 25/10/2020 08:36, Yun Hsiang wrote:
> If the user wants to stop controlling uclamp and let the task inherit
> the value from the group, we need a method to reset.
>
> Add SCHED_FLAG_UTIL_CLAMP_RESET flag to allow the user to reset uclamp via
> sched_setattr syscall.
>
> The policy is
> _CLAMP_
On 26/10/2020 16:45, Yun Hsiang wrote:
> Hi Dietmar,
>
> On Mon, Oct 26, 2020 at 10:47:11AM +0100, Dietmar Eggemann wrote:
>> On 25/10/2020 08:36, Yun Hsiang wrote:
>>> If the user wants to stop controlling uclamp and let the task inherit
>>> the value from the
On 28/10/2020 19:41, Yun Hsiang wrote:
> Hi Patrick,
>
> On Wed, Oct 28, 2020 at 11:11:07AM +0100, Patrick Bellasi wrote:
[...]
>> On Tue, Oct 27, 2020 at 16:58:13 +0100, Yun Hsiang
>> wrote...
>>
>>> Hi Dietmar,
>>> On Mon, Oct 26, 2020 at 08:0
On 29/10/2020 14:06, Qais Yousef wrote:
> On 10/29/20 21:02, Yun Hsiang wrote:
>> Hi Qais,
>>
>> On Thu, Oct 29, 2020 at 11:08:18AM +, Qais Yousef wrote:
>>> Hi Yun
>>>
>>> Sorry for chipping in late.
>>>
>>> On 10/25/20 15:36, Yun Hsiang wrote:
[...]
#define SCHED_FLAG_UTIL_CLAMP (
On 20.08.20 22:39, Rik van Riel wrote:
> On Thu, 2020-08-20 at 16:56 +0200, Dietmar Eggemann wrote:
[...]
> The issue happens with a flat runqueue, when t1 goes
> to sleep, but t2 and t3 continue running.
>
> We need to make sure the vruntime for t2 has not been
> advanc
On 08/07/2020 11:47, Vincent Guittot wrote:
> On Wed, 8 Jul 2020 at 11:45, Dietmar Eggemann
> wrote:
>>
>> On 02/07/2020 16:42, Vincent Guittot wrote:
>>> task_h_load() can return 0 in some situations like running stress-ng
>>> mmapfork, which forks thousan
On 14/10/2020 21:48, Peter Zijlstra wrote:
[...]
> + switch (prio) {
> + case CPUPRI_INVALID:
> + cpupri = CPUPRI_INVALID;/* -1 */
> + break;
> +
> + case 0...98:
kernel/sched/cpupri.c:54:7: error: too many decimal points in number
54 | case 0...98
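gcc's case-range extension needs whitespace around the ellipsis; "0...98"
lexes as a malformed floating-point constant, hence the error above:

	case 0 ... 98:		/* compiles */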
iority (INVALID-RT99) to assign to this CPU
+ * @newpri: The priority (INVALID-RT1-RT99-NORMAL-HIGHER) to assign to
this CPU
Reviewed-by: Dietmar Eggemann
we can record why this isn't as nice a
> solution, dunno.
IMHO the '-1' magic value approach is cleaner. Did some light testing on it.
From 2e6a64fac4f2f66a2c6246de33db22c467fa7d33 Mon Sep 17 00:00:00 2001
From: Dietmar Eggemann
Date: Wed, 11 Nov 2020 01:14:33 +0100
Subject
On 11/11/2020 19:04, Peter Zijlstra wrote:
> On Wed, Nov 11, 2020 at 06:41:07PM +0100, Dietmar Eggemann wrote:
>> diff --git a/include/uapi/linux/sched/types.h
>> b/include/uapi/linux/sched/types.h
>> index c852153ddb0d..b9165f17dddc 100644
>> --- a/include/uapi/l
On 12/11/2020 15:41, Qais Yousef wrote:
> On 11/11/20 18:41, Dietmar Eggemann wrote:
>> On 10/11/2020 13:21, Peter Zijlstra wrote:
>>> On Tue, Nov 03, 2020 at 10:37:56AM +0800, Yun Hsiang wrote:
[...]
> I assume we agree then that we don't want to explicitly document
er_defined uclamp_se it is currently first reset
and then set.
Fix this by AND'ing !user_defined with !SCHED_FLAG_UTIL_CLAMP which
stands for the 'sched class change' case.
The related condition 'if (uc_se->user_defined)' is moved from
__setscheduler_uclamp() into uclamp_res
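A sketch of the combined condition described above (a direct transcription of
the sentence; helper and variable names are illustrative):

	/*
	 * !SCHED_FLAG_UTIL_CLAMP stands for the 'sched class change' case:
	 * only reset clamps there which were never set by the user.
	 */
	bool reset = !uc_se->user_defined &&
		     !(attr->sched_flags & SCHED_FLAG_UTIL_CLAMP);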
On 12/11/2020 17:01, Dietmar Eggemann wrote:
> On 12/11/2020 15:41, Qais Yousef wrote:
>> On 11/11/20 18:41, Dietmar Eggemann wrote:
>>> On 10/11/2020 13:21, Peter Zijlstra wrote:
>>>> On Tue, Nov 03, 2020 at 10:37:56AM +0800, Yun Hsiang wrote:
[...]
>> If you
On 03/11/2020 14:48, Qais Yousef wrote:
> Oops, +Juri for real this time.
>
> On 11/03/20 13:46, Qais Yousef wrote:
>> Hi Yun
>>
>> +Juri (A question for you below)
>>
>> On 11/03/20 10:37, Yun Hsiang wrote:
[...]
>>> include/uapi/linux/sched.h | 7 +++--
>>> kernel/sched/core.c| 59 ++
+- 0.19% )
(b) w/o patch: 0.416645 +- 0.000520 seconds time elapsed ( +- 0.12% )
w/ patch: 0.358098 +- 0.000577 seconds time elapsed ( +- 0.16% )
Tested-by: Dietmar Eggemann
> According to tests on hikey, the patch doesn't impact symmetric systems
> compared to the current imple
On 27/07/2020 16:18, Qian Cai wrote:
> On Sun, Jul 12, 2020 at 05:59:16PM +0100, Valentin Schneider wrote:
>> As Russell pointed out [1], this option is severely lacking in the
>> documentation department, and figuring out if one has the required
>> dependencies to benefit from turning it on is not
On 10/07/2020 01:08, chris hyser wrote:
[...]
>> D) Desired behavior:
>
> Reduce the maximum wake-up latency of designated CFS tasks by skipping
> some or all of the idle CPU and core searches by setting a maximum idle
> CPU search value (maximum loop iterations).
>
> Searching 'ALL' as the max
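A sketch of the mechanism in select_idle_cpu()-style code ('nr' as the
per-task search budget is the proposal here; loop shape as in
kernel/sched/fair.c):

	for_each_cpu_wrap(cpu, cpus, target) {
		if (!--nr)
			return -1;	/* budget exhausted: give up the search */
		if (available_idle_cpu(cpu))
			return cpu;
	}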
On 28/07/2020 10:58, Wang Wenhu wrote:
> The only parameter "unsigned long ticks" for calc_global_load is
> never used inside the function definition. Delete it now.
>
> Signed-off-by: Wang Wenhu
> ---
> include/linux/sched/loadavg.h | 2 +-
> kernel/sched/loadavg.c| 2 +-
> kernel/time/
On 28/07/2020 18:16, Valentin Schneider wrote:
>
> Hi,
>
> On 27/07/20 18:45, Dietmar Eggemann wrote:
>> On 27/07/2020 16:18, Qian Cai wrote:
>>> On Sun, Jul 12, 2020 at 05:59:16PM +0100, Valentin Schneider wrote:
[...]
> I went for having SCHED_THERMAL_PRESSU
On 21/07/2020 12:13, Qais Yousef wrote:
> On 07/21/20 10:36, pet...@infradead.org wrote:
>> On Mon, Jul 20, 2020 at 06:19:43PM -0400, Steven Rostedt wrote:
>>> On Mon, 20 Jul 2020 23:49:18 +0200
>>> Peter Zijlstra wrote:
>>>
Steve, would this work for you, or would you prefer renaming the
>>>
On 02/07/2020 13:44, Ionela Voinescu wrote:
> Hi,
>
> On Thursday 02 Jul 2020 at 08:28:18 (+0530), Viresh Kumar wrote:
>> On 01-07-20, 18:05, Rafael J. Wysocki wrote:
>>> On Wed, Jul 1, 2020 at 3:33 PM Ionela Voinescu
>>> wrote:
On Wednesday 01 Jul 2020 at 16:16:17 (+0530), Viresh Kumar wro
now add_running_bw() is called later
>> by enqueue_task_dl(), but rq_clock has already been updated by core's
>> enqueue_task().
>>
>> Daniel, Dietmar, a second pair of eyes (since you authored the commits
>> above)?
>>
>> I'd change the subject to something like "sched/deadline: Stop updating
>> rq_clock before pushing a task".
>
> Looks good to me!
>
> Acked-by: Daniel Bristot de Oliveira
Yes, makes sense to me!
Reviewed-by: Dietmar Eggemann
On 23/06/2020 09:29, Patrick Bellasi wrote:
> .:: Scheduler Wakeup Path Requirements Collection Template
> ==
>
> A) Name: unique one-liner name for the proposed use-case
[SchedulerWakeupLatency] Skip energy aware task placement
> B) Targe
On 13/07/2020 14:59, Vincent Guittot wrote:
> On Fri, 10 Jul 2020 at 21:59, Patrick Bellasi
> wrote:
>>
>>
>> On Fri, Jul 10, 2020 at 15:21:48 +0200, Vincent Guittot
>> wrote...
[...]
>>> Instead, it should weight the decision in wakeup_preempt_entity() and
>>> wakeup_gran()
>>
>> In those fun
On 21/09/2020 18:35, Peter Zijlstra wrote:
> Hi,
>
> Here's my take on migrate_disable(). It avoids growing a second means of
> changing the affinity, documents how the thing violates locking rules but
> still mostly works.
>
> It also avoids blocking completely, so no more futex band-aids requi
0 99
Signed-off-by: Dietmar Eggemann
---
kernel/sched/cpupri.c | 6 +++---
kernel/sched/cpupri.h | 4 ++--
2 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/kernel/sched/cpupri.c b/kernel/sched/cpupri.c
index a5d14ed485f4..8d9952a51664 100644
--- a/kernel/sched/cpupri.c
++
Two of the 102 elements of the cpu priority vector, among them the one
for MAX_PRIO (140) representing the IDLE task, are never used.
Remove them and adapt the cpupri implementation accordingly.
Dietmar Eggemann (2):
sched/cpupri: Remove pri_to_cpu[CPUPRI_IDLE]
sched/cpupri: Remove
50 50 50
5049 49 51
...
99 00 100
Signed-off-by: Dietmar Eggemann
---
kernel/sched/cpupri.c | 10 --
kernel/sched/cpupri.h | 7 +++
2 files changed, 7 insertions(+), 10 deletions(-)
diff --
9c194106 ("sched/uclamp: Add bucket
local max tracking") but was never used.
Reviewed-by: Dietmar Eggemann
On 6/4/19 9:45 AM, Paul E. McKenney wrote:
On Mon, Jun 03, 2019 at 03:39:18PM +0200, Dietmar Eggemann wrote:
On 6/3/19 1:44 PM, Mark Rutland wrote:
On Mon, Jun 03, 2019 at 10:38:48AM +0200, Peter Zijlstra wrote:
On Sat, Jun 01, 2019 at 06:12:53PM -0700, Paul E. McKenney wrote:
Scheduling
On 5/28/19 6:42 AM, Hillf Danton wrote:
On Mon, 27 May 2019 07:21:11 +0100 Dietmar Eggemann wrote:
[...]
@@ -5500,7 +5464,7 @@ wake_affine_weight(struct sched_domain *sd, struct
task_struct *p,
this_eff_load *= 100;
this_eff_load *= capacity_of(prev_cpu
On 6/12/19 9:32 PM, Rik van Riel wrote:
> Use an explicit "cfs_rq of parent sched_entity" helper in a few
> strategic places, where cfs_rq_of(se) may no longer point at the
> right runqueue once we flatten the hierarchical cgroup runqueues.
>
> No functional change.
>
> Signed-off-by: Rik van Rie
On 6/3/19 1:44 PM, Mark Rutland wrote:
On Mon, Jun 03, 2019 at 10:38:48AM +0200, Peter Zijlstra wrote:
On Sat, Jun 01, 2019 at 06:12:53PM -0700, Paul E. McKenney wrote:
Scheduling-clock interrupts can arrive late in the CPU-offline process,
after idle entry and the subsequent call to cpuhp_repo
On 27/04/2020 10:37, Dietmar Eggemann wrote:
[...]
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index 4ae22bfc37ae..eb23e6921d94 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -69,6 +69,25 @@ static inline int d
On 06/05/2020 14:37, Juri Lelli wrote:
> On 06/05/20 12:54, Dietmar Eggemann wrote:
>> On 27/04/2020 10:37, Dietmar Eggemann wrote:
[...]
>> There is an issue w/ excl. cpusets and cpuset.sched_load_balance=0. The
>> latter is needed to demonstrate the problem since DL task
t, that last line of the commit message should read "list_add_leaf_cfs_rq"
>
>
>> Reviewed-by: Vincent Guittot
>
> Thanks Vincent.
>
> Peter/Ingo, do you want me to resend or can you fix when applying?
Maybe you could add that 'the throttled parent was already added back to
the list by a task enqueue in a parallel child hierarchy'.
IMHO, this is part of the description because otherwise the throttled
parent would have connected the branch.
And the not-adding of the intermediate child cfs_rq would have gone
unnoticed.
Reviewed-by: Dietmar Eggemann
[...]
On 11/05/2020 10:01, Juri Lelli wrote:
> On 06/05/20 17:09, Dietmar Eggemann wrote:
>> On 06/05/2020 14:37, Juri Lelli wrote:
>>> On 06/05/20 12:54, Dietmar Eggemann wrote:
>>>> On 27/04/2020 10:37, Dietmar Eggemann wrote:
[...]
>>> to say that we actually
On 07/05/2020 20:05, John Mathew wrote:
[...]
> diff --git a/Documentation/scheduler/cfs-overview.rst
> b/Documentation/scheduler/cfs-overview.rst
> new file mode 100644
> index ..b717f2d3e340
> --- /dev/null
> +++ b/Documentation/scheduler/cfs-overview.rst
> @@ -0,0 +1,113 @@
> +..
On 07/05/2020 23:15, Valentin Schneider wrote:
>
> On 07/05/20 19:05, John Mathew wrote:
[...]
> It would also be an opportunity to have one place to (at least briefly)
> describe what the different sched classes do wrt capacity asymmetry - CFS
> does one thing, RT now does one thing (see Qais'
On 19/06/2020 19:20, Qais Yousef wrote:
> This series attempts to address the report that uclamp logic could be expensive
> sometimes and shows a regression in netperf UDP_STREAM under certain
> conditions.
>
> The first patch is a fix for how struct uclamp_rq is initialized which is
> required by
> sysctl_sched_uclamp_util_max ||
> - sysctl_sched_uclamp_util_max > SCHED_CAPACITY_SCALE) {
> + sysctl_sched_uclamp_util_max > SCHED_CAPACITY_SCALE ||
Nit pick: This extra space looks weird to me.
[...]
Apart from that, LGTM
For both patches of this v5:
Reviewed-by: Dietmar Eggemann
blic by this patch-stack.
This version is based on tip/sched/core as of yesterday (bc4278987e38). It
has been compile tested on ~160 configurations via 0day's kbuild test
robot.
Dietmar Eggemann (5):
sched/autogroup: Define autogroup_path() for !CONFIG_SCHED_DEBUG
sched/events: Introduce c
fined for CONFIG_SMP.
The helper functions __trace_sched_cpu(), __trace_sched_path() and
__trace_sched_id() are extended to deal with sched_entities as well.
Signed-off-by: Dietmar Eggemann
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Steven Rostedt
---
include
Define autogroup_path() even in the !CONFIG_SCHED_DEBUG case. If
CONFIG_SCHED_AUTOGROUP is enabled, the path of an autogroup has to be
available to be printed in the load tracking trace events provided by
this patch-stack, regardless of whether CONFIG_SCHED_DEBUG is set.
Signed-off-by: Dietmar
Export struct cfs_rq *group_cfs_rq(struct sched_entity *se) to be able
to distinguish sched_entities representing either tasks or task_groups
in the sched_entity related load tracking trace event provided by the
next patch.
Signed-off-by: Dietmar Eggemann
Cc: Peter Zijlstra
Cc: Ingo Molnar
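For reference, the helper is trivial; a sched_entity represents a task_group
iff it owns a runqueue (sketch per kernel/sched/sched.h):

	/* runqueue 'owned' by this entity/group; NULL for a task */
	static inline struct cfs_rq *group_cfs_rq(struct sched_entity *grp)
	{
		return grp->my_q;
	}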
SMP.
The helper function __trace_sched_path() can be used to get the length
parameter of the dynamic array (path == NULL) and to copy the path into
it (path != NULL).
Signed-off-by: Dietmar Eggemann
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Steven Rostedt
---
include/trace/eve
0 id=0 load=1050
We don't maintain a load signal for a root task group.
The trace event is only defined if cfs group scheduling support
(CONFIG_FAIR_GROUP_SCHED) is enabled.
Signed-off-by: Dietmar Eggemann
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Steven Rostedt
---
include/trace/events/s
On 03/28/2017 09:56 AM, Peter Zijlstra wrote:
On Tue, Mar 28, 2017 at 07:35:38AM +0100, Dietmar Eggemann wrote:
[...]
(1) a root task_group:
cpu=4 path=/ id=1 load=6 util=331
What's @id and why do we care?
It's a per cgroup/subsystem unique id for every task_group (cpu
On 03/28/2017 12:05 PM, Vincent Guittot wrote:
On 28 March 2017 at 08:35, Dietmar Eggemann wrote:
[...]
The following keys are used to identify the cfs scheduler brick:
(1) Cpu number the cfs scheduler brick is attached to.
(2) Task_group path and (css) id.
(3) Task name and pid.
Do
On 03/28/2017 10:05 AM, Peter Zijlstra wrote:
On Tue, Mar 28, 2017 at 07:35:40AM +0100, Dietmar Eggemann wrote:
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 04d4f81b96ae..d1dcb19f5b55 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2940,6 +2940,8
On 03/28/2017 10:08 AM, Peter Zijlstra wrote:
On Tue, Mar 28, 2017 at 07:35:40AM +0100, Dietmar Eggemann wrote:
diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index 51db8a90e45f..647cfaf528fd 100644
--- a/include/trace/events/sched.h
+++ b/include/trace/events/sched.h
On 03/28/2017 06:41 PM, Peter Zijlstra wrote:
On Tue, Mar 28, 2017 at 04:13:45PM +0200, Dietmar Eggemann wrote:
Do you think that making them public in include/linux/sched.h is the way to
go?
No; all that stuff should really stay private. tracepoints are a very
bad reason to leak this stuff
On 03/28/2017 07:37 PM, Steven Rostedt wrote:
On Tue, 28 Mar 2017 13:36:26 -0400
Steven Rostedt wrote:
But why play games, and rely on the design of the code? A
TRACE_EVENT_CONDTION() is more robust and documents that this
tracepoint should not be called when cfs_rq is NULL.
In other words,
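A minimal sketch of what such a conditional event would look like (event name
and fields invented for illustration):

	TRACE_EVENT_CONDITION(sched_load_cfs_rq,
		TP_PROTO(struct cfs_rq *cfs_rq),
		TP_ARGS(cfs_rq),
		TP_CONDITION(cfs_rq),	/* never fires when cfs_rq is NULL */
		TP_STRUCT__entry(
			__field(unsigned long, load)
		),
		TP_fast_assign(
			__entry->load = cfs_rq->avg.load_avg;
		),
		TP_printk("load=%lu", __entry->load)
	);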
On 03/28/2017 06:44 PM, Peter Zijlstra wrote:
On Tue, Mar 28, 2017 at 10:46:00AM -0400, Steven Rostedt wrote:
On Tue, 28 Mar 2017 07:35:38 +0100
Dietmar Eggemann wrote:
[...]
I too suggested that; but then I looked again at that code and we can
actually do this. cfs_rq can be constant
On 03/30/2017 09:04 AM, Peter Zijlstra wrote:
On Wed, Mar 29, 2017 at 11:03:45PM +0200, Dietmar Eggemann wrote:
[...]
Why not reduce the parameter list of these 3 incarnations to 'now, cpu,
object'?
static int
__update_load_avg_blocked_se(u64 now, int cpu, struct sched_entity *s
On 31/03/17 12:23, Peter Zijlstra wrote:
> On Fri, Mar 31, 2017 at 02:58:57AM -0700, Paul Turner wrote:
>>> So lets pull it out again -- but I don't think we need to undo all of
>>> yuyang's patches for that. So please, look at the patch I proposed for
>>> the problem you spotted. Lets fix the cu
On 19/04/17 17:54, Vincent Guittot wrote:
> In the current implementation of load/util_avg, we assume that the ongoing
> time segment has fully elapsed, and util/load_sum is divided by LOAD_AVG_MAX,
> even if part of the time segment still remains to run. As a consequence, this
> remaining part is