On 26/08/2019 13:24, Peter Zijlstra wrote:
> On Mon, Aug 26, 2019 at 11:51:17AM +0200, Dietmar Eggemann wrote:
>
>> Not sure about the extra 'if trace_cpu_frequency_enabled()' but I guess
>> it doesn't hurt.
>
> Without that you do that for_each_cpu() i
On 04/09/2019 12:39, Ingo Molnar wrote:
>
> * Dietmar Eggemann wrote:
>
>>> -v3 attached. Build and minimally boot tested.
>>>
>>> Thanks,
>>>
>>> Ingo
>>>
>>
>> This patch fixes the issue (almost).
>>
>
On 9/4/19 4:40 PM, Qais Yousef wrote:
> On 09/04/19 07:25, Steven Rostedt wrote:
>> On Tue, 3 Sep 2019 11:33:29 +0100
>> Qais Yousef wrote:
[...]
>>> @@ -1614,7 +1660,8 @@ static void put_prev_task_rt(struct rq *rq, struct
>>> task_struct *p)
>>> static int pick_rt_task(struct rq *rq, struct
On 02/10/2019 08:44, Vincent Guittot wrote:
> On Tue, 1 Oct 2019 at 18:53, Dietmar Eggemann
> wrote:
>>
>> On 01/10/2019 10:14, Vincent Guittot wrote:
>>> On Mon, 30 Sep 2019 at 18:24, Dietmar Eggemann
>>> wrote:
>>>>
>>>> Hi V
On 02/10/2019 10:23, Vincent Guittot wrote:
> On Tue, 1 Oct 2019 at 18:53, Dietmar Eggemann
> wrote:
>>
>> On 01/10/2019 10:14, Vincent Guittot wrote:
>>> On Mon, 30 Sep 2019 at 18:24, Dietmar Eggemann
>>> wrote:
>>>>
>>>> Hi V
[+ Steven Rostedt ]
On 29/08/2019 05:15, Jing-Ting Wu wrote:
> In the original Linux design, the RT & CFS schedulers are independent.
> The current RT task placement policy will select the first CPU in
> lowest_mask, even if that first CPU is running a CFS task.
> This may put an RT task on a busy CPU and let C
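For orientation, a condensed sketch of the placement path being described. This
is not the exact kernel code (the helper name and the elided sched-domain walk
are illustrative), but cpupri_find(), cpumask_test_cpu() and cpumask_first()
are the real primitives used on this path:

/* Sketch: pick a CPU for an RT task from the lowest-priority mask. */
static int find_lowest_rq_sketch(struct task_struct *task,
				 struct cpumask *lowest_mask)
{
	if (!cpupri_find(&task_rq(task)->rd->cpupri, task, lowest_mask))
		return -1;			/* no suitable rq found */

	if (cpumask_test_cpu(task_cpu(task), lowest_mask))
		return task_cpu(task);		/* stay cache-affine */

	/* ... sched-domain walk elided ... */

	return cpumask_first(lowest_mask);	/* may be busy running CFS */
}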
Hi Vincent,
On 19/09/2019 09:33, Vincent Guittot wrote:
these are just some comments & questions based on a code study. Haven't
run any tests with it yet.
[...]
> The type of sched_group has been extended to better reflect the type of
> imbalance. We now have :
> group_has_spare
>
On 19/09/2019 09:33, Vincent Guittot wrote:
[...]
> @@ -8042,14 +8104,24 @@ static inline void update_sg_lb_stats(struct lb_env
> *env,
> }
> }
>
> - /* Adjust by relative CPU capacity of the group */
> + /* Check if dst cpu is idle and preferred to this group */
>
On 01/10/2019 10:14, Vincent Guittot wrote:
> On Mon, 30 Sep 2019 at 18:24, Dietmar Eggemann
> wrote:
>>
>> Hi Vincent,
>>
>> On 19/09/2019 09:33, Vincent Guittot wrote:
[...]
>>> @@ -7347,7 +7362,7 @@ static int detach_tasks(struct lb_env *env)
>
On 01/10/2019 11:14, Vincent Guittot wrote:
> group_asym_packing
>
> On Tue, 1 Oct 2019 at 10:15, Dietmar Eggemann
> wrote:
>>
>> On 19/09/2019 09:33, Vincent Guittot wrote:
>>
>>
>> [...]
>>
>>> @@ -8042,14 +8104,24 @@ stat
[- Quentin Perret ]
[+ Quentin Perret ]
See commit c193a3ffc282 ("mailmap: Update email address for Quentin Perret")
On 07/10/2019 18:53, Parth Shah wrote:
>
>
> On 10/7/19 5:49 PM, Vincent Guittot wrote:
>> On Mon, 7 Oct 2019 at 10:31, Parth Shah wrote:
>>>
>>> The algorithm finds the first n
On 09/10/2019 10:57, Parth Shah wrote:
[...]
>> On 07/10/2019 18:53, Parth Shah wrote:
>>>
>>>
>>> On 10/7/19 5:49 PM, Vincent Guittot wrote:
On Mon, 7 Oct 2019 at 10:31, Parth Shah wrote:
[...]
>>> Maybe I can add just below the sched_energy_present(){...} construct giving
>>> precedence
On 23/09/2019 13:52, Qais Yousef wrote:
> On 09/20/19 14:52, Dietmar Eggemann wrote:
>>> 2. The fallback mechanism means we either have to call cpupri_find()
>>> twice: once to find the filtered lowest_rq and the other to return the
>>> non-filtered vers
On 9/18/19 4:52 PM, Qais Yousef wrote:
> On 09/13/19 14:30, Dietmar Eggemann wrote:
>> On 9/4/19 4:40 PM, Qais Yousef wrote:
>>> On 09/04/19 07:25, Steven Rostedt wrote:
>>>> On Tue, 3 Sep 2019 11:33:29 +0100
>>>> Qais Yousef wrote:
[...]
>> On
On 9/23/19 6:06 PM, Valentin Schneider wrote:
> On 23/09/2019 16:43, Dietmar Eggemann wrote:
>> I'm not sure that CONFIG_DEBUG_PER_CPU_MAPS=y will help you here.
>>
>> __set_cpus_allowed_ptr(...)
>> {
>> ...
>> dest_cpu = cpumask_any_and(..
On 9/19/19 9:20 AM, YT Chang wrote:
> When the system is overutilized, cross-cluster load balancing will be
> triggered and the scheduler will not use energy-aware scheduling to
> choose CPUs.
We're currently transitioning from traditional big.LITTLE (the CPUs of 1
cluster (all having the sam
On 9/15/19 4:33 PM, Valentin Schneider wrote:
> On 15/09/2019 09:21, shikemeng wrote:
>>> It's more thoughtful to add the check in cpumask_test_cpu. It can solve this
>>> problem and prevent other potential bugs. I will test it and resend
>>> a new patch.
>>>
>>
>> Think again and again. As cpumask_
On 30/04/2020 13:00, Pavan Kondeti wrote:
> On Wed, Apr 29, 2020 at 07:39:50PM +0200, Dietmar Eggemann wrote:
>> On 27/04/2020 16:17, luca abeni wrote:
[...]
>>> On Mon, 27 Apr 2020 15:34:38 +0200
>>> Juri Lelli wrote:
[...]
>>>> On 27/04/20 10:37, D
On 30/04/2020 12:55, Pavan Kondeti wrote:
> On Mon, Apr 27, 2020 at 10:37:05AM +0200, Dietmar Eggemann wrote:
[..]
>> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
>> index 504d2f51b0d6..4ae22bfc37ae 100644
>> --- a/kernel/sched/deadline.c
>> ++
On 30/04/2020 15:10, Pavan Kondeti wrote:
> On Mon, Apr 27, 2020 at 10:37:08AM +0200, Dietmar Eggemann wrote:
>> From: Luca Abeni
[...]
>> @@ -1653,10 +1654,19 @@ select_task_rq_dl(struct task_struct *p, int cpu,
>> int sd_flag, int flags)
>> * other hand, i
On 04/05/2020 17:17, Vincent Guittot wrote:
> On Sun, 3 May 2020 at 10:34, Peng Liu wrote:
>>
>> commit c5afb6a87f23 ("sched/fair: Fix nohz.next_balance update")
>> During idle load balance, this_cpu (the ilb) does load balancing for the
>> other idle CPUs and also gathers the earliest (nohz.)next_balance.
>>
On 30/04/2020 14:13, Hillf Danton wrote:
>
> On Tue, 28 Apr 2020 17:32:45 Valentin Schneider wrote:
>>
>>> + else if (fair_policy(policy)) {
>>> + if (attr->sched_nice < MIN_NICE ||
>>> + attr->sched_nice > MAX_NICE)
>>> + return -EINVAL;
>>
>> We can't
On 29/04/2020 14:30, Qais Yousef wrote:
> Hi Pavan
>
> On 04/29/20 17:02, Pavan Kondeti wrote:
>> Hi Qais,
>>
>> On Tue, Apr 28, 2020 at 05:41:33PM +0100, Qais Yousef wrote:
[...]
>>> @@ -907,8 +935,15 @@ uclamp_tg_restrict(struct task_struct *p, enum
>>> uclamp_id clamp_id)
>>> static inline
On 15/10/2019 17:42, Valentin Schneider wrote:
> Turns out hotplugging CPUs that are in exclusive cpusets can lead to the
> cpuset code feeding empty cpumasks to the sched domain rebuild machinery.
> This leads to the following splat:
>
> [ 30.618174] Internal error: Oops: 9604 [#1] PREEMPT
hat specific asym code is also enabled for the CPUs
of the smp rd's wouldn't harm here.
Reviewed-by: Dietmar Eggemann
> Change the simple key enablement to an increment, and decrement the key
> counter when destroying domains that cover asymmetric CPUs.
>
> Cc:
>
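A minimal sketch of the counting approach in the quoted change; the function
names below are illustrative (not from the patch), while static_branch_inc()
and static_branch_dec() are the standard jump-label primitives:

static void asym_cpucapacity_get(void)
{
	/* one reference per root domain covering asymmetric-capacity CPUs */
	static_branch_inc(&sched_asym_cpucapacity);
}

static void asym_cpucapacity_put(void)
{
	/* dropped when such a domain is destroyed; key stays enabled while > 0 */
	static_branch_dec(&sched_asym_cpucapacity);
}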
On 15/10/2019 17:42, Valentin Schneider wrote:
[...]
> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> index c52bc91f882b..a859e5539440 100644
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -817,6 +817,11 @@ static int generate_sched_domains(cpumask_var_t
> **d
On 09/10/2019 19:02, Parth Shah wrote:
>
>
> On 10/9/19 7:56 PM, Dietmar Eggemann wrote:
>> On 09/10/2019 10:57, Parth Shah wrote:
>>
>> [...]
>>
>>>> On 07/10/2019 18:53, Parth Shah wrote:
>>>>>
>>>>>
>>>>
On 27/04/2020 16:17, luca abeni wrote:
> Hi Juri,
>
> On Mon, 27 Apr 2020 15:34:38 +0200
> Juri Lelli wrote:
>
>> Hi,
>>
>> On 27/04/20 10:37, Dietmar Eggemann wrote:
>>> From: Luca Abeni
>>>
>>> When a task has a runtime that canno
On 14/10/2019 18:03, Valentin Schneider wrote:
> On 14/10/2019 14:52, Quentin Perret wrote:
>> Right, but that's not possible by definition -- static keys aren't
>> variables. The static keys for asym CPUs and for EAS are just to
>> optimize the case when they're disabled, but when they _are_ enabl
On 15/10/2019 13:07, Quentin Perret wrote:
> On Tuesday 15 Oct 2019 at 11:22:12 (+0200), Dietmar Eggemann wrote:
>> I still don't understand the benefit of the counter approach here.
>> sched_smt_present counts the number of cores with SMT. So in case you
>> have 2 SMT c
On 11/10/2019 15:44, Douglas RAILLARD wrote:
[...]
> @@ -66,6 +70,38 @@ static DEFINE_PER_CPU(struct sugov_cpu, sugov_cpu);
>
> /*********************** Governor internals ***********************/
>
> +#ifdef CONFIG_ENERGY_MODEL
> +static void sugov_policy_attach_pd(struct sugov_policy *sg_
On 11/10/2019 15:44, Douglas RAILLARD wrote:
[...]
> diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h
> index d249b88a4d5a..dd6a35f099ea 100644
> --- a/include/linux/energy_model.h
> +++ b/include/linux/energy_model.h
> @@ -159,6 +159,53 @@ static inline int em_pd_nr_cap_s
On 11/10/2019 15:44, Douglas RAILLARD wrote:
[...]
> @@ -181,6 +185,42 @@ static void sugov_deferred_update(struct sugov_policy
> *sg_policy, u64 time,
> }
> }
>
> +static unsigned long sugov_cpu_ramp_boost(struct sugov_cpu *sg_cpu)
> +{
> + return READ_ONCE(sg_cpu->ramp_boost);
> +
On 11/10/2019 15:44, Douglas RAILLARD wrote:
[...]
> @@ -539,6 +543,7 @@ static void sugov_update_single(struct update_util_data
> *hook, u64 time,
> unsigned long util, max;
> unsigned int next_f;
> bool busy;
> + unsigned long ramp_boost = 0;
Shouldn't always order local
On 11/10/2019 15:44, Douglas RAILLARD wrote:
[...]
> diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h
> index d249b88a4d5a..dd6a35f099ea 100644
> --- a/include/linux/energy_model.h
> +++ b/include/linux/energy_model.h
> @@ -159,6 +159,53 @@ static inline int em_pd_nr_cap_s
On 17/10/2019 16:11, Peter Zijlstra wrote:
> On Thu, Oct 17, 2019 at 12:11:16PM +0100, Quentin Perret wrote:
[...]
> It only boosts when 'rq->cfs.avg.util' increases while
> 'rq->cfs.avg.util_est.enqueued' remains unchanged (and util > util_est
> obv).
>
> This condition can be true for select_t
On 7/26/19 11:11 AM, luca abeni wrote:
> Hi Dietmar,
>
> On Fri, 26 Jul 2019 09:27:52 +0100
> Dietmar Eggemann wrote:
>
>> push_dl_task() always calls deactivate_task() with flags=0 which sets
>> p->on_rq=TASK_ON_RQ_MIGRATING.
>
> Uhm... This is a
On 7/26/19 2:30 PM, luca abeni wrote:
> Hi,
>
> On Fri, 26 Jul 2019 09:27:52 +0100
> Dietmar Eggemann wrote:
> [...]
>> @@ -2121,17 +2121,13 @@ static int push_dl_task(struct rq *rq)
>> }
>>
>> deactivate_task(rq, next_task, 0);
>>
On 7/29/19 5:54 PM, Peter Zijlstra wrote:
> On Fri, Jul 26, 2019 at 12:18:19PM +0200, luca abeni wrote:
>> Hi Dietmar,
>>
>> On Fri, 26 Jul 2019 09:27:56 +0100
>> Dietmar Eggemann wrote:
>>
>>> To make the decision whether to set rq or running bw to 0 in
On 7/29/19 5:35 PM, Peter Zijlstra wrote:
> On Fri, Jul 26, 2019 at 09:27:53AM +0100, Dietmar Eggemann wrote:
>> The int flags parameter is not used in __dequeue_task_dl(). Remove it.
>
> I just posted a patch(es) that will actually make use of it and extends
> the f
On 7/29/19 5:47 PM, Peter Zijlstra wrote:
> On Fri, Jul 26, 2019 at 09:27:54AM +0100, Dietmar Eggemann wrote:
>> dl_change_utilization() has a BUG_ON() to check that no schedutil
>> kthread (sugov) is entering this function. So instead of calling
>> sub_running_bw() which c
the priority of the calling task.
-----Original Message-----
From: linux-kernel-ow...@vger.kernel.org
[mailto:linux-kernel-ow...@vger.kernel.org] On behalf of Dietmar Eggemann
Sent: Wednesday, 6 February 2019 11:55
To: Frédéric Mathieu ; linux-kernel@vger.kernel.org
Subject: Re: Kerne
ct pointer argument which
also eliminates the entity_is_task(se) if condition in the fork path and
gets rid of the stale comment in remove_entity_load_avg() accordingly.
Signed-off-by: Dietmar Eggemann
---
kernel/sched/core.c | 2 +-
kernel/sched/fair.c | 38
On 1/16/19 10:43 AM, Vincent Guittot wrote:
[...]
+static inline u64 rq_clock_pelt(struct rq *rq)
+{
Doesn't this function need
lockdep_assert_held(&rq->lock);
assert_clock_updated(rq);
like rq_clock() and rq_clock_task()? Later to support commit
cb42c9a3ebbb "sched/core: Add debugging
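For reference, a hedged sketch of what is being asked: rq_clock() already pairs
the two assertions, and the question is whether rq_clock_pelt() should do the
same. The rq_clock_pelt() body below is an assumption, not taken from the
quoted patch:

static inline u64 rq_clock(struct rq *rq)
{
	lockdep_assert_held(&rq->lock);
	assert_clock_updated(rq);

	return rq->clock;
}

static inline u64 rq_clock_pelt(struct rq *rq)
{
	lockdep_assert_held(&rq->lock);		/* suggested addition */
	assert_clock_updated(rq);		/* suggested addition */

	return rq->clock_pelt - rq->lost_idle_time;	/* assumed body */
}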
On 1/23/19 10:48 AM, Vincent Guittot wrote:
On Wed, 23 Jan 2019 at 09:26, Dietmar Eggemann wrote:
On 1/16/19 10:43 AM, Vincent Guittot wrote:
[...]
+static inline u64 rq_clock_pelt(struct rq *rq)
+{
Doesn't this function need
lockdep_assert_held(&rq->lock);
assert_c
Hi Qais,
On 5/5/19 1:57 PM, Qais Yousef wrote:
[...]
diff --git a/kernel/sched/sched_tracepoints.h b/kernel/sched/sched_tracepoints.h
new file mode 100644
index ..f4ded705118e
--- /dev/null
+++ b/kernel/sched/sched_tracepoints.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: GPL-2.0
The sched domain per rq load index files also disappear from the
/proc/sys/kernel/sched_domain/cpuX/domainY directories.
Signed-off-by: Dietmar Eggemann
---
include/linux/sched/topology.h | 5 -
kernel/sched/debug.c | 25 ++---
kernel/sched/topology.c
With LB_BIAS disabled, there is no need to update the rq->cpu_load[idx]
any more.
Signed-off-by: Dietmar Eggemann
---
include/linux/sched/nohz.h | 8 --
kernel/sched/core.c| 1 -
kernel/sched/fair.c| 255 -
kernel/sched/sched.h |
-off-by: Dietmar Eggemann
---
kernel/sched/fair.c | 90 ++---
kernel/sched/features.h | 1 -
2 files changed, 4 insertions(+), 87 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f619b93ca331..88779c45e8e6 100644
--- a/kernel/sched
's now identical with the actual sched group load
Dietmar Eggemann (7):
sched: Remove rq->cpu_load[] update code
sched/fair: Replace source_load() & target_load() w/
weighted_cpuload()
sched/debug: Remove sd->*_idx range on sysctl
sched: Remove rq->cpu_load[
The per rq load array values also disappear from the cpu#X sections in
/proc/sched_debug.
Signed-off-by: Dietmar Eggemann
---
kernel/sched/core.c | 6 +-
kernel/sched/debug.c | 5 -
kernel/sched/sched.h | 2 --
3 files changed, 1 insertion(+), 12 deletions(-)
diff --git a/kernel/sched
_IDX_MAX.
At the same time, fix the following coding style issues detected by
scripts/checkpatch.pl:
ERROR: space prohibited before that ','
ERROR: space prohibited before that close parenthesis ')'
Signed-off-by: Dietmar Eggemann
---
kernel/sched/debug.c | 37 +++
Since sg_lb_stats::sum_weighted_load is now identical to
sg_lb_stats::group_load, remove it and replace its use case
(calculating load per task) with the latter.
Signed-off-by: Dietmar Eggemann
---
kernel/sched/fair.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a
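A hedged sketch of the replacement use case mentioned above (exact computation
assumed, field names from sg_lb_stats):

	/* load per task can now be derived from group_load directly */
	if (sgs->sum_nr_running)
		sgs->load_per_task = sgs->group_load / sgs->sum_nr_running;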
This is done to align the per cpu (i.e. per rq) load with the util
counterpart (cpu_util(int cpu)). The term 'weighted' is not needed
since there is no 'unweighted' load to distinguish it from.
Signed-off-by: Dietmar Eggemann
---
kern
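A minimal sketch of the rename this describes; the signature and body are
assumptions (recalled from kernel/sched/fair.c), not quoted from the patch:

/* was weighted_cpuload(); named to mirror cpu_util() */
static unsigned long cpu_load(struct rq *rq)
{
	return cfs_rq_load_avg(&rq->cfs);
}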
On 5/13/19 2:48 PM, Qais Yousef wrote:
On 05/13/19 14:14, Peter Zijlstra wrote:
On Fri, May 10, 2019 at 12:30:10PM +0100, Qais Yousef wrote:
+DECLARE_TRACE(pelt_rq,
+ TP_PROTO(int cpu, const char *path, struct sched_avg *avg),
+ TP_ARGS(cpu, path, avg));
+
+static __always_inlin
Hi,
On 4/24/19 10:45 AM, Dietmar Eggemann wrote:
The CFS class is the only one maintaining and using the CPU wide load
(rq->load(.weight)). The last use case of the CPU wide load in CFS's
set_next_entity() can be replaced by using the load of the CFS class
(rq->cfs.load(.weigh
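A hedged sketch of the substitution in set_next_entity() being described; the
surrounding schedstat condition is recalled from the kernel source, not quoted
from the patch:

-	if (schedstat_enabled() &&
-	    rq_of(cfs_rq)->load.weight >= 2*se->load.weight) {
+	if (schedstat_enabled() &&
+	    rq_of(cfs_rq)->cfs.load.weight >= 2*se->load.weight) {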
On 7/11/19 2:00 PM, Peter Zijlstra wrote:
> On Thu, Jul 11, 2019 at 01:17:17PM +0200, Dietmar Eggemann wrote:
>> On 7/9/19 3:42 PM, Peter Zijlstra wrote:
>
>>>>> That is, we only do those callbacks from:
>>>>>
>>>>> sche
On 7/22/19 7:33 PM, Rik van Riel wrote:
> Sometimes the hierarchical load of a sched_entity needs to be calculated.
> Rename task_h_load to task_se_h_load, and directly pass a sched_entity to
> that function.
>
> Move the function declaration up above where it will be used later.
>
> No functiona
On 7/26/19 4:54 PM, Peter Zijlstra wrote:
[...]
> +void dl_server_init(struct sched_dl_entity *dl_se, struct rq *rq,
> + dl_server_has_tasks_f has_tasks,
> + dl_server_pick_f pick)
> +{
> + dl_se->dl_server = 1;
> + dl_se->rq = rq;
> + dl_se->server_has
On 8/8/19 1:01 PM, tip-bot for Phil Auld wrote:
[...]
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 19c58599e967..d9407517dae9 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -10281,18 +10281,18 @@ err:
> void online_fair_sched_group(struct task_group *tg)
On 8/9/19 7:28 PM, Phil Auld wrote:
> On Fri, Aug 09, 2019 at 06:21:22PM +0200 Dietmar Eggemann wrote:
>> On 8/8/19 1:01 PM, tip-bot for Phil Auld wrote:
[...]
>> Shouldn't this be:
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index d940
On 7/30/19 9:21 AM, Peter Zijlstra wrote:
> On Tue, Jul 30, 2019 at 08:41:15AM +0200, Juri Lelli wrote:
>> On 29/07/19 18:49, Peter Zijlstra wrote:
>>> On Fri, Jul 26, 2019 at 09:27:55AM +0100, Dietmar Eggemann wrote:
>>>> Remove BUG_ON() in __enqueue_dl_entity()
On 7/31/19 9:20 PM, luca abeni wrote:
> On Wed, 31 Jul 2019 18:32:47 +0100
> Dietmar Eggemann wrote:
> [...]
>>>>>> static void dequeue_dl_entity(struct sched_dl_entity *dl_se)
>>>>>> {
>>>>>> +if (!on_dl_rq(dl_se))
>>
Remove BUG_ON() in __enqueue_dl_entity() since there is already one in
enqueue_dl_entity().
Move the check that the dl_se is not on the dl_rq from
__dequeue_dl_entity() to dequeue_dl_entity() to align with the enqueue
side and use the on_dl_rq() helper function.
Signed-off-by: Dietmar Eggemann
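A hedged sketch of the resulting dequeue-side shape, aligned with the enqueue
side; the early-return form is an assumption based on the hunk quoted further
up in this thread:

static void dequeue_dl_entity(struct sched_dl_entity *dl_se)
{
	if (!on_dl_rq(dl_se))
		return;

	__dequeue_dl_entity(dl_se);
}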
To make the decision whether to set rq or running bw to 0 in the underflow
case, use the return value of SCHED_WARN_ON() rather than an extra if
condition.
Signed-off-by: Dietmar Eggemann
---
kernel/sched/deadline.c | 6 ++
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/kernel
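A hedged sketch of the pattern, in the context of __sub_running_bw(); the
surrounding lines are recalled from kernel/sched/deadline.c, not quoted from
the patch:

	u64 old = dl_rq->running_bw;

	dl_rq->running_bw -= dl_bw;

	/* before: warning and underflow clamp as two separate statements */
	SCHED_WARN_ON(dl_rq->running_bw > old);	/* underflow */
	if (dl_rq->running_bw > old)
		dl_rq->running_bw = 0;

	/* after: SCHED_WARN_ON() returns its condition, so reuse it */
	if (SCHED_WARN_ON(dl_rq->running_bw > old))
		dl_rq->running_bw = 0;		/* underflow */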
dl_change_utilization() has a BUG_ON() to check that no schedutil
kthread (sugov) is entering this function. So instead of calling
sub_running_bw() which checks for the special entity related to a
sugov thread, call the underlying function __sub_running_bw().
Signed-off-by: Dietmar Eggemann
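For reference, a hedged sketch of the relationship between the two helpers,
recalled from kernel/sched/deadline.c (not quoted from the patch); since
dl_change_utilization() already excludes sugov entities via the BUG_ON(), the
dl_entity_is_special() check is redundant there:

static inline
void sub_running_bw(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
{
	if (!dl_entity_is_special(dl_se))
		__sub_running_bw(dl_se->dl_bw, dl_rq);
}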
The int flags parameter is not used in __dequeue_task_dl(). Remove it.
Signed-off-by: Dietmar Eggemann
---
kernel/sched/deadline.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index d1aeada374e1..99d4c24a8637
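The corresponding signature change, sketched from the description above (the
remaining parameters are assumed):

-static void __dequeue_task_dl(struct rq *rq, struct task_struct *p, int flags)
+static void __dequeue_task_dl(struct rq *rq, struct task_struct *p)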
[ 50.194400] __sched_setscheduler+0x1d0/0x860
[ 50.198707] _sched_setscheduler+0x74/0x98
[ 50.202757] do_sched_setscheduler+0xa8/0x110
[ 50.207065] __arm64_sys_sched_setscheduler+0x1c/0x30
Signed-off-by: Dietmar Eggemann
---
kernel/sched/deadline.c | 4
1 file changed, 4 deletions(-)
dif
ered while
debugging the actual issue.
Dietmar Eggemann (5):
sched/deadline: Fix double accounting of rq/running bw in
push_dl_task()
sched/deadline: Remove unused int flags from __dequeue_task_dl()
sched/deadline: Use __sub_running_bw() throughout
dl_change_utilization()
sched/dead
e v1's "sched/deadline: Use return value of SCHED_WARN_ON()
in bw accounting"
Dietmar Eggemann (3):
sched/deadline: Fix double accounting of rq/running bw in push & pull
sched/deadline: Use __sub_running_bw() throughout
dl_change_utilization()
sched/deadline: C
dl_change_utilization() has a BUG_ON() to check that no schedutil
kthread (sugov) is entering this function. So instead of calling
sub_running_bw() which checks for the special entity related to a
sugov thread, call the underlying function __sub_running_bw().
Signed-off-by: Dietmar Eggemann
__sched_setscheduler+0x1d0/0x860
[ 50.198707] _sched_setscheduler+0x74/0x98
[ 50.202757] do_sched_setscheduler+0xa8/0x110
[ 50.207065] __arm64_sys_sched_setscheduler+0x1c/0x30
Signed-off-by: Dietmar Eggemann
Fixes: 7dd778841164 ("sched/core: Unify p->on_rq updates")
---
kern
G.
Signed-off-by: Dietmar Eggemann
---
kernel/sched/deadline.c | 11 +--
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index c34e35e7ac23..2add54c8be8a 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@
On 7/30/19 9:23 AM, Peter Zijlstra wrote:
> On Mon, Jul 29, 2019 at 05:59:04PM +0100, Dietmar Eggemann wrote:
>> On 7/29/19 5:54 PM, Peter Zijlstra wrote:
>>> On Fri, Jul 26, 2019 at 12:18:19PM +0200, luca abeni wrote:
>>>> Hi Dietmar,
>>>>
>>>
On 7/29/19 10:00 AM, Dietmar Eggemann wrote:
> On 7/26/19 2:30 PM, luca abeni wrote:
>> Hi,
>>
>> On Fri, 26 Jul 2019 09:27:52 +0100
>> Dietmar Eggemann wrote:
>> [...]
>>> @@ -2121,17 +2121,13 @@ static int push_dl_task(struct rq *rq)
>>>
On 7/26/19 4:54 PM, Peter Zijlstra wrote:
>
>
> Signed-off-by: Peter Zijlstra (Intel)
[...]
> @@ -889,6 +891,8 @@ static void update_curr(struct cfs_rq *c
> trace_sched_stat_runtime(curtask, delta_exec, curr->vruntime);
> cgroup_account_cputime(curtask, delta_exec);
On 8/8/19 8:52 AM, Juri Lelli wrote:
> Hi Dietmar,
>
> On 07/08/19 18:31, Dietmar Eggemann wrote:
>> On 7/26/19 4:54 PM, Peter Zijlstra wrote:
>>>
>>>
>>> Signed-off-by: Peter Zijlstra (Intel)
>>
>> [...]
>>
>
On 8/8/19 9:56 AM, Peter Zijlstra wrote:
> On Wed, Aug 07, 2019 at 06:31:59PM +0200, Dietmar Eggemann wrote:
>> On 7/26/19 4:54 PM, Peter Zijlstra wrote:
>>>
>>>
>>> Signed-off-by: Peter Zijlstra (Intel)
>>
>> [...]
>>
>>&g
On 8/8/19 10:46 AM, Juri Lelli wrote:
> On 08/08/19 10:11, Dietmar Eggemann wrote:
>> On 8/8/19 9:56 AM, Peter Zijlstra wrote:
>>> On Wed, Aug 07, 2019 at 06:31:59PM +0200, Dietmar Eggemann wrote:
>>>> On 7/26/19 4:54 PM, Peter Zijlstra wrote:
>>>>>
>
gelog regarding normalize_rt_tasks() (8/8 - Peter)
>
> Set also available at
>
> https://github.com/jlelli/linux.git fixes/deadline/root-domain-accounting-v9
Tested-by: Dietmar Eggemann
Test description:
Juno-r0 (Arm64 big/Little [L b b L L L]) with 6 DL tasks
(12000/10/100
On 6/20/19 6:29 PM, Rik van Riel wrote:
> On Thu, 2019-06-20 at 18:23 +0200, Dietmar Eggemann wrote:
>> On 6/12/19 9:32 PM, Rik van Riel wrote:
[...]
>>> @@ -7779,7 +7788,7 @@ static void update_cfs_rq_h_load(struct
>>> cfs_rq *cfs_rq)
>>>
>>
Hi Rik,
On 6/12/19 9:32 PM, Rik van Riel wrote:
[...]
@@ -379,17 +368,11 @@ int update_irq_load_avg(struct rq *rq, u64 running)
* We can safely remove running from rq->clock because
* rq->clock += delta with delta >= running
*/
- ret = ___update_load_sum(rq->cl
On 5/27/19 9:13 PM, Peter Zijlstra wrote:
> On Mon, May 27, 2019 at 12:24:07PM -0400, Rik van Riel wrote:
>> On Mon, 2019-05-27 at 07:21 +0100, Dietmar Eggemann wrote:
>>> This is done to align the per cpu (i.e. per rq) load with the util
>>> counterpart (cpu_util(int cp
On 6/12/19 9:32 PM, Rik van Riel wrote:
> Sometimes the hierarchical load of a sched_entity needs to be calculated.
> Split out task_h_load into a task_se_h_load that takes a sched_entity pointer
> as its argument, and a task_h_load wrapper that calls task_se_h_load.
>
> No functional changes.
>
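A minimal sketch of the split described in the quoted changelog; the wrapper
shape is assumed from that description:

static unsigned long task_se_h_load(struct sched_entity *se);

static unsigned long task_h_load(struct task_struct *p)
{
	return task_se_h_load(&p->se);
}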
On 6/19/19 3:57 PM, Rik van Riel wrote:
> On Wed, 2019-06-19 at 14:52 +0200, Dietmar Eggemann wrote:
>
>>> @@ -7833,14 +7834,19 @@ static void update_cfs_rq_h_load(struct
>>> cfs_rq *cfs_rq)
>>> }
>>> }
>>>
>>> -static unsigned
On 6/12/19 9:32 PM, Rik van Riel wrote:
[...]
> @@ -410,6 +412,11 @@ static inline struct sched_entity *parent_entity(struct
> sched_entity *se)
> return se->parent;
> }
>
> +static inline bool task_se_in_cgroup(struct sched_entity *se)
> +{
> + return parent_entity(se);
> +}
IMHO,
On 6/12/19 9:32 PM, Rik van Riel wrote:
> The runnable_load magic is used to quickly propagate information about
> runnable tasks up the hierarchy of runqueues. When switching to a flat
Looks like some information is missing here.
> runqueue, that no longer works.
>
> Redefine the CPU cfs_rq run
On 6/12/19 9:32 PM, Rik van Riel wrote:
> Use an explicit "cfs_rq of parent sched_entity" helper in a few
> strategic places, where cfs_rq_of(se) may no longer point at the
> right runqueue once we flatten the hierarchical cgroup runqueues.
>
> No functional change.
>
> Signed-off-by: Rik van Rie
On 6/8/19 6:41 PM, Paul E. McKenney wrote:
On Tue, Jun 04, 2019 at 03:29:32PM +0200, Dietmar Eggemann wrote:
On 6/4/19 9:45 AM, Paul E. McKenney wrote:
On Mon, Jun 03, 2019 at 03:39:18PM +0200, Dietmar Eggemann wrote:
On 6/3/19 1:44 PM, Mark Rutland wrote:
On Mon, Jun 03, 2019 at 10:38:48AM
On 6/11/19 3:54 PM, Paul E. McKenney wrote:
On Tue, Jun 11, 2019 at 03:14:54PM +0200, Dietmar Eggemann wrote:
On 6/8/19 6:41 PM, Paul E. McKenney wrote:
On Tue, Jun 04, 2019 at 03:29:32PM +0200, Dietmar Eggemann wrote:
On 6/4/19 9:45 AM, Paul E. McKenney wrote:
On Mon, Jun 03, 2019 at 03:39
On 6/6/19 10:20 AM, Vincent Guittot wrote:
On Thu, 6 Jun 2019 at 09:49, Quentin Perret wrote:
Hi Vincent,
On Thursday 06 Jun 2019 at 09:05:16 (+0200), Vincent Guittot wrote:
Hi Quentin,
On Wed, 5 Jun 2019 at 19:21, Quentin Perret wrote:
On Friday 17 May 2019 at 14:55:19 (-0700), Stephen
On 6/12/19 9:32 PM, Rik van Riel wrote:
> Flatten the hierarchical runqueues into just the per CPU rq.cfs runqueue.
>
> Iteration of the sched_entity hierarchy is rate limited to once per jiffy
> per sched_entity, which is a smaller change than it seems, because load
> average adjustments were alr
On 5/6/19 6:48 AM, Luca Abeni wrote:
[...]
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index 5b981eeeb944..3436f3d8fa8f 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -1584,6 +1584,9 @@ select_task_rq_dl(struct task_struct *p, int cpu, int
> sd
On 7/8/19 9:41 AM, luca abeni wrote:
> Hi Dietmar,
>
> On Thu, 4 Jul 2019 14:05:22 +0200
> Dietmar Eggemann wrote:
>
>> On 5/6/19 6:48 AM, Luca Abeni wrote:
>>
>> [...]
>>
>>> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
>&
On 5/7/19 4:43 PM, luca abeni wrote:
> On Tue, 7 May 2019 15:31:27 +0100
> Quentin Perret wrote:
>
>> On Tuesday 07 May 2019 at 16:25:23 (+0200), luca abeni wrote:
>>> On Tue, 7 May 2019 14:48:52 +0100
>>> Quentin Perret wrote:
>>>
Hi Luca,
On Monday 06 May 2019 at 06:48:31 (+0
On 7/19/19 3:59 PM, Juri Lelli wrote:
> From: Mathieu Poirier
[...]
> @@ -4269,8 +4269,8 @@ static int __sched_setscheduler(struct task_struct *p,
>*/
> if (!cpumask_subset(span, &p->cpus_allowed) ||
This doesn't apply cleanly on v5.3-rc1 anymore du
On 7/22/19 10:32 AM, Juri Lelli wrote:
> On 22/07/19 10:21, Dietmar Eggemann wrote:
>> On 7/19/19 3:59 PM, Juri Lelli wrote:
>>> From: Mathieu Poirier
>>
>> [...]
>>
>>> @@ -4269,8 +4269,8 @@ static i
On 7/19/19 3:59 PM, Juri Lelli wrote:
[...]
> @@ -557,6 +558,38 @@ static struct rq *dl_task_offline_migration(struct rq
> *rq, struct task_struct *p
> double_lock_balance(rq, later_rq);
> }
>
> + if (p->dl.dl_non_contending || p->dl.dl_throttled) {
> + /*
>
On 7/22/19 2:28 PM, Juri Lelli wrote:
> On 22/07/19 13:07, Dietmar Eggemann wrote:
>> On 7/19/19 3:59 PM, Juri Lelli wrote:
>>
>> [...]
>>
>>> @@ -557,6 +558,38 @@ static struct rq *dl_task_offline_migration(struct rq
>>> *rq, struct task_struct *p
>
On 7/22/19 3:35 PM, Juri Lelli wrote:
> On 22/07/19 15:21, Dietmar Eggemann wrote:
>> On 7/22/19 2:28 PM, Juri Lelli wrote:
>>> On 22/07/19 13:07, Dietmar Eggemann wrote:
>>>> On 7/19/19 3:59 PM, Juri Lelli wrote:
>>>>
>>>>