So restore that check to avoid the issue mentioned above.
Cc: Dietmar Eggemann
Cc: Will Deacon
Reported-by: Santosh Shilimkar
Acked-by: Santosh Shilimkar
Signed-off-by: Lokesh Vutla
---
arch/arm/kernel/hw_breakpoint.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --g
On 19/03/13 10:28, Will Deacon wrote:
On Tue, Mar 19, 2013 at 06:39:38AM +, Santosh Shilimkar wrote:
On Monday 18 March 2013 10:36 PM, Will Deacon wrote:
Any chance you could follow up with your firmware/hardware guys about this
please? I'd really like to understand how we end up in this st
On 19/09/16 14:40, Peter Zijlstra wrote:
> On Mon, Sep 19, 2016 at 03:19:11PM +0200, Christian Borntraeger wrote:
>> Dietmar, Ingo, Tejun,
>>
>> since commit cd92bfd3b8cb0ec2ee825e55a3aee704cd55aea9
>>sched/core: Store maximum per-CPU capacity in root domain
>>
>> I get tons of messages from th
entity
computation")?
I guess in the meantime we lost the functionality to remove a cfs_rq from the
leaf_cfs_rq list once there are no se's enqueued on it anymore. If e.g. t
migrates
away from Cpu1, all the cfs_rq's of the task hierarchy (for tg_css_id=2,4,6)
owned by Cpu1 sta
On 21/09/16 13:34, Vincent Guittot wrote:
> Hi Dietmar,
>
> On 21 September 2016 at 12:14, Dietmar Eggemann
> wrote:
>> Hi Vincent,
>>
>> On 12/09/16 08:47, Vincent Guittot wrote:
[...]
>> I guess in the meantime we lost the functionality to remove a cfs_
On 17/06/16 17:18, Peter Zijlstra wrote:
> On Fri, Jun 17, 2016 at 06:02:39PM +0200, Peter Zijlstra wrote:
>> So yes, ho-humm, how to go about doing that bestest. Lemme have a play.
>
> This is what I came up with, not entirely pretty, but I suppose it'll
> have to do.
>
> ---
> --- a/kernel/sc
On 20/06/16 13:35, Vincent Guittot wrote:
> On 20 June 2016 at 13:35, Dietmar Eggemann wrote:
>>
>>
>> On 17/06/16 17:18, Peter Zijlstra wrote:
>>> On Fri, Jun 17, 2016 at 06:02:39PM +0200, Peter Zijlstra wrote:
>>>> So yes, ho-humm, how to go a
On 23/09/16 15:30, Vincent Guittot wrote:
> Hi Matt,
>
> On 23 September 2016 at 13:58, Matt Fleming wrote:
>> Since commit 7dc603c9028e ("sched/fair: Fix PELT integrity for new
>> tasks") ::last_update_time will be set to a non-zero value in
>> post_init_entity_util_avg(), which leads to p->se.a
d_unlock();
>
> + if (rq) {
> + pr_info("span: %*pbl (max cpu_capacity = %lu)\n",
> + cpumask_pr_args(cpu_map), rq->rd->max_cpu_capacity);
> + }
> +
> ret = 0;
> error:
> __free_domain_allocs(&d, alloc
On 12/09/16 08:47, Vincent Guittot wrote:
> When a task moves from/to a cfs_rq, we set a flag which is then used to
> propagate the change at parent level (sched_entity and cfs_rq) during
> next update. If the cfs_rq is throttled, the flag will stay pending until
> the cfs_rq is unthrottled.
>
> F
On 15/09/16 15:31, Vincent Guittot wrote:
> On 15 September 2016 at 15:11, Dietmar Eggemann
> wrote:
[...]
>> Wasn't 'consuming <1' related to 'NICE_0_LOAD' and not
>> scale_load_down(gcfs_rq->tg->shares) before the rewrite of PELT (v4.2,
>
On 15/09/16 16:14, Peter Zijlstra wrote:
> On Thu, Sep 15, 2016 at 02:11:49PM +0100, Dietmar Eggemann wrote:
>> On 12/09/16 08:47, Vincent Guittot wrote:
>
>>> +/* Take into account change of load of a child task group */
>>> +static inline void
>>> +u
On 29/08/16 02:37, Yuyang Du wrote:
> On Tue, Aug 23, 2016 at 04:39:51PM +0100, Dietmar Eggemann wrote:
>> On 23/08/16 15:45, Vincent Guittot wrote:
>>> On 23 August 2016 at 16:13, Peter Zijlstra wrote:
>>>> On Tue, Aug 23, 2016 at 03:28:19PM +0200, Vincent Guittot
On 23/08/16 21:40, Paul Turner wrote:
> On Mon, Aug 22, 2016 at 7:00 AM, Dietmar Eggemann
[...]
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 61d485421bed..18f80c4c7737 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>>
On 20/06/17 07:17, Viresh Kumar wrote:
> On Thu, Jun 8, 2017 at 1:25 PM, Dietmar Eggemann
> wrote:
>
>> diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
>
>> static int
>> init_cpu_capacity_callback(struct notifier_
On 14/06/17 14:08, Vincent Guittot wrote:
> On 14 June 2017 at 09:55, Dietmar Eggemann wrote:
>>
>> On 06/12/2017 04:27 PM, Vincent Guittot wrote:
>>> On 8 June 2017 at 09:55, Dietmar Eggemann wrote:
[...]
>>
>> Yes, we should free cpus_to_visit if the p
On 21/06/17 01:31, Saravana Kannan wrote:
> On 06/19/2017 11:17 PM, Viresh Kumar wrote:
>> On Thu, Jun 8, 2017 at 1:25 PM, Dietmar Eggemann
>> wrote:
>>
>>> diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
>>
>>> sta
Hi Greg,
On 05/26/2017 08:36 PM, Greg KH wrote:
On Fri, May 26, 2017 at 11:10:32AM +0100, Juri Lelli wrote:
Hi,
On 25/05/17 15:18, Greg KH wrote:
On Thu, Apr 20, 2017 at 03:43:16PM +0100, Juri Lelli wrote:
[...]
But this is all really topology stuff, right? Why use "capacity" at
all:
On 05/29/2017 11:58 AM, Greg KH wrote:
On Mon, May 29, 2017 at 11:20:24AM +0200, Dietmar Eggemann wrote:
Hi Greg,
On 05/26/2017 08:36 PM, Greg KH wrote:
On Fri, May 26, 2017 at 11:10:32AM +0100, Juri Lelli wrote:
Hi,
On 25/05/17 15:18, Greg KH wrote:
On Thu, Apr 20, 2017 at 03:43:16PM
This patch adds the missing of_node_put() for of_find_node_by_path() and
of_get_cpu_node() in parse_dt_topology().
Cc: Russell King
Cc: Vincent Guittot
Cc: Juri Lelli
Signed-off-by: Dietmar Eggemann
---
arch/arm/kernel/topology.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
big.LITTLE system no
matter which core types it consists of.
Cc: Russell King
Cc: Vincent Guittot
Cc: Juri Lelli
Signed-off-by: Dietmar Eggemann
---
arch/arm/kernel/topology.c | 132 ++---
1 file changed, 4 insertions(+), 128 deletions(-)
diff --g
p of v4.14-rc4
- Remove superfluous continue statement in parse_dt_topology()
[01/04]
- Remove 'cpu capacity scale management' and 'cpu capacity table'
related comments [01/04]
- Remove dt related patches [02-04/04]
Dietmar Eggemann (2):
arm: topology: remove cp
Hi Russell,
Thanks for the review!
On 24/10/17 11:52, Russell King - ARM Linux wrote:
> On Tue, Oct 24, 2017 at 11:27:16AM +0100, Dietmar Eggemann wrote:
>> With the dt related patches for exynos and renesas now in the
>> appropriate for-next branches for v4.15 there are no Co
Hi Vincent,
On 17/10/17 13:28, Vincent Guittot wrote:
> Hi Dietmar,
>
> On 12 October 2017 at 16:00, Dietmar Eggemann
> wrote:
>> Remove the 'cpu_efficiency/clock-frequency dt property' based solution
>> to set cpu capacity which was only working for Corte
Hi Ying,
On 24/09/15 03:00, kernel test robot wrote:
> FYI, we noticed the below changes on
>
> https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched/core
> commit 98d8fd8126676f7ba6e133e65b2ca4b17989d32c ("sched/fair: Initialize task
> load and utilization before placing task on rq"
On 04/09/15 08:26, Vincent Guittot wrote:
> On 3 September 2015 at 21:58, Dietmar Eggemann
> wrote:
[...]
>>> So you change the way to declare arch_scale_cpu_capacity but i don't
>>> see the update of the arm arch which declare a
>>> arch_scale_cpu_c
ch build, I get a repeatable
> improvement with this patch set (although how much of that is due to the
> patches itself or just because of code movement is as yet undetermined).
>
> I'm of a mind to apply these patches; with two patches on top, which
> I'll post shortly.
>
an-up.
Beyond that, I'm not sure if the current functionality is
broken if we use different SCALE and SHIFT values for LOAD and CAPACITY?
>
>> + * capacity_orig is the cpu_capacity available at * the highest frequency
>
> spurious *
>
> thanks,
> Steve
>
Fixe
On 07/09/15 17:21, Vincent Guittot wrote:
> On 7 September 2015 at 17:37, Dietmar Eggemann
> wrote:
>> On 04/09/15 00:51, Steve Muckle wrote:
>>> Hi Morten, Dietmar,
>>>
>>> On 08/14/2015 09:23 AM, Morten Rasmussen wrote:
>>> ...
>>
On 07/09/15 20:47, Peter Zijlstra wrote:
> On Mon, Sep 07, 2015 at 07:54:18PM +0100, Dietmar Eggemann wrote:
>> I would vote for removing this SCHED_LOAD_RESOLUTION thing completely so
>> that we can
>> assume that load/util and capacity are always using 1024/10.
>
> H
On 08/09/15 08:22, Vincent Guittot wrote:
> On 7 September 2015 at 20:54, Dietmar Eggemann
> wrote:
>> On 07/09/15 17:21, Vincent Guittot wrote:
>>> On 7 September 2015 at 17:37, Dietmar Eggemann
>>> wrote:
>>>> On 04/09/15 00:51, Steve Muckle wrote:
On 08/09/15 15:01, Vincent Guittot wrote:
> On 8 September 2015 at 14:50, Dietmar Eggemann
> wrote:
>> On 08/09/15 08:22, Vincent Guittot wrote:
>>> On 7 September 2015 at 20:54, Dietmar Eggemann
>>> wrote:
>>>> On 07/09/15 17:21, Vincent Guittot
Hi Yuyang,
On 13/10/15 02:18, Yuyang Du wrote:
> Commit 9d89c257dfb9c51a532d69 (sched/fair: Rewrite runnable load
> and utilization average tracking) led to overly small weight for
> interactive group entity. The case can be easily reproduced when
> a number of CPU hogs compete for the CPUs at the
On 08/31/2015 11:24 AM, Peter Zijlstra wrote:
On Fri, Aug 14, 2015 at 05:23:08PM +0100, Morten Rasmussen wrote:
Target: ARM TC2 A7-only (x3)
Test: hackbench -g 25 --threads -l 1
Before After
315.545 313.408 -0.68%
Target: Intel(R) Core(TM) i5 CPU M 520 @ 2.40GHz
Test: hackbench -g 25 --th
Hi Vincent,
On 02/09/15 10:31, Vincent Guittot wrote:
> Hi Morten,
>
> On 14 August 2015 at 18:23, Morten Rasmussen wrote:
>> Bring arch_scale_cpu_capacity() in line with the recent change of its
>> arch_scale_freq_capacity() sibling in commit dfbca41f3479 ("sched:
>> Optimize freq invariant acc
Make sure that the task scheduler domain hierarchy is set-up correctly
on systems with single or multi-cluster topology.
Signed-off-by: Dietmar Eggemann
---
arch/arm64/configs/defconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs
Commit cd126afe838d ("sched/fair: Remove rq's runnable avg") got rid of
rq->avg and so there is no need to update it any more when entering or
exiting idle.
Remove the now empty functions idle_{enter|exit}_fair().
Signed-off-by: Dietmar Eggemann
---
kernel/sche
ced
(during which only its static priority gets set) switches to
SCHED_OTHER or SCHED_BATCH.
Set se->runnable_weight to se->load.weight in these two cases to prevent
this. This eliminates the need to explicitly set it to se->load.weight
during PELT updates in the CFS scheduler fastpat
On 08/06/2018 10:40 AM, Vincent Guittot wrote:
On Fri, 3 Aug 2018 at 17:55, Quentin Perret wrote:
On Friday 03 Aug 2018 at 15:49:24 (+0200), Vincent Guittot wrote:
On Fri, 3 Aug 2018 at 10:18, Quentin Perret wrote:
On Friday 03 Aug 2018 at 09:48:47 (+0200), Vincent Guittot wrote:
On Thu,
On 08/06/2018 12:33 PM, Vincent Guittot wrote:
On Mon, 6 Aug 2018 at 12:08, Dietmar Eggemann wrote:
On 08/06/2018 10:40 AM, Vincent Guittot wrote:
On Fri, 3 Aug 2018 at 17:55, Quentin Perret wrote:
On Friday 03 Aug 2018 at 15:49:24 (+0200), Vincent Guittot wrote:
On Fri, 3 Aug 2018 at 10
On 08/06/2018 02:37 PM, Vincent Guittot wrote:
On Mon, 6 Aug 2018 at 14:29, Dietmar Eggemann wrote:
On 08/06/2018 12:33 PM, Vincent Guittot wrote:
On Mon, 6 Aug 2018 at 12:08, Dietmar Eggemann wrote:
On 08/06/2018 10:40 AM, Vincent Guittot wrote:
On Fri, 3 Aug 2018 at 17:55, Quentin
On 08/27/2018 12:14 PM, Peter Zijlstra wrote:
> On Fri, Aug 24, 2018 at 02:24:48PM -0700, Steve Muckle wrote:
>> On 08/24/2018 02:47 AM, Peter Zijlstra wrote:
> On 08/17/2018 11:27 AM, Steve Muckle wrote:
>>>
>> When rt_mutex_setprio changes a task's scheduling class to RT,
>> we're see
On 08/28/2018 03:53 PM, Dietmar Eggemann wrote:
On 08/27/2018 12:14 PM, Peter Zijlstra wrote:
On Fri, Aug 24, 2018 at 02:24:48PM -0700, Steve Muckle wrote:
On 08/24/2018 02:47 AM, Peter Zijlstra wrote:
On 08/17/2018 11:27 AM, Steve Muckle wrote:
When rt_mutex_setprio changes a task
On 08/29/2018 12:59 PM, Peter Zijlstra wrote:
On Wed, Aug 29, 2018 at 11:54:58AM +0100, Dietmar Eggemann wrote:
I forgot to mention that since fair_task's cpu affinity is restricted to
CPU4, there is no call to set_task_cpu()->migrate_task_rq_fair() since if
(task_cpu(p) != cpu) fails.
ormance-mem_rtns_1-8000
aim7/performance-disk_wrt-8000
aim7/performance-pipe_cpy-8000
aim7/performance-ram_copy-8000
(d) lkp-avoton3:
8 threads Intel(R) Atom(TM) CPU C2750 @ 2.40GHz 16G
netperf/ipv4-900s-200%-cs-localhost-TCP_STREAM-performance
Signed-off-by: Dietmar Eggemann
On 08/06/2018 06:39 PM, Patrick Bellasi wrote:
[...]
+/**
+ * uclamp_cpu_put_id(): decrease reference count for a clamp group on a CPU
+ * @p: the task being dequeued from a CPU
+ * @cpu: the CPU from where the clamp group has to be released
+ * @clamp_id: the utilization clamp (e.g. min or max
Hi,
On 08/21/2018 01:54 AM, Miguel de Dios wrote:
On 08/17/2018 11:27 AM, Steve Muckle wrote:
From: John Dias
When rt_mutex_setprio changes a task's scheduling class to RT,
we're seeing cases where the task's vruntime is not updated
correctly upon return to the fair class.
Specifically, the f
On 07/26/2018 07:14 PM, Valentin Schneider wrote:
Hi,
On 09/07/18 16:08, Morten Rasmussen wrote:
On Fri, Jul 06, 2018 at 12:18:27PM +0200, Vincent Guittot wrote:
Hi Morten,
On Wed, 4 Jul 2018 at 12:18, Morten Rasmussen wrote:
[...]
With that out of the way, I did some lmbench runs:
lat_
Hi,
running v4.18-rc5 (plus still missing "power: vexpress: fix corruption
in notifier registration", otherwise I get this rcu_sched stall issue)
on TC2 (A7 boot) with vanilla multi_v7_defconfig plus
CONFIG_ARM_BIG_LITTLE_CPUIDLE=y gives me continuous:
...
CPUX: Spectre v2: incorrect contex
107277]
[ 293.107277] which lock already depends on the new lock.
...
w/ the patch:
root@h960:~# echo NO_LB_BIAS > /sys/kernel/debug/sched_features
root@h960:~#
Tested-by: Dietmar Eggemann
On 07/31/2018 02:13 PM, Vincent Guittot wrote:
On Mon, 30 Jul 2018 at 16:30, Dietmar Eggemann wrote:
On 07/26/2018 07:14 PM, Valentin Schneider wrote:
[...]
The task layout of the test looks like n=85 always running tasks (each
for ~ 1.25ms on big or little) and they all get created and
On 07/12/2018 10:17 AM, Joel Fernandes wrote:
On Wed, Jul 11, 2018 at 10:43:28AM +0200, Dietmar Eggemann wrote:
On 07/11/2018 01:09 AM, Joel Fernandes wrote:
On Mon, Jul 09, 2018 at 05:47:53PM +0100, Dietmar Eggemann wrote:
A CFS (SCHED_OTHER, SCHED_BATCH or SCHED_IDLE policy) task'
On 01/15/2018 08:26 AM, Vincent Guittot wrote:
On Wednesday 03 Jan 2018 at 10:16:00 (+0100), Vincent Guittot wrote:
Hi Peter,
On 22 December 2017 at 21:42, Peter Zijlstra wrote:
On Fri, Dec 22, 2017 at 07:56:29PM +0100, Peter Zijlstra wrote:
Right; but I figured we'd try and do it 'right'
On 01/24/2018 09:25 AM, Vincent Guittot wrote:
Hi,
On Thursday 18 Jan 2018 at 10:38:07 (+), Morten Rasmussen wrote:
On Mon, Jan 15, 2018 at 09:26:09AM +0100, Vincent Guittot wrote:
On Wednesday 03 Jan 2018 at 10:16:00 (+0100), Vincent Guittot wrote:
[...]
Hi Peter,
With the patch
On 03/25/2018 03:48 PM, Quentin Perret wrote:
> On Tuesday 20 Mar 2018 at 10:52:15 (+0100), Greg Kroah-Hartman wrote:
>> On Tue, Mar 20, 2018 at 09:43:08AM +, Dietmar Eggemann wrote:
>>> From: Quentin Perret
>
> [...]
>
>>> +#ifdef CONFIG_PM_OPP
>
rm.org/git?p=linux-de.git;a=shortlog;h=refs/heads/upstream/eas_v2_base
[3] https://marc.info/?l=linux-pm&m=151635516419249&w=2
[4] https://marc.info/?l=linux-pm&m=152153905805048&w=2
Dietmar Eggemann (1):
sched/fair: Create util_fits_capacity()
Quentin Perret (4):
sched: I
sched domain level to let other sched groups help getting rid of the
overutilization of cpus.
Signed-off-by: Thara Gopinath
Signed-off-by: Dietmar Eggemann
---
include/linux/sched/topology.h | 1 +
kernel/sched/fair.c | 62 --
kernel/sched/sched.h
: Quentin Perret
Signed-off-by: Dietmar Eggemann
---
drivers/base/arch_topology.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index 52ec5174bcb1..25a70c21860f 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
t meet all
dependencies for EAS (CONFIG_PM_OPP for ex.) at compile time,
sched_energy_enabled() defaults to a constant "false" value, hence letting
the compiler remove the unused EAS code entirely.
Cc: Ingo Molnar
Cc: Peter Zijlstra
Signed-off-by: Quentin Perret
Signed-off-by: Dietmar Eggemann
estimates
the consumption of each online CPU according to its energy model and its
percentage of busy time.
Cc: Ingo Molnar
Cc: Peter Zijlstra
Signed-off-by: Quentin Perret
Signed-off-by: Dietmar Eggemann
---
include/linux/sched/energy.h | 20 +
kernel/sched/fair.c | 68
ing a static key in order to minimize the overhead on non-energy-aware
systems.
Cc: Ingo Molnar
Cc: Peter Zijlstra
Signed-off-by: Quentin Perret
Signed-off-by: Dietmar Eggemann
---
This patch depends on additional infrastructure being merged in the OPP
core. As this infrastructure can also be usefu
The functionality that a given utilization fits into a given capacity
is factored out into a separate function.
Currently it is only used in wake_cap() but will be re-used to figure
out if a cpu or a scheduler group is over-utilized.
Cc: Ingo Molnar
Cc: Peter Zijlstra
Signed-off-by: Dietmar
On 04/12/2018 09:02 AM, Viresh Kumar wrote:
On 06-04-18, 16:36, Dietmar Eggemann wrote:
The functionality that a given utilization fits into a given capacity
is factored out into a separate function.
Currently it is only used in wake_cap() but will be re-used to figure
out if a cpu or a
1. Overview
The Energy Aware Scheduler (EAS) based on Morten Rasmussen's posting on
LKML [1] is currently part of the AOSP Common Kernel and runs on
today's smartphones with Arm's big.LITTLE CPUs.
Based on the experience gained over the last two and a half years in
product development, we propose
estimates
the consumption of each online CPU according to its energy model and its
percentage of busy time.
Cc: Ingo Molnar
Cc: Peter Zijlstra
Signed-off-by: Quentin Perret
Signed-off-by: Dietmar Eggemann
---
kernel/sched/fair.c | 81 +
1 file
From: Quentin Perret
Energy Aware Scheduling (EAS) has to be started from the arch code.
This commit enables it from the arch topology driver for arm/arm64
systems, hence enabling a better support for Arm big.LITTLE and future
DynamIQ architectures.
Cc: Greg Kroah-Hartman
Signed-off-by: Quentin
g set. These systems not only show the most
promising opportunities for saving energy but also typically feature a
limited number of logical CPUs.
Cc: Ingo Molnar
Cc: Peter Zijlstra
Signed-off-by: Quentin Perret
Signed-off-by: Dietmar Eggemann
---
kernel/sched/fair.c
ing a static key in order to minimize the overhead on non-energy-aware
systems.
Cc: Ingo Molnar
Cc: Peter Zijlstra
Signed-off-by: Quentin Perret
Signed-off-by: Dietmar Eggemann
---
This patch depends on additional infrastructure being merged in the OPP
core. As this infrastructure can also be usefu
sched domain level to let other sched groups help getting rid of the
overutilization of cpus.
Signed-off-by: Thara Gopinath
Signed-off-by: Dietmar Eggemann
---
include/linux/sched/topology.h | 1 +
kernel/sched/fair.c | 64 --
kernel/sched/sched.h
The functionality that a given utilization fits into a given capacity
is factored out into a separate function.
Currently it is only used in wake_cap() but will be re-used to figure
out if a cpu or a scheduler group is over-utilized.
Cc: Ingo Molnar
Cc: Peter Zijlstra
Signed-off-by: Dietmar
On 03/20/2018 10:49 AM, Greg Kroah-Hartman wrote:
On Tue, Mar 20, 2018 at 09:43:12AM +, Dietmar Eggemann wrote:
From: Quentin Perret
Energy Aware Scheduling (EAS) has to be started from the arch code.
Ok, but:
This commit enables it from the arch topology driver for arm/arm64
systems
On 03/21/2018 03:02 PM, Quentin Perret wrote:
> On Wednesday 21 Mar 2018 at 12:26:21 (+), Patrick Bellasi wrote:
>> On 21-Mar 10:04, Juri Lelli wrote:
>>> Hi,
>>>
>>> On 20/03/18 09:43, Dietmar Eggemann wrote:
>>>> From: Quentin Perret
[...]
On 04/09/2018 11:40 AM, Peter Zijlstra wrote:
(I know there is a new version out; but I was reading through this to
catch up with the discussion)
On Tue, Mar 20, 2018 at 09:43:09AM +, Dietmar Eggemann wrote:
+static inline int sd_overutilized(struct sched_domain *sd)
+{
+ return
On 04/10/2018 01:54 PM, Peter Zijlstra wrote:
On Fri, Apr 06, 2018 at 04:36:03PM +0100, Dietmar Eggemann wrote:
+ /*
+* Build the energy model of one CPU, and link it to all CPUs
+* in its frequency domain. This should be correct as long as
Hi,
On 09/26/2018 11:50 AM, Wanpeng Li wrote:
> Hi Dietmar,
> On Tue, 28 Aug 2018 at 22:55, Dietmar Eggemann
> wrote:
>>
>> On 08/27/2018 12:14 PM, Peter Zijlstra wrote:
>>> On Fri, Aug 24, 2018 at 02:24:48PM -0700, Steve Muckle wrote:
>>>> On
On 09/27/2018 03:19 AM, Wanpeng Li wrote:
On Thu, 27 Sep 2018 at 06:38, Dietmar Eggemann wrote:
Hi,
On 09/26/2018 11:50 AM, Wanpeng Li wrote:
Hi Dietmar,
On Tue, 28 Aug 2018 at 22:55, Dietmar Eggemann wrote:
On 08/27/2018 12:14 PM, Peter Zijlstra wrote:
On Fri, Aug 24, 2018 at 02:24
On 09/28/2018 02:43 AM, Wanpeng Li wrote:
On Thu, 27 Sep 2018 at 21:23, Dietmar Eggemann wrote:
On 09/27/2018 03:19 AM, Wanpeng Li wrote:
On Thu, 27 Sep 2018 at 06:38, Dietmar Eggemann wrote:
Hi,
On 09/26/2018 11:50 AM, Wanpeng Li wrote:
Hi Dietmar,
On Tue, 28 Aug 2018 at 22:55, Dietmar
On 09/28/2018 06:10 PM, Steve Muckle wrote:
On 09/27/2018 05:43 PM, Wanpeng Li wrote:
On your CPU4:
scheduler_ipi()
-> sched_ttwu_pending()
-> ttwu_do_activate() => p->sched_remote_wakeup should be
false, so ENQUEUE_WAKEUP is set, ENQUEUE_MIGRATED is not
-> ttwu_activa
On 11/9/18 8:20 AM, Vincent Guittot wrote:
[...]
In order to achieve this time scaling, a new clock_pelt is created per rq.
The increase of this clock scales with current capacity when something
is running on rq and synchronizes with clock_task when rq is idle. With
this mechanism, we ensure the
On 11/9/18 8:20 AM, Vincent Guittot wrote:
This new version of the scale invariance patchset adds an important change
compared to v3 and before. It still scales the time to reflect the
amount of work that has been done during the elapsed running time but this is
now done at rq level instead of per
Hi Daniel,
+cc: Russell King
On 11/27/18 2:24 PM, Daniel Lezcano wrote:
In the case of asymmetric SoC with the same micro-architecture, we
have a group of CPUs with smaller OPPs than the other group. One
example is the 96boards dragonboard 820c. There is no dmips/MHz
difference between both gr
ced
(during which only its static priority gets set) switches to
SCHED_OTHER or SCHED_BATCH.
Set se->runnable_weight to se->load.weight in these two cases to prevent
this. This eliminates the need to explicitly set it to se->load.weight
during PELT updates in the CFS scheduler fastpat
On 06/28/2018 01:40 PM, Quentin Perret wrote:
[...]
+/**
+ * em_rescale_cpu_capacity() - Re-scale capacity values of the Energy Model
+ *
+ * This re-scales the capacity values for all capacity states of all frequency
+ * domains of the Energy Model. This should be used when the capacity values
On 07/11/2018 01:09 AM, Joel Fernandes wrote:
On Mon, Jul 09, 2018 at 05:47:53PM +0100, Dietmar Eggemann wrote:
A CFS (SCHED_OTHER, SCHED_BATCH or SCHED_IDLE policy) task's
se->runnable_weight must always be in sync with its se->load.weight.
se->runnable_weight is set to se->
On 06/22/2018 04:36 PM, Morten Rasmussen wrote:
On Fri, Jun 22, 2018 at 09:22:22AM +0100, Quentin Perret wrote:
Hi Morten,
On Wednesday 20 Jun 2018 at 10:05:41 (+0100), Morten Rasmussen wrote:
+static void update_asym_cpucapacity(int cpu)
+{
+ int enable = false;
+
+ rcu_read_lock(
On 06/28/2018 10:48 AM, Morten Rasmussen wrote:
On Wed, Jun 27, 2018 at 05:41:22PM +0200, Dietmar Eggemann wrote:
On 06/22/2018 04:36 PM, Morten Rasmussen wrote:
On Fri, Jun 22, 2018 at 09:22:22AM +0100, Quentin Perret wrote:
[...]
What would happen if you hotplugged an entire cluster
oot domain span: 0-5 (max cpu_capacity = 1024)
Signed-off-by: Juri Lelli
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Dietmar Eggemann
Cc: Patrick Bellasi
Cc: linux-kernel@vger.kernel.org
---
kernel/sched/topology.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/
On 06/08/2018 02:09 PM, Vincent Guittot wrote:
Take into account rt utilization when selecting an OPP for cfs tasks in order
to reflect the utilization of the CPU.
The rt utilization signal is only tracked per-cpu, not per-entity. So it
is not aware of PELT migrations (attach/detach).
IMHO,
Coredump does not use any task or task scheduler trace events.
Commit 10c28d937e2c ("coredump: move core dump functionality into its
own file") moved coredump from fs/exec.c in which task and task
scheduler trace events are used.
Signed-off-by: Dietmar Eggemann
---
fs/coredump.c
On 06/19/2018 09:01 AM, Pavan Kondeti wrote:
On Mon, May 21, 2018 at 03:25:01PM +0100, Quentin Perret wrote:
[...]
@@ -8152,6 +8176,9 @@ static inline void update_sg_lb_stats(struct lb_env *env,
if (nr_running > 1)
*overload = true;
+ if (cpu_overut
On 05/25/2018 03:12 PM, Vincent Guittot wrote:
interrupt and steal time are the only remaining activities tracked by
rt_avg. Like for sched classes, we can use PELT to track their average
utilization of the CPU. But unlike sched class, we don't track when
entering/leaving interrupt; Instead, we t
On 05/30/2018 08:45 PM, Vincent Guittot wrote:
> Hi Dietmar,
>
> On 30 May 2018 at 17:55, Dietmar Eggemann wrote:
>> On 05/25/2018 03:12 PM, Vincent Guittot wrote:
[...]
>>> +*/
>>> + ret = ___update_load_sum(rq->clock
Hi,
On 07/05/2018 10:02 AM, kernel test robot wrote:
>
> FYI, we noticed the following commit (built with gcc-7):
>
> commit: fbd51884933192c9cada60628892024495942482 ("[PATCH] sched/fair: Avoid
> divide by zero when rebalancing domains")
> url:
> https://github.com/0day-ci/linux/commits/Matt-
On 07/05/2018 10:58 AM, Dietmar Eggemann wrote:
> Hi,
>
> On 07/05/2018 10:02 AM, kernel test robot wrote:
>>
>> FYI, we noticed the following commit (built with gcc-7):
>>
>> commit: fbd51884933192c9cada60628892024495942482 ("[PATCH] sched/fair: Avoid
>
On 07/05/2018 03:24 PM, Matt Fleming wrote:
On Thu, 05 Jul, at 11:52:21AM, Dietmar Eggemann wrote:
Moving the code from _nohz_idle_balance to nohz_idle_balance let it disappear:
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 02be51c9dcc1..070924f07c68 100644
--- a/kernel/sched
On 06/08/2018 02:09 PM, Vincent Guittot wrote:
[...]
@@ -182,21 +183,30 @@ static void sugov_get_util(struct sugov_cpu *sg_cpu)
sg_cpu->util_dl = cpu_util_dl(rq);
sg_cpu->bw_dl = cpu_bw_dl(rq);
sg_cpu->util_rt = cpu_util_rt(rq);
+ sg_cpu->util_irq = cpu_util_i
On 06/08/2018 02:09 PM, Vincent Guittot wrote:
> schedutil governor relies on cfs_rq's util_avg to choose the OPP when cfs
> tasks are running. When the CPU is overloaded by cfs and rt tasks, cfs tasks
> are preempted by rt tasks and in this case util_avg reflects the remaining
> capacity but not w
On 06/15/2018 02:18 PM, Vincent Guittot wrote:
> Hi Dietmar,
>
> On 15 June 2018 at 13:52, Dietmar Eggemann wrote:
>> On 06/08/2018 02:09 PM, Vincent Guittot wrote:
>>> schedutil governor relies on cfs_rq's util_avg to choose the OPP when cfs
>>> tasks are
On 05/18/2018 10:36 AM, Peter Zijlstra wrote:
Replying to the latest version available; given the current interest I
figure I'd re-read some of the old threads and look at this stuff again.
On Fri, Apr 28, 2017 at 04:23:55PM +0200, Vincent Guittot wrote:
[...]
What happened to the proposed
On 08/14/2018 06:49 PM, Patrick Bellasi wrote:
Hi Dietmar!
On 14-Aug 17:44, Dietmar Eggemann wrote:
On 08/06/2018 06:39 PM, Patrick Bellasi wrote:
[...]
This one indicates that there are some holes in your ref-counting.
Not really, this has been added not because I've detected a ref