On 7/9/19 3:24 PM, luca abeni wrote:
> Hi Peter,
>
> On Mon, 8 Jul 2019 15:55:36 +0200
> Peter Zijlstra wrote:
>
>> On Mon, May 06, 2019 at 06:48:33AM +0200, Luca Abeni wrote:
>>> @@ -1223,8 +1250,17 @@ static void update_curr_dl(struct rq *rq)
>>> dl_se->dl_overrun = 1;
>>>
On 7/9/19 3:42 PM, Peter Zijlstra wrote:
> On Tue, Jul 09, 2019 at 03:24:36PM +0200, luca abeni wrote:
>> Hi Peter,
>>
>> On Mon, 8 Jul 2019 15:55:36 +0200
>> Peter Zijlstra wrote:
>>
>>> On Mon, May 06, 2019 at 06:48:33AM +0200, Luca Abeni wrote:
@@ -1223,8 +1250,17 @@ static void update_curr_dl(struct rq *rq)
32 irq code.
Signed-off-by: Dietmar Eggemann
---
The hotplug issue on Arm TC2 happens because the vexpress-spc interrupt
(irq=22) is affine to CPU0. This occurs since it is set up early when the
cpu_online_mask is still 0.
But the problem with the missing copy of the affinity mask should occur
Hi Marc,
On 1/8/19 3:16 PM, Marc Zyngier wrote:
Hi Dietmar,
On 08/01/2019 13:58, Dietmar Eggemann wrote:
Arm TC2 (multi_v7_defconfig plus CONFIG_ARM_BIG_LITTLE_CPUFREQ=y and
CONFIG_ARM_VEXPRESS_SPC_CPUFREQ=y) fails hotplug stress tests.
This issue was tracked down to a missing copy of the
ds it to build modules.
Tested on x86 with eBPF scripts and exporting an alternative
BCC_KERNEL_SOURCE.
Tested-by: Dietmar Eggemann
ity);
-	mutex_unlock(&cpu_scale_mutex);
-
-	schedule_work(&update_topology_flags_work);
-
-	return count;
-}
-
-static DEVICE_ATTR_RW(cpu_capacity);
+static DEVICE_ATTR_RO(cpu_capacity);
 static int register_cpu_capacity_sysctl(void)
 {
Tested-by: Dietmar Eggemann
on Arm64 Juno with v5.0
The vexpress-spc interrupt (irq=22) on this board is affine to CPU0.
Its affinity cpumask now changes correctly e.g. from 0 to 1-4 when
CPU0 is hotplugged out.
Suggested-by: Marc Zyngier
Signed-off-by: Dietmar Eggemann
---
arch/arm/Kconfig | 1 +
arch/arm/include/asm/irq.h | 1 -
arch/
On 1/9/19 5:21 PM, Marc Zyngier wrote:
On 09/01/2019 15:47, Dietmar Eggemann wrote:
Hi Marc,
On 1/8/19 3:16 PM, Marc Zyngier wrote:
Hi Dietmar,
On 08/01/2019 13:58, Dietmar Eggemann wrote:
[...]
On the arm64 side, we've solved the exact same issue by getting rid of
this code and
On 08/20/2018 02:44 AM, Quentin Perret wrote:
In order to ensure a minimal performance impact on non-energy-aware
systems, introduce a static_key guarding the access to Energy-Aware
Scheduling (EAS) code.
The static key is set iff all the following conditions are met for at
least one root domain
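A minimal kernel-style sketch of such a static-key guard (the key name follows this description; the helpers around it are illustrative, not the actual patch):

#include <linux/jump_label.h>

DEFINE_STATIC_KEY_FALSE(sched_energy_present);

/* Illustrative hook: flip the key when root domains are rebuilt. */
static void sched_energy_set_sketch(bool eas_ready)
{
	if (eas_ready)
		static_branch_enable(&sched_energy_present);
	else
		static_branch_disable(&sched_energy_present);
}

/* Hot path: compiles down to a patched-out branch while EAS is off. */
static bool wake_energy_sketch(void)
{
	return static_branch_unlikely(&sched_energy_present);
}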
On 08/20/2018 02:44 AM, Quentin Perret wrote:
Expose the Energy Model (read-only) of all performance domains in sysfs
for convenience. To do so, add a kobject to the CPU subsystem under the
umbrella of which a kobject for each performance domain is attached.
The resulting hierarchy is as follows
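A rough sketch of the kobject plumbing such a hierarchy needs (directory names and error handling are illustrative, not the actual patch):

#include <linux/cpu.h>
#include <linux/kobject.h>

static struct kobject *em_kobj;

static int __init em_sysfs_sketch(void)
{
	/* parent: /sys/devices/system/cpu/energy_model */
	em_kobj = kobject_create_and_add("energy_model",
					 &cpu_subsys.dev_root->kobj);
	if (!em_kobj)
		return -ENOMEM;

	/* one child kobject per performance domain, e.g. "pd0", "pd4" */
	if (!kobject_create_and_add("pd0", em_kobj))
		return -ENOMEM;

	return 0;
}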
Hi Juri,
On 08/23/2018 11:54 PM, Juri Lelli wrote:
On 23/08/18 18:52, Dietmar Eggemann wrote:
Hi,
On 08/21/2018 01:54 AM, Miguel de Dios wrote:
On 08/17/2018 11:27 AM, Steve Muckle wrote:
From: John Dias
[...]
I tried to catch this issue on my Arm64 Juno board using pi_test (and a
On 09/06/2018 02:29 AM, Quentin Perret wrote:
Hi Dietmar,
On Wednesday 05 Sep 2018 at 23:06:38 (-0700), Dietmar Eggemann wrote:
On 08/20/2018 02:44 AM, Quentin Perret wrote:
In order to ensure a minimal performance impact on non-energy-aware
systems, introduce a static_key guarding the access
On 09/06/2018 07:09 AM, Quentin Perret wrote:
Hi Dietmar,
On Wednesday 05 Sep 2018 at 23:56:43 (-0700), Dietmar Eggemann wrote:
On 08/20/2018 02:44 AM, Quentin Perret wrote:
Expose the Energy Model (read-only) of all performance domains in sysfs
for convenience. To do so, add a kobject to the
Hi,
On 08/03/2018 07:05 AM, Dietmar Eggemann wrote:
A CFS (SCHED_OTHER, SCHED_BATCH or SCHED_IDLE policy) task's
se->runnable_weight must always be in sync with its se->load.weight.
se->runnable_weight is set to se->load.weight when the task is
forked (init_entity_runna
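The invariant reduces to mirroring the weight wherever it changes; a minimal sketch (the helper name is ours, not a kernel function):

static void set_weight_sketch(struct sched_entity *se, unsigned long weight)
{
	se->load.weight = weight;
	/* keep the runnable signal consistent with the load signal */
	se->runnable_weight = se->load.weight;
}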
On 09/07/2018 12:58 AM, Vincent Guittot wrote:
On Fri, 7 Sep 2018 at 09:16, Juri Lelli wrote:
On 06/09/18 16:25, Dietmar Eggemann wrote:
Hi Juri,
On 08/23/2018 11:54 PM, Juri Lelli wrote:
On 23/08/18 18:52, Dietmar Eggemann wrote:
Hi,
On 08/21/2018 01:54 AM, Miguel de Dios wrote:
On 08
-	if (!se->sum_exec_runtime || p->state == TASK_WAKING)
+	if (!se->sum_exec_runtime ||
+	    (p->state == TASK_WAKING && p->sched_remote_wakeup))
 		return true;
 	return false;
Tested-by: Dietmar Eggemann
On 08/31/2018 08:22 AM, Vincent Guittot wrote:
update_blocked_averages() is called to periodically decay the stalled load
of idle CPUs and to sync all loads before running load balance.
When a cfs_rq is idle, it triggers a load balance during pick_next_task_fair()
in order to potentially pull tasks
On 06/07/2018 05:19 PM, Quentin Perret wrote:
Hi Juri,
On Thursday 07 Jun 2018 at 16:44:09 (+0200), Juri Lelli wrote:
On 21/05/18 15:24, Quentin Perret wrote:
[...]
Mmm, this gets complicated pretty fast eh? :)
Yeah, hopefully I'll be able to explain/clarify that :-).
I had to go back
On 06/06/2018 06:26 PM, Quentin Perret wrote:
On Wednesday 06 Jun 2018 at 16:29:50 (+0100), Quentin Perret wrote:
On Wednesday 06 Jun 2018 at 17:20:00 (+0200), Juri Lelli wrote:
This brings me to another question. Let's say there are multiple users of
the Energy Model in the system. Shouldn't t
On 06/08/2018 10:25 AM, Quentin Perret wrote:
Hi Dietmar,
On Thursday 07 Jun 2018 at 17:55:32 (+0200), Dietmar Eggemann wrote:
On 06/07/2018 05:19 PM, Quentin Perret wrote:
Hi Juri,
On Thursday 07 Jun 2018 at 16:44:09 (+0200), Juri Lelli wrote:
On 21/05/18 15:24, Quentin Perret wrote
On 06/08/2018 03:11 PM, Quentin Perret wrote:
On Friday 08 Jun 2018 at 14:39:33 (+0200), Dietmar Eggemann wrote:
[...]
Even though we would be forced to get cpufreq's related cpumask from
somewhere.
That's the easy part. The difficult part is, where do you get power
values from
On 03/16/2018 12:25 PM, Vincent Guittot wrote:
[...]
For a 15-second-long test on a hikey 6220 (octo-core Cortex-A53 platform),
the cpufreq statistics output (stats are reset just before the test):
$ cat /sys/devices/system/cpu/cpufreq/policy0/stats/total_trans
without patchset : 1230
with p
On 03/16/2018 12:25 PM, Vincent Guittot wrote:
[...]
diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
new file mode 100644
index 0000000..c312d8c
--- /dev/null
+++ b/kernel/sched/pelt.h
@@ -0,0 +1,17 @@
+#ifdef CONFIG_SMP
+
+int __update_load_avg_blocked_se(u64 now, int cpu, struct sched_entity *se);
On 03/16/2018 12:25 PM, Vincent Guittot wrote:
We want to track rt_rq's utilization as a part of the estimation of the
whole rq's utilization. This is necessary because rt tasks can steal
utilization from cfs tasks and make them lighter than they are.
As we want to use the same load-tracking mechanism
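A rough sketch of the resulting rq-level estimate (rq->avg_rt is the field this series introduces; the helper name and clamping are ours):

static unsigned long cpu_util_sketch(struct rq *rq)
{
	unsigned long util = READ_ONCE(rq->cfs.avg.util_avg);

	/* rt tasks run "invisibly" to cfs PELT; add their contribution */
	util += READ_ONCE(rq->avg_rt.util_avg);

	return min_t(unsigned long, util, SCHED_CAPACITY_SCALE);
}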
On 04/15/2018 02:16 PM, Vincent Guittot wrote:
On 15 April 2018 at 13:58, Dietmar Eggemann wrote:
On 03/16/2018 12:25 PM, Vincent Guittot wrote:
We want to track rt_rq's utilization as a part of the estimation of the
whole rq's utilization. This is necessary because rt tasks
Pavan Kondeti
Cc: Juri Lelli
Cc: Joel Fernandes
Cc: Patrick Bellasi
Cc: Quentin Perret
Signed-off-by: Dietmar Eggemann
---
kernel/sched/cpufreq_schedutil.c | 6 +-
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpu
On 05/08/2018 10:22 AM, Viresh Kumar wrote:
On 08-05-18, 08:33, Dietmar Eggemann wrote:
This reverts commit e2cabe48c20efb174ce0c01190f8b9c5f3ea1d13.
Lifting the restriction that the sugov kthread is bound to the
policy->related_cpus for a system with a slow switching cpufreq driver,
which
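What the revert restores, as a hedged sketch (the helper name is ours; the kthread and cpufreq fields are real):

#include <linux/cpufreq.h>
#include <linux/kthread.h>

static void sugov_bind_sketch(struct task_struct *thread,
			      struct cpufreq_policy *policy)
{
	/* slow-switching driver: the worker must run on the policy's CPUs */
	if (!policy->fast_switch_enabled)
		kthread_bind_mask(thread, policy->related_cpus);
}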
On 05/08/2018 11:45 AM, Viresh Kumar wrote:
On 08-05-18, 11:09, Dietmar Eggemann wrote:
This would make sure that the kthreads are bound to the correct set of cpus
for platforms with those cpufreq drivers (cpufreq-dt (h960), scmi-cpufreq,
scpi-cpufreq) but it will also change the logic (e.g
Hi Leo,
On 04/17/2018 02:50 PM, Leo Yan wrote:
Hi Dietmar,
On Fri, Apr 06, 2018 at 04:36:01PM +0100, Dietmar Eggemann wrote:
[...]
1.1 Energy Model
A CPU with asymmetric core capacities features cores with significantly
different energy and performance characteristics. As the
On 04/17/2018 04:25 PM, Leo Yan wrote:
@@ -5394,8 +5416,10 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 		update_cfs_group(se);
 	}

-	if (!se)
+	if (!se) {
 		add_nr_running(rq, 1);
+		update_overutilized_status(rq
On 19/01/17 14:37, Juri Lelli wrote:
> arm and arm64 share a lot of code relative to parsing CPU capacity
> information from DT, using that information for appropriate scaling and
> exposing a sysfs interface for changing such values at runtime.
>
> Factorize such code in a common place (drivers/base/
On 21/06/16 09:41, Peter Zijlstra wrote:
> On Mon, Jun 20, 2016 at 03:49:34PM +0100, Dietmar Eggemann wrote:
>> On 20/06/16 13:35, Vincent Guittot wrote:
>
>>> It will go through wake_up_new_task and post_init_entity_util_avg
>>> during its fork which is enough to
On 14/06/16 08:58, Mike Galbraith wrote:
> SUSE's regression testing noticed that...
>
> 0905f04eb21f sched/fair: Fix new task's load avg removed from source CPU in
> wake_up_new_task()
>
> ...introduced a hackbench regression, and indeed it does. I think this
> regression has more to do with r
On 14/06/16 17:40, Mike Galbraith wrote:
> On Tue, 2016-06-14 at 15:14 +0100, Dietmar Eggemann wrote:
>
>> IMHO, the hackbench performance "boost" w/o 0905f04eb21f is due to the
>> fact that a new task gets all its load decayed (making it a small task)
>
On 15/06/16 17:03, Mike Galbraith wrote:
> On Wed, 2016-06-15 at 16:32 +0100, Dietmar Eggemann wrote:
>
>>> In general, the fuzz helps us to not be so spastic. I'm not sure that
>>> we really really need to care all that much, because I strongly suspect
>
On 16/06/16 04:33, Mike Galbraith wrote:
> On Wed, 2016-06-15 at 20:03 +0100, Dietmar Eggemann wrote:
>
>> Isn't there a theoretical problem with the scale_load() on CONFIG_64BIT
>> machines on tip/sched/core? load.weight has a higher resolution than
>> runnable_load_
On 12/07/16 12:42, Peter Zijlstra wrote:
> On Mon, Jul 11, 2016 at 05:16:06PM +0100, Dietmar Eggemann wrote:
>> On 11/07/16 11:18, Peter Zijlstra wrote:
>>> On Wed, Jun 22, 2016 at 06:03:17PM +0100, Morten Rasmussen wrote:
>>>> @@ -6905,11 +6906,19 @@ static int build_sched_domains(const struct cpumask *cpu_map,
On 13/07/16 13:40, Vincent Guittot wrote:
> On 22 June 2016 at 19:03, Morten Rasmussen wrote:
>> From: Dietmar Eggemann
>>
>> To be able to compare the capacity of the target cpu with the highest
>> available cpu capacity, store the maximum per-cpu capacity in the root domain.
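A hedged sketch of that bookkeeping (rd->max_cpu_capacity is the field this patch adds; the helper is illustrative):

static void update_max_capacity_sketch(struct root_domain *rd, int cpu)
{
	unsigned long cap = arch_scale_cpu_capacity(NULL, cpu);

	if (cap > READ_ONCE(rd->max_cpu_capacity))
		WRITE_ONCE(rd->max_cpu_capacity, cap);
}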
'migrated = !sa->last_update_time' as a flag in
enqueue_entity_load_avg() to decide if we call attach_entity_load_avg()
(again) or only update se->avg.
Tested-by: Dietmar Eggemann
> Signed-off-by: Vincent Guittot
> ---
>
> v3:
> - add initialization of load_last_upda
' by providing the information whether
utilization has to be maintained via an argument to this function.
The additional requirements for the alignment of the last_update_time of a
se and the root cfs_rq are handled by the patch 'sched/fair: Sync se with
root cfs_rq'.
Signed-off-
rq in [remove|detach]_entity_load_avg().
In case the difference between the last_update_time value of the cfs_rq
and the root cfs_rq is smaller than 1024ns, the additional calls to
__update_load_avg() will bail early.
Signed-off-by: Dietmar Eggemann
---
kernel/sched/fair.c | 21 +++
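The early bail follows from PELT's ~1us granularity; a standalone model (not the kernel code itself) shows why sub-1024ns deltas are no-ops:

#include <stdint.h>
#include <stdio.h>

static int pelt_step(uint64_t now_ns, uint64_t *last_update_time)
{
	uint64_t delta = now_ns - *last_update_time;

	delta >>= 10;			/* ns -> ~us; 0 if delta < 1024ns */
	if (!delta)
		return 0;		/* bail early: nothing to decay */
	*last_update_time += delta << 10;
	return (int)delta;		/* ~1us periods to account */
}

int main(void)
{
	uint64_t lut = 0;

	printf("%d\n", pelt_step(1000, &lut));	/* 0: below 1024ns */
	printf("%d\n", pelt_step(4096, &lut));	/* 4: four ~1us periods */
	return 0;
}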
nc with the root cfs_rq
[patch 3] sched/fair: Change @running of __update_load_avg() to
@update_util
Pass the information whether utilization (besides load) has to be
maintained for the container element of @sa.
Dietmar Eggemann (3):
sched/fair: Aggregate task utilization only on root
@running is changed to @update_util which now carries the information
whether the utilization of the se/cfs_rq should be updated and whether the
se/cfs_rq is running or not.
Signed-off-by: Dietmar Eggemann
---
kernel/sched/fair.c | 42 +-
1 file changed, 21
On 01/06/16 13:54, Peter Zijlstra wrote:
> On Tue, May 24, 2016 at 10:57:32AM +0200, Vincent Guittot wrote:
>> Ensure that the changes of the utilization of a sched_entity will be
>> reflected in the task_group hierarchy.
>>
>> This patch tries another way than the flat utilization hierarchy propos
On 01/06/16 21:10, Peter Zijlstra wrote:
> On Wed, Jun 01, 2016 at 08:39:19PM +0100, Dietmar Eggemann wrote:
>> This is an alternative approach to '[RFC PATCH v2] sched: reflect
>> sched_entity movement into task_group's utilization' which requires
>> '[RF
On 02/06/16 10:23, Juri Lelli wrote:
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 218f8e83db73..212becd3708f 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -2705,6 +2705,7 @@ __update_load_avg(u64 now, int cpu, struct sched_avg *sa,
>>  	u32 con
On 01/06/16 21:11, Peter Zijlstra wrote:
> On Wed, Jun 01, 2016 at 08:39:22PM +0100, Dietmar Eggemann wrote:
>> The information whether a se/cfs_rq should get its load and
>> utilization (se representing a task and root cfs_rq) or only its load
>> (se representing a task grou
On 02/06/16 10:25, Juri Lelli wrote:
[...]
>> @@ -2757,7 +2754,7 @@ __update_load_avg(u64 now, int cpu, struct sched_avg *sa,
>>  			weight * scaled_delta_w;
>>  		}
>>  	}
>> -	if (update_util && running)
>> +
On 10/10/16 13:29, Vincent Guittot wrote:
> On 10 October 2016 at 12:01, Matt Fleming wrote:
>> On Sun, 09 Oct, at 11:39:27AM, Wanpeng Li wrote:
>>>
>>> The difference between this patch and Peterz's is your patch has a
>>> delta since activate_task()->enqueue_task() does do update_rq_clock(),
>>
On 09/10/16 06:59, kernel test robot wrote:
>
> FYI, we noticed a -32.9% improvement of reaim.child_utime due to commit:
>
> commit ab522e33f91799661aad47bebb691f241a9f6bb8 ("sched/fair: Fix fixed point
> arithmetic width for shares and effective load")
> https://git.kernel.org/pub/scm/linux/ker
On 10/10/16 19:29, Vincent Guittot wrote:
> On 10 October 2016 at 15:54, Dietmar Eggemann
> wrote:
>> On 10/10/16 13:29, Vincent Guittot wrote:
>>> On 10 October 2016 at 12:01, Matt Fleming wrote:
>>>> On Sun, 09 Oct, at 11:39:27AM, Wanpeng Li wrote:
[...]
On 12/10/16 11:59, Vincent Guittot wrote:
> On 7 October 2016 at 01:11, Vincent Guittot
> wrote:
>>
>> On 5 October 2016 at 11:38, Dietmar Eggemann
>> wrote:
>>> On 09/26/2016 01:19 PM, Vincent Guittot wrote:
[...]
>>>> -static void attach_task_cf
On 26/09/16 13:19, Vincent Guittot wrote:
> A task can be asynchronously detached from cfs_rq when migrating
> between CPUs. The load of the migrated task is then removed from
> source cfs_rq during its next update. We use this event to set the
> propagation flag.
>
> During the load balance, we take
On 12/10/16 16:45, Vincent Guittot wrote:
> On 12 October 2016 at 17:03, Dietmar Eggemann
> wrote:
>> On 26/09/16 13:19, Vincent Guittot wrote:
[...]
>>> @@ -6607,6 +6609,10 @@ static void update_blocked_averages(int cpu)
>>>
>>> if (u
On 13/10/16 17:48, Vincent Guittot wrote:
> On 13 October 2016 at 17:52, Joseph Salisbury
> wrote:
>> On 10/13/2016 06:58 AM, Vincent Guittot wrote:
>>> Hi,
>>>
>>> On 12 October 2016 at 18:21, Joseph Salisbury
>>> wrote:
On 10/12/2016 08:20 AM, Vincent Guittot wrote:
> On 8 October 2016
On 10/17/2016 10:14 AM, Vincent Guittot wrote:
When a task moves from/to a cfs_rq, we set a flag which is then used to
propagate the change at parent level (sched_entity and cfs_rq) during
the next update. If the cfs_rq is throttled, the flag will stay pending until
the cfs_rq is unthrottled.
minor
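A hedged sketch of that flag handling (propagate_avg is the field from this series; the helper names are ours):

static void set_tg_propagate_sketch(struct cfs_rq *cfs_rq)
{
	cfs_rq->propagate_avg = 1;	/* set on asynchronous detach/attach */
}

static bool test_and_clear_tg_propagate_sketch(struct cfs_rq *cfs_rq)
{
	if (!cfs_rq->propagate_avg)
		return false;

	cfs_rq->propagate_avg = 0;
	return true;	/* caller folds the child's delta into the parent */
}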
late the sequence for changing the property of a task
> - remove a cfs_rq from list during update_blocked_averages
> These topics don't gain anything from being added in this patchset as they
> are fairly independent and deserve a separate patch.
Acked-by: Dietmar Eggemann
I te
Hi Samuel,
On 12/20/2016 12:45 AM, Samuel Thibault wrote:
Paul Turner, on Mon 19 Dec 2016 15:32:15 -0800, wrote:
On Mon, Dec 19, 2016 at 3:29 PM, Samuel Thibault
wrote:
Paul Turner, on Mon 19 Dec 2016 15:26:19 -0800, wrote:
[...]
The MIN_SHARES you are seeing here is overloaded.
In the un
Hi Vincent and Ying,
On 01/02/2017 04:42 PM, Vincent Guittot wrote:
Hi Ying,
On 28 December 2016 at 09:17, Huang, Ying wrote:
Vincent Guittot writes:
On Tuesday 13 Dec 2016 at 09:47:30 (+0800), Huang, Ying wrote:
Hi, Vincent,
Vincent Guittot writes:
[...]
---
kernel/sched/fair.c
On 21/12/16 15:50, Vincent Guittot wrote:
IMHO, the overall idea makes sense to me. Just a couple of small
questions ...
> The update of the share of a cfs_rq is done when its load_avg is updated
> but before the group_entity's load_avg has been updated for the past time
> slot. This generates wr
Hi Vincent,
On 17/03/17 13:47, Vincent Guittot wrote:
[...]
> Reported-by: ying.hu...@linux.intel.com
> Signed-off-by: Vincent Guittot
> Fixes: 4e5160766fcc ("sched/fair: Propagate asynchrous detach")
I thought I could see a difference by running:
perf stat --null --repeat 10 -- perf bench sch
On 22/03/17 09:22, Vincent Guittot wrote:
> On 21 March 2017 at 18:46, Dietmar Eggemann wrote:
>> Hi Vincent,
>>
>> On 17/03/17 13:47, Vincent Guittot wrote:
>>
>> [...]
>>
>>> Reported-by: ying.hu...@linux.intel.com
>>> Signed-off-
On 22/03/17 16:55, Vincent Guittot wrote:
> On 22 March 2017 at 17:22, Dietmar Eggemann wrote:
>> On 22/03/17 09:22, Vincent Guittot wrote:
>>> On 21 March 2017 at 18:46, Dietmar Eggemann
>>> wrote:
>>>> Hi Vincent,
>>>>
>>>> On 1
On 17/06/16 07:21, Mike Galbraith wrote:
> Here are some schbench runs on an 8x8 box to show that longish
> run/sleep period corner I mentioned.
>
> vogelweide:~/:[1]# for i in `seq 5`; do schbench -m 8 -t 1 -a -r 10 2>&1 |
> grep 'threads 8'; done
> cputime 3 threads 8 p99 68
> cputime 3
On 02/14/2017 06:28 PM, Uladzislau Rezki wrote:
So that is useful information that should have been in the Changelog.
OK, can you respin this patch with adjusted Changelog and taking Mike's
feedback?
Yes, I will prepare a patch accordingly, no problem.
Also, I worry about the effects of th
On 20/12/16 13:15, Peter Zijlstra wrote:
> On Tue, Dec 20, 2016 at 02:04:34PM +0100, Dietmar Eggemann wrote:
>> Hi Samuel,
>>
>> On 12/20/2016 12:45 AM, Samuel Thibault wrote:
>>> Paul Turner, on Mon 19 Dec 2016 15:32:15 -0800, wrote:
>>>> On Mon
Fix this by scaling down NICE_0_LOAD when multiplying
load_above_capacity with it.
Signed-off-by: Dietmar Eggemann
Acked-by: Morten Rasmussen
---
kernel/sched/fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 4088eedea763..fe
On 01/06/16 16:54, Vincent Guittot wrote:
> On 1 June 2016 at 17:31, Dietmar Eggemann wrote:
>> On 30/05/16 16:52, Vincent Guittot wrote:
>>> The cfs_rq->avg.last_update_time is initialized to 0 with the main effect
>>> that the 1st sched_entity that will be attached
On 04/07/16 16:04, Matt Fleming wrote:
> On Wed, 15 Jun, at 04:32:58PM, Dietmar Eggemann wrote:
>> On 14/06/16 17:40, Mike Galbraith wrote:
>>> On Tue, 2016-06-14 at 15:14 +0100, Dietmar Eggemann wrote:
>>>
>>>> IMHO, the hackbench performance "boost"
On 11/07/16 11:18, Peter Zijlstra wrote:
> On Wed, Jun 22, 2016 at 06:03:17PM +0100, Morten Rasmussen wrote:
>> @@ -6905,11 +6906,19 @@ static int build_sched_domains(const struct cpumask *cpu_map,
>>  	/* Attach the domains */
>>  	rcu_read_lock();
>>  	for_each_cpu(i, cpu_map) {
>> +
Hi Steve,
On 19/08/16 02:55, Steve Muckle wrote:
> PELT scales its util_sum and util_avg values via
> arch_scale_cpu_capacity(). If that function is passed the CPU's sched
> domain then it will reduce the scaling capacity if SD_SHARE_CPUCAPACITY
> is set. PELT does not pass in the sd however. The
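The effect described above, as a hedged sketch (arch_scale_cpu_capacity() with its sd argument as of this era; the helper name is ours):

static u32 pelt_scale_sketch(u32 contrib, int cpu)
{
	/* sd == NULL: full capacity, no SD_SHARE_CPUCAPACITY reduction */
	unsigned long scale = arch_scale_cpu_capacity(NULL, cpu);

	return (u32)(((u64)contrib * scale) >> SCHED_CAPACITY_SHIFT);
}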
On 06/06/16 03:59, Leo Yan wrote:
> On Wed, Jun 01, 2016 at 08:39:21PM +0100, Dietmar Eggemann wrote:
[...]
>> @@ -2995,8 +2997,16 @@ static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
>>  	if (!entity_is_task(se))
>>
On 24/05/16 10:55, Vincent Guittot wrote:
[...]
> +/* Take into account the change of the utilization of a child task group */
> +static void update_tg_cfs_util(struct sched_entity *se, int blocked)
> +{
> +	int delta;
> +	struct cfs_rq *cfs_rq;
> +	long update_util_avg;
> +	long
On 03/03/16 16:28, Peter Zijlstra wrote:
> On Thu, Mar 03, 2016 at 04:38:17PM +0100, Peter Zijlstra wrote:
>> On Thu, Mar 03, 2016 at 03:01:15PM +0100, Vincent Guittot wrote:
In case a more formal derivation of this formula is needed, it is
based on the following 3 assumptions:
On 03/03/16 18:26, Peter Zijlstra wrote:
> On Thu, Mar 03, 2016 at 05:28:55PM +0000, Dietmar Eggemann wrote:
>>> +void arch_scale_freq_tick(void)
>>> +{
>>> +	u64 aperf, mperf;
>>> +	u64 acnt, mcnt;
>>> +
>>> +	if (!stati
Hi Steve,
these patches fall into the bucket of 'optimization of updating the
value only if the root cfs_rq util has changed' as discussed in '[PATCH
5/8] sched/cpufreq: pass sched class into cpufreq_update_util' of Mike
T's current series '[PATCH 0/8] schedutil enhancements', right?
I wonde
On 03/28/2016 06:34 PM, Steve Muckle wrote:
Hi Dietmar,
On 03/28/2016 05:02 AM, Dietmar Eggemann wrote:
Hi Steve,
these patches fall into the bucket of 'optimization of updating the
value only if the root cfs_rq util has changed' as discussed in '[PATCH
5/8] sched/cpufreq: p
On 15/03/16 20:19, Michael Turquette wrote:
> Quoting Dietmar Eggemann (2016-03-15 12:13:46)
>> Hi Mike,
>>
>> On 14/03/16 05:22, Michael Turquette wrote:
>>> From: Dietmar Eggemann
>>>
[...]
>> Maybe it is worth mentioning that this patch is from
On 02/09/15 18:11, Leo Yan wrote:
> On Tue, Jul 07, 2015 at 07:24:15PM +0100, Morten Rasmussen wrote:
>> Let available compute capacity and estimated energy impact select
>> wake-up target cpu when energy-aware scheduling is enabled and the
>> system is not over-utilized (above the tipping point).
On 29/06/15 10:06, pang.xun...@zte.com.cn wrote:
> Hi Abel,
>
> Abel Vesa wrote 2015-06-29 AM 04:26:31:
>>
>> Re: [RFCv4 PATCH 00/34] sched: Energy cost model for energy-aware
> scheduling
[...]
>> I wasn't able to determine the cause from the line:
>>
>> 7de5: 49 0f a3 87 00 03 00bt
Hi Leo,
On 08/17/2015 02:19 AM, Leo Yan wrote:
[...]
diff --git a/arch/arm/kernel/topology.c b/arch/arm/kernel/topology.c
index b35d3e5..bbe20c7 100644
--- a/arch/arm/kernel/topology.c
+++ b/arch/arm/kernel/topology.c
@@ -274,6 +274,119 @@ void store_cpu_topology(unsigned int cpuid)
On 12/08/15 11:04, Peter Zijlstra wrote:
> On Tue, Jul 07, 2015 at 07:23:59PM +0100, Morten Rasmussen wrote:
>> +
>> +	sge->nr_idle_states = fn(cpu)->nr_idle_states;
>> +	sge->nr_cap_states = fn(cpu)->nr_cap_states;
>> +	memcpy(sge->idle_states, fn(cpu)->idle_states,
>> +	       sge->nr
On 12/08/15 11:17, Peter Zijlstra wrote:
> On Tue, Jul 07, 2015 at 07:23:59PM +0100, Morten Rasmussen wrote:
>> @@ -6647,10 +6703,24 @@ static int __sdt_alloc(const struct cpumask *cpu_map)
[...]
>> @@ -6674,6 +6744,16 @@ static int __sdt_alloc(const struct cpumask *cpu_map)
>>
On 12/08/15 11:33, Peter Zijlstra wrote:
> On Tue, Jul 07, 2015 at 07:24:01PM +0100, Morten Rasmussen wrote:
>> +static struct capacity_state cap_states_cluster_a7[] = {
>> +	/* Cluster only power */
>> +	{ .cap = 150, .power = 2967, }, /* 350 MHz */
>> +	{ .cap = 172, .power = 2792,
Hi Yuyang,
On 05/07/15 21:12, Yuyang Du wrote:
> Hi Morten,
>
> On Fri, Jul 03, 2015 at 10:34:41AM +0100, Morten Rasmussen wrote:
IOW, since task groups include blocked load in the load_avg_contrib (see
__update_group_entity_contrib() and __update_cfs_rq_tg_load_contrib()) the
imba
Hi Mike,
On 27/06/15 00:53, Michael Turquette wrote:
> From: Michael Turquette
>
[...]
> comment "CPU frequency scaling drivers"
>
> config CPUFREQ_DT
> diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h
> index 1f2c9a1..30241c9 100644
> --- a/include/linux/cpufreq.h
> +++ b/inc
Hi Tixy,
On 08/07/15 13:36, Jon Medhurst (Tixy) wrote:
> On Tue, 2015-07-07 at 19:23 +0100, Morten Rasmussen wrote:
>> From: Dietmar Eggemann
>>
>> To enable the parsing of clock frequency and cpu efficiency values
>> inside parse_dt_topology [arch/arm/kernel/topology.
Hi Yuyang,
I did some testing of your new PELT implementation.
TC 1: one nice-0 60% task affine to cpu1 in root tg and 2 nice-0 20%
periodic tasks affine to cpu1 in a task group with id=3 (one hierarchy).
TC 2: 10 nice-0 5% tasks affine to cpu1 in a task group with id=3 (one
hierarchy).
and com
On 07/07/15 12:17, Rabin Vincent wrote:
> On Mon, Jul 06, 2015 at 07:36:56PM +0200, Dietmar Eggemann wrote:
>> Rabin, could you share the content of your
>> /sys/fs/cgroup/cpu/system.slice directory and of /proc/cgroups ?
>
> Here's /proc/cgroups,
>
> # c
On 01/12/15 11:20, Juri Lelli wrote:
> Hi Vincent,
>
> On 30/11/15 10:59, Vincent Guittot wrote:
>> Hi Juri,
>>
>> On 24 November 2015 at 11:54, Juri Lelli wrote:
[...]
> +==========================================
> +3 - capacity-scale
> +==========================================
On 23/11/15 14:28, Juri Lelli wrote:
> With the introduction of cpu capacity bindings, CPU capacities can now be
> extracted from DT. Add parsing of such information at boot time. We keep
> code that can produce the same information, based on different DT properties
> and hard-coded values, as fall-back
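A hedged sketch of the parse-with-fallback flow (the property name follows the merged binding, capacity-dmips-mhz; the early RFC used different names, and the per-CPU variable mirrors the arm topology code):

#include <linux/of.h>
#include <linux/percpu.h>

static DEFINE_PER_CPU(unsigned long, cpu_scale);

static int parse_capacity_sketch(struct device_node *cpu_node, int cpu)
{
	u32 capacity;

	if (of_property_read_u32(cpu_node, "capacity-dmips-mhz", &capacity))
		return 0;	/* no binding: caller falls back to old code */

	/* raw value; normalization against the maximum happens later */
	per_cpu(cpu_scale, cpu) = capacity;
	return 1;
}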
On 23/11/15 14:28, Juri Lelli wrote:
> Add a sysfs cpu_capacity attribute with which it is possible to read and
> write (thus over-writing default values) CPUs' capacity. This might be
> useful in situations where there is no way to get proper default values
> at boot time.
>
> The new attribute sho
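The read side of such an attribute, as a hedged sketch (the store side, dropped by the later RW-to-RO patch above, would write the same per-CPU value; cpu_scale is assumed declared as in the arm topology code):

#include <linux/cpu.h>
#include <linux/device.h>

static ssize_t cpu_capacity_show(struct device *dev,
				 struct device_attribute *attr, char *buf)
{
	struct cpu *cpu = container_of(dev, struct cpu, dev);

	return sprintf(buf, "%lu\n", per_cpu(cpu_scale, cpu->dev.id));
}
static DEVICE_ATTR_RO(cpu_capacity);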
On 23/11/15 14:28, Juri Lelli wrote:
> With the introduction of cpu capacity bindings, CPU capacities can now be
> extracted from DT. Add parsing of such information at boot time. Also,
> store such information using per CPU variables, as we do for arm.
>
> Cc: Catalin Marinas
> Cc: Will Deacon
On 11/12/15 17:57, Morten Rasmussen wrote:
> On Fri, Dec 11, 2015 at 05:00:01PM +0300, Andrey Ryabinin wrote:
>>
>>
>> On 12/11/2015 04:36 PM, Peter Zijlstra wrote:
>>> On Fri, Dec 11, 2015 at 02:25:51PM +0100, Peter Zijlstra wrote:
On Fri, Dec 11, 2015 at 03:55:18PM +0300, Andrey Ryabinin wro
Hi Juri,
On 03/02/16 11:59, Juri Lelli wrote:
> Add a sysfs cpu_capacity attribute with which it is possible to read and
> write (thus over-writing default values) CPUs' capacity. This might be
> useful in situations where there is no way to get proper default values
> at boot time.
>
> The new att
On 17/06/2020 16:52, Peter Puhov wrote:
> On Wed, 17 Jun 2020 at 06:50, Valentin Schneider
> wrote:
>>
>>
>> On 16/06/20 17:48, peter.pu...@linaro.org wrote:
>>> From: Peter Puhov
>>> We tested this patch with the following benchmarks:
>>> perf bench -f simple sched pipe -l 400
>>> perf bench
On 01/07/2020 21:06, Valentin Schneider wrote:
> This flag was introduced in 2014 by commit
>
> d77b3ed5c9f8 ("sched: Add a new SD_SHARE_POWERDOMAIN for sched_domain")
>
> but AFAIA it was never leveraged by the scheduler. The closest thing I can
> think of is EAS caring about frequency domains
On 01/07/2020 21:06, Valentin Schneider wrote:
> I don't think it is going to change much in practice, but we were missing
> those:
>
> o SD_BALANCE_WAKE: Used just like the other SD_BALANCE_* flags, so also
> needs > 1 group.
> o SD_ASYM_PACKING: Hinges on load balancing (periodic / wakeup), th
On 01/07/2020 21:06, Valentin Schneider wrote:
[...]
> @@ -105,16 +122,18 @@ SD_FLAG(SD_SERIALIZE, 8, SDF_SHARED_PARENT)
> * Place busy tasks earlier in the domain
> *
> * SHARED_CHILD: Usually set on the SMT level. Technically could be set further
> - * up, but currently assum
On 02/07/2020 14:52, Peter Zijlstra wrote:
>
> Dave hit the problem fixed by commit:
>
> b6e13e85829f ("sched/core: Fix ttwu() race")
>
> and failed to understand much of the code involved. Per his request a
> few comments to (hopefully) clarify things.
>
> Requested-by: Dave Chinner
> Signe