Re: [PATCH v4 5/5] sched/fair: Track peak per-entity utilization

2016-09-01 Thread Patrick Bellasi
turn cpu_util(cpu);
> 
>  	capacity = capacity_orig_of(cpu);
> -	util = max_t(long, cpu_rq(cpu)->cfs.avg.util_avg - task_util(p), 0);
> +	util = max_t(long, cpu_rq(cpu)->cfs.avg.util_avg - task_util_peak(p), 0);
> 
>  	return (util >= capacity) ? capacity : util;
>  }
> @@ -5476,7 +5481,7 @@ static int wake_cap(struct task_struct *p, int cpu, int prev_cpu)
>  	/* Bring task utilization in sync with prev_cpu */
>  	sync_entity_load_avg(&p->se);
> 
> -	return min_cap * 1024 < task_util(p) * capacity_margin;
> +	return min_cap * 1024 < task_util_peak(p) * capacity_margin;
>  }
> 
>  /*
> -- 
> 1.9.1

-- #include Patrick Bellasi

Re: [RFC 08/14] sched/tune: add detailed documentation

2015-09-03 Thread Patrick Bellasi
t; causing the heavy task to run on the little > core and the light task to run on the big core. That's an interesting point we should take into consideration for the design of the complete solution. I would prefer to postpone this discussion on the list once we present the next ext

[PATCH v3 06/14] sched/cpufreq: uclamp: add utilization clamping for RT tasks

2018-08-06 Thread Patrick Bellasi
_util = (0, SCHED_CAPACITY_SCALE) and thus, RT tasks always run at the maximum OPP if not otherwise constrained by userspace. Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Rafael J. Wysocki Cc: Viresh Kumar Cc: Suren Baghdasaryan Cc: Todd Kjos Cc: Joel Fernandes Cc: Juri

[PATCH v3 03/14] sched/core: uclamp: add CPU's clamp groups accounting

2018-08-06 Thread Patrick Bellasi
the expected number of different clamp values, which can be configured at build time, is usually so small that a more advanced ordering algorithm is not needed. In real use-cases we expect less than 10 different values. Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Pa

[PATCH v3 00/14] Add utilization clamping support

2018-08-06 Thread Patrick Bellasi
{min,max}_util clamps. - use -ERANGE as range violation error - add attributes to the default hierarchy as well as the legacy one - implement a "nice" semantics where cgroup clamp values are always used to restrict task specific clamp values, i.e. tasks running on a TG are only a

[PATCH v3 11/14] sched/core: uclamp: use TG's clamps to restrict Task's clamps

2018-08-06 Thread Patrick Bellasi
described above. This will also make sched_getattr(2) a convenient userspace API to know the utilization constraints enforced on a task by the cgroup's CPU controller. Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Tejun Heo Cc: Paul Turner Cc: Suren Baghdasaryan

[PATCH v3 05/14] sched/cpufreq: uclamp: add utilization clamping for FAIR tasks

2018-08-06 Thread Patrick Bellasi
nd capping are defined to be: - util_min: 0 - util_max: SCHED_CAPACITY_SCALE which means that by default no boosting/capping is enforced on FAIR tasks, and thus the frequency will be selected considering the actual utilization value of each CPU. Signed-off-by: Patrick Bellasi Cc: Ingo Molna

[PATCH v3 04/14] sched/core: uclamp: update CPU's refcount on clamp changes

2018-08-06 Thread Patrick Bellasi
least one valid clamp group. Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Paul Turner Cc: Suren Baghdasaryan Cc: Todd Kjos Cc: Joel Fernandes Cc: Juri Lelli Cc: Dietmar Eggemann Cc: Morten Rasmussen Cc: linux-kernel@vger.kernel.org Cc: linux...@vger.kernel.org

[PATCH v3 14/14] sched/core: uclamp: use percentage clamp values

2018-08-06 Thread Patrick Bellasi
the standard [0..100] range. Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Tejun Heo Cc: Rafael J. Wysocki Cc: Paul Turner Cc: Suren Baghdasaryan Cc: Todd Kjos Cc: Joel Fernandes Cc: Steve Muckle Cc: Juri Lelli Cc: linux-kernel@vger.kernel.org Cc: linux...@

[PATCH v3 02/14] sched/core: uclamp: map TASK's clamp values into CPU's clamp groups

2018-08-06 Thread Patrick Bellasi
clamp values is currently defined at compile time. Thus, setting a new clamp value for a task can result in a -ENOSPC error in case this exceeds the maximum number of different clamp values supported. Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Paul Turner Cc: S
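
For context, a minimal standalone sketch of the "small table plus linear scan" mapping this entry describes: clamp values are looked up in a fixed-size array of groups and refcounted per slot, returning -1 (i.e. -ENOSPC) when no slot is left. The layout, size and uclamp_group_find() body below are illustrative assumptions, not the patch's actual code.

#define UCLAMP_GROUPS	8	/* illustrative build-time limit */

struct uclamp_group {
	unsigned int value;	/* clamp value tracked by this slot */
	unsigned int refcount;	/* tasks mapped to this slot; 0 == free */
};

static struct uclamp_group groups[UCLAMP_GROUPS];

/* Return the slot already tracking @value, else a free slot, else -1 (-ENOSPC). */
static int uclamp_group_find(unsigned int value)
{
	int free_slot = -1;
	int i;

	for (i = 0; i < UCLAMP_GROUPS; i++) {
		if (groups[i].refcount && groups[i].value == value)
			return i;
		if (free_slot < 0 && !groups[i].refcount)
			free_slot = i;
	}
	return free_slot;
}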

[PATCH v3 07/14] sched/core: uclamp: enforce last task UCLAMP_MAX

2018-08-06 Thread Patrick Bellasi
while a CPU is idle, we can still enforce the last used clamp value for it. To the contrary, we do not track any UCLAMP_MIN since, while a CPU is idle, we don't want to enforce any minimum frequency. Indeed, we rely just on blocked load decay to smoothly reduce the frequency. Signed-off-b
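
For context, a toy sketch of the idle-time behaviour this entry describes: when the last runnable task leaves a CPU, the last UCLAMP_MAX value is kept (so an idle CPU stays frequency-capped) while no UCLAMP_MIN is retained; the next enqueued task's clamp replaces the leftover value. All names and the data layout below are illustrative assumptions, not the patch's code.

enum { UCLAMP_MIN, UCLAMP_MAX, UCLAMP_CNT };

struct uclamp_cpu {
	unsigned int value[UCLAMP_CNT];	/* currently enforced clamps */
	unsigned int flag_idle;		/* UCLAMP_MAX is a leftover value */
};

/* The last runnable task just left this CPU. */
static void uclamp_cpu_idle(struct uclamp_cpu *uc)
{
	/* Keep the last UCLAMP_MAX: the idle CPU stays capped... */
	uc->flag_idle = 1;
	/* ...but enforce no minimum: let blocked load decay drive the OPP down. */
	uc->value[UCLAMP_MIN] = 0;
}

/* A task is enqueued again: its max clamp replaces the leftover one. */
static void uclamp_cpu_wakeup(struct uclamp_cpu *uc, unsigned int task_max)
{
	if (uc->flag_idle) {
		uc->value[UCLAMP_MAX] = task_max;
		uc->flag_idle = 0;
	}
}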

[PATCH v3 01/14] sched/core: uclamp: extend sched_setattr to support utilization clamping

2018-08-06 Thread Patrick Bellasi
s for a specified task by extending sched_setattr, a syscall which already allows defining task-specific properties for different scheduling classes. Specifically, a new pair of attributes allows specifying a minimum and maximum utilization which the scheduler should consider for a task. Signed-off-b
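
For context, a minimal userspace sketch of the extended sched_setattr() interface this entry describes. The struct layout, field names and flag values below follow the uclamp ABI as later merged in mainline and are assumptions with respect to this particular patch revision.

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>

/* Local copy of the extended sched_attr layout (assumed, see above). */
struct sched_attr {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	uint64_t sched_runtime;
	uint64_t sched_deadline;
	uint64_t sched_period;
	uint32_t sched_util_min;
	uint32_t sched_util_max;
};

#define SCHED_FLAG_UTIL_CLAMP_MIN	0x20	/* mainline value, assumed here */
#define SCHED_FLAG_UTIL_CLAMP_MAX	0x40	/* mainline value, assumed here */

int main(void)
{
	struct sched_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.sched_policy = 0;	/* SCHED_NORMAL */
	attr.sched_flags = SCHED_FLAG_UTIL_CLAMP_MIN | SCHED_FLAG_UTIL_CLAMP_MAX;
	attr.sched_util_min = 128;	/* boost: at least ~12% of CPU capacity */
	attr.sched_util_max = 512;	/* cap: at most ~50% of CPU capacity */

	/* No glibc wrapper exists: invoke the syscall on the calling task. */
	if (syscall(SYS_sched_setattr, 0, &attr, 0))
		perror("sched_setattr");

	return 0;
}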

[PATCH v3 12/14] sched/core: uclamp: add system default clamps

2018-08-06 Thread Patrick Bellasi
ation. Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Tejun Heo Cc: Paul Turner Cc: Suren Baghdasaryan Cc: Todd Kjos Cc: Joel Fernandes Cc: Steve Muckle Cc: Juri Lelli Cc: Dietmar Eggemann Cc: Morten Rasmussen Cc: linux-kernel@vger.kernel.org Cc: linux...@vger.kerne

[PATCH v3 13/14] sched/core: uclamp: update CPU's refcount on TG's clamp changes

2018-08-06 Thread Patrick Bellasi
, as soon as a task group attribute is tweaked. Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Tejun Heo Cc: Paul Turner Cc: Suren Baghdasaryan Cc: Todd Kjos Cc: Joel Fernandes Cc: Steve Muckle Cc: Juri Lelli Cc: Dietmar Eggemann Cc: Morten Rasmussen Cc: linux-kernel@

[PATCH v3 10/14] sched/core: uclamp: map TG's clamp values into CPU's clamp groups

2018-08-06 Thread Patrick Bellasi
ask_struct parameter optional. This allows to re-use the code already available to support the per-task API. Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Tejun Heo Cc: Rafael J. Wysocki Cc: Viresh Kumar Cc: Suren Baghdasaryan Cc: Todd Kjos Cc: Joel Fernandes Cc: J

[PATCH v3 09/14] sched/core: uclamp: propagate parent clamps

2018-08-06 Thread Patrick Bellasi
s: cpu.util.{min,max}.effective. Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Tejun Heo Cc: Rafael J. Wysocki Cc: Viresh Kumar Cc: Suren Baghdasaryan Cc: Todd Kjos Cc: Joel Fernandes Cc: Juri Lelli Cc: linux-kernel@vger.kernel.org Cc: linux...@vger.kernel.org --- Change
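
For context, a small sketch of how a group's clamps could be configured from userspace through the cpu.util.* attributes this series introduces (this entry names the read-only cpu.util.{min,max}.effective ones). The mount point, group path and writable file names are assumptions here, and the unit of the written values (utilization vs. percentage) changes across revisions of the series.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Write one value into a cgroup control file; returns 0 on success. */
static int cg_write(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);
	ssize_t ret;

	if (fd < 0) {
		perror(path);
		return -1;
	}
	ret = write(fd, val, strlen(val));
	close(fd);
	return ret < 0 ? -1 : 0;
}

int main(void)
{
	/* Hypothetical group "myapp" on a legacy hierarchy mount. */
	cg_write("/sys/fs/cgroup/cpu/myapp/cpu.util.min", "10");
	cg_write("/sys/fs/cgroup/cpu/myapp/cpu.util.max", "50");
	return 0;
}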

[PATCH v3 08/14] sched/core: uclamp: extend cpu's cgroup controller

2018-08-06 Thread Patrick Bellasi
patch always returns -EINVAL. Following patches will provide the missing bits. Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Tejun Heo Cc: Rafael J. Wysocki Cc: Viresh Kumar Cc: Suren Baghdasaryan Cc: Todd Kjos Cc: Joel Fernandes Cc: Juri Lelli Cc: linux-kernel@vger.

[PATCH v4 01/16] sched/core: uclamp: extend sched_setattr to support utilization clamping

2018-08-28 Thread Patrick Bellasi
s for a specified task by extending sched_setattr, a syscall which already allows defining task-specific properties for different scheduling classes. Specifically, a new pair of attributes allows specifying a minimum and maximum utilization which the scheduler should consider for a task. Signed-off-b

[PATCH v4 00/16] Add utilization clamping support

2018-08-28 Thread Patrick Bellasi
nt {min,max}_util clamps. - use -ERANGE as range violation error - add attributes to the default hierarchy as well as the legacy one - implement a "nice" semantics where cgroup clamp values are always used to restrict task specific clamp values, i.e. tasks running on a TG are only

[PATCH v4 02/16] sched/core: uclamp: map TASK's clamp values into CPU's clamp groups

2018-08-28 Thread Patrick Bellasi
clamp values is currently defined at compile time. Thus, setting a new clamp value for a task can result in a -ENOSPC error in case this exceeds the maximum number of different clamp values supported. Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Paul Turner Cc: S

[PATCH v4 15/16] sched/core: uclamp: add clamp group discretization support

2018-08-28 Thread Patrick Bellasi
l also be updated to aggregate and represent at run-time the most restrictive value among those of the RUNNABLE tasks refcounted by that group. Each time a CPU clamp group becomes empty we reset its clamp value to the minimum value of the range it tracks. Signed-off-by: Patrick Bellasi Cc: Ingo Molnar

[PATCH v4 13/16] sched/core: uclamp: use percentage clamp values

2018-08-28 Thread Patrick Bellasi
the standard [0..100] range. Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Tejun Heo Cc: Rafael J. Wysocki Cc: Paul Turner Cc: Suren Baghdasaryan Cc: Todd Kjos Cc: Joel Fernandes Cc: Steve Muckle Cc: Juri Lelli Cc: Quentin Perret Cc: Dietmar Eggemann Cc: Morte

[PATCH v4 16/16] sched/cpufreq: uclamp: add utilization clamping for RT tasks

2018-08-28 Thread Patrick Bellasi
T tasks as well as CFS ones are always subject to the set of current utilization clamping constraints. Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Rafael J. Wysocki Cc: Viresh Kumar Cc: Suren Baghdasaryan Cc: Todd Kjos Cc: Joel Fernandes Cc: Juri Lelli Cc: Quentin

[PATCH v4 05/16] sched/core: uclamp: enforce last task UCLAMP_MAX

2018-08-28 Thread Patrick Bellasi
while a CPU is idle, we can still enforce the last used clamp value for it. To the contrary, we do not track any UCLAMP_MIN since, while a CPU is idle, we don't want to enforce any minimum frequency. Indeed, we rely just on blocked load decay to smoothly reduce the frequency. Signed-off-b

[PATCH v4 12/16] sched/core: uclamp: update CPU's refcount on TG's clamp changes

2018-08-28 Thread Patrick Bellasi
, as soon as a task group attribute is tweaked. Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Tejun Heo Cc: Paul Turner Cc: Suren Baghdasaryan Cc: Todd Kjos Cc: Joel Fernandes Cc: Steve Muckle Cc: Juri Lelli Cc: Quentin Perret Cc: Dietmar Eggemann Cc: Morten Ra

[PATCH v4 14/16] sched/core: uclamp: request CAP_SYS_ADMIN by default

2018-08-28 Thread Patrick Bellasi
ADMIN capabilities. Whenever this should be considered too restrictive and/or not required for specific platforms, a kernel boot option is provided to change this default behavior, thus allowing non-privileged tasks to change their utilization clamp values. Signed-off-by: Patrick Bellasi Cc: Ingo M

[PATCH v4 07/16] sched/core: uclamp: extend cpu's cgroup controller

2018-08-28 Thread Patrick Bellasi
patch always returns -EINVAL. Following patches will provide the missing bits. Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Tejun Heo Cc: Rafael J. Wysocki Cc: Viresh Kumar Cc: Suren Baghdasaryan Cc: Todd Kjos Cc: Joel Fernandes Cc: Juri Lelli Cc: Quentin Perret

[PATCH v4 10/16] sched/core: uclamp: use TG's clamps to restrict Task's clamps

2018-08-28 Thread Patrick Bellasi
oses, as well as to properly inform userspace, the sched_getattr(2) call is updated to always return the properly aggregated constraints as described above. This will also make sched_getattr(2) a convenient userspace API to know the utilization constraints enforced on a task by the cgroup's CP

[PATCH v4 11/16] sched/core: uclamp: add system default clamps

2018-08-28 Thread Patrick Bellasi
ue is refcounted considering the system default clamps if either we do not have task group support or they are part of the root_task_group. Tasks without a task specific clamp value in a child task group will be refcounted instead considering the task group clamps. Signed-off-by: Patrick Bellasi Cc: Ingo M

[PATCH v4 08/16] sched/core: uclamp: propagate parent clamps

2018-08-28 Thread Patrick Bellasi
s: cpu.util.{min,max}.effective. Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Tejun Heo Cc: Rafael J. Wysocki Cc: Viresh Kumar Cc: Suren Baghdasaryan Cc: Todd Kjos Cc: Joel Fernandes Cc: Juri Lelli Cc: Quentin Perret Cc: Dietmar Eggemann Cc: Morten Rasmussen Cc: linux-ke

[PATCH v4 06/16] sched/cpufreq: uclamp: add utilization clamping for FAIR tasks

2018-08-28 Thread Patrick Bellasi
nd capping are defined to be: - util_min: 0 - util_max: SCHED_CAPACITY_SCALE which means that by default no boosting/capping is enforced on FAIR tasks, and thus the frequency will be selected considering the actual utilization value of each CPU. Signed-off-by: Patrick Bellasi Cc: Ingo Molna

[PATCH v4 09/16] sched/core: uclamp: map TG's clamp values into CPU's clamp groups

2018-08-28 Thread Patrick Bellasi
time). We do that by slightly refactoring uclamp_group_get() to make the *task_struct parameter optional. This allows to re-use the code already available to support the per-task API. Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Tejun Heo Cc: Rafael J. Wysocki Cc: Vir

[PATCH v4 04/16] sched/core: uclamp: update CPU's refcount on clamp changes

2018-08-28 Thread Patrick Bellasi
least one valid clamp group. Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Paul Turner Cc: Suren Baghdasaryan Cc: Todd Kjos Cc: Joel Fernandes Cc: Juri Lelli Cc: Quentin Perret Cc: Dietmar Eggemann Cc: Morten Rasmussen Cc: linux-kernel@vger.kernel.org Cc: linux

[PATCH v4 03/16] sched/core: uclamp: add CPU's clamp groups accounting

2018-08-28 Thread Patrick Bellasi
the expected number of different clamp values, which can be configured at build time, is usually so small that a more advanced ordering algorithm is not needed. In real use-cases we expect less than 10 different values. Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Pa

Re: [PATCH v4 07/16] sched/core: uclamp: extend cpu's cgroup controller

2018-08-29 Thread Patrick Bellasi
On 28-Aug 11:29, Randy Dunlap wrote:
> On 08/28/2018 06:53 AM, Patrick Bellasi wrote:
> > +config UCLAMP_TASK_GROUP
> > +	bool "Utilization clamping per group of tasks"
> > +	depends on CGROUP_SCHED
> > +	depends on UCLAMP_TASK
> > +	default n

Re: [PATCH v6 03/14] PM: Introduce an Energy Model management framework

2018-08-29 Thread Patrick Bellasi
factor between the number of Write allocated .data..percpu sections and the value of NR_CPUS. Meaning that in the worst case we allocate the same amount of memory using NR_CPUS=64 (the default on arm64) while running on an 8-CPU system... but still we should get less cluster cache pressure at run-time with the array approach, 1 cache line vs 4. Best, Patrick -- #include Patrick Bellasi

Re: [PATCH v6 05/14] sched/topology: Reference the Energy Model of CPUs when available

2018-08-29 Thread Patrick Bellasi
fallback_doms;
-
 	/*
 	 * arch_update_cpu_topology lets virtualized architectures update the
 	 * CPU core maps. It is supposed to return 1 if the topology changed
@@ -2198,21 +2219,7 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
 		;
 	}

-#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
-	/* Build perf. domains: */
-	for (i = 0; i < ndoms_new; i++) {
-		for (j = 0; j < n && !sched_energy_update; j++) {
-			if (cpumask_equal(doms_new[i], doms_cur[j]) &&
-			    cpu_rq(cpumask_first(doms_cur[j]))->rd->pd)
-				goto match3;
-		}
-		/* No match - add perf. domains for a new rd */
-		build_perf_domains(doms_new[i]);
-match3:
-		;
-	}
-	sched_energy_start(ndoms_new, doms_new);
-#endif
+	build_perf_domains(ndoms_new, n, doms_new);

 	/* Remember the new sched domains: */
 	if (doms_cur != &fallback_doms)
---8<---

> 	/* Remember the new sched domains: */
> 	if (doms_cur != &fallback_doms)
> 		free_sched_domains(doms_cur, ndoms_cur);
> -- 
> 2.17.1

-- #include Patrick Bellasi

Re: [PATCH v6 07/14] sched/topology: Introduce sched_energy_present static key

2018-08-29 Thread Patrick Bellasi
s: stopping EAS\n", __func__);
> +		static_branch_disable_cpuslocked(&sched_energy_present);
> +	}
> +
> +	return;
> +
> +enable:
> +	if (!static_branch_unlikely(&sched_energy_present)) {
> +		if (sched_debug())
> +			pr_info("%s: starting EAS\n", __func__);
> +		static_branch_enable_cpuslocked(&sched_energy_present);
> +	}
> +}
>  #else
>  static void free_pd(struct perf_domain *pd) { }
>  #endif
> @@ -2123,6 +2197,7 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
>  match3:
>  		;
>  	}
> +	sched_energy_start(ndoms_new, doms_new);
>  #endif
> 
>  	/* Remember the new sched domains: */
> -- 
> 2.17.1

-- #include Patrick Bellasi

Re: [PATCH v6 07/14] sched/topology: Introduce sched_energy_present static key

2018-08-30 Thread Patrick Bellasi
On 29-Aug 18:20, Quentin Perret wrote: > On Wednesday 29 Aug 2018 at 17:50:58 (+0100), Patrick Bellasi wrote: > > > +/* > > > + * The complexity of the Energy Model is defined as: nr_pd * (nr_cpus + > > > nr_cs) > > > + * with: 'nr_pd' the nu

Re: [PATCH v6 05/14] sched/topology: Reference the Energy Model of CPUs when available

2018-08-30 Thread Patrick Bellasi
On 29-Aug 17:56, Quentin Perret wrote: > On Wednesday 29 Aug 2018 at 17:22:38 (+0100), Patrick Bellasi wrote: > > > +static void build_perf_domains(const struct cpumask *cpu_map) > > > +{ > > > + struct perf_domain *pd = NULL, *tmp; > > > + int cpu =

Re: [PATCH v6 07/14] sched/topology: Introduce sched_energy_present static key

2018-08-30 Thread Patrick Bellasi
On 30-Aug 10:57, Quentin Perret wrote: > Hi Patrick, > > On Thursday 30 Aug 2018 at 10:23:29 (+0100), Patrick Bellasi wrote: > > Yes, dunno if it's just me but perhaps a bit of rephrasing could help. > > Ok, so what about something a little bit more explicit like: >

Re: [PATCH v6 05/14] sched/topology: Reference the Energy Model of CPUs when available

2018-08-30 Thread Patrick Bellasi
On 30-Aug 11:47, Quentin Perret wrote: > On Thursday 30 Aug 2018 at 11:00:20 (+0100), Patrick Bellasi wrote: > > Dunno... but, in any case, probably we don't care about using EAS until > > the boot complete, isn't it? > > So, as of now, EAS will typically start so

Re: [PATCH v3 01/14] sched/core: uclamp: extend sched_setattr to support utilization clamping

2018-08-09 Thread Patrick Bellasi
On 06-Aug 09:50, Randy Dunlap wrote: > Hi, Hi Randy, > On 08/06/2018 09:39 AM, Patrick Bellasi wrote: > > diff --git a/init/Kconfig b/init/Kconfig > > index 041f3a022122..1d45a6877d6f 100644 > > --- a/init/Kconfig > > +++ b/init/Kconfig > > @@ -583,6 +583,25

Re: [PATCH v3 01/14] sched/core: uclamp: extend sched_setattr to support utilization clamping

2018-08-09 Thread Patrick Bellasi
On 07-Aug 14:35, Juri Lelli wrote: > On 06/08/18 17:39, Patrick Bellasi wrote: > > [...] > > > @@ -4218,6 +4245,13 @@ static int __sched_setscheduler(struct task_struct > > *p, > > return retval; > > } > > > >

Re: [PATCH v3 01/14] sched/core: uclamp: extend sched_setattr to support utilization clamping

2018-08-09 Thread Patrick Bellasi
On 09-Aug 11:50, Juri Lelli wrote: > On 09/08/18 10:14, Patrick Bellasi wrote: > > On 07-Aug 14:35, Juri Lelli wrote: > > > On 06/08/18 17:39, Patrick Bellasi wrote: [...] > > 1) make CAP_SYS_NICE protected the clamp groups, with an optional boot > >time paramet

Re: [PATCH v3 05/14] sched/cpufreq: uclamp: add utilization clamping for FAIR tasks

2018-08-09 Thread Patrick Bellasi
On 08-Aug 15:18, Vincent Guittot wrote: > Hi Patrick, Hi VIncent, > On Mon, 6 Aug 2018 at 18:40, Patrick Bellasi wrote: [...] > > +static inline unsigned int uclamp_util(unsigned int cpu, unsigned int util) > > using struct *rq rq instead of cpu as parameter would a

Re: [PATCH v3 06/14] sched/cpufreq: uclamp: add utilization clamping for RT tasks

2018-08-09 Thread Patrick Bellasi
On 07-Aug 15:26, Juri Lelli wrote: > Hi, > > On 06/08/18 17:39, Patrick Bellasi wrote: > > [...] > > > @@ -223,13 +224,25 @@ static unsigned long sugov_get_util(struct sugov_cpu > > *sg_cpu) > > * utilization (PELT windows are synchronized) we can di

Re: [PATCH v3 06/14] sched/cpufreq: uclamp: add utilization clamping for RT tasks

2018-08-09 Thread Patrick Bellasi
On 07-Aug 14:54, Quentin Perret wrote: > Hi Patrick, Hi Quentin! > On Monday 06 Aug 2018 at 17:39:38 (+0100), Patrick Bellasi wrote: > > diff --git a/kernel/sched/cpufreq_schedutil.c > > b/kernel/sched/cpufreq_schedutil.c > > index a7affc729c25..bb25ef66c2d3 10064

Re: [PATCH v3 06/14] sched/cpufreq: uclamp: add utilization clamping for RT tasks

2018-08-13 Thread Patrick Bellasi
d-up a task will get at run-time, independently from higher priority classes. Does that make sense? > > I'm not sure keeping the sched_feat is a good solution on the long > > run, i.e. mainline merge ;) This problem still stands... -- #include Patrick Bellasi

Re: [PATCH v3 06/14] sched/cpufreq: uclamp: add utilization clamping for RT tasks

2018-08-13 Thread Patrick Bellasi
Hi Quentin! On 09-Aug 16:55, Quentin Perret wrote: > Hi Patrick, > > On Thursday 09 Aug 2018 at 16:41:56 (+0100), Patrick Bellasi wrote: > > > IIUC, not far below this you should still have something like: > > > > > > if (rt_rq_is_runnable(&rq->rt))

Re: [PATCH v3 01/14] sched/core: uclamp: extend sched_setattr to support utilization clamping

2018-08-13 Thread Patrick Bellasi
On 07-Aug 11:59, Juri Lelli wrote: > Hi, > > Minor comments below. > > On 06/08/18 17:39, Patrick Bellasi wrote: > > [...] > > > + * > > + * Task Utilization Attributes > > + * === > > + * > > + * A subset of s

Re: [PATCH v3 06/14] sched/cpufreq: uclamp: add utilization clamping for RT tasks

2018-08-13 Thread Patrick Bellasi
On 13-Aug 14:07, Vincent Guittot wrote: > On Mon, 13 Aug 2018 at 12:12, Patrick Bellasi wrote: > > > > Hi Vincent! > > > > On 09-Aug 18:03, Vincent Guittot wrote: > > > > On 07-Aug 15:26, Juri Lelli wrote: > > > > [...] > > > > &

Re: [PATCH v3 06/14] sched/cpufreq: uclamp: add utilization clamping for RT tasks

2018-08-13 Thread Patrick Bellasi
On 13-Aug 16:06, Vincent Guittot wrote: > On Mon, 13 Aug 2018 at 14:49, Patrick Bellasi wrote: > > On 13-Aug 14:07, Vincent Guittot wrote: > > > On Mon, 13 Aug 2018 at 12:12, Patrick Bellasi > > > wrote: [...] > > Yes I agree that the current behavior is not

Re: [PATCH v3 02/14] sched/core: uclamp: map TASK's clamp values into CPU's clamp groups

2018-08-14 Thread Patrick Bellasi
Hi Pavan, On 14-Aug 16:55, Pavan Kondeti wrote: > On Mon, Aug 06, 2018 at 05:39:34PM +0100, Patrick Bellasi wrote: > I see that we drop reference on the previous clamp group when a task changes > its clamp limits. What about exiting tasks which claimed clamp groups? should > not

Re: [PATCH v3 03/14] sched/core: uclamp: add CPU's clamp groups accounting

2018-08-14 Thread Patrick Bellasi
Hi Dietmar! On 14-Aug 17:44, Dietmar Eggemann wrote: > On 08/06/2018 06:39 PM, Patrick Bellasi wrote: [...] > >+/** > >+ * uclamp_cpu_put_id(): decrease reference count for a clamp group on a CPU > >+ * @p: the task being dequeued from a CPU > >+ * @cpu: the CPU fro

Re: [PATCH v2 02/12] sched/core: uclamp: map TASK's clamp values into CPU's clamp groups

2018-07-20 Thread Patrick Bellasi
Hi Suren, thanks for the review, all good point... some more comments follow inline. On 19-Jul 16:51, Suren Baghdasaryan wrote: > On Mon, Jul 16, 2018 at 1:28 AM, Patrick Bellasi > wrote: [...] > > +/** > > + * uclamp_group_available: checks if a clamp group is available >

Re: [RFC PATCH] sched/deadline: sched_getattr() returns absolute dl-task information

2018-07-23 Thread Patrick Bellasi
And of course, by the time we get back to userspace, the returned values
> will be out-of-date anyway. But that isn't to be helped I suppose.

Yes, but that's always kind-of implied by syscall returning kernel metrics, isn't it?

> > +	} else {
> > +		attr->sched_runtime = dl_se->dl_runtime;
> > +		attr->sched_deadline = dl_se->dl_deadline;
> > +	}
> > +
> > 	attr->sched_period = dl_se->dl_period;
> > 	attr->sched_flags = dl_se->flags;
> > }

-- #include Patrick Bellasi

Re: [PATCH v2 02/12] sched/core: uclamp: map TASK's clamp values into CPU's clamp groups

2018-07-23 Thread Patrick Bellasi
be possible to find the group_ids before actually increasing the refcount. ... will look into this for the next reposting. -- #include Patrick Bellasi

Re: [RFC PATCH] sched/deadline: sched_getattr() returns absolute dl-task information

2018-07-23 Thread Patrick Bellasi
On 23-Jul 16:13, Peter Zijlstra wrote: > On Mon, Jul 23, 2018 at 01:49:46PM +0100, Patrick Bellasi wrote: > > On 23-Jul 11:49, Peter Zijlstra wrote: > > > > [...] > > > > > > -void __getparam_dl(struct task_struct *p, struct sched_attr *attr) > >

Re: [PATCH v2 07/12] sched/core: uclamp: enforce last task UCLAMP_MAX

2018-07-23 Thread Patrick Bellasi
On 20-Jul 18:23, Suren Baghdasaryan wrote: > Hi Patrick, Hi Suren, thanks! > On Mon, Jul 16, 2018 at 1:29 AM, Patrick Bellasi > wrote: [...] > > @@ -977,13 +991,21 @@ static inline void uclamp_cpu_get_id(struct > > task_struct *p, > > uc_grp = &a

Re: [PATCH v2 08/12] sched/core: uclamp: extend cpu's cgroup controller

2018-07-23 Thread Patrick Bellasi
On 20-Jul 19:37, Suren Baghdasaryan wrote: > On Mon, Jul 16, 2018 at 1:29 AM, Patrick Bellasi > wrote: [...] > > +#ifdef CONFIG_UCLAMP_TASK_GROUP > > +static int cpu_util_min_write_u64(struct cgroup_subsys_state *css, > > + struct cftyp

Re: [PATCH v2 10/12] sched/core: uclamp: use TG's clamps to restrict Task's clamps

2018-07-23 Thread Patrick Bellasi
On 21-Jul 20:05, Suren Baghdasaryan wrote: > On Mon, Jul 16, 2018 at 1:29 AM, Patrick Bellasi > wrote: > > When a task's util_clamp value is configured via sched_setattr(2), this > > value has to be properly accounted in the corresponding clamp group > > every

Re: [PATCH v2 08/12] sched/core: uclamp: extend cpu's cgroup controller

2018-07-23 Thread Patrick Bellasi
On 23-Jul 08:30, Tejun Heo wrote: > Hello, Hi Tejun! > On Mon, Jul 16, 2018 at 09:29:02AM +0100, Patrick Bellasi wrote: > > The cgroup's CPU controller allows to assign a specified (maximum) > > bandwidth to the tasks of a group. However this bandwidth is defined and

Re: [PATCH v2 10/12] sched/core: uclamp: use TG's clamps to restrict Task's clamps

2018-07-24 Thread Patrick Bellasi
On 23-Jul 10:11, Suren Baghdasaryan wrote: > On Mon, Jul 23, 2018 at 8:40 AM, Patrick Bellasi > wrote: > > On 21-Jul 20:05, Suren Baghdasaryan wrote: > >> On Mon, Jul 16, 2018 at 1:29 AM, Patrick Bellasi [...] > >> So to satisfy both TG and syscall requirements

Re: [PATCH v2 08/12] sched/core: uclamp: extend cpu's cgroup controller

2018-07-24 Thread Patrick Bellasi
. what it actually does. Is it acceptable to have a new interface which fits a wider description? With such a description, our aim is also to demonstrate that we are _not_ adding a special-case new user-space interface but a generic enough interface which can be properly extended in the future without breaking existing functionality, just by improving it further. Best, Patrick -- #include Patrick Bellasi

Re: [PATCH v2 10/12] sched/core: uclamp: use TG's clamps to restrict Task's clamps

2018-07-24 Thread Patrick Bellasi
your review! Cheers Patrick -- #include Patrick Bellasi

Re: [PATCH v2 12/12] sched/core: uclamp: use percentage clamp values

2018-07-24 Thread Patrick Bellasi
On 21-Jul 21:04, Suren Baghdasaryan wrote:
> On Mon, Jul 16, 2018 at 1:29 AM, Patrick Bellasi wrote:

[...]

> > +static inline unsigned int scale_from_percent(unsigned int pct)
> > +{
> > +	WARN_ON(pct > 100);
> > +
> > +	retur
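
For context, a standalone restatement of the truncated scale_from_percent() helper quoted in this entry, mapping the user-visible [0..100] percentage onto the kernel's [0..SCHED_CAPACITY_SCALE] utilization range. The rounding policy (plain truncation) is an assumption, not necessarily what the patch implements.

#include <assert.h>

#define SCHED_CAPACITY_SHIFT	10
#define SCHED_CAPACITY_SCALE	(1 << SCHED_CAPACITY_SHIFT)	/* 1024 */

/* Map a percentage clamp into the capacity scale, e.g. 50% -> 512. */
static inline unsigned int scale_from_percent(unsigned int pct)
{
	assert(pct <= 100);	/* userspace stand-in for the WARN_ON() above */
	return (pct * SCHED_CAPACITY_SCALE) / 100;
}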

Re: [PATCH v2 12/12] sched/core: uclamp: use percentage clamp values

2018-07-24 Thread Patrick Bellasi
On 24-Jul 10:11, Suren Baghdasaryan wrote: > On Tue, Jul 24, 2018 at 9:43 AM, Patrick Bellasi > wrote: > > On 21-Jul 21:04, Suren Baghdasaryan wrote: > >> On Mon, Jul 16, 2018 at 1:29 AM, Patrick Bellasi > >> wrote: > > > > [...] > > > >>

[PATCH v3 2/3] sched/fair: use util_est in LB and WU paths

2018-01-23 Thread Patrick Bellasi
CPU. This allows properly representing the spare capacity of a CPU which, for example, has just got a big task running after a long sleep period. Signed-off-by: Patrick Bellasi Reviewed-by: Dietmar Eggemann Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Rafael J. Wysocki Cc: Viresh Kumar Cc: Paul

[PATCH v3 0/3] Utilization estimation (util_est) for FAIR tasks

2018-01-23 Thread Patrick Bellasi
f68664 [5] Window Assisted Load Tracking https://lwn.net/Articles/704903/ Patrick Bellasi (3): sched/fair: add util_est on top of PELT sched/fair: use util_est in LB and WU paths sched/cpufreq_schedutil: use util_est for OPP selection include/linux/sched.h | 16 + kernel/sched/

[PATCH v3 3/3] sched/cpufreq_schedutil: use util_est for OPP selection

2018-01-23 Thread Patrick Bellasi
ke-up. Signed-off-by: Patrick Bellasi Reviewed-by: Dietmar Eggemann Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Rafael J. Wysocki Cc: Viresh Kumar Cc: Paul Turner Cc: Vincent Guittot Cc: Morten Rasmussen Cc: Dietmar Eggemann Cc: linux-kernel@vger.kernel.org Cc: linux...@vger.kernel.org --- Chang

[PATCH v3 1/3] sched/fair: add util_est on top of PELT

2018-01-23 Thread Patrick Bellasi
y: - Tasks: to better support tasks placement decisions - root cfs_rqs: to better support both tasks placement decisions as well as frequencies selection Signed-off-by: Patrick Bellasi Reviewed-by: Dietmar Eggemann Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Rafael J. Wysocki C

Re: [PATCH v3 1/3] sched/fair: add util_est on top of PELT

2018-01-24 Thread Patrick Bellasi
On 24-Jan 08:40, Joel Fernandes wrote: > On Tue, Jan 23, 2018 at 10:08 AM, Patrick Bellasi > wrote: > > The util_avg signal computed by PELT is too variable for some use-cases. > > For example, a big task waking up after a long sleep period will have its > > utilization a

Re: [PATCH v3 2/3] sched/fair: use util_est in LB and WU paths

2018-01-24 Thread Patrick Bellasi
On 24-Jan 17:03, Pavan Kondeti wrote: > Hi Patrick, Hi Pavan, > On Tue, Jan 23, 2018 at 06:08:46PM +, Patrick Bellasi wrote: > > static unsigned long cpu_util_wake(int cpu, struct task_struct *p) > > { > > - unsigned long util, capacity; >

Re: [PATCH v3 1/3] sched/fair: add util_est on top of PELT

2018-01-30 Thread Patrick Bellasi
On 29-Jan 17:36, Peter Zijlstra wrote: > On Tue, Jan 23, 2018 at 06:08:45PM +0000, Patrick Bellasi wrote: > > +static inline void util_est_dequeue(struct task_struct *p, int flags) > > +{ > > + struct cfs_rq *cfs_rq = &task_rq(p)->cfs; > > + unsigned long util

Re: [PATCH v3 2/3] sched/fair: use util_est in LB and WU paths

2018-01-31 Thread Patrick Bellasi
On 25-Jan 20:03, Pavan Kondeti wrote: > On Wed, Jan 24, 2018 at 07:31:38PM +0000, Patrick Bellasi wrote: > > > > > > + /* > > > > +* These are the main cases covered: > > > > +* - if *p is the only task sleeping on thi

Re: [BUG] schedutil governor produces regular max freq spikes because of lockup detector watchdog threads

2018-01-08 Thread Patrick Bellasi
ross" and thus perhaps it does not make sense to keep adding special DL tasks. Another possible alternative to "tag an RT task" as being special is to use an API similar to the one proposed by the util_clamp RFC: 20170824180857.32103-1-patrick.bell...@arm.com which would allow defining the maximum utilization that can be requested by a properly configured RT task. -- #include Patrick Bellasi

Re: [PATCH v5 1/4] sched/fair: add util_est on top of PELT

2018-03-07 Thread Patrick Bellasi
On 06-Mar 19:58, Peter Zijlstra wrote: > On Thu, Feb 22, 2018 at 05:01:50PM +0000, Patrick Bellasi wrote: > > +static inline void util_est_enqueue(struct cfs_rq *cfs_rq, > > + struct task_struct *p) > > +{ > > + unsigned int enqueued; &g

Re: [PATCH v5 1/4] sched/fair: add util_est on top of PELT

2018-03-07 Thread Patrick Bellasi
On 06-Mar 20:02, Peter Zijlstra wrote: > On Thu, Feb 22, 2018 at 05:01:50PM +0000, Patrick Bellasi wrote: > > +struct util_est { > > + unsigned intenqueued; > > + unsigned intewma; > > +#define UTIL_EST_WEIGHT_SHIFT

Re: [PATCH v5 1/4] sched/fair: add util_est on top of PELT

2018-03-07 Thread Patrick Bellasi
On 06-Mar 19:56, Peter Zijlstra wrote: > On Thu, Feb 22, 2018 at 05:01:50PM +0000, Patrick Bellasi wrote: > > +/** > > + * Estimation Utilization for FAIR tasks. > > + * > > + * Support data structure to track an Exponential Weighted Moving Average > > + * (EWMA)

Re: [PATCH v5 1/4] sched/fair: add util_est on top of PELT

2018-03-07 Thread Patrick Bellasi
On 07-Mar 13:26, Peter Zijlstra wrote: > On Wed, Mar 07, 2018 at 11:47:11AM +0000, Patrick Bellasi wrote: > > On 06-Mar 20:02, Peter Zijlstra wrote: > > > On Thu, Feb 22, 2018 at 05:01:50PM +0000, Patrick Bellasi wrote: > > > > +struct util_est

Re: [PATCH v5 1/4] sched/fair: add util_est on top of PELT

2018-03-07 Thread Patrick Bellasi
On 07-Mar 13:24, Peter Zijlstra wrote: > On Wed, Mar 07, 2018 at 11:31:49AM +0000, Patrick Bellasi wrote: > > > It appears to me this isn't a stable situation and completely relies on > > > the !nr_running case to recalibrate. If we ensure that doesn't happen > &

Re: [PATCH v5 1/4] sched/fair: add util_est on top of PELT

2018-03-07 Thread Patrick Bellasi
On 07-Mar 10:39, Peter Zijlstra wrote: > On Tue, Mar 06, 2018 at 07:58:51PM +0100, Peter Zijlstra wrote: > > On Thu, Feb 22, 2018 at 05:01:50PM +, Patrick Bellasi wrote: > > > +static inline void util_est_enqueue(struct cfs_rq *cfs_rq, > > > +

[PATCH] sched/fair: schedutil: update only with all info available

2018-04-06 Thread Patrick Bellasi
but not the actual cfs_rq utilization, which is updated by a following sched event. This new proposal also allows better aggregation of the schedutil-related flags, which are required only at enqueue_task_fair() time. Indeed, IOWAIT and MIGRATION flags are now requested only when a task is actually

Re: [PATCH] sched/fair: schedutil: update only with all info available

2018-04-11 Thread Patrick Bellasi
On 11-Apr 09:57, Vincent Guittot wrote: > On 6 April 2018 at 19:28, Patrick Bellasi wrote: > > > } > > @@ -5454,8 +5441,11 @@ static void dequeue_task_fair(struct rq *rq, struct > > task_struct *p, int flags) > > update_cfs_group(se); > &g

Re: [PATCH] sched/fair: schedutil: update only with all info available

2018-04-11 Thread Patrick Bellasi
On 11-Apr 08:57, Vincent Guittot wrote: > On 10 April 2018 at 13:04, Patrick Bellasi wrote: > > On 09-Apr 10:51, Vincent Guittot wrote: > >> On 6 April 2018 at 19:28, Patrick Bellasi wrote: > >> Peter, > >> what was your goal with adding the condition &quo

Re: [PATCH v2] cpufreq/schedutil: Cleanup, document and fix iowait boost

2018-04-11 Thread Patrick Bellasi
On 10-Apr 21:37, Peter Zijlstra wrote: > On Tue, Apr 10, 2018 at 04:59:31PM +0100, Patrick Bellasi wrote: > > The iowait boosting code has been recently updated to add a progressive > > boosting behavior which allows to be less aggressive in boosting tasks > > doing only s

Re: [PATCH v2] cpufreq/schedutil: Cleanup, document and fix iowait boost

2018-04-11 Thread Patrick Bellasi
On 11-Apr 10:07, Viresh Kumar wrote: > On 10-04-18, 16:59, Patrick Bellasi wrote: > > The iowait boosting code has been recently updated to add a progressive > > boosting behavior which allows to be less aggressive in boosting tasks > > doing only sporadic IO operations, t

Re: [PATCH] sched/fair: schedutil: update only with all info available

2018-04-11 Thread Patrick Bellasi
On 11-Apr 13:56, Vincent Guittot wrote: > On 11 April 2018 at 12:15, Patrick Bellasi wrote: > > On 11-Apr 08:57, Vincent Guittot wrote: > >> On 10 April 2018 at 13:04, Patrick Bellasi wrote: > >> > On 09-Apr 10:51, Vincent Guittot wrote: > >> >&

Re: [PATCH v2] cpufreq/schedutil: Cleanup, document and fix iowait boost

2018-04-11 Thread Patrick Bellasi
On 11-Apr 12:58, Peter Zijlstra wrote: > On Wed, Apr 11, 2018 at 11:44:45AM +0100, Patrick Bellasi wrote: > > > > - sugov_set_iowait_boost: is now in charge only to set/increase the IO > > > > wait boost, every time a task wakes up from an IO wait. > > >

Re: [PATCH 1/7] sched/core: uclamp: add CPU clamp groups accounting

2018-04-13 Thread Patrick Bellasi
On 13-Apr 12:22, Peter Zijlstra wrote: > On Fri, Apr 13, 2018 at 10:26:48AM +0200, Peter Zijlstra wrote: > > On Mon, Apr 09, 2018 at 05:56:09PM +0100, Patrick Bellasi wrote: > > > +static inline void uclamp_cpu_get(struct task_struct *p, int cpu, int > > > clamp_

Re: [PATCH 1/7] sched/core: uclamp: add CPU clamp groups accounting

2018-04-13 Thread Patrick Bellasi
On 13-Apr 11:46, Peter Zijlstra wrote: > On Mon, Apr 09, 2018 at 05:56:09PM +0100, Patrick Bellasi wrote: > > +static inline void uclamp_cpu_get(struct task_struct *p, int cpu, int > > clamp_id) > > +{ > > + struct uclamp_cpu *uc_cpu = &cpu_rq(cpu)->uclamp[

Re: [PATCH 1/7] sched/core: uclamp: add CPU clamp groups accounting

2018-04-13 Thread Patrick Bellasi
On 13-Apr 10:43, Peter Zijlstra wrote: > On Mon, Apr 09, 2018 at 05:56:09PM +0100, Patrick Bellasi wrote: > > +static inline void uclamp_task_update(struct rq *rq, struct task_struct *p) > > +{ > > + int cpu = cpu_of(rq); > > + int clamp_id; > > + > > +

Re: [PATCH 1/7] sched/core: uclamp: add CPU clamp groups accounting

2018-04-13 Thread Patrick Bellasi
On 13-Apr 10:40, Peter Zijlstra wrote: > On Mon, Apr 09, 2018 at 05:56:09PM +0100, Patrick Bellasi wrote: > > +static inline void init_uclamp(void) > > WTH is that inline? You mean I can avoid the attribute? ... or that I should do it in another way? > > +{ > > +

Re: [PATCH 1/7] sched/core: uclamp: add CPU clamp groups accounting

2018-04-13 Thread Patrick Bellasi
On 13-Apr 13:29, Peter Zijlstra wrote: > On Fri, Apr 13, 2018 at 12:17:53PM +0100, Patrick Bellasi wrote: > > On 13-Apr 10:40, Peter Zijlstra wrote: > > > On Mon, Apr 09, 2018 at 05:56:09PM +0100, Patrick Bellasi wrote: > > > > +static inline void init_uclamp(void) &

Re: [PATCH 1/7] sched/core: uclamp: add CPU clamp groups accounting

2018-04-13 Thread Patrick Bellasi
On 13-Apr 13:36, Peter Zijlstra wrote: > On Fri, Apr 13, 2018 at 12:15:10PM +0100, Patrick Bellasi wrote: > > On 13-Apr 10:43, Peter Zijlstra wrote: > > > On Mon, Apr 09, 2018 at 05:56:09PM +0100, Patrick Bellasi wrote: > > > > +static inline void uclamp_tas

Re: [PATCH 1/7] sched/core: uclamp: add CPU clamp groups accounting

2018-04-13 Thread Patrick Bellasi
On 13-Apr 12:47, Patrick Bellasi wrote: > On 13-Apr 13:36, Peter Zijlstra wrote: > > On Fri, Apr 13, 2018 at 12:15:10PM +0100, Patrick Bellasi wrote: > > > On 13-Apr 10:43, Peter Zijlstra wrote: > > > > On Mon, Apr 09, 2018 at 05:56:09PM +0100, Patrick Bellasi wrote:

Re: [RFC PATCH 4/6] sched/fair: Introduce an energy estimation helper function

2018-03-21 Thread Patrick Bellasi
S mostly makes sense when you have a "minimum" control on OPPs... otherwise all the energy estimations are really fuzzy.

> Also, even when schedutil is in use, shouldn't we ask it for a util
> "computation" instead of replicating its _current_ heuristic?

Are you proposing to have the 1.25 factor only here and remove it from schedutil?

> I fear the two might diverge in the future.

That could be avoided by factoring out from schedutil the "compensation" factor into a proper function to be used by all the interested players, isn't it?

-- #include Patrick Bellasi
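
For context, a sketch of the schedutil "compensation" factor being discussed: the governor requests a frequency 25% above the raw utilization ratio, i.e. next_freq = 1.25 * max_freq * util / max. Factoring that rule out into a shared helper is the kind of change suggested here; the helper name below mirrors what later landed in mainline and is an assumption with respect to this thread.

/*
 * Shared utilization-to-frequency rule: 1.25 * freq * util / max,
 * implemented as freq + freq/4 to stay in integer arithmetic.
 */
static inline unsigned long map_util_freq(unsigned long util,
					  unsigned long freq,
					  unsigned long max)
{
	return (freq + (freq >> 2)) * util / max;
}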

Re: [RFC PATCH 4/6] sched/fair: Introduce an energy estimation helper function

2018-03-21 Thread Patrick Bellasi
t; + > /* > * select_task_rq_fair: Select target runqueue for the waking task in domains > * that have the 'sd_flag' flag set. In practice, this is SD_BALANCE_WAKE, > -- > 2.11.0 > -- #include Patrick Bellasi

Re: [RFC PATCH 5/6] sched/fair: Select an energy-efficient CPU on task wake-up

2018-03-21 Thread Patrick Bellasi
@@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
>  			if (want_affine)
>  				current->recent_used_cpu = cpu;
>  		}
> +	} else if (energy_sd) {
> +		new_cpu = find_energy_efficient_cpu(energy_sd, p, prev_cpu);
>  	} else {
>  		new_cpu = find_idlest_cpu(sd, p, cpu, prev_cpu, sd_flag);
>  	}

-- #include Patrick Bellasi
