> return cpu_util(cpu);
>
> capacity = capacity_orig_of(cpu);
> - util = max_t(long, cpu_rq(cpu)->cfs.avg.util_avg - task_util(p), 0);
> + util = max_t(long, cpu_rq(cpu)->cfs.avg.util_avg - task_util_peak(p), 0);
>
> return (util >= capacity) ? capacity : util;
> }
> @@ -5476,7 +5481,7 @@ static int wake_cap(struct task_struct *p, int cpu, int prev_cpu)
> /* Bring task utilization in sync with prev_cpu */
> sync_entity_load_avg(&p->se);
>
> - return min_cap * 1024 < task_util(p) * capacity_margin;
> + return min_cap * 1024 < task_util_peak(p) * capacity_margin;
> }
>
> /*
> --
> 1.9.1
>
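For context, a minimal sketch of what task_util_peak() plausibly
returns: the hunks above only show the call sites, so the body below is
an assumption based on the util_est data this series introduces.

static inline unsigned long task_util_peak(struct task_struct *p)
{
	/* util_est.enqueued snapshots the task's utilization at dequeue
	 * time, so it does not decay across long sleeps like util_avg */
	return READ_ONCE(p->se.avg.util_est.enqueued);
}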
> causing the heavy task to run on the little
> core and the light task to run on the big core.
That's an interesting point we should keep into consideration for the
design of the complete solution.
I would prefer to postpone this discussion on the list until we
present the next ext
_util = (0, SCHED_CAPACITY_SCALE)
and thus, RT tasks always run at the maximum OPP if not otherwise
constrained by userspace.
Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: Suren Baghdasaryan
Cc: Todd Kjos
Cc: Joel Fernandes
Cc: Juri
the expected number of different clamp values, which can be
configured at build time, is usually so small that a more advanced
ordering algorithm is not needed. In real use-cases we expect fewer
than 10 different values.
Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Pa
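Just to illustrate the point, a minimal sketch (not the kernel code;
names and sizes assumed) of the linear scan such a small set of clamp
groups allows:

#define UCLAMP_GROUPS 10

struct uclamp_group {
	unsigned int value;	/* clamp value tracked by this group */
	unsigned int refcount;	/* tasks currently refcounting it */
};

static struct uclamp_group groups[UCLAMP_GROUPS];

/* Return the group already tracking @value, else a free one, else -1 */
static int uclamp_group_find(unsigned int value)
{
	int free_id = -1;
	int id;

	for (id = 0; id < UCLAMP_GROUPS; id++) {
		if (!groups[id].refcount) {
			if (free_id < 0)
				free_id = id;
			continue;
		}
		if (groups[id].value == value)
			return id;
	}
	return free_id;
}

With ~10 entries the scan touches one or two cache lines, which is why
a more advanced ordering structure buys nothing here.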
{min,max}_util clamps.
- use -ERANGE as range violation error
- add attributes to the default hierarchy as well as the legacy one
- implement a "nice" semantics where cgroup clamp values are always
used to restrict task specific clamp values,
i.e. tasks running on a TG are only a
described above. This will also make
sched_getattr(2) a convenient userspace API to know the utilization
constraints enforced on a task by the cgroup's CPU controller.
Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Tejun Heo
Cc: Paul Turner
Cc: Suren Baghdasaryan
nd capping are defined to be:
- util_min: 0
- util_max: SCHED_CAPACITY_SCALE
which means that by default no boosting/capping is enforced on FAIR
tasks, and thus the frequency will be selected considering the actual
utilization value of each CPU.
Signed-off-by: Patrick Bellasi
Cc: Ingo Molna
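A sketch of the clamping step this implies in the frequency-selection
path (helper name and shape assumed): with the quoted defaults the
clamp degenerates to a no-op and plain utilization drives the OPP
choice.

#define SCHED_CAPACITY_SCALE 1024

static inline unsigned long uclamp_util(unsigned long util,
					unsigned long util_min,
					unsigned long util_max)
{
	/* defaults: util_min = 0, util_max = SCHED_CAPACITY_SCALE */
	if (util < util_min)
		return util_min;
	if (util > util_max)
		return util_max;
	return util;
}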
least one valid clamp group.
Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Paul Turner
Cc: Suren Baghdasaryan
Cc: Todd Kjos
Cc: Joel Fernandes
Cc: Juri Lelli
Cc: Dietmar Eggemann
Cc: Morten Rasmussen
Cc: linux-kernel@vger.kernel.org
Cc: linux...@vger.kernel.org
the standard [0..100] range.
Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Tejun Heo
Cc: Rafael J. Wysocki
Cc: Paul Turner
Cc: Suren Baghdasaryan
Cc: Todd Kjos
Cc: Joel Fernandes
Cc: Steve Muckle
Cc: Juri Lelli
Cc: linux-kernel@vger.kernel.org
Cc: linux...@
clamp values is currently defined at
compile time. Thus, setting a new clamp value for a task can result in
a -ENOSPC error if it exceeds the maximum number of different clamp
values supported.
Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Paul Turner
Cc: S
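A sketch of the failure path described above, reusing
uclamp_group_find() from the earlier sketch (hedged: the actual
refcounting code is more involved):

static int uclamp_group_get(unsigned int value)
{
	int id = uclamp_group_find(value);

	if (id < 0)
		return -ENOSPC;	/* all compile-time groups are busy */

	groups[id].value = value;
	groups[id].refcount++;

	return id;
}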
while a CPU is idle, we can still enforce the last used
clamp value for it.
By contrast, we do not track any UCLAMP_MIN since, while a CPU is
idle, we don't want to enforce any minimum frequency.
Indeed, we rely just on blocked load decay to smoothly reduce the
frequency.
Signed-off-b
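A sketch of the asymmetry described above (structure and hook names
assumed): only the max clamp is parked across idle.

struct uclamp_cpu {
	unsigned int value;	/* currently enforced clamp value */
	bool idle;		/* value held over from last runnable task */
};

static void uclamp_cpu_idle_enter(struct uclamp_cpu *uc_max)
{
	/* UCLAMP_MAX: keep the last value so the cap survives idle */
	uc_max->idle = true;
}

static void uclamp_cpu_idle_exit(struct uclamp_cpu *uc_max,
				 unsigned int clamp_value)
{
	/* first enqueue after idle re-initializes the tracked value */
	if (uc_max->idle) {
		uc_max->value = clamp_value;
		uc_max->idle = false;
	}
}

UCLAMP_MIN gets no such treatment: nothing is held while idle, so
blocked-load decay alone drags the frequency down.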
s for a
specified task by extending sched_setattr, a syscall which already
allows defining task-specific properties for different scheduling
classes.
Specifically, a new pair of attributes allows specifying a minimum and
maximum utilization which the scheduler should consider for a task.
Signed-off-b
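A hedged userspace sketch of the extended syscall described above. The
attribute names (sched_util_min/sched_util_max) and the
SCHED_FLAG_UTIL_CLAMP value follow the series but may differ per
version, and the struct is declared locally since uapi headers may not
export it yet; treat this as illustrative only.

#define _GNU_SOURCE
#include <stdint.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef SCHED_FLAG_UTIL_CLAMP
#define SCHED_FLAG_UTIL_CLAMP	0x60	/* assumed; check your uapi headers */
#endif

struct sched_attr {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	uint64_t sched_runtime;
	uint64_t sched_deadline;
	uint64_t sched_period;
	uint32_t sched_util_min;	/* requested minimum utilization */
	uint32_t sched_util_max;	/* requested maximum utilization */
};

static int set_util_clamp(pid_t pid, unsigned int min, unsigned int max)
{
	struct sched_attr attr = {
		.size		= sizeof(attr),
		.sched_policy	= 0,	/* SCHED_NORMAL */
		.sched_flags	= SCHED_FLAG_UTIL_CLAMP,
		.sched_util_min	= min,	/* 0..1024 */
		.sched_util_max	= max,	/* 0..1024 */
	};

	/* glibc has no wrapper, so use the raw syscall */
	return syscall(SYS_sched_setattr, pid, &attr, 0);
}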
ation.
Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Tejun Heo
Cc: Paul Turner
Cc: Suren Baghdasaryan
Cc: Todd Kjos
Cc: Joel Fernandes
Cc: Steve Muckle
Cc: Juri Lelli
Cc: Dietmar Eggemann
Cc: Morten Rasmussen
Cc: linux-kernel@vger.kernel.org
Cc: linux...@vger.kerne
, as soon as a task group attribute is tweaked.
Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Tejun Heo
Cc: Paul Turner
Cc: Suren Baghdasaryan
Cc: Todd Kjos
Cc: Joel Fernandes
Cc: Steve Muckle
Cc: Juri Lelli
Cc: Dietmar Eggemann
Cc: Morten Rasmussen
Cc: linux-kernel@
s: cpu.util.{min,max}.effective.
Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Tejun Heo
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: Suren Baghdasaryan
Cc: Todd Kjos
Cc: Joel Fernandes
Cc: Juri Lelli
Cc: linux-kernel@vger.kernel.org
Cc: linux...@vger.kernel.org
---
Change
patch
always returns -EINVAL. Following patches will provide the missing bits.
Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Tejun Heo
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: Suren Baghdasaryan
Cc: Todd Kjos
Cc: Joel Fernandes
Cc: Juri Lelli
Cc: linux-kernel@vger.
l also be updated to aggregate and represent
at run-time the most restrictive value among those of the RUNNABLE tasks
refcounted by that group. Each time a CPU clamp group becomes empty we
reset its clamp value to the minimum value of the range it tracks.
Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
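To illustrate the aggregation, a sketch reusing struct uclamp_group
from the sketch further above (names assumed; the real code refcounts
per CPU):

/* Most restrictive = maximum among the clamps of RUNNABLE tasks */
static unsigned int uclamp_cpu_update(struct uclamp_group *grp, int n,
				      unsigned int range_min)
{
	unsigned int max_value = range_min;	/* reset value when empty */
	int id;

	for (id = 0; id < n; id++) {
		if (!grp[id].refcount)
			continue;	/* empty group: no constraint */
		if (grp[id].value > max_value)
			max_value = grp[id].value;
	}

	return max_value;
}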
T tasks as well as CFS ones are always subject
to the set of current utilization clamping constraints.
Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: Suren Baghdasaryan
Cc: Todd Kjos
Cc: Joel Fernandes
Cc: Juri Lelli
Cc: Quentin
ADMIN capabilities.
Whenever this is considered too restrictive and/or not required for
specific platforms, a kernel boot option is provided to change this
default behavior, thus allowing non-privileged tasks to change their
utilization clamp values.
Signed-off-by: Patrick Bellasi
Cc: Ingo M
oses, as well as to properly inform userspace, the
sched_getattr(2) call is updated to always return the properly
aggregated constraints as described above. This will also make
sched_getattr(2) a convenient userspace API to know the utilization
constraints enforced on a task by the cgroup's CP
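A matching userspace sketch for the read side (struct sched_attr as in
the sched_setattr example earlier; hedged and kernel-version
dependent):

#include <sys/syscall.h>
#include <unistd.h>

static int get_util_clamp(pid_t pid, struct sched_attr *attr)
{
	/* the kernel fills sched_util_{min,max} with the *effective*
	 * values, i.e. task clamps restricted by the cgroup clamps */
	return syscall(SYS_sched_getattr, pid, attr, sizeof(*attr), 0);
}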
ue is refcounted considering the
system default clamps if either we do not have task group support or
they are part of the root_task_group.
Tasks without a task-specific clamp value in a child task group will
instead be refcounted considering the task group clamps.
Signed-off-by: Patrick Bellasi
Cc: Ingo M
time).
We do that by slightly refactoring uclamp_group_get() to make the
*task_struct parameter optional. This allows re-using the code already
available to support the per-task API.
Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Tejun Heo
Cc: Rafael J. Wysocki
Cc: Vir
On 28-Aug 11:29, Randy Dunlap wrote:
> On 08/28/2018 06:53 AM, Patrick Bellasi wrote:
> > +config UCLAMP_TASK_GROUP
> > + bool "Utilization clamping per group of tasks"
> > + depends on CGROUP_SCHED
> > + depends on UCLAMP_TASK
> > + default n
factor between the number of allocated .data..percpu sections and
the value of NR_CPUS. Meaning that in the worst case we allocate
the same amount of memory using NR_CPUS=64 (the default on arm64)
while running on an 8-CPU system... but still we should get less
cluster cache pressure at run-time with the array approach, 1
cache line vs 4.
Best,
Patrick
fallback_doms;
-
/*
* arch_update_cpu_topology lets virtualized architectures update the
* CPU core maps. It is supposed to return 1 if the topology changed
@@ -2198,21 +2219,7 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
;
}
-#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
- /* Build perf. domains: */
- for (i = 0; i < ndoms_new; i++) {
- for (j = 0; j < n && !sched_energy_update; j++) {
- if (cpumask_equal(doms_new[i], doms_cur[j]) &&
- cpu_rq(cpumask_first(doms_cur[j]))->rd->pd)
- goto match3;
- }
- /* No match - add perf. domains for a new rd */
- build_perf_domains(doms_new[i]);
-match3:
- ;
- }
- sched_energy_start(ndoms_new, doms_new);
-#endif
+ build_perf_domains(ndoms_new, n, doms_new);
/* Remember the new sched domains: */
if (doms_cur != &fallback_doms)
---8<---
> /* Remember the new sched domains: */
> if (doms_cur != &fallback_doms)
> free_sched_domains(doms_cur, ndoms_cur);
> --
> 2.17.1
>
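A hedged reconstruction of the factored-out helper: only the call site
appears above, so this assumes the original single-cpumask
build_perf_domains() was renamed (here: build_perf_root_domain) so the
wrapper can keep the old name.

#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
static void build_perf_domains(int ndoms_new, int n, cpumask_var_t doms_new[])
{
	int i, j;

	for (i = 0; i < ndoms_new; i++) {
		for (j = 0; j < n && !sched_energy_update; j++) {
			if (cpumask_equal(doms_new[i], doms_cur[j]) &&
			    cpu_rq(cpumask_first(doms_cur[j]))->rd->pd)
				goto match;
		}
		/* No match: build perf. domains for a new root domain */
		build_perf_root_domain(doms_new[i]);
match:
		;
	}
	sched_energy_start(ndoms_new, doms_new);
}
#else
static void build_perf_domains(int ndoms_new, int n, cpumask_var_t doms_new[]) { }
#endif

The empty #else stub is what would let the call site in
partition_sched_domains() lose its #ifdef guard.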
> + pr_info("%s: stopping EAS\n", __func__);
> + static_branch_disable_cpuslocked(&sched_energy_present);
> + }
> +
> + return;
> +
> +enable:
> + if (!static_branch_unlikely(&sched_energy_present)) {
> + if (sched_debug())
> + pr_info("%s: starting EAS\n", __func__);
> + static_branch_enable_cpuslocked(&sched_energy_present);
> + }
> +}
> #else
> static void free_pd(struct perf_domain *pd) { }
> #endif
> @@ -2123,6 +2197,7 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
> match3:
> ;
> }
> + sched_energy_start(ndoms_new, doms_new);
> #endif
>
> /* Remember the new sched domains: */
> --
> 2.17.1
>
On 29-Aug 18:20, Quentin Perret wrote:
> On Wednesday 29 Aug 2018 at 17:50:58 (+0100), Patrick Bellasi wrote:
> > > +/*
> > > + * The complexity of the Energy Model is defined as: nr_pd * (nr_cpus + nr_cs)
> > > + * with: 'nr_pd' the nu
On 29-Aug 17:56, Quentin Perret wrote:
> On Wednesday 29 Aug 2018 at 17:22:38 (+0100), Patrick Bellasi wrote:
> > > +static void build_perf_domains(const struct cpumask *cpu_map)
> > > +{
> > > + struct perf_domain *pd = NULL, *tmp;
> > > + int cpu =
On 30-Aug 10:57, Quentin Perret wrote:
> Hi Patrick,
>
> On Thursday 30 Aug 2018 at 10:23:29 (+0100), Patrick Bellasi wrote:
> > Yes, dunno if it's just me but perhaps a bit of rephrasing could help.
>
> Ok, so what about something a little bit more explicit like:
>
On 30-Aug 11:47, Quentin Perret wrote:
> On Thursday 30 Aug 2018 at 11:00:20 (+0100), Patrick Bellasi wrote:
> > Dunno... but, in any case, probably we don't care about using EAS until
> > the boot complete, isn't it?
>
> So, as of now, EAS will typically start so
On 06-Aug 09:50, Randy Dunlap wrote:
> Hi,
Hi Randy,
> On 08/06/2018 09:39 AM, Patrick Bellasi wrote:
> > diff --git a/init/Kconfig b/init/Kconfig
> > index 041f3a022122..1d45a6877d6f 100644
> > --- a/init/Kconfig
> > +++ b/init/Kconfig
> > @@ -583,6 +583,25
On 07-Aug 14:35, Juri Lelli wrote:
> On 06/08/18 17:39, Patrick Bellasi wrote:
>
> [...]
>
> > @@ -4218,6 +4245,13 @@ static int __sched_setscheduler(struct task_struct
> > *p,
> > return retval;
> > }
> >
> >
On 09-Aug 11:50, Juri Lelli wrote:
> On 09/08/18 10:14, Patrick Bellasi wrote:
> > On 07-Aug 14:35, Juri Lelli wrote:
> > > On 06/08/18 17:39, Patrick Bellasi wrote:
[...]
> > 1) make the clamp groups CAP_SYS_NICE protected, with an optional boot
> > time paramet
On 08-Aug 15:18, Vincent Guittot wrote:
> Hi Patrick,
Hi Vincent,
> On Mon, 6 Aug 2018 at 18:40, Patrick Bellasi wrote:
[...]
> > +static inline unsigned int uclamp_util(unsigned int cpu, unsigned int util)
>
> using struct rq *rq instead of cpu as parameter would a
On 07-Aug 15:26, Juri Lelli wrote:
> Hi,
>
> On 06/08/18 17:39, Patrick Bellasi wrote:
>
> [...]
>
> > @@ -223,13 +224,25 @@ static unsigned long sugov_get_util(struct sugov_cpu *sg_cpu)
> > * utilization (PELT windows are synchronized) we can di
On 07-Aug 14:54, Quentin Perret wrote:
> Hi Patrick,
Hi Quentin!
> On Monday 06 Aug 2018 at 17:39:38 (+0100), Patrick Bellasi wrote:
> > diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> > index a7affc729c25..bb25ef66c2d3 10064
d-up a task will get at run-time, independently of higher
priority classes.
Does that make sense?
> > I'm not sure keeping the sched_feat is a good solution on the long
> > run, i.e. mainline merge ;)
This problem still stands...
Hi Quentin!
On 09-Aug 16:55, Quentin Perret wrote:
> Hi Patrick,
>
> On Thursday 09 Aug 2018 at 16:41:56 (+0100), Patrick Bellasi wrote:
> > > IIUC, not far below this you should still have something like:
> > >
> > > if (rt_rq_is_runnable(&rq->rt))
On 07-Aug 11:59, Juri Lelli wrote:
> Hi,
>
> Minor comments below.
>
> On 06/08/18 17:39, Patrick Bellasi wrote:
>
> [...]
>
> > + *
> > + * Task Utilization Attributes
> > + * ===
> > + *
> > + * A subset of s
On 13-Aug 14:07, Vincent Guittot wrote:
> On Mon, 13 Aug 2018 at 12:12, Patrick Bellasi wrote:
> >
> > Hi Vincent!
> >
> > On 09-Aug 18:03, Vincent Guittot wrote:
> > > > On 07-Aug 15:26, Juri Lelli wrote:
> >
> > [...]
> >
On 13-Aug 16:06, Vincent Guittot wrote:
> On Mon, 13 Aug 2018 at 14:49, Patrick Bellasi wrote:
> > On 13-Aug 14:07, Vincent Guittot wrote:
> > > On Mon, 13 Aug 2018 at 12:12, Patrick Bellasi
> > > wrote:
[...]
> > Yes I agree that the current behavior is not
Hi Pavan,
On 14-Aug 16:55, Pavan Kondeti wrote:
> On Mon, Aug 06, 2018 at 05:39:34PM +0100, Patrick Bellasi wrote:
> I see that we drop reference on the previous clamp group when a task changes
> its clamp limits. What about exiting tasks which claimed clamp groups? should
> not
Hi Dietmar!
On 14-Aug 17:44, Dietmar Eggemann wrote:
> On 08/06/2018 06:39 PM, Patrick Bellasi wrote:
[...]
> >+/**
> >+ * uclamp_cpu_put_id(): decrease reference count for a clamp group on a CPU
> >+ * @p: the task being dequeued from a CPU
> >+ * @cpu: the CPU fro
Hi Suren,
thanks for the review, all good points... some more comments follow
inline.
On 19-Jul 16:51, Suren Baghdasaryan wrote:
> On Mon, Jul 16, 2018 at 1:28 AM, Patrick Bellasi
> wrote:
[...]
> > +/**
> > + * uclamp_group_available: checks if a clamp group is available
>
> And of course, by the time we get back to userspace, the returned values
> will be out-of-date anyway. But that isn't to be helped I suppose.
Yes, but that's always kind-of implied by a syscall returning kernel
metrics, isn't it?
> > + } else {
> > + attr->sched_runtime = dl_se->dl_runtime;
> > + attr->sched_deadline = dl_se->dl_deadline;
> > + }
> > +
> > attr->sched_period = dl_se->dl_period;
> > attr->sched_flags = dl_se->flags;
> > }
be possible to find the group_ids before
actually increasing the refcount.
... will look into this for the next reposting.
On 23-Jul 16:13, Peter Zijlstra wrote:
> On Mon, Jul 23, 2018 at 01:49:46PM +0100, Patrick Bellasi wrote:
> > On 23-Jul 11:49, Peter Zijlstra wrote:
> >
> > [...]
> >
> > > > -void __getparam_dl(struct task_struct *p, struct sched_attr *attr)
> >
On 20-Jul 18:23, Suren Baghdasaryan wrote:
> Hi Patrick,
Hi Suren,
thanks!
> On Mon, Jul 16, 2018 at 1:29 AM, Patrick Bellasi
> wrote:
[...]
> > @@ -977,13 +991,21 @@ static inline void uclamp_cpu_get_id(struct
> > task_struct *p,
> > uc_grp = &a
On 20-Jul 19:37, Suren Baghdasaryan wrote:
> On Mon, Jul 16, 2018 at 1:29 AM, Patrick Bellasi
> wrote:
[...]
> > +#ifdef CONFIG_UCLAMP_TASK_GROUP
> > +static int cpu_util_min_write_u64(struct cgroup_subsys_state *css,
> > + struct cftyp
On 21-Jul 20:05, Suren Baghdasaryan wrote:
> On Mon, Jul 16, 2018 at 1:29 AM, Patrick Bellasi
> wrote:
> > When a task's util_clamp value is configured via sched_setattr(2), this
> > value has to be properly accounted in the corresponding clamp group
> > every
On 23-Jul 08:30, Tejun Heo wrote:
> Hello,
Hi Tejun!
> On Mon, Jul 16, 2018 at 09:29:02AM +0100, Patrick Bellasi wrote:
> > The cgroup's CPU controller allows to assign a specified (maximum)
> > bandwidth to the tasks of a group. However this bandwidth is defined and
On 23-Jul 10:11, Suren Baghdasaryan wrote:
> On Mon, Jul 23, 2018 at 8:40 AM, Patrick Bellasi
> wrote:
> > On 21-Jul 20:05, Suren Baghdasaryan wrote:
> >> On Mon, Jul 16, 2018 at 1:29 AM, Patrick Bellasi
[...]
> >> So to satisfy both TG and syscall requirements
. what it actually does.
Is it acceptable to have a new interface which fits a wider
description?
With such a description, our aim is also to demonstrate that we are
_not_ adding a special-case new user-space interface but a generic
enough interface which can be properly extended in the future without
breaking existing functionalities, simply by improving them.
Best,
Patrick
your review!
Cheers Patrick
On 21-Jul 21:04, Suren Baghdasaryan wrote:
> On Mon, Jul 16, 2018 at 1:29 AM, Patrick Bellasi
> wrote:
[...]
> > +static inline unsigned int scale_from_percent(unsigned int pct)
> > +{
> > + WARN_ON(pct > 100);
> > +
> > + retur
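The quote is cut off right at the return; a plausible completion,
assuming a plain linear [0..100] -> [0..SCHED_CAPACITY_SCALE] mapping
(the rounding policy is a guess):

static inline unsigned int scale_from_percent(unsigned int pct)
{
	WARN_ON(pct > 100);

	return (SCHED_CAPACITY_SCALE * pct) / 100;
}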
On 24-Jul 10:11, Suren Baghdasaryan wrote:
> On Tue, Jul 24, 2018 at 9:43 AM, Patrick Bellasi
> wrote:
> > On 21-Jul 21:04, Suren Baghdasaryan wrote:
> >> On Mon, Jul 16, 2018 at 1:29 AM, Patrick Bellasi
> >> wrote:
> >
> > [...]
> >
> >>
CPU.
This allows properly representing the spare capacity of a CPU which,
for example, has just got a big task running after a long sleep period.
Signed-off-by: Patrick Bellasi
Reviewed-by: Dietmar Eggemann
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: Paul
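A sketch of how the estimated utilization plugs into the CPU-side
signal (assumed shape, not the exact patch code):

static unsigned long cpu_util_est(struct cfs_rq *cfs_rq)
{
	unsigned long util = READ_ONCE(cfs_rq->avg.util_avg);
	unsigned long est  = READ_ONCE(cfs_rq->avg.util_est.enqueued);

	/* a big task just woken from a long sleep has a decayed
	 * util_avg but a still-large estimate: trust the max */
	return max(util, est);
}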
f68664
[5] Window Assisted Load Tracking
https://lwn.net/Articles/704903/
Patrick Bellasi (3):
sched/fair: add util_est on top of PELT
sched/fair: use util_est in LB and WU paths
sched/cpufreq_schedutil: use util_est for OPP selection
include/linux/sched.h | 16 +
kernel/sched/
ke-up.
Signed-off-by: Patrick Bellasi
Reviewed-by: Dietmar Eggemann
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: Paul Turner
Cc: Vincent Guittot
Cc: Morten Rasmussen
Cc: Dietmar Eggemann
Cc: linux-kernel@vger.kernel.org
Cc: linux...@vger.kernel.org
---
Chang
y:
- Tasks: to better support tasks placement decisions
- root cfs_rqs: to better support both tasks placement decisions as
well as frequencies selection
Signed-off-by: Patrick Bellasi
Reviewed-by: Dietmar Eggemann
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Rafael J. Wysocki
C
On 24-Jan 08:40, Joel Fernandes wrote:
> On Tue, Jan 23, 2018 at 10:08 AM, Patrick Bellasi
> wrote:
> > The util_avg signal computed by PELT is too variable for some use-cases.
> > For example, a big task waking up after a long sleep period will have its
> > utilization a
On 24-Jan 17:03, Pavan Kondeti wrote:
> Hi Patrick,
Hi Pavan,
> On Tue, Jan 23, 2018 at 06:08:46PM +, Patrick Bellasi wrote:
> > static unsigned long cpu_util_wake(int cpu, struct task_struct *p)
> > {
> > - unsigned long util, capacity;
>
On 29-Jan 17:36, Peter Zijlstra wrote:
> On Tue, Jan 23, 2018 at 06:08:45PM +0000, Patrick Bellasi wrote:
> > +static inline void util_est_dequeue(struct task_struct *p, int flags)
> > +{
> > + struct cfs_rq *cfs_rq = &task_rq(p)->cfs;
> > + unsigned long util
On 25-Jan 20:03, Pavan Kondeti wrote:
> On Wed, Jan 24, 2018 at 07:31:38PM +0000, Patrick Bellasi wrote:
> >
> > > > + /*
> > > > +* These are the main cases covered:
> > > > +* - if *p is the only task sleeping on thi
ross" and thus perhaps
it does not make sense to keep adding special DL tasks.
Another possible alternative to "tag an RT task" as being special, is
to use an API similar to the one proposed by the util_clamp RFC:
20170824180857.32103-1-patrick.bell...@arm.com
which would allow defining the maximum utilization which can
be required by a properly configured RT task.
On 06-Mar 19:58, Peter Zijlstra wrote:
> On Thu, Feb 22, 2018 at 05:01:50PM +0000, Patrick Bellasi wrote:
> > +static inline void util_est_enqueue(struct cfs_rq *cfs_rq,
> > + struct task_struct *p)
> > +{
> > + unsigned int enqueued;
On 06-Mar 20:02, Peter Zijlstra wrote:
> On Thu, Feb 22, 2018 at 05:01:50PM +0000, Patrick Bellasi wrote:
> > +struct util_est {
> > + unsigned int enqueued;
> > + unsigned int ewma;
> > +#define UTIL_EST_WEIGHT_SHIFT
On 06-Mar 19:56, Peter Zijlstra wrote:
> On Thu, Feb 22, 2018 at 05:01:50PM +0000, Patrick Bellasi wrote:
> > +/**
> > + * Estimation Utilization for FAIR tasks.
> > + *
> > + * Support data structure to track an Exponential Weighted Moving Average
> > + * (EWMA)
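For reference, the EWMA update this documents, in fixed point with the
weight expressed as a shift (assuming UTIL_EST_WEIGHT_SHIFT = 2, i.e.
alpha = 1/4):

	ewma(t) = alpha * util(t) + (1 - alpha) * ewma(t-1)

#define UTIL_EST_WEIGHT_SHIFT 2

static unsigned int util_est_ewma(unsigned int ewma, unsigned int last)
{
	/* (last + 3*ewma) / 4, avoiding any multiply or divide */
	return (last + (ewma << UTIL_EST_WEIGHT_SHIFT) - ewma)
			>> UTIL_EST_WEIGHT_SHIFT;
}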
On 07-Mar 13:26, Peter Zijlstra wrote:
> On Wed, Mar 07, 2018 at 11:47:11AM +0000, Patrick Bellasi wrote:
> > On 06-Mar 20:02, Peter Zijlstra wrote:
> > > On Thu, Feb 22, 2018 at 05:01:50PM +0000, Patrick Bellasi wrote:
> > > > +struct util_est
On 07-Mar 13:24, Peter Zijlstra wrote:
> On Wed, Mar 07, 2018 at 11:31:49AM +0000, Patrick Bellasi wrote:
> > > It appears to me this isn't a stable situation and completely relies on
> > > the !nr_running case to recalibrate. If we ensure that doesn't happen
On 07-Mar 10:39, Peter Zijlstra wrote:
> On Tue, Mar 06, 2018 at 07:58:51PM +0100, Peter Zijlstra wrote:
> > On Thu, Feb 22, 2018 at 05:01:50PM +, Patrick Bellasi wrote:
> > > +static inline void util_est_enqueue(struct cfs_rq *cfs_rq,
> > > +
but not the actual cfs_rq
utilization, which is updated by a following sched event.
This new proposal also allows to better aggregate schedutil related
flags, which are required only at enqueue_task_fair() time.
Indeed, IOWAIT and MIGRATION flags are now requested only when a task is
actually
On 11-Apr 09:57, Vincent Guittot wrote:
> On 6 April 2018 at 19:28, Patrick Bellasi wrote:
>
> > }
> > @@ -5454,8 +5441,11 @@ static void dequeue_task_fair(struct rq *rq, struct
> > task_struct *p, int flags)
> > update_cfs_group(se);
> &g
On 11-Apr 08:57, Vincent Guittot wrote:
> On 10 April 2018 at 13:04, Patrick Bellasi wrote:
> > On 09-Apr 10:51, Vincent Guittot wrote:
> >> On 6 April 2018 at 19:28, Patrick Bellasi wrote:
> >> Peter,
> >> what was your goal with adding the condition &quo
On 10-Apr 21:37, Peter Zijlstra wrote:
> On Tue, Apr 10, 2018 at 04:59:31PM +0100, Patrick Bellasi wrote:
> > The iowait boosting code has been recently updated to add a progressive
> > boosting behavior which allows to be less aggressive in boosting tasks
> > doing only s
On 11-Apr 10:07, Viresh Kumar wrote:
> On 10-04-18, 16:59, Patrick Bellasi wrote:
> > The iowait boosting code has been recently updated to add a progressive
> > boosting behavior which allows to be less aggressive in boosting tasks
> > doing only sporadic IO operations, t
On 11-Apr 13:56, Vincent Guittot wrote:
> On 11 April 2018 at 12:15, Patrick Bellasi wrote:
> > On 11-Apr 08:57, Vincent Guittot wrote:
> >> On 10 April 2018 at 13:04, Patrick Bellasi wrote:
> >> > On 09-Apr 10:51, Vincent Guittot wrote:
On 11-Apr 12:58, Peter Zijlstra wrote:
> On Wed, Apr 11, 2018 at 11:44:45AM +0100, Patrick Bellasi wrote:
> > > > - sugov_set_iowait_boost: is now in charge only to set/increase the IO
> > > > wait boost, every time a task wakes up from an IO wait.
> > >
On 13-Apr 12:22, Peter Zijlstra wrote:
> On Fri, Apr 13, 2018 at 10:26:48AM +0200, Peter Zijlstra wrote:
> > On Mon, Apr 09, 2018 at 05:56:09PM +0100, Patrick Bellasi wrote:
> > > +static inline void uclamp_cpu_get(struct task_struct *p, int cpu, int
> > > clamp_
On 13-Apr 11:46, Peter Zijlstra wrote:
> On Mon, Apr 09, 2018 at 05:56:09PM +0100, Patrick Bellasi wrote:
> > +static inline void uclamp_cpu_get(struct task_struct *p, int cpu, int
> > clamp_id)
> > +{
> > + struct uclamp_cpu *uc_cpu = &cpu_rq(cpu)->uclamp[
On 13-Apr 10:43, Peter Zijlstra wrote:
> On Mon, Apr 09, 2018 at 05:56:09PM +0100, Patrick Bellasi wrote:
> > +static inline void uclamp_task_update(struct rq *rq, struct task_struct *p)
> > +{
> > + int cpu = cpu_of(rq);
> > + int clamp_id;
> > +
> > +
On 13-Apr 10:40, Peter Zijlstra wrote:
> On Mon, Apr 09, 2018 at 05:56:09PM +0100, Patrick Bellasi wrote:
> > +static inline void init_uclamp(void)
>
> WTH is that inline?
You mean I can avoid the attribute?
... or that I should do it in another way?
> > +{
> > +
On 13-Apr 13:29, Peter Zijlstra wrote:
> On Fri, Apr 13, 2018 at 12:17:53PM +0100, Patrick Bellasi wrote:
> > On 13-Apr 10:40, Peter Zijlstra wrote:
> > > On Mon, Apr 09, 2018 at 05:56:09PM +0100, Patrick Bellasi wrote:
> > > > +static inline void init_uclamp(void)
On 13-Apr 13:36, Peter Zijlstra wrote:
> On Fri, Apr 13, 2018 at 12:15:10PM +0100, Patrick Bellasi wrote:
> > On 13-Apr 10:43, Peter Zijlstra wrote:
> > > On Mon, Apr 09, 2018 at 05:56:09PM +0100, Patrick Bellasi wrote:
> > > > +static inline void uclamp_tas
On 13-Apr 12:47, Patrick Bellasi wrote:
> On 13-Apr 13:36, Peter Zijlstra wrote:
> > On Fri, Apr 13, 2018 at 12:15:10PM +0100, Patrick Bellasi wrote:
> > > On 13-Apr 10:43, Peter Zijlstra wrote:
> > > > On Mon, Apr 09, 2018 at 05:56:09PM +0100, Patrick Bellasi wrote:
S mostly makes sense when you have a "minimum"
control on OPPs... otherwise all the energy estimations are really
fuzzy.
> Also, even when schedutil is in use, shouldn't we ask it for a util
> "computation" instead of replicating its _current_ heuristic?
Are you proposing to have the 1.25 factor only here and remove it from
schedutil?
> I fear the two might diverge in the future.
That could be avoided by factoring out from schedutil the
"compensation" factor into a proper function to be used by all the
interested players, isn't it?
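A sketch of what such a factored-out helper could look like (mainline
later added essentially this as map_util_freq(), mentioned here as an
assumption): the 1.25 margin becomes freq + freq/4, applied in exactly
one place.

static inline unsigned long map_util_freq(unsigned long util,
					  unsigned long freq,
					  unsigned long cap)
{
	/* 1.25 * freq * util / cap, with the margin as a shift */
	return (freq + (freq >> 2)) * util / cap;
}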
> +
> /*
> * select_task_rq_fair: Select target runqueue for the waking task in domains
> * that have the 'sd_flag' flag set. In practice, this is SD_BALANCE_WAKE,
> --
> 2.11.0
>
> @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
> if (want_affine)
> current->recent_used_cpu = cpu;
> }
> + } else if (energy_sd) {
> + new_cpu = find_energy_efficient_cpu(energy_sd, p, prev_cpu);
> } else {
> new_cpu = find_idlest_cpu(sd, p, cpu, prev_cpu, sd_flag);
> }