Hi Yun,
thanks for continuing to improve this.
I'm replying here, but I'm still considering all the other reviewers' comments.
Best,
Patrick
On Tue, Nov 03, 2020 at 03:37:56 +0100, Yun Hsiang
wrote...
> If the user wants to stop controlling uclamp and let the task inherit
> the value from the group, we need
Hi Dietmar, Yun,
I hope I'm not too late before v4 posting ;)
I think the overall approach is sound; I just added a couple of
cleanups and a possible fix (user_defined reset).
Best,
Patrick
On Tue, Oct 27, 2020 at 16:58:13 +0100, Yun Hsiang
wrote...
> Hi Dietmar,
> On Mon, Oct 26, 2020
On Wed, Oct 28, 2020 at 12:39:43 +0100, Qais Yousef
wrote...
> On 10/28/20 11:11, Patrick Bellasi wrote:
>> >>
>> >> /*
>> >>* RT by default have a 100% boost value that could be modified
>> >>* at runtim
On Tue, Oct 13, 2020 at 22:25:48 +0200, Dietmar Eggemann
wrote...
Hi Dietmar,
> Hi Yun,
>
> On 12/10/2020 18:31, Yun Hsiang wrote:
>> If the user wants to stop controlling uclamp and let the task inherit
>> the value from the group, we need a method to reset.
>>
>> Add SCHED_FLAG_UTIL_CLAMP_
On Tue, Oct 13, 2020 at 15:32:46 +0200, Qais Yousef
wrote...
> On 10/13/20 13:46, Patrick Bellasi wrote:
>> > So IMO you just need a single SCHED_FLAG_UTIL_CLAMP_RESET that if set in
>> > the
>> > attr, you just execute that loop in __setscheduler_uclamp() +
On Tue, Oct 13, 2020 at 12:29:51 +0200, Qais Yousef
wrote...
> On 10/13/20 10:21, Patrick Bellasi wrote:
>>
[...]
>> > +#define SCHED_FLAG_UTIL_CLAMP_RESET (SCHED_FLAG_UTIL_CLAMP_RESET_MIN | \
>> > + SCHED_FLAG_UTIL_CLAMP_RESE
On Tue, Oct 13, 2020 at 07:31:14 +0200, Juri Lelli
wrote...
> Commit 765cc3a4b224e ("sched/core: Optimize sched_feat() for
> !CONFIG_SCHED_DEBUG builds") made sched features static for
> !CONFIG_SCHED_DEBUG configurations, but overlooked the
> CONFIG_SCHED_DEBUG enabled and !CONFIG_JUMP_LABEL
Hi Yun,
thanks for sharing this new implementation.
On Mon, Oct 12, 2020 at 18:31:40 +0200, Yun Hsiang
wrote...
> If the user wants to stop controlling uclamp and let the task inherit
> the value from the group, we need a method to reset.
>
> Add SCHED_FLAG_UTIL_CLAMP_RESET flag to allow the
Hi Yun, Dietmar,
On Mon, Oct 05, 2020 at 14:38:18 +0200, Dietmar Eggemann
wrote...
> + Patrick Bellasi
> + Qais Yousef
>
> On 02.10.20 07:38, Yun Hsiang wrote:
>> On Wed, Sep 30, 2020 at 03:12:51PM +0200, Dietmar Eggemann wrote:
>
> [...]
>
>>> O
Hi Vincent,
On Mon, Jul 13, 2020 at 14:59:51 +0200, Vincent Guittot
wrote...
> On Fri, 10 Jul 2020 at 21:59, Patrick Bellasi
> wrote:
>> On Fri, Jul 10, 2020 at 15:21:48 +0200, Vincent Guittot
>> wrote...
>>
>> [...]
>>
>> >>
On Fri, Jul 10, 2020 at 15:21:48 +0200, Vincent Guittot
wrote...
> Hi Patrick,
Hi Vincent,
[...]
>> > C) Existing control paths
>>
>> Assuming:
>>
>> C: CFS task currently running on CPUx
>> W: CFS task waking up on the same CPUx
>>
>> And considering the overall simplified workflow:
>>
>
On Tue, Jun 30, 2020 at 17:40:34 +0200, Qais Yousef
wrote...
> Hi Patrick
>
> On 06/30/20 16:55, Patrick Bellasi wrote:
>>
>> Hi Qais,
>> sorry for commenting on v5 with a v6 already posted, but...
>> ... I cannot keep up with your re-spinning rate ;)
>
11:46:24 +0200, Qais Yousef
wrote...
> On 06/30/20 10:11, Patrick Bellasi wrote:
>> On Mon, Jun 29, 2020 at 18:26:33 +0200, Qais Yousef
>> wrote...
[...]
>> > +
>> > +static inline bool uclamp_is_enabled(void)
>> > +{
>> >
Hi Qais,
here are some more 2c from me...
On Mon, Jun 29, 2020 at 18:26:33 +0200, Qais Yousef
wrote...
[...]
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 235b2cae00a0..8d80d6091d86 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -794,6 +794,26 @@ unsig
On Thu, Jun 25, 2020 at 17:43:52 +0200, Qais Yousef
wrote...
[...]
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 235b2cae00a0..e2f1fffa013c 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -794,6 +794,25 @@ unsigned int sysctl_sched_uclamp_util_max =
> S
On Thu, Jun 25, 2020 at 17:43:51 +0200, Qais Yousef
wrote...
> struct uclamp_rq was zeroed out entirely in assumption that in the first
> call to uclamp_rq_inc() they'd be initialized correctly in accordance to
> default settings.
Perhaps I was not clear in my previous comment:
https://lo
On Fri, Jun 19, 2020 at 19:20:11 +0200, Qais Yousef
wrote...
[...]
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 4265861e13e9..9ab22f699613 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -793,6 +793,25 @@ unsigned int sysctl_sched_uclamp_util_max =
> S
e ("sched/uclamp: Add CPU's clamp buckets refcounting")
> Signed-off-by: Qais Yousef
> Cc: Juri Lelli
> Cc: Vincent Guittot
> Cc: Dietmar Eggemann
> Cc: Steven Rostedt
> Cc: Ben Segall
> Cc: Mel Gorman
> CC: Patrick Bellasi
> Cc: Chris R
On Tue, Jun 23, 2020 at 09:29:03 +0200, Patrick Bellasi
wrote...
> .:: Scheduler Wakeup Path Requirements Collection Template
> ==
>
> A) Name
Runtime tunable vruntime wakeup bonus.
> B) Target behavior
All SCHED_OTHE
Since last year's OSPM Summit we started conceiving the idea that the task
wakeup path could be better tuned for certain classes of workloads
and usage scenarios. Various people showed interest in a possible
tuning interface for the scheduler wakeup path.
.:: The Problem
===
The discu
On Fri, Jun 05, 2020 at 13:32:04 +0200, Qais Yousef
wrote...
> On 06/05/20 09:55, Patrick Bellasi wrote:
>> On Wed, Jun 03, 2020 at 18:52:00 +0200, Qais Yousef
>> wrote...
[...]
>> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> > index
Hi Qais,
On Wed, Jun 03, 2020 at 18:52:00 +0200, Qais Yousef
wrote...
> On 06/03/20 16:59, Vincent Guittot wrote:
>> When I want to stress the fast path i usually use "perf bench sched pipe -T "
>> The tip/sched/core on my arm octo core gives the following results for
>> 20 iterations of perf
Hi Dietmar,
thanks for sharing these numbers.
On Tue, Jun 02, 2020 at 18:46:00 +0200, Dietmar Eggemann
wrote...
[...]
> I ran these tests on 'Ubuntu 18.04 Desktop' on Intel E5-2690 v2
> (2 sockets * 10 cores * 2 threads) with powersave governor as:
>
> $ numactl -N 0 ./run-mmtests.sh XXX
Gr
[+Giovanni]
On Thu, May 28, 2020 at 20:29:14 +0200, Peter Zijlstra
wrote...
> On Thu, May 28, 2020 at 05:51:31PM +0100, Qais Yousef wrote:
>> I had a humble try to catch the overhead but wasn't successful. The
>> observation
>> wasn't missed by us too then.
>
> Right, I remember us doing be
Hi Qais,
I see we are converging toward the final shape. :)
Function-wise the code looks ok to me now.
Lemme just point out a few more remarks and possible nit-picks.
I guess at the end it's up to you to decide if you wanna follow up with
a v6 and to the maintainers to decide how picky they wanna be.
Hi Qais,
On Tue, May 05, 2020 at 16:56:37 +0200, Qais Yousef
wrote...
>> > +sched_util_clamp_min_rt_default:
>> > +
>> > +
>> > +By default Linux is tuned for performance. Which means that RT tasks
>> > always run
>> > +at the highest frequency and most capabl
Hi Qais,
On Fri, May 01, 2020 at 13:49:27 +0200, Qais Yousef
wrote...
[...]
> diff --git a/Documentation/admin-guide/sysctl/kernel.rst
> b/Documentation/admin-guide/sysctl/kernel.rst
> index 0d427fd10941..521c18ce3d92 100644
> --- a/Documentation/admin-guide/sysctl/kernel.rst
> +++ b/Docume
Hi Qais,
a few notes follow, but in general I like the way the code is now organised.
On Fri, May 01, 2020 at 13:49:26 +0200, Qais Yousef
wrote...
[...]
> diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h
> index d4f6215ee03f..e62cef019094 100644
> --- a/include/linux/sche
Hi Peter,
On 14-Oct 16:52, Peter Zijlstra wrote:
>
> The energy aware schedutil patches reminded me this was still pending.
>
> On Fri, Aug 02, 2019 at 10:47:25AM +0100, Patrick Bellasi wrote:
> > Hi Peter, Vincent,
> > is there anything different I can do on this?
>
On Wed, Sep 18, 2019 at 16:22:32 +0100, Vincent Guittot wrote...
> On Wed, 18 Sep 2019 at 16:19, Patrick Bellasi wrote:
[...]
>> $> Wakeup path tunings
>> ==
>>
>> Some additional possible use-cases were already discussed in [3]:
>>
cy niceness of a task.
PeterZ thinks this is dangerous but that we can "(carefully) fumble a
bit there."
- bias the decisions we take in check_preempt_tick() still depending
on a relative comparison of the current and wakeup task latency
niceness values.
> References:
> ===
> [1]. https://lkml.org/lkml/2019/8/30/829
> [2]. https://lkml.org/lkml/2019/7/25/296
[3]. Message-ID: <20190905114709.gm2...@hirez.programming.kicks-ass.net>
https://lore.kernel.org/lkml/20190905114709.gm2...@hirez.programming.kicks-ass.net/
Best,
Patrick
--
#include
Patrick Bellasi
On Wed, Sep 18, 2019 at 07:05:53 +0100, Ingo Molnar wrote...
> * Randy Dunlap wrote:
>
>> On 9/17/19 6:38 AM, Patrick Bellasi wrote:
>> >
>> > On Tue, Sep 17, 2019 at 08:52:42 +0100, Ingo Molnar wrote...
>> >
>> >> * Randy Dunlap wrote
not reproduce this build failure: I took Linus's latest which has all
> the -next scheduler commits included (ad062195731b), and an x86-64 "make
> defconfig" and a disabling of CONFIG_CGROUPS still results in a kernel
> that builds fine.
Same here Ingo, I cannot reproduce on
On Thu, Sep 05, 2019 at 12:46:37 +0100, Valentin Schneider wrote...
> On 05/09/2019 12:18, Patrick Bellasi wrote:
>>> There's a few things wrong there; I really feel that if we call it nice,
>>> it should be like nice. Otherwise we should call it latency-bias and no
On Thu, Sep 05, 2019 at 12:40:30 +0100, Peter Zijlstra wrote...
> On Thu, Sep 05, 2019 at 12:18:55PM +0100, Patrick Bellasi wrote:
>
>> Right, we have this dualism to deal with and current mainline behaviour
>> is somehow in the middle.
>>
>> BTW, the FB requ
s a "latency
niceness" crossing a given threshold.
For example, by setting something like:
/proc/sys/kernel/sched_cfs_latency_idle = 1000
we state that the task is going to be scheduled according to the
SCHED_IDLE policy.
( ( (tomatoes target here) ) )
Not sure also if we wanna comm
On Thu, Sep 05, 2019 at 12:13:47 +0100, Qais Yousef wrote...
> On 09/05/19 12:46, Peter Zijlstra wrote:
>> On Thu, Sep 05, 2019 at 10:45:27AM +0100, Patrick Bellasi wrote:
>>
>> > > From just reading the above, I would expect it to have the range
>> &
On Thu, Sep 05, 2019 at 11:46:16 +0100, Peter Zijlstra wrote...
> On Thu, Sep 05, 2019 at 10:45:27AM +0100, Patrick Bellasi wrote:
>
>> > From just reading the above, I would expect it to have the range
>> > [-20,19] just like normal nice. Apparently this is not so.
&g
ncept which
progressively enables different "levels of biasing" both at wake-up time
and load-balance time.
Why is it not possible to have the SIS_CORE/NO_SIS_CORE switch implemented
just as different threshold values for the latency-nice value of a task?
Best,
Patrick
--
#include
Patrick Bellasi
if ((unsigned)i < nr_cpumask_bits)
This looks like it should be squashed with the previous one, or whatever
code you'll add to define when this "biasing" is to be used or not.
Best,
Patrick
--
#include
Patrick Bellasi
ably use the init/Kconfig's
"Scheduler features" section, recently added by:
commit 69842cba9ace ("sched/uclamp: Add CPU's clamp buckets refcounting")
> /*
> * Issue a WARN when we do multiple update_rq_clock() calls
Best,
Patrick
--
#include
Patrick Bellasi
as PaulT proposed at OSPM
- map this concept in kernel-space to different kinds of bias, both at
wakeup time and load-balance time, and use both for RT and CFS tasks.
That's my understanding at least ;)
I guess we will have interesting discussions at the upcoming LPC to
figure out a solution fitting all needs.
> Thanks,
> Parth
Best,
Patrick
--
#include
Patrick Bellasi
NCY_NICE_MAX100
Values 1 and 5 look kind of arbitrary.
For the range specifically, I already commented in this other message:
Message-ID: <87r24v2i14@arm.com>
https://lore.kernel.org/lkml/87r24v2i14@arm.com/
> +
> +/*
> * Single value that decides SCHED_DEADLINE internal math precision.
> * 10 -> just above 1us
> * 9 -> just above 0.5us
> @@ -362,6 +369,7 @@ struct cfs_bandwidth {
> /* Task group related information */
> struct task_group {
> struct cgroup_subsys_state css;
> + u64 latency_nice;
>
> #ifdef CONFIG_FAIR_GROUP_SCHED
> /* schedulable entities of this group on each CPU */
Best,
Patrick
--
#include
Patrick Bellasi
commit a509a7cd7974 ("sched/uclamp: Extend sched_setattr() to support
utilization clamping")
[3] 5 patches in today's tip/sched/core up to:
commit babbe170e053 ("sched/uclamp: Update CPU's refcount on TG's clamp
changes")
--
#include
Patrick Bellasi
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 0b60ba2dd342016e4e717dbaa4ca9af3a43f4434
Gitweb:
https://git.kernel.org/tip/0b60ba2dd342016e4e717dbaa4ca9af3a43f4434
Author: Patrick Bellasi
AuthorDate: Thu, 22 Aug 2019 14:28:07 +01:00
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 2480c093130f64ac3a410504fa8b3db1fc4b87ce
Gitweb:
https://git.kernel.org/tip/2480c093130f64ac3a410504fa8b3db1fc4b87ce
Author: Patrick Bellasi
AuthorDate: Thu, 22 Aug 2019 14:28:06 +01:00
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 7274a5c1bbec45f06f1fff4b8c8b5855b6cc189d
Gitweb:
https://git.kernel.org/tip/7274a5c1bbec45f06f1fff4b8c8b5855b6cc189d
Author: Patrick Bellasi
AuthorDate: Thu, 22 Aug 2019 14:28:08 +01:00
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 3eac870a324728e5d1711840dad70bcd37f3
Gitweb:
https://git.kernel.org/tip/3eac870a324728e5d1711840dad70bcd37f3
Author: Patrick Bellasi
AuthorDate: Thu, 22 Aug 2019 14:28:09 +01:00
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 0413d7f33e60751570fd6c179546bde2f7d82dcb
Gitweb:
https://git.kernel.org/tip/0413d7f33e60751570fd6c179546bde2f7d82dcb
Author: Patrick Bellasi
AuthorDate: Thu, 22 Aug 2019 14:28:11 +01:00
The following commit has been merged into the sched/core branch of tip:
Commit-ID: babbe170e053c6ec2343751749995b7b9fd5fd2c
Gitweb:
https://git.kernel.org/tip/babbe170e053c6ec2343751749995b7b9fd5fd2c
Author: Patrick Bellasi
AuthorDate: Thu, 22 Aug 2019 14:28:10 +01:00
On Fri, Aug 30, 2019 at 09:48:34 +, Peter Zijlstra wrote...
> On Thu, Aug 22, 2019 at 02:28:10PM +0100, Patrick Bellasi wrote:
>
>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index 04fc161e4dbe..fc2dc86a2abe 100644
>> --- a/kernel/sched/core.c
>
On Fri, Aug 30, 2019 at 09:45:05 +, Peter Zijlstra wrote...
> On Thu, Aug 22, 2019 at 02:28:06PM +0100, Patrick Bellasi wrote:
>> +#define _POW10(exp) ((unsigned int)1e##exp)
>> +#define POW10(exp) _POW10(exp)
>
> What is this magic? You're forcing a float literal
the proper enum uclamp_id type.
Fix it with a bulk rename now that we have all the bits merged.
Signed-off-by: Patrick Bellasi
Reviewed-by: Michal Koutny
Acked-by: Tejun Heo
Cc: Ingo Molnar
Cc: Peter Zijlstra
---
kernel/sched/core.c | 38 +++---
kernel/sched
he RUNNABLE tasks.
Otherwise, keep things simple and just do a lazy update the next time each
task is enqueued.
Do that since we assume stricter resource control is required when
cgroups are in use. This also allows keeping "effective" clamp values
updated in case we need to expose
the concept of "effective" clamp, which is already
used by a TG to track parent enforced restrictions.
Apply task group clamp restrictions only to tasks belonging to a child
group. While, for tasks in the root group or in an autogroup, system
defaults are still enforced.
Signed-off-by: Pat
*cgroup_subsys_state (css) to walk the list of tasks in each
affected TG and update their RUNNABLE tasks.
Update each task by using the same mechanism used for cpu affinity masks
updates, i.e. by taking the rq lock.
Signed-off-by: Patrick Bellasi
Reviewed-by: Michal Koutny
Acked-by: Tejun Heo
Cc: Ingo Molnar
that at effective clamps propagation to ensure all user-space writes
never fail while still always tracking the most restrictive values.
Signed-off-by: Patrick Bellasi
Reviewed-by: Michal Koutny
Acked-by: Tejun Heo
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: T
not caring now about "effective" values computation
and propagation along the hierarchy.
Update sysctl_sched_uclamp_handler() to use the newly introduced
uclamp_mutex so that we serialize system default updates with cgroup
related updates.
Signed-off-by: Patrick Bellasi
Reviewed-by: Michal K
performance hints
Linux Plumbers Conference 2018
https://linuxplumbersconf.org/event/2/contributions/128/
Patrick Bellasi (6):
sched/core: uclamp: Extend CPU's cgroup controller
sched/core: uclamp: Propagate parent clamps
sched/core: uclamp: Propagate system defaults to root gro
On Tue, Aug 06, 2019 at 17:11:34 +0100, Michal Koutný wrote...
> On Fri, Aug 02, 2019 at 10:08:48AM +0100, Patrick Bellasi
> wrote:
>> +static ssize_t cpu_uclamp_write(struct kernfs_open_file *of, char *buf,
>> +size_t
On Tue, Aug 06, 2019 at 17:11:53 +0100, Michal Koutný wrote...
> On Fri, Aug 02, 2019 at 10:08:49AM +0100, Patrick Bellasi
> wrote:
>> @@ -7095,6 +7149,7 @@ static ssize_t cpu_uclamp_write(struct
>> kernfs_open_file *of, char *buf,
>> if (req.ret)
>&g
On Tue, Aug 06, 2019 at 17:12:06 +0100, Michal Koutný wrote...
> On Fri, Aug 02, 2019 at 10:08:47AM +0100, Patrick Bellasi
> wrote:
>> Patrick Bellasi (6):
>> sched/core: uclamp: Extend CPU's cgroup controller
>> sched/core: uclamp: Propagate parent c
Hi Peter, Vincent,
is there anything different I can do on this?
Cheers,
Patrick
On 28-Jun 15:00, Patrick Bellasi wrote:
> On 28-Jun 14:38, Peter Zijlstra wrote:
> > On Fri, Jun 28, 2019 at 11:08:14AM +0100, Patrick Bellasi wrote:
> > > On 26-Jun 13:40, Vincent Guittot wrote:
he RUNNABLE tasks.
Otherwise, keep things simple and just do a lazy update the next time each
task is enqueued.
Do that since we assume stricter resource control is required when
cgroups are in use. This also allows keeping "effective" clamp values
updated in case we need to expose the
xt?h=v5.1
[2] Expressing per-task/per-cgroup performance hints
Linux Plumbers Conference 2018
https://linuxplumbersconf.org/event/2/contributions/128/
Patrick Bellasi (6):
sched/core: uclamp: Extend CPU's cgroup controller
sched/core: uclamp: Propagate parent clamps
sched/core
the proper enum uclamp_id type.
Fix it with a bulk rename now that we have all the bits merged.
Signed-off-by: Patrick Bellasi
Acked-by: Tejun Heo
Cc: Ingo Molnar
Cc: Peter Zijlstra
---
kernel/sched/core.c | 38 +++---
kernel/sched/sched.h | 2 +-
2 files
the concept of "effective" clamp, which is already
used by a TG to track parent enforced restrictions.
Apply task group clamp restrictions only to tasks belonging to a child
group. While, for tasks in the root group or in an autogroup, system
defaults are still enforced.
Signed-off-by:
istency by enforcing uclamp.min < uclamp.max.
Keep it simple by not caring now about "effective" values computation
and propagation along the hierarchy.
Signed-off-by: Patrick Bellasi
Acked-by: Tejun Heo
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Tejun Heo
---
Changes in v13:
that at effective clamps propagation to ensure all user-space writes
never fail while still always tracking the most restrictive values.
Update sysctl_sched_uclamp_handler() to use the newly introduced
uclamp_mutex so that we serialize system default updates with cgroup
related updates.
Signed-off-by:
*cgroup_subsys_state (css) to walk the list of tasks in each
affected TG and update their RUNNABLE tasks.
Update each task by using the same mechanism used for cpu affinity masks
updates, i.e. by taking the rq lock.
Signed-off-by: Patrick Bellasi
Acked-by: Tejun Heo
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc
.
Cheers Patrick
--
#include
Patrick Bellasi
On 25-Jul 13:41, Michal Koutný wrote:
> On Thu, Jul 18, 2019 at 07:17:45PM +0100, Patrick Bellasi
> wrote:
> > The clamp values are not tunable at the level of the root task group.
> > That's for two main reasons:
> >
> > - the root group represents
On 25-Jul 13:41, Michal Koutný wrote:
> On Thu, Jul 18, 2019 at 07:17:43PM +0100, Patrick Bellasi
> wrote:
> > +static ssize_t cpu_uclamp_min_write(struct kernfs_open_file *of,
> > + char *buf, size_t nbytes,
> > +
istency by enforcing uclamp.min < uclamp.max.
Keep it simple by not caring now about "effective" values computation
and propagation along the hierarchy.
Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Tejun Heo
---
Changes in v12:
Message-ID: <20190715133801.yo
kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/scheduler/sched-energy.txt?h=v5.1
[2] Expressing per-task/per-cgroup performance hints
Linux Plumbers Conference 2018
https://linuxplumbersconf.org/event/2/contributions/128/
Patrick Bellasi (6):
sched/core: ucl
the concept of "effective" clamp, which is already
used by a TG to track parent enforced restrictions.
Apply task group clamp restrictions only to tasks belonging to a child
group. While, for tasks in the root group or in an autogroup, system
defaults are still enforced.
Signed-off-by:
"requested" from them.
Exploit these two concepts and bind them together in such a way that,
whenever system defaults are tuned, the new values are propagated to
(possibly) restrict or relax the "effective" value of nested cgroups.
Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
*cgroup_subsys_state (css) to walk the list of tasks in each
affected TG and update their RUNNABLE tasks.
Update each task by using the same mechanism used for cpu affinity masks
updates, i.e. by taking the rq lock.
Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Tejun Heo
---
kernel
the proper enum uclamp_id type.
Fix it with a bulk rename now that we have all the bits merged.
Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc: Peter Zijlstra
---
Changes in v12:
Message-ID: <20190716140319.hdmgcuevnpwdqobl@e110439-lin>
- added in this series
---
kernel/sched/
that at effective clamps propagation to ensure all user-space writes
never fail while still always tracking the most restrictive values.
Update sysctl_sched_uclamp_handler() to use the newly introduced
uclamp_mutex so that we serialize system default updates with cgroup
related updates.
Signed-off-
On 18-Jul 07:52, Tejun Heo wrote:
> Hello, Patrick.
>
> On Mon, Jul 08, 2019 at 09:43:53AM +0100, Patrick Bellasi wrote:
> > +static inline void cpu_uclamp_print(struct seq_file *sf,
> > + enum uclamp_id clamp_id)
> > +{
> > + s
On 16-Jul 17:36, Michal Koutný wrote:
> On Tue, Jul 16, 2019 at 03:34:17PM +0100, Patrick Bellasi
> wrote:
> > > cpu_util_update_eff internally calls css_for_each_descendant_pre() so
> > > this should be protected with rcu_read_lock().
> >
> > Right, g
On 16-Jul 17:29, Michal Koutný wrote:
> On Tue, Jul 16, 2019 at 03:07:06PM +0100, Patrick Bellasi
> wrote:
> > That note comes from the previous review cycle and it's based on a
> > request from Tejun to align uclamp behaviors with the way the
> > delegation model
On 15-Jul 18:42, Michal Koutný wrote:
> On Mon, Jul 08, 2019 at 09:43:56AM +0100, Patrick Bellasi
> wrote:
> > This mimics what already happens for a task's CPU affinity mask when the
> > task is also in a cpuset, i.e. cgroup attributes are always used to
> > restric
On 15-Jul 18:42, Michal Koutný wrote:
> On Mon, Jul 08, 2019 at 09:43:55AM +0100, Patrick Bellasi
> wrote:
> > +static void uclamp_update_root_tg(void)
> > +{
> > + struct task_group *tg = &root_task_group;
> > +
> > +
Hi Michal,
On 15-Jul 18:42, Michal Koutný wrote:
> On Mon, Jul 08, 2019 at 09:43:54AM +0100, Patrick Bellasi
> wrote:
> > Since it's possible for a cpu.uclamp.min value to be bigger than the
> > cpu.uclamp.max value, ensure local consistency by restricting each
> &g
the cgroup part of your series.)
Good point, I'll add that for the upcoming v12 posting.
Cheers,
Patrick
--
#include
Patrick Bellasi
On 08-Jul 12:08, Quentin Perret wrote:
> Hi Patrick,
Hi Quentin!
> On Monday 08 Jul 2019 at 09:43:53 (+0100), Patrick Bellasi wrote:
> > +static inline int uclamp_scale_from_percent(char *buf, u64 *value)
> > +{
> > + *value = SCHED_CAPACITY_SCALE;
> > +
> &
On 08-Jul 14:46, Douglas Raillard wrote:
> Hi Patrick,
>
> On 7/8/19 12:09 PM, Patrick Bellasi wrote:
> > On 03-Jul 17:36, Douglas Raillard wrote:
> > > On 7/2/19 4:51 PM, Peter Zijlstra wrote:
> > > > On Thu, Jun 27, 2019 at 06:15:58PM +0100, Douglas RAIL
is a system wide solution.
The ramp_boost thingy you propose, instead, is a more fine-grained
mechanism which could be extended in the future to have a per-task
side. IOW, it could contribute to better user-space hints, for
example to ramp_boost certain tasks and not others.
Best,
Patrick
--
#include
Patrick Bellasi
ng (proportional) ramp_boost would be so
tiny as to have no noticeable effect on OPP selection.
Am I correct on point b) above?
Could you maybe come up with some experimental numbers related to that
case specifically?
Best,
Patrick
--
#include
Patrick Bellasi
group clamp restrictions only to tasks belonging to a child
group. While, for tasks in the root group or in an autogroup, only system
defaults are enforced.
Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Tejun Heo
---
kernel/sched/core.c | 28 ++
"requested" from them.
Exploit these two concepts and bind them together in such a way that,
whenever system defaults are tuned, the new values are propagated to
(possibly) restrict or relax the "effective" value of nested cgroups.
Signed-off-by: Patrick Bellasi
Cc: Ingo Molna
t effective clamps propagation to ensure all
user-space writes never fail while still always tracking the most
restrictive values.
Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Tejun Heo
---
Changes in v11:
Message-ID: <20190624174607.gq657...@devbig004.ftw2
*cgroup_subsys_state (css) to walk the list of tasks in each
affected TG and update their RUNNABLE tasks.
Update each task by using the same mechanism used for cpu affinity masks
updates, i.e. by taking the rq lock.
Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Tejun Heo
---
Changes
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/scheduler/sched-energy.txt?h=v5.1
[2] Expressing per-task/per-cgroup performance hints
Linux Plumbers Conference 2018
https://linuxplumbersconf.org/event/2/contributions/128/
Patrick Bellasi (5):
sched/core:
istency by enforcing uclamp.min < uclamp.max.
Keep it simple by not caring now about "effective" values computation
and propagation along the hierarchy.
Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Tejun Heo
---
Changes in v11:
Message-ID: <20190624175215.
On 01-Jul 17:01, Subhra Mazumdar wrote:
>
> On 7/1/19 6:55 AM, Patrick Bellasi wrote:
> > On 01-Jul 11:02, Peter Zijlstra wrote:
> > > On Wed, Jun 26, 2019 at 06:29:12PM -0700, subhra mazumdar wrote:
> > > > Hi,
> > > >
> > > > Resend
or either the
fast (select_idle_siblings) or the slow (energy aware) path.
> Hmmm?
Just one more requirement I think it's worth considering since the
beginning: CGroups support.
That would be a very welcome interface, just because it is so much more
convenient (and safe) to set these biases on a group of tasks depending
on their role in the system.
Do you have any idea on how we can expose such a "latency-nice"
property via CGroups? It's very similar to cpu.shares but it does not
represent a resource which can be partitioned.
Best,
Patrick
--
#include
Patrick Bellasi
On 30-Jun 10:43, Vincent Guittot wrote:
> On Fri, 28 Jun 2019 at 16:10, Patrick Bellasi wrote:
> > On 28-Jun 15:51, Vincent Guittot wrote:
> > > On Fri, 28 Jun 2019 at 14:38, Peter Zijlstra wrote:
> > > > On Fri, Jun 28, 2019 at 11:08:14AM +0100, Patrick Bellasi wr