the so-called "0-lag time".
Signed-off-by: Luca Abeni
---
 include/linux/sched.h   |   1 +
 kernel/sched/core.c     |   1 +
 kernel/sched/deadline.c | 139 ++--
 kernel/sched/sched.h    |   1 +
 4 files changed, 126 insertions(+), 16 deletions(-)
Now that the inactive timer can be armed to fire at the 0-lag time,
it is possible to use inactive_task_timer() to update the total
-deadline utilization (dl_b->total_bw) at the correct time, fixing
dl_overflow() and __setparam_dl().
Signed-off-by: Luca Abeni
---
kernel/sched/core.c |
Signed-off-by: Luca Abeni
---
include/uapi/linux/sched.h | 1 +
kernel/sched/core.c     | 3 ++-
kernel/sched/deadline.c | 3 ++-
3 files changed, 5 insertions(+), 2 deletions(-)
diff --git a/include/uapi/linux/sched.h b/include/uapi/linux/sched.h
index 5f0fe01..e2a6c7b 100644
--- a
Original GRUB tends to reclaim 100% of the CPU time... And this allows a
CPU hog to starve non-deadline tasks.
To address this issue, allow the scheduler to reclaim only a specified
fraction of CPU time.
Signed-off-by: Luca Abeni
---
kernel/sched/core.c | 4
kernel/sched/deadline.c | 7
Hi Daniel,
On Tue, 25 Oct 2016 11:09:52 +0200
Daniel Bristot de Oliveira wrote:
[...]
> > +static void add_running_bw(struct sched_dl_entity *dl_se,
> > +			   struct dl_rq *dl_rq)
> > +{
> > +	u64 se_bw = dl_se->dl_bw;
> > +
> > +	dl_rq->running_bw += se_bw;
> > +}
>
> why not...
>
> static *inline
On Tue, 25 Oct 2016 10:33:21 +0100
Juri Lelli wrote:
> On 25/10/16 11:25, Peter Zijlstra wrote:
> > On Tue, Oct 25, 2016 at 12:32:53AM +0200, Tommaso Cucinotta wrote:
> > > Hi all,
> > >
> > > this is a tiny patch providing readings of the current (leftover)
> > > runtime and absolute deadline
On Tue, 25 Oct 2016 09:58:11 -0400
Steven Rostedt wrote:
> On Tue, 25 Oct 2016 11:29:16 +0200
> luca abeni wrote:
>
> > Hi Daniel,
> >
> > On Tue, 25 Oct 2016 11:09:52 +0200
> > Daniel Bristot de Oliveira wrote:
> > [...]
> > > > +stati
Hi Daniel,
(sorry for the previous html email; I replied from my phone and I did
not realise how the email client was configured)
On Tue, 3 Jan 2017 19:58:38 +0100
Daniel Bristot de Oliveira wrote:
[...]
> > The implemented CPU reclaiming algorithm is based on tracking the
> > utilization U_act
Hi Daniel,
On Tue, 3 Jan 2017 19:58:38 +0100
Daniel Bristot de Oliveira wrote:
[...]
> In a four core box, if I dispatch 11 tasks [1] with setup:
>
> period = 30 ms
> runtime = 10 ms
> flags = 0 (GRUB disabled)
>
> I see this:
> --- HTOP
>
Hi all,
trying to debug a reclaiming issue discovered by Daniel, I find myself
confused by the push logic... Maybe I am misunderstanding something
very obvious, so I ask here:
- push_dl_task() selects a task to be pushed, and then searches for a
runqueue to push the task to by calling find_lock
Hi Daniel,
2017-01-04 16:14 GMT+01:00, Daniel Bristot de Oliveira :
> On 01/04/2017 01:17 PM, luca abeni wrote:
>> Hi Daniel,
>>
>> On Tue, 3 Jan 2017 19:58:38 +0100
>> Daniel Bristot de Oliveira wrote:
>>
>> [...]
>>> In a four core box, if I di
From: Luca Abeni
Original GRUB tends to reclaim 100% of the CPU time... And this
allows a CPU hog to starve non-deadline tasks.
To address this issue, allow the scheduler to reclaim only a
specified fraction of CPU time.
Signed-off-by: Luca Abeni
---
kernel/sched/core.c | 4
kernel
From: Luca Abeni
According to the GRUB (Greedy Reclamation of Unused Bandwidth)
reclaiming algorithm, the runtime is not decreased as "dq = -dt",
but as "dq = -Uact dt" (where Uact is the per-runqueue active
utilization).
Hence, this commit modifies the runtime accounting ru
From: Luca Abeni
Signed-off-by: Luca Abeni
---
include/uapi/linux/sched.h | 1 +
kernel/sched/core.c     | 3 ++-
kernel/sched/deadline.c | 3 ++-
3 files changed, 5 insertions(+), 2 deletions(-)
diff --git a/include/uapi/linux/sched.h b/include/uapi/linux/sched.h
index 5f0fe01..e2a6c7b
From: Luca Abeni
This patch implements a more theoretically sound algorithm for
tracking active utilization: instead of decreasing it when a
task blocks, use a timer (the "inactive timer", named after the
"Inactive" task state of the GRUB algorithm) to decrease the
active u
From: Luca Abeni
Active utilization is defined as the total utilization of active
(TASK_RUNNING) tasks queued on a runqueue. Hence, it is increased
when a task wakes up and is decreased when a task blocks.
When a task is migrated from CPUi to CPUj, immediately subtract the
task's utiliz
From: Luca Abeni
Now that the inactive timer can be armed to fire at the 0-lag time,
it is possible to use inactive_task_timer() to update the total
-deadline utilization (dl_b->total_bw) at the correct time, fixing
dl_overflow() and __setparam_dl().
Signed-off-by: Luca Abeni
---
kernel/sc
From: Luca Abeni
Hi all,
here is a new version of the patchset implementing CPU reclaiming
(using the GRUB algorithm[1]) for SCHED_DEADLINE.
Basically, this feature allows SCHED_DEADLINE tasks to consume more
than their reserved runtime, up to a maximum fraction of the CPU time
(so that other
Hi Peter,
On Fri, 18 Nov 2016 14:55:54 +0100
Peter Zijlstra wrote:
> On Mon, Oct 24, 2016 at 04:06:33PM +0200, Luca Abeni wrote:
>
> > @@ -498,6 +514,8 @@ static void update_dl_entity(struct
> > sched_dl_entity *dl_se, struct dl_rq *dl_rq = dl_rq_of_se(dl_se);
>
On Fri, 18 Nov 2016 15:23:59 +0100
Peter Zijlstra wrote:
> On Tue, Oct 25, 2016 at 09:58:11AM -0400, Steven Rostedt wrote:
>
> > I agree with Daniel, especially since I don't usually trust the
> > compiler. And the added variable is more of a distraction as it
> > doesn't seem to have any real p
On Fri, 18 Nov 2016 16:36:15 +0100
Peter Zijlstra wrote:
> On Mon, Oct 24, 2016 at 04:06:34PM +0200, Luca Abeni wrote:
> > @@ -514,7 +556,20 @@ static void update_dl_entity(struct
> > sched_dl_entity *dl_se, struct dl_rq *dl_rq = dl_rq_of_se(dl_se);
> > struct rq *r
On Fri, 18 Nov 2016 16:47:48 +0100
Peter Zijlstra wrote:
> On Mon, Oct 24, 2016 at 04:06:34PM +0200, Luca Abeni wrote:
> > @@ -1074,6 +1161,14 @@ select_task_rq_dl(struct task_struct *p, int
> > cpu, int sd_flag, int flags) }
> > rcu_read_unlock();
> &g
From: Luca Abeni
Now that the inactive timer can be armed to fire at the 0-lag time,
it is possible to use inactive_task_timer() to update the total
-deadline utilization (dl_b->total_bw) at the correct time, fixing
dl_overflow() and __setparam_dl().
Signed-off-by: Luca Abeni
Tested-by: Dan
From: Luca Abeni
This commit introduces a per-runqueue "extra utilization" that can be
reclaimed by deadline tasks. In this way, the maximum fraction of CPU
time that can reclaimed by deadline tasks is fixed (and configurable)
and does not depend on the total deadline utilization.
Sig
From: Luca Abeni
This patch implements a more theoretically sound algorithm for
tracking active utilization: instead of decreasing it when a
task blocks, use a timer (the "inactive timer", named after the
"Inactive" task state of the GRUB algorithm) to decrease the
active u
From: Luca Abeni
Hi all,
here is yet another version of the patchset implementing CPU reclaiming
(using the GRUB algorithm[1]) for SCHED_DEADLINE.
Basically, this feature allows SCHED_DEADLINE tasks to consume more
than their reserved runtime, up to a maximum fraction of the CPU time
(so that
From: Luca Abeni
Original GRUB tends to reclaim 100% of the CPU time... And this
allows a CPU hog to starve non-deadline tasks.
To address this issue, allow the scheduler to reclaim only a
specified fraction of CPU time.
Signed-off-by: Luca Abeni
Tested-by: Daniel Bristot de Oliveira
From: Luca Abeni
The total rq utilization is defined as the sum of the utilizations of
tasks that are "assigned" to a runqueue, independently of their state
(TASK_RUNNING or blocked).
Signed-off-by: Luca Abeni
Signed-off-by: Claudio Scordino
Tested-by: Daniel Bristot de Oliveira
-
From: Luca Abeni
Instead of decreasing the runtime as "dq = -Uact dt" (eventually
divided by the maximum utilization available for deadline tasks),
decrease it as "dq = -(1 - Uinact) dt", where Uinact is the "inactive
utilization".
In this way, the maximum fr
From: Luca Abeni
Active utilization is defined as the total utilization of active
(TASK_RUNNING) tasks queued on a runqueue. Hence, it is increased
when a task wakes up and is decreased when a task blocks.
When a task is migrated from CPUi to CPUj, immediately subtract the
task's utiliz
From: Luca Abeni
According to the GRUB (Greedy Reclamation of Unused Bandwidth)
reclaiming algorithm, the runtime is not decreased as "dq = -dt",
but as "dq = -Uact dt" (where Uact is the per-runqueue active
utilization).
Hence, this commit modifies the runtime accounting ru
From: Luca Abeni
Signed-off-by: Luca Abeni
Tested-by: Daniel Bristot de Oliveira
---
include/uapi/linux/sched.h | 1 +
kernel/sched/core.c     | 3 ++-
kernel/sched/deadline.c | 3 ++-
3 files changed, 5 insertions(+), 2 deletions(-)
diff --git a/include/uapi/linux/sched.h b/include
Hi Peter,
On Fri, 24 Mar 2017 14:20:41 +0100
Peter Zijlstra wrote:
> On Fri, Mar 24, 2017 at 04:52:55AM +0100, luca abeni wrote:
>
> > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > index d67eee8..952cac8 100644
> > --- a/include/linux/sched.h
> &g
Hi Peter,
On Fri, 24 Mar 2017 15:00:15 +0100
Peter Zijlstra wrote:
> On Fri, Mar 24, 2017 at 04:52:58AM +0100, luca abeni wrote:
>
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index 20c62e7..efa88eb 100644
> > --- a/kernel/sched/core.c
>
Hi Mathieu,
On Sun, 26 Mar 2017 11:04:18 -0600
Mathieu Poirier wrote:
[...]
> > @@ -946,8 +968,12 @@ enqueue_dl_entity(struct sched_dl_entity
> > *dl_se,
> > * parameters of the task might need updating. Otherwise,
> > * we want a replenishment of its runtime.
> > */
>
Hi Mathieu,
On Sun, 26 Mar 2017 11:32:59 -0600
Mathieu Poirier wrote:
[...]
> > + task_rq_unlock(rq, p, &rf);
> > + put_task_struct(p);
> > +
> > + return HRTIMER_NORESTART;
> > +}
> > +
> > +void init_inactive_task_timer(struct sched_dl_entity *dl_se)
>
> To be consistent wi
On Fri, 24 Mar 2017 22:47:15 +0100
luca abeni wrote:
[...]
> > > +	} else {
> > > +		/*
> > > +		 * Since "dl_non_contending" is not set, the
> > > +		 * task's utilization has already been removed from
Hi Juri,
On Mon, 27 Mar 2017 08:17:45 +0100
Juri Lelli wrote:
[...]
> > > In general I feel it would be nice to have a state diagram
> > > included somewhere near these two functions. It would be nice to
> > > not have to dig out the PDF every time.
> >
> > Ok... Since I am not good at ascii a
On Fri, 24 Mar 2017 22:31:46 -0400
Steven Rostedt wrote:
> On Fri, 24 Mar 2017 22:47:15 +0100
> luca abeni wrote:
>
> > Ok... Since I am not good at ascii art, would it be ok to add a
> > textual description? If yes, I'll add a comment like:
> > "
> >
Hi Peter,
On Mon, 27 Mar 2017 16:03:41 +0200
Peter Zijlstra wrote:
> On Fri, Mar 24, 2017 at 04:53:02AM +0100, luca abeni wrote:
>
> > +static inline
> > +void __dl_update(struct dl_bw *dl_b, s64 bw)
> > +{
> > + struct root_domain *rd = container_of(dl_b,
On Mon, 27 Mar 2017 16:26:33 +0200
Peter Zijlstra wrote:
> On Fri, Mar 24, 2017 at 04:53:01AM +0100, luca abeni wrote:
> > From: Luca Abeni
> >
> > Instead of decreasing the runtime as "dq = -Uact dt" (eventually
> > divided by the maximum uti
On Mon, 27 Mar 2017 17:53:35 +0200
Peter Zijlstra wrote:
> On Mon, Mar 27, 2017 at 04:56:51PM +0200, Luca Abeni wrote:
>
> > > > +u64 grub_reclaim(u64 delta, struct rq *rq, u64 u)
> > > > {
> > > > + u64 u_act;
> > > > +
> > &
patch is a reasonable solution (maybe we can improve it later, but
I think a fix for this issue should go in soon).
So,
Reviewed-by: Luca Abeni
Thanks,
Luca
>
> Reproducer:
>
> --- %< ---
>
Looks good to me
Thanks,
Luca
>
> Cc: Ingo Molnar
> Cc: Peter Zijlstra
> Cc: Steven Rostedt
> Cc: Luca Abeni
> Signed-off-by: Juri Lelli
> ---
> kernel/sched/deadline.c | 9 -
> 1 file changed, 4
ce in this situation...
Luca
>
> > It would be nice to have the reason in the change log.
> >
>
> Thanks a lot for pointing out what might be more than inaccuracy in the
> changelog.
>
> Best,
>
> - Juri
>
> &g
On Fri, 17 Jun 2016 22:15:18 +0200
luca abeni wrote:
> On Fri, 17 Jun 2016 17:28:37 +0100
> Juri Lelli wrote:
> [...]
> > True, but we were practically already using the same parameter, under a
> > different name though, after
> >
> > 2f9f3fdc928 "sch
Hi Steven,
On Tue, 14 Feb 2017 14:28:48 -0500
"Steven Rostedt (VMware)" wrote:
> I was testing Daniel's changes with his test case, and tweaked it a
> little. Instead of having the runtime equal to the deadline, I
> increased the deadline ten fold.
>
> Daniel's test case had:
>
> attr.sc
Hi Steven,
On Tue, 14 Feb 2017 19:14:17 -0500
Steven Rostedt wrote:
[...]
> > I am not sure about the correct fix (wouldn't
> > "runtime / (deadline - t) > dl_runtime / dl_deadline" allow the
> > task to use a fraction of CPU time equal to dl_runtime /
> > dl_deadline?)
> >
> > The current code
Hi Juri,
On Wed, 15 Feb 2017 10:29:19 +
Juri Lelli wrote:
[...]
> > Ok, thanks; I think I can now see why this can result in a task
> > consuming more than the reserved utilisation. I still need some
> > time to convince me that "runtime / (deadline - t) > dl_runtime /
> > dl_deadline" is the
On Wed, 15 Feb 2017 12:59:25 +
Juri Lelli wrote:
> On 15/02/17 13:31, Luca Abeni wrote:
> > Hi Juri,
> >
> > On Wed, 15 Feb 2017 10:29:19 +
> > Juri Lelli wrote:
> > [...]
> > > > Ok, thanks; I think I can now see why this can result in a
Hi Daniel,
On Fri, 10 Feb 2017 20:48:10 +0100
Daniel Bristot de Oliveira wrote:
[...]
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index 70ef2b1..3c94d85 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -505,10 +505,15 @@ static void update_dl_ent
>  * and then wake up before the next period to receive
>  * a new replenishment.
>  */
> nanosleep(&ts, NULL);
> }
>
> exit(0);
> }
> --- >% ---
>
> On my box, this reproducer uses almost 50%
Hi Juri,
(I reply from my new email address)
On Wed, 11 Jan 2017 12:19:51 +
Juri Lelli wrote:
[...]
> > > For example, with my taskset, with a hypothetical perfect balance
> > > of the whole runqueue, one possible scenario is:
> > >
> > >     CPU   0 1 2 3
> > > # TASKS   3 3
On Wed, 11 Jan 2017 15:06:47 +
Juri Lelli wrote:
> On 11/01/17 13:39, Luca Abeni wrote:
> > Hi Juri,
> > (I reply from my new email address)
> >
> > On Wed, 11 Jan 2017 12:19:51 +
> > Juri Lelli wrote:
> > [...]
> > > > > For
On Wed, 11 Jan 2017 17:05:42 +
Juri Lelli wrote:
> Hi,
>
> On 30/12/16 12:33, Luca Abeni wrote:
> > From: Luca Abeni
> >
> > This patch implements a more theoretically sound algorithm for
> > tracking active utilization: instead of decreasing it when a
Hello,
On Mon, 11 Jul 2016 13:03:56 +0800
Xunlei Pang wrote:
> On 2016/07/08 at 19:28, Juri Lelli wrote:
[...]
> > @@ -363,6 +364,15 @@ static inline void setup_new_dl_entity(struct
> > sched_dl_entity *dl_se, return;
> >
> > /*
> > +* Use the scheduling parameters of the top pi-waiter
On Mon, 11 Jul 2016 16:16:20 +0800
Xunlei Pang wrote:
> On 2016/07/11 at 16:01, luca abeni wrote:
> > Hello,
> >
> > On Mon, 11 Jul 2016 13:03:56 +0800
> > Xunlei Pang wrote:
> >
> >> On 2016/07/08 at 19:28, Juri Lelli wrote:
> > [...]
On Thu, 25 Feb 2016 09:46:55 +
Juri Lelli wrote:
> Hi,
>
> On 24/02/16 14:53, luca abeni wrote:
> > On Tue, 23 Feb 2016 16:42:49 +0100
> > Peter Zijlstra wrote:
> >
> > > On Mon, Feb 22, 2016 at 11:57:04AM +0100, Luca Abeni wrote:
> >
Hi Steven,
On Thu, 3 Mar 2016 09:23:44 -0500
Steven Rostedt wrote:
> On Thu, 3 Mar 2016 09:28:01 +
> Juri Lelli wrote:
>
> > That's the one that I use, and I'm not seeing any problems with it.
> > I'll send you the binary in private.
>
> That's the one I use too.
Juri provided me with a w
Hi Peter,
On Wed, 2 Mar 2016 20:02:58 +0100
Peter Zijlstra wrote:
[...]
> > +++ b/kernel/sched/core.c
> > @@ -4079,6 +4079,8 @@ change:
> > new_effective_prio =
> > rt_mutex_get_effective_prio(p, newprio); if (new_effective_prio ==
> > oldprio) { __setscheduler_params(p, attr);
> > +
. So, dl_new can be removed by introducing this
check in switched_to_dl(); this allows simplifying the
SCHED_DEADLINE code.
Signed-off-by: Luca Abeni
---
include/linux/sched.h | 6 +-
kernel/sched/core.c | 1 -
kernel/sched/deadline.c | 35 +--
3 files
On Sat, 5 Mar 2016 09:50:59 +0100
Ingo Molnar wrote:
>
> * Luca Abeni wrote:
>
> > diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> > index 57b939c..e0c4456 100644
> > --- a/kernel/sched/deadline.c
> > +++ b/kernel/sched/deadline.c
> >
. So, dl_new can be removed by introducing this
check in switched_to_dl(); this allows simplifying the
SCHED_DEADLINE code.
Signed-off-by: Luca Abeni
---
include/linux/sched.h | 6 +-
kernel/sched/core.c | 1 -
kernel/sched/deadline.c | 35 +--
3 files
reads, and the
tests did not show any regressions with respect to git master.
Luca Abeni (1):
sched/deadline: remove dl_new from sched_dl_entity
include/linux/sched.h | 6 +-
kernel/sched/core.c | 1 -
kernel/sched/deadline.c | 35 +--
3 files change
On Fri, 19 Feb 2016 09:20:08 -0500
Steven Rostedt wrote:
> On Fri, 19 Feb 2016 14:43:47 +0100
> luca abeni wrote:
>
>
> > So, the first attached patch (to be applied over Juri's patch) just
> > moves two __dl_sub_ac() and __dl_add_ac() invocations from
> >
ld. So, I wrote patch 0003, which seems
to be working correctly, but I am probably missing some important
tests. Let me know what you think about it: I think it might be
a nice simplification of the code, but as usual I might be missing
something :)
Luca Abeni (4):
Move some calls to __dl_{sub,add}
This moves some deadline-specific calls from core.c (dl_overflow()) to
switched_from_dl() and switched_to_dl().
Some __dl_{sub,add}_ac() calls are left in dl_overflow(), to handle
the case in which the deadline parameters of a task are changed without
changing the scheduling class.
---
kernel/sched/c
To move these calls from dl_overflow() to deadline.c, we must change
the meaning of the third parameter of prio_changed_dl().
Instead of passing the "old priority" (which is always equal to the current
one, for SCHED_DEADLINE) we pass the old utilization. An alternative approach
is to change the pr
switched_to_dl() can be used instead
---
kernel/sched/core.c | 1 -
kernel/sched/deadline.c | 28 +---
2 files changed, 5 insertions(+), 24 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 5dc12db..4246b1b 100644
--- a/kernel/sched/core.c
+++
On Fri, 19 Feb 2016 15:53:06 +0100
luca abeni wrote:
[...]
> > Please send the patches in a patch series. It is very hard to review
> > patches that are attachments. And our scripts are made to apply
> > patches from mailing lists. Having attachments just makes the job
>
Hi Juri,
On Thu, 11 Feb 2016 12:12:57 +
Juri Lelli wrote:
[...]
> I think we still have (at least) two problems:
>
> - select_task_rq_dl, if we select a different target
> - select_task_rq might make use of select_fallback_rq, if
> cpus_allowed changed after the task went to sleep
>
> Sec
On Thu, 11 Feb 2016 12:27:54 +
Juri Lelli wrote:
> On 11/02/16 13:22, Luca Abeni wrote:
> > Hi Juri,
> >
> > On Thu, 11 Feb 2016 12:12:57 +
> > Juri Lelli wrote:
> > [...]
> > > I think we still have (at least) two problems:
> > >
&
On Thu, 11 Feb 2016 12:49:59 +
Juri Lelli wrote:
[...]
> > > > > Luca, did you already face this problem (if I got it right)
> > > > > and thought of a way to fix it? I'll go back and stare a bit
> > > > > more at those paths.
> > > > In my patch I took care of the first case (modifying
> > >
On Thu, 11 Feb 2016 09:25:46 -0500
Steven Rostedt wrote:
> On Thu, 11 Feb 2016 14:05:45 +0100
> luca abeni wrote:
>
>
> > Well, I never used the rq utilization to re-build the root_domain
> > utilization (and I never played with root domains too much... :)...
>
On Wed, 2 Mar 2016 20:02:58 +0100
Peter Zijlstra wrote:
> On Sat, Feb 27, 2016 at 12:37:57PM +0100, luca abeni wrote:
> > Subject: sched/core: fix __sched_setscheduler() to properly invoke
> > prio_changed_dl()
> >
> > Currently, prio_changed_dl() is not calle
On Wed, 27 Jan 2016 15:44:22 +0100
Peter Zijlstra wrote:
> On Tue, Jan 26, 2016 at 01:52:19PM +0100, luca abeni wrote:
>
> > > The trouble is with interfaces. Once we expose them we're stuck
> > > with them. And from that POV I think an explicit SCHED_OTHER
> >
by deadline scheduler class, we can directly
calculate it thanks to the sum of utilization of deadline tasks on the
CPU. We can remove deadline tasks from rt_avg metric and directly use
the average bandwidth of deadline scheduler in scale_rt_capacity.
Based in part on a similar patch from Luca
Hi Vincent,
On 12/10/2015 05:11 PM, Vincent Guittot wrote:
[...]
If yes, I think your approach is safe (and easier to implement - modulo a
small issue when a task terminates or switches to other scheduling
policies; I think there already are some "XXX" comments in the current
code). However, it
Hi Peter,
On Fri, 11 Dec 2015 15:10:28 +0100
Peter Zijlstra wrote:
[...]
> Thomas just reported a 'fun' problem with our rt 'load-balancer'.
I suspect the root of the problem is that rt push/pull do not implement
a load balancer, but just make sure that the M high priority tasks
(where M is the nu
Hi Steven,
On Fri, 11 Dec 2015 14:53:59 -0500
Steven Rostedt wrote:
[...]
> > > The push-pull thing only acts when there's idle or new tasks, and
> > > in the above scenario, the CPU with only the single RR task will
> > > happily continue running that task, while the other CPU will have
> > > to
Hi Peter,
On Wed, 27 Jan 2016 15:39:46 +0100
Peter Zijlstra wrote:
> On Wed, Jan 27, 2016 at 02:36:51PM +0100, Luca Abeni wrote:
> > Ok, so I implemented this idea, and I am currently testing it...
> > The first experiments seem to show that there are no problems, but I
>
Hi Peter,
On Thu, 28 Jan 2016 13:21:00 +0100
Peter Zijlstra wrote:
> On Thu, Jan 28, 2016 at 12:14:41PM +0100, luca abeni wrote:
> > I am looking at the PI stuff right now... And I am not sure if
> > SCHED_DEADLINE does the right thing for PI :)
>
> Strictly speaking it
On Thu, 28 Jan 2016 14:05:44 +0100
Vincent Guittot wrote:
> Hi Luca,
>
>
> On 27 January 2016 at 15:45, Luca Abeni wrote:
>
> > Hi Peter,
> >
> > On Wed, 27 Jan 2016 15:39:46 +0100
> > Peter Zijlstra wrote:
> >
> > > On Wed, Jan 27, 2016
On Thu, 28 Jan 2016 15:00:53 +0100
Peter Zijlstra wrote:
> On Thu, Jan 28, 2016 at 02:41:29PM +0100, luca abeni wrote:
>
> > > Some day we should fix this :-)
>
> > I am trying to have a better look at the code, and I think that
> > implementing bandwidth i
Hi Peter,
On Fri, 29 Jan 2016 16:06:05 +0100
Peter Zijlstra wrote:
> On Fri, Jan 15, 2016 at 10:15:11AM +0100, Luca Abeni wrote:
>
> > There is also a newer paper, that will be published at ACM SAC 2016
> > (so, it is not available yet), but is based on this technica
Hi Peter,
On Tue, 19 Jan 2016 14:47:39 +0100
Peter Zijlstra wrote:
> On Tue, Jan 19, 2016 at 01:20:13PM +0100, Luca Abeni wrote:
> > Hi Peter,
> >
> > On 01/14/2016 08:43 PM, Peter Zijlstra wrote:
> > >On Thu, Jan 14, 2016 at 04:24:49PM +0100, Luca Abeni wrote:
Hi Peter,
On Wed, 27 Jan 2016 15:39:46 +0100
Peter Zijlstra wrote:
> On Wed, Jan 27, 2016 at 02:36:51PM +0100, Luca Abeni wrote:
> > Ok, so I implemented this idea, and I am currently testing it...
> > The first experiments seem to show that there are no problems, but I
>
Hi Peter,
On Fri, 15 Jan 2016 09:50:04 +0100
Peter Zijlstra wrote:
[...]
> > >>NOTE: the fraction of CPU time that cannot be reclaimed is
> > >>currently hardcoded as (1 << 20) / 10 -> 90%, but it must be made
> > >>configurable!
> > >
> > >So the alternative is an explicit SCHED_OTHER server whi
Hi Juri,
On Wed, 3 Feb 2016 11:30:19 +
Juri Lelli wrote:
[...]
> > > > Which kind of interface is better for this? Would adding
> > > > something like /proc/sys/kernel/sched_other_period_us
> > > > /proc/sys/kernel/sched_other_runtime_us
> > > > be ok?
> > > >
> > > > If this is ok, I'll add
Hi,
sorry for the late reply... Anyway, I am currently testing this
patchset (and trying to use it for the "SCHED_DEADLINE-based cgroup
scheduling" patchset).
And during my tests I had a doubt:
On Fri, 7 Aug 2020 11:50:49 +0200
Juri Lelli wrote:
> From: Peter Zijlstra
>
> Low priority task
On Tue, 6 Oct 2020 11:35:23 +0200
Juri Lelli wrote:
[...]
> > > + if (dl_se->server_has_tasks(dl_se)) {
> > > + enqueue_dl_entity(dl_se, dl_se,
> > > ENQUEUE_REPLENISH);
> > > + resched_curr(rq);
> > > + __pus
From: Luca Abeni
When switching to -deadline, if the scheduling deadline of a task is
in the past then switched_to_dl() calls setup_new_entity() to properly
initialize the scheduling deadline and runtime.
The problem is that the task is enqueued _before_ having its parameters
initialized by
On Fri, 21 Apr 2017 10:39:26 +0100
Juri Lelli wrote:
> Hi Luca,
>
> On 20/04/17 21:30, Luca Abeni wrote:
> > From: Luca Abeni
> >
> > When switching to -deadline, if the scheduling deadline of a task is
> > in the past then switched_to_dl() calls setup_new_en
On Fri, 21 Apr 2017 11:42:40 +0200
luca abeni wrote:
[...]
> > > diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> > > index a2ce590..ec53d24 100644
> > > --- a/kernel/sched/deadline.c
> > > +++ b/kernel/sched/deadline.c
> > >
On Fri, 21 Apr 2017 10:47:29 +0100
Juri Lelli wrote:
[...]
> > > > *dl_se, update_dl_entity(dl_se, pi_se);
> > > > else if (flags & ENQUEUE_REPLENISH)
> > > > replenish_dl_entity(dl_se, pi_se);
> > > > + else if ((flags & ENQUEUE_RESTORE) &&
> > >
> > > Not sure
On Fri, 21 Apr 2017 11:26:59 +0100
Juri Lelli wrote:
> On 21/04/17 11:59, Luca Abeni wrote:
> > On Fri, 21 Apr 2017 10:47:29 +0100
> > Juri Lelli wrote:
> > [...]
> > > > > > *dl_se, update_dl_entity(dl_se, pi_se);
> > &
Hi all,
On Tue, 25 Apr 2017 11:13:03 +0100
Juri Lelli wrote:
[...]
> > > Currently, KVM does CPU resource reservation through the cgroup
> > > mechanism, which cannot achieve fully accurate separation because
> > > of limitations of the Linux scheduler. Take the public cloud as an
> > > example, some custom
Hi,
On Wed, 12 Apr 2017 13:27:32 +0800
Xunlei Pang wrote:
[...]
> The more I read the code, the more I am confused why
> dl_entity_overflow() is needed, if the task is before its deadline,
> just let it run.
Sorry for jumping in this thread; I did not read all of the previous
emails, but I think
On Wed, 12 Apr 2017 20:30:04 +0800
Xunlei Pang wrote:
[...]
> > If the relative deadline is different from the period, then the
> > check is an approximation (and this is the big issue here). I am
> > still not sure about what is the best thing to do in this case.
> >
> >> E.g. For (runtime 2ms,
On Wed, 12 Apr 2017 16:28:02 +0100
Juri Lelli wrote:
> Hi Paul,
>
> On 12/04/17 08:15, Paul E. McKenney wrote:
> > Hello!
> >
> > On the unlikely off-chance that this is new news...
> >
>
> It is actually new news for me (it might be still unlikely for Peter,
> Luca and Tommaso, that I Cc-e