Hi Tommaso,
On Fri, 9 Sep 2016 10:44:10 +0200
Tommaso Cucinotta wrote:
[...]
> +4.4 Behavior of sched_yield()
> +-----------------------------
> +
> + When a SCHED_DEADLINE task calls sched_yield(), it gives up its
> + remaining runtime and is suspended till the next reservation period,
Maybe I
runqueues that do not already contain
SCHED_DEADLINE tasks.
This patch fixes the issue by checking if dl.earliest_dl.curr == 0.
Signed-off-by: Luca Abeni
---
kernel/sched/deadline.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/deadline.c b/kernel/s
Hi Juri,
On Thu, 15 Oct 2015 17:40:19 +0100
Juri Lelli wrote:
> On 15/10/15 12:09, Luca Abeni wrote:
> > Commit 9d5142624256 ("sched/deadline: Reduce rq lock contention by
> > eliminating locking of non-feasible target") broke
[...]
> > cpu_
On 10/15/2015 06:40 PM, Juri Lelli wrote:
On 15/10/15 12:09, Luca Abeni wrote:
Commit 9d5142624256 ("sched/deadline: Reduce rq lock contention by
eliminating locking of non-feasible target") broke select_task_rq_dl()
[...]
- dl_time_before(p-&g
runqueues that do not already contain
SCHED_DEADLINE tasks.
This patch fixes the issue by checking if dl.dl_nr_running == 0.
Signed-off-by: Luca Abeni
---
kernel/sched/deadline.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/deadline.c b/kernel/sched/d
Hi all,
after fixing task migrations for SCHED_DEADLINE, I started to see some
lockdep-related warnings that look like this:
[ 794.428081] WARNING: CPU: 1 PID: 0 at
/home/luca/Src/GRUB/linux-reclaiming/kernel/locking/lockdep.c:3407
lock_release+0x3f4/0x440()
[ 794.428439] releasing a pinned l
Hi,
On 10/22/2015 07:35 AM, Wanpeng Li wrote:
[...]
Now, if I understand correctly the issue is that dl_task_timer() does:
rq = task_rq_lock(p, &flags);
[...]
if (has_pushable_dl_tasks(rq))
push_dl_task(rq);
with task_rq_lock() that pins rq->lock and push_dl_task() that invokes
f
-> push_dl/rt_task()
> > >
> > >I think you also should consider the lockdep pin_lock in this path.
>
> Durr, clearly I overlooked both these when I did that. Sorry about
> that.
>
> So how about:
>
> ---
> Subject: sched: Add missing lockdep_unpin
patch improves dl_runtime_exceeded() to achieve that.
>
> Fixes: 269ad8015a6b ("sched/deadline: Avoid double-accounting in case
> of missed deadlines") Cc: Luca Abeni
> Signed-off-by: Xunlei Pang
> ---
> kernel/sched/deadline.c | 9 +++++++--
> 1 file changed, 7 i
Hi,
On Fri, 12 May 2017 14:53:33 +0800
Xunlei Pang wrote:
> On 05/12/2017 at 01:57 PM, luca abeni wrote:
> > Hi again,
> >
> > (sorry for the previous email; I replied from gmail and I did not
> > realize I was sending it in html).
> >
> >
> > On Fr
On Fri, 12 May 2017 15:19:55 +0800
Xunlei Pang wrote:
[...]
> >> "As seen, enforcing that the total utilization is smaller than M
> >> does not guarantee that global EDF schedules the tasks without
> >> missing any deadline (in other words, global EDF is not an optimal
> >> scheduling algorithm).
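The passage quoted above (global EDF is not an optimal scheduling algorithm) is usually illustrated with Dhall's effect. Below is a minimal numeric sketch with names of my own choosing, not code from this thread:

```c
#include <assert.h>

/* Classic Dhall's effect task set on M CPUs: M "light" tasks with
 * runtime 2*eps and period 1, plus one "heavy" task with runtime 1
 * and period 1 + eps. As eps -> 0 the total utilization tends to 1
 * (far below the capacity M), yet global EDF runs the M light tasks
 * first and the heavy task misses its deadline. */
static double dhall_total_utilization(int m, double eps)
{
    double light = m * (2.0 * eps);     /* M tasks, U_i = 2*eps / 1 */
    double heavy = 1.0 / (1.0 + eps);   /* one task, U = 1 / (1+eps) */
    return light + heavy;
}
```

With m = 4 and eps = 0.01 the total utilization is about 1.07, so a "sum of utilizations <= M" test would happily accept the set, even though global EDF cannot schedule it on 4 CPUs.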
Hi all,
On Mon, 19 Nov 2018 09:23:03 +0100 (CET)
Thomas Gleixner wrote:
> Adding scheduler folks
>
> On Sun, 18 Nov 2018, syzbot wrote:
>
> > Hello,
> >
> > syzbot found the following crash on:
> >
> > HEAD commit: 1ce80e0fe98e Merge tag 'fsnotify_for_v4.20-rc3' of
> > git://g.. git tree:
running_bw if the task is
> not queued and in non_contending state while switched to a different
> class.
>
> Reported-by: Mark Rutland
> Signed-off-by: Juri Lelli
> ---
> kernel/sched/deadline.c | 11 ++++++++++-
> 1 file changed, 10 insertions(+), 1 deletion(-)
Hi all,
(and, happy new year to everyone!)
this looks similar to a bug we have seen some time ago (a task
switching from SCHED_OTHER to SCHED_DEADLINE while inheriting a
deadline from a SCHED_DEADLINE task triggers the warning)...
Juri, I think you found a fix for such a bug; has it been committe
Hi,
On Tue, 9 Oct 2018 11:24:29 +0200
Juri Lelli wrote:
> From: Peter Zijlstra
>
> Track the blocked-on relation for mutexes, this allows following this
> relation at schedule time. Add blocked_task to track the inverse
> relation.
>
> ,-> task
> | | block
Hi,
On Tue, 9 Oct 2018 11:24:31 +0200
Juri Lelli wrote:
[...]
> +migrate_task:
[...]
> + put_prev_task(rq, next);
> + if (rq->curr != rq->idle) {
> + rq->proxy = rq->idle;
> + set_tsk_need_resched(rq->idle);
> + /*
> + * XXX [juril] don't
On Wed, 10 Oct 2018 12:57:10 +0200
Peter Zijlstra wrote:
> On Wed, Oct 10, 2018 at 12:34:17PM +0200, luca abeni wrote:
> > So, I would propose to make the proxy() function of patch more
> > generic, and not strictly bound to mutexes. Maybe a task structure
> > can contai
Hi all,
On Tue, 9 Oct 2018 11:24:26 +0200
Juri Lelli wrote:
> Hi all,
>
> Proxy Execution (also goes under several other names) isn't a new
> concept, it has been mentioned already in the past to this community
> (both in email discussions and at conferences [1, 2]), but no actual
> implementa
Hi Peter,
On Tue, 30 Oct 2018 11:45:54 +0100
Peter Zijlstra wrote:
[...]
> > 2. This is related to perf_event_open syscall reproducer does
> > before becoming DEADLINE and entering the busy loop. Enabling of
> > perf swevents generates lot of hrtimers load that happens in the
> > reproducer
On Thu, 11 Oct 2018 14:53:25 +0200
Peter Zijlstra wrote:
[...]
> > > > + if (rq->curr != rq->idle) {
> > > > + rq->proxy = rq->idle;
> > > > + set_tsk_need_resched(rq->idle);
> > > > + /*
> > > > +* XXX [juril] don't we still need to
Hi Juri,
On Thu, 18 Oct 2018 10:28:38 +0200
Juri Lelli wrote:
[...]
> struct sched_attr {
> .size = 0,
> .policy = 6,
> .flags = 0,
> .nice = 0,
> .priority = 0,
> .runtime = 0x9917,
> .deadline = 0x,
> .period = 0,
> }
>
> So, we seem to be
Hi Peter,
On Thu, 18 Oct 2018 11:48:50 +0200
Peter Zijlstra wrote:
[...]
> > So, I tend to think that we might want to play safe and put some
> > higher minimum value for dl_runtime (it's currently at 1ULL <<
> > DL_SCALE). Guess the problem is to pick a reasonable value, though.
> > Maybe link i
Hi Juri,
On Thu, 18 Oct 2018 12:10:08 +0200
Juri Lelli wrote:
[...]
> > Yes, a HZ related limit sounds like something we'd want. But if
> > we're going to do a minimum sysctl, we should also consider adding
> > a maximum, if you set a massive period/deadline, you can, even with
> > a relatively l
On Thu, 18 Oct 2018 12:47:13 +0200
Juri Lelli wrote:
> Hi,
>
> On 18/10/18 12:23, luca abeni wrote:
> > Hi Juri,
> >
> > On Thu, 18 Oct 2018 10:28:38 +0200
> > Juri Lelli wrote:
> > [...]
> > > struct sched_attr {
> > >
Hi Juri,
On Thu, 18 Oct 2018 14:21:42 +0200
Juri Lelli wrote:
[...]
> > > > I missed the original emails, but maybe the issue is that the
> > > > task blocks before the tick, and when it wakes up again
> > > > something goes wrong with the deadline and runtime assignment?
> > > > (maybe because t
On Fri, 19 Oct 2018 13:39:42 +0200
Peter Zijlstra wrote:
> On Thu, Oct 18, 2018 at 01:08:11PM +0200, luca abeni wrote:
> > Ok, I see the issue now: the problem is that the "while
> > (dl_se->runtime <= 0)" loop is executed at replenishment time, but
> >
init_dl_task_timer(struct sched_dl_entity *dl_se);
unsigned long to_ratio(u64 period, u64 runtime);
From 7a0e6747c40cf9186f3645eb94408090ab11936a Mon Sep 17 00:00:00 2001
From: Luca Abeni
Date: Sat, 27 Dec 2014 18:20:57 +0100
Subject: [PATCH 03/11] Do not initialize the deadline timer if it is alr
Hi all,
when running some experiments on current git master, I noticed a
regression with respect to version 3.18 of the kernel: when invoking
sched_setattr() to change the SCHED_DEADLINE parameters of a task that
is already scheduled by SCHED_DEADLINE, it is possible to crash the
system.
The bug can b
Hi Kirill,
On 01/14/2015 01:43 PM, Kirill Tkhai wrote:
[...]
Say we have a userspace task that evaluates and changes runtime
parameters for other tasks (basically what Luca is doing IIRC), and the
changes keep resetting the sleep time, the whole guarantee system comes
down, rendering the deadlin
Hi Peter,
On 01/15/2015 01:23 PM, Peter Zijlstra wrote:
On Thu, Jan 15, 2015 at 12:23:43PM +0100, Luca Abeni wrote:
There are some parts of the patch that I do not understand (for example:
if I understand well, if the task is not throttled you set dl_new to 1...
And if it is throttled you
Hi Kirill,
On Tue, 06 Jan 2015 02:07:21 +0300
Kirill Tkhai wrote:
> On Пн, 2015-01-05 at 16:21 +0100, Luca Abeni wrote:
[...]
> > For reference, I attach the patch I am using locally (based on what
> > I suggested in my previous mail) and seems to work fine here.
> >
>
On 01/07/2015 01:29 PM, Kirill Tkhai wrote:
[...]
Based on your comments, I suspect my patch can be further
simplified by moving the call to init_dl_task_timer() in
__sched_fork().
It seems this way has problems. The first one is that task may become
throttled again, and we will start dl_timer
On 01/07/2015 02:04 PM, Kirill Tkhai wrote:
[...]
and further enqueue_task() places it on the dl_rq.
I was under the impression that no further enqueue_task() will happen (since
the task is throttled, it is not on runqueue, so __sched_setscheduler() will
not dequeue/enqueue it).
But I am probabl
Hi Kirill,
On 01/07/2015 02:04 PM, Kirill Tkhai wrote:
[...]
If in the future we allow non-privileged users to increase deadline,
we will reflect that in __setparam_dl() too.
Ok.
Does my patch help you? It helps me, but anyway I need your confirmation.
Sorry about the delay... Anyway, I fina
Hi Peter,
On 01/28/2015 03:08 PM, Peter Zijlstra wrote:
On Thu, Jan 15, 2015 at 02:35:46PM +0100, Luca Abeni wrote:
From what I understand we should either modify the task's run/sleep stats
when we change its parameters or we should schedule a delayed release of
the bandwidth delta (when
Hi,
On Fri, 3 Apr 2015 16:18:33 +0800
Zhiqiang Zhang wrote:
> From the context, the definition of the density of a task
> C_i/min{D_i,T_i},where T_i is not referred before, should be
> substituted by C_i/min{D_i,P_i}.
You are right, "T_i" should be substituted with "P_i"...
But now that I look at
Hi Henrik,
On Fri, 3 Apr 2015 19:57:37 +0200
Henrik Austad wrote:
[...]
> > C_i/min{D_i,T_i},where T_i is not referred before, should be
> > substituted by C_i/min{D_i,P_i}.
>
> Actually, I'd prefer we use T_i to describe the period and not P
> because:
>
> - P is easy to confuse with priority
On 04/03/2015 09:47 PM, Luca Abeni wrote:
On Fri, 3 Apr 2015 19:57:37 +0200
Henrik Austad wrote:
[...]
I realise that I've reviewed quite a lot of this, and I have some
vague memories of this being discussed earlier, Juri? Luca?
I remember there was a discussion (and I seem to remember
On 04/08/2015 11:31 AM, Juri Lelli wrote:
Hi Luca,
On 03/04/15 11:52, Luca Abeni wrote:
Hi,
On Fri, 3 Apr 2015 16:18:33 +0800
Zhiqiang Zhang wrote:
From the context, the definition of the density of a task
C_i/min{D_i,T_i},where T_i is not referred before, should be
substituted by C_i/
Add a short discussion about sufficient and necessary schedulability tests,
and a simple example showing that if D_i != P_i then density based tests
are only sufficient.
Also add some references to scientific papers on schedulability tests for
EDF that are both necessary and sufficient, and on thei
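For readers following along: the density of a task is C_i/min{D_i, P_i}, which is never below its utilization C_i/P_i, so when D_i != P_i a density-based test can reject task sets that are in fact schedulable. A hypothetical sketch (the names are mine, not from the patch):

```c
#include <assert.h>

/* density = C / min(D, P); utilization = C / P.
 * When D < P the density strictly exceeds the utilization, which is
 * why a density-based test is only sufficient, not necessary. */
static double dl_density(double c, double d, double p)
{
    double m = (d < p) ? d : p;
    return c / m;
}

static double dl_utilization(double c, double p)
{
    return c / p;
}
```

For example, a task with C = 1, D = 2, P = 4 has density 0.5 but utilization only 0.25; with implicit deadlines (D = P) the two coincide.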
The names "C_i" and "T_i" were used (without previously defining them)
instead of "WCET_i" and "P_i".
Based on a patch by Zhiqiang Zhang
---
Documentation/scheduler/sched-deadline.txt |4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/Documentation/scheduler/sched-deadline
---
Documentation/scheduler/sched-deadline.txt |4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/Documentation/scheduler/sched-deadline.txt
b/Documentation/scheduler/sched-deadline.txt
index 21461a0..b29b16c 100644
--- a/Documentation/scheduler/sched-deadline.txt
+++ b/Do
Add a description of Dhall's effect, some discussion about
schedulability tests for global EDF, and references to the real-time literature.
---
Documentation/scheduler/sched-deadline.txt | 81
1 file changed, 71 insertions(+), 10 deletions(-)
diff --git a/Documentat
I'll search and
add them.
Thanks,
Luca
Luca Abeni (4):
Documentation/scheduler/sched-deadline.txt: fix typos
Documentation/scheduler/sched-deadline.txt: use consistent namings
Documentation/scheduler/sched-deadline.txt: Some notes
On 04/09/2015 11:17 AM, Peter Zijlstra wrote:
On Thu, Apr 09, 2015 at 11:13:10AM +0200, Luca Abeni wrote:
Ok; so how should I proceed? Should I address the various comments (by you, Juri
and Henrik) by sending incremental patches based on these ones (since I see you
queued
these patches), or
On 04/09/2015 11:44 AM, Peter Zijlstra wrote:
On Thu, Apr 09, 2015 at 11:39:08AM +0200, Henrik Austad wrote:
+ CPUs, with the first M - 1 tasks having a small worst case execution time
+ WCET_i=e and period equal to relative deadline P_i=D_i=P-1. The last task
Normally, 'e' is used to denote a
Hi Henrik,
On 04/09/2015 11:39 AM, Henrik Austad wrote:
[...]
- SCHED_DEADLINE can be used to schedule real-time tasks guaranteeing that
- the jobs' deadlines of a task are respected. In order to do this, a task
- must be scheduled by setting:
+ utilisations or densities: it can be shown that ev
Hi Henrik,
On 04/09/2015 11:06 AM, Henrik Austad wrote:
On Wed, Apr 08, 2015 at 01:59:39PM +0200, Luca Abeni wrote:
Add a short discussion about sufficient and necessary schedulability tests,
and a simple example showing that if D_i != P_i then density based tests
are only sufficient.
Also add
On 04/09/2015 12:10 PM, Henrik Austad wrote:
[...]
@@ -43,7 +43,13 @@ CONTENTS
"deadline", to schedule tasks. A SCHED_DEADLINE task should receive
"runtime" microseconds of execution time every "period" microseconds, and
these "runtime" microseconds are available within "deadline" micros
Hi Peter,
On 04/08/2015 04:44 PM, Peter Zijlstra wrote:
On Wed, Apr 08, 2015 at 01:59:36PM +0200, Luca Abeni wrote:
Hi all,
here is the promised update for Documentation/scheduler/sched-deadline.txt.
I send it as an RFC because of the following doubts:
1) I split the patches trying to isolate
Hi Juri,
thanks for the review! I am fixing these issues locally.
Thanks,
Luca
On 04/09/2015 10:24 AM, Juri Lelli wrote:
On 08/04/15 12:59, Luca Abeni wrote:
Add a description of the Dhall's effect, some discussion about
schedulab
Hi,
On 04/12/2015 11:47 AM, Ingo Molnar wrote:
* Luca Abeni wrote:
Hi all,
here is an update for Documentation/scheduler/sched-deadline.txt.
With respect to the RFC I sent a few days ago, I:
1) Split the patches in a better way, (so that, for example, Zhiqiang Zhang's
authorship is pres
Hello,
On 06/15/2015 05:15 AM, Zhiqiang Zhang wrote:
Since commit 269ad80 (sched/deadline: Avoid double-accounting in
case of missed deadlines), parameter rq is no longer used, so
remove it.
I do not know if other people have plans to use this "rq" parameter,
but the patch looks ok to me.
Hi,
On Wed, 6 Jun 2018 14:20:46 +0100
Quentin Perret wrote:
[...]
> > However, IMHO, these are corner cases and in the average case it is
> > better to rely on running_bw and reduce the CPU frequency
> > accordingly.
>
> My point was that accepting to go at a lower frequency than required
> by
Hi all,
sorry; I missed the beginning of this thread... Anyway, below I add
some comments:
On Wed, 6 Jun 2018 15:05:58 +0200
Claudio Scordino wrote:
[...]
> >> Ok, I see ... Have you guys already tried something like my patch
> >> above (keeping the freq >= this_bw) in real world use cases ? Is
Hi all,
On Tue, 12 Mar 2019 10:03:12 +0800
"chengjian (D)" wrote:
> Hi.
>
> When testing SCHED_DEADLINE, syzkaller reported a warning in
> task_non_contending(). I tested the mainline kernel with the C program
> and captured the same call trace.
[...]
> diff --git a/kernel/sched/deadline.c b
Hi,
(I added Juri in cc)
On Tue, 12 Mar 2019 10:03:12 +0800
"chengjian (D)" wrote:
[...]
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index 31c050a0d0ce..d73cb033a06d 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -252,7 +252,6 @@ static void ta
Hi,
On Fri, 15 Mar 2019 08:43:00 +0800
"chengjian (D)" wrote:
[...]
> > diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> > index 6a73e41a2016..43901fa3f269 100644
> > --- a/kernel/sched/deadline.c
> > +++ b/kernel/sched/deadline.c
> > @@ -252,7 +252,6 @@ static void task_non_conte
Hi Juri,
On Fri, 7 Aug 2020 11:56:04 +0200
Juri Lelli wrote:
> Starting deadline server for lower priority classes right away when
> first task is enqueued might break guarantees
Which guarantees are you thinking about, here? Response times of fixed
priority tasks?
If fixed priority tasks are
Hi Juri,
On Fri, 7 Aug 2020 15:30:41 +0200
Juri Lelli wrote:
[...]
> > In the meanwhile, I have some questions/comments after a first quick
> > look.
> >
> > If I understand well, the patchset does not apply deadline servers
> > to FIFO and RR tasks, right? How does this patchset interact with R
Hi Peter,
On Fri, 7 Aug 2020 12:46:18 +0200
pet...@infradead.org wrote:
> On Fri, Aug 07, 2020 at 11:56:04AM +0200, Juri Lelli wrote:
> > Starting deadline server for lower priority classes right away when
> > first task is enqueued might break guarantees, as tasks belonging to
> > intermediate p
On Fri, 7 Aug 2020 15:43:53 +0200
Juri Lelli wrote:
> On 07/08/20 15:28, luca abeni wrote:
> > Hi Juri,
> >
> > On Fri, 7 Aug 2020 11:56:04 +0200
> > Juri Lelli wrote:
> >
> > > Starting deadline server for lower priority classes right away
>
Hi Juri,
thanks for sharing the v2 patchset!
In the next days I'll have a look at it, and try some tests...
In the meanwhile, I have some questions/comments after a first quick
look.
If I understand well, the patchset does not apply deadline servers to
FIFO and RR tasks, right? How does this pa
Hi,
On 01/21/2014 02:55 PM, Peter Zijlstra wrote:
On Tue, Jan 21, 2014 at 01:50:41PM +0100, Luca Abeni wrote:
On 01/21/2014 01:33 PM, Peter Zijlstra wrote:
- During the execution of a job, the task might invoke a blocking system call,
and block... When it wakes up, it is still in the
Hi Henrik,
On 01/27/2014 12:53 PM, Henrik Austad wrote:
[...]
+ In more details, the CBS algorithm assigns scheduling deadlines to
+ tasks in the following way:
+
+ - Each SCHED_DEADLINE task is characterised by the "runtime",
+"deadline", and "period" parameters;
+
+ - The state of the ta
On 01/27/2014 01:40 PM, Henrik Austad wrote:
[...]
Current runtime: time spent running _this_ period? or is _remaining_
runtime this period? I get the feeling it's the latter.
So, roughly, it is the ratio
remaining_runtime / relative_time_to_deadline
which needs to be greater than the
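The ratio Luca describes is compared against the task's reserved bandwidth (runtime/period). A simplified model of that CBS wakeup check follows; the field names are invented and this makes no claim to match the kernel code:

```c
#include <assert.h>

/* Simplified CBS wakeup rule: at wakeup time t, keeping the current
 * (deadline, remaining runtime) pair is safe only if
 *     remaining_runtime / (deadline - t) <= runtime / period,
 * i.e. the bandwidth still requested does not exceed the reserved
 * one. Otherwise a new deadline t + period is generated and the
 * runtime is refilled. Cross-multiplied to avoid divisions. */
struct cbs_server {
    double runtime;        /* reserved runtime per period */
    double period;
    double cur_runtime;    /* remaining runtime */
    double deadline;       /* current absolute scheduling deadline */
};

static void cbs_wakeup(struct cbs_server *s, double t)
{
    if (s->cur_runtime * s->period > (s->deadline - t) * s->runtime) {
        s->deadline = t + s->period;
        s->cur_runtime = s->runtime;
    }
}
```

A server with a full remaining runtime but a very close deadline fails the check and gets a fresh deadline; one whose remaining runtime is small relative to the time to its deadline keeps both values.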
Hi Juri,
On 01/20/2014 11:40 AM, Juri Lelli wrote:
From: Dario Faggioli
Add in Documentation/scheduler/ some hints about the design
choices, the usage and the future possible developments of the
sched_dl scheduling class and of the SCHED_DEADLINE policy.
[...]
+ References:
+ 1 - C. L. Liu
Hi all,
On 01/20/2014 02:16 PM, Henrik Austad wrote:
[...]
+ The typical -deadline task is composed of a computation phase (instance)
+ which is activated on a periodic or sporadic fashion. The expected (maximum)
+ duration of such computation is called the task's runtime; the time interval
+ by
On 01/21/2014 11:20 AM, Henrik Austad wrote:
On Mon, Jan 20, 2014 at 02:39:29PM +0100, Luca Abeni wrote:
Hi all,
On 01/20/2014 02:16 PM, Henrik Austad wrote:
[...]
+ The typical -deadline task is composed of a computation phase (instance)
+ which is activated on a periodic or sporadic fashion
On 01/21/2014 01:33 PM, Peter Zijlstra wrote:
On Tue, Jan 21, 2014 at 12:35:27PM +0100, Luca Abeni wrote:
In a system, we typically look at a set of tasks. In Linux-kernel
terminology, a particular task is normally a thread. When a thread is
ready to run, we say that a *job* of that task is
On 06/18/2014 09:01 AM, xiaofeng.yan wrote:
[...]
I also had an implementation of the GRUB algorithm (based on a modification
of my old CBS scheduler for Linux), but the computational complexity of the
algorithm was too high. That's why I never proposed to merge it in
SCHED_DEADLINE.
But maybe t
Hi,
first of all, sorry for the ultra-delayed reply: I've been busy,
and I did not notice this email... Anyway, some comments are below
On 05/16/2014 09:11 AM, Henrik Austad wrote:
[...]
This can also be implemented in user-space (without modifying the
scheduler)
by having a daemon that monitor
Hi,
On 06/17/2014 04:43 AM, xiaofeng.yan wrote:
[...]
The basic ideas are (warning! This is an over-simplification of the algorithm!
:)
- You assign runtime and period to each SCHED_DEADLINE task as usual
- Each task is guaranteed to receive its runtime every period
- You can also define a maxi
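The over-simplified description above can be modelled in a couple of lines; this is a toy sketch of the GRUB depletion rule, not the kernel implementation:

```c
#include <assert.h>

/* Toy model of GRUB-style reclaiming: while a task executes for dt,
 * its runtime is depleted as dq = -U_act * dt (U_act = total active
 * utilization, <= 1) instead of dq = -dt. When other reservations are
 * inactive, U_act < 1 and the running task consumes its runtime more
 * slowly, effectively reclaiming the unused bandwidth. */
static double grub_deplete(double runtime, double u_act, double dt)
{
    return runtime - u_act * dt;
}
```

With U_act = 1 (everyone active) the rule degenerates to the plain CBS depletion dq = -dt; with U_act = 0.5 the task burns its runtime at half speed and can run twice as long.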
Hi Steven,
On Mon, 27 Jan 2014 10:35:56 -0500
Steven Rostedt wrote:
[...]
> > + to be executed first. Thanks to this feature, also tasks that do
> > not
> > + strictly comply with the "traditional" real-time task model (see
> > Section 3)
> > + can effectively use the new policy.
> > +
> > + In
Hi Steven,
On Mon, 27 Jan 2014 12:09:38 -0500
Steven Rostedt wrote:
[...]
> > > Lets take a case where deadline == period. It seems that the above
> > > would be true any time there was any delay to starting the task
> > > or the task was interrupted by another SCHED_DEADLINE task.
> > Not sure a
Hi Ingo,
On Thu, 21 Aug 2014 15:38:37 +0200
Ingo Molnar wrote:
[...]
> > + If the total utilisation sum_i(WCET_i/P_i) (sum of the
> > utilisations
> > + WCET_i/P_i of all the tasks in the system - notice that when
> > considering
> > + multiple tasks, the parameters of the i-th one are indicated
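The admission condition being quoted, on the total utilization sum_i(WCET_i/P_i), can be sketched as a simple uniprocessor check; the struct and function names here are assumptions of mine:

```c
#include <assert.h>

struct task_params {
    double wcet;    /* worst-case execution time */
    double period;  /* (or minimum inter-arrival time) */
};

/* Sufficient and necessary test for implicit-deadline EDF on a
 * uniprocessor: the set is schedulable iff sum_i(WCET_i/P_i) <= 1. */
static int edf_up_admit(const struct task_params *ts, int n)
{
    double u = 0.0;
    int i;

    for (i = 0; i < n; i++)
        u += ts[i].wcet / ts[i].period;
    return u <= 1.0;
}
```

On SMP the same sum compared against the number of CPUs is only a necessary condition, which is exactly the point the surrounding discussion makes.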
Hi,
On Fri, 22 Aug 2014 10:31:11 +0200
Ingo Molnar wrote:
[...]
> > > > + execution time is guaranteed for non real-time tasks, which
> > > > risk to be
> > > > + starved by real-time tasks.
> > >
> > > The last part doesn't really parse as correct English for me -
> > > maybe also split this o
Hi,
On 09/02/2014 11:10 PM, Henrik Austad wrote:
On Thu, Aug 28, 2014 at 11:00:26AM +0100, Juri Lelli wrote:
From: Luca Abeni
Several small changes regarding SCHED_DEADLINE documentation that fix
terminology and improve clarity and readability:
- "current runtime" becomes
Hi,
On 09/02/2014 11:45 PM, Henrik Austad wrote:
[...]
+ On multiprocessor systems with global EDF scheduling (non partitioned
+ systems), a sufficient test for schedulability can not be based on the
+ utilisations (it can be shown that task sets with utilisations slightly
+ larger than 1 can mi
On 09/03/2014 09:45 AM, Henrik Austad wrote:
[...]
Summing up, the CBS[2,3] algorithms assigns scheduling deadlines to
tasks so
that each task runs for at most its runtime every period, avoiding any
interference between different tasks (bandwidth isolation), while the
EDF[1]
- algor
On 09/03/2014 09:48 AM, Henrik Austad wrote:
On Wed, Sep 3, 2014 at 8:49 AM, Luca Abeni wrote:
Hi,
On 09/02/2014 11:45 PM, Henrik Austad wrote:
[...]
+ On multiprocessor systems with global EDF scheduling (non partitioned
+ systems), a sufficient test for schedulability can not be based
On Tue, 12 Aug 2014 11:11:00 -0700
Randy Dunlap wrote:
[...]
> > + The utilisation of a real-time task is defined as the ratio
> > between its
> > + wcet and its period (or minimum inter-arrival time), and
> > represents
>
>"wcet" seems to be used here without any explanation of what it
> mea
Hi all,
On Mon, 16 May 2016 18:00:04 +0200
Tommaso Cucinotta wrote:
> Hi,
>
> looking at the SCHED_DEADLINE code, I spotted an opportunity to
> make cpudeadline.c faster, in that we can skip real swaps during
> re-heapify()ication of items after addition/removal. As such ops
> are done under a
Hi all,
a quick reply because I am in a hurry... I'll write a longer reply this
evening or tomorrow
On Tue, 17 May 2016 09:46:46 -0400
Steven Rostedt wrote:
[...]
> And I still don't see how this is a SMP vs UP situation.
Well, on UP if the sum of the tasks' densities is <= 1 then
all t
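Luca's UP argument corresponds to a density-based sufficient test; a minimal sketch under hypothetical names:

```c
#include <assert.h>

struct dl_task {
    double wcet;
    double deadline;   /* relative deadline */
    double period;
};

/* Sufficient (but not necessary) uniprocessor test for constrained
 * deadlines: if sum_i(WCET_i / min(D_i, P_i)) <= 1, EDF meets all
 * deadlines. */
static int density_up_test(const struct dl_task *ts, int n)
{
    double sum = 0.0;
    int i;

    for (i = 0; i < n; i++) {
        double m = (ts[i].deadline < ts[i].period) ?
                   ts[i].deadline : ts[i].period;
        sum += ts[i].wcet / m;
    }
    return sum <= 1.0;
}
```

Because the test is only sufficient, a set it rejects may still be schedulable; on SMP, as the rest of the thread discusses, even passing a density bound is not enough.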
Hi,
On Tue, 17 May 2016 09:46:46 -0400
Steven Rostedt wrote:
> [ Added LKML and Peter ]
>
> On Tue, 17 May 2016 12:38:54 +0200
> luca abeni wrote:
>
> > Hi all,
> >
> > On Tue, 17 May 2016 10:02:01 +0100
> > Juri Lelli wrote:
> > [...]
On Tue, 17 May 2016 10:33:00 -0400
Steven Rostedt wrote:
> On Tue, 17 May 2016 16:07:49 +0200
> luca abeni wrote:
>
> > > As I
> > > mentioned on IRC, what about the case with two CPUs and this:
> > >
> > > Two tasks with: R:10us D: 15us
On 12/02/2015 11:42 AM, Wanpeng Li wrote:
Hi Luca,
2015-11-27 20:14 GMT+08:00 Luca Abeni :
Hi all,
I ran some quick tests on this patch (because I was working on something
related), and it seems to me that it triggers a bug. Here are some
Your Tested-by to v4 would be greatly appreciated.
Sorry
d/dequeued, similar to what we already do for
RT.
Signed-off-by: Wanpeng Li
I ran some tests with this patch, and I found no issues; so, you can add
Tested-by: luca abeni
I just have one minor comment on the patch:
[...]
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 8b0a15
Hi,
On 12/02/2015 02:33 PM, Wanpeng Li wrote:
[...]
We updated leftmost above, can't we simply use that path for this thing
below?
Do you mean something like below?
@@ -195,6 +195,9 @@ static void dequeue_pushable_dl_task(struct rq *rq, struct
task_struct *p)
next_node = rb
Hi,
On 12/03/2015 03:25 AM, Wanpeng Li wrote:
[...]
@@ -202,16 +197,18 @@ static void dequeue_pushable_dl_task(struct rq *rq,
struct task_struct *p)
next_node = rb_next(&p->pushable_dl_tasks);
dl_rq->pushable_dl_tasks_leftmost = next_node;
+ if (n
d/dequeued, similar to what we already do for
RT.
I just re-ran some tests with this version of the patch, and
it still looks ok.
Luca
Tested-by: Luca Abeni
Signed-off-by: Wanpeng Li
---
v5 -> v6:
* take advantage of next_node
v4 -> v5:
* remove useless pick_next_
On 12/14/2015 03:02 PM, Vincent Guittot wrote:
[...]
Small nit: why "average" utilization? I think a better name would be
"runqueue utilization"
or "local utilization", or something similar... If I understand correctly
(sorry if I
missed something), this is not an average, but the sum of the
util
On Mon, 14 Dec 2015 16:56:17 +0100
Vincent Guittot wrote:
[...]
> >> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> >> index 08858d1..e44c6be 100644
> >> --- a/kernel/sched/sched.h
> >> +++ b/kernel/sched/sched.h
> >> @@ -519,6 +519,8 @@ struct dl_rq {
> >> #else
> >> struct dl_
On Mon, 14 Dec 2015 16:07:59 +
Juri Lelli wrote:
[...]
> > I agree that if the WCET is far from reality, we will underestimate
> > available capacity for CFS. Have you got some use case in mind which
> > overestimates the WCET ?
>
> I guess simply the fact that one task can be admitted to the
On Mon, 14 Dec 2015 17:51:28 +0100
Peter Zijlstra wrote:
> On Mon, Dec 14, 2015 at 04:56:17PM +0100, Vincent Guittot wrote:
> > I agree that if the WCET is far from reality, we will underestimate
> > available capacity for CFS. Have you got some use case in mind which
> > overestimates the WCET ?
On 12/15/2015 05:59 AM, Vincent Guittot wrote:
[...]
So I don't think this is right. AFAICT this projects the WCET as the
amount of time actually used by DL. This will, under many
circumstances, vastly overestimate the amount of time actually
spent on it, and therefore unduly pessimise the fair capa
On 12/15/2015 01:20 PM, Peter Zijlstra wrote:
On Tue, Dec 15, 2015 at 09:50:14AM +0100, Luca Abeni wrote:
On 12/15/2015 05:59 AM, Vincent Guittot wrote:
The 2nd definition is used to compute the remaining capacity for the
CFS scheduler. This one doesn't need to be updated at each wake/
On 12/15/2015 01:23 PM, Peter Zijlstra wrote:
On Tue, Dec 15, 2015 at 09:50:14AM +0100, Luca Abeni wrote:
Strictly speaking, the active utilisation must be updated when a task
wakes up and when a task sleeps/terminates (but when a task sleeps/terminates
you cannot decrease the active
On 12/15/2015 01:38 PM, Peter Zijlstra wrote:
On Mon, Dec 14, 2015 at 10:31:13PM +0100, Luca Abeni wrote:
There 'might' be smart pants ways around this, where you run part of
the execution at lower speed and switch to a higher speed to 'catch'
up if you exceed some bou
On 12/15/2015 01:43 PM, Vincent Guittot wrote:
[...]
I agree that if the WCET is far from reality, we will underestimate
available capacity for CFS. Have you got some use case in mind which
overestimates the WCET ?
If we can't rely on this parameters to evaluate the amount of capacity
used by dea