On 02/14/2017 04:54 PM, Tommaso Cucinotta wrote:
> On 13/02/2017 20:05, Daniel Bristot de Oliveira wrote:
>> To avoid this problem, in the activation of a constrained deadline
>> task after the deadline but before the next period, throttle the
>> task and set the replenishment timer to the beginning of the next
>> period, unless it is boosted.
>
> my only comment is that, by throttling on (dl < wakeup time < period), we
> force the app to sync its activation time with the kernel, and the CBS
> doesn't self-sync anymore with the app's own periodicity, which is what
> normally happens with dl=period. With dl=period, we lose the CBS
> self-sync and force the app to sync with the kernel periodic timer only
> if we explicitly use yield(), but now this also becomes implicit just by
> setting dl < period.
I see your point. However, that will happen only if, because of some
external factor or imprecision, the task wakes up with a minimum
inter-arrival time smaller than the dl_period. In that case, IMHO, the
user must be aware of the misbehavior or imprecision of the task/method
that activates the task, and set an appropriate/safer (smaller)
dl_period.

Furthermore, (correct me if I am wrong...) CBS will self-sync implicit
deadline tasks only if they did not consume all of their previous
runtime. This is because, if the runtime was consumed, the wakeup falls
into the same case I am making constrained tasks fall into: the task
will be throttled until the next replenishment, after the next period.

The idea is to simulate sched_yield(). By suspending itself past the
deadline, the task either has timing problems, or it wants to suspend
itself until the next activation, as if it were calling sched_yield(),
but while still allowing itself to be sporadic (to be activated after
the minimum inter-arrival time).

>
>> 	attr.sched_policy   = SCHED_DEADLINE;
>> 	attr.sched_runtime  = 2 * 1000 * 1000;		/* 2 ms */
>> 	attr.sched_deadline = 2 * 1000 * 1000;		/* 2 ms */
>> 	attr.sched_period   = 2 * 1000 * 1000 * 1000;	/* 2 s  */
> ...
>> On my box, this reproducer uses almost 50% of the CPU time, which is
>> obviously wrong for a task with 2/2000 reservation.
>
> just a note here: in this example of runtime=deadline=2ms, if we rely
> on a utilization-based test, then we should assume the task is taking 100%.
> More precise tests for EDF with deadline < period would properly count the
> 1998ms/2000ms free space, instead.

Yeah, it is taking 100% of runtime/deadline. But the admission test uses
runtime/period, so it will pass. The idea of runtime=deadline is to
avoid the task being throttled; if the task is throttled, we would not
be able to demonstrate this bug. Anyway, we can set
runtime = (0.95 * deadline) and it will also reproduce the problem, as
long as the task is put to sleep before being throttled.

Thanks!

-- Daniel
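
PS: for reference, below is a minimal sketch of the kind of reproducer I
am describing. It is not the exact program from the report: the raw
sched_setattr() syscall wrapper and the local struct sched_attr are the
usual boilerplate (glibc provides no wrapper), and the sleep/burn loop
lengths are only illustrative.

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <linux/types.h>

#ifndef SCHED_DEADLINE
#define SCHED_DEADLINE	6
#endif

/* Local copy of the uapi struct, as glibc does not export it. */
struct sched_attr {
	__u32 size;
	__u32 sched_policy;
	__u64 sched_flags;
	__s32 sched_nice;
	__u32 sched_priority;
	__u64 sched_runtime;
	__u64 sched_deadline;
	__u64 sched_period;
};

static int sched_setattr(pid_t pid, const struct sched_attr *attr,
			 unsigned int flags)
{
	return syscall(__NR_sched_setattr, pid, attr, flags);
}

int main(void)
{
	struct sched_attr attr;
	/* sleep past the 2 ms deadline, but well before the 2 s period */
	struct timespec ts = { .tv_sec = 0, .tv_nsec = 10 * 1000 * 1000 };
	volatile unsigned long i;

	memset(&attr, 0, sizeof(attr));
	attr.size           = sizeof(attr);
	attr.sched_policy   = SCHED_DEADLINE;
	attr.sched_runtime  = 2 * 1000 * 1000;		/* 2 ms */
	attr.sched_deadline = 2 * 1000 * 1000;		/* 2 ms */
	attr.sched_period   = 2 * 1000 * 1000 * 1000;	/* 2 s  */

	if (sched_setattr(0, &attr, 0)) {
		perror("sched_setattr");
		return 1;
	}

	for (;;) {
		/*
		 * Each wakeup lands after the deadline but before the next
		 * period. Without the fix, the task gets a fresh 2 ms of
		 * runtime against a 2 ms deadline on every wakeup, so the
		 * busy loop below can consume far more than 2 ms every 2 s.
		 */
		nanosleep(&ts, NULL);
		for (i = 0; i < 100 * 1000 * 1000; i++)
			;	/* burn the (wrongly) replenished runtime */
	}

	return 0;
}

The exact sleep and busy-loop lengths do not matter much, as long as the
task sleeps past its deadline and still has work to do when it wakes up.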