Red Hat's lead real-time scheduling engineer (who has a solid
background in formal methods) posted an interesting comment on an LWN
article about seL4's scheduler research
(https://lwn.net/Articles/745946/).

While I disagree with the tone of the comment, I rarely see informed
critiques of seL4, and context-switching performance is core to
microkernel viability.  My apologies if this is an inappropriate venue
for this discussion.

---

"With the current scheduler, that driver must be given a high priority
and trusted to not hog the CPU." So, they need to implement either RT
throttling or the CBS, like Linux.
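
For reference, a CBS reservation on Linux is what SCHED_DEADLINE gives
you.  A minimal sketch of putting a thread under such a reservation
(the 2 ms / 10 ms numbers are made-up example values; the struct and
raw syscall are there because glibc has no sched_setattr() wrapper):

/* Sketch: a CBS-style reservation on Linux via SCHED_DEADLINE.
 * Runtime/deadline/period values below are made-up examples. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/types.h>

#ifndef SCHED_DEADLINE
#define SCHED_DEADLINE 6
#endif

/* glibc does not wrap sched_setattr(2), so declare the attr struct and
 * invoke the syscall directly (the usual pattern for this policy). */
struct sched_attr {
    __u32 size;
    __u32 sched_policy;
    __u64 sched_flags;
    __s32 sched_nice;
    __u32 sched_priority;
    __u64 sched_runtime;
    __u64 sched_deadline;
    __u64 sched_period;
};

int main(void)
{
    struct sched_attr attr;

    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.sched_policy   = SCHED_DEADLINE;
    attr.sched_runtime  =  2 * 1000 * 1000;   /*  2 ms budget per period */
    attr.sched_deadline = 10 * 1000 * 1000;   /* 10 ms relative deadline */
    attr.sched_period   = 10 * 1000 * 1000;   /* 10 ms period            */

    if (syscall(SYS_sched_setattr, 0, &attr, 0) != 0) {
        perror("sched_setattr");
        return 1;
    }

    /* ... the "untrusted driver" work would run here, throttled by CBS ... */
    return 0;
}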

"Another problem is data structures that are shared between
criticalities....se of the priority-ceiling protocol can help." Nice,
then they will see that the priority ceiling will add release jitter
to the highest priority task, and then will notice that they will have
to use other protocol, like, mutexes with priority inheritance? Where
did I hear about it already... ah! On Linux! It is widely known that
if you use priority ceiling, you will have scheduling "latency." But
it really helps to reduce the response time... so, what do you prefer
Response Time or Scheduling latency (release jitter or any scheduling
delay, scheduler overhead... or any other name people do for it)?
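
For comparison, both protocols are available to user space on Linux
through POSIX mutex attributes; a small sketch of choosing one or the
other for a shared lock (the ceiling value of 80 is an arbitrary
example):

/* Sketch: selecting the locking protocol for a shared mutex with POSIX
 * threads.  PTHREAD_PRIO_PROTECT is the priority-ceiling protocol;
 * PTHREAD_PRIO_INHERIT is priority inheritance (backed by PI futexes
 * on Linux). */
#define _GNU_SOURCE
#include <pthread.h>

static pthread_mutex_t shared_lock;

static int init_lock(int use_ceiling)
{
    pthread_mutexattr_t attr;
    int err;

    pthread_mutexattr_init(&attr);

    if (use_ceiling) {
        /* Priority ceiling: the holder runs at the ceiling for the
         * whole critical section, which is where the release jitter
         * mentioned above comes from.  80 is an arbitrary example. */
        pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
        pthread_mutexattr_setprioceiling(&attr, 80);
    } else {
        /* Priority inheritance: the holder is boosted only while a
         * higher-priority task is actually blocked on the lock. */
        pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    }

    err = pthread_mutex_init(&shared_lock, &attr);
    pthread_mutexattr_destroy(&attr);
    return err;
}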

"Any process that exceeds its CPU budget during a given period can be
throttled. Processes that have shorter periods are given a higher
priority by the scheduler." Ok! here they have something like CBS, but
they are using the fixed-priority rate monotonic scheduling approach!
Nice! that is a technology from... 1969? But wait, if they already
implemented this, why is the "network driver" a problem? - answer:
because they cannot estimate the period of the packets arriving - RM
is not flexible enough to all real-world cases! But that does not mean
that this is the only solution. So my question is: Is there a
sufficient & necessary schedulability test for their scheduler?

"what happens if the client's budget runs out while the server is
running?" Welcome to our world, check the "dl_boosted" variable on
sched_deadline.

"This new scheduler solves the problem, but it does not come for free:
interrupt latency increases by 11% and scheduling itself is 19% more
expensive. In both cases, the need to reprogram hardware timers is the
source of most of the slowdown. But, he noted, scheduling in seL4 is
"still an order of magnitude faster than Linux""

Welcome to our world again! And let me guess when you will start to
see numbers like ours... when you finish implementing "some
memory-management unit operations, and multicore operation"?

Because on PREEMPT_RT we do this in microseconds: will their
implementation continue working in nanoseconds (i.e., an order of
magnitude faster than Linux) once it has such features?
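
(For what it's worth, the PREEMPT_RT microsecond numbers come from
cyclictest-style measurements; the core of that measurement is just a
SCHED_FIFO thread doing periodic absolute sleeps, roughly like the
sketch below, with example interval and priority values.)

/* Rough sketch of what cyclictest measures on PREEMPT_RT: the wake-up
 * latency of a SCHED_FIFO thread doing periodic absolute sleeps.
 * Interval and priority are example values. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <time.h>

#define NSEC_PER_SEC 1000000000L

static void ts_add_ns(struct timespec *t, long ns)
{
    t->tv_nsec += ns;
    while (t->tv_nsec >= NSEC_PER_SEC) {
        t->tv_nsec -= NSEC_PER_SEC;
        t->tv_sec++;
    }
}

int main(void)
{
    struct sched_param sp = { .sched_priority = 99 };  /* example prio */
    struct timespec next, now;
    long max_ns = 0;

    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
        perror("sched_setscheduler");
        return 1;
    }

    clock_gettime(CLOCK_MONOTONIC, &next);
    for (int i = 0; i < 100000; i++) {
        ts_add_ns(&next, 1000000);                     /* 1 ms interval */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        clock_gettime(CLOCK_MONOTONIC, &now);

        long lat = (now.tv_sec - next.tv_sec) * NSEC_PER_SEC
                 + (now.tv_nsec - next.tv_nsec);
        if (lat > max_ns)
            max_ns = lat;
    }
    printf("max wake-up latency: %ld us\n", max_ns / 1000);
    return 0;
}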

And I wonder: where is the mixed-criticality part of the
scheduler...? Running Linux as a guest?

I read the papers, and the work is very nice! Seriously! Very nice!

But please, stop comparing apples and oranges. It is not possible to
compare this OS with Linux. Moreover, there are other OSes with such
features running on real systems. Verified or not, they are certified
for it.
