On 16 Oct 2022, at 23:25, Harry Butterworth <heb1...@gmail.com> wrote:
> 
> The "single producer, single consumer, lockless bounded queues implemented
> as ring buffers" also caught my attention.
> 
> In the past, to minimize jitter, I found it useful to have a pool of
> threads consuming work from a single queue.  This reduces the probability
> that tasks will be significantly delayed behind other tasks that are taking
> an unusually long time, because additional threads in the pool become free
> and allow subsequent tasks to bypass the ones taking too long.  This is
> well known and often implemented in post offices where there is a single
> queue for multiple counters.  This implies MPMC (or perhaps SPMC and MPSC
> in pairs).

Hi Harry, Stewart,

The whole point here is that we don’t think we need this, and as a result can 
keep implementation complexity as well as overheads low. SPMC/MPSC is 
inherently more complex than SPSC, and, judging by the papers I recently read 
that use it, I’m reasonably confident we’ll at least match the performance of 
such approaches (and I'll offer a grovelling retraction if proven wrong ;-).

In our design there’s only one place with a 1:n mapping: the multiplexer. All 
it does is move pointers from a driver-side ring to a set of per-client rings 
(input), or from a set of per-client rings to a driver-side ring (output). On 
output it applies a simple policy when deciding which client to serve 
(priority-based, round-robin, or bandwidth limiting). A particular multiplexer 
implements just one policy, and you pick the one you want. Basically we’re 
looking at building a Lego set, where every Lego block is single-threaded and 
can run on a different core.
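To make the idea concrete, here is a minimal sketch of what such a setup could look like: an SPSC bounded ring of buffer pointers plus one round-robin output-multiplexer pass that moves at most one pointer per client ring into the driver-side ring. All names (ring_enqueue, mux_rr_pass, RING_SIZE, etc.) are illustrative assumptions, not the actual sDDF API, and a real implementation would of course live in shared memory between cores.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical SPSC bounded ring of buffer pointers.  RING_SIZE must be a
   power of two; one slot is kept empty to distinguish full from empty. */
#define RING_SIZE 8

struct ring {
    _Atomic size_t head;           /* advanced only by the consumer */
    _Atomic size_t tail;           /* advanced only by the producer */
    void *slots[RING_SIZE];
};

static bool ring_full(const struct ring *r)
{
    size_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    size_t head = atomic_load_explicit(&r->head, memory_order_acquire);
    return ((tail + 1) & (RING_SIZE - 1)) == head;
}

static bool ring_enqueue(struct ring *r, void *ptr)
{
    size_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    if (ring_full(r)) {
        return false;              /* ring full */
    }
    r->slots[tail] = ptr;
    atomic_store_explicit(&r->tail, (tail + 1) & (RING_SIZE - 1),
                          memory_order_release);
    return true;
}

static bool ring_dequeue(struct ring *r, void **ptr)
{
    size_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
    size_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (head == tail) {
        return false;              /* ring empty */
    }
    *ptr = r->slots[head];
    atomic_store_explicit(&r->head, (head + 1) & (RING_SIZE - 1),
                          memory_order_release);
    return true;
}

/* One round-robin pass of an output multiplexer: move at most one pointer
   from each client ring into the driver-side ring.  The mux is the sole
   producer of the driver ring, so checking for space before dequeuing a
   client pointer is race-free.  Returns the number of pointers moved. */
static size_t mux_rr_pass(struct ring **clients, size_t n, struct ring *driver)
{
    size_t moved = 0;
    for (size_t c = 0; c < n; c++) {
        void *ptr;
        if (ring_full(driver)) {
            break;                 /* no space downstream; stop the pass */
        }
        if (ring_dequeue(clients[c], &ptr)) {
            (void)ring_enqueue(driver, ptr);   /* cannot fail: space checked */
            moved++;
        }
    }
    return moved;
}
```

Note that the mux itself stays single-threaded: each ring still has exactly one producer and one consumer, so no ring ever needs to be more than SPSC.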

This keeps all rings SPSC, and every single piece very simple (and likely 
verifiable), and should be able to maximise concurrency of the whole system. 

We haven’t implemented and evaluated the multiplexers yet, but that’ll be one 
of the first things we’ll do when Lucy returns from internship/vacation in 
early Jan (and I’m very much looking forward to analysing results).

Gernot
_______________________________________________
Devel mailing list -- devel@sel4.systems
To unsubscribe send an email to devel-leave@sel4.systems