[timing of GC]
> which shows their measured STW pauses are bounded to about 95% by 600us and
> typically less than 400us. This is consistent with other reports I've seen,
> and that's why I took 600us as a worst case STW we're likely to see.
I didn't see any description of what was actually measured.
> Right. But the way to get there is certainly *not* to try to design ahead
> while you're still thinking in a language like C where concurrent programming
> is difficult and error-prone.
This is not a hard problem. There is nothing tricky.
> Once you get used to being able to program in Go's
Hal Murray :
> We don't have a multithreaded server yet. Worst case we have two threads,
> and only one can ever reach the critical region in question. Don't borrow
> trouble! :-)

I'm interested in building a server that will keep a gigabit link running at
full speed. We can do that with multiple threads
Hal Murray :
>> 1. packet tx happening right after tx timestamp for server response
> A) Mitigate window 1 by turning off GC before it and back on after.
Things get complicated. Consider a multi-threaded server. If you have
several busy server threads, can they keep the GC off 100% of the time?
(Consider
>> 1. packet tx happening right after tx timestamp for server response
> Yes, and that really should be handled in the kernel, maybe implemented via
> BPF.
Interesting idea. It might work for simple packets but is unlikely to be
practical for authenticated packets. If nothing else, you have to