Re: GC timing

2021-07-07 Thread Eric S. Raymond via devel
Hal Murray:
> If you pass in a buffer, there is no reason to allocate anything in the case of a server processing a request, so this whole discussion is a wild goose chase.

It's a little more complicated than that, because I was describing the lowest-level recvfrom() in the socket library.
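Assuming the implementation language under discussion is Go (the thread is about GC behavior, which fits Go's runtime; the messages here never name the language), the caller-supplies-the-buffer pattern Eric describes can be sketched with Go's net package, whose ReadFromUDP works like C's recvfrom(2). The echoOnce name and the 1500-byte MTU-sized buffer are illustrative choices, not anything from the NTPsec tree:

```go
package main

import (
	"fmt"
	"net"
)

// echoOnce sends one datagram to a loopback listener and reads it
// back into a caller-supplied buffer, recvfrom(2)-style.
func echoOnce() (string, error) {
	// Listen on an ephemeral localhost UDP port.
	conn, err := net.ListenUDP("udp", &net.UDPAddr{IP: net.IPv4(127, 0, 0, 1)})
	if err != nil {
		return "", err
	}
	defer conn.Close()

	// Send ourselves a datagram to stand in for a client request.
	client, err := net.DialUDP("udp", nil, conn.LocalAddr().(*net.UDPAddr))
	if err != nil {
		return "", err
	}
	defer client.Close()
	if _, err := client.Write([]byte("mode 3 request")); err != nil {
		return "", err
	}

	// The read fills the buffer we hand it; the read itself allocates
	// nothing per packet, so a server loop can reuse one buffer forever.
	buf := make([]byte, 1500) // allocated once, outside the hot loop
	n, _, err := conn.ReadFromUDP(buf)
	if err != nil {
		return "", err
	}
	return string(buf[:n]), nil
}

func main() {
	payload, err := echoOnce()
	if err != nil {
		panic(err)
	}
	fmt.Println(payload)
}
```

This is the point Eric concedes is "more complicated": the caller-visible API takes a buffer, but what the lowest-level socket wrapper does internally is a separate question.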

Re: GC timing

2021-07-07 Thread Hal Murray via devel
>> What is the API for recvfrom()? Do you pass in a buffer, like in C, or does it return a newly allocated buffer?

> You pass in a buffer. In theory we could maintain a buffer ring. I'd want to see actual benchmarks showing frequent GCs before I'd believe it was necessary, though.

If
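The "buffer ring" Eric floats could be sketched, in Go, with sync.Pool — a substitution of mine, since the thread never settles on a mechanism, and handlePacket is a hypothetical name rather than NTPsec code. The point is simply that recycled buffers never become garbage, so churn stays flat regardless of packet rate:

```go
package main

import (
	"fmt"
	"sync"
)

// bufPool stands in for the hypothetical buffer ring: a stock of
// reusable 1500-byte packet buffers handed out and returned instead
// of being allocated fresh and left for the collector.
var bufPool = sync.Pool{
	New: func() interface{} { return make([]byte, 1500) },
}

// handlePacket copies one datagram into a pooled buffer, would
// process it, and returns the buffer for reuse.
func handlePacket(payload []byte) int {
	buf := bufPool.Get().([]byte) // reuse an old buffer if one is free
	defer bufPool.Put(buf)        // hand it back when we're done
	n := copy(buf, payload)
	// ... parse buf[:n] here ...
	return n
}

func main() {
	fmt.Println(handlePacket([]byte("mode 3 request")))
}
```

Hal's caution applies here too: a pool only pays for itself if benchmarks actually show GC pauses from per-packet allocation, which at NTP's typical one-packet-per-second-per-peer rates is far from given.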

Re: GC timing

2021-07-07 Thread Eric S. Raymond via devel
Hal Murray:
> >> That doesn't make sense. Where does your "one second apart" come from? Why is "currently has 2 threads" interesting?

> > When do we poll at a less than one-second interval? Most allocations would be associated with making a packet frame for the send, then dealing

Re: GC timing

2021-07-07 Thread Hal Murray via devel
>> That doesn't make sense. Where does your "one second apart" come from? Why is "currently has 2 threads" interesting?

> When do we poll at a less than one-second interval? Most allocations would be associated with making a packet frame for the send, then dealing with a response that c

Re: GC timing

2021-07-07 Thread Eric S. Raymond via devel
Hal Murray:
> > I don't know all those numbers yet. But: given that NTPsec only currently has 2 threads and our allocations are typically occurring one second apart or less per upstream or downstream, I can't even plausibly *imagine* a Raft implementation having lower memory churn t

GC timing

2021-07-06 Thread Hal Murray via devel
> I don't know all those numbers yet. But: given that NTPsec only currently has 2 threads and our allocations are typically occurring one second apart or less per upstream or downstream, I can't even plausibly *imagine* a Raft implementation having lower memory churn than we do.

That does