Hal Murray <hmur...@megapathdsl.net>:
> > [Transmit path]
> > No, I'd much rather put in a GC lockout on the critical region than
> > complicate the protocol.
>
> That works if the environment has a GC lockout.  Is that a column on
> your features list?
Not explicitly, but there is no way I would *ever* try using a language
with GC for soft-realtime work (even soft realtime as comparatively
squishy as NTP) unless it had that capability.  I wasn't even a bit
surprised Pike and his crew put in that switch; I share traditions with
them that would demand it.

> > ntpd spends enough time in I/O waits that I do not think latency
> > spikes will otherwise induce any problems above measurement noise.
>
> I don't think the I/O waits are important.  We need to work correctly
> when the server is overloaded.

Serious question: how often does this actually happen?  I want to get a
feel for scale.

> The key idea is that the server contributes 2 time stamps; the
> difference is how long the packet spent on the server, either waiting
> to get processed and/or actually getting processed, say because the
> crypto takes a lot of CPU cycles.
>
> We should measure the time from grabbing the time stamp to sending
> the packet.  That might include some crypto.  We might get better
> results by adjusting the time stamp to compensate for that.

Good idea.  I'd do it something like this:

1. Every time we ship a packet, take timestamps at the beginning and
   end of the critical region.

2. On the next send, adjust the transmit timestamp by the average of
   all previous critical-region durations.
-- 
<a href="http://www.catb.org/~esr/">Eric S. Raymond</a>
_______________________________________________
devel mailing list
devel@ntpsec.org
http://lists.ntpsec.org/mailman/listinfo/devel