On Wed, Mar 15, 2023 at 7:54 AM Nathan Bossart <nathandboss...@gmail.com> wrote:
> Here is roughly what I had in mind:
>
> NOTE: Although the delay is specified in microseconds, older Unixen and
> Windows use periodic kernel ticks to wake up, which might increase the
> delay time significantly.  We've observed delay increases as large as
> 20 milliseconds on supported platforms.
Sold. And pushed.

I couldn't let that 20ms != 1s/100 problem go, despite my claim that I
would, and now I see: NetBSD does have 10ms tick resolution, so everyone
can relax, the arithmetic still works.  It's just that it always or often
adds one extra tick, for some strange reason, so you can measure 20ms,
30ms, ... but never as low as 10ms.  *Shrug*.  Your description covered
that nicely.  (There's a quick repro sketch at the end of this mail.)

https://marc.info/?l=netbsd-current-users&m=144832117108168&w=2

> > (The word "interrupt" is a bit overloaded, which doesn't help with
> > this discussion.)
>
> Yeah, I think it would be clearer if "interrupt" was disambiguated.

OK, I rewrote it to avoid that terminology.

One small detail, after reading Tom's 2019 proposal to do this[1]: he
mentioned SUSv2's ENOSYS error.  I see that SUSv3 (POSIX.1-2001) dropped
that.  Systems that don't have the "timers" option simply shouldn't define
the function at all, but we already require the "timers" option for
clock_gettime().  And more practically, I know that all our target systems
have it and it works.

Pushed.

[1] https://www.postgresql.org/message-id/4902.1552349...@sss.pgh.pa.us
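
PS: In case anyone wants to reproduce the NetBSD observation, here's a
rough standalone sketch (not anything from the tree, just plain POSIX
nanosleep()/clock_gettime()) that asks for a 10ms sleep and prints how long
it really took.  On a tickless or high-resolution system you'd expect
numbers close to 10000us; on a 100Hz-tick kernel with the extra-tick
behaviour described above you'd apparently never see below ~20000us.

    /* sleeptest.c: measure overshoot of a requested 10ms sleep (sketch only) */
    #include <stdio.h>
    #include <time.h>

    int
    main(void)
    {
        struct timespec req = {0, 10 * 1000 * 1000};    /* request 10ms */
        struct timespec before, after;

        for (int i = 0; i < 5; i++)
        {
            clock_gettime(CLOCK_MONOTONIC, &before);
            nanosleep(&req, NULL);
            clock_gettime(CLOCK_MONOTONIC, &after);

            long elapsed_us = (after.tv_sec - before.tv_sec) * 1000000L +
                (after.tv_nsec - before.tv_nsec) / 1000L;

            printf("requested 10000us, slept %ldus\n", elapsed_us);
        }
        return 0;
    }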
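
And for what it's worth, the "timers" option the standard talks about can
be sniffed like this; illustration only, not a claim about what our
configure/meson checks actually do:

    /* timers-option check (sketch): POSIX says _POSIX_TIMERS > 0 means the
     * option is supported; 0 means ask sysconf() at runtime; undefined or
     * -1 means not supported. */
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
    #if defined(_POSIX_TIMERS) && _POSIX_TIMERS > 0
        printf("timers option advertised at compile time\n");
    #else
        printf("sysconf(_SC_TIMERS) says: %ld\n", sysconf(_SC_TIMERS));
    #endif
        return 0;
    }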