As far as the top-of-stack info goes, it gets complicated. You have to hard-code
an algorithm that must be implemented in every user binary, and then it's a
pain to change. This is why in Go we've backed away from reading that info
on Mac and why we call into the VDSO on Linux instead of recreating that
code ourselves. Personally, I'm not too worried about the cost of fetching
the time. A system call is fine.

Based on this discussion, I've abandoned the idea of changing the system
calls, and I've updated Go to open /dev/bintime at startup and abandon
nsec. It now opens /dev/random at startup too. If we're opening one,
two is not a big deal. That change is pending at https://go.dev/cl/656755.
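
For anyone following along who hasn't read /dev/bintime from Go before, here is a
minimal sketch of the idea, assuming the usual layout of three big-endian 64-bit
values (wall-clock nanoseconds, ticks, fasthz). The real change in the CL above
lives inside the runtime, so the names and error handling here are illustrative
only, not the actual patch.

	package main

	import (
		"encoding/binary"
		"fmt"
		"os"
	)

	// readBintime does a single read of /dev/bintime and decodes the
	// assumed layout: three big-endian 64-bit values holding wall-clock
	// nanoseconds since the epoch, the raw tick count, and fasthz (the
	// tick frequency).
	func readBintime(f *os.File) (nsec, ticks, fasthz int64, err error) {
		var buf [24]byte
		n, err := f.Read(buf[:])
		if n < len(buf) {
			if err == nil {
				err = fmt.Errorf("short read from /dev/bintime: %d bytes", n)
			}
			return 0, 0, 0, err
		}
		nsec = int64(binary.BigEndian.Uint64(buf[0:8]))
		ticks = int64(binary.BigEndian.Uint64(buf[8:16]))
		fasthz = int64(binary.BigEndian.Uint64(buf[16:24]))
		return nsec, ticks, fasthz, nil
	}

	func main() {
		// Open once at startup and keep the fd, mirroring what the
		// runtime change does for /dev/bintime and /dev/random.
		f, err := os.Open("/dev/bintime")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		nsec, ticks, fasthz, err := readBintime(f)
		if err != nil {
			panic(err)
		}
		fmt.Println("nsec:", nsec, "ticks:", ticks, "fasthz:", fasthz)
	}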

However, a change to Plan 9 is still needed to provide monotonic time. At
first I was going to try to recreate it from the ticks and fasthz values in
/dev/bintime, but the value of fasthz can change over time as aux/timesync
deems it necessary, and if fasthz goes up, then 1e9*ticks/fasthz will go
down, making the derived time non-monotonic. It is also annoying to do that
calculation efficiently: more parameters are needed from the kernel.
Instead of exposing all those parameters, it is far easier and cleaner to
have the kernel maintain a monotonic time and simply expose that. I suggest
we add the monotonic time in nanoseconds as an extra field you can read
from /dev/time and /dev/bintime. If you ask for a big enough buffer, you
get it. If not, you don't. The diff is here:
https://github.com/rsc/plan9/commit/baf076425c.
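
To make the buffer-size convention concrete, here is a hedged sketch of how a
client could probe for the proposed field, assuming it shows up as a fourth
big-endian 64-bit value appended after /dev/bintime's existing three; that
position is my reading of the description above, not something fixed by the
text of this mail.

	package main

	import (
		"encoding/binary"
		"fmt"
		"os"
	)

	// readMonotonic issues one read with a 32-byte buffer. A kernel with
	// the proposed change (assumed layout: nsec, ticks, fasthz, monotonic
	// nsec, each a big-endian 64-bit value) fills all 32 bytes; an old
	// kernel returns only the first 24, so the caller can tell the
	// difference from the length alone, the "big enough buffer" convention.
	func readMonotonic(f *os.File) (mono int64, ok bool, err error) {
		var buf [32]byte
		n, err := f.Read(buf[:])
		switch {
		case n >= 32:
			return int64(binary.BigEndian.Uint64(buf[24:32])), true, nil
		case n >= 24:
			// Old kernel: no monotonic field. Recomputing 1e9*ticks/fasthz
			// here is not safe: with ticks=2e9 at fasthz=1e6 the formula
			// gives 2e12 ns, but if aux/timesync raises fasthz to 2e6 the
			// same ticks value maps to 1e12 ns, a backward step.
			return 0, false, nil
		default:
			if err == nil {
				err = fmt.Errorf("short read from /dev/bintime: %d bytes", n)
			}
			return 0, false, err
		}
	}

	func main() {
		f, err := os.Open("/dev/bintime")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		mono, ok, err := readMonotonic(f)
		switch {
		case err != nil:
			panic(err)
		case ok:
			fmt.Println("monotonic ns:", mono)
		default:
			fmt.Println("kernel does not expose monotonic time")
		}
	}

The ASCII /dev/time case would presumably work the same way: ask for a buffer
large enough to hold one more field and check whether it appears.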

Best,
Russ
