I know the time package includes support for using the cycle timer on the
machine (if available) to get high precision monotonic time measurements.

But...calling time.Now() appears to have a lot of overhead. Measuring the
delay between two consecutive calls gives me anywhere from 150ns to 900+ns
depending on the platform (Linux and OS X for these two examples).
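
Roughly how I'm measuring that, in case it matters (a small complete
program, nothing fancy):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Measure the gap between two back-to-back time.Now() calls,
	// averaged over a number of samples.
	const samples = 1000
	var total time.Duration
	for i := 0; i < samples; i++ {
		a := time.Now()
		b := time.Now()
		total += b.Sub(a)
	}
	fmt.Printf("average time.Now() -> time.Now() gap: %v\n", total/samples)
}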

My problem is that I'm writing an emulator for an 8-bit CPU, and for certain
kinds of emulation I want it to run at the original clock speed (so roughly
550ns per clock cycle in this case). Just taking time.Now() at the start of a
cycle, subtracting it from time.Now() at the end, and sleeping for the
remainder won't work if the overhead of the calls exceeds my cycle time, as
it does on OS X. I'm assuming the overhead of time.Sleep() itself is
negligible.
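
The naive per-cycle pacing I have in mind looks something like this (just a
sketch using only the time package; step is a stand-in for whatever executes
one emulated cycle):

// runAtOriginalSpeed paces the emulation loop at roughly one cycle
// every 550ns. The sleep only happens if the cycle finished early,
// and it ignores the cost of the time.Now() calls themselves.
func runAtOriginalSpeed(step func()) {
	const cycleTime = 550 * time.Nanosecond
	for {
		start := time.Now()
		step()
		if elapsed := time.Since(start); elapsed < cycleTime {
			time.Sleep(cycleTime - elapsed)
		}
	}
}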

I know for benchmarking we deal with this by aggregating a lot of samples
and then dividing out. Is there a way to get the timer data much more
quickly? I'm assuming that on OS X it ends up doing the gettimeofday syscall
(I recall an open bug somewhere), which is where the large jump comes from.

Or should I just measure my average latency when initializing my emulator
and use that as a baseline for determining how much to sleep? I.e.
effectively run a mini benchmark on startup to determine the local machine's
average call overhead and allow for some slop?
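
Something along these lines at startup, that is (calibrateNowOverhead is
just a name I made up for this sketch):

// calibrateNowOverhead estimates the average cost of a single
// time.Now() call on this machine, so the main loop can subtract it
// before deciding how long to sleep.
func calibrateNowOverhead() time.Duration {
	const samples = 10000
	start := time.Now()
	for i := 0; i < samples; i++ {
		_ = time.Now()
	}
	return time.Since(start) / samples
}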

James
