On Thu, Mar 1, 2018 at 11:59 PM, James Chacon <chacon.ja...@gmail.com> wrote:
>
> On Thu, Mar 1, 2018 at 11:07 PM, James Chacon <chacon.ja...@gmail.com>
> wrote:
>>
>> On Thu, Mar 1, 2018 at 10:13 AM, Ian Lance Taylor <i...@golang.org>
>> wrote:
>>> On Thu, Mar 1, 2018 at 8:56 AM, James Chacon <chacon.ja...@gmail.com>
>>> wrote:
>>> >
>>> > I know the time package includes support for using the cycle timer
>>> > on the machine (if available) to get high precision monotonic time
>>> > measurements.
>>> >
>>> > But...calling time.Now() appears to have a lot of overhead.
>>> > Measuring the delay between 2 consecutive calls gives me anywhere
>>> > from 150ns to 900+ns depending on the platform (linux and OS/X for
>>> > these 2 examples).
>>> >
>>> > My problem is I'm writing an emulator for an 8 bit cpu, and for
>>> > certain types of emulation I want it to run at the original clock
>>> > speed (so 550ns clock cycles or so in this case). Just measuring
>>> > time.Now() at the start of a cycle and then subtracting time.Now()
>>> > at the end to sleep for the remainder won't work if the overhead of
>>> > the calls exceeds my cycle time, like it does on OS/X. I'm assuming
>>> > negligible enough overhead for time.Sleep().
>>> >
>>> > I know for benchmarking we deal with this by aggregating a lot of
>>> > samples and then dividing out. Is there a way to get the timer data
>>> > much quicker? I'm assuming on OS/X it's ending up doing the syscall
>>> > for gettimeofday (I recall an open bug somewhere), which is where
>>> > the large jump comes from.
>>> >
>>> > Or should I just measure my average latency when initializing my
>>> > emulator and use that as a baseline for determining how much to
>>> > sleep? i.e. effectively a mini benchmark on startup to determine the
>>> > local machine's average run time and assume some slop?
>>>
>>> I don't think there is any way we could make time.Now run noticeably
>>> faster on Darwin. It's not doing a system call of any sort.
>>>
>>
>> I'm reading time·now in the 1.9.2 sources and it clearly has a fallback
>> path to invoking the gettimeofday call.
>>
>> I hadn't looked at 1.10 yet, so I'll update and check my results there.
>>
>>> Your best bet, if you can assume you are running on amd64, is a tiny
>>> bit of assembly code to execute the rdtsc instruction. rdtsc has its
>>> problems, but it will give you fairly accurate cycle times when it's
>>> not way, way off.
>>>
>>
>> I may go with that.
>>
>
> Wow...OK, updating to 1.10 made a *huge* difference here. On 1.9.2,
> Darwin was showing an average of 811ns for back-to-back time.Now()
> calls. On 1.10 it's 11-13ns :) I haven't tested amd64 linux yet, but
> I'll assume it's similar.
>
> At that amount I can just go back to calling time.Now() at the
> start/end and sleeping for the difference, modulo some slop.
>

Looks like I may have to go with assembly for the sleep part, or just
call time.Now() N times in a row (since it's got decent granularity now
in 1.10). The overhead for time.Sleep() (which appears to use some form
of select on darwin/linux) doesn't get anywhere near good enough
granularity: 100-200 microseconds of extra lag on a GCE instance (yes, I
know a virtual machine will be slower, but...) and also on a native
darwin setup that only shows the 11-13ns hit for time.Now(). Might be
worth a note in the time.Sleep() docs indicating that very short
durations are unlikely to work well.

James
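For reference, a minimal sketch of the "mini benchmark on startup" idea
from the original question: time a large batch of back-to-back
time.Now() calls and divide out, the same aggregate-and-divide trick
benchmarks use. The function name nowOverhead and the sample count are
illustrative, not from the thread or the time package.

package main

import (
	"fmt"
	"time"
)

// nowOverhead estimates the average cost of a single time.Now() call by
// timing a batch of back-to-back calls and dividing by the sample count.
func nowOverhead(samples int) time.Duration {
	start := time.Now()
	for i := 0; i < samples; i++ {
		_ = time.Now()
	}
	return time.Since(start) / time.Duration(samples)
}

func main() {
	// Run once at emulator startup to get a per-machine baseline.
	fmt.Println("avg time.Now() overhead:", nowOverhead(1000000))
}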
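And a sketch of the rdtsc route Ian suggests, assuming amd64 and Go's
assembler; the package name, file names, and the calibration helper are
mine, not anything from the thread. rdtsc counts TSC ticks rather than
wall time, so it needs a rough calibration against time.Now() at
startup, and it can still misbehave across cores or with frequency
scaling.

// tsc.go
// Package tsc is an illustrative sketch of reading the time-stamp counter.
package tsc

import "time"

// rdtsc returns the raw value of the CPU's time-stamp counter.
// Implemented in tsc_amd64.s; amd64 only.
func rdtsc() uint64

// TicksPerSecond roughly calibrates the TSC frequency against time.Now()
// by sampling the counter on either side of a short sleep.
func TicksPerSecond() float64 {
	start := time.Now()
	c0 := rdtsc()
	time.Sleep(50 * time.Millisecond)
	c1 := rdtsc()
	return float64(c1-c0) / time.Since(start).Seconds()
}

// tsc_amd64.s
#include "textflag.h"

// func rdtsc() uint64
TEXT ·rdtsc(SB), NOSPLIT, $0-8
	RDTSC                  // high 32 bits in DX, low 32 bits in AX
	SHLQ $32, DX
	ORQ  DX, AX
	MOVQ AX, ret+0(FP)
	RET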
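Finally, on the time.Sleep() granularity point: one workaround is to not
sleep at all for sub-microsecond pauses and simply spin on time.Now()
until the deadline, which becomes viable once the call is down to the
11-13ns range. A minimal sketch follows (spinUntil and the 550ns
constant are illustrative); the obvious trade-off is that it burns a
core while waiting.

package main

import (
	"fmt"
	"time"
)

// spinUntil busy-waits until deadline has passed. time.Sleep can overshoot
// by tens to hundreds of microseconds, so for ~550ns cycle times we spin
// on time.Now() instead, which is cheap on Go 1.10+.
func spinUntil(deadline time.Time) {
	for time.Now().Before(deadline) {
	}
}

func main() {
	const cycle = 550 * time.Nanosecond // example 8-bit CPU cycle time

	start := time.Now()
	// ... execute one emulated cycle's worth of work here ...
	spinUntil(start.Add(cycle))
	fmt.Println("cycle took:", time.Since(start))
}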