On 2010-02-09, Grant Edwards <inva...@invalid.invalid> wrote:
> On 2010-02-09, Jean-Michel Pichavant <jeanmic...@sequans.com> wrote:
>> Grant Edwards wrote:
>>> What's the correct way to measure small periods of elapsed
>>> time.  I've always used time.clock() in the past:
>>>
>>>   start = time.clock()
>>>   [stuff being timed]
>>>   stop = time.clock()
>>>
>>>   delta = stop-start
>>>
>>> However on multi-processor machines that doesn't work.
>>> Sometimes I get negative values for delta.  According to
>>> google, this is due to a bug in Windows that causes the value
>>> of time.clock() to be different depending on which core in a
>>> multi-core CPU you happen to be on.  [insert appropriate
>>> MS-bashing here]
>>>
>>> Is there another way to measure small periods of elapsed time
>>> (say in the 1-10ms range)?
>>>
>>> Is there a way to lock the python process to a single core so
>>> that time.clock() works right?
>
>> Did you try with the datetime module ?
>>
>> import datetime
>> t0 = datetime.datetime.now()
>> t1 = t0 - datetime.datetime.now()
>> t1.microseconds
>> Out[4]: 644114
>
> Doesn't work.  datetime.datetime.now has granularity of
> 15-16ms.
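For anyone who wants to check the granularity on their own box,
something along these lines shows the step size (a quick illustrative
sketch, not the exact code I used):

    import datetime

    def clock_steps(samples=20):
        # Poll datetime.datetime.now() as fast as possible and record
        # the size of each jump between distinct readings, in
        # microseconds.  On my Windows box the jumps are around 15-16ms.
        steps = []
        last = datetime.datetime.now()
        while len(steps) < samples:
            now = datetime.datetime.now()
            if now != last:
                steps.append((now - last).microseconds)
                last = now
        return steps

    print(clock_steps())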
time.time() exhibits the same behavior, so I assume that
datetime.datetime.now() ends up making the same libc/system call as
time.time().  From what I can grok of the datetime module source code,
it looks like it's calling gettimeofday().  I can't find any real
documentation on the granularity of gettimeofday() on Win32 other than
a blog post that claims it's 10ms (which doesn't agree with what my
tests show).
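As for locking the process to a single core: on Windows that looks
like it should be doable from ctypes with SetProcessAffinityMask().
Untested sketch (assumes Windows, and that core 0 is as good a choice
as any):

    import ctypes

    # Restrict the current process to CPU 0 so that time.clock()
    # (which uses QueryPerformanceCounter on Windows) always reads the
    # counter from the same core.
    kernel32 = ctypes.windll.kernel32
    handle = kernel32.GetCurrentProcess()
    if not kernel32.SetProcessAffinityMask(handle, 1):
        raise ctypes.WinError()

Whether that actually cures the negative deltas I can't say; it just
keeps all the time.clock() reads on the same core.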