On my 2.8GHz P4, Windows 2000 SP4 with Python 2.3.4 I am getting totally different results compared to Ray. Does Python 2.3.4 already use the Pentium RDTSC instruction for clock()?
Claudio

# > Claudio Grondi, 2.8GHz P4 Python 2.3.4 (2005-01-24 14:32)
# time of taking time:
#   0.000001396825574200073100
#   0.000001676190689040086400
#   0.000001396825574200074000
#   0.000001676190689040088100
#   0.000001955555803880100500
#   0.000001620317666072084300 (average)
# statistics of 1.000.000 times of taking time in a while loop:
#   0.000001396825573429794100 (min)
#   0.002370692364532356300000 (max)
#   0.000001598858514140937100 (avg)
#
# >>> Ray Schumacher, 2.4GHz P4 Python 2.3.3 (#51, Dec 18 2003, 20:22:39)
# [MSC v.1200 32 bit (Intel)] on win32
#   0.000321028401686
#   0.00030348379596
#   0.000297101358228
#   0.000295895991258
#   0.000342754564927 (average)

Here is my code:

from time import clock

# Tests show that the first call takes longer than subsequent calls,
# so it makes sense to run t = clock() once before the measured calls:
t = clock()
t0 = clock()
t1 = clock()
t2 = clock()
t3 = clock()
t4 = clock()
t5 = clock()
print 'time of taking time: '
print '  %25.24f'%((t1-t0),)
print '  %25.24f'%((t2-t1),)
print '  %25.24f'%((t3-t2),)
print '  %25.24f'%((t4-t3),)
print '  %25.24f (average)'%(((-t0+t5)/5.),)

intCounter = 1000000
fltTotTimeOfTakingTime = 0.0
fltMaxTimeOfTakingTime = 0.0
fltMinTimeOfTakingTime = 1.0
while intCounter > 0:
    t1 = clock()
    t2 = clock()
    timeDiff = t2 - t1
    if timeDiff < fltMinTimeOfTakingTime:
        fltMinTimeOfTakingTime = timeDiff
    if timeDiff > fltMaxTimeOfTakingTime:
        fltMaxTimeOfTakingTime = timeDiff
    fltTotTimeOfTakingTime += timeDiff
    intCounter -= 1
#:while
fltAvgTimeOfTakingTime = fltTotTimeOfTakingTime / 1000000.0
print 'statistics of 1.000.000 times of taking time in a while loop:'
print '  %25.24f (min)'%(fltMinTimeOfTakingTime,)
print '  %25.24f (max)'%(fltMaxTimeOfTakingTime,)
print '  %25.24f (avg)'%(fltAvgTimeOfTakingTime,)

"Ray Schumacher" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]
> I have a need for a time.clock() with >0.000016 second (16us) accuracy.
> The sleep() (on Python 2.3, Win32, at least) has a .001s limit.
>
> Are they lower/better on other's platforms?
>
> Test code, 2.4GHz P4
> Python 2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)] on win32
>
> import time
> t0 = time.clock()
> t1 = time.clock()
> t2 = time.clock()
> t3 = time.clock()
> t4 = time.clock()
> t5 = time.clock()
> print (-t0+t5)/5.
> print t1-t0
> print t2-t1
> print t3-t2
> print t4-t3
> print t5-t4
>
> >>>
> ave 0.000342754564927
> 0.000321028401686
> 0.00030348379596
> 0.000297101358228
> 0.000295895991258
>
> I had also considered forking a thread that would spin in a loop checking
> time.clock() and firing the TTL pulse after the appropriate interval, but
> the real, ultimate resolution of time.clock() appears to be ~.00035s. If I
> increase process priority to real-time, it is ~.00028s.
> The alternative appears to be more C code...
>
> Ray
> BCI/Cognitive Vision

"Paul Rubin" <http://[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]
> Ray Schumacher <[EMAIL PROTECTED]> writes:
> > I have a need for a time.clock() with >0.000016 second (16us) accuracy.
> > The sleep() (on Python 2.3, Win32, at least) has a .001s limit.
> >
> > Are they lower/better on other's platforms?
> >
> > The alternative appears to be more C code...
>
> C code is your best bet. The highest-resolution timer on x86's these
> days is the Pentium RDTSC instruction, which counts the number of CPU
> cycles since power-on. There are various C routines floating around
> that let you access that instruction.
--
http://mail.python.org/mailman/listinfo/python-list
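[A note for readers finding this thread today: `time.clock()` was deprecated and removed in Python 3.8; `time.perf_counter()` is the modern portable high-resolution timer (on Windows it is backed by QueryPerformanceCounter). A minimal sketch of the same timer-overhead measurement as Claudio's loop, rewritten for Python 3, is:]

```python
# Measure the overhead of taking a timestamp, in the spirit of the
# min/max/avg loop above, but using time.perf_counter() instead of the
# removed time.clock().
import time

def timer_overhead(n=1000000):
    """Return (min, max, avg) of n back-to-back perf_counter() deltas."""
    tot = 0.0
    lo = float('inf')
    hi = 0.0
    for _ in range(n):
        t1 = time.perf_counter()
        t2 = time.perf_counter()
        d = t2 - t1
        if d < lo:
            lo = d
        if d > hi:
            hi = d
        tot += d
    return lo, hi, tot / n

if __name__ == '__main__':
    lo, hi, avg = timer_overhead()
    print('min %.9f  max %.9f  avg %.9f' % (lo, hi, avg))
```

[On current hardware the minimum delta is typically well under the 16us Ray needed, but the occasional large maximum (scheduler preemption) is still there, so a spin-loop pulse generator in Python still can't guarantee the deadline.]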
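[Also worth noting for modern readers: before dropping to C and RDTSC, you can ask Python what the underlying clock's reported resolution is, via `time.get_clock_info()` (Python 3.3+), and check whether it already meets a 16us requirement on your platform. A sketch:]

```python
# Print the OS-reported properties of each of Python's clocks.
# On Windows, perf_counter is implemented with QueryPerformanceCounter,
# which typically resolves far below 16us on modern hardware.
import time

for name in ('time', 'monotonic', 'perf_counter', 'process_time'):
    info = time.get_clock_info(name)
    print('%-14s resolution=%.3e s  monotonic=%s  implementation=%s'
          % (name, info.resolution, info.monotonic, info.implementation))
```

[The reported resolution is the clock's granularity, not the call overhead, so it is worth checking both, as the loop above does for overhead.]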