I have two code snippets that time the execution of a function object. I expected them to give roughly the same result, but one reports a time more than an order of magnitude larger than the other.
Here are the snippets:

import time

def timer1():
    timer = time.time
    func = lambda: None
    itr = [None] * 1000000
    t0 = timer()
    for _ in itr:
        func()
    t1 = timer()
    return t1 - t0

def timer2():
    timer = time.time
    func = lambda: None
    itr = [None] * 1000000
    t = 0.0
    for _ in itr:
        t0 = timer()
        func()
        t1 = timer()
        t += t1 - t0
    return t

Here are the results:

>>> timer1()
0.54168200492858887
>>> timer2()
6.1631934642791748

Of course I expected timer2 to take longer to execute in total, because it is doing a lot more work. But it seems to me that all that extra work should not affect the time it measures, which (I imagine) should be about the same as in timer1. Possibly even less, since it isn't timing the setup of the for loop.

Any ideas what is causing the difference? I'm running Python 2.3 under Linux.

Thanks,

-- 
Steven.
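P.S. In case it is useful, here is a rough sketch of what I might try next (just my own guess that the cost of the time.time() calls themselves is what matters; timer_overhead is a name I made up). It accumulates the time between a bare pair of timer() calls, the same way timer2 brackets each call to func, but with nothing in between:

import time

def timer_overhead():
    # Time a million back-to-back pairs of time.time() calls with
    # nothing between them, mirroring how timer2 brackets each func() call.
    timer = time.time
    itr = [None] * 1000000
    t = 0.0
    for _ in itr:
        t0 = timer()
        t1 = timer()
        t += t1 - t0
    return t

If that comes out anywhere near the gap between timer2() and timer1(), I suppose the overhead of the timer calls themselves would account for most of the difference.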