Steven D'Aprano <steve+pyt...@pearwood.info> added the comment:

Running timeit with number=1 for fast-running code is not likely to be 
reliable. The problem is that modern computers are *incredibly noisy*: there 
are typically hundreds of processes running on them at any one time.

Trying to get deterministic times from something that runs in 0.3 ms or less 
is not easy. You can see for yourself that the first couple of runs are on 
the order of 2-10 times higher before settling down. And as you point out, 
there is then a further drop by a factor of 5.
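
You can see that noise for yourself with a sketch like this (the statement 
being timed is just a stand-in for whatever you are measuring):

    import timeit

    # With number=1 each sample is a single execution, so we see the
    # raw run-to-run noise instead of an average over many loops.
    samples = timeit.repeat("sum(range(1000))", number=1, repeat=10)
    for t in samples:
        print(f"{t * 1e6:.1f} us")

Typically the first few samples come out noticeably slower, and the spread 
between the fastest and slowest is large.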

I agree that a CPU cache kicking in is a likely explanation.

When I use timeit, I generally try to use a large enough number that the time 
is at least 0.1s per loop, something like the sketch below. To be perfectly 
honest, I don't know if that is actually helpful or if it is just 
superstition, but using that as a guide, I've never seen the sort of large 
drop in timing results that you are getting.
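
Timer.autorange can pick the loop count for you (it scales number up until 
one run of the loop takes at least 0.2 seconds); again, the statement here 
is only a placeholder:

    import timeit

    t = timeit.Timer("sum(range(1000))")

    # autorange() increases number until one run of the timing loop
    # takes at least 0.2 seconds, which averages away much of the noise.
    number, _ = t.autorange()

    # The minimum of several repeats is the least contaminated by
    # other processes stealing CPU time.
    best = min(t.repeat(repeat=5, number=number)) / number
    print(f"{best * 1e9:.1f} ns per iteration")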

I presume you have read the notes in the docs about the default timer?

"default_timer() measurements can be affected by other programs running on the 
same machine"

https://docs.python.org/3/library/timeit.html

There are more comments about timing in the source code, and in the 
commentary by Tim Peters in the Python Cookbook.

Another alternative is to try Victor Stinner's pyperf tool. I don't know how 
it works, though.
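
Judging from a quick look at its docs, basic usage seems to be something 
like this (untested sketch; the statement is just a placeholder):

    import pyperf

    runner = pyperf.Runner()
    # pyperf spawns multiple worker processes, calibrates the loop
    # count, and reports the mean with its standard deviation.
    runner.timeit("sum benchmark", stmt="sum(range(1000))")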

----------
nosy: +steven.daprano, tim.peters, vstinner

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue45261>
_______________________________________