STINNER Victor <victor.stin...@gmail.com> added the comment:

Hi, I recently wrote a similar function because timeit is not reliable by 
default: the results look random, and you have to run the same benchmark three 
or more times on the command line to get a stable number.

https://bitbucket.org/haypo/misc/src/tip/python/benchmark.py

By default, the benchmark takes at least 5 measures, each measure should take 
more than 100 ms, and the whole benchmark should not run longer than 1 second. 
I chose these parameters to get reliable results on microbenchmarks like 
"abc".encode("utf-8").

The calibration function also takes the precision of the timer into account: 
the user may ask for a minimum time (per measure) smaller than the timer 
precision, and the calibration tries to compensate for that. The calibration 
computes both the number of loops and the number of repetitions.
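
The idea of the loop calibration is roughly the following (again a simplified 
sketch; the helper names and the safety factor over the timer precision are 
invented for illustration):

    import time

    def timer_precision(timer=time.perf_counter):
        # Estimate the smallest time difference the timer can measure.
        best = float('inf')
        for _ in range(10):
            t0 = timer()
            t1 = timer()
            while t1 == t0:
                t1 = timer()
            best = min(best, t1 - t0)
        return best

    def calibrate_loops(func, min_time=0.100, timer=time.perf_counter):
        # Keep one measure well above the timer precision, then double
        # the number of loops until a single measure takes at least
        # min_time.
        min_time = max(min_time, timer_precision(timer) * 100)
        loops = 1
        while True:
            t0 = timer()
            for _ in range(loops):
                func()
            if timer() - t0 >= min_time:
                return loops
            loops *= 2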

Look at BenchmarkRunner.calibrate_timer() and BenchmarkRunner.run_benchmark().
https://bitbucket.org/haypo/misc/src/bfacfb9a1224/python/benchmark.py#cl-362

----------
nosy: +haypo

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue6422>
_______________________________________