Marc-Andre Lemburg added the comment:

> Marc-Andre: "Consensus then was to use the minimum as basis for benchmarking: 
> (...) There are arguments both pro and con using min or avg values."
> 
> To be honest, I expect that most developers are already aware that the
> minimum is evil, so I wouldn't have to convince you. I already posted two
> links with the rationale. Since you are not convinced yet, it seems I have
> to prepare a better rationale :-)

I'm not sure I follow. The first link clearly says "So for better or worse, 
the choice of which one is better comes down to what we think the underlying 
distribution will be like." and ends with "So personally I use the minimum 
when I benchmark."

http://blog.kevmod.com/2016/06/benchmarking-minimum-vs-average/

If we display all available numbers, people who run timeit can see where the 
timings vary and, if needed, look deeper to find the reason.
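As a minimal sketch of what that could look like (the statement being timed 
and the repeat/number counts are arbitrary choices of mine, not anything 
timeit prescribes):

    import statistics
    import timeit

    # Collect the raw per-repeat timings instead of a single summary value.
    timings = timeit.repeat("sorted(range(1000))", repeat=5, number=10_000)

    # Show everything, so variation between repeats is visible at a glance.
    print("raw:  ", ["%.6f" % t for t in timings])
    print("min:  ", min(timings))
    print("avg:  ", statistics.mean(timings))
    print("stdev:", statistics.stdev(timings))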

As I said, and as the articles above also underline: there are cases where 
min is better and others where avg is better. So in the end, having both 
numbers available gives you all the relevant information.
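To illustrate the distribution argument with synthetic numbers (this is a 
made-up bimodal timing distribution, not real measurements): the minimum only 
ever reports the fast mode, while the average also reflects the slow one.

    import random
    import statistics

    random.seed(0)

    # Synthetic timings: 80% of runs hit a fast path (~1.0), 20% a slow
    # path (~3.0), each with a little positive noise on top.
    timings = [
        (1.0 if random.random() < 0.8 else 3.0) + random.uniform(0, 0.05)
        for _ in range(1000)
    ]

    print("min:", min(timings))              # ~1.0: hides the slow mode
    print("avg:", statistics.mean(timings))  # ~1.4: reflects both modes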

I focused on average in pybench 1.0 and then switched to minimum for pybench 
2.0. Using the minimum resulted in more reproducible results, at least on the 
computers I ran pybench on, but do note that pybench 2.0 still prints the 
average values as well. The latter is mostly due to some test runs I found 
where (probably because the CPU timers were not working correctly) the min 
value sometimes dropped to very low values which did not really make sense 
compared to the average values.
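A sanity check along those lines could look like this (the function name and 
the 50% threshold are my own invention, not pybench's actual code): flag runs 
where the minimum falls implausibly far below the average, which usually 
points at a timer problem rather than a genuinely faster run.

    import statistics

    def check_timings(timings, max_ratio=0.5):
        # Warn when the fastest run is suspiciously far below the mean;
        # a min far under the average often indicates a broken or too
        # coarse timer rather than a real speedup. The 0.5 threshold is
        # an assumption, tune it for your setup.
        lo = min(timings)
        avg = statistics.mean(timings)
        if lo < avg * max_ratio:
            print("warning: min %.6f is implausibly low vs avg %.6f"
                  % (lo, avg))
        return lo, avg

    # Example: one bogus near-zero timing among otherwise stable runs.
    check_timings([0.0001, 1.02, 1.03, 1.01, 1.05])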
