STINNER Victor added the comment:

Maciej Fijalkowski also sent me the following article a few months ago; it explains, indirectly, why using the minimum for benchmarks is not reliable:
"Virtual Machine Warmup Blows Hot and Cold" http://arxiv.org/pdf/1602.00602.pdf Even if the article is more focused on JIT compilers, it shows that benchmarks are not straightforward but always full of bad surprises. A benchmark doesn't have a single value but a *distribution*. The best question is how to summarize the full distribution without loosing too much information. In the perf module I decided to not take a decision: a JSON file stores *all* data :-D But by default, perf displays mean +- std dev. ---------- _______________________________________ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue28240> _______________________________________ _______________________________________________ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com