Elliot Gorokhovsky added the comment:

On Mon, Nov 14, 2016 at 10:32 AM STINNER Victor <rep...@bugs.python.org> wrote:
> You can use perf timeit --compare-to to check if the result is significant
> or not, and it displays you the "N.NNx faster" or "N.NNx slower" if it's
> significant.

Will do -- I'm writing this up as a paper since this is my science fair project, so I'll redo the measurements that way and upload a pdf here.

> About benchmarks, I also would like to see a benchmark on the bad case,
> when specialization is not used. And not only on an empty list :-) For
> example, sort 1000 objects which implement compare operators and/or a sort
> function.

The worst case is the third benchmark from the top: a list of floats with a single, sad, solitary long at the end. That disables the optimization because keys_are_all_same_type gets set to 0 when assign_compare_function finds the long, whose type doesn't match key_type == &PyFloat_Type. That benchmark is the *absolute* worst case for two reasons:

1. Float compares are really cheap, so the ratio isn't diluted by the common term "time spent actually comparing floats" (the total time being "time spent on overhead + time spent actually comparing floats").

2. The list is of size 10. I guess I could've used size 3 or 4, but it shouldn't be too far off; smaller lists give worse ratios because the overhead is O(n) while sorting is O(n log n).

So, again, the absolute worst possible case is the third benchmark, which suffers a 10% slowdown. That's certainly a reasonable price to pay, considering how rare that case is in practice and the 40-75% gains we get on the common cases.
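For reference, here is a minimal, self-contained sketch of that worst case (the list construction, sizes, and the pure-Python helper below are illustrative assumptions on my part; the real check happens in C inside assign_compare_function, not in Python):

    import random
    import timeit

    def keys_are_all_same_type(keys):
        # Pure-Python model of the C pre-scan: specialization is only
        # safe when every key has exactly the same concrete type.
        key_type = type(keys[0])
        return all(type(k) is key_type for k in keys)

    random.seed(0)
    floats = [random.random() for _ in range(10)]  # homogeneous: fast path
    mixed = floats[:-1] + [1 << 70]  # one long at the end: no specialization

    assert keys_are_all_same_type(floats)
    assert not keys_are_all_same_type(mixed)

    for name, data in [("floats", floats), ("mixed", mixed)]:
        t = timeit.timeit("sorted(data)", globals={"data": data},
                          number=100_000)
        print(name, t)

As Victor suggests, the numbers for the paper will come from something like perf timeit --compare-to run against an unpatched interpreter, so the tool itself decides whether the difference is significant.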