On 22/02/2018 10:59, Steven D'Aprano wrote:
https://www.ibm.com/developerworks/community/blogs/jfp/entry/Python_Meets_Julia_Micro_Performance?lang=en

While it's an interesting article on speed-up techniques, it seems to miss the point of the benchmarks.

On the fib(20) test, it suggests using this to get a 30,000 times speed-up:

    from functools import lru_cache as cache

    @cache(maxsize=None)        # memoise: each fib value is computed only once
    def fib_cache(n):
        if n < 2:
            return n
        return fib_cache(n - 1) + fib_cache(n - 2)

The idea of the Fibonacci benchmark is to test how effectively an implementation manages large numbers of recursive function calls. Computed naively, fib(36) involves 48,315,633 calls.

This memoised version executes the function body only 37 times, giving a misleading impression of the implementation's function-call performance.
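
For anyone who wants to check those figures, here's a quick sketch of my own (not from the article) that counts actual executions of the function body, naive versus cached:

    from functools import lru_cache

    calls = 0

    def fib(n):
        global calls
        calls += 1                  # count every body execution
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)

    cached_calls = 0

    @lru_cache(maxsize=None)
    def fib_cache(n):
        global cached_calls
        cached_calls += 1           # only runs on a cache miss
        if n < 2:
            return n
        return fib_cache(n - 1) + fib_cache(n - 2)

    fib(20)
    print(calls)                    # 21891; the naive version makes
                                    # 2*fib(n+1)-1 calls, which for n=36
                                    # works out at 48,315,633

    fib_cache(36)
    print(cached_calls)             # 37: each value 0..36 computed once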

Anyway, I got a 6x speed-up using PyPy without changing anything, although I doubt it's still executing actual byte-code, if /that/ was the point of the test.

(The article then goes on to suggest using 'numba' and its JIT compiler, applied to an /iterative/ version of fib(). Way to miss the point.

It might be a technique to bear in mind, but it is nonsensical to say this gives a 17,000 times speed-up over the original code.
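
For reference, this is roughly the shape of code it means; the exact function here is my own reconstruction, not copied from the article:

    from numba import jit

    @jit(nopython=True)             # compile to machine code, bypassing the interpreter
    def fib_iter(n):
        a, b = 0, 1
        for _ in range(n):          # iterative, so no recursion left to measure
            a, b = b, a + b
        return a

    print(fib_iter(36))             # 14930352

It's fast, certainly, but it no longer exercises recursive function calls at all.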

Here's another speed-up I found myself, although it was only 50 times faster, not 17,000: just write the code in C, and call it via os.system("fib.exe"). But you /do/ need to write it in a different language.)
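
Something like this, assuming 'fib.exe' is the compiled C program and prints its own result:

    import os
    import time

    # Python is just the launcher here; fib.exe (a hypothetical compiled
    # C program) is assumed to compute and print fib(36) itself
    t = time.perf_counter()
    os.system("fib.exe")
    print("took", time.perf_counter() - t, "seconds")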

--
bartc