On Thu, Feb 22, 2018 at 11:03 PM, bartc <b...@freeuk.com> wrote:
> On 22/02/2018 10:59, Steven D'Aprano wrote:
>>
>> https://www.ibm.com/developerworks/community/blogs/jfp/entry/Python_Meets_Julia_Micro_Performance?lang=en
>
> While an interesting article on speed-up techniques, that seems to miss
> the point of benchmarks.
>
> On the fib(20) test, it suggests using this to get a 30,000 times
> speed-up:
>
>     from functools import lru_cache as cache
>
>     @cache(maxsize=None)
>     def fib_cache(n):
>         if n < 2:
>             return n
>         return fib_cache(n-1) + fib_cache(n-2)
>
> The idea of the Fibonacci benchmark is to test how effectively an
> implementation manages large numbers of recursive function calls.
> fib(36) would normally involve 48,315,633 calls.
>
> This version does only 37, giving a misleading impression.
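For what it's worth, both call counts quoted above are easy to check directly. The sketch below adds a `calls` counter to each version (the counter is my addition for illustration; it is not in the article):

```python
# Sketch: verifying the call counts discussed in the quoted post.
# The `calls` counter is added for illustration only.
from functools import lru_cache

calls = 0

def fib(n):
    global calls
    calls += 1
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

fib(20)
print(calls)   # 21891 calls for the naive fib(20)

# The naive call count obeys calls(n) = 2*F(n+1) - 1, which for n = 36
# gives 2*24157817 - 1 = 48315633, the figure quoted above.

calls = 0

@lru_cache(maxsize=None)
def fib_cache(n):
    global calls
    calls += 1
    if n < 2:
        return n
    return fib_cache(n - 1) + fib_cache(n - 2)

fib_cache(36)
print(calls)   # 37: the body runs once per distinct n in 0..36;
               # cache hits are answered by the wrapper and never
               # reach the function body
```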
Not overly misleading; the point of it is to show how trivially easy it is to memoize a function in Python.

For a fair comparison, I'd like to see the equivalent Julia code: the function, unchanged, with something around the outside of it to manage caching and memoization. Can that be done with a couple of trivial lines of code using only the standard library?

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list