On Tue, 10 Feb 2009 06:45:37 -0200, Steven D'Aprano <ste...@remove.this.cybersource.com.au> wrote:
On Tue, 10 Feb 2009 18:18:23 +1000, Gerhard Weis wrote:

We tried this code on a 4 core machine using only 1 core and all 4 cores.
1 core wallclock: ~10s
4 core wallclock: ~3s

Three seconds to calculate fib(40) is terrible. Well, it's a great saving
over ten seconds, but it's still horribly, horribly slow.

Check this out, on a two-core 2.2 GHz system:


>>> import timeit
>>> timeit.Timer('fib(40)', 'from __main__ import fib').timeit(1)
7.2956085205078125e-05

That's five orders of magnitude faster. How did I do it? I used a better
algorithm.


def fib(n, a=1, b=1):
    if n==0:
        return a
    elif n==1:
        return b
    return fib(n-1, b, a+b)

I agree with your comment, but this is not the right way to write it in Python. This style is fine in a language with tail-call optimization, but Python has none, so each step costs a stack frame and fib(1000) raises `RuntimeError: maximum recursion depth exceeded`.
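For instance, a quick sketch of the failure mode (using the recursive version quoted above; on Python 3.5+ the error surfaces as `RecursionError`, a `RuntimeError` subclass, rather than a plain `RuntimeError`):

```python
import sys

def fib(n, a=1, b=1):
    # the tail-recursive version from above: one stack frame per step
    if n == 0:
        return a
    elif n == 1:
        return b
    return fib(n - 1, b, a + b)

try:
    # ask for more frames than CPython's default recursion limit allows
    fib(sys.getrecursionlimit() + 50)
except RecursionError as e:
    print("recursion limit hit:", e)
```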

The same thing after removing recursion can compute the 2089 digits of fib(10000) without problems:

def fib(n):
    a = b = 1
    while n:
        n, a, b = n - 1, b, a + b
    return a
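Memoization is another way to rescue a naive recursive definition (a sketch, assuming Python 3.2+ for `functools.lru_cache`; `fib_memo` is a hypothetical name, using the same shifted indexing as the code above, where fib(0) == fib(1) == 1). Note it fixes the exponential running time but not the stack depth, so very large n will still hit the recursion limit:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    # naive O(2**n) recursion made O(n) by caching: each value computed once
    if n < 2:
        return 1
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(40))
```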

Less than a millisecond, versus millions of years for the OP's algorithm.
I know which I would choose. Faster hardware can only go so far in hiding
the effect of poor algorithms.
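Taking the same point one step further, a fast-doubling formulation of the same recurrence needs only O(log n) multiplications (a sketch; note it uses the conventional indexing F(0) = 0, F(1) = 1, one off from the versions above):

```python
def fib_doubling(n):
    # fast doubling: F(2k)   = F(k) * (2*F(k+1) - F(k))
    #                F(2k+1) = F(k)**2 + F(k+1)**2
    def _fib(k):
        # returns the pair (F(k), F(k+1))
        if k == 0:
            return (0, 1)
        a, b = _fib(k // 2)
        c = a * (2 * b - a)      # F(2*(k//2))
        d = a * a + b * b        # F(2*(k//2) + 1)
        if k % 2 == 0:
            return (c, d)
        return (d, c + d)
    return _fib(n)[0]
```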

Completely agree!

--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list
