On Wed, 14 Sep 2005 23:44:18 -0400, Terry Reedy wrote:

>> Yes. There are lots of algorithms that could be done, and they all have
>> their pros and cons. Binet's formula, for example, is mathematically
>> correct, but for large enough n, the "mere implementation detail" that
>> floats have a finite precision will cause that algorithm to give
>> incorrect answers. For "large enough", on my system I mean n=71.
>
> I am assuming you mean the Fib formula? No matter. With finite precision
> ints, typically 32 bits, fib(n) by addition overflows sooner than that.
> If you allow extended precision ints, then allow extended precision
> floats also.
Except for the mere implementation detail that Python doesn't have
extended precision floats, but it does have transparent conversion
between 32-bit ints and longints. So in the real world of Python
programming, the mathematically elegant, fast, efficient algorithm
using Binet's formula is only usable for n up to about 70. (A short
demonstration appears at the end of this post.)

That illustrates my original point: a programming strategy that seems
like a good idea in the ivory tower of academia can be a bad idea in
real-world program development.

Ivory towers are important. If not for academics and their clever
ideas, we'd still be programming in assembly language. But developers
should be aware of the dangers of adopting ideas from the ivory tower
too quickly: an algorithm that runs optimally under one compiler may
run pessimally in a language that doesn't have all those optimizations
built into it. For instance, naively translating Lisp-style
(head, tail) lists into Python data structures gives you something
like this:

    [0, [1, [2, [3, [4]]]]]

Operating on such a list in Python is not as efficient as it is in
Lisp, and your code will generally run significantly slower than if
you had written it to operate on the Python-style list [0, 1, 2, 3, 4]
(again, see the sketches at the end of this post). This is obvious,
but not obvious enough -- programmers frequently "code in X", blindly
translating code or algorithms that worked well in some other language
into whichever language they are using at the time.

The original advice to always avoid iteration in favour of recursion
falls into the same category. Not all languages have the optimizations
to make that work, and more importantly, it is often easier to come up
with an inefficient recursive algorithm than an inefficient iterative
one -- the naive recursive Fibonacci sketched at the end of this post
is the classic example.

I have learnt a lot about the state of the art of functional
programming from this discussion; thank you to all the folks who took
the time to respond. Will that make me a better Python programmer?
Doubtful, at least not while Python is mostly an imperative language.

> To compare iteration versus recursion, one should use the same
> algorithm or same set of algorithms, with an iterative and recursive
> version of each. To compare algorithms, one should use the same
> implementation form or forms on each. Varying one factor at a time,
> when possible, is basic good science.

That is good science, but the majority of programmers are not
scientists. Most professional, non-academic programmers don't have the
foggiest clue what "tail-recursion" means, whether that implies that
"head-recursion" exists or not, or how to tell them apart (there is a
short illustration at the end of this post), let alone how to convert
an iterative algorithm into a recursive one or vice versa. That's not
meant as a slight against the programmers -- many of them are
extremely competent, intelligent folks -- but for them programming is
a combination of art and engineering, and not a science.

That is where advice like "use recursion, not iteration" falls down.
Most of the common languages in use outside of academia (C/C++, Perl,
Python, PHP, Javascript, Java) aren't functional languages that
optimize your recursion, and most programmers don't have the skills to
do it themselves. It is a skill, and not one that is taught in
terribly many "Python in a Nutshell" style books.

-- 
Steven.
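
For the curious, here is the Binet demonstration promised above. It is
a minimal sketch (fib_binet and fib_add are my own names, and the
exact crossover point depends on your platform's floats), comparing
the closed-form formula against exact integer addition:

    import math

    def fib_binet(n):
        # Binet's closed form, evaluated in finite-precision floats.
        sqrt5 = math.sqrt(5)
        phi = (1 + sqrt5) / 2
        return round(phi ** n / sqrt5)

    def fib_add(n):
        # Plain addition; Python promotes ints to longints
        # transparently, so this stays exact for any n.
        a, b = 0, 1
        for i in range(n):
            a, b = b, a + b
        return a

    for n in range(100):
        if fib_binet(n) != fib_add(n):
            print(n)    # first n where the float formula goes wrong
            break

On a typical machine with IEEE doubles that loop prints a number right
around 70 -- on my system, 71 -- and the float result is simply wrong
from there on, even though the formula itself is mathematically exact.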
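
The nested-list comparison, likewise a sketch (nested_sum is my own
helper), operating on exactly the structure shown above:

    def nested_sum(pair):
        # Walk the Lisp-style (head, tail) structure recursively:
        # one Python-level function call per element.
        if len(pair) == 1:
            return pair[0]
        head, tail = pair
        return head + nested_sum(tail)

    nested = [0, [1, [2, [3, [4]]]]]
    flat = [0, 1, 2, 3, 4]

    print(nested_sum(nested))    # 10
    print(sum(flat))             # 10, done in a single C-level loop

In Lisp, cdr-ing down a list like that is what the language is built
for; in Python every step costs a function call and a sequence unpack,
while sum() over the flat list does all the work in one C-level loop.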
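
And the classic demonstration of how easy it is to come up with an
inefficient recursive algorithm without noticing (both functions are
mine, and this is an illustration rather than a benchmark):

    def fib_rec(n):
        # Elegant, and exponential: fib_rec(n - 2) is recomputed
        # inside fib_rec(n - 1), and so on all the way down.
        if n < 2:
            return n
        return fib_rec(n - 1) + fib_rec(n - 2)

    def fib_iter(n):
        # The humble loop does the same job in linear time.
        a, b = 0, 1
        for i in range(n):
            a, b = b, a + b
        return a

fib_iter(35) returns instantly; fib_rec(35) makes close to thirty
million function calls to reach the same answer, and no mainstream
Python implementation will rescue you from that.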
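
Finally, a minimal before-and-after for "tail-recursion", using the
standard accumulator trick (the names and the accumulator argument acc
are my own):

    def fact(n):
        # NOT tail-recursive: the multiplication happens after the
        # recursive call returns, so each frame must stay alive.
        if n <= 1:
            return 1
        return n * fact(n - 1)

    def fact_tail(n, acc=1):
        # Tail-recursive: the recursive call is the last thing done,
        # so a language with tail-call optimization can discard or
        # reuse the current frame.
        if n <= 1:
            return acc
        return fact_tail(n - 1, n * acc)

The catch -- and the point of this whole post -- is that CPython
performs no such optimization: both versions grow the call stack one
frame per step, and both blow past sys.getrecursionlimit() at
essentially the same depth.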