On Monday, February 26, 2018 at 4:39:22 AM UTC-6, Steven D'Aprano wrote:

> On Sun, 25 Feb 2018 19:26:12 -0800, Rick Johnson wrote:
>
> > On Friday, February 23, 2018 at 8:48:55 PM UTC-6, Steven D'Aprano
> > wrote:
> > [...]
> > > Take the Fibonacci double-recursion benchmark. Okay, it tests how
> > > well your language does at making millions of function calls. Why?
> >
> > Because making "millions of function calls" is what software *DOES*!
>
> You're right -- I worded that badly, and ended up saying something I
> didn't really mean. Mea culpa. Of course over the course of the entire
> program run, most non-trivial programs will end up having millions (if
> not billions) of function calls *in total*. But they're not all
> happening *together*.
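(For anyone who hasn't seen it, the benchmark being argued about here
is presumably the classic naive double-recursive Fibonacci -- a rough
sketch, not necessarily the exact code Steven benchmarked:)

## Python 3 (sketch of the presumed benchmark) ##
def fib(n):
    # Naive double recursion: every call with n >= 2 spawns two more
    # calls, so fib(30) alone makes roughly 2.7 million function calls.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(30))   # 832040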
So what? Latency is latency. And whether it occurs over the course of
one heavily recursive algorithm that constitutes the depth and breadth
of an entire program (a la fib()), or it is the incremental cumulative
consequence of the entire program execution, the fact remains that
function call overhead accounts for a significant portion of the
latency inherent in some trivial, and *ALL* non-trivial, modern
software.

> What I tried to get across is, how often do we use millions of
> function calls to get one value?

All the time! For, in the abstract, the fundamental function of
software is to extract a "value" from one or more inputs -- is it not?
So, whether that "value" be:

  (1) a document in an office productivity suite, or
  (2) a flight path for a Mars lander, a jumbo jet, or a child's
      quad-copter, or
  (3) a modified image in an image processing program, or
  (4) a design drawing or technical schematic for a Mars rover bot, or
  (5) an HD quality streaming porno flick, or yes,
  (6) even a recursive function which unwittingly underscores some of
      the less attractive aspects of the Python language (aka: function
      call overhead!)

-- all of these examples, and countless others, extract a value from
inputs. Of course, much like physical traits I suppose, some "values"
are more desirable than others.

> By default, Python will only allow one thousand function calls in a
> stack before raising RecursionError.

By _default_, yes.

> py> def bad():
> ...     return bad()
> ...
> py> bad()
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "<stdin>", line 2, in bad
>   [ previous line repeats 998 times ]
> RecursionError: maximum recursion depth exceeded
>
> There's nothing unique about recursion that will trigger this error.
> Any call stack of more than 1000 function calls will do it. So forget
> about code getting a million functions deep. The average is probably
> more like a dozen, perhaps a hundred tops.

I'm very glad you brought this up, because it proves my main point that
Python's maximum recursion limit is one feature they got _right_. If
you don't like the default recursion limit of 1000, or if 1000
recursions is breaking a valid recursive algorithm, you can set the
recursion limit to whatever you like...

## Python 2 ##
>>> import sys
>>> sys.setrecursionlimit
<built-in function setrecursionlimit>
>>> sys.setrecursionlimit(10)
>>> count = 1
>>> def bad():
...     global count
...     print count
...     count += 1
...     bad()
...
>>> bad()
1
2
3
4
5
6
7
8
9
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 5, in bad
  File "<stdin>", line 5, in bad
  File "<stdin>", line 5, in bad
  File "<stdin>", line 5, in bad
  File "<stdin>", line 5, in bad
  File "<stdin>", line 5, in bad
  File "<stdin>", line 5, in bad
  File "<stdin>", line 5, in bad
  File "<stdin>", line 5, in bad
RuntimeError: maximum recursion depth exceeded

It is however unfortunate that we get slammed with N recursive
exception messages :-(, but hey, at least we can modify the maximum
recursion limit! It's a step in the right direction. :-)

> [...]
> > > For most application code, executing the function is far more
> > > costly than the overhead of calling it,
> >
> > What? Your functions are only called _once_ during the lifetime of
> > your code execution? Enough with the leg pulling.
>
> Let me explain what I intended to say. Here is a really poor estimate
> of the overhead of calling a function on my computer, measured in
> iPython.
>
> [...snip valid examples...]
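(The snipped measurements were presumably along these lines -- a rough
timeit sketch of raw call overhead, not Steven's actual iPython session
or numbers:)

## Python 3 (illustrative sketch, not the snipped session) ##
import timeit

def f():
    # An empty function, so timing it mostly measures the cost of the
    # call itself rather than any useful work.
    pass

N = 1_000_000
call_cost = timeit.timeit(f, number=N)       # N calls of an empty function
loop_cost = timeit.timeit("pass", number=N)  # the bare timing loop, for comparison

print("approx. overhead per call: %.0f ns"
      % ((call_cost - loop_cost) / N * 1e9))

On a typical CPython build the per-call figure lands somewhere in the
tens to low hundreds of nanoseconds, which is the kind of number the
rest of this exchange is arguing over.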
> Now I know that this is not always the case. But I maintain that for
> *most* (but not all) practical, real-world code that actually does
> something useful, the overhead of calling the functions is *usually*
> (but not always) low compared to the cost of actually doing whatever
> it is that the functions do.

True. But again you failed to address my main point, that "lazy
features (when feasible) must be offered _optionally_". Yes, function
call overhead may be tiny in relation to the code encapsulated within
(at least for non-trivial code), but the fact remains that function
calls are inherent in modern software, and as a result there is a
compelling argument to be made that function calls should therefore be
as efficient as possible.

In reality, one cannot write modern software without using lots of
functions and calling (at least) some subset of those functions *MANY*
times during the execution of the program. And we're not just talking
about functions whose parent node is module scope. Nope. Methods are
functions too, albeit a special class of bound functions. And the
Python std-lib is chock-full of functions, many of which are utilized
every second of every day to do lots of heavy lifting.

So in conclusion, if anyone out there is not writing tons of functions
to solve their problems (or at least calling tons of library functions
to solve their problems), well then, their software is either really
*REALLY* simplistic, or they are doing something really *REALLY* wrong.
And from that conclusion we now come face to face with the reality that
function call efficiency may be the most important optimization that
Python could ever make.
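(A rough sketch of the cumulative cost being argued about here: a
hypothetical tiny helper called in a tight loop versus the same work
written inline. The names and loop sizes are illustrative only, not
anything taken from the thread:)

## Python 3 (illustrative sketch) ##
import timeit

def square(x):
    # Hypothetical tiny helper: the useful "work" is one multiplication.
    return x * x

def with_calls(n):
    total = 0
    for i in range(n):
        total += square(i)    # pays the call overhead n times
    return total

def inlined(n):
    total = 0
    for i in range(n):
        total += i * i        # same work, no call overhead
    return total

N = 1_000_000
print("with calls:", timeit.timeit(lambda: with_calls(N), number=10))
print("inlined:   ", timeit.timeit(lambda: inlined(N), number=10))

On CPython the first timing is typically a good deal larger than the
second, which is the cumulative per-call overhead being complained
about above; whether that overhead matters in practice depends, as
Steven argues, on how much real work the function body actually does.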