On 24/02/2018 02:46, Steven D'Aprano wrote:

> Take the Fibonacci double-recursion benchmark. Okay, it tests how well
> your language does at making millions of function calls. Why? How often
> do you make millions of function calls?

Very often, unless you are writing code 1970s-style, with everything in one big main function.
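For reference, the benchmark in question is the naive doubly-recursive Fibonacci. A minimal Python sketch (the counter is only there to show how quickly the call count grows; fib(30) alone makes roughly 2.7 million calls):

  calls = 0

  def fib(n):
      # Naive doubly-recursive Fibonacci; counts every call made.
      global calls
      calls += 1
      if n < 2:
          return n
      return fib(n - 1) + fib(n - 2)

  print(fib(30), calls)    # 832040 2692537 -- roughly 2.7 million calls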

I've done some tests with my interpreter [sorry, Ned], on one real task:

  Total number of byte-code calls: 3.2 million
  Total number of real x64 calls: 520 million

On this specific benchmark: 48 million and 580 million respectively.

Changing the way those x64 calls were made (using a different calling convention) made some byte-code programs take up to 50% longer to execute. [In this interpreter, each byte-code, no matter what it is, is dispatched via a function call.]
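To make "dispatched via a function call" concrete, here is a rough Python sketch of that kind of dispatch loop (only an illustration of the structure; the interpreter in question is not written in Python, and the opcode names here are invented). Every byte-code executed costs one indirect call, which is why the calling convention matters so much:

  # Sketch: every opcode is handled by calling a function, so one call
  # is made per byte-code executed.  Opcodes and names are made up.
  def op_push(vm, operand):
      vm.stack.append(operand)

  def op_add(vm, operand):
      b = vm.stack.pop()
      a = vm.stack.pop()
      vm.stack.append(a + b)

  HANDLERS = {'PUSH': op_push, 'ADD': op_add}

  class VM:
      def __init__(self):
          self.stack = []
      def run(self, code):
          for opcode, operand in code:
              HANDLERS[opcode](self, operand)    # one call per byte-code

  vm = VM()
  vm.run([('PUSH', 2), ('PUSH', 3), ('ADD', None)])
  print(vm.stack)    # [5]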

> For most application code, executing the function is far more costly
> than the overhead of calling it, and the call overhead is dwarfed by
> the rest of the application.

Any actual figures?

In the case of interpreters, you want to optimise each byte-code, and one way of doing that is to write a small program which features that byte-code heavily. And then you start tweaking.
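For CPython, such a small program might look like the sketch below (an assumed example, not anything from the thread): a loop dominated by integer addition, with dis to confirm which byte-codes it exercises and timeit to measure how long hammering them takes.

  import dis
  import timeit

  def add_loop(n=1_000_000):
      # Deliberately dominated by integer addition, so the add byte-code
      # (plus the loop machinery around it) is what gets measured.
      total = 0
      for i in range(n):
          total = total + i
      return total

  dis.dis(add_loop)                          # which byte-codes are being hit
  print(timeit.timeit(add_loop, number=10))  # time spent executing them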

It is true that when working with heavy-duty data, or offloading work to external, non-byte-code functions, the byte-code execution overheads are small. But Python's weakness is when it /is/ executing actual algorithms using actual byte-code.

And actually, the performance of CPython does seem to have improved tremendously over the years. So some people have their eye on the ball. Clearly not you.

> If you have a language with tail recursion elimination, you can bet
> that its benchmarks will include examples of tail recursion and tail
> recursion will be a favoured idiom in that language. If it doesn't, it
> won't.

Benchmarks need to be honest. But the doubly-recursive Fibonacci, I think, can't use that optimisation, since neither recursive call is in tail position (although gcc seems to have found another way of not doing that much work).
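For what it's worth, Fibonacci can be rewritten with accumulators so that the recursive call is in tail position; a language with tail call elimination runs that form in constant stack space, while CPython, which performs no such elimination, will hit its recursion limit for large n. A quick sketch:

  def fib_tail(n, a=0, b=1):
      # Accumulator form: the recursive call is the last thing done,
      # so a language with tail call elimination can turn it into a loop.
      if n == 0:
          return a
      return fib_tail(n - 1, b, a + b)    # tail call

  print(fib_tail(30))      # 832040
  # fib_tail(100_000) raises RecursionError in CPython: no elimination.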

--
bartc
--
https://mail.python.org/mailman/listinfo/python-list
