On 26/02/2018 19:50, Chris Angelico wrote:
On Tue, Feb 27, 2018 at 6:37 AM, Rick Johnson
So what? Latency is latency. And whether it occurs over the
course of one heavily recursive algorithm that constitutes
the depth and breadth of an entire program (a la fib()), or
it is the incremental cumulative consequence of the entire
program execution, the fact remains that function call
overhead contributes to a significant portion of the latency
inherent in some trivial, and *ALL* non-trivial, modern
software.
Take the last bit of Python I posted, which was that RNG test.
It uses this function:
def i64(x): return x & 0xFFFFFFFFFFFFFFFF
This is a small function, but it isn't the toy that Ned Batchelder suggested
it was. And the test program wasn't recursive at all.
Running the program with N=10_000_000 took 46 seconds.
Replacing the i64() call with '&m' (where m is a global set to
0xFFFFFFFFFFFFFFFF), it took 38 seconds.
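The gap can be reproduced with a small stand-alone timing loop. This is a
sketch, not the original RNG test: the loop body (a 64-bit linear
congruential step using Knuth's MMIX multiplier) and the iteration count are
my assumptions, chosen only to give i64() something realistic to do.

```python
import time

MASK = 0xFFFFFFFFFFFFFFFF

def i64(x):
    # Truncate to unsigned 64 bits, as C-style integer arithmetic would.
    return x & MASK

def run_with_function(n):
    # Each step pays the cost of one CPython function call to i64().
    x = 12345
    for _ in range(n):
        x = i64(x * 6364136223846793005 + 1442695040888963407)
    return x

def run_inline(n):
    # Same arithmetic, but the mask is applied directly: no call overhead.
    x = 12345
    for _ in range(n):
        x = (x * 6364136223846793005 + 1442695040888963407) & MASK
    return x

N = 1_000_000
t0 = time.perf_counter(); r1 = run_with_function(N); t1 = time.perf_counter()
r2 = run_inline(N); t2 = time.perf_counter()
assert r1 == r2  # identical results either way
print(f"function: {t1 - t0:.3f}s  inline: {t2 - t1:.3f}s")
```

On CPython the inline version is consistently faster; the exact ratio
depends on how much real work the loop body does relative to the call.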
So wrapping that mask in a function, which is convenient for coding,
maintenance and readability, cost about 20% in runtime.
Going the other way, having i64() delegate to another function i64a() which
does the actual work, as is common with wrapper functions, took 60 seconds:
a further 30% slowdown, even though most of the work is numeric shifts and
adds.
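That extra layer can be sketched like this. The names follow the post; the
body of i64a() is assumed to be the same mask, so the wrapper adds one more
CPython frame per call without changing any result:

```python
MASK = 0xFFFFFFFFFFFFFFFF

def i64a(x):
    # The function that does the actual work.
    return x & MASK

def i64(x):
    # Thin wrapper: same result, but one extra frame pushed and
    # popped on every call.
    return i64a(x)
```

Calling i64() in the hot loop instead of i64a() directly is what turns 46
seconds into 60 in the figures above.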
So function calls do seem to be expensive operations in CPython, and any
benchmarks which highlight that fact should be welcomed.
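The call overhead can also be isolated with timeit from the standard
library. This micro-benchmark is my own illustration, not part of the
original test; the statement operates on a name (x) rather than a literal
so the peephole optimizer cannot constant-fold the masked expression away:

```python
import timeit

setup = """
x = 1234567890123456789
def f(x):
    return x & 0xFFFFFFFFFFFFFFFF
"""

# Same mask, once through a function call and once as a bare expression.
t_call = timeit.timeit("f(x)", setup=setup, number=1_000_000)
t_expr = timeit.timeit("x & 0xFFFFFFFFFFFFFFFF", setup=setup,
                       number=1_000_000)
print(f"call: {t_call:.3f}s  expr: {t_expr:.3f}s")
```

The difference between the two timings is, to a first approximation, the
per-call cost being discussed.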
(Note that none of this seems to affect PyPy, which ran at the same
speed with i64(), without it, or with both i64() and i64a().)
--
bartc
--
https://mail.python.org/mailman/listinfo/python-list