Why is calling a function faster than bypassing the function object and evaluating the code object itself? And not by a little, but by a lot?
Here I have a file, eval_test.py:

# === cut ===
from timeit import Timer

def func():
    a = 2
    b = 3
    c = 4
    return (a+b)*(a-b)/(a*c + b*c)

code = func.__code__
assert func() == eval(code)

t1 = Timer("eval; func()", setup="from __main__ import func")
t2 = Timer("eval(code)", setup="from __main__ import code")

# Best of 10 trials.
print(min(t1.repeat(repeat=10)))
print(min(t2.repeat(repeat=10)))
# === cut ===

Note that both tests include a name lookup for eval, so that as far as possible I am comparing the two pieces of code on an equal footing. Here are the results I get:

[steve@ando ~]$ python2.7 eval_test.py
0.804041147232
1.74012994766
[steve@ando ~]$ python3.3 eval_test.py
0.7233301624655724
1.7154695875942707

Directly eval'ing the code object is easily more than twice as expensive as calling the function, yet calling the function has to eval that same code object anyway. That suggests the overhead of calling the function is negative, which is clearly ludicrous.

I knew that calling eval() on a string was slow, since it has to parse and compile the source code into byte code before it can evaluate it, but here the code object is already compiled and shouldn't carry that overhead. So what's going on?

--
Steven
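P.S. As a sanity check, here's a minimal standalone sketch (separate from eval_test.py) confirming that both tests execute the same pre-compiled bytecode. dis.dis() accepts either a function or a raw code object, and in this case they should print identical disassemblies, since func.__code__ is the very object being passed to eval():

# === cut ===
import dis

def func():
    a = 2
    b = 3
    c = 4
    return (a+b)*(a-b)/(a*c + b*c)

# func.__code__ is the same object eval() receives, so both
# calls below should print identical bytecode.
dis.dis(func)
dis.dis(func.__code__)
# === cut ===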