05.05.18 19:10, Steven D'Aprano wrote:
> # calling a regular function
> python3.5 -m timeit -s "f = lambda: 99" "f()"
> # evaluating a function code object
> python3.5 -m timeit -s "f = (lambda: 99).__code__" "eval(f)"
> # estimate the overhead of the eval name lookup
> python3.5 -m timeit "eval"
> # evaluating a pre-compiled byte-code object
> python3.5 -m timeit -s "f = compile('99', '', 'eval')" "eval(f)"
> # evaluating a string
> python3.5 -m timeit "eval('99')"
> And the results on my computer:
> # call a regular function
> 1000000 loops, best of 3: 0.245 usec per loop
> # evaluate the function __code__ object
> 1000000 loops, best of 3: 1.16 usec per loop
> # overhead of looking up "eval"
> 10000000 loops, best of 3: 0.11 usec per loop
> which means it takes four times longer to execute the code object alone,
> compared to calling the function object which executes the code object.
> Why so slow?
1. Add the overhead of calling eval() itself, not just of looking up its
name; it is comparable to the time of calling a simple lambda.
2. Add the overhead of fetching the globals dict and creating a locals
dict (a sketch of this follows below).
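A rough sketch of that second point in plain Python (the names here are
mine and purely illustrative): with no namespace arguments, eval() has to
pick up the caller's globals and build a locals mapping on every call,
whereas passing in your own dict skips that per-call setup.

code = (lambda: 99).__code__

# no namespaces given: eval() fetches the caller's globals and
# builds a locals mapping for each call
eval(code)

# explicit namespace: the dict serves as both globals and locals,
# so the per-call setup goes away
ns = {}
eval(code, ns)

The 3.6 timings below show the effect of passing an explicit dict: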
$ python3.6 -m timeit -s "f = lambda: 99" "f()"
10000000 loops, best of 3: 0.0658 usec per loop
$ python3.6 -m timeit -s "f = (lambda: 99).__code__" "eval(f)"
1000000 loops, best of 3: 0.282 usec per loop
$ python3.6 -m timeit "eval"
10000000 loops, best of 3: 0.0226 usec per loop
$ python3.6 -m timeit -s "f = (lambda: 99).__code__; g = {}" "eval(f, g)"
10000000 loops, best of 3: 0.165 usec per loop
0.0658*2 + 0.0226 = 0.1542: roughly one lambda-call time for the overhead
of calling eval() itself, another for actually running the code object,
plus the cost of looking up the name eval. That is comparable with the
measured 0.165 for eval(f, g).
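And if the goal is just to re-run a bare code object many times, one way
back to the fast path (my own sketch, not something from the measurements
above) is to bind it to a function object once and then call that:

import types

code = (lambda: 99).__code__

# bind the code object to a globals dict once; the resulting function
# runs the code object directly, without eval()'s per-call setup
f = types.FunctionType(code, globals())
f()    # returns 99 at ordinary function-call cost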