"Russ" <[EMAIL PROTECTED]> writes:

> I just did a little time test (which I should have done *before* my
> original post!), and 2.0**2 seems to be about twice as fast as
> pow(2.0,2). That seems consistent with your claim above...
>
> I just did another little time test comparing 2.0**0.5 with sqrt(2.0).
> Surprisingly, 2.0**0.5 seems to take around a third less time.
I think the explanation is likely here:

    Python 2.3.4 (#1, Feb 2 2005, 12:11:53)
    >>> import dis
    >>> from math import sqrt
    >>> def f(x): return x**.5
    ...
    >>> dis.dis(f)
      1           0 LOAD_FAST                0 (x)
                  3 LOAD_CONST               1 (0.5)
                  6 BINARY_POWER
                  7 RETURN_VALUE
                  8 LOAD_CONST               0 (None)
                 11 RETURN_VALUE

See, x**.5 does two immediate loads and an inline BINARY_POWER bytecode.

    >>> def g(x): return sqrt(x)
    ...
    >>> dis.dis(g)
      1           0 LOAD_GLOBAL              0 (sqrt)
                  3 LOAD_FAST                0 (x)
                  6 CALL_FUNCTION            1
                  9 RETURN_VALUE
                 10 LOAD_CONST               0 (None)
                 13 RETURN_VALUE

sqrt(x), on the other hand, does a lookup of 'sqrt' in the global
namespace and then a Python function call, both of which are likely
almost as expensive as the C library pow(...) call itself. If you do
something like

    def h(x, sqrt=sqrt): return sqrt(x)

you replace the LOAD_GLOBAL with a LOAD_FAST, and that might give a
slight speedup:

    >>> dis.dis(h)
      2           0 LOAD_FAST                1 (sqrt)
                  3 LOAD_FAST                0 (x)
                  6 CALL_FUNCTION            1
                  9 RETURN_VALUE
                 10 LOAD_CONST               0 (None)
                 13 RETURN_VALUE

-- 
http://mail.python.org/mailman/listinfo/python-list
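For anyone who wants to check this themselves, here is a minimal timing sketch with the stdlib timeit module. The three functions mirror f, g, and h above; absolute numbers will vary a lot by interpreter and hardware (the transcript above is from Python 2.3), so no particular ratio is promised:

```python
import timeit
from math import sqrt

def f(x):
    return x ** 0.5        # inline BINARY_POWER, two fast loads

def g(x):
    return sqrt(x)         # global name lookup + Python-level call

def h(x, sqrt=sqrt):
    return sqrt(x)         # default-arg trick: LOAD_FAST instead of LOAD_GLOBAL

# repetition count is arbitrary; raise it for steadier numbers
for fn in (f, g, h):
    t = timeit.timeit(lambda: fn(2.0), number=100_000)
    print(f"{fn.__name__}: {t:.4f}s")
```

All three compute the same value, of course; only the bytecode around the computation differs.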