On 12/30/2015 8:18 AM, Steven D'Aprano wrote:
> We know that Python floats are equivalent to C doubles,
Yes
> which are 64-bit IEEE-754 floating point numbers.
I believe that this was not true on all systems when Python was first released. Not all 64-bit float formats divided the bits the same way. I believe there has been some discussion on pydev about whether the Python code itself should assume IEEE now. I do not believe that there are currently any buildbots that are not IEEE. Does the standard allow exposing the 80-bit floats of FP processors?
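(One can check the assumption at runtime with sys.float_info, which exposes the parameters of the C double. A minimal sketch; the particular set of checks is mine, not anything the language guarantees:)

import sys

# IEEE-754 binary64: radix 2, 53-bit significand (52 stored bits
# plus one implicit bit), C-convention exponent range [-1021, 1024].
is_binary64 = (
    sys.float_info.radix == 2
    and sys.float_info.mant_dig == 53
    and sys.float_info.min_exp == -1021
    and sys.float_info.max_exp == 1024
)
print(is_binary64)

(An x87 80-bit extended double exposed as C double would instead show mant_dig == 64 and max_exp == 16384.)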
> Well, actually, C doubles are not strictly defined. The only promise
> the C standard makes is that double is no smaller than float. (That's
> C float, not Python float.) And of course, not all Python
> implementations use C.
>
> Nevertheless, it's well known (in the sense that "everybody knows")
> that Python floats are equivalent to C 64-bit IEEE-754 doubles. How
> safe is that assumption?
>
> I have a function with two implementations: a fast implementation
> that converts an int to a float, does some processing, then converts
> it back to int. That works fine so long as the int can be represented
> exactly as a float. The other implementation uses integer maths only,
> and is much slower but exact. As an optimization, I want to write:
>
> def func(n):
>     if n <= 2**53:
The magic number 53 should be explained in the code.
>         # use the floating point fast implementation
>     else:
>         # fall back on the slower, but exact, int algorithm
>
> (The optimization makes a real difference: for large n, the float
> version is about 500 times faster.)
>
> But I wonder whether I need to write this instead?
>
> def func(n):
>     if n <= 2**sys.float_info.mant_dig:
>         # ...float
>     else:
>         # ...int
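(The cutoff is easy to observe; a minimal sketch, assuming binary64 doubles, of where the int -> float -> int round trip first goes inexact:)

for n in (2**53 - 1, 2**53, 2**53 + 1):
    # Every int with magnitude <= 2**53 is exactly representable
    # in a binary64 double; 2**53 + 1 rounds to 2**53.
    print(n, int(float(n)) == n)

# prints True, True, False for the three values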
Pull the calculation of the constant out of the function. Naming the constant documents it and allows easy change. That is pretty standard in scientific computing (or was once).
import sys

finmax = 2 ** sys.float_info.mant_dig  # -1?

def func(n):
    if n <= finmax:
        ...

--
Terry Jan Reedy