On Fri, Feb 22, 2013 at 9:33 AM, Dave Angel <da...@davea.name> wrote:
> On 02/21/2013 05:11 PM, Chris Angelico wrote:
>>
>>> <snip>
>>
>> Note how, in each case, calculating three powers that have the same
>> real-number result gives a one-element set. Three to the sixtieth
>> power can't be perfectly rendered with a 53-bit mantissa, but it's
>> rendered the same way whichever route is used to calculate it.
>
> But you don't know how the floating point math library (note, it's the
> machine's C library, not Python's, that's used) actually calculates that.
>
> For example, if they were to calculate 2**64 by squaring the number six
> times, that's likely to give a different answer than multiplying by 2
> sixty-three times. And you don't know how the library does it. For any
> integer power up to 128, you can do a combination of square and multiply
> so that the total operations are never more than about 13. But if you
> then figure a = a*a and b = b/2, and do the same optimization, you might
> not do the operations in exactly the same order, and therefore might not
> get exactly the same answer.
>
> Even if it's being done in the coprocessor inside the Pentium, we don't
> have a documented algorithm for it. Professor Kahan helped with the
> 8087, but I know they've tweaked their algorithms over the years (as
> well as repairing bugs). So it might not be a difference between Python
> versions, nor between OSes, but between processor chips.
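For anyone who wants to replay the quoted one-element-set experiment,
here's a minimal sketch; the particular routes to 3**60 are my own
choice, and the printed digits depend on your platform's pow():

# Compute the same real-number power by different routes and collect
# the resulting floats in a set. If every route rounds to the same
# double, the set has exactly one element.
routes = {3.0 ** 60, (3.0 ** 30) ** 2, (3.0 ** 20) ** 3}
print(routes)       # a single value near 4.24e+28 if all routes agree
print(len(routes))  # 1 when every route rounded the same way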
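And Dave's square-and-multiply scheme (the a = a*a, b = b/2 loop) looks
roughly like this in Python -- ipow is just a name I made up, and a real
C library's pow() is of course free to do something entirely different:

def ipow(a, b):
    """Right-to-left square-and-multiply: a ** b for integer b >= 0."""
    result = 1.0
    while b:
        if b & 1:        # low bit of the exponent set: fold in a multiply
            result *= a
        a *= a           # square the base      (a = a*a)
        b >>= 1          # halve the exponent   (b = b/2)
    return result

For b = 64 that's six doublings of the exponent (this naive loop wastes
one extra squaring at the end) plus a single multiply into the result,
instead of 63 successive multiplications by the base -- and the two
orderings can round differently, which is exactly Dave's point.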
I was under the impression that, on most modern FPUs, calculations are
done internally with more precision than the 53 bits that get stored.
But in any case, I'd find it _extremely_ surprising if the calculation
actually resulted in something that wasn't one of the two nearest
representable values to the correct result, and I'd call that a CPU/FPU
bug. Of course, as we know, Intel's *never* had an FPU bug before...

ChrisA
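P.S. One way to check that "two nearest representable values" claim for
a specific case, using exact integer arithmetic on the Python side
(math.nextafter needs Python 3.9 or later):

import math

exact = 3 ** 60        # exact, since Python ints are arbitrary precision
approx = 3.0 ** 60     # whatever the FPU / C library produced

# approx's immediate representable neighbours
lo = math.nextafter(approx, -math.inf)
hi = math.nextafter(approx, math.inf)

# At this magnitude every double is an integer, so int() is exact.  If
# approx is one of the two representable values bracketing the true
# result (or equal to it), the exact value must lie within approx's
# neighbours.
print(int(lo) <= exact <= int(hi))   # anything but True is that FPU bug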