Just a little background: I wrote a program that computes high-accuracy factorials. It saves a given factorial for later use as a double precision real between 1 and 2 together with an integer power of 2. It does this by calculating the factorial to a practically unlimited number of bits and rounding it to 53 bits only as the final step, thereby avoiding cumulative round-off errors.
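
A minimal sketch of that representation (not the original program; the example value 6! = 720 is assumed), using the FRACTION and EXPONENT intrinsics to split a value into a mantissa in [1,2) and an integer power of 2:

  ! Minimal sketch, not the original program: store a double precision
  ! value as a mantissa m in [1,2) and an integer exponent e with
  ! x = m * 2**e, using the FRACTION and EXPONENT intrinsics.
  program normalize_demo
    implicit none
    double precision :: x, m
    integer :: e

    x = 720.0d0               ! assumed example value, 6!
    m = 2.0d0 * fraction(x)   ! FRACTION returns [0.5,1); rescale to [1,2)
    e = exponent(x) - 1       ! compensate for the rescaling

    print *, 'mantissa =', m, ' power of 2 =', e
    print *, 'reconstructed =', m * 2.0d0**e   ! prints 720.0
  end program normalize_demo
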
I wanted to compare the accuracy of this method to a straightforward way of calculating factorials. So I also scaled factorials calculated the straightforward way into the same form as above, that is, a double precision real between 1 and 2 and an integer power of 2. In the comparison, the integer power of 2 can be ignored, as it is practically always the same. As for the real portion, the difference between the two reals should always be an integer multiple of 2**(-52). The crux of the bug is that when I compiled with the flag -O, this was true; with -O2, it was not. Stated another way, the calculation of small differences seems to depend on optimization flags.

--
Summary: In gfortran, the calculation of small differences seems to depend on optimization flags
Product: gcc
Version: 4.5.0
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: fortran
AssignedTo: unassigned at gcc dot gnu dot org
ReportedBy: CycleTimeChart at yahoo dot com

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=45175
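
A minimal sketch of the check described in the report (not the reporter's code; the mantissa values are assumed stand-ins): with consistent double precision arithmetic, the difference between two mantissas in [1,2) divided by 2**(-52) should come out as a whole number.

  ! Minimal sketch, not the reporter's code: verify that the difference
  ! between two mantissas in [1,2) is an integer multiple of 2**(-52).
  program ulp_check
    implicit none
    double precision, parameter :: ulp = 2.0d0**(-52)
    double precision :: m_exact, m_naive, diff, n

    ! Assumed stand-ins for the two scaled factorials being compared.
    m_exact = 1.2345678901234567d0
    m_naive = m_exact + 3.0d0*ulp

    diff = m_naive - m_exact
    n = diff / ulp

    ! n should be a whole number (here, 3.0); the report says this
    ! held when compiling with -O but not with -O2.
    print *, 'difference / 2**(-52) =', n
    print *, 'whole number? ', n == anint(n)
  end program ulp_check
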