On 5/8/2011 8:25 AM, Michael D. Berger wrote:
-----Original Message-----
From: Robert Dewar [mailto:de...@adacore.com]
Sent: Sunday, May 08, 2011 11:13
To: Michael D. Berger
Cc: gcc@gcc.gnu.org
Subject: Re: numerical results differ after irrelevant code change
[...]
This kind of result is quite expected on an x86 using the old-style
(default) x87 floating point, because of the extra precision carried in
intermediate results.
How does the extra precision lead to the variable results?
Also, is there a way to prevent it? It is a pain in regression testing.
If you don't need to support CPUs more than 10 years old, consider
-march=pentium4 -mfpmath=sse, or use a 64-bit OS and gcc, where SSE math
is the default.
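To make the effect concrete, something along these lines should show it
(a minimal sketch, not from your code; the constant 1e-17 is just an
illustrative value smaller than half an ULP of 1.0):

  #include <stdio.h>

  /* volatile keeps the compiler from folding the arithmetic at compile time */
  volatile double one  = 1.0;
  volatile double tiny = 1e-17;   /* below half an ULP of 1.0 in double */

  int main(void)
  {
      /* With x87 math (gcc -m32, the historical default), one + tiny can
         be held in an 80-bit register, so the subtraction recovers a
         nonzero value.  With -mfpmath=sse (or on x86-64, where that is
         the default) every intermediate is rounded to 64-bit double and
         the result is exactly 0.  When an 80-bit value gets spilled to
         memory depends on register pressure, which is why an unrelated
         code change can move your last bits around. */
      double r = (one + tiny) - one;
      printf("%.17g\n", r);
      return 0;
  }

Typical compiles to compare:
  gcc -m32 -O2 demo.c                                 (x87: usually prints something near 1e-17)
  gcc -m32 -O2 -march=pentium4 -mfpmath=sse demo.c    (prints 0)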
Note the resemblance of your quoted differences to DBL_EPSILON from
<float.h>; that is 1 ULP relative to 1.0. I have a hard time imagining
real applications that don't need to tolerate differences of 1 ULP.
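A quick check of that scale (again just a sketch; link with -lm if your
libc needs it for nextafter):

  #include <stdio.h>
  #include <float.h>
  #include <math.h>

  int main(void)
  {
      /* DBL_EPSILON is the spacing between 1.0 and the next representable
         double -- 1 ULP at 1.0, roughly 2.22e-16, which is the size of
         the differences being discussed. */
      printf("DBL_EPSILON        = %.17g\n", DBL_EPSILON);
      printf("nextafter(1,2)-1.0 = %.17g\n", nextafter(1.0, 2.0) - 1.0);
      return 0;
  }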
--
Tim Prince