On 2006-12-21, at 22:19, David Nicol wrote:


> It has always seemed to me that floating point comparison could be
> standardized to regularize the exponent and ignore the least significant
> few bits, and that doing so would save a lot of headaches.

Well, actually it wouldn't "save the world". However, adding an op-code
implementing x eqeps y <=> |x - y| < epsilon would indeed be helpful.
Maybe some m-f has already patented it, and that's the reason we don't
see it done already. But that's of course only me speculating.
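
To make that concrete, here is a minimal sketch in C of what such a
comparison boils down to; the name eqeps and the tolerance value are
only illustrative, and a sensible epsilon of course depends on the
magnitudes involved:

    #include <math.h>

    /* Sketch of the comparison a hypothetical "eqeps" op-code would
       perform: two doubles compare equal when their absolute difference
       is below a caller-chosen tolerance. */
    static int eqeps(double x, double y, double epsilon)
    {
        return fabs(x - y) < epsilon;
    }

    /* Example: 0.1 + 0.2 is not exactly equal to 0.3 in binary floating
       point, but eqeps(0.1 + 0.2, 0.3, 1e-9) yields 1. */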

> Would it really save the headaches or would it just make the cases
> where absolute comparisons of fp results break less often, making the
> error more intermittent and thereby worse?  Could a compiler switch be
> added that would alter fp equality?

However, in numerical computation there isn't really a silver bullet to
be found. If you are serious about it, you simply do the hard work: the
numerical analysis of your algorithms with respect to computational
stability. That's the real effort, and perhaps the reason why the
peculiarities of FP implementations are not perceived as problematic.

> I have argued for "precision" to be included in numeric types in other
> fora and have been stunned that all except people with a background in
> Chemistry find the suggestion bizarre and unnecessary. I realize that
> GCC is not really a good place to try to shift norms; but on the other
> hand, if a patch were prepared that would add a command-line switch
> (perhaps -sloppy-fpe and -no-sloppy-fpe) governing the wrapping of
> ((fptype) == (fptype)) with something that threw away the least
> significant GCC_SLOPPY_FPE_SLOP_SIZE bits of the mantissa, would it
> get accepted or considered silly?
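
For the sake of discussion, such a wrapped comparison might expand to
something like the sketch below; the slop width and helper names are
purely illustrative, and nothing of the kind exists in GCC:

    #include <stdint.h>
    #include <string.h>

    #define SLOP_SIZE 3   /* stand-in for the proposed GCC_SLOPPY_FPE_SLOP_SIZE */

    /* Clear the SLOP_SIZE least significant mantissa bits of a double
       by working on its IEEE-754 bit pattern. */
    static uint64_t drop_low_mantissa_bits(double d)
    {
        uint64_t bits;
        memcpy(&bits, &d, sizeof bits);
        return bits & ~((UINT64_C(1) << SLOP_SIZE) - 1);
    }

    /* The "sloppy" replacement for x == y. */
    static int sloppy_eq(double x, double y)
    {
        return drop_low_mantissa_bits(x) == drop_low_mantissa_bits(y);
    }

Note that two values only one ulp apart can still straddle a masking
boundary and compare unequal, which already hints at the problem with
the approach.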

No, that's not sufficient. And a few bits of precision are really not
the crux of the numerical stability of an overall calculation. Please
look up, as an example, the numerical phenomenon usually called
"cancellation" to see immediately why.
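
For illustration, a tiny self-contained example of cancellation (the
numbers are arbitrary):

    #include <stdio.h>

    int main(void)
    {
        double x = 1.0 + 1e-15;   /* the addition already rounds */
        double y = 1.0;

        /* Mathematically x - y is 1e-15.  The subtraction below is
           exact, but the leading digits of x and y cancel, so the
           rounding error made in the addition dominates the result. */
        printf("%.17g\n", x - y);   /* 1.1102230246251565e-15 on IEEE-754 doubles */
        return 0;
    }

The computed difference is off by roughly 11%, i.e. by far more than
the few least significant bits a sloppy comparison would mask.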

Marcin Dalecki

