------- Comment #5 from vincent at vinc17 dot org 2006-02-14 17:03 -------
(In reply to comment #4)
> Note however, that the true accurate value for d, calculated at infinite
> precision, is 1-(2^-16). So, the absolute error for gcj is 1+(2^-16) and the
> absolute error with correct rounding is 1-(2^-16). (I'm not surprised this
> hasn't been reported as a problem with any real applications!)
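The exact expression for d is not shown in this thread, but any computation
whose exact value is (2^53 + 2) + (1 - 2^-16) reproduces both errors quoted
above; here is a minimal C sketch (the values are chosen for illustration and
are an assumption, not taken from the original test case), assuming an x87 FPU
left in its default 64-bit extended precision:

  #include <stdio.h>

  int main (void)
  {
    volatile double x = 9007199254740994.0;  /* 2^53 + 2, exact in double */
    volatile double y = 1.0 - 1.0 / 65536.0; /* 1 - 2^-16 */
    volatile double d = x + y;               /* exact sum: (2^53 + 3) - 2^-16 */
    /* With x87 extended precision (e.g. gcc -m32 -mfpmath=387 -O0), the sum
       is first rounded to a 64-bit significand, giving 2^53 + 3, then to
       double on the store, giving 2^53 + 4 by round-to-even: absolute error
       1 + 2^-16.  A single correct rounding to double would give 2^53 + 2:
       absolute error 1 - 2^-16.  */
    printf ("d = %.17g\n", d);
    return 0;
  }

With correct rounding (e.g. -mfpmath=sse on SSE2 hardware) this prints
9007199254740994; with the double rounding described above, 9007199254740996.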
Note that some algorithms may be sensitive to this difference. I give an
example in <http://www.vinc17.org/research/publi.en.html#Lef2005b> (the
Euclidean division implemented with a floating-point division and a floor);
the effect of extended precision is dealt with in Section 5. A second problem
is the reproducibility of the results on various architectures. Under
probabilistic hypotheses, something like 1 case out of 2048 should be
incorrectly rounded (in real-world code, it is much rarer).

> It might be worth setting the floating-point precision of gcj to double, but
> that would only fix the double-precision case, and I presume we'd still have
> the same double rounding problem for floats.

Yes, but doubles are nowadays used much more often than floats, IMHO. I think
that fixing the problem for doubles would be sufficient (as it is probably too
difficult to do better), though not perfect. A sketch of such a precision
change is given at the end of this comment.

> And in any case, I do not know if libcalls would be affected by being entered
> with the FPU in round-to-double mode. We might end up breaking things.

The only glibc function for which problems have been noticed is pow, in corner
cases. See <http://sources.redhat.com/bugzilla/show_bug.cgi?id=706>. It is
also inaccurate when the processor is configured in extended precision, so in
any case users shouldn't rely on it. I'd be interested in other cases, if any.

More information here: <http://www.vinc17.org/research/extended.en.html>

-- 
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=16122
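For reference, a minimal sketch of the precision change discussed above, using
glibc's x86-specific <fpu_control.h> macros (this changes only the current
thread's x87 control word; whether gcj could do this safely around libcalls is
exactly the open question quoted above):

  #include <stdio.h>
  #include <fpu_control.h>  /* glibc, x86 only */

  int main (void)
  {
    fpu_control_t cw;

    /* Set the x87 rounding precision to 53 bits (round-to-double).
       This fixes double rounding for doubles only; floats are still
       double-rounded, and the exponent range stays extended, so results
       near the underflow threshold can still be double-rounded.  */
    _FPU_GETCW (cw);
    cw = (cw & ~_FPU_EXTENDED) | _FPU_DOUBLE;
    _FPU_SETCW (cw);

    volatile double x = 9007199254740994.0;  /* 2^53 + 2 */
    volatile double y = 1.0 - 1.0 / 65536.0; /* 1 - 2^-16 */
    volatile double d = x + y;
    printf ("d = %.17g\n", d);  /* now the correctly rounded 2^53 + 2 */
    return 0;
  }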