On Mar 28, 2007, at 10:27 AM, cwitty wrote:

> In the following carefully chosen example, we see a case where a
> native double result (or a result using RDF) has only 4 correct
> digits (about 12 bits). (This example is from John Harrison's paper,
> _Formal verification of floating point trigonometric functions_.)
>
> sage: n = RR('13126962.690651042')
> sage: n == RR(float(n))
> True
> sage: sin(n)
> -0.000000000452018866841080
> sage: math.sin(float(n))
> -4.5200196699662215e-10
> sage: tan(n)
> -0.000000000452018866841080
> sage: tan(RDF(n))
> -4.52001966997e-10
>
> The trick is to find a floating-point number which is very close to a
> multiple of Pi; this means that the range reduction step needs to use
> a very precise approximation of Pi. (This transcript is from a Linux
> (Debian testing) box with an Intel Core 2 Duo processor in 32-bit
> mode; I would be curious if other architectures/operating systems
> give different results.)
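For anyone who wants to poke at this outside of Sage: the range
reduction above can be sketched in plain Python with the standard
library's decimal module. This is only an illustration, not how MPFR
or any libm actually implements sin; the 40-digit pi constant is
copied from published digits, and the multiplier 4178442 is the one
quoted later in this thread.

```python
from decimal import Decimal, getcontext

# Work with plenty of guard digits.
getcontext().prec = 50

# pi to 40 significant digits (an assumed constant, copied from
# published digits of pi; a real implementation computes this).
PI = Decimal("3.141592653589793238462643383279502884197")

x = float("13126962.690651042")   # the double nearest the decimal string

# Decimal(x) is the *exact* binary value of that double, so this
# subtraction is the range reduction a correct sin(x) must perform.
r = Decimal(x) - 4178442 * PI

# |r| is around 5e-10, so sin(r) agrees with r to roughly 30 digits
# and the reduced argument itself is (up to rounding) the answer.
approx_sin = float(r)
print(approx_sin)   # ~ -4.52018866841e-10, matching the MPFR value above
```

On a platform whose libm does accurate reduction, math.sin(x) should
agree with this to many digits; on the 32-bit x86 box above it does
not, which is exactly the point of the example.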
Hmm... the irony is that in this carefully chosen example RR (53-bit
MPFR) performs just as poorly as the RDF/Python float. I've also got a
Core 2 Duo and a G5, both of which produce exactly the same results.
On sage.math one gets

sage: a = '13126962.690651042'
sage: sin(RR(a))
-0.000000000452018866841080
sage: sin(RDF(a))
-4.52018866841e-10
sage: sin(RR(a)) / sin(RDF(a))
1.00000000000017

It should also be noted that a - 4178442 * pi is on the order of
10^-10, so one would not expect something closer to 0 than that.

> Also, even in cases (like default 32-bit x86 basic operations) where
> only the least significant bit may be wrong, this error can be
> magnified arbitrarily by subsequent operations. As a simple example,
> if foo() should return 1.0 exactly, but instead returns the next
> higher floating-point number, then (foo() - 1.0) should be exactly 0
> but is not; this is a huge relative error (you might even say an
> infinite relative error, depending on your exact definition of
> relative error).

Good point. I don't know that MPFR would be immune to this either
(though perhaps less vulnerable). The composition of arithmetic
operations cannot (a priori) be guaranteed to be correct to the least
significant bit. E.g.

sage: sum([RDF(2/3)^k for k in range(1000)]) - 3
-1.33226762955e-15
sage: sum([RR(2/3)^k for k in range(1000)]) - 3   # mpfr 53-bit
-0.00000000000000133226762955018
sage: 1.33226762955e-15/0.00000000000000133226762955018
0.9999999999998648

> On the other hand, the vast majority of floating-point in the world
> is done with native floating-point; clearly this is adequate for a
> huge variety of uses.
>
> I don't know how to weigh the tradeoffs of portability and precision
> versus speed in terms of picking a default floating-point ring for
> SAGE.

Your input is very valuable; you know more about this than I do, for
sure.
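Both of those effects are easy to reproduce in plain Python, which
uses the same native doubles as RDF (math.nextafter needs Python
3.9+; foo here is the hypothetical function from the quoted text):

```python
import math

# 1. A one-ulp error magnified into an "infinite" relative error:
#    foo() should return exactly 1.0 but returns the next double up.
def foo():
    return math.nextafter(1.0, 2.0)

err = foo() - 1.0        # should be exactly 0.0
print(err)               # 2.220446049250313e-16, i.e. 2**-52, not 0

# 2. Accumulated rounding in a long sum: the first 1000 terms of
#    sum((2/3)^k) differ from the limit 3 by about 1e-176, far below
#    double precision, so any visible deviation from 3 is rounding.
naive = sum((2/3)**k for k in range(1000)) - 3
exact = math.fsum((2/3)**k for k in range(1000)) - 3  # one rounding total
print(naive)   # on the order of -1e-15, as in the transcripts above
print(exact)   # typically smaller: only per-term rounding of (2/3)**k
```

math.fsum adds the given doubles exactly and rounds once at the end,
so the gap between naive and exact is precisely the error the naive
left-to-right summation accumulates.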
I think we need to make comparisons on a wider range of architectures
(it could be that, since what we're all using is relatively new, we
just happen to have very accurate native doubles...). If the results
are similar to the above (i.e. no major accuracy differences between
RR and RDF), I think a change would be a very good thing. On the other
hand, we need to do a lot of cross-platform testing to be sure.

- Robert