On Mar 28, 12:39 pm, Robert Bradshaw <[EMAIL PROTECTED]>
wrote:
> On Mar 28, 2007, at 10:27 AM, cwitty wrote:
> > In the following carefully chosen example, we see a case where a
> > native double result (or a result using RDF) has only 4 correct digits
> > (about 12 bits).  (This example is from John Harrison's paper, _Formal
> > verification of floating point trigonometric functions_.)
>
> > sage: n = RR('13126962.690651042')
> > sage: n == RR(float(n))
> > True
> > sage: sin(n)
> > -0.000000000452018866841080
> > sage: math.sin(float(n))
> > -4.5200196699662215e-10
> > sage: tan(n)
> > -0.000000000452018866841080
> > sage: tan(RDF(n))
> > -4.52001966997e-10
>
> Hmm... the irony is in this carefully chosen example RR (53-bit MPFR)
> performs just as poorly as the RDF/python float.

What do you mean?  RR is far more accurate.  (MPFR promises to give
the best possible correctly-rounded answer for all operations; if it
ever fails to do so, that's a bug.)

sage: sin(RealField(200)(n)) / sin(n)
1.00000000000000
sage: sin(RealField(200)(n)) / RR(math.sin(float(n)))
1.00003738887370

(Did you miss the difference between ...45201... and ...452001... in
the numbers above?  I missed the difference the first few times I
looked at them, and believed that the RDF and Python float numbers
were much better than they really were.)
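The size of that gap is easy to quantify in plain Python, using the two values printed above (copied here as literals):

```python
# The two answers printed above for sin(13126962.690651042):
rr_sin   = -0.000000000452018866841080   # 53-bit MPFR (RR) answer
libm_sin = -4.5200196699662215e-10       # math.sin(float(n)) answer

# Relative error of the libm answer, taking the MPFR answer as reference.
rel_err = abs(rr_sin - libm_sin) / abs(rr_sin)
print(rel_err)  # about 3.7e-5, i.e. only 4-5 matching leading digits
```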

> I've also got a Core 2 Duo and G5, both of which produce exactly the
> same results. On sage.math one gets
>
> sage: a = '13126962.690651042'
> sage: sin(RR(a))
> -0.000000000452018866841080
> sage: sin(RDF(a))
> -4.52018866841e-10
> sage: sin(RR(a)) / sin(RDF(a))
> 1.00000000000017
>
> It should also be noted that a - 4178442 * pi is on the order of
> 10^-10, so one would not expect something closer to 0 than that.

Note that math.sin(float(n)), math.tan(float(n)), and tan(RDF(n)) use
the C library versions of sin and tan, but sin(RDF(n)) uses the GSL
implementation of sin.  This is probably portable across IEEE-
compliant architectures, and is more accurate than the C library
version on my machine (although it's evidently still not quite
perfect).

> > Also, even in cases (like default 32-bit x86 basic operations) where
> > only the least significant bit may be wrong, this error can be
> > magnified arbitrarily by subsequent operations.  As a simple example,
> > if foo() should return 1.0 exactly, but instead returns the next
> > higher floating-point number, then (foo() - 1.0) should be exactly 0
> > but is not; this is a huge relative error (you might even say an
> > infinite relative error, depending on your exact definition of
> > relative error).
>
> Good point. I don't know that MPFR would be immune to this either
> (though perhaps less vulnerable). The composition of arithmetic
> operations cannot (a priori) be guaranteed to be correct to the least
> significant bit. E.g.
>
> sage: sum([RDF(2/3)^k for k in range(1000)]) - 3
> -1.33226762955e-15
> sage: sum([RR(2/3)^k for k in range(1000)]) - 3  # mpfr 53-bit
> -0.00000000000000133226762955018
>
> sage: 1.33226762955e-15/0.00000000000000133226762955018
> 0.9999999999998648
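For concreteness, the foo() scenario quoted above is easy to reproduce with plain Python doubles (foo is a hypothetical stand-in, just as in the quoted text):

```python
def foo():
    # Hypothetical routine: mathematically it should return exactly 1.0,
    # but it returns the next representable double above 1.0 instead.
    return 1.0 + 2.0**-52  # 2**-52 is 1 ulp at 1.0

err = foo() - 1.0
# The true answer is 0, so the relative error of err is infinite
# (or undefined), even though foo() itself was off by only one ulp.
print(err)  # 2.220446049250313e-16
```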

Note that printing out a number and then typing it in again does not
give you the same number back; the SAGE MPFR code deliberately rounds
off the number and discards a few bits at the end when it prints.
These computations actually give exactly the same answer:

sage: rdf_ans = sum([RDF(2/3)^k for k in range(1000)]) - 3
sage: rr_ans = sum([RR(2/3)^k for k in range(1000)]) - 3  # mpfr 53-bit
sage: rdf_ans / rr_ans
1.0
sage: rdf_ans == rr_ans
True
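Plain Python doubles show the same pitfall: a decimal printout truncated to fewer digits does not parse back to the original number, while a full-precision printout does. A minimal sketch:

```python
x = 2/3                        # 0.6666666666666666 as a 53-bit double
short = f"{x:.12g}"            # print with only 12 significant digits
print(float(short) == x)       # False: the short form lost bits
print(float(repr(x)) == x)     # True: Python's repr round-trips exactly
```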

This is not too surprising; the RDF version of this uses only basic
IEEE operations (+, *, /), so each operation is correctly rounded to
the best possible answer; and the RR version uses MPFR, so each
operation is correctly rounded to the best possible answer.
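The same computation in plain Python floats (IEEE doubles, like RDF) reproduces the effect; the error comes from accumulation over 1000 terms, not from any single operation:

```python
# Exactly, sum_{k=0}^{999} (2/3)^k = 3*(1 - (2/3)^1000), which is 3 minus
# an amount on the order of 1e-176 -- far below one ulp of 3.  The
# computed result instead reflects the rounding error accumulated over
# the 1000 additions.
r = sum((2/3)**k for k in range(1000)) - 3
print(r)  # a tiny value on the order of 1e-15
```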

It's true that floating-point computations may give an answer which is
arbitrarily different from the answer you would get with true real-
number computations.  There is some value in producing portable,
precise answers; for instance, a double-double or quad-double package,
or an implementation of interval arithmetic, can fail completely if
the basic operations it builds on are occasionally off by a single
bit.  However, people are unlikely to be implementing these at the
SAGE command line, and it may be that there, slightly-wrong floating-
point answers would seldom be a problem.  (Also, portability is
particularly useful for SAGE because it means we can use floating-
point arithmetic in doctests, and guarantee that it gives the same
results everywhere.)

> Your input is very valuable, you know more about this than I do for
> sure. I think we need to make comparisons on a wider range of
> architecture (could be since what we're all using is relatively new
> that we just happen to have very accurate native doubles...) If the
> results are similar to the above (i.e. no major accuracy differences
> between RR and RDF) I think a change would be a very good thing. On
> the other hand we need to do a lot of cross-platform testing to be sure.

If we decide that we value portability but not necessarily precision,
we could use RDF but avoid using C library implementations of the
mathematical functions.  (Currently, RDF uses the C library for tan(),
acos(), asin(), atan(), cosh(), sinh(), tanh(), according to a quick
scan of real_double.pyx -- I may have missed some.)  We would also
want to verify that the GSL implementations are portable across
architectures, and do something about 32-bit x86.

Carl Witty

