On Mar 28, 2007, at 2:23 PM, cwitty wrote:

>
> On Mar 28, 12:39 pm, Robert Bradshaw <[EMAIL PROTECTED]>
> wrote:
>> On Mar 28, 2007, at 10:27 AM, cwitty wrote:
>>> In the following carefully chosen example, we see a case where a
>>> native double result (or a result using RDF) has only 4 correct  
>>> digits
>>> (about 12 bits).  (This example is from John Harrison's paper,  
>>> _Formal
>>> verification of floating point trigonometric functions_.)
>>
>>> sage: n = RR('13126962.690651042')
>>> sage: n == RR(float(n))
>>> True
>>> sage: sin(n)
>>> -0.000000000452018866841080
>>> sage: math.sin(float(n))
>>> -4.5200196699662215e-10
>>> sage: tan(n)
>>> -0.000000000452018866841080
>>> sage: tan(RDF(n))
>>> -4.52001966997e-10
>>
>> Hmm... the irony is that in this carefully chosen example RR (53-bit
>> MPFR) performs just as poorly as the RDF/Python float.
>
> What do you mean?  RR is far more accurate.  (MPFR promises to give
> the best possible correctly-rounded answer for all operations; if it
> ever fails to do so, that's a bug.)
>
> sage: sin(RealField(200)(n)) / sin(n)
> 1.00000000000000
> sage: sin(RealField(200)(n)) / RR(math.sin(float(n)))
> 1.00003738887370
>
> (Did you miss the difference between ...45201... and ...452001... in
> the numbers above?  I missed the difference the first few times I
> looked at them, and believed that the RDF and Python float numbers
> were much better than they really were.)

Yes, I missed that extra 0 when reading the email. However, what I
was comparing on my machine was

sage: RR(sin(RDF(n))) / sin(RealField(200)(n))
0.999999999999998

which isn't bad. RDF here is using GSL. It looks like, in this case,
if we're to go with RDF we should make sure we use GSL rather than
the C library versions (right now it's a mix).

>>> Also, even in cases (like default 32-bit x86 basic operations) where
>>> only the least significant bit may be wrong, this error can be
>>> magnified arbitrarily by subsequent operations.  As a simple  
>>> example,
>>> if foo() should return 1.0 exactly, but instead returns the next
>>> higher floating-point number, then (foo() - 1.0) should be exactly 0
>>> but is not; this is a huge relative error (you might even say an
>>> infinite relative error, depending on your exact definition of
>>> relative error).
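That magnification is easy to demonstrate in plain Python (assuming Python 3.9+ for math.nextafter); foo here is a made-up stand-in for the quoted scenario, not any real SAGE function:

```python
import math

def foo():
    # hypothetical function: should return exactly 1.0, but is off
    # by one ulp (unit in the last place)
    return math.nextafter(1.0, 2.0)

err = foo() - 1.0    # should be exactly 0.0
print(err)           # 2**-52, about 2.2e-16: the relative error
                     # against the true answer 0 is unbounded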
>>
>> Good point. I don't know that MPFR would be immune to this either
>> (though perhaps less vulnerable). The composition of arithmetic
>> operations cannot (a priori) be guaranteed to be correct to the least
>> significant bit. E.g.
>>
>> sage: sum([RDF(2/3)^k for k in range(1000)]) - 3
>> -1.33226762955e-15
>> sage: sum([RR(2/3)^k for k in range(1000)]) - 3  # mpfr 53-bit
>> -0.00000000000000133226762955018
>>
>> sage: 1.33226762955e-15/0.00000000000000133226762955018
>> 0.9999999999998648
>
> Note that printing out a number and then typing it in again does not
> give you the same number back; the SAGE MPFR code deliberately rounds
> off the number and discards a few bits at the end when it prints.

Yes. In fact this is the very reason I started this thread in the  
first place (I saw quad doubles were at first doing coercions via  
strings, though that has now changed).
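A plain-Python illustration of the same pitfall: Python's repr prints enough digits to round-trip any double exactly, while printing fewer digits (as SAGE's default RR printing does) silently drops the last few bits:

```python
x = 0.1 + 0.2                  # 0.30000000000000004: needs 17 digits
assert float(repr(x)) == x     # repr round-trips the double exactly
short = "%.15g" % x            # prints "0.3" -- the last bits are gone
assert float(short) != x
```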

> These computations actually give exactly the same answer:
>
> sage: rdf_ans = sum([RDF(2/3)^k for k in range(1000)]) - 3
> sage: rr_ans = sum([RR(2/3)^k for k in range(1000)]) - 3  # mpfr 53-
> bit
> sage: rdf_ans / rr_ans
> 1.0
> sage: rdf_ans == rr_ans
> True
>
> This is not too surprising; the RDF version of this uses only basic
> IEEE operations (+, *, /), so each operation is correctly rounded to
> the best possible answer; and the RR version uses MPFR, so each
> operation is correctly rounded to the best possible answer.

Yeah. I was actually dividing because I was interested to see whether
the errors were of the same magnitude after the fact, and it was
quicker than counting to check whether the second answer had 15
leading zeros...
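For comparison, the exact residual can be computed with rational arithmetic. A quick sketch using Python's fractions module; the float sum mirrors the RDF/RR computation above, though the exact accumulated error may vary slightly by platform:

```python
from fractions import Fraction

# double-precision version of the sum above
approx = sum((2/3)**k for k in range(1000)) - 3

# exact value: sum_{k=0}^{999} (2/3)^k = 3*(1 - (2/3)^1000), so the
# true residual is -3*(2/3)^1000, around 1e-176 -- essentially zero
exact = 3 * (1 - Fraction(2, 3)**1000) - 3

print(approx)        # roughly -1e-15: pure accumulated rounding error
print(float(exact))
```

So the -1.33e-15 both RDF and RR report is entirely accumulated rounding error, not a trace of the true residual.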

> It's true that floating-point computations may give an answer which is
> arbitrarily different than the answer you would get with true real-
> number computations.  There is some value in producing portable,
> precise answers; for instance, a double-double or quad-double package,
> or an implementation of interval arithmetic, may totally fail with
> occasional answers that are off by a single bit.  However, people are
> unlikely to be implementing these at the SAGE command line, and it may
> be that at the SAGE command line, slightly-wrong floating-point
> answers would seldom be a problem.  (Also, portability is particularly
> useful for SAGE because it means we can use floating-point arithmetic
> in doctests, and guarantee that it gives the same results everywhere.)

My thoughts exactly. If one needs the extra precision, one can always  
explicitly get it, even from the command line.

>
>> Your input is very valuable, you know more about this than I do for
>> sure. I think we need to make comparisons on a wider range of
>> architectures (it could be that, since what we're all using is
>> relatively new, we just happen to have very accurate native
>> doubles...) If the results are similar to the above (i.e. no major
>> accuracy differences between RR and RDF) I think a change would be a
>> very good thing. On the other hand we need to do a lot of
>> cross-platform testing to be sure.
>
> If we decide that we value portability but not necessarily precision,
> we could use RDF but avoid using C library implementations of the
> mathematical functions.  (Currently, RDF uses the C library for tan(),
> acos(), asin(), atan(), cosh(), sinh(), tanh(), according to a quick
> scan of real_double.pyx -- I may have missed some.)  We would also
> want to verify that the GSL implementations are portable across
> architectures, and do something about 32-bit x86.

I'm leaning towards this too. There are actually fpu_fix_* functions
in the quad-double package to handle 32-bit x86 that might be useful
to look at.

- Robert

