On Mar 27, 12:12 pm, Robert Bradshaw <[EMAIL PROTECTED]> wrote:
> Yes, I am aware that lots of doctests would break, but if it's just a
> change in the last decimal or two(?) I'm OK with fixing that. I'm
> wondering if anyone knows of any algorithms/etc. that rely on MPFR
> 53-bit rather than native cdef doubles (also 53-bit). Are there any
> ways that MPFR is better at this precision than native doubles (which
> are much faster)?
MPFR is better than native doubles in at least two ways: MPFR has a much wider range of possible values (exponents up to about 2^(2^31), instead of about 2^1024), and MPFR gives the best possible rounded floating-point value for all supported floating-point operations in all rounding modes. I will call this best possible answer the "precise" answer; an answer which is not the best possible answer is "imprecise" (even if it is off by only the least significant bit).

As far as I know, virtually all modern computer architectures support IEEE double-precision arithmetic, which guarantees that the basic operations (+, -, *, /, sqrt) are precise, but says very little about other operations. Other operations may not be portable across architectures (or even operating systems), and may not work correctly in rounding modes other than round-to-nearest. In the rest of this message, I will use "IEEE compliance" to refer to this basic requirement, ignoring other IEEE requirements like exception handling.

SAGE currently has no support for rounding modes other than round-to-nearest for native doubles.

The situation is more complicated on 32-bit x86 processors, which are not IEEE compliant by default (the 80387 FPU computes intermediate results to 64-bit-mantissa extended precision, so a result can effectively be rounded twice). I believe that SAGE on 32-bit x86 is not IEEE compliant for RDF, for Python doubles, or for SAGEX doubles (and it's possible that each of these gives a different answer).

So for portable, precise computation with control over rounding modes, MPFR is the way to go. Native doubles are useful if:

1) you don't need portability or precision, or
2) you only need the basic operations (+, -, *, /, sqrt) in round-to-nearest mode, and you're not on 32-bit x86.

IEEE compliance on 32-bit x86 can be greatly improved by setting a processor flag (reducing the FPU's precision control to 53 bits), although there's no flag that gives the correct behavior on overflow; an x86 running in this mode might compute a finite value where a truly IEEE compliant processor would give an answer of Infinity.
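The points above can be sketched in plain Python (nothing Sage-specific; `round_to` is a hypothetical helper written only for this illustration). Exact rational arithmetic with `fractions.Fraction` lets us check that a basic double operation is correctly rounded, that doubles overflow near 2^1024, and that the x87's extra intermediate rounding to a 64-bit mantissa ("double rounding") can change an answer:

```python
from fractions import Fraction
import math

# -- 1. Basic IEEE double operations are correctly rounded ------------------
# The computed result of +, -, *, /, sqrt is the representable double
# closest to the exact mathematical answer (within half an ulp).
a, b = 1.0 / 3.0, 2.0 / 7.0
exact = Fraction(a) + Fraction(b)            # exact rational sum of two doubles
computed = a + b                             # one IEEE-rounded addition
assert abs(Fraction(computed) - exact) <= Fraction(2) ** -54  # half ulp in [1/2, 1)

# -- 2. Doubles overflow near 2^1024; MPFR exponents reach about 2^(2^31) ---
big = 2.0 ** 1023
assert math.isinf(big * 2)                   # 2^1024 is not a finite double

# -- 3. Why x87 extended precision breaks compliance: double rounding -------
def round_to(x, bits):
    """Round a rational x in [1, 2) to 'bits' significant bits,
    round-half-to-even (the IEEE default)."""
    scale = Fraction(2) ** (bits - 1)        # for x in [1, 2), ulp = 2^(1-bits)
    n = x * scale
    whole = n.numerator // n.denominator
    frac = n - whole
    if frac > Fraction(1, 2) or (frac == Fraction(1, 2) and whole % 2 == 1):
        whole += 1
    return Fraction(whole) / scale

x = 1 + Fraction(2) ** -53 + Fraction(2) ** -66
direct = round_to(x, 53)                     # round once to 53 bits: 1 + 2^-52
via_x87 = round_to(round_to(x, 64), 53)      # round to the x87's 64 bits first: 1
assert direct != via_x87                     # double rounding changed the answer
```

The third part is the heart of the x86 problem: the value `x` lies just above the midpoint between two 53-bit doubles, but the first rounding to 64 bits discards the deciding bits, so the second rounding lands on the wrong double.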
For modern x86 processors, you can also get full IEEE compliance by using SSE2 instructions instead of 80387 instructions for floating-point arithmetic; a version of SAGE compiled this way would probably be faster, but would not run on older or cheaper processors (pre-Pentium 4 Intel processors, for instance).

Hope this helps.

Carl Witty