On Mar 28, 12:53 am, Robert Bradshaw <[EMAIL PROTECTED]> wrote:
> Thank you. This is exactly the kind of information I was looking for.
> I knew about the range of values limitation, but was only vaguely
> aware of the rest. The situation I'm thinking of is the default
> implicit ring (e.g. when one enters "3.2"), in which case the lack of
> support for rounding modes probably wouldn't be a big issue (if one
> cares about such things, one would probably want to specify it
> explicitly). The lack of precision/portability/consistency could be a
> major issue, though. Of course all options would be available
> explicitly, but do you think this last point is severe enough to
> write off using them as the default implicit ring despite their
> relative inefficiency? Perhaps one could claim that they only really
> have 51 bits of precision (or are the answers sometimes way off,
> other than inf/nan)?
In the following carefully chosen example, we see a case where a native
double result (or a result using RDF) has only 4 correct digits (about
12 bits). (This example is from John Harrison's paper, _Formal
verification of floating point trigonometric functions_.)

sage: n = RR('13126962.690651042')
sage: n == RR(float(n))
True
sage: sin(n)
-0.000000000452018866841080
sage: math.sin(float(n))
-4.5200196699662215e-10
sage: tan(n)
-0.000000000452018866841080
sage: tan(RDF(n))
-4.52001966997e-10

The trick is to find a floating-point number which is very close to a
multiple of Pi; this means that the range reduction step needs to use a
very precise approximation of Pi. (This transcript is from a Linux
(Debian testing) box with an Intel Core 2 Duo processor in 32-bit mode;
I would be curious whether other architectures/operating systems give
different results.)

Also, even in cases (like default 32-bit x86 basic operations) where
only the least significant bit may be wrong, this error can be
magnified arbitrarily by subsequent operations. As a simple example, if
foo() should return 1.0 exactly, but instead returns the next higher
floating-point number, then (foo() - 1.0) should be exactly 0 but is
not; this is a huge relative error (you might even say an infinite
relative error, depending on your exact definition of relative error).

On the other hand, the vast majority of floating-point arithmetic in
the world is done with native floating point; clearly this is adequate
for a huge variety of uses. I don't know how to weigh the tradeoffs of
portability and precision versus speed in terms of picking a default
floating-point ring for SAGE.
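For what it's worth, the error-magnification point is easy to reproduce
in plain Python (3.9+, for math.nextafter). The function foo() here is
just the hypothetical function from the paragraph above, made concrete:

```python
import math

def foo():
    # Hypothetical function that mathematically should return exactly
    # 1.0, but instead returns the next representable double above 1.0
    # (off by one ulp, i.e. by 2**-52).
    return math.nextafter(1.0, 2.0)

# Mathematically, foo() - 1.0 is 0.  In floating point it is 2**-52:
# a tiny absolute error, but an unbounded relative error relative to
# the true answer of 0.
diff = foo() - 1.0
print(diff)                # 2.220446049250313e-16
print(diff == 2.0 ** -52)  # True
```

The subtraction itself is exact here (Sterbenz's lemma: subtracting two
nearby doubles loses nothing); all of the error was already present in
foo()'s one-ulp slip, and the subtraction merely exposes it.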
Carl Witty