On Sat, Oct 3, 2009 at 4:07 AM, rjf <fate...@gmail.com> wrote:
>
> On Oct 2, 5:32 pm, Fredrik Johansson <fredrik.johans...@gmail.com> wrote:
>> On Sat, Oct 3, 2009 at 12:58 AM, rjf <fate...@gmail.com> wrote:
>>
>> > Reading the bug report it seemed to me that the code was determining
>> > in some way that terms could be dropped off the sum because they were
>> > too small to contribute, and then stopped adding them in. Is that the
>> > "simple implementation bug"? Or is that an additional bad idea?
>>
>> The summation code basically adds terms that are close together
>> exactly (using a big integer mantissa) and discards terms that are
>> much smaller. If the first pass over the terms results in a large
>> cancellation, the summation is restarted at higher precision and
>> eventually (if necessary) the smaller terms will be included as well.
>
> Is this better than compensated summation? Is it better than taking
> those discarded smaller terms and adding them together separately?
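For concreteness, the strategy described in the quoted paragraph might look
roughly like the sketch below. It is only an illustration, not the actual
mpmath code: the term(wp) -> (man, exp) interface and the names
(estimate_sum, guard, max_prec) are invented here. The idea is an exact
big-integer accumulator, a rule for skipping terms much smaller than the
largest term seen, and a restart at higher working precision when
cancellation destroys too many bits.

def estimate_sum(terms, prec, guard=10, max_prec=10**5):
    """Approximate the sum of inexactly known terms to about prec bits.

    Each element of terms is assumed to be a callable term(wp) returning
    an integer pair (man, exp) with man * 2**exp approximating that term
    to about wp bits.  This interface is invented for the sketch.
    """
    wp = prec + guard                       # working precision in bits
    while wp <= max_prec:
        man, exp = 0, 0                     # exact dyadic accumulator
        top = None                          # ~log2 of the largest term
        for term in terms:
            m, e = term(wp)
            if not m:
                continue
            mag = e + m.bit_length()        # ~log2 of this term
            top = mag if top is None else max(top, mag)
            # Skip terms more than wp bits below the largest term seen
            # so far; at this working precision they cannot matter.
            if top - mag > wp:
                continue
            # Add exactly, aligning binary exponents (big-integer
            # mantissa, so no rounding happens inside the accumulator).
            if e >= exp:
                man += m << (e - exp)
            else:
                man, exp = (man << (exp - e)) + m, e
        if top is None:                     # every term evaluated to zero
            return 0, 0
        # Bits lost to cancellation: size of largest term minus size of sum.
        lost = top - (exp + man.bit_length()) if man else wp
        if lost <= wp - prec:
            # Enough accurate bits remain; shrink the mantissa to ~prec bits.
            excess = man.bit_length() - prec
            if excess > 0:
                man, exp = man >> excess, exp + excess
            return man, exp
        wp = prec + lost + guard            # restart at higher precision
    raise ArithmeticError("no convergence below max_prec working bits")

In practice each term(wp) would wrap whatever evaluates one symbolic
summand to wp bits, so a restart really does re-evaluate the terms more
accurately rather than just re-adding the old approximations.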
Yes and yes. The purpose of this code is *not* to add a list of binary
fractions accurately. It is to approximate the sum of symbolically
defined terms, which with high probability don't have exact binary
representations. In this context, including much smaller terms is a
waste of time.

I'll illustrate with a simple example (in decimal). Consider three
symbolic terms (with high probability irrational, or at least not exact
binary numbers) which to 10-digit accuracy are:

1.234501000
1.000000000e-10
-1.234500000

Suppose we want to evaluate their sum to a precision of 5 digits. The
sum should be about 1.0001e-6. First we evaluate each term to 5 digits:

1.2345
1.0000e-10
-1.2345

Now, adding these terms in order naively yields 0.0, which is wrong.
Adding them using compensated (or exact) summation yields 1.0000e-10,
which is the exact sum of the approximated terms but the wrong value for
the sum of the exact terms. The only way to compute the sum of the exact
terms accurately is to restart with higher precision approximations for
the terms.

> Is this just copied from MPFR?

No.

> It may be a good idea if comparison of 2 numbers is very very much
> cheaper than addition of 2 numbers, and checking for cancellation is
> cheap. I didn't notice any reference to the literature in the file I
> looked at, but maybe I wasn't looking in the right place.

I don't know any particular references for this algorithm. It's not
really original; it's just summation with a separate error estimate.
It's similar to what MPFR (for example) might do internally to track
errors when evaluating special functions that are sums of multiple
terms, only here it is explicit and adapted to handle general
expressions.

Fredrik

>> The bug was due to a basic, dumb error in the formula used to
>> determine "much smaller", and the reason the bug wasn't detected
>> before by any tests is that it only triggered in some cases (indeed,
>> in a platform dependent way) and with precisions or numbers around 40
>> digits or larger.
>
> That's unfortunate. It would still be nice to know that the algorithm
> used has been shown to be a good one, and perhaps has been compared to
> others --- since there has been an extensive discussion of algorithms
> that are (within their own constraints) known to be quite good.
>
> RJf
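As a footnote to the decimal example above: the effect is easy to reproduce
with Python's decimal module. This is only an illustration of the rounding
behaviour in the example (the code under discussion works in binary, and
the variable names here are made up):

from decimal import Decimal, getcontext

# The three terms from the example, known here to 10 significant digits.
terms_10 = [Decimal("1.234501000"),
            Decimal("1.000000000e-10"),
            Decimal("-1.234500000")]

# Evaluate (round) each term to 5 significant digits, as in the example.
getcontext().prec = 5
terms_5 = [+t for t in terms_10]    # unary + rounds to the context precision
print(terms_5)   # [Decimal('1.2345'), Decimal('1.0000E-10'), Decimal('-1.2345')]

# Naive in-order summation at 5-digit working precision: total cancellation.
naive = Decimal(0)
for t in terms_5:
    naive += t
print(naive)                        # 0.0000  (wrong)

# Exact summation of the rounded terms (compensated summation would give
# the same thing): the exact sum of the approximations, still the wrong
# value for the sum of the original terms.
getcontext().prec = 25
print(sum(terms_5, Decimal(0)))     # 1.0000E-10  (still wrong)

# The only fix is to redo the sum with higher precision approximations.
print(sum(terms_10, Decimal(0)))    # 0.0000010001000000000, about 1.0001e-6

Once the terms have been rounded to 5 digits, no summation trick can recover
the digits that were discarded, which is why the restart has to go back and
re-evaluate the terms themselves.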