Tim Peters <t...@python.org> added the comment:

Here's an amusing cautionary tale:  when looking at correct-rounding failures 
for the fsum approach, I was baffled until I realized it was actually the 
_decimal_ method that was failing.  Simplest example I have is 9 instances of 
b=4.999999999999999, which is 1 ulp less than 5. Of course the best-possible 
hypot is 3*b rounded, but that's not what default decimal precision computes 
(when rounded back).  In fact, it bounces between "off by 1" and "correct" 
until decimal precision exceeds 96:

>>> import decimal
>>> b = 4.999999999999999   # 1 ulp less than 5
>>> pfailed = []
>>> for p in range(28, 300):
...     with decimal.localcontext() as c:
...         c.prec = p
...         if float(sum([decimal.Decimal(b) ** 2] * 9).sqrt()) != 3*b:
...             pfailed.append(p)
...
>>> pfailed
[28, 29, 30, 31, 34, 36, 37, 39, 41, 42, 43, 44, 45, 57, 77, 96]
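
To see one failure concretely: at the default context (prec=28), the round
trip lands exactly 1 ulp above the best-possible result (math.ulp needs
Python 3.9+; b and the import are from the session above):

>>> import math
>>> got = float(sum([decimal.Decimal(b) ** 2] * 9).sqrt())  # prec = 28
>>> (got - 3*b) / math.ulp(3*b)
1.0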

In a nutshell, it takes in the ballpark of 50 decimal digits to represent b
exactly, then twice as many to represent its square exactly, then one more
digit to avoid losing anything when summing 9 of those.  So if decimal prec
is less than about 100, it can lose info.
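
Concretely, float-to-Decimal conversion is exact, so as_tuple() shows the
exact digit counts (a quick check; b and the import are from the session
above):

>>> d = decimal.Decimal(b)
>>> len(d.as_tuple().digits)      # digits needed to write b exactly
51
>>> with decimal.localcontext() as c:
...     c.prec = 200              # enough room that everything below is exact
...     sq, total = d * d, 9 * d * d
...
>>> len(sq.as_tuple().digits)     # the exact square needs about twice as many
102
>>> len(total.as_tuple().digits)  # and the sum of 9 of those needs one more
103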

So the fsum method is actually more reliable (it doesn't lose any info until 
the fsum step).
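
For reference, here's a minimal sketch of the fsum flavor of the computation
(not the actual patch - square_exact and hypot_fsum are made-up names here).
Each square is split into an exact hi/lo pair with Dekker's algorithm, every
part is fed to math.fsum (one rounding for the whole sum), and sqrt rounds
once more:

import math

def square_exact(x):
    # Return (hi, lo) with hi + lo == x*x exactly (Veltkamp/Dekker
    # splitting; assumes no overflow or underflow in the split).
    hi = x * x
    c = 134217729.0 * x              # 2**27 + 1
    xh = c - (c - x)
    xl = x - xh
    lo = ((xh * xh - hi) + 2.0 * xh * xl) + xl * xl
    return hi, lo

def hypot_fsum(*coords):
    # Nothing is lost before the fsum step:  fsum rounds the exact
    # sum of squares once, then sqrt rounds once more.
    parts = []
    for x in coords:
        parts.extend(square_exact(x))
    return math.sqrt(math.fsum(parts))

>>> hypot_fsum(*[b] * 9) == 3*b   # the case that trips default decimal
True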

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue41513>
_______________________________________