On Tuesday, April 29, 2014 11:39:12 PM UTC-4, Steven D'Aprano wrote:
> On Tue, 29 Apr 2014 19:37:17 -0700, pleasedontspam wrote:
>
> > from decimal import *
> > getcontext().prec=2016
> > one=Decimal(1)
> > number=Decimal('1e-1007')
> > partial=(one+number)/(one-number)
> > final.ln()
>
> What's final? Did you mean partial?

Final is my final result: ln((1+x)/(1-x)). I called the quotient "partial"
because I wanted to investigate whether the result of the division (my
partial result) had a rounding error, which would then throw off the ln().
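For anyone following along, a quick way to check the quotient on its own is
something along these lines (just a sketch; the expected value is hand-derived
from the series (1+x)/(1-x) = 1 + 2x + 2x^2 + ..., of which only the terms up
to 2x^2 fit in 2016 digits):

from decimal import Decimal, getcontext

getcontext().prec = 2016
one = Decimal(1)
number = Decimal('1e-1007')

partial = (one + number) / (one - number)

# (1+x)/(1-x) = 1 + 2x + 2x^2 + 2x^3 + ...; with x = 1e-1007 the terms
# from 2x^3 onward fall outside a 2016-digit coefficient, so the correctly
# rounded quotient should be exactly 1 + 2e-1007 + 2e-2014.
expected = one + 2*number + 2*number*number
print(partial == expected)   # True means the division is not the problem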
> When I try it in Python 3.3, I get:
>
> py> from decimal import *
> py> getcontext().prec=2016
> py> one = Decimal(1)
> py> number = Decimal('1e-1007')
> py> partial = (one+number)/(one-number)
> py> partial.ln()
> Decimal('1.9999[...]9999987E-1007')
>
> with a lot of 9s deleted for brevity.
>
> > The result should be 2.00000... with all zeroes and 7 at the end.
>
> Can you demonstrate exactly how you tested it on Wolfram-Alpha, because I
> cannot reproduce that. When I try it with ln((1+1e-1007)/(1-1e-1007)),

That's exactly how you test it. The result should be 2.0000...7 at 2016
digits of precision; at higher precisions it continues with roughly another
2014 sixes before the digits become something else. This follows from the
definition of the hyperbolic arctangent:

atanh(x) = 1/2 * ln( (1+x)/(1-x) )

As it turns out, for x = 1e-N the ln() part is a 2, followed by 2N zeroes,
then 2N-1 sixes, then a 7 and a 0, then another group of around 2N sixes (I
didn't actually count them), then other interesting digit patterns.

Try it with 1e-1007 using 4000 digits or more and you'll see Python giving a
better result... but there's a 4 in the middle of the sixes! That 4 shouldn't
be there. Set getcontext().prec=4000, then do the rest the same (you need to
recalculate the partial result at the higher precision too); there's a small
snippet further down in this message that does exactly that.

> the decimal expansion shown starts off as:
>
> 2.00000000000000000000000000000000000000000000000000000... × 10^-1007
>
> By repeatedly clicking "More digits", I get:
>
> 2.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000...
> × 10^-1007
>
> For brevity, I will truncate some repeated digits from this point on.
>
> 1.99999999999999999[...]999999999999999999999999999999999999999999999...
> × 10^-1007
>
> as above, except with even more nines (477 of them if I have calculated
> correctly)

You're correct! It seems Alpha has its own problems too. Depending on the
precision, it gives either the correct or the incorrect result, changing as
you hit 'More digits'. I can imagine this happening because the (1+x)/(1-x)
ratio is 1 followed by N-1 zeroes, then a 2, then N-1 zeroes, then another 2,
and so on. If the precision is such that the last digit just includes the
next 2 or not, it can throw the result off a little bit. But this should
self-correct as the precision goes higher, as it does in Wolfram Alpha.
Python, however, gives bad data even at very high precision: as I mentioned
above, with 4000 digits there's a bad digit in the middle of the sixes,
hundreds of digits short of the requested precision.

> 2.00000000000000000[...]0000000000000000000000000000000000000000000000...
> × 10^-1007
>
> as above, with even more zeroes
>
> and finally the last available approximation is
>
> 2.000000[...]000000666666[...]666666... × 10^-1007
>
> with a lot of zeroes and sixes not shown. But not ending in 7.

The ending in 7 is because I used exactly 2016 digits of precision, and 2016
is 2*1007 + 2, so the very last digit is the first 6 of the 6666666... run,
which becomes a 7 because it is correctly rounded.
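Here's the snippet I mentioned above, in case anyone wants to reproduce the
4000-digit case without retyping everything. It's just the same steps at the
higher precision, plus a quick ad-hoc scan for digits that shouldn't be there
(at this particular x and precision the correct expansion contains only 0, 2,
6 and a final rounded 7, so anything else, like the stray 4, stands out):

from decimal import Decimal, getcontext

getcontext().prec = 4000                    # recompute everything at 4000 digits
one = Decimal(1)
number = Decimal('1e-1007')

partial = (one + number) / (one - number)   # 1.000...02000...02...
final = partial.ln()                        # should be 2.000...0666...67E-1007

# Flag any coefficient digit that is not 0, 2, 6 or 7.
digits = final.as_tuple().digits
bad = [(pos, d) for pos, d in enumerate(digits) if d not in (0, 2, 6, 7)]
print(bad)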
> > Instead, I'm getting 1.999999.. with 2 digits different at the end.
>
> Well, it's hard to say exactly what's going on, since Wolfram Alpha
> doesn't show the context, but it looks like Python's version may agree
> with two out of the seven decimal approximations Alpha makes available.
> This suggests to me that it's a representation issue.

Definitely not a representation issue. I'm working on a high precision
calculator and was using mpdecimal to generate tables for the CORDIC method
(the tables hold atanh(1e-N) for N=1..Precision); that's how I stumbled upon
this issue: the last 2/3 of my tables were consistently showing wrong
results. (There's a simplified sketch of the table generation at the bottom
of this message.) And I'm not printing the numbers, I'm accessing the digits
directly from the data structures, so what you see is exactly what gets
calculated; this is not a printing problem. It might be a printing issue in
Alpha, since they probably use binary numbers internally (GMP and the like),
as opposed to decimal like Python.

> > [...]
> > Can other people confirm this or is it just my imagination? I tested
> > with python 2.7.5 in XUbuntu, and also with my own build of mpdecimal
> > 2.4.0 (latest, built from source). I compared the results with wolfram
> > Alpha, and also with an open source arbitrary precision calculator,
> > which matches Alpha results.
>
> I think it's worth raising a ticket for it on the bug tracker.
>
> I think it would help if you can provide a link or instructions to how to
> replicate this in Alpha,

You already did it right! ln((1+1e-1007)/(1-1e-1007))

> also give the name and version number of the
> arbitrary precision calculator you used. Anything that can be used to
> replicate your results.

I used this to double check: http://preccalc.sourceforge.net

I think you replicated my results perfectly and confirmed something is going
on. I'm now especially baffled by the 4 that appeared in the middle of the
sixes when using 4000 digits of precision.

I just checked this with Python 2.7.6 running on PC-BSD 10, and it's
confirmed there as well. So it's not a problem with the compiler used for the
build either (PC-BSD/FreeBSD used Clang 3.3, XUbuntu used GCC).

> --
> Steven D'Aprano
> http://import-that.dreamwidth.org/
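P.S. Since the CORDIC tables came up: this is roughly what my table
generation does, heavily simplified (the real code calls mpdecimal's C API
directly; the function name and the use of Python's decimal module here are
just for illustration):

from decimal import Decimal, getcontext

def atanh_table(digits):
    # atanh(1e-N) = 1/2 * ln((1 + 1e-N)/(1 - 1e-N)), for N = 1..digits.
    # The entries toward the end of the table (large N, quotient very close
    # to 1) are the ones that were coming out wrong for me.
    getcontext().prec = digits
    one = Decimal(1)
    table = []
    for n in range(1, digits + 1):
        x = Decimal('1e-%d' % n)
        table.append(((one + x) / (one - x)).ln() / 2)
    return table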