Tim Peters added the comment:

Let's spell one of these out, to better understand why sticking to native 
precision is inadequate.  Here's one way to write the Newton step in "guess + 
relatively_small_correction" form:

    def plain(x, n):
        # One Newton step for f(g) = g**n - x:
        # g - f(g)/f'(g) = g - (g**n - x)/(n*g**(n-1)) = g - (g - x/g**(n-1))/n
        g = x**(1.0/n)
        return g - (g - x/g**(n-1))/n

Alas, it's pretty much useless.  That's because when g is a really good guess, 
`g` and `x/g**(n-1)` are, in native precision, almost the same.  So the 
subtraction cancels out almost all the bits, leaving only a bit or two of 
actual information.  For this kind of approach to be helpful in native 
precision, it generally requires a clever way of rewriting the correction 
computation that _doesn't_ suffer massive cancellation.

Example:

>>> x = 5.283415219603294e-14

and we want the square root.  Mark's `nroot` always gives the best possible 
result:

>>> nroot(x, 2)
2.298568080262861e-07

In this case, so does sqrt(), and also x**0.5 (on my box - pow() may not on 
yours):

>>> sqrt(x)
2.298568080262861e-07
>>> x**0.5
2.298568080262861e-07

You're going to think it needs "improvement", because the square of that is not 
x:

>>> sqrt(x)**2 < x
True
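
But it doesn't - it's already as close as a float can get.  Here's a quick 
way to see that with exact rational arithmetic (just a sanity check, not 
part of `nroot`; math.nextafter needs Python 3.9+):  the next float up 
overshoots x by even more than sqrt(x)**2 undershoots it:

>>> from fractions import Fraction
>>> from math import nextafter, inf
>>> r = sqrt(x)
>>> up = nextafter(r, inf)  # the next representable float above r
>>> Fraction(up)**2 - Fraction(x) > Fraction(x) - Fraction(r)**2
True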

Let's see what happens in the `plain()` function above:

>>> g = x**0.5
>>> temp = x/g**(2-1)

>>> g.hex()
'0x1.ed9d1dd7ce57fp-23'
>>> temp.hex()
'0x1.ed9d1dd7ce580p-23'

They differ by just 1 in the very last bit.  There's nothing wrong with "the 
math", but _in native precision_ the subtraction leaves only 1 bit of 
information.
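
To put a number on the cancellation, here's a throwaway helper (purely 
illustrative - it isn't part of any patch) that counts how many significand 
bits survive subtracting two floats sharing a binary exponent:

    import math

    def bits_left(a, b):
        # |a - b| measured in units of the last place, expressed in
        # bits.  Only meaningful when a and b share a binary exponent.
        ma, ea = math.frexp(a)
        mb, eb = math.frexp(b)
        assert ea == eb
        return abs(int(ma * 2**53) - int(mb * 2**53)).bit_length()

>>> bits_left(g, temp)
1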

Then:

>>> correction = (g - temp)/2
>>> correction
-1.3234889800848443e-23

is indeed small compared to g, but it only has one "real" bit:

>>> correction.hex()
'-0x1.0000000000000p-76'

Finally,

>>> g - correction
2.2985680802628612e-07

is _worse_ than the guess we started with.  Not because of "the math", but 
because sticking to native precision leaves us with an extremely crude 
approximation to the truth.
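
And that's exactly what the `plain()` function at the top returns:

>>> plain(x, 2)
2.2985680802628612e-07
>>> plain(x, 2) == sqrt(x)
False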

This can be repaired in a straightforward way by computing the crucial 
subtraction with greater than native precision.  For example,

    import decimal
    c = decimal.DefaultContext.copy()
    c.prec = 25  # 25 significant digits - well beyond a double's ~16

    def blarg(x, n,
              D=decimal.Decimal,
              pow=c.power,
              sub=c.subtract,
              div=c.divide):
        g = x**(1.0/n)
        Dg = D(g)
        # Compute g - x/g**(n-1) with 25 decimal digits, so the massive
        # cancellation can't wipe out the correction's accuracy.
        return g - float(sub(Dg, div(D(x), pow(Dg, n-1)))) / n

    del decimal, c

Then the difference is computed as

Decimal('-2.324439989147273024835272E-23')

and the correction (convert that to float and divide by 2) as:

-1.1622199945736365e-23

The magnitude of that is smaller than that of the -1.3234889800848443e-23 we 
got in native precision, and adding the new correction to g makes no 
difference:  the correction is (correctly!) viewed as being too small 
(compared with g) to matter.
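
So, on this example, `blarg()` hands back the correctly-rounded root 
unchanged:

>>> blarg(x, 2)
2.298568080262861e-07
>>> blarg(x, 2) == sqrt(x)
True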

So, bottom line:  there is no known way of writing the Newton step that won't 
make things worse at times, so long as it sticks to native precision.  Newton 
iterations in native precision are wonderful when, e.g., you know you have 
about 20 good bits and want to get about 40 good bits quickly.  The last bit or 
two remain pure noise, unless you can write the correction computation in a way 
that retains "a bunch" of significant bits.  By far the easiest way to do the 
latter is simply to use more precision.
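
For concreteness, here's a quick sketch of that sweet spot, reusing the 
same Newton step on a deliberately crude guess for the cube root of 2:

    y, m = 2.0, 3           # we want the cube root of 2
    g = 1.25                # crude guess, good to only a few bits
    for i in range(6):
        g -= (g - y/g**(m-1))/m
        print(i, g)

The first couple of iterations pile up correct bits quickly; after that the 
updates are lost in the bottom bit or two of noise.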

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue27761>
_______________________________________