On Sep 4, 12:31 pm, Robert Miller <[EMAIL PROTECTED]> wrote:
> Using numpy, reasonably accurate means up to only a few places, not
> the full double precision. This leads to badness when you try to
> factor anything. If it finds that 1.0000112451357 is a root of
> (x-1)^3, you will likely get a quadratic irreducible as a factor...

Multiple roots are a very bad case for floating-point root finding.
Unless the solver includes special-purpose heuristics to detect
multiple roots, it's basically impossible to get an "accurate" result;
if your input is given to n bits of precision, you should not expect
more than n/k good bits for a k-fold root.  (As William points out,
you can get solutions that make the polynomial evaluate to a number
very close to zero; basically the problem is that with a multiple
root, there are many such numbers, and it's difficult to choose
between them.)
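
As a concrete illustration (just a sketch of my own, not anything from
Robert's code): asking numpy.roots for the roots of the expanded
coefficients of (x-1)^3 typically gives answers accurate to only about
eps**(1/3), i.e. roughly a third of the 53 bits of a double, which is
exactly the n/k prediction with k=3.

# Sketch: how much precision numpy.roots keeps on a triple root.
import numpy as np

coeffs = [1.0, -3.0, 3.0, -1.0]          # (x-1)^3, expanded
roots = np.roots(coeffs)
print(roots)                             # three values clustered near 1.0
print(np.abs(roots - 1.0))               # errors typically around 1e-5
print(np.finfo(float).eps ** (1.0/3.0))  # ~6e-6: the "n/3 good bits" estimate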

To see why this is true, consider the naïve floating-point error
analysis model where you just pretend that all floating-point numbers
are somewhat imprecise -- a little "fuzzy", if you will.  Then the
graph of a floating-point polynomial will be a fuzzy line.  Consider
the graph of (x-1)^3; this will be a fuzzy line with a horizontal
tangent at x=1, y=0.  If the fuzziness extends up and down by about j,
then it will extend left and right by about cube_root(j); any x in that
band makes the polynomial evaluate to something indistinguishable from
zero, so the root finder has no principled way to prefer one point of
the band over another.
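
To put rough numbers on that band (again only a sketch of mine, assuming
the polynomial is evaluated in its expanded form with numpy.polyval): scan
x near 1 and keep the points where the computed value is smaller than a
few machine epsilons, i.e. lost in rounding noise.

# Sketch: width of the x-interval where the computed value of (x-1)^3
# (expanded form) is drowned out by rounding error.
import numpy as np

xs = np.linspace(1 - 1e-4, 1 + 1e-4, 200001)
vals = np.polyval([1.0, -3.0, 3.0, -1.0], xs)
eps = np.finfo(float).eps
band = xs[np.abs(vals) < 4 * eps]        # the "numerical zeros"
print(band.min(), band.max())            # roughly 1 +/- (4*eps)**(1/3), ~1e-5 wide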

Carl Witty

