On 04/27/2010 12:38 AM, William Stein wrote:
RJF asks me to forward this, since it bounced for him, evidently.

Thanks.



---------- Forwarded message ----------
From: Richard Fateman <fate...@cs.berkeley.edu>
Date: Mon, Apr 26, 2010 at 10:15 PM
Subject: Re: [sage-devel] numerically stable fast univariate polynomial multiplication over RR[x]
To: William Stein <wst...@gmail.com>
Cc: sage-devel@googlegroups.com, rjf<fate...@gmail.com>


William Stein wrote:

On Mon, Apr 26, 2010 at 8:00 PM, Jason Grout
<jason-s...@creativetrax.com> wrote:

When multiplying two univariate polynomials in RR[x], Sage uses the naive
O(n^2) algorithm.  The docstring for _mul_ cites numerical stability as the
reason not to use asymptotically faster algorithms:

"Here we use the naive `O(n^2)` algorithm, as asymptotically faster
algorithms such as Karatsuba can have very inaccurate results due to
intermediate rounding errors."

If you are doing arithmetic over the reals (RR?) then there is no rounding
error. But you apparently are doing arithmetic in some approximation to
the reals -- presumably n-bit floating-point arithmetic for some
particular n.


Yes, that's what I had in mind. Specifically double-precision, but more generally for any specific fixed n.



If you want to get the right answer, then convert all the numbers to
exact rationals in QQ. Multiply, and then convert back.  Or multiply all
the coefficients by 2^k so they are all integers, and do the arithmetic
in ZZ.

Thanks.
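The exact-rational route is easy to sketch in plain Python, as a toy stand-in for Sage's QQ: `fractions.Fraction` represents every binary float exactly, so the convolution itself incurs no rounding at all, and only the final conversion back to floats rounds, once per coefficient.

```python
from fractions import Fraction

def polymul_exact(a, b):
    """Multiply coefficient lists (constant term first) exactly by
    lifting each float to a rational, then convert back at the end."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj  # exact: no rounding here
    return [float(c) for c in out]  # one correctly-rounded step per coefficient

# The thread's example: (x + 1e30)(x + 1e20). The middle coefficient
# 1e30 + 1e20 survives, unlike with Karatsuba at fixed precision.
print(polymul_exact([1e30, 1.0], [1e20, 1.0]))
```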



You can also multiply (approximately) by a fast Fourier transform
convolution, which is reputedly rather stable. The stability may in fact
directly address the notion of accuracy, at least if you are talking
about dense polynomials.


Okay, that was one of the implicit subquestions: is FFT polynomial multiplication stable at a fixed precision that does not change? (Yes, I was thinking of dense polynomials.)
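For reference, here is a toy sketch of FFT convolution in plain Python -- a radix-2 Cooley-Tukey transform with power-of-two zero padding, not Sage's code path. One caveat on "stable": FFT multiplication is stable in a normwise sense (errors are small relative to the largest coefficients), so coefficients much smaller than the largest ones can still lose all relative accuracy, just as in the 1e30/1e20 example.

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 Cooley-Tukey FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def polymul_fft(a, b):
    """Polynomial product via the convolution theorem: transform both,
    multiply pointwise, transform back, scale by 1/n."""
    n = 1
    while n < len(a) + len(b) - 1:
        n *= 2
    fa = fft([complex(c) for c in a] + [0j] * (n - len(a)))
    fb = fft([complex(c) for c in b] + [0j] * (n - len(b)))
    prod = fft([x * y for x, y in zip(fa, fb)], invert=True)
    return [(x / n).real for x in prod[: len(a) + len(b) - 1]]

print(polymul_fft([1.0, 2.0], [3.0, 4.0]))  # (1+2x)(3+4x): [3, 10, 8] up to roundoff
```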


How are you judging whether one polynomial is "messed up" relative to
another?  I am unable to find the example or the documentation by
searching from the Sage home page for f.mul_fateman, f.mul, or karatsuba;
none of these comes up with anything. So I don't know what examples you
are talking about.

Sorry for not including these.  Here are the examples:

sage: R.<x>=RR[]
sage: x._mul_? # for the documentation I mentioned
sage: a=x+R('1e30')
sage: b=x+R('1e20')
sage: a
x + 1.00000000000000e30
sage: b
x + 1.00000000000000e20
sage: a*b # naive multiplication
x^2 + 1.00000000010000e30*x + 1.00000000000000e50
sage: a._mul_karatsuba(b) # missing the x term
x^2 + 1.00000000000000e50
sage: a._mul_fateman(b) # congratulations!
x^2 + 1.00000000010000e30*x + 1.00000000000000e50
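The lost x term is reproducible with plain IEEE doubles. A single Karatsuba step computes the middle coefficient of (x + a0)(x + b0) via the standard three-multiplication identity (a0 + 1)(b0 + 1) - a0*b0 - 1*1; here the +1 terms vanish in the additions before the subtractions, which then cancel catastrophically. (This is a sketch of the identity, not Sage's exact code path.)

```python
a0, b0 = 1e30, 1e20  # constant coefficients of the two linear factors

# Naive schoolbook middle coefficient: a0*b1 + a1*b0 with a1 = b1 = 1.
naive_mid = a0 * 1.0 + 1.0 * b0

# Karatsuba middle coefficient: (a0+a1)(b0+b1) - a0*b0 - a1*b1.
# a0 + 1.0 rounds to a0 (1 is far below a0's ulp), likewise b0 + 1.0,
# so the product exactly cancels a0*b0 and only -1 remains.
karatsuba_mid = (a0 + 1.0) * (b0 + 1.0) - a0 * b0 - 1.0

print(naive_mid)      # 1.0000000001e+30 -- correct to double precision
print(karatsuba_mid)  # -1.0             -- the x term is destroyed
```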


I don't know what you are calling Fateman multiplication, so I can't
comment. Maybe you could email me whatever documentation you are
referring to.  I do NOT have Sage installed.


The docs for _mul_fateman say:

ALGORITHM: Based on a paper by R. Fateman

http://www.cs.berkeley.edu/~fateman/papers/polysbyGMP.pdf

The idea is to encode dense univariate polynomials as big integers,
instead of sequences of coefficients. The paper argues that because
integer multiplication is so cheap, encoding 2 polynomials as big
numbers and then decoding the result might be faster than popular
multiplication algorithms. This seems true when the degree is larger
than 200.
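The encoding (Kronecker substitution) is easy to sketch in plain Python for nonnegative integer coefficients; the paper and `_mul_fateman` additionally handle signs and the scaling of float coefficients, and the digit width `bits` below is just an illustrative choice that must be wide enough for the largest convolution sum.

```python
def kronecker_mul(a, b, bits=64):
    """Multiply polynomials (coefficient lists, constant term first,
    nonnegative integers) by packing each into one big integer with
    `bits`-wide digits and doing a single big-integer multiply."""
    base = 1 << bits
    pack = lambda p: sum(c << (bits * i) for i, c in enumerate(p))
    n = pack(a) * pack(b)  # one big-integer product does all the work
    out = []
    for _ in range(len(a) + len(b) - 1):
        out.append(n % base)  # peel off one digit = one coefficient
        n >>= bits
    return out

print(kronecker_mul([3, 1], [5, 1]))  # (x+3)(x+5): [15, 8, 1]
```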



I hope this has given you something to think about.

Yes; thanks!

Jason

--
Jason Grout

--
To post to this group, send an email to sage-devel@googlegroups.com
To unsubscribe from this group, send an email to 
sage-devel+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/sage-devel
URL: http://www.sagemath.org
