I didn't see any mention in your paper of the Schönhage-Strassen FFT for exact arithmetic. It would be faster than using the CRT with a large number of multiplications modulo small primes.
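For reference, the sort of small-primes CRT scheme I have in mind looks roughly like this (a minimal Python sketch of my own for illustration, not code from either paper; a real implementation would use word-sized FFT-friendly primes and a fast modular multiply rather than the schoolbook one here):

```python
# Sketch: exact integer-polynomial multiplication via images modulo small
# primes, recombined with the Chinese Remainder Theorem. Illustrative only.

def polymul_mod(a, b, p):
    """Schoolbook product of coefficient lists a, b modulo p."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] = (c[i + j] + ai * bj) % p
    return c

def crt_pair(r1, m1, r2, m2):
    """Combine x = r1 (mod m1), x = r2 (mod m2) into x mod m1*m2."""
    t = ((r2 - r1) * pow(m1, -1, m2)) % m2
    return r1 + m1 * t, m1 * m2

def polymul_crt(a, b, primes):
    """Multiply integer polynomials exactly, one modular image per prime.

    The product of the primes must exceed twice the largest product
    coefficient in absolute value.
    """
    images = [polymul_mod(a, b, p) for p in primes]
    res, mod = images[0], primes[0]
    for img, p in zip(images[1:], primes[1:]):
        res = [crt_pair(r, mod, s, p)[0] for r, s in zip(res, img)]
        mod *= p
    # Lift to the symmetric range so negative coefficients come out right.
    return [c - mod if c > mod // 2 else c for c in res]
```

With word-sized primes each image can be computed with an NTT, which is where the comparison with a single big Schönhage-Strassen multiply becomes interesting.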
No number theorist would accept the results of "exact arithmetic" done using FFTW or other floating-point FFTs unless the FFT implementation has proven accuracy bounds and a proof that these are never exceeded. Certainly this is not the case for FFTW.

I did find the idea of splitting multivariate problems into smaller problems until the exponents fit into a machine word interesting. For some reason I hadn't thought of doing that. I guess one needs to be careful that this doesn't just split the polynomial up into single monomials to be multiplied. I'd also not seen the radix tree idea before, though I probably should have.

The results of the benchmarking are interesting, but rather meaningless without a comparison against other standard implementations on a similar system. Your contention is that with short programs in a high-level language, these computations can be done quickly. How do we know that the constant factor involved isn't 6000 compared to an implementation in a low-level language with a long program?

I realise this is probably not a final version as such, so I'll leave off mentioning the handful of minor typos I noted (e.g. you have ^10 instead of ^{10} somewhere).

Bill.

On May 3, 5:57 pm, Bill Hart <goodwillh...@googlemail.com> wrote:
> That's actually a very interesting paper. I've recently been playing
> with Forth, which is a kind of "Lisp type language" (yeah, I know you
> won't agree with that), based on a data stack. I also worked through a
> book on Lisp up to the point where macros were defined, as I wanted to
> understand how that was handled in Lisp. I actually "get" Lisp now,
> but it was a roundabout way that I got there. It's clearly not for
> everyone.
>
> I've also been experimenting with how short programs can be that still
> give reasonable performance. The answer is: amazingly short, if one
> spends a lot of time thinking about it before coding.
>
> Another thing I've been enjoying lately is literate programming.
> Amazingly, it turns out to be faster to write a literate program than
> an ordinary program, because debugging takes almost no time.
>
> Anyhow, I'm going to read this paper of yours now.
>
> Bill.
>
> On May 3, 3:37 pm, rjf <fate...@gmail.com> wrote:
> > If you are not doing floating point arithmetic with machine
> > arithmetic, but using MPFR, then you are sacrificing a huge amount of
> > time. You might as well be using rational arithmetic, or the kind of
> > arithmetic that Collins once proposed, where the denominator is a
> > power of 2. That makes reducing to lowest terms relatively fast,
> > because the GCD is trivial. Compare that to boosting the overall
> > precision in MPFR to "big enough".
> >
> > If you want to read more about multiplying polynomials, you can read
> > the (unpublished, unfinished, too-long) paper here:
> > www.cs.berkeley.edu/~fateman/papers/shortprog.tex
> >
> > RJF

--
To post to this group, send an email to sage-devel@googlegroups.com
To unsubscribe from this group, send an email to sage-devel+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/sage-devel
URL: http://www.sagemath.org
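P.S. For concreteness, here is a minimal sketch (my own illustration in Python, with made-up names) of the exponent-packing flavour of the splitting idea, i.e. a Kronecker-style substitution that maps a multivariate problem to a univariate one so a fast univariate multiply can be reused. I'm not claiming this is exactly the scheme in the paper:

```python
# Hypothetical sketch: pack bivariate exponents (i, j) into a single
# machine-word-sized exponent i + j*stride, multiply univariately, unpack.
# Illustrative only; not the paper's actual algorithm.

def pack(poly, stride):
    """poly: dict {(i, j): coeff}. Pack exponent pair (i, j) -> i + j*stride."""
    out = {}
    for (i, j), c in poly.items():
        e = i + j * stride
        out[e] = out.get(e, 0) + c
    return out

def unpack(upoly, stride):
    """Invert the packing: e -> (e % stride, e // stride)."""
    return {(e % stride, e // stride): c for e, c in upoly.items()}

def mul_univariate(f, g):
    """Naive sparse univariate multiply (stand-in for an FFT-based routine)."""
    h = {}
    for e1, c1 in f.items():
        for e2, c2 in g.items():
            h[e1 + e2] = h.get(e1 + e2, 0) + c1 * c2
    return h

def mul_bivariate(f, g):
    # The stride must exceed the maximum x-degree of the product, so the
    # packed exponents of the product decode uniquely.
    stride = max(i for i, _ in f) + max(i for i, _ in g) + 1
    return unpack(mul_univariate(pack(f, stride), pack(g, stride)), stride)
```

The caveat I mentioned applies here too: the splitting has to stop once exponents fit a word, rather than continuing down to single monomials.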