On Wednesday 19 December 2007 23:42, Bill Hart wrote:
> Here is a graph that uses slightly bigger coefficients, up to 10000
> bits.
>
> http://sage.math.washington.edu/home/wbhart/flint-trunk/graphing/factor2.png
>
> I don't see the speed increase you mentioned for large coefficients,
> but this might be due to some special properties of the polynomials
> you are factoring, or perhaps due to the particular architecture you
> have. Prime field arithmetic is probably quite architecture dependent,
> so that may contribute, I'm not sure.

Yeah, in the end, I don't think I believe the speed increase for large 
coefficients either.  What I observed was probably an accident of the 
semi-random polynomials I happened to test.

Here's a snippet from my patched branch:
Loading SAGE library. Current Mercurial branch is: uni-factoring
sage: R.<x>=ZZ[]
sage: f=x^2-1
sage: timeit f._factor_ntl()  ######## NTL wins
10000 loops, best of 3: 159 µs per loop
sage: timeit f._factor_pari()
1000 loops, best of 3: 794 µs per loop
sage: f=R.random_element(6)
sage: timeit f._factor_ntl()    ######### NTL wins
1000 loops, best of 3: 316 µs per loop
sage: timeit f._factor_pari()
1000 loops, best of 3: 533 µs per loop

I'm not seeing this NTL win on your chart.  I'm wondering if this might be 
because of the cost of converting data to PARI's format.  Of course, that 
conversion is a real cost of the Sage implementation, so when choosing the 
cutoffs for Sage we need to include the conversion time, not just the raw 
factoring time.
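To make the point concrete, here is a toy sketch (hypothetical, not Sage's actual dispatch code) of how a cutoff/backend choice should be benchmarked: time each backend's *full* pipeline, conversion plus work, rather than the work alone. The backend names and cost model are made up for illustration.

```python
import timeit

def pick_backend(candidates, arg, number=1000):
    """Time each backend's full pipeline (conversion + work) and
    return the name of the fastest one.

    candidates: dict mapping name -> (convert_fn, work_fn)
    """
    best_name, best_time = None, float("inf")
    for name, (convert, work) in candidates.items():
        # Charge the backend for its conversion cost as well as its work.
        t = timeit.timeit(lambda: work(convert(arg)), number=number)
        if t < best_time:
            best_name, best_time = name, t
    return best_name

# Toy backends: one with a cheap (identity) conversion, one with an
# artificially expensive conversion that simulates copying coefficients
# out to a foreign representation.
def convert_cheap(x):
    return x

def convert_costly(x):
    return [x] * 2000  # simulate marshalling the data

def work(x):
    return sum(x) if isinstance(x, list) else x
```

With these toy costs, the backend with the expensive conversion loses even though its "work" step is identical, which is exactly why the cutoff tables have to be measured through the full Sage-level call.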

--
Joel
