Hi Joel,

Yes, it occurred to me this morning that conversion costs might be
getting in the way.

Does SAGE use the GP interpreter or the Pari library to do this
computation? If it is using the Pari library, and the data conversion
isn't quadratic time, then I don't think this should be a problem.
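To illustrate why the two interfaces can behave so differently (this is not Sage's actual code path, just a stdlib-only sketch): going through the GP interpreter typically means converting coefficients to decimal strings, and decimal conversion of an n-bit integer is superlinear in n, whereas a library call can hand over the binary representation in essentially linear time. Timing the two conversion styles on big integers:

```python
import timeit

def time_str_conversion(bits):
    """Time decimal-string conversion of a bits-bit integer
    (the kind of cost a string-based GP-pipe path would pay)."""
    n = (1 << bits) - 1
    return timeit.timeit(lambda: str(n), number=200)

def time_binary_copy(bits):
    """Time raw binary export of the same integer
    (the kind of cost a direct library call would pay)."""
    n = (1 << bits) - 1
    nbytes = (bits + 7) // 8
    return timeit.timeit(lambda: n.to_bytes(nbytes, "little"), number=200)

for bits in (2000, 4000, 8000):
    print(bits, time_str_conversion(bits), time_binary_copy(bits))
```

On typical hardware the string times grow faster than linearly with the bit size while the binary copies stay close to linear, which is the behaviour that would make conversion costs dominate for large coefficients.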

The other thing is that you are factoring a single random polynomial,
which may well be irreducible. "Factoring" generic polynomials may be
much faster, since one only needs to verify that the polynomial is
irreducible, and then no actual factoring is needed. I will prepare a
graph comparing the time to factor generic polynomials, though I don't
in general think that is such a great benchmark, unless one
specifically wants to tune for that situation (which could be a nice
idea if there were a flag that could be set to indicate that the
polynomials being factored are expected to be vaguely generic).
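To make the irreducibility point concrete: one standard way to certify that a primitive integer polynomial is irreducible without factoring it is to find a prime p, not dividing the leading coefficient, modulo which it is irreducible; Rabin's test does the mod-p check. A self-contained Python sketch (dense coefficient lists, constant term first; purely illustrative, not the code path in FLINT, NTL or Pari):

```python
def _trim(a):
    """Drop trailing zero coefficients (keep at least [0])."""
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def _sub(a, b, p):
    m = max(len(a), len(b))
    return _trim([((a[i] if i < len(a) else 0) -
                   (b[i] if i < len(b) else 0)) % p for i in range(m)])

def _rem(a, b, p):
    """Remainder of a modulo b over F_p (b need not be monic)."""
    a, b = _trim([c % p for c in a]), _trim([c % p for c in b])
    if len(b) == 1:                      # division by a nonzero constant
        return [0]
    inv = pow(b[-1], p - 2, p)           # inverse of the leading coefficient
    while a != [0] and len(a) >= len(b):
        c = a[-1] * inv % p
        d = len(a) - len(b)
        for j in range(len(b)):
            a[d + j] = (a[d + j] - c * b[j]) % p
        a = _trim(a)
    return a

def _mulmod(a, b, f, p):
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                prod[i + j] = (prod[i + j] + ai * bj) % p
    return _rem(prod, f, p)

def _powmod(a, e, f, p):
    """a(x)^e mod f(x) over F_p, by square-and-multiply."""
    result, base = [1], _rem(a, f, p)
    while e:
        if e & 1:
            result = _mulmod(result, base, f, p)
        base = _mulmod(base, base, f, p)
        e >>= 1
    return result

def _gcd(a, b, p):
    while b != [0]:
        a, b = b, _rem(a, b, p)
    inv = pow(a[-1], p - 2, p)           # normalise the gcd to monic
    return _trim([c * inv % p for c in a])

def _prime_divisors(n):
    out, d = [], 2
    while d * d <= n:
        if n % d == 0:
            out.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        out.append(n)
    return out

def is_irreducible_mod_p(f, p):
    """Rabin's irreducibility test for a monic f over F_p.

    f is a coefficient list, constant term first.  f is irreducible over
    F_p iff x^(p^n) = x mod f and, for every prime q dividing n = deg f,
    gcd(x^(p^(n/q)) - x, f) = 1."""
    f = _trim([c % p for c in f])
    n = len(f) - 1
    if n < 1 or f[-1] != 1:
        raise ValueError("expected a monic polynomial of degree >= 1")
    if n == 1:
        return True
    x = [0, 1]
    for q in _prime_divisors(n):
        h = x
        for _ in range(n // q):
            h = _powmod(h, p, f, p)      # iterate the Frobenius map
        if _gcd(f, _sub(h, x, p), p) != [1]:
            return False
    h = x
    for _ in range(n):
        h = _powmod(h, p, f, p)
    return _sub(h, x, p) == [0]
```

For a squarefree primitive polynomial, a single prime p with `is_irreducible_mod_p(f, p)` true certifies irreducibility over the integers; the converse fails (some integer polynomials, such as x^4 + 1, are reducible modulo every prime), so this can only serve as a fast pre-check before real factoring.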

Bill.

On 20 Dec, 10:10, "Joel B. Mohler" <[EMAIL PROTECTED]> wrote:
> On Wednesday 19 December 2007 23:42, Bill Hart wrote:
>
> > Here is a graph that uses slightly bigger coefficients, up to 10000
> > bits.
>
> >http://sage.math.washington.edu/home/wbhart/flint-trunk/graphing/fact...g
>
> > I don't see the speed increase you mentioned for large coefficients,
> > but this might be due to some special properties of the polynomials
> > you are factoring, or perhaps due to the particular architecture you
> > have. Prime field arithmetic is probably quite architecture dependent,
> > so that may contribute, I'm not sure.
>
> Yeah, in the end, I don't think I believe the speed increase for large
> coefficients either.  I think what I did observe was probably a result of
> accidents with semi-random polynomials.
>
> Here's a snippet from my patched branch:
> Loading SAGE library. Current Mercurial branch is: uni-factoring
> sage: R.<x>=ZZ[]
> sage: f=x^2-1
> sage: timeit f._factor_ntl()  ######## NTL wins
> 10000 loops, best of 3: 159 µs per loop
> sage: timeit f._factor_pari()
> 1000 loops, best of 3: 794 µs per loop
> sage: f=R.random_element(6)
> sage: timeit f._factor_ntl()    ######### NTL wins
> 1000 loops, best of 3: 316 µs per loop
> sage: timeit f._factor_pari()
> 1000 loops, best of 3: 533 µs per loop
>
> I'm not seeing this NTL win on your chart.  I'm wondering if this might be
> because of data-conversion costs to pari.  Of course, that data conversion is
> a cost of the sage implementation, so when making the cutoff choices for sage
> we need to include the conversion costs.
>
> --
> Joel
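On the cutoff point Joel raises: once conversion costs are folded into the per-backend timings, the choice can be reduced to a small dispatch on degree and coefficient size. The function name and cutoff values below are hypothetical placeholders for illustration, not Sage's actual interface:

```python
def choose_factor_backend(degree, coeff_bits,
                          ntl_max_degree=32, ntl_max_bits=512):
    """Pick a factoring backend for ZZ[x] from tuned cutoffs.

    ntl_max_degree and ntl_max_bits are placeholder values; real cutoffs
    would come from benchmarks that include each backend's data-conversion
    overhead, not just the raw factoring time.
    """
    if degree <= ntl_max_degree and coeff_bits <= ntl_max_bits:
        return "ntl"
    return "pari"

print(choose_factor_backend(2, 16))       # small input -> "ntl"
print(choose_factor_backend(200, 10000))  # large input -> "pari"
```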