On Dec 3, 2007 10:26 AM, Bill Hart <[EMAIL PROTECTED]> wrote:
>
>
>
> On 3 Dec, 16:16, "William Stein" <[EMAIL PROTECTED]> wrote:
>
> > By the way, PARI is *not* competitive here:
> >
> > sage: nn = gp(n)
> > sage: gp.eval('gettime; a = %s^20000; gettime/1000.0'%nn.name())
> > '0.42800000000000000000000000000000000000'
> > sage: gp.eval('gettime; a = %s^200000; gettime/1000.0'%nn.name())
> > '14.084000000000000000000000000000000000'
>
> When you use polmods in Pari, it is actually faster than SAGE:
>
> ? f=x^3 + 37/10*x^2 - 8/15*x + 1/60
> ? a=Mod(x,f)
> ? #
> ? a^20000;
> time = 120 ms.
> ? a^200000;
> time = 3,865 ms.
>
> For a^2000000 Pari chokes and takes about 3 times longer than SAGE.
> But the algorithm it uses is not optimal for such things.
>
> I think scaling, doing the arithmetic, then scaling back sounds like a
> good option. Does SAGE really do the computation without doing that in
> the first place?

I think Sage uses a generic square-and-multiply powering algorithm: it writes
the exponent in binary and does one matrix multiplication (or squaring) per step.
For each of those multiplications, Sage *does* rescale the matrix, multiply
over the integers, then divide through again.  So it is doing the rescaling,
but roughly log_2(exponent) times instead of once.  This would probably be
easy to fix with a few lines of code.  For small matrices, multiplication
over ZZ is currently done using the classical O(n^3) algorithm.
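
To make the suggested fix concrete, here is a rough sketch in plain Python
(not Sage's actual code; fractions.Fraction stands in for QQ, and the function
names are made up for illustration) of clearing denominators once, powering
over ZZ, and dividing by d^e only at the end:

    from fractions import Fraction
    from math import lcm  # Python 3.9+

    def mat_mul(A, B):
        # classical O(n^3) matrix multiplication
        n = len(A)
        return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    def mat_pow(A, e):
        # generic square-and-multiply, with rational entries the whole time
        n = len(A)
        R = [[Fraction(1 if i == j else 0) for j in range(n)] for i in range(n)]
        while e:
            if e & 1:
                R = mat_mul(R, A)
            A = mat_mul(A, A)
            e >>= 1
        return R

    def mat_pow_scaled(A, e):
        # clear denominators once: (d*A)^e = d^e * A^e, so power over ZZ
        # and divide by d^e at the very end
        n = len(A)
        d = lcm(*[x.denominator for row in A for x in row])
        Z = [[(x * d).numerator for x in row] for row in A]  # integer matrix d*A
        P = [[int(i == j) for j in range(n)] for i in range(n)]
        k = e
        while k:
            if k & 1:
                P = mat_mul(P, Z)
            Z = mat_mul(Z, Z)
            k >>= 1
        s = Fraction(1, d ** e)
        return [[x * s for x in row] for row in P]

    # companion matrix of f = x^3 + 37/10*x^2 - 8/15*x + 1/60 from the timings above
    F = Fraction
    A = [[F(0), F(0), F(-1, 60)],
         [F(1), F(0), F(8, 15)],
         [F(0), F(1), F(-37, 10)]]
    assert mat_pow(A, 50) == mat_pow_scaled(A, 50)

The point is just that the scale/unscale happens once in the second version,
rather than at each of the ~2*log_2(exponent) multiplications inside the loop.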

So... it's surprising to me that Sage does well on this benchmark at all,
and there is clearly room for improvement.

William
