I think repeated squaring is not more efficient in all cases.  Over Z,
for example, it only wins if your multiplication algorithm is faster
than the naive one (which is O(n^2) in the number of bits).
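
To make precise what I mean by the two approaches, here is a plain
Python sketch (nothing Sage-specific; x and n are ordinary Python
ints):

    def power_naive(x, n):
        # n - 1 multiplications; the accumulated product grows by
        # about log2(x) bits per step
        result = 1
        for _ in range(n):
            result *= x
        return result

    def power_repeated_squaring(x, n):
        # O(log n) multiplications, but the operands roughly double
        # in bit size at every squaring step
        result = 1
        while n > 0:
            if n & 1:
                result *= x
            x *= x
            n >>= 1
        return result

With schoolbook multiplication the few huge squarings at the end cost
about as much as all the small steps of the naive loop combined; with
a subquadratic multiplication algorithm the repeated squaring clearly
wins.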

So in the case of elliptic curves it really depends on the efficiency
of the addition steps and on how fast
the numerators/denominators grow. I am sure the elliptic curve experts
on this forum can say more about this.
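
To make the growth concrete, take the same equation as in the thread
below, y^2 = x^3 + x + 3, but viewed over Q, with P = (-1, 1).  Then

    2*P = (6, -15)
    4*P = (1081/900, -65771/27000)

and the numerators and denominators keep exploding from there (the
height of n*P grows roughly like n^2), so each further addition step
has to work with much larger rationals.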

Michel


On Aug 14, 10:41 pm, David Harvey <[EMAIL PROTECTED]> wrote:
> On Aug 14, 2007, at 4:19 PM, Alex Ghitza wrote:
>
> > Hi,
>
> > I've looked at ticket #59, in which David Kohel says:
>
> >   William, my student noticed some slow performance with the
> >   elliptic curve group law.  I think there was a huge overhead in
> >   duplication:
>
> >   sage: E = EllipticCurve([GF(101)(1),3])
> >   sage: P = E([-1,1,1])
> >   sage: timeit 2*P
> >   100 loops, best of 3: 3.81 ms per loop
> >   sage: timeit P+P
> >   1000 loops, best of 3: 1.81 ms per loop
>
> >   Basically n*P was passing through all sorts of high-level layers for
> >   group schemes, abelian groups, and the like.
>
> > It turns out that n*P is inherited from multiplication in
> > AdditiveGroupElement, which itself goes all the way to ModuleElement.
> > But at that point, it is implemented efficiently (by repeated
> > doubling).  It seems to me that the loss of efficiency in comparison
> > with P+P+...+P is all in the overhead.
>
> > I experimented with this and noticed that P+...+P is faster than n*P
> > as long as n is smaller than 10; after that, the repeated doubling
> > starts to bear fruit.  So inside ell_point.py, I overloaded the
> > multiplication operator, simply making it return P+...+P if n is less
> > than 10, and return n*P if n is larger than 10 (see below).
>
> > This is not a very elegant solution.  For one thing, the magic number
> > 10 that works on my architecture (Intel Core Duo running Gentoo) might
> > not be right for a completely different machine.  Is there a way to
> > automatically generate this number when the user compiles sage from
> > source (by starting with a value like 100, say, and then doing a
> > "binary search" until it hits the optimal number)?
>
> Even on a single architecture, the correct crossover point will vary
> between different base fields.
>
> For example if the base field is Q, the best crossover will depend on
> the size of the coordinates. If the coordinates are enormous, it's
> probably best to do repeated doubling as much as possible. Similarly
> if you are working over GF(p) for some very large p, you'll find a
> different crossover from GF(101).
>
> This is going to be a difficult problem in general. I think a better
> approach is to try to reduce the overhead, i.e. to get the n*P
> pathway working faster, and/or to reduce the overhead in the addition
> law itself.
>
> david
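
For what it is worth, the crossover Alex describes could look roughly
like the sketch below.  This is plain Python only: CROSSOVER,
scalar_multiply and the zero argument are illustrative names, not the
actual code in ell_point.py.

    CROSSOVER = 10  # machine- and base-field-dependent, as discussed above

    def scalar_multiply(P, n, zero):
        # compute n*P (n >= 0) in an additive group; zero is the identity
        if n < CROSSOVER:
            # plain repeated addition: no doubling, low overhead for small n
            result = zero
            for _ in range(n):
                result = result + P
            return result
        # double-and-add, the additive analogue of repeated squaring
        result, Q = zero, P
        while n > 0:
            if n & 1:
                result = result + Q
            Q = Q + Q
            n >>= 1
        return result

The double-and-add branch is just the standard binary method; the only
tuning knob is CROSSOVER, which is exactly the constant the thread is
trying to pin down.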

