Even four times as slow as Magma is still fantastic (for GCDs).
While GCDs are not my personal interest, I keep an eye on the
subject.

I am interested in progress here, and would be willing to promote some
cooperation ideas to the Singular
group, if you have a good plan.
Michael

On 5 Feb., 01:09, Bill Hart <goodwillh...@googlemail.com> wrote:
> For those still interested in this thread, I have completed the Magma
> timings again on an identical machine which does not suffer from the
> timing irregularities of the other machine. I've done this very
> carefully, taking best of 5 timings.
>
> So I report here the original giac timings from before, together with
> the Magma 2.15-3 times from today. I have been unable to redo the giac
> timings due to an issue with the libc on the other machine, but I
> will report them once we have it sorted out.
>
> giac:
>
> 1var: 0.00034, 0.06, 0.085
> 2var: 0.0011, 0.0046, 0.048, 0.2
> 3var: 0.014, 0.15, 0.63
> 4var: 0.016, 0.07, 0.18, 1.03
>
> mod-1var: 0.00038, 0.02, 0.026
> mod-2var: 0.00052, 0.0024, 0.03, 0.15 (0.112)
> mod-3var: 0.004, 0.0085, 0.22 (0.198)
> mod-4var: 0.012, 0.048, 0.12 (0.12)
>
> Magma:
>
> 1var: 0.00047, 0.01562, 0.03682
> 2var: 0.00138, 0.00505, 0.04839, 0.1620
> 3var: 0.01064, 0.07207, 0.2681
> 4var: 0.00905, 0.04372, 0.0978, 0.4172
>
> mod-1var: 0.00024, 0.00811, 0.01840
> mod-2var: 0.00069, 0.00274, 0.02462, 0.0843
> mod-3var: 0.00109, 0.00641, 0.05365
> mod-4var: 0.00563, 0.02750, 0.0642
>
> The results are mixed. For large degree or large numbers of variables,
> Magma seems to be faster, by up to a factor of about 4. However, for
> small problems giac does well, up to about 30% faster. I'll report
> the final timings in a new thread when we've sorted out the issues
> with the libc on this machine. The giac timings will almost certainly
> go down in some cases.
>
> Bill.
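
For reference, a best-of-five harness in the spirit Bill describes might
look like the following C++ sketch; run_gcd is a placeholder for whichever
GCD routine is being timed, not part of any of the benchmarked systems.

    // Hypothetical best-of-5 wall-clock harness; run_gcd() stands in
    // for the operation under test.
    #include <chrono>

    double time_best_of_5(void (*run_gcd)()) {
        double best = 1e300;
        for (int i = 0; i < 5; i++) {
            auto t0 = std::chrono::steady_clock::now();
            run_gcd();                        // operation under test
            auto t1 = std::chrono::steady_clock::now();
            double secs = std::chrono::duration<double>(t1 - t0).count();
            if (secs < best) best = secs;     // keep the fastest of 5 runs
        }
        return best;
    }

Taking the best rather than the mean of the five runs filters out cache
warm-up and scheduler noise, which matters at the sub-millisecond scale of
the timings above.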
>
> On 27 Jan, 00:06, Bill Hart <goodwillh...@googlemail.com> wrote:
>
> > In general there aren't global variables, with a couple of important
> > exceptions. One is that the memory manager, particularly the stack-based
> > manager, is not currently threadsafe. But as releasing memory back to
> > the stack is actually done by calling a function rather than via a
> > macro, this can definitely be done on a per-thread basis. So it's not
> > all that difficult to fix.
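
A minimal sketch of the per-thread fix described here, assuming C++11
thread_local; the function names are modelled on FLINT's stack manager,
but the code is illustrative, not FLINT's.

    // Because release goes through a function rather than a macro, the
    // backing stack can be made per-thread without changing call sites.
    #include <cstdlib>
    #include <vector>

    thread_local std::vector<void*> frames;   // one stack per thread

    void* flint_stack_alloc(std::size_t n) {  // hypothetical signature
        void* p = std::malloc(n);
        frames.push_back(p);                  // push onto this thread's stack
        return p;
    }

    void flint_stack_release() {              // hypothetical signature
        std::free(frames.back());             // pop this thread's top frame;
        frames.pop_back();                    // threads never contend
    }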
>
> > There is also a function fmpz_poly_mul_modular, which is currently not
> > threadsafe but will be soon. Also, some of the random functions are
> > not threadsafe. They might return garbage if you try to call them
> > from multiple threads. ;-o
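
One way to make random functions safe, sketched here with C++11 facilities
rather than FLINT's actual random API: give each thread its own generator
state instead of sharing a global one.

    // Illustrative only: a shared global generator races under threads,
    // but thread_local state does not.
    #include <random>

    thread_local std::mt19937_64 rng{std::random_device{}()};

    unsigned long long random_limb() {  // hypothetical name
        return rng();                   // touches only this thread's state
    }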
>
> > Those problems should be fixed eventually. I don't know precisely
> > when; it's just a matter of finding the time. Soon, I hope. At any
> > rate, FLINT does not have the sorts of thread-safety problems that
> > NTL has. The library has been designed from the start with threading
> > in mind.
>
> > Bill.
>
> > On 26 Jan, 14:27, parisse <bernard.pari...@ujf-grenoble.fr> wrote:
>
> > > > Well, FLINT ought to be faster at plain univariate GCD than NTL,
> > > > whether over Z or Z/pZ. You probably need to use the functions in the
> > > > NTL-interface module to convert between NTL format polynomials and
> > > > FLINT polynomials.
>
> > > Moreover, I guess your library does not have global variables, hence
> > > can be called from concurrent threads, right?
>
>
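
The NTL-to-FLINT round trip suggested above might look like the following
sketch. The converter names follow FLINT's NTL-interface module, but treat
them as assumptions and check NTL-interface.h for the exact signatures.

    #include <NTL/ZZX.h>
    #include "fmpz_poly.h"
    #include "NTL-interface.h"

    void gcd_via_flint(NTL::ZZX& g, const NTL::ZZX& a, const NTL::ZZX& b) {
        fmpz_poly_t fa, fb, fg;
        fmpz_poly_init(fa); fmpz_poly_init(fb); fmpz_poly_init(fg);
        ZZX_to_fmpz_poly(fa, a);        // convert NTL -> FLINT
        ZZX_to_fmpz_poly(fb, b);
        fmpz_poly_gcd(fg, fa, fb);      // univariate GCD over Z in FLINT
        fmpz_poly_to_ZZX(g, fg);        // convert FLINT -> NTL
        fmpz_poly_clear(fa); fmpz_poly_clear(fb); fmpz_poly_clear(fg);
    }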