On Feb 6, 4:11 am, mabshoff wrote:
My apologies for the double post.
> Yes, I took Singular 3-0-3 using slimgb and computed the DegRevLex as
> well as Lex basis of homogenized Karatsuba 7 for some F_p (p=31007
> maybe?) and the rationals.
>
> * For DegRevLex and F_p Singular was slightly f
On Feb 6, 3:30 am, Martin Albrecht wrote:
> > * libSingular is significantly more effort to build and the interface
> > isn't exactly clean
>
> That can be, and is being, addressed.
Sure, things are getting better, and having other external uses, e.g.
GFan as you mention below, will make this all better.
> * libSingular is significantly more effort to build and the interface
> isn't exactly clean
That can be, and is being, addressed.
> * CoCoALib is beautifully designed and tight: modern C++ code
> without going nuts with templates. I have built it using g++, MSVC and
> the Sun compiler.
> * licens
On Feb 6, 2:28 am, parisse wrote:
> > So, you are using Cocoa or cocoalib?
>
> cocoalib
>
> > Being an expert on the Groebner bases functionality, I can say
> > that Singular's biggest strength in this area is supporting many,
> > many implementations of different algorithms and m
> I think this would give a good basis.
> Actually the Singular group is also interested in a gcd which also
> works for finite algebraic extensions of Q and GF(p).
> And I would be happy if they stopped discussing it behind
> closed doors and actually wasting time, as the efforts co
Hi!
> It seems possible to cooperate on two levels:
> - level 1: I can try to write a paper where I describe some of the
> ideas I have put in my implementation of the modular multivariate gcd
> algorithm, in order to help singular or flint implement them, maybe
> better than I did myself in giac
On 5 Feb, 08:54, Michael Brickenstein wrote:
Even four times as slow as Magma is still fantastic (for GCDs).
While gcd's are not my personal interest, I keep an eye on the
subject.
I am interested in progress, and would be willing to promote some
cooperation ideas to the Singular
group, if you have a good plan.
Michael
On 5 Feb, 01:09, Bill Hart wrote:
For those still interested in this thread, I have completed the Magma
timings again on an identical machine which does not suffer from the
timing irregularities of the other machine. I've done this very
carefully, taking best of 5 timings.
So I report here the original giac timings that I reporte
In general there aren't global variables, with a couple of important
exceptions. One is the memory manager: the stack based manager, in
particular, is not currently threadsafe. But as releasing memory back to
the stack is actually done by calling a function rather than some
macro, this can definitel
I found a small trick to unroll some loops, this improve a little bit
the modular timings, I have updated giac_benchmark.tgz.
--~--~-~--~~~---~--~~
To post to this group, send email to sage-devel@googlegroups.com
To unsubscribe from this group, send email to
sage-d
> Well, FLINT ought to be faster at plain univariate GCD than NTL,
> whether over Z or Z/pZ. You probably need to use the functions in the
> NTL-interface module to convert between NTL format polynomials and
> FLINT polynomials.
>
Moreover, I guess your library does not have global variables, hen
On Jan 25, 10:22 pm, Bill Hart wrote:
FLINT is faster of course! (Well for 1d anyway, :-) )
But seriously, Bernard's original contention that giac is within 1.5
times what Magma will do is about right, and for small problems giac
is actually faster! I haven't done timings for Magma's Z[x1, ..., xn]
GCD yet, only the Z/pZ[x1, ..., xn
On Sun, Jan 25, 2009 at 12:27 PM, Bill Hart wrote:
Thanks for the executable. It worked perfectly. Here are the times
that it reports on the 2.4Ghz Opteron:
1d: 0.00034, 0.06, 0.085
2d: 0.0011, 0.0046, 0.048, 0.2
3d: 0.014, 0.15, 0.63
4d: 0.016, 0.07, 0.18, 1.03
mod-1d: 0.00038, 0.02, 0.026
mod-2d: 0.00052, 0.0024, 0.03, 0.15 (0.112)
mod-3d: 0.0
Whoops, you didn't mention assembly language. I think I got that mixed
up with another conversation.
Here are the times for the other GCD's on Magma on the Opteron 2.4Ghz:
2D:
0.00089s
0.00374s
0.0256s
0.08549s
3D:
0.00185s
0.0069s
0.05491s
4D:
0.006s
0.028s
0.071s
I don't have giac times on the Opteron to compare. It would be
interesting to see the times.
Bill.
We ripped longlong.h out of GMP, which has assembly code for this
sort of thing. We use precomputed inverses for the modular reduction.
Somehow I screwed up and put the wrong times for the Opteron for giac.
Here are all the 1-d times again:
FLINT (2.4Ghz Opteron):
1d: 0.00018s, 0.0055s, 0.021s
mod-1d: 0.00046s, 0.00419s, 0.00979
Magma (2.4Ghz Opteron):
1d: 0.00168s, 0.01692s and 0.0453s
mod-1d: 0.00067s, 0.00981s, 0.0244s
Gia
> No, in my case the maximum modulus is 2^63 for now on a 64 bit
> machine.
>
How do you make multiplication followed by modular reduction? I don't
know how to do that in C (I mean, there is no quadruple long or
int128_t in gcc?)
On 23 Jan, 07:44, parisse wrote:
> Your timings are surprisingly low. Can you tell me which files perform
> the Z/pZ[x] GCD? I'd be very interested to see your code.
>
The division code in Z/pZ is in the file modpoly.cc, lines 1621->1723.
It depends on invmod(int,int) for modular inverse (in gen.cc) and the
asm multiplication fo
Oh, sorry, I completely forgot to do the 2d-4d tests in Magma. I'll
try and do that in the next few days. Sooo busy at present.
Your timings are surprisingly low. Can you tell me which files perform
the Z/pZ[x] GCD? I'd be very interested to see your code.
You also mention that you made improvem
> 1d: 0.00168s, 0.01692s and 0.0453s
> mod-1d: 0.00067s, 0.00981s, 0.0244s
>
> Your timings (on a 2.2 Ghz Opteron!!)
>
> 1-d:0.00018, 0.009, 0.034
> mod 1-d: 7.5e-5, 0.0022, 0.007
>
> So you look to be well ahead of Magma! Did you do something special
> for small moduli, such as packing multiple c
Wait a minute. I just realised something. The mod-1var GCD's in that
file are polynomials over GF(43051). I thought it meant it was the
modular GCD algorithm, meaning they were polynomials over ZZ whose GCD
was computed using the modular algorithm!!
The 1d polynomial GCD's over ZZ take less than
> I can time Magma for you on a 2.4GHz Opteron. I don't have access on a
> Core 2, but other people on this list do.
>
It would indeed be interesting to have timings of giac (with the icas
executable, available on my homepage at sage) and magma on the same
PC.
> That's interesting. The Opteron t
On 22 Jan, 08:11, parisse wrote:
> Thanks for the nice paper by the way. :-)
I'm glad it was useful to someone despite not being published in a
standard journal.
> Do the timings you list include checking that the GCD is correct? Or
> are the divisions still to be done?
>
The timings I give include checking, but I do not use
Just as well I didn't take the piss out of the paper hey!! :-)
Bill.
On 21 Jan, 21:30, Ondrej Certik wrote:
On Wed, Jan 21, 2009 at 1:26 PM, Bill Hart wrote:
>
> Ha, yep, stupid of me not to check the name of the person posting to
> the list! Thanks for the nice paper by the way. :-)
Haha, that made my day. :)
Ondrej
Ha, yep, stupid of me not to check the name of the person posting to
the list! Thanks for the nice paper by the way. :-)
I now see why heuristic gcd is inefficient for multivariable
polynomials. This seems to be a feature of multivariable algorithms. I
recently thought about using Kronecker subst
Univariate polynomial GCD is very different from multivariate
polynomial GCD.
I know the heuristic gcd algorithm and the fact that for univariate
polynomials, if you take z large enough (the bound is related to the
resultant of the polynomials), you can not have a false positive,
hence you don't n
I should point out the paper linked is *NOT* my paper.
Bill.
On 21 Jan, 20:03, Bill Hart wrote:
I worked through this problem in detail with univariate polynomial gcd
recently. It proved to be very difficult to beat Magma, though on the
whole I did in the end:
http://sage.math.washington.edu/home/wbhart/flint-trunk/graphing/gcd19.png
(Blue points are where I win, red where Magma wins - but
Good guess.
I'm developing a heuristic for the multivariate gcd over Z in
Singular. And I must admit not having thought enough about the
advantages of a statistical analysis. Its transparency beats SA and
NN. Now I'm working on the statistical approach. At the moment, I'm
trying to define the prob
On Apr 24, 11:19 pm, Achim wrote:
> Has anyone experience with simulated annealing?
well, a bit, what exactly do you want to do? i have no idea so i'm
just guessing:
you extract "statistical" parameters out of the input and then choose
the appropriate algorithm? SA or simi
Hello folks!
Thank you, Michael, for introducing me.
To me it seems sensible that a fast gcd implementation should consist
of a bunch of algorithms and a heuristic that chooses one of them
depending on the input. Currently I'm working on the heuristic. I'm
thinking about letting the comput