Hello, I exchanged a couple of emails with Martin Albrecht over the last few days about the benchmarks he ran comparing multivariate polynomial arithmetic in Singular and Magma. I then ran some of those benchmarks with CoCoALib 0.97CVS and had some suggestions on how to do things differently. After searching for an existing set of benchmarks I pretty much came up empty-handed.
I could only find one recent paper: FRPOLY: A Benchmark Revisited (see http://www.brics.dk/~hosc/local/LaSC-4-2-pp155-164.pdf ) - but I didn't look for papers very long either, and I currently do not have access to my library. Any pointers would be appreciated. The Lisp code referred to in FRPOLY can be found at http://www.koders.com/lisp/fid5977A8A29DAE1A62638CE7BEFCE391E4C2CCF2C3.aspx

I would suggest something along the following lines for the MVPoly benchmark: have a matrix of test cases:

* number of indeterminates:
  - small (3 indeterminates)
  - medium (10 indeterminates)
  - large (25 indeterminates)
* length of polynomials:
  - small (3 monomials)
  - medium (10 monomials)
  - large (25 monomials)
* ring/algebra:
  - Z (not very interesting - at least to me :))
  - Z2 (a special case for certain fields like crypto research - I do care about that)
  - Zp small, i.e. p = 32003
  - Zp huge, i.e. p = some prime beyond 2^32 (not everybody has that implemented, at least not efficiently)
  - Q small
  - Q large
  - Weyl algebras, non-commutative algebras in general - rather exotic, not present everywhere
* term ordering - not sure about the impact of this, so it might disappear:
  - DegRevLex
  - DegLex
  - Lex
  - ordering matrix

Depending on how many options you select, after computing the Cartesian product of all selected options you end up with lots of tests (see the sketch at the end of this message). To graph the results I really like what was done for FLINT: plotting the cases in a plane with colored circles whose size signifies who was faster and by what factor.

Operations:
- addition
- multiplication
- power small
- power huge
- GCD

Addition should be pretty boring, and small powers probably are, too. Can anybody else suggest more operations?

A couple of remarks:

* I would measure time *and* memory.

* Instead of a fixed number of repetitions for each benchmark I would run for a number of seconds and count the number of operations completed in that timeframe (see the second sketch at the end of this message). The time allocated would depend on how difficult a certain task is, i.e. multiplying with small coefficients is obviously much cheaper than multiplying with large coefficients in Q. The total runtime of a given benchmark would be the product of the weights assigned to each characteristic, e.g. 25 indeterminates would give a factor of 5, 25 monomials a factor of 3, Q small a factor of 1.5, so 5*3*1.5 = 22.5 times longer than the baseline. That way the performance of the more difficult operations would not vary as much statistically, and the total time for the benchmark run could be computed ahead of time - so you know how long a break you can take :)

* Use something like a 1 GHz P3 as a baseline. If you feel like running the benchmark on your Sparc Station 20 in the basement you will not get any useful result, but hopefully the benchmark will not lose its value in a couple of years and the results will remain comparable, unlike say SPEC. You can obviously always dial up the length of the polynomials as well as the number of indeterminates.

DISCLAIMER: I am a developer of CoCoALib, so I would like to get input from the developers of Singular, Magma and any other project interested in participating, in order to avoid favoring my own system.

Cheers,
Michael

PS: We will be releasing CoCoALib 0.98 in roughly 4 to 5 days, so if I don't reply quickly I am probably doing last-minute fixes.
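To make the matrix and the weighting scheme above concrete, here is a minimal Python sketch (not tied to CoCoALib, Singular or Magma) that enumerates the Cartesian product of the selected options and derives a per-case time budget as the product of per-option weights. The only weights taken from the text are the 5 (25 indeterminates), 3 (25 monomials) and 1.5 (Q small) of the 22.5x example; all the other values are placeholders that would need calibrating.

    from itertools import product

    # Hypothetical weights; only the three marked values come from the
    # example in the text (5 * 3 * 1.5 = 22.5 times the baseline).
    INDETS     = {"3 indets": 1.0, "10 indets": 2.0, "25 indets": 5.0}   # 5 from the text
    LENGTHS    = {"3 terms": 1.0, "10 terms": 1.5, "25 terms": 3.0}      # 3 from the text
    RINGS      = {"Z2": 1.0, "Zp small": 1.0, "Zp huge": 2.0,
                  "Q small": 1.5, "Q large": 4.0}                        # 1.5 from the text
    ORDERINGS  = {"DegRevLex": 1.0, "DegLex": 1.0, "Lex": 1.0}
    OPERATIONS = {"multiply": 1.0, "power huge": 3.0, "GCD": 2.0}

    BASELINE_SECONDS = 1.0  # time allotted to the cheapest case on the reference machine

    def benchmark_cases():
        """Yield (description, time budget in seconds) for every cell of the matrix."""
        for combo in product(INDETS.items(), LENGTHS.items(), RINGS.items(),
                             ORDERINGS.items(), OPERATIONS.items()):
            labels, weights = zip(*combo)
            budget = BASELINE_SECONDS
            for w in weights:
                budget *= w
            yield " / ".join(labels), budget

    if __name__ == "__main__":
        cases = list(benchmark_cases())
        total = sum(budget for _, budget in cases)
        # The total is known before the run starts, which is the point:
        # you can tell in advance how long the whole benchmark will take.
        print(f"{len(cases)} cases, total budget {total:.1f} s")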
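And a sketch of the measurement loop itself: run until the allotted time is used up and count how many operations completed. Here run_operation is a hypothetical placeholder for whatever call the system under test provides (it is not an actual API of CoCoALib, Singular or Magma), and peak memory would have to be sampled separately, e.g. from the OS, which this sketch does not attempt.

    import time

    def operations_per_second(run_operation, budget_seconds):
        """Repeat run_operation() until budget_seconds have elapsed;
        return (number of completed operations, operations per second)."""
        completed = 0
        start = time.perf_counter()
        deadline = start + budget_seconds
        while time.perf_counter() < deadline:
            run_operation()
            completed += 1
        elapsed = time.perf_counter() - start
        return completed, completed / elapsed

    # Example with a trivial stand-in operation:
    # n, rate = operations_per_second(lambda: [i * i for i in range(1000)], 2.0)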