I have added the benchmark for Maple 15 to http://www.imcce.fr/trip/features.php. I take care to report the time spent in the multiplication step of SDMP, not the time spent constructing the Maple DAG. For example, I use the following code for the sparse example:
with(CodeTools):
sdmp:-info(1);
degre := 16;
f := expand((1+x+y+2*z^2+3*t^3+5*u^5)^degre):
g := expand((1+u+t+2*z^2+3*y^3+5*x^5)^degre):
kernelopts(numcpus=1);
sdmp:-info(1);                                # Maple 15
#sdmp:-num_cpus(1): infolevel[sdmp] := 2:     # Maple 14
p := CodeTools:-Usage(expand(f*g)):

The output gives:

> p:=CodeTools:-Usage(expand(f*g)):
mul: 25.739 sec, 25.730 cpu  99%
dag: 56.900 sec, 56.900 cpu  99%
memory used=5927.6MB, alloc=5930.2MB, time=92.73
memory used=5.78GiB, alloc change=5.78GiB, cpu time=93.01s, real time=93.03s

So I report only 25s instead of 93s. The DAG construction is always sequential. With 4 cpus, the output is:

> p:=CodeTools:-Usage(expand(f*g)):
mul: 10.290 sec, 39.800 cpu  386%
dag: 57.142 sec, 57.140 cpu  99%
memory used=5927.6MB, alloc=5930.2MB, time=107.14
memory used=5.78GiB, alloc change=5.78GiB, cpu time=107.43s, real time=77.93s

Mickaël

For memory allocation, I use my own memory allocators, described in
http://portal.acm.org/citation.cfm?id=1837210.1837220

Mickaël

On May 14, 1:39 pm, parisse <bernard.pari...@ujf-grenoble.fr> wrote:
> > So that issue still exists in Maple 15, however it is generally much better
> > because memory is recycled.
>
> As far as I understand on my side, the problem with parallelization is
> that malloc locks threads, therefore I can only parallelize code that
> does not allocate memory. That's why I cannot improve the timings much
> in the sparse case with more cores: I need to allocate GMP integers for
> a lot of terms to build the polynomial once the multiplication itself
> is done using immediate int128. I believe TRIP uses its own allocator,
> probably with distinct heaps for each thread.
>
> > As for Giac, thanks for the update. I look forward to timing it for our
> > next paper :)
>
> The TRIP benchmark is perhaps a bit unfair to SDMP, because it
> probably uses the symbolic representation for Maple, compared to the
> polynomial representation for TRIP and Giac. This should not matter
> much in the dense case, but probably does in the sparse case.
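To make the per-thread heap idea in the discussion above a bit more concrete, here is a minimal sketch in C of a bump ("arena") allocator where each worker thread owns a private block. This is only my own illustration of the general technique, not the allocator described in the paper linked above and not SDMP's code:

/* Minimal per-thread bump allocator (illustration only).
   Each worker thread initializes its own arena once, then serves all
   further requests from that arena without taking any lock. */
#include <stddef.h>
#include <stdlib.h>

typedef struct {
    unsigned char *base;   /* start of this thread's private block */
    size_t         used;   /* bytes already handed out             */
    size_t         cap;    /* total size of the block              */
} arena_t;

static int arena_init(arena_t *a, size_t cap)
{
    a->base = malloc(cap);            /* one malloc (one lock) per thread */
    a->used = 0;
    a->cap  = cap;
    return a->base != NULL;
}

static void *arena_alloc(arena_t *a, size_t n)
{
    n = (n + 15u) & ~(size_t)15u;     /* keep 16-byte alignment */
    if (a->used + n > a->cap)
        return NULL;                  /* a real allocator would grow the arena */
    void *p = a->base + a->used;
    a->used += n;                     /* no synchronization on the hot path */
    return p;
}

static void arena_release(arena_t *a) /* free everything at once */
{
    free(a->base);
    a->base = NULL;
    a->used = a->cap = 0;
}

Because every thread allocates only from its own arena, the global malloc lock that parisse mentions never appears inside the multiplication loop; its cost is paid once per thread when the arena is created.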
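The conversion step parisse refers to, producing a GMP coefficient for every term once the multiplication itself has been done with immediate int128 arithmetic, could look roughly like the following. The helper name and the sign handling are my own, not SDMP's actual code:

/* Illustration only: packing a 128-bit accumulator into a GMP integer.
   The caller must have called mpz_init(z) beforehand; mpz_import then
   allocates limb storage inside GMP, and that per-term allocation is
   what becomes expensive when millions of terms are produced. */
#include <stdint.h>
#include <gmp.h>

static void set_mpz_from_int128(mpz_t z, __int128 v)
{
    int neg = v < 0;
    unsigned __int128 u = neg ? -(unsigned __int128)v : (unsigned __int128)v;
    uint64_t limbs[2] = { (uint64_t)u, (uint64_t)(u >> 64) };

    /* least-significant word first, native endianness, no nail bits */
    mpz_import(z, 2, -1, sizeof(uint64_t), 0, 0, limbs);
    if (neg)
        mpz_neg(z, z);
}

Doing this for a large sparse product means one GMP allocation per term, which is exactly where a lock inside malloc (reached through GMP's default allocation functions) can serialize the threads.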