Dear sage-support team,

I'm trying to convince my boss to use modern linear algebra
software (as part of a project on group cohomology rings). However, I
need better arguments. Perhaps you have some?

I wrote a toy dynamic module for Sage providing two test functions:
1. - Read a square matrix over a finite field from a MeatAxe file
    - convert it into a Sage matrix
    - multiply it n times with itself
    - return the result
2. - Read a square matrix over a finite field from a MeatAxe file
    - multiply it n times with itself by using MeatAxe
    - convert the result into a Sage matrix
    - return it

I was not interested in the result (computing M^1000001 in Sage takes
almost no time); rather, I was interested in the time of a single matrix
multiplication. Therefore I put the multiplications into a loop and
compiled the module using sage -cython.
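For reference, here is a minimal pure-Python sketch of that benchmarking pattern (no Sage or MeatAxe required; the naive mod-p multiplication is only a stand-in for either backend, and the function names are made up for illustration):

```python
import time

def mat_mul_mod(A, B, p):
    """Naive multiplication of two square matrices over GF(p), stored as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p
             for j in range(n)] for i in range(n)]

def repeated_mul(M, reps, p):
    """Multiply M by itself `reps` times, as in the toy benchmark loop."""
    R = M
    for _ in range(reps):
        R = mat_mul_mod(R, M, p)
    return R

# A 4x4 matrix over GF(7); a small rep count keeps the sketch fast.
M = [[1, 2, 3, 4],
     [5, 6, 0, 1],
     [2, 3, 4, 5],
     [6, 0, 1, 2]]
t0 = time.process_time()
R = repeated_mul(M, 1000, 7)
cpu_seconds = time.process_time() - t0
```

With reps this large, virtually all the CPU time is spent inside the single-multiplication routine, which is exactly what the two test functions compare.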

When I take a 4x4 matrix over GF(7) with n=1000001, test function 1
needs 92.50 s CPU time, but test function 2 only needs 1.49 s CPU
time!
Hence, in that case, MeatAxe (actually a very old version!) appears
to be faster than Sage built with ATLAS, by a factor of more than 60!
On the other hand, a single multiplication of very large (4000x4000)
random matrices over GF(2) in Sage is faster than multiplication in
MeatAxe, by a factor of about 5.
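As an aside, one reason dense GF(2) multiplication can be so fast is that each row can be bit-packed into machine words, so a single word operation handles many entries at once (the idea behind libraries such as M4RI). A toy illustration in pure Python, with every row encoded as an integer bitmask (this is only a sketch of the idea, not how Sage or MeatAxe actually implement it):

```python
def gf2_mul(A, B, n):
    """Multiply two n x n matrices over GF(2).

    Each matrix is a list of n Python ints; bit j of row i is entry (i, j).
    """
    # Pre-pack the columns of B as bitmasks, so each entry of the product
    # is just an AND plus a parity (popcount mod 2) of one packed integer.
    B_cols = [0] * n
    for i in range(n):
        for j in range(n):
            if (B[i] >> j) & 1:
                B_cols[j] |= 1 << i
    C = []
    for i in range(n):
        row = 0
        for j in range(n):
            if bin(A[i] & B_cols[j]).count("1") & 1:
                row |= 1 << j
        C.append(row)
    return C

# 4x4 example: multiplying by the identity leaves the matrix unchanged.
I4 = [1 << i for i in range(4)]       # identity matrix, bit-packed
A = [0b1011, 0b0110, 0b1110, 0b0001]  # some fixed GF(2) matrix
```

For very small matrices over GF(7), by contrast, per-call overhead can easily dominate the arithmetic itself.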

Do you have an explanation for that phenomenon? I do think the two test
functions are fair: in both cases there is exactly one matrix
conversion plus the loop -- the only difference is whether the
multiplication is done by Sage or by MeatAxe.

Yours sincerely
          Simon

