That may not be necessary. Knowing the sizes of the L1 and L2 caches
should be enough to work out the optimal values. The rule is probably
something like: the 2^k table rows built from B must fit in half the
L1 cache, and BLOCK rows of A must likewise fit in half the L1 cache.
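As a rough sketch of that rule of thumb (the 64 KB default L1 size, the bit-packed row layout, and the name m4r_params are all my assumptions for illustration, not anything measured or taken from the actual code):

```python
import math

def m4r_params(n_cols, l1_bytes=65536):
    """Heuristic M4R parameters from the rule of thumb above:
    2^k table rows (each n_cols bits wide, packed 64 columns per
    8-byte word) should fit in half the L1 cache, and BLOCK rows
    of A should fit in half the L1 cache as well.  The 64 KB L1
    default is an assumption, not a measured value."""
    row_bytes = (n_cols + 63) // 64 * 8   # one packed GF(2) row
    half_l1 = l1_bytes // 2
    k = max(1, int(math.log2(half_l1 / row_bytes)))
    block = max(1, half_l1 // row_bytes)
    return k, block

print(m4r_params(16384))
```

For 16384 columns and a 64 KB L1 this gives smaller values than the k = 6, BLOCK = 256 mentioned below, which is exactly why an empirical check of the heuristic would be worthwhile.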

I'm not sure what role the L2 cache plays, since we haven't really
solved that issue yet.

Bill.

On 17 May, 01:53, mabshoff <[EMAIL PROTECTED]> wrote:
> On May 17, 2:48 am, Bill Hart <[EMAIL PROTECTED]> wrote:
>
> > Hi Martin,
>
> > That spike is weird. Basically I got closer to 2x the time of Magma
> > for 16384x16384, but you need different parameters than for
> > 10000x10000 or 20000x20000 since the size of the M4R matrices will be
> > different than in either of those cases.
>
> > For 16384x16384 my notes say that I used k = 6 and BLOCK = 256. One
> > might also need to fiddle with the cutoff. I think I used 3600, but at
> > various times I had the cutoff set lower.
>
> > It would be awfully surprising if there wasn't a set of parameters
> > that dropped this time right down. It is possible that the L1 cache is
> > smaller on sage.math than on the Opteron I was using. Perhaps my
> > parameters don't apply on that machine.
>
> Shouldn't we use an ATLAS-like tuning process? [obviously not that
> heavy timewise]. It will likely find good default values.
>
> > Bill.
>
> <SNIP>
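The ATLAS-style search suggested above might look something like the following sketch; tune, fake_mul and the candidate grids are hypothetical stand-ins for illustration, not the real multiplication routine or its actual parameter ranges:

```python
import itertools
import time

def tune(mul, ks=(4, 5, 6, 7), blocks=(64, 128, 256), reps=3):
    """Time one multiplication per (k, BLOCK) candidate and keep the
    fastest pair, ATLAS-style.  `mul` is a caller-supplied closure
    that runs a single multiplication with the given parameters."""
    best_time, best_params = float("inf"), None
    for k, block in itertools.product(ks, blocks):
        # best-of-reps timing damps scheduler noise
        t = min(timed_run(mul, k, block) for _ in range(reps))
        if t < best_time:
            best_time, best_params = t, (k, block)
    return best_params

def timed_run(mul, k, block):
    start = time.perf_counter()
    mul(k, block)
    return time.perf_counter() - start

# Toy stand-in that pretends the sweet spot is k=6, BLOCK=256,
# purely to demonstrate the search loop.
def fake_mul(k, block):
    time.sleep(0.001 * (abs(k - 6) + abs(block - 256) / 128))

print(tune(fake_mul, ks=(5, 6, 7), blocks=(128, 256)))
```

A real run would wrap the actual M4R multiplication in the closure and use matrix sizes representative of the target workload, so the search need only be done once per machine.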
--~--~---------~--~----~------------~-------~--~----~
To post to this group, send email to sage-devel@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sage-devel
URLs: http://www.sagemath.org
-~----------~----~----~----~------~----~------~--~---