> > mult*=2;
> > }
> > a -= a%(RADIX*mult);
> > b -= b%(RADIX*mult);
> > c -= c%(RADIX*mult);
>
> > and this sped up the 32000x32000 multiplication by a further 5% or so.
> > The other times didn't change (the 1x1 may have been slightly
William Stein wrote:
> On Wed, May 21, 2008 at 3:02 PM, Sara Billey <[EMAIL PROTECTED]> wrote:
>> Hi William, I have an undergrad named Ruth Davidson working on the 4-color
>> theorem and related items for her undergrad thesis. Part of her project is
>> to implement the computer proof by Roberts
Hi Jason,
One thing you can do is the following:
sage: m = matrix(4, range(16))
sage: m = m.change_ring(RDF)
sage: m.eigenspaces()
This will give you the real eigenvalues and their eigenvectors. It
uses numpy which in turn uses lapack.
Cheers,
Yi
http://yiqiang.org
On Wed, May 21, 2008
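Since Yi's message says the RDF path delegates to numpy (and, underneath, LAPACK), a minimal plain-numpy sketch of the same computation may help; this is an illustration of the underlying routine, not the Sage code itself:

```python
import numpy as np

# A sketch of what m.change_ring(RDF).eigenspaces() delegates to:
# numpy.linalg.eig, which wraps LAPACK's real nonsymmetric eigensolver.
m = np.arange(16, dtype=float).reshape(4, 4)
evals, evecs = np.linalg.eig(m)

# Each column of evecs is an eigenvector for the matching eigenvalue,
# so m @ v should agree with lam * v up to floating-point tolerance.
for lam, v in zip(evals, evecs.T):
    assert np.allclose(m @ v, lam * v)
```
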
On Wed, May 21, 2008 at 3:02 PM, Sara Billey <[EMAIL PROTECTED]> wrote:
> Hi William, I have an undergrad named Ruth Davidson working on the 4-color
> theorem and related items for her undergrad thesis. Part of her project is
> to implement the computer proof by Robertson, Sanders, Seymour and T
On Wed, May 21, 2008 at 12:43 PM, PJ <[EMAIL PROTECTED]> wrote:
>
> Today in the Fedora Linux list, a person asked if there was a Fedora
> project to build & distribute SAGE. He spoke with such enthusiasm
> about SAGE that I became interested to see what it does and I'm
> compiling it right now (
In the following:
sage: m=matrix(4, range(16))
sage: m.eigenspaces()
[
(0, Vector space of degree 4 and dimension 2 over Rational Field
User basis matrix:
[ 1  0 -3  2]
[ 0  1 -2  1]),
(a1, Vector space of degree 4 and dimension 1 over Number Field in a1
with defining polynomial x^2 - 30*x - 80
> By the way, I wrote some code ages ago for computing the row
> echelon form of sparse matrices over GF2 which essentially inverts
> 6000x6000 sparse matrices in about a second.
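Bill's code isn't shown in the post; a minimal Python sketch of the standard bitset trick it likely relies on (each GF(2) row stored as a machine integer, so row addition is a single XOR) might look like this:

```python
def echelon_gf2(rows):
    """Row echelon form over GF(2).

    Each row is a Python int used as a bitmask; XOR is row addition.
    An illustrative sketch only, not the code from the post.
    """
    basis = []  # rows kept so far, each contributing a distinct leading bit
    for row in rows:
        # Eliminate this row against every stored pivot bit it hits.
        for prow in basis:
            if row >> (prow.bit_length() - 1) & 1:
                row ^= prow
        if row:
            basis.append(row)  # its leading bit is a new pivot
    # Order rows by leading bit so the result reads in echelon order.
    basis.sort(key=lambda r: -r.bit_length())
    return basis
```

The speed of the real thing comes from the same idea applied wordwise: one XOR processes 64 matrix entries at once, which is why sparse 6000x6000 systems over GF(2) reduce so quickly.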
Hi Bill,
I guess the best strategy is to look both at your modifications and at Jason's
code and try to understand w
> Wow! In that case I revise my viewpoint on this matter. That's
> really interesting. It is amazing how many things in Sage were
> written to make Sage easier for "random undergrads", but turn
> out to be really loved by working researchers in the trenches.
I very rarely create MatrixSpaces e
> a -= a%(RADIX*mult);
> b -= b%(RADIX*mult);
> c -= c%(RADIX*mult);
>
> and this sped up the 32000x32000 multiplication by a further 5% or so.
> The other times didn't change (the 1x1 may have been slightly
> quicker).
>
> :-)
>
> Bill.
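The quoted change rounds each dimension down to a multiple of RADIX*mult before the blocked multiply. A hedged Python restatement of the arithmetic (the RADIX value here is an assumed stand-in for M4RI's word size, not taken from the patch):

```python
RADIX = 64  # assumption: rows are packed into 64-bit machine words

def round_down(n, mult):
    """Largest multiple of RADIX * mult not exceeding n, mirroring the
    `a -= a % (RADIX*mult)` lines in the quoted patch."""
    return n - n % (RADIX * mult)

# e.g. a 4100-row block gets trimmed to 4096 rows when mult == 1
assert round_down(4100, 1) == 4096
```

Trimming to whole-word multiples lets the inner kernels skip partial-word handling, which is consistent with the reported 5% gain on the large case and no change on tiny ones.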
I've uploaded a new
On Wed, May 21, 2008 at 11:34 AM, Jason Grout
<[EMAIL PROTECTED]> wrote:
>
> William Stein wrote:
>> On Wed, May 21, 2008 at 10:46 AM, Nick Alexander <[EMAIL PROTECTED]> wrote:
Nick, do you honestly actually use the matrix(...) command in your
code
all the time without explicitly gi
William Stein wrote:
> On Wed, May 21, 2008 at 10:46 AM, Nick Alexander <[EMAIL PROTECTED]> wrote:
>>> Nick, do you honestly actually use the matrix(...) command in your
>>> code
>>> all the time without explicitly giving the base ring?
>> All the time? No. I just checked one of my working direc
On Wed, May 21, 2008 at 10:46 AM, Nick Alexander <[EMAIL PROTECTED]> wrote:
>
>> Nick, do you honestly actually use the matrix(...) command in your
>> code
>> all the time without explicitly giving the base ring?
>
> All the time? No. I just checked one of my working directories
> (code for work
> Nick, do you honestly actually use the matrix(...) command in your
> code
> all the time without explicitly giving the base ring?
All the time? No. I just checked one of my working directories
(code for working with analytic abelian varieties and principal
polarizations). Of ~60 matrix
On Wed, May 21, 2008 at 10:25 AM, Simon King
<[EMAIL PROTECTED]> wrote:
>
> Dear Carl, dear Mike,
>
> On May 21, 6:27 pm, Carl Witty <[EMAIL PROTECTED]> wrote:
>> > There are coercions that are taking place that one often forgets. In
>> > this case, it helps to look at the annotated Cython file.
On Wed, May 21, 2008 at 10:24 AM, Nick Alexander <[EMAIL PROTECTED]> wrote:
>
>> Proposal B (from William's summary on the previous thread):
>>
>> Leave matrix() as-is. Rename echelon_form to hermite_form, and make a
>> new echelon_form function that computes hermite_form over the fraction
>> fie
Dear Carl, dear Mike,
On May 21, 6:27 pm, Carl Witty <[EMAIL PROTECTED]> wrote:
> > There are coercions that are taking place that one often forgets. In
> > this case, it helps to look at the annotated Cython file.
> > See http://sage.math.washington.edu/home/mhansen/loops.html. The
> > yellow
> Proposal B (from William's summary on the previous thread):
>
> Leave matrix() as-is. Rename echelon_form to hermite_form, and make a
> new echelon_form function that computes hermite_form over the fraction
> field of the base ring.
I support this. I never want Sage to coerce my data away fro
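As a toy illustration of the distinction the proposal preserves (plain Python with fractions, not Sage): Hermite form over ZZ stays in integer entries with no division, while echelonizing over the fraction field QQ rescales every pivot to 1.

```python
from fractions import Fraction

M = [[2, 4], [0, 3]]  # an integer matrix, already upper triangular

# Hermite normal form over ZZ: reduce the entry above the second pivot
# modulo that pivot (4 mod 3 == 1); no division is ever performed.
hermite = [[2, 4 % 3], [0, 3]]

# Reduced echelon form over the fraction field QQ: scale pivots to 1,
# then clear the entry above the second pivot.
row0 = [Fraction(x, 2) for x in M[0]]            # [1, 2]
row1 = [Fraction(x, 3) for x in M[1]]            # [0, 1]
row0 = [a - 2 * b for a, b in zip(row0, row1)]   # [1, 0]

assert hermite == [[2, 1], [0, 3]]
assert [row0, row1] == [[1, 0], [0, 1]]
```

The two answers carry different information, which is why keeping hermite_form and echelon_form as distinct, honestly-named operations avoids silently coercing integer data into QQ.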
In the recent discussion "Change the default base_ring for matrices from
ZZ to QQ", there were lots of opinions shared, and William summarized
some feelings from the group, but it wasn't a solid conclusion (at
least, based on an IRC conversation, William is rethinking the conclusion).
Here are
On May 21, 3:38 am, "Mike Hansen" <[EMAIL PROTECTED]> wrote:
> > Question: Why is FakeCyloop slightly *faster* than Cyloop (here we
> > have "cdef int i")? I thought that explicitly int-declaring the
> > running variable in a loop makes it faster?
>
> There are coercions that are taking place that
On May 20, 10:00 pm, "William Stein" <[EMAIL PROTECTED]> wrote:
> On Tue, May 20, 2008 at 12:43 PM, Mark V <[EMAIL PROTECTED]> wrote:
> > - shipping MPI itself (a dependency) would significantly complicate
> > Sage so use of MPI packages is currently optional.
>
> True. Also there has been no dem
> Question: Why is FakeCyloop slightly *faster* than Cyloop (here we
> have "cdef int i")? I thought that explicitly int-declaring the
> running variable in a loop makes it faster?
There are coercions that are taking place that one often forgets. In
this case, it helps to look at the annotated C
Dear Mikael, dear Carl,
On May 20, 6:55 pm, Carl Witty <[EMAIL PROTECTED]> wrote:
> In this last case, presumably the C compiler has removed the loop
> altogether
... hence, a better example is the following:
def Pyloop(n):
    x = 0
    for i in range(n):
        x = x + i
    return x
and the ot
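For reference, the loop in Pyloop has a closed form that an optimizing C compiler can fold away entirely, which is exactly the effect Carl warns about when benchmarking the compiled variants:

```python
def Pyloop(n):
    x = 0
    for i in range(n):
        x = x + i
    return x

# Sum of 0 .. n-1 has the closed form n*(n-1)//2; a compiler that spots
# this can delete the loop, so the "fast" timing may measure nothing.
assert Pyloop(1000) == 1000 * 999 // 2
```
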
Dear Carl,
On May 20, 6:55 pm, Carl Witty <[EMAIL PROTECTED]> wrote:
> In this last case, presumably the C compiler has removed the loop
> altogether (unless you have a computer that can execute a trillion
> instructions per second).
Now that you mention it: probably my computer isn't that fast
On Wednesday 21 May 2008, Bill Hart wrote:
> Hi Martin,
>
> I downloaded the clean tarball and added an extra test, but I get:
>
> mul: m: 4096, l: 3528, n: 4096, k: 0, cutoff: 1024
> FAIL: Strassen != M4RM
> FAIL: Strassen != Naive
>
> :-(
Same here, I'll look into it right away. "Only" Stra