The sort of slicing operation you're describing isn't well supported in
Sage's matrices as far as I know. However, numpy can do this well,
though it doesn't use arbitrary-precision arithmetic.

import numpy
m = numpy.matrix([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=float)
n = m[0:3, 0:2]   # slice out the first two columns
m * n             # 3x3 times 3x2 product, giving a 3x2 result
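If finite precision is acceptable, scipy's sparse module can also assemble a block-diagonal matrix directly from dense blocks, which is close to what the thread is asking for. A minimal sketch, assuming scipy is installed; `sparse.block_diag` is scipy's API here, not anything Sage wraps as far as I know:

```python
import numpy as np
from scipy import sparse

# Two dense blocks; only these are stored, not the zeros between them.
blocks = [np.arange(4, dtype=float).reshape(2, 2),
          np.arange(9, dtype=float).reshape(3, 3)]

# Assemble a 5x5 block-diagonal sparse matrix in compressed sparse
# column format.
bd = sparse.block_diag(blocks, format="csc")

v = np.ones(5)
print(bd.shape)   # (5, 5)
print(bd.dot(v))  # sparse matrix-vector product over nonzeros only
```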

Not sure if that helps.

                                                         Josh

On Jun 2, 8:39 pm, "Mike Hansen" <[EMAIL PROTECTED]> wrote:
> Yeah, that's not quite what I was looking for since I'd like to be
> able to keep the fast dense matrix multiplication within the blocks.
> I'm not too familiar with the matrix codebase so I don't know how easy
> it'd be to do.  I was thinking of basically storing a list of blocks
> as well as a list of the rows (columns) where those blocks start.  Do
> you know much about how the matrix windows work?  For example, I'd like to
> be able to do something along the lines of
>
> sage: mw = matrix(3,3,range(9)).matrix_window(1,1,2,2)
> sage: m = matrix(2,2,range(4))
> sage: m*mw
> [ 7  8]
> [29 34]
>
> but instead I get a canonical coercion error.
>
> Also, I've used the netlib SparseBLAS C reference implementation with
> GSL before, and it worked pretty smoothly.
>
> --Mike
>
> On 6/2/07, Joshua Kantor <[EMAIL PROTECTED]> wrote:
>
>
>
> > I am implementing a sparse matrix class for real doubles (finite
> > precision real numbers.)
> > The storage format I am using is called compressed sparse column.
> > This is the standard format used by most sparse matrix libraries, as
> > well as by MATLAB:
>
> >http://www.netlib.org/linalg/html_templates/node92.html
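The compressed-sparse-column layout described above stores, per column, the nonzero values and their row indices, plus one pointer array marking where each column begins. A small illustration using scipy (as a stand-in for the layout itself, not Sage's implementation):

```python
import numpy as np
from scipy import sparse

A = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 0.0],
              [4.0, 0.0, 5.0]])
csc = sparse.csc_matrix(A)

print(csc.data)     # nonzero values, column by column: [1. 4. 3. 2. 5.]
print(csc.indices)  # row index of each stored value:   [0 2 1 0 2]
print(csc.indptr)   # where each column starts in data: [0 2 3 5]
```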
>
> > It is not specifically designed for block diagonal matrices though.
>
> > As for matrix-vector multiplication, it is very slow. One problem is
> > that sparsity is currently treated as a mathematical property, so when
> > multiplying a sparse matrix by a dense vector you have to coerce the
> > dense vector into a sparse vector. This behaviour should be changed; I
> > was going to propose that coercion not be required for arithmetic
> > between sparse and dense objects.
>
> > Josh
>
> > On Jun 1, 11:07 pm, Michel <[EMAIL PROTECTED]> wrote:
> > > Hi,
>
> > > Something related. A while ago I was using sparse matrices
> > > to compute the page ranks of a small web. The computation
> > > was *much* too slow. So I implemented my own method for matrix-vector
> > > multiplication as a simple loop through the non-zero entries of the
> > > matrix, which was *much* faster.
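The loop Michel describes can be sketched with a toy dict-of-keys representation (an illustration of the idea, not Sage's internals):

```python
# Toy sparse matrix stored as {(row, col): value}; multiply by a dense
# vector by looping over the nonzero entries only, skipping all zeros.
def sparse_matvec(entries, v, nrows):
    result = [0.0] * nrows
    for (i, j), a in entries.items():
        result[i] += a * v[j]
    return result

m = {(0, 0): 1.0, (0, 2): 2.0, (2, 1): 3.0}   # 3x3 with 3 nonzeros
print(sparse_matvec(m, [1.0, 1.0, 1.0], 3))   # [3.0, 0.0, 3.0]
```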
>
> > > So: question: could it be that the multiplication of sparse matrices
> > > is not as optimized as it should be? I looked in
> > > "matrix_generic_sparse.pyx"
> > > but I don't even see a _mul_ method. Where is multiplication
> > > of sparse matrices implemented?
>
> > > Michel
>
> > > On Jun 2, 4:23 am, "Mike Hansen" <[EMAIL PROTECTED]> wrote:
>
> > > > I thought I recalled someone mentioning this before, but is someone
> > > > working on or thinking about working on implementing "sparse" block
> > > > diagonal matrices where you would only store the dense blocks?
>
> > > > If someone is working on it, I'd be willing to help out a bit since it
> > > > would be incredibly useful for some of my research.
>
> > > > --Mike


--~--~---------~--~----~------------~-------~--~----~
To post to this group, send email to sage-devel@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sage-devel
URLs: http://sage.scipy.org/sage/ and http://modular.math.washington.edu/sage/
-~----------~----~----~----~------~----~------~--~---