This is because matrix windows are not matrices; they are just views into an attached matrix. There is a matrix_from_rows_and_columns command that you might be able to use.
sage: M = matrix(QQ, 4, 4, range(16))
sage: M.matrix_from_rows_and_columns([1,2,0],[0,3])
[ 4  7]
[ 8 11]
[ 0  3]

I don't think this is at all optimized, though. Matrix windows are used in the underlying algorithms (like Strassen multiplication) to perform operations like "take this submatrix of A times this submatrix of B and add it to this submatrix of C." This sounds like something that could be useful for you, but you would have to work in Pyrex, as all of the matrix window operations are cdef methods. Your suggested method of storage would be much more useful, though.

- Robert

On Jun 2, 2007, at 8:39 PM, Mike Hansen wrote:

> Yeah, that's not quite what I was looking for, since I'd like to be
> able to keep the fast dense matrix multiplication within the blocks.
> I'm not too familiar with the matrix codebase, so I don't know how
> easy it'd be to do. I was thinking of basically storing a list of
> blocks as well as a list of the rows (columns) where those blocks
> start. Do you know much about how the matrix windows work? For
> example, I'd like to be able to do something along the lines of
>
> sage: mw = matrix(3,3,range(9)).matrix_window(1,1,2,2)
> sage: m = matrix(2,2,range(4))
> sage: m*mw
> [ 7  8]
> [29 34]
>
> but instead I get a canonical coercion error.
>
> Also, I've used the netlib SparseBLAS C reference implementation with
> GSL before, and it worked pretty smoothly.
>
> --Mike
>
> On 6/2/07, Joshua Kantor <[EMAIL PROTECTED]> wrote:
>>
>> I am implementing a sparse matrix class for real doubles (finite-
>> precision real numbers). The storage format I am using is called
>> compressed sparse column. This is the standard format used by all
>> sparse matrix libraries as well as MATLAB:
>>
>> http://www.netlib.org/linalg/html_templates/node92.html
>>
>> It is not specifically designed for block diagonal matrices, though.
>>
>> As for matrix-vector multiplication, it is very slow.
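[Editor's note: for readers unfamiliar with the compressed sparse column format mentioned above, here is a minimal plain-Python sketch of CSC storage and its matrix-vector product. The function name and layout are illustrative only; this is not Sage's or netlib's actual code.]

```python
# Compressed sparse column (CSC) sketch. An m x n matrix is stored as:
#   values - the nonzero entries, listed column by column
#   rows   - the row index of each stored entry
#   colptr - values[colptr[j]:colptr[j+1]] are the entries of column j
def csc_matvec(m, n, values, rows, colptr, x):
    """Compute y = A*x, where A is an m x n matrix in CSC form."""
    y = [0.0] * m
    for j in range(n):                        # walk the columns of A
        for k in range(colptr[j], colptr[j + 1]):
            y[rows[k]] += values[k] * x[j]    # scatter column j scaled by x[j]
    return y

# A = [[1, 0],
#      [0, 2],
#      [3, 0]]
# Column 0 holds (1 at row 0) and (3 at row 2); column 1 holds (2 at row 1).
values = [1.0, 3.0, 2.0]
rows = [0, 2, 1]
colptr = [0, 2, 3]
```

Note that the inner loop only touches stored entries, which is why a hand-rolled loop over nonzeros (as Michel describes below) beats any scheme that coerces the dense vector to sparse first.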
>> One problem with this is that currently sparsity is treated as a
>> mathematical property, so when multiplying a sparse matrix and a
>> dense vector you have to coerce the dense vector into a sparse
>> vector. This behaviour should be changed; I was going to propose
>> that when doing arithmetic between sparse and dense objects,
>> coercion should not be necessary.
>>
>> Josh
>>
>> On Jun 1, 11:07 pm, Michel <[EMAIL PROTECTED]> wrote:
>>> Hi,
>>>
>>> Something related: a while ago I was using sparse matrices
>>> to compute the page ranks of a small web. The computation
>>> was *much* too slow, so I implemented my own method
>>> for "matrix" x "vector" as a simple loop through the non-zero
>>> entries of "matrix", which was *much* faster.
>>>
>>> So, the question: could it be that the multiplication of sparse
>>> matrices is not as optimized as it should be? I looked in
>>> "matrix_generic_sparse.pyx", but I don't even see a _mul_ method.
>>> Where is multiplication of sparse matrices implemented?
>>>
>>> Michel
>>>
>>> On Jun 2, 4:23 am, "Mike Hansen" <[EMAIL PROTECTED]> wrote:
>>>
>>>> I thought I recalled someone mentioning this before, but is someone
>>>> working on or thinking about working on implementing "sparse" block
>>>> diagonal matrices where you would only store the dense blocks?
>>>>
>>>> If someone is working on it, I'd be willing to help out a bit,
>>>> since it would be incredibly useful for some of my research.
>>>>
>>>> --Mike

--~--~---------~--~----~------------~-------~--~----~
To post to this group, send email to sage-devel@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sage-devel
URLs: http://sage.scipy.org/sage/ and http://modular.math.washington.edu/sage/
-~----------~----~----~----~------~----~------~--~---
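[Editor's note: the storage scheme Mike proposes in the thread, a list of dense blocks plus a list of the offsets where each block starts, can be sketched in plain Python as below. All names here are hypothetical and do not correspond to any Sage class; in practice the inner dense multiply is exactly where a fast dense routine (e.g. BLAS) would be called, which is the point of keeping the blocks dense.]

```python
# Hypothetical block-diagonal storage: dense square blocks plus the
# row/column offset at which each block starts.
def blockdiag_matvec(blocks, offsets, x):
    """Multiply a block-diagonal matrix by a dense vector, block by block."""
    y = [0.0] * len(x)
    for block, start in zip(blocks, offsets):
        k = len(block)                    # block is a k x k list of lists
        for i in range(k):                # dense multiply within this block;
            y[start + i] = sum(           # a real implementation would call a
                block[i][j] * x[start + j]  # fast dense kernel here instead
                for j in range(k))
    return y

# A 3x3 block-diagonal matrix: a 2x2 block at offset 0, a 1x1 block at offset 2.
blocks = [[[1.0, 2.0], [3.0, 4.0]], [[5.0]]]
offsets = [0, 2]
```

Because the blocks never interact, each one can be multiplied independently, so the cost is the sum of the dense costs of the blocks rather than the cost of the full matrix.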