On Wed, Aug 17, 2011 at 4:24 PM, Greg Sterijevski <gsterijev...@gmail.com> wrote:

> On symmetric, diagonal, banded matrices and so on, I disagree, as I have
> made clear in the past. In the case of White standard errors or panel
> regressions, you typically have long strings of multiplications by
> diagonal and symmetric matrices, sandwich products and so forth. There are
> enough of these types of operations that a math/stat library should
> support these forms of matrices. isSparse() will not cut it. A symmetric
> matrix is typically NOT sparse; in these cases it is typically dense. More
> importantly, the number of operations you save by explicitly recognizing
> the special structure is significant.
>

I agree pretty much for the symmetric case.  The savings for banded
matrices are surprisingly small, and you can get all of those savings when
the banded matrix is the left operand.  It is the right operand where
simple sparsity accounts for almost everything.
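
To make the quoted point about sandwich products concrete, here is a
minimal sketch (plain arrays and an illustrative method name, not an
existing library API) of a White-style term X' * diag(w) * X.  Recognizing
that the middle factor is diagonal reduces its multiplication to row
scaling, O(n*m) work instead of a general O(n^2 * m) matrix product:

    // c = X' * diag(w) * X, where X is n x m and w holds the diagonal.
    // The diagonal structure makes diag(w) * X a simple row scaling.
    static double[][] sandwich(double[][] x, double[] w) {
        int n = x.length, m = x[0].length;
        double[][] c = new double[m][m];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < m; j++) {
                double s = w[i] * x[i][j];      // (diag(w) * X)[i][j]
                for (int k = 0; k < m; k++) {
                    c[j][k] += s * x[i][k];     // accumulate X' * (diag(w) * X)
                }
            }
        }
        return c;    // m x m, symmetric by construction
    }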

>
> > For banded arrays, the economies available beyond simple sparse
> > algorithms are even more limited.
> >
> I am confused, Ted, since when I suggested that some of the multiplication
> issues could be solved by method overloading, you thought it would not
> work. Probably a miscommunication on my part.
>

No.  What I mean is that when a sparse matrix is the right operand, and
thus not subject to overloading since dispatch happens only on the
receiver's type, you get most of the advantages of the fancy optimizations
without any need for anything more than isSparse() and a sparse iterator.
You don't get all of the benefit, but you do get most of it, and you get
it for a wide variety of left operands.
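
As a rough illustration, something like the following is all the abstract
class needs; isSparse() and nonZeroPositions() are hypothetical names
here, not the current RealMatrix API:

    // Sketch only: the abstract class implements times() once, testing
    // nothing about the right operand except whether it is sparse.
    abstract class Matrix {
        abstract int rows();
        abstract int cols();
        abstract double get(int i, int j);

        boolean isSparse() { return false; }

        // a sparse implementation iterates only its stored entries,
        // yielding {row, col} pairs
        Iterable<int[]> nonZeroPositions() {
            throw new UnsupportedOperationException();
        }

        double[][] times(Matrix b) {
            double[][] c = new double[rows()][b.cols()];
            if (b.isSparse()) {
                // c[i][j] += a[i][k] * b[k][j], visiting only stored b[k][j]
                for (int[] p : b.nonZeroPositions()) {
                    int k = p[0], j = p[1];
                    double v = b.get(k, j);
                    for (int i = 0; i < rows(); i++) {
                        c[i][j] += get(i, k) * v;
                    }
                }
            } else {
                // dense x dense: plain triple loop (a real one would block)
                for (int i = 0; i < rows(); i++) {
                    for (int k = 0; k < cols(); k++) {
                        for (int j = 0; j < b.cols(); j++) {
                            c[i][j] += get(i, k) * b.get(k, j);
                        }
                    }
                }
            }
            return c;
        }
    }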


> > Symmetric and triangular matrices also have special properties but it
> > is hard to decide what is really important there.  Many of the special
> > operations for these kinds of matrices can be handled by overloads
> > instead of indicators since we aren't dealing with binary operations.
> > For example, left and right inverse multiplication with triangular
> > matrices is handled by normal single dispatch, and qualifying an
> > argument for real Cholesky decomposition is specific to the Cholesky
> > decomposition itself.
> >
> >
> While we are on the discussion of extending RealMatrix in any direction,
> I would humbly offer that the objects are too complex. Pardon this
> foolish question, but what are the uses for methods like "double
> walkInRowOrder(RealMatrixChangingVisitor visitor)"? When I think of
> having to fill in all those methods for any extension, my head spins.
>

Exactly.  I think that you shouldn't need that.  The abstract superclass
should have a moderately fancy implementation that handles generically
sparse and dense matrices reasonably well and the dense x dense case
really well.  The writer of a new matrix type should not need to know
about that at all.
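
For what it's worth, the generic walk can sit in the abstract base class
roughly like this (sketched from the RealMatrixChangingVisitor contract of
start/visit/end), so a new matrix type only supplies getEntry, setEntry
and the dimensions:

    // Default walkInRowOrder in the abstract class; a subclass with a
    // smarter layout (e.g. blocked storage) can override it, everyone
    // else just inherits it.
    public double walkInRowOrder(RealMatrixChangingVisitor visitor) {
        final int rows = getRowDimension();
        final int cols = getColumnDimension();
        visitor.start(rows, cols, 0, rows - 1, 0, cols - 1);
        for (int row = 0; row < rows; row++) {
            for (int col = 0; col < cols; col++) {
                setEntry(row, col, visitor.visit(row, col, getEntry(row, col)));
            }
        }
        return visitor.end();
    }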

What I was suggesting is that if you have a leftInverseMultiply method that
solves for x in the system Ax = b, then that method knows the type of A
because the natural method call is A.leftInverseMultiply(b).  Thus, if A has
special structure, you get what you need from normal method dispatch.
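
Concretely (the class is illustrative, not an existing API, and assumes
the base type declares leftInverseMultiply): a triangular type supplies
its own solve, and A.leftInverseMultiply(b) reaches it through ordinary
single dispatch on A's runtime type:

    // Hypothetical sketch: a lower triangular matrix solves A*x = b by
    // forward substitution, O(n^2) instead of a general O(n^3) solve.
    class LowerTriangularMatrix {
        private final double[][] a;   // square; entries above the diagonal are zero

        LowerTriangularMatrix(double[][] a) { this.a = a; }

        double[] leftInverseMultiply(double[] b) {
            int n = b.length;
            double[] x = new double[n];
            for (int i = 0; i < n; i++) {
                double s = b[i];
                for (int j = 0; j < i; j++) {
                    s -= a[i][j] * x[j];
                }
                x[i] = s / a[i][i];   // assumes a non-zero diagonal (non-singular)
            }
            return x;
        }
    }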

It is cases like A.times(b) or A.plus(b) that are best served by generic
sparsity support in the abstract class.
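
In the same hypothetical sketch as above, plus() falls out the same way:
copy the left operand densely, then visit only the stored entries of a
sparse right operand:

    // Continuing the Matrix sketch: addition with a sparse right operand.
    double[][] plus(Matrix b) {
        double[][] c = new double[rows()][cols()];
        for (int i = 0; i < rows(); i++) {
            for (int j = 0; j < cols(); j++) {
                c[i][j] = get(i, j);             // dense copy of the left operand
            }
        }
        if (b.isSparse()) {
            // only the stored entries of b can change the sum
            for (int[] p : b.nonZeroPositions()) {
                c[p[0]][p[1]] += b.get(p[0], p[1]);
            }
        } else {
            for (int i = 0; i < rows(); i++) {
                for (int j = 0; j < cols(); j++) {
                    c[i][j] += b.get(i, j);
                }
            }
        }
        return c;
    }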
