Gilles,

Is the test checked in? I am interested in looking at it.

Thank you,

-Greg

On Thu, Sep 8, 2011 at 2:51 PM, Ted Dunning <ted.dunn...@gmail.com> wrote:

> On Thu, Sep 8, 2011 at 12:42 PM, Luc Maisonobe <luc.maison...@free.fr> wrote:
>
> >> ... Luc - are there other reasons that QR would be better for cov
> >> matrices?  I would have to play with a bunch of examples, but I
> >> suspect with the defaults, Cholesky may do the best job detecting
> >> singular problems.
> >>
> >
> > I'm not sure about Cholesky, but I have always thought that at least QR
> > was better than LU for near-singular matrices, with only a factor of 2
> > overhead in the number of operations (but the number of operations is
> > not the main bottleneck on modern computers; cache behavior is more
> > important).
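
For what it's worth, this is easy to poke at directly.  A minimal sketch,
assuming the commons-math3 linear package (the class name and the
Hilbert-matrix test case are just illustrative choices, not anything from
this thread):

import org.apache.commons.math3.linear.*;

public class QrVsLuSketch {
    public static void main(String[] args) {
        int n = 8;
        // Hilbert matrix: a standard ill-conditioned test case.
        double[][] h = new double[n][n];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                h[i][j] = 1.0 / (i + j + 1);
            }
        }
        RealMatrix a = MatrixUtils.createRealMatrix(h);
        // Build b = A x for a known x, then recover x with each solver.
        RealVector x = new ArrayRealVector(n, 1.0);
        RealVector b = a.operate(x);
        RealVector xLu = new LUDecomposition(a).getSolver().solve(b);
        RealVector xQr = new QRDecomposition(a).getSolver().solve(b);
        System.out.println("LU error: " + xLu.subtract(x).getNorm());
        System.out.println("QR error: " + xQr.subtract(x).getNorm());
    }
}
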
>
>
> This is my impression as well.
>
> In addition, Cholesky very nearly is a QR decomposition through the back
> door.  That is, if
>
>     Q R = A
>
> then
>
>     R' R = R' Q' Q R = A' A        (since Q' Q = I)
>
> is the Cholesky decomposition of A' A.  The algorithms are quite similar
> when viewed this way.  Cholesky does not produce as accurate a result for
> R if you are given A, but if you are given A' A, as in the discussion
> here, it is pretty much just as good, I think.
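
One way to check this numerically is to compare R from the QR of A against
the upper-triangular Cholesky factor of A' A.  A rough sketch, again
assuming commons-math3 (the sign fixup is needed because QR only determines
R up to the signs of its rows; the data is an arbitrary tall example):

import org.apache.commons.math3.linear.*;

public class CholeskyVsQrSketch {
    public static void main(String[] args) {
        RealMatrix a = MatrixUtils.createRealMatrix(new double[][] {
            {2, 1}, {1, 3}, {0, 1}, {4, 1}
        });
        int n = a.getColumnDimension();
        // Triangular n x n part of R from the QR decomposition of A.
        RealMatrix r = new QRDecomposition(a).getR()
                .getSubMatrix(0, n - 1, 0, n - 1);
        // Normalize row signs so the diagonal of R is positive.
        for (int i = 0; i < n; i++) {
            if (r.getEntry(i, i) < 0) {
                r.setRowVector(i, r.getRowVector(i).mapMultiply(-1.0));
            }
        }
        // Upper-triangular Cholesky factor of A'A.
        RealMatrix u = new CholeskyDecomposition(
                a.transpose().multiply(a)).getLT();
        System.out.println("R from QR:       " + r);
        System.out.println("Cholesky of A'A: " + u);  // same up to rounding
    }
}
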
>
> I use this property for very large SVDs via stochastic projection, because
> Q is much larger than R in that context, so computing just R can be done
> in-core while the full QR would require out-of-core operations.
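
As a rough illustration of the in-core point (a sketch only, assuming
commons-math3, and with a plain row stream standing in for the stochastic
projection and out-of-core storage in Ted's actual setup):

import org.apache.commons.math3.linear.*;

public class GramSvdSketch {
    // Singular values of a tall matrix A whose rows arrive one at a time,
    // keeping only the small n x n Gram matrix A'A in memory.
    static double[] singularValues(Iterable<double[]> rows, int n) {
        RealMatrix gram = MatrixUtils.createRealMatrix(n, n);
        for (double[] row : rows) {
            // Rank-one update: gram += row row'
            for (int i = 0; i < n; i++) {
                for (int j = 0; j < n; j++) {
                    gram.addToEntry(i, j, row[i] * row[j]);
                }
            }
        }
        // R'R = A'A and A = QR with orthonormal Q, so A and R have the
        // same singular values; the SVD below only touches n x n data.
        RealMatrix r = new CholeskyDecomposition(gram).getLT();
        return new SingularValueDecomposition(r).getSingularValues();
    }
}

The full Q is never materialized here; only the n x n Gram matrix and its
triangular factor ever need to be held in memory.
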
>
