Computation of the largest few singular values (often together with a
partial set of singular vectors, either left, right, or both) is very much
the business of an SVD algorithm: computing just a few singular values is
much more efficient than computing the entire set, and it often serves
just as well as the full decomposition.

Consider a common case: a sparse matrix with 100 million rows and 10
million columns that contains small counts from a random process.  Assume
also that there are, on average, about 100 nonzero elements per row.  There
is no chance that finding all 10 million singular values or vectors is of
any practical interest, partly because the truncation due to the counting
process would obscure all but the largest few values, and practically
because computing all of the singular values and vectors would exhaust any
computer that is likely to exist in our lifetime.  The computation of the
first few dozen singular values is quite feasible, however, even with
current hardware.
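
To make that concrete, here is a minimal sketch (not from any library; the
class and helper names are made up for illustration) of the basic operation
such methods are built on: power iteration on A^T A, applied to a CSR-style
sparse matrix without ever forming A^T A.  A serious implementation would
use Lanczos or a randomized projection to pull out the first few dozen
values at once, but the cost per iteration is comparable, roughly two
passes over the nonzero elements.

    import java.util.Random;

    /**
     * Illustrative sketch, not a library method: power iteration on A^T A
     * to estimate the largest singular value of a sparse matrix.  The
     * matrix is held in CSR-style arrays: per-row column indices and the
     * corresponding nonzero values.
     */
    public class TopSingularValueSketch {

        // w = A^T (A v): one pass computes A v row by row, a second pass
        // scatters back through the same nonzeros for the transpose product.
        static double[] ataTimes(int[][] cols, double[][] vals,
                                 double[] v, int nCols) {
            int nRows = cols.length;
            double[] av = new double[nRows];
            for (int i = 0; i < nRows; i++) {
                double s = 0.0;
                for (int k = 0; k < cols[i].length; k++) {
                    s += vals[i][k] * v[cols[i][k]];
                }
                av[i] = s;
            }
            double[] w = new double[nCols];
            for (int i = 0; i < nRows; i++) {
                for (int k = 0; k < cols[i].length; k++) {
                    w[cols[i][k]] += vals[i][k] * av[i];
                }
            }
            return w;
        }

        public static void main(String[] args) {
            // Tiny stand-in for the 100M x 10M count matrix discussed above.
            int[][] cols = {{0, 2}, {1}, {0, 1, 2}};
            double[][] vals = {{3, 1}, {2}, {1, 1, 4}};
            int nCols = 3;

            Random rnd = new Random(42);
            double[] v = new double[nCols];
            for (int j = 0; j < nCols; j++) v[j] = rnd.nextGaussian();

            double sigma = 0.0;
            for (int iter = 0; iter < 100; iter++) {
                double[] w = ataTimes(cols, vals, v, nCols);
                double norm = 0.0;
                for (double x : w) norm += x * x;
                norm = Math.sqrt(norm);
                // v is unit length after each step, so ||A^T A v|| converges
                // to the largest eigenvalue of A^T A, i.e. sigma_1 squared.
                for (int j = 0; j < nCols; j++) v[j] = w[j] / norm;
                sigma = Math.sqrt(norm);
            }
            System.out.println("estimated sigma_1 = " + sigma);
        }
    }

For the matrix described above, each iteration touches the roughly 10^10
nonzero elements twice, which is why a few dozen values are feasible while
all 10 million are not.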

On Sun, Mar 14, 2010 at 1:44 PM, Dimitri Pourbaix
<pourb...@astro.ulb.ac.be> wrote:

>> I think there are two parts to the issue.
>> The first one is related to sparse matrices, and we don't have an answer
>> yet.  The second part is related to computing a partial set of singular
>> values.  This is used, for example, in image compression or to find a
>> matrix with reduced rank that is the closest possible to an input matrix.
>> For this part, we may have an answer.
>>
>
> What do you mean by a partial set of singular values?  If you mean setting
> all the singular values which are either below some threshold, or of index
> above some value (rank), ... to zero and computing the resulting product
> as an approximation of the original matrix, this is no longer the business
> of SVD but rather the user's business, as (s)he decides what (s)he does
> with the decomposition.  However, one could add a method to SVD which
> would return such a 'product'.
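
Such a 'product' method is straightforward to sketch.  The helper below is
written against the commons-math3 SingularValueDecomposition API (which
post-dates this thread); the name lowRankApproximation is made up for
illustration.  It zeroes all singular values past a requested rank and
rebuilds U * S_k * V^T:

    import org.apache.commons.math3.linear.MatrixUtils;
    import org.apache.commons.math3.linear.RealMatrix;
    import org.apache.commons.math3.linear.SingularValueDecomposition;

    /** Sketch of the suggested 'product' method; helper name is made up. */
    public class LowRankSketch {

        // Keep the k largest singular values, zero the rest, and rebuild
        // the matrix as U * S_k * V^T.
        static RealMatrix lowRankApproximation(RealMatrix a, int rank) {
            SingularValueDecomposition svd = new SingularValueDecomposition(a);
            double[] s = svd.getSingularValues();  // sorted, largest first
            double[] kept = new double[s.length];
            System.arraycopy(s, 0, kept, 0, Math.min(rank, s.length));
            RealMatrix sk = MatrixUtils.createRealDiagonalMatrix(kept);
            return svd.getU().multiply(sk).multiply(svd.getVT());
        }

        public static void main(String[] args) {
            RealMatrix a = MatrixUtils.createRealMatrix(new double[][] {
                    {4, 0, 1}, {0, 3, 0}, {1, 0, 2}, {2, 2, 2}});
            // Best rank-1 approximation in the Frobenius norm.
            System.out.println(lowRankApproximation(a, 1));
        }
    }

By the Eckart-Young theorem, the result is the closest rank-k matrix to the
input in the Frobenius norm, which covers both the image-compression and
reduced-rank uses mentioned above.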
