> On Dec 1, 2016, at 12:07, Bradley Lucier <luc...@math.purdue.edu> wrote:
>
> On 12/01/2016 02:04 PM, John Clements wrote:
>>
>> Would it be all right with you if I shared your mail with the mailing list?
>> A brief reading of this paper shows me the relationship between solving this
>> problem and approximation of gaussian elimination, and I’d like to ask Jens
>> Axel Soegaard and Neil Toronto how this relates to the general algorithms
>> for matrix solution.
>
> Yes, you can share it.
>
> The Conjugate Gradient method is applicable to solving $Ax=b$ when $A$ is a
> symmetric, positive definite matrix. It's not applicable to the problem with
> general $A$.
>
> So, in fact, it's applicable to solving $A^*Ax=A^*b$, the normal equations
> for least squares, since $A^*A$ is symmetric and positive definite. The
> brilliance of the method is that it doesn't actually compute $A^*A$, but only
> $Ay$ and $A^*y$ for various vectors $y$.
>
> The method is particularly useful when the linear system $Ax=b$ has inherent
> error in $A$ and $b$, and so one requires only an approximation to the
> solution $x$.
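Bradley’s description, translated into a minimal Racket sketch for symmetric
positive definite $A$ (this is just an illustration with made-up helper names
(dot, axpy, mat*vec, cg-solve), not my actual code):

#lang racket

;; Dot product of two vectors.
(define (dot u v)
  (for/sum ([a (in-vector u)] [b (in-vector v)])
    (* a b)))

;; alpha*x + y, elementwise.
(define (axpy alpha x y)
  (for/vector ([xi (in-vector x)] [yi (in-vector y)])
    (+ (* alpha xi) yi)))

;; Matrix-vector product; A is represented as a vector of row vectors.
(define (mat*vec A x)
  (for/vector ([row (in-vector A)])
    (dot row x)))

;; Conjugate Gradient for symmetric positive definite A.
;; Starts from x_0 = 0 and stops when the residual norm drops below
;; `tol` or after `max-iter` steps.
(define (cg-solve A b #:tol [tol 1e-10] #:max-iter [max-iter (vector-length b)])
  (define n (vector-length b))
  (let loop ([x (make-vector n 0)]   ; x_0 = 0
             [r b]                   ; r_0 = b - A x_0 = b
             [p b]                   ; p_0 = r_0
             [k 0])
    (define rr (dot r r))
    (cond
      [(or (< (sqrt rr) tol) (>= k max-iter)) x]
      [else
       (define Ap    (mat*vec A p))
       (define alpha (/ rr (dot p Ap)))      ; step length
       (define x*    (axpy alpha p x))       ; x_{k+1} = x_k + alpha p_k
       (define r*    (axpy (- alpha) Ap r))  ; r_{k+1} = r_k - alpha A p_k
       (define beta  (/ (dot r* r*) rr))
       (define p*    (axpy beta p r*))       ; p_{k+1} = r_{k+1} + beta p_k
       (loop x* r* p* (add1 k))])))

;; A small 2x2 SPD example, with exact rational arithmetic throughout:
;; (cg-solve (vector (vector 4 1) (vector 1 3)) (vector 1 2))
;; => #(1/11 7/11)

With exact rational inputs the small example terminates with the exact
answer; my questions below are mostly about what happens once rounding
error enters the picture.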
Okay, I’ve taken a crack at implementing this… well, I have a toy
implementation that doesn’t do any checking and only works for a particular
matrix shape. Short version: yep, it works! Now, a few questions.

1) What should one use as the initial estimate, x_0? The method appears to
work fine for an initial estimate that is uniformly zero. Is there any reason
not to use this?

2) When does one stop? I have not read the paper carefully, but it appears
that it’s intended to “halt” in approximately ’n’ steps, where ’n’ is the …
number of rows? In the one case I tried, my “direction” vector p_i quickly
dropped to something on the order of [1e-16, 1e-16], and in fact it became
perfectly stable, in that successive iterations produced precisely the same
values for the three iteration variables. However, it wouldn’t surprise me to
discover that there were cases in which the answer oscillated between two
minutely different values. Can you shed any light on when it’s safe to stop
in the general case?

3) Do you have any idea how this technique compares to the methods described
in Numerical Recipes or implemented in LAPACK?

John
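P.S. To make question 2 concrete, here is a sketch of the normal-equations
variant you describe: CG run on $A^*Ax=A^*b$ using only the products $Ay$ and
$A^*y$ (for real $A$, $A^*$ is just the transpose), never forming $A^*A$
itself. (I gather this variant is sometimes called CGNR or CGLS, hence the
name below.) The provisional stopping test is to quit when the
normal-equations residual $A^*(b-Ax)$ is tiny, or after a fixed cap on
iterations. Again, this is just an illustration with made-up helper names,
not my toy code. (If I'm reading the theory right, the ’n’ in the n-step
termination property is the number of unknowns, i.e. the number of columns of
$A$ when working with the normal equations, and it holds only in exact
arithmetic, so in floating point one falls back on a residual tolerance like
this anyway.)

#lang racket

;; Same helpers as in the sketch above.
(define (dot u v)
  (for/sum ([a (in-vector u)] [b (in-vector v)]) (* a b)))
(define (axpy alpha x y)
  (for/vector ([xi (in-vector x)] [yi (in-vector y)]) (+ (* alpha xi) yi)))
(define (mat*vec A x)
  (for/vector ([row (in-vector A)]) (dot row x)))
(define (transpose A)
  (for/vector ([j (in-range (vector-length (vector-ref A 0)))])
    (for/vector ([row (in-vector A)]) (vector-ref row j))))

;; CG on the normal equations A^T A x = A^T b, using only the products
;; A*y and A^T*y; A^T A itself is never formed.  Starts from x_0 = 0 and
;; stops when the normal-equations residual s = A^T (b - A x) is small,
;; or after `max-iter` steps.
(define (cgnr-solve A b #:tol [tol 1e-10] #:max-iter [max-iter 100])
  (define At (transpose A))
  (define n  (vector-length (vector-ref A 0)))  ; number of unknowns
  (let loop ([x (make-vector n 0)]   ; x_0 = 0
             [r b]                   ; r_0 = b - A x_0 = b
             [s (mat*vec At b)]      ; s_0 = A^T r_0
             [p (mat*vec At b)]      ; p_0 = s_0
             [k 0])
    (define ss (dot s s))
    (cond
      [(or (< (sqrt ss) tol) (>= k max-iter)) x]
      [else
       (define q     (mat*vec A p))       ; A p_k
       (define alpha (/ ss (dot q q)))
       (define x*    (axpy alpha p x))
       (define r*    (axpy (- alpha) q r))
       (define s*    (mat*vec At r*))     ; A^T r_{k+1}
       (define beta  (/ (dot s* s*) ss))
       (define p*    (axpy beta p s*))
       (loop x* r* p* (add1 k))])))

;; Overdetermined 3x2 example, solved in the least-squares sense:
;; (cgnr-solve (vector (vector 1 1) (vector 1 2) (vector 1 3))
;;             (vector 1 2 2))
;; => #(2/3 1/2)

Does that stopping test seem reasonable, or is there a better criterion for
the general case?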