Jonathan,

> I have a simple question I was hoping to get advice on related to setting the
> linear solver tolerance for an iterative solver. In the tutorial examples (and
> in most of the codes I have seen and written) the solver tolerance is some
> fraction of the initial right hand side 2-norm (e.g. tol = 1.0e-8 *
> rhs.l2_norm()). However, I'm questioning this strategy for a nonlinear
> problem using Newton's method to solve the nonlinear system. As the iterations
> progress, the initial norm of the right hand side (i.e. the norm of the
> residual) is decreasing substantially. For a number of problems that I am
> trying to solve, this causes my tolerance to drop below machine precision at
> some point (if this happens I take 1e-15 to be the tolerance).

This isn't quite the right way to see it. I believe you are thinking that 
1e-16 is the machine precision, but that doesn't mean that 1e-16 is the 
*smallest number a machine can represent*. It's not an *absolute* precision, 
but a *relative* precision. A different way to see this is to say that in 
double precision, the processor stores numbers with approximately 16 decimal 
digits.

So if the norm of your right hand side (the residual of your nonlinear 
problem) is already at 1e-10, then there is nothing wrong with requiring that 
the linear solver gets the residual down to 1e-18 = 1e-8 * 1e-10. All you're
requiring is that the individual entries of the (linear) residual b-Ax are, on 
average, 10^8 times smaller than the individual entries of the right hand side 
b. That's fine.


> I would like to have a tolerance setting strategy that allows me to have a 
> reasonable and consistent number of iterations for each linear solve required 
> in a Newton increment.

No, that's not actually what you want. What you really want (or *should* want 
;-) ) is a strategy in which you solve the first few Newton steps inexactly 
(you're still far from the solution, so who cares whether you compute the 
Newton update with great accuracy!) and gradually tighten the tolerance of the 
linear solver as your Newton iteration gets you closer and closer to the 
solution. The key word to search for in the literature is "Eisenstat-Walker 
algorithm". The canonical reference for this method is this:

@article{eiwa96,
author = {Stanley C. Eisenstat and Homer F. Walker},
title = {Choosing the Forcing Terms in an Inexact {N}ewton Method},
journal = {SIAM Journal on Scientific Computing},
volume = {17},
number = {1},
pages = {16--32},
year = {1996},
doi = {10.1137/0917003},
URL = {http://dx.doi.org/10.1137/0917003},
eprint = {http://dx.doi.org/10.1137/0917003}
}

The price you will pay is that the number of linear iterations goes up as you 
get closer to the solution. But the way you should really see this is that you 
are saving yourself iterations in the early Newton steps.

Best
  Wolfgang

-- 
------------------------------------------------------------------------
Wolfgang Bangerth          email:                 [email protected]
                            www: http://www.math.colostate.edu/~bangerth/
