Hello,
I have a fairly complicated optimization problem that I am solving with
LD_TNEWTON_PRECOND. Sometimes I get a runtime exception "nlopt failure".
I noticed this happens whenever I set the relative tolerance ftol_rel
below 10^-4. The gradient is accurate to within about 10^-5.
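For context, the call pattern looks roughly like this (a minimal sketch
using the Python bindings, with a toy quadratic standing in for my real
objective and a placeholder dimension; my actual code is more involved):

    import numpy as np
    import nlopt

    n = 10                                    # placeholder dimension

    def objective(x, grad):
        # toy quadratic standing in for the real cost; grad is filled in place
        if grad.size > 0:
            grad[:] = 2.0 * (x - 1.0)
        return float(np.sum((x - 1.0) ** 2))

    opt = nlopt.opt(nlopt.LD_TNEWTON_PRECOND, n)
    opt.set_min_objective(objective)
    opt.set_ftol_rel(1e-5)                    # failures appear once this goes below ~1e-4

    try:
        x_opt = opt.optimize(np.zeros(n))
        print("result code:", opt.last_optimize_result())
    except RuntimeError as err:               # generic failures surface as "nlopt failure"
        print("optimization failed:", err)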
What are the general reasons for this error? I tried googling but
couldn't find a list of possible causes to look into. It seems this
happens to a lot of people, but those threads are often abandoned or
never reach a conclusion:
- https://github.com/stevengj/nlopt/issues/339
- https://github.com/JuliaOpt/NLopt.jl/issues/33
What I have gathered so far:
- there was an exception while calculating the error (objective) value or the gradient
(https://github.com/stevengj/nlopt/issues/171)
- the gradient is wrong; a finite-difference check is sketched after this list
(https://discourse.julialang.org/t/help-debugging-nlopt-failure-1-errors/11907/10)
- there is a numerical instability
(https://stackoverflow.com/questions/40771707/why-does-my-nlopt-optimization-error-fail-to-solve)
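To rule out the second point, this is roughly how I check the gradient
(again only a sketch: central differences against the same toy objective
as above, not my real cost function):

    import numpy as np

    def cost(x):                       # stand-in for the real cost
        return float(np.sum((x - 1.0) ** 2))

    def analytic_grad(x):              # stand-in for the real gradient
        return 2.0 * (x - 1.0)

    def max_gradient_error(x, h=1e-6):
        g = analytic_grad(x)
        g_fd = np.empty_like(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = h
            g_fd[i] = (cost(x + e) - cost(x - e)) / (2.0 * h)  # central difference
        return np.max(np.abs(g - g_fd))                        # worst-case deviation

    x0 = np.random.default_rng(0).normal(size=10)
    print("max |analytic - FD| =", max_gradient_error(x0))     # ~10^-5 on my real problem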
Could it be that the gradient is simply not accurate enough for the
optimizer to converge at these tighter tolerances?
Thanks, Lisa