This is Fortran code that doesn't make use of argc/argv (I tried running with
the runtime options anyway, in case you implemented some magic I'm not familiar
with, but didn't see anything new in the output). I have a call to TaoView(tao,
PETSC_VIEWER_STDOUT_SELF,ierr) in the code, and it reports back:
Tao Object: 1 MPI process
  type: cg
    CG Type: prp
    Gradient steps: 0
    Reset steps: 0
  TaoLineSearch Object: 1 MPI process
    type: more-thuente
    maximum function evaluations=30
    tolerances: ftol=0.0001, rtol=1e-10, gtol=0.9
    total number of function evaluations=0
    total number of gradient evaluations=0
    total number of function/gradient evaluations=0
    Termination reason: 0
  convergence tolerances: gatol=1e-08, steptol=0., gttol=0.
  Residual in Function/Gradient:=7.54237e+75
  Objective value=2.96082e+86
  total number of iterations=0,                          (max: 100)
  total number of function/gradient evaluations=1,      (max: 4000)
  Solution converged: ||g(X)||/|f(X)| <= grtol
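[Editorial note, not part of the original thread: the "Solution converged: ||g(X)||/|f(X)| <= grtol" line suggests TAO's *relative* gradient test is being satisfied trivially here, since with ||g|| ~ 7.5e75 and |f| ~ 3e86 the ratio is ~2.5e-11, below a grtol on the order of 1e-8. A hedged sketch of how one might disable that test from the command line, assuming a standard PETSc/TAO build; the program name below is a placeholder, not Bruce's actual executable:]

```shell
# Zero out the relative gradient tolerance so the run is governed by the
# absolute tolerance (gatol) instead, and print per-iteration diagnostics.
# "./my_opt" is a hypothetical executable name.
./my_opt -tao_grtol 0.0 -tao_monitor -tao_converged_reason
```

The same effect should be achievable in the source with TaoSetTolerances(tao, gatol, 0.0, gttol, ierr), passing 0.0 for grtol.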
Bruce
From: Barry Smith <[email protected]>
Date: Wednesday, June 26, 2024 at 2:02 PM
To: Palmer, Bruce J <[email protected]>
Cc: [email protected] <[email protected]>
Subject: Re: [petsc-users] Unconstrained optimization question
Please run with -tao_monitor -tao_converged_reason and see why it has stopped.
Barry
On Jun 26, 2024, at 4:34 PM, Palmer, Bruce J via petsc-users
<[email protected]> wrote:
Hi,
I’m trying to do an unconstrained optimization on a molecular scale problem.
Previously, I was looking at an artificial molecular problem where all
parameters were of order 1 and so the objective function and variables were
also in the range of 1 or at least within a few orders of magnitude of 1.
More recently, I've been trying to apply this optimization to a real molecular
system. Between Avogadro's number (6.022e23) and Boltzmann's constant
(1.38e-16 erg/K), combined with very small distances (~1.0e-8 cm), the
objective function and the optimization variables take on very large values
(~1e86 and ~1e9, respectively). I've verified that the analytic gradients of
the objective function that I'm calculating are correct by comparing them with
numerical derivatives.
I've tried using the LMVM and conjugate gradient optimizers, both of which
worked previously, but now the optimization completes one objective function
evaluation and then declares the problem converged and stops. I could find a
set of units where everything is approximately 1, but I was hoping there are
some parameters I can set in the optimization that will get it moving again.
Any suggestions?
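[Editorial note, not part of the original thread: the unit-rescaling fallback mentioned above can be sketched numerically. The toy objective, its "spring constant," and the characteristic scales below are all made-up placeholders, not values from Bruce's problem; the point is only that dividing by problem-derived scales brings the objective and variables near O(1).]

```python
# Sketch of nondimensionalizing a molecular-style objective so values seen by
# the optimizer are O(1). All constants here are illustrative placeholders.

# Dimensional toy objective: f(x) = 0.5 * k * (x - x_eq)^2 in "bad" units
k = 1.0e68        # placeholder stiffness in mixed CGS-style units
x_eq = 1.0e9      # placeholder equilibrium position (~1e9, as in the thread)

def f_dim(x):
    """Objective in the original (dimensional) units."""
    return 0.5 * k * (x - x_eq) ** 2

# Characteristic scales chosen from the problem itself
x0 = abs(x_eq)        # length scale
E0 = k * x0 ** 2      # energy scale

def f_nondim(u):
    """Objective in scaled variables: u = x / x0, value = f / E0."""
    return f_dim(u * x0) / E0

# Far from the minimum the dimensional objective is astronomically large,
# while the scaled one stays O(1):
print(f_dim(2.0 * x_eq))   # ~5e85, the kind of magnitude TAO is seeing
print(f_nondim(2.0))       # 0.5
```

With the scaled form, relative convergence tests like ||g||/|f| <= grtol behave sensibly again, since neither the gradient nor the objective is inflated by unit choices.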
Bruce Palmer