Hello,

Is it possible to extract some of the internal parameters needed for the
augmented Lagrangian method?

To be precise, I would like access to the augmented Lagrangian function
handle to use in a subsidiary constrained optimization algorithm.

To illustrate what I would like to achieve, please see the short example below.

import nlopt
import numpy as np
from scipy import optimize

def Rosenbrock(x, grad):
    return optimize.rosen(x)

def mycons1(x, grad):
    return np.dot(x, x) - 4.0

def mycons2(x, grad):
    return 1.0 - np.dot(x, x)

n = 20
maxeval = 20 * (n + 1)
x0 = np.zeros(n)

local_opt = nlopt.opt(nlopt.LN_BOBYQA, n)
local_opt.set_ftol_rel(1e-8)
local_opt.set_initial_step(0.5)

# LN_AUGLAG rather than LD_AUGLAG: the objective and constraints above
# supply no gradients, and BOBYQA is a derivative-free local optimizer.
opt1 = nlopt.opt(nlopt.LN_AUGLAG, n)
opt1.set_local_optimizer(local_opt)
opt1.add_inequality_constraint(mycons1, 1e-8)
opt1.add_inequality_constraint(mycons2, 1e-8)
opt1.set_min_objective(Rosenbrock)
opt1.set_maxeval(maxeval)
x1 = opt1.optimize(x0)


Given the above code, I would like access to the objective function handle
in opt1 so that I can call this objective with alternative values of x.

That is, if L_al is the function handle for the objective in opt1, I would
like to be able to call L_al(x) where x is a different value than x1 or x0.

If access to the function handle is not possible, I could also build the
function myself using the augmented-Lagrangian form from the ALGENCAN
algorithm. For that, however, I need the final values of the penalty
parameter and the Lagrange multipliers at the end of the call. I would have
expected some verbosity option that outputs these values, but I have not
found any information regarding this for the Python interface.
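In case it clarifies the question, here is a minimal sketch of how I imagine
assembling L_al myself, assuming the standard PHR (Powell-Hestenes-Rockafellar)
augmented-Lagrangian form for inequality constraints used by ALGENCAN-style
methods. The penalty parameter rho and the multipliers mu below are placeholder
values, since these are exactly the internal quantities I cannot currently get
out of NLopt:

```python
import numpy as np
from scipy import optimize

def rosenbrock(x):
    return optimize.rosen(x)

def cons1(x):
    return np.dot(x, x) - 4.0   # g1(x) <= 0

def cons2(x):
    return 1.0 - np.dot(x, x)   # g2(x) <= 0

def make_auglag(f, gs, mu, rho):
    """Build L_al(x) = f(x) + (rho/2) * sum_i max(0, mu_i/rho + g_i(x))^2,
    the PHR augmented Lagrangian for inequality constraints g_i(x) <= 0."""
    def L_al(x):
        terms = [max(0.0, m / rho + g(x)) ** 2 for g, m in zip(gs, mu)]
        return f(x) + 0.5 * rho * sum(terms)
    return L_al

# Placeholder multipliers and penalty; the real values would have to come
# from the solver at the end of the AUGLAG run.
L_al = make_auglag(rosenbrock, [cons1, cons2], mu=[0.1, 0.1], rho=10.0)

# L_al can now be evaluated at any point, which is what I am after.
x = np.full(20, 0.5)
val = L_al(x)
```

With zero multipliers and a feasible point, L_al reduces to the plain
objective, which is a quick sanity check on the construction.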

Cheers!
James
_______________________________________________
NLopt-discuss mailing list
NLopt-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/nlopt-discuss
