Again, I'm no expert, but you could always split the equality constraint
into two inequality constraints.  Might be worth a try.
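Something like this, reusing the constraint-function signature from your
sample (an untested sketch; the commented-out calls assume the same `gopt`
object and 0.001 tolerance you already use):

```python
# Split sum(x) == 100 into two inequalities of the form g(x) <= 0.

def con_upper(x, grad):
    # sum(x) - 100 <= 0, i.e. sum(x) <= 100
    return float(sum(x) - 100.0)

def con_lower(x, grad):
    # 100 - sum(x) <= 0, i.e. sum(x) >= 100
    return float(100.0 - sum(x))

# Then, instead of gopt.add_equality_constraint(opt_constraint, 0.001):
# gopt.add_inequality_constraint(con_upper, 0.001)
# gopt.add_inequality_constraint(con_lower, 0.001)
```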

On Tue, Apr 26, 2016, 10:37 PM David Morris <otha...@othalan.net> wrote:

> I've been looking through the NLopt code for AUGLAG.  If I understand
> correctly, the problem is that AUGLAG cannot efficiently find a solution
> satisfying my constraints, and so never returns a valid solution (not even
> a bad one) within the time limit I have defined.  It usually finds a
> solution eventually, just not within an acceptable time for our project.
> The difficulty appears to be that a constraint of the form "sum(x) == 100"
> does not work well with the algorithm.
>
> In particular, I am looking at auglag.c lines 251 - 268, where results are
> saved but which may never be reached if the optimizer is interrupted too
> early by the timeout.
>
> Does this make sense?  Any suggestions for alternative ways to write this
> constraint?
>
> David
>
> On Tue, Apr 26, 2016 at 7:57 PM, David Morris <otha...@othalan.net> wrote:
>
>> I am using NLOpt in a python application and am having problems when
>> setting an equality constraint when using LN_AUGLAG.
>>
>> If I use LN_COBYLA, optimization works perfectly.  However, if I use
>> LN_AUGLAG with LN_COBYLA as the local optimizer, the result is exactly
>> the same as my initial guess.  My goal is to experiment with other
>> optimization algorithms (for example, LN_SBPLX) and use AUGLAG to provide
>> the equality constraint.
>>
>> Can anyone help determine why the equality constraint causes AUGLAG to
>> fail?
>>
>> Below is sample code showing how I use NLOpt.
>>
>>
>>
>> ###################################################################################
>>         # CODE Sample:
>>
>>         args = ( ... )
>>         kw   = { ... }
>>
>>         # Optimization Function for NLOpt
>>         def optfunc(x,grad):
>>             # Big complex function which calculates a single return value:
>>             val = model_p_mixer(x,*args,**kw)
>>             return val
>>
>>         # Constraint Function for NLOpt
>>         #   x -> percentage of total content for each component
>>         #   sum(x) == 100 %
>>         def opt_constraint(x, grad):
>>             val = float(100.0 - x.sum())
>>             return val
>>
>>         gopt = nlopt.opt(nlopt.LN_AUGLAG, len(guess))
>>         lopt = nlopt.opt(nlopt.LN_COBYLA, len(guess))
>>
>>         gopt.set_min_objective(optfunc)
>>
>>         gopt.set_lower_bounds([98.62, 0.0, 0.0])
>>         gopt.set_upper_bounds([99.5,  1.0, 1.0])
>>
>>         gopt.add_equality_constraint(opt_constraint, 0.001)
>>
>>         # Set tolerances to determine when the optimizer stops looking
>> for solutions
>>         gopt.set_xtol_abs(1E-6)
>>         gopt.set_ftol_abs(0.001)
>>         lopt.set_xtol_abs(1E-6)
>>         lopt.set_ftol_abs(0.001)
>>
>>         # Set initial step size
>>         gopt.set_initial_step(0.01)
>>
>>         gopt.set_maxtime(8.0)
>>
>>         gopt.set_local_optimizer(lopt)
>>
>>         # Run the optimizer
>>         mix = gopt.optimize([99.0, 0.5, 0.5])
>>
>>         # Initial  Guess :  [99.0, 0.5 , 0.5 ]
>>         # Expected Result:  [99.2, 0.35, 0.45]
>>         # Actual   Result:  [99.0, 0.5 , 0.5 ]
>>
>> ###################################################################################
>>
>>
>> Thank you,
>>
>> David
>>
> _______________________________________________
> NLopt-discuss mailing list
> NLopt-discuss@ab-initio.mit.edu
> http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/nlopt-discuss
>