Instead of solving

max f(x)
subject to 
sum(x) = 1
g(x) = 0
h(x) <= 0

where x is in R^n

solve 

max f(x(y))
subject to 
g(x(y)) = 0
h(x(y)) <= 0

where y is in R^(n-1) and x(y) = [1 - sum(y), y(1), y(2), …, y(n-1)]

This way sum(x(y)) = 1 for all y. 

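As a rough sketch (not from the thread), this is what the substitution looks like with NLopt's Python interface; the objective f below is just a placeholder and n = 3 is arbitrary:

import numpy as np
import nlopt

n = 3  # dimension of x

def f(x):
    # placeholder objective; its maximum on the simplex is x = (0.5, 0.3, 0.2)
    return -np.sum((x - np.array([0.5, 0.3, 0.2]))**2)

def x_of_y(y):
    # map y in R^(n-1) to x in R^n; sum(x) = 1 holds by construction
    return np.concatenate(([1.0 - np.sum(y)], y))

def objective(y, grad):
    # COBYLA is derivative-free, so grad arrives empty and is ignored
    return f(x_of_y(y))

opt = nlopt.opt(nlopt.LN_COBYLA, n - 1)   # optimize over y, not x
opt.set_max_objective(objective)
opt.add_inequality_constraint(lambda y, grad: np.sum(y) - 1.0, 1e-8)  # keeps x(1) = 1 - sum(y) >= 0
opt.set_lower_bounds(np.zeros(n - 1))     # y >= 0, i.e. x(2:end) >= 0
opt.set_upper_bounds(np.ones(n - 1))
opt.set_xtol_rel(1e-8)

y_opt = opt.optimize(np.full(n - 1, 1.0 / n))
print("x* =", x_of_y(y_opt), "f(x*) =", opt.last_optimum_value())

The equality constraint is gone entirely; all that remains are bound constraints on y and one inequality keeping x(1) nonnegative.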

As a simple example,

max u(c1,c2)
c1 + c2 = 1
c1>= 0
c2 >= 0

becomes

max u(1-c2, c2)
1-c2 >= 0
c2 >= 0
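
A quick numerical check of the substituted one-variable problem (u here is only an illustrative utility, and any bounded one-dimensional optimizer would do; SciPy's is used below just for brevity):

from scipy.optimize import minimize_scalar

def u(c1, c2):
    return c1 * c2   # illustrative utility; the maximizer is c1 = c2 = 0.5

# maximize u(1 - c2, c2) over 0 <= c2 <= 1 by minimizing its negative
res = minimize_scalar(lambda c2: -u(1.0 - c2, c2), bounds=(0.0, 1.0), method="bounded")

c2_star = res.x
print("c1 =", 1.0 - c2_star, "c2 =", c2_star)

The budget constraint c1 + c2 = 1 never has to be imposed; it holds by construction.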

-Grey

> On Apr 27, 2016, at 11:11 AM, David Morris <otha...@othalan.net> wrote:
> 
> On Wed, Apr 27, 2016 at 9:44 PM, Grey Gordon <greygor...@gmail.com> wrote:
> 
> Instead of using sum(x) = 1 as an equality constraint, perhaps you can 
> directly take x(1) = 1 - sum(x(2:end)) and substitute it into the problem 
> directly.
> 
> Grey, can you expand on this a bit?  Do you mean use that as a penalty in the 
> optimization function, or something else?
> 
> David 

