> >> The BOBYQA optimizer takes simple bound constraints into account:
> >>
> >>   lowerBound(i) <= p(i) <= upperBound(i),  0 <= i < n
> >>
> >> where "n" is the problem dimension.
> >>
> >> The parent class ("BaseMultivariateRealOptimizer") currently mandates the
> >> following "optimize" method:
> >> ---CUT---
> >> RealPointValuePair optimize(int maxEval,
> >>                             FUNC f,
> >>                             GoalType goalType,
> >>                             double[] startPoint);
> >> ---CUT---
> >>
> >> I think that the bounds are arguments that should be passed through that
> >> method. The current method definition is a special case: no bound
> >> constraints (or, equivalently, all lower bounds = -infinity, all upper
> >> bounds = +infinity).
> >>
> >> Thus, it seems that adding the following to the API
> >> ---CUT---
> >> RealPointValuePair optimize(int maxEval,
> >>                             FUNC f,
> >>                             GoalType goalType,
> >>                             double[] startPoint,
> >>                             double[] lowerBounds,
> >>                             double[] upperBounds);
> >> ---CUT---
> >> is all there is to do in order to accommodate algorithms like BOBYQA.
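[Side note: a minimal sketch of the consistency check the three-array signature would require, as discussed below. The class and method names are illustrative only, not part of the actual Commons Math API.]

```java
// Hypothetical helper showing the validation burden of the proposed
// three-array "optimize" signature: all arrays must have the same
// length, and the start point must lie within the bounds.
// Names ("BoundsCheck", "validate") are made up for illustration.
public class BoundsCheck {
    public static void validate(double[] startPoint,
                                double[] lowerBounds,
                                double[] upperBounds) {
        if (lowerBounds.length != startPoint.length
            || upperBounds.length != startPoint.length) {
            throw new IllegalArgumentException(
                "bounds arrays must match the start point dimension");
        }
        for (int i = 0; i < startPoint.length; i++) {
            if (startPoint[i] < lowerBounds[i]
                || startPoint[i] > upperBounds[i]) {
                throw new IllegalArgumentException(
                    "start point violates bounds at index " + i);
            }
        }
    }
}
```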
> > [...]
>
> It sounds like a useful addition, which raises one question I'd like
> to ask all of you. I guess all three double[] arrays should have the
> same size, which must be checked, and also documented in the javadoc.
>
> In order to avoid this, what I tend to do at the moment is to define a
> new class --say, BoundedPoint-- which would hold three doubles: an
> initial value, a lower bound and an upper bound. Then I would just
> provide the method with the corresponding array of BoundedPoint,
> instead of three arrays of doubles. Then no dimension mismatch can
> occur, and no additional information needs to be provided in the
> javadoc. Of course, the price to pay is that you have to construct a
> few BoundedPoint instances. As I said, this is what I tend to do at
> the moment, but I have mixed feelings about this solution vs. passing
> 3 double[]. Any thoughts about this side question?

On the principle, you are right of course. But I'm not sure that we
gain much by adding this layer of data encapsulation, because having a
single array (instead of three) is not enough to prevent a
dimensionality error: indeed, the array's dimension must match the
expected argument of the function, and this possible mismatch can only
be detected later, when the function is called (within the "doOptimize"
method). To be completely safe, we would have to introduce an
additional method ("getDimension") in the "MultivariateFunction"
interface. [But this has the drawback that it reduces the generality of
this interface.]

Also, keeping it simple (although not totally safe) by using (arrays
of) a primitive type makes the interface closer to what people are
accustomed to, coming from e.g. the C language. This approach is also
used in the "...VectorialOptimizer" classes (see
"BaseAbstractVectorialOptimizer", where a check is done to verify that
the "target" and "weight" arrays have the same length).
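[Side note: a minimal sketch of the "BoundedPoint" alternative described above, under the assumption that each instance bundles a start value with its two bounds so per-coordinate length mismatches cannot occur. This class does not exist in Commons Math; the names are illustrative.]

```java
// Hypothetical "BoundedPoint": one object per coordinate, holding the
// initial value together with its lower and upper bound. An array of
// these replaces the three parallel double[] arrays, so the only
// remaining mismatch is against the function's own dimension.
public class BoundedPoint {
    private final double start;
    private final double lower;
    private final double upper;

    public BoundedPoint(double start, double lower, double upper) {
        // Consistency is enforced per coordinate at construction time.
        if (lower > upper || start < lower || start > upper) {
            throw new IllegalArgumentException(
                "require lower <= start <= upper");
        }
        this.start = start;
        this.lower = lower;
        this.upper = upper;
    }

    public double getStart() { return start; }
    public double getLower() { return lower; }
    public double getUpper() { return upper; }
}
```

A caller would then build something like `new BoundedPoint[] { new BoundedPoint(0.5, 0.0, 1.0), ... }` instead of three separate arrays, trading a little construction overhead for the impossibility of a length mismatch between the arrays themselves.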
That said, I agree that it would be nicer to have a completely uniform
and integrated set of interfaces. The current design is already quite a
rework of the previous state of affairs (see versions 2.1 and 2.2),
with much of the inconsistency and code duplication corrected. However,
similarly to other design issues raised during the last months, it is
not clear that the obvious solution at this point would lead to a clear
improvement in the overall design. I mean that the code might in fact
need a more thorough refactoring, not some patch here and there. But
that work would be for the next++ major release. :-)

Best,
Gilles