2011/8/8 Phil Steitz <phil.ste...@gmail.com>:
>
> Sounds reasonable.  Have a look at the StoppingCondition interface
> and its implementations in the genetics package.  Something like
> that would probably work.
>
> Phil
>

I wasn't aware of o.a.c.m.genetics.StoppingCondition. It's exactly
what I would need, only slightly more general: instead of a
Population, the StoppingCondition should be passed the "state"
(whatever that means for a given algorithm) of the iterative solver.

So at the moment, stopping criteria are implemented in at least two
different places in Commons Math
  - o.a.c.m.optimization.ConvergenceChecker.converged(int iteration,
PAIR previous, PAIR current)
  - o.a.c.m.genetics.StoppingCondition.isSatisfied(Population population)

Do we want to reconcile these two implementations of the same
concept? I think it would be worth giving some thought to iterative
algorithms at large, and designing a few general classes that support
the implementation of such algorithms.
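
Just to fix ideas, here is a very rough sketch of what a unified
interface could look like. StoppingCriterion and the STATE type
parameter are placeholders of my own, not existing Commons Math
types:

    /**
     * Sketch of a unified stopping criterion, parameterized on the
     * "state" of the iterative algorithm (a Population, a RealVector,
     * a pair of previous/current points, ...). All names are
     * placeholders only.
     */
    public interface StoppingCriterion<STATE> {
        /**
         * @param iteration the current iteration number
         * @param state     the current state of the algorithm
         * @return true if the iterations should stop
         */
        boolean isSatisfied(int iteration, STATE state);
    }

The existing ConvergenceChecker would then be the special case where
the state is the (previous, current) pair, and the genetics
StoppingCondition the case where the state is the Population.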

Here are a few (obvious) thoughts on what makes an iterative
algorithm. Maybe that could help.
  - Iterative algorithms obviously iterate, so they probably keep an
eye on the current iteration number. Besides, an upper bound on the
total number of iterations must be set. As was suggested, this hints
at the use of o.a.c.m.util.Incrementor.
  - Iterative algorithms update their *state*, whether this is a
Population, a RealVector, or anything. So from one iteration to the
next, the state changes.
  - The iterative scheme itself can be quite long, and it might be
interesting to have a way to *listen* to the algorithm while the
iterations take place (for example, to back up the whole thing). This
relates to the thread on "Monitors".
  - The last two points bring up a new question (I think). Let us
imagine that at the end of each iteration, the Iterative Algorithm
calls monitor.update(state). Then the monitor can access the updated
state of the Iterative Algorithm, and possibly *alter* it, which
might ruin the whole thing. This would call for monitor.update()
being passed a *copy* of the updated state, which might be
inefficient if the computation works on large data sets, no? (A
possible way around the copy is sketched after this list.)
  - The iterations should stop at some point. If the maximum iteration
count has been reached, an exception should generally (but not
necessarily) be thrown; this is taken care of by the Incrementor.
Otherwise, it means that the algorithm has reached a *satisfactory*
state. What "satisfactory" means depends on the actual problem at
hand. For optimization purposes, it really means "stable": the
current solution is not much different from the previous one (so the
previous state must be kept somewhere). For linear solvers, it means
that the residual is small enough (only the current state is used).
Both approaches might be reconciled by a single method, say
hasConverged(), which would take anything as a parameter (a pair of
RealPointValuePair --previous and current--, a residual, ...). A
skeleton putting these pieces together is sketched after this list.
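
To put the pieces above together, here is a rough skeleton of what a
generic iterative algorithm could look like. Apart from Incrementor
(whose exact package depends on the trunk version), all names --
IterativeAlgorithm, IterationListener, and the StoppingCriterion from
the earlier sketch -- are made up for the example, so take this as a
sketch only:

    import org.apache.commons.math.util.Incrementor;

    /** Hypothetical observer interface (placeholder names). */
    interface IterationListener<STATE> {
        /** Called at the end of each iteration. */
        void iterationPerformed(int iteration, STATE state);
    }

    /** Rough skeleton of a generic iterative algorithm. */
    public abstract class IterativeAlgorithm<STATE> {
        /** Counts iterations and enforces the maximum count. */
        private final Incrementor iterations = new Incrementor();
        /** Decides when a satisfactory state has been reached. */
        private final StoppingCriterion<STATE> criterion;
        /** Observer notified at the end of each iteration. */
        private final IterationListener<STATE> listener;

        protected IterativeAlgorithm(int maxIterations,
                                     StoppingCriterion<STATE> criterion,
                                     IterationListener<STATE> listener) {
            this.iterations.setMaximalCount(maxIterations);
            this.criterion = criterion;
            this.listener = listener;
        }

        /** Performs a single iteration on the current state. */
        protected abstract STATE iterate(STATE current);

        /** Runs the iterations until the criterion is satisfied. */
        public STATE solve(STATE initial) {
            STATE state = initial;
            while (!criterion.isSatisfied(iterations.getCount(), state)) {
                // Throws if the maximum iteration count is exceeded.
                iterations.incrementCount();
                state = iterate(state);
                listener.iterationPerformed(iterations.getCount(), state);
            }
            return state;
        }
    }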
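
As for the copy-versus-alter question above, one possible way out
(again, just a thought, not an existing API) would be to hand the
listener a *read-only view* of the state, so that no defensive copy
is needed:

    /** Read-only view of a vector-like state (placeholder names). */
    interface ReadOnlyState {
        int getDimension();
        double getEntry(int index);
    }

    /** Mutable state used internally by the algorithm. */
    class VectorState implements ReadOnlyState {
        private final double[] data;

        VectorState(int dimension) {
            this.data = new double[dimension];
        }

        public int getDimension() {
            return data.length;
        }

        public double getEntry(int index) {
            return data[index];
        }

        /** Only the algorithm calls this; listeners never see it. */
        void setEntry(int index, double value) {
            data[index] = value;
        }
    }

The algorithm keeps the mutable VectorState for itself and only
exposes the ReadOnlyState to listeners, so nothing needs to be copied
at each iteration.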

So a few concepts emerge: Iterative Algorithm, State, Incrementor,
Observer (or monitor, or listener...), Convergence Criterion.

Maybe this takes us too far into philosophical territory, in which
case the discussion should be closed soon. As for me, I'm very much
interested...

Sebastien
