On Thu, 24 Sep 2015 17:02:15 -0500, Ole Ersoy wrote:
On 09/24/2015 03:23 PM, Luc Maisonobe wrote:
Le 24/09/2015 21:40, Ole Ersoy a écrit :
Hi Luc,
I gave this some more thought, and I think I may have tapped out too
soon, even though you are absolutely right about what an exception does
in terms of bubbling execution up to a point where it stops or we
handle it.
Suppose we have an Optimizer and an Optimizer observer. The optimizer
will emit one of the following events in the process of stepping
through to the maximum number of iterations it is allotted:
- SOLUTION_FOUND
- COULD_NOT_CONVERGE_FOR_REASON_1
- COULD_NOT_CONVERGE_FOR_REASON_2
- END (max iterations reached)
So we have the observer interface:
interface OptimizerObserver {
    success(Solution solution)
    update(Enum reason, Optimizer optimizer)
    end(Optimizer optimizer)
}
So if the Optimizer notifies the observer of `success`, then the
observer does what it needs to with the results and moves on. If the
observer gets an `update` notification, that means that given the
current [constraints, number of iterations, data] the optimizer cannot
finish. But the update method receives the optimizer, so it can adapt
it and tell it to continue, or just trash it and try something
completely different. If the `END` event is reached, then the Optimizer
could not finish given the number of allotted iterations. The Optimizer
is passed back via the callback interface, so the observer could allow
more iterations if it wants to... perhaps based on some metric
indicating how close the optimizer is to finding a solution.
What this could do is allow the implementation of the observer to throw
the exception if 'All is lost!', in which case the Optimizer does not
need an exception. Totally understand that this may not work
everywhere, but it seems like it could work in this case.
WDYT?
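The proposal above could be sketched roughly as follows. All class and
method names here are hypothetical (none of them exist in Commons
Math), and the "optimizer" is just a toy bisection solver standing in
for a real one:

```java
import java.util.function.DoubleUnaryOperator;

// Hypothetical names, for illustration only -- not Commons Math API.
enum Reason { COULD_NOT_CONVERGE_FOR_REASON_1, COULD_NOT_CONVERGE_FOR_REASON_2 }

interface OptimizerObserver {
    void success(double solution, ToyOptimizer optimizer);
    void update(Reason reason, ToyOptimizer optimizer); // recoverable stall
    void end(ToyOptimizer optimizer);                   // iteration budget exhausted
}

/** Toy bisection "optimizer" that reports its outcome through the observer. */
class ToyOptimizer {
    private int maxIterations;

    ToyOptimizer(int maxIterations) { this.maxIterations = maxIterations; }

    /** The observer may call this from end() before asking for a retry. */
    void allowMoreIterations(int extra) { maxIterations += extra; }

    void solve(DoubleUnaryOperator f, double lo, double hi, double tol,
               OptimizerObserver observer) {
        for (int i = 0; i < maxIterations; i++) {
            if (hi - lo < tol) {                   // converged: notify and stop
                observer.success(0.5 * (lo + hi), this);
                return;
            }
            double mid = 0.5 * (lo + hi);          // plain bisection step
            if (f.applyAsDouble(lo) * f.applyAsDouble(mid) <= 0) hi = mid;
            else lo = mid;
        }
        observer.end(this);  // budget exhausted; no exception thrown here
    }
}
```

An observer that decides "all is lost" can still throw from its own
end() implementation, so the exception policy lives with the caller
rather than inside the optimizer, which is the point of the proposal.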
With this version, you should also pass the optimizer in case of
success. In most cases, the observer will just ignore it, but in some
cases it may try to solve another problem, or to solve again with
stricter constraints, using the previous solution as the start point
for the more stringent problem. Another case would be to go from a
simple problem to a more difficult problem using some kind of homotopy.
Great - whoooh - glad you like this version a little better - for a
sec I thought I had completely lost it :).
IIUC, I don't like it: it looks like "GOTO"...
Note to self ... cancel
therapy with Dr. Phil. BTW - Gilles - this could also be used as a
lightweight logger.
I don't like this either (reinventing the wheel).
The Optimizer could publish information deemed
interesting on each ITERATION event.
If we'd go for an "OptimizerObserver" that gets called at every
iteration, there shouldn't be any overlap between it and "Optimizer":
the iteration limit should be dealt with by the observer, and the
iterative algorithm would just run "forever" until the observer is
satisfied with the current state (solution is good enough, or the
allotted resources - be they time, iterations, evaluations, ... - are
exhausted).
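That division of labour could be sketched like this (hypothetical
names again, with a Newton step as the stand-in algorithm): the loop
below has no iteration limit of its own; the observer alone decides
when to stop, whether because the estimate is good enough or because
its budget is spent:

```java
import java.util.function.DoubleUnaryOperator;

/** Called after every iteration; returning false tells the algorithm to stop. */
interface IterationObserver {
    boolean iterationPerformed(int iteration, double currentEstimate);
}

class UnboundedSolver {
    /** Newton iteration with no built-in limit: stopping is the observer's job. */
    static double solve(DoubleUnaryOperator f, DoubleUnaryOperator df,
                        double x0, IterationObserver observer) {
        double x = x0;
        for (int i = 1; ; i++) {                               // runs "forever"
            x -= f.applyAsDouble(x) / df.applyAsDouble(x);     // Newton step
            if (!observer.iterationPerformed(i, x)) return x;  // observer decides
        }
    }
}
```

A caller then encodes both the tolerance and the resource budget in a
single lambda, e.g.
`(i, x) -> Math.abs(residual(x)) > tol && i < budget`.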
The observer could then be wired with SLF4J and perform the same type
of logging that the Optimizer would perform. So CM could declare SLF4J
as a test dependency, and unit tests could log iterations using it.
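A minimal sketch of such a logging observer, using java.util.logging
as a dependency-free stand-in for SLF4J (with SLF4J, the only changes
would be `LoggerFactory.getLogger(...)` and `log.debug(...)`); the
class name is hypothetical:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical logging observer; not a Commons Math class.
class LoggingIterationObserver {
    private static final Logger LOG =
            Logger.getLogger(LoggingIterationObserver.class.getName());

    /** Invoked by the optimizer on each ITERATION event. */
    void iterationPerformed(int iteration, double residual) {
        // Supplier form: the message is only built if FINE is enabled.
        LOG.log(Level.FINE,
                () -> String.format("iteration %d, residual %.3e",
                                    iteration, residual));
    }
}
```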
As a "user", I'm interested in how the algorithms behave on my problem,
not in the CM unit tests.
The question remains unanswered: why not use slf4j directly?
Lombok also has an @Slf4j annotation that's pretty sweet. Saves the
SLF4J boilerplate.
I understand that using annotations can be a time-saver, but IMO not
so much for a library like CM; so in this case, the risk of depending
on another library must be weighed against the advantages.
Regards,
Gilles
Cheers,
- Ole
[...]
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org