On Mon, May 16, 2011 at 02:39:01PM -0700, Phil Steitz wrote:
> On 5/16/11 3:44 AM, Dr. Dietmar Wolz wrote:
> > Nikolaus Hansen, Luc and I discussed this issue in Toulouse.

Reading that, I've been assuming that...

> > We have two options to handle this kind of failure in tests of stochastic
> > optimization algorithms:
> > 1) Fixed random seed - but this reduces the value of the test
> > 2) Using the RetryRunner - preferred solution
> >
> > @Retry(3) should be sufficient for all tests.
> >
> The problem with that is that it is really equivalent to just
> cubing (i.e. drastically reducing) the sensitivity of the test:
> if, e.g., the test picks up anomalies with stochastic probability
> of less than alpha as is, making it retry three times reduces that
> sensitivity to alpha^3, since a problem is then only reported when
> all three runs fail.  I think the right answer here is to find out
> why the test is failing with higher than, say, .001 probability
> and fix the underlying problem.  If the test itself is too
> sensitive, then we should fix that.  Then switch to a fixed seed
> for the released code, reverting to random seeding when the code
> is under development.

... they had settled on the best approach for the class at hand.
[I.e. we had raised the possibility that there could be a bug in the code
that triggered the test failures, but IIUC they have now concluded that the
code is fine and that failures are expected to happen sometimes.]
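
[To put rough numbers on Phil's point: if a bad change made a single run
fail with probability, say, 0.05, an @Retry(3) test would only report it
when all three attempts fail, i.e. with probability 0.05^3 = 1.25e-4.]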

It still seems strange that it is always the same two tests that fail.
Is there an explanation for this behaviour that we might add as a comment
in the test code?
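
Also, for the record, the two options mentioned above would look roughly
as below in a test class (the class here is made up, and I am quoting the
RetryRunner/@Retry and random-generator package names from memory, so
treat it as a sketch rather than a patch):

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.apache.commons.math.Retry;
    import org.apache.commons.math.RetryRunner;
    import org.apache.commons.math.random.MersenneTwister;
    import org.apache.commons.math.random.RandomGenerator;

    @RunWith(RetryRunner.class)
    public class StochasticOptimizerTest {

        // Option 1: fixed seed -> deterministic runs, but the test never
        // exercises a new random sequence, which reduces its value.
        @Test
        public void testWithFixedSeed() {
            RandomGenerator rng = new MersenneTwister(123456789L);
            // ... run the optimizer with "rng" and assert on the result ...
        }

        // Option 2: random seed; RetryRunner reruns a failed test, so the
        // test only fails if all three attempts fail.
        @Test
        @Retry(3)
        public void testWithRandomSeed() {
            RandomGenerator rng = new MersenneTwister();
            // ... run the optimizer with "rng" and assert on the result ...
        }
    }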


Gilles

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org
