Luc Maisonobe wrote:
> Gump wrote:
>> To whom it may engage...
>>
>> This is an automated request, but not an unsolicited one. For
>> more information please visit http://gump.apache.org/nagged.html,
>> and/or contact the folk at gene...@gump.apache.org.
>>
>> Project commons-math has an issue affecting its community integration.
>> This issue affects 1 projects,
>> and has been outstanding for 2 runs.
>
> [snip]
>
>> BUILD FAILED
>> /srv/gump/public/workspace/apache-commons/math/build.xml:199: There were
>> test failures.
>
> The failed test is once again RandomDataTest.testNextPoissonConsistency.
> The output is:
>
> [junit] Testcase: testNextPoissonConsistency took 0.596 sec
> [junit]     FAILED
> [junit] Chisquare test failed for mean = 2.0 p-value =
> 3.5409049905721357E-4 chisquare statistic = 20.7552099562672.
> [junit] bin       expected  observed
> [junit] [0,1)       135.34       165
> [junit] [1,3)       541.34       572
> [junit] [3,5)       270.67       226
> [junit] [5,6)        36.09        23
> [junit] [6,inf)      16.56        14
> [junit] This test can fail randomly due to sampling error with
> probability 0.0010.
>
> I think this is the third time in less than 6 months and the second time
> in a row that this test has failed, so the 0.001 failure probability seems
> exceeded. Is this related to the comment we find in the test source:
>
> // TODO: When MATH-282 is resolved, s/3000/10000 below
>
> Would it be sensible to add some loop around the test and consider that
> it fails only if 2 or 3 successive iterations all fail? Would this
> really test anything? Should this test be used only manually during
> development and removed from the suite?
>
> I am puzzled by tests that can randomly fail and belong to an automated
> test suite.
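For context, the check that failed above is a chi-square goodness-of-fit
test on binned Poisson(2) samples. The sketch below is not the actual
RandomDataTest code: the sample count (1000) and the bin boundaries are
inferred from the expected counts in the quoted output, and the class name
is made up. It only shows the shape of such a check against the
commons-math 2.x API:

    import org.apache.commons.math.MathException;
    import org.apache.commons.math.distribution.PoissonDistribution;
    import org.apache.commons.math.distribution.PoissonDistributionImpl;
    import org.apache.commons.math.random.RandomData;
    import org.apache.commons.math.random.RandomDataImpl;
    import org.apache.commons.math.stat.inference.ChiSquareTest;
    import org.apache.commons.math.stat.inference.ChiSquareTestImpl;

    public class PoissonConsistencySketch {
        public static void main(String[] args) throws MathException {
            final double mean = 2.0;
            final int n = 1000;                  // sample count (inferred)
            final int[] binUpper = {1, 3, 5, 6}; // [0,1) [1,3) [3,5) [5,6) [6,inf)

            // Draw n Poisson(mean) deviates and tally them into bins.
            RandomData randomData = new RandomDataImpl();
            long[] observed = new long[binUpper.length + 1];
            for (int i = 0; i < n; i++) {
                long value = randomData.nextPoisson(mean);
                int bin = 0;
                while (bin < binUpper.length && value >= binUpper[bin]) {
                    bin++;
                }
                observed[bin]++;
            }

            // Expected counts for the same bins, from the distribution itself.
            PoissonDistribution poisson = new PoissonDistributionImpl(mean);
            double[] expected = new double[binUpper.length + 1];
            double lowerCdf = 0;
            for (int bin = 0; bin < binUpper.length; bin++) {
                double upperCdf = poisson.cumulativeProbability(binUpper[bin] - 1);
                expected[bin] = n * (upperCdf - lowerCdf);
                lowerCdf = upperCdf;
            }
            expected[binUpper.length] = n * (1 - lowerCdf); // tail bin [6,inf)

            // Goodness-of-fit p-value; the test fails when it drops below
            // alpha = 0.001, which a correct generator will do about once
            // in a thousand runs -- the quoted false-failure probability.
            ChiSquareTest chiSquare = new ChiSquareTestImpl();
            double pValue = chiSquare.chiSquareTest(expected, observed);
            System.out.println("chi-square p-value = " + pValue);
        }
    }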
We could disable this test case for now, until MATH-282 is resolved, but
I am not keen on removing it, as (I think) the failures really are
pointing to sickness - which in this case is the Gamma function issue in
MATH-282.

The test case is (most likely [1]) already being retried once
(RandomDataTest extends RetryTestCase, which re-runs tests when there is
a failure); so in fact the probability of false failure is (probably)
(.001)^2. What is displayed is the output of the second consecutive
failed test.

I am ambivalent on whether or not tests that have a small positive
probability of false failure should be included in our unit tests. I
don't personally see it as a big deal if we get a false failure now and
then. If anyone else has a better idea of how to test the data
generation utilities, I am open to changing them. The tests in there now
were certainly useful in development and have flagged some problems when
changing the code, so I would like to at least maintain something like
them.

[1] IIRC, RetryTestCase repeats *all* tests each time *any* test fails,
so it is possible that the first failure was not
testNextPoissonConsistency. I think that is unlikely, though, as this is
the only test case that has been reported failing recently.

Phil

> Luc
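To make the retry arithmetic concrete: in JUnit 3 terms, a
retry-once-on-failure base class can be sketched as below. This is only
an illustration (the class name is made up), not the actual
RetryTestCase source, which per [1] apparently re-runs all tests when
any one fails rather than retrying a single method:

    import junit.framework.TestCase;

    // Illustration only: retries one test method once on failure.
    public abstract class RetryOnceTestCase extends TestCase {

        // If the first attempt throws, make one more attempt. With
        // independent attempts, a test whose false-failure probability
        // is p = 0.001 is falsely reported as failing with probability
        // p^2 = 1e-6.
        protected void runTest() throws Throwable {
            try {
                super.runTest();
            } catch (Throwable firstFailure) {
                // Second and final attempt; a failure here propagates
                // to JUnit and is reported as usual.
                super.runTest();
            }
        }
    }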