: This is doable by enabling/disabling test groups. A new build plan
: would need to be created that would do:
:
:   ant -Dtests.haltonfailure=false -Dtests.awaitsfix=true -Dtests.unstable=true test
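(For context, a minimal sketch of what a test gated by that group looks like, assuming the LuceneTestCase.AwaitsFix annotation is what's wired to the tests.awaitsfix system property; the class name and bug URL below are invented for illustration.)

  // sketch only: assumes LuceneTestCase.AwaitsFix is the annotation behind
  // the tests.awaitsfix property; class name and bug URL are placeholders
  import org.apache.lucene.util.LuceneTestCase;
  import org.apache.lucene.util.LuceneTestCase.AwaitsFix;

  public class TestFlakyFeature extends LuceneTestCase {

    // skipped by default; runs only when the group is enabled,
    // e.g. with -Dtests.awaitsfix=true on the ant command line
    @AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/LUCENE-0000")
    public void testSporadicFailure() throws Exception {
      // the actual (sporadically failing) test body would go here
    }
  }
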
right ... that's an idea that came up the other day when I was talking to simon at revolution.

another idea that came up this morning talking with rmuir is to have a new ant target (ie: "test-needs-fix") that could be run as part of the same jenkins "run all tests continuously" build -- a target that only runs the @AwaitsFix group and overrides haltonfailure when calling the junit macro. (which would save us an extra jenkins run)

: Doable, but honestly this seems like more work (scripts for collecting
: stats, test groups are trivial) than trying to fix those two or three
: tests that fail?

but there are 3 distinct (in my mind) issues here that all of this helps address:

1) those "two or three" tests that fail and should be fixed ... I agree, we should fix them, but when you can't reproduce the failures at all, it's really hard to iterate and figure out what needs to be fixed. if jenkins is where they fail, we need a way for jenkins to run them so we can see whether our attempted fixes work.

2) all of that is doubly painful when they only fail sporadically. it might take a week of constant jenkins testing to discover that your fix decreased the likelihood of failure but didn't completely fix the problem -- a test might fail an average of 1 in 5 runs, and someone working on improving it might only reduce the failure rate to 1 in 20 runs. such tests need to be running constantly (with some way to review the rate of failure) for anyone to make progress on them.

3) we have some tests that demonstrate bugs no one has ever fixed, because they are too hard to fix or we don't have a good solution for them. some of these tests are committed but @Ignore'd, some are committed but commented out, and some are sitting in patches in jira waiting for the patch to be expanded to include the code that fixes them. it would be nice if all of those tests could be committed, uncommented, and run on every build, failing 100% of the time, so the known weaknesses in the code that no one has the time/energy/ideas to fix would be more publicly visible -- people evaluating lucene/solr and looking at the tests could see in a very clear way "testDoSomeStuffWithFeatureXandFeatureYTogether() fails 100% of the time, so I probably shouldn't use X and Y together". (a rough before/after sketch of this follows below.)

I think if running "ant test test-needs-fix" executed all the normal tests (which fail the build) as well as all the @AwaitsFix tests (which wouldn't fail the build), and just generated the normal junit test output -- with the pretty graph showing all the failing @AwaitsFix tests in red, so people could realistically see "some stuff doesn't work, and some of that stuff doesn't work sporadically" -- then it would benefit us in multiple ways: more data to help fix sporadically failing tests, and more open honesty about what doesn't work. and when people fix an @AwaitsFix test, they just remove the annotation.

-Hoss

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
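(The before/after sketch referenced in point 3 above -- a hedged illustration, not from the thread; the class, method name, and bug URL are invented. The idea is to commit a known-bug reproducer under @AwaitsFix instead of silencing it with JUnit's @Ignore, so it keeps running, and failing visibly, in the proposed needs-fix job.)

  import org.apache.lucene.util.LuceneTestCase;
  import org.apache.lucene.util.LuceneTestCase.AwaitsFix;

  public class TestKnownWeakness extends LuceneTestCase {

    // before: silenced with @org.junit.Ignore (or commented out), so the
    //         known weakness never shows up in any test report
    // after:  marked @AwaitsFix so it runs in the needs-fix group and
    //         fails 100% of the time, keeping the limitation visible
    @AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/SOLR-0000")
    public void testDoSomeStuffWithFeatureXandFeatureYTogether() throws Exception {
      // placeholder standing in for the reproducer of the unfixed bug;
      // once the bug is fixed, the annotation is simply removed
      fail("known, unfixed bug: feature X and feature Y do not work together");
    }
  }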