[ https://issues.apache.org/jira/browse/SOLR-10032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15838140#comment-15838140 ]
Mark Miller commented on SOLR-10032:
------------------------------------
bq. If there's stuff we can do to raise the visibility of such tests, that
will be a start to getting all of us more aware of the problem and moving
toward a long-term solution.
I think it really depends on how much effort we can sustain on this over time,
but ideally we would do the following:
Take the first report and file JIRA issues (or find current ones) for all of
the tests that are more than a bit flakey. Push on the authors and contributors
of those tests to get them solid. Generating reports for just the worst tests
is actually a pretty fast feedback loop. For tests that have failures, I have
all the logs to attach to the JIRA issue.
If we can get out a report with all tests within a certain flakey cutoff range,
and if we can regularly generate this report over time, it will be relatively
easy to spot new bad tests or tests that re-enter a bad state, and we can file
high-priority JIRAs (or reopen JIRAs) and/or ignore them with AwaitsFix
annotations.
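To be concrete, muting a test looks something like the hypothetical class below. @AwaitsFix itself is the existing LuceneTestCase annotation; the class name, method, and bugUrl here are made up for illustration.

{code:java}
import org.apache.lucene.util.LuceneTestCase;
import org.apache.lucene.util.LuceneTestCase.AwaitsFix;

// Hypothetical flakey test, muted until the linked JIRA is resolved;
// the test runner skips any class or method carrying @AwaitsFix.
@AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/SOLR-XXXXX")
public class FlakeyRecoveryTest extends LuceneTestCase {
  public void testRecoveryAfterLeaderLoss() throws Exception {
    // Test body elided; the annotation above keeps the whole class out
    // of normal runs until the underlying flakiness is fixed.
  }
}
{code}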
Once we get the report under a basic level, we can be harder on tests that
creep into a danger zone. It really depends on whether we get enough momentum,
but I'm willing to give it a try.
One thing I've tried to do is create a rating for different failure rates, the
idea being that we first work on the tests worse than the 'flakey' rating. Once
we achieve that, we can be very hard on tests that go above that rating, while
also hardening the least flakey tests over time.
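In rough code, the bucketing works something like the sketch below. The thresholds, rating names, and failure counts are made-up illustrations, not the values behind the attached report.

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

// Rough sketch of rating tests by failure rate. The cutoffs and names
// below are illustrative assumptions, not the report's actual values.
public class FlakeyRating {

  static String rate(int fails, int runs) {
    double failRate = runs == 0 ? 0.0 : (double) fails / runs;
    if (failRate == 0.0) return "SOLID";
    if (failRate < 0.01) return "OK";     // likely environmental noise
    if (failRate < 0.05) return "FLAKEY"; // the cutoff we push below first
    return "BAD";                         // file/reopen a JIRA or AwaitsFix it
  }

  public static void main(String[] args) {
    // Failures out of 1000 beasting runs per test (made-up names/numbers).
    Map<String, Integer> failsPer1000Runs = new LinkedHashMap<>();
    failsPer1000Runs.put("TestExampleRecovery", 120);
    failsPer1000Runs.put("TestExampleCloud", 30);
    failsPer1000Runs.put("TestExampleGet", 2);
    failsPer1000Runs.forEach((test, fails) ->
        System.out.printf("%-22s %5.1f%% -> %s%n",
            test, fails / 10.0, rate(fails, 1000)));
  }
}
{code}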
> Create report to assess Solr test quality at a commit point.
> ------------------------------------------------------------
>
> Key: SOLR-10032
> URL: https://issues.apache.org/jira/browse/SOLR-10032
> Project: Solr
> Issue Type: Task
> Security Level: Public (Default Security Level. Issues are Public)
> Components: Tests
> Reporter: Mark Miller
> Assignee: Mark Miller
> Attachments: Test-Report-Sample.pdf
>
>
> We have many Jenkins instances blasting tests: some official, some Policeman,
> and I and others have or have had their own. The email trail proves the power
> of the Jenkins cluster to find test failures.
> However, I still have a very hard time with some basic questions:
> Which tests are flakey right now? Which test failures actually affect devs
> most? Did I break it? Was that test already flakey? Is that test still
> flakey? What are our worst tests right now? Is that test getting better or
> worse?
> We really need a way to see exactly which tests are the problem, not because
> of OS or environmental issues, but because of more basic test quality issues:
> which tests are flakey, and how flakey are they at any point in time?