[ https://issues.apache.org/jira/browse/SOLR-10032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15838489#comment-15838489 ]

Mark Miller commented on SOLR-10032:
------------------------------------

I think we would be left with too much of a test coverage problem if we took 
that approach.

I'd instead like to push gradually, though perhaps quickly by 'Apache time' 
standards.

First, I will create critical issues for the worst offenders; if they cannot 
be fixed pretty much right away, I will BadApple or AwaitsFix them.
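
For anyone not familiar with the mechanism, a rough sketch of what that looks 
like (the class names, test bodies, and bug URL below are placeholders; the 
annotations are the ones the Lucene test framework already provides):

    import org.apache.lucene.util.LuceneTestCase.AwaitsFix;
    import org.apache.lucene.util.LuceneTestCase.BadApple;
    import org.apache.solr.SolrTestCaseJ4;

    // Known-flakey suite: stays in the codebase, but runs can exclude it
    // through the tests.badapples property rather than deleting coverage.
    @BadApple(bugUrl = "https://issues.apache.org/jira/browse/SOLR-10032")
    public class HypotheticalFlakeyTest extends SolrTestCaseJ4 {

      public void testFlakeyThing() throws Exception {
        // flakey test body elided
      }

      // Outright broken: skipped entirely until the linked issue is fixed.
      @AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/SOLR-10032")
      public void testOutrightBroken() throws Exception {
        // broken test body elided
      }
    }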

I'll also create critical issues for other failures above a certain threshold 
and ping the appropriate JIRA issues to try and bring attention to them. Over 
time we can ignore these as well if they are not addressed and no one finds 
them important enough to keep the coverage.

We can then tighten this net down to a certain level. 

I think if we commit to following through on some progress, we can take an 
iterative approach that gives people ample time to fix important tests, and 
gives us time to evaluate any loss of important test coverage. Even flakey 
test coverage is very valuable information to us right now; some flakey tests 
pass 90%+ of the time - we want to harden them, but they provide critical 
coverage in many cases.
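
To make the threshold idea concrete, a rough sketch of the kind of 
calculation involved (all names below are hypothetical, not an existing 
tool). Given per-test pass/fail counts aggregated from Jenkins runs, it flags 
tests whose failure rate exceeds a configurable bar; tightening the net just 
means lowering that bar over time:

    import java.util.Map;

    public class FlakeyTestReport {

      /** Failure rate of a test over its recorded runs, in [0, 1]. */
      static double failureRate(int failures, int runs) {
        return runs == 0 ? 0.0 : (double) failures / runs;
      }

      /** Print every test failing more often than the given threshold. */
      static void reportOffenders(Map<String, int[]> countsByTest,
                                  double threshold) {
        // countsByTest maps test name -> {failures, total runs}
        countsByTest.forEach((test, c) -> {
          double rate = failureRate(c[0], c[1]);
          if (rate > threshold) {
            System.out.printf("%s: %.1f%% fail rate (%d of %d runs)%n",
                test, rate * 100, c[0], c[1]);
          }
        });
      }
    }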

> Create report to assess Solr test quality at a commit point.
> ------------------------------------------------------------
>
>                 Key: SOLR-10032
>                 URL: https://issues.apache.org/jira/browse/SOLR-10032
>             Project: Solr
>          Issue Type: Task
>      Security Level: Public (Default Security Level. Issues are Public)
>          Components: Tests
>            Reporter: Mark Miller
>            Assignee: Mark Miller
>         Attachments: Test-Report-Sample.pdf
>
>
> We have many Jenkins instances blasting tests - some official, some 
> Policeman's, and I and others have or had our own - and the email trail 
> proves the power of the Jenkins cluster to find test failures.
> However, I still have a very hard time with some basic questions:
> What tests are flakey right now? Which test failures actually affect devs 
> most? Did I break it? Was that test already flakey? Is that test still 
> flakey? What are our worst tests right now? Is that test getting better or 
> worse?
> We really need a way to see exactly which tests are the problem - not 
> because of OS or environmental issues, but because of more basic test 
> quality issues: which tests are flakey, and how flakey they are at any 
> point in time.


