If code coverage goes down, or does not reach the required cutoff, because a
patch adds toString() or getter/setter-like functionality, can we make it a
process to explain the reason in the JIRA before committing?

Regarding changes in hard-to-unit-test parts of the code, can we make it a
process to link the JIRA that discusses rewriting that component of C*?
Linking JIRAs will help us learn how many changes have gone in with little or
no testing because of such components.

On Tue, Mar 28, 2017 at 8:38 AM, Jason Brown <jasedbr...@gmail.com> wrote:

> re: code coverage (this could become its own thread, and probably should)
>
> As somebody working on a new large feature (CASSANDRA-8457), I have a few
> thoughts/observations about code coverage. As a preface, though, I'll point
> out that we have jacoco (http://www.jacoco.org/jacoco/trunk/index.html)
> already incorporated into the build, so it's quite easy to generate code
> coverage reports ('ant codecoverage' runs the tests and creates the jacoco
> binary, and 'ant jacoco-report' creates an html dump).
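>
> Roughly, that workflow is just the following (a sketch - the report's output
> location is whatever build.xml configures):
>
>     # run the test suite and produce the jacoco binary coverage data
>     ant codecoverage
>
>     # render the collected data as an HTML report
>     ant jacoco-report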
>
> As CASSANDRA-8457 is largely additive, it's pretty easy for me to confirm
> the code coverage: it's a whole new package, so it's obvious what I didn't
> cover in the test suite. Contrast this with the rest of the code base. It
> may be harder to discern, from a simple percentage output, just how much of
> an impact the added tests have across the entire code base. Further, if a
> given patch adds minor functionality (say, a one-line fix) plus implements
> some toString() functions, you might actually see the coverage go *down* if
> the patch doesn't include tests for the toString().
>
> Also, for things that just can't be unit tested in the current code base,
> but where correctness can be ensured with a dtest, that coverage wouldn't be
> captured by the coverage tools. Of course, this is where incremental changes to
> the code base to allow unit testing are really awesome and appreciated.
> I've been encouraging some folks to do that with their submissions. It
> makes their patches a bit more involved (for them to code and for me to
> review), but hopefully it improves the overall quality of the code.
>
> Thus, imho, for new/large features, sure, having a high coverage is an
> awesome thing: I'd think a minimum of 80%, with a "highly recommended"
> target of > 90%. For smaller fixes, I'm not so sure, but I think a
> reasonable agreement between the contributor and reviewer, based on the
> guidelines coming from Blake's forthcoming doc, seems like a good point of
> departure.
>
> On Tue, Mar 28, 2017 at 5:57 AM, Josh McKenzie <jmcken...@apache.org>
> wrote:
>
> > Hold on now - there's no reason to throw the baby out with the bathwater
> > on this. Code coverage on its own is admittedly a questionable metric, but
> > bad metrics are better than *no* metrics if taken with context.
> >
> > Most code coverage tools will allow you to drill down to the granularity
> > we'd need to confirm that new code (new, as in entirely new file, etc) was
> > covered, and to delta against a previously run coverage metric to make sure
> > there's an uptick in modules where we make changes. It's quite a bit more
> > involved / extra overhead compared to what we do now; I'd argue we probably
> > need to learn to crawl before we can walk (as in, actually write tests for
> > the code that goes into the project and do a *lot* more jmh benchmarking
> > for perf changes).
> >
> > Definitely like the direction this is headed, but it sounds like some
> > cart-before-horse to me if we integrate point 4 with the rest of them.
> > Also - for point 2, who's going to do the work to correlate flaky failures
> > to the changes that introduced them, or are we just going to accept that
> > flaky tests get added and deal with them retroactively? I assume we could
> > also set up some kind of CI job that would run a new test, say, 20 times
> > and greenlight it for patches adding completely new tests? Might be a bad
> > idea, might not.
> >
> > On Tue, Mar 28, 2017 at 1:13 AM, Blake Eggleston <beggles...@apple.com>
> > wrote:
> >
> > > In addition to its test coverage problem, the project has a general
> > > testability problem, and I think it would be more effective to introduce
> > > some testing guidelines and standards that drive incremental improvement
> > > of both, instead of requiring an arbitrary code coverage metric be hit,
> > > which doesn’t tell the whole story anyway.
> > >
> > > It’s not ready yet, but I’ve been putting together a testing standards
> > > document for the project since bringing it up in the “Code quality,
> > > principles and rules” email thread a week or so ago.
> > >
> > > On March 27, 2017 at 4:51:31 PM, Edward Capriolo (edlinuxg...@gmail.com)
> > > wrote:
> > > On Mon, Mar 27, 2017 at 7:03 PM, Josh McKenzie <jmcken...@apache.org>
> > > wrote:
> > >
> > > > How do we plan on verifying #4? Also, root-causing flaky tests (i.e.
> > > > passes on commit, fails 5% of the time thereafter) back to the new code
> > > > that introduced them is a non-trivial pursuit (thinking #2 here), and a
> > > > pretty common problem in this environment.
> > > >
> > > > On Mon, Mar 27, 2017 at 6:51 PM, Nate McCall <zznat...@gmail.com>
> > > > wrote:
> > > >
> > > > > I don't want to lose track of the original idea from François, so
> > > > > let's do this formally in preparation for a vote. Having this all in
> > > > > place will make the transition to new testing infrastructure more
> > > > > goal-oriented and keep us more focused moving forward.
> > > > >
> > > > > Does anybody have specific feedback/discussion points on the
> > > > > following (awesome, IMO) proposal:
> > > > >
> > > > > Principles:
> > > > >
> > > > > 1. Tests always pass. This is the starting point. If we don't care
> > > > > about test failures, then we should stop writing tests. A recurring
> > > > > failing test carries no signal and is better deleted.
> > > > > 2. The code is tested.
> > > > >
> > > > > Assuming we can align on these principles, here is a proposal for
> > > > > their implementation.
> > > > >
> > > > > Rules:
> > > > >
> > > > > 1. Each new release passes all tests (no flakiness).
> > > > > 2. If a patch has a failing test (a test touching the same code path),
> > > > > the code or test should be fixed prior to being accepted.
> > > > > 3. Bug fixes should have one test that fails prior to the fix and
> > > > > passes after the fix.
> > > > > 4. New code should have at least 90% test coverage.
> > > > >
> > > >
> > >
> > > True, #4 is hard to verify in the current state. This was mentioned in a
> > > separate thread: if the code were in submodules, the code coverage tools
> > > should have less work to do because they typically only count coverage for
> > > a module and the tests inside that module. At that point it should be easy
> > > to write a plugin on top of something like this:
> > > http://alvinalexander.com/blog/post/java/sample-cobertura-ant-build-script
> > >
> > > This is also an option:
> > >
> > > https://about.sonarqube.com/news/2016/05/02/continuous-analysis-for-oss-projects.html
> > >
> >
>
