Hi Tison,

Thanks for your input. I agree that the community should focus on the
priorities regarding CI that you mentioned. At the same time, I'm having a
hard time understanding the negative impact that you suggested from this
change.

1. To my knowledge, code coverage calculation adds little overhead to CI.
Please provide a data point on its impact on CI stability.
2. Code coverage isn't completely accurate, yet it's a valuable data
point. The goal is not to reach 100%, but to prevent coverage from getting
worse. To me this change has only benefits and no drawbacks.
3. I don't agree with your sentiment about code coverage. There's a material
difference between 50% and 70%, and we are at the low end of 50%. *We will
not add a commit gate on coverage changes.* The report serves as an
additional data point for the approvers.
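
To make "no commit gate" concrete, here is a sketch of what an informational-only
setup could look like in Codecov's YAML config. This is illustrative, not a
decided configuration; the exact layout and thresholds would be up to the
community:

```yaml
# Hypothetical codecov.yml sketch: report coverage on PRs without
# ever blocking a merge.
coverage:
  status:
    project:
      default:
        informational: true   # status check is reported but never fails the PR
    patch:
      default:
        informational: true   # same for the patch (diff) coverage check
comment:
  layout: "diff, files"       # post the coverage diff as a PR comment
```

With `informational: true`, the coverage check always passes; approvers see the
numbers but nothing is gated on them.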

Lin


On Fri, Sep 9, 2022 at 9:51 AM tison <wander4...@gmail.com> wrote:

> Hi Lin,
>
> Thanks for starting this discussion!
>
> As long as it takes a different resource set from current CI tasks, I'm +0
> as commented on PR-17382. I hardly read the report.
>
> I read the output in your proposal as simply:
>
> > The report will serve as additional input for the reviewers. The
> requester
> is expected to explain any significant negative impact.
>
> This works for me as a nonmandatory helper.
>
> Thoughts on the baseline:
>
> 1. I don't like code coverage because, in my experience, the numbers are
> _never_ precise. And chasing 100% test coverage is unnecessary.
> 2. Although, if test coverage is below 50%, it _is_ a signal to me that we
> suffer from too few tests.
> 3. Over 50% or 70%, test coverage looks the same to me.
>
> And our tests suffer from:
>
> 1. Heavyweight. We're now working on resolving CI congestion. Many "unit
> tests" are actually integration tests that mock the Pulsar service etc.,
> whose setup/teardown takes tens of seconds.
> 2. Brittle. Too many mocks and aggressive PowerMock usage make tests
> brittle. Mockito suggests never mocking classes you don't own, but it seems
> nobody cares. We mock the ES and ZK clients, and so on.
>
> I don't want to widen the discussion to tests in general. But when it
> comes to improving tests, I want to share the most pressing items.
>
> Best,
> tison.
>
>
> Lin Zhao <lin.z...@streamnative.io.invalid> 于2022年9月10日周六 00:26写道:
>
> > Hi,
> >
> > I'd like to start a discussion about turning on CodeCov report for PRs to
> > master to show the PR's impact on unit test coverage. Previous discussion
> > on https://github.com/apache/pulsar/pull/17382.
> >
> > Proposal:
> > 1. Unit test coverage will be added to the CI pipeline and reported to
> the
> > PR page.
> > Sample report:
> >
> > @@            Coverage Diff            @@
> > ##             master   #17382   +/-   ##
> > =========================================
> >   Coverage          ?   32.10%
> >   Complexity        ?     4141
> > =========================================
> >   Files             ?      387
> >   Lines             ?    42806
> >   Branches          ?     4420
> > =========================================
> >   Hits              ?    13741
> >   Misses            ?    27032
> >   Partials          ?     2033
> >
> > 2. The report will serve as additional input for the reviewers. The
> > requester is expected to explain any significant negative impact.
> >
> > Why?
> >
> >
> >    1. The existing code coverage for Pulsar is very poor, at just above
> > 50%. A reasonable expectation is 90% for libraries and 70-80% for the
> > broker. We are at 60% and 50% respectively.
> >    2. The coverage report would prevent coverage from getting worse
> > and put the conversation on the table.
> >    3. Even though code coverage has its limitations in assessing test
> > coverage, it's the only viable tool.
> >
> >
> >
> > Thoughts?
> >
>
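
For reference, item 1 of the quoted proposal ("coverage will be added to the
CI pipeline") could be wired up roughly as follows. This is a sketch only:
the action version, report paths, and Maven goals are assumptions, not agreed
configuration:

```yaml
# Hypothetical GitHub Actions fragment: generate a JaCoCo report during the
# test run and upload it to Codecov so the PR receives the coverage diff.
- name: Run unit tests with coverage
  run: mvn -B test jacoco:report

- name: Upload coverage to Codecov
  uses: codecov/codecov-action@v3
  with:
    files: '**/target/site/jacoco/jacoco.xml'
    fail_ci_if_error: false   # informational only; an upload failure never blocks the PR
```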