On Tue, Sep 15, 2020 at 9:28 AM Andrew McCreight <amccrei...@mozilla.com>
wrote:

> On Tue, Sep 15, 2020 at 8:03 AM Brian Grinstead <bgrinst...@mozilla.com>
> wrote:
>
> > (This is a crosspost from firefox-dev)
> >
> > Hi all,
> >
> > We’re rolling out a change to the review process to put more focus on
> > automated testing. This will affect you if you review code that lands in
> > mozilla-central.
> >
> > ## TLDR
> >
> > Reviewers will now need to add a testing Project Tag in Phabricator when
> > Accepting a revision. This can be done with the “Add Action” → “Change
> > Project Tags” UI in Phabricator. There's a screenshot of this UI at
> > https://groups.google.com/g/firefox-dev/c/IlHNKOKknwU/m/zl6cOlT2AgAJ.
> >
>
> I of course think having more tests is a good thing, but I don't like how
> this specific process places all of the burden of understanding and
> documenting some taxonomy of exceptions on the reviewer. Reviewing is
> already a fairly thankless and time-consuming task. It seems like the way
> this is set up is due to the specifics of how our reviewing process is
> implemented, so maybe there's no way around it.
>

I certainly have some sympathy for this, but I'm not sure that it needs to be
addressed via tools. What I would generally expect in cases like this is
that the reviewer says "why isn't there a test for X" and the author says
"for reason Y" and either the reviewer does or does not accept that.
That's certainly been my experience on both sides of this interaction
in previous instances where there were testing policies but not this
machinery.

> Also, contrary to what ahal said, I don't know that tests being an official
> requirement relieves the burden of guilt of asking for tests, as everybody
> already knows that tests are good and that you should always write tests.
>

I also don't know how this will impact people's internal states; I see
this as having two major benefits:

1. It tells people what we expect of them.
2. It gives us the ability to measure what's actually happening and adjust
accordingly.

To expand on (2) a bit: if we look back and find that a lot of code landed
without tests but without exceptions being filed, then we know we need to
work on one set of things. On the other hand, if we see a lot of exceptions
being filed in cases where we don't think there should be any (for whatever
reason), then we need to work on a different set of things (e.g., improving
test frameworks in that area).
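
Concretely, since the testing tags are ordinary Phabricator projects, this
kind of measurement can be scripted against the Conduit API. Here is a
minimal sketch in Python; the host, token, and project slugs are hypothetical
placeholders, and pagination is ignored for brevity:

    # Hypothetical sketch: count accepted revisions carrying a given
    # testing tag via Phabricator's Conduit API. Host, token, and slug
    # values are placeholders, not the real mozilla-central ones.
    import requests

    PHAB_API = "https://phabricator.example.com/api"
    API_TOKEN = "api-XXXXXXXXXXXX"  # a real Conduit token would go here

    def count_accepted_with_tag(slug):
        resp = requests.post(
            PHAB_API + "/differential.revision.search",
            data={
                "api.token": API_TOKEN,
                "constraints[projects][0]": slug,
                "constraints[statuses][0]": "accepted",
            },
        )
        resp.raise_for_status()
        # Only the first page of results; a real measurement would
        # follow the returned cursor to paginate.
        return len(resp.json()["result"]["data"])

    # Compare, e.g., tagged-with-tests vs. tagged-as-exception over time:
    # count_accepted_with_tag("testing-approved")   # hypothetical slug
    # count_accepted_with_tag("testing-exception")  # hypothetical slug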



> My perspective might be a little distorted, as I think a lot of the patches
> I write would fall under the exceptions, either because they are
> refactorings or because I am fixing a crash or security issue based on just
> a stack trace.
>
> Separately, one category of fixes I often deal with is fixing leaks or
> crashes in areas of the code I am unfamiliar with. I can often figure out
> the localized condition that causes the problem and correct that, but I
> don't really know enough about, say, service workers or networking to
> write a test. Do I need to hold off on landing until I can find somebody
> who is willing and able to write a test for me, or until I'm willing to
> invest the effort to become knowledgeable enough in an area of code I'm
> unlikely to ever look at again? Neither of those feel great to me.
>

Agreed that it's not an ideal situation. I think it's important to step back
and ask how we got into that situation, though. I agree that it's not that
productive for you to write the test, but hopefully *someone* at Mozilla
understands the code and is able to write a test, and if not, we have some
other problems, right? So what I would hope here is that you'd identify the
problem, maybe write a fix, and then hand it off to someone who could carry
it over the line.



> Another testing difficulty I've hit a few times this year (bug 1659013 and
> bug 1607703) is crashes that happen when starting Firefox with unusual
> command-line or environment options. I think we only have one testing
> framework that can test those kinds of conditions, and it seemed like it
> was in the process of being deprecated. I had trouble finding somebody who
> understood it, and I didn't want to spend more than a few hours trying to
> figure it out for myself, so I landed the fix without a test. To be clear,
> I could reproduce at least one of those crashes locally, but I wasn't sure
> how to write a test that creates those conditions. Under this new policy,
> how do we handle fixing issues where a good testing framework doesn't
> already exist? Writing a test is hopefully not too much of a burden, but
> obviously having to develop and maintain a new testing framework could be
> a good deal of work.
>

As a general matter, I think we need to be fairly cautious about landing code
that we aren't able to test. In some cases that means taking a step back and
doing a lot of work on the testing framework before you can write some
comparatively trivial code, which is obviously annoying at the time but is
also an investment in the future. In other cases, we'll have to make a
decision that the value of the change outweighs the risk of landing untested
code. This also depends on the nature of the change: if you're fixing a crash
in a section of code, the bar might be lower than if you're adding a new
feature.
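
For the command-line-startup case specifically, even a bare-bones smoke test
can catch regressions while a proper framework gets sorted out. A minimal
sketch in Python, where the flag and environment variable are hypothetical
placeholders (--headless and --screenshot are real Firefox options, used
here only so the browser exits on its own after startup):

    # Hypothetical sketch: launch the browser with the unusual flag and
    # environment, and assert it starts up and exits cleanly. A startup
    # crash shows up as a non-zero (or signal) exit status.
    import os
    import subprocess

    def test_startup_with_odd_environment():
        env = dict(os.environ, MOZ_SOME_OPTION="1")  # hypothetical variable
        proc = subprocess.run(
            # --headless/--screenshot make Firefox exit after startup;
            # --some-unusual-flag stands in for the option under test.
            ["./firefox", "--headless", "--screenshot", "--some-unusual-flag"],
            env=env,
            timeout=120,
        )
        assert proc.returncode == 0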

-Ekr