On 4/9/14, 11:29 AM, L. David Baron wrote:
> On Wednesday 2014-04-09 11:00 -0700, Gregory Szorc wrote:
>> The simple solution is to have a separate in-tree manifest
>> annotation for intermittents. Put another way, we can describe
>> exactly why we are not running a test. This is kinda/sorta the realm
>> of bug 922581.
>>
>> The harder solution is to have some service (like orange factor)
>> keep track of the state of every test. We can have a feedback loop
>> whereby test automation queries that service to see what tests
>> should run and what the expected result is. Of course, we will want
>> that integration to work locally so we have consistent test
>> execution between automation and developer machines.
> I think both of these are bad.
>
> It should be visible near the tests whether they are running or not,
> rather than out of band, so that module owners and those working on
> the code are aware of the testing coverage.
I would love this all to be in band and part of version control (in the
test manifests). That's why I propose putting things in test manifests
today and for the foreseeable future.
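
For concreteness, a rough sketch of what that could look like in a
manifestparser-style .ini manifest (the "intermittent" key below is
hypothetical; skip-if and disabled are the mechanisms we have today):

    [test_foo.html]
    # Hypothetical key: keep the test running, but record that it is a
    # known intermittent and which bug tracks it.
    intermittent = bug 922581
    # Existing manifestparser syntax, for comparison:
    skip-if = os == "win"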
I just think we'll ultimately reach a stage where we want to leverage
"big data" for more intelligently running tests. We'll cross that bridge
when we get to it, I reckon.
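
To illustrate the kind of feedback loop I mean, here is a minimal
Python sketch. The service endpoint and JSON shape are entirely made
up; nothing like this exists today. The point is that the same query
would work from automation and from a developer machine:

    import json
    from urllib.request import urlopen

    # Hypothetical endpoint; purely illustrative.
    SERVICE = "https://test-state.example.mozilla.org/api/v1"

    def expected_tests(revision):
        """Ask the service which tests to run at a given revision and
        what result to expect for each."""
        with urlopen("%s/tests?revision=%s" % (SERVICE, revision)) as resp:
            return json.load(resp)

    # Imagined response:
    # {"dom/tests/test_foo.html": {"run": true,  "expected": "pass"},
    #  "layout/test_bar.html":    {"run": false, "reason": "intermittent"}}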
> Annotating something as intermittent is halfway to disabling, and
> should thus be reviewed by test authors / module owners just like
> disabling should be; it sounds like you're proposing changing that
> as well.
Absolutely not! I am very disappointed with the current dynamic between
sheriffs and module owners and test authors because disabling tests is
leading to worse test coverage and opening ourselves up to all kinds of
risks. I'd like to think test authors and module owners should have the
last word. But we have been doing a pretty crappy job of fixing our
broken tests. I feel a lot of people just shrug their shoulders and
allow the test to be disabled (I'm guilty of it as much as anyone).
From my perspective, it's difficult to convince the powers that be that
fixing
intermittent failures (that have been successfully swept under a rug and
are out of sight and out of mind) is more important than implementing
some shiny new feature (that shows up on an official goals list). I feel
we all need to treat failing tests with more urgency. Our engineering
culture doesn't currently favor that.
> The latter solution also breaks being able to describe a test run
> with reference to a revision in the VCS repository.
Not necessarily. It would add complexity to the service, but if it is
captured as a requirement, it is certainly doable.
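
For example, every answer from the service could be keyed to a specific
revision, so a test run stays describable in VCS terms. A hypothetical
request/response, just to show the shape:

    GET /api/v1/tests?revision=8a9f0c3d2b1e

    {
      "revision": "8a9f0c3d2b1e",
      "generated": "2014-04-09T18:30:00Z",
      "tests": {"dom/tests/test_foo.html": {"run": true, "expected": "pass"}}
    }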