On Wed, Feb 10, 2016 at 10:30 AM, James Graham <ja...@hoppipolla.co.uk>
wrote:

> FWIW I think it's closer to the truth to say that these tests are not set
> up to be performance regression tests
>

Right, but that was just one of the aspects pointed out. I think the
performance analysis is not so much about catching regressions (even if
that can happen as a side effect) as about figuring out why a test takes
90 seconds instead of 10: is that a problem that may affect end users too?
The larger we set the threshold, the less likely people are to investigate
why a test takes so long, because no bugs will be filed about the problem.
It will just go unnoticed.
Other reasons come down to cost, IMO. Developer time is a cost: if
everyone starts writing slower tests, because the timeout is now much
larger, we may double the time to run tests locally or to get results
out of Try.
Finally, bumping timeouts isn't a permanent solution; in a few years we
may be back discussing another bump.

I'm not saying bumping the timeout is wrong, I'm saying we should also
evaluate a long-term strategy to avoid the downsides. Maybe we should have
something like Orange Factor tracking test runtimes (in all the
harnesses) and automatically filing bugs in the appropriate component when
a test goes over a threshold?
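
For concreteness, here's a rough sketch of what that automation could look
like. It assumes per-test runtimes have already been parsed out of the
harness logs and that bugs get filed through the Bugzilla REST API; the
threshold, the path-to-component mapping, and the helper names are all
hypothetical, not an existing tool:

  import requests

  BUGZILLA = "https://bugzilla.mozilla.org/rest/bug"
  THRESHOLD_SECONDS = 30  # hypothetical per-test runtime threshold

  # Hypothetical mapping from test path prefix to Bugzilla product/component.
  COMPONENTS = {
      "dom/": ("Core", "DOM"),
      "layout/": ("Core", "Layout"),
  }

  def component_for(test_path):
      for prefix, product_component in COMPONENTS.items():
          if test_path.startswith(prefix):
              return product_component
      return ("Testing", "General")

  def file_slow_test_bugs(runtimes, api_key):
      # runtimes: {test_path: seconds}, e.g. parsed from harness logs.
      for test, seconds in sorted(runtimes.items()):
          if seconds <= THRESHOLD_SECONDS:
              continue
          product, component = component_for(test)
          requests.post(BUGZILLA, params={"api_key": api_key}, json={
              "product": product,
              "component": component,
              "version": "unspecified",
              "summary": "%s takes %ds, over the %ds runtime threshold"
                         % (test, seconds, THRESHOLD_SECONDS),
              "description": "Filed automatically by the (hypothetical) "
                             "test runtime tracker.",
          })

In practice something like this would run as a periodic job across all the
harnesses, and would need to dedupe against bugs it has already filed.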