> IMHO we *can* regress on synthetic ones as long as we know what is going on.

It's the requirement that we know what is going on that I think is unreasonable.

Indeed, we /have/ a no-not-understood-regressions policy, IIRC.  The
extent to which it's being ignored is at least partially indicative of
how difficult these changes can be to track down.  Rafael's post has
some great examples of how insane tracking down perf regressions can
be.

I really don't think that the right way to go about fixing our
proclivity to regress Talos is to "get tough on regressions" and make
this every committer's problem.  We shouldn't expect committers to
track down the fact that "my change pushes X function down 16 bytes,
which changes some other function's alignment, which, in combination
with a change to __FILE__, affects benchmark Y" as a regular part of
their job.  And it's not clear to me that we'd have any tests left if
we eliminated from the tree every test affected by this sort of
thing.
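
To make the kind of coupling I mean concrete, here's a minimal,
hypothetical C sketch (hot_loop and the details are made up, not a
real case).  Nothing in hot_loop's source changes between two
builds; but if the file is renamed or moved in the tree, the
__FILE__ string literal changes length, which can shift the
addresses of everything laid out after it and change whether the
hot loop straddles a cache-line or instruction-fetch boundary:

    #include <stdio.h>

    /* __FILE__ expands to the source path; renaming or moving
     * this file changes the length of the string literal, which
     * can shift the layout of code and data placed after it. */
    static const char *where = __FILE__;

    /* A hot function whose timing can depend on its alignment;
     * its source is untouched, only its address moves. */
    long hot_loop(long n) {
        long sum = 0;
        for (long i = 0; i < n; i++)
            sum += i ^ (i >> 3);
        return sum;
    }

    int main(void) {
        printf("%s: hot_loop at %p\n", where, (void *)hot_loop);
        printf("result: %ld\n", hot_loop(100000000));
        return 0;
    }

No review process catches that, because the regression isn't in
the patch; it's in the layout the patch happens to produce.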

I think the right way to go about this is to first investigate which
tests are stable, and how stable they are (*).  Then a team of
engineers can gain some experience finding and understanding
regressions which occur over some period of time, so we can understand
how feasible it would be to seriously ask developers to do this as
part of their day-to-day jobs.
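
As a strawman for what "how stable" could mean concretely, here's
a self-contained C sketch (my own illustration, not an existing
Talos or SfN tool) that runs a stand-in workload N times and
reports the coefficient of variation; a test whose run-to-run
noise exceeds the regressions we want to catch can't back a hard
policy:

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-in workload; a real harness would time the actual
     * test.  Returns a score in milliseconds with fake noise. */
    static double run_benchmark_once(void) {
        return 100.0 + (rand() % 100) / 10.0;
    }

    int main(void) {
        enum { RUNS = 30 };
        double sum = 0.0, sumsq = 0.0;
        for (int i = 0; i < RUNS; i++) {
            double t = run_benchmark_once();
            sum += t;
            sumsq += t * t;
        }
        double mean = sum / RUNS;
        double var = sumsq / RUNS - mean * mean;
        double cv = sqrt(var) / mean;  /* relative noise */
        printf("mean=%.2fms  cv=%.2f%%\n", mean, 100.0 * cv);
        return 0;
    }

If a test's CV is, say, 3%, then a 1% "regression" on a single
run is indistinguishable from noise, and it shouldn't block
anyone's landing.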

I'm not saying it should be OK to regress our performance tests, as a
rule.  But I think we need to acknowledge that hunting regressions can
be time-consuming, and that a policy requiring that all regressions be
understood may hamstring our ability to get anything else done.
There's a trade-off here that we seem to be ignoring.

-Justin

(*) This is essentially SfN.