Sorry -- after re-reading, I realized I was wrong here -- your example
scenario is actually different from the legitimate scenario I alluded to
in the first message of this thread.

The legitimate scenario from that first message was:
 - We're expecting that an event *will not* fire.
 - We wait a bit to see if it fires.
 - Fail if it fires before the timeout expires.

(Clearly, regardless of the timeout we choose -- and of any unexpected
delays -- any failures here are "real" and indicate bugs.)
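A minimal sketch of that pattern, in test-style TypeScript (the helper
name `expectNoEvent` and the specific arguments are illustrative, not
existing test APIs):

    // Resolve if `eventName` does NOT fire on `target` within
    // `timeoutMs`; reject (failing the test) if it does.
    function expectNoEvent(target: EventTarget, eventName: string,
                           timeoutMs: number): Promise<void> {
      return new Promise((resolve, reject) => {
        const onEvent = () => {
          clearTimeout(timer);
          reject(new Error(`unexpected ${eventName} event`));
        };
        const timer = setTimeout(() => {
          target.removeEventListener(eventName, onEvent);
          resolve(); // Timeout expired without the event: test passes.
        }, timeoutMs);
        target.addEventListener(eventName, onEvent, { once: true });
      });
    }

Note that a slower-than-expected machine can at worst mask a bug here
(the event fires after we stop listening); it can never produce a
spurious failure.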

In contrast, your scenario is:
 - We're expecting that an event *will* fire.
 - We wait a bit to see if it fires.
 - Fail if the event *does not* fire before the timeout expires.

Here, failures are iffy - they may or may not be "real" depending on
delays and whether the timeout was long enough.  This is precisely the
sort of thing that results in random-oranges (e.g. when new test
platforms are added that are much slower than the system the test was
developed on).
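For contrast, a sketch of that second pattern, with the same
illustrative names:

    // Resolve if `eventName` DOES fire on `target` within `timeoutMs`;
    // reject (failing the test) if the deadline passes first.
    function expectEvent(target: EventTarget, eventName: string,
                         timeoutMs: number): Promise<void> {
      return new Promise((resolve, reject) => {
        const onEvent = () => {
          clearTimeout(timer);
          resolve(); // Event arrived in time: test passes.
        };
        const timer = setTimeout(() => {
          target.removeEventListener(eventName, onEvent);
          reject(new Error(
            `${eventName} did not fire within ${timeoutMs}ms`));
        }, timeoutMs);
        target.addEventListener(eventName, onEvent, { once: true });
      });
    }

Whether a rejection here is a real bug or just a slow machine depends
entirely on whether `timeoutMs` was generous enough -- exactly the
random-orange hazard above.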

So, the idea now is that there should be a high threshold for adding the
second sort of test (ideally, we should just be using an event listener
where possible). If a setTimeout is really needed for some reason in
this kind of scenario, the justification (and the rationale for why it
won't cause random-orange) will need to be explicitly documented in the
test.
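For completeness, a sketch of the preferred event-listener approach
(again with an illustrative helper name). It imposes no deadline of its
own, so a genuine hang is caught by the harness's overall test timeout
rather than by a per-test guess:

    // Wait indefinitely for `eventName`; the harness's global timeout
    // is the only deadline.  Slow platforms just take longer to pass
    // instead of going randomly orange.
    function promiseEvent(target: EventTarget,
                          eventName: string): Promise<Event> {
      return new Promise(resolve => {
        target.addEventListener(eventName, resolve, { once: true });
      });
    }

    // Usage in a test:
    //   await promiseEvent(window, "load");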

~Daniel