> On Dec 17, 2014, at 11:47 PM, Daniel Holbert <dholb...@mozilla.com> wrote:
> In contrast, your scenario is:
> - We're expecting that an event *will* fire.
> - We wait a bit to see if it fires.
> - Fail if the event *does not* fire before the timeout expires.
> 
> Here, failures are iffy - they may or may not be "real" depending on
> delays and whether the timeout was long enough.  This is precisely the
> sort of thing that results in random-oranges (when e.g. new
> test-platforms are added that are way slower than the system that the
> test was developed on).
> 
> So, the idea now is that there should be a high threshold for adding the
> second sort of test (ideally, we should just be using an event-listener
> where possible).

Well, there is an event listener waiting for the event to fire.
But how else than through a timeout, obviously with a high timeout value like
30 or 60 seconds, can your test tell/complain that the event is missing?
APIs usually do not provide event listeners for events that do NOT fire.
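
Something like this is the only pattern I know of (a rough sketch, assuming
mochitest-style ok()/SimpleTest.finish(); "myevent" and target are just
placeholders):

  // Assumes SimpleTest.waitForExplicitFinish() was called earlier.
  // Fail the test if "myevent" never fires within 30 seconds.
  var timer = setTimeout(function() {
    ok(false, "timed out waiting for myevent");
    SimpleTest.finish();
  }, 30000);

  target.addEventListener("myevent", function onEvent() {
    target.removeEventListener("myevent", onEvent);
    clearTimeout(timer);
    ok(true, "myevent fired");
    SimpleTest.finish();
  });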

Sure, I can print an info message that my test is now waiting for an event to
pop, but lots of my tests don’t just sit there and wait: they proceed and do
other things, but eventually they have to block on waiting for the event
callback. In that scenario only the overall test timeout will terminate my
test, and somewhere far up in the log I might find that log message. But as I
tried to describe in my other email, a long-living timer which pops to
complain that event X is missing is, in my view, a legitimate use case for
setTimeout in tests.
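
Roughly what I have in mind, as a sketch (doOtherTestSteps() and
continueTest() are only placeholders for whatever the real test does):

  // Watchdog armed up front; it only pops if event X never arrives.
  var watchdog = setTimeout(function() {
    ok(false, "event X is missing after 60 seconds");
    SimpleTest.finish();
  }, 60000);

  target.addEventListener("x-event", function onX() {
    target.removeEventListener("x-event", onX);
    clearTimeout(watchdog);
    continueTest();      // resume once the event has arrived
  });

  doOtherTestSteps();    // the test proceeds and only blocks on the event later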

  Nils

> If a setTimeout is really needed for some reason in
> this kind of scenario, the justification (and rationale for why it won't
> cause randomorange) will need to be explicitly documented in the test.

