On 03/06/2014 15:35, Mike de Boer wrote:
> I started to summarise the things I’d like to see in a JS unit test runner
> here[1]:
>
>   * mini-core.
>   * Async support as a base. We’ve added `add_task()` methods where possible,
> but we haven’t made it a core feature of the legacy suites in use today.
> Generators yielding Promises are now possible, but I think we can do better.
> This leads to the following point...
What's your concrete proposal for "we can do better"? (see also the following point, but your point above is sufficiently vague that I don't know if it's the same as the following or not) How is it being generally available not a "core feature"?
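For context, the generator-plus-Promise pattern under discussion can be sketched in a few lines. This is a minimal stand-in for the real `add_task()`/Task.jsm machinery, not the actual implementation; `runGeneratorTest` is a made-up name:

```javascript
// Sketch of how add_task-style tests drive a generator that yields
// Promises: each yielded Promise is awaited before the generator resumes,
// so asynchronous test code reads top-to-bottom.
function runGeneratorTest(genFn) {
  return new Promise((resolve, reject) => {
    const gen = genFn();
    function step(value) {
      let next;
      try {
        next = gen.next(value);
      } catch (e) {
        return reject(e); // a thrown assertion fails the test
      }
      if (next.done) {
        return resolve(next.value);
      }
      // Wait for the yielded Promise, then resume the generator.
      Promise.resolve(next.value).then(step, reject);
    }
    step(undefined);
  });
}

// Usage, analogous to an add_task() test body:
runGeneratorTest(function* () {
  const value = yield Promise.resolve(42); // pauses until resolved
  if (value !== 42) {
    throw new Error("unexpected value: " + value);
  }
  return value;
});
```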
>   * It’d be nice if a suite would be extensible in such a way that it’d be
> possible to plug in static analysis tooling to catch race conditions, run
> linters, common coding pattern validations, common coding error traps, etc.
This is attempting to put the cart before the horse. We don't have linters or pattern validators in use today. When someone steps up to work on them, of course we'd all like them to be modular. But the 3 (debatably...) distinct examples of "pluggable things" you gave (race detectors, linters, pattern validators) each require different things from the test runner. It'll basically be the job of whoever wants to integrate such systems to also write the integration points - which makes sense, because pre-emptively adding the integration points is likely to result in a mismatch with the eventual requirements of the tool (e.g. do you do runtime or static race detection, do you do regex- or parse-tree-based linting, and so on).

IOW, I don't see what such extensions would require of the test runners that they don't do today, and/or how you propose the test runner could do better in "supporting" as-yet-undetermined extensions. We'd do better to start at writing the static/runtime analysis stuff, and then integrating it, than we would lamenting about the lack of "support" in the test framework.
> All this should be possible to add separately as modules, so we don’t end up
> with 3000+ LOC frameworks that’ll become harder and harder to maintain over
> time.
>   * Pluggable reporters to spew out different types of logging formats: TAP,
> spec, XUnit, to (log)file, etc., all configurable.
AIUI jgraham is working on this.
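For the sake of concreteness, "pluggable reporters" amounts to something like the following sketch. The interface names here are hypothetical, and TAP is just one example output format:

```javascript
// Hypothetical reporter interface: a runner calls onTestEnd() once per
// test and onSuiteEnd() when finished. Swapping in a different reporter
// object changes the output format without touching the runner itself.
class TapReporter {
  constructor() {
    this.lines = [];
    this.count = 0;
  }
  onTestEnd(name, passed) {
    this.count++;
    this.lines.push(`${passed ? "ok" : "not ok"} ${this.count} - ${name}`);
  }
  onSuiteEnd() {
    // TAP plan line ("1..N") goes first, followed by the per-test lines.
    this.lines.unshift(`1..${this.count}`);
    return this.lines.join("\n");
  }
}

// Example: two results rendered as TAP.
const reporter = new TapReporter();
reporter.onTestEnd("test_startup", true);
reporter.onTestEnd("test_shutdown", false);
const tap = reporter.onSuiteEnd();
```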
>   * Pluggable code coverage tooling
This is basically the same as your point above about static analysis - cart before the horse (although I hear releng are working on code coverage!)
>   * Allow singling out specific tests to aid in local debugging
You can do this already.
>   * Allow disabling specific tests without the need to comment them out, so
> they will be registered as ‘skipped’ (depending on the reporter)
And this.
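For reference, the existing manifest syntax already covers skipping: a `skip-if` annotation marks the test as skipped in the results rather than requiring it to be commented out. Roughly (hypothetical file names):

```ini
# xpcshell.ini / mochitest.ini style manifest; skip-if conditions are
# evaluated against the test environment at runtime.
[test_foo.js]
skip-if = os == "win"

[test_bar.js]
run-if = os == "linux"
```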
>   * Pluggable JS debugger support. Perhaps by running the DevTools
> remote in a separate XUL window?
And this, although not for the SDK tests (because they're dumb) and not for xpcshell. The latter would require significant devtools work (more so than test runner work, AIUI), because the debugger protocol has concepts of things like windows and tabs, which don't really exist in xpcshell-land. In any case, the integration wouldn't really be "pluggable" in the same way even if we stuck it all behind the same --jsdebugger flag, because mochitest-* get the integration for free from the running browser.

Note that historically, many of our xpcshell tests actually tested native code (XPCOM implementations), so the native debugger integration (which /is/ present) was much more useful. I'm not sure that still holds.
>   * Pluggable environments; the runner should be able to adapt to any JS
> environment, be it XPCShell, Mochi/Fx browser, b2g, etc.
This doesn't really make sense. The environments are very different (you've forgotten android, which is java-based, and AFAIK we sadly have no build system tests (which would be python-based)). Whether or not you have a chrome or content DOM to work with makes a big difference for the type of test you write; you can't realistically run 99% of mochitest-plain/chrome/browser/a11y as xpcshell tests, and there's basically no reason to want to do the reverse (which goes for crashtests and reftests as well).
>   * Should be independent of the assertion style used. Assert.jsm could be
> bundled by default.
>   * It should be covered 100% by tests itself, Inception-style.
AFAIK SimpleTest, SpecialPowers, xpcshell and other bits of the mochitest framework all have tests. If the tests are incomplete, I'm sure nobody would object to you writing more, but it seems hard to believe this is interfering with your ability to quickly write new tests.
>   * It should provide hooks to set up a test suite and an individual test,
> and to tear down an individual test and a test suite.
I'm not sure what the benefit of the "test suite" concept is for our purposes, besides what directory filtering and --start-at and --end-at already give us. It mostly just sounds like busy-work in terms of the manifest administration of the tests into test suites.
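For what it's worth, the hooks in that bullet amount to very little code. A sketch with a hypothetical `runSuite` API (not any existing runner):

```javascript
// Suite- and test-level hooks: setupSuite runs once up front,
// setupTest/teardownTest bracket every individual test, and
// teardownSuite runs once at the end.
function runSuite({ setupSuite, setupTest, tests, teardownTest, teardownSuite }) {
  const log = [];
  if (setupSuite) setupSuite(log);
  for (const [name, body] of Object.entries(tests)) {
    if (setupTest) setupTest(log, name);
    body(log);
    if (teardownTest) teardownTest(log, name);
  }
  if (teardownSuite) teardownSuite(log);
  return log;
}

// Usage: the log records the order in which the hooks fire.
const order = runSuite({
  setupSuite: log => log.push("suite-setup"),
  setupTest: (log, name) => log.push(`setup:${name}`),
  tests: {
    one: log => log.push("run:one"),
    two: log => log.push("run:two"),
  },
  teardownTest: (log, name) => log.push(`teardown:${name}`),
  teardownSuite: log => log.push("suite-teardown"),
});
```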
>   * It’d be nice to support multiple styles of writing tests: BDD, TDD and
> QUnit.
Our current test suites are set up based on the runtime requirements of the test (ie needs a DOM, needs chrome privileges, needs both, ...). What do you think we need to do to "support" different styles of tests? Rearchitecting all of our test suites isn't a realistic option. Adding more options to the list purely because some people prefer

when(fooManager.handlingBar).thenExpect(fooManager.bar()).toReturn(5);

over

is(fooManager.bar(), 5, "bar should return 5");

doesn't seem a productive use of our time.


As a small counter-proposal, I think we'd get a better benefit/effort ratio from, e.g., making the add-on SDK tests work like mochitests, because the mochitest test runner is much more full-featured than cfx, the requirements of the tests are very similar, and "debugging"/running the SDK tests is just painful right now (mostly because the runner is terrible).

More generally, I'm afraid that, as jgraham already pointed out in another thread, you're underestimating the variety of tests we have, as well as the different requirements for each of them, and the complications that each of the test runners have to deal with. Sure, it would be nice, from a certain perspective, to have the One True Framework for running tests, but given the different requirements of the different environments we support (b2g, android, native, js, dom, html, css, chrome/content, outside-process (marionette)) I don't think it's an achievable goal.

~ Gijs
_______________________________________________
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform
