On Thu, Aug 16, 2012 at 6:10 PM, Ehsan Akhgari <ehsan.akhg...@gmail.com> wrote:
> 1. Is the current testharness.js API the documentation at the beginning of
> <http://w3c-test.org/resources/testharness.js>?  If that is the case, the
> API looks a lot heavier weight than the default mochitest API we use.

Not in practice.  The assert_*() functions are pretty
self-explanatory.  They often make test code appreciably simpler or
more rigorous.  For instance, assert_throws() can check that the thing
being thrown is a proper DOMException with all expected properties.
In mochitests, checking all of that is a pain to do every time, so
often we just test that something is thrown without checking what it
is.
Another advantage of having a lot of functions is that they produce
nicer failure messages.
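
To make that concrete, here's a sketch (method and exception names
are illustrative, and the assert_throws() signature has varied a bit
across harness versions):

    // testharness.js: one call checks both that an exception is
    // thrown and that it's a DOMException with the expected name.
    test(function() {
      assert_throws("IndexSizeError", function() {
        document.createTextNode("x").splitText(10);
      });
    }, "splitText() past the end throws IndexSizeError");

    // Rough mochitest equivalent: verifying the exception's type and
    // properties by hand is verbose, so often it gets skipped.
    var threw = false;
    try {
      document.createTextNode("x").splitText(10);
    } catch (e) {
      threw = true;
    }
    ok(threw, "splitText() past the end should throw");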

Basically, as someone who's written lots of testharness.js tests and
lots of mochitests: testharness.js is somewhat more complex, but not
dramatically.


If there's one big problem with shared tests, it's that we have to
change the way we annotate expected failures.  Currently we just go in
and change ok() to todo() or whatever in the source code of the test,
but of course that doesn't work for shared tests.  testharness.js
expects you to break things up into test()s of one or more
assert_*()s, with the same number of test()s running no matter what,
and each test() either passes or fails as a unit.  Then you have to
keep track of expected failures out-of-band (see files in
dom/imptests/failures/).  The major disadvantage of this is that if a
test() tests multiple things and one of them is expected to fail, you
lose regression-testing for any subsequent ones, because the test()
aborts at the first assert failure.
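
A sketch of that failure mode (the feature tested here is just
illustrative):

    var input = document.createElement("input");
    input.setAttribute("type", "color");

    test(function() {
      // If this assert fails (say the browser doesn't support
      // type=color yet), the test() aborts immediately...
      assert_equals(input.type, "color", "type reflects");
      // ...so this assert never runs, and regressions in it go
      // unnoticed until the first one passes again.
      assert_equals(input.value, "#000000", "default value");
    }, "color input reflection");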

So in practice, it's not always clear where to divide up your test()s.
 One assert per test would be best for regression-testing, but it adds
a lot of clutter to the source code.  I think this is the one big
thing that makes testharness.js more complicated to use than
mochitest, although it's still not rocket science.  If we decide
test-per-assert is the way to go, perhaps we could get a series of
functions added to make single-assert tests simpler.
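
Something like this hypothetical wrapper (purely illustrative, not
part of testharness.js today) would do:

    // Hypothetical convenience wrapper: each call becomes its own
    // test(), so one failure can't mask the asserts that follow.
    function test_equals(actual, expected, description) {
      test(function() {
        assert_equals(actual, expected, description);
      }, description);
    }

    test_equals(input.type, "color", "type reflects");
    test_equals(input.value, "#000000", "default value");

(Since the arguments are evaluated outside the test(), a throwing
getter would still abort the whole file; a variant that takes a
function instead would avoid that.)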

James, could you explain again what exactly the benefit is of this
test/assert distinction?  Mochitests effectively have one assert per
test hardcoded in, and they work fine for us.

> 2. Is there any support for running reftest-style tests in a framework that
> is reusable by other browsers?  If not, can we move to propose the reftest
> framework to the appropriate standards bodies so that it can be adopted by
> other browsers?  Our reftest framework has been carefully designed to be
> Gecko-agnostic, and is much superior to the equivalent testing framework
> that WebKit has (not sure about other browser engines).  Furthermore, the
> files loaded by this framework are not loaded in a privileged context with
> APIs such as SpecialPowers, which makes a large number of them portable to
> other browser engines.
>
> I think it makes sense for us if we can start this effort on the reftest
> framework, since that has a much lower barrier to entry, and ultimately this
> effort would be valuable only if other browser engines start to use our
> tests (and hopefully share theirs with us as well).

The CSSWG has such a framework.  Unfortunately, they're extremely
demanding about accepting tests, requiring all kinds of documentation
that it follows the standard, and they have formatting guidelines and
so on and so forth.  So it's not compatible with the idea of "do
things a little differently and everyone can use our tests".

I agree that reftests would be easier to share, though.  Crashtests
would be even easier!  But mochitests are really where most of our
tests are.  Also, unlike reftests, they can mostly be run in the
browser with no special privileges.  But as far as the actual
sharing-tests thing goes, yes, it would make sense to start any kind
of sharing initiative with crashtests, then reftests, then mochitests.
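
For comparison, the entire per-test interface for our crashtests and
reftests is one manifest line each (filenames illustrative):

    load crashing-case.html               # crashtest: page must load without crashing
    == green-box.html green-box-ref.html  # reftest: the two pages must render identically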

On Thu, Aug 16, 2012 at 6:25 PM, Benjamin Smedberg
<benja...@smedbergs.us> wrote:
> I agree with the first 3 points, but I object rather strongly to this one. I
> think we should try to keep the tests close to the relevant code whenever
> possible; this makes it more clear which module owner is responsible for the
> test, and makes it easier to find and run the relevant tests when modifying
> code. I think our system should try to keep this style of tests in the code
> module.

There are two basic models you can have of test-sharing:

1) Everyone owns their own tests and just exports them to the world.
Everyone else has to use them as-is or not at all; third parties can't
make changes directly.  This is compatible with whatever internal
formatting we like.

2) Tests by all parties are put in a shared repository and maintained
in common.  Submitters don't own their tests; they're subject to
review and adjustment by others.  In this case, it doesn't make sense
for us to organize the tests by our internal code structure.  This is
the model that will be used by standards bodies, for instance.
They'll likely want to break things up by the specification being
tested, not any particular implementation.  dom/imptests/ is already
organized by specification, because it's just imported.


So for things that get contributed to standards bodies, I do think we
need to match their directory structure, because we have to mirror
their tests.  In this model, tests would be put somewhere as a staging
ground to be submitted to the standards body, and once they're
submitted they'd be reimported along with other vendors' tests in a
place like dom/imptests/, and the original removed from the staging
ground.

For random other tests, I agree that we could probably keep our
directory structure.  We just need some clear way to delineate
exported from non-exported tests.  Since we don't guarantee that these
tests are meaningful for other vendors anyway, I guess we could just
export everything and let all the Gecko-specific ones be marked as
expected fails by other vendors.
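
The marking wouldn't need anything fancy on our side; a consuming
vendor could keep an out-of-band expectations file, something like
this hypothetical sketch (format invented for illustration):

    {
      "dom/tests/mochitest/general/test_gecko_specific.html": "EXPECTED_FAIL"
    }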

> Why do you think it would be better to have (somebody == Ms2ger) do this,
> instead of expecting module owners in general to be a part of this task? It
> feels to me that module owners should primarily be trying to accomplish this
> sort of thing, and if they need help figuring out the right standards body,
> asking for help from Ms2ger or other experts is a great fallback plan.

If module owners want to be involved in submitting tests to standards
bodies, that would be great.  But we shouldn't try to require them to
if they don't want to.

> Given the recent discussion about QA, it feels like this would also be a
> great thing to involve QA in.

It would be great if we had people specifically assigned to this,
yeah.  I don't have an opinion on whether they belong in QA or
someplace else.

On Thu, Aug 16, 2012 at 8:20 PM, L. David Baron <dba...@dbaron.org> wrote:
> It's two extra lines of boilerplate if you only have one test in the
> file.
>
> But if you have many tests in the file, it's a lot more, since each
> test needs to be wrapped in this -- at least in my understanding.
> Some browser vendors (e.g., Opera) seem to care quite strongly that
> each test file always execute the same number of tests in the same
> order -- even if some of those tests fail by throwing an exception.
> So my understanding is that the intent here is that *each* test be
> wrapped this way, presumably along with anything that might throw an
> exception.  (That said, I think this "might throw" concept is rather
> loose.)
>
> I think it's probably worth writing tests this way because of the
> value of sharing them.  But I wouldn't minimize that it is more
> overhead.

I think it's fine if we run a different number of tests each time.
It's not a problem for us.  Others who want to use our tests can adapt
their systems to accommodate it.  The key thing is we share the tests.

> One other characteristic of tests to be submitted to the W3C that's
> rather important is that they fail when the feature isn't
> implemented.  If this isn't true, then people will build tables that
> show a feature as being partially implemented, etc.  (It's
> particularly bad if, say, all but one of a large set of tests that
> mostly test error handling actually pass when the feature isn't
> implemented.)

I'm focusing here mostly on sharing tests with other browsers, not
submitting to the W3C.  Submitting to the W3C is a further step that
requires a lot more effort, such as: making sure there's a
specification, making sure it mandates what we're testing for, testing
in other browsers to identify possible spec bugs, and responding to
feedback from anyone in the WG once the tests are submitted.

This stuff is all great to do, but it's extra work.  I think we should
start by identifying ways to share tests *without* extra work by test
authors or module owners, because that will allow us to share all of
our tests, not just a tiny fraction.

On Fri, Aug 17, 2012 at 12:39 AM, Justin Dolske <dol...@mozilla.com> wrote:
> Is there a concrete plan for getting other browsers to run these shared
> tests?

The W3C already has test suites we can submit to in testharness.js
format.  We run some of those tests as mochitests; I know Opera does
as well.  I believe WebKit doesn't run them automatically yet.  James
Graham of Opera has indicated that they'd probably be interested in
running our tests.  (Opera gets much less user testing than we do, so
they're very interested in automated testing.)

> The basic idea here sounds worthy, but one concern is that our own tests are
> often unreliable in our own browser -- and I'd expect that to only get worse
> as other browsers and their tests enter the picture. I'd therefore suggest
> that a successful cross-browser test effort should prioritize getting stuff
> running (even with just a handful of tests)... That way fun problems like
> reliability have a chance to be found/fixed over time, instead of having a
> megatestsuite suddenly appear that's unappealing to get working.

Yes, I think it would be a good idea to start small.