ActiveData is just a very large database. An automated client would be
something that periodically runs a query, formats the data, and plugs it
into a graph. Here's an example of a client-side JS tool that runs a
query to determine which tests are enabled or skipped:
https://github.com/mozilla/test-informant/blob/master/js/report.js#L105
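For illustration, here's a rough Python sketch of what such an automated
client could look like. The endpoint, table name, and field names below are
assumptions about the public ActiveData service rather than anything stated
in this thread, so treat it as a starting point, not a working recipe:

```python
# Hypothetical automated ActiveData client.  The endpoint, table name and
# field names are assumptions, not confirmed anywhere in this thread.
import json
import urllib.request

ACTIVEDATA_URL = "https://activedata.allizom.org/query"  # assumed endpoint


def run_query(query):
    """POST a JSON query document to ActiveData and return the decoded reply."""
    body = json.dumps(query).encode("utf-8")
    request = urllib.request.Request(
        ACTIVEDATA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))


# Count test results grouped by status (e.g. skipped vs. run), roughly the
# kind of question the report.js tool asks.
result = run_query({
    "from": "unittest",            # assumed table name
    "groupby": ["result.status"],  # assumed field name
    "select": {"aggregate": "count"},
})
print(json.dumps(result, indent=2))
```

The point is just that a cron job plus a ~20-line script is enough to turn
ActiveData rows into something a graph can consume.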

But I think trying to track runtime at the job (aka chunk) level is
going to be way too noisy. Something more useful might be (total suite
runtime)/(total number of tests) across all chunks.
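To make that metric concrete, here's a small sketch of the arithmetic; the
per-chunk records and field names are made up purely for illustration:

```python
# Made-up per-chunk records for one suite on one push; the field names
# ("runtime_s", "test_count") are illustrative only.
chunks = [
    {"chunk": 1, "runtime_s": 1240.0, "test_count": 310},
    {"chunk": 2, "runtime_s": 1178.5, "test_count": 295},
    {"chunk": 3, "runtime_s": 1302.2, "test_count": 330},
]

total_runtime = sum(c["runtime_s"] for c in chunks)
total_tests = sum(c["test_count"] for c in chunks)

# (total suite runtime) / (total number of tests), across all chunks:
seconds_per_test = total_runtime / total_tests
print(f"{seconds_per_test:.2f} s per test across {len(chunks)} chunks")
```

Because it's summed over all chunks, shuffling tests between chunks doesn't
move the number, which is what makes it less noisy than per-job runtime.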

-Andrew

On 05/11/15 04:18 AM, L. David Baron wrote:
On Wednesday 2015-11-04 12:46 -0500, William Lachance wrote:
On 2015-11-04 10:55 AM, William Lachance wrote:

1. Relatively deterministic.
2. Something people actually care about and are willing to act on, on a
per-commit basis. If you're only going to look at it once a quarter or
so, it doesn't need to be in Perfherder.

Anyway, just thought I'd open the floor to brainstorming. I'd
prefer to add stuff incrementally, to make sure Perfherder can
handle the load, but I'd love to hear all your ideas.

Someone mentioned "test times" to me in private email.

That was me.  (I didn't feel like sending a late-at-night
one-sentence email to the whole list, and figured there was a decent
chance that somebody else would mention it as well.)

I think they're worth tracking because we've had substantial
performance regressions (including, I think, cases as bad as a doubling
of test times) that weren't caught quickly and led to substantially
worse load on our testing infrastructure.

I do think test times are worth tracking, but probably not in
Perfherder: test times might not be deterministic depending on
where and how they're running (which makes it difficult to
automatically detect regressions and sheriff them on a per-commit
basis), and regardless there's too much data to really be manageable
by Perfherder's intended interface even if that problem were
magically solved.

It seems like if we're running the same tests on different sorts of
machines, we could track different perf numbers for the test run on
different machine classes.

We'd also want to measure the test time and *not* the time spent
downloading the build.

And we'd probably want to measure the total time across chunks so
that we don't count redistribution between chunks as a set of
regressions and improvements.

So that does make it a bit difficult, but it does seem doable.
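To sketch how that aggregation could look (the machine classes, field names,
and numbers below are all made up for illustration): group the job records by
machine class, keep only the time spent actually running tests, and sum
across chunks.

```python
from collections import defaultdict

# Made-up job records: one entry per chunk, with download time kept separate
# from test time so it can be excluded (field names are illustrative only).
jobs = [
    {"machine_class": "linux64-spot", "chunk": 1, "test_s": 900.0, "download_s": 45.0},
    {"machine_class": "linux64-spot", "chunk": 2, "test_s": 870.0, "download_s": 50.0},
    {"machine_class": "win7-hw",      "chunk": 1, "test_s": 1100.0, "download_s": 60.0},
    {"machine_class": "win7-hw",      "chunk": 2, "test_s": 1050.0, "download_s": 55.0},
]

# Total test time per machine class, summed across chunks, excluding downloads.
totals = defaultdict(float)
for job in jobs:
    totals[job["machine_class"]] += job["test_s"]

for machine_class, total in sorted(totals.items()):
    print(f"{machine_class}: {total:.0f} s of test time across chunks")
```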

As a possible alternative, I believe Kyle Lahnakoski's ActiveData
project (https://wiki.mozilla.org/Auto-tools/Projects/ActiveData)
already *does* track this type of data but last I heard he was
looking for more feedback on how to alert/present it to the
platform community. If you have any ideas on this, please let him
know (he's CC'ed). :)

Perhaps, but I have no idea how to use it or what that would look
like.  The wiki page explicitly says it's for automated clients and
not for humans; it would be useful to see an example of such an
automated client to get an idea of how this would work.

-David


