On Fri, 26 Jul 2013, Rob Savoye wrote:

>    I'd agree there is lots of crufty support for things like the old
> Cygnus trees that could be removed. Ideally I'd prefer to explore
> people's ideas on what would be useful for testing toolchains 5-10 years
> from now. Me, I want something not dependent on a dying and mostly
> unmaintained scripting language that nobody likes anyway (the current
> working idea is to use python). I also want to be able to compare test
> results in better ways than diffing huge text files. I'd like to compare
> multiple test runs as well in a reasonably detailed fashion.

* Eliminate build-tree testing.

* Look at QMTest's class structure.  I don't think it's quite right - as I 
explained, it doesn't separate the unit that gets run from the unit that 
has an assigned PASS/FAIL result - but it's closer than DejaGnu is at 
present, in particular as regards the ability to enumerate tests 
independently of running them (so you can look at the testsuite and the 
log of a partial run and see which tests were not run).  Another thing I 
don't really care for there is how it handles XFAILs: the QMTest approach 
has logical simplicity, but is not so good in practice for toolchain 
testing, I think, so I prefer tests actually having XPASS/XFAIL results 
as in DejaGnu.
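
Something along these lines (a rough Python sketch, since Python is the 
current working idea; all the class and method names here are invented 
for illustration, not taken from QMTest or DejaGnu):

from dataclasses import dataclass
from enum import Enum
from typing import Iterator, List, Tuple


class Outcome(Enum):
    PASS = "PASS"
    FAIL = "FAIL"
    XPASS = "XPASS"          # unexpected pass of a test expected to fail
    XFAIL = "XFAIL"          # expected failure
    UNRESOLVED = "UNRESOLVED"
    UNTESTED = "UNTESTED"


@dataclass
class Assertion:
    """The unit that gets an assigned PASS/FAIL result."""
    name: str                # e.g. file, options and line of the assertion
    expect_fail: bool = False


class Executable:
    """The unit that gets run; may cover many assertions (e.g. one
    source file compiled and executed once)."""

    def enumerate(self) -> List[Assertion]:
        """List assertions without running anything, so the testsuite
        can be compared against the log of a partial run to find tests
        that were never run."""
        raise NotImplementedError

    def run(self) -> Iterator[Tuple["Assertion", bool]]:
        """Run and yield (assertion, passed) pairs."""
        raise NotImplementedError


def resolve(assertion: Assertion, passed: bool) -> Outcome:
    """Fold the expected-failure marker into the reported outcome,
    keeping XPASS/XFAIL visible rather than hiding them."""
    if assertion.expect_fail:
        return Outcome.XPASS if passed else Outcome.XFAIL
    return Outcome.PASS if passed else Outcome.FAIL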

* Structured results, so that annotations can readily be associated with 
individual test results and with whole test runs.  Some annotations 
identify the test run in some way (configured target, configure options, 
...).  The test's "name" might have multiple fields rather than being a 
pure text string as at present (the file containing the test, the options 
used, and the line on which the assertion is being tested, for example).  
And there would be other annotations such as the compile command, the 
output it produced, and so on - much of this is presently in the .log 
file, but not in a properly machine-processable form.  (However, I'd 
still like the format to be something simple that is easy to generate 
from non-DejaGnu testsuites as well, if desired.)
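
As a rough illustration of the kind of record I mean (the field names are 
invented, not a proposal for a fixed schema):

import json
from dataclasses import asdict, dataclass, field
from typing import Dict, List


@dataclass
class TestName:
    """A structured test 'name' instead of a single flat string."""
    source_file: str             # file containing the test
    options: str                 # options used, e.g. "-O2"
    line: int                    # line on which the assertion is tested
    description: str = ""


@dataclass
class TestResult:
    name: TestName
    outcome: str                 # PASS / FAIL / XPASS / XFAIL / ...
    annotations: Dict[str, str] = field(default_factory=dict)
    # e.g. {"compile_command": "...", "compiler_output": "..."}


@dataclass
class TestRun:
    annotations: Dict[str, str] = field(default_factory=dict)
    # e.g. {"target": "...", "configure_options": "..."}
    results: List[TestResult] = field(default_factory=list)

    def dump(self, path: str) -> None:
        """One JSON document per run: simple enough that non-DejaGnu
        testsuites could emit it too, and easy to process mechanically."""
        with open(path, "w") as f:
            json.dump(asdict(self), f, indent=2)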

* Built-in support in the test harness software for parallelism, while 
allowing for host or target boards that do not support it (if the host 
does but the target does not, compiles can still run in parallel).
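
Roughly (again just a sketch; the capability flag and the compile_one / 
execute_one helpers are placeholders for whatever the board descriptions 
and harness actually provide):

from concurrent.futures import ThreadPoolExecutor


def run_tests(tests, jobs, target_supports_parallelism,
              compile_one, execute_one):
    """Compile in parallel when the host allows it; fall back to serial
    execution when the target board cannot run tests concurrently."""
    # Compilation only needs the host, so it can always use the host's
    # parallelism.
    with ThreadPoolExecutor(max_workers=jobs) as pool:
        binaries = list(pool.map(compile_one, tests))

    # Execution needs the target; only parallelise it if the target
    # board supports that.
    if target_supports_parallelism:
        with ThreadPoolExecutor(max_workers=jobs) as pool:
            return list(pool.map(execute_one, binaries))
    return [execute_one(b) for b in binaries]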

-- 
Joseph S. Myers
jos...@codesourcery.com
