It's quite difficult at present to interpret the unit tests for the
parsers. It's also tedious to define new tests, and hard to tell
whether all the possible combinations have been tried. Test coverage
tools such as Clover can show whether all code paths have been
exercised, but they won't reveal missing code.

It seems to me it would be useful to have a way of defining CLI test
cases - and their expected output - in a text format that can easily
be read by humans.

The idea would then be to write a test harness to process the file,
and each implementation would have test methods to translate the
expected output definitions into the appropriate method calls.
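
For example, a test case definition might look something like this
(the keywords and layout here are pure invention, just to give the
flavour):

  testcase: file option with value
  args: -v --file foo.txt
  expect: option v present
  expect: option file = foo.txt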

It's easy enough to define a format for the input data, as that is
standard - an array of strings.
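
The harness could then turn the args line back into the String[] that
every parser already accepts - roughly like this (ignoring quoting
rules, which a real harness would need to handle):

  // Strip the hypothetical "args:" prefix and split on whitespace
  // to recover the standard String[] input.
  String[] args =
      line.substring("args:".length()).trim().split("\\s+");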

But how does one define the expected output?

Does anyone have any bright ideas for a standard format that could be
used to define the expected output? Even if it does not cover all the
options, it might still be useful.

Alternatively, could one define a standard way to represent the
parsed data? Each parser would need to implement
"toCanonicalString()", whose output the test harness would compare
with the expected data.
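
Something along these lines is what I have in mind - written here as
a static helper over the CommandLine/Option types from Commons CLI,
with the sorted-map rendering being just one possible canonical form:

  import java.util.Arrays;
  import java.util.SortedMap;
  import java.util.TreeMap;
  import org.apache.commons.cli.CommandLine;
  import org.apache.commons.cli.Option;

  // Render the parsed result in a sorted, parser-neutral form so
  // the harness can compare it textually with the expected data.
  public static String toCanonicalString(CommandLine cl) {
      SortedMap canonical = new TreeMap();
      Option[] opts = cl.getOptions();
      for (int i = 0; i < opts.length; i++) {
          String[] values = opts[i].getValues();
          canonical.put(opts[i].getOpt(),
                  values == null ? null : Arrays.asList(values));
      }
      // TreeMap.toString() gives e.g. {file=[foo.txt], v=null}
      return canonical.toString();
  }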

I tried this latter approach with some of the Avalon parser testing,
but did not take it far enough - and I don't know if it can be made
generic.

Maybe the best one can hope for is a standard set of test input data
that all the parsers need to be tested against.
