On 2015-10-26, Liviu Andronic wrote:
> On Mon, Oct 26, 2015 at 10:24 AM, Guenter Milde <mi...@users.sf.net> wrote:
>> On 2015-10-26, Scott Kostyshak wrote:
...

>> Could this prevent some of the regressions? (We need to look carefully,
>> not only at whether the relevant documents compile without error, but
>> also at whether the exported document is OK.)

> By exported document do you mean .tex or .pdf? If it is .tex, would it
> be a good idea to check whether the latest export is identical to a
> reference .tex generated when creating the test; if not, display a
> diff. Simply relying on the exit code seems like an easy way to miss
> non-error-generating regressions...

I think we have to distinguish several test methods:

a) automatic export tests with lots of "real life" documents
   (manuals, templates, examples)

b) functional tests: import/export of a complete test document and
   comparison with the expected output

c) unit tests: tests of individual features or sub-units.

Unit tests (c) are currently not implemented, AFAIK.

The tex2lyx tests are "functional tests" (b), where we keep track of the
expected output. They also include export tests (in the "round trip"
suite). Here, we have to manually check and update the "expected" output
documents, discriminating between intended changes and
regressions/bug-indicators.

For a), it would be a pain to keep track of and update all the output
documents, because this would be required not only for every export
routine but also for every change to the input docs. However, if the
exit status of a test changes (from fail to pass or vice versa), we
should check whether this is due to a new bug, a fix, or merely the
exposure of a previously hidden problem.

Günter
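P.S. The reference-comparison idea Liviu suggests could be sketched
roughly as below. This is only an illustration, not part of the actual
test suite; the file names and the helper function are made up, and a
real implementation would read the files from disk.

```python
import difflib

def compare_export(reference_lines, exported_lines, name="test.tex"):
    """Return '' if the export matches the reference, else a unified diff.

    A non-empty result signals a regression even when the export itself
    exited with status 0 (the "non-error-generating" case).
    """
    diff = difflib.unified_diff(
        reference_lines, exported_lines,
        fromfile="reference/" + name,   # hypothetical reference location
        tofile="export/" + name,        # hypothetical export location
        lineterm="")
    return "\n".join(diff)

# Illustrative data: a one-token change in the exported document.
ref = ["\\documentclass{article}", "\\begin{document}",
       "Hello", "\\end{document}"]
exp = ["\\documentclass{article}", "\\begin{document}",
       "Hello!", "\\end{document}"]
```

An identical pair yields an empty string, so a test driver could treat
any non-empty return value as a failure and print it as the diff.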