Dear Kornel,

thank you for the explanation. I am getting a bit closer to a general
understanding now.

On 2015-11-05, Kornel Benko wrote:
> On Thursday, 5 November 2015 at 07:34:47, Guenter Milde
> <mi...@users.sf.net> wrote:

> ...
>> > It is suspended _only_ if you select testcases with the '-L' parameter.

>> OK. My idea was that suspended testcases are skipped by default.

> Let me sketch our use of the '-L' parameter.

> We have a number of lyx-files under lib/doc, lib/examples etc directories.
> There are export types like lyx16, xhtml, pdf, dvi etc., each combined with
> TeX or non-TeX fonts.
> For each combination we create a testname containing a hint to
>       a.) the relative path of the lyx file (without extension)
>       b.) export type (restricted to the output format specified in the
> lyx-file, if not 'default')
>       c.) TeX or non TeX font (texF, systemF)
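
(Just to check my understanding: would a resulting testname then look
something like "export/doc/UserGuide_pdf_texF"? The exact separators and the
leading "export/" part are only my guess from the ctest listings quoted
below.)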

(Is there documentation of the testnames? If not, could it be added to
Development.lyx, or maybe to a README for the test machinery? With examples?)


Now, regarding my concept of "suspension":

> If the name

> i. matches a regex in ignoreTests: the testcase is discarded. Go on to
>    the next testname.

  ii. matches a regex in suspendedTests: prepend SUSPENDED to the testname.
      If more than one label is possible, the test also gets the label
      "suspended".

  iii. does not match revertedTests: the test gets the label "export".
       Go on to the next testname.
  
> (from here on it matches revertedTests)
  iv. Prepend INVERTED... to the testname and invert the test verification
      (i.e. the expected result); see the sketch below.
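
To make sure I have the order of the checks right, here is a small
Python-flavoured sketch of the classification as I read it. The real
machinery is of course the CMake code; the prefix separators, the helper
name classify() and the exact label spellings are my assumptions:

    import re

    def classify(name, ignoreTests, suspendedTests, revertedTests):
        # i. discard testnames matching ignoreTests
        if any(re.search(rx, name) for rx in ignoreTests):
            return None
        labels = []
        testname = name
        # ii. suspended: mark the name and attach the "suspended" label
        if any(re.search(rx, name) for rx in suspendedTests):
            testname = "SUSPENDED_" + testname
            labels.append("suspended")
        # iii. not matching revertedTests: a plain export test
        if not any(re.search(rx, name) for rx in revertedTests):
            labels.append("export")
            return testname, labels, False
        # iv. reverted: mark the name and invert the expected result
        testname = "INVERTED_" + testname
        labels.append("reverted")
        return testname, labels, True   # True = verification is inverted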
  

> Now calling ctest with '-L export' selects tests with label 'export',
> thus skipping suspended tests. The same is valid for '-L reverted'.
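
(So the plain usage would simply be

    ctest -L export
    ctest -L reverted

i.e. selection purely by label, independent of the testnames.)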

> The other possibilities, e.g. using the '-R' parameter, only check the
> testnames. But the suspended tests must also have a name, so we cannot
> skip them automatically.

Then, with a regexp anchored so that it cannot match the SUSPENDED prefix,
'-R' would not include suspended tests, right?
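
(Concretely, I would expect e.g.

    ctest -R '^export/doc/' -N

to list only non-suspended doc tests, because the prepended SUSPENDED makes
the anchored regexp fail. This assumes the plain testnames really start
with "export/".)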


> This should also answer your remark in a previous post (
>       > So what do you propose for such tests (84% of export tests)?
>       > ctest -L export -N | wc => 3719
>       > ctest -L export -N |egrep '/(doc|examples)/'| wc => 3153
>       > 3153 / 3719 => 84.78%

>>      I don't have any clue which tests are hidden behind these commands.

So this means we have about 3000 export tests with "real life" documents
under doc and examples?

Basically, I would look at the currently failing tests
(i.e. actual return value != expected return value) and

* invert/uninvert the test based on the "correct" return value,

* suspend tests that fail for a known reason, so that they are neither run
  nor reported by a "normal" run.
  
However, this is essentially what you currently do with "inverted tests";
there is just a different naming scheme.

The "measure for deviation from a clean state" would be the number of
"suspended" tests with my proposal and "inverted" tests in yours.

The only difference is the export tests that we expect to return an error.
How are they handled in your scheme?

Günter
