On Monday, 28 November 2016 at 10:14:18, Scott Kostyshak <skost...@lyx.org> wrote:
> On Mon, Nov 28, 2016 at 10:24:43AM +0100, Kornel Benko wrote:
> 
> > You could make an alias ...
> 
> True. I didn't actually mean the command itself.
> 
> > > When I add a pattern to invertedTests, it does not affect the unreliable
> > > tests. Can we change this?
> > 
> > We had some discussions about this. The current logic is:
> > 
> > ignored: we don't consider this test case
> > unreliable: we do not trust the result of the test
> > inverted: we know the test fails
> >     suspended: test fails, but we don't care for now
> 
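Just to make this concrete: assuming these categories are attached as
ctest labels (an assumption on my part; the actual label names in our
CMake files may differ), the filtering would look roughly like

  # a sketch; "ignored" tests would simply never be added to the test set
  ctest -L export                           # run all export tests
  ctest -L export -LE "unreliable"          # skip tests whose result we do not trust
  ctest -L export -LE "inverted|suspended"  # skip the known failures
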
> Thanks for the summary. Let's dig deeper into the unreliable category:
> 
> Sublabel: nonstandard
> These tests are marked as unreliable because of non-standard
> dependencies. I have those dependencies installed (or at least many of
> them, such as knitr, rjournal.sty, Farsi, aa.cls, iucr.cls,
> acmsiggraph.cls). So from my perspective I would like those tests to be
> treated as normal ctests when I run the ctests (although I don't mind
> the label "unreliable"). I understand that other users of ctest don't
> want to pay attention to whether they fail or not, but they don't need
> to.

+1
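
If the sublabel is exposed as a ctest label (again an assumption on my
part), everyone could decide per run, e.g.

  ctest -L nonstandard    # run only the tests with non-standard dependencies
  ctest -LE nonstandard   # or leave them out entirely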

> Sublabel: varying_versions
> The tests in this category fail for some versions (e.g. of a LaTeX
> class) and pass for others.
> I propose a simple rule: we set the test to fail/pass depending on the
> version shipped with the latest TeX Live release (or, if the dependency
> is not in TeX Live, the latest version released).

+1
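
For the version rule, a rough sketch of how one could check what is
installed: kpsewhich locates the file, and its \ProvidesClass line
carries the release date (aa.cls is just an example here):

  # hypothetical check: which date/version of a class is installed?
  grep -m1 ProvidesClass "$(kpsewhich aa.cls)"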

> Sublabel: erratic
> To me this sublabel contains the most unreliable tests. These tests
> could depend on the phase of the moon or time of day. I would almost
> suggest a new label for them (or a new label for the other sublabels).
> Actually, I might just suggest these tests be ignored. (Note that I'm
> not convinced that the only test with this label is actually erratic:
> it fails every time for me.)
> 
> Sublabel: wrong_output
> These tests do not fail, but we would like them to, because their
> output is wrong.
> I want to know when these tests go from passing to failing. Then they
> could potentially be moved to inverted, or filed as LyX bugs, or
> whatever the underlying cause of the incorrect output turns out to be.
> Occasionally we should audit them to see whether the output is now
> correct (e.g. after a fix in TeX Live).

+1

> The reason that I want the output of the "unreliable" tests to be clean
> is that I want the choice of whether to pay attention to changing
> tests. Currently I have to manually compare one large list to another,
> which is quite annoying.

Yes, this is a very strong point.
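Until we have something better, comm(1) could at least automate the
comparison, assuming each run is reduced to a sorted list of failing
test names (old.txt/new.txt are made-up names):

  comm -13 old.txt new.txt   # newly failing tests
  comm -23 old.txt new.txt   # tests that stopped failing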

> If an unreliable test changes status
> (whether going from failing to passing or passing to failing) I would
> like to easily see this, so I can decide whether I think that
> information is useful, and whether I want to spend my time to act on it.
> If I were to report a "regression" because an uninverted test went from
> passing to failing, the burden would be on me to argue why I think this
> is actually a true regression, and not just a consequence of the
> unreliability of the tests.
> 
> > > Attached is the list of unreliable tests that I would like to invert.
> > 
> > And what should happen if the changed tests do not fail here but fail
> > on your side?
> 
> We could at least invert the ones we have in common. Or, since you don't
> pay attention to the unreliable tests, we could just invert the ones
> failing for me since I do care :)

OK.
Let us wait for Günter; he may have some ideas ...

> Scott

        Kornel
