On 10/12/21 10:18 AM, Segher Boessenkool wrote:
> Hi!
>
> On Tue, Oct 12, 2021 at 09:49:19AM -0600, Martin Sebor wrote:
>> Coming back to the xfail conditionals, do you think you'll
>> be able to put together some target-supports magic so they
>> don't have to enumerate all the affected targets?
>
> There should only be an xfail if we do not expect to be able to fix
> the bug causing this any time soon.  There shouldn't be one here, not
> yet anyway.
>
> Other than that: yes, and once you have such a selector, just
> dg-require it (or its inverse) for this test; don't xfail the test
> (if this is expected and correct behaviour).
My sense is that fixing all the fallout from the vectorization
change is going to be delicate and time-consuming work.  With
the end of stage 1 just about a month away, I'm not too optimistic
about how much of it I'll be able to get done before then.
Depending on how intrusive the fixes turn out to be, they may or
may not be suitable for stage 3.
Based on pr102706, which Jeff reported for the regressions in his
automated tester, it also sounds like the test failures are spread
across a multitude of targets.  In addition, the affected targets
don't appear to be the same in every test.  Enumerating the targets
that correspond to each test failure would be like playing the
proverbial Whac-A-Mole.
That makes me think we do need some such selector rather soon.
The failing test cases are a subset of all the cases exercised
by the tests.  We don't want to conditionally enable or disable
entire tests just for the few failing cases (if that's what you
were suggesting by dg-require).  So we need to apply the selector
to individual dg-warning and dg-bogus directives in these tests.
Martin