I'm confused by what I'm seeing when running the libstdc++ tests with:

RUNTESTFLAGS='--target_board=\"unix{-std=gnu++98,-std=gnu++11,-std=gnu++14,-std=gnu++17}\"'
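i.e. a plain run of the whole testsuite with that flag, something along the
lines of:

make check RUNTESTFLAGS='--target_board=\"unix{-std=gnu++98,-std=gnu++11,-std=gnu++14,-std=gnu++17}\"'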

If I inspect the libstdc++.sum file and look for the results of a
particular test that has { dg-do compile { target c++14 } } (a minimal
sketch of such a test is shown below), I find:

UNSUPPORTED: experimental/chrono/value.cc
UNSUPPORTED: experimental/chrono/value.cc

rather than the expected:

UNSUPPORTED: experimental/chrono/value.cc
UNSUPPORTED: experimental/chrono/value.cc
PASS: experimental/chrono/value.cc (test for excess errors)
PASS: experimental/chrono/value.cc (test for excess errors)

i.e. there are no results for two of the four variations.
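For reference, the only thing that matters about the test here is the
directive at the top. A minimal sketch of a test of this shape (not the
actual contents of value.cc, whose body is irrelevant to the question)
would be:

// { dg-do compile { target c++14 } }

#include <experimental/chrono>

// A trivial C++14-only body (return type deduction); only the
// effective-target directive above matters for this question.
auto zero() { return 0; }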

If I reorder the variations like so:

RUNTESTFLAGS='--target_board=\"unix{-std=gnu++17,-std=gnu++14,-std=gnu++11,-std=gnu++98}\"'

I see three results:

PASS: experimental/chrono/value.cc (test for excess errors)
UNSUPPORTED: experimental/chrono/value.cc
UNSUPPORTED: experimental/chrono/value.cc

(The PASS is for gnu++17; there's no result for gnu++14.)

If I run that test explicitly I get the expected results:

make check RUNTESTFLAGS='conformance.exp=experimental/chrono/value.cc 
--target_board=\"unix{-std=gnu++17,-std=gnu++14,-std=gnu++11,-std=gnu++98}\"'

grep experimental/chrono/value.cc testsuite/libstdc++.sum
UNSUPPORTED: experimental/chrono/value.cc
UNSUPPORTED: experimental/chrono/value.cc
PASS: experimental/chrono/value.cc (test for excess errors)
PASS: experimental/chrono/value.cc (test for excess errors)

And it also works as expected for just "conformance.exp" without the
"=experimental/chrono/value.cc" part.

Running in parallel with -j (or not) doesn't seem to make any difference.

If I just use {-std=gnu++14,-std=gnu++11}, I see the same behaviour
(just one UNSUPPORTED and no PASS).
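i.e. with the board reduced to:

RUNTESTFLAGS='--target_board=\"unix{-std=gnu++14,-std=gnu++11}\"'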

If I look in the libstdc++.log file, I see that the missing variations
do get compiled, but there's just no PASS line in the output. If I
intentionally introduce an error into the test, the log shows that it
does fail (there's a compiler error), but there's no FAIL line in the
output either.

What's going on?!

Have I fundamentally misunderstood something about how RUNTESTFLAGS or
effective-target keywords work?

