On 11/25/2015 03:26 AM, David Malcolm wrote:
> Consider the case where an assumption that the host is little-endian
> creeps into one of the bitmap functions. Some time later, another
> developer updates their working copy from svn on a big-endian host and
> finds that lots of things are broken. What's the ideal behavior?
Internal compiler error in test_bitmaps, IMO. That's the quickest way to
get to the right place in the debugger.
>> In the end I think I lean towards running everything with automatic
>> registration/discovery. But I still have state worries. Or to put it
>> another way, given a set of tests, we should be able to run them in an
>> arbitrary order with no changes in the expected output or pass/fail
>> results.
> That would be the ideal - though do we require randomization, or merely
> hold it up as an ideal? As it happens, I believe function-tests.c has
> an ordering dependency (an rtl initialization assert, iirc), which
> sorting them papered over.
What do you hope to gain with randomization? IMO if there are
dependencies, we should be able to specify priority levels, which could
also help run lower-level tests first.
Bernd