On Tue, 2015-11-24 at 13:44 -0700, Jeff Law wrote:
> On 11/19/2015 11:44 AM, Bernd Schmidt wrote:
> > On 11/19/2015 07:08 PM, David Malcolm wrote:
> >> gcc_assert terminates the process and no further testing is done,
> >> whereas the approach in the kit tries to run as much of the testsuite
> >> as possible, and then fails if any errors occurred.
> >
> > Yeah, but let's say someone is working on bitmaps and one of the bitmap
> > tests fails; it's somewhat unlikely that cfg will also fail (or if it
> > does, it'll be a consequence of the earlier failure). You debug the
> > issue, fix it, and run cc1 -fself-test again to see if that sorted it out.
> >
> > As I said, it's a matter of taste and style and I won't claim that my
> > way is necessarily the right one, but I do want to see if others feel
> > the same.
> I was originally going to say that immediate abort would be the 
> preferred method of operation, but as I thought more about it....
> 
> I think this really is a question of how the tests are likely used.  I 
> kind of expect that most of the time they'll be used as part of an early 
> sanity test.
> 
> So to continue with the bitmap botch causing a CFG failure, presumably 
> the developer was mucking around in the bitmap code already and when 
> they see the CFG test failure, they're going to suspect they've mucked 
> up the bitmap code in some way.

Consider the case where an assumption that the host is little-endian
creeps into one of the bitmap functions.  Some time later, another
developer updates their working copy from svn on a big-endian host and
finds that lots of things are broken.  What's the ideal behavior?
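
My take: the run-everything style, roughly like this minimal sketch
(hypothetical names, not the kit's actual API), where each check records
a failure and keeps going, and we only abort once everything has had a
chance to run:

  /* Hypothetical names; not the kit's actual API.  */

  #include <stdio.h>
  #include <stdlib.h>

  static int selftest_failures;

  /* Record a failure and keep going, rather than aborting
     immediately the way gcc_assert would.  */
  #define SELFTEST_ASSERT(COND)                          \
    do {                                                 \
      if (!(COND))                                       \
        {                                                \
          fprintf (stderr, "%s:%d: FAIL: %s\n",          \
                   __FILE__, __LINE__, #COND);           \
          selftest_failures++;                           \
        }                                                \
    } while (0)

  /* Called once after every test has run; only now do we abort.  */
  static void
  selftest_finish (void)
  {
    if (selftest_failures > 0)
      {
        fprintf (stderr, "%d selftest failure(s)\n", selftest_failures);
        abort ();
      }
  }

On the big-endian box, that reports every bitmap test tripping over the
endianness assumption in one run, rather than just the first.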


> The first question should then be whether the bitmap tests passed or
> failed; if they passed, then those tests clearly need extending :-)

Indeed, and the same goes for the endianness case above.

> >
> >> The patch kit does use a lot of "magic" via macros and C++.
> >>
> >> Taking registration/discovery/running in completely the other direction,
> >> another approach would be fully manual, with something like this in
> >> toplev.c:
> >>
> >>    bitmap_selftest ();
> >>    et_forest_selftest ();
> >>    /* etc */
> >>    vec_selftest ();
> >>
> >> This has the advantage of being explicit, and the disadvantage of
> >> requiring a bit more typing.
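
(For comparison, one way automatic registration *could* be done -
hypothetical names, and not necessarily how the kit actually does it -
is via static constructors, so no central list in toplev.c needs
editing when a test is added:

  /* Hypothetical sketch of macro-based automatic registration.  */

  #include <vector>

  typedef void (*selftest_fn) (void);

  /* Function-local static avoids static-init-order problems.  */
  static std::vector<selftest_fn> &
  selftest_registry (void)
  {
    static std::vector<selftest_fn> tests;
    return tests;
  }

  struct selftest_registrar
  {
    selftest_registrar (selftest_fn fn)
    {
      selftest_registry ().push_back (fn);
    }
  };

  /* Expands to a static object whose ctor runs before main,
     adding FN to the registry.  */
  #define REGISTER_SELFTEST(FN) \
    static selftest_registrar FN##_registrar (FN)

with a REGISTER_SELFTEST (bitmap_selftest); next to the test itself.)
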
> The one advantage of explicit registration I see is the ability to order
> the tests so that the lowest-level data structures are tested first,
> moving to increasingly more complex stuff.
> 
> But if we're in a mode of run everything, then ordering won't be that 
> important.
> 
> In the end I think I lean towards running everything with automatic
> registration/discovery.  But I still have worries about state.  Or to put
> it another way, given a set of tests, we should be able to run them in an
> arbitrary order with no changes in the expected output or pass/fail results.

That would be the ideal - though do we require randomization, or merely
hold it up as an ideal?  As it happens, I believe function-tests.c has
an ordering dependency (an rtl initialization assert, iirc) that
sorting the tests happened to paper over.
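
If we did want randomization, a sketch (again, hypothetical names) would
be to shuffle the registered tests with a logged seed, so a failing
order is reproducible:

  /* Hypothetical sketch: run the tests in a random but reproducible
     order, printing the seed so a failing order can be replayed.  */

  #include <algorithm>
  #include <cstdio>
  #include <random>
  #include <vector>

  typedef void (*selftest_fn) (void);

  static void
  run_selftests_shuffled (std::vector<selftest_fn> tests, unsigned seed)
  {
    std::fprintf (stderr, "selftest: shuffling with seed %u\n", seed);
    std::mt19937 gen (seed);
    std::shuffle (tests.begin (), tests.end (), gen);  /* C++11 */
    for (size_t i = 0; i < tests.size (); ++i)
      tests[i] ();
  }

Running that a few times would presumably have flushed out the
function-tests.c dependency rather than masking it.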

Dave
