On Jan 10, 2017, at 9:13 PM, Daniel Santos <[email protected]> wrote:
> I've gotten rid of the Makefile and everything is run now from msabi.exp.
> I've also gotten rid of the header file, now that I know how to define a
> "_noinfo" fn pointer, so it's down to just 4 files: msabi.exp, gen.cc,
> msabi.c and do_test.S.
Sounds better.
> After running using DG_TORTURE_OPTIONS,
But why? I think you've missed what you're testing. You aren't creating or
looking for bugs in the optimizer, and your test case isn't an optimizer test,
so you should not torture the poor test case. What you are testing is argument
passing. That typically comes down to deciding which bits go where, and that
decision is independent of optimization, so it isn't something that needs
torture testing.
> it became clear that the resulting program was just too big, so I've modified
> the generator so that the test can be done in little bits.
A sum of little bits is almost always more costly than just one large thing. I
don't think there is any economy to be had there, other than the ability to say
that test case 15 fails and then let a person drill into test case 15 by itself
without the others around. With a well-structured large test case, it should be
clear how each subpart can be separated out and run as a single small test
case.
For example:
  void test1 () { /* ... */ }
  void test2 () { /* ... */ }

  int main () {
    test1 ();
    test2 ();
    /* ... */
    return 0;
  }
Here we can remove 99% of the test case and run just a single case: a normal,
manual edit leaving just one call in main, plus the transitive closure of that
one test routine. I think if you time it, you'll discover that you can fit in
more cases this way than if you break them up; also, if you torture, you can't
fit in as many cases in the time given. This is at the heart of why I don't
think you want to torture.
> Otherwise, the build eats 6GiB+ and takes forever on the final set of flags.
So one review point will be whether the added testing time is useful at all in
general. That's an open question. The compiler can easily be damaged by random
edits, but we have fairly good coverage that will catch most of that. We
typically don't spend time in the test suite methodically catching every single
thing that could go wrong, just the things that usually do go wrong, based upon
reported bugs. What is the added time in seconds to test, and on what type of
machine?
> And now for 50 questions. :) Am I using DG_TORTURE_OPTIONS correctly
I want to say no. See above. No one should ever use it unless they have a very
specific, well-thought-out reason, and I've not heard that reason in this case.
> or should such a test only exist under gcc.torture?
gcc.torture is for a very narrow and specific type of test. People who work on
the optimizer add test cases there, for code that goes through the optimizer,
to ensure that a bug they just fixed doesn't reappear. So the first question
is: are you working on the optimizer? If not, then it would likely be
inappropriate.
> I'm not sure if I'm managing this correctly, as I'm calling pass/fail $subdir
> after each iteration of the test (should this only be called once?).
No; if you did it, you would call it once per iteration, and you would mix the
torture flags into the pass/fail message. pass "$file.c $torture_option" would
be the typical form; in your code, it would be $generator_args.
> Also, being that the generator is C++, I've added HOSTCXX and HOSTCXXFLAGS to
> site.exp, I hope that's OK.
Hum. I worry about some sort of knock-on effect. Generally I don't like adding
anything to site.exp unless it's needed; in this case, though, I think it'd be
fine. It is the simplest and most direct way to do it.
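Just so we're talking about the same thing: site.exp entries are plain Tcl
variables, so I'd expect something like this (the values here are examples
only, not what your setup would write):

    # site.exp
    set HOSTCXX "g++"
    set HOSTCXXFLAGS ""

and then in runtest_msabi they are just globals you can hand to remote_exec to
build the generator on the build machine (the output name "gen" and the exact
command line are my guesses, not your patch):

    global HOSTCXX HOSTCXXFLAGS
    set status [remote_exec build "$HOSTCXX" "$HOSTCXXFLAGS -o gen gen.cc"]
    if { [lindex $status 0] != 0 } {
        unresolved "msabi: failed to build the test generator"
        return
    }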
> Finally, would you please look at my runtest_msabi procedure to make sure
> that I'm doing the build correctly? I'm using "remote_exec build" for most
> of it and I'm not 100% certain if that is the correct way to do it.
Yeah, close enough that I likely wouldn't worry about it too much. If you
wanted to improve it, the next step would be to remove the isnative part and
finish the code for cross builds, and just after that finish the code for
Canadian cross builds. A Canadian cross is one in which the build machine and
the host machine are different; a cross build, loosely, is one in which the
target machine and the host machine are different. With the isnative check,
you can get the details of the host/build and target machines completely wrong
and pay no price for it. Once you remove it, you then have to understand which
code works for which system and ensure it works. The reason I suggested
isnative is that then you don't have to worry about it, and you can punt the
finishing to a cross or Canadian cross person. For them, it is rather trivial
to clean up the test case to get it to work in a cross environment, and
testing it is then trivial too; without testing, it is easy enough to get
wrong. If you can find someone who can test in a cross environment and report
back whether it works, that might be a way to step it forward, if you want.
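Concretely, keeping isnative just means an early-out guard, roughly like this
(a sketch only, not your actual runtest_msabi; the message text and comments
are mine):

    proc runtest_msabi { } {
        global subdir

        # Native builds only for now; whoever finishes cross and Canadian
        # cross support can replace this guard with real build/host handling.
        if { ![isnative] } {
            unsupported "$subdir: cross configurations not handled yet"
            return
        }

        # ... build the generator, then compile and run the generated tests ...
    }

Marking the cross case unsupported keeps the punt visible in the test summary
instead of silently skipping it.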
> Once I get this cleaned up a bit more I'm going to send it as an RFC and
> hopefully get some feedback from the i386 maintainers.