Sorry for my delayed response. I've been a bit under the weather lately.
On 01/11/2017 10:46 AM, Mike Stump wrote:
After running using DG_TORTURE_OPTIONS,
But why? I think you missed what you're testing. You aren't creating or
looking for bugs in the optimizer. Your test case isn't for an optimizer;
therefore, you should not torture the poor test case. I think what you are
testing is argument passing. That is typically the decision about what bits
are where, and that is an optimization-irrelevant thing to test.
Hmm. There are a few optimizations (that I'm aware of) involved in what
I'm intending to test, but I should probably back up a bit. The
aim is to test prologue and epilogue creation of 64-bit ms_abi functions
that call sysv_abi functions. Differences between these ABIs require RSI,
RDI and XMM6-15 to be considered clobbered when calling the sysv_abi
function. This test is intended to support two patch sets, the first to
emit aligned SSE movs when force_align_arg_pointer is used
(https://gcc.gnu.org/ml/gcc-patches/2016-12/msg01859.html) and the
second to implement out-of-lined stubs for these pro/epilogues to reduce
text size. The argument passing part of the test is mainly to ensure I
don't accidentally clobber something I shouldn't and to shuffle around
the stack save area in different ways.
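To make that concrete, the basic call shape being exercised looks roughly
like this (a hand-written illustration using the ms_abi/sysv_abi attributes;
the function names are made up and the actual tests are generated and much
larger):

/* Sketch only: under the SysV ABI, RSI, RDI and XMM6-XMM15 are
   call-clobbered, but under the MS ABI they are call-saved, so the
   ms_abi caller has to save and restore them around the call.  */
extern void __attribute__((sysv_abi)) sysv_callee (int a, int b, int c);

void __attribute__((ms_abi))
ms_caller (int a, int b, int c)
{
  sysv_callee (a, b, c);  /* RSI, RDI and XMM6-15 treated as clobbered here */
}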
The first patch set will probably be unaffected by optimizations, but
the second one is affected, mainly by shrink-wrapping and sibling calls. Uros
indicated an interest in the first patch set for the next stage 1 and I
haven't re-submitted the second patch set yet as I wanted to have good
tests to back it up. Still, I probably don't need to test -O3
-funroll-all-loops, etc., so it should be enough to test once each with -O0
and -O2, as I do need to verify correctness both with and without
sibling calls enabled.
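For instance, the sibling-call side comes down to call shapes like this
(again just an illustration with made-up names, not the generated code):

/* Sketch only: a tail-call-shaped ms_abi -> sysv_abi call.  At -O2 the
   sibling-call and shrink-wrapping logic gets exercised for a call like
   this (whether or not a sibcall is ultimately allowed); at -O0 it does
   not.  */
extern long __attribute__((sysv_abi)) sysv_tail (long x);

long __attribute__((ms_abi))
ms_tail_caller (long x)
{
  return sysv_tail (x + 1);
}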
it became clear that the resulting program was just too big, so I've modified
the generator so that the test can be done in little bits.
A sum of little bits is likely always more costly than just one large thing.
Well, first off, I'm still using an old Phenom/DDR2 machine, and for some
reason splitting it up (into 6 tests vs 1) is 3.8 times faster than
running it as one large test (220 seconds vs 841). I should note that
I've configured with --enable-stage1-checking=yes,rtl and that running
the actual test program takes about 1/50th of a second. I'll build
another bootstrap w/o checking and see how that performs.
I don't think there is an economy to be had there, other than the ability to
say test case 15 fails, and you want a person to be able to drill into test
case 15 by itself without the others around. With a well-structured large test
case, it should be clear how each subpart can be separated out and run as a
single small test case.
For example:
test1() { ... }
main() {
  test1();
  test2();
  [ ... ]
}
Here, we see that we can remove 99% of the test case and run just a single case:
a normal, manual edit leaving just one line, plus the transitive closure of
that one test routine. I think if you time it, you'll discover that you can fit in
more cases this way than if you break them up; also, if you torture, you can't
fit in as many cases in the time given. This is at the heart of why I don't
think you want to torture.
Yes, this makes good sense.
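So the generated file would be shaped along these lines (simplified; the real
bodies come from the generator), and trimming main() down to a single testN()
call isolates one case:

/* Simplified shape only; each testN() covers one argument-passing /
   clobber scenario emitted by the generator.  */
static void test1 (void) { /* ... */ }
static void test2 (void) { /* ... */ }

int
main (void)
{
  test1 ();
  test2 ();
  /* ... */
  return 0;
}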
Otherwise, the build eats 6GiB+ and takes forever on the final set of flags.
So, one review point will be: is the added testing time at all useful in
general. That's an open review point. The compiler can be easily damaged with
random edits, but we have fairly good coverage that will catch most of it. We
typically don't spend time in the test suite methodically trying to catch every single
thing that can go wrong, just the things that usually do go wrong based upon
reported bugs. What is the added testing time in seconds, and on what type of
machine?
Yes, I hope to be successful in making this case. I actually wrote it
to help me find flaws in my out-of-lined pro/epilogues for ms to sysv
function calls. Prior to writing the test program I was debugging Wine
builds to try to determine what I got wrong, and that's just a pain.
But it did help me figure out what I needed to include in my test.
I'm not sure if I'm managing this correctly, as I'm calling pass/fail $subdir
after each iteration of the test (should this only be called once?).
No, if you did it, you would call it once per iteration, and you would mix the torture
flags into the pass/fail line. pass "$file.c $torture_option" would be
typical; in your code, it would be $generator_args.
Thanks, that's what I thought.
Finally, would you please look at my runtest_msabi procedure to make sure that I'm doing
the build correctly? I'm using "remote_exec build" for most of it and I'm not
100% certain if that is the correct way to do it.
Yeah, close enough to likely not worry about it too much. If you wanted to
improve it, the next step would be to remove the isnative part and finish the
code for cross builds,
The issue that I see here is that the resulting program needs to be
executed on the target. Is there a way to run a cross build test on a
host machine but have target executables run in an emulator or some such?
and just after that finish the code for canadian cross builds. A canadian
cross is one in which the build machine and the host machine are different.
With the isnative, you can get the details of host/build and target machine
completely wrong and pay no price for it. Once you remove it, you then have to
understand which code works for which system and ensure it works. A cross
build, loosely, is one in which the target machine and the host machine are
different. The reason I suggested isnative is that then you don't have to
worry about it, and you can punt the finishing to a cross or canadian cross
person. For them, it is rather trivial to clean up the test case to get it to
work in a cross environment. Without testing, it is easy enough to get wrong.
Also, for them, testing it is then trivial. If you can find someone that can
test in a cross environment and report back if it works or not, that might be a
way to step it forward, if you want.
Good, then I'm happy to punt this part until later. :) I just googled
Canadian cross build and that's entirely new to me!
Thank you for your thoughtful reply.
Daniel