On Tue, 1 Aug 2017, Oleg Endo wrote:

> To improve the situation, we'd need a lot more target specific tests
> which test for those regressions that you have mentioned. Then of
> course somebody has to run all those tests on all those various
> targets. I think that's the biggest problem. But still, with a test
Code size is something where you could in principle have a regression tester that runs for all target architectures without needing target hardware, much like the compilation parts of the GCC testsuites (where identifying regressions would be rather easier). You would still need some way to decide which regressions, whether sudden or gradual, are significant and which are noise, and someone would need to keep monitoring the results and reporting regressions.

The compilation-only regression testers I set up for glibc are very helpful for ensuring it stays building for minority architectures, albeit with existing compiler regressions for ColdFire and SH that predate setting up the testers, and with execution test results still being rather a mess for less-tested configurations. Anyone can do the compilation tests themselves with the build-many-glibcs.py script, though it takes a while without a many-cores system to run it on.

-- 
Joseph S. Myers
jos...@codesourcery.com
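
[As an illustrative aside: the comparison step of such a code-size tester could be quite small. The sketch below is hypothetical; the baseline format, file names and thresholds are assumptions made for illustration and are not part of any existing GCC or glibc infrastructure. The idea is to filter out alignment/padding noise by requiring a regression to exceed both an absolute and a relative threshold.]

    #!/usr/bin/env python3
    # Hypothetical sketch: compare per-target text-section sizes against a
    # stored baseline and report regressions that exceed a noise threshold.
    import json
    import sys

    ABS_THRESHOLD = 64      # bytes; ignore tiny fluctuations
    REL_THRESHOLD = 0.005   # 0.5%; ignore growth small relative to the object

    def load(path):
        # Expected (assumed) format:
        # {"target-triplet": {"object.o": text_size_in_bytes, ...}, ...}
        with open(path) as f:
            return json.load(f)

    def compare(baseline, current):
        regressions = []
        for target, objects in current.items():
            old_objects = baseline.get(target, {})
            for name, new_size in objects.items():
                old_size = old_objects.get(name)
                if old_size is None:
                    continue  # new object, nothing to compare against
                growth = new_size - old_size
                # Flag only growth that is both absolutely and relatively
                # above the thresholds, to separate regressions from noise.
                if growth > ABS_THRESHOLD and growth > old_size * REL_THRESHOLD:
                    regressions.append((target, name, old_size, new_size))
        return regressions

    if __name__ == "__main__":
        regressions = compare(load(sys.argv[1]), load(sys.argv[2]))
        for target, name, old, new in regressions:
            print(f"{target}: {name} grew {old} -> {new} bytes")
        sys.exit(1 if regressions else 0)

[Usage would be something like "compare-sizes.py baseline.json current.json", with the JSON files produced by whatever harness builds the corpus for each target; gradual regressions would additionally need comparisons against an older baseline, which this sketch does not attempt.]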