On Tue, May 22, 2007 at 10:11:27AM +0200, Jan-Benedict Glaw wrote:
> On Tue, 2007-05-22 08:50:59 +0100, Manuel López-Ibáñez <[EMAIL PROTECTED]> wrote:
> > On 22/05/07, Jan-Benedict Glaw <[EMAIL PROTECTED]> wrote:
> > > On Mon, 2007-05-21 15:35:53 -0700, Mark Mitchell <[EMAIL PROTECTED]> wrote:
> > > > Is there a volunteer who would like to help prepare a regular list of
> > > > P3-and-higher PRs, together with -- where known -- the name of the
> > > > person responsible for the checkin which caused the regression? Or, is
> > > > this something that could be automated through Bugzilla, perhaps by
> > > > adding a pointer to the SVN revision at which the regression was
> > > > introduced?
> > >
> > > For a start, isn't there enough computation power on the testing
> > > cluster to build all and any revisions of binutils+gcc to run the test
> > > suite? Shouldn't be hard to implement and would be helpful. I'd
> > > volunteer to prepare the setup, if somebody supplies the CPU cycles.
> >
> > I am not sure what you mean exactly. Running the testsuite is not the
> > issue (it must be run for every patch). The discussion is about having
> > a PR with a testcase, then identifying at which revision that testcase
> > started to fail, then identifying who committed that revision.
>
> Whee... You're right, I got that wrong. Doesn't change anything about
> scripting work, though. :)
>
> > Janis has some scripts to do regression hunting. However, it is not
> > fully automatic. For one reason or another, some revisions fail to
> > build. When that happens, you need to mark that revision as ignored
> > and restart the regression hunt. Perhaps you can modify the scripts to
> > handle such a situation. Maybe they should also contain a list of "bad"
> > revisions. Apart from this, the scripts are very powerful and
> > intuitive (although some docs would also help everybody).
> >
> > On the other hand, setting up the regression hunt is not automatic
> > either, but with some practice, documentation and fine-tuning of the
> > scripts, it can be made fairly trivial.
>
> Alas, it sounds like an interesting project and I'd help trying to
> automate it as far as possible.
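As a rough sketch of the "record bad revisions and continue" idea described
above, something along the following lines might work. This is not the
interface of the actual regression-hunt scripts; the helper commands
(build-gcc-at-rev.sh, run-testcase.sh) and all function names here are
hypothetical stand-ins:

#!/usr/bin/env python
# Hypothetical sketch only: a binary search over SVN revisions that
# skips revisions which fail to build, rather than restarting the hunt.

import subprocess

def build_at(rev):
    # Stand-in for "svn up -r <rev>" plus a configure/make cycle;
    # the script name is made up for this example.
    return subprocess.call(["./build-gcc-at-rev.sh", str(rev)]) == 0

def test_passes_at(rev):
    # Stand-in for running the single failing testcase at `rev`.
    return subprocess.call(["./run-testcase.sh", str(rev)]) == 0

def hunt(good, bad, skipped=()):
    # Find the first revision in (good, bad] where the test fails,
    # assuming it passes at `good` and fails at `bad`.
    skipped = set(skipped)
    while bad - good > 1:
        mid = (good + bad) // 2
        # Pick the buildable candidate closest to the midpoint.
        candidates = [r for r in range(good + 1, bad) if r not in skipped]
        if not candidates:
            break  # every revision left in the range is unbuildable
        mid = min(candidates, key=lambda r: abs(r - mid))
        if not build_at(mid):
            skipped.add(mid)  # remember it and keep going, don't restart
        elif test_passes_at(mid):
            good = mid  # regression happened after mid
        else:
            bad = mid   # regression happened at or before mid
    return bad  # earliest revision known to fail

if __name__ == "__main__":
    print("first failing revision:", hunt(good=120000, bad=125000))

The point of the skipped set is exactly what Manuel describes: when a
revision fails to build, record it and try a nearby buildable one instead
of aborting and restarting the whole hunt by hand.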
Many regressions only show up for particular targets, and wrong-code
regressions usually require executing the failing test on the target
where it fails. Full automation wouldn't be easy, and probably isn't
possible.

On the other hand, many PRs already identify the patch that caused the
regression, either because there was a full regression hunt or because
someone examined the ChangeLog entries for a short range in which the
regression occurred and verified that a particular patch caused the new
failure. Making it easy to search for bugs for which that information
is or is not available, and to search for yourself as the person who
introduced a regression, is a great idea.

Janis