On Mon, Aug 17, 2009 at 05:17:08PM +0200, Reinhold Kainhofer wrote:
> Am Montag, 17. August 2009 16:08:36 schrieb Michael Käppler:
> >
> > > (nobody checks the regression tests for each release, for example
> > > -- and that's trivially done with a web browser!)
> >
> > That reminds me of an idea I recently had: Wouldn't it be possible to
> > automatically generate a sort of "checksum" for each regression-test
> > output file and compare it with the former releases?
>
> Isn't this exactly what we already have ("make test-baseline" to create
> the reference version, and then "make test" to compare the new output to
> the reference output)?  For the results, see
>   http://lilypond.org/test/
> The 2.13.3 results are at:
>   http://lilypond.org/test/v2.13.3-0/compare-v2-13/index.html
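For what it's worth, the quoted "checksum" idea can be sketched roughly as
below.  This is only an illustration, not LilyPond's actual machinery: the
real "make test-baseline" / "make test" workflow compares rendered output,
and all directory and file names here are invented for the example.

```shell
#!/bin/sh
# Hypothetical sketch: hash every regression-test output file, keep the
# hashes as a baseline, and diff a later run against that baseline.
set -e
mkdir -p baseline current

# Fake regression-test outputs: one unchanged file, one changed file.
printf 'output A\n' > baseline/test1.out
printf 'output A\n' > current/test1.out
printf 'output B\n' > baseline/test2.out
printf 'output CHANGED\n' > current/test2.out

# Record one checksum per file, keyed by file name only, so the two
# directories can be compared line by line.
(cd baseline && md5sum *.out | sort -k2) > baseline.sums
(cd current  && md5sum *.out | sort -k2) > current.sums

# Any differing line flags a regression-test output that changed
# between the reference release and the current one.
if ! diff baseline.sums current.sums > /dev/null; then
    echo "changed outputs:"
    diff baseline.sums current.sums | awk '/^>/ { print $3 }'
fi
```

A real implementation would still need a human to look at the changed
files, since a differing checksum only says "something changed", not
whether the change is a regression or an intended improvement.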
Yes.  Right now there happen to be a lot, but generally there are only
around a dozen broken examples.

> Graham was referring to the fact that nobody seems to bother about
> looking at those automatically-created regression results before or
> after a release.

Yes.  All it takes is bookmarking the site, checking it whenever there's
a release, and reporting any broken examples.  However, nobody is willing
to commit to doing this: 15 minutes whenever there's a release, which
happens at most once every two weeks.

Cheers,
- Graham

_______________________________________________
lilypond-devel mailing list
lilypond-devel@gnu.org
http://lists.gnu.org/mailman/listinfo/lilypond-devel