2011/6/30 Han-Wen Nienhuys <hanw...@gmail.com>:
> Overall, I think this cycle took too long.

As I'm pretty new here, I cannot compare this cycle to previous ones,
but I think I agree :)

2011/6/30 m...@apollinemike.com <m...@apollinemike.com>:
> On Jun 30, 2011, at 4:17 PM, Han-Wen Nienhuys wrote:
>> We should strive to have policies that make each development release
>> be a worthy stable candidate. That means -for example- being serious
>> about
>>
>> * changes passing through the regtest
>> * bugfixes and features always having a test to check against
>
> I agree, and I'd go further to add that one of the problems with the 2.13 
> process towards the end was a difficulty in anticipating what to make 
> regtests look like so that they tested all possible contingencies.  In order 
> to make sure that the regtests are robust w/o forcing each change to go 
> through 5 hours of regression testing, I propose that before each minor 
> release, a large regression test is run on a suite of real-life pieces 
> followed by a pixel comparison.  This'd take a while, but it'd provide a 
> periodic check to nip problems in the bud.

I like this idea, but I'm not sure it will be feasible without a
*very* smart program to compare the results. The main problem
I anticipate is that real-life pieces are much longer than regtests,
so any change in spacing may easily result in different
line breaks - and I suppose different breaking makes comparing
outputs by machine impossible, unless some very advanced algorithms
are used (and I don't know whether such algorithms are available at all).
In other words, the check would probably have to be performed by a
human, and preferably the same person every time (as (s)he will
eventually memorize all these pieces, which would make the
comparison easier).
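To illustrate the problem: here is a minimal, purely hypothetical sketch of a naive pixel comparison (not LilyPond's actual test machinery). It assumes both pages are rasterized to equal-size bitmaps represented as lists of pixel rows. Once a line break shifts, every subsequent system moves, so nearly all pixels report as "different" even when the engraving is fine:

```python
# Hypothetical sketch -- NOT LilyPond's real regression-test comparator.
# Assumes two rendered pages rasterized to same-size grayscale bitmaps,
# each a list of rows of integer pixel values.

def pixel_diff_ratio(page_a, page_b):
    """Return the fraction of pixels that differ between two bitmaps."""
    total = 0
    differing = 0
    for row_a, row_b in zip(page_a, page_b):
        for pa, pb in zip(row_a, row_b):
            total += 1
            if pa != pb:
                differing += 1
    return differing / total if total else 0.0

# Identical renderings compare cleanly:
a = [[0, 0, 255], [255, 0, 0]]
b = [[0, 0, 255], [255, 0, 0]]
print(pixel_diff_ratio(a, b))  # 0.0

# But shift the content by one pixel column (as a changed line break
# would do to every following system) and most pixels "differ":
c = [[255, 0, 0], [0, 255, 0]]
print(pixel_diff_ratio(a, c))  # ~0.67, though the music may be identical
```

This is exactly why I suspect a dumb diff won't work on real-life pieces: the comparator would need to realign systems (or compare at the level of grobs rather than pixels) before measuring differences.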

cheers,
Janek

_______________________________________________
lilypond-devel mailing list
lilypond-devel@gnu.org
https://lists.gnu.org/mailman/listinfo/lilypond-devel
