On Tue, May 01, 2007 at 07:46:56PM -0400, Daniel Gryniewicz wrote:
> On Wed, 2007-05-02 at 01:32 +0200, Marius Mauch wrote:
> > I'd approach it a bit different: Before creating fixed classification
> > groups I'd first identify the attributes of tests that should be used
> > for those classifications.
> > a) cost (in terms of runtime, resource usage, additional deps)
> > b) effectiveness (does a failing/working test mean the package is
> >    broken/working?)
> > c) importance (is there a realistic chance for the test to be useful?)
> > d) correctness (does the test match the implementation? overlaps a bit
> >    with effectiveness)
> > e) others?
> There is one serious problem with this: Who's going to do the work to
> figure all this out for the 11,000 odd packages in the tree? This seems
> like a *huge* amount of work, work that I have no plan on doing for the
> 100-odd packages I (help) maintain, let alone the 4-10 different
> versions of each package. I highly doubt other maintainers want to do
> this kind of work either.

This wouldn't be an instant transition, and a lot of packages would be
covered under the 'importance' attribute alone.
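As a concrete illustration of a low-cost, high-effectiveness test of the kind described below for crypto packages (comparing known input+output pairs), here is a minimal Python sketch. The SHA-256 digest is the published FIPS 180-2 test vector for "abc"; the function name is illustrative, not from any package in the tree:

```python
import hashlib

# Known-answer test: hash a fixed input and compare against a published
# test vector (the FIPS 180-2 SHA-256 vector for the message "abc").
KNOWN_INPUT = b"abc"
KNOWN_DIGEST = "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"

def sha256_known_answer_test() -> bool:
    """Return True if the local SHA-256 implementation matches the vector."""
    return hashlib.sha256(KNOWN_INPUT).hexdigest() == KNOWN_DIGEST

if __name__ == "__main__":
    assert sha256_known_answer_test(), "SHA-256 known-answer test failed"
```

A test like this runs in microseconds, needs no extra dependencies, and a failure is almost certainly a real miscompilation or broken implementation, which is exactly the cost/effectiveness profile argued for below.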
Using crypto packages as an example: the cost is low (compare known
input+output pairs), the effectiveness is high, the importance is high
(witness the checksum problems caused in the tree some months ago), and
the correctness is very high. The mysql testcases, on the other hand,
have a high cost and a low effectiveness: there have been lots of cases
where they break due to userpriv or sandbox.

For the packages I maintain, I'd definitely implement test stuff for the
crypto and system-admin packages where feasible, but for a lot of others
I wouldn't bother - the cost/benefit ratio is not high enough.

-- 
Robin Hugh Johnson
Gentoo Linux Developer & Council Member
E-Mail     : [EMAIL PROTECTED]
GnuPG FP   : 11AC BA4F 4778 E3F6 E4ED F38E B27B 944E 3488 4E85