On Tue, Mar 13, 2007 at 08:44:18AM +0100, Michael Schmitz wrote:
> > It is probably useful to compile things on a wide range of hardware in
> > order to look for driver and hardware problems. That said, finding
> > problems that are due to hardware failure are probably less useful. It
> > is nice to know failure modes though. I guess ideally everything would
> > get compiled on every sufficiently unique system including an emulator.
> Ideally we'd have a massively parallel cluster of Amigas, Macs and Ataris
> to do that. Right now, we'd be more than happy if additional machines
> (real or emulated) could take up some of the backlog at times. We'd need
> to prevent an emulated system attempting to build a CPU intensive package,
> and for that we'd need a build time estimate up front. Not implemented
> yet.
Hmmm, sort of... http://unstable.buildd.net/index-m68k.html shows a rough
build time estimate for all packages in needs-build:

  Needs-Build queue ETA: 12 days 18:53:30

The algorithm behind that estimate is far from optimal, but the buildd.net
database contains a lot of data:

- names of buildds
- CPU type and speed
- RAM/disk sizes
- when each package was built, and how long the build took
- load average over time
- memory and swap usage over time
- kernel version
- ...

Sadly, I don't have much time to implement all the details on my own, but
if someone else is interested, I can provide either a login to buildd.net
or just access to the database. Everyone is invited to contribute! :-)

-- 
Ciao...            //      Fon: 0381-2744150
      Ingo       \X/       SIP: [EMAIL PROTECTED]

gpg pubkey: http://www.juergensmann.de/ij/public_key.asc
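For anyone who wants to pick this up: a minimal sketch of the kind of ETA
estimate described above, i.e. sum per-package historical build times over
the needs-build queue and divide by the number of active buildds. All
names, the data layout, and the fallback default here are made up for
illustration; the real buildd.net database schema differs.

```python
from datetime import timedelta

def estimate_queue_eta(needs_build, history, n_buildds, default_secs=3600):
    """Estimate wall-clock time to drain the needs-build queue.

    needs_build  -- list of package names waiting to be built
    history      -- dict mapping package name -> last observed build
                    duration in seconds (from the buildd database)
    n_buildds    -- number of build machines working in parallel
    default_secs -- guess for packages with no recorded build time
    """
    total = sum(history.get(pkg, default_secs) for pkg in needs_build)
    # Crude parallelism model: perfect load balancing across buildds.
    # A real estimator would also weight by each buildd's CPU speed.
    return timedelta(seconds=total / n_buildds)

if __name__ == "__main__":
    history = {"glibc": 14400, "hello": 120}
    queue = ["glibc", "hello", "newpkg"]
    # (14400 + 120 + 3600) s / 2 buildds = 9060 s
    print(estimate_queue_eta(queue, history, n_buildds=2))
```

Refinements like per-CPU scaling or load-average correction would slot in
where the per-package times are summed.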