On Sunday, January 25, 2004, at 06:01 , Matt Fowles wrote:

Of late it seems that everybody has been throwing around their own little homegrown benchmarks to support their points. But people frequently point out that these benchmarks are flawed in one way or another.

I suggest that we add a benchmark/ subdirectory and create a canonical suite of benchmarks that exercise things well (and hopefully fully). Then we can all post relative times for runs on this benchmark suite, and we will know exactly what is being tested and how valid it is.

Well, there's already examples/benchmarks. If those programs are not at all realistic, then more realistic benchmarks should be added.


Would be nice if there were a convenient way to run the lot of them and collect the timing information, though.
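Something along these lines might do as a starting point. This is just a sketch, and the ./parrot path, the examples/benchmarks/*.pasm glob, and the choice of Python are assumptions for illustration, not anything that's actually in the tree:

#!/usr/bin/env python3
"""Minimal sketch of a benchmark runner: time every program under a
benchmark directory and print per-program and total wall-clock times.
The interpreter path, directory, and file extension below are assumed."""

import glob
import os
import subprocess
import time

PARROT = "./parrot"                        # assumed interpreter binary
BENCH_GLOB = "examples/benchmarks/*.pasm"  # assumed benchmark location


def time_one(path):
    """Run one benchmark, discarding its output; return elapsed seconds."""
    start = time.perf_counter()
    subprocess.run([PARROT, path],
                   stdout=subprocess.DEVNULL,
                   stderr=subprocess.STDOUT,
                   check=False)
    return time.perf_counter() - start


def main():
    total = 0.0
    for path in sorted(glob.glob(BENCH_GLOB)):
        elapsed = time_one(path)
        total += elapsed
        print("%-40s %8.3fs" % (os.path.basename(path), elapsed))
    print("%-40s %8.3fs" % ("total", total))


if __name__ == "__main__":
    main()

Run from the top of the build directory, it would print one line per benchmark plus a total, which is enough to compare runs of the whole suite.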



Gordon Henriksen
[EMAIL PROTECTED]
