I am working in the field of reliability, availability, and serviceability (RAS)
benchmarking.  While RAS benchmarking is not nearly as mature as
computationally-intensive benchmarking, the two share many similarities, and we
face the same fundamental problem: the difficulty of creating a fair benchmark
that is also representative of something useful.

We have a number of RAS benchmarks which we use internally at Sun to
characterize hardware and software systems.  Some of these have been 
described at conferences and in journals.  One of my tasks for the next year
is to make them free and open.  As Mike and others attest, writing a
good, useful, representative benchmark is no simple feat.

One point I'd like to make (often) is that any benchmark, microbenchmark, or
model of a system only provides a single view of the system.  To view the
whole of Mount Fuji requires looking from many viewpoints.  This is particularly
true when something wonderful and new comes along -- we're hard at work
on the CMT systems, which pose significant challenges for comparison against
existing systems.  Or, to use an analogy, one can carry 100 tons of fish to market
in a thousand rowboats or one supertanker.
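To make the analogy concrete, here is a small sketch (the numbers are invented
for illustration): two systems can deliver identical aggregate throughput while
differing enormously in single-stream performance, which is exactly why a
single-number comparison can mislead.

```python
# Hypothetical illustration: aggregate throughput can be identical while
# per-carrier (single-stream) performance differs by orders of magnitude.

def throughput(carriers, tons_per_carrier, hours_per_trip):
    """Aggregate tons delivered per hour across all carriers."""
    return carriers * tons_per_carrier / hours_per_trip

# A thousand rowboats, each hauling 0.1 ton per 10-hour trip...
rowboats = throughput(carriers=1000, tons_per_carrier=0.1, hours_per_trip=10)

# ...versus one supertanker hauling 100 tons per 10-hour trip.
tanker = throughput(carriers=1, tons_per_carrier=100.0, hours_per_trip=10)

print(rowboats, tanker)  # both 10.0 tons/hour in aggregate

# Yet any one load of fish cares how much *its* boat carries per trip:
# per-carrier capacity differs by a factor of 1000.
print(100.0 / 0.1)  # 1000.0
```

Whether the rowboats or the supertanker is "faster" depends entirely on which
view of the system the benchmark takes.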

 -- richard
This message posted from opensolaris.org
_______________________________________________
perf-discuss mailing list
perf-discuss@opensolaris.org