Hi Alan,

Do you get any follow-up data from your customers or other groups in
your company that receive your product?  If so, you can run some
statistics comparing the percentage of defects that get through your
process and are logged by your customers against the percentage of
defects caught by your tests.  The lower the customer-logged share,
the better your testing is at catching defects.  You might also track
the ratio of defects that get out the door to the total number of
tests run.  There are many more metrics I can think up, but
ultimately, the fewer defects logged by your customers, the better
your tests are doing their job.  A customer can be defined as anyone
downwind of your tests (an external paying customer, your factory,
another internal group, etc.).  I hope this helps.  These are just my
opinions.
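The two metrics above can be sketched in a few lines. This is a minimal
illustration, not anything from Scott's actual process; the function
names and the example counts are made up for demonstration.

```python
def defect_detection_percentage(caught_by_tests, logged_by_customers):
    """Share of all known defects that the test process caught.

    Higher is better; the customer-logged remainder is the escape rate.
    """
    total = caught_by_tests + logged_by_customers
    if total == 0:
        return 0.0  # no defects known yet; avoid division by zero
    return 100.0 * caught_by_tests / total


def escapes_per_test_run(logged_by_customers, tests_run):
    """Defects that got out the door, normalized by tests run."""
    if tests_run == 0:
        return 0.0
    return logged_by_customers / tests_run


# Hypothetical numbers for illustration only.
print(defect_detection_percentage(190, 10))  # → 95.0
print(escapes_per_test_run(10, 500))         # → 0.02
```

With follow-up data collected per release, these numbers can be trended
over time to see whether the test process is improving.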

Scott

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of
[EMAIL PROTECTED]
Sent: Thursday, May 13, 2004 10:46 AM
To: [EMAIL PROTECTED]
Subject: Testing the Testers


I'm looking for ideas on improving my methods and documentation for
verifying testers.  Currently I have a simple document with a table of
test cases and a section that lists information about the tester and
what was used to run the verification.

How do you test your completed testers?  (Not that they're ever really
completed.)
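The document Alan describes could be modeled roughly as follows. This is
only a sketch under assumptions: the field names (tester_name,
equipment_used, and so on) are invented here, since the original
document's actual fields aren't given.

```python
from dataclasses import dataclass, field


@dataclass
class TestCase:
    case_id: str
    description: str
    result: str = "not run"  # e.g. "pass", "fail", "not run"


@dataclass
class VerificationRecord:
    tester_name: str      # who performed the verification
    equipment_used: str   # what was used to run it
    cases: list = field(default_factory=list)

    def pass_rate(self):
        """Percentage of executed cases that passed."""
        run = [c for c in self.cases if c.result != "not run"]
        if not run:
            return 0.0
        return 100.0 * sum(c.result == "pass" for c in run) / len(run)
```

Keeping the record in a structured form like this makes it easy to roll
the per-tester results up into the kind of escape-rate statistics Scott
suggests.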

Alan Gleichman
Hella Electronics Corp.
Plymouth, Michigan

From the "Software Engineering Glossary of Product Terminology"
BREAKTHROUGH - It nearly worked on the first try
MAINTENANCE FREE - Impossible to fix
MEETS QUALITY STANDARDS - It compiles without errors



