Steve Holden <[EMAIL PROTECTED]> writes:

> The issue I have with correctness proofs (at least as they were
> presented in the 1980s - for all I know the claims may have become
> more realistic now) is that the proof of correctness can only relate
> to some highly-formal specification so complex that it's only
> comprehensible to computer scientists.
Just as software, as it is now, is comprehensible only to programmers.
A long while back there was a language called COBOL that was designed
to be comprehensible to non-programmers, and it did in fact catch on.
But it was cumbersome and limited even in its day, it turned out not
to be so readable to non-programmers after all, and it is not used
much in new projects nowadays.

> In other words, we can never "prove" that a program does what the
> customer wants, because the semantics are communicated in different
> ways, involving a translation which, since it is necessarily performed
> by humans using natural language, will always be incomplete at best
> and inaccurate at worst.

The kinds of things one tries to prove are similar to what traditional
software expresses less formally as localized test cases. E.g. for a
sorting routine you'd want to prove that output[n+1] >= output[n] for
all n and for all inputs. As the Intel FDIV bug incident reminds us,
even billions of test inputs are not enough to show that a routine
does the right thing for EVERY input.
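As a rough sketch in Python (the helper name here is made up for the
example), that sortedness property can be written as a runnable check.
Random testing can only ever sample the input space; a proof would
have to establish the property for every possible input:

    import random

    def is_sorted(output):
        # The property to prove: output[n+1] >= output[n] for all n.
        # (A full correctness proof would also need to show that the
        # output is a permutation of the input, not merely ordered.)
        return all(output[n] <= output[n + 1]
                   for n in range(len(output) - 1))

    # Testing samples the property on a few thousand random inputs:
    for _ in range(10000):
        xs = [random.randint(-1000, 1000)
              for _ in range(random.randint(0, 50))]
        assert is_sorted(sorted(xs))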