Quoting Diego Novillo <[EMAIL PROTECTED]>:
[...]
> 
> The compiler cannot statically short-circuit that predicate because it 
> doesn't know whether best.score will be positive or not.
> 
> The chain of inferences that the compiler would need to do to properly 
> diagnose this case is beyond the scope of the mechanical transformations.  
> The reasoning you need to implement to catch these cases could even be 
> reduced to the halting problem.

Correct in theory. Incorrect in practice for many cases, including this one.
Otherwise you'd never hear of formal software model checking. (Microsoft,
for instance, hands out a model checker to driver writers gratis, AFAICT.)

In many cases mechanical analysis of code can prove much more complex
and subtle properties; the recent literature on the subject is worth reading.
It seems to me that predicate abstraction would do wonders on this example:
  P_score_neg_or_zero = true;
  P_d_init = false;
  while (*) {  /* star means "nondeterministic choice" */
    if (*) {
      /* I assume that VRP can do the deduction below */
      P_score_neg_or_zero = false;
      P_d_init = true;
    }
  }
  ASSERT (P_score_neg_or_zero || P_d_init)

It is trivial to mechanically prove that
  P_d_init == !P_score_neg_or_zero
holds throughout, hence that the ASSERT is valid; as a result 'd' is
proven to be initialized whenever it is read.

Note that this is an abstraction, so it can only prove the ASSERT.
A reported failure may be caused by the extra behaviors the abstraction
introduced. If the failing path can't be reconstructed on the concrete
program, you'd have to refine the abstract model (AKA abstraction
refinement).

> The uninitialized warnings are fairly easy to trick.  There is only so much 
> the compiler can realistically do.

You are most likely right about the "realistically" part.
Still, without hard numbers you can't be 100% certain that analysis
such as this wouldn't also benefit optimization enough to outweigh its costs.

--

  Michael
