Alexandre Oliva wrote:

>> You're again trying to make this a binary-value question.  Why?

> Because in my mind, when we agree there is a bug, then a fix for it
> is easier to swallow even if it makes the compiler spend more
> resources, whereas a mere quality-of-implementation issue is subject
> to quite different standards.
Unfortunately, not all questions are black-and-white.  I don't think you're going to get consensus that this issue is as important to fix as wrong-code (in the traditional sense) problems.  So, arguing about whether this is a "correctness issue" isn't very productive.

Neither is arguing that there is now some urgent need for machine-usable debugging information in a way that there wasn't before.  Machines have been using debugging information for various purposes other than interactive debugging for ages.  But, they've always had to deal with the kinds of problems that you're encountering, especially with optimized code.

I think that at this point you're doing research.  I don't think we have a well-defined notion of what exactly debugging information should be for optimized code.  Robert Dewar's definition of -O1 as doing optimizations that don't interfere with debugging is coherent (though informal, of course), but you're asking for something more: full optimization, and, somehow, accurate debugging information in the presence of that.  I'm all for research, and the thinking that you're doing is unquestionably valuable.  But, you're pushing hard for a particular solution, and that may be premature at this point.

Debugging information just isn't rich enough to describe the full complexity of the optimization transformations.  There's no great way to assign a line number to an instruction that was created by the compiler when it inserted code on some flow-graph edge.  You can't get exact information about variable lifetimes because the scope doesn't start at a particular point in the generated code in the same way that it does in the source code.

My suggestion (not as a GCC SC member or GCC RM, but just as a fellow GCC developer with an interest in improving the compiler in the same way that you're trying to do) is that you stop writing code and start writing a paper about what you're trying to do.  Ignore the implementation.  Describe the problem in detail.  Narrow its scope if necessary.  Describe the success criteria in detail.

Ideally, the success criteria are mechanically checkable properties: i.e., given a C program as input, and optimized code + debug information as output, it should be possible to algorithmically prove whether the output is correct.  For example, how do you define the correctness of debug information for a variable's location at a given PC?  Perhaps we want to say that giving the answer "no information available" is always correct, but that saying "the value is here" when it's not is incorrect; that gives us a conservative fallback.

How do you define the point in the source program given a PC?  If the value of "x" changes on line 100, and we're at an instruction which corresponds to line 101, are we guaranteed to see the changed value?  Or is seeing the previous value OK?  What about some intermediate value if "x" is being changed byte-by-byte?  What about a garbage value if the compiler happens to optimize by throwing away the old value of "x" before assigning a new one?

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713
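
To make the "line 100 / line 101" and byte-by-byte questions above concrete, here is a minimal C sketch; the names and line numbers are hypothetical, and the comments only describe what an optimizer may plausibly do, not what any particular compiler does:

/* With optimization, the store to x may be reordered or, on a 32-bit
 * target, split into two word-sized writes, so a debugger stopped at
 * "line 101" could plausibly show the old value, the new value, or a
 * half-updated mixture. */
#include <stdio.h>

long long x;             /* may need two stores on a 32-bit target */
volatile long long sink;

void observe(void)       /* a plausible breakpoint site */
{
    sink = x;
}

void f(long long newval)
{
    x = newval;          /* "line 100": x changes here */
    observe();           /* "line 101": must a debugger stopped here
                            already see newval in x? */
}

int main(void)
{
    f(0x0123456789abcdefLL);
    printf("x = %llx\n", (unsigned long long) x);
    return 0;
}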
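
And a toy formalization of the conservative criterion ("no information available" is always correct, a wrong location never is).  This is only a sketch of the shape such a checker's verdict could take; the source_value oracle, i.e. the value the variable has at the source point the debug info maps the PC to, is left as the hard, unspecified part:

#include <stdio.h>

typedef enum { LOC_UNAVAILABLE, LOC_AVAILABLE } loc_kind;

typedef struct {
    loc_kind kind;
    long     reported_value;  /* value read from the reported location */
} debug_answer;

int answer_is_correct(debug_answer a, long source_value)
{
    if (a.kind == LOC_UNAVAILABLE)
        return 1;             /* the conservative fallback: always OK */
    return a.reported_value == source_value;
}

int main(void)
{
    debug_answer unavailable = { LOC_UNAVAILABLE, 0 };
    debug_answer stale       = { LOC_AVAILABLE, 7 };

    /* prints "1 0": saying nothing is correct; a stale value is not */
    printf("%d %d\n",
           answer_is_correct(unavailable, 42),
           answer_is_correct(stale, 42));
    return 0;
}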