https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93806
--- Comment #24 from Alexander Cherepanov <ch3root at openwall dot com> ---
(In reply to Vincent Lefèvre from comment #11)
> But what does "internal consistency" mean?

That's a good question. Here we are talking about cases (like -funsafe-math-optimizations) that are not covered by any standard. Other PRs (like pr61502 or pr93301) discuss possible changes to the standard. So we need some basic rules to decide what is good and what is bad.

pr61502 taught me that discussing which value is the right result for a particular computation is very interesting but not very conclusive. So now I'm looking for contradictions: if you can derive a contradiction, then you can derive any statement, so that is the ultimate goal. How does this apply to a compiler? I thought the following was supposed to always hold: if you explicitly write a value into a variable (of any type), then every future read should give back the same value, no matter how the results of other reads are used or what control flow happens (absent other writes to the variable, of course). That is, after `int x = 0;`, every `printf("x = %d", x);` should output the same thing no matter how many `if (x == ...)` statements appear in between -- either the `printf` doesn't fire at all or it prints `x = 0`. If we demonstrate that this is broken, then we have demonstrated a contradiction (nonsense). And I had hoped this would be uncontroversial :-(

Sometimes it's possible to raise the bar even higher and construct a testcase where the `if` connecting the problematic part with the "independent" variable is hidden in non-executed code, in such a way that loop unswitching moves it back into the executed part (see bug 93301, comment 6 for an example). OTOH I think the bar should be lowered in gcc, and I hope it will be possible to come to an agreement that all integers in gcc should be stable. That is, in this PR the testcase in comment 0 should be enough to demonstrate the problem, without any need for the testcase in comment 1.
It's quite easy to get the latter from the former, so this agreement doesn't seem very important. It's much more important to agree on the general principle described above.

It's always possible that any particular testcase is broken in itself; in that case, some specific undefined behavior should be pointed out. So I fully support how you assessed my testcase from comment 8. We may disagree on whether it's UB (I'll get to that a bit later), but we agree that it's either UB or the program should print something sane.

What I don't understand is what is happening with my initial testcases.

(In reply to Richard Biener from comment #3)
> But you asked for that. So no "wrong-code" here, just another case
> of "instabilities" or how you call that via conditional equivalence
> propagation.

Just to be sure: are you saying that everything works as intended -- that the testcase doesn't contain UB and the result it prints is one of the allowed ones? (It could also be read as implying that this PR is a dup of another bug about conditional equivalence propagation or something.) Then we just disagree.

The discussion has moved to the specifics of particular optimizations. IMHO that is not important at all for deciding whether comment 1 demonstrates a bug or not. Again, IMHO either the testcase contains UB or it shouldn't print nonsense (in the sense described above). And it doesn't matter which options are used, whether it's a standards-compliant mode, etc.