https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113838
--- Comment #5 from absoler at smail dot nju.edu.cn ---
(In reply to Andrew Pinski from comment #2)
> The difference from the gimple level IR:
> ```
>   _14 = g_26[5][3][0];
>   _15 = (int) _14;
>   _16 = _13 ^ _15;
>   g_51 = _16;
>   if (_13 != _15)
> ```
>
> vs:
> ```
>   _14 = g_26[5][3][0];
>   _15 = (int) _14;
>   _16 = _13 ^ _15;
>   g_51 = _16;
>   if (_16 != 0)
>     goto <bb 4>; [50.00%]
>   else
>     goto <bb 3>; [50.00%]
> ```
>
> This is expected behavior even for the x86_64 target

The GIMPLE IR has no problem, but `_13` is replaced with g_26[5][3][0] in the follow-up passes; this should not be expected behavior.

We question this option because we found that in an older version of GCC (10.2.0), the -O2 option alone is enough to produce the same bad code, so we worry that there is a hidden, unfixed problem that is re-triggered by this option. Besides, the bad binary code introduces more load operations than the source code (compiled without optimization), so we think it is necessary to check this regardless of which optimization is disabled.
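For clarity, here is a minimal C sketch of the kind of pattern under discussion (the types, the function shape, and the branch body are assumptions for illustration; the PR's actual reduced test case is csmith-generated and differs):

```c
/* Sketch only: assumed declarations matching the names in the GIMPLE above. */
extern short g_26[6][4][1];
extern int g_51;

void f(int x)                       /* x plays the role of _13 */
{
    int v = (int) g_26[5][3][0];    /* a single load in the source */
    g_51 = x ^ v;                   /* xor result stored to g_51 */
    if (x != v)                     /* expected: reuse v, not reload g_26[5][3][0] */
        g_51 = 0;                   /* placeholder body for the taken branch */
}
```

The concern is that the generated binary performs a second load of g_26[5][3][0] for the comparison instead of reusing the value already loaded for the xor.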