On 09/29/2014 11:12 AM, Jiong Wang wrote:
> +inline rtx single_set_no_clobber_use (const rtx_insn *insn)
> +{
> +  if (!INSN_P (insn))
> +    return NULL_RTX;
> +
> +  if (GET_CODE (PATTERN (insn)) == SET)
> +    return PATTERN (insn);
> +
> +  /* Defer to the more expensive case, and return NULL_RTX if there is
> +     USE or CLOBBER.  */
> +  return single_set_2 (insn, PATTERN (insn), true);
> }
What more expensive case?  If you're disallowing USE and CLOBBER, then single_set is just GET_CODE == SET.  I think this function is somewhat useless and should not be added.  An adjustment to move_insn_for_shrink_wrap may be reasonable, though.

I haven't tried to understand the miscompilation yet.

I can imagine that this would disable quite a bit of shrink wrapping for x86, though.  Can we do better in understanding when the clobbered register is live at the location to which we'd like to move the insns?

r~