This is a fix for pathological variable-offset shifts of the form x << x or x >> x.
Such shifts need a shift-count register, which may overlap the shift operand itself:

unsigned char shift (unsigned int x)
{
  return x << x;
}

Without the patch, note that r24 is both part of the operand and used as the loop counter:

shift:
	rjmp 2f
1:	lsl r24
	rol r25
2:	dec r24
	brpl 1b
	ret

With the patch, tmp_reg (R0) is used as the counter:

shift:
	mov r0,r24
	rjmp 2f
1:	lsl r24
	rol r25
2:	dec r0
	brpl 1b
	ret

The patch is obvious.  The increased instruction length is already taken into account, because the R0 = Rx move will be needed anyway if Rx is used afterwards.

Ok to install?

Johann

	PR target/39386
	* config/avr/avr.c (out_shift_with_cnt): Use tmp_reg as shift
	counter for x << x and x >> x shifts.
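To make the failure mode concrete, here is a host-side C sketch (not part of the patch) that simulates the two instruction sequences above; the variable names mirror the AVR registers, and the functions shift_buggy/shift_fixed are illustrative names, not anything in avr.c:

```c
#include <assert.h>
#include <stdint.h>

/* Buggy sequence: the counter is r24 itself, so both "lsl r24" and
   "dec r24" clobber the count/result.  */
static uint8_t shift_buggy (uint16_t x)
{
  uint8_t r24 = x & 0xff, r25 = x >> 8;
  goto test;                              /* rjmp 2f */
loop:
  {
    uint8_t carry = r24 >> 7;             /* lsl r24 sets carry from bit 7 */
    r24 = (uint8_t) (r24 << 1);
    r25 = (uint8_t) ((r25 << 1) | carry); /* rol r25 shifts carry in */
  }
test:
  r24--;                                  /* dec r24: corrupts the result */
  if (!(r24 & 0x80))                      /* brpl 1b: loop while bit 7 clear */
    goto loop;
  (void) r25;
  return r24;                             /* unsigned char result lives in r24 */
}

/* Fixed sequence: tmp_reg (r0) holds a private copy of the count.  */
static uint8_t shift_fixed (uint16_t x)
{
  uint8_t r24 = x & 0xff, r25 = x >> 8;
  uint8_t r0 = r24;                       /* mov r0,r24 */
  goto test;
loop:
  {
    uint8_t carry = r24 >> 7;
    r24 = (uint8_t) (r24 << 1);
    r25 = (uint8_t) ((r25 << 1) | carry);
  }
test:
  r0--;                                   /* dec r0: operand stays intact */
  if (!(r0 & 0x80))
    goto loop;
  (void) r25;
  return r24;
}
```

For example, shift_fixed (1) yields 2 (= (1 << 1) & 0xff), while shift_buggy (1) shifts once, then the final dec leaves 0xff in r24.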
Index: config/avr/avr.c
===================================================================
--- config/avr/avr.c	(revision 176624)
+++ config/avr/avr.c	(working copy)
@@ -3147,8 +3147,11 @@ out_shift_with_cnt (const char *templ, r
     }
   else if (register_operand (operands[2], QImode))
     {
-      if (reg_unused_after (insn, operands[2]))
-        op[3] = op[2];
+      if (reg_unused_after (insn, operands[2])
+          && !reg_overlap_mentioned_p (operands[0], operands[2]))
+        {
+          op[3] = op[2];
+        }
       else
         {
           op[3] = tmp_reg_rtx;
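For plain hard registers, the overlap condition that the new reg_overlap_mentioned_p check guards against amounts to an interval intersection of the hard-register ranges.  A minimal sketch (the function hard_regs_overlap_p is a hypothetical illustration, not GCC's API):

```c
#include <assert.h>

/* A value occupying nregs hard registers starting at regno overlaps
   another iff the half-open ranges [regno, regno + nregs) intersect.  */
static int
hard_regs_overlap_p (unsigned regno1, unsigned nregs1,
                     unsigned regno2, unsigned nregs2)
{
  return regno1 < regno2 + nregs2 && regno2 < regno1 + nregs1;
}
```

In the example above, the HImode result in r24:r25 (regno 24, 2 regs) overlaps the QImode shift count in r24, so the count must be copied to tmp_reg (r0, which never overlaps the operand).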