On Sun, Aug 31, 2014 at 8:18 PM, Segher Boessenkool
<seg...@kernel.crashing.org> wrote:
> On Fri, Aug 29, 2014 at 11:58:37PM -0600, Jeff Law wrote:
>> One could argue that this mess is a result of trying to optimize a reg
>> that is set more than once.  Though I guess that might be a bit of a
>> big hammer.
>
> It works fine in other cases, and is quite beneficial for e.g. optimising
> instruction sequences that set a fixed carry register twice.
>
> In the testcase (and comment in the proposed patch), why is combine
> combining four insns at all?  That means it rejected combining just the
> first three.  Why did it do that?

It is explicitly rejected by the code below in can_combine_p:
  if (GET_CODE (PATTERN (i3)) == PARALLEL)
    for (i = XVECLEN (PATTERN (i3), 0) - 1; i >= 0; i--)
      if (GET_CODE (XVECEXP (PATTERN (i3), 0, i)) == CLOBBER)
        {
          /* Don't substitute for a register intended as a clobberable
             operand.  */
          rtx reg = XEXP (XVECEXP (PATTERN (i3), 0, i), 0);
          if (rtx_equal_p (reg, dest))
            return 0;

The check fires because insn i2 in the list i0/i1/i2 below is a PARALLEL
containing a clobber of the register that is both the dest of insn 76 and
a use in insn 77 (flags:CC):

   32: r84:SI=0
   76: flags:CC=cmp(r84:SI,0x1)
      REG_DEAD r84:SI
   77: {r84:SI=-ltu(flags:CC,0);clobber flags:CC;}
      REG_DEAD flags:CC
      REG_UNUSED flags:CC
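For context, a minimal sketch of the kind of source that typically
produces this shape on i386: an unsigned comparison materialized as 0/-1,
expanded as a cmp that sets flags:CC followed by an sbb-style instruction
that consumes the carry and clobbers the flags again.  The function is
purely illustrative (it is not the testcase from the PR):

    /* Illustrative sketch only, not the PR testcase.  On i386 this is
       commonly expanded as cmp (sets flags:CC) followed by sbb %eax,%eax,
       which consumes the carry flag and clobbers the flags; the same
       set/use/clobber pattern as insns 76/77 above.  */
    unsigned int
    all_ones_if_less (unsigned int a, unsigned int b)
    {
      return a < b ? -1U : 0;   /* typically cmp + sbb */
    }

Thanks,
bin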