On Thu, Jun 2, 2022 at 11:48 AM Uros Bizjak via Gcc-patches
<gcc-patches@gcc.gnu.org> wrote:
>
> On Thu, Jun 2, 2022 at 9:20 AM Roger Sayle <ro...@nextmovesoftware.com> wrote:
> >
> > The simple test case below demonstrates an interesting register
> > allocation challenge facing x86_64, imposed by ABI requirements
> > on int128.
> >
> > __int128 foo(__int128 x, __int128 y)
> > {
> >   return x+y;
> > }
> >
> > For which GCC currently generates the unusual sequence:
> >
> >         movq    %rsi, %rax
> >         movq    %rdi, %r8
> >         movq    %rax, %rdi
> >         movq    %rdx, %rax
> >         movq    %rcx, %rdx
> >         addq    %r8, %rax
> >         adcq    %rdi, %rdx
> >         ret
> >
> > The challenge is that the x86_64 ABI requires passing the first __int128,
> > x, in %rsi:%rdi (highpart in %rsi, lowpart in %rdi), where internally
> > GCC prefers TI mode (double word) integers to be register allocated as
> > %rdi:%rsi (highpart in %rdi, lowpart in %rsi). So after reload, we have
> > four mov instructions, two to move the double word to temporary registers
> > and then two to move them back.
> >
> > This patch adds a peephole2 to spot this register shuffling, and with
> > -Os generates a xchg instruction, to produce:
> >
> >         xchgq   %rsi, %rdi
> >         movq    %rdx, %rax
> >         movq    %rcx, %rdx
> >         addq    %rsi, %rax
> >         adcq    %rdi, %rdx
> >         ret
> >
> > or when optimizing for speed, a three mov sequence, using just one of
> > the temporary registers, which ultimately results in the improved:
> >
> >         movq    %rdi, %r8
> >         movq    %rdx, %rax
> >         movq    %rcx, %rdx
> >         addq    %r8, %rax
> >         adcq    %rsi, %rdx
> >         ret
> >
> > I've a follow-up patch which improves things further, and with the
> > output in flux, I'd like to add the new testcase with part 2, once
> > we're back down to requiring only two movq instructions.
>
> Shouldn't we rather do something about:
>
> (insn 2 9 3 2 (set (reg:DI 85)
>         (reg:DI 5 di [ x ])) "dword-2.c":2:1 82 {*movdi_internal}
>      (nil))
> (insn 3 2 4 2 (set (reg:DI 86)
>         (reg:DI 4 si [ x+8 ])) "dword-2.c":2:1 82 {*movdi_internal}
>      (nil))
> (insn 4 3 5 2 (set (reg:TI 84)
>         (subreg:TI (reg:DI 85) 0)) "dword-2.c":2:1 81 {*movti_internal}
>      (nil))
> (insn 5 4 6 2 (set (subreg:DI (reg:TI 84) 8)
>         (reg:DI 86)) "dword-2.c":2:1 82 {*movdi_internal}
>      (nil))
> (insn 6 5 7 2 (set (reg/v:TI 83 [ x ])
>         (reg:TI 84)) "dword-2.c":2:1 81 {*movti_internal}
>      (nil))
>
> The above is how the function TImode argument is constructed.
>
> The other problem is that double-word addition gets split only after
> reload, mostly due to RA reasons. In the past it was determined that
> RA creates better code when registers are split late (this reason
> probably does not hold anymore), but nowadays the limitation remains
> only for arithmetic and shifts.
Hmm. Presumably the lower-subreg pass doesn't split the above after
the double-word adds are split? Or maybe we simply do it too late.

> Attached to this message, please find the patch that performs dual
> word mode arithmetic splitting before reload. It improves generated
> code somehow, but due to the above argument construction sequence, the
> bulk of moves remain. Unfortunately, when under register pressure
> (e.g. 32-bit targets), the peephole approach gets ineffective due to
> register spilling, so IMO the root of the problem should be fixed.
>
> Uros.
>
> >
> > This patch has been tested on x86_64-pc-linux-gnu with make bootstrap
> > and make -k check, both with and without --target_board=unix{-m32} with
> > no new failures. Ok for mainline?
> >
> >
> > 2022-06-02  Roger Sayle  <ro...@nextmovesoftware.com>
> >
> > gcc/ChangeLog
> >         * config/i386/i386.md (define_peephole2): Recognize double word
> >         swap sequences, and replace them with more efficient idioms,
> >         including using xchg when optimizing for size.
> >
> >
> > Thanks in advance,
> > Roger
> > --
> >
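For concreteness, the testcase Roger defers to part 2 would presumably end
up looking something like the sketch below; the dg- directives and the
scan-assembler count of two movq are only a guess from the description
above ("only two movq instructions"), not the actual test.

/* Hypothetical sketch of the part-2 testcase (e.g. under gcc.target/i386/);
   directives and counts are illustrative, not the committed test.  */
/* { dg-do compile { target int128 } } */
/* { dg-options "-O2" } */

__int128 foo (__int128 x, __int128 y)
{
  return x + y;
}

/* Ideally only two movq remain: copy x's lowpart into %rax and its
   highpart into %rdx, then addq/adcq add y in place.  */
/* { dg-final { scan-assembler-times "movq" 2 } } */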