https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104151

--- Comment #8 from Richard Biener <rguenth at gcc dot gnu.org> ---
(In reply to rsand...@gcc.gnu.org from comment #7)
> (In reply to Richard Biener from comment #6)
> > Richard - I'm sure we can construct a similar case for aarch64 where
> > argument passing and vector mode use cause spilling?
> >
> > On x86 the simplest testcase showing this is
> >
> > typedef unsigned long long v2di __attribute__((vector_size(16)));
> > v2di bswap(__uint128_t a)
> > {
> >   return *(v2di *)&a;
> > }
> >
> > that produces
> >
> > bswap:
> > .LFB0:
> >         .cfi_startproc
> >         sub     sp, sp, #16
> >         .cfi_def_cfa_offset 16
> >         stp     x0, x1, [sp]
> >         ldr     q0, [sp]
> >         add     sp, sp, 16
> >         .cfi_def_cfa_offset 0
> >         ret
> >
> > on arm for me.  Maybe the stp x0, x1 store can forward to the ldr load
> > though and I'm not sure there's another way to move x0/x1 to q0.
>
> It looks like this is a deliberate choice for aarch64.  The generic
> costing has:
>
>   /* Avoid the use of slow int<->fp moves for spilling by setting
>      their cost higher than memmov_cost.  */
>   5, /* GP2FP  */
>
> So in cases like the above, we're telling IRA that spilling to
> memory and reloading is cheaper than moving between registers.
> For -mtune=thunderx we generate:
>
>         fmov    d0, x0
>         ins     v0.d[1], x1
>         ret
>
> instead.

Ah, interesting.  On x86 we disallow/pessimize GPR<->XMM moves with some
tunings as well.  Still, a sequence like

        movq    %rdi, -24(%rsp)
        movq    %rsi, -16(%rsp)
        movq    -24(%rsp), %xmm0
        movq    -16(%rsp), %xmm1
        unpckhpd %xmm0, %xmm1

(fixme - that's wrong, but you get the idea) instead of

        movq    %rdi, -24(%rsp)
        movq    %rsi, -16(%rsp)
        movdqa  -24(%rsp), %xmm0

would likely be faster there.  Not sure if one can get LRA to produce this
two-staged reload with just appropriate costing though.  As said, the key
cost of the bad sequence is the failing store forwarding, so it's special
to spilling a two-GPR TImode and reloading it as a single FPR V2DImode.
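
For reference, the combine step in the first sequence above was presumably
meant to be something like punpcklqdq rather than unpckhpd; a corrected
sketch of the two-staged reload (register choices purely illustrative, not
actual compiler output) could be

        movq    %rdi, -24(%rsp)         # spill low half of the TImode arg
        movq    %rsi, -16(%rsp)         # spill high half
        movq    -24(%rsp), %xmm0        # 8-byte load matches an 8-byte store
        movq    -16(%rsp), %xmm1        # likewise, so both can forward
        punpcklqdq %xmm1, %xmm0         # xmm0 = { low, high }

Here each reload is an 8-byte load that lines up exactly with an 8-byte
store, so store forwarding succeeds, whereas the 16-byte movdqa load in the
second sequence spans both stores and hits the store-forwarding failure
mentioned above.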