https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104151

--- Comment #9 from Richard Biener <rguenth at gcc dot gnu.org> ---
(In reply to Richard Biener from comment #8)
> (In reply to rsand...@gcc.gnu.org from comment #7)
> > (In reply to Richard Biener from comment #6)
> > > Richard - I'm sure we can construct a similar case for aarch64 where
> > > argument passing and vector mode use cause spilling?
> > > 
> > > On x86 the simplest testcase showing this is
> > > 
> > > typedef unsigned long long v2di __attribute__((vector_size(16)));
> > > v2di bswap(__uint128_t a)
> > > {
> > >     return *(v2di *)&a;
> > > }
> > > 
> > > that produces
> > > 
> > > bswap:
> > > .LFB0:
> > >         .cfi_startproc
> > >         sub     sp, sp, #16
> > >         .cfi_def_cfa_offset 16
> > >         stp     x0, x1, [sp]
> > >         ldr     q0, [sp]
> > >         add     sp, sp, 16
> > >         .cfi_def_cfa_offset 0
> > >         ret
> > > 
> > > on aarch64 for me.  Maybe the stp x0, x1 store can forward to the ldr
> > > load though, and I'm not sure there's another way to move x0/x1 to q0.
> > It looks like this is a deliberate choice for aarch64.  The generic
> > costing has:
> > 
> >   /* Avoid the use of slow int<->fp moves for spilling by setting
> >      their cost higher than memmov_cost.  */
> >   5, /* GP2FP  */
> > 
> > So in cases like the above, we're telling IRA that spilling to
> > memory and reloading is cheaper than moving between registers.
> > For -mtune=thunderx we generate:
> > 
> >         fmov    d0, x0
> >         ins     v0.d[1], x1
> >         ret
> > 
> > instead.
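
(For context: that comment is from the generic regmove cost table in
gcc/config/aarch64/aarch64.cc (aarch64.c in older releases).  A rough
sketch of the table, quoted from memory, so the exact values may differ
between releases:

static const struct cpu_regmove_cost generic_regmove_cost =
{
  1, /* GP2GP  */
  /* Avoid the use of slow int<->fp moves for spilling by setting
     their cost higher than memmov_cost.  */
  5, /* GP2FP  */
  5, /* FP2GP  */
  2  /* FP2FP  */
};

The thunderx tuning presumably uses a GP2FP cost at or below its
memmov_cost, which would explain why IRA picks the fmov/ins sequence
there.)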
> 
> Ah, interesting.  On x86 we disallow/pessimize GPR<->XMM moves with
> some tunings as well, but even there a sequence like
> 
>        movq    %rdi, -24(%rsp)
>        movq    %rsi, -16(%rsp)
>        movq    -24(%rsp), %xmm0
>        movq    -16(%rsp), %xmm1
>        punpcklqdq %xmm1, %xmm0   (combining the two 64-bit halves into %xmm0)
> 
> instead of
> 
>         movq    %rdi, -24(%rsp)
>         movq    %rsi, -16(%rsp)
>         movdqa  -24(%rsp), %xmm0
> 
> would likely be faster.  I'm not sure one can get LRA to produce this
> two-staged reload with just appropriate costing though.  As said, the
> key cost of the bad sequence is the failed store forwarding, so it's
> specific to spilling a two-GPR TImode and reloading it as a single
> FPR V2DImode.

And a peculiarity of aarch64 seems to be that the argument is
passed in (reg:TI x0), which is supposedly a register pair.  On x86
there are no TImode register-pair registers, I think; instead the
__int128 is passed as two 8-byte halves in regular GPRs.  So on aarch64
we have the simpler

(insn 13 3 10 2 (set (reg:TI 95)
        (reg:TI 0 x0 [ a ])) "t.ii":3:2 58 {*movti_aarch64}
     (expr_list:REG_DEAD (reg:TI 0 x0 [ a ])
        (nil)))
(insn 10 13 11 2 (set (reg/i:V2DI 32 v0)
        (subreg:V2DI (reg:TI 95) 0)) "t.ii":5:2 1173 {*aarch64_simd_movv2di}
     (expr_list:REG_DEAD (reg:TI 95)
        (nil)))

before RA.
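
FWIW, a variant that already receives the value as two 64-bit halves
builds the vector directly from the GPRs via vec_init, which is roughly
the lowering one would like for the (reg:TI x0) argument too.  A minimal
sketch (the function name is made up and the exact codegen of course
depends on the tuning):

typedef unsigned long long v2di __attribute__((vector_size(16)));

v2di from_halves(unsigned long long lo, unsigned long long hi)
{
    /* Presumably expands to fmov d0, x0; ins v0.d[1], x1 rather than
       going through the stack.  */
    return (v2di){lo, hi};
}

which matches the sequence the thunderx tuning produces for the original
testcase.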
