https://gcc.gnu.org/bugzilla/show_bug.cgi?id=96098

Richard Biener <rguenth at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
   Last reconfirmed|                            |2020-07-08
             Status|UNCONFIRMED                 |NEW
     Ever confirmed|0                           |1
   Target Milestone|---                         |11.0

--- Comment #1 from Richard Biener <rguenth at gcc dot gnu.org> ---
The testcase probably needs to move to costmodel/*/ because its outcome now
depends on the actual costing.  On x86_64:

0x54db890 _1 2 times vector_store costs 24 in body
0x54db890 <unknown> 1 times vec_construct costs 8 in prologue
0x54db890 <unknown> 1 times vec_construct costs 8 in prologue
0x54dcba0 _1 1 times scalar_store costs 12 in body
0x54dcba0 _2 1 times scalar_store costs 12 in body
0x54dcba0 _3 1 times scalar_store costs 12 in body
0x54dcba0 _4 1 times scalar_store costs 12 in body

while ppc64le has

0x42edf00 _1 2 times vector_store costs 2 in body
0x42edf00 <unknown> 1 times vec_construct costs 2 in prologue
0x42edf00 <unknown> 1 times vec_construct costs 2 in prologue
0x42ef850 _1 1 times scalar_store costs 1 in body
0x42ef850 _2 1 times scalar_store costs 1 in body
0x42ef850 _3 1 times scalar_store costs 1 in body
0x42ef850 _4 1 times scalar_store costs 1 in body

so for ppc64le it's 6 vector vs. 4 scalar while on x86_64 it's 36 vector
vs. 48 scalar.  As the comment in the testcase explains, the vectorization
is considered a "bug" (well, I'd say if write-combining is profitable
we should of course do it):

/* ???  Due to the gaps we fall back to scalar loads which makes the
   vectorization profitable.  */
/* { dg-final { scan-tree-dump "not profitable" "slp2" { xfail *-*-* } } } */
/* { dg-final { scan-tree-dump-times "BB vectorization with gaps at the end of
a load is not supported" 1 "slp2" } } */
/* { dg-final { scan-tree-dump-times "Basic block will be vectorized" 1 "slp2"
} } */

on x86_64 we get

        movsd   a+2048(%rip), %xmm0
        movsd   a(%rip), %xmm1
        movhpd  a+3072(%rip), %xmm0
        movhpd  a+1024(%rip), %xmm1
        movaps  %xmm1, b(%rip)
        movaps  %xmm0, b+16(%rip)

vs.

        movsd   a(%rip), %xmm0
        movsd   %xmm0, b(%rip)
        movsd   a+1024(%rip), %xmm0
        movsd   %xmm0, b+8(%rip)
        movsd   a+2048(%rip), %xmm0
        movsd   %xmm0, b+16(%rip)
        movsd   a+3072(%rip), %xmm0
        movsd   %xmm0, b+24(%rip)

where it looks profitable (larger stores are also always good for STLF,
store-to-load forwarding)
while on ppc64le we have

0:      addis 2,12,.TOC.-.LCF0@ha
        addi 2,2,.TOC.-.LCF0@l
        .localentry     foo,.-foo
        addis 9,2,.LANCHOR0+1024@toc@ha
        lfd 10,.LANCHOR0+1024@toc@l(9)
        addis 9,2,.LANCHOR0+2048@toc@ha
        lfd 11,.LANCHOR0+2048@toc@l(9)
        addis 9,2,.LANCHOR0+3072@toc@ha
        lfd 12,.LANCHOR0+3072@toc@l(9)
        addis 9,2,.LANCHOR0+4096@toc@ha
        lfd 0,.LANCHOR0+4096@toc@l(9)
        addis 9,2,.LANCHOR0@toc@ha
        stfd 10,.LANCHOR0@toc@l(9)
        addis 9,2,.LANCHOR0+8@toc@ha
        stfd 11,.LANCHOR0+8@toc@l(9)
        addis 9,2,.LANCHOR0+16@toc@ha
        stfd 12,.LANCHOR0+16@toc@l(9)
        addis 9,2,.LANCHOR0+24@toc@ha
        stfd 0,.LANCHOR0+24@toc@l(9)
        blr

vs (cost model disabled):

0:      addis 2,12,.TOC.-.LCF0@ha
        addi 2,2,.TOC.-.LCF0@l
        .localentry     foo,.-foo
        addis 9,2,.LANCHOR0+2048@toc@ha
        addis 8,2,.LANCHOR0@toc@ha
        li 10,16
        lfd 10,.LANCHOR0+2048@toc@l(9)
        lfd 11,.LANCHOR0@toc@l(8)
        addis 9,2,.LANCHOR0+3072@toc@ha
        addis 8,2,.LANCHOR0+1024@toc@ha
        lfd 12,.LANCHOR0+3072@toc@l(9)
        lfd 0,.LANCHOR0+1024@toc@l(8)
        addis 9,2,.LANCHOR0+131072@toc@ha
        addi 9,9,.LANCHOR0+131072@toc@l
        xxpermdi 12,10,12,0
        xxpermdi 0,11,0,0
        stxvd2x 12,9,10
        stxvd2x 0,0,9
        blr

both look comparatively ugly due to all the .LANCHOR address computations.  I'd have
expected a lea of &a[0][0] and then offsetted addressing relative to that.  At least
it would avoid a ton of relocations.  Looks like 131072 wouldn't fit in
the 16-bit offset field though (a signed 16-bit displacement only reaches 32767).
Anyway, off-topic.  Whether the xxpermdi makes it unprofitable to vectorize
I don't know.
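
Just to illustrate the offsetted-addressing idea at the source level, a sketch
only, reusing the assumed declarations from the sketch above (the name
foo_offsetted is hypothetical, and whether the backend would actually emit a
single base plus D-form offsets for it is another matter):

void
foo_offsetted (void)
{
  double *base = &a[0][0];   /* one address computation instead of one per access */
  b[0] = base[0];            /* +0 bytes    */
  b[1] = base[128];          /* +1024 bytes */
  b[2] = base[256];          /* +2048 bytes */
  b[3] = base[384];          /* +3072 bytes */
  /* b itself sits 131072 bytes past the anchor, beyond the 32767-byte reach
     of a signed 16-bit displacement, so its address would still need a
     separate computation.  */
}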
