https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94460

--- Comment #4 from CVS Commits <cvs-commit at gcc dot gnu.org> ---
The master branch has been updated by Jakub Jelinek <ja...@gcc.gnu.org>:

https://gcc.gnu.org/g:b8020a5aafd02af1ccf99372d3d8052c40b59725

commit r10-7541-gb8020a5aafd02af1ccf99372d3d8052c40b59725
Author: Jakub Jelinek <ja...@redhat.com>
Date:   Fri Apr 3 19:44:42 2020 +0200

    i386: Fix vph{add,subs?}[wd] 256-bit AVX2 RTL patterns [PR94460]

    The following testcase is miscompiled because the AVX2 patterns don't
    correctly describe what the insns do.  E.g. the vphaddd instruction with
    %ymm* operands (the second pattern), as per
    https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm256_hadd_epi32&expand=2941
    does { a0+a1, a2+a3, b0+b1, b2+b3, a4+a5, a6+a7, b4+b5, b6+b7 }
    but our RTL pattern did
         { a0+a1, a2+a3, a4+a5, a6+a7, b0+b1, b2+b3, b4+b5, b6+b7 }
    where the first and last 64 bits are the same but the two middle 64-bit
    chunks are swapped.
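    For illustration, here is a minimal C sketch (not the committed
    testcase; the concrete input values are just an example) exercising
    the semantics described above.  Built with -mavx2 on a fixed
    compiler, it should print the interleaved result from the
    Intrinsics Guide:

      #include <immintrin.h>
      #include <stdio.h>

      int
      main (void)
      {
        __m256i a = _mm256_setr_epi32 (0, 1, 2, 3, 4, 5, 6, 7);
        __m256i b = _mm256_setr_epi32 (10, 11, 12, 13, 14, 15, 16, 17);
        __m256i r = _mm256_hadd_epi32 (a, b);
        int out[8];
        _mm256_storeu_si256 ((__m256i *) out, r);
        /* Expected: { a0+a1, a2+a3, b0+b1, b2+b3, a4+a5, a6+a7, b4+b5, b6+b7 },
           i.e. 1 5 21 25 9 13 29 33.  */
        for (int i = 0; i < 8; i++)
          printf ("%d ", out[i]);
        printf ("\n");
        return 0;
      }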
   
    Similarly, per
    https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm256_hadd_epi16&expand=2939
    the insn does:
         { a0+a1, a2+a3, a4+a5, a6+a7, b0+b1, b2+b3, b4+b5, b6+b7,
           a8+a9, a10+a11, a12+a13, a14+a15, b8+b9, b10+b11, b12+b13, b14+b15 }
    but RTL pattern did
         { a0+a1, a2+a3, a4+a5, a6+a7, a8+a9, a10+a11, a12+a13, a14+a15,
           b0+b1, b2+b3, b4+b5, b6+b7, b8+b9, b10+b11, b12+b13, b14+b15 }
    again, the first and last 64 bits are the same and the two middle 64-bit
    chunks are swapped.
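    Equivalently, a scalar reference model (a sketch written for this
    explanation, not code from the patch; it models the non-saturating w
    variant) makes the per-128-bit-lane structure explicit: each lane
    independently packs the horizontal sums of the matching lane of a,
    followed by those of the matching lane of b:

      #include <stdint.h>

      /* Reference model for 256-bit vphaddw: each 128-bit lane (8
         elements) holds the horizontal sums of the corresponding lane
         of A, followed by those of the corresponding lane of B.  */
      static void
      hadd_epi16_ref (const int16_t a[16], const int16_t b[16], int16_t r[16])
      {
        for (int lane = 0; lane < 2; lane++)
          {
            int base = lane * 8;
            for (int i = 0; i < 4; i++)
              {
                r[base + i] = (int16_t) (a[base + 2 * i] + a[base + 2 * i + 1]);
                r[base + 4 + i] = (int16_t) (b[base + 2 * i] + b[base + 2 * i + 1]);
              }
          }
      }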

    2020-04-03  Jakub Jelinek  <ja...@redhat.com>

            PR target/94460
            * config/i386/sse.md (avx2_ph<plusminus_mnemonic>wv16hi3,
            avx2_ph<plusminus_mnemonic>dv8si3): Fix up RTL pattern to do
            second half of first lane from first lane of second operand and
            first half of second lane from second lane of first operand.

            * gcc.target/i386/avx2-pr94460.c: New test.
