https://gcc.gnu.org/bugzilla/show_bug.cgi?id=102622
--- Comment #20 from CVS Commits <cvs-commit at gcc dot gnu.org> ---
The trunk branch has been updated by Andrew Pinski <pins...@gcc.gnu.org>:

https://gcc.gnu.org/g:882d806c1a8f9d2d2ade1133de88d63e5d4fe40c

commit r12-4276-g882d806c1a8f9d2d2ade1133de88d63e5d4fe40c
Author: Andrew Pinski <apin...@marvell.com>
Date:   Sun Oct 10 01:28:59 2021 +0000

    tree-optimization: [PR102622]: wrong code due to signed one-bit integer and "a?-1:0"

    So it turns out this is somewhat of a latent bug, but not entirely latent.
    In GCC 9 and 10, phi-opt would transform a?-1:0 (even for a signed 1-bit
    integer) to -(type)a, but the type is a one-bit integer, which means the
    negation is undefined. GCC 11 fixed the problem by checking for the
    a?pow2cst:0 transformation before the a?-1:0 transformation.

    When I added the transformations to match.pd, I swapped the order without
    paying attention, and I did not think anything of it because no testcase
    was failing due to this.
    Anyway, this fixes the problem on trunk by swapping the order in match.pd
    and adding a comment explaining why the order is this way.

    I will try to come up with a patch for the GCC 9 and 10 series later on
    which fixes the problem there too.

    Note I did not include the original testcase, which requires the
    vectorizer and AVX-512F, as I could not figure out the right dg options
    to restrict it to AVX-512F; but I did come up with a testcase which shows
    the problem, and moreover shows the problem with the 9/10 series as
    mentioned.

    OK? Bootstrapped and tested on x86_64-linux-gnu.

            PR tree-optimization/102622

    gcc/ChangeLog:

            * match.pd: Swap the order of the a?pow2cst:0 and a?-1:0
            transformations.  Swap the order of the a?0:pow2cst and a?0:-1
            transformations.

    gcc/testsuite/ChangeLog:

            * gcc.c-torture/execute/bitfld-10.c: New test.
