When analyzing a bitwise AND with a constant as part of a bitwise OR, the bswap pass stores the constant in an int64_t variable without checking that it fits. As a result, we get an ICE when the constant is an __int128 value. This affects GCC trunk as well as GCC 4.9 and 4.8 (and possibly earlier versions too).
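For reference, a function shaped like the one below is enough to trigger the ICE once the bswap pass runs (e.g. at -O2). This is only a sketch, assuming a target that provides __int128; the function and type names are made up for illustration, and the attached testcase is the actual reproducer.

typedef unsigned long long u64;

/* The mask spans bits 56..71, so its value does not fit in the 64-bit
   variable the bswap pass uses to hold such constants.  */
u64
ior_with_wide_mask (unsigned __int128 in1, u64 in2)
{
  unsigned __int128 mask = (unsigned __int128) 0xffff << 56;
  return ((in1 & mask) >> 56) | in2;
}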
The ChangeLog entries are as follows:

*** gcc/ChangeLog ***

2014-06-05  Thomas Preud'homme  <thomas.preudho...@arm.com>

	PR tree-optimization/61375
	* tree-ssa-math-opts.c (init_symbolic_number): Cancel optimization
	if symbolic number cannot be represented in an unsigned
	HOST_WIDE_INT.
	(find_bswap_or_nop_1): Likewise.

*** gcc/testsuite/ChangeLog ***

2014-06-05  Thomas Preud'homme  <thomas.preudho...@arm.com>

	PR tree-optimization/61375
	* gcc.c-torture/execute/pr61375.c: New test.

diff --git a/gcc/testsuite/gcc.c-torture/execute/pr61375.c b/gcc/testsuite/gcc.c-torture/execute/pr61375.c
new file mode 100644
index 0000000..58df57a
--- /dev/null
+++ b/gcc/testsuite/gcc.c-torture/execute/pr61375.c
@@ -0,0 +1,34 @@
+#ifdef __UINT64_TYPE__
+typedef __UINT64_TYPE__ uint64_t;
+#else
+typedef unsigned long long uint64_t;
+#endif
+
+#ifndef __SIZEOF_INT128__
+#define __int128 long long
+#endif
+
+/* Some versions of the bswap optimization would ICE when analyzing a mask
+   constant too big for an HOST_WIDE_INT (PR61375).  */
+
+__attribute__ ((noinline, noclone)) uint64_t
+uint128_central_bitsi_ior (unsigned __int128 in1, uint64_t in2)
+{
+  __int128 mask = (__int128)0xffff << 56;
+  return ((in1 & mask) >> 56) | in2;
+}
+
+int main (int argc)
+{
+  __int128 in = 1;
+#ifdef __SIZEOF_INT128__
+  in <<= 64;
+#endif
+  if (sizeof (uint64_t) * __CHAR_BIT__ != 64)
+    return 0;
+  if (sizeof (unsigned __int128) * __CHAR_BIT__ != 128)
+    return 0;
+  if (uint128_central_bitsi_ior (in, 2) != 0x102)
+    __builtin_abort ();
+  return 0;
+}
diff --git a/gcc/tree-ssa-math-opts.c b/gcc/tree-ssa-math-opts.c
index 658b341..95b3f25 100644
--- a/gcc/tree-ssa-math-opts.c
+++ b/gcc/tree-ssa-math-opts.c
@@ -1717,6 +1717,8 @@ init_symbolic_number (struct symbolic_number *n, tree src)
   if (n->size % BITS_PER_UNIT != 0)
     return false;
   n->size /= BITS_PER_UNIT;
+  if (n->size > (int)sizeof (unsigned HOST_WIDE_INT))
+    return false;
   n->range = n->size;
   n->n = CMPNOP;
 
@@ -1883,6 +1885,8 @@ find_bswap_or_nop_1 (gimple stmt, struct symbolic_number *n, int limit)
       type_size = TYPE_PRECISION (gimple_expr_type (stmt));
       if (type_size % BITS_PER_UNIT != 0)
 	return NULL_TREE;
+      if (type_size > (int)sizeof (unsigned HOST_WIDE_INT) * 8)
+	return NULL_TREE;
 
       if (type_size / BITS_PER_UNIT < (int)(sizeof (int64_t)))
 	{

Is this OK for trunk? What about backports to 4.8 and 4.9? Would a reworked patch for these versions be accepted? The change would be trivial: the code now in init_symbolic_number was simply moved there from elsewhere.

Best regards,

Thomas