Hi!

My patch introducing the use of native_encode_initializer in fold_ctor_reference apparently broke gnulib/m4 on powerpc64.  There, gnulib uses a const union of two doubles and the corresponding IBM double-double long double, which is actually the largest normalizable long double value (1 ulp higher than __LDBL_MAX__).  The reason our __LDBL_MAX__ is smaller is that we internally treat the double-double type as having 106-bit precision, while it actually has variable precision from 53 bits up to roughly 2000 bits; the 0x1.fffffffffffff7ffffffffffffc000p+1023L value gnulib uses needs 107-bit precision, and therefore GCC's __LDBL_MAX__ is 0x1.fffffffffffff7ffffffffffff8000p+1023L.  Before my changes we weren't able to fold_ctor_reference it and it worked fine at runtime; with the change we are able to fold it, but because the value is larger than anything we can handle internally, we treat it weirdly.

A similar problem would arise if somebody creates this way a valid value that needs much more than 106-bit precision, e.g. 1.0 + 1.0e-768.  And I think a similar problem could happen e.g. on i?86/x86_64 with long double; that format also has some weird values, e.g. the unnormals, pseudo-infinities and various other magic values.
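For reference, here is a minimal standalone sketch of what gnulib effectively does (not part of the patch; the union and field names are just illustrative), assuming the IBM extended (double-double) format, i.e. __LDBL_MANT_DIG__ == 106:

/* Illustrative only: build the gnulib value hi = DBL_MAX,
   lo = DBL_MAX * 2^-54 by punning a pair of doubles to long double.
   Assumes the IBM double-double long double format.  */
#include <float.h>
#include <stdio.h>

union dd_u
{
  struct { double hi; double lo; } dd;
  long double ld;
};

int
main (void)
{
  union dd_u u
    = { { DBL_MAX, DBL_MAX / 134217728.0 / 134217728.0 } };
  /* hi + lo needs 107 bits of precision, one more than GCC's internal
     106-bit model of the format, so at runtime this should print 1
     (the value compares greater than LDBL_MAX).  */
  printf ("%d\n", u.ld > LDBL_MAX);
  return 0;
}

This is essentially what the testcase below does as well.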
This patch, for floating point types (including vector and complex types with such elements), tries to encode the returned value again and punts if it has a different memory representation from the original.  Note, this is only done in the path where native_encode_initializer was used, in order not to affect e.g. just reading an unpunned long double value; in that case the value should be compiler-generated and thus properly representable.  It will also punt if e.g. the padding bits are initialized to non-zero values.

Or should I do this in native_interpret_real instead, so that we punt even on say a VIEW_CONVERT_EXPR from an integral value containing such weird bits?  And, do we want to do it for all floating point constants, or just COMPOSITE_MODE_P (element_mode (type)) ones (i.e. only for double double)?

Bootstrapped/regtested on {x86_64,i686,powerpc64{,le}}-linux.

2020-08-12  Jakub Jelinek  <ja...@redhat.com>

	PR target/95450
	* gimple-fold.c (fold_ctor_reference): When interpreting bytes
	from native_encode_initializer into a floating point type, verify
	if it will be encoded back into the same memory representation
	and punt otherwise.

--- gcc/gimple-fold.c.jj	2020-08-04 11:31:26.580268603 +0200
+++ gcc/gimple-fold.c	2020-08-11 19:00:59.147564022 +0200
@@ -7090,7 +7090,19 @@ fold_ctor_reference (tree type, tree cto
       int len = native_encode_initializer (ctor, buf, size / BITS_PER_UNIT,
                                            offset / BITS_PER_UNIT);
       if (len > 0)
-        return native_interpret_expr (type, buf, len);
+        {
+          ret = native_interpret_expr (type, buf, len);
+          if (ret && FLOAT_TYPE_P (type))
+            {
+              /* For floating point values, punt if this folding
+                 doesn't preserve bit representation (canonicalizes some
+                 bits e.g. in NaN, etc.), see PR95450.  */
+              unsigned char ver[MAX_BITSIZE_MODE_ANY_MODE / BITS_PER_UNIT];
+              if (native_encode_initializer (ret, ver, len, 0) != len
+                  || memcmp (buf, ver, len) != 0)
+                ret = NULL_TREE;
+            }
+        }
     }

   return ret;
--- gcc/testsuite/gcc.target/powerpc/pr95450.c.jj	2020-08-11 19:21:35.654633211 +0200
+++ gcc/testsuite/gcc.target/powerpc/pr95450.c	2020-08-11 19:23:24.176147695 +0200
@@ -0,0 +1,29 @@
+/* PR target/95450 */
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-optimized" } */
+/* { dg-final { scan-tree-dump-not "return \[0-9.e+]\+;" "optimized" } } */
+
+/* Verify this is not optimized for double double into return floating_point_constant,
+   as while that constant is the maximum normalized floating point value, it needs
+   107 bit precision, which is more than GCC supports for this format.  */
+
+#if __LDBL_MANT_DIG__ == 106
+union U
+{
+  struct { double hi; double lo; } dd;
+  long double ld;
+};
+
+const union U g = { { __DBL_MAX__, __DBL_MAX__ / (double)134217728UL / (double)134217728UL } };
+#else
+struct S
+{
+  long double ld;
+} g;
+#endif
+
+long double
+foo (void)
+{
+  return g.ld;
+}

	Jakub