https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109154
--- Comment #12 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
(In reply to Richard Biener from comment #11)
> _1 should be [-Inf, nextafter (0.0, -Inf)], not [-Inf, -0.0]

Well, that is a consequence of the decision to always flush denormals to
zero in frange::flush_denormals_to_zero, because some CPUs do that
unconditionally and others do it only when asked for (e.g. x86 if linked
with -ffast-math).  Unless we revert that decision and flush denormals to
zero only selectively (say on Alpha in non-IEEE mode (the default), or if
fast math is on (which exact suboption?), etc.), that is what the range
will look like.

(In reply to Aldy Hernandez from comment #10)
> BTW, I don't think it helps at all here, but casting from l_10 to a float,
> we know _1 can't be either -0.0 or +-INF or +-NAN.  We could add a range-op
> entry for NOP_EXPR / CONVERT_EXPR to expose this fact.  Well, at the very
> least that it can't be a NAN...in the current representation for frange's.

We definitely should add range-ops for conversions from integral to
floating point and from floating point to integral, and their reverses.
But until we support more than one subrange, if the integral value is
VARYING, then for a 32-bit signed int the converted range would be
[-0x1.p+31, 0x1.p+31], so nothing specific around zero.  With 3+ subranges
we could make it [-0x1.p+31, -1.][0., 0.][1., 0x1.p+31] if we think normal
values around zero are an important special case.  I am not sure how that
would help in this case, though.

The reduced testcase is invalid because it uses uninitialized l.
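As an aside, here is a minimal sketch (hypothetical, not part of the bug's
testcase) of the property discussed above: the result of converting a
32-bit signed int to float is either +0.0 or a normal value in roughly
[-0x1p+31, 0x1p+31], never a NaN, an infinity, a denormal or -0.0:

#include <limits.h>
#include <math.h>
#include <stdio.h>

int
main (void)
{
  int samples[] = { INT_MIN, -1, 0, 1, INT_MAX };
  for (unsigned i = 0; i < sizeof samples / sizeof samples[0]; i++)
    {
      float f = (float) samples[i];
      /* fpclassify can only return FP_ZERO or FP_NORMAL here; FP_NAN,
	 FP_INFINITE and FP_SUBNORMAL are impossible for int -> float,
	 and the only zero produced is +0.0 (signbit 0).  */
      printf ("%d -> %a %s signbit=%d\n", samples[i], f,
	      fpclassify (f) == FP_ZERO ? "zero"
	      : fpclassify (f) == FP_NORMAL ? "normal" : "other",
	      signbit (f) ? 1 : 0);
    }
  return 0;
}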