https://gcc.gnu.org/bugzilla/show_bug.cgi?id=70731
--- Comment #4 from Josh Triplett <josh at joshtriplett dot org> ---
(In reply to Marc Glisse from comment #3)
> (In reply to Josh Triplett from comment #2)
> > That's a fair point. Perhaps it should go into a separate optimization
> > option, then, though it still seems in the spirit of -Ofast. (If
> > overflow is a concern, the application would hopefully be checking for
> > that separately; GCC also already has various optimizations that assume
> > overflow cannot occur.)
>
> The code might look like:
> return (double)lsym + lbits + dsym + (double)dbits;
> (we don't see the difference)
> that is, the application is very explicitly using double, precisely to
> avoid integer overflow issues.
>
> Assuming that no overflow happens in test2 is normal, it is guaranteed by
> the standard. Assuming that with test1 the integers are small enough that
> if we reorder things no overflow will happen... Well maybe someone else
> will be more optimistic than I am. Maybe if you could restrict it to the
> case where VRP information guarantees that there is no overflow? That
> might be rare enough that it won't matter to you though :-(

In cases where GCC can *guarantee* no overflow, it can optimize those by
default. However, I think it's worth having an optimization option that
allows GCC to do this even when not guaranteed.

Or, at a minimum, perhaps GCC could notice cases where such reordering
would reduce conversions, and have a warning option to flag them?