On Fri, Mar 22, 2019 at 10:27:38AM +0100, Allan Sandfeld Jensen wrote:
> But getting back to the question, will GCC carry such information further,
> and thus break code that otherwise behaves correctly on all known
> architectures, just because the C standard hasn't decided on one of two
> possible results?

Of course it will, as will any other optimizing compiler.

An optimizing compiler optimizes on the assumption that undefined behavior
does not happen.  This is not done with the intent to punish those who write
bad code, but with the intent to generate better code for valid code.

For example, since the standard says that signed integer overflow is
undefined behavior, not taking advantage of that means a significant
performance degradation in e.g. many loops with signed integer induction
variables or signed integer computations in them.  You can compare the
performance of normal code against code built with -fwrapv.  In that case
we provide a switch that makes it well-defined behavior, at the expense of
slower code.  For out-of-bounds shifts there is no option like
-fout-of-bound-shift={zero,masked,undefined}; it isn't worth it.

And no, out-of-bounds shifts don't have just two possible results even in
hardware: as I said, sometimes the shift count is masked with a mask
different from the bitmask of the type, and at other times the architecture
has multiple different shift instructions, some with one behavior and
others with another.

        Jakub
