Paul Eggert wrote:

> > for '-fsanitize=signed-integer-overflow' there is no reason for an
> > ABI change. It's only the code inside functions which behaves differently.
>
> The main issue here is which of these options are intended to be used in
> production code. For those options there will be a lot of user pressure for
> backward-compatibility support, due to the dusty-binary problem. For
> debugging options there won't be. -fsanitize=undefined is intended to be
> for debugging
Indeed, this is a problem. It would not be good if the GCC people turned down
enhancement requests for ubsan with the argument that they are not binary
backward compatible.

> > So, '-fsanitize=signed-integer-overflow -fsanitize-undefined-trap-on-error'
> > and '-ftrapv' both work. The former generates better code, whereas the
> > latter has less surprising behaviour (an abort() is a better response
> > than an illegal instruction, IMO).
>
> Yes, we've discussed this before. I'd rather not call 'abort' here, since
> arithmetic overflow failures are in the same category as dividing by zero
> or (INT_MIN / -1), and 'abort' isn't called there either.

Sorry, I had not remembered this earlier discussion.

Did we discuss the signal with which the process should be terminated?
Division by zero and (INT_MIN / -1) raise a SIGFPE signal (at least on some
CPUs), and this signal is defined as "Erroneous arithmetic operation." [1]
Like you say, signed integer overflow should raise the same signal. Then,
SIGILL is just as wrong as SIGABRT.

Bruno

[1] https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/signal.h.html