Ross Ridge wrote:
Actually, that's a different issue than catching 100% of overflows,
which apparently Ada doesn't require.
Well, basically Ada does require catching all overflows. The only exception
is the optimization issue we already discussed, where it would be acceptable
not to catch the overflow. However, if we write:
X := Y + Z;
then Ada demands we catch overflow with 100% reliability, so it really
depends on whether we can accurately characterize the cases in which
-ftrapv "fails".
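To make the required behavior concrete, here is a minimal Ada sketch (the
procedure name and the mention of GNAT's -gnato switch are my own choices for
illustration, not from this thread) in which the addition must raise
Constraint_Error rather than silently wrap, assuming overflow checks are
enabled:

with Ada.Text_IO; use Ada.Text_IO;

procedure Overflow_Demo is
   X : Integer;
   Y : Integer := Integer'Last;
   Z : Integer := 1;
begin
   X := Y + Z;   -- overflows Integer, must raise Constraint_Error
   Put_Line (Integer'Image (X));
exception
   when Constraint_Error =>
      Put_Line ("overflow caught");
end Overflow_Demo;

Built with overflow checking on (e.g. GNAT's -gnato), this must print
"overflow caught" on every run; that is the 100% reliability being asked of
-ftrapv above.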
Note that for static expressions:
X : Integer := 9817236498761928736498712334 * 123412334 /
918273649876123938474698172364;
The Ada standard requires the computation to be done in infinite precision,
so it would be wrong to trap on intermediate overflow; the Ada front end
takes care of all that.
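A hedged illustration of that point (the constant names here are mine):
named-number arithmetic is done in universal, i.e. unbounded, precision, so
intermediate values may be far out of range as long as the value finally
assigned fits the target type:

Huge  : constant := 2**200 * 3;      -- far beyond any machine integer
Small : constant := Huge / 2**200;   -- folds to 3 in infinite precision
X     : Integer  := Small;           -- legal: the final value is in range

No run-time check is involved here at all; the front end folds the whole
expression at compile time.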
Robert is correct that if it is sufficiently more efficient than the Ada
approach, it can be made the default, so that range checking would be on by
default in Ada, though not in a 100% reliable fashion.
On the issue of performance, out of curiosity I tried playing around with the
IA-32 INTO instruction. I noticed two things: first, the instruction isn't
supported in 64-bit mode; second, on the Linux system I was using it
generated a SIGSEGV signal that was indistinguishable from any other SIGSEGV.
If Ada needs to be able to catch and distinguish overflow exceptions, this
and possible other cases of missing operating system support might make
processor-specific overflow support detrimental.
Usually there are ways of telling what is going on at a sufficiently low
level, but in any case code using the conditional jump instructions (jo/jno)
is hugely better than what we do now (and it is often faster to use jo
than into).
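For comparison, a hedged sketch of the kind of explicit software test that
checked addition expands to today (the exact expansion the front end uses may
differ from this); an add followed by a single jo replaces both the
comparisons and the conditional branches:

--  Pre-test whether Y + Z would overflow, then do the addition.
if (Z > 0 and then Y > Integer'Last - Z)
  or else (Z < 0 and then Y < Integer'First - Z)
then
   raise Constraint_Error;
end if;
X := Y + Z;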