Michael Veksler wrote:

This is right to some extent, and I referred to it in my original
mail. I claim that it is easier to write code that checks these cases
after the overflow rather than before. I also claim that checking for
overflow the way the standard implies results in almost pure
unsigned arithmetic, so why do we have signed arithmetic
to begin with?
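To illustrate the point being argued (my sketch, not code from the original mail): the "check after" style is shorter but relies on wrap-around behavior that standard C does not guarantee for signed types, while the conforming "check before" style is noticeably more involved:

```c
#include <limits.h>
#include <stdbool.h>

/* "Check after": assumes the sum wraps on overflow. Signed overflow
 * is undefined behavior in standard C, so a compiler is free to
 * optimize this test away. */
bool add_check_after(int a, int b, int *sum)
{
    *sum = a + b;                       /* may overflow: undefined */
    return (b > 0) ? (*sum > a) : (*sum <= a);
}

/* "Check before": strictly conforming, but the test must be
 * reconstructed from INT_MAX/INT_MIN before the addition. */
bool add_check_before(int a, int b, int *sum)
{
    if ((b > 0 && a > INT_MAX - b) ||
        (b < 0 && a < INT_MIN - b))
        return false;                   /* would overflow */
    *sum = a + b;
    return true;
}
```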

er .. because we want 1 / (-1) to be -1 instead of 0?
      because we want -1 to be less than 1
      etc.

signed arithmetic is a bit different from unsigned arithmetic :-)

But this is not what C does. In C the assumption is that
"bugs like int overflow may be transformed into any other
possible bug". No exception need be raised.
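Concretely (my illustration, not from the thread): because signed overflow is undefined, the compiler may assume it never happens, so even a naive sanity test can vanish under optimization:

```c
/* A compiler is entitled to fold this function to "return 1",
 * since x + 1 can only fail to exceed x by overflowing, and
 * signed overflow is undefined behavior in C. Optimizing
 * compilers such as gcc at -O2 typically do exactly that. */
int positive_after_increment(int x)
{
    return x + 1 > x;
}
```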

Right, which is like Ada with checks suppressed, but in general
in critical applications exceptions are turned off anyway, which
leaves Ada in EXACTLY the same situation as C (overflow erroneous)

And as it is written in section 3
"... gives compiler writers substantial freedom to re-order expressions
..."
and then
"A more sound approach is to design a language so that these
ordering effects cannot occur".

This last quote can be implemented only by moving to modulo semantics.

Overflow and ordering effects are quite different. And if you want to
avoid undefined behavior for overflow, modulo semantics is only one of
many ways of doing so (none of which has been suggested or adopted by
the C standards committee). When you find yourself disagreeing with
the committee on a point like this, you really need to do some
studying to find out why before deciding they were wrong.
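For completeness (my sketch, not an endorsement from either poster): modulo semantics can already be obtained in C today by routing signed arithmetic through unsigned types, where wrap-around is defined. The conversion back to a signed type is implementation-defined rather than undefined, and on common two's-complement implementations it wraps as expected:

```c
#include <stdint.h>

/* Well-defined wrap-around addition: unsigned arithmetic wraps
 * modulo 2^32 by definition; the final conversion back to int32_t
 * is implementation-defined (wraps on two's-complement targets). */
int32_t add_wrap(int32_t a, int32_t b)
{
    return (int32_t)((uint32_t)a + (uint32_t)b);
}
```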
