"Richard Guenther" <[EMAIL PROTECTED]> writes:
> On 1/4/07, Richard Sandiford <[EMAIL PROTECTED]> wrote:
>> Paul Eggert <[EMAIL PROTECTED]> writes:
>> > Mark Mitchell <[EMAIL PROTECTED]> writes:
>> >> it sounds like that would eliminate most of the problem.  Certainly,
>> >> making -INT_MIN evaluate to INT_MIN, when expressed like that, is an
>> >> easy thing to do; that's just a guarantee about constant folding.
>> >
>> > Well, no, just to clarify: the GCC code in question actually computed
>> > "- x", and relied on the fact that the result was INT_MIN if x (an
>> > unknown integer) happened to be INT_MIN.  Also, now that I'm thinking
>> > about it, some of the Unix v7 atoi() implementation relied on "x + 8"
>> > evaluating to INT_MIN when x happened to be (INT_MAX - 7).  These are
>> > the usual kind of assumptions in this area.
>>
>> I don't know if you're implicitly only looking for certain types of
>> signed overflow, or if this has been mentioned elsewhere (I admit I had
>> to skim-read some of the thread) but the assumption that signed overflow
>> is defined is _very_ pervasive in gcc at the rtl level.  The operand to
>> a CONST_INT is a signed HOST_WIDE_INT, and its accessor macro -- INTVAL
>> -- returns a value of that type.  Most arithmetic related to CONST_INTs
>> is therefore done on signed HOST_WIDE_INTs.  This means that many parts
>> of gcc would produce wrong code if signed arithmetic saturated, for
>> example.  (FWIW, this is why I suggested adding a UINTVAL, which Stuart
>> has since done -- thanks.  However, most of gcc still uses INTVAL.)
>
> I thought all ints are unsigned in the RTL world as there is I believe no
> way to express "signedness" of a mode.  This would have to change of
> course if we ever support non two's-complement arithmetic.
I'm not sure what you mean.  Yes, "all ints are unsigned in the RTL world" in the sense that we must use two's complement arithmetic for them -- we have no way of distinguishing what was originally signed from what was originally unsigned.  But my point was that gcc _stores_ the integers as _signed_ HOST_WIDE_INTs, and operates on them as such, even though these signed HOST_WIDE_INTs may actually represent unsigned integers.  Thus a lot of the arithmetic that gcc does at the rtl level would be wrong for certain inputs if the _compiler used to build gcc_ assumed that signed overflow didn't wrap.

In other words, it sounds like you took my message to mean that gcc's rtl code treated signed overflow _in the input files_ as undefined.  I didn't mean that.  I meant that gcc's own code relies on signed overflow being defined.  I think it was instances of the latter that Paul was trying to find.

Richard