What should integer_onep mean if we have a signed 1-bit bitfield in which the bit is set? Seen as a 1-bit value it's "obviously" 1, but seen as a value extended to infinite precision it's -1.
Current mainline returns false while wide-int returns true.

This came up in gcc.c-torture/execute/930718-1.c, compiled at -O2.
Before phiopt we have:

    <unnamed-signed:1> foo$f2;
    ...
    _8 = _7 & 3;
    if (_8 != 0)
      goto <bb 4>;
    else
      goto <bb 3>;

    <bb 3>:

    <bb 4>:
    # foo$f2_10 = PHI <0(2), -1(3)>
    foo.f2 = foo$f2_10;

On mainline that becomes:

    _8 = _7 & 3;
    _3 = _8 == 0;
    _2 = (<unnamed-signed:1>) _3;
    _13 = -_2;
    ...
    foo.f2 = _13;

while on wide-int we avoid the redundant negation:

    _3 = _8 == 0;
    _2 = (<unnamed-signed:1>) _3;
    ...
    foo.f2 = _2;

On some targets this difference persists until final and we get much
better code on wide-int.

So in this case the wide-int behaviour looks better, but I wanted to
check whether there were likely to be downsides.

FWIW, the phiopt code is:

      /* The PHI arguments have the constants 0 and 1, or 0 and -1, then
         convert it to the conditional.  */
      if ((integer_zerop (arg0) && integer_onep (arg1))
          || (integer_zerop (arg1) && integer_onep (arg0)))
        neg = false;
      else if ((integer_zerop (arg0) && integer_all_onesp (arg1))
               || (integer_zerop (arg1) && integer_all_onesp (arg0)))
        neg = true;
      else
        return false;

Thanks,
Richard