https://gcc.gnu.org/bugzilla/show_bug.cgi?id=119114
--- Comment #21 from Richard Sandiford <rsandifo at gcc dot gnu.org> ---

Perhaps I'm missing the point, but I don't think we should look at 1 vs -1 for <signed-boolean:1>. <signed-boolean:1> has only a single bit. That bit is interpreted as a sign bit for extension purposes, but that only matters when an extension actually occurs. BI can be used to store both signed and unsigned booleans, just like QI can be used to store both signed and unsigned bytes.

Going from https://godbolt.org/z/rer3cWWsz, the code seems to be:

  <signed-boolean:1> _163;
  vector(8) <signed-boolean:1> _164;

  _133 = .VEC_EXTRACT (mask__89.14_104, 0);
  _29 = MEM[(short int *)&t + 20B];
  _30 = _29 & D__lsm0.57_116;
  _31 = _30 != 0;
  _122 = (<signed-boolean:1>) _31;
  _163 = _122 ^ _133;
  _164 = {_163, _163, _163, _163, _163, _163, _163, _163};

So we have an ^ on two single-bit values, which from the previous comments are both set. The single-bit result (_163) is then zero, and that zero is used to build a mask of <signed-boolean:1>s (_164). That mask should also be zero. Is that not happening? What value does _164 actually end up being?

In other words, if the XOR is happening in GPRs, it doesn't matter whether the register holds 1 or -1 (or 3) for a true boolean. The upper bits are don't-care, just like for an 8-bit value stored in a 64-bit register (ignoring PROMOTE_MODE for now). 1 ^ -1 gives -2, which correctly has a clear low bit. But it sounds like more than the low bit of the XOR is being used somehow.
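
For what it's worth, here is a minimal C sketch (not part of the testcase, just an illustration) of the GPR point above: when single-bit booleans are held in wider registers, only the low bit is significant, so it doesn't matter whether "true" happens to be represented as 1, -1, or anything else with the low bit set.

  /* Illustration only: only the low bit of a single-bit boolean held in a
     GPR is significant; the upper bits are don't-care.  */
  #include <stdio.h>

  int main (void)
  {
    long true_a = 1;   /* "true" stored as 1 */
    long true_b = -1;  /* "true" stored as -1 (all bits set) */
    long true_c = 3;   /* "true" with garbage in the upper bits */

    /* 1 ^ -1 == -2: the low bit is clear, so the boolean result is false.  */
    printf ("%ld\n", (true_a ^ true_b) & 1);  /* prints 0 */
    printf ("%ld\n", (true_a ^ true_c) & 1);  /* prints 0 */
    printf ("%ld\n", (true_a ^ 0L) & 1);      /* prints 1 */

    return 0;
  }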