On 12/12/2014 09:31 AM, Bastian Koppelmann wrote:
> +uint32_t helper_parity(target_ulong r1)
> +{
> +    uint32_t ret;
> +    uint32_t nOnes, i;
> +
> +    ret = 0;
> +    nOnes = 0;
> +    for (i = 0; i < 8; i++) {
> +        ret ^= (r1 & 1);
> +        r1 = r1 >> 1;
> +    }
> +    /* second byte */
> +    nOnes = 0;
> +    for (i = 0; i < 8; i++) {
> +        nOnes ^= (r1 & 1);
> +        r1 = r1 >> 1;
> +    }
> +    ret |= nOnes << 8;
> +    /* third byte */
> +    nOnes = 0;
> +    for (i = 0; i < 8; i++) {
> +        nOnes ^= (r1 & 1);
> +        r1 = r1 >> 1;
> +    }
> +    ret |= nOnes << 16;
> +    /* fourth byte */
> +    nOnes = 0;
> +    for (i = 0; i < 8; i++) {
> +        nOnes ^= (r1 & 1);
> +        r1 = r1 >> 1;
> +    }
> +    ret |= nOnes << 24;
> +
> +    return ret;
> +}
> +
Probably doesn't matter much, but

  ret = (ctpop8(r1) & 1)
      | ((ctpop8(r1 >> 8) & 1) << 8)
      | ((ctpop8(r1 >> 16) & 1) << 16)
      | ((ctpop8(r1 >> 24) & 1) << 24);

One could also make a case for adding new helpers that use __builtin_parity
rather than __builtin_popcount.

I usually like to look at things like this and see how the general
infrastructure can be improved...

Otherwise,

Reviewed-by: Richard Henderson <r...@twiddle.net>

r~
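
[Editor's sketch, for anyone comparing the two forms: a minimal standalone
program, not the QEMU helper itself.  The names popcount8, parity_loops and
parity_popcount are made up for illustration; __builtin_popcount stands in
for QEMU's ctpop8, and plain uint32_t stands in for target_ulong.]

#include <stdint.h>
#include <stdio.h>

/* Stand-in for QEMU's ctpop8(): population count of the low byte. */
static uint32_t popcount8(uint32_t x)
{
    return __builtin_popcount(x & 0xff);
}

/* Bit-by-bit form, equivalent to the loops in the patch above. */
static uint32_t parity_loops(uint32_t r1)
{
    uint32_t ret = 0;
    int byte, i;

    for (byte = 0; byte < 4; byte++) {
        uint32_t p = 0;
        for (i = 0; i < 8; i++) {
            p ^= r1 & 1;
            r1 >>= 1;
        }
        ret |= p << (byte * 8);
    }
    return ret;
}

/* Per-byte popcount form, as suggested in the review. */
static uint32_t parity_popcount(uint32_t r1)
{
    return (popcount8(r1) & 1)
         | ((popcount8(r1 >> 8) & 1) << 8)
         | ((popcount8(r1 >> 16) & 1) << 16)
         | ((popcount8(r1 >> 24) & 1) << 24);
}

int main(void)
{
    static const uint32_t tests[] = {
        0x00000000, 0xFFFFFFFF, 0x01FF7E80, 0x80000001, 0x12345678
    };
    unsigned i;

    /* Print both results side by side; the columns should match. */
    for (i = 0; i < sizeof(tests) / sizeof(tests[0]); i++) {
        printf("%08x -> %08x %08x\n", tests[i],
               parity_loops(tests[i]), parity_popcount(tests[i]));
    }
    return 0;
}

Both functions put the odd-parity bit of each source byte into bit 0 of the
corresponding result byte, which is what the patch's helper computes.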