Duncan Booth <duncan.bo...@invalid.invalid> writes:

> Mark Dickinson <dicki...@gmail.com> wrote:
[snip]
>>     while n:
>>         count += 1
>>         n &= n-1
>>     return count
>>
>>     is_even = count_set_bits(the_int) % 2 == 0
>>
>> ...but anyone submitting this as a homework
>> solution had better be prepared to explain why
>> it works.
>>
>
> I remember a programming exercise when I was an undergraduate and
> anyone who *didn't* use that trick got marked down for writing
> inefficient code.
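For reference, here is the snipped fragment fleshed out into a complete
function; the header and the `count = 0' initialisation are my
reconstruction, not part of the original quote, though the trick itself
is as quoted:

    def count_set_bits(n):
        # Kernighan's trick: n & (n - 1) clears the lowest set bit of
        # n, so the loop body runs once per set bit (the Hamming
        # weight), not once per bit position.
        count = 0
        while n:
            count += 1
            n &= n - 1
        return count

    the_int = 0b10110100                        # example input
    is_even = count_set_bits(the_int) % 2 == 0  # True: popcount is 4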
Curious.  I don't see why

    def parity(x):
        # First find l, a power of two with x < 2**(2*l), i.e., x fits
        # in 2*l bits; maintain b == 2**(2*l) along the way.
        b = 2
        l = 1
        while True:
            b <<= l
            if x < b:
                break
            l <<= 1
        # Repeatedly XOR the top half of x into the bottom half,
        # halving the width each time; this preserves the parity.
        while l:
            b >>= l            # b == 2**l
            x ^= x >> l        # fold top half onto bottom half
            x &= b - 1         # keep only the low l bits
            l >>= 1
        return x & 1

is any less efficient.  Indeed, it seems more so to me.  The number of
top-level loop iterations is bounded by the logarithm of the total
number of bits in the input rather than by the Hamming weight.

In terms of single-precision operations (if we're dealing with bignums)
the analysis is more complicated; but the number of single-precision
operations in each iteration of my loops is a linear function of l
(assuming that the comparison is done sensibly), and l increases and
decreases geometrically, so I have performance which is O(log x).
Assuming no special Hamming-weight distribution on the input, the
`efficient' version takes O(log^2 x) single-precision operations.

Given an /a-priori/ upper bound on the length of the input, e.g., it
fits in a machine word, the above technique is still probably faster.
That is, assuming arithmetic and bitwise operations on integers take
constant time, my version runs in O(log log x) time whereas the
`efficient' version takes O(log x) time.

(My function returns the complement of the value requested.  Fixing it
is obviously trivial.)
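A quick sanity check (my own harness, not part of the thread) that the
folding function really computes the Hamming weight modulo 2, together
with the trivial fix for the complement:

    # Compare against a straightforward popcount-mod-2 over all small
    # inputs; parity(x) should equal the Hamming weight of x mod 2.
    for x in range(1 << 12):
        assert parity(x) == bin(x).count('1') % 2

    # The trivial fix: the original request was for is_even, which is
    # the complement of what parity() returns.
    is_even = parity(0b10110100) == 0   # True: four set bits

-- 
[mdw]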