Paul Eggert wrote:
> The basic idea seems fine, but isn't that off by a factor of 2? It defines
>   size_t_bits_minus_2 = sizeof (size_t) * CHAR_BIT - 2
> and then defines SIZE_MAX to (((1U << $size_t_bits_minus_2) - 1) * 2 + 1).
> Unless I'm missing something, on a 32-bit host, that will set SIZE_MAX
> to 2147483647 instead of the correct value.
Oops, you're right, of course. The "minus 2" is for a signed type; the formula for an unsigned type is "minus 1". Thanks, I'm correcting it.

Bruno