interpret_integer has:

  integer = cpp_interpret_integer (parse_in, token, flags);
  integer = cpp_num_sign_extend (integer, options->precision);
  if (integer.overflow)
    *overflow = OT_OVERFLOW;
where options->precision is the precision of (u)intmax_t.  Looking at the
implementation of cpp_num_sign_extend, it seems it would sign-extend
(u)intmax_t-sized !integer.unsignedp literals that have their top bit set.
Smaller literals would stay zero-extended.

Is that extension needed though?  The rest of the function passes "integer"
to narrowest_unsigned_type and narrowest_signed_type, both of which do
unsigned comparisons between "integer" and various TYPE_MAX_VALUEs.

It looks at face value like sign-extending here would make the result
depend on the host.  E.g. if uintmax_t occupies 2 HWIs with no excess bits,
the extension would be a no-op and the result would still be
<= TYPE_MAX_VALUE (uintmax_type_node).  But if uintmax_t occupies only one
HWI, the sign-extended integer would be greater than
TYPE_MAX_VALUE (uintmax_type_node).

Looking at cpp_interpret_integer, I can't see off-hand how we would end up
with a !integer.unsignedp literal that is still "negative" according to
options->precision.

Tested on powerpc64-linux-gnu and x86_64-linux-gnu.  OK to install?
Or, if the code is still needed, is there a testcase we could add?

Thanks,
Richard


gcc/c-family/
	* c-lex.c (interpret_integer): Remove call to cpp_num_sign_extend.

Index: gcc/c-family/c-lex.c
===================================================================
--- gcc/c-family/c-lex.c	2013-10-27 08:37:55.569236132 +0000
+++ gcc/c-family/c-lex.c	2013-10-27 11:03:57.721834320 +0000
@@ -595,12 +595,10 @@ interpret_integer (const cpp_token *toke
   tree value, type;
   enum integer_type_kind itk;
   cpp_num integer;
-  cpp_options *options = cpp_get_options (parse_in);
 
   *overflow = OT_NONE;
   integer = cpp_interpret_integer (parse_in, token, flags);
-  integer = cpp_num_sign_extend (integer, options->precision);
   if (integer.overflow)
     *overflow = OT_OVERFLOW;