https://gcc.gnu.org/bugzilla/show_bug.cgi?id=90081

            Bug ID: 90081
           Summary: stdint constant macros evaluating to wrong type
           Product: gcc
           Version: 8.3.0
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: c
          Assignee: unassigned at gcc dot gnu.org
          Reporter: bafap5 at yahoo dot com
  Target Milestone: ---

stdint.h defines macros for expressing integer constant expressions in such a
way that they evaluate to given types. In the stdint.h spec:

"The macro INTN_C( value) shall expand to an integer constant expression
corresponding to the type int_least N _t. The macro UINTN_C( value) shall
expand to an integer constant expression corresponding to the type uint_least N
_t."

However, within the current version of stdint.h, I find the following:

  /* Signed.  */
  # define INT8_C(c)    c
  # define INT16_C(c)   c
  # define INT32_C(c)   c
  # if __WORDSIZE == 64
  #  define INT64_C(c)  c ## L
  # else
  #  define INT64_C(c)  c ## LL
  # endif

  /* Unsigned.  */
  # define UINT8_C(c)   c
  # define UINT16_C(c)  c
  # define UINT32_C(c)  c ## U
  # if __WORDSIZE == 64
  #  define UINT64_C(c) c ## UL
  # else
  #  define UINT64_C(c) c ## ULL
  # endif

Many of these macros don't transform the input at all, which leads to
erroneous results at compile time. This was first brought to my attention in a
situation similar to the following:

  int32_t x = -5;
  if (x < INT32_C(0xFFFFFFFF))

Upon compiling with -Wall -Wextra, the following warning is generated:

  warning: comparison of integer expressions of different signedness:
  ‘int32_t’ {aka ‘int’} and ‘unsigned int’ [-Wsign-compare]
       if (x < INT32_C(0xFFFFFFFF))
             ^
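
For completeness, a self-contained reproducer (my reconstruction of the
situation, built with gcc -Wall -Wextra):

  #include <stdint.h>

  int main(void)
  {
      int32_t x = -5;
      /* INT32_C(0xFFFFFFFF) expands to the bare literal 0xFFFFFFFF,
         which has type unsigned int, hence the -Wsign-compare warning.  */
      if (x < INT32_C(0xFFFFFFFF))
          return 1;
      return 0;
  }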

In this way, stdint.h violates the spec, since INT32_C is supposed to yield an
explicitly signed expression. I was able to work around the issue by using a
cast (see the sketch below), but the macro is really what I'd rather be using.
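
Concretely, the workaround looks something like this (a sketch, not the exact
code from my project):

  int32_t x = -5;
  /* The explicit cast forces a signed comparison; with gcc the
     conversion of 0xFFFFFFFF to int32_t yields -1, so this is true.  */
  if (x < (int32_t) 0xFFFFFFFF)
      /* ... */;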

Inspection of the actual macro definitions reveals the potential for further
errors, such as the following:

  int x = (uint8_t) -5; /* Correct, gives 251 */
  int y = UINT8_C(-5);  /* Incorrect, gives -5 */
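
The difference is easy to observe with a small test program (my own example):

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      int x = (uint8_t) -5; /* conversion to uint8_t wraps to 251 */
      int y = UINT8_C(-5);  /* the macro just pastes the token: -5 */
      printf("%d %d\n", x, y); /* prints: 251 -5 */
      return 0;
  }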

The suggested resolution is to adjust the macros to always cast to the
appropriate "_leastN_t" types as the spec requires. Even in cases where the
default type of the expression would be large enough for the given value (for
instance, any int8_t value already fits in a plain int), the spec nonetheless
requires the "_leastN_t" type, which becomes meaningful in contexts where the
exact type of the expression matters.

I don't know exactly how gcc currently decides which type to use for a given
integer literal, so I don't want to post a suggestion that could potentially
cause problems. But as far as I can tell, putting explicit casts in all of the
macros should fix the problem.
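
For what it's worth, the shape of the fix I have in mind is simply the
following (untested; the 64-bit variants would keep their existing L/LL
suffixes with a cast added on top):

  /* Signed.  */
  # define INT8_C(c)    ((int_least8_t) (c))
  # define INT16_C(c)   ((int_least16_t) (c))
  # define INT32_C(c)   ((int_least32_t) (c))

  /* Unsigned.  */
  # define UINT8_C(c)   ((uint_least8_t) (c))
  # define UINT16_C(c)  ((uint_least16_t) (c))
  # define UINT32_C(c)  ((uint_least32_t) (c))

One caveat I'm aware of: the standard also requires these macros to be usable
in #if preprocessing directives, where casts are not allowed, so a complete
fix may need to be more subtle than this.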
