I'm trying to compile a Windows program that uses DWORD_MAX, so I'm working
on adding missing definitions like that to MinGW's intsafe.h.  Microsoft's
intsafe.h explicitly defines its *MIN/*MAX macros so that each one has the
type it is named after (e.g. UINT8_MAX is a UINT8).

However, our stdint.h, like many other open-source implementations, defines
UINT8_MAX, INT8_MIN, INT8_MAX, UINT16_MAX, INT16_MIN, and INT16_MAX as bare
literals, which implicitly have type int.

Microsoft SDK intsafe.h:        #define UINT8_MAX       0xffui8
Microsoft SDK icu.h:            #   define UINT8_MAX       ((uint8_t)(255U))
Microsoft SDK icucommon.h:      #   define UINT8_MAX       ((uint8_t)(255U))

mingw-w64-headers/crt/stdint.h: #define UINT8_MAX 255
mingw-w64-tools/widl:           #define UINT8_MAX              (255U)
gcc-15.2.0/fixincludes:         #define UINT8_MAX       (255)
musl stdint.h:                  #define UINT8_MAX  (0xff)
glibc:                          # define UINT8_MAX            (255)

GCC's builtin __UINT8_MAX__ has type int (as evidenced by the error message
from compiling "char *x = __UINT8_MAX__;", which mentions int).

What type scheme should we use in our headers going forward?  Should each
macro's type match its name, or should the smaller-value macros simply be
plain int, as a bare literal makes them?

This choice doesn't matter in most code because small integer values tend
to get promoted to int in C/C++, but the type can show up in error messages
and can affect things like `auto`, `_Generic`, or `std::is_same<>`.

To me it makes more sense if UINT8_MAX is a UINT8, not an int.  But if we
want it to be an int, that's fine.  In that case, for consistency, I would
make BYTE_MAX, INT8_MIN, INT8_MAX, UINT16_MAX, USHORT_MAX, WORD_MAX,
INT16_MIN, INT16_MAX, SHORT_MIN, and SHORT_MAX all be int.

--David Grayson

_______________________________________________
Mingw-w64-public mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/mingw-w64-public
