The reproduction is straightforward. Just build any cross gcc with --with-newlib (e.g. the one in the Zephyr SDK) and compile the following (on any 32- or 64-bit two's-complement architecture) against newlib's headers.
    #include <stdint.h>

    extern void takes_fmt(const char *fmt, ...)
        __attribute__ ((format (printf, 1, 2)));

    void foo()
    {
        int32_t x = 42;
        takes_fmt("%d", x);
    }

The use of int32_t with the plain %d format specifier produces the "expects argument of type ‘int’, but argument 2 has type ‘long int’" warning, despite the fact that this platform (it's just an i586 compiler!) is a standard ILP32 architecture.

The reason is this bit in newlib's headers, where newlib trusts gcc if __INT32_TYPE__ is set:

https://sourceware.org/git/gitweb.cgi?p=newlib-cygwin.git;a=blob;f=newlib/libc/include/machine/_default_types.h;h=ffc646d9e3f5392a643f8b4ca203a3ad2a7627d8;hb=HEAD#l62

And gcc, as seen in this patch, sets it to a long because it thinks that's what newlib *wants*. But if you look at the preprocessor code immediately following, newlib doesn't want that at all: when __INT32_TYPE__ is not defined, it will (see line 70) clearly choose a signed int, not a long. (A paraphrase of that selection logic is included below.)

This just seems like some kind of historical mistake to me. GCC's newlib "support" causes newlib code to emit warnings on benign code that goes unwarned on, AFAIK, literally every other platform in existence.

(And I understand the point about the PRI macros in inttypes.h, which we're doing internally; see the example below. But that doesn't really address the bug in our SDK compiler.)

Andy
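
For reference, here is a condensed paraphrase of the type-selection logic in newlib's machine/_default_types.h that I'm talking about. This is a sketch, not the verbatim header: it only shows the signed __int32_t side, and the fallback limit checks are approximated with GCC's predefined __INT_MAX__/__LONG_MAX__ macros. The point is just the ordering of the checks.

    /* Condensed paraphrase of the __int32_t selection in newlib's
       machine/_default_types.h (signed side only, not verbatim).  */
    #ifdef __INT32_TYPE__
    /* If the compiler advertises a type, newlib trusts it verbatim.
       A newlib-targeted gcc reports "long int" here even on ILP32
       targets, which is where the warning comes from.  */
    typedef __INT32_TYPE__ __int32_t;
    #elif __INT_MAX__ == 0x7fffffff
    /* Otherwise the fallback prefers plain int when it is 32 bits...  */
    typedef signed int __int32_t;
    #elif __LONG_MAX__ == 0x7fffffff
    /* ...and only resorts to long when int is not 32 bits wide.  */
    typedef signed long __int32_t;
    #endif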
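
And this is the sort of PRI-macro usage I mean by "doing internally": the conversion string comes from inttypes.h, so it matches whatever type the toolchain picked for int32_t and compiles warning-free either way.

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t x = 42;
        /* PRId32 expands to whatever conversion matches the toolchain's
           int32_t ("ld" with this newlib setup, "d" elsewhere), so the
           -Wformat check is satisfied on every platform.  */
        printf("%" PRId32 "\n", x);
        return 0;
    }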