https://gcc.gnu.org/bugzilla/show_bug.cgi?id=92693

--- Comment #3 from Matthijs Kooijman <matthijs at stdin dot nl> ---
> I don't see why you should expect that, there's nothing in the standards 
> suggesting it should be the case.

This is true; the current behaviour is standards-compliant AFAICS. However, I
expect it because it would be consistent and would follow the principle of
least surprise (at least for the use case I suggested).

> Changing it would be an ABI change, so seems like a bad idea.

Good point.

I did a bit more searching and found this Linux kernel patch. The commit
message suggests that it might at some point have been consistent:

https://patchwork.kernel.org/patch/2845139/

I assume that "bare metal GCC" would refer to the __xxx_TYPE__ macros, or at
least whatever you get when you include <stdint.h>.
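
For what it's worth, a quick way to see what a given target actually does is to
compare uintptr_t against the fixed-width types directly. This is just an
illustrative sketch I'd use for checking, nothing more:

  #include <cstdint>
  #include <cstdio>
  #include <type_traits>

  int main() {
      // Report which standard type uintptr_t actually is on this target;
      // the answer differs between targets/ABIs (e.g. unsigned int vs.
      // unsigned long), which is what causes the surprise below.
      std::printf("uintptr_t == uint32_t:      %d\n",
                  (int)std::is_same<std::uintptr_t, std::uint32_t>::value);
      std::printf("uintptr_t == uint64_t:      %d\n",
                  (int)std::is_same<std::uintptr_t, std::uint64_t>::value);
      std::printf("uintptr_t == unsigned int:  %d\n",
                  (int)std::is_same<std::uintptr_t, unsigned int>::value);
      std::printf("uintptr_t == unsigned long: %d\n",
                  (int)std::is_same<std::uintptr_t, unsigned long>::value);
      return 0;
  }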

> N.B. you get exactly the same overload failure if you call func(1u). The 
> problem is your overload set, not the definition of uintptr_t.

Fair point, though I think it is hard to define a proper overload set here. In
my case, I'm defining functions to print various sizes of integers. Because the
body of each function needs to know how big the type is, I'm using the
uintXX_t types to define them. I could of course define overloads for
(unsigned) char, short, int, long and long long, but then I can't make any
assumptions about the exact size of each (I could use sizeof and write a
generic implementation, but I wanted to keep things simple and use a separate
implementation for each size).
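
To make that concrete, here is a minimal sketch of the kind of overload set I
mean (the function names are made up for illustration). On a target where
uintptr_t is unsigned int but uint32_t is unsigned long (arm-none-eabi, for
example), the call below fails to resolve:

  #include <cinttypes>
  #include <cstdint>
  #include <cstdio>

  // One overload per exact size, so each body can assume the width.
  void print_num(std::uint8_t v)  { std::printf("%" PRIu8  "\n", v); }
  void print_num(std::uint16_t v) { std::printf("%" PRIu16 "\n", v); }
  void print_num(std::uint32_t v) { std::printf("%" PRIu32 "\n", v); }
  void print_num(std::uint64_t v) { std::printf("%" PRIu64 "\n", v); }

  void dump_pointer(const void *p) {
      std::uintptr_t addr = reinterpret_cast<std::uintptr_t>(p);
      // When uintptr_t is not one of the uintNN_t types, none of the
      // overloads is an exact match and all four integral conversions
      // rank equally, so this call is ambiguous. print_num(1u) fails in
      // exactly the same way on such a target.
      print_num(addr);
  }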

I guess this might boil down to C/C++ being annoying when it comes to integer
types, and not something GCC can really fix (though it *would* have been more
convenient if this had been consistent from the start).

Feel free to close if that seems appropriate.
