Paul Eggert wrote:
> > While I understand the purpose of namespace cleanliness (so that the
> > compiler can tell the programmer that a '#include <stdint.h>', in this
> > case, is missing), I think of it as a secondary goal that we can rarely
> > achieve, and the introduced complexity is starting to outweigh the benefit.
>
> Yes, it's a tradeoff, and opinions can differ about tradeoffs.
>
> My opinion is partly colored by glibc, which must be namespace clean.
> It's not simply the desire to make code shareable (unlikely here); it's
> a matter of priorities and style. Namespace cleanliness is helpful when
> developing on GNU (to help check that the code is portable); it's less
> important when building on non-GNU systems. Hence when feasible Gnulib
> should be as namespace-clean as glibc when Gnulib interposes its own .h
> file in front of glibc's, as it does here.
Yes, I agree with all of this, except that instead of "when feasible" I
would say "when feasible without too much complexity".
> >> +# define _GL_STDBIT_UINT_FAST64 __UINT_FAST64_TYPE__
> >
> > The GCC documentation [1] says about these macros:
> > "You should not use these macros directly; instead, include the
> > appropriate headers and use the typedefs."
>
> That's good advice in general, but when implementing standard headers
> Gnulib can disregard it for the same reason glibc does. E.g., see
> glibc's <inttypes.h>, which uses __WCHAR_TYPE__. GCC and Clang won't
> change the macros' meaning, so it's safe to use them. And in the
> unlikely event that the macros are withdrawn, Gnulib stdbit.h will still
> build and run.
I was worried that some platform might define uint_fast64_t as something
different from __UINT_FAST64_TYPE__. But no, that seems impossible, because
GCC has two unit tests,
gcc/testsuite/gcc.dg/c99-stdint-5.c
gcc/testsuite/gcc.dg/c99-stdint-6.c
that would catch such a mismatch.
So, the use of __UINT_FAST64_TYPE__ etc. is probably safe.
> > If the intent is to convert a value of type 'unsigned char' to an
> > 'uint_fast16_t',
> > a cast is the perfect, understandable way of doing so. Writing it as an
> > ' | zero' is intentional obfuscation.
>
> It's understandable, but it's not perfect.
I was thinking of a macro
#define _GL_STDBIT_INTEGER_CAST(type, value) \
({ type zero = 0; (value) | zero; })
that would hide the confusing code behind a name that emphasizes the intent.
But with your newest commit, this is not necessary any more.
> But I take your point that I used a less-than-obvious
> circumlocution. I installed the attached patch, which uses a different
> approach that I hope makes the code easier to follow, in the sense that
> the castless conversions are more obvious.
Thanks.
> > What was your rationale for avoiding casts in the first place? Does it apply
> > here, in the situation of converting an integer to another integer type?
>
> Yes, as I want to catch typos when the argument happens to not be an
> integer.
Well, I can follow the "catch typos" argument when you simplify
TYPE1 *ptr = (TYPE2 *) malloc (...);
to
TYPE1 *ptr = malloc (...);
because here you have eliminated the possibility of a typo in TYPE2.
But when the transformation you are making merely shuffles tokens
around:
return (TYPE1) ptr[0] | ((TYPE2) ptr[1] << 8);
to
TYPE1 zero1 = 0;
TYPE2 zero2 = 0;
return (ptr[0] | zero1) | ((ptr[1] | zero2) << 8);
or
TYPE1 v0 = ptr[0];
TYPE2 v1 = ptr[1];
return v0 | (v1 << 8);
you have not eliminated any occurrence of TYPE1 or TYPE2; therefore you
haven't reduced the probability of typos.
Anyway, that style is acceptable to me (as long as it compiles in C++ mode,
which is verified by the unit test tests/test-stdbit-h-c++.cc). However, I
prefer the rule "only one initialized variable per declaration".
Applied here, this rule actually makes it easier to spot typos, due to the
increased symmetry of the code:
2026-03-16 Bruno Haible <[email protected]>
stdbit-h: Make code more readable and typo-proof.
* lib/stdbit.in.h (stdc_load8_beu16, stdc_load8_beu32, stdc_load8_beu64,
stdc_load8_leu16, stdc_load8_leu32, stdc_load8_leu64): Avoid declaring
multiple variables in one declaration.
diff --git a/lib/stdbit.in.h b/lib/stdbit.in.h
index 54df8f3690..258d63c9bb 100644
--- a/lib/stdbit.in.h
+++ b/lib/stdbit.in.h
@@ -1521,23 +1521,32 @@ stdc_load8_beu8 (const unsigned char ptr[1])
_GL_STDC_LOAD8_INLINE uint_least16_t
stdc_load8_beu16 (const unsigned char ptr[2])
{
- _GL_STDBIT_UINT_FAST16 v0 = ptr[0], v1 = ptr[1];
+ _GL_STDBIT_UINT_FAST16 v0 = ptr[0];
+ _GL_STDBIT_UINT_FAST16 v1 = ptr[1];
return (v0 << (8 * 1)) | (v1 << (8 * 0));
}
_GL_STDC_LOAD8_INLINE uint_least32_t
stdc_load8_beu32 (const unsigned char ptr[4])
{
- _GL_STDBIT_UINT_FAST32 v0 = ptr[0], v1 = ptr[1], v2 = ptr[2], v3 = ptr[3];
+ _GL_STDBIT_UINT_FAST32 v0 = ptr[0];
+ _GL_STDBIT_UINT_FAST32 v1 = ptr[1];
+ _GL_STDBIT_UINT_FAST32 v2 = ptr[2];
+ _GL_STDBIT_UINT_FAST32 v3 = ptr[3];
return (v0 << (8 * 3)) | (v1 << (8 * 2)) | (v2 << (8 * 1)) | (v3 << (8 * 0));
}
_GL_STDC_LOAD8_INLINE uint_least64_t
stdc_load8_beu64 (const unsigned char ptr[8])
{
- _GL_STDBIT_UINT_FAST64
- v0 = ptr[0], v1 = ptr[1], v2 = ptr[2], v3 = ptr[3],
- v4 = ptr[4], v5 = ptr[5], v6 = ptr[6], v7 = ptr[7];
+ _GL_STDBIT_UINT_FAST64 v0 = ptr[0];
+ _GL_STDBIT_UINT_FAST64 v1 = ptr[1];
+ _GL_STDBIT_UINT_FAST64 v2 = ptr[2];
+ _GL_STDBIT_UINT_FAST64 v3 = ptr[3];
+ _GL_STDBIT_UINT_FAST64 v4 = ptr[4];
+ _GL_STDBIT_UINT_FAST64 v5 = ptr[5];
+ _GL_STDBIT_UINT_FAST64 v6 = ptr[6];
+ _GL_STDBIT_UINT_FAST64 v7 = ptr[7];
return ((v0 << (8 * 7)) | (v1 << (8 * 6))
| (v2 << (8 * 5)) | (v3 << (8 * 4))
| (v4 << (8 * 3)) | (v5 << (8 * 2))
@@ -1553,23 +1562,32 @@ stdc_load8_leu8 (const unsigned char ptr[1])
_GL_STDC_LOAD8_INLINE uint_least16_t
stdc_load8_leu16 (const unsigned char ptr[2])
{
- _GL_STDBIT_UINT_FAST16 v0 = ptr[0], v1 = ptr[1];
+ _GL_STDBIT_UINT_FAST16 v0 = ptr[0];
+ _GL_STDBIT_UINT_FAST16 v1 = ptr[1];
return (v0 << (8 * 0)) | (v1 << (8 * 1));
}
_GL_STDC_LOAD8_INLINE uint_least32_t
stdc_load8_leu32 (const unsigned char ptr[4])
{
- _GL_STDBIT_UINT_FAST32 v0 = ptr[0], v1 = ptr[1], v2 = ptr[2], v3 = ptr[3];
+ _GL_STDBIT_UINT_FAST32 v0 = ptr[0];
+ _GL_STDBIT_UINT_FAST32 v1 = ptr[1];
+ _GL_STDBIT_UINT_FAST32 v2 = ptr[2];
+ _GL_STDBIT_UINT_FAST32 v3 = ptr[3];
return (v0 << (8 * 0)) | (v1 << (8 * 1)) | (v2 << (8 * 2)) | (v3 << (8 * 3));
}
_GL_STDC_LOAD8_INLINE uint_least64_t
stdc_load8_leu64 (const unsigned char ptr[8])
{
- _GL_STDBIT_UINT_FAST64
- v0 = ptr[0], v1 = ptr[1], v2 = ptr[2], v3 = ptr[3],
- v4 = ptr[4], v5 = ptr[5], v6 = ptr[6], v7 = ptr[7];
+ _GL_STDBIT_UINT_FAST64 v0 = ptr[0];
+ _GL_STDBIT_UINT_FAST64 v1 = ptr[1];
+ _GL_STDBIT_UINT_FAST64 v2 = ptr[2];
+ _GL_STDBIT_UINT_FAST64 v3 = ptr[3];
+ _GL_STDBIT_UINT_FAST64 v4 = ptr[4];
+ _GL_STDBIT_UINT_FAST64 v5 = ptr[5];
+ _GL_STDBIT_UINT_FAST64 v6 = ptr[6];
+ _GL_STDBIT_UINT_FAST64 v7 = ptr[7];
return ((v0 << (8 * 0)) | (v1 << (8 * 1))
| (v2 << (8 * 2)) | (v3 << (8 * 3))
| (v4 << (8 * 4)) | (v5 << (8 * 5))