1. Gcc, via stdint.h, already
provides such nice predefined types as uint8_t.
The sizes provided are 8, 16, 32, and 64 bits.
In some sense uint1_t is available too (stdbool.h),
but at least on my machine stdbool.h uses 8 bits to store a bool,
so an array of 1000 bools takes 8000 bits,
which is asinine and kind of defeats the point.

I suggest adding uint1_t, uint2_t, and uint4_t support
to gcc, with packed array storage.  A uint128_t would be nice too.
It is obnoxious and arbitrary that only 8, 16, 32, and 64 are available,
and it is a pain to make programmers continually
reinvent the wheel to implement packed nybbles, etc. --
and even when I do so, my implementation results in ugly
code, different from the nice-looking code for packed bytes.
Why not allow me to write uniform code?  Why not catch up
to freaking PASCAL from 40 years ago, which already
provided packed arrays?  This ought to be pretty trivial:
having already done 8, 16, 32, and 64, doing 1, 2, and 4 also should be easy.
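To illustrate the kind of workaround I mean, here is a rough sketch of
hand-rolled packed nybbles (the helper names nyb_get/nyb_set are made up
for this example):

   /* packed-nybble workaround sketch: two 4-bit values per byte */
   #include <stdint.h>
   #include <stddef.h>

   /* read the i-th 4-bit value from a byte array */
   static inline uint8_t nyb_get(const uint8_t *a, size_t i) {
       uint8_t byte = a[i >> 1];
       return (i & 1) ? (uint8_t)(byte >> 4) : (uint8_t)(byte & 0x0F);
   }

   /* write the i-th 4-bit value into a byte array */
   static inline void nyb_set(uint8_t *a, size_t i, uint8_t v) {
       uint8_t *p = &a[i >> 1];
       if (i & 1) *p = (uint8_t)((*p & 0x0F) | (v << 4));
       else       *p = (uint8_t)((*p & 0xF0) | (v & 0x0F));
   }

With a packed uint4_t array type, both the read and the write would just
be a[i] -- the same uniform code one writes today for uint8_t.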

2. Gcc already provides a way to produce a quotient and remainder
simultaneously, returning a struct with two fields as output:
   div_t x = div(a,b);   causes   x.quot = a/b  and  x.rem = a%b.
Not many people know gcc provides that (it is declared in stdlib.h),
but it does, and that is good, because
it provides access to the hardware capability.

However, why not provide access to double-width multiply and
add-with-carry (subtract-with-borrow? double-width shift-left?) in the same fashion?
   twofer  x = mul(a,b);            would cause  x.hi  and  x.lo  to be computed.
   twofer  x = addwithcarry(a,b);   ditto.
It is frustrating and arbitrary that gcc only does this for division
and nothing else.
It also is just plain stupid! -- because the C language already
provides the % and / operators,
and gcc's optimizer could presumably recognize whenever anybody was computing
both a%b and a/b, and combine them when generating code.  Hence
div(a,b) was not even necessary for gcc to provide at all, if
it had a good enough optimizer+recognizer.  But mul(a,b) and
addwithcarry(a,b) really ARE necessary, because they are NOT available in
the C language already and CANNOT be recognized/optimized --
at least not without a rather ridiculous level of cleverness, to
recognize it whenever I implement double-length multiply in some slow
ugly manner as a workaround to overcome this stupidity (there are many
possible ways to implement it, so there is not much hope gcc could recognize
them all).
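
For concreteness, here is roughly what the slow ugly workaround looks like
today (just a sketch; the twofer struct layout and the names
mul_wide/add_wide are made up for this example):

   #include <stdint.h>

   typedef struct { uint64_t hi, lo; } twofer;

   /* double-length multiply workaround: 64x64 -> 128 via four 32x32 products */
   static twofer mul_wide(uint64_t a, uint64_t b) {
       uint64_t al = a & 0xFFFFFFFFu, ah = a >> 32;
       uint64_t bl = b & 0xFFFFFFFFu, bh = b >> 32;
       uint64_t ll = al * bl, lh = al * bh, hl = ah * bl, hh = ah * bh;
       uint64_t mid = lh + (ll >> 32) + (hl & 0xFFFFFFFFu);   /* cannot overflow */
       twofer r;
       r.lo = (mid << 32) | (ll & 0xFFFFFFFFu);
       r.hi = hh + (hl >> 32) + (mid >> 32);
       return r;
   }

   /* add-with-carry workaround: recover the carry out by a compare */
   static twofer add_wide(uint64_t a, uint64_t b) {
       twofer r;
       r.lo = a + b;
       r.hi = (r.lo < a);   /* 1 if the addition wrapped, else 0 */
       return r;
   }

On typical hardware a single multiply instruction produces both halves, and
the carry comes for free out of an add; a mul(a,b)/addwithcarry(a,b) pair
would expose exactly that capability, the same way div(a,b) does for division.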


-- 
Warren D. Smith
http://RangeVoting.org  <-- add your endorsement (by clicking
"endorse" as 1st step)
