> And the hardware really loads 20 bits and not 24 bits?  If so, I
> think you might want to consider changing the unit to 4 bits instead
> of 8 bits.  If no, the mode is padded and has 24-bit size so why is
> setting TYPE_PRECISION to 20 not sufficient to achieve what you
> want?
The hardware transfers data in and out of byte-oriented memory in
TYPE_SIZE_UNITS chunks.  Once in a hardware register, all operations
are either 8, 16, or 20 bits (TYPE_SIZE) in size.  So yes, values are
padded in memory, but no, they are not padded in registers.

Setting TYPE_PRECISION is mostly useless, because most of gcc assumes
it's the same as TYPE_SIZE and ignores it.  Heck, most of gcc is
oblivious to the idea that types might not be powers-of-two in size.
GCC doesn't even bother with a DECL_PRECISION.

> > Thus, in these cases, TYPE_SIZE and TYPE_SIZE_UNIT no longer have
> > a "* BITS_PER_UNIT" mathematical relationship.
>
> I'm skeptical this can work, it's pretty fundamental.

It seems to work just fine in testing, and I'm trying to make it
non-fundamental.
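
To make the arithmetic concrete, here's a minimal sketch (mine, not
from the patch) of the relationship being discussed, for a
hypothetical 20-bit type on a byte-oriented target, assuming
BITS_PER_UNIT is 8:

#include <stdio.h>

int
main (void)
{
  const int bits_per_unit = 8;   /* stand-in for BITS_PER_UNIT */
  const int type_size = 20;      /* in-register width, i.e. TYPE_SIZE in bits */

  /* The in-memory footprint rounds up to whole bytes, i.e. TYPE_SIZE_UNIT.  */
  const int type_size_unit = (type_size + bits_per_unit - 1) / bits_per_unit;

  printf ("TYPE_SIZE                       = %d bits\n", type_size);
  printf ("TYPE_SIZE_UNIT                  = %d bytes\n", type_size_unit);
  printf ("TYPE_SIZE_UNIT * BITS_PER_UNIT  = %d bits\n",
          type_size_unit * bits_per_unit);

  /* 20 != 24: the "* BITS_PER_UNIT" relationship no longer holds for
     such a type, which is exactly the case under discussion.  */
  return 0;
}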