Quoting "Joseph S. Myers" <jos...@codesourcery.com>:

> If something relates to an interface to a lower-level part of the
> compiler then BITS_PER_UNIT is probably right - but if something
> relates to whether a type is a variant of char, or to alignment of a
> non-bit-field object (you can't have smaller than char alignment), or
> things like that, then TYPE_PRECISION (char_type_node) may be better.

Yes, I see examples for both in the C++ front end.
The tree optimizers seem mostly (or entirely?) concerned with the
addressable unit size.
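
For concreteness, a minimal sketch of that distinction in GCC-internal
style (illustrative only, not a patch; the helper names bits_to_bytes
and char_precision_p are made up, the macros are the real ones):

  /* Rounding a bit count up to whole target bytes is about the
     addressable unit size, so BITS_PER_UNIT is the right choice:  */
  static unsigned HOST_WIDE_INT
  bits_to_bytes (unsigned HOST_WIDE_INT bits)
  {
    return (bits + BITS_PER_UNIT - 1) / BITS_PER_UNIT;
  }

  /* Asking whether a type has the precision of char is a question
     about the C type system, so TYPE_PRECISION (char_type_node)
     fits better:  */
  static bool
  char_precision_p (const_tree type)
  {
    return TYPE_PRECISION (type) == TYPE_PRECISION (char_type_node);
  }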

> Note that BITS_PER_UNIT is used in code built for the target
> (libgcc2.c, dfp-bit.h, fixed-bit.h, fp-bit.h, libobjc/encoding.c,
> ...), and converting it to a hook requires eliminating those uses.

Full conversion does. For the moment I would be content with a partial
conversion so that not every tree optimizer that currently uses
BITS_PER_UNIT has to include tm.h itself once the bogus tm.h includes
from target.h / function.h / gimple.h are gone.
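
To illustrate the direction (hypothetical names throughout, not an
actual target.def entry), a hookized BITS_PER_UNIT might look roughly
like:

  /* Hypothetical hook definition in target.def:  */
  DEFHOOK
  (bits_per_unit,
   "Number of bits in the target's addressable storage unit.",
   unsigned int, (void),
   default_bits_per_unit)

  /* A tree optimizer could then say
       bytes = (bits + targetm.bits_per_unit () - 1)
               / targetm.bits_per_unit ();
     without including tm.h, while code built for the target
     (libgcc2.c etc.) would keep using the macro -- hence only a
     partial conversion for now.  */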
