Hello everybody,

I am currently working on a new gcc backend for a word-addressable
machine with 24-bit general-purpose registers.
While doing so I came across a few inconsistencies in the usage of the
BITS_PER_UNIT macro (and, relatedly, UNITS_PER_WORD).

Apparently a lot of places in the gcc sources use the concept of a UNIT
where they actually mean an 8-bit byte, and vice versa (often in the form
of bytelen * BITS_PER_UNIT to calculate some size in bits).

A typical example from genmodes.c:

>static void
>emit_mode_precision (void)
>{
>  int c;
>  struct mode_data *m;
>
>  print_decl ("unsigned short", "mode_precision", "NUM_MACHINE_MODES");
>
>  for_all_modes (c, m)
>    if (m->precision != (unsigned int)-1)
>      tagged_printf ("%u", m->precision, m->name);
>    else
>      tagged_printf ("%u*BITS_PER_UNIT", m->bytesize, m->name);
>                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>  print_closer ();
>}

So my question is: what is the best thing for me to do? If I simply replace
those instances with a constant 8, the correct code is generated, but that
does not seem like a clean solution to me.
Another question is whether there is actually a need to carry around the two
concepts of BYTES and UNITS at all. For most backends the two are the same
size, and for the remaining backends it would be much easier if there were
only UNITS.

thanks a lot,
Adrian
