Adrian Prantl <[EMAIL PROTECTED]> writes:

> Another question is whether there is actually a need to carry around
> the two concepts of BYTES and UNITS anyway. It seems that for most
> backends those are of the same size anyway, and for the other
> backends it would be much easier if there were only UNITS.
That is a real question, and one which is hard to answer because there are so few machines with BITS_PER_UNIT != 8.

Another way to put the question is: what is the size of QImode? At the moment, QImode is the mode with the size of a unit. That is, if BITS_PER_UNIT is 16, then QImode is 16 bits. I developed a backend with BITS_PER_UNIT == 16 and INT_TYPE_SIZE == 16, which means that the basic operations are addqi3, mulqihi3, etc. (a sketch of the relevant target definitions is at the end of this message). It all works fine, but it looks odd to experienced gcc developers.

In other words, in practice the way to make everything work is to assume that BYTES and UNITS are the same size. So then why do we ever use the term 'byte'?

The alternative is to carefully distinguish them. That implies making QImode always be 8 bits, and using the now-standard definition of 'byte' as an 8-bit value (in my youth, the size of a 'byte' varied from machine to machine).

I don't know which approach is best, but I do know that the question will only be settled if somebody contributes a fully supported backend for a word-addressable machine.
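For concreteness, here is a minimal sketch of what the target header of such a port looks like. The file name is hypothetical and the fragment is illustrative rather than copied from any real backend, but BITS_PER_UNIT and INT_TYPE_SIZE are the actual target macros involved:

/* word-target.h -- fragment of a hypothetical 16-bit
   word-addressable port.  Illustrative only.  */

/* The smallest addressable storage unit is a 16-bit word.  Since
   QImode is defined as the mode whose size is one unit, QImode is
   16 bits wide on this target.  */
#define BITS_PER_UNIT 16

/* 'int' occupies exactly one unit, so int arithmetic is QImode
   arithmetic, and the expanders look for addqi3, mulqihi3, and
   friends in the machine description.  */
#define INT_TYPE_SIZE 16

With those two definitions the port conflates 'byte' and 'unit' at 16 bits, which is exactly the assumption that makes everything work in practice today.

Ian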