https://gcc.gnu.org/bugzilla/show_bug.cgi?id=99212

--- Comment #5 from David Malcolm <dmalcolm at gcc dot gnu.org> ---
Possibly a dumb question, but how am I meant to get at the size in bits of a
bitfield?  TYPE_SIZE appears to give the size of the underlying storage (a
whole byte, i.e. 8 bits, in the dumps below) rather than the width of the
bitfield itself (or maybe I messed up when debugging?)

On a 1-bit unsigned bitfield I'm seeing:

(gdb) call debug_tree(m_type)
 <integer_type 0x7fffea7a83f0 sizes-gimplified public unsigned QI
    size <integer_cst 0x7fffea644dc8 type <integer_type 0x7fffea65d0a8 bitsizetype> constant 8>
    unit-size <integer_cst 0x7fffea644de0 type <integer_type 0x7fffea65d000 sizetype> constant 1>
    align:8 warn_if_not_align:0 symtab:0 alias-set -1 canonical-type 0x7fffea7a83f0 precision:1 min <integer_cst 0x7fffea78a1c8 0> max <integer_cst 0x7fffea78a1e0 1>>

On a 3-bit unsigned bitfield I'm seeing:

(gdb) call debug_tree(m_type)
 <integer_type 0x7fffea7a8498 sizes-gimplified public unsigned QI
    size <integer_cst 0x7fffea644dc8 type <integer_type 0x7fffea65d0a8 bitsizetype> constant 8>
    unit-size <integer_cst 0x7fffea644de0 type <integer_type 0x7fffea65d000 sizetype> constant 1>
    align:8 warn_if_not_align:0 symtab:0 alias-set -1 canonical-type 0x7fffea7a8498 precision:3 min <integer_cst 0x7fffea78a1f8 0> max <integer_cst 0x7fffea78a210 7>>

So it looks like TYPE_PRECISION (the "precision" shown in the dumps) is what I
should be using for such types (but when should I be using that rather than
TYPE_SIZE?)
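
For what it's worth, here is my current reading of those dumps expressed as
code; this is just my interpretation of the fields shown above, not something
I've confirmed against the internals documentation:

  /* For both bitfields the storage is a whole QImode byte: TYPE_SIZE is
     that storage size in bits, and TYPE_SIZE_UNIT the same thing in bytes.
     Only TYPE_PRECISION reflects the declared width of the bitfield.  */
  gcc_assert (tree_to_uhwi (TYPE_SIZE (m_type)) == 8);       /* both dumps */
  gcc_assert (tree_to_uhwi (TYPE_SIZE_UNIT (m_type)) == 1);  /* both dumps */
  unsigned int width = TYPE_PRECISION (m_type);              /* 1 resp. 3 */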

Is there an equivalent to int_size_in_bytes for bits that I'm missing?  Thanks
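
P.S. To make the question concrete, this is roughly the shape of helper I was
expecting to find.  The name "int_size_in_bits" is made up (I'm not aware of
such a function actually existing), and the fallback to TYPE_SIZE for
non-integral types is just a guess on my part:

  /* Hypothetical bit-level analogue of int_size_in_bytes: return the number
     of value bits for integral types (1 and 3 for the bitfields dumped
     above), otherwise the storage size in bits, or -1 if the size is not a
     compile-time constant.  */
  static HOST_WIDE_INT
  int_size_in_bits (tree type)
  {
    if (INTEGRAL_TYPE_P (type))
      return TYPE_PRECISION (type);
    tree size = TYPE_SIZE (type);
    if (size && tree_fits_shwi_p (size))
      return tree_to_shwi (size);
    return -1;
  }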
