I believe that any after-the-fact attempt to recover bitfield boundaries is
going to fail unless you preserve more information during bitfield layout.
Consider
struct {
  char : 8;
  char : 0;
  char : 8;
};
where the : 0 isn't preserved in any way and you can't distinguish
it from struct { char : 8; char : 8; }.
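For reference, a quick compilable check that the two really are
layout-identical (the tag names and the C11 _Static_assert are mine,
not from the original):

/* For a char bitfield, the : 0 aligns to the next char boundary,
   which bit 8 already is, so both structs come out the same.  */
struct with_zero { char : 8; char : 0; char : 8; };
struct without_zero { char : 8; char : 8; };
_Static_assert (sizeof (struct with_zero) == sizeof (struct without_zero),
                "indistinguishable after layout");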
Huh? In my tests the :0 is preserved; it just doesn't have a DECL_NAME.
(gdb) p fld
$41 = (tree) 0x7ffff7778130
(gdb) pt
<field_decl 0x7ffff7778130 D.1593
...
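If it helps, here is a minimal sketch of how the surviving field can be
spotted while walking the type, using the usual accessors from tree.h
(the function name is my own, and a real GCC source file would also
need the customary config.h/system.h/coretypes.h includes):

/* Return true if TYPE contains a zero-length bitfield.  The field
   survives layout as an unnamed FIELD_DECL whose DECL_SIZE is zero,
   as in the debug_tree dump above.  */
static bool
has_zero_width_bitfield_p (tree type)
{
  for (tree fld = TYPE_FIELDS (type); fld; fld = DECL_CHAIN (fld))
    if (TREE_CODE (fld) == FIELD_DECL
        && DECL_BIT_FIELD (fld)
        && DECL_SIZE (fld)
        && integer_zerop (DECL_SIZE (fld)))
      return true;
  return false;
}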
I have tried the following scenario, and we calculate the beginning of
the bit region correctly (bit 32).
struct bits
{
  char a;
  int b:7;
  int :0;           <-- ends the previous bit region
  int c:9;          <-- new bit region starts here (bit 32)
  unsigned char d;
} *p;

void foo() { p->c = 55; }
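Concretely, here is how I read the layout on a typical 32-bit-int
target (the worked offsets and the printing harness are mine; the
exact numbers depend on the ABI):

#include <stdio.h>
#include <stddef.h>

struct bits
{
  char a;            /* bits  0..7  */
  int b:7;           /* bits  8..14 */
  int :0;            /* pads to the next int boundary, bit 32 */
  int c:9;           /* bits 32..40: the region containing c starts at bit 32 */
  unsigned char d;   /* placed at the next free byte boundary */
};

int main (void)
{
  printf ("sizeof = %zu, offsetof (d) = %zu\n",
          sizeof (struct bits), offsetof (struct bits, d));
  return 0;
}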
Am I misunderstanding? Why do you suggest we need to preserve more
information during bitfield layout?
FWIW, I should add a zero-length bitfield test.
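Something along these lines, perhaps (just a sketch; the initializer
values and the checks are mine):

/* Check that storing into c does not clobber the fields on either
   side of the zero-length bitfield.  The unnamed :0 member takes no
   initializer, so s is { a=1, b=2, c=3, d=4 }.  */
struct bits
{
  char a;
  int b:7;
  int :0;
  int c:9;
  unsigned char d;
} s = { 1, 2, 3, 4 };

int main (void)
{
  s.c = 55;
  if (s.a != 1 || s.b != 2 || s.c != 55 || s.d != 4)
    __builtin_abort ();
  return 0;
}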