On Tue, Nov 5, 2024 at 9:59 AM Jakub Jelinek <ja...@redhat.com> wrote:
>
> On Tue, Nov 05, 2024 at 09:32:23AM +0100, Richard Biener wrote:
> > I think there's the possibility to get back the memory on the GIMPLE
> > side but I wouldn't make
> > this a requirement for this patch.
>
> Sure.  All I'm saying is that we should make some effort to avoid the
> growth, not just accept - even without any field movements - that we have
> tons of new spare bits for flags and start using them.

True.  I'm looking at how difficult it is to optimize the vuse/vdef thing now.

> > We could dynamically allocate cpp_token and run-length encode location_t,
> > making it effectively less than 64bit.  For cpp_token we'd have to split
> > it into a low 32bit at the existing location and optionally allocated upper
> > 32bits.  But I'm honestly not sure this is worth the trouble - it would also
> > restrict how we allocate the bits of location_t.  We might want to
> > dynamically allocate cpp_token in general given the current 24 bytes
> > are only for the largest union members - but I have no statistics on the
> > token distribution and no idea if we use an array of cpp_token somewhere
> > which would rule this out.
>
> Actually, I think cpp_token isn't that big a deal, those should be
> short-lived unless huge macros are used.
> cp_token in the C++ FE is more important, the FE uses a vector of those
> and there is one cp_token per token read from libcpp.
> Unfortunately, I'm afraid there is nothing that can be done there,
> the struct has currently 29 bits of various flags, then 32 bit location_t
> and then union with a single pointer in it, so nicely 16 bytes.
> Now it will be 24 bytes, with 35 spare bits for flags.
> And the vector is live across the whole parsing (pointer to it cleared at
> the end of parsing, so GC collect can use it).

So cp_token[] could be split into two arrays to avoid the 32bit padding with
the enlarged location_t.  Maybe that's even more cache efficient if one
32bit field is often the only one accessed when sweeping over a chain
of tokens.

Richard.

>
>         Jakub
>