On 04/12/2016 10:32 AM, Jakub Jelinek wrote:
> On Tue, Apr 12, 2016 at 10:28:39AM -0400, Jason Merrill wrote:
>>>>> 1) I think this approach is not globally best. We're inventing another
>>>>> memory-usage related heuristic, rather than relying on the already
>>>>> available GC machinery. That's why I've gone with a hard-coded value
>>>>> for now, rather than add a new user-frobable param. The right solution
>>>>> is to stop generating new DECL_UIDs when copying fns for constexpr
>>>>> evaluation. IIUC the UID is being used to map decls to values. I don't
>>>>> see why we can't just use the (copied) DECL's address as a key to find
>>>>> its associated value (but I've not investigated that approach).
>>>> The mapping is already from the decl's address.
>>> Oh, good. That's not what comment #9 claimed, and I didn't check. It
>>> does look like a straightforward tree->tree mapper now that I look.
>>> It sounds like stopping copy_fn allocating new UIDs should be fine, and
>>> will then stop them being GC-sensitive.
>> Sounds good.
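
For readers following along, here is a minimal self-contained sketch of the
kind of address-keyed decl->value table being discussed. This is not the
compiler's actual code; the types and names below are invented stand-ins.
Because the key is the decl's pointer itself, lookups work without ever
consulting a DECL_UID:

#include <unordered_map>

struct Decl { };                      /* stand-in for a VAR_DECL tree node */
struct Value { long val; };           /* stand-in for an evaluated constant */

class DeclValueMap
{
  /* The map hashes the pointer value itself, so each (copied) decl gets
     its own entry purely by identity.  */
  std::unordered_map<const Decl *, Value> table;

public:
  void bind (const Decl *d, Value v) { table[d] = v; }

  const Value *lookup (const Decl *d) const
  {
    auto it = table.find (d);
    return it == table.end () ? nullptr : &it->second;
  }
};
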
> You mean copy the VAR_DECLs, but don't give them new UIDs? I'm afraid that
> would break a lot of things; that is just way too dangerous.

It doesn't seem that dangerous to me. The decls are only used within
constexpr evaluation; they never escape.

> Or do you mean not copying the VAR_DECLs at all, and instead hash not just
> on the VAR_DECL itself, but also on the constexpr context (e.g. how many
> recursive constexpr calls there are for the particular function)?

That doesn't sound workable, since we need to be able to refer to
variables from outer frames.
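
To make the outer-frame concern concrete, here is a simplified sketch (again
not GCC code; the names are invented) of per-call binding frames where lookup
walks from the innermost frame outward. A single table keyed on
(decl, recursion depth) would not know which depth to consult for a decl
bound by a caller:

#include <unordered_map>
#include <vector>

struct Decl { };                      /* stand-in for a VAR_DECL tree node */
struct Value { long val; };           /* stand-in for an evaluated constant */

/* One binding table per active constexpr call frame.  */
using Frame = std::unordered_map<const Decl *, Value>;

const Value *
lookup (const std::vector<Frame> &frames, const Decl *d)
{
  /* Search innermost to outermost; a nested call may refer to a decl
     bound by one of its callers.  */
  for (auto it = frames.rbegin (); it != frames.rend (); ++it)
    {
      auto found = it->find (d);
      if (found != it->end ())
        return &found->second;
    }
  return nullptr;
}
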
> Or perhaps revert the recent constexpr speedup/memory usage changes, reapply
> on the trunk after 6.1 is branched, stabilize and then consider backporting
> for 6.2 if it works well?

As I mentioned before, it seems to me that the problem is with GC
changing the number of function copies created; we should be able to
just stop deleting the copies table and that should fix things. I've
posted such a patch to the BZ.
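
As a rough illustration of that idea, and not of the actual constexpr
machinery, the sketch below keeps a per-function cache of body copies alive
across evaluations, so the number of copies ever made depends only on the
recursion depth reached, not on when a collection happens to run:

#include <memory>
#include <unordered_map>
#include <vector>

struct FnBody { };                    /* stand-in for a copied function body */

struct CopyCache
{
  /* For each function, a free list of copies not currently in use.  */
  std::unordered_map<const void *, std::vector<std::unique_ptr<FnBody>>>
    free_copies;

  /* Reuse a cached copy if one exists; otherwise make a new one
     (the expensive, UID-consuming step in the real compiler).  */
  std::unique_ptr<FnBody> acquire (const void *fn)
  {
    auto &stack = free_copies[fn];
    if (!stack.empty ())
      {
        auto copy = std::move (stack.back ());
        stack.pop_back ();
        return copy;
      }
    return std::make_unique<FnBody> ();
  }

  /* Hand a copy back for reuse by later or nested calls.  */
  void release (const void *fn, std::unique_ptr<FnBody> copy)
  {
    free_copies[fn].push_back (std::move (copy));
  }
};
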
Jason