Robert Haas <robertmh...@gmail.com> writes:
> On Wed, Mar 13, 2019 at 12:42 PM Alvaro Herrera
> <alvhe...@2ndquadrant.com> wrote:
>> I remember going over this code's memory allocation strategy a bit to
>> avoid the copy while not incurring potential leaks in CacheMemoryContext;
>> as I recall, my idea was to use two contexts, one of which is temporary
>> and used for any potentially leaky callees, and destroyed at the end of
>> the function, and the other contains the good stuff and is reparented to
>> CacheMemoryContext at the end.  So if you have any accidental leaks,
>> they don't affect a long-lived context.  You have to be mindful of not
>> calling leaky code when you're using the permanent one.
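[For reference, the two-context strategy Alvaro describes might be sketched like this, using the standard memory-context API; this is an illustrative sketch only, not the actual partition-cache code, and the context names are made up:]

```c
/* Hypothetical sketch of the two-context strategy -- not committed code. */
MemoryContext tmpcxt;   /* scratch space for potentially leaky callees */
MemoryContext permcxt;  /* holds only the "good stuff" we mean to keep */
MemoryContext oldcxt;

tmpcxt = AllocSetContextCreate(CurrentMemoryContext,
                               "partition scratch",
                               ALLOCSET_DEFAULT_SIZES);
permcxt = AllocSetContextCreate(CurrentMemoryContext,
                                "partition result",
                                ALLOCSET_DEFAULT_SIZES);

oldcxt = MemoryContextSwitchTo(tmpcxt);
/* ... call potentially-leaky code here ... */
MemoryContextSwitchTo(permcxt);
/* ... build or copy the long-lived structure here, being careful
 * not to call anything leaky while this context is current ... */
MemoryContextSwitchTo(oldcxt);

/* Any leaks die with tmpcxt; the result survives and is reparented. */
MemoryContextDelete(tmpcxt);
MemoryContextSetParent(permcxt, CacheMemoryContext);
```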
> Well, that assumes that the functions which allocate the good stuff do
> not also leak, which seems a bit fragile.

I'm a bit confused as to why there's an issue here at all.  The usual
plan for computed-on-demand relcache sub-structures is that we compute
a working copy that we're going to return to the caller using the
caller's context (which is presumably statement-duration at most) and
then do the equivalent of copyObject to stash a long-lived copy into
the relcache context.  Is this case being done differently, and if so
why?  If it's being done the same, where are we leaking?

I recall having noticed someplace where I thought the relcache
partition support was simply failing to make provisions for cleaning
up a cached structure at relcache entry drop, but I didn't have time
to pursue it right then.  Let me see if I can reconstruct what I was
worried about.

			regards, tom lane
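[P.S. The "usual plan" described above could be sketched roughly as follows; the field and helper names here are hypothetical, purely for illustration:]

```c
/* Hypothetical sketch of the usual computed-on-demand relcache pattern. */
SomeStruct *result;
MemoryContext oldcxt;

/* Build the working copy in the caller's context, which is
 * statement-duration at most; any leaks here are short-lived. */
result = build_the_structure(rel);   /* hypothetical builder */

/* Stash a long-lived copy into the relcache's context. */
oldcxt = MemoryContextSwitchTo(CacheMemoryContext);
rel->rd_somecache = copyObject(result);   /* hypothetical field */
MemoryContextSwitchTo(oldcxt);

return result;   /* caller's copy dies with the caller's context */
```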