On Wed, 20 Jul 2022 at 12:08, Tom Lane <t...@sss.pgh.pa.us> wrote:
>
> AFAIR, the previous stats collector implementation had no such provision
> either: it'd just keep adding hashtable entries as it received info about
> new objects.  The only thing that's changed is that now those entries are
> in shared memory instead of process-local memory.  We'd be well advised to
> be sure that memory can be swapped out under pressure, but otherwise I'm
> not seeing that things have gotten worse.
Just to be clear, I'm not looking for ways things have gotten worse --
just trying to understand what I'm reading, and I guess I came in with
assumptions that led me astray.

But... adding entries as it received info about new objects isn't the
same as having info on everything. I never really understood how the
old system worked, but if you had a very large schema and each session
only worked with a small subset, did the local stats data ever absorb
info on the objects it never touched?

All that said -- having all objects loaded in shared memory makes my
work much easier. It actually seems feasible to dump out all the
objects from shared memory, including objects from other databases,
and if I don't need a consistent snapshot it even seems possible to do
that without ever holding a copy of more than one stats entry at a
time in local memory. I just hope doing that regularly wouldn't cause
heavy contention on the shared hash table.

--
greg
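To sketch what I mean by "one stats entry at a time": walk the shared
table, copy the current entry into a single local buffer, emit it, and
move on. This is just an illustrative toy -- the array standing in for
the shared hash table, the StatsEntry fields, and the function name are
all made up for the example, not the actual pgstats/dshash API:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stats entry; real pgstats entries live in a dshash
 * table in dynamic shared memory and have many more fields. */
typedef struct StatsEntry
{
    unsigned    oid;            /* object the stats belong to */
    long        tuples_read;    /* some per-object counter */
} StatsEntry;

/* A fixed array stands in for the shared hash table. */
static const StatsEntry shared_table[] = {
    {16384, 100}, {16385, 250}, {16386, 7},
};

/*
 * Scan the "shared" table, copying exactly one entry at a time into a
 * local buffer, so local memory never holds more than one entry.
 * Returns the sum of the counters just so there is something to check.
 */
long
dump_stats_one_at_a_time(void)
{
    long        total = 0;

    for (size_t i = 0; i < sizeof shared_table / sizeof shared_table[0]; i++)
    {
        StatsEntry  local;      /* the single local copy */

        memcpy(&local, &shared_table[i], sizeof local);
        /* ...a real dump would write `local` out here; we just sum... */
        total += local.tuples_read;
    }
    return total;
}
```

In the real thing each copy would have to happen under the entry's
lock (or a dshash sequential scan), which is exactly where my
contention worry comes from.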