On 2020/08/20 0:01, Tom Lane wrote:
> The only situation I could imagine where this would have any use is
> where there is long-term (cross-query) bloat in, say, CacheMemoryContext

Yeah, in cases where a very large number of sessions are connected to the
DB for very long periods of time, the memory consumption of the back-end
processes may increase slowly, and such a feature is useful for analysis.
And, as Fujii said, this feature is very useful for seeing which contexts
are consuming a lot of memory and for narrowing down the causes.

On Thu, Aug 20, 2020 at 11:18 AM Fujii Masao <masao.fu...@oss.nttdata.com> wrote:
> On 2020/08/20 10:43, Andres Freund wrote:
> > Hi,
> > Even just being able to see the memory usage in a queryable way is a
> > huge benefit.
>
> +1

+1
I think this feature is very useful in environments where gdb is not
available or access to server logs is limited. It is also cumbersome to
extract and analyze the memory information from very large server logs.

> > I totally agree that it's not *enough*, but in contrast to you I think
> > it's a good step. Subsequently we should add a way to get any backends
> > memory usage.
> > It's not too hard to imagine how to serialize it in a way that can be
> > easily deserialized by another backend. I am imagining something like
> > sending a procsignal that triggers (probably at CFR() time) a backend to
> > write its own memory usage into pg_memusage/<pid> or something roughly
> > like that.
>
> Sounds good. Maybe we can also provide the SQL-callable function
> or view to read pg_memusage/<pid>, to make the analysis easier.

+1

Best regards,

-- 
Tatsuhito Kasahara
kasahara.tatsuhito _at_ gmail.com
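
For illustration only, a rough sketch of how such an interface might be
used from SQL. The first query assumes the view looks something like
pg_backend_memory_contexts with name, parent and total_bytes columns; the
function names for the other-backend case are only placeholders for
whatever reads pg_memusage/<pid>, not anything in the current patch:

    -- Top memory consumers in the current backend.
    SELECT name, parent, total_bytes
      FROM pg_backend_memory_contexts
     ORDER BY total_bytes DESC
     LIMIT 10;

    -- Hypothetical interface for another backend: ask PID 12345 to dump
    -- its memory usage (e.g. into pg_memusage/12345), then read it back.
    SELECT pg_dump_backend_memory_usage(12345);
    SELECT name, parent, total_bytes
      FROM pg_read_backend_memory_usage(12345)
     ORDER BY total_bytes DESC
     LIMIT 10;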