Hi,

Thank you for the review.
> Hmm, would it make sense to use dynamic shared memory for this? The
> publishing backend could dsm_create one DSM chunk of the exact size that
> it needs, pass the dsm_handle to the consumer, and then have it be
> destroyed once it's been read. That way you don't have to define an
> arbitrary limit of any size. (Maybe you could keep a limit to how much
> is published in shared memory and spill the rest to disk, but I think
> such a limit should be very high[1], so that it's unlikely to take
> effect in normal cases.)
>
> [1] This is very arbitrary of course, but 1 MB gives enough room for
> some 7000 contexts, which should cover normal cases.

I used one DSA area per process to share the statistics. Currently, the
size limit for each DSA is 16 MB, which can accommodate approximately
6,700 MemoryContextInfo structs; any statistics beyond that spill over
to a file. I opted for DSAs over DSMs so that memory can be reused, by
freeing segments, for subsequent statistics copies of the same backend,
rather than recreating a DSM for each request.

The dsa_handle for each process is stored in an array in shared memory,
indexed by its procNumber. The maximum size of this array is MaxBackends
plus the number of auxiliary processes. (A rough sketch of this
bookkeeping is in the PS below.)

As requested earlier, I have renamed the function to
pg_get_process_memory_contexts(pid, get_summary); suggestions for a
better name are welcome. When the get_summary argument is set to true,
the function returns statistics for memory contexts only up to level 2,
that is, the top memory context and all of its children (e.g.
SELECT * FROM pg_get_process_memory_contexts(<pid>, true)).

Please find attached a rebased patch that includes these changes. I will
work on adding a test for the function and on addressing the code
refactoring suggestions.

Thank you,
Rahila Syed
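PS: To make the bookkeeping above concrete, here is a rough sketch of
the publishing side. This is illustrative only, not the patch itself:
the names (MemCtxShmemStruct, MemCtxShmem, MemCtxShmemInit,
MemCtxGetMyArea), the tranche handling, and the locking are invented
for the example, and the shared-memory size request is omitted.

#include "postgres.h"
#include "miscadmin.h"
#include "storage/proc.h"
#include "storage/shmem.h"
#include "utils/dsa.h"

/* One dsa_handle slot per backend/auxiliary process, indexed by ProcNumber. */
typedef struct MemCtxShmemStruct
{
    dsa_handle  handles[FLEXIBLE_ARRAY_MEMBER];
} MemCtxShmemStruct;

static MemCtxShmemStruct *MemCtxShmem = NULL;

/* Run during shared-memory initialization. */
static void
MemCtxShmemInit(void)
{
    int         nslots = MaxBackends + NUM_AUXILIARY_PROCS;
    bool        found;

    MemCtxShmem = ShmemInitStruct("memory context stats handles",
                                  offsetof(MemCtxShmemStruct, handles) +
                                  nslots * sizeof(dsa_handle),
                                  &found);
    if (!found)
    {
        for (int i = 0; i < nslots; i++)
            MemCtxShmem->handles[i] = DSA_HANDLE_INVALID;
    }
}

/*
 * Publishing side: create this process's DSA on first use and remember
 * its handle; on later requests, reattach and reuse the same area.
 * (Locking around the slot is omitted for brevity.)
 */
static dsa_area *
MemCtxGetMyArea(int tranche_id)
{
    dsa_area   *area;

    if (MemCtxShmem->handles[MyProcNumber] == DSA_HANDLE_INVALID)
    {
        area = dsa_create(tranche_id);
        dsa_pin(area);          /* keep the area alive across detaches */
        MemCtxShmem->handles[MyProcNumber] = dsa_get_handle(area);
    }
    else
        area = dsa_attach(MemCtxShmem->handles[MyProcNumber]);

    return area;
}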
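On the requesting side the idea is roughly the following, continuing
the sketch above. Again hypothetical: how the consumer learns the
dsa_pointer of the published statistics, and the signalling of the
target process, are left out; MemoryContextInfo is the struct defined
in the patch.

/*
 * Consumer side: attach to the target process's area through its
 * handle slot, read the statistics, then free the chunk so the
 * target can reuse the segment on the next request.
 */
static void
MemCtxReadStats(ProcNumber procno, dsa_pointer stats_dp, int nstats)
{
    dsa_area   *area;
    MemoryContextInfo *stats;

    Assert(MemCtxShmem->handles[procno] != DSA_HANDLE_INVALID);

    area = dsa_attach(MemCtxShmem->handles[procno]);
    stats = (MemoryContextInfo *) dsa_get_address(area, stats_dp);

    for (int i = 0; i < nstats; i++)
    {
        /* ... emit stats[i] into the function's tuplestore ... */
    }

    dsa_free(area, stats_dp);
    dsa_detach(area);
}

The point of pinning the area and only freeing chunks, rather than
destroying the whole segment as one would with a plain DSM, is that
repeated calls against the same backend can recycle the memory instead
of going through a create/destroy cycle each time.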
v2-0001-Function-to-report-memory-context-stats-of-any-backe.patch