> The existing stats collection mechanism seems OK for event
> counts, although I'd propose two changes: one, get rid of the
> separate buffer process, and two, find a way to emit event
> reports in a time-driven way rather than once per transaction
> commit. I'm a bit vague about how to do the latter at the moment.
Might it not be a win to also store per-backend global values in the
shared memory segment? Things like "time of last command", "number of
transactions executed in this backend", "backend start time", and other
values that are fixed-size. You obviously can't do it for things like
per-table values, since their size can't be predicted, but all
fixed-size per-backend counters should be able to work this way, I
think.

And if it's just a counter, it should be reasonably safe to do the
increment without locking, since there is only one writer for each
process. That should have a much lower overhead than sending UDP (or
whatever) to the stats process, no?

It might also be worthwhile to add a section for processes like the
bgwriter (and possibly the archiver?) to deliver statistics that we can
build statistics views on. (They obviously can't use a standard backend
struct for this, since they'd have completely different values to
report.)

//Magnus