Title: RE: [HACKERS] RFC: built-in historical query time profiling
> I see your point. The ugliness of log-parsing beckons.
>
Maybe it would make sense to use a separate log-server machine, where the logs could be written to a database without impacting production?
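Something like this on the receiving end, say; the table and the loader here are hypothetical, just to sketch the shape of it:

  -- Landing table on the central log server; each forwarded log
  -- line is kept raw, tagged with the cluster it came from.
  CREATE TABLE pg_log_line (
      origin_host text        NOT NULL,
      received_at timestamptz NOT NULL DEFAULT now(),
      line        text        NOT NULL
  );

  -- A loader (a syslog pipe, a small script, whatever) would then
  -- just INSERT each line as it arrives, e.g.:
  --   INSERT INTO pg_log_line (origin_host, line) VALUES ('db42', '...');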
On Wednesday March 23 2005 5:14, Mark Kirkwood wrote:
> - decide on a snapshot interval (e.g. 30 seconds)
> - capture pg_stat_activity every interval and save the results
> in a timestamped copy of this view (e.g. add a column
> 'snap_time')
That might serve for some purposes, but log-parsing sounds...
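A minimal sketch of the snapshot scheme quoted above, assuming the pg_stat_activity columns of this era (8.0-ish); the table name is made up:

  -- One-time setup: an empty, timestamped copy of the view,
  -- i.e. pg_stat_activity plus a 'snap_time' column.
  CREATE TABLE activity_snap AS
      SELECT now() AS snap_time, * FROM pg_stat_activity LIMIT 0;

  -- Run at every snapshot interval (e.g. every 30 seconds, from cron):
  INSERT INTO activity_snap
      SELECT now(), * FROM pg_stat_activity;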
On Wednesday March 23 2005 3:34, Tom Lane wrote:
>
> This is going to fall down on exactly the same objections that
> have been made to putting the log messages themselves into
> tables. The worst one is that a failed transaction would fail
> to make any entry whatsoever. There are also performance...
On Wednesday March 23 2005 4:11, Mark Kirkwood wrote:
> Is enabling the various postgresql.conf stats* options and
> taking regular snapshots of pg_stat_activity a possible way to
> get this?
I don't see how; the duration is the key measurement I'm after,
and I don't believe it is available anywhere...
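The closest a pg_stat_activity snapshot gets is the age of whatever is still running at the instant you look, something like the following (assuming the 8.0-era columns procpid, usename, current_query, query_start):

  -- Age of each in-flight query at snapshot time. A query that
  -- starts and finishes between snapshots never shows up at all,
  -- and no completed duration is ever recorded anywhere.
  SELECT procpid,
         usename,
         now() - query_start AS running_for,
         current_query
    FROM pg_stat_activity
   WHERE current_query <> '<IDLE>';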
Ed L. wrote:
Hackers,
(some snippage...)
Our Problem: We work with 75+ geographically distributed pg
clusters; it is a significant challenge keeping tabs on
performance. We see degradations from rogue applications,
vacuums, dumps, bloating indices, I/O and memory shortages, and
so on. Custom...
"Ed L." <[EMAIL PROTECTED]> writes:
> ... We can do
> this by writing programs to periodically parse log files for
> queries and durations, and then centralizing that information
> into a db for analysis, similar to pqa's effort.
That strikes me as exactly what you ought to be doing.
> Suppose...
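For what it's worth, a rough sketch of the centralizing side of that approach, assuming duration logging is on (log_min_duration_statement or log_duration) and the raw lines have been loaded into the hypothetical pg_log_line table sketched earlier:

  -- Pull query durations out of the raw log lines. substring() with
  -- a POSIX regex returns the first parenthesized group, or NULL if
  -- the line carries no duration.
  SELECT origin_host,
         received_at,
         substring(line from 'duration: ([0-9.]+) ms')::numeric
             AS duration_ms
    FROM pg_log_line
   WHERE line LIKE '%duration:%';

From there it is ordinary SQL to aggregate per cluster, per day, or per query, similar in spirit to pqa's reports.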