Depending on what you mean by 'monitor': for up/down monitoring, use Nagios
(http://www.nagios.org); for performance monitoring (and I guess the reason
why you ask this on the postgresql performance list), use pgstatspack
(http://pgfoundry.org/projects/pgstatspack/).
frits
On Fri, Jul 31, 2009 at
Hi! I've got the following statement:
SELECT DISTINCT sub.os,
COUNT(sub.os) as total
FROM (
SELECT split_part(system.name, ' ', 1) as os
FROM system, attacks
WHERE 1 = 1
AND timestamp >= 1205708400
AND timestamp <= 120631320
In my experience PostgreSQL works well on NFS. Of course, use NFS over TCP, and
use noac if you want to protect your database even more (in my experience
NFS client caching doesn't lead to an unrecoverable database, however).
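To make the TCP/noac advice above concrete, a mount entry would look roughly like this. This is only a sketch: the server name, export path, and mount point are made up, and 'intr' mostly matters on older kernels such as RHEL4's:

```
# /etc/fstab entry -- hypothetical server and paths
nfsserver:/export/pgdata  /var/lib/pgsql/data  nfs  tcp,hard,intr,noac  0 0
```

noac turns off client-side attribute caching, so every attribute lookup goes back to the server; it costs some performance but removes one class of cache-coherency surprises.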
I've encountered problems with RHEL4 as a database server and a client of a
Ne
that it's impossible to judge the "cost" of a merge join,
because its time is a composite of both the scans and the merge operation
itself, right?
Is there any way to identify nodes in the exec plan which are expensive (CPU
time, I/O, etc.)?
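One way to get per-node numbers (a sketch, run with psql against your own database; the table and column names follow the earlier post in this digest, but the join column is an assumption): run EXPLAIN ANALYZE and subtract each child's actual total time from its parent's. The remainder is roughly the time spent in the parent node itself (note the times printed are per loop, so multiply by 'loops' first):

```sql
-- Hypothetical query for illustration; a.system_id = s.id is assumed.
EXPLAIN ANALYZE
SELECT split_part(s.name, ' ', 1) AS os, count(*)
FROM system s
JOIN attacks a ON a.system_id = s.id
WHERE a.timestamp >= 1205708400
GROUP BY 1;
```

Each node in the output carries "actual time=startup..total rows=N loops=M"; a node whose total time stays large after subtracting its children is where the CPU or I/O is going.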
regards
frits
On 2/27/08
I've got some long-running queries, and want to tune them.
Using simple logic, I can understand which steps in the query plan
ought to be expensive (seq scans and index scans over many rows), but I want to
quantify that; use a somewhat more scientific approach.
The manual states: "Actually two numbers are shown: the start-up cost before the
first row can be returned, and the total cost to return all the rows."
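The two numbers the manual is describing are the estimated start-up cost and total cost, printed as cost=start..total on every plan node. For example (illustrative output, not from a real run):

```sql
EXPLAIN SELECT * FROM attacks WHERE timestamp >= 1205708400;
-- Seq Scan on attacks  (cost=0.00..483.00 rows=1200 width=16)
--   Filter: ("timestamp" >= 1205708400)
```

The units are arbitrary (anchored by seq_page_cost = 1.0 by default), so costs are only useful for comparing plans against each other; EXPLAIN ANALYZE prints the measured actual times next to the estimates.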