Hi,
But is there a tool that could compile a summary out of the log? The
log grows awfully big after a short time.
There's also pg_analyzer to check out:
http://www.samse.fr/GPL/pg_analyzer/
Among its features: it is written in Perl and produces HTML output.
You might want to look at the "Practical Query Analysis" (PQA) tool as well.
Rory Campbell-Lange wrote:
The present server is a 2GHz Pentium 4 (512KB cache) with 2
software-RAIDed IDE disks (Maxtors) and 1GB of RAM.
I have been offered the following 1U server which I can just about
afford:
1U server
1x Intel Xeon 2.8GHz, 512KB cache
512MB PC2100 DDR ECC Registered
> would one not have to repeat this operation regularly, to keep
> any advantage of this? My impression was that this is a relatively
> heavy operation on a large table.
Yeah, it requires an exclusive lock and a table rebuild. It might be
useful to a message board type database since (one presum
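A minimal sketch of that periodic re-clustering (table and index names
are assumed from the thread; the USING form is the newer spelling, older
releases write CLUSTER indexname ON tablename):

  -- CLUSTER takes an exclusive lock and rewrites the whole table,
  -- so it is best scheduled during quiet hours
  CLUSTER messages USING user_message_idx;
  ANALYZE messages;  -- refresh planner statistics after the rewrite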
Tom Lane wrote:
> The difference would be pretty marginal --- especially if you choose to
> use bigints instead of ints. (A timestamp is just a float8 or bigint
> under the hood, and is no more expensive to compare than those datatypes.
> Timestamps *are* expensive to convert for I/O, but comparisons are cheap.)
"Merlin Moncure" <[EMAIL PROTECTED]> wrote:
-- optional
cluster user_message_idx on messages;
would one not have to repeat this operation regularly, to keep
any advantage of this? My impression was that this is a relatively
heavy operation on a large table.
gnari
> I don't have a huge amount of experience with this in pg, but one of
> the tricks we do in the ISAM world is a 'reverse date' system, so that
> you can scan forwards on the key to pick up datetimes in descending
> order. This is often a win because the o/s cache may assume
> read/forwards giving
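A sketch of that reverse-key trick in PostgreSQL terms (all names here
are hypothetical; an expression index stands in for the ISAM key, and
seq is assumed to be a bigint ordering column):

  -- index a key that sorts ascending when seq sorts descending, so a
  -- forward scan returns the newest rows first
  CREATE INDEX messages_rev_idx
      ON messages (userid, (9223372036854775807 - seq));

  SELECT *
    FROM messages
   WHERE userid = 51
   ORDER BY userid, (9223372036854775807 - seq)
   LIMIT 20;

(On releases with descending index support, an index on (userid, seq
DESC) achieves the same effect directly.)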
"Merlin Moncure" <[EMAIL PROTECTED]> writes:
> The following suggestion works on two principles: one is that instead of
> using timestamps for ordering, integers are quicker,
The difference would be pretty marginal --- especially if you choose to
use bigints instead of ints. (A timestamp is just a float8 or bigint
under the hood, and is no more expensive to compare than those datatypes.
Timestamps *are* expensive to convert for I/O, but comparisons are cheap.)
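To make that concrete, a hypothetical table showing the two choices side
by side; both columns occupy 8 bytes internally, so index comparisons on
them cost about the same:

  CREATE TABLE messages (
      userid  int,
      posted  timestamptz,  -- 8 bytes internally (float8 or bigint)
      seq     bigint,       -- also 8 bytes; no cheaper to compare
      body    text
  );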
> But is there a tool that could compile a summary out of the log? The log
> grows awfully big after a short time.
Actually, yes there is. Check out www.pgfoundry.org. I think it's called
pqa or postgres query analyzer or something.
Chris
> I do a vacuum full analyze every night.
> How can I see if my FSM setting is appropriate?
On a busy website, run vacuum analyze once an hour, or even better, use
contrib/pg_autovacuum
Chris
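On the FSM question above, one way to check (a sketch; this applies to
the pre-8.4 releases that have the max_fsm_pages setting):

  -- the final lines of the output summarize how many page slots the
  -- free space map needs versus what max_fsm_pages currently allows
  VACUUM VERBOSE;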
> This example looks fine, but since userid 51 evidently only has 35
> posts, there's not much time needed to read 'em all and sort 'em. The
> place where the double-column index will win big is on userids with
> hundreds of posts.
>
> You have to keep in mind that each index costs time to maintain.
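For reference, a sketch of the double-column index under discussion
(table and column names are assumed from the thread):

  CREATE INDEX user_message_idx ON messages (userid, posted);

  -- for a prolific poster this reads the newest rows straight from the
  -- index (scanned backwards) instead of fetching and sorting them all
  SELECT *
    FROM messages
   WHERE userid = 51
   ORDER BY posted DESC
   LIMIT 20;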
Ulrich Wisser wrote:
You can log queries that run for at least a specified amount of time.
This is useful for finding the long-running queries. You can then use
EXPLAIN ANALYZE to see why they are slow.
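A minimal sketch of both steps (the one-second threshold and the query
are only examples):

  -- normally set in postgresql.conf; a superuser can also set it per
  -- session as shown here
  SET log_min_duration_statement = 1000;  -- log statements >= 1000 ms

  -- then, for a query spotted in the log:
  EXPLAIN ANALYZE
  SELECT * FROM messages WHERE userid = 51 ORDER BY posted DESC LIMIT 20;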
But is there a tool that could compile a summary out of the log? The log
grows awfully big after a short time.
Hi Bruno,
my web application grows slower and slower over time. After some
profiling I came to the conclusion that my SQL queries consume the most
time (25 seconds). Obviously I need to optimise my queries and maybe
introduce some new indexes.
This sounds like you aren't doing proper maintenance.