1) A cron job at 4:00 AM runs a full VACUUM ANALYZE on the DB and a REINDEX on the major tables (the REINDEX is there to keep the index files small enough to stay in SHM); a sample crontab sketch follows this list. An alerting feature pages the administrator if the job does not complete within a reasonable amount of time.
2) Every 15 minutes, a cron job runs a VACUUM ANALYZE on our five largest tables. An alert is emailed to the administrator if a second vacuum attempts to start before the previous one completes.
3) Every week, we review the daily peak disk-usage numbers. This determines whether we need to increase the kernel's shmmax and PostgreSQL's shared_buffers.
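For illustration only, jobs 1) and 2) might be wired up roughly like this in a crontab. Everything here is hypothetical: the database "accounting", the user, the table "bigtable1", and the admin address are made up, and the paging/overlap alerting is reduced to a simple failure mail.

    # nightly full pass: database-wide VACUUM FULL ANALYZE, then reindex a major table
    0 4 * * *     vacuumdb -f -z -U postgres accounting && psql -U postgres accounting -c 'REINDEX TABLE bigtable1' || mail -s 'nightly maintenance failed' admin@example.com < /dev/null
    # frequent pass over the biggest tables (one line per table)
    */15 * * * *  vacuumdb -z -t bigtable1 -U postgres accounting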
Additionally, you may want to take a look at your query performance. Are most of your queries doing sequential scans? In my system, the crucial columns of the primary tables are int8 and float8 fields. I have those indexed, and I get a serious performance boost by making sure that every SELECT/UPDATE/DELETE using those columns in the WHERE clause carries an explicit ::int8 or ::float8 cast (EXPLAIN ANALYZE is your friend here). During peak usage, there is an order-of-magnitude difference (usually 10 to 15x) between queries doing sequential scans on the table and queries doing index scans. It might be worth investigating if your queries already take 5 seconds when your DB is fresh. HTH.
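To make the cast trick concrete, here is a hypothetical session (the table "traffic" and its columns are invented). On 7.x the bare literal is typed int4, which can keep the planner off an int8 index; the explicit ::int8 cast fixes that:

    CREATE TABLE traffic (cust_id int8 NOT NULL, bytes float8);
    CREATE INDEX traffic_cust_id ON traffic (cust_id);

    -- typically a sequential scan: 42 is an int4 literal
    EXPLAIN ANALYZE SELECT * FROM traffic WHERE cust_id = 42;

    -- index scan: the cast lets the planner use traffic_cust_id
    EXPLAIN ANALYZE SELECT * FROM traffic WHERE cust_id = 42::int8;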
Tomas Szepe wrote:
Hello everybody,
I'm facing a simple yet grave problem with PostgreSQL 7.3.2 on x86 Linux. My db is used to store IP accounting statistics for about 30 class C networks. There are a couple of truly trivial tables, such as the one below:
CREATE TABLE stats_min (
    ip     inet      NOT NULL,
    start  timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP(0),
    intlen int4      NOT NULL DEFAULT 60,
    d_in   int8      NOT NULL,
    d_out  int8      NOT NULL,
    CONSTRAINT "stats_min_pkey" PRIMARY KEY ("ip", "start")
);

CREATE INDEX stats_min_start ON stats_min (start);
A typical transaction committed on these tables looks like this:
BEGIN WORK
DELETE ...
UPDATE/INSERT ...
COMMIT WORK
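The statements above are elided in the original; a purely hypothetical instance of the pattern (the address, counters, and retention window are invented) might read:

    BEGIN WORK;
    -- expire rows older than the retention window
    DELETE FROM stats_min WHERE start < CURRENT_TIMESTAMP - interval '7 days';
    -- record this minute's counters: UPDATE first, and INSERT only when
    -- the UPDATE matched no row (7.3 has no built-in upsert)
    UPDATE stats_min SET d_in = d_in + 1024, d_out = d_out + 512
        WHERE ip = '10.1.2.3' AND start = date_trunc('minute', CURRENT_TIMESTAMP);
    INSERT INTO stats_min (ip, start, d_in, d_out)
        VALUES ('10.1.2.3', date_trunc('minute', CURRENT_TIMESTAMP), 1024, 512);
    COMMIT WORK;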
Trouble is, as rows in the tables get deleted/inserted/updated (at a rate of a couple thousand rows per minute), the database is growing out of proportion in size. After about a week, I have to redump the db by hand to get query times back to sensible figures. A transaction that takes ~50 seconds before the redump will then complete in under 5 seconds (the corresponding data/base/ dir having shrunk from ~2 GB to ~0.6 GB).
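One way to watch that growth from inside the database (a sketch, not part of the original report; relpages is only refreshed by a VACUUM or ANALYZE) is to rank relations by their on-disk page counts:

    SELECT relname, relpages
    FROM pg_class
    ORDER BY relpages DESC
    LIMIT 10;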
A nightly VACUUM ANALYZE is no use.
A VACUUM FULL is no use.
A VACUUM FULL followed by REINDEX is no use.
It seems that only a full redump involving "pg_dump olddb | psql newdb" is capable of restoring the system to its working glory.
Please accept my apologies if I've overlooked a relevant piece of information in the docs. I'm in urgent need of getting this problem resolved.