On 12/15/2011 04:01 AM, Simon Windsor wrote:
Hi
I am having problems recovering storage from a PostgreSQL 9.0.5 database
that is used to hold large XML blocks for a week before they are
archived offline.
The main tables are partitioned into daily partitions, and these are
easy to manage; however, the DB keeps growing despite a daily VACUUM
(at 07:00) and autovacuum (which does not seem to run, although the
process is present). The system is insert-only, and partitions are
dropped when over 7 days old.
I believe the issue lies with pg_largeobject, which is split across 88
files of approx. 1 GB each.
Are you using pg_largeobject via the lo_ functions, or via the `lo'
datatype?
If you're using it via the `lo' type then, IIRC, certain actions can
cause large object leaks. Try vacuumlo.
http://www.postgresql.org/docs/current/static/vacuumlo.html
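A minimal sketch of how you might run it (assuming the database is
named `mydb'; adjust connection options to suit):

```shell
# Dry run first: -n reports which large objects would be removed
# without actually deleting anything; -v is verbose output.
vacuumlo -n -v mydb

# If the dry-run output looks sane, run it for real:
vacuumlo -v mydb
```

vacuumlo treats any large object whose OID does not appear in an oid or
lo column of some table as an orphan, which is why it's unsafe if you
track large objects some other way via the lo_ functions.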
vacuumlo is **NOT** suitable for use on databases where you use the lo_
functions directly.
See also the `lo' module:
http://www.postgresql.org/docs/current/static/lo.html
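For illustration, the module's own example: a table with an `lo' column
plus the lo_manage trigger, which unlinks the underlying large object
when the row is updated or deleted. (On 9.1+ you install it with CREATE
EXTENSION; on 9.0 you load the contrib SQL script instead. `mydb' is a
placeholder database name.)

```shell
# PostgreSQL 9.1+ only; on 9.0 run the contrib lo.sql script instead.
psql -d mydb -c "CREATE EXTENSION lo;"

psql -d mydb <<'SQL'
CREATE TABLE image (title text, raster lo);

-- lo_manage unlinks the old large object on UPDATE/DELETE,
-- preventing the leaks described above.
CREATE TRIGGER t_raster BEFORE UPDATE OR DELETE ON image
    FOR EACH ROW EXECUTE PROCEDURE lo_manage(raster);
SQL
```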
If you're using the lo_ functions directly and are still seeing
excessive space consumption in pg_largeobject, then beyond a CLUSTER or
VACUUM FULL run I'm not sure what to advise.
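As a last resort, that would look something like the following (again
assuming a database named `mydb'; you'll need superuser rights, and it
takes an ACCESS EXCLUSIVE lock while it rewrites the table, so do it in
a maintenance window):

```shell
# Rewrites pg_largeobject from scratch, returning dead space to the OS.
# Blocks all access to large objects while it runs.
psql -d mydb -c "VACUUM FULL pg_largeobject;"
```

Note this only compacts space already freed by deleting/unlinking large
objects; it won't help if the objects are still referenced.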
--
Craig Ringer