Hi,

I am encountering problems when trying to run VACUUM FULL ANALYZE on a particular table in my database; namely, the process aborts with the following error:

INFO:  vacuuming "pg_catalog.pg_largeobject"
ERROR:  out of memory
DETAIL:  Failed on request of size 536870912.

INFO:  vacuuming "pg_catalog.pg_largeobject"
ERROR:  out of memory
DETAIL:  Failed on request of size 32.
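(For what it's worth, 536870912 bytes is exactly 512 MB, i.e. half of the 1 GB of maintenance memory mentioned below; the follow-up failure on a request of just 32 bytes suggests the backend's address space was already more or less exhausted at that point.)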

Granted, our largeobject table is a bit large:

INFO:  analyzing "pg_catalog.pg_largeobject"
INFO: "pg_largeobject": scanned 3000 of 116049431 pages, containing 18883 live rows and 409 dead rows; 3000 rows in sample, 730453802 estimated total rows

...but I trust that VACUUM ANALYZE doesn't try to read the entire table into memory at once. :-) The machine was set up with 1.2 GB of shared memory and 1 GB of maintenance memory (maintenance_work_mem), so I would have expected this to be sufficient for the task. (We will eventually set this up on a 64-bit machine with 16 GB of memory, but at the moment we are restricted to 32-bit.)
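For reference, the settings correspond roughly to the following postgresql.conf extract (the exact values below are my approximation):

  shared_buffers       = 1200MB   # the "1.2 GB shared memory" above
  maintenance_work_mem = 1GB      # memory VACUUM/ANALYZE are allowed to use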

This is currently running on PostgreSQL 8.3beta2, but since I haven't seen this problem reported before, I assume it will also be an issue in earlier versions. Have we run into a bug/limitation of PostgreSQL's VACUUM, or is this something we might be able to solve by reconfiguring the server/database or downgrading the DBMS version?

I shall try running a simple VACUUM later this evening, to see whether that manages to complete. Unfortunately, due to the time it takes to load the data, it is not really practicable to shift servers at the moment.
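For concreteness, the failing command and the one I intend to try are roughly the following (the VERBOSE flag is my assumption, based on the INFO lines quoted above):

  -- fails with the out-of-memory errors quoted above
  VACUUM FULL VERBOSE ANALYZE pg_catalog.pg_largeobject;

  -- plain (lazy) VACUUM, to be attempted this evening
  VACUUM VERBOSE pg_catalog.pg_largeobject;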

A little background on the application:
We are building a raster database for storing weather and water data. The raster data (2D matrices of floating-point values) are stored as large objects and indexed via a values table (with multiple dimensions: time, parameter, altitude, etc.). This is a technique I have worked with successfully in the past, though in that case using an Informix DBMS. My current employer is a strong proponent of open-source software, which has led to our implementing the current system on a PostgreSQL DBMS (we will also be releasing the system under the GPL in the near future).
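To give a rough idea of the layout, here is a simplified sketch of the indexing scheme (table and column names are illustrative, not our actual DDL):

  -- one row per stored raster; the matrix itself lives in a large object
  CREATE TABLE raster_values (
      valuetime  timestamptz NOT NULL,  -- time dimension
      parameter  text        NOT NULL,  -- e.g. temperature, precipitation
      altitude   real        NOT NULL,  -- vertical level
      raster_oid oid         NOT NULL,  -- large object holding the 2D float matrix
      PRIMARY KEY (valuetime, parameter, altitude)
  );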

The test instance we are working on now is about 1 TB; we expect that to grow by a factor of at least 5 within the first year of operation, so we would really like to ensure that we can get VACUUM working. (Although the data will be mostly static on this installation, we will have other installations where it won't be.)

Can anyone with insight into VACUUM FULL ANALYZE weigh in on what is going wrong here?

Regards,

Michael Akinde
----
Database Architect,
met.no

