[Synopsis: VACUUM FULL ANALYZE goes out of memory on a very large
pg_catalog.pg_largeobject table.]
Simon Riggs wrote:
Can you run ANALYZE and then VACUUM VERBOSE, both on just
pg_largeobject, please? It will be useful to know whether they succeed
ANALYZE:
INFO: analyzing "pg_catalog.pg_largeobject"
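In psql, Simon's request amounts to the two commands below; this is a sketch of that suggestion, with the table name taken from the thread:

```sql
-- Collect statistics on just the problem catalog table, then run a
-- plain (non-FULL) vacuum with per-step progress output.
ANALYZE pg_catalog.pg_largeobject;
VACUUM VERBOSE pg_catalog.pg_largeobject;
```

If ANALYZE succeeds but VACUUM fails, that narrows the fault to the vacuum code path rather than the statistics sampler.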
Martijn van Oosterhout wrote:
IIRC you said you're on a 32-bit architecture? Which means any single
process only has 4GB address space. Take off 1GB for the kernel, 1GB
shared memory, 1 GB maintenance_work_mem and a collection of libraries,
stack space and general memory fragmentation and I can a
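Martijn's back-of-envelope budget can be written out explicitly. The figures below are the rough assumptions from his message, not measurements from the affected server:

```python
# Rough 32-bit address-space budget for one PostgreSQL backend,
# using the approximate figures quoted in the message above.
GB = 1024 ** 3

address_space = 4 * GB   # total 32-bit virtual address space
kernel_split  = 1 * GB   # reserved for the kernel (typical 3/1 split)
shared_mem    = 1 * GB   # mapped shared_buffers segment
maint_mem     = 1 * GB   # maintenance_work_mem allocation

remaining = address_space - kernel_split - shared_mem - maint_mem
# What is left must hold libraries, stack, heap, and absorb
# fragmentation -- so a large further allocation can fail even
# though the machine has free physical RAM.
print(remaining // GB)
```

The point is that on 32-bit the limit is address space, not physical memory: 4 GB of RAM does not help a process that has already committed most of its 3 GB usable mapping range.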
On Tue, Dec 11, 2007 at 03:18:54PM +0100, Michael Akinde wrote:
> The server has 4 GB RAM available, so even if it was trying to use 1.2
> GB shared memory + 1 GB for maintenance_work_mem all at once, it still
> seems odd that the process would fail. As far as I can tell (running
> ulimit -a), the
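The ulimit check Michael mentions can be run from the shell that starts the postgres service, since the server inherits its limits from there; a minimal sketch:

```shell
# Show every per-process limit in effect for this shell;
# a postgres server started from it inherits these values.
ulimit -a

# The virtual-memory cap (in KB, or "unlimited") is the limit most
# likely to make a large maintenance_work_mem allocation fail with
# "out of memory" despite free physical RAM.
ulimit -v
```

A finite `ulimit -v` smaller than shared_buffers plus maintenance_work_mem plus working overhead would explain the failure by itself.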
Stefan Kaltenbrunner wrote:
Michael Akinde wrote:
Incidentally, in the first error of the two I posted, the shared
memory setting was significantly lower (24 MB, I believe). I'll try
with 128 MB before I leave in the evening, though (assuming the other
tests I'm running complete by then).
th
On Tue, Dec 11, 2007 at 12:30:43PM +0100, Michael Akinde wrote:
> The way the process was running, it seems to have basically just
> continually allocated memory until (presumably) it broke through the
> slightly less than 1.2 GB shared memory allocation we had provided for
> PostgreSQL (at lea
Michael Akinde wrote:
> Thanks for the rapid responses.
>
> Stefan Kaltenbrunner wrote:
>> this seems simply a problem of setting maintenance_work_mem too high (i.e.
>> higher than what your OS can support - maybe an ulimit/process limit is in
>> effect?). Try reducing maintenance_work_mem to say 128MB and retry.
Michael Akinde wrote:
Thanks for the rapid responses.
Stefan Kaltenbrunner wrote:
this seems simply a problem of setting maintenance_work_mem too high
(i.e. higher than what your OS can support - maybe an
ulimit/process limit is in effect?). Try reducing maintenance_work_mem
to say 128MB and retry.
Thanks for the rapid responses.
Stefan Kaltenbrunner wrote:
this seems simply a problem of setting maintenance_work_mem too high
(i.e. higher than what your OS can support - maybe an
ulimit/process limit is in effect?). Try reducing maintenance_work_mem
to say 128MB and retry.
If you promise pos
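Stefan's advice maps onto a one-line settings change; a sketch, using the 128MB value proposed in the thread:

```sql
-- Session-level override, effective only for the current connection:
SET maintenance_work_mem = '128MB';

-- Or persistently, by editing postgresql.conf and reloading:
--   maintenance_work_mem = 128MB
```

A session-level SET is the safer first test, since it leaves the server-wide configuration untouched while checking whether the lower limit lets VACUUM complete.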
On Tue, 2007-12-11 at 10:59 +0100, Michael Akinde wrote:
> I am encountering problems when trying to run VACUUM FULL ANALYZE on a
> particular table in my database; namely that the process crashes out
> with the following problem:
Probably just as well, since a VACUUM FULL on an 800GB table is
Michael Akinde wrote:
Hi,
I am encountering problems when trying to run VACUUM FULL ANALYZE on a
particular table in my database; namely that the process crashes out
with the following problem:
INFO: vacuuming "pg_catalog.pg_largeobject"
ERROR: out of memory
DETAIL: Failed on request of s
10 matches