I've been getting out-of-memory failures from the database on some queries that
deal with a reasonable amount of data.

I was wondering what I should be looking at to stop this from happening.

The typical messages I've been getting look like this: http://pastebin.com/Jxfu3nYm

The OS is:

Linux TSTLHAPP01 2.6.32-29-server #58-Ubuntu SMP Fri Feb 11 21:06:51 UTC 2011 
x86_64 GNU/Linux.

It's running on VMware and has 2 CPUs and 8GB of RAM. This VM is dedicated
to PostgreSQL. The main OS parameters I have tuned are:

vm.swappiness=0
vm.overcommit_memory=2
kernel.shmmax = 4196769792
kernel.shmall = 1024602
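One thing worth noting about vm.overcommit_memory=2: in that mode the kernel refuses allocations once committed memory reaches CommitLimit, which is roughly swap plus RAM scaled by vm.overcommit_ratio (default 50). On an 8GB box with a small swap partition that limit can sit well below physical RAM, so a large backend can get ENOMEM while the machine still has free memory. A minimal sketch of the arithmetic, with an assumed swap size (check `free -g` for the real one):

```python
# Sketch of the CommitLimit the kernel enforces under
# vm.overcommit_memory=2. The swap size below is an assumption,
# not read from this server.

ram_gb = 8                 # VM RAM from the post
swap_gb = 1                # assumed swap size; verify with `free -g`
overcommit_ratio = 50      # kernel default for vm.overcommit_ratio

commit_limit_gb = swap_gb + ram_gb * overcommit_ratio / 100
print(f"CommitLimit ≈ {commit_limit_gb:.1f} GB")  # → 5.0 GB with these numbers
```

If CommitLimit comes out low, either raising vm.overcommit_ratio or adding swap gives the backends more headroom without changing PostgreSQL itself.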

And the PostgreSQL version is:

PostgreSQL 9.0.3 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real (Ubuntu 
4.4.3-4ubuntu5) 4.4.3, 64-bit.

The main postgresql.conf parameters I've tuned are:

shared_buffers = 2048MB
maintenance_work_mem = 512MB
work_mem = 200MB
wal_buffers = 16MB
effective_cache_size = 4094MB
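Since work_mem is a per-sort/per-hash limit rather than a per-server one, each backend can use it several times over in a single query, and the worst case scales with the connection count. A rough budget check, where max_connections and the number of work_mem-sized nodes per query are assumptions (they aren't in the post):

```python
# Rough worst-case memory budget for the settings above.
# max_connections and sorts_per_query are assumptions.

shared_buffers_mb = 2048
work_mem_mb = 200

max_connections = 100      # PostgreSQL default; check your config
sorts_per_query = 2        # each sort/hash node may use work_mem

per_backend_mb = work_mem_mb * sorts_per_query
worst_case_mb = shared_buffers_mb + max_connections * per_backend_mb
print(f"worst case ≈ {worst_case_mb / 1024:.1f} GB")  # → 41.1 GB here
```

With these numbers the theoretical worst case dwarfs the 8GB of RAM, which is why lowering work_mem (globally, or per-session with SET for the big queries) usually helps more than lowering shared_buffers.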

I have also tried lowering shared_buffers to 1GB, but it still ran out
of memory.

Cheers,
Jeremy



