On Sat, Oct 15, 2011 at 14:16, Rachael Nelson <rulonr...@gmail.com> wrote:
> Attached is a tiny bit of top output (top.txt), the file being imported
> (postgres.sql.gz - about 32MB), and the perl module script that generates
Thanks. The problem here seems to be that Postgres does not handle a single multi-hundred-megabyte statement very well. In fact you don't even need a table; I was able to run it out of memory with a single 306MB INSERT using the multi-row VALUES syntax.

I would suggest using COPY FROM instead (rough sketch at the end of this message); it should be much faster. If you can't do that, try breaking that big INSERT up into multiple smaller ones.

For the curious, the memory context dump is below. I'm not exactly sure where all the memory is going; I count ~1.5GB below. This box has 8GB total with ~7GB free before I ran the query.

TopMemoryContext: 61360 total in 7 blocks; 3920 free (13 chunks); 57440 used
  TopTransactionContext: 8192 total in 1 blocks; 7520 free (0 chunks); 672 used
  MessageContext: 1487668504 total in 87 blocks; 2256 free (1 chunks); 1487666248 used
  Operator class cache: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
  smgr relation table: 24576 total in 2 blocks; 13920 free (4 chunks); 10656 used
  TransactionAbortContext: 32768 total in 1 blocks; 32736 free (0 chunks); 32 used
  Portal hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
  PortalMemory: 0 total in 0 blocks; 0 free (0 chunks); 0 used
  Relcache by OID: 24576 total in 2 blocks; 14912 free (3 chunks); 9664 used
  CacheMemoryContext: 817840 total in 20 blocks; 111704 free (0 chunks); 706136 used
  .....<snip>
  MdSmgr: 8192 total in 1 blocks; 7584 free (0 chunks); 608 used
  LOCALLOCK hash: 24576 total in 2 blocks; 15984 free (5 chunks); 8592 used
  Timezones: 83472 total in 2 blocks; 3744 free (0 chunks); 79728 used
  ErrorContext: 8192 total in 1 blocks; 8160 free (3 chunks); 32 used
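For illustration, here is roughly what the COPY route could look like. The table, columns, and file names (big_table, id, payload, data.csv) are made up, not taken from Rachael's schema; substitute the real ones, and have the Perl script emit a flat delimited data file instead of INSERT statements:

    -- Hypothetical target table; use the real one from your dump.
    CREATE TABLE big_table (id integer, payload text);

    -- Server-side COPY: the file must be readable by the PostgreSQL server process.
    -- (WITH (FORMAT csv) needs 9.0+; on older releases write WITH CSV.)
    COPY big_table (id, payload) FROM '/path/on/server/data.csv' WITH (FORMAT csv);

    -- Client-side alternative from psql: the file only needs to exist on the client.
    \copy big_table (id, payload) from 'data.csv' with csv

    -- If COPY really isn't an option, splitting the one huge INSERT into many
    -- smaller multi-row INSERTs (a few thousand rows each) keeps any single
    -- statement, and hence MessageContext, small.

COPY streams the rows in as it reads them, so the server never has to hold the entire data set as one parsed statement the way it does with a single giant multi-row INSERT.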