After upgrading from 9.1.1 to 9.1.2, memory usage for some queries
increased to the point where the Linux OOM killer terminated the server.
Further investigation showed the problem occurring under several circumstances:

- copying data from an external file into a table with indexes (see the
  sketch after this list)
- creating indexes
- complex queries using row_number() with OVER (PARTITION BY ...)

There might be other circumstances; the above are the ones I could
reproduce.
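
For the COPY case, the setup was roughly the following (table name and
file path are only placeholders here, not the actual data I used):

create table copy_test(var inet);
create index copy_test_ix on copy_test(var);
-- hypothetical input file, one address per line; the point is that the
-- index is maintained while the rows are loaded
copy copy_test(var) from '/tmp/addresses.txt';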

Information about the test system:

Linux gentoo-test 3.1.5-gentoo #1 SMP Sat Dec 10 22:42:53 CET 2011 i686 
Intel(R) Core(TM)2 Duo CPU T8100 @ 2.10GHz GenuineIntel GNU/Linux
PostgreSQL 9.1.2 on i686-pc-linux-gnu, compiled by i686-pc-linux-gnu-gcc 
(Gentoo 4.4.4-r2 p1.2, pie-0.4.5) 4.4.4, 32-bit

Running in a virtual machine (VMware Workstation) with 2 GB of memory and
512 MB of swap.

The problem was initially observed on a 64-bit system. Downgrading Postgres
from 9.1.2 back to 9.1.1 fixed the problem there. The other system is
running (after downgrading):

Linux joker 3.1.1-gentoo #2 SMP Thu Nov 24 22:59:20 CET 2011 x86_64 Intel(R) 
Xeon(R) CPU 5110 @ 1.60GHz GenuineIntel GNU/Linux
PostgreSQL 9.1.1 on x86_64-pc-linux-gnu, compiled by x86_64-pc-linux-gnu-gcc 
(Gentoo 4.5.3-r1 p1.0, pie-0.4.5) 4.5.3, 64-bit

Postgres was compiled from source on both machines.

Configuration on the testing machine is mostly unchanged; the modified
settings are:

maintenance_work_mem = 256MB
shared_buffers = 128MB
fsync = off
synchronous_commit = off
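
If needed, the effective values can be verified from psql before running
the test:

show maintenance_work_mem;
show shared_buffers;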

The following script can be used to show the behaviour:

create database pgtest;
\c pgtest
create table test(var inet);
insert into test(var)
  select '127.0.0.0'::inet + generate_series(0,256*256*256-1);
create index test_ix on test(var);

With PostgreSQL 9.1.1 the postgres process uses approx. 1000 MB during the
insert and approx. 400 MB while generating the index (monitored using top,
column VIRT).

The same operation with Postgres 9.1.2 uses about the same amount of memory
for filling the table, but generating the index exhausts all available
memory and the machine slows to a crawl due to swapping.

To me this looks a bit like temporary tables are not correctly used in 9.1.2.
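
One way to check this (just a sketch, I have not run it yet;
log_temp_files requires superuser) would be to log any temporary file
usage during the index build and compare 9.1.1 against 9.1.2:

-- sketch only: log every temporary file the sort creates, so it is
-- visible whether the index build spills to disk once
-- maintenance_work_mem (256MB) is exceeded
set client_min_messages = log;
set log_temp_files = 0;
drop index if exists test_ix;
create index test_ix on test(var);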

The problem was originally observed with the following table definition and
query:

create table servers_part
  (ip inet not null primary key,
   failcount smallint default -1, 
   lastchange timestamp without time zone default now(),
   cidr smallint default 0);
create index servers_part_fclc on servers_part(failcount,lastchange);

The data for this table contains 2^24 rows, and can be downloaded from
http://www.erwied.eu/~transfer/217.dump.bz2 (116 MB)

The following query shows the same behaviour as index creation (memory 
exhaustion):

SELECT y.ip, y.failcount, y.network
  FROM (SELECT x.ip, x.failcount, x.network,
               row_number() OVER (PARTITION BY x.network
                                  ORDER BY x.lastchange) AS rc
          FROM (SELECT servers_part.ip,
                       servers_part.failcount,
                       set_masklen(servers_part.ip::cidr,
                                   COALESCE(servers_part.cidr::integer, 8)) AS network,
                       servers_part.lastchange
                  FROM servers_part
                 WHERE servers_part.failcount = 0
                       AND servers_part.lastchange < '2011-11-11 00:00:00'::timestamp without time zone
                    OR servers_part.failcount >= 1 AND servers_part.failcount < 5
                       AND servers_part.lastchange < '2011-12-08 00:00:00'::timestamp without time zone
                    OR servers_part.failcount >= 5 AND servers_part.failcount < 10
                       AND servers_part.lastchange < '2011-12-05 00:00:00'::timestamp without time zone
                    OR servers_part.failcount >= 10 AND servers_part.failcount < 15
                       AND servers_part.lastchange < '2011-11-28 00:00:00'::timestamp without time zone
                    OR servers_part.failcount = 15
                       AND servers_part.lastchange < '2011-10-01 00:00:00'::timestamp without time zone
               ) x
       ) y
 WHERE y.rc <= 128;
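
A smaller window-function query against the test table from the script
above (only an illustration, not part of the original observation) shows
the same WindowAgg/Sort pattern; comparing its EXPLAIN ANALYZE output
between 9.1.1 and 9.1.2 might show where the extra memory goes:

explain analyze
select var
  from (select var,
               row_number() over (partition by set_masklen(var::cidr, 8)
                                  order by var) as rc
          from test) t
 where rc <= 128;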


-- 
Jochen Erwied     |   home: joc...@erwied.eu     +49-208-38800-18, FAX: -19
Sauerbruchstr. 17 |   work: j...@mbs-software.de  +49-2151-7294-24, FAX: -50
D-45470 Muelheim  | mobile: jochen.erw...@vodafone.de       +49-173-5404164

