On 11/26/2014 11:00 PM, Andrew Dunstan wrote:
>
> Attached is some anonymized DDL for a fairly complex schema from a
> PostgreSQL Experts client. Also attached is an explain query that runs
> against the schema. The client's problem is that in trying to run the
> explain, Postgres simply runs out of memory. On my untuned 9.3 test rig
> (Scientific Linux 6.4 with 24Gb of RAM and 24Gb of swap), vmstat clearly
> shows the explain chewing up about 7Gb of memory. When it's done, the
> free memory jumps back to where it was. In a similar case on the client's
> test rig we saw memory use jump lots more.
>
> The client's question is whether this is a bug. It certainly seems
> like it should be possible to plan a query without chewing up this much
> memory, or at least to be able to limit the amount of memory that can be
> grabbed during planning. Going from humming along happily to OOM
> conditions all through running "explain <somequery>" is not very friendly.
It's not trivial to track the whole hierarchy of views, but I think it
can result in the FROM list or some JOIN lists being too long. How about
setting from_collapse_limit / join_collapse_limit to a lower-than-default
value? A sketch of what I mean is below the signature.

-- 
Antonin Houska
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt
Web: http://www.postgresql-support.de, http://www.cybertec.at
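
A minimal sketch of the suggestion above, to be tried in the session that
runs the problematic EXPLAIN (the view name is made up; both settings
default to 8, and 1 is the most restrictive value):

    -- Don't flatten subqueries/views into the parent FROM list.
    SET from_collapse_limit = 1;
    -- Keep explicit JOINs in the order they are written, instead of
    -- searching the space of possible join orders.
    SET join_collapse_limit = 1;

    EXPLAIN SELECT * FROM some_complex_view;  -- hypothetical view name

    RESET from_collapse_limit;
    RESET join_collapse_limit;

SET only affects the current session, so this can be tested without
touching postgresql.conf. The trade-off is a possibly worse plan in
exchange for a much smaller planner search space, which is what appears
to be eating the memory here.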