On Mon, Mar 26, 2012 at 4:03 PM, Andrew Dunstan <and...@dunslane.net> wrote:
> TupleHashTable: 619175960 total in 95 blocks; 821528 free
> (331 chunks); 618354432 used
I think the plan you showed isn't the plan that's running out of memory. It's running out of memory because it's using a hash aggregate and underestimating the number of distinct elements. I would set enable_hashagg=false for this query (not for the whole server; hash aggregates are quite useful in general).

This is a long-standing problem with hash aggregates: we don't have a clever way to fall back to a non-hash aggregate when the original memory estimate turns out to be way off. Simply spilling to disk the way hash joins do would, we think, be horrendously expensive. If we had a workable approach it would be very useful, because simply running out of memory is kind of rude.

--
greg
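P.S. For concreteness, a minimal sketch of the per-query workaround; the table and query here are made up, only the SET LOCAL line matters:

    BEGIN;
    -- Turn off hash aggregation for this transaction only;
    -- the planner falls back to GroupAggregate (sort + group).
    SET LOCAL enable_hashagg = false;
    SELECT some_col, count(*)      -- hypothetical problem query
      FROM some_table
     GROUP BY some_col;
    COMMIT;

SET LOCAL reverts automatically at COMMIT or ROLLBACK, so the rest of the session keeps hash aggregates.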