Re: [GENERAL] work_mem greater than 2GB issue

2009-05-15 Thread wickro
> HashAggregate doesn't have any ability to spill to disk.  The planner
> will not select a HashAggregate if it thinks the required hash table
> would be larger than work_mem.  What you've evidently got here is a
> misestimate of the required hash table size, which most likely is
> stemming from a
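
Where such a misestimate comes from can usually be checked against the
planner statistics. A minimal sketch (the table and column names are
hypothetical, since the actual query is truncated in this archive):

    -- Refresh the statistics the planner uses to size the hash table
    ANALYZE stats;

    -- Inspect the estimated number of distinct values per grouping
    -- column; a large underestimate here makes the planner think the
    -- hash table fits in work_mem when it does not
    SELECT attname, n_distinct
    FROM pg_stats
    WHERE tablename = 'stats'
      AND attname IN ('keyword', 'partner_id');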

Re: [GENERAL] work_mem greater than 2GB issue

2009-05-14 Thread Tom Lane
wickro writes:
> So this is a planning mistake? Should a hash be allowed to grow larger
> than work_mem before it starts to use the disk?

HashAggregate doesn't have any ability to spill to disk. The planner
will not select a HashAggregate if it thinks the required hash table
would be larger than
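
Since HashAggregate cannot spill, one standard workaround (a planner
knob, not something proposed in the thread itself) is to disable hash
aggregation for the session so the planner falls back to a sort-based
GroupAggregate, which does spill to disk within work_mem:

    SET enable_hashagg = off;  -- planner falls back to GroupAggregate
    -- run the large GROUP BY here; the sort spills to disk instead of
    -- building an unbounded in-memory hash table
    RESET enable_hashagg;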

Re: [GENERAL] work_mem greater than 2GB issue

2009-05-14 Thread wickro
You're right. At a certain work_mem threshold it switches over to a HashAggregate plan. When it does, it eats up a lot of memory. A GroupAggregate uses at most work_mem. I'm using PostgreSQL 8.3.3 64-bit on CentOS 5. The query I'm running is: select keyword, partner_id, sum(num_
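
The plan flip is easy to observe with EXPLAIN at two work_mem settings.
A stand-in query (the real one is cut off above; table and column names
are illustrative, and the values mirror the thread):

    SET work_mem = '1900MB';
    EXPLAIN SELECT keyword, partner_id, sum(cnt)
    FROM stats GROUP BY keyword, partner_id;
    -- expect a Sort + GroupAggregate while the estimated hash table
    -- does not fit in work_mem

    SET work_mem = '2500MB';
    EXPLAIN SELECT keyword, partner_id, sum(cnt)
    FROM stats GROUP BY keyword, partner_id;
    -- expect a HashAggregate once the planner thinks the hash table
    -- fits, which is where the memory blow-up described here appears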

Re: [GENERAL] work_mem greater than 2GB issue

2009-05-14 Thread Henry
Quoting wickro:
> I have a largish table (> 8GB). I'm doing a very simple single group
> by on it.

This doesn't answer your question, but you might want to take advantage
of table partitioning:
http://www.postgresql.org/docs/8.3/interactive/ddl-partitioning.html
I've recently gone through this ex
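
For reference, partitioning in 8.3 meant inheritance plus CHECK
constraints; a minimal sketch with illustrative names:

    CREATE TABLE stats (
        keyword    text,
        partner_id integer,
        cnt        bigint
    );

    -- Child tables carry non-overlapping CHECK constraints
    CREATE TABLE stats_p0 (CHECK (partner_id <  100)) INHERITS (stats);
    CREATE TABLE stats_p1 (CHECK (partner_id >= 100)) INHERITS (stats);

    -- Lets the planner skip partitions ruled out by the constraints
    SET constraint_exclusion = on;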

Re: [GENERAL] work_mem greater than 2GB issue

2009-05-14 Thread Gregory Stark
wickro writes:
> Hi everyone,
>
> I have a largish table (> 8GB). I'm doing a very simple single group
> by on it. I am the only user of this database. If I set work_mem to
> anything under 2GB (e.g. 1900MB) the postmaster process stops at that
> value while it's performing its group by. There is on
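
Note that work_mem can be raised for a single session rather than
server-wide, which is the usual way to run one big aggregate (a sketch,
not something stated in the thread):

    SET work_mem = '1900MB';  -- this session only
    -- run the large GROUP BY here
    RESET work_mem;           -- back to the server default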

Re: [GENERAL] work_mem greater than 2GB issue

2009-05-14 Thread Tom Lane
wickro writes:
> I have a largish table (> 8GB). I'm doing a very simple single group
> by on it. I am the only user of this database. If I set work_mem to
> anything under 2GB (e.g. 1900MB) the postmaster process stops at that
> value while it's performing its group by. There is only one hash
> oper