Tom Lane wrote:
"Todd A. Cook" <[EMAIL PROTECTED]> writes:
oom_test=> explain select val,count(*) from oom_tab group by val;
                                QUERY PLAN
-------------------------------------------------------------------------
  HashAggregate  (cost=1163446.13..1163448.63 rows=200 width=4)
    ->  Seq Scan on oom_tab  (cost=0.00..867748.42 rows=59139542 width=4)

The row estimate for oom_tab is close to the actual value.  Most of
the values are unique, however, so the result should have around 59M
rows too, not the 200 the planner expects.

Well, that's the problem right there :-(.  Have you ANALYZEd this table?

My production table and query are more complex.  In the original, the
query above was in a sub-select; the work-around was to create a temp
table from the sub-query results, ANALYZE it, and then run the larger
query against the temp table, along the lines of the sketch below.
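
In sketch form (the table and column names here are illustrative
stand-ins, not the real production schema):

  -- materialize the sub-query results
  CREATE TEMP TABLE sub_results AS
    SELECT val, count(*) AS cnt
    FROM oom_tab
    GROUP BY val;

  -- give the planner real statistics on the result
  ANALYZE sub_results;

  -- the larger query then runs against the temp table
  SELECT ... FROM sub_results WHERE ...;

With fresh statistics on the temp table, the planner sizes the outer
query's hash tables from the true row count rather than the default
estimate.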

There have been on-and-off discussions on the pg lists about out-of-memory
issues (see
http://archives.postgresql.org/pgsql-bugs/2006-03/msg00102.php).
I was just offering my test case as an example in case it might be of
any use in tracking those problems down. :)

-- todd

