Re: [GENERAL] Hash Aggregate plan picked for very large table == out of memory

2007-06-14 Thread Gregory Stark
"Mason Hale" <[EMAIL PROTECTED]> writes: > The default_statistics_target was originally 200. > I upped it to 1000 and still get the same results. You did analyze the table after upping the target right? Actually I would expect you would be better off not raising it so high globally and just raisi

Re: [GENERAL] Hash Aggregate plan picked for very large table == out of memory

2007-06-14 Thread Tom Lane
"Mason Hale" <[EMAIL PROTECTED]> writes: >> What's the actual number of groups >> (target_page_id values)? > Approximately 40 million (I'll have a more precise number when the query > finishes running ). Ouch. The discrepancy between that and the 550K estimate is what's killing you --- the hash

Re: [GENERAL] Hash Aggregate plan picked for very large table == out of memory

2007-06-14 Thread Mason Hale
I should have mentioned this previously: running the same query against the same data on 8.1.5 does not result in a hash aggregate plan or an out of memory error. (Note: the hardware is different but very similar -- the main difference is the 8.1.9 server (with the error) has faster disks.)

On 6/1

Re: [GENERAL] Hash Aggregate plan picked for very large table == out of memory

2007-06-14 Thread Mason Hale
Thanks Tom. Here's more info:

> What have you got work_mem set to?

40960

> What's the actual number of groups (target_page_id values)?

Approximately 40 million (I'll have a more precise number when the query finishes running).

Maybe this helps?

crystal=> select null_frac, n_distinct, correl
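The quoted query is cut off at the prompt; assuming it was inspecting the planner statistics for the grouping column, the complete statement would have looked something like this (a reconstruction, not the thread's actual text):

    crystal=> SELECT null_frac, n_distinct, correlation
              FROM pg_stats
              WHERE tablename = 'page_page_link'
                AND attname = 'target_page_id';

Note that work_mem is measured in kilobytes, so 40960 means the planner expected the hash table to fit in roughly 40 MB.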

Re: [GENERAL] Hash Aggregate plan picked for very large table == out of memory

2007-06-14 Thread Tom Lane
"Mason Hale" <[EMAIL PROTECTED]> writes: > SELECT target_page_id, min(created_at) > FROM page_page_link > GROUP BY 1; > The page_page_link table has ~130 million rows. > After analyzing the table, the planner picks a hash aggregate plan, which > results in an out of memory error. What have you g

[GENERAL] Hash Aggregate plan picked for very large table == out of memory

2007-06-14 Thread Mason Hale
With PostgreSQL 8.1.9 -- I have a simple GROUP BY query:

SELECT target_page_id, min(created_at)
FROM page_page_link
GROUP BY 1;

The page_page_link table has ~130 million rows. After analyzing the table, the planner picks a hash aggregate plan, which results in an out of memory error.

crystal=>
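The session output is truncated at the prompt; a generic way to see which plan the planner picks without running the query (a sketch, not output from the thread):

    crystal=> EXPLAIN
              SELECT target_page_id, min(created_at)
              FROM page_page_link
              GROUP BY 1;

If the top node is a HashAggregate whose row estimate is far below the real number of distinct target_page_id values, that is the symptom discussed in the replies above.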