Joseph Shraibman wrote:
I ran a query that was "SELECT field, count(*) INTO TEMP temptable" and it grew to 10GB (as reported by top).

What was the real query?

First I selected 90634 rows (3 ints) into the first temp table, then I did "select intfield1, count(intfield2) FROM realtable rt WHERE rt.id = temptable.id and other conditions on rt here GROUP BY intfield1". The second temp table should have contained no more than 60000 rows.
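Just so we're looking at the same thing - filling in the parts you elided (anything beyond the names you gave is a placeholder), I read that as roughly:

    -- step 1: ~90634 rows of three ints
    SELECT id, intfield1, intfield2
      INTO TEMP temptable
      FROM sometable;

    -- step 2: as quoted, temptable shows up in the WHERE clause without
    -- being listed in FROM; written as an explicit join, presumably:
    SELECT rt.intfield1, count(rt.intfield2)
      INTO TEMP temptable2
      FROM realtable rt, temptable t
     WHERE rt.id = t.id        -- plus the other conditions on rt
     GROUP BY rt.intfield1;

If that's not what actually ran, correct me.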

I can't see where the behaviour you're seeing is coming from.

I remember hearing of cases where PG has picked a plan that ends up requiring massively more RAM than it anticipated. Having committed itself to that plan, the executor then follows it through regardless of wrong initial assumptions. IIRC one example was building a hash where it was expecting 10 distinct values and got 10,000 instead (or something of that sort).
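If you want to check whether that's what's bitten you, compare the planner's idea of the distinct-value count with reality - pg_stats is the standard statistics view, substitute your real table/column names:

    -- planner's estimate (negative n_distinct means a fraction of the rows)
    SELECT n_distinct
      FROM pg_stats
     WHERE tablename = 'realtable' AND attname = 'intfield1';

    -- what the data actually contains
    SELECT count(DISTINCT intfield1) FROM realtable;

If those two disagree wildly, a VACUUM ANALYZE on the table is the first thing to try.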

Can you reproduce this?

Can you post the explain for the query?

How many rows are we talking about?
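For the plan, something like the following from psql is enough - note plain EXPLAIN, since EXPLAIN ANALYZE actually executes the query, which given the symptoms you won't want to repeat:

    -- add your other conditions on rt back in before running
    EXPLAIN
    SELECT rt.intfield1, count(rt.intfield2)
      FROM realtable rt, temptable t
     WHERE rt.id = t.id
     GROUP BY rt.intfield1;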

> ...and brought the whole machine to its knees.  How do I keep this from happening again?

Set your per-user limits (man ulimit or man bash) to restrict PG's overall memory consumption.
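For example, in whatever script starts the postmaster - the limit is inherited by the backends it forks. The numbers and data directory below are only illustrative, and whether your platform honours -d, -v or -m varies, so check your shell's man page:

    # cap each process's data segment at roughly 512MB (value in KB)
    ulimit -d 524288
    pg_ctl start -D /usr/local/pgsql/data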

What happens when PG hits that limit? Will it start using disk space for TEMP tables then?

I'd expect PG to just fail (in that one backend) - there's unlikely to be anything useful it can do if it's refused memory when it needs it. The limit is there to stop a runaway query from taking the machine down, not as a performance tool.

If your memory settings in postgresql.conf are reasonable (and they look fine), this shouldn't happen. Let's see if an EXPLAIN sheds any light.
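For reference, the postgresql.conf settings that bound per-backend memory are the ones below (work_mem/maintenance_work_mem are the 8.0 names; 7.x spells them sort_mem and vacuum_mem - values shown are illustrative, not recommendations):

    shared_buffers = 1000          # shared pool, counted in 8KB pages
    work_mem = 4096                # per sort/hash step, in KB
    maintenance_work_mem = 16384   # VACUUM, CREATE INDEX etc., in KB

Bear in mind each sort or hash node in a plan can claim its own work_mem, so a complex query can use several multiples of it.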

--
  Richard Huxton
  Archonet Ltd
