Re: [PERFORM] Regression: 8.3 2 seconds -> 8.4 100+ seconds

2010-11-06 Thread Robert Haas
On Wed, Oct 27, 2010 at 8:41 AM, Francisco Reyes wrote:
>   ->  Nested Loop  (cost=293.80..719.87 rows=2434522 width=4) (actual time=228.867..241.909 rows=2 loops=1)
>         ->  HashAggregate  (cost=293.80..294.13 rows=33 width=29) (a…
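The plan fragment above shows the symptom behind the regression: the planner expects 2,434,522 rows out of the nested loop, but only 2 actually come back. A minimal sketch of how such a mismatch is surfaced and attacked (the table and column names here are hypothetical, not from the thread):

    -- Compare "rows=" (planner estimate) against "actual ... rows=" in the output.
    EXPLAIN ANALYZE
    SELECT c.id
      FROM customers c
      JOIN orders o ON o.customer_id = c.id
     WHERE o.placed_at > now() - interval '1 day';

    -- A large estimate-vs-actual gap usually points at stale or
    -- insufficient statistics; refreshing them is the first step:
    ANALYZE orders;

    -- For a skewed column, a larger statistics target can help:
    ALTER TABLE orders ALTER COLUMN placed_at SET STATISTICS 1000;
    ANALYZE orders;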

Re: [PERFORM] Running PostgreSQL as fast as possible no matter the consequences

2010-11-06 Thread Craig Ringer
On 11/05/2010 07:32 PM, A B wrote:
> The server will just boot, load data, run, hopefully not crash but if it
> would, just start over with load and run.

Have you looked at VoltDB? It's designed for fast in-memory use.

-- Craig Ringer
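In the spirit of the thread's premise (losing data on a crash is acceptable), a minimal postgresql.conf sketch that trades all durability for speed; the specific values are illustrative, not tuned recommendations:

    # WARNING: with these settings, any crash can leave the cluster unrecoverable.
    fsync = off                  # never force WAL to disk
    synchronous_commit = off     # return from COMMIT before the WAL flush
    full_page_writes = off       # skip torn-page protection in WAL
    checkpoint_segments = 64     # fewer, larger checkpoints (pre-9.5 setting)
    shared_buffers = 2GB         # size to the working set; illustrative
    work_mem = 64MB              # per-sort/hash memory; illustrative

If even that is too much overhead, the "load and run, start over on crash" model described above is exactly what in-memory stores like VoltDB are built around.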

Re: [PERFORM] postmaster consuming /lots/ of memory with hash aggregate. why?

2010-11-06 Thread Jon Nelson
I also found this. Perhaps it is related?
http://postgresql.1045698.n5.nabble.com/Hash-Aggregate-plan-picked-for-very-large-table-out-of-memory-td1883299.html

-- Jon

Re: [PERFORM] postmaster consuming /lots/ of memory with hash aggregate. why?

2010-11-06 Thread Pierre C
> 2. Why do both HashAggregate and GroupAggregate say the cost estimate
> is 4 rows?

I've reproduced this:

CREATE TABLE popo AS SELECT (x%1000) AS a, (x%1001) AS b
  FROM generate_series( 1,100 ) AS x;
VACUUM ANALYZE popo;
EXPLAIN ANALYZE SELECT a,b,count(*) FROM (SELECT * FROM popo UNI…
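The archive preview truncates the query mid-word. A plausible, runnable completion, assuming the cut-off "UNI" is a UNION ALL of popo with itself and that the outer aggregate groups on (a, b); the subquery alias and the GROUP BY clause are guesses, not the poster's exact text:

    CREATE TABLE popo AS
      SELECT (x % 1000) AS a, (x % 1001) AS b
        FROM generate_series(1, 100) AS x;
    VACUUM ANALYZE popo;

    -- The planner must estimate how many distinct (a, b) groups come out
    -- of the subquery; that estimate is what the HashAggregate and
    -- GroupAggregate nodes report.
    EXPLAIN ANALYZE
    SELECT a, b, count(*)
      FROM (SELECT * FROM popo
            UNION ALL
            SELECT * FROM popo) AS foo  -- alias is a guess
     GROUP BY a, b;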