On Wed, Oct 27, 2010 at 8:41 AM, Francisco Reyes wrote:
> ->  Nested Loop  (cost=293.80..719.87 rows=2434522 width=4) (actual time=228.867..241.909 rows=2 loops=1)
> ->  HashAggregate  (cost=293.80..294.13 rows=33 width=29) (a
On 11/05/2010 07:32 PM, A B wrote:
The server will just boot, load data, and run; hopefully it won't crash,
but if it does, we just start over with load and run.
Have you looked at VoltDB? It's designed for fast in-memory use.
--
Craig Ringer
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
I also found this. Perhaps it is related?
http://postgresql.1045698.n5.nabble.com/Hash-Aggregate-plan-picked-for-very-large-table-out-of-memory-td1883299.html
--
Jon
2. Why do both HashAggregate and GroupAggregate say the cost estimate
is 4 rows?
I've reproduced this:
CREATE TABLE popo AS
  SELECT (x%1000) AS a, (x%1001) AS b
    FROM generate_series(1,100) AS x;
VACUUM ANALYZE popo;
EXPLAIN ANALYZE SELECT a,b,count(*) FROM (SELECT * FROM popo UNI
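The query above is cut off mid-statement. Purely as a guess at what the test case looked like, assuming the truncated `UNI` begins a `UNION ALL` back onto the same table and the aggregate groups on both columns, the full statement may have resembled:

```sql
-- Hypothetical completion of the truncated query; the actual UNION
-- branch and GROUP BY list in the original message are unknown.
EXPLAIN ANALYZE
SELECT a, b, count(*)
  FROM (SELECT * FROM popo
        UNION ALL
        SELECT * FROM popo) AS t
 GROUP BY a, b;
```

A shape like this would exercise the planner's row estimate for an aggregate over a UNION, which is what the thread's question about the HashAggregate/GroupAggregate cost estimates appears to concern.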