Robert Osowiecki <[EMAIL PROTECTED]> writes:
> While doing some large update on table with over 1 million records:
> HashBatchContext: 360701952 total in 52 blocks; 7158680 free (140
> chunks); 353543272 used

Evidently this hashtable got out of hand :-(

> Query is EXPLAIN-ed as follows:
> Hash Join  (cost=6997.64..169707.66 rows=1339172 width=279)
>   Hash Cond: ("outer".sp_az_artsize = "inner".az_artsize)
>   InitPlan
>     ->  Seq Scan on tsystemvar  (cost=0.00..2.15 rows=1 width=4)
>           Filter: ((sv_name)::text = 'CURRENT_SEASON'::text)
>   ->  Seq Scan on tordspecif  (cost=0.00..38621.72 rows=1339172 width=245)
>   ->  Hash  (cost=6512.77..6512.77 rows=44688 width=38)
>         ->  Hash Left Join  (cost=1950.38..6512.77 rows=44688 width=38)
>               Hash Cond: ("outer".az_artsize = "inner".ap_az_artsize)
>               ->  Hash Join  (cost=1210.12..4390.20 rows=44688 width=18)
>                     Hash Cond: (("outer".az_ar_code)::text = ("inner".ar_code)::text)
>                     ->  Seq Scan on tarticlesize  (cost=0.00..1471.88 rows=44688 width=16)
>                     ->  Hash  (cost=1061.30..1061.30 rows=15930 width=27)
>                           ->  Seq Scan on tarticle  (cost=0.00..1061.30 rows=15930 width=27)
>               ->  Hash  (cost=638.50..638.50 rows=11500 width=24)
>                     ->  Seq Scan on tartpricevat  (cost=0.00..638.50 rows=11500 width=24)
>                           Filter: ((ap_deleted = 0) AND (ap_se_code = $0))

One or another of the Hash nodes must have been fed many more rows than
it was expecting.  Which estimate is off?

			regards, tom lane
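One way to answer that question is to run EXPLAIN ANALYZE on the scans that
feed the Hash nodes and compare the planner's rows= estimate with the actual
row count at each node.  The sketch below is only illustrative: it assumes
the table and column names shown in the plan above (the original UPDATE was
not posted), so adjust it to the real query.

    -- Whichever Hash input shows an actual row count far above its
    -- rows= estimate is the one that blew up the hash table.
    EXPLAIN ANALYZE
    SELECT az.az_artsize
    FROM tarticlesize az
    JOIN tarticle ar ON (az.az_ar_code)::text = (ar.ar_code)::text;

    EXPLAIN ANALYZE
    SELECT ap_az_artsize
    FROM tartpricevat
    WHERE ap_deleted = 0;   -- plus the ap_se_code = CURRENT_SEASON filter

    -- If an estimate is badly off, refresh the statistics and re-test:
    ANALYZE tarticlesize;
    ANALYZE tartpricevat;

    -- For a skewed filter or join column, a larger per-column statistics
    -- target may give the planner a better estimate:
    ALTER TABLE tartpricevat ALTER COLUMN ap_se_code SET STATISTICS 100;
    ANALYZE tartpricevat;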