Hello,
2009/11/25 Richard Neill :
Also, if you find odd statistics on a freshly analyzed table, try
increasing the statistics target, using
ALTER TABLE .. ALTER COLUMN .. SET STATISTICS ...
If you're using the defaults, that's again too low for large tables.
Start with 200, for example.
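A minimal sketch of the suggestion above (the table and column names are borrowed from later in the thread; substitute your own schema):

```sql
-- Raise the per-column statistics target to 200, then re-ANALYZE
-- so the planner actually sees the finer-grained histogram.
ALTER TABLE item_price ALTER COLUMN item_id SET STATISTICS 200;
ANALYZE item_price;
```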
Best regards,
…analyze the table every 100k changed
(inserted/updated/deleted) rows. Is this enough for you? The default on
large tables is definitely too low. If you now get consistent times,
then you've been hit by wrong statistics.
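If the stock thresholds really are too coarse for a large table, one way to tighten them per table (a sketch assuming PostgreSQL 8.4+ storage parameters; the table name is illustrative) is:

```sql
-- Trigger an automatic ANALYZE after ~2% of rows change,
-- instead of the stock 10% scale factor.
ALTER TABLE item_price SET (
    autovacuum_analyze_scale_factor = 0.02,
    autovacuum_analyze_threshold = 1000
);
```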
Best regards,
Sergey Aleynikov
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
…s, I set it
running much more aggressively than in the default install.
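For reference, making autovacuum more aggressive than the defaults is typically done in postgresql.conf along these lines (the values are illustrative, not the poster's actual settings):

```
# postgresql.conf -- wake autovacuum more often, trigger ANALYZE sooner
autovacuum = on
autovacuum_naptime = 15s                 # default: 1min
autovacuum_vacuum_scale_factor = 0.05    # default: 0.2
autovacuum_analyze_scale_factor = 0.02   # default: 0.1
```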
Best regards,
Sergey Aleynikov
…means that, for every one of the 10669 output rows, the DB scanned the
whole item_price table, spending 20.4 of 20.8 secs there. Do you have
any indexes there, especially on the item_id column?
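A sketch of the fix being hinted at (the index name is made up; re-ANALYZE afterwards so the planner picks it up):

```sql
-- Let the per-row lookup use an index scan instead of
-- repeatedly scanning the whole table.
CREATE INDEX item_price_item_id_idx ON item_price (item_id);
ANALYZE item_price;
```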
Best regards,
Sergey Aleynikov