On 27/07/2006, at 2:00 PM, Peter Watling wrote:

I have a table with only 30-odd records... I use one field on each
record as a sort of status, as a means of handshaking between a number
of clients... It works OK in theory; however, over time (just days)
it gets progressively slower, as if PostgreSQL is keeping a list of
all updates... I tried restarting Postgres in case it was some
transaction thing, but it doesn't seem to help.

When you update a row, Postgres in effect creates an entirely new row that reflects the changes, and the old row is marked for deletion. That old row still remains in the table, however, until it is garbage collected by running vacuum. If you never run vacuum, you end up with a dead row for every single update you have ever made.

You need to run vacuum regularly to ensure that these discarded rows are recycled. If you are running 8.0+, turning on autovacuum in your postgresql.conf is probably the easiest way to ensure this is done frequently.
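As a sketch (assuming 8.1, where autovacuum is built in; earlier releases use the contrib pg_autovacuum daemon instead), the relevant postgresql.conf settings look roughly like this:

```
# postgresql.conf
autovacuum = on               # enable the autovacuum daemon
stats_start_collector = on    # autovacuum needs the stats collector...
stats_row_level = on          # ...with row-level statistics enabled
```

Until that is in place, a manual vacuum will reclaim the dead rows immediately, e.g.:

```sql
VACUUM VERBOSE dataprocessor_path;  -- reports how many dead row versions it removed
```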

Here are the 'explain' results... I just made the pwdelete_temp table
by doing a create pwdelete_temp as select * from dataprocessors, so
that new table runs flat out...

I have also tried doing a vacuum full analyse and reindex with no
change in performance. Dumping to a text file and reloading works, but
that is a bit too savage for something that has to be done frequently.

From what I can see, it looks like pg THINKS there are 284000 records
to scan through. How can I tell it to flush out the history of changes?

You need to run 'ANALYZE <tablename>' to update the table statistics. Enabling autovacuum will take care of this for you as well.
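Concretely, using the slow table from the plans you posted:

```sql
ANALYZE dataprocessor_path;  -- refresh the planner's row-count statistics for this table
```

Note that ANALYZE alone only fixes the estimates; it does not reclaim the dead rows themselves, which is what vacuum is for.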

transMET-MGU=# explain select * from pwdelete_temppaths;
                             QUERY PLAN
-----------------------------------------------------------------------
 Seq Scan on pwdelete_temppaths  (cost=0.00..11.40 rows=140 width=515)
(1 row)

transMET-MGU=# explain select * from dataprocessor_path;
                               QUERY PLAN
-----------------------------------------------------------------------------
 Seq Scan on dataprocessor_path  (cost=0.00..6900.17 rows=284617 width=92)
(1 row)


Please try running 'analyze' on the tables first, then rerun these queries as 'explain analyze' instead, so you can see the difference between what the planner expects and what it actually gets.
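For example:

```sql
ANALYZE dataprocessor_path;
EXPLAIN ANALYZE SELECT * FROM dataprocessor_path;
-- compare the planner's estimated "rows=" figure against the
-- "actual time=... rows=..." figure in the output; a large gap
-- means the statistics (or the table itself) still need attention
```

If the actual row count is close to 30 but the scan is still slow, the table is bloated with dead rows and needs a vacuum (or, once badly bloated, a one-off VACUUM FULL) rather than just fresh statistics.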

--
Seeya...Q

               -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

                          _____  /  Quinton Dolan - [EMAIL PROTECTED]
  __  __/  /   /   __/   /      /
     /    __  /   _/    /      /        Gold Coast, QLD, Australia
  __/  __/ __/ ____/   /   -  /            Ph: +61 419 729 806
                    _______  /
                            _\




