Allan,
Postgres is probably not the ideal solution to this problem. If you'd like to try this though, two points:
- If the table really only has 20 rows, drop the index: with only 20 active rows at a
time, the planner will never use it.
(Run EXPLAIN on your query to check whether it uses the index; if not, the index is only slowing you down. See the sketch below this list.)
- As said before, VACUUM frequently, maybe even every 10 seconds (experiment with different intervals). Every UPDATE leaves a dead row version behind, so at 160 updates per second your 20-row table accumulates roughly 48,000 dead tuples in 5 minutes; frequent vacuuming is what keeps it small. A sample command also follows below.
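
For example, a rough check in psql (the WHERE values here are made up; substitute a real pointnum/parameter pair from your workload):

    EXPLAIN SELECT value, dt
      FROM lastscan
     WHERE pointnum = 42 AND parameter = 'temp';
    -- A plan showing "Seq Scan on lastscan" means lsindex is never used;
    -- in that case it is safe to drop it:
    -- DROP INDEX lsindex;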
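
And a sketch of the frequent vacuum, assuming you can schedule it every 10 seconds or so from cron or a small script loop:

    -- Run this on an interval (e.g. every 10 seconds) while the updates run:
    VACUUM ANALYZE lastscan;
    -- Plain VACUUM reclaims the dead row versions; ANALYZE also refreshes
    -- the planner's statistics, which your "vacuum analyse" already showed helps.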
Paul Tillotson
I have a small table of about 20 rows (a constant number) that receives about 160 updates per second. The table is used to share gathered data with other processes asynchronously. After 5 minutes the rate drops to about 12 updates per second; performance returns after a VACUUM ANALYZE.
I'm using 7.4.5.
This is the table structure:
Table "public.lastscan"
  Column   |            Type             | Modifiers
-----------+-----------------------------+-----------
 pointnum  | integer                     | not null
 parameter | character varying(8)        | not null
 value     | double precision            | not null
 dt        | timestamp without time zone | not null
Indexes:
"lsindex" btree (pointnum, parameter)