On Wed, Jul 23, 2008 at 4:50 PM, Richard Huxton <[EMAIL PROTECTED]> wrote:

> Guillaume Bog wrote:
>
>> It seems I'm a bit stuck here. I'd appreciate some help. My main general
>> question is "how to handle very small but hot status table that has to be
>> updated every 30 seconds by 100 different persons, read and updated from
>> many sides, and also joined with some more common tables (i.e. much larger
>> but less hot)"
>>
>
> Remove all indexes except the one backing the primary-key. Run a VACUUM
>  FULL and REINDEX or CLUSTER the table. Vacuum *very frequently* - you'll
> want custom values in pg_autovacuum. Add indexes back one at a time to see
> what's really necessary. If you can keep the dead rows to a reasonable
> level, I'd have thought you could get by without indexes.
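For reference, the suggested shrink-and-tune steps might look like this on the 8.x series, where per-table autovacuum settings live in the pg_autovacuum system catalog (the table name "user_status" and the threshold values are hypothetical, just illustrating aggressive vacuuming of a tiny hot table):

```sql
-- One-off: reclaim dead-row bloat and rebuild the remaining index.
VACUUM FULL user_status;
REINDEX TABLE user_status;

-- Ongoing: vacuum this table far more often than the global defaults.
-- Columns per the 8.3 pg_autovacuum catalog; -1 means "use the default".
INSERT INTO pg_autovacuum
    (vacrelid, enabled,
     vac_base_thresh, vac_scale_factor,
     anl_base_thresh, anl_scale_factor,
     vac_cost_delay, vac_cost_limit,
     freeze_min_age, freeze_max_age)
VALUES ('user_status'::regclass, true,
        50, 0.0,      -- vacuum after ~50 dead rows, regardless of table size
        50, 0.0,      -- analyze on the same schedule
        0, -1,        -- no cost-based delay for such a small table
        -1, -1);
```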


Yes, for such a small, very frequently updated table, no index at all makes
sense. I understand that I may have many dead rows, which would explain the
fact that indexes do speed up the selects. We have a daily VACUUM FULL on
the database, but that may not be enough. I'll check tomorrow whether a
verbose vacuum tells me that many rows are dead. The problem is that I need
to work directly on the production server, since everything goes very well
when there are not enough people actually working.
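The dead-row check described above could be done either with a verbose vacuum or, on 8.3, without vacuuming at all via the statistics collector (again assuming a hypothetical table name "user_status"):

```sql
-- Reports removed/remaining row versions per table and index.
VACUUM VERBOSE user_status;

-- Or just ask the stats collector for the live/dead tuple counts.
SELECT n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'user_status';
```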



> You might want to consider setting synchronous_commit=off for updates to
> the table. I'm assuming the information in the table isn't vital in the
> event of a system crash, and that could reduce WAL activity if you're
> limited by disk bandwidth.
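synchronous_commit (new in 8.3) can be turned off per session or per transaction, so the relaxation can be limited to just the status updates; a sketch, with a hypothetical table and columns:

```sql
BEGIN;
-- Only this transaction's COMMIT skips waiting for the WAL flush;
-- a crash can lose the last few commits, but cannot corrupt the table.
SET LOCAL synchronous_commit = off;
UPDATE user_status SET last_seen = now() WHERE user_id = 42;
COMMIT;
```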


Yes, this table's data is very short-lived and can be lost without problem
in case of a crash. I could even do with no WAL at all for this table, if
that were possible. In my mind, this data should be stored and modified in a
fixed chunk of RAM and never touch the hard drive.
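What is described here is essentially what later PostgreSQL releases (9.1 and up) offer as unlogged tables: no WAL is written for them, and they are truncated on crash recovery, which matches "short-lived, loseable" status data. A sketch, with a hypothetical schema; at the time of this thread the closest approximation was a tablespace on a RAM disk:

```sql
-- Unlogged: skips WAL entirely; contents are lost (truncated) after a crash.
CREATE UNLOGGED TABLE user_status (
    user_id   integer PRIMARY KEY,
    status    text,
    last_seen timestamptz NOT NULL DEFAULT now()
);
```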

Thanks for your help. I'll try your suggestions tomorrow.
