In my experience, you don't want to store this stuff in the database.
In general, it will work fine until you have to VACUUM the
pg_largeobject table. Unless you have a very powerful I/O subsystem,
this VACUUM will kill your performance.
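As a rough sanity check before scheduling that VACUUM, the accumulated dead-tuple count on pg_largeobject can be inspected via the statistics views (a sketch; it assumes the stats collector is enabled, and the `n_live_tup`/`n_dead_tup` columns are only present on reasonably recent releases):

```sql
-- Approximate bloat on pg_largeobject; n_dead_tup hints at how much
-- work the next VACUUM will have to do on it.
SELECT relname, n_live_tup, n_dead_tup, last_vacuum
FROM pg_stat_all_tables
WHERE relname = 'pg_largeobject';
```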
> You're forgetting about cleanup and transactions. If yo
We've also experienced problems with VACUUM running for a long time.
A VACUUM on our pg_largeobject table, for example, can take over 24
hours to complete (pg_largeobject in our database has over 45 million
rows). With our other tables, we've been able to partition them
(using inheritance) to keep
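For anyone unfamiliar with the inheritance approach mentioned above, a minimal sketch looks like the following (table and column names are hypothetical; pg_largeobject itself is a system catalog and cannot be partitioned this way, which is why only the user tables benefit). Each child carries a CHECK constraint so the planner can skip irrelevant children, and each child can be VACUUMed independently:

```sql
-- Empty parent; all rows live in the children.
CREATE TABLE measurements (
    id         bigint,
    created_at timestamptz,
    payload    bytea
);

-- One child per date range, with a CHECK constraint describing it.
CREATE TABLE measurements_2005_q1 (
    CHECK (created_at >= '2005-01-01' AND created_at < '2005-04-01')
) INHERITS (measurements);

CREATE TABLE measurements_2005_q2 (
    CHECK (created_at >= '2005-04-01' AND created_at < '2005-07-01')
) INHERITS (measurements);

-- With constraint exclusion enabled, a query filtered on created_at
-- only scans the matching child; VACUUM can likewise target one child:
SET constraint_exclusion = on;
VACUUM ANALYZE measurements_2005_q1;
```

The win for VACUUM is that each child table is small, so no single VACUUM run has to churn through the whole data set at once.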
Thomas F. O'Connell:
>Do you have your Free Space Map settings configured appropriately?
Our current FSM settings are:
max_fsm_pages = 50 # min max_fsm_relations*16, 6 bytes each
max_fsm_relations = 1000 # min 100, ~50 bytes each
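For reference, the closing lines of a VACUUM VERBOSE run report how many pages contain reclaimable free space and whether the current FSM limits can track them all. A sketch of resized settings (the numbers here are purely illustrative, not a recommendation for this database):

```
# postgresql.conf sketch -- size the FSM from VACUUM VERBOSE output:
# max_fsm_pages should comfortably exceed the "total pages needed"
# figure VACUUM VERBOSE reports, and max_fsm_relations should exceed
# the number of tables and indexes in the cluster.
max_fsm_pages = 2000000    # 6 bytes each => ~12 MB of shared memory
max_fsm_relations = 2000   # ~50 bytes each
```

Undersized FSM settings mean freed space goes untracked between VACUUMs, so tables bloat and subsequent VACUUMs take even longer.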
> You'll want to run a VACUUM VERBOSE and note
Hello,
We have been experiencing poor performance of VACUUM in our production
database. Relevant details of our implementation are as follows:
1. We have a database that grows to about 100GB.
2. The database is a mixture of large and small tables.
3. Bulk data (stored primarily in pg_largeobj