2010/1/9 Nickolay
> Okay, I see your point with staging table. That's a good idea!
> The only problem I see here is the transfer-to-archive-table process. As
> you've correctly noticed, the system is kind of real-time and there can be
> dozens of processes writing to the staging table, I cannot
I would suggest:
1. turn off autovacuum
1a. optionally, tune the db for better performance for this kind of operation
(can't help with the specifics here)
2. restart database
3. drop all indexes
4. update
5. vacuum full table
6. create indexes
7. turn on autovacuum
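Something like this for steps 3-6 (a minimal sketch; the table "jobs" and its
index names are hypothetical placeholders, use your staging table's actual
names; autovacuum in steps 1, 2 and 7 is toggled in postgresql.conf, not SQL):

-- step 3: drop all indexes on the staging table
DROP INDEX jobs_created_idx;
DROP INDEX jobs_status_idx;
-- step 4: the bulk update itself
UPDATE jobs SET status = 'archived' WHERE status = 'done';
-- step 5: compact the table and reclaim the dead tuples
VACUUM FULL jobs;
-- step 6: recreate the indexes
CREATE INDEX jobs_created_idx ON jobs (created);
CREATE INDEX jobs_status_idx ON jobs (status);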
Ludwik
2009/10/29 Peter Meszaros
> Hi All,
>
> I use postgresql 8.3.7 as a huge queue. There is a very simple table
> with six columns and two indices, and about 6 million records are
> written into it every day, continuously committed every 10 seconds from
> 8 clients. The table stores approximately 12
I would recommend increasing max_fsm_pages (the free space map) and shared_buffers.
These changes sped up vacuum full on my database.
With shared_buffers, remember to also increase the max shared memory (SHMMAX) in your OS.
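For example (illustrative values only, the right numbers depend on your RAM
and workload; note max_fsm_pages exists only up to 8.3):

max_fsm_pages = 2000000    # free space map slots for tracking dead rows
shared_buffers = 512MB     # after raising this, also raise SHMMAX in the kernel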
Ludwik
Hi
I have a database and ~150 clients non-stop writing quite big pieces of text
to it.
I have a performance problem, so I tried to increase the log level so I could
see which queries take the most time.
My postgresql.conf (Log section) is:
log_destination = 'stderr'
logging_collector = on
log_ro
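(For context, the setting usually used to surface the slow statements is
log_min_duration_statement; the threshold below is only an example.)

log_min_duration_statement = 250   # log every statement running longer than 250 ms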
Hello
I have a database where I create a new table daily.
Every day ~3 million rows are inserted into it and each of them is updated
twice. The process lasts ~24 hours, so the db load is the same the whole
time. The total size of the table is ~3GB.
My current vacuum settings are:
autovacuum = on