Re: [PERFORM] database size growing continously

2009-11-02 Thread Robert Haas
On Mon, Nov 2, 2009 at 7:50 AM, Peter Meszaros wrote: > Increasing max_fsm_pages can also be helpful, but I've read that > 'vacuum verbose ...' will issue warnings if max_fsm_pages is too small. > I've never seen such a message; this command either runs and finishes or > goes into an endless loop as it
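
For reference, the warning Peter is looking for only shows up at the end of a database-wide VACUUM VERBOSE run, in the free-space-map summary. A minimal way to check it on 8.3 (the wording of the output below is paraphrased, not quoted from the thread):

    -- Run from psql, preferably as superuser; the FSM summary is printed
    -- only after the last table when the whole database is vacuumed.
    VACUUM VERBOSE;

    -- 8.3 ends the output with a count of the page slots needed to track
    -- free space; if that number exceeds max_fsm_pages, a WARNING with a
    -- HINT to raise the setting is emitted.
    SHOW max_fsm_pages;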

Re: [PERFORM] database size growing continously

2009-11-02 Thread Anj Adu
I would recommend (if at all possible) partitioning the table and dropping the old partitions when they are no longer needed. This guarantees the space is freed without VACUUM overhead. Deletes will kill you at some point, and you don't want too much VACUUM I/O overhead impacting your performance. On Mon, Nov
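
The point about freeing space is that dropping a partition unlinks its data files immediately, whereas a DELETE only marks rows dead and leaves the space for VACUUM to reclaim later. A rough sketch of the difference, with hypothetical table names not taken from the thread:

    -- Deleting rows leaves dead tuples behind; the table keeps its
    -- on-disk size until VACUUM has processed them.
    DELETE FROM events WHERE created < now() - interval '20 days';

    -- Dropping an old partition removes its files outright, with no dead
    -- tuples to clean up and no VACUUM I/O.
    DROP TABLE events_2009_10_10;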

Re: [PERFORM] database size growing continously

2009-11-02 Thread Peter Meszaros
Thank you all for the fast responses! I changed the delete schedule from daily to hourly and I will let you know the result. This seems to be the most promising step. The next one is tuning 'max_fsm_pages'. Increasing max_fsm_pages can also be helpful, but I've read that 'vacuum verbose ...' wi

Re: [PERFORM] database size growing continously

2009-10-30 Thread Greg Stark
On Fri, Oct 30, 2009 at 1:18 PM, Jeremy Harris wrote: > So, on the becoming-more-intelligent front: PostgreSQL already does > some operations as background maintenance (autovacuum). Extending > this to de-bloat indices does not seem conceptually impossible. It could be done, but it's not easy bec

Re: [PERFORM] database size growing continously

2009-10-30 Thread Jeremy Harris
On 10/30/2009 08:01 PM, Greg Stark wrote: On Fri, Oct 30, 2009 at 12:53 PM, Anj Adu wrote: Any relational database worth its salt has partitioning for a reason. 1. Maintenance. You will need to delete data at some point (cleanup)... Partitions are the only way to do it effectively. This is t

Re: [PERFORM] database size growing continously

2009-10-30 Thread Anj Adu
Databases are designed to handle very large tables, but effectiveness is always in question. A full table scan on a partitioned table is always preferable to an FTS on a super-large table. The nature of the query will of course dictate performance, but you run into definite limitations with very larg

Re: [PERFORM] database size growing continously

2009-10-30 Thread Greg Stark
On Fri, Oct 30, 2009 at 12:53 PM, Anj Adu wrote: > Any relational database worth its salt has partitioning for a reason. > > 1. Maintenance. You will need to delete data at some > point (cleanup)... Partitions are the only way to do it effectively. This is true and it's unavoidably a manual proce

Re: [PERFORM] database size growing continously

2009-10-30 Thread Anj Adu
Any relational database worth its salt has partitioning for a reason. 1. Maintenance. You will need to delete data at some point (cleanup)... Partitions are the only way to do it effectively. 2. Performance. Partitioning offers a way to query smaller slices of data automatically (i.e. the query opt
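
The "query smaller slices automatically" point refers to constraint exclusion: when each child table carries a CHECK constraint on the partition key, the planner can skip children that cannot match the WHERE clause. A hedged sketch, with table, column, and date values invented for illustration:

    -- Off by default in 8.3; must be enabled for the planner to prune
    -- child tables based on their CHECK constraints.
    SET constraint_exclusion = on;

    -- If each child is constrained like
    --   CHECK (created >= DATE '2009-10-29' AND created < DATE '2009-10-30')
    -- then a range query only scans the children whose constraints overlap:
    SELECT count(*) FROM events
    WHERE created >= DATE '2009-10-29' AND created < DATE '2009-10-30';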

Re: [PERFORM] database size growing continously

2009-10-30 Thread Jeremy Harris
On 10/30/2009 12:43 PM, Merlin Moncure wrote: On Thu, Oct 29, 2009 at 11:40 AM, Steve Crawford wrote: Use a parent table and 20 child tables. Create a new child every day and drop the 20-day-old table. Table drops are far faster and lower-impact than a DELETE FROM against a 120-million-row table. Index-
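
On 8.3 this rotation would be built on table inheritance (declarative partitioning did not exist yet). A minimal sketch of Steve's scheme, with column names invented since the thread never shows the actual six columns:

    -- Parent table; rows are routed to the current day's child either by
    -- the application or by an INSERT trigger (not shown here).
    CREATE TABLE events (
        id       bigint,
        created  timestamptz NOT NULL,
        payload  text
    );

    -- Created once per day (e.g. from cron):
    CREATE TABLE events_2009_10_30 (
        CHECK (created >= DATE '2009-10-30' AND created < DATE '2009-10-31')
    ) INHERITS (events);
    CREATE INDEX events_2009_10_30_created_idx
        ON events_2009_10_30 (created);

    -- The 20-day-old child is simply dropped instead of being deleted from:
    DROP TABLE events_2009_10_10;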

Re: [PERFORM] database size growing continously

2009-10-30 Thread Merlin Moncure
On Thu, Oct 29, 2009 at 11:40 AM, Steve Crawford wrote: > Peter Meszaros wrote: >> >> Hi All, >> >> I use PostgreSQL 8.3.7 as a huge queue. There is a very simple table >> with six columns and two indices, and about 6 million records are >> written into it every day, continuously committed every 1

Re: [PERFORM] database size growing continously

2009-10-29 Thread Scott Marlowe
On Thu, Oct 29, 2009 at 8:44 AM, Peter Meszaros wrote: > Hi All, > > I use PostgreSQL 8.3.7 as a huge queue. There is a very simple table > with six columns and two indices, and about 6 million records are > written into it every day, continuously committed every 10 seconds from > 8 clients. The t

Re: [PERFORM] database size growing continously

2009-10-29 Thread Steve Crawford
Peter Meszaros wrote: Hi All, I use PostgreSQL 8.3.7 as a huge queue. There is a very simple table with six columns and two indices, and about 6 million records are written into it every day, continuously committed every 10 seconds from 8 clients. The table stores approximately 120 million recor

Re: [PERFORM] database size growing continously

2009-10-29 Thread Steve Crawford
Peter Meszaros wrote: Hi All, I use PostgreSQL 8.3.7 as a huge queue. There is a very simple table with six columns and two indices, and about 6 million records are written into it every day, continuously committed every 10 seconds from 8 clients. The table stores approximately 120 million recor

Re: [PERFORM] database size growing continously

2009-10-29 Thread Joshua D. Drake
On Thu, 2009-10-29 at 17:00 +0100, Ludwik Dylag wrote: > 2009/10/29 Peter Meszaros > Hi All, > > I use PostgreSQL 8.3.7 as a huge queue. There is a very simple > table > with six columns and two indices, and about 6 million records > are > w

Re: [PERFORM] database size growing continously

2009-10-29 Thread Chris Ernst
Hi Peter, Sounds like you're experiencing index bloat, and vacuums do nothing to help that. You can do one of two things to remedy this: 1) The fastest and simplest (but most disruptive) way is to use REINDEX. But this will exclusively lock the table while rebuilding the indexes: REINDEX TABLE p
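
The truncated command is presumably a REINDEX on the queue table. A short sketch of option 1 with a placeholder table name; the lock warning is the point:

    -- Rebuilds every index on the table from scratch, discarding bloat,
    -- but holds an exclusive lock, so the 8 writers will block meanwhile.
    REINDEX TABLE events;

    -- Index size before and after can be compared with (index name is
    -- hypothetical):
    SELECT pg_size_pretty(pg_relation_size('events_created_idx'));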

Re: [PERFORM] database size growing continously

2009-10-29 Thread Ludwik Dylag
2009/10/29 Peter Meszaros > Hi All, > > I use PostgreSQL 8.3.7 as a huge queue. There is a very simple table > with six columns and two indices, and about 6 million records are > written into it every day, continuously committed every 10 seconds from > 8 clients. The table stores approximately 12

Re: [PERFORM] database size growing continously

2009-10-29 Thread Matthew Wakeling
On Thu, 29 Oct 2009, Josh Rovero wrote: Do you ever "vacuum full" to reclaim empty record space? Unless you expect the size of the database to permanently decrease by a significant amount, that is a waste of time, and may cause bloat in indexes. In this case, since the space will be used agai

Re: [PERFORM] database size growing continously

2009-10-29 Thread Josh Rovero
On Thu, 2009-10-29 at 15:44 +0100, Peter Meszaros wrote: > Hi All, > > I use PostgreSQL 8.3.7 as a huge queue. There is a very simple table > with six columns and two indices, and about 6 million records are > written into it every day, continuously committed every 10 seconds from > 8 clients. The

Re: [PERFORM] database size growing continously

2009-10-29 Thread Ludwik Dylag
I would recommend increasing max_fsm_pages and shared_buffers. These changes did speed up vacuum full on my database. With shared_buffers, remember to increase max shm in your OS. Ludwik 2009/10/29 Peter Meszaros > Hi All, > > I use PostgreSQL 8.3.7 as a huge queue. There is a very simple table
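
Both of these are postgresql.conf settings and, on 8.3, both require a server restart. The values below are placeholders for illustration, not recommendations from the thread:

    # postgresql.conf -- illustrative values only
    max_fsm_pages  = 2000000   # must cover the page slots reported by a
                               # database-wide VACUUM VERBOSE
    shared_buffers = 512MB     # raising this may also require raising the
                               # kernel shared-memory limit, e.g. the
                               # kernel.shmmax sysctl on Linux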

[PERFORM] database size growing continously

2009-10-29 Thread Peter Meszaros
Hi All, I use PostgreSQL 8.3.7 as a huge queue. There is a very simple table with six columns and two indices, and about 6 million records are written into it every day, continuously committed every 10 seconds from 8 clients. The table stores approximately 120 million records, because a cron job d
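
For context, a table of roughly the shape described, together with the kind of nightly cleanup a cron job would run. The actual column and index definitions are never shown in the thread, so everything below is an illustrative guess:

    -- Six-column queue table with two indexes; ~6 million inserts/day,
    -- committed every 10 seconds by 8 writer processes.
    CREATE TABLE event_queue (
        id       bigserial,
        source   integer,
        kind     integer,
        created  timestamptz NOT NULL,
        status   smallint,
        payload  text
    );
    CREATE INDEX event_queue_created_idx ON event_queue (created);
    CREATE INDEX event_queue_source_idx  ON event_queue (source);

    -- Nightly cron cleanup keeping roughly the last 20 days (~120M rows):
    DELETE FROM event_queue WHERE created < now() - interval '20 days';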