On Mon, Nov 2, 2009 at 7:50 AM, Peter Meszaros wrote:
> Increasing max_fsm_pages can also be helpful, but I've read that
> 'vacuum verbose ...' will issue warnings if max_fsm_pages is too small.
> I've never seen such a message; this command either runs and finishes or
> goes into an endless loop as it
I would recommend (if at all possible) partitioning the table and
dropping the old partitions when they are no longer needed. This
guarantees the space is freed without VACUUM overhead. Deletes will kill
you at some point, and you don't want too much VACUUM I/O overhead
impacting your performance.
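
A minimal sketch of what that could look like on 8.3 (which has only
inheritance-based partitioning, not declarative partitioning); the table
and column names here are hypothetical, not from the original post:

  -- Parent table plus one child per day; each child carries a CHECK
  -- constraint describing the slice of data it holds.
  CREATE TABLE events (
      id       bigint      NOT NULL,
      created  timestamptz NOT NULL,
      payload  text
  );

  CREATE TABLE events_20091102 (
      CHECK (created >= '2009-11-02' AND created < '2009-11-03')
  ) INHERITS (events);

  -- Reclaiming space is then a catalog operation, not a DELETE:
  DROP TABLE events_20091013;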
Thank you all for the fast responses!
I changed the delete schedule from daily to hourly and I will let you
know the result. This seems to be the most promising step.
The next one is tuning 'max_fsm_pages'.
Increasing max_fsm_pages can also be helpful, but I've read that
'vacuum verbose ...' will issue warnings if max_fsm_pages is too small.
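
For the hourly schedule, a sketch of the kind of crontab entry involved
(the database, table, and retention window are assumptions carried over
from the hypothetical example above, not details from the thread):

  # Hypothetical: run the cleanup at the top of every hour instead of daily.
  0 * * * * psql -d mydb -c "DELETE FROM events WHERE created < now() - interval '20 days'"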
On Fri, Oct 30, 2009 at 1:18 PM, Jeremy Harris wrote:
> So, on the becoming more intelligent front: PostgreSQL already does
> some operations as background maintenance (autovacuum). Extending
> this to de-bloat indices does not seem conceptually impossible
It could be done, but it's not easy because
On 10/30/2009 08:01 PM, Greg Stark wrote:
> On Fri, Oct 30, 2009 at 12:53 PM, Anj Adu wrote:
>> Any relational database worth its salt has partitioning for a reason.
>> 1. Maintenance. You will need to delete data at some
>> point (cleanup)... Partitions are the only way to do it effectively.
> This is true and it's unavoidably a manual process
Databases are designed to handle very large tables, but effectiveness
is always in question. A full table scan on a partitioned table is
always preferable to an FTS on a super-large table. The nature of the
query will of course dictate performance, but you run into definite
limitations with very large tables.
On Fri, Oct 30, 2009 at 12:53 PM, Anj Adu wrote:
> Any relational database worth its salt has partitioning for a reason.
>
> 1. Maintenance. You will need to delete data at some
> point (cleanup)... Partitions are the only way to do it effectively.
This is true and it's unavoidably a manual process
Any relational database worth its salt has partitioning for a reason.
1. Maintenance. You will need to delete data at some
point (cleanup)... Partitions are the only way to do it effectively.
2. Performance. Partitioning offers a way to query smaller slices of
data automatically (i.e. the query optimizer can skip partitions that
cannot match the query).
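
On 8.3 that automatic slicing comes from constraint exclusion; a brief
sketch, reusing the hypothetical events children from earlier:

  -- With constraint_exclusion enabled, the planner proves from each
  -- child's CHECK constraint whether that child can be skipped.
  SET constraint_exclusion = on;

  EXPLAIN SELECT count(*)
  FROM events
  WHERE created >= '2009-11-01' AND created < '2009-11-02';
  -- Expected: only events_20091101 is scanned, not all 20 children.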
On 10/30/2009 12:43 PM, Merlin Moncure wrote:
> On Thu, Oct 29, 2009 at 11:40 AM, Steve Crawford wrote:
>> Use a parent table and 20 child tables. Create a new child every day and
>> drop the 20-day-old table. Table drops are far faster and lower-impact
>> than a delete from a 120-million-row table. Index-
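
A sketch of that daily rotation, with hypothetical names carried over
from the earlier example:

  -- Each day: create tomorrow's child and drop the 20-day-old one.
  CREATE TABLE events_20091103 (
      CHECK (created >= '2009-11-03' AND created < '2009-11-04')
  ) INHERITS (events);

  DROP TABLE events_20091014;
  -- A DROP is a near-instant catalog change that returns disk space
  -- immediately, unlike a DELETE that leaves dead tuples for VACUUM.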
On Thu, Oct 29, 2009 at 11:40 AM, Steve Crawford wrote:
> Peter Meszaros wrote:
>>
>> Hi All,
>>
>> I use postgresql 8.3.7 as a huge queue. There is a very simple table
>> with six columns and two indices, and about 6 million records are
>> written into it every day, continuously committed every 10 seconds from
On Thu, Oct 29, 2009 at 8:44 AM, Peter Meszaros wrote:
> Hi All,
>
> I use postgresql 8.3.7 as a huge queue. There is a very simple table
> with six columns and two indices, and about 6 million records are
> written into it every day, continuously committed every 10 seconds from
> 8 clients. The table stores approximately 120 million records
Peter Meszaros wrote:
Hi All,
I use postgresql 8.3.7 as a huge queue. There is a very simple table
with six columns and two indices, and about 6 million records are
written into it every day, continuously committed every 10 seconds from
8 clients. The table stores approximately 120 million records
On Thu, 2009-10-29 at 17:00 +0100, Ludwik Dylag wrote:
> 2009/10/29 Peter Meszaros
> Hi All,
>
> I use postgresql 8.3.7 as a huge queue. There is a very simple table
> with six columns and two indices, and about 6 million records are
> written into it every day
Hi Peter,
Sounds like you're experiencing index bloat, and vacuums do nothing to
help that. You can do one of 2 things to remedy this:
1) The fastest and simplest (but most disruptive) way is to use REINDEX.
But this will exclusively lock the table while rebuilding the indexes:
REINDEX TABLE p
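
The table name above is cut off; a hedged illustration with a
hypothetical name follows, plus a lock-avoiding alternative (my
assumption, since the message's second option is truncated away) that
rebuilds an index with CREATE INDEX CONCURRENTLY:

  -- Option 1: simple, but takes an exclusive lock on the table.
  REINDEX TABLE events;

  -- Alternative: build a replacement index without blocking writes,
  -- then swap it in (index names are hypothetical).
  CREATE INDEX CONCURRENTLY events_created_idx_new ON events (created);
  DROP INDEX events_created_idx;
  ALTER INDEX events_created_idx_new RENAME TO events_created_idx;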
2009/10/29 Peter Meszaros
> Hi All,
>
> I use postgresql 8.3.7 as a huge queue. There is a very simple table
> with six columns and two indices, and about 6 million records are
> written into it every day, continuously committed every 10 seconds from
> 8 clients. The table stores approximately 120 million records
On Thu, 29 Oct 2009, Josh Rovero wrote:
> Do you ever "vacuum full" to reclaim empty record space?
Unless you expect the size of the database to permanently decrease by a
significant amount, that is a waste of time, and may cause bloat in
indexes. In this case, since the space will be used again
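
To make that distinction concrete (table name hypothetical, behaviour as
of 8.3):

  -- Plain VACUUM marks dead rows as reusable free space, without an
  -- exclusive lock; the file does not shrink, but new rows reuse it.
  VACUUM ANALYZE events;

  -- VACUUM FULL compacts the table under an exclusive lock and, on 8.3,
  -- tends to bloat indexes; only worth it if the table will stay smaller.
  VACUUM FULL events;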
On Thu, 2009-10-29 at 15:44 +0100, Peter Meszaros wrote:
> Hi All,
>
> I use postgresql 8.3.7 as a huge queue. There is a very simple table
> with six columns and two indices, and about 6 million records are
> written into it every day, continuously committed every 10 seconds from
> 8 clients. The table stores approximately 120 million records
I would recommend increasing max_fsm_pages and shared_buffers.
These changes sped up vacuum full on my database.
When raising shared_buffers, remember to increase the maximum shared
memory (shmmax) in your OS.
Ludwik
2009/10/29 Peter Meszaros
> Hi All,
>
> I use postgresql 8.3.7 as a huge queue. There is a very simple table
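
A sketch of where those knobs live (the values are placeholders, not
recommendations from the thread; size max_fsm_pages from what VACUUM
VERBOSE reports):

  # postgresql.conf (8.3) -- illustrative values only
  max_fsm_pages  = 2000000   # must cover all pages with reusable free space
  shared_buffers = 1GB       # may exceed the kernel's shared memory limit

  # e.g. on Linux, raise the kernel cap if PostgreSQL then fails to start:
  #   sysctl -w kernel.shmmax=<bytes>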
Hi All,
I use postgresql 8.3.7 as a huge queue. There is a very simple table
with six columns and two indices, and about 6 million records are
written into it every day, continuously committed every 10 seconds from
8 clients. The table stores approximately 120 million records, because a
cron job deletes records older than 20 days
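
The schema is not given in the thread; a hypothetical table matching the
description (six columns, two indices; all names and types are assumed)
could look like:

  CREATE TABLE queue (
      id       bigserial,
      source   integer     NOT NULL,  -- one of the 8 writing clients
      created  timestamptz NOT NULL DEFAULT now(),
      kind     smallint    NOT NULL,
      status   smallint    NOT NULL,
      payload  text
  );
  CREATE INDEX queue_created_idx ON queue (created);
  CREATE INDEX queue_source_idx  ON queue (source);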