2008/8/1 Matthew Wakeling <[EMAIL PROTECTED]>:
> On Thu, 31 Jul 2008, Andrzej Zawadzki wrote:
>>
>> Maybe I'm wrong, but if this "bulk insert and delete" process is cyclical,
>> then you don't need VACUUM FULL.
>> Released tuples will fill up again with fresh data the next day - after a
>> regular vacuum.
Yes, a regular manual vacuum will prevent the table from growing.
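The cycle described above can be sketched as follows; the table name and the `processed` predicate are hypothetical. The point is that a plain VACUUM (not VACUUM FULL) marks the deleted tuples' space as reusable without taking an exclusive lock:

```sql
-- End-of-day cleanup: bulk delete, then a plain (non-FULL) VACUUM.
DELETE FROM daily_batch WHERE processed;   -- remove yesterday's rows
VACUUM daily_batch;                        -- space marked reusable, no exclusive lock

-- The next day's bulk INSERT reuses the freed space,
-- so the table file stops growing after the first cycle.
```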
On Wed, 2008-07-30 at 23:58 +0200, Miernik wrote:
> I have a similar, but different situation, where I TRUNCATE a table with
> 60k rows every hour, and refill it with new rows. Would it be better
> (concerning bloat) to just DROP the table every hour and recreate it,
> than to TRUNCATE it? Or d
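For the hourly refill described above, the cycle might look like this (table and source names are hypothetical). TRUNCATE reclaims the space immediately and, unlike DROP, keeps the table's definition, indexes, and permissions in place:

```sql
BEGIN;
TRUNCATE hourly_cache;          -- reclaims all space at once, keeps the schema
INSERT INTO hourly_cache
    SELECT * FROM live_source;  -- refill with the new ~60k rows
COMMIT;
```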
Valentin Bogdanov <[EMAIL PROTECTED]> wrote:
> I am guessing that you are using DELETE to remove the 75,000
> unimportant rows. Change your batch job to CREATE a new table consisting
> only of the 5,000 important ones. You can use the "CREATE TABLE table_name AS
> select_statement" command. Then drop the old table.
On Wed, 2008-07-30 at 13:51 -0400, Tom Lane wrote:
> Huh? Vacuum doesn't block writes.
>
> regards, tom lane

Of course, you are correct. I was thinking of VACUUM FULL, which is
recommended for use when you're deleting the majority of rows in a
table.
http://ww
Mark Roberts <[EMAIL PROTECTED]> writes:
> On Wed, 2008-07-30 at 17:16 +0100, Matthew Wakeling wrote:
>> Anyway, surely it's much safer to just run VACUUM manually?
> Generally, you would think so. The problem comes from Vacuum blocking
> the application process' writes.
Huh? Vacuum doesn't block writes.
Dave North wrote:
[...]
I'd suggest re-tuning as follows:
1) Increase shared_buffers to 10,000, test. Things should be
a bit faster.
2) Increase checkpoint_segments to 30, test. What you want
to watch for here is whether there are periods where the server
seems to freeze for a couple of seconds.
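In postgresql.conf terms (8.1-era syntax, where shared_buffers is counted in 8 kB pages), the two changes suggested above would be:

```
# postgresql.conf -- values suggested above (PostgreSQL 8.1)
shared_buffers = 10000        # 8 kB pages each, roughly 80 MB (up from 2000)
checkpoint_segments = 30      # more WAL between checkpoints, so they run less often
```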
> -----Original Message-----
> From: Greg Smith [mailto:[EMAIL PROTECTED]]
> Sent: July 30, 2008 12:48 PM
> To: Dave North
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Database size Vs performance degradation
>
> On Wed, 30 Jul 2008, Dave North wrote
Dave North wrote:
-Original Message-
From: Richard Huxton [mailto:[EMAIL PROTECTED]
Well, that's pretty much the definition of bloat. Are you sure you're
vacuuming enough?
DN: Well, the auto-vac is kicking off pretty darn frequently...around
once every 2 minutes. However, you just
On Wed, 2008-07-30 at 17:16 +0100, Matthew Wakeling wrote:
> I believe this SQL snippet could cause data loss, because there is a
> period during which writes can be made to the old table that will not
> be copied to the new table.
It could indeed cause data loss.
> On a side note, I wou
On Wed, 30 Jul 2008, Dave North wrote:
One observation I've made on the DB system is the disk I/O seems
dreadfully slow...we're at around 75% I/O wait sometimes and the read
rates seem quite slow (hdparm says around 2.2MB/sec - 20MB/sec for
un-cached reads).
This is typically what happens when
On Wed, 30 Jul 2008, Craig James wrote:
You don't have to change the application. One of the great advantages of
Postgres is that even table creation, dropping and renaming are
transactional. So you can do the select / drop / rename as a transaction by
an external app, and your main application
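A sketch of that transactional swap, with an explicit lock added to close the write window the earlier messages worried about; the table name and the `keep` predicate are hypothetical:

```sql
BEGIN;
-- Block writers so no rows land in the old table after the copy starts.
LOCK TABLE session_data IN ACCESS EXCLUSIVE MODE;
CREATE TABLE session_data_new AS
    SELECT * FROM session_data WHERE keep;   -- the ~5,000 rows worth saving
DROP TABLE session_data;
ALTER TABLE session_data_new RENAME TO session_data;
COMMIT;   -- other sessions see the old or the new table, never a gap
```

Note that CREATE TABLE ... AS does not copy indexes, defaults, or permissions, so those would need to be recreated inside the same transaction.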
On Wed, 2008-07-30 at 10:02 -0500, Dave North wrote:
> Thank you for the suggestion..much appreciated. Alas, I don't think
> this will be possible without a change to the application but it's a
> good idea nonetheless.
Affirmative, Dave. I read you.
If I were in your situation (not having access
Dave North wrote:
Thank you for the suggestion..much appreciated. Alas, I don't think
this will be possible without a change to the application but it's a
good idea nonetheless.
I assume you mean the "create table as select ..." suggestion (don't forget to
include a little quoted material so
> Subject: [PERFORM] Database size Vs performance degradation
> To: pgsql-performance@postgresql.org
> Date: Wednesday, 30 July, 2008, 1:09 PM
> Morning folks,
> Long time listener, first time poster. Having an
> interesting problem related to p
On Wed, 30 Jul 2008, Dave North wrote:
> Running on HP DL380 w/ 4GB RAM, dual 10K HDDs in RAID 0+1
> Checking the stats, the DB size is around 7.5GB;
Doesn't fit in RAM.
> ...after the load, the DB size was around 2.7GB
Does fit in RAM.
> One observation I've made on the DB system is the disk
Dave North wrote:
> Morning folks,
> Long time listener, first time poster.
Hi Dave
> Postgres 8.1.8
> shared_buffers = 2000
> max_fsm_pages = 40
> Redhat Enterprise 4
> Running on HP DL380 w/ 4GB RAM, dual 10K HDDs in RAID 0+1
> Also running on the server is a tomcat web server a
Morning folks,
Long time listener, first time poster. Having an interesting
problem related to performance which I'll try and describe below and
hopefully get some enlightenment. First the environment:
Postgres 8.1.8
shared_buffers = 2000
max_fsm_pages = 40
Redhat Enterprise 4