Andrew Sullivan wrote:
The vacuum delay stuff that you're working on may help, but I can't
really believe it's your salvation if this is happening after only a
few minutes. No matter how much you're doing inside those functions,
you surely can't be causing so many dead tuples that a vacuum is
necessary ...
Arthur Ward wrote:
Jan's vacuum-delay-only patch that nobody can find is here:
http://archives.postgresql.org/pgsql-hackers/2003-11/msg00518.php
I've been using it in testing & production without any problems.
Great to know -- many thanks.
I've hacked my own vacuum-delay-only patch from Jan's al...
Andrew Sullivan wrote:
Sorry I haven't had a chance to reply to this sooner.
The vacuum delay stuff that you're working on may help, but I can't
really believe it's your salvation if this is happening after only a
few minutes. No matter how much you're doing inside those functions,
you surely can't ...
Sorry I haven't had a chance to reply to this sooner.
On Fri, Mar 12, 2004 at 05:38:37PM -0800, Joe Conway wrote:
> The problem is this: the application runs an insert that fires off a
> trigger, which cascades into a fairly complex series of functions that
> do a bunch of calculations, inserts, updates, and deletes. ...
> The problem with Jan's more complex version of the patch (at least the
> one I found - perhaps not the right one) is it includes a bunch of other
> experimental stuff that I'd not want to mess with at the moment. Would
> changing the input units (for the original patch) from milli-secs to
> micro-secs ...
Matthew T. O'Connor wrote:
If memory serves, the problem is that you actually sleep 10ms even when
you set it to 1. One of the things changed in Jan's later patch was the
ability to specify how many pages to work on before sleeping, rather
than how long to sleep in between every page. You might b...
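A minimal sketch of the delay mechanism under discussion (not Jan's or Tom's actual patch; the knob names below are invented for illustration): vacuum calls a per-page hook, and after every N pages it naps for a configurable interval. On many platforms select()/usleep rounds short sleeps up to the scheduler tick (typically 10ms), which is why setting the delay to 1 still costs about 10ms per nap and why a pages-per-nap knob matters.

#include <sys/select.h>
#include <sys/time.h>

/* Illustrative knobs -- hypothetical names, not the patch's actual settings. */
static int vacuum_nap_msec      = 10;   /* sleep length per nap         */
static int vacuum_pages_per_nap = 1;    /* pages processed between naps */

/* Sleep for roughly msec milliseconds using select(); the kernel may round
 * this up to its scheduler tick, so a 1ms request often sleeps ~10ms anyway. */
static void
vacuum_nap(int msec)
{
    struct timeval tv;

    tv.tv_sec = msec / 1000;
    tv.tv_usec = (msec % 1000) * 1000;
    (void) select(0, NULL, NULL, NULL, &tv);
}

/* Hypothetical hook: call once per heap page vacuumed. */
void
vacuum_delay_point(void)
{
    static int pages_since_nap = 0;

    if (vacuum_nap_msec <= 0)
        return;                 /* delay feature disabled */

    if (++pages_since_nap >= vacuum_pages_per_nap)
    {
        vacuum_nap(vacuum_nap_msec);
        pages_since_nap = 0;
    }
}

With the effective sleep stuck at ~10ms, raising the pages-per-nap knob (the extra setting in Jan's later patch) is the practical way to trade vacuum speed against I/O impact without fighting timer granularity.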
Joe Conway <[EMAIL PROTECTED]> writes:
> I have tested Tom's original patch now. The good news -- it works great
> in terms of reducing the load imposed by vacuum -- almost to the level
> of being unnoticeable. The bad news -- in a simulation test which loads
> an hour's worth of data, even with ...
On Tue, 2004-03-16 at 23:49, Joe Conway wrote:
I have tested Tom's original patch now. The good news -- it works great
in terms of reducing the load imposed by vacuum -- almost to the level
of being unnoticeable. The bad news -- in a simulation test which loads
an hour's worth of data, even with ...
Tom Lane wrote:
Joe Conway <[EMAIL PROTECTED]> writes:
Any idea where I can get my hands on the latest version? I found the
original post from Tom, but I thought there was a later version with
both number of pages and time to sleep as knobs.
That was as far as I got. I think Jan posted a more complex ...
Matthew T. O'Connor wrote:
Strange... I wonder if this is some integer overflow problem. There was
one reported recently and fixed as of CVS head yesterday; you might try
that. However, without the -d2 output I'm only guessing at why
pg_autovacuum is vacuuming so much / so often.
I'll see what I ...
Tom Lane wrote:
Joe Conway <[EMAIL PROTECTED]> writes:
Any idea where I can get my hands on the latest version? I found the
original post from Tom, but I thought there was a later version with
both number of pages and time to sleep as knobs.
That was as far as I got. I think Jan posted a more complex ...
Joe Conway wrote:
Yeah, I'm sure. Snippets from the log:
[...lots-o-tables...]
[2004-03-14 12:44:48 PM] added table: specdb."public"."parametric_states"
[2004-03-14 12:49:48 PM] Performing: VACUUM ANALYZE "public"."transaction_data"
[2004-03-14 01:29:59 PM] Performing: VACUUM ANALYZE "public"." ...
Joe Conway <[EMAIL PROTECTED]> writes:
> Any idea where I can get my hands on the latest version? I found the
> original post from Tom, but I thought there was a later version with
> both number of pages and time to sleep as knobs.
That was as far as I got. I think Jan posted a more complex version ...
Matthew T. O'Connor wrote:
I think you understand correctly. A table with 1,000,000 rows should
get vacuumed approx every 2,000,000 changes (assuming default values for
-V). FYI, an insert and a delete each count as one change, but an update
counts as two.
Unfortunately, running with -d2 wou...
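To make the arithmetic concrete, here is a rough sketch of the threshold logic being described (not pg_autovacuum's actual code; the base threshold of 1000 is an assumed default): vacuum fires once the accumulated changes reach base + scaling_factor * reltuples, where inserts and deletes each count once and updates count twice. With -V 2, a 1,000,000-row table needs about 2,000,000 changes, and the 56-million-change figure that comes up elsewhere in this thread would correspond to a table of roughly 28 million rows.

#include <stdio.h>

/* Hypothetical mirrors of pg_autovacuum's -v (base) and -V (scaling) knobs;
 * the base value of 1000 is an assumption for illustration. */
static const double vacuum_base_threshold = 1000.0;
static const double vacuum_scaling_factor = 2.0;

/* Inserts and deletes each count as one change; an update counts as two. */
static double
changes(long n_ins, long n_upd, long n_del)
{
    return (double) n_ins + (double) n_del + 2.0 * (double) n_upd;
}

static int
needs_vacuum(double reltuples, long n_ins, long n_upd, long n_del)
{
    double threshold = vacuum_base_threshold + vacuum_scaling_factor * reltuples;

    return changes(n_ins, n_upd, n_del) >= threshold;
}

int
main(void)
{
    /* 1,000,000-row table: threshold is about 2,001,000 changes. */
    printf("1.9M inserts      -> vacuum? %d\n",
           needs_vacuum(1000000.0, 1900000, 0, 0));
    printf("1M upd + 0.1M ins -> vacuum? %d\n",
           needs_vacuum(1000000.0, 100000, 1000000, 0));
    return 0;
}

If a table is being vacuumed long before its computed threshold, as reported here, the formula itself is an unlikely culprit; a bug in the change counting (such as the integer overflow mentioned elsewhere in the thread) would fit the symptoms better.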
Joe Conway wrote:
A few pg_autovacuum questions came out of this:
First, the default vacuum scaling factor is 2, which I think implies
the big table should only get vacuumed every 56 million or so changes.
I didn't come anywhere near that volume in my tests, yet the table did
get vacuumed
Joe Conway wrote:
Tom Lane wrote:
Just to be clear on this: you have to restart the postmaster to bring
the time back down? Simply starting a fresh backend session doesn't do
it?
IIRC, shared buffers was reasonable, maybe 128MB. One thing that is
worthy of note is that they are using pg_autovacuum ...
Joe,
> IIRC, shared buffers was reasonable, maybe 128MB. One thing that is
> worthy of note is that they are using pg_autovacuum and a very low
> vacuum_mem setting (1024). But I also believe that max_fsm_relations and
> max_fsm_pages have been bumped up from default (something like 1 & ...
Marty Scholes wrote:
I have seen similar results to what you are describing.
I found that running a full vacuum:
vacuumdb -fza
followed by a checkpoint makes it run fast again.
Try timing the update with and without a full vacuum.
Will do. I'll let you know how it goes.
Thanks for the reply.
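As a minimal libpq sketch of that maintenance sequence, roughly what vacuumdb -f -z does for a single database followed by an explicit checkpoint; the connection string is a placeholder and CHECKPOINT requires superuser privileges:

#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

/* Run one command and stop if it fails. */
static void
run(PGconn *conn, const char *sql)
{
    PGresult *res = PQexec(conn, sql);

    if (PQresultStatus(res) != PGRES_COMMAND_OK)
    {
        fprintf(stderr, "%s failed: %s", sql, PQerrorMessage(conn));
        PQclear(res);
        PQfinish(conn);
        exit(1);
    }
    PQclear(res);
}

int
main(void)
{
    /* Placeholder connection string -- adjust dbname/host/user as needed. */
    PGconn *conn = PQconnectdb("dbname=specdb");

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    run(conn, "VACUUM FULL ANALYZE");   /* per-database equivalent of -f -z */
    run(conn, "CHECKPOINT");            /* force a checkpoint afterwards    */

    PQfinish(conn);
    return 0;
}

Note that vacuumdb -fza covers every database in the cluster; the sketch above handles just one, so timing the problem update before and after it only tells you about that database.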
Joe Conway <[EMAIL PROTECTED]> writes:
> ... Immediately
> after a postmaster restart, the first insert or two take about 1.5
> minutes (undoubtedly this could be improved, but it isn't the main
> issue). However, by the second or third insert, the time increases to
> 7-9 minutes. Restarting t...
Joe Conway <[EMAIL PROTECTED]> writes:
> The problem is this: the application runs an insert that fires off a
> trigger, which cascades into a fairly complex series of functions that
> do a bunch of calculations, inserts, updates, and deletes. Immediately
> after a postmaster restart, the first ...
Six days ago I installed Pg 7.4.1 on Sparc Solaris 8 also. I am hopeful
that we as well can migrate a bunch of our apps from Oracle.
After doing some informal benchmarks and performance testing for the
past week I am becoming more and more impressed with what I see.
I have seen similar results to what you are describing ...