Euler Taveira de Oliveira writes:
> Alvaro Herrera escreveu:
>> Well, the problem is precisely how to size the list. I don't like the
>> idea of keeping an arbitrary number in memory; it adds another
>> mostly-useless tunable that we'll need to answer questions about for all
>> eternity.
Is it s
Alvaro Herrera escreveu:
> Euler Taveira de Oliveira escribió:
>> Alvaro Herrera escreveu:
>>> This could be solved if the workers kept the whole history of tables
>>> that they have vacuumed. Currently we keep only a single table (the one
>>> being vacuumed right now). I proposed writing these h
Euler Taveira de Oliveira escribió:
> Alvaro Herrera escreveu:
> > This could be solved if the workers kept the whole history of tables
> > that they have vacuumed. Currently we keep only a single table (the one
> > being vacuumed right now). I proposed writing these history files back
> > when w
Alvaro Herrera escreveu:
> This could be solved if the workers kept the whole history of tables
> that they have vacuumed. Currently we keep only a single table (the one
> being vacuumed right now). I proposed writing these history files back
> when workers were first implemented, but the idea wa
Tom Lane escribió:
> Alvaro Herrera writes:
> > Martin Pihlak escribió:
> >> [ patch to fool with stats refresh logic in autovac ]
>
> (1) I still don't understand why we don't just make the launcher request
> a new stats file once per naptime cycle, and then allow the workers to
> work from that
Alvaro Herrera writes:
> Martin Pihlak escribió:
>> [ patch to fool with stats refresh logic in autovac ]
(1) I still don't understand why we don't just make the launcher request
a new stats file once per naptime cycle, and then allow the workers to
work from that.
(2) The current code in autova
Martin Pihlak escribió:
> Alvaro Herrera wrote:
> > I agree that pgstats is not ideal (we've said this from the very
> > beginning), but I doubt that updating pg_class is the answer; you'd be
> > generating thousands of dead tuples there.
>
> But we already do update pg_class after vacuum -- in va
Alvaro Herrera wrote:
> I agree that pgstats is not ideal (we've said this from the very
> beginning), but I doubt that updating pg_class is the answer; you'd be
> generating thousands of dead tuples there.
>
But we already do update pg_class after vacuum -- in vac_update_relstats().
Hmm, that pe
Alvaro Herrera wrote:
Martin Pihlak escribió:
Alvaro Herrera wrote:
You missed putting back the BUG comment that used to be there about
this.
This was deliberate, I did mention the condition in the comment at
the beginning of the file. This actually makes it a feature :)
Seriously though, do
Martin Pihlak escribió:
> Alvaro Herrera wrote:
> > You missed putting back the BUG comment that used to be there about
> > this.
>
> This was deliberate, I did mention the condition in the comment at
> the beginning of the file. This actually makes it a feature :)
>
> Seriously though, do you th
Alvaro Herrera wrote:
> You missed putting back the BUG comment that used to be there about
> this.
>
This was deliberate, I did mention the condition in the comment at
the beginning of the file. This actually makes it a feature :)
Seriously though, do you think that this is still a problem? Giv
I wrote:
> I was thinking that the launcher should only request fresh stats at wakeup,
> the workers could then reuse that file. This could be implemented by calling
> pgstat_clear_snapshot only at launcher wakeup and setting max stats age to
> autovacuum_naptime for the workers.
>
Attached is
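The freshness rule being described can be modelled in a few lines: the launcher insists on a brand-new file at wakeup, while workers accept anything younger than autovacuum_naptime so they can reuse the copy the launcher already requested. A standalone sketch, with the enum and helper names as illustrative assumptions rather than the patch's actual code:

/*
 * Standalone model (plain C) of the stats-age rule described above.
 */
#include <stdbool.h>

static int autovacuum_naptime = 60;     /* seconds; the GUC's default */

typedef enum { AV_LAUNCHER, AV_WORKER } AvRole;

/* How old (in seconds) a stats file may be and still be acceptable. */
static int
acceptable_stats_age(AvRole role)
{
    if (role == AV_LAUNCHER)
        return 0;                       /* force a freshly written file */
    return autovacuum_naptime;          /* workers reuse the launcher's file */
}

static bool
stats_file_usable(AvRole role, long file_ts, long now)
{
    return (now - file_ts) <= acceptable_stats_age(role);
}

Combined with clearing the stats snapshot only at launcher wakeup, this keeps the workers from each forcing their own rewrite of the file.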
Martin Pihlak escribió:
> I wrote:
> > I was thinking that the launcher should only request fresh stats at wakeup,
> > the workers could then reuse that file. This could be implemented by calling
> > pgstat_clear_snapshot only at launcher wakeup and setting max stats age to
> > autovacuum_naptim
Tom Lane wrote:
> I never understood why autovacuum should need a particularly short fuse
> on the stats file age to start with. If the launcher is launching
> multiple workers into the same database with only a few milliseconds
> between them, isn't the launcher pretty broken anyhow? ISTM that s
Martin Pihlak writes:
> As I understand it, the autovacuum workers need up-to-date stats to minimize the
> risk of re-vacuuming a table (in case it was already vacuumed by someone
> else).
I never understood why autovacuum should need a particularly short fuse
on the stats file age to start with. I
Alvaro Herrera wrote:
> Tom Lane escribió:
>
>> (In fact, maybe this patch ought to include some sort of maximum update
>> rate tunable? The worst case behavior could actually be WORSE than now.)
>
> Some sort of "if stats were requested in the last 500 ms, just tell the
> requester to read the
Martin Pihlak <[EMAIL PROTECTED]> writes:
> Attached is a patch that implements the described signalling. Additionally
> following non-related changes have been made:
> 1. fopen/fclose replaced with AllocateFile/FreeFile
> 2. pgstat_report_stat() now also checks if there are functions to report
> b
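On the first of those incidental changes: the point of AllocateFile()/FreeFile() is that the FILE handle is registered with the backend's file-descriptor bookkeeping, so it is closed automatically if an error aborts the operation, which a bare fopen()/fclose() pair does not guarantee. A hedged fragment of the pattern (the path string and error wording are illustrative, not taken from the patch):

/*
 * Illustrative backend-style fragment, not the actual patch: open the
 * stats file through the tracked fd.c wrappers instead of bare fopen().
 */
FILE *fpin = AllocateFile("global/pgstat.stat", PG_BINARY_R);

if (fpin == NULL)
{
    ereport(LOG,
            (errcode_for_file_access(),
             errmsg("could not open statistics file: %m")));
    return;
}

/* ... read and parse the file ... */

FreeFile(fpin);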
Alvaro Herrera wrote:
>> Attached is a patch that implements the described signalling.
>
> Are we keeping the idea that the reader sends a stat message when it
> needs to read stats? What about the lossiness of the transport?
>
As the message is resent in the wait loop, the collector should rec
Martin Pihlak escribió:
> Tom Lane wrote:
> > Hmm. With the timestamp in the file, ISTM that we could put all the
> > intelligence on the reader side. Reader checks file, sends message if
> Attached is a patch that implements the described signalling.
Are we keeping the idea that the reader sen
Tom Lane wrote:
> Hmm. With the timestamp in the file, ISTM that we could put all the
> intelligence on the reader side. Reader checks file, sends message if
... snip ...
> remember the file timestamp it last wrote out. There are various ways
> you could design it but what comes to mind here is
Martin Pihlak <[EMAIL PROTECTED]> writes:
> Attached is a patch which adds a timestamp to pgstat.stat file header,
> backend_read_statsfile uses this to determine if the file is fresh.
> During the wait loop, the stats request message is retransmitted to
> compensate for possible loss of message(s)
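The loop that message describes can be modelled on its own: poll the timestamp carried in the stats file header, and re-send the request on every pass because the stats transport may drop messages. In this standalone sketch, read_header_timestamp() and send_inquiry() are hypothetical stand-ins, not the backend functions:

/*
 * Standalone model (plain C) of the reader-side wait loop: keep asking
 * the collector for a fresh file until the header timestamp is new
 * enough, re-sending the request each iteration since earlier requests
 * may have been lost.
 */
#include <stdbool.h>
#include <unistd.h>

#define RETRY_DELAY_USEC   (10 * 1000)   /* 10 ms between polls */
#define MAX_RETRIES        1000          /* give up after roughly 10 s */

/* Stand-ins for "parse the file header" and "message the collector". */
static long long read_header_timestamp(void) { return 0; }
static void      send_inquiry(void)          { }

static bool
wait_for_fresh_stats(long long min_ts)
{
    for (int i = 0; i < MAX_RETRIES; i++)
    {
        if (read_header_timestamp() >= min_ts)
            return true;        /* header says the file is new enough */

        send_inquiry();         /* re-send: a previous request may be lost */
        usleep(RETRY_DELAY_USEC);
    }
    return false;               /* give up and use whatever file exists */
}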
Tom Lane wrote:
> Timestamp within the file is certainly better than trying to rely on
> filesystem timestamps. I doubt 1 sec resolution is good enough, and
> I'd also be worried about issues like clock skew between the
> postmaster's time and the filesystem's time.
>
Attached is a patch which a
Tom Lane wrote:
> Magnus Hagander <[EMAIL PROTECTED]> writes:
>> Tom Lane wrote:
>>> I'd also be worried about issues like clock skew between the
>>> postmaster's time and the filesystem's time.
>
>> Can that even happen on a local filesystem? I guess you could put the
>> file on NFS though, but t
Magnus Hagander <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> I'd also be worried about issues like clock skew between the
>> postmaster's time and the filesystem's time.
> Can that even happen on a local filesystem? I guess you could put the
> file on NFS though, but that seems to be.. eh. sub
Tom Lane wrote:
> Martin Pihlak <[EMAIL PROTECTED]> writes:
>> I had also previously experimented with stat() based polling but ran into
>> the same issues - no portable high resolution timestamp on files. I guess
>> stat() is unusable unless we can live with 1 second update interval for the
>> sta
Martin Pihlak wrote:
> Magnus Hagander wrote:
>> I wrote a patch for this some time back, that was actually applied.
>> Turns out it didn't work, and I ran out of time to fix it, so it was
>> backed out again. And then I forgot about it :-) If you look through the
>> cvs history of pgstat you shoul
Martin Pihlak <[EMAIL PROTECTED]> writes:
> I had also previously experimented with stat() based polling but ran into
> the same issues - no portable high resolution timestamp on files. I guess
> stat() is unusable unless we can live with 1 second update interval for the
> stats (eg. backend reads
Magnus Hagander wrote:
> I wrote a patch for this some time back, that was actually applied.
> Turns out it didn't work, and I ran out of time to fix it, so it was
> backed out again. And then I forgot about it :-) If you look through the
> cvs history of pgstat you should be able to find it - mayb
Hi,
On 7 Sept 2008, at 00:45, Tom Lane wrote:
I dislike the alternative of communicating through shared memory,
though. Right now the stats collector isn't even connected to shared
memory.
Maybe Markus Wanner's work on Postgres-R internal messagin
Tom Lane wrote:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
>> As for signalling, maybe we could implement something like we do for the
>> postmaster signal stuff: the requestor stores a dbid in shared memory
>> and sends a SIGUSR2 to pgstat or some such.
>
> No, no, no. Martin already had a per
Martin Pihlak wrote:
>>> Attached is a WIP patch, which basically implements this:
>> This patch breaks deadlock checking and statement_timeout, because
>> backends already use SIGALRM. You can't just take over that signal.
>> It's possible that you could get things to work by treating this as an
I wrote:
> No, no, no. Martin already had a perfectly sane design for that
> direction of signalling: send a special stats message to the collector.
Actually ... given that the stats message mechanism is designed to be
lossy under high load, maybe that isn't so sane. At the very least
there woul
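To make "send a special stats message to the collector" concrete, such a request would just be another small struct travelling over the ordinary stats channel. The sketch below is hypothetical — the struct name and fields are not what the patch defines — but it shows why the lossiness matters: if this one datagram is dropped, the reader must be prepared to send it again.

/*
 * Hypothetical sketch of a "please write the stats file" request sent over
 * the normal stats channel, as opposed to a shared-memory dbid plus a
 * signal.  Illustrative only; not the patch's actual message layout.
 */
#include <stdint.h>

typedef uint32_t Oid;               /* stand-in for PostgreSQL's Oid */

typedef struct InquiryMsgSketch
{
    int      msg_type;              /* would be a new stats message type */
    int64_t  cutoff_time;           /* "I need stats at least this new" */
    Oid      databaseid;            /* which database's stats are wanted */
} InquiryMsgSketch;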
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> As for signalling, maybe we could implement something like we do for the
> postmaster signal stuff: the requestor stores a dbid in shared memory
> and sends a SIGUSR2 to pgstat or some such.
No, no, no. Martin already had a perfectly sane design for th
Tom Lane escribió:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
> > - Maybe we oughta have separate files, one for each database? That way
> > we'd reduce unnecessary I/O traffic for both the reader and the writer.
>
> The signaling would become way too complex, I think. Also what do you
> do a
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Some sort of "if stats were requested in the last 500 ms, just tell the
> requester to read the existing file".
> Things that come to mind:
> - autovacuum could use a more frequent stats update in certain cases
BTW, we could implement that by, instead
Protection against too-frequent reads is already handled in the patch, but these
comments might lead it in new directions. The current implementation had the same
limit: the file was written no more than once per 500 ms.
On Sat, Sep 6, 2008 at 9:12 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> Alvaro Herrera <[
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Some sort of "if stats were requested in the last 500 ms, just tell the
> requester to read the existing file".
Hmm, I was thinking of delaying both the write and the reply signal
until 500ms had elapsed. But the above behavior would certainly be
easie
Tom Lane escribió:
> (In fact, maybe this patch ought to include some sort of maximum update
> rate tunable? The worst case behavior could actually be WORSE than now.)
Some sort of "if stats were requested in the last 500 ms, just tell the
requester to read the existing file".
Things that come
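The behaviour described here reduces to a check against the time of the last write: rewrite only if more than 500 ms have passed, otherwise let the requester read the copy that already exists. A standalone sketch, with now_msec() and the static state as hypothetical names:

/*
 * Standalone model (plain C) of the 500 ms write rate limit on the
 * collector side.
 */
#include <stdbool.h>
#include <sys/time.h>

#define MIN_WRITE_INTERVAL_MS 500

static long long last_write_ms = 0;

static long long
now_msec(void)
{
    struct timeval tv;

    gettimeofday(&tv, NULL);
    return (long long) tv.tv_sec * 1000 + tv.tv_usec / 1000;
}

/* Called by the collector when a backend asks for fresh statistics. */
static bool
should_rewrite_statsfile(void)
{
    long long now = now_msec();

    if (now - last_write_ms < MIN_WRITE_INTERVAL_MS)
        return false;           /* recent enough: reader uses existing file */

    last_write_ms = now;
    return true;                /* stale: dump a new file for the reader */
}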
Simon Riggs <[EMAIL PROTECTED]> writes:
> On Fri, 2008-09-05 at 15:23 -0400, Tom Lane wrote:
>> How necessary is this given the recent fixes to allow the stats file to
>> be kept on a ramdisk?
> I would prefer this approach and back-out the other change.
Even if we get on-demand done, I wouldn't
On Fri, 2008-09-05 at 15:23 -0400, Tom Lane wrote:
> How necessary is this given the recent fixes to allow the stats file to
> be kept on a ramdisk?
I would prefer this approach and back-out the other change.
On-demand is cheaper and easier to use.
> > Attached is a WIP patch, which basically
On Sat, Sep 6, 2008 at 2:29 AM, Euler Taveira de Oliveira <[EMAIL PROTECTED]> wrote:
> Martin Pihlak escreveu:
> > I suspected that, but somehow managed to overlook it :( I guess it was
> > too tempting to use it. I'll start looking for alternatives.
> >
> If you can't afford a 500 msec pgstat ti
Euler Taveira de Oliveira <[EMAIL PROTECTED]> writes:
> If you can't afford a 500 msec pgstat time, then you need to make it
> tunable. Other ideas are (i) turning pgstat on/off per table or database
> and (ii) making the pgstat time tunable per table or database. You can use
> the reloptions column t
Martin Pihlak escreveu:
> I suspected that, but somehow managed to overlook it :( I guess it was
> too tempting to use it. I'll start looking for alternatives.
>
If you can't afford a 500 msec pgstat time, then you need to make it
tunable. Other ideas are (i) turning pgstat on/off per table or data
Tom Lane wrote:
> Martin Pihlak <[EMAIL PROTECTED]> writes:
>> So, as a simple optimization I am proposing that the file should be
>> only written when some backend requests statistics. This would
>> significantly reduce the undesired write traffic at the cost of
>> slightly slower stats access.
>
On Fri, 05 Sep 2008 15:23:18 -0400
Tom Lane <[EMAIL PROTECTED]> wrote:
> Martin Pihlak <[EMAIL PROTECTED]> writes:
> > So, as a simple optimization I am proposing that the file should be
> > only written when some backend requests statistics. This would
> > significantly reduce the undesired write
Martin Pihlak <[EMAIL PROTECTED]> writes:
> So, as a simple optimization I am proposing that the file should be
> only written when some backend requests statistics. This would
> significantly reduce the undesired write traffic at the cost of
> slightly slower stats access.
How necessary is this g
Howdy,
The statistics collector currently dumps the stats file every 500 ms. This
is a major problem if the file becomes large -- occasionally we've been forced
to disable stats collection to cope with it. Another issue is that while the
file is frequently written, it is seldom read. Typically
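Spelled out as code, the proposal is roughly this: the collector notes whether anyone has asked for statistics since the last dump and writes pgstat.stat only in that case, instead of unconditionally every 500 ms. A minimal standalone sketch with hypothetical names (write_requested, note_stats_request, statsfile_write_needed):

/*
 * Standalone model (plain C) of on-demand stats file writing.
 */
#include <stdbool.h>

static bool write_requested = false;    /* set when a request message arrives */

/* The handler for an incoming "I want to read stats" message would call this. */
static void
note_stats_request(void)
{
    write_requested = true;
}

/* Checked in the collector's loop, replacing the unconditional periodic dump. */
static bool
statsfile_write_needed(void)
{
    if (!write_requested)
        return false;           /* nobody will read it; skip the I/O */

    write_requested = false;    /* consume the request */
    return true;
}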