On 1/7/17 12:41 PM, Joel Jacobson wrote:
On Sat, Jan 7, 2017 at 3:25 AM, Greg Stark wrote:
What users need to know is in aggregate how much of the time the
database is spending working on their queries is going into different
states.
This is a separate feature idea, but I think it's really valuable as well.
On 12/28/16 10:26 PM, Robert Haas wrote:
On Wed, Dec 28, 2016 at 1:06 PM, Tom Lane wrote:
Jim Nasby writes:
On 12/28/16 11:25 AM, Tom Lane wrote:
The idea of just capturing the wait start for heavyweight locks, and
not other lock types, still seems superior to any of the alternatives
that have been suggested ...
On Sat, Jan 7, 2017 at 3:25 AM, Greg Stark wrote:
> What users need to know is in aggregate how much of the time the
> database is spending working on their queries is going into different
> states.
This is a separate feature idea, but I think it's really valuable as well.
Maybe something simila
On Sat, Jan 7, 2017 at 01:25:08PM +0000, Greg Stark wrote:
> I would actually argue the reverse of the above proposal would be more
> useful. What we need are counts of how often LWLocks take longer than,
> say, 50ms and for shorter waits we need to know how long. Perhaps not
> precisely for indiv
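For illustration only, counting waits that cross a threshold while bucketing shorter ones could look roughly like the sketch below; every name and the bucket scheme are assumptions, not anything proposed in the thread.

    #include <stdint.h>

    #define WAIT_SLOW_THRESHOLD_US  50000       /* 50 ms, per the suggestion above */
    #define WAIT_HIST_BUCKETS       16

    typedef struct WaitStats
    {
        uint64_t    slow_wait_count;            /* waits longer than 50 ms: just count them */
        uint64_t    hist[WAIT_HIST_BUCKETS];    /* shorter waits: power-of-two microsecond buckets */
    } WaitStats;

    static void
    record_wait(WaitStats *stats, uint64_t wait_us)
    {
        if (wait_us >= WAIT_SLOW_THRESHOLD_US)
            stats->slow_wait_count++;
        else
        {
            int         bucket = 0;

            while (wait_us > 1 && bucket < WAIT_HIST_BUCKETS - 1)
            {
                wait_us >>= 1;                  /* bucket ~ floor(log2(wait_us)) */
                bucket++;
            }
            stats->hist[bucket]++;
        }
    }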
On 6 January 2017 at 02:59, Bruce Momjian wrote:
>
> Agreed. No need in adding overhead for short-lived locks because the
> milli-second values are going to be meaningless to users. I would be
> happy if we could find some weasel value for non-heavyweight locks.
For what it's worth I don't think
On 2017-01-06 10:43:32 -0500, Bruce Momjian wrote:
> On Thu, Jan 5, 2017 at 06:48:17PM -1000, Joel Jacobson wrote:
> > On Thu, Jan 5, 2017 at 4:59 PM, Bruce Momjian wrote:
> > > Agreed. No need in adding overhead for short-lived locks because the
> > > milli-second values are going to be meaningless to users. I would be
> > > happy if we could find some weasel value for non-heavyweight locks.
On Thu, Jan 5, 2017 at 06:48:17PM -1000, Joel Jacobson wrote:
> On Thu, Jan 5, 2017 at 4:59 PM, Bruce Momjian wrote:
> > Agreed. No need in adding overhead for short-lived locks because the
> > milli-second values are going to be meaningless to users. I would be
> > happy if we could find some weasel value for non-heavyweight locks.
On Thu, Jan 5, 2017 at 4:59 PM, Bruce Momjian wrote:
> Agreed. No need in adding overhead for short-lived locks because the
> milli-second values are going to be meaningless to users. I would be
> happy if we could find some weasel value for non-heavyweight locks.
To avoid a NULL value for waiti
On Mon, Dec 26, 2016 at 03:36:39PM -0500, Tom Lane wrote:
> In practice, there should never be waits on LWLocks (much less spinlocks)
> that exceed order-of-milliseconds; if there are, either we chose the wrong
> lock type or the system is pretty broken in general. So maybe it's
> sufficient if we
On Wed, Dec 28, 2016 at 1:06 PM, Tom Lane wrote:
> Jim Nasby writes:
>> On 12/28/16 11:25 AM, Tom Lane wrote:
>>> The idea of just capturing the wait start for heavyweight locks, and
>>> not other lock types, still seems superior to any of the alternatives
>>> that have been suggested ...
>
>> Is
On Wed, Dec 28, 2016 at 10:55 PM, Tom Lane wrote:
> Jim Nasby writes:
>> On 12/28/16 7:10 AM, Amit Kapila wrote:
>>> Can we think of introducing new guc trace_system_waits or something
>>> like that which will indicate that the sessions will report the value
>>> of wait_start in pg_stat_activity?
Jim Nasby writes:
> On 12/28/16 11:25 AM, Tom Lane wrote:
>> The idea of just capturing the wait start for heavyweight locks, and
>> not other lock types, still seems superior to any of the alternatives
>> that have been suggested ...
> Is some kind of alarm a viable option for the others? If set
On 12/28/16 11:25 AM, Tom Lane wrote:
The idea of just capturing the wait start for heavyweight locks, and
not other lock types, still seems superior to any of the alternatives
that have been suggested ...
Is some kind of alarm a viable option for the others? If setting the
alarm is cheap, you
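If "alarm" here means arming a timer when a wait begins, so that only waits exceeding some threshold pay for any bookkeeping, a rough userland sketch (purely illustrative, not how PostgreSQL implements anything) could be:

    #include <signal.h>
    #include <sys/time.h>

    static volatile sig_atomic_t wait_exceeded_threshold = 0;

    static void
    wait_alarm_handler(int signo)
    {
        wait_exceeded_threshold = 1;    /* just flag it; record details once the wait ends */
    }

    static void
    arm_wait_alarm(long threshold_ms)
    {
        struct itimerval t;

        t.it_interval.tv_sec = 0;       /* one-shot timer */
        t.it_interval.tv_usec = 0;
        t.it_value.tv_sec = threshold_ms / 1000;
        t.it_value.tv_usec = (threshold_ms % 1000) * 1000;
        signal(SIGALRM, wait_alarm_handler);
        setitimer(ITIMER_REAL, &t, NULL);
    }

    static void
    disarm_wait_alarm(void)
    {
        struct itimerval off;

        off.it_interval.tv_sec = 0;
        off.it_interval.tv_usec = 0;
        off.it_value.tv_sec = 0;        /* a zero it_value disarms the timer */
        off.it_value.tv_usec = 0;
        setitimer(ITIMER_REAL, &off, NULL);
    }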
Jim Nasby writes:
> On 12/28/16 7:10 AM, Amit Kapila wrote:
>> Can we think of introducing new guc trace_system_waits or something
>> like that which will indicate that the sessions will report the value
>> of wait_start in pg_stat_activity?
> In my experience the problem with those kind of setti
On 12/28/16 7:10 AM, Amit Kapila wrote:
Can we think of introducing new guc trace_system_waits or something
like that which will indicate that the sessions will report the value
of wait_start in pg_stat_activity? The default value of such a
parameter can be false which means wait_start will be s
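For illustration, a boolean GUC along those lines might be declared as a guc.c table entry roughly like the fragment below; the description text and grouping are invented for the example and nothing here is taken from an actual patch.

    static bool trace_system_waits = false;

    /* hypothetical entry for the ConfigureNamesBool[] table in guc.c */
    {
        {"trace_system_waits", PGC_USERSET, STATS_MONITORING,
            gettext_noop("Reports wait_start in pg_stat_activity for system waits."),
            NULL
        },
        &trace_system_waits,
        false,
        NULL, NULL, NULL
    },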
On Sat, Dec 24, 2016 at 7:46 AM, Tom Lane wrote:
> Stephen Frost writes:
>> * Tom Lane (t...@sss.pgh.pa.us) wrote:
>>> The difficulty with that is it'd require a gettimeofday() call for
>>> every wait start. Even on platforms where those are relatively cheap,
>>> the overhead would be nasty ---
On 12/27/16 11:17 AM, Greg Stark wrote:
On Dec 24, 2016 5:44 PM, "Tom Lane" <t...@sss.pgh.pa.us> wrote:
I think we'd need at least an order of
magnitude cheaper to consider putting timing calls into spinlock or
lwlock
paths, and that's just not available at all, let alone portably.
On Dec 24, 2016 5:44 PM, "Tom Lane" wrote:
I think we'd need at least an order of
magnitude cheaper to consider putting timing calls into spinlock or lwlock
paths, and that's just not available at all, let alone portably.
For spinlocks we could conceivably just bite the bullet and use a raw rdtsc
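A minimal sketch of reading the raw cycle counter for that purpose, assuming x86 and GCC/Clang (converting cycles to wall-clock time and non-x86 fallbacks are left out):

    #include <stdint.h>
    #include <x86intrin.h>              /* __rdtsc() */

    static inline uint64_t
    read_cycle_counter(void)
    {
        return __rdtsc();
    }

    /* usage: uint64_t start = read_cycle_counter();
     *        ... spin on the lock ...
     *        uint64_t cycles_waited = read_cycle_counter() - start;
     */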
Tom,
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> In practice, there should never be waits on LWLocks (much less spinlocks)
> that exceed order-of-milliseconds; if there are, either we chose the wrong
> lock type or the system is pretty broken in general. So maybe it's
> sufficient if we provide a wa
Joel Jacobson writes:
> Maybe a good tradeoff then would be to let "wait_start" represent the
> very first time the txn started waiting?
> ...
> As long as the documentation is clear on "wait_start" meaning when the
> first wait started in the txn, I think that's useful enough to improve
> the sit
On Sun, Dec 25, 2016 at 8:01 PM, Andres Freund wrote:
> On December 25, 2016 1:21:43 AM GMT+01:00, Joel Jacobson
> wrote:
>
>>Is it really a typical real-life scenario that processes can be
>>waiting extremely often for extremely short periods of time,
>>where the timing overhead would be significant?
On December 25, 2016 1:21:43 AM GMT+01:00, Joel Jacobson
wrote:
>Is it really a typical real-life scenario that processes can be
>waiting extremely often for extremely short periods of time,
>where the timing overhead would be significant?
Yes. Consider WAL insertion, procarray or other simil
> This is not an easy problem. See our most recent discussion at
> https://www.postgresql.org/message-id/flat/31856.1400021891%40sss.pgh.pa.us
Thanks for the small test program.
I tested it on my MacBook Pro and gettimeofday() was way faster than time().
The clock_gettime() used by the patch clo
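A small benchmark in the same spirit as that test program (this one is a stand-in, not the original) just times repeated calls to each clock API and reports the per-call cost:

    #include <stdio.h>
    #include <time.h>
    #include <sys/time.h>

    #define LOOPS 1000000

    int
    main(void)
    {
        struct timeval  tv;
        struct timespec ts, t0, t1;
        double          elapsed;
        long            i;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (i = 0; i < LOOPS; i++)
            (void) time(NULL);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        elapsed = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("time():          %.1f ns/call\n", elapsed * 1e9 / LOOPS);

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (i = 0; i < LOOPS; i++)
            gettimeofday(&tv, NULL);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        elapsed = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("gettimeofday():  %.1f ns/call\n", elapsed * 1e9 / LOOPS);

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (i = 0; i < LOOPS; i++)
            clock_gettime(CLOCK_MONOTONIC, &ts);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        elapsed = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("clock_gettime(): %.1f ns/call\n", elapsed * 1e9 / LOOPS);

        return 0;
    }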
Joel Jacobson writes:
> On Sat, Dec 24, 2016 at 9:00 AM, Tom Lane wrote:
>> The difficulty with that is it'd require a gettimeofday() call for
>> every wait start.
> I don't think we need the microsecond resolution provided by
> gettimeofday() via GetCurrentTimestamp()
> It would be enough to kn
Attached is a patch implementing the seconds-resolution wait_start, but
presented as a timestamptz to the user, just like the other *_start fields:
commit c001e5c537e36d2683a7e55c7c8bfcc154de4c9d
Author: Joel Jacobson
Date: Sat Dec 24 13:20:09 2016 +0700
Add OUT parameter "wait_start" time
On Sat, Dec 24, 2016 at 9:56 AM, Joel Jacobson wrote:
>> The difficulty with that is it'd require a gettimeofday() call for
>> every wait start. Even on platforms where those are relatively cheap,
I just realized how this can be optimized.
We only need to set wait_start for every new waiting per
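Assuming the optimization means taking a timestamp only when a backend actually transitions from not-waiting to waiting, and clearing it when the wait ends, a sketch with invented names could be:

    #include <stdint.h>
    #include <sys/time.h>

    static int64_t  current_wait_start = 0;     /* 0 means "not waiting" */

    static void
    wait_begin(void)
    {
        if (current_wait_start == 0)            /* only the first transition pays for a clock call */
        {
            struct timeval tv;

            gettimeofday(&tv, NULL);
            current_wait_start = (int64_t) tv.tv_sec * 1000000 + tv.tv_usec;
        }
    }

    static void
    wait_end(void)
    {
        current_wait_start = 0;                 /* back to "not waiting"; no timing on the fast path */
    }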
On Sat, Dec 24, 2016 at 9:00 AM, Tom Lane wrote:
>> I would like to propose adding a fourth such column, "waiting_start",
>> which would tell how long time a backend has been waiting.
>
> The difficulty with that is it'd require a gettimeofday() call for
> every wait start. Even on platforms where those are relatively cheap,
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Joel Jacobson writes:
> > We already have xact_start, query_start and backend_start
> > to get the timestamptz for when different things happened.
>
> > I would like to propose adding a fourth such column, "waiting_start",
> > which would tell how long time a backend has been waiting.
Stephen Frost writes:
> * Tom Lane (t...@sss.pgh.pa.us) wrote:
>> The difficulty with that is it'd require a gettimeofday() call for
>> every wait start. Even on platforms where those are relatively cheap,
>> the overhead would be nasty --- and on some platforms, it'd be
>> astonishingly bad. We
Joel Jacobson writes:
> We already have xact_start, query_start and backend_start
> to get the timestamptz for when different things happened.
> I would like to propose adding a fourth such column, "waiting_start",
> which would tell how long time a backend has been waiting.
The difficulty with that is it'd require a gettimeofday() call for every wait start.
Actually, "wait_start" is a better name to match the others
("wait_event_type" and "wait_event").
On Sat, Dec 24, 2016 at 8:20 AM, Joel Jacobson wrote:
> Hi hackers,
>
> We already have xact_start, query_start and backend_start
> to get the timestamptz for when different things happened.
>
> I w
Hi hackers,
We already have xact_start, query_start and backend_start
to get the timestamptz for when different things happened.
I would like to propose adding a fourth such column, "waiting_start",
which would tell how long time a backend has been waiting.
The column would be NULL when waiting=