On Tue, Oct 7, 2014 at 10:51 AM, Andres Freund wrote:
> On 2014-10-07 10:45:24 -0400, Robert Haas wrote:
>> > It's not like it'd be significantly different today - in a read-mostly
>> > workload that's bottlenecked on ProcArrayLock you'll not see many
>> > waits. There you'd have to count the total number of spinlock cycles to
>> > measure anything interesting. [...]
On Tue, Oct 7, 2014 at 4:45 PM, Robert Haas wrote:
>> It's not like it'd be significantly different today - in a read-mostly
>> workload that's bottlenecked on ProcArrayLock you'll not see many
>> waits. There you'd have to count the total number of spinlock cycles to
>> measure anything interesting. [...]
On 2014-10-07 10:45:24 -0400, Robert Haas wrote:
> > It's not like it'd be significantly different today - in a read-mostly
> > workload that's bottlenecked on ProcArrayLock you'll not see many
> > waits. There you'd have to count the total number of spinlock cycles to
> > measure anything interesting. [...]
On Tue, Oct 7, 2014 at 4:30 PM, Andres Freund wrote:
> On 2014-10-07 17:22:18 +0300, Heikki Linnakangas wrote:
>> FWIW, I liked Ilya's design. Before going to sleep, store the lock ID in
>> shared memory. When you wake up, clear it. That should be cheap enough to
>> have it always enabled. And it can easily be extended to other "waits", e.g. [...]
On Tue, Oct 7, 2014 at 10:36 AM, Andres Freund wrote:
>> That gets painful in a hurry. We just got rid of something like that
>> with your patch to get rid of all the backend-local buffer pin arrays;
>> I'm not keen to add another such thing right back.
>
> I think it might be ok if we'd exclude [...]
On Tue, Oct 7, 2014 at 4:12 PM, Andres Freund wrote:
>> I think the easiest way to measure lwlock contention would be to put
>> some counters in the lwlock itself. My guess, based on a lot of
>> fiddling with LWLOCK_STATS over the years, is that there's no way to
>> count lock acquisitions and releases without harming performance [...]
On 2014-10-07 10:30:54 -0400, Robert Haas wrote:
> On Tue, Oct 7, 2014 at 10:12 AM, Andres Freund wrote:
> > Have you tried/considered putting the counters into a per-backend array
> > somewhere in shared memory? That way they don't blow up the size of
> > frequently ping-ponged cachelines. Then you can summarize those values
> > whenever querying the [...]
On Tue, Oct 7, 2014 at 10:12 AM, Andres Freund wrote:
> Have you tried/considered putting the counters into a per-backend array
> somewhere in shared memory? That way they don't blow up the size of
> frequently ping-ponged cachelines. Then you can summarize those values
> whenever querying the [...]
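
For concreteness, a minimal standalone sketch of that layout (all names are
hypothetical, not from the patch or from PostgreSQL itself): each backend owns
one row of counters, so the hot lock cachelines are left alone, and the
monitoring view simply sums a column on demand.

    /*
     * Sketch only: per-backend counter rows in shared memory, summed on demand.
     * All names (LWStatShared, lwstat_count, ...) are hypothetical.
     */
    #include <stdint.h>

    #define MAX_BACKENDS        128
    #define NUM_TRACKED_LWLOCKS 64      /* fixed locks only, for illustration */

    typedef struct LWStatShared
    {
        /*
         * counts[backend][lock]: each backend writes only its own row, so the
         * counters never share a cacheline that other backends are updating.
         */
        uint64_t counts[MAX_BACKENDS][NUM_TRACKED_LWLOCKS];
    } LWStatShared;

    static LWStatShared lwstat_storage;           /* would live in shared memory */
    static LWStatShared *lwstat = &lwstat_storage;
    static int my_backend_id;                     /* assigned at backend start */

    static inline void
    lwstat_count(int lockid)
    {
        /* Plain increment: only this backend ever writes this slot. */
        lwstat->counts[my_backend_id][lockid]++;
    }

    /* Monitoring side: summarize across all backends when the view is queried. */
    static uint64_t
    lwstat_total(int lockid)
    {
        uint64_t sum = 0;

        for (int b = 0; b < MAX_BACKENDS; b++)
            sum += lwstat->counts[b][lockid];
        return sum;                               /* slightly racy, fine for stats */
    }
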
On 2014-10-07 17:22:18 +0300, Heikki Linnakangas wrote:
> FWIW, I liked Ilya's design. Before going to sleep, store the lock ID in
> shared memory. When you wake up, clear it. That should be cheap enough to
> have it always enabled. And it can easily be extended to other "waits", e.g.
> when you're [...]
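
A standalone sketch of that "current wait" slot (hypothetical names, not the
actual patch or PostgreSQL's wait-event API): one word per backend in shared
memory, written just before the backend blocks and cleared when it wakes, so a
sampler can read every backend's state at any time.

    /*
     * Sketch only: a per-backend "what am I waiting for" slot.
     * Hypothetical names and encoding.
     */
    #include <stdatomic.h>
    #include <stdint.h>

    #define MAX_BACKENDS 128

    typedef enum WaitKind
    {
        WAIT_NONE = 0,
        WAIT_LWLOCK,
        WAIT_HEAVYWEIGHT_LOCK,       /* the "other waits" this extends to */
        WAIT_IO
    } WaitKind;

    typedef struct BackendWaitSlot
    {
        /* (kind << 16) | object id; 0 means "not waiting" */
        _Atomic uint32_t wait_encoded;
    } BackendWaitSlot;

    static BackendWaitSlot wait_slots[MAX_BACKENDS];   /* shared memory in reality */

    static inline void
    report_wait_start(int backend_id, WaitKind kind, uint16_t objectid)
    {
        /* One relaxed store before going to sleep: cheap enough to keep always on. */
        atomic_store_explicit(&wait_slots[backend_id].wait_encoded,
                              ((uint32_t) kind << 16) | objectid,
                              memory_order_relaxed);
    }

    static inline void
    report_wait_end(int backend_id)
    {
        /* Clear the slot after waking up. */
        atomic_store_explicit(&wait_slots[backend_id].wait_encoded, 0,
                              memory_order_relaxed);
    }
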
On 10/07/2014 05:04 PM, Robert Haas wrote:
> On Tue, Oct 7, 2014 at 8:03 AM, Bruce Momjian wrote:
> > On Fri, Oct 3, 2014 at 06:06:24PM -0400, Bruce Momjian wrote:
> >> > I actually don't think that's true. Every lock acquisition implies a
> >> > number of atomic locks. Those are expensive. And if you see individual [...]
On 2014-10-07 10:04:38 -0400, Robert Haas wrote:
> On Tue, Oct 7, 2014 at 8:03 AM, Bruce Momjian wrote:
> > On Fri, Oct 3, 2014 at 06:06:24PM -0400, Bruce Momjian wrote:
> >> > I actually don't think that's true. Every lock acquisition implies a
> >> > number of atomic locks. Those are expensive. [...]
Robert Haas writes:
> I think the easiest way to measure lwlock contention would be to put
> some counters in the lwlock itself. My guess, based on a lot of
> fiddling with LWLOCK_STATS over the years, is that there's no way to
> count lock acquisitions and releases without harming performance
>
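
For reference, "some counters in the lwlock itself" would look roughly like the
standalone sketch below (made-up struct and names, not the real LWLock from
src/backend/storage/lmgr/lwlock.c); it also shows why the counters end up on
the same contended cacheline as the lock word.

    /*
     * Sketch only: contention counters embedded in the lock itself.
     * Made-up struct; not the real LWLock.
     */
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    typedef struct CountedLock
    {
        _Atomic uint32_t state;         /* the lock word itself */

        /*
         * These counters share the lock's cacheline, so every update touches
         * memory other backends are already fighting over -- the cost being
         * discussed in this thread.
         */
        _Atomic uint64_t acquisitions;
        _Atomic uint64_t sleeps;        /* times a waiter actually had to block */
    } CountedLock;

    static bool
    counted_lock_try_acquire(CountedLock *lock)
    {
        uint32_t expected = 0;

        if (atomic_compare_exchange_strong(&lock->state, &expected, 1))
        {
            atomic_fetch_add_explicit(&lock->acquisitions, 1, memory_order_relaxed);
            return true;
        }

        atomic_fetch_add_explicit(&lock->sleeps, 1, memory_order_relaxed);
        return false;                   /* caller would queue up and sleep here */
    }
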
On Tue, Oct 7, 2014 at 8:03 AM, Bruce Momjian wrote:
> On Fri, Oct 3, 2014 at 06:06:24PM -0400, Bruce Momjian wrote:
>> > I actually don't think that's true. Every lock acquisition implies a
>> > number of atomic locks. Those are expensive. And if you see individual
>> > locks acquired a high number of times in multiple processes that's
>> > something important [...]
On Fri, Oct 3, 2014 at 06:06:24PM -0400, Bruce Momjian wrote:
> > I actually don't think that's true. Every lock acquisition implies a
> > number of atomic locks. Those are expensive. And if you see individual
> > locks acquired a high number of times in multiple processes that's
> > something important [...]
On Fri, Oct 3, 2014 at 11:15:13PM +0200, Andres Freund wrote:
> > As far as gathering data, I don't think we are going to do any better in
> > terms of performance/simplicity/reliability than to have a single PGPROC
> > entry to record when we enter/exit a lock, and having a secondary
> > process scan the PGPROC array [...]
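
A standalone sketch of that single-entry scheme (hypothetical names, not PGPROC
fields): the backend stamps which lock it is waiting on and since when, and a
separate process periodically walks the array.

    /*
     * Sketch only: one "current lock + enter time" record per backend, scanned
     * by a separate monitoring process.  Hypothetical names.
     */
    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    #define MAX_BACKENDS 128

    typedef struct ProcWaitRecord
    {
        _Atomic int32_t lockid;         /* 0 = not waiting, real lock ids are > 0 */
        _Atomic int64_t enter_ns;       /* when the wait began */
    } ProcWaitRecord;

    static ProcWaitRecord proc_waits[MAX_BACKENDS];    /* shared memory in reality */

    static int64_t
    now_ns(void)
    {
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (int64_t) ts.tv_sec * 1000000000 + ts.tv_nsec;
    }

    /* Backend side: two cheap stores bracketing the wait. */
    static void
    record_lock_enter(int backend_id, int32_t lockid)
    {
        atomic_store(&proc_waits[backend_id].enter_ns, now_ns());
        atomic_store(&proc_waits[backend_id].lockid, lockid);
    }

    static void
    record_lock_exit(int backend_id)
    {
        atomic_store(&proc_waits[backend_id].lockid, 0);
    }

    /* Secondary process: scan every entry and report who is stuck where. */
    static void
    scan_waits(void)
    {
        int64_t now = now_ns();

        for (int b = 0; b < MAX_BACKENDS; b++)
        {
            int32_t lockid = atomic_load(&proc_waits[b].lockid);

            if (lockid > 0)
                printf("backend %d waiting on lock %d for %lld us\n",
                       b, (int) lockid,
                       (long long) ((now - atomic_load(&proc_waits[b].enter_ns)) / 1000));
        }
    }
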
On 2014-10-03 11:51:46 -0400, Robert Haas wrote:
> On Fri, Oct 3, 2014 at 11:33 AM, Bruce Momjian wrote:
> > I am assuming almost no one cares about the number of locks, but rather
> > they care about cumulative lock durations.
> >
> > I am having trouble seeing any other option that has such a good cost/benefit profile. [...]
On 2014-10-03 11:33:18 -0400, Bruce Momjian wrote:
> On Thu, Oct 2, 2014 at 11:50:14AM +0200, Andres Freund wrote:
> > The first problem that comes to my mind about collecting enough data is
> > that we have a very large number of lwlocks (fixed_number + 2 *
> > shared_buffers). One 'trivial' way of implementing this is to have a per-backend array collecting [...]
On Fri, Oct 3, 2014 at 05:53:59PM +0200, Ilya Kosmodemiansky wrote:
> > What that gives us is almost zero overhead on backends, high
> > reliability, and the ability of the scan daemon to give higher weights
> > to locks that are held longer. Basically, if you just stored the locks
> > you held [...]
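
The "higher weights to locks held longer" part falls out of the sampling
arithmetic: a lock held for time T is caught in roughly T / interval samples,
so adding the interval for every sample in which the lock is seen converges on
the cumulative hold time. A tiny sketch of that accumulation (hypothetical
names and constants):

    /*
     * Sketch only: the scanning daemon turns "how often did a sample catch
     * backends inside lock L" into an estimated cumulative duration.
     */
    #include <stdint.h>

    #define NUM_TRACKED_LWLOCKS 64
    #define SAMPLE_INTERVAL_US  1000        /* daemon wakes up every 1 ms */

    static uint64_t est_held_us[NUM_TRACKED_LWLOCKS];

    /*
     * Called once per sample for each backend currently seen inside a lock.
     * A lock held for T microseconds is caught in about T / SAMPLE_INTERVAL_US
     * samples, so the running total approximates real cumulative hold time and
     * automatically weights long-held locks more heavily.
     */
    static void
    account_sample(int lockid)
    {
        est_held_us[lockid] += SAMPLE_INTERVAL_US;
    }
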
On Fri, Oct 3, 2014 at 5:51 PM, Robert Haas wrote:
> I do think that the instrumentation data gathered by LWLOCK_STATS is
> useful - very useful.
Sure, quite useful.
But how about this comment:
/*
 * The LWLock stats will be updated within a critical section, which
 * requires allocating new hash entries. [...]
 */
On Fri, Oct 3, 2014 at 5:33 PM, Bruce Momjian wrote:
> As far as gathering data, I don't think we are going to do any better in
> terms of performance/simplicity/reliability than to have a single PGPROC
> entry to record when we enter/exit a lock, and having a secondary
> process scan the PGPROC array [...]
On Fri, Oct 3, 2014 at 11:33 AM, Bruce Momjian wrote:
> I am assuming almost no one cares about the number of locks, but rather
> they care about cumulative lock durations.
>
> I am having trouble seeing any other option that has such a good
> cost/benefit profile.
I do think that the instrumentation data gathered by LWLOCK_STATS is useful - very useful. [...]
On Thu, Oct 2, 2014 at 11:50:14AM +0200, Andres Freund wrote:
> The first problem that comes to my mind about collecting enough data is
> that we have a very large number of lwlocks (fixed_number + 2 *
> shared_buffers). One 'trivial' way of implementing this is to have a
> per-backend array collecting [...]
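
To put a number on "very large" (example constants only, not the patch's or a
real server's settings): with the fixed set plus two lwlocks per shared buffer,
a per-backend uint64 counter array adds up fast.

    /*
     * Back-of-the-envelope sizing for a per-backend, per-lwlock counter array.
     * Illustration only; the constants below are examples.
     */
    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        long fixed_lwlocks  = 128;              /* stand-in for "fixed_number"    */
        long shared_buffers = 1024L * 1024;     /* 1M buffers = 8 GB at 8 kB each */
        long backends       = 200;

        long nlocks = fixed_lwlocks + 2 * shared_buffers;   /* per the formula above */
        double mb   = (double) nlocks * sizeof(uint64_t) * backends / (1024.0 * 1024.0);

        printf("%ld lwlocks -> %.0f MB of counters for %ld backends\n",
               nlocks, mb, backends);
        return 0;
    }
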
* Andres Freund (and...@2ndquadrant.com) wrote:
> > 1. I've decided to put pg_stat_lwlock into extension pg_stat_lwlock
> > (simply for test purposes). Is it OK, or better to implement it
> > somewhere inside pg_catalog or in another extension (for example
> > pg_stat_statements)?
>
> I personally [...]
* Craig Ringer (cr...@2ndquadrant.com) wrote:
> > The patch https://commitfest.postgresql.org/action/patch_view?id=885
> > (discussion starts here I hope -
> > http://www.postgresql.org/message-id/4fe8ca2c.3030...@uptime.jp)
> > demonstrates performance problems; LWLOCK_STATS, LOCK_DEBUG and
> > DTrace [...]
On Thu, Oct 2, 2014 at 5:25 AM, Craig Ringer wrote:
> It's not at all clear to me that a DTrace-like (or perf-based, rather)
> approach is unsafe, slow, or unsuitable for production use.
> With appropriate wrapper tools I think we could have quite a useful
> library of perf-based diagnostics and [...]
On Thu, Oct 2, 2014 at 11:50 AM, Andres Freund wrote:
> Not just from an Oracle DBA POV ;). Generally.
sure
>> That said, principally they mean an
>> Oracle Wait Interface analogue. The basic idea is to have counters or
>> sensors all around the database kernel to measure what a particular
>> backend [...]
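
A standalone sketch of what such an analogue could record (hypothetical names,
not an existing PostgreSQL interface): per backend, how many times and for how
long it waited, broken down by wait class, which is roughly what the Oracle
Wait Interface exposes.

    /*
     * Sketch only: Oracle-style cumulative wait accounting per backend --
     * how many times, and for how long, each backend waited per wait class.
     * Hypothetical names.
     */
    #include <stdint.h>

    #define MAX_BACKENDS 128

    typedef enum WaitClass
    {
        WC_LWLOCK = 0,
        WC_HEAVYWEIGHT_LOCK,
        WC_BUFFER_PIN,
        WC_IO,
        WC_NUM_CLASSES
    } WaitClass;

    typedef struct WaitCounters
    {
        uint64_t count[WC_NUM_CLASSES];     /* number of waits                */
        uint64_t total_us[WC_NUM_CLASSES];  /* cumulative time spent waiting  */
    } WaitCounters;

    /* Each backend updates only its own entry; a view sums them on demand. */
    static WaitCounters backend_waits[MAX_BACKENDS];

    static void
    account_wait(int backend_id, WaitClass wc, uint64_t elapsed_us)
    {
        backend_waits[backend_id].count[wc]++;
        backend_waits[backend_id].total_us[wc] += elapsed_us;
    }
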
On 2014-10-01 18:19:05 +0200, Ilya Kosmodemiansky wrote:
> I have a patch which is not commitfest-ready yet, but it is always
> better to start discussing a proof of concept with some patch in hand
> than with just an idea.
That's a good way to start work on a topic like this.
> From an Oracle DBA's point of view, we currently lack [...]
On 10/02/2014 12:19 AM, Ilya Kosmodemiansky wrote:
> From an Oracle DBA's point of view, we currently lack performance
> diagnostics tools.
Agreed. Sometimes very frustratingly so.
> An even better idea is to collect the daily LWLock distribution, find the
> most frequent ones, etc.
While we [...]
Hi,
I have a patch which is not commitfest-ready yet, but it is always
better to start discussing a proof of concept with some patch in hand
than with just an idea.
Since I'm a DBA rather than a C programmer, I will appreciate any
suggestions/criticism about the patch and code quality to make things [...]