Hi Andres,
On Mon, 14 Jun 2021 15:01:14 -0700
Andres Freund wrote:
Hi,
On 2021-06-14 23:20:47 +0200, Jehan-Guillaume de Rorthais wrote:
Hi,
On Mon, 14 Jun 2021 11:27:21 -0700
Andres Freund wrote:
Hi,
On 2021-06-14 11:27:21 -0700, Andres Freund wrote:
Hi,
On 2021-06-14 16:10:32 +0200, Jehan-Guillaume de Rorthais wrote:
> In the patch in attachment, I tried to fix this by using kind of an internal
> hook for pgstat_report_wait_start and pgstat_report_wait_end. This allows to
> "instrument" wait events only when required, on the fly, dynamically.
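The on-the-fly instrumentation idea quoted above can be sketched outside PostgreSQL as a pair of report functions reached through function pointers, so that switching between the cheap default path and the accumulating path is a single pointer assignment. All names below are illustrative stand-ins, not the patch's actual API:

```c
#include <stdint.h>
#include <stddef.h>

/* Accumulated statistics for one wait event (illustrative layout). */
typedef struct WaitAccumEntry {
    uint32_t wait_event_info;
    uint64_t calls;
} WaitAccumEntry;

static WaitAccumEntry accum = {0, 0};
static uint32_t current_wait_event = 0;

/* Default path: only publish the current wait event (cheap). */
static void report_wait_start_default(uint32_t wait_event_info) {
    current_wait_event = wait_event_info;
}
static void report_wait_end_default(void) {
    current_wait_event = 0;
}

/* Instrumented path: additionally count the event. */
static void report_wait_start_instr(uint32_t wait_event_info) {
    current_wait_event = wait_event_info;
    accum.wait_event_info = wait_event_info;
    accum.calls++;
}
static void report_wait_end_instr(void) {
    current_wait_event = 0;
}

/* The "hook": callers always go through these pointers, so enabling
 * or disabling instrumentation is a plain pointer swap at runtime. */
static void (*pgstat_report_wait_start_hook)(uint32_t) = report_wait_start_default;
static void (*pgstat_report_wait_end_hook)(void) = report_wait_end_default;

static void enable_wait_accum(int on) {
    pgstat_report_wait_start_hook = on ? report_wait_start_instr
                                       : report_wait_start_default;
    pgstat_report_wait_end_hook   = on ? report_wait_end_instr
                                       : report_wait_end_default;
}

/* Drive n_waits wait start/end cycles and return the accumulated count. */
uint64_t simulate(int instrumented, int n_waits) {
    accum.calls = 0;
    enable_wait_accum(instrumented);
    for (int i = 0; i < n_waits; i++) {
        pgstat_report_wait_start_hook(42);  /* some wait event id */
        pgstat_report_wait_end_hook();
    }
    return accum.calls;
}
```

In the real backend the default path would have to stay as cheap as the current inlined pgstat_report_wait_start(), which is precisely the overhead concern debated later in this thread.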
Hi Andres, Hi all,
First, thank you for your feedback!
Please find in attachment a patch implementing accumulated wait event stats
only from the backend point of view. As I wrote when I reviewed and rebased the
existing patch, I was uncomfortable with the global approach. I still volunteer
to wor
Hi,
On 2021-06-05 00:53:44 +0200, Jehan-Guillaume de Rorthais wrote:
> From 88c2779679c5c9625ca5348eec0543daab5ccab4 Mon Sep 17 00:00:00 2001
> From: Jehan-Guillaume de Rorthais
> Date: Tue, 1 Jun 2021 13:25:57 +0200
> Subject: [PATCH 1/2] Add pg_stat_waitaccum view.
>
> pg_stat_waitaccum shows
Hi All,
A few times I have faced a situation where a long-running query actually
includes the time the backend is waiting for the frontend to fetch all the
rows (see [1] for details). See the sample code fe-time.c and its comments,
attached, to reproduce this behavior.
There's no simple way today
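The symptom described above is easy to reproduce without a real server: if emitting each row blocks until a slow client consumes it, the backend-visible elapsed time is dominated by the client's pace. This standalone simulation stands in for the attached fe-time.c, which is not reproduced here; the delays are invented:

```c
#define _POSIX_C_SOURCE 199309L
#include <time.h>

/* Monotonic clock in milliseconds. */
static double now_ms(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000.0 + ts.tv_nsec / 1e6;
}

/* "Backend": producing a row is essentially free, but the backend cannot
 * proceed until the "frontend" has consumed the previous one, so the slow
 * client's pace is charged to the query's elapsed time. */
double run_query_ms(int n_rows, long client_delay_ms) {
    double start = now_ms();
    for (int i = 0; i < n_rows; i++) {
        /* row computed here: negligible cost */
        /* frontend consumes the row slowly (a ClientWrite-style stall) */
        struct timespec d = { .tv_sec = 0,
                              .tv_nsec = client_delay_ms * 1000000L };
        nanosleep(&d, NULL);
    }
    return now_ms() - start;
}
```

With 20 rows and a 5 ms-per-row client, the "query" takes at least 100 ms of backend-visible time even though producing the rows costs almost nothing — time that is currently indistinguishable from real execution.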
> On 31 Jul 2020, at 07:23, imai.yoshik...@fujitsu.com wrote:
>
> This patch fails to apply to HEAD, please submit a rebased version. I've
> marked this as Waiting on Author.
Sorry for my absence. Unfortunately I haven't had time to work on this patch
in this CF.
I believe I will be back in the next CF to work on this patch and also review
other patches.
--
Hi,
This patch fails to apply to HEAD, please submit a rebased version. I've
marked this as Waiting on Author.
cheers ./daniel
Hi Imai-san,
I feel your 'pg_stat_waitaccum' will help us investigate bottlenecks.
So I'd like to do some benchmarks, but unfortunately the latest v6 patch
no longer applies to HEAD.
Is it possible to share the latest patches?
If not, I'll make v6 applicable to HEAD.
Regards,
-
On Wed, Feb 26, 2020 at 1:39 AM, Kyotaro Horiguchi wrote:
> Hello. I had a brief look at this and have some comments on it.
Hi, Horiguchi-san. Thank you for looking at this!
> It uses its own hash implementation. Aside from the appropriateness of
> having another implementation of an existing tool, in the
Hello. I had a brief look at this and have some comments on it.
At Tue, 25 Feb 2020 07:53:26 +, "imai.yoshik...@fujitsu.com"
wrote in
> Thanks for Wang's mail, I noticed my 0002 patch was wrong from v3.
>
> Here, I attach correct patches.
>
> Also I will begin to do some benchmark with
Thank you for this update! I will try it.
Best regards,
Victor Wang (王胜利)
Email: atti...@126.com
On 02/25/2020 15:53, imai.yoshik...@fujitsu.com wrote:
On Fri, Feb 14, 2020 at 11:59 AM, 王胜利 wrote:
On Fri, Feb 14, 2020 at 11:59 AM, 王胜利 wrote:
>I am glad to know you are working on the PG accumulated statistics
> feature, and I am interested in it.
>I see these two patch files you made; can you let me know which branch
> of the PG code they are based on?
>
>when I use this: https://githu
On Wed, Feb 12, 2020 at 5:42 AM, Craig Ringer wrote:
On Wed, 12 Feb 2020 at 12:36, imai.yoshik...@fujitsu.com
wrote:
> It seems performance difference is big in case of read only tests. The reason
> is that write time is relatively longer than the
> processing time of the logic I added in the patch.
That's going to be a pretty difficult performan
On Sat, Feb 1, 2020 at 5:50 AM, Pavel Stehule wrote:
> today I ran 120 five-minute pgbench tests to measure the impact of this patch.
> The result is attached.
...
> Thanks to Tomas Vondra and 2ndQuadrant for the hardware for testing
Thank you for doing a lot of these benchmarks!
> The result is interesting - when I run pg
On Sat, Feb 1, 2020 at 12:34, Tomas Vondra wrote:
> This patch was in WoA, but that was wrong I think - we got a patch on
> January 15, followed by a benchmark by Pavel Stehule, so I think it
> should still be in "needs review". So I've updated it and moved it to
> the next CF.
>
currently t
This patch was in WoA, but that was wrong I think - we got a patch on
January 15, followed by a benchmark by Pavel Stehule, so I think it
should still be in "needs review". So I've updated it and moved it to
the next CF.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
Hi
On Wed, Jan 15, 2020 at 14:15, Imai Yoshikazu wrote:
On 2020/01/13 4:11, Pavel Stehule wrote:
The following review has been posted through the commitfest application:
make installcheck-world: tested, passed
Implements feature: tested, passed
Spec compliant: not tested
Documentation: tested, passed
I like this patch, because I used similar functionality some year
On Sun, Dec 1, 2019 at 1:10 AM, Michael Paquier wrote:
On Wed, Oct 30, 2019 at 05:55:28AM +, imai.yoshik...@fujitsu.com wrote:
> And here is the patch which counts the wait event and measuring the wait
> event time. It is currently like POC and has several things to be improved.
Please note the patch tester complains about the latest patch:
pgsta
On Tue, Jan 15, 2019 at 2:14, Tsunakawa, Takayuki <
tsunakawa.ta...@jp.fujitsu.com> wrote:
> From: Pavel Stehule [mailto:pavel.steh...@gmail.com]
> > the cumulated lock statistics maybe doesn't help with debugging - but it
> > is very good indicator of database (in production usage) health.
>
On Wed, Oct 30, 2019 at 5:51 AM, imai.yoshik...@fujitsu.com wrote:
> The overhead induced by collecting wait event info has been discussed since
> old times, but I couldn't find any actual
> measurement results, so I want to measure its overhead.
And here is the patch which counts the wait event an
Hi,
On Tue, Jan 15, 2019 at 1:14 AM, Tsunakawa, Takayuki wrote:
[ ... absent for a long time ]
I read the discussions of this thread.
If we want wait event info, we can currently sample pg_stat_activity
and derive a pseudo total duration per wait event.
(I understand wait event sampling does
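The pseudo total duration mentioned above is just "samples that caught the wait event" times the sampling interval. A deterministic toy model (synthetic millisecond timeline, illustrative numbers only) shows both why it works for long waits and why it returns zero for waits shorter than the interval:

```c
/* One wait occurrence on a synthetic millisecond timeline. */
typedef struct { int start_ms; int end_ms; } Wait;

/* Estimate total wait time: count the samples (taken every interval_ms
 * over [0, horizon_ms)) that land inside any wait, then multiply the
 * hit count by the sampling interval. */
int sampled_wait_ms(const Wait *waits, int n, int horizon_ms, int interval_ms) {
    int hits = 0;
    for (int t = 0; t < horizon_ms; t += interval_ms) {
        for (int i = 0; i < n; i++) {
            if (t >= waits[i].start_ms && t < waits[i].end_ms) {
                hits++;
                break;
            }
        }
    }
    return hits * interval_ms;
}
```

Here a 250 ms wait sampled every 10 ms is estimated at exactly 250 ms, while a 4 ms wait that falls between two sample points is never seen at all.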
From: Pavel Stehule [mailto:pavel.steh...@gmail.com]
> the cumulated lock statistics maybe doesn't help with debugging - but it
> is very good indicator of database (in production usage) health.
I think it will help both. But I don't think sampling will be as helpful
as the precise lock sta
On Fri, Jan 11, 2019 at 2:10, Tsunakawa, Takayuki <
tsunakawa.ta...@jp.fujitsu.com> wrote:
> From: Robert Haas [mailto:robertmh...@gmail.com]
> > My theory is that the number of wait events is NOT useful information,
> > or at least not nearly as useful as the results of a sampling approach.
> >
From: Robert Haas [mailto:robertmh...@gmail.com]
> My theory is that the number of wait events is NOT useful information,
> or at least not nearly as useful as the results of a sampling approach.
> The data that LWLOCK_STATS produce are downright misleading -- they
> lead you to think that the bottlen
On Thu, Jan 10, 2019 at 8:42 PM, Robert Haas wrote:
Thanks for comments.
>or at least not nearly as useful as the results of a sampling approach.
I agree with your opinion.
Because it can't be asserted that the wait event is a bottleneck just because
the number of wait events is large.
The same thin
On Thu, Dec 20, 2018 at 8:48 PM Yotsunaga, Naoki
wrote:
> If so, is not that the number of wait events is useful information?
My theory is that the number of wait events is NOT useful information,
or at least not nearly as useful as the results of a sampling approach.
The data that LWLOCK_STATS prod
From: Adrien NAYRAT [mailto:adrien.nay...@anayrat.info]
> FYI, wait events have been added in PoWA by using the pg_wait_sampling
> extension:
> https://rjuju.github.io/postgresql/2018/07/09/wait-events-support-for-
> powa.html
>
> pg_wait_sampling samples the wait events in shared memory and PoWA stor
On 1/7/19 6:34 AM, Tsunakawa, Takayuki wrote:
1. Doesn't provide precise data
Sampling could miss intermittent short waits, e.g., buffer content lock waits
during checkpoints. This might make it difficult or impossible to solve
transient performance problems, such as infrequent 100 millisecond
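The short-wait objection above can be made quantitative with a back-of-the-envelope model (my own, not from the thread): assume a single wait of duration d < T whose start is uniformly distributed relative to a sampling grid of period T. The wait then covers exactly one grid point with probability d/T and none otherwise, so

```latex
P(\text{wait observed}) = \frac{d}{T},
\qquad
E[\hat d] = T \cdot \frac{d}{T} = d .
```

The estimator is unbiased on average, but for d much smaller than T almost every individual occurrence is invisible: a 100 ms wait sampled once per second is missed 9 times out of 10, and when it is caught it is over-reported as a full second. Accumulated counters have no such variance, which is exactly the trade-off argued over throughout this thread.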
Hi all,
I think sampling like Oracle ASH should work for the DBA to find probable
bottlenecks in many cases (so I hope PostgreSQL will incorporate it...). On
the other hand, it seems to have the following disadvantages, some of which
others have already pointed out:
1. Doesn't provide precis
From: Yotsunaga, Naoki [mailto:yotsunaga.na...@jp.fujitsu.com]
> By the way, you can see the number of wait events with "LWLOCK_STATS", right?
> Is this function implemented because it is necessary to know the number
> of waiting events for investigation?
> If so, is not that the number of wait eve
On Wed, Nov 21, 2018 at 9:27 PM, Bruce Momjian wrote:
Hi, thank you for the information.
I understood that sampling is effective for investigation of waiting events.
By the way, you can see the number of wait events with "LWLOCK_STATS", right?
Is this function implemented because it is necessary
On Tue, Nov 6, 2018 at 04:26:03AM +, Yotsunaga, Naoki wrote:
> On Sat, Nov 3, 2018 at 1:28 AM, Phil Florent wrote:
>
>
>
> >2) it consumes system resources
>
> While the system is running, you are always sampling system information, do
> not
> you? Like Oracle ASH.
>
> If so, does sampl
On Sun, Oct 28, 2018 at 6:39 PM, legrand legrand wrote:
Hi, Thanks for comments.
I had overlooked the reply from you.
>You are right, sampling has to be "tuned" regarding the event(s) you want to
>catch.
I see. For tuning, you need to know the length of processing you want to
sample?
> May I
On Mon, Nov 5, 2018 at 4:26 PM, Naoki Yotsunaga wrote:
>>2) it consumes system resources
>While the system is running, you are always sampling system information, do
>not you? Like Oracle ASH.
I don't fully understand how sampling is used.
In which scenario do you use sampling? Or is it both sce
On Sat, Nov 3, 2018 at 1:28 AM, Phil Florent wrote:
>2) it consumes system resources
While the system is running, you are always sampling system information, do not
you? Like Oracle ASH.
If so, does sampling have no significant impact on performance? Even if the
interval is 0.01 s or more.
>The
On Mon, Oct 29, 2018 at 1:52 AM, Phil Florent wrote:
Hi, thank you for comments.
>Yes you will be able to solve bottlenecks with sampling. In interactive mode,
>a 1s interval is probably too large. I use 0s1 - 0s01 with my tool and it is
>normally OK.
With the tool you are using, can you sampl
"snapper" and it is also based on
sampling.
Best regards
Phil
From: Yotsunaga, Naoki
Sent: Monday, October 29, 2018 02:20
To: 'Phil Florent'; 'Michael Paquier'
Cc: 'Tomas Vondra'; 'pgsql-hackers@lists.postgresql.org
Hello,
You are right, sampling has to be "tuned" regarding the event(s) you want to
catch.
Sampling at a 1-second interval is fine for treatments that take hours, but
not enough for analyses at the minute or second scale.
May I invite you to try it, using PASH-viewer (github) with pgsentinel
(github).
Cha
On Thu, Oct 4, 2018 at 8:22 PM, Yotsunaga Naoki wrote:
Hi, I have thought about your statistics comment once again. In the case
of sampling, are there enough statistics to investigate?
In the case of a long SQL query, I think it is possible to obtain a sufficient
number of samples.
However, in
Bertrand DROUVOT wrote
On Thu, Oct 4, 2018 at 0:54 AM, Phil Florent wrote:
Phil, Michael, I appreciate your polite comments.
I understand as follows.
We can find it if we shorten the sampling interval, but a lot of information
comes out.
# The balance is important.
Also, it is not good unless we have enough samples.
A
ctivty() was not called by workers in
parallel index creation)
Best regards
Phil
From: Michael Paquier
Sent: Thursday, October 4, 2018 12:58
To: Phil Florent
Cc: Yotsunaga, Naoki; Tomas Vondra; pgsql-hackers@lists.postgresql.org
Subject: Re: [Proposal] Add accumulated statistics for wait event
On
On Thu, Oct 04, 2018 at 09:32:37AM +, Phil Florent wrote:
> I am a DB beginner, so please tell me. It says that you can find
> events that are bottlenecks in sampling, but as you saw above, you can
> not find events shorter than the sampling interval, right?
Yes, which is why it would be as si
Best regards
Phil
From: Yotsunaga, Naoki
Sent: Thursday, October 4, 2018 10:31
To: 'Michael Paquier'; Phil Florent
Cc: Tomas Vondra; pgsql-hackers@lists.postgresql.org
Subject: RE: [Proposal] Add accumulated statistics for wait event
On Thu, July 26, 2018 at 1:25 AM, Michael Paquier wrote:
> Even if you have spiky workloads, sampling may miss those, but even with
> adding counters for each event
> you would need to query the table holding the counters at an insane frequency
> to be able to perhaps get
> something out of it a
; pgsql-hackers@lists.postgresql.org
Subject: Re: [Proposal] Add accumulated statistics for wait event
On Tue, Jul 24, 2018 at 04:23:03PM +, Phil Florent wrote:
> It loses non meaningful details and it's in fact a good point. In this
> example, sampling will definitely find the caus
Hello Guys,
As you mentioned Oracle like active session history sampling in this
thread, I just want to let you know that I am working on a brand new
extension to provide this feature.
You can find the extension here: https://github.com/pgsentinel/pgsentinel
Basically, you could see it as sampli
On Tue, Jul 24, 2018 at 04:23:03PM +, Phil Florent wrote:
> It loses non meaningful details and it's in fact a good point. In this
> example, sampling will definitely find the cause and won't cost
> resources.
The higher the sampling frequency, the more details you get, with the
most load on t
identify the
bottleneck(s)...
Best regards
Phil
From: Tomas Vondra
Sent: Tuesday, July 24, 2018 17:45
To: pgsql-hackers@lists.postgresql.org
Subject: Re: [Proposal] Add accumulated statistics for wait event
On 07/24/2018 12:06 PM, MyungKyu LIM wrote:
>
On 07/24/2018 12:06 PM, MyungKyu LIM wrote:
2018-07-23 16:53 (GMT+9), Michael Paquier wrote:
On Mon, Jul 23, 2018 at 04:04:42PM +0900, 임명규 wrote:
This proposal is about recording additional statistics of wait events.
I have comments about your patch. First, I don't think that you need
On 07/23/2018 03:57 PM, Tom Lane wrote:
Michael Paquier writes:
This does not need a configure switch.
It probably is there because the OP realizes that most people wouldn't
accept having this code compiled in.
What's the performance penalty? I am pretty sure that this is
measurable as
From: MyungKyu LIM
Sent: Tuesday, July 24, 2018 12:10
To: Alexander Korotkov; pgsql-hack...@postgresql.org
Cc: Woosung Sohn; DoHyung HONG
Subject: RE: Re: [Proposal] Add accumulated statistics for wait event
> On Mon, Jul 23, 2018 at 10:53 AM Michael Paquier wrote:
>> What's the performance penalty? I am pretty sure that this is
>> measurable as wait events are stored for a backend for each I/O
>> operation as well, and you are calling a C routine within an inlined
>> function which is designed to
2018-07-23 16:53 (GMT+9), Michael Paquier wrote:
> On Mon, Jul 23, 2018 at 04:04:42PM +0900, 임명규 wrote:
>> This proposal is about recording additional statistics of wait events.
> I have comments about your patch. First, I don't think that you need to
> count precisely the number of wait events
Michael Paquier writes:
> This does not need a configure switch.
It probably is there because the OP realizes that most people wouldn't
accept having this code compiled in.
> What's the performance penalty? I am pretty sure that this is
> measurable as wait events are stored for a backend for e
On Mon, Jul 23, 2018 at 10:53 AM Michael Paquier wrote:
> What's the performance penalty? I am pretty sure that this is
> measurable as wait events are stored for a backend for each I/O
> operation as well, and you are calling a C routine within an inlined
> function which is designed to be light
Hi,
that will be a great feature.
On 23.07.2018 10:53, Michael Paquier wrote:
I have comments about your patch. First, I don't think that you need to
count precisely the number of wait events triggered as usually when it
comes to analyzing a workload's bottleneck what counts is a periodic
*sa
> This proposal is about recording additional statistics of wait events.
> The pg_stat_activity view is very useful in analyzing performance
> issues.
> But it is difficult to get detailed information about wait events
> when you need to dive deep into performance analysis.
> It is because
On Mon, Jul 23, 2018 at 04:04:42PM +0900, 임명규 wrote:
> This proposal is about recording additional statistics of wait events.
You should avoid sending things in html format, text format being
recommended on those mailing lists... The patch applies after using
patch -p0 by the way.
I would recomm