On 4/7/25 6:41 PM, Melanie Plageman wrote:
On Mon, Feb 3, 2025 at 12:37 AM Sami Imseih wrote:
I started looking at this, and I like the idea.
Thanks for taking a look!
A few comments: I don't understand what 0002 is. For starters, the
commit message says something about pg_stat_database, and
On Mon, Feb 3, 2025 at 12:37 AM Sami Imseih wrote:
>
> Besides that, I think this is ready for committer.
I started looking at this, and I like the idea.
A few comments: I don't understand what 0002 is. For starters, the
commit message says something about pg_stat_database, and there are no
chan
On 2/3/25 6:36 AM, Sami Imseih wrote:
As far as the current set of patches, I had some other changes that
I missed earlier: the indentation of the calls to LogParallelWorkersIfNeeded
and the comment for the LogParallelWorkersIfNeeded function. I also reworked
the setup of the GUC as it was not set up the
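For illustration, here is a minimal sketch of what such a helper could
look like (the function name comes from the patch; the signature, enum
names, and variable names are assumptions):

static void
LogParallelWorkersIfNeeded(int log_setting, int nworkers_planned,
                           int nworkers_launched)
{
    /* "none" disables this logging entirely */
    if (log_setting == LOG_PARALLEL_WORKERS_NONE)
        return;

    /* "shortage" only logs when fewer workers were launched than planned */
    if (log_setting == LOG_PARALLEL_WORKERS_SHORTAGE &&
        nworkers_launched >= nworkers_planned)
        return;

    ereport(LOG,
            (errmsg("launched %d parallel workers (planned: %d)",
                    nworkers_launched, nworkers_planned)));
}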
> The "story" I have in mind is: I need to audit an instance I know
> nothing about. I ask the client to adapt the logging parameters for
> pgbadger (including this one), collect the logs and generate a report
> for the said period to have a broad overview of what is happening.
Let's see if anyon
On 1/29/25 12:41 AM, Sami Imseih wrote:
There will be both an INFO (existing behavior) and a LOG (new behavior).
This seems wrong to me and there should only really be one
mechanism to log parallel workers for utility statements.
Others may have different opinions.
In the use case you describ
> I feel that observability is important, and I don't understand why we
> would want to have the information for only a portion of the
> functionality's usage (even if it's the most important).
In my opinion, the requirement for parallel usage in
the utility statement is different. In fact, I thin
Here is a new set of patches.
The following changed:
* rebase
* simplify the log message to go back to "launched X parallel workers
(planned: Y)"
* rename the "failure" configuration item to "shortage".
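For illustration, with "shortage" enabled a log line would look
something like this (the numbers are invented; the prefix depends on
log_line_prefix):

LOG:  launched 2 parallel workers (planned: 4)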
On 1/3/25 17:24, Sami Imseih wrote:
> Maintenance work is usually planned, so if queries
> * Centralization of logs: The maintenance operation can be planned in
> cron jobs. In that context, it could certainly be logged separately via
> the script. However, when I am doing an audit on a client database, I
> think it's useful to have all the relevant information in PostgreSQL logs.
Mai
Hi, thank you for the review and sorry for the delayed answer.
On 12/28/24 05:17, Sami Imseih wrote:
> Thinking about this further, it seems to me this logging only makes sense
> for parallel query and not maintenance commands. This is because
> for the latter case, the commands are executed man
I missed one more point earlier.
I don't think "failure" is a good name for the setting as it's
a bit too harsh. What we really have is a "shortage" of
workers.
Instead of
+{"failure", LOG_PARALLEL_WORKERS_FAILURE, false},
what about:
{"shortage", LOG_PARALLEL_WORKERS_SHORTAGE, false},
Re
Thanks for rebasing.
I took a look and have some high level comments.
1/ The way the patches stand now, different parallel commands
will result in a different formatted log line. This may be
OK for reading the logs manually, but will be hard to
consume via tools that parse logs and if we support
This is just a rebase.
As stated before, I added some information to the error message for
parallel queries as an experiment. It can be removed if you're not
convinced.
---
Benoit Lobréau
Consultant
http://dalibo.com
Hi,
On Sat 19 Oct 2024 at 06:46, Benoit Lobréau wrote:
> On 10/15/24 09:52, Benoit Lobréau wrote:
> > Thank you, it's a lot cleaner that way.
> > I'll add this asap.
>
> This is an updated version with the suggested changes.
>
>
AFAICT, Benoit answered all questions and requests. Is there
On 10/15/24 09:52, Benoit Lobréau wrote:
Thank you, it's a lot cleaner that way.
I'll add this asap.
This is an updated version with the suggested changes.
--
Benoit Lobréau
Consultant
http://dalibo.com
On 10/14/24 22:42, Alena Rybakina wrote:
Attached is the correct version of the patch.
Thank you, it's a lot cleaner that way.
I'll add this asap.
--
Benoit Lobréau
Consultant
http://dalibo.com
Sorry, I was in a hurry and didn't check my patch properly.
On 14.10.2024 23:20, Alena Rybakina wrote:
On 28.08.2024 15:58, Benoit Lobréau wrote:
Hi,
Here is a new version of the patch. Sorry for the long delay, I was
hit by a motivation drought and was quite busy otherwise.
The guc is now
On 28.08.2024 15:58, Benoit Lobréau wrote:
Hi,
Here is a new version of the patch. Sorry for the long delay, I was
hit by a motivation drought and was quite busy otherwise.
The guc is now called `log_parallel_workers` and has three possible
values:
* "none": disables logging
* "all": logs
This is a rebased version.
I have split queries, vacuum and index creation in different patches.
I have also split the declarations that are in common with the
pg_stat_database patch.
--
Benoit Lobréau
Consultant
http://dalibo.com
Here is a new version that fixes the aforementioned problems.
If this patch is accepted in this form, the counters could be used for
the patch in pg_stat_database. [1]
[1]
https://www.postgresql.org/message-id/flat/783bc7f7-659a-42fa-99dd-ee0565644...@dalibo.com
--
Benoit Lobréau
Consultant
h
I found out in [1] that I am not correctly tracking the workers for
vacuum commands. I trap workers used by
parallel_vacuum_cleanup_all_indexes but not
parallel_vacuum_bulkdel_all_indexes.
Back to the drawing board.
[1]
https://www.postgresql.org/message-id/flat/783bc7f7-659a-42fa-99dd-ee056
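A possible fix, sketched under the assumption that the counters live in
ParallelVacuumState (the pvs field names are invented; the pcxt counters
exist in ParallelContext), is to accumulate after every
LaunchParallelWorkers() call so both passes are counted:

    /* after LaunchParallelWorkers(), reached from both
     * parallel_vacuum_bulkdel_all_indexes and
     * parallel_vacuum_cleanup_all_indexes */
    pvs->nworkers_planned += pvs->pcxt->nworkers_to_launch;
    pvs->nworkers_launched += pvs->pcxt->nworkers_launched;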
Hi,
Here is a new version of the patch. Sorry for the long delay, I was hit
by a motivation drought and was quite busy otherwise.
The guc is now called `log_parallel_workers` and has three possible values:
* "none": disables logging
* "all": logs parallel worker info for all parallel queries
On 4/8/24 10:05, Andrey M. Borodin wrote:
Hi Benoit!
This is kind reminder that this thread is waiting for your response.
CF entry [0] is in "Waiting on Author", I'll move it to July CF.
Hi, thanks for the reminder.
The past month has been hectic for me.
It should calm down by next week, at which
> On 29 Feb 2024, at 11:24, Benoit Lobréau wrote:
>
> Yes, thanks for the proposal, I'll work on it and report here.
Hi Benoit!
This is kind reminder that this thread is waiting for your response.
CF entry [0] is in "Waiting on Author", I'll move it to July CF.
Thanks!
Best regards, Andrey
On 2/27/24 15:09, Tomas Vondra wrote:
> That is certainly true, but it's not a new thing, I believe. IIRC we may
> not report statistics until the end of the transaction, so no progress
> updates, I'm not sure what happens if the transaction doesn't end correctly (e.g.
> backend dies, ...). Similarly for the t
On 2024-02-27 Tu 05:03, Benoit Lobréau wrote:
On 2/25/24 23:32, Peter Smith wrote:
Also, I don't understand how the word "draught" (aka "draft") makes
sense here -- I assume the intended word was "drought" (???).
yes, that was the intent, sorry about that. English is not my native
language
On 2/27/24 10:55, Benoit Lobréau wrote:
> On 2/25/24 20:13, Tomas Vondra wrote:
>> 1) name of the GUC
> ...
>> 2) logging just the failures provides an incomplete view
>> log_parallel_workers = {none | failures | all}
>> where "failures" only logs when at least one worker fails to start, and
>> "a
On 2/25/24 23:32, Peter Smith wrote:
Also, I don't understand how the word "draught" (aka "draft") makes
sense here -- I assume the intended word was "drought" (???).
yes, that was the intent, sorry about that. English is not my native
language and I was convinced the spelling was correct.
On 2/25/24 20:13, Tomas Vondra wrote:
> 1) name of the GUC
...
> 2) logging just the failures provides an incomplete view
> log_parallel_workers = {none | failures | all}
> where "failures" only logs when at least one worker fails to start, and
> "all" logs everything.
>
> AFAIK Sami made the sam
On Mon, Feb 26, 2024 at 6:13 AM Tomas Vondra
wrote:
>
> 1) name of the GUC
>
> I find the "log_parallel_worker_draught" to be rather unclear :-( Maybe
> it's just me and everyone else just immediately understands what this
> does / what will happen after it's set to "on", but I find it rather
> n
Hi,
I see the thread went a bit quiet, but I think it'd be very useful (and
desirable) to have this information in log. So let me share my thoughts
about the patch / how it should work.
The patch is pretty straightforward, I don't have any comments about the
code as is. Obviously, some of the fol
> I believe both cumulative statistics and logs are needed. Logs excel in
> pinpointing specific queries at precise times, while statistics provide
> a broader overview of the situation. Additionally, I often encounter
> situations where clients lack pg_stat_statements and can't restart their
>
On 10/11/23 17:26, Imseih (AWS), Sami wrote:
Thank you for resurrecting this thread.
Well, if you read Benoit's earlier proposal at [1] you'll see that he
does propose to have some cumulative stats; this LOG line he proposes
here is not a substitute for stats, but rather a complement. I don't
>> Currently explain ( analyze ) will give you the "Workers Planned"
>> and "Workers launched". Logging this via auto_explain is possible, so I am
>> not sure we need additional GUCs or debug levels for this info.
>>
>> -> Gather (cost=10430.00..10430.01 rows=2 width=8) (actual time=131.826..13
On 2023-Oct-09, Imseih (AWS), Sami wrote:
> > I think we should definitely be afraid of that. I am in favor of a
> > separate GUC.
I agree.
> Currently explain ( analyze ) will give you the "Workers Planned"
> and "Workers launched". Logging this via auto_explain is possible, so I am
> not sure
Hi,
This thread has been quiet for a while, but I'd like to share some
thoughts.
+1 to the idea of improving visibility into parallel worker saturation.
But overall, we should improve parallel processing visibility, so DBAs can
detect trends in parallel usage ( is the workload doing more parallel
On Tue, May 2, 2023 at 6:57 AM Amit Kapila wrote:
> We can output this at the LOG level to avoid running the server at
> DEBUG1 level. There are a few other cases where we are not able to
> spawn the worker or process and those are logged at the LOG level. For
> example, "could not fork autovacuum
On Mon, May 1, 2023 at 10:03 PM Robert Haas wrote:
>
> On Sat, Apr 22, 2023 at 7:06 AM Amit Kapila wrote:
> > I don't think introducing a GUC for this is a good idea. We can
> > directly output this message in the server log either at LOG or DEBUG1
> > level.
>
> Why not? It seems like something
On 5/1/23 18:33, Robert Haas wrote:
> Why not? It seems like something some people might want to log and
> others not. Running the whole server at DEBUG1 to get this information
> doesn't seem like a suitable answer.
Since the statement is also logged, it could spam the log with huge
queries, wh
On Sat, Apr 22, 2023 at 7:06 AM Amit Kapila wrote:
> I don't think introducing a GUC for this is a good idea. We can
> directly output this message in the server log either at LOG or DEBUG1
> level.
Why not? It seems like something some people might want to log and
others not. Running the whole s
On 4/22/23 13:06, Amit Kapila wrote:
I don't think introducing a GUC for this is a good idea. We can
directly output this message in the server log either at LOG or DEBUG1
level.
Hi,
Sorry for the delayed answer, I was away from my computer for a few
days. I don't mind removing the guc, but I
On Fri, Apr 21, 2023 at 6:34 PM Benoit Lobréau
wrote:
>
> Following my previous mail about adding stats on parallelism[1], this
> patch introduces the log_parallel_worker_draught parameter, which
> controls whether a log message is produced when a backend attempts to
> spawn a parallel worker but
Hi hackers,
Following my previous mail about adding stats on parallelism[1], this
patch introduces the log_parallel_worker_draught parameter, which
controls whether a log message is produced when a backend attempts to
spawn a parallel worker but fails due to insufficient worker slots. The
sho
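For context, a boolean GUC like this is normally registered through an
entry in the ConfigureNamesBool table; a sketch following that existing
pattern (not the actual patch; group and context are assumptions):

    {
        {"log_parallel_worker_draught", PGC_SUSET, LOGGING_WHAT,
            gettext_noop("Logs when a parallel worker could not be "
                         "launched due to insufficient worker slots."),
            NULL
        },
        &log_parallel_worker_draught,
        false,
        NULL, NULL, NULL
    },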