Hello,

This patch was briefly discussed in [1], and in more detail in [2]. It's based on another patch sent in 2022 (see [3]). It introduces seven new columns in pg_stat_statements:
* parallelized_queries_planned, number of times the query has been planned to be parallelized,
* parallelized_queries_launched, number of times the query has been executed with parallelization,
* parallelized_workers_planned, number of parallel workers planned for this query,
* parallelized_workers_launched, number of parallel workers launched for this query,
* parallelized_nodes, number of parallelized nodes,
* parallelized_nodes_all_workers, number of parallelized nodes which got all their requested workers,
* parallelized_nodes_no_worker, number of parallelized nodes which got no worker at all.

As Benoit said yesterday, the intent is to help administrators evaluate the use of parallel workers in their databases and tune the parallelism settings accordingly (a sketch of the kind of monitoring query this enables is included below, after my signature).

A test script (test2.sql) is attached. You can execute it with "psql -Xef test2.sql your_database" (your_database should not contain a t1 table, as it will be dropped and recreated). Here is its output, with some comments:

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
CREATE EXTENSION
SELECT pg_stat_statements_reset();
   pg_stat_statements_reset
-------------------------------
 2024-08-29 18:00:35.314557+02
(1 row)

DROP TABLE IF EXISTS t1;
DROP TABLE
CREATE TABLE t1 (id integer);
CREATE TABLE
INSERT INTO t1 SELECT generate_series(1, 10_000_000);
INSERT 0 10000000
VACUUM ANALYZE t1;
VACUUM
SELECT query, parallelized_queries_planned, parallelized_queries_launched,
       parallelized_workers_planned, parallelized_workers_launched,
       parallelized_nodes, parallelized_nodes_all_workers, parallelized_nodes_no_worker
FROM pg_stat_statements
WHERE query LIKE 'SELECT%t1%'
(0 rows)

SELECT * FROM t1 LIMIT 1;
 id
----
  1
(1 row)

SELECT pg_sleep(1);
SELECT query, parallelized_queries_planned, parallelized_queries_launched,
       parallelized_workers_planned, parallelized_workers_launched,
       parallelized_nodes, parallelized_nodes_all_workers, parallelized_nodes_no_worker
FROM pg_stat_statements
WHERE query LIKE 'SELECT%t1%'
-[ RECORD 1 ]------------------+--------------------------
query                          | SELECT * FROM t1 LIMIT $1
parallelized_queries_planned   | 0
parallelized_queries_launched  | 0
parallelized_workers_planned   | 0
parallelized_workers_launched  | 0
parallelized_nodes             | 0
parallelized_nodes_all_workers | 0
parallelized_nodes_no_worker   | 0

==> no parallelization

SELECT count(*) FROM t1;
  count
----------
 10000000
(1 row)

SELECT pg_sleep(1);
SELECT query, parallelized_queries_planned, parallelized_queries_launched,
       parallelized_workers_planned, parallelized_workers_launched,
       parallelized_nodes, parallelized_nodes_all_workers, parallelized_nodes_no_worker
FROM pg_stat_statements
WHERE query LIKE 'SELECT%t1%'
-[ RECORD 1 ]------------------+--------------------------
query                          | SELECT count(*) FROM t1
parallelized_queries_planned   | 1
parallelized_queries_launched  | 1
parallelized_workers_planned   | 2
parallelized_workers_launched  | 2
parallelized_nodes             | 1
parallelized_nodes_all_workers | 1
parallelized_nodes_no_worker   | 0
-[ RECORD 2 ]------------------+--------------------------
query                          | SELECT * FROM t1 LIMIT $1
parallelized_queries_planned   | 0
parallelized_queries_launched  | 0
parallelized_workers_planned   | 0
parallelized_workers_launched  | 0
parallelized_nodes             | 0
parallelized_nodes_all_workers | 0
parallelized_nodes_no_worker   | 0

==> one parallelized query
==> I have the default configuration, so max_parallel_workers_per_gather is 2
==> hence two planned and launched workers, and one node that got all its workers

SET max_parallel_workers_per_gather TO 5;
SET
SELECT count(*) FROM t1;
  count
----------
 10000000
(1 row)

SELECT pg_sleep(1);
SELECT query, parallelized_queries_planned, parallelized_queries_launched,
       parallelized_workers_planned, parallelized_workers_launched,
       parallelized_nodes, parallelized_nodes_all_workers, parallelized_nodes_no_worker
FROM pg_stat_statements
WHERE query LIKE 'SELECT%t1%'
-[ RECORD 1 ]------------------+--------------------------
query                          | SELECT count(*) FROM t1
parallelized_queries_planned   | 2
parallelized_queries_launched  | 2
parallelized_workers_planned   | 6
parallelized_workers_launched  | 6
parallelized_nodes             | 2
parallelized_nodes_all_workers | 2
parallelized_nodes_no_worker   | 0
-[ RECORD 2 ]------------------+--------------------------
query                          | SELECT * FROM t1 LIMIT $1
parallelized_queries_planned   | 0
parallelized_queries_launched  | 0
parallelized_workers_planned   | 0
parallelized_workers_launched  | 0
parallelized_nodes             | 0
parallelized_nodes_all_workers | 0
parallelized_nodes_no_worker   | 0

==> another parallelized execution of the same query
==> max_parallel_workers_per_gather is now 5, but the planner only picks 4 workers for a table of this size
==> hence four more workers, and one more node that got all its workers

The biggest issue with this patch is that it's unable to track workers for maintenance queries (CREATE INDEX for B-tree indexes, and VACUUM).

Documentation is done; tests are still missing. Once there's agreement on this patch, we'll work on the tests.

This has been a collective work with Benoit Lobréau, Jehan-Guillaume de Rorthais, and Franck Boudehen.

Thanks.

Regards.

[1] https://www.postgresql.org/message-id/flat/b4220d15-2e21-0e98-921b-b9892543cc93%40dalibo.com
[2] https://www.postgresql.org/message-id/flat/d657df20-c4bf-63f6-e74c-cb85a81d0383%40dalibo.com
[3] https://www.postgresql.org/message-id/flat/6acbe570-068e-bd8e-95d5-00c737b865e8%40gmail.com

--
Guillaume.
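To give an idea of how these columns could be consumed once the patch is applied, here is only a sketch of a monitoring query against the proposed 1.12 view (it is not part of the patch, nor of test2.sql): it lists statements that were planned to run in parallel but were executed with fewer workers than planned.

SELECT query,
       parallelized_queries_planned,
       parallelized_queries_launched,
       parallelized_workers_planned,
       parallelized_workers_launched,
       parallelized_workers_planned - parallelized_workers_launched AS workers_missing,
       parallelized_nodes_no_worker
FROM pg_stat_statements
WHERE parallelized_queries_planned > 0
ORDER BY workers_missing DESC
LIMIT 20;

A consistently high workers_missing value (or a non-zero parallelized_nodes_no_worker) would hint that the max_parallel_workers pool is exhausted at busy times, which is exactly the kind of tuning feedback the patch aims to provide.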
From 59fd586cac8f0bafb1fa66548424f2dab0b38f31 Mon Sep 17 00:00:00 2001 From: Guillaume Lelarge <guillaume.lela...@dalibo.com> Date: Wed, 28 Aug 2024 15:30:05 +0200 Subject: [PATCH] Add parallel columns to pg_stat_statements There are seven new columns: * parallelized_queries_planned (number of times the query has been planned to be parallelized), * parallelized_queries_launched (number of times the query has been executed with parallelization), * parallelized_workers_planned (number of parallel workers planned for this query), * parallelized_workers_launched (number of parallel workers launched for this query), * parallelized_nodes (number of parallelized nodes), * parallelized_nodes_all_workers (number of parallelized nodes which had all requested workers), * parallelized_nodes_no_worker (number of parallelized nodes which had no requested workers). These new columns will help to monitor and better configure query parallelization. --- contrib/pg_stat_statements/Makefile | 2 +- .../pg_stat_statements--1.11--1.12.sql | 80 +++++++++++++ .../pg_stat_statements/pg_stat_statements.c | 108 ++++++++++++++++-- .../pg_stat_statements.control | 2 +- doc/src/sgml/pgstatstatements.sgml | 63 ++++++++++ src/backend/executor/execUtils.c | 7 ++ src/backend/executor/nodeGather.c | 9 +- src/backend/executor/nodeGatherMerge.c | 8 ++ src/include/nodes/execnodes.h | 6 + 9 files changed, 275 insertions(+), 10 deletions(-) create mode 100644 contrib/pg_stat_statements/pg_stat_statements--1.11--1.12.sql diff --git a/contrib/pg_stat_statements/Makefile b/contrib/pg_stat_statements/Makefile index c19ccad77e..62f8df65b5 100644 --- a/contrib/pg_stat_statements/Makefile +++ b/contrib/pg_stat_statements/Makefile @@ -7,7 +7,7 @@ OBJS = \ EXTENSION = pg_stat_statements DATA = pg_stat_statements--1.4.sql \ - pg_stat_statements--1.10--1.11.sql \ + pg_stat_statements--1.11--1.12.sql pg_stat_statements--1.10--1.11.sql \ pg_stat_statements--1.9--1.10.sql pg_stat_statements--1.8--1.9.sql \ pg_stat_statements--1.7--1.8.sql pg_stat_statements--1.6--1.7.sql \ pg_stat_statements--1.5--1.6.sql pg_stat_statements--1.4--1.5.sql \ diff --git a/contrib/pg_stat_statements/pg_stat_statements--1.11--1.12.sql b/contrib/pg_stat_statements/pg_stat_statements--1.11--1.12.sql new file mode 100644 index 0000000000..6f4fe8be48 --- /dev/null +++ b/contrib/pg_stat_statements/pg_stat_statements--1.11--1.12.sql @@ -0,0 +1,80 @@ +/* contrib/pg_stat_statements/pg_stat_statements--1.11--1.12.sql */ + +-- complain if script is sourced in psql, rather than via ALTER EXTENSION +\echo Use "ALTER EXTENSION pg_stat_statements UPDATE TO '1.12'" to load this file. 
\quit + +/* First we have to remove them from the extension */ +ALTER EXTENSION pg_stat_statements DROP VIEW pg_stat_statements; +ALTER EXTENSION pg_stat_statements DROP FUNCTION pg_stat_statements(boolean); + +/* Then we can drop them */ +DROP VIEW pg_stat_statements; +DROP FUNCTION pg_stat_statements(boolean); + +/* Now redefine */ +CREATE FUNCTION pg_stat_statements(IN showtext boolean, + OUT userid oid, + OUT dbid oid, + OUT toplevel bool, + OUT queryid bigint, + OUT query text, + OUT plans int8, + OUT total_plan_time float8, + OUT min_plan_time float8, + OUT max_plan_time float8, + OUT mean_plan_time float8, + OUT stddev_plan_time float8, + OUT calls int8, + OUT total_exec_time float8, + OUT min_exec_time float8, + OUT max_exec_time float8, + OUT mean_exec_time float8, + OUT stddev_exec_time float8, + OUT rows int8, + OUT shared_blks_hit int8, + OUT shared_blks_read int8, + OUT shared_blks_dirtied int8, + OUT shared_blks_written int8, + OUT local_blks_hit int8, + OUT local_blks_read int8, + OUT local_blks_dirtied int8, + OUT local_blks_written int8, + OUT temp_blks_read int8, + OUT temp_blks_written int8, + OUT shared_blk_read_time float8, + OUT shared_blk_write_time float8, + OUT local_blk_read_time float8, + OUT local_blk_write_time float8, + OUT temp_blk_read_time float8, + OUT temp_blk_write_time float8, + OUT wal_records int8, + OUT wal_fpi int8, + OUT wal_bytes numeric, + OUT jit_functions int8, + OUT jit_generation_time float8, + OUT jit_inlining_count int8, + OUT jit_inlining_time float8, + OUT jit_optimization_count int8, + OUT jit_optimization_time float8, + OUT jit_emission_count int8, + OUT jit_emission_time float8, + OUT jit_deform_count int8, + OUT jit_deform_time float8, + OUT parallelized_queries_planned int8, + OUT parallelized_queries_launched int8, + OUT parallelized_workers_planned int8, + OUT parallelized_workers_launched int8, + OUT parallelized_nodes int8, + OUT parallelized_nodes_all_workers int8, + OUT parallelized_nodes_no_worker int8, + OUT stats_since timestamp with time zone, + OUT minmax_stats_since timestamp with time zone +) +RETURNS SETOF record +AS 'MODULE_PATHNAME', 'pg_stat_statements_1_12' +LANGUAGE C STRICT VOLATILE PARALLEL SAFE; + +CREATE VIEW pg_stat_statements AS + SELECT * FROM pg_stat_statements(true); + +GRANT SELECT ON pg_stat_statements TO PUBLIC; diff --git a/contrib/pg_stat_statements/pg_stat_statements.c b/contrib/pg_stat_statements/pg_stat_statements.c index 362d222f63..840ab0bccd 100644 --- a/contrib/pg_stat_statements/pg_stat_statements.c +++ b/contrib/pg_stat_statements/pg_stat_statements.c @@ -113,6 +113,7 @@ typedef enum pgssVersion PGSS_V1_9, PGSS_V1_10, PGSS_V1_11, + PGSS_V1_12, } pgssVersion; typedef enum pgssStoreKind @@ -204,6 +205,13 @@ typedef struct Counters int64 jit_emission_count; /* number of times emission time has been * > 0 */ double jit_emission_time; /* total time to emit jit code */ + int64 parallelized_queries_planned; /* # of times query was planned to use parallelism */ + int64 parallelized_queries_launched; /* # of times query was executed using parallelism */ + int64 parallelized_workers_planned; /* # of parallel workers planned */ + int64 parallelized_workers_launched; /* # of parallel workers launched */ + int64 parallelized_nodes; /* # of parallelized nodes */ + int64 parallelized_nodes_all_workers; /* # of parallelized nodes with all workers */ + int64 parallelized_nodes_no_worker; /* # of parallelized nodes with no workers */ } Counters; /* @@ -317,6 +325,7 @@ 
PG_FUNCTION_INFO_V1(pg_stat_statements_1_8); PG_FUNCTION_INFO_V1(pg_stat_statements_1_9); PG_FUNCTION_INFO_V1(pg_stat_statements_1_10); PG_FUNCTION_INFO_V1(pg_stat_statements_1_11); +PG_FUNCTION_INFO_V1(pg_stat_statements_1_12); PG_FUNCTION_INFO_V1(pg_stat_statements); PG_FUNCTION_INFO_V1(pg_stat_statements_info); @@ -347,7 +356,14 @@ static void pgss_store(const char *query, uint64 queryId, const BufferUsage *bufusage, const WalUsage *walusage, const struct JitInstrumentation *jitusage, - JumbleState *jstate); + JumbleState *jstate, + bool parallelized_queries_planned, + bool parallelized_queries_launched, + int parallelized_workers_planned, + int parallelized_workers_launched, + int parallelized_nodes, + int parallelized_nodes_all_workers, + int parallelized_nodes_no_worker); static void pg_stat_statements_internal(FunctionCallInfo fcinfo, pgssVersion api_version, bool showtext); @@ -867,7 +883,14 @@ pgss_post_parse_analyze(ParseState *pstate, Query *query, JumbleState *jstate) NULL, NULL, NULL, - jstate); + jstate, + false, + false, + 0, + 0, + 0, + 0, + 0); } /* @@ -952,7 +975,14 @@ pgss_planner(Query *parse, &bufusage, &walusage, NULL, - NULL); + NULL, + false, + false, + 0, + 0, + 0, + 0, + 0); } else { @@ -1085,7 +1115,14 @@ pgss_ExecutorEnd(QueryDesc *queryDesc) &queryDesc->totaltime->bufusage, &queryDesc->totaltime->walusage, queryDesc->estate->es_jit ? &queryDesc->estate->es_jit->instr : NULL, - NULL); + NULL, + queryDesc->plannedstmt->parallelModeNeeded, + queryDesc->estate->es_used_parallel_mode, + queryDesc->estate->es_parallelized_workers_planned, + queryDesc->estate->es_parallelized_workers_launched, + queryDesc->estate->es_parallelized_nodes, + queryDesc->estate->es_parallelized_nodes_all_workers, + queryDesc->estate->es_parallelized_nodes_no_worker); } if (prev_ExecutorEnd) @@ -1216,7 +1253,14 @@ pgss_ProcessUtility(PlannedStmt *pstmt, const char *queryString, &bufusage, &walusage, NULL, - NULL); + NULL, + false, + false, + 0, + 0, + 0, + 0, + 0); } else { @@ -1277,7 +1321,14 @@ pgss_store(const char *query, uint64 queryId, const BufferUsage *bufusage, const WalUsage *walusage, const struct JitInstrumentation *jitusage, - JumbleState *jstate) + JumbleState *jstate, + bool parallelized_queries_planned, + bool parallelized_queries_launched, + int parallelized_workers_planned, + int parallelized_workers_launched, + int parallelized_nodes, + int parallelized_nodes_all_workers, + int parallelized_nodes_no_worker) { pgssHashKey key; pgssEntry *entry; @@ -1479,6 +1530,20 @@ pgss_store(const char *query, uint64 queryId, entry->counters.jit_emission_count++; entry->counters.jit_emission_time += INSTR_TIME_GET_MILLISEC(jitusage->emission_counter); } + /* inc parallel counters */ + if (parallelized_queries_planned) + { + entry->counters.parallelized_queries_planned += 1; + } + if (parallelized_queries_launched) + { + entry->counters.parallelized_queries_launched += 1; + } + entry->counters.parallelized_workers_planned += parallelized_workers_planned; + entry->counters.parallelized_workers_launched += parallelized_workers_launched; + entry->counters.parallelized_nodes += parallelized_nodes; + entry->counters.parallelized_nodes_all_workers += parallelized_nodes_all_workers; + entry->counters.parallelized_nodes_no_worker += parallelized_nodes_no_worker; SpinLockRelease(&entry->mutex); } @@ -1546,7 +1611,8 @@ pg_stat_statements_reset(PG_FUNCTION_ARGS) #define PG_STAT_STATEMENTS_COLS_V1_9 33 #define PG_STAT_STATEMENTS_COLS_V1_10 43 #define PG_STAT_STATEMENTS_COLS_V1_11 49 -#define 
PG_STAT_STATEMENTS_COLS 49 /* maximum of above */ +#define PG_STAT_STATEMENTS_COLS_V1_12 56 +#define PG_STAT_STATEMENTS_COLS 56 /* maximum of above */ /* * Retrieve statement statistics. @@ -1558,6 +1624,16 @@ pg_stat_statements_reset(PG_FUNCTION_ARGS) * expected API version is identified by embedding it in the C name of the * function. Unfortunately we weren't bright enough to do that for 1.1. */ +Datum +pg_stat_statements_1_12(PG_FUNCTION_ARGS) +{ + bool showtext = PG_GETARG_BOOL(0); + + pg_stat_statements_internal(fcinfo, PGSS_V1_12, showtext); + + return (Datum) 0; +} + Datum pg_stat_statements_1_11(PG_FUNCTION_ARGS) { @@ -1702,6 +1778,10 @@ pg_stat_statements_internal(FunctionCallInfo fcinfo, if (api_version != PGSS_V1_11) elog(ERROR, "incorrect number of output arguments"); break; + case PG_STAT_STATEMENTS_COLS_V1_12: + if (api_version != PGSS_V1_12) + elog(ERROR, "incorrect number of output arguments"); + break; default: elog(ERROR, "incorrect number of output arguments"); } @@ -1939,6 +2019,19 @@ pg_stat_statements_internal(FunctionCallInfo fcinfo, { values[i++] = Int64GetDatumFast(tmp.jit_deform_count); values[i++] = Float8GetDatumFast(tmp.jit_deform_time); + } + if (api_version >= PGSS_V1_12) + { + values[i++] = Int64GetDatumFast(tmp.parallelized_queries_planned); + values[i++] = Int64GetDatumFast(tmp.parallelized_queries_launched); + values[i++] = Int64GetDatumFast(tmp.parallelized_workers_planned); + values[i++] = Int64GetDatumFast(tmp.parallelized_workers_launched); + values[i++] = Int64GetDatumFast(tmp.parallelized_nodes); + values[i++] = Int64GetDatumFast(tmp.parallelized_nodes_all_workers); + values[i++] = Int64GetDatumFast(tmp.parallelized_nodes_no_worker); + } + if (api_version >= PGSS_V1_11) + { values[i++] = TimestampTzGetDatum(stats_since); values[i++] = TimestampTzGetDatum(minmax_stats_since); } @@ -1951,6 +2044,7 @@ pg_stat_statements_internal(FunctionCallInfo fcinfo, api_version == PGSS_V1_9 ? PG_STAT_STATEMENTS_COLS_V1_9 : api_version == PGSS_V1_10 ? PG_STAT_STATEMENTS_COLS_V1_10 : api_version == PGSS_V1_11 ? PG_STAT_STATEMENTS_COLS_V1_11 : + api_version == PGSS_V1_12 ? PG_STAT_STATEMENTS_COLS_V1_12 : -1 /* fail if you forget to update this assert */ )); tuplestore_putvalues(rsinfo->setResult, rsinfo->setDesc, values, nulls); diff --git a/contrib/pg_stat_statements/pg_stat_statements.control b/contrib/pg_stat_statements/pg_stat_statements.control index 8a76106ec6..d45ebc12e3 100644 --- a/contrib/pg_stat_statements/pg_stat_statements.control +++ b/contrib/pg_stat_statements/pg_stat_statements.control @@ -1,5 +1,5 @@ # pg_stat_statements extension comment = 'track planning and execution statistics of all SQL statements executed' -default_version = '1.11' +default_version = '1.12' module_pathname = '$libdir/pg_stat_statements' relocatable = true diff --git a/doc/src/sgml/pgstatstatements.sgml b/doc/src/sgml/pgstatstatements.sgml index 9b0aff73b1..3fe84ba994 100644 --- a/doc/src/sgml/pgstatstatements.sgml +++ b/doc/src/sgml/pgstatstatements.sgml @@ -527,6 +527,69 @@ </para></entry> </row> + <row> + <entry role="catalog_table_entry"><para role="column_definition"> + <structfield>parallelized_queries_planned</structfield> <type>bigint</type> + </para> + <para> + Number of times the statement was planned to use parallelism. 
+ </para></entry> + </row> + + <row> + <entry role="catalog_table_entry"><para role="column_definition"> + <structfield>parallelized_queries_launched</structfield> <type>bigint</type> + </para> + <para> + Number of times the statement was executed using parallelism. + </para></entry> + </row> + + <row> + <entry role="catalog_table_entry"><para role="column_definition"> + <structfield>parallelized_workers_planned</structfield> <type>bigint</type> + </para> + <para> + Number of parallel workers planned for the query. + </para></entry> + </row> + + <row> + <entry role="catalog_table_entry"><para role="column_definition"> + <structfield>parallelized_workers_launched</structfield> <type>bigint</type> + </para> + <para> + Number of parallel workers launched for the query. + </para></entry> + </row> + + <row> + <entry role="catalog_table_entry"><para role="column_definition"> + <structfield>parallelized_nodes</structfield> <type>bigint</type> + </para> + <para> + Number of parallelized nodes. + </para></entry> + </row> + + <row> + <entry role="catalog_table_entry"><para role="column_definition"> + <structfield>parallelized_nodes_all_workers</structfield> <type>bigint</type> + </para> + <para> + Number of parallelized nodes that got all requested workers. + </para></entry> + </row> + + <row> + <entry role="catalog_table_entry"><para role="column_definition"> + <structfield>parallelized_nodes_no_worker</structfield> <type>bigint</type> + </para> + <para> + Number of parallelized nodes that got no worker at all. + </para></entry> + </row> + <row> <entry role="catalog_table_entry"><para role="column_definition"> <structfield>stats_since</structfield> <type>timestamp with time zone</type> diff --git a/src/backend/executor/execUtils.c b/src/backend/executor/execUtils.c index 5737f9f4eb..3b0715d604 100644 --- a/src/backend/executor/execUtils.c +++ b/src/backend/executor/execUtils.c @@ -158,10 +158,17 @@ CreateExecutorState(void) estate->es_sourceText = NULL; estate->es_use_parallel_mode = false; + estate->es_used_parallel_mode = false; estate->es_jit_flags = 0; estate->es_jit = NULL; + estate->es_parallelized_nodes = 0; + estate->es_parallelized_nodes_all_workers = 0; + estate->es_parallelized_nodes_no_worker = 0; + estate->es_parallelized_workers_launched = 0; + estate->es_parallelized_workers_planned = 0; + /* * Return the executor state structure */ diff --git a/src/backend/executor/nodeGather.c b/src/backend/executor/nodeGather.c index 5d4ffe989c..f1f772ecb6 100644 --- a/src/backend/executor/nodeGather.c +++ b/src/backend/executor/nodeGather.c @@ -181,7 +181,13 @@ ExecGather(PlanState *pstate) LaunchParallelWorkers(pcxt); /* We save # workers launched for the benefit of EXPLAIN */ node->nworkers_launched = pcxt->nworkers_launched; - + if(pcxt->nworkers_launched > 0) + estate->es_used_parallel_mode = true; + estate->es_parallelized_nodes += 1; + estate->es_parallelized_workers_launched += pcxt->nworkers_launched; + estate->es_parallelized_workers_planned += pcxt->nworkers_to_launch; + if (pcxt->nworkers_to_launch == pcxt->nworkers_launched) + estate->es_parallelized_nodes_all_workers += 1; /* Set up tuple queue readers to read the results. */ if (pcxt->nworkers_launched > 0) { @@ -198,6 +204,7 @@ ExecGather(PlanState *pstate) /* No workers? Then never mind. 
*/ node->nreaders = 0; node->reader = NULL; + estate->es_parallelized_nodes_no_worker += 1; } node->nextreader = 0; } diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c index 45f6017c29..5678b25a86 100644 --- a/src/backend/executor/nodeGatherMerge.c +++ b/src/backend/executor/nodeGatherMerge.c @@ -222,6 +222,13 @@ ExecGatherMerge(PlanState *pstate) LaunchParallelWorkers(pcxt); /* We save # workers launched for the benefit of EXPLAIN */ node->nworkers_launched = pcxt->nworkers_launched; + if(pcxt->nworkers_launched > 0) + estate->es_used_parallel_mode = true; + estate->es_parallelized_nodes += 1; + estate->es_parallelized_workers_launched += pcxt->nworkers_launched; + estate->es_parallelized_workers_planned += pcxt->nworkers_to_launch; + if (pcxt->nworkers_to_launch == pcxt->nworkers_launched) + estate->es_parallelized_nodes_all_workers += 1; /* Set up tuple queue readers to read the results. */ if (pcxt->nworkers_launched > 0) @@ -239,6 +246,7 @@ ExecGatherMerge(PlanState *pstate) /* No workers? Then never mind. */ node->nreaders = 0; node->reader = NULL; + estate->es_parallelized_nodes_no_worker += 1; } } diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h index af7d8fd1e7..26baa444a2 100644 --- a/src/include/nodes/execnodes.h +++ b/src/include/nodes/execnodes.h @@ -701,6 +701,12 @@ typedef struct EState struct EPQState *es_epq_active; bool es_use_parallel_mode; /* can we use parallel workers? */ + bool es_used_parallel_mode; /* was executed in parallel */ + int es_parallelized_workers_launched; + int es_parallelized_workers_planned; + int es_parallelized_nodes; /* # of parallelized nodes */ + int es_parallelized_nodes_all_workers; /* # of nodes with all workers launched */ + int es_parallelized_nodes_no_worker; /* # of nodes with no workers launched */ /* The per-query shared memory area to use for parallel execution. */ struct dsa_area *es_query_dsa; -- 2.46.0
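Since tests are still missing, here is a rough idea (untested, and not part of the patch) of how the no-worker path could be exercised from plain SQL, reusing the t1 table from test2.sql: keep max_parallel_workers_per_gather above zero so the planner still chooses a parallel plan, but set max_parallel_workers to 0 so that, if I read the launch-time limit correctly, no worker can actually be started.

SET max_parallel_workers_per_gather TO 2;  -- a parallel plan is still chosen
SET max_parallel_workers TO 0;             -- but no worker can be launched at execution time
SELECT count(*) FROM t1;
SELECT pg_sleep(1);
SELECT query, parallelized_workers_planned, parallelized_workers_launched,
       parallelized_nodes, parallelized_nodes_no_worker
FROM pg_stat_statements
WHERE query LIKE 'SELECT count%';

If the counters behave as described above, this should report workers planned but none launched, with parallelized_nodes_no_worker incremented for the Gather node.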