Hi,
Thank you very much for your comments!
In this email I'll reply to both of your recent messages.

On Fri, Aug 8, 2025 at 6:38 AM Masahiko Sawada <sawada.m...@gmail.com> wrote:
>
> Thank you for updating the patch. Here are some review comments.
>
> +        /* Release all launched (i.e. reserved) parallel autovacuum workers. */
> +        if (AmAutoVacuumWorkerProcess())
> +                AutoVacuumReleaseParallelWorkers(nlaunched_workers);
> +
>
> We release the reserved worker in parallel_vacuum_end(). However,
> parallel_vacuum_end() is called only once at the end of vacuum. I
> think we need to release the reserved worker after index vacuuming or
> cleanup, otherwise we would end up holding the reserved workers until
> the end of vacuum even if we invoke index vacuuming multiple times.
>

Yep, you are right. It was easy to miss because autovacuum typically
needs only one cycle to process a table. Since both index vacuum and
index cleanup use the parallel_vacuum_process_all_indexes function,
I think both reserving and releasing should be placed there, roughly as
sketched below.
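To be concrete, here is a rough sketch of the intended shape (simplified,
and using the v10 function names):

    /* inside parallel_vacuum_process_all_indexes(), once per index pass */
    if (AmAutoVacuumWorkerProcess() && nworkers > 0)
        nworkers = AutoVacuumReserveParallelWorkers(nworkers);

    LaunchParallelWorkers(pvs->pcxt);

    /* ... wait for all launched workers to finish ... */

    /*
     * Release at the end of this pass (not in parallel_vacuum_end()), so
     * repeated index passes don't hold workers for the whole vacuum.
     */
    if (AmAutoVacuumWorkerProcess() && pvs->pcxt->nworkers_launched > 0)
        AutoVacuumReleaseParallelWorkers(pvs->pcxt->nworkers_launched);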

> ---
> +void
> +assign_autovacuum_max_parallel_workers(int newval, void *extra)
> +{
> +        autovacuum_max_parallel_workers = Min(newval, max_worker_processes);
> +}
>
> I don't think we need the assign hook for this GUC parameter. We can
> internally cap the maximum value by max_worker_processes like other
> GUC parameters such as max_parallel_maintenance_workers and
> max_parallel_workers.

Ok, I get it - we don't want to raise a configuration error for no serious
reason. Actually, we are already capping autovacuum_max_parallel_workers
internally by max_worker_processes (inside the
parallel_vacuum_compute_workers function). This is the same behavior that
max_parallel_maintenance_workers gets.

I'll get rid of the assign hook and add one more cap inside the autovacuum
shmem initialization: since max_worker_processes is a PGC_POSTMASTER
parameter, av_freeParallelWorkers must never exceed its value.
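Something like this in AutoVacuumShmemInit() (a sketch; this is what v10
ends up doing):

    AutoVacuumShmem->av_maxParallelWorkers =
        Min(autovacuum_max_parallel_workers, max_worker_processes);
    AutoVacuumShmem->av_freeParallelWorkers =
        AutoVacuumShmem->av_maxParallelWorkers;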

>
> ---
> +        /* Refresh autovacuum_max_parallel_workers paremeter */
> +        CHECK_FOR_INTERRUPTS();
> +        if (ConfigReloadPending)
> +        {
> +                ConfigReloadPending = false;
> +                ProcessConfigFile(PGC_SIGHUP);
> +        }
> +
> +        LWLockAcquire(AutovacuumLock, LW_EXCLUSIVE);
> +
> +        /*
> +         * If autovacuum_max_parallel_workers parameter was reduced during
> +         * parallel autovacuum execution, we must cap available
> workers number by
> +         * its new value.
> +         */
> +        AutoVacuumShmem->av_freeParallelWorkers =
> +                Min(AutoVacuumShmem->av_freeParallelWorkers + nworkers,
> +                        autovacuum_max_parallel_workers);
> +
> +        LWLockRelease(AutovacuumLock);
>
> I think another race condition could occur; suppose
> autovacuum_max_parallel_workers is set to '5' and one autovacuum
> worker reserved 5 workers, meaning that
> AutoVacuumShmem->av_freeParallelWorkers is 0. Then, the user changes
> autovacuum_max_parallel_workers to 3 and reloads the conf file right
> after the autovacuum worker checks the interruption. The launcher
> processes calls adjust_free_parallel_workers() but
> av_freeParallelWorkers remains 0, and the autovacuum worker increments
> it by 5 as its autovacuum_max_parallel_workers value is still 5.
>

I think this problem could be solved by acquiring AutovacuumLock before
processing the config file, but I understand that holding the lock across a
config reload is a bad idea.

> I think that we can have the autovacuum_max_parallel_workers value on
> shmem, and only the launcher process can modify its value if the GUC
> is changed. Autovacuum workers simply increase or decrease the
> av_freeParallelWorkers within the range of 0 and the
> autovacuum_max_parallel_workers value on shmem. When changing
> autovacuum_max_parallel_workers and av_freeParallelWorkers values on
> shmem, the launcher process calculates the number of workers reserved
> at that time and calculate the new av_freeParallelWorkers value by
> subtracting the new autovacuum_max_parallel_workers by the number of
> reserved workers.
>

Good idea, I agree. Keeping the value in shmem leaves the current logic of
free-worker management unchanged. Essentially, this is the same solution as
I described above, but the lock is held only for a simple value check rather
than across the whole config reload. That makes much more sense.
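To illustrate with your example (a sketch of the launcher-side adjustment;
prev_max stands for the pre-reload value of the GUC): with max = 5 and all
5 workers reserved, free = 0. After a reload to max = 3 the launcher leaves
free at Min(0, 3) = 0 and stores 3 in shmem; when the leader later releases
its 5 workers, free becomes Min(0 + 5, 3) = 3, so nothing is leaked or
double-counted:

    LWLockAcquire(AutovacuumLock, LW_EXCLUSIVE);

    AutoVacuumShmem->av_freeParallelWorkers =
        Min(AutoVacuumShmem->av_freeParallelWorkers,
            autovacuum_max_parallel_workers);

    if (autovacuum_max_parallel_workers > prev_max)
        AutoVacuumShmem->av_freeParallelWorkers +=
            autovacuum_max_parallel_workers - prev_max;

    AutoVacuumShmem->av_maxParallelWorkers = autovacuum_max_parallel_workers;

    LWLockRelease(AutovacuumLock);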

> ---
> +AutoVacuumReserveParallelWorkers(int nworkers)
> +{
> +   int         can_launch;
>
> How about renaming it to 'nreserved' or something? can_launch looks
> like it's a boolean variable to indicate whether the process can
> launch workers.
>

There are no objections.

On Fri, Aug 15, 2025 at 3:41 AM Masahiko Sawada <sawada.m...@gmail.com> wrote:
>
> While testing the patch, I found there are other two problems:
>
> 1. when an autovacuum worker who reserved workers fails with an error,
> the reserved workers are not released. I think we need to ensure that
> all reserved workers are surely released at the end of vacuum even
> with an error.
>

Agreed. I'll add a PG_TRY/PG_CATCH block to parallel_vacuum_process_all_indexes
(the only place where we reserve workers).
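The shape of it (a sketch; this mirrors what ended up in v10):

    PG_TRY();
    {
        parallel_vacuum_process_all_indexes_internal(pvs, num_index_scans,
                                                     vacuum);
    }
    PG_CATCH();
    {
        /* Release everything we reserved before re-throwing. */
        if (AmAutoVacuumWorkerProcess())
            AutoVacuumReleaseAllParallelWorkers();

        PG_RE_THROW();
    }
    PG_END_TRY();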

> 2. when an autovacuum worker (not parallel vacuum worker) who uses
> parallel vacuum gets SIGHUP, it errors out with the error message
> "parameter "max_stack_depth" cannot be set during a parallel
> operation". Autovacuum checks the configuration file reload in
> vacuum_delay_point(), and while reloading the configuration file, it
> attempts to set max_stack_depth in
> InitializeGUCOptionsFromEnvironment() (which is called by
> ProcessConfigFileInternal()). However, it cannot change
> max_stack_depth since the worker is in parallel mode but
> max_stack_depth doesn't have GUC_ALLOW_IN_PARALLEL flag. This doesn't
> happen in regular backends who are using parallel queries because they
> check the configuration file reload at the end of each SQL command.
>

Hm, this is a really serious problem. I see only two ways to solve it (neither
is really good):
1)
Do not allow processing of the config file during parallel autovacuum
execution.
2)
Teach autovacuum to enter parallel mode only during the index vacuum/cleanup
phase. I'm a bit wary about this, because the design says that we should stay
in parallel mode during the whole parallel operation. But if we can make sure
that all launched workers have exited, I don't see why we couldn't simply
exit parallel mode at the end of parallel_vacuum_process_all_indexes (see the
sketch below).
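A sketch of option 2, assuming it is actually safe to scope parallel mode
this narrowly (EnterParallelMode()/ExitParallelMode() are the existing core
primitives; this is not part of v10):

    EnterParallelMode();

    parallel_vacuum_process_all_indexes(pvs, num_index_scans, vacuum);

    /* All launched workers are guaranteed to have exited by here. */
    ExitParallelMode();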

What do you think? For now, I haven't made any changes related to this
problem.

Again, thank you for the review. Please see the v10 patches (only 0001 has
changed):
1) Reserve and release workers only inside parallel_vacuum_process_all_indexes.
2) Add a PG_TRY/PG_CATCH block to parallel_vacuum_process_all_indexes, so we
can release workers even after an error. This required adding a static
variable to track the total number of reserved workers (av_nworkers_reserved).
3) Cap autovacuum_max_parallel_workers by max_worker_processes only inside the
autovacuum code. The assign hook has been removed.
4) Use the shmem value to determine the maximum number of parallel autovacuum
workers (eliminating the race condition between the launcher and leader
processes).

--
Best regards,
Daniil Davydov
From e991e071d4798e8c2ec576389f5a8592fe76282b Mon Sep 17 00:00:00 2001
From: Daniil Davidov <d.davy...@postgrespro.ru>
Date: Mon, 18 Aug 2025 15:14:25 +0700
Subject: [PATCH v10 2/3] Logging for parallel autovacuum

---
 src/backend/access/heap/vacuumlazy.c  | 27 ++++++++++++++++++++++++--
 src/backend/commands/vacuumparallel.c | 28 ++++++++++++++++++---------
 src/include/commands/vacuum.h         | 16 +++++++++++++--
 3 files changed, 58 insertions(+), 13 deletions(-)

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 14036c27e87..f1a645e79a9 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -348,6 +348,12 @@ typedef struct LVRelState
 
 	/* Instrumentation counters */
 	int			num_index_scans;
+
+	/*
+	 * Totals of planned and actually launched parallel workers over all
+	 * index scans, or NULL if these statistics are not being accumulated
+	 */
+	PVWorkersUsage *workers_usage;
 	/* Counters that follow are only for scanned_pages */
 	int64		tuples_deleted; /* # deleted from table */
 	int64		tuples_frozen;	/* # newly frozen */
@@ -688,6 +694,16 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
 		indnames = palloc(sizeof(char *) * vacrel->nindexes);
 		for (int i = 0; i < vacrel->nindexes; i++)
 			indnames[i] = pstrdup(RelationGetRelationName(vacrel->indrels[i]));
+
+		/*
+		 * Allocate space for worker usage statistics; a non-NULL pointer
+		 * indicates that such statistics must be accumulated. For now, only
+		 * the autovacuum leader worker uses this, because it must log the
+		 * totals at the end of table processing.
+		 */
+		vacrel->workers_usage = AmAutoVacuumWorkerProcess() ?
+			(PVWorkersUsage *) palloc0(sizeof(PVWorkersUsage)) :
+			NULL;
 	}
 
 	/*
@@ -1012,6 +1028,11 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
 							 vacrel->relnamespace,
 							 vacrel->relname,
 							 vacrel->num_index_scans);
+			if (vacrel->workers_usage)
+				appendStringInfo(&buf,
+								 _("workers usage statistics for all index scans: launched in total = %d, planned in total = %d\n"),
+								 vacrel->workers_usage->nlaunched,
+								 vacrel->workers_usage->nplanned);
 			appendStringInfo(&buf, _("pages: %u removed, %u remain, %u scanned (%.2f%% of total), %u eagerly scanned\n"),
 							 vacrel->removed_pages,
 							 new_rel_pages,
@@ -2634,7 +2655,8 @@ lazy_vacuum_all_indexes(LVRelState *vacrel)
 	{
 		/* Outsource everything to parallel variant */
 		parallel_vacuum_bulkdel_all_indexes(vacrel->pvs, old_live_tuples,
-											vacrel->num_index_scans);
+											vacrel->num_index_scans,
+											vacrel->workers_usage);
 
 		/*
 		 * Do a postcheck to consider applying wraparound failsafe now.  Note
@@ -3047,7 +3069,8 @@ lazy_cleanup_all_indexes(LVRelState *vacrel)
 		/* Outsource everything to parallel variant */
 		parallel_vacuum_cleanup_all_indexes(vacrel->pvs, reltuples,
 											vacrel->num_index_scans,
-											estimated_count);
+											estimated_count,
+											vacrel->workers_usage);
 	}
 
 	/* Reset the progress counters */
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 4221e6084f5..02870ed1288 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -227,9 +227,10 @@ struct ParallelVacuumState
 static int	parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 											bool *will_parallel_vacuum);
 static void parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scans,
-												bool vacuum);
+												bool vacuum, PVWorkersUsage * wusage);
 static void parallel_vacuum_process_all_indexes_internal(ParallelVacuumState *pvs,
-														 int num_index_scans, bool vacuum);
+														 int num_index_scans, bool vacuum,
+														 PVWorkersUsage * wusage);
 static void parallel_vacuum_process_safe_indexes(ParallelVacuumState *pvs);
 static void parallel_vacuum_process_unsafe_indexes(ParallelVacuumState *pvs);
 static void parallel_vacuum_process_one_index(ParallelVacuumState *pvs, Relation indrel,
@@ -504,7 +505,7 @@ parallel_vacuum_reset_dead_items(ParallelVacuumState *pvs)
  */
 void
 parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs, long num_table_tuples,
-									int num_index_scans)
+									int num_index_scans, PVWorkersUsage * wusage)
 {
 	Assert(!IsParallelWorker());
 
@@ -515,7 +516,7 @@ parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs, long num_table_tup
 	pvs->shared->reltuples = num_table_tuples;
 	pvs->shared->estimated_count = true;
 
-	parallel_vacuum_process_all_indexes(pvs, num_index_scans, true);
+	parallel_vacuum_process_all_indexes(pvs, num_index_scans, true, wusage);
 }
 
 /*
@@ -523,7 +524,8 @@ parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs, long num_table_tup
  */
 void
 parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs, long num_table_tuples,
-									int num_index_scans, bool estimated_count)
+									int num_index_scans, bool estimated_count,
+									PVWorkersUsage * wusage)
 {
 	Assert(!IsParallelWorker());
 
@@ -535,7 +537,7 @@ parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs, long num_table_tup
 	pvs->shared->reltuples = num_table_tuples;
 	pvs->shared->estimated_count = estimated_count;
 
-	parallel_vacuum_process_all_indexes(pvs, num_index_scans, false);
+	parallel_vacuum_process_all_indexes(pvs, num_index_scans, false, wusage);
 }
 
 /*
@@ -620,7 +622,7 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
  */
 static void
 parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scans,
-									bool vacuum)
+									bool vacuum, PVWorkersUsage * wusage)
 {
 	/*
 	 * Parallel autovacuum can reserve parallel workers. Use try/catch block
@@ -629,7 +631,7 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 	PG_TRY();
 	{
 		parallel_vacuum_process_all_indexes_internal(pvs, num_index_scans,
-													 vacuum);
+													 vacuum, wusage);
 	}
 	PG_CATCH();
 	{
@@ -644,7 +646,8 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 
 static void
 parallel_vacuum_process_all_indexes_internal(ParallelVacuumState *pvs,
-											 int num_index_scans, bool vacuum)
+											 int num_index_scans, bool vacuum,
+											 PVWorkersUsage * wusage)
 {
 	int			nworkers;
 	PVIndVacStatus new_status;
@@ -768,6 +771,13 @@ parallel_vacuum_process_all_indexes_internal(ParallelVacuumState *pvs,
 									 "launched %d parallel vacuum workers for index cleanup (planned: %d)",
 									 pvs->pcxt->nworkers_launched),
 							pvs->pcxt->nworkers_launched, nworkers)));
+
+		/* Remember these values, if we were asked to. */
+		if (wusage != NULL)
+		{
+			wusage->nlaunched += pvs->pcxt->nworkers_launched;
+			wusage->nplanned += nworkers;
+		}
 	}
 
 	/* Vacuum the indexes that can be processed by only leader process */
diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h
index 14eeccbd718..d05ef7461ea 100644
--- a/src/include/commands/vacuum.h
+++ b/src/include/commands/vacuum.h
@@ -295,6 +295,16 @@ typedef struct VacDeadItemsInfo
 	int64		num_items;		/* current # of entries */
 } VacDeadItemsInfo;
 
+/*
+ * PVWorkersUsage stores the total numbers of launched and planned parallel
+ * workers during a parallel vacuum.
+ */
+typedef struct PVWorkersUsage
+{
+	int			nlaunched;
+	int			nplanned;
+}			PVWorkersUsage;
+
 /* GUC parameters */
 extern PGDLLIMPORT int default_statistics_target;	/* PGDLLIMPORT for PostGIS */
 extern PGDLLIMPORT int vacuum_freeze_min_age;
@@ -389,11 +399,13 @@ extern TidStore *parallel_vacuum_get_dead_items(ParallelVacuumState *pvs,
 extern void parallel_vacuum_reset_dead_items(ParallelVacuumState *pvs);
 extern void parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs,
 												long num_table_tuples,
-												int num_index_scans);
+												int num_index_scans,
+												PVWorkersUsage * wusage);
 extern void parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs,
 												long num_table_tuples,
 												int num_index_scans,
-												bool estimated_count);
+												bool estimated_count,
+												PVWorkersUsage * wusage);
 extern void parallel_vacuum_main(dsm_segment *seg, shm_toc *toc);
 
 /* in commands/analyze.c */
-- 
2.43.0

From 62abb120d888a837e50bb55ba26ba740caad8f7a Mon Sep 17 00:00:00 2001
From: Daniil Davidov <d.davy...@postgrespro.ru>
Date: Tue, 22 Jul 2025 12:31:20 +0700
Subject: [PATCH v10 3/3] Documentation for parallel autovacuum

---
 doc/src/sgml/config.sgml           | 18 ++++++++++++++++++
 doc/src/sgml/maintenance.sgml      | 12 ++++++++++++
 doc/src/sgml/ref/create_table.sgml | 20 ++++++++++++++++++++
 3 files changed, 50 insertions(+)

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 20ccb2d6b54..b74053281de 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -2835,6 +2835,7 @@ include_dir 'conf.d'
         <para>
          When changing this value, consider also adjusting
          <xref linkend="guc-max-parallel-workers"/>,
+         <xref linkend="guc-autovacuum-max-parallel-workers"/>,
          <xref linkend="guc-max-parallel-maintenance-workers"/>, and
          <xref linkend="guc-max-parallel-workers-per-gather"/>.
         </para>
@@ -9189,6 +9190,23 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
        </listitem>
       </varlistentry>
 
+      <varlistentry id="guc-autovacuum-max-parallel-workers" xreflabel="autovacuum_max_parallel_workers">
+        <term><varname>autovacuum_max_parallel_workers</varname> (<type>integer</type>)
+        <indexterm>
+         <primary><varname>autovacuum_max_parallel_workers</varname></primary>
+         <secondary>configuration parameter</secondary>
+        </indexterm>
+        </term>
+        <listitem>
+         <para>
+          Sets the maximum number of parallel autovacuum workers that
+          can be used for parallel index vacuuming at one time. The value
+          is capped by <xref linkend="guc-max-worker-processes"/>. The
+          default is 0, which disables parallel index vacuuming.
+         </para>
+        </listitem>
+     </varlistentry>
+
      </variablelist>
     </sect2>
 
diff --git a/doc/src/sgml/maintenance.sgml b/doc/src/sgml/maintenance.sgml
index e7a9f58c015..4e450ba9066 100644
--- a/doc/src/sgml/maintenance.sgml
+++ b/doc/src/sgml/maintenance.sgml
@@ -896,6 +896,18 @@ HINT:  Execute a database-wide VACUUM in that database.
     autovacuum workers' activity.
    </para>
 
+   <para>
+    If an autovacuum worker process encounters a table with the
+    <xref linkend="reloption-autovacuum-parallel-workers"/> storage parameter
+    enabled, it will launch parallel workers in order to vacuum the indexes
+    of this table in parallel. Parallel workers are taken from the pool of
+    processes established by <xref linkend="guc-max-worker-processes"/>,
+    limited by <xref linkend="guc-max-parallel-workers"/>.
+    The total number of parallel autovacuum workers that can be active at one
+    time is limited by the <xref linkend="guc-autovacuum-max-parallel-workers"/>
+    configuration parameter.
+   </para>
+
    <para>
     If several large tables all become eligible for vacuuming in a short
     amount of time, all autovacuum workers might become occupied with
diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml
index dc000e913c1..288de6b0ffd 100644
--- a/doc/src/sgml/ref/create_table.sgml
+++ b/doc/src/sgml/ref/create_table.sgml
@@ -1717,6 +1717,26 @@ WITH ( MODULUS <replaceable class="parameter">numeric_literal</replaceable>, REM
     </listitem>
    </varlistentry>
 
+  <varlistentry id="reloption-autovacuum-parallel-workers" xreflabel="autovacuum_parallel_workers">
+    <term><literal>autovacuum_parallel_workers</literal> (<type>integer</type>)
+    <indexterm>
+     <primary><varname>autovacuum_parallel_workers</varname> storage parameter</primary>
+    </indexterm>
+    </term>
+    <listitem>
+     <para>
+      Sets the maximum number of parallel autovacuum workers that can process
+      the indexes of this table.
+      The default value is -1, which means no parallel index vacuuming for
+      this table. If the value is 0, the parallel degree is computed based on
+      the number of indexes.
+      Note that the computed number of workers may not actually be available
+      at run time. If this occurs, autovacuum will run with fewer workers
+      than expected.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="reloption-autovacuum-vacuum-threshold" xreflabel="autovacuum_vacuum_threshold">
     <term><literal>autovacuum_vacuum_threshold</literal>, <literal>toast.autovacuum_vacuum_threshold</literal> (<type>integer</type>)
     <indexterm>
-- 
2.43.0

From a470d95603b437ef5aa45470ad7be61f03682493 Mon Sep 17 00:00:00 2001
From: Daniil Davidov <d.davy...@postgrespro.ru>
Date: Sun, 20 Jul 2025 23:03:57 +0700
Subject: [PATCH v10 1/3] Parallel index autovacuum

---
 src/backend/access/common/reloptions.c        |  11 ++
 src/backend/commands/vacuumparallel.c         |  68 ++++++++-
 src/backend/postmaster/autovacuum.c           | 144 +++++++++++++++++-
 src/backend/utils/init/globals.c              |   1 +
 src/backend/utils/misc/guc_tables.c           |  10 ++
 src/backend/utils/misc/postgresql.conf.sample |   1 +
 src/bin/psql/tab-complete.in.c                |   1 +
 src/include/miscadmin.h                       |   1 +
 src/include/postmaster/autovacuum.h           |   5 +
 src/include/utils/rel.h                       |   7 +
 10 files changed, 241 insertions(+), 8 deletions(-)

diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index 0af3fea68fa..1c98d43c6eb 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -222,6 +222,15 @@ static relopt_int intRelOpts[] =
 		},
 		SPGIST_DEFAULT_FILLFACTOR, SPGIST_MIN_FILLFACTOR, 100
 	},
+	{
+		{
+			"autovacuum_parallel_workers",
+			"Maximum number of parallel autovacuum workers that can be used for processing this table.",
+			RELOPT_KIND_HEAP,
+			ShareUpdateExclusiveLock
+		},
+		-1, -1, 1024
+	},
 	{
 		{
 			"autovacuum_vacuum_threshold",
@@ -1872,6 +1881,8 @@ default_reloptions(Datum reloptions, bool validate, relopt_kind kind)
 		{"fillfactor", RELOPT_TYPE_INT, offsetof(StdRdOptions, fillfactor)},
 		{"autovacuum_enabled", RELOPT_TYPE_BOOL,
 		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, enabled)},
+		{"autovacuum_parallel_workers", RELOPT_TYPE_INT,
+		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, autovacuum_parallel_workers)},
 		{"autovacuum_vacuum_threshold", RELOPT_TYPE_INT,
 		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_threshold)},
 		{"autovacuum_vacuum_max_threshold", RELOPT_TYPE_INT,
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 0feea1d30ec..4221e6084f5 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -1,7 +1,9 @@
 /*-------------------------------------------------------------------------
  *
  * vacuumparallel.c
- *	  Support routines for parallel vacuum execution.
+ *	  Support routines for parallel vacuum and autovacuum execution. In the
+ *	  comments below, the word "vacuum" refers to both vacuum and
+ *	  autovacuum.
  *
  * This file contains routines that are intended to support setting up, using,
  * and tearing down a ParallelVacuumState.
@@ -34,6 +36,7 @@
 #include "executor/instrument.h"
 #include "optimizer/paths.h"
 #include "pgstat.h"
+#include "postmaster/autovacuum.h"
 #include "storage/bufmgr.h"
 #include "tcop/tcopprot.h"
 #include "utils/lsyscache.h"
@@ -225,6 +228,8 @@ static int	parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int
 											bool *will_parallel_vacuum);
 static void parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scans,
 												bool vacuum);
+static void parallel_vacuum_process_all_indexes_internal(ParallelVacuumState *pvs,
+														 int num_index_scans, bool vacuum);
 static void parallel_vacuum_process_safe_indexes(ParallelVacuumState *pvs);
 static void parallel_vacuum_process_unsafe_indexes(ParallelVacuumState *pvs);
 static void parallel_vacuum_process_one_index(ParallelVacuumState *pvs, Relation indrel,
@@ -373,8 +378,9 @@ parallel_vacuum_init(Relation rel, Relation *indrels, int nindexes,
 	shared->queryid = pgstat_get_my_query_id();
 	shared->maintenance_work_mem_worker =
 		(nindexes_mwm > 0) ?
-		maintenance_work_mem / Min(parallel_workers, nindexes_mwm) :
-		maintenance_work_mem;
+		vac_work_mem / Min(parallel_workers, nindexes_mwm) :
+		vac_work_mem;
+
 	shared->dead_items_info.max_bytes = vac_work_mem * (size_t) 1024;
 
 	/* Prepare DSA space for dead items */
@@ -553,12 +559,17 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 	int			nindexes_parallel_bulkdel = 0;
 	int			nindexes_parallel_cleanup = 0;
 	int			parallel_workers;
+	int			max_workers;
+
+	max_workers = AmAutoVacuumWorkerProcess() ?
+		autovacuum_max_parallel_workers :
+		max_parallel_maintenance_workers;
 
 	/*
 	 * We don't allow performing parallel operation in standalone backend or
 	 * when parallelism is disabled.
 	 */
-	if (!IsUnderPostmaster || max_parallel_maintenance_workers == 0)
+	if (!IsUnderPostmaster || max_workers == 0)
 		return 0;
 
 	/*
@@ -597,8 +608,8 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 	parallel_workers = (nrequested > 0) ?
 		Min(nrequested, nindexes_parallel) : nindexes_parallel;
 
-	/* Cap by max_parallel_maintenance_workers */
-	parallel_workers = Min(parallel_workers, max_parallel_maintenance_workers);
+	/* Cap by GUC variable */
+	parallel_workers = Min(parallel_workers, max_workers);
 
 	return parallel_workers;
 }
@@ -610,6 +621,30 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 static void
 parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scans,
 									bool vacuum)
+{
+	/*
+	 * Parallel autovacuum can reserve parallel workers. Use try/catch block
+	 * to ensure that all reserved workers are released.
+	 */
+	PG_TRY();
+	{
+		parallel_vacuum_process_all_indexes_internal(pvs, num_index_scans,
+													 vacuum);
+	}
+	PG_CATCH();
+	{
+		/* Release all reserved parallel workers, if any. */
+		if (AmAutoVacuumWorkerProcess())
+			AutoVacuumReleaseAllParallelWorkers();
+
+		PG_RE_THROW();
+	}
+	PG_END_TRY();
+}
+
+static void
+parallel_vacuum_process_all_indexes_internal(ParallelVacuumState *pvs,
+											 int num_index_scans, bool vacuum)
 {
 	int			nworkers;
 	PVIndVacStatus new_status;
@@ -646,6 +681,13 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 	 */
 	nworkers = Min(nworkers, pvs->pcxt->nworkers);
 
+	/*
+	 * Reserve workers in autovacuum global state. Note that we may be given
+	 * fewer workers than we requested.
+	 */
+	if (AmAutoVacuumWorkerProcess() && nworkers > 0)
+		nworkers = AutoVacuumReserveParallelWorkers(nworkers);
+
 	/*
 	 * Set index vacuum status and mark whether parallel vacuum worker can
 	 * process it.
@@ -690,6 +732,16 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 
 		LaunchParallelWorkers(pvs->pcxt);
 
+		if (AmAutoVacuumWorkerProcess() &&
+			pvs->pcxt->nworkers_launched < nworkers)
+		{
+			/*
+			 * Tell autovacuum that we could not launch all the previously
+			 * reserved workers.
+			 */
+			AutoVacuumReleaseParallelWorkers(nworkers - pvs->pcxt->nworkers_launched);
+		}
+
 		if (pvs->pcxt->nworkers_launched > 0)
 		{
 			/*
@@ -738,6 +790,10 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 
 		for (int i = 0; i < pvs->pcxt->nworkers_launched; i++)
 			InstrAccumParallelQuery(&pvs->buffer_usage[i], &pvs->wal_usage[i]);
+
+		/* Also release all previously reserved parallel autovacuum workers */
+		if (AmAutoVacuumWorkerProcess() && pvs->pcxt->nworkers_launched > 0)
+			AutoVacuumReleaseParallelWorkers(pvs->pcxt->nworkers_launched);
 	}
 
 	/*
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index ff96b36d710..78ceac67319 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -151,6 +151,12 @@ int			Log_autovacuum_min_duration = 600000;
 static double av_storage_param_cost_delay = -1;
 static int	av_storage_param_cost_limit = -1;
 
+/*
+ * Number of parallel autovacuum workers currently reserved by this process.
+ * It is only relevant for the parallel autovacuum leader process.
+ */
+static int	av_nworkers_reserved = 0;
+
 /* Flags set by signal handlers */
 static volatile sig_atomic_t got_SIGUSR2 = false;
 
@@ -285,6 +291,8 @@ typedef struct AutoVacuumWorkItem
  * av_workItems		work item array
  * av_nworkersForBalance the number of autovacuum workers to use when
  * 					calculating the per worker cost limit
+ * av_freeParallelWorkers the number of free parallel autovacuum workers
+ * av_maxParallelWorkers the maximum number of parallel autovacuum workers
  *
  * This struct is protected by AutovacuumLock, except for av_signal and parts
  * of the worker list (see above).
@@ -299,6 +307,8 @@ typedef struct
 	WorkerInfo	av_startingWorker;
 	AutoVacuumWorkItem av_workItems[NUM_WORKITEMS];
 	pg_atomic_uint32 av_nworkersForBalance;
+	uint32		av_freeParallelWorkers;
+	uint32		av_maxParallelWorkers;
 } AutoVacuumShmemStruct;
 
 static AutoVacuumShmemStruct *AutoVacuumShmem;
@@ -364,6 +374,7 @@ static void autovac_report_workitem(AutoVacuumWorkItem *workitem,
 static void avl_sigusr2_handler(SIGNAL_ARGS);
 static bool av_worker_available(void);
 static void check_av_worker_gucs(void);
+static void adjust_free_parallel_workers(int prev_max_parallel_workers);
 
 
 
@@ -763,6 +774,8 @@ ProcessAutoVacLauncherInterrupts(void)
 	if (ConfigReloadPending)
 	{
 		int			autovacuum_max_workers_prev = autovacuum_max_workers;
+		int			autovacuum_max_parallel_workers_prev =
+			autovacuum_max_parallel_workers;
 
 		ConfigReloadPending = false;
 		ProcessConfigFile(PGC_SIGHUP);
@@ -779,6 +792,15 @@ ProcessAutoVacLauncherInterrupts(void)
 		if (autovacuum_max_workers_prev != autovacuum_max_workers)
 			check_av_worker_gucs();
 
+		/*
+		 * If autovacuum_max_parallel_workers changed, we must adjust the
+		 * number of available parallel autovacuum workers in shmem
+		 * accordingly.
+		 */
+		if (autovacuum_max_parallel_workers_prev !=
+			autovacuum_max_parallel_workers)
+			adjust_free_parallel_workers(autovacuum_max_parallel_workers_prev);
+
 		/* rebuild the list in case the naptime changed */
 		rebuild_database_list(InvalidOid);
 	}
@@ -2871,8 +2893,12 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 		 */
 		tab->at_params.index_cleanup = VACOPTVALUE_UNSPECIFIED;
 		tab->at_params.truncate = VACOPTVALUE_UNSPECIFIED;
-		/* As of now, we don't support parallel vacuum for autovacuum */
-		tab->at_params.nworkers = -1;
+
+		/* Decide whether we need to process indexes of table in parallel. */
+		tab->at_params.nworkers = avopts
+			? avopts->autovacuum_parallel_workers
+			: -1;
+
 		tab->at_params.freeze_min_age = freeze_min_age;
 		tab->at_params.freeze_table_age = freeze_table_age;
 		tab->at_params.multixact_freeze_min_age = multixact_freeze_min_age;
@@ -3353,6 +3379,85 @@ AutoVacuumRequestWork(AutoVacuumWorkItemType type, Oid relationId,
 	return result;
 }
 
+/*
+ * In order to meet the 'autovacuum_max_parallel_workers' limit, the leader
+ * autovacuum process must call this function. It returns the number of
+ * parallel workers that can actually be launched and reserves these workers
+ * (if any) in the global autovacuum state.
+ *
+ * NOTE: We will try to provide as many workers as requested, even if the
+ * caller would then occupy all available workers.
+ */
+int
+AutoVacuumReserveParallelWorkers(int nworkers)
+{
+	int			nreserved;
+
+	/* Only leader worker can call this function. */
+	Assert(AmAutoVacuumWorkerProcess() && !IsParallelWorker());
+
+	/*
+	 * We can only reserve workers at the beginning of parallel index
+	 * processing, so we must not have any reserved workers right now.
+	 */
+	Assert(av_nworkers_reserved == 0);
+
+	LWLockAcquire(AutovacuumLock, LW_EXCLUSIVE);
+
+	/* Provide as many workers as we can. */
+	nreserved = Min(AutoVacuumShmem->av_freeParallelWorkers, nworkers);
+	AutoVacuumShmem->av_freeParallelWorkers -= nreserved;
+
+	/* Remember how many workers we have reserved. */
+	av_nworkers_reserved += nreserved;
+
+	LWLockRelease(AutovacuumLock);
+	return nreserved;
+}
+
+/*
+ * The leader autovacuum process must call this function in order to update
+ * the global autovacuum state, so that other leaders can use these parallel
+ * workers.
+ *
+ * 'nworkers' - how many workers the caller wants to release.
+ */
+void
+AutoVacuumReleaseParallelWorkers(int nworkers)
+{
+	/* Only leader worker can call this function. */
+	Assert(AmAutoVacuumWorkerProcess() && !IsParallelWorker());
+
+	LWLockAcquire(AutovacuumLock, LW_EXCLUSIVE);
+
+	/*
+	 * If the maximum number of parallel workers was reduced during execution,
+	 * we must cap the number of available workers at its new value.
+	 */
+	AutoVacuumShmem->av_freeParallelWorkers =
+		Min(AutoVacuumShmem->av_freeParallelWorkers + nworkers,
+			AutoVacuumShmem->av_maxParallelWorkers);
+
+	/* Don't have to remember these workers anymore. */
+	av_nworkers_reserved -= nworkers;
+
+	LWLockRelease(AutovacuumLock);
+}
+
+/*
+ * Same as above, but release *all* parallel workers that were reserved by
+ * the current leader autovacuum process.
+ */
+void
+AutoVacuumReleaseAllParallelWorkers(void)
+{
+	/* Only leader worker can call this function. */
+	Assert(AmAutoVacuumWorkerProcess() && !IsParallelWorker());
+
+	if (av_nworkers_reserved > 0)
+		AutoVacuumReleaseParallelWorkers(av_nworkers_reserved);
+}
+
 /*
  * autovac_init
  *		This is called at postmaster initialization.
@@ -3413,6 +3518,10 @@ AutoVacuumShmemInit(void)
 		Assert(!found);
 
 		AutoVacuumShmem->av_launcherpid = 0;
+		AutoVacuumShmem->av_maxParallelWorkers =
+			Min(autovacuum_max_parallel_workers, max_worker_processes);
+		AutoVacuumShmem->av_freeParallelWorkers =
+			AutoVacuumShmem->av_maxParallelWorkers;
 		dclist_init(&AutoVacuumShmem->av_freeWorkers);
 		dlist_init(&AutoVacuumShmem->av_runningWorkers);
 		AutoVacuumShmem->av_startingWorker = NULL;
@@ -3494,3 +3603,34 @@ check_av_worker_gucs(void)
 				 errdetail("The server will only start up to \"autovacuum_worker_slots\" (%d) autovacuum workers at a given time.",
 						   autovacuum_worker_slots)));
 }
+
+/*
+ * Make sure that the number of free parallel workers corresponds to the
+ * autovacuum_max_parallel_workers parameter (after it has changed).
+ */
+static void
+adjust_free_parallel_workers(int prev_max_parallel_workers)
+{
+	LWLockAcquire(AutovacuumLock, LW_EXCLUSIVE);
+
+	/*
+	 * Cap the number of free workers at the new parameter value, if needed.
+	 */
+	AutoVacuumShmem->av_freeParallelWorkers =
+		Min(AutoVacuumShmem->av_freeParallelWorkers,
+			autovacuum_max_parallel_workers);
+
+	if (autovacuum_max_parallel_workers > prev_max_parallel_workers)
+	{
+		/*
+		 * If the user has increased the number of parallel autovacuum
+		 * workers, we must add the difference to the number of free workers.
+		 */
+		AutoVacuumShmem->av_freeParallelWorkers +=
+			(autovacuum_max_parallel_workers - prev_max_parallel_workers);
+	}
+
+	AutoVacuumShmem->av_maxParallelWorkers = autovacuum_max_parallel_workers;
+
+	LWLockRelease(AutovacuumLock);
+}
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index d31cb45a058..fd00d6f89dc 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -143,6 +143,7 @@ int			NBuffers = 16384;
 int			MaxConnections = 100;
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
+int			autovacuum_max_parallel_workers = 0;
 int			MaxBackends = 0;
 
 /* GUC parameters for vacuum */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index d14b1678e7f..9ecb14227e5 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3604,6 +3604,16 @@ struct config_int ConfigureNamesInt[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"autovacuum_max_parallel_workers", PGC_SIGHUP, VACUUM_AUTOVACUUM,
+			gettext_noop("Maximum number of parallel autovacuum workers that can be taken from the bgworkers pool."),
+			gettext_noop("This parameter is capped by \"max_worker_processes\" (not by \"autovacuum_max_workers\")."),
+		},
+		&autovacuum_max_parallel_workers,
+		0, 0, MAX_BACKENDS,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"max_parallel_maintenance_workers", PGC_USERSET, RESOURCES_WORKER_PROCESSES,
 			gettext_noop("Sets the maximum number of parallel processes per maintenance operation."),
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index a9d8293474a..bbf5307000f 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -683,6 +683,7 @@
 autovacuum_worker_slots = 16	# autovacuum worker slots to allocate
 					# (change requires restart)
 #autovacuum_max_workers = 3		# max number of autovacuum subprocesses
+#autovacuum_max_parallel_workers = 0	# disabled by default and limited by max_worker_processes
 #autovacuum_naptime = 1min		# time between autovacuum runs
 #autovacuum_vacuum_threshold = 50	# min number of row updates before
 					# vacuum
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 8b10f2313f3..290dd5cb8ec 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -1402,6 +1402,7 @@ static const char *const table_storage_parameters[] = {
 	"autovacuum_multixact_freeze_max_age",
 	"autovacuum_multixact_freeze_min_age",
 	"autovacuum_multixact_freeze_table_age",
+	"autovacuum_parallel_workers",
 	"autovacuum_vacuum_cost_delay",
 	"autovacuum_vacuum_cost_limit",
 	"autovacuum_vacuum_insert_scale_factor",
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 1bef98471c3..85926415657 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -177,6 +177,7 @@ extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
+extern PGDLLIMPORT int autovacuum_max_parallel_workers;
 
 extern PGDLLIMPORT int commit_timestamp_buffers;
 extern PGDLLIMPORT int multixact_member_buffers;
diff --git a/src/include/postmaster/autovacuum.h b/src/include/postmaster/autovacuum.h
index e8135f41a1c..904c5ce37d8 100644
--- a/src/include/postmaster/autovacuum.h
+++ b/src/include/postmaster/autovacuum.h
@@ -64,6 +64,11 @@ pg_noreturn extern void AutoVacWorkerMain(const void *startup_data, size_t start
 extern bool AutoVacuumRequestWork(AutoVacuumWorkItemType type,
 								  Oid relationId, BlockNumber blkno);
 
+/* parallel autovacuum stuff */
+extern int	AutoVacuumReserveParallelWorkers(int nworkers);
+extern void AutoVacuumReleaseParallelWorkers(int nworkers);
+extern void AutoVacuumReleaseAllParallelWorkers(void);
+
 /* shared memory stuff */
 extern Size AutoVacuumShmemSize(void);
 extern void AutoVacuumShmemInit(void);
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index b552359915f..edd286808bf 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -311,6 +311,13 @@ typedef struct ForeignKeyCacheInfo
 typedef struct AutoVacOpts
 {
 	bool		enabled;
+
+	/*
+	 * Max number of parallel autovacuum workers. If the value is 0, the
+	 * parallel degree is computed based on the number of indexes.
+	 */
+	int			autovacuum_parallel_workers;
+
 	int			vacuum_threshold;
 	int			vacuum_max_threshold;
 	int			vacuum_ins_threshold;
-- 
2.43.0
