Hi,

On Thu, Mar 26, 2026 at 5:43 AM Masahiko Sawada <[email protected]> wrote:
>
> On Wed, Mar 25, 2026 at 12:45 AM Daniil Davydov <[email protected]> wrote:
> >
> >  Searching for arguments in
> > favor of opt-in style, I asked for help from another person who has been
> > managing the setup of highload systems for decades. He promised to share his
> > opinion next week.
>
> Given that we have one and half weeks before the feature freeze, I
> think it's better to complete the project first before waiting for
> his/her comments next week. Even if we finish this feature with the
> opt-out style, we can hear more opinions on it and change the default
> behavior as the change would be trivial. What do you think?
>

Sure, if we can change the default value after the feature freeze, I don't
mind leaving our parameter in opt-out style for now.

> I've squashed all patches except for the documentation patch as I
> assume you're working on it. The attached fixup patch contains several
> changes: using opt-out style, comment improvements, and fixing typos
> etc.
>

Thank you very much for the proposed fixes!
I like the way you have changed the nparallel_workers calculation (autovacuum.c).
Forcing parallel workers to always read the shared cost params the first time
is a good decision. All comment changes also LGTM.
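For what it's worth, the generation-counter handshake behind that behavior can be sketched like this (my own simplified illustration, not the patch code; `worker_poll_params`, `leader_publish_params`, and the single `cost_delay` field are made up, and the real patch protects the parameter fields with a spinlock):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/*
 * Simplified model of PVSharedCostParams: the leader initializes the
 * generation to 1 while each worker's local copy starts at 0, so a
 * freshly started worker always detects a mismatch on its first poll
 * and loads the initial parameters.
 */
typedef struct
{
	atomic_uint generation;
	double		cost_delay;		/* stand-in for the full parameter set */
} SharedCostParams;

static double	local_cost_delay;
static unsigned local_generation = 0;

/* Worker side: refresh local params only when the generation moved. */
static bool
worker_poll_params(SharedCostParams *shared)
{
	unsigned	gen = atomic_load(&shared->generation);

	if (gen == local_generation)
		return false;			/* leader has not changed anything */

	local_cost_delay = shared->cost_delay;
	local_generation = gen;
	return true;
}

/* Leader side: publish new params, then bump the generation. */
static void
leader_publish_params(SharedCostParams *shared, double new_delay)
{
	shared->cost_delay = new_delay;
	atomic_fetch_add(&shared->generation, 1);
}
```

A worker whose local generation is still 0 always refreshes on its first call, which matches the "always read shared cost params the first time" behavior discussed above.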

The only place I have changed is reloptions.c:
As you explained, it is not appropriate to use the "overrides" wording
in the reloption's description, so I decided to restore the old one.

On Fri, Mar 27, 2026 at 10:54 AM Bharath Rupireddy
<[email protected]> wrote:
>
> Hi,
>
> On Wed, Mar 25, 2026 at 3:43 PM Masahiko Sawada <[email protected]> wrote:
> >
> > Given that we have one and half weeks before the feature freeze, I
> > think it's better to complete the project first before waiting for
> > his/her comments next week. Even if we finish this feature with the
> > opt-out style, we can hear more opinions on it and change the default
> > behavior as the change would be trivial. What do you think?
> >
> > I've squashed all patches except for the documentation patch as I
> > assume you're working on it. The attached fixup patch contains several
> > changes: using opt-out style, comment improvements, and fixing typos
> > etc.
>
> +1 for enabling this feature by default. When enough CPU is available,
> vacuuming multiple indexes of a table in parallel in autovacuum
> definitely speeds things up.

Yes, for sure. But I have concerns that enabling parallel a/v for everyone
may cause a shortage of parallel workers while processing the largest
tables.

> Thank you for sending the latest patches. I quickly reviewed the v31
> patches. Here are some comments.
>
> 1/ +       {"autovacuum_parallel_workers", RELOPT_TYPE_INT,
>
> I haven't looked at the whole thread, but do we all think we need this
> as a relopt? IMHO, we can wait for field experience and introduce this
> later.

I think that we should keep both the reloption and the config parameter.
Getting rid of the reloption would greatly reduce users' ability to
tune this feature. I'm afraid this may lead to people not using parallel
autovacuum.

> I'm having a hard time finding a use-case where one wants to
> disable the indexes at the table level. If there was already an
> agreement, I agree to commit to that decision.

You can read the discussion from [1] up to the current message to dive into
the question.

To make the long story short, I think that the most common use case for this
feature is allowing parallel a/v for 2-3 tables, each of which has ~100
indexes. The rest of the tables do not require parallel processing (at least
it's a much lower priority for them).

At the same time, Masahiko-san thinks that only the system should decide which
tables will be processed in parallel. The system's decision would be based on
the number of indexes and a few other config parameters (e.g.
min_parallel_index_scan_size). Thus, potentially many tables could be
processed in parallel.

(Both opinions are pretty simplified.)
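Combining the two (simplified) policies, the worker-count decision roughly reduces to the following sketch, modeled on table_recheck_autovac() and parallel_vacuum_compute_workers() from the patch; the function name and signature are illustrative, not the actual code:

```c
#include <assert.h>

/*
 * Rough model: a reloption of 0 disables parallel vacuum for the table,
 * a positive reloption caps the parallel degree, and -1 lets the system
 * derive it from the number of indexes eligible for parallel vacuum
 * (those larger than min_parallel_index_scan_size).  The result is then
 * capped by the autovacuum_max_parallel_workers GUC.
 */
static int
compute_av_parallel_workers(int nindexes_parallel, int reloption,
							int autovacuum_max_parallel_workers)
{
	int			nworkers;

	/* Only one worker per index, so parallelism needs at least 2 indexes */
	if (reloption == 0 || nindexes_parallel < 2)
		return 0;

	nworkers = (reloption > 0 && reloption < nindexes_parallel) ?
		reloption : nindexes_parallel;

	/* Cap by the GUC, as max_parallel_maintenance_workers does for VACUUM */
	return (nworkers < autovacuum_max_parallel_workers) ?
		nworkers : autovacuum_max_parallel_workers;
}
```

Under this model, the "2-3 tables with ~100 indexes" use case is served by setting the reloption on just those tables, while the opt-out default (-1) lets the index count drive the degree everywhere else.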

>
> 2/  +   /*
> +    * If 'true' then we are running parallel autovacuum. Otherwise, we are
> +    * running parallel maintenance VACUUM.
> +    */
> +   bool        is_autovacuum;
> +
>
> The variable name looks a bit confusing. How about we rely on
> AmAutoVacuumWorkerProcess() and avoid the bool in shared memory?

This variable is needed for parallel workers, which are taken from the
bgworkers pool, i.e., AmAutoVacuumWorkerProcess() will return 'false' for them.
We need the "is_autovacuum" variable in order to understand exactly what the
process was started for (VACUUM (PARALLEL) or parallel autovacuum).
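In other words, the worker-side branch boils down to something like this trivial sketch (made-up names; the real code in parallel_vacuum_main() picks between reading the leader's shared params and calling VacuumUpdateCosts()):

```c
#include <assert.h>
#include <stdbool.h>

typedef enum
{
	INIT_FROM_SHARED_PARAMS,	/* parallel autovacuum: read leader's DSM */
	INIT_FROM_GUCS				/* manual VACUUM (PARALLEL) */
} CostInitMode;

/*
 * A worker taken from the bgworkers pool cannot rely on
 * AmAutoVacuumWorkerProcess() (it returns false there), so only the
 * is_autovacuum flag the leader wrote into shared memory tells the
 * worker who launched it.
 */
static CostInitMode
choose_cost_init_mode(bool shared_is_autovacuum)
{
	return shared_is_autovacuum ? INIT_FROM_SHARED_PARAMS : INIT_FROM_GUCS;
}
```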


Thanks everyone for the review!
Please see an updated set of patches:
As promised, I created a dedicated chapter for the parallel vacuum
description. Both maintenance VACUUM and autovacuum now refer to this chapter.

I am pretty inexperienced in documentation writing, so forgive me if
something deviates from the usual style.

[1] 
https://www.postgresql.org/message-id/CAJDiXggH1bW%3D4n%2B55CGLvs_sRU4SYNXwYLZ37wvJ5H_3yURSPw%40mail.gmail.com

--
Best regards,
Daniil Davydov
From d13ccaa55862efaad2abda73cf6870dde8ca24d1 Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Sat, 28 Mar 2026 17:37:07 +0700
Subject: [PATCH v32 2/2] Documentation for parallel autovacuum

---
 doc/src/sgml/config.sgml           | 19 +++++++++++++++
 doc/src/sgml/maintenance.sgml      | 38 ++++++++++++++++++++++++++++++
 doc/src/sgml/ref/create_table.sgml | 18 ++++++++++++++
 doc/src/sgml/ref/vacuum.sgml       | 24 ++++---------------
 4 files changed, 79 insertions(+), 20 deletions(-)

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 229f41353eb..cbe73172b2e 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -2918,6 +2918,7 @@ include_dir 'conf.d'
         <para>
          When changing this value, consider also adjusting
          <xref linkend="guc-max-parallel-workers"/>,
+         <xref linkend="guc-autovacuum-max-parallel-workers"/>,
          <xref linkend="guc-max-parallel-maintenance-workers"/>, and
          <xref linkend="guc-max-parallel-workers-per-gather"/>.
         </para>
@@ -9485,6 +9486,24 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
        </listitem>
       </varlistentry>
 
+      <varlistentry id="guc-autovacuum-max-parallel-workers" xreflabel="autovacuum_max_parallel_workers">
+        <term><varname>autovacuum_max_parallel_workers</varname> (<type>integer</type>)
+        <indexterm>
+         <primary><varname>autovacuum_max_parallel_workers</varname></primary>
+         <secondary>configuration parameter</secondary>
+        </indexterm>
+        </term>
+        <listitem>
+         <para>
+          Sets the maximum number of parallel autovacuum workers that
+          can be used for <xref linkend="parallel-vacuum"/>
+          by a single autovacuum worker.  It is capped by
+          <xref linkend="guc-max-parallel-workers"/>.  The default is 0,
+          which disables parallel index vacuuming in autovacuum.
+         </para>
+        </listitem>
+     </varlistentry>
+
      </variablelist>
     </sect2>
 
diff --git a/doc/src/sgml/maintenance.sgml b/doc/src/sgml/maintenance.sgml
index 0d2a28207ed..a1b851ea58f 100644
--- a/doc/src/sgml/maintenance.sgml
+++ b/doc/src/sgml/maintenance.sgml
@@ -927,6 +927,13 @@ HINT:  Execute a database-wide VACUUM in that database.
     autovacuum workers' activity.
    </para>
 
+   <para>
+    If an autovacuum worker process comes across a table whose
+    <xref linkend="reloption-autovacuum-parallel-workers"/> storage parameter
+    enables parallelism, it will launch parallel workers in order to
+    perform <xref linkend="parallel-vacuum"/>.
+   </para>
+
    <para>
     If several large tables all become eligible for vacuuming in a short
     amount of time, all autovacuum workers might become occupied with
@@ -1166,6 +1173,37 @@ analyze threshold = analyze base threshold + analyze scale factor * number of tu
     </para>
    </sect3>
   </sect2>
+
+  <sect2 id="parallel-vacuum" xreflabel="Parallel Vacuum">
+   <title>Parallel Vacuum</title>
+
+   <para>
+    Both the <command>VACUUM</command> command and the autovacuum daemon can
+    perform the index vacuum and index cleanup phases in parallel using
+    multiple background workers (for the details of each vacuum phase,
+    please refer to <xref linkend="vacuum-phases"/>).  The number of workers
+    used to perform the operation is equal to the number of indexes on the
+    relation that support parallel vacuum, which may be further limited by
+    the value specific to <command>VACUUM</command> or autovacuum (see the
+    <literal>PARALLEL</literal> option of <xref linkend="sql-vacuum"/> or
+    <xref linkend="reloption-autovacuum-parallel-workers"/>,
+    respectively).
+   </para>
+
+   <para>
+    An index can participate in parallel vacuum if and only if the size of
+    the index is more than <xref linkend="guc-min-parallel-index-scan-size"/>.
+    Please note that it is not guaranteed that the requested number of
+    parallel workers will be used during execution.  It is possible for a
+    vacuum to run with fewer workers than requested, or even with no workers
+    at all.  Only one worker can be used per index, so parallel workers are
+    launched only when there are at least <literal>2</literal> indexes in
+    the table.  Workers for vacuum are launched before the start of each
+    phase and exit at the end of the phase.  These behaviors might change in
+    a future release.
+   </para>
+
+  </sect2>
  </sect1>
 
 
diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml
index 80829b23945..8f5b25411cd 100644
--- a/doc/src/sgml/ref/create_table.sgml
+++ b/doc/src/sgml/ref/create_table.sgml
@@ -1738,6 +1738,24 @@ WITH ( MODULUS <replaceable class="parameter">numeric_literal</replaceable>, REM
     </listitem>
    </varlistentry>
 
+  <varlistentry id="reloption-autovacuum-parallel-workers" xreflabel="autovacuum_parallel_workers">
+    <term><literal>autovacuum_parallel_workers</literal> (<type>integer</type>)
+    <indexterm>
+     <primary><varname>autovacuum_parallel_workers</varname> storage parameter</primary>
+    </indexterm>
+    </term>
+    <listitem>
+     <para>
+      Sets the maximum number of parallel autovacuum workers that can be used
+      for <xref linkend="parallel-vacuum"/> on this table.
+      A value of 0 disables parallel index vacuuming for this table.  The
+      default value is -1, in which case the parallel degree is computed
+      based on the number of indexes and limited by the
+      <xref linkend="guc-autovacuum-max-parallel-workers"/> configuration parameter.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="reloption-autovacuum-vacuum-threshold" xreflabel="autovacuum_vacuum_threshold">
     <term><literal>autovacuum_vacuum_threshold</literal>, <literal>toast.autovacuum_vacuum_threshold</literal> (<type>integer</type>)
     <indexterm>
diff --git a/doc/src/sgml/ref/vacuum.sgml b/doc/src/sgml/ref/vacuum.sgml
index ac5d083d468..d3f32851386 100644
--- a/doc/src/sgml/ref/vacuum.sgml
+++ b/doc/src/sgml/ref/vacuum.sgml
@@ -81,7 +81,7 @@ VACUUM [ ( <replaceable class="parameter">option</replaceable> [, ...] ) ] [ <re
    is not obtained.  However, extra space is not returned to the operating
    system (in most cases); it's just kept available for re-use within the
    same table.  It also allows us to leverage multiple CPUs in order to process
-   indexes.  This feature is known as <firstterm>parallel vacuum</firstterm>.
+   indexes.  This feature is described in <xref linkend="parallel-vacuum"/>.
    To disable this feature, one can use <literal>PARALLEL</literal> option and
    specify parallel workers as zero.  <command>VACUUM FULL</command> rewrites
    the entire contents of the table into a new disk file with no extra space,
@@ -266,25 +266,9 @@ VACUUM [ ( <replaceable class="parameter">option</replaceable> [, ...] ) ] [ <re
     <term><literal>PARALLEL</literal></term>
     <listitem>
      <para>
-      Perform index vacuum and index cleanup phases of <command>VACUUM</command>
-      in parallel using <replaceable class="parameter">integer</replaceable>
-      background workers (for the details of each vacuum phase, please
-      refer to <xref linkend="vacuum-phases"/>).  The number of workers used
-      to perform the operation is equal to the number of indexes on the
-      relation that support parallel vacuum which is limited by the number of
-      workers specified with <literal>PARALLEL</literal> option if any which is
-      further limited by <xref linkend="guc-max-parallel-maintenance-workers"/>.
-      An index can participate in parallel vacuum if and only if the size of the
-      index is more than <xref linkend="guc-min-parallel-index-scan-size"/>.
-      Please note that it is not guaranteed that the number of parallel workers
-      specified in <replaceable class="parameter">integer</replaceable> will be
-      used during execution.  It is possible for a vacuum to run with fewer
-      workers than specified, or even with no workers at all.  Only one worker
-      can be used per index.  So parallel workers are launched only when there
-      are at least <literal>2</literal> indexes in the table.  Workers for
-      vacuum are launched before the start of each phase and exit at the end of
-      the phase.  These behaviors might change in a future release.  This
-      option can't be used with the <literal>FULL</literal> option.
+      Limits the number of parallel workers used to perform
+      <xref linkend="parallel-vacuum"/>, which is further limited by
+      <xref linkend="guc-max-parallel-maintenance-workers"/>.  This option cannot be used with the <literal>FULL</literal> option.
      </para>
     </listitem>
    </varlistentry>
-- 
2.43.0

From 5dcd6b42a0d1aae8f3f772b57f5568545e386ad6 Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Tue, 17 Mar 2026 02:18:09 +0700
Subject: [PATCH v32 1/2] Parallel autovacuum

---
 src/backend/access/common/reloptions.c        |  11 +
 src/backend/access/heap/vacuumlazy.c          |  12 +
 src/backend/commands/vacuum.c                 |  21 +-
 src/backend/commands/vacuumparallel.c         | 223 +++++++++++++++++-
 src/backend/postmaster/autovacuum.c           |  26 +-
 src/backend/utils/init/globals.c              |   1 +
 src/backend/utils/misc/guc.c                  |   9 +-
 src/backend/utils/misc/guc_parameters.dat     |   8 +
 src/backend/utils/misc/postgresql.conf.sample |   1 +
 src/bin/psql/tab-complete.in.c                |   1 +
 src/include/commands/vacuum.h                 |   2 +
 src/include/miscadmin.h                       |   1 +
 src/include/utils/rel.h                       |   2 +
 src/test/modules/Makefile                     |   1 +
 src/test/modules/meson.build                  |   1 +
 src/test/modules/test_autovacuum/.gitignore   |   2 +
 src/test/modules/test_autovacuum/Makefile     |  20 ++
 src/test/modules/test_autovacuum/meson.build  |  15 ++
 .../t/001_parallel_autovacuum.pl              | 195 +++++++++++++++
 src/tools/pgindent/typedefs.list              |   1 +
 20 files changed, 539 insertions(+), 14 deletions(-)
 create mode 100644 src/test/modules/test_autovacuum/.gitignore
 create mode 100644 src/test/modules/test_autovacuum/Makefile
 create mode 100644 src/test/modules/test_autovacuum/meson.build
 create mode 100644 src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl

diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index a6002ae9b07..56dc99bdbe1 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -236,6 +236,15 @@ static relopt_int intRelOpts[] =
 		},
 		SPGIST_DEFAULT_FILLFACTOR, SPGIST_MIN_FILLFACTOR, 100
 	},
+	{
+		{
+			"autovacuum_parallel_workers",
+			"Maximum number of parallel autovacuum workers that can be used for processing this table.",
+			RELOPT_KIND_HEAP,
+			ShareUpdateExclusiveLock
+		},
+		-1, -1, 1024
+	},
 	{
 		{
 			"autovacuum_vacuum_threshold",
@@ -1969,6 +1978,8 @@ default_reloptions(Datum reloptions, bool validate, relopt_kind kind)
 		{"fillfactor", RELOPT_TYPE_INT, offsetof(StdRdOptions, fillfactor)},
 		{"autovacuum_enabled", RELOPT_TYPE_BOOL,
 		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, enabled)},
+		{"autovacuum_parallel_workers", RELOPT_TYPE_INT,
+		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, autovacuum_parallel_workers)},
 		{"autovacuum_vacuum_threshold", RELOPT_TYPE_INT,
 		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_threshold)},
 		{"autovacuum_vacuum_max_threshold", RELOPT_TYPE_INT,
diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index f698c2d899b..9fd4f6febbe 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -152,6 +152,7 @@
 #include "storage/latch.h"
 #include "storage/lmgr.h"
 #include "storage/read_stream.h"
+#include "utils/injection_point.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_rusage.h"
 #include "utils/timestamp.h"
@@ -862,6 +863,17 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
 	lazy_check_wraparound_failsafe(vacrel);
 	dead_items_alloc(vacrel, params.nworkers);
 
+#ifdef USE_INJECTION_POINTS
+
+	/*
+	 * Used by tests to pause before parallel vacuum is launched, allowing
+	 * test code to modify configuration that the leader then propagates to
+	 * workers.
+	 */
+	if (AmAutoVacuumWorkerProcess() && ParallelVacuumIsActive(vacrel))
+		INJECTION_POINT("autovacuum-start-parallel-vacuum", NULL);
+#endif
+
 	/*
 	 * Call lazy_scan_heap to perform all required heap pruning, index
 	 * vacuuming, and heap vacuuming (plus related processing)
diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c
index bce3a2daa24..1b5ba3ce1ef 100644
--- a/src/backend/commands/vacuum.c
+++ b/src/backend/commands/vacuum.c
@@ -2435,8 +2435,19 @@ vacuum_delay_point(bool is_analyze)
 	/* Always check for interrupts */
 	CHECK_FOR_INTERRUPTS();
 
-	if (InterruptPending ||
-		(!VacuumCostActive && !ConfigReloadPending))
+	if (InterruptPending)
+		return;
+
+	if (IsParallelWorker())
+	{
+		/*
+		 * Update cost-based vacuum delay parameters for a parallel autovacuum
+		 * worker if any changes are detected.
+		 */
+		parallel_vacuum_update_shared_delay_params();
+	}
+
+	if (!VacuumCostActive && !ConfigReloadPending)
 		return;
 
 	/*
@@ -2450,6 +2461,12 @@ vacuum_delay_point(bool is_analyze)
 		ConfigReloadPending = false;
 		ProcessConfigFile(PGC_SIGHUP);
 		VacuumUpdateCosts();
+
+		/*
+		 * Propagate cost-based vacuum delay parameters to shared memory if
+		 * any of them have changed during the config reload.
+		 */
+		parallel_vacuum_propagate_shared_delay_params();
 	}
 
 	/*
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 77834b96a21..13544de5b93 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -1,7 +1,9 @@
 /*-------------------------------------------------------------------------
  *
  * vacuumparallel.c
- *	  Support routines for parallel vacuum execution.
+ *	  Support routines for parallel vacuum and autovacuum execution. In the
+ *	  comments below, the word "vacuum" will refer to both vacuum and
+ *	  autovacuum.
  *
  * This file contains routines that are intended to support setting up, using,
  * and tearing down a ParallelVacuumState.
@@ -16,6 +18,13 @@
  * the parallel context is re-initialized so that the same DSM can be used for
  * multiple passes of index bulk-deletion and index cleanup.
  *
+ * For parallel autovacuum, we need to propagate cost-based vacuum delay
+ * parameters from the leader to its workers, as the leader's parameters can
+ * change even while processing a table (e.g., due to a config reload).
+ * The PVSharedCostParams struct manages these parameters using a
+ * generation counter. Each parallel worker polls this shared state and
+ * refreshes its local delay parameters whenever a change is detected.
+ *
  * Portions Copyright (c) 1996-2026, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
@@ -37,6 +46,7 @@
 #include "storage/bufmgr.h"
 #include "storage/proc.h"
 #include "tcop/tcopprot.h"
+#include "utils/injection_point.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
 
@@ -51,6 +61,33 @@
 #define PARALLEL_VACUUM_KEY_WAL_USAGE		4
 #define PARALLEL_VACUUM_KEY_INDEX_STATS		5
 
+/*
+ * Struct for cost-based vacuum delay related parameters to share among an
+ * autovacuum worker and its parallel vacuum workers.
+ */
+typedef struct PVSharedCostParams
+{
+	/*
+	 * The generation counter is incremented by the leader process each time
+	 * it updates the shared cost-based vacuum delay parameters. Parallel
+	 * vacuum workers compare it with their local generation,
+	 * shared_params_generation_local, to detect whether they need to refresh
+	 * their local parameters. The generation starts from 1 so that a freshly
+	 * started worker (whose local copy is 0) will always load the initial
+	 * parameters on its first check.
+	 */
+	pg_atomic_uint32 generation;
+
+	slock_t		mutex;			/* protects all fields below */
+
+	/* Parameters to share with parallel workers */
+	double		cost_delay;
+	int			cost_limit;
+	int			cost_page_dirty;
+	int			cost_page_hit;
+	int			cost_page_miss;
+} PVSharedCostParams;
+
 /*
  * Shared information among parallel workers.  So this is allocated in the DSM
  * segment.
@@ -120,6 +157,18 @@ typedef struct PVShared
 
 	/* Statistics of shared dead items */
 	VacDeadItemsInfo dead_items_info;
+
+	/*
+	 * If 'true' then we are running parallel autovacuum. Otherwise, we are
+	 * running parallel maintenance VACUUM.
+	 */
+	bool		is_autovacuum;
+
+	/*
+	 * Cost-based vacuum delay parameters shared between the autovacuum leader
+	 * and its parallel workers.
+	 */
+	PVSharedCostParams cost_params;
 } PVShared;
 
 /* Status used during parallel index vacuum or cleanup */
@@ -222,6 +271,17 @@ struct ParallelVacuumState
 	PVIndVacStatus status;
 };
 
+static PVSharedCostParams *pv_shared_cost_params = NULL;
+
+/*
+ * Worker-local copy of the last cost-parameter generation this worker has
+ * applied.  Initialized to 0; since the leader initializes the shared
+ * generation counter to 1, the first call to
+ * parallel_vacuum_update_shared_delay_params() will always detect a
+ * mismatch and read the initial parameters from shared memory.
+ */
+static uint32 shared_params_generation_local = 0;
+
 static int	parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 											bool *will_parallel_vacuum);
 static void parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scans,
@@ -233,6 +293,7 @@ static void parallel_vacuum_process_one_index(ParallelVacuumState *pvs, Relation
 static bool parallel_vacuum_index_is_parallel_safe(Relation indrel, int num_index_scans,
 												   bool vacuum);
 static void parallel_vacuum_error_callback(void *arg);
+static inline void parallel_vacuum_set_cost_parameters(PVSharedCostParams *params);
 
 /*
  * Try to enter parallel mode and create a parallel context.  Then initialize
@@ -374,8 +435,9 @@ parallel_vacuum_init(Relation rel, Relation *indrels, int nindexes,
 	shared->queryid = pgstat_get_my_query_id();
 	shared->maintenance_work_mem_worker =
 		(nindexes_mwm > 0) ?
-		maintenance_work_mem / Min(parallel_workers, nindexes_mwm) :
-		maintenance_work_mem;
+		vac_work_mem / Min(parallel_workers, nindexes_mwm) :
+		vac_work_mem;
+
 	shared->dead_items_info.max_bytes = vac_work_mem * (size_t) 1024;
 
 	/* Prepare DSA space for dead items */
@@ -392,6 +454,21 @@ parallel_vacuum_init(Relation rel, Relation *indrels, int nindexes,
 	pg_atomic_init_u32(&(shared->active_nworkers), 0);
 	pg_atomic_init_u32(&(shared->idx), 0);
 
+	shared->is_autovacuum = AmAutoVacuumWorkerProcess();
+
+	/*
+	 * Initialize shared cost-based vacuum delay parameters if it's for
+	 * autovacuum.
+	 */
+	if (shared->is_autovacuum)
+	{
+		parallel_vacuum_set_cost_parameters(&shared->cost_params);
+		pg_atomic_init_u32(&shared->cost_params.generation, 1);
+		SpinLockInit(&shared->cost_params.mutex);
+
+		pv_shared_cost_params = &(shared->cost_params);
+	}
+
 	shm_toc_insert(pcxt->toc, PARALLEL_VACUUM_KEY_SHARED, shared);
 	pvs->shared = shared;
 
@@ -457,6 +534,9 @@ parallel_vacuum_end(ParallelVacuumState *pvs, IndexBulkDeleteResult **istats)
 	DestroyParallelContext(pvs->pcxt);
 	ExitParallelMode();
 
+	if (AmAutoVacuumWorkerProcess())
+		pv_shared_cost_params = NULL;
+
 	pfree(pvs->will_parallel_vacuum);
 	pfree(pvs);
 }
@@ -534,6 +614,103 @@ parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs, long num_table_tup
 	parallel_vacuum_process_all_indexes(pvs, num_index_scans, false, wstats);
 }
 
+/*
+ * Fill in the given structure with cost-based vacuum delay parameter values.
+ */
+static inline void
+parallel_vacuum_set_cost_parameters(PVSharedCostParams *params)
+{
+	params->cost_delay = vacuum_cost_delay;
+	params->cost_limit = vacuum_cost_limit;
+	params->cost_page_dirty = VacuumCostPageDirty;
+	params->cost_page_hit = VacuumCostPageHit;
+	params->cost_page_miss = VacuumCostPageMiss;
+}
+
+/*
+ * Updates the cost-based vacuum delay parameters for parallel autovacuum
+ * workers.
+ *
+ * For non-autovacuum parallel workers, this function will have no effect.
+ */
+void
+parallel_vacuum_update_shared_delay_params(void)
+{
+	uint32		params_generation;
+
+	Assert(IsParallelWorker());
+
+	/* Quick return if the worker is not running for autovacuum */
+	if (pv_shared_cost_params == NULL)
+		return;
+
+	params_generation = pg_atomic_read_u32(&pv_shared_cost_params->generation);
+	Assert(shared_params_generation_local <= params_generation);
+
+	/* Return if the parameters have not changed in the leader */
+	if (params_generation == shared_params_generation_local)
+		return;
+
+	SpinLockAcquire(&pv_shared_cost_params->mutex);
+	VacuumCostDelay = pv_shared_cost_params->cost_delay;
+	VacuumCostLimit = pv_shared_cost_params->cost_limit;
+	VacuumCostPageDirty = pv_shared_cost_params->cost_page_dirty;
+	VacuumCostPageHit = pv_shared_cost_params->cost_page_hit;
+	VacuumCostPageMiss = pv_shared_cost_params->cost_page_miss;
+	SpinLockRelease(&pv_shared_cost_params->mutex);
+
+	VacuumUpdateCosts();
+
+	shared_params_generation_local = params_generation;
+
+	elog(DEBUG2,
+		 "parallel autovacuum worker updated cost params: cost_limit=%d, cost_delay=%g, cost_page_miss=%d, cost_page_dirty=%d, cost_page_hit=%d",
+		 vacuum_cost_limit,
+		 vacuum_cost_delay,
+		 VacuumCostPageMiss,
+		 VacuumCostPageDirty,
+		 VacuumCostPageHit);
+}
+
+/*
+ * Store the cost-based vacuum delay parameters in the shared memory so that
+ * parallel vacuum workers can consume them (see
+ * parallel_vacuum_update_shared_delay_params()).
+ */
+void
+parallel_vacuum_propagate_shared_delay_params(void)
+{
+	Assert(AmAutoVacuumWorkerProcess());
+
+	/*
+	 * Quick return if the leader process is not sharing the delay parameters.
+	 */
+	if (pv_shared_cost_params == NULL)
+		return;
+
+	/*
+	 * Check if any delay parameters have changed. We can read them without
+	 * locks as only the leader can modify them.
+	 */
+	if (vacuum_cost_delay == pv_shared_cost_params->cost_delay &&
+		vacuum_cost_limit == pv_shared_cost_params->cost_limit &&
+		VacuumCostPageDirty == pv_shared_cost_params->cost_page_dirty &&
+		VacuumCostPageHit == pv_shared_cost_params->cost_page_hit &&
+		VacuumCostPageMiss == pv_shared_cost_params->cost_page_miss)
+		return;
+
+	/* Update the shared delay parameters */
+	SpinLockAcquire(&pv_shared_cost_params->mutex);
+	parallel_vacuum_set_cost_parameters(pv_shared_cost_params);
+	SpinLockRelease(&pv_shared_cost_params->mutex);
+
+	/*
+	 * Increment the generation of the parameters, i.e. let parallel workers
+	 * know that they should re-read shared cost params.
+	 */
+	pg_atomic_fetch_add_u32(&pv_shared_cost_params->generation, 1);
+}
+
 /*
  * Compute the number of parallel worker processes to request.  Both index
  * vacuum and index cleanup can be executed with parallel workers.
@@ -555,12 +732,17 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 	int			nindexes_parallel_bulkdel = 0;
 	int			nindexes_parallel_cleanup = 0;
 	int			parallel_workers;
+	int			max_workers;
+
+	max_workers = AmAutoVacuumWorkerProcess() ?
+		autovacuum_max_parallel_workers :
+		max_parallel_maintenance_workers;
 
 	/*
 	 * We don't allow performing parallel operation in standalone backend or
 	 * when parallelism is disabled.
 	 */
-	if (!IsUnderPostmaster || max_parallel_maintenance_workers == 0)
+	if (!IsUnderPostmaster || max_workers == 0)
 		return 0;
 
 	/*
@@ -599,8 +781,8 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 	parallel_workers = (nrequested > 0) ?
 		Min(nrequested, nindexes_parallel) : nindexes_parallel;
 
-	/* Cap by max_parallel_maintenance_workers */
-	parallel_workers = Min(parallel_workers, max_parallel_maintenance_workers);
+	/* Cap by GUC variable */
+	parallel_workers = Min(parallel_workers, max_workers);
 
 	return parallel_workers;
 }
@@ -730,6 +912,16 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 							pvs->pcxt->nworkers_launched, nworkers)));
 	}
 
+#ifdef USE_INJECTION_POINTS
+
+	/*
+	 * Used by tests to pause after workers are launched but before index
+	 * vacuuming begins.
+	 */
+	if (nworkers > 0)
+		INJECTION_POINT("autovacuum-leader-before-indexes-processing", NULL);
+#endif
+
 	/* Vacuum the indexes that can be processed by only leader process */
 	parallel_vacuum_process_unsafe_indexes(pvs);
 
@@ -1064,7 +1256,21 @@ parallel_vacuum_main(dsm_segment *seg, shm_toc *toc)
 								shared->dead_items_handle);
 
 	/* Set cost-based vacuum delay */
-	VacuumUpdateCosts();
+	if (shared->is_autovacuum)
+	{
+		/*
+		 * Parallel autovacuum workers initialize cost-based delay parameters
+		 * from the leader's shared state rather than GUC defaults, because
+		 * the leader may have applied per-table or autovacuum-specific
+		 * overrides.  pv_shared_cost_params must be set before calling
+		 * parallel_vacuum_update_shared_delay_params().
+		 */
+		pv_shared_cost_params = &(shared->cost_params);
+		parallel_vacuum_update_shared_delay_params();
+	}
+	else
+		VacuumUpdateCosts();
+
 	VacuumCostBalance = 0;
 	VacuumCostBalanceLocal = 0;
 	VacuumSharedCostBalance = &(shared->cost_balance);
@@ -1119,6 +1325,9 @@ parallel_vacuum_main(dsm_segment *seg, shm_toc *toc)
 	vac_close_indexes(nindexes, indrels, RowExclusiveLock);
 	table_close(rel, ShareUpdateExclusiveLock);
 	FreeAccessStrategy(pvs.bstrategy);
+
+	if (shared->is_autovacuum)
+		pv_shared_cost_params = NULL;
 }
 
 /*
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index d695f1de4bd..ced4970a3a8 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -1688,7 +1688,7 @@ VacuumUpdateCosts(void)
 	}
 	else
 	{
-		/* Must be explicit VACUUM or ANALYZE */
+		/* Must be explicit VACUUM or ANALYZE or parallel autovacuum worker */
 		vacuum_cost_delay = VacuumCostDelay;
 		vacuum_cost_limit = VacuumCostLimit;
 	}
@@ -2928,8 +2928,7 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 		 */
 		tab->at_params.index_cleanup = VACOPTVALUE_UNSPECIFIED;
 		tab->at_params.truncate = VACOPTVALUE_UNSPECIFIED;
-		/* As of now, we don't support parallel vacuum for autovacuum */
-		tab->at_params.nworkers = -1;
+
 		tab->at_params.freeze_min_age = freeze_min_age;
 		tab->at_params.freeze_table_age = freeze_table_age;
 		tab->at_params.multixact_freeze_min_age = multixact_freeze_min_age;
@@ -2939,6 +2938,27 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 		tab->at_params.log_analyze_min_duration = log_analyze_min_duration;
 		tab->at_params.toast_parent = InvalidOid;
 
+		/* Determine the number of parallel vacuum workers to use */
+		tab->at_params.nworkers = 0;
+		if (avopts)
+		{
+			if (avopts->autovacuum_parallel_workers == 0)
+			{
+				/*
+				 * Disable parallel vacuum if the reloption sets the parallel
+				 * degree to zero.
+				 */
+				tab->at_params.nworkers = -1;
+			}
+			else if (avopts->autovacuum_parallel_workers > 0)
+				tab->at_params.nworkers = avopts->autovacuum_parallel_workers;
+
+			/*
+			 * autovacuum_parallel_workers == -1 falls through, keeping
+			 * nworkers = 0.
+			 */
+		}
+
 		/*
 		 * Later, in vacuum_rel(), we check reloptions for any
 		 * vacuum_max_eager_freeze_failure_rate override.
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 36ad708b360..24ddb276f0c 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -143,6 +143,7 @@ int			NBuffers = 16384;
 int			MaxConnections = 100;
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
+int			autovacuum_max_parallel_workers = 0;
 int			MaxBackends = 0;
 
 /* GUC parameters for vacuum */
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index e1546d9c97a..1ac8e8fc3be 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -3358,9 +3358,14 @@ set_config_with_handle(const char *name, config_handle *handle,
 	 *
 	 * Also allow normal setting if the GUC is marked GUC_ALLOW_IN_PARALLEL.
 	 *
-	 * Other changes might need to affect other workers, so forbid them.
+	 * Other changes might need to affect other workers, so forbid them. Note
+	 * that the parallel autovacuum leader is an exception: only the
+	 * cost-based delay parameters need to reach its parallel workers, and
+	 * they are propagated during parallel vacuum (see vacuumparallel.c for
+	 * details).
 	 */
-	if (IsInParallelMode() && changeVal && action != GUC_ACTION_SAVE &&
+	if (IsInParallelMode() && !AmAutoVacuumWorkerProcess() && changeVal &&
+		action != GUC_ACTION_SAVE &&
 		(record->flags & GUC_ALLOW_IN_PARALLEL) == 0)
 	{
 		ereport(elevel,
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index 0a862693fcd..6ef46d88155 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -170,6 +170,14 @@
   max => '10.0',
 },
 
+{ name => 'autovacuum_max_parallel_workers', type => 'int', context => 'PGC_SIGHUP', group => 'VACUUM_AUTOVACUUM',
+  short_desc => 'Maximum number of parallel workers that can be used by a single autovacuum worker.',
+  variable => 'autovacuum_max_parallel_workers',
+  boot_val => '0',
+  min => '0',
+  max => 'MAX_PARALLEL_WORKER_LIMIT',
+},
+
 { name => 'autovacuum_max_workers', type => 'int', context => 'PGC_SIGHUP', group => 'VACUUM_AUTOVACUUM',
   short_desc => 'Sets the maximum number of simultaneously running autovacuum worker processes.',
   variable => 'autovacuum_max_workers',
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index cf15597385b..73c49f09aef 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -713,6 +713,7 @@
 #autovacuum_worker_slots = 16           # autovacuum worker slots to allocate
                                         # (change requires restart)
 #autovacuum_max_workers = 3             # max number of autovacuum subprocesses
+#autovacuum_max_parallel_workers = 0    # limited by max_parallel_workers
 #autovacuum_naptime = 1min              # time between autovacuum runs
 #autovacuum_vacuum_threshold = 50       # min number of row updates before
                                         # vacuum
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 523d3f39fc5..f6bf072bab5 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -1432,6 +1432,7 @@ static const char *const table_storage_parameters[] = {
 	"autovacuum_multixact_freeze_max_age",
 	"autovacuum_multixact_freeze_min_age",
 	"autovacuum_multixact_freeze_table_age",
+	"autovacuum_parallel_workers",
 	"autovacuum_vacuum_cost_delay",
 	"autovacuum_vacuum_cost_limit",
 	"autovacuum_vacuum_insert_scale_factor",
diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h
index 1f45bca015c..8b42808e70b 100644
--- a/src/include/commands/vacuum.h
+++ b/src/include/commands/vacuum.h
@@ -422,6 +422,8 @@ extern void parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs,
 												int num_index_scans,
 												bool estimated_count,
 												PVWorkerStats *wstats);
+extern void parallel_vacuum_update_shared_delay_params(void);
+extern void parallel_vacuum_propagate_shared_delay_params(void);
 extern void parallel_vacuum_main(dsm_segment *seg, shm_toc *toc);
 
 /* in commands/analyze.c */
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index f16f35659b9..00190c67ecf 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -178,6 +178,7 @@ extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
+extern PGDLLIMPORT int autovacuum_max_parallel_workers;
 
 extern PGDLLIMPORT int commit_timestamp_buffers;
 extern PGDLLIMPORT int multixact_member_buffers;
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index 236830f6b93..cd1e92f2302 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -311,6 +311,8 @@ typedef struct ForeignKeyCacheInfo
 typedef struct AutoVacOpts
 {
 	bool		enabled;
+
+	int			autovacuum_parallel_workers;
 	int			vacuum_threshold;
 	int			vacuum_max_threshold;
 	int			vacuum_ins_threshold;
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index 28ce3b35eda..336a212faf4 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -16,6 +16,7 @@ SUBDIRS = \
 		  plsample \
 		  spgist_name_ops \
 		  test_aio \
+		  test_autovacuum \
 		  test_binaryheap \
 		  test_bitmapset \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index 3ac291656c1..929659956cb 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -16,6 +16,7 @@ subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
 subdir('test_aio')
+subdir('test_autovacuum')
 subdir('test_binaryheap')
 subdir('test_bitmapset')
 subdir('test_bloomfilter')
diff --git a/src/test/modules/test_autovacuum/.gitignore b/src/test/modules/test_autovacuum/.gitignore
new file mode 100644
index 00000000000..716e17f5a2a
--- /dev/null
+++ b/src/test/modules/test_autovacuum/.gitignore
@@ -0,0 +1,2 @@
+# Generated subdirectories
+/tmp_check/
diff --git a/src/test/modules/test_autovacuum/Makefile b/src/test/modules/test_autovacuum/Makefile
new file mode 100644
index 00000000000..188ec9f96a2
--- /dev/null
+++ b/src/test/modules/test_autovacuum/Makefile
@@ -0,0 +1,20 @@
+# src/test/modules/test_autovacuum/Makefile
+
+PGFILEDESC = "test_autovacuum - test code for parallel autovacuum"
+
+TAP_TESTS = 1
+
+EXTRA_INSTALL = src/test/modules/injection_points
+
+export enable_injection_points
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/test_autovacuum
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/src/test/modules/test_autovacuum/meson.build b/src/test/modules/test_autovacuum/meson.build
new file mode 100644
index 00000000000..86e392bc0de
--- /dev/null
+++ b/src/test/modules/test_autovacuum/meson.build
@@ -0,0 +1,15 @@
+# Copyright (c) 2024-2026, PostgreSQL Global Development Group
+
+tests += {
+  'name': 'test_autovacuum',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'tap': {
+    'env': {
+       'enable_injection_points': get_option('injection_points') ? 'yes' : 'no',
+    },
+    'tests': [
+      't/001_parallel_autovacuum.pl',
+    ],
+  },
+}
diff --git a/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl b/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
new file mode 100644
index 00000000000..2aca32374a2
--- /dev/null
+++ b/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
@@ -0,0 +1,195 @@
+# Test parallel autovacuum behavior
+
+use warnings FATAL => 'all';
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+if ($ENV{enable_injection_points} ne 'yes')
+{
+	plan skip_all => 'Injection points not supported by this build';
+}
+
+# Before each test, disable autovacuum for the 'test_autovac' table and
+# generate some dead tuples in it. Returns the current autovacuum_count of
+# the table test_autovac.
+sub prepare_for_next_test
+{
+	my ($node, $test_number) = @_;
+
+	$node->safe_psql(
+		'postgres', qq{
+		ALTER TABLE test_autovac SET (autovacuum_enabled = false);
+		UPDATE test_autovac SET col_1 = $test_number;
+	});
+
+	my $count = $node->safe_psql(
+		'postgres', qq{
+		SELECT autovacuum_count FROM pg_stat_user_tables WHERE relname = 'test_autovac'
+	});
+
+	return $count;
+}
+
+# Wait for the table to be vacuumed by an autovacuum worker.
+sub wait_for_autovacuum_complete
+{
+	my ($node, $old_count) = @_;
+
+	$node->poll_query_until(
+		'postgres', qq{
+		SELECT autovacuum_count > $old_count FROM pg_stat_user_tables WHERE relname = 'test_autovac'
+	});
+}
+
+my $psql_out;
+
+my $node = PostgreSQL::Test::Cluster->new('main');
+$node->init;
+
+# Limit to one autovacuum worker and disable autovacuum logging globally
+# (enabled only on the test table) so that log checks below match only
+# activity on the expected table.
+$node->append_conf(
+	'postgresql.conf', qq{
+autovacuum_max_workers = 1
+autovacuum_worker_slots = 1
+autovacuum_max_parallel_workers = 2
+max_worker_processes = 10
+max_parallel_workers = 10
+log_min_messages = debug2
+autovacuum_naptime = '1s'
+min_parallel_index_scan_size = 0
+log_autovacuum_min_duration = -1
+});
+$node->start;
+
+# Check if the extension injection_points is available, as it may be
+# possible that this script is run with installcheck, where the module
+# would not be installed by default.
+if (!$node->check_extension('injection_points'))
+{
+	plan skip_all => 'Extension injection_points not installed';
+}
+
+# Create all functions needed for testing
+$node->safe_psql(
+	'postgres', qq{
+	CREATE EXTENSION injection_points;
+});
+
+my $indexes_num = 3;
+my $initial_rows_num = 10_000;
+my $autovacuum_parallel_workers = 2;
+
+# Create table and fill it with some data
+$node->safe_psql(
+	'postgres', qq{
+	CREATE TABLE test_autovac (
+		id SERIAL PRIMARY KEY,
+		col_1 INTEGER,  col_2 INTEGER,  col_3 INTEGER,  col_4 INTEGER
+	) WITH (autovacuum_parallel_workers = $autovacuum_parallel_workers,
+			log_autovacuum_min_duration = 0);
+
+	INSERT INTO test_autovac
+	SELECT
+		g AS col1,
+		g + 1 AS col2,
+		g + 2 AS col3,
+		g + 3 AS col4
+	FROM generate_series(1, $initial_rows_num) AS g;
+});
+
+# Create specified number of b-tree indexes on the table
+$node->safe_psql(
+	'postgres', qq{
+	DO \$\$
+	DECLARE
+		i INTEGER;
+	BEGIN
+		FOR i IN 1..$indexes_num LOOP
+			EXECUTE format('CREATE INDEX idx_col_\%s ON test_autovac (col_\%s);', i, i);
+		END LOOP;
+	END \$\$;
+});
+
+# Test 1:
+# The table has enough indexes and the appropriate reloptions, so autovacuum
+# must be able to process it in parallel mode. Check that it does.
+
+my $av_count = prepare_for_next_test($node, 1);
+my $log_offset = -s $node->logfile;
+
+$node->safe_psql(
+	'postgres', qq{
+	ALTER TABLE test_autovac SET (autovacuum_enabled = true);
+});
+
+# Wait until the parallel autovacuum on the table completes. At the same time,
+# check that the required number of parallel workers was launched.
+wait_for_autovacuum_complete($node, $av_count);
+ok( $node->log_contains(
+		qr/parallel workers: index vacuum: 2 planned, 2 launched in total/,
+		$log_offset));
+
+# Test 2:
+# Check whether parallel autovacuum leader can propagate cost-based parameters
+# to the parallel workers.
+
+$av_count = prepare_for_next_test($node, 2);
+$log_offset = -s $node->logfile;
+
+$node->safe_psql(
+	'postgres', qq{
+	SELECT injection_points_attach('autovacuum-start-parallel-vacuum', 'wait');
+	SELECT injection_points_attach('autovacuum-leader-before-indexes-processing', 'wait');
+
+	ALTER TABLE test_autovac SET (autovacuum_parallel_workers = 1, autovacuum_enabled = true);
+});
+
+# Wait until parallel autovacuum is initialized
+$node->wait_for_event('autovacuum worker',
+	'autovacuum-start-parallel-vacuum');
+
+# Update the shared cost-based delay parameters.
+$node->safe_psql(
+	'postgres', qq{
+	ALTER SYSTEM SET vacuum_cost_limit = 500;
+	ALTER SYSTEM SET vacuum_cost_page_miss = 10;
+	ALTER SYSTEM SET vacuum_cost_page_dirty = 10;
+	ALTER SYSTEM SET vacuum_cost_page_hit = 10;
+	SELECT pg_reload_conf();
+});
+
+# Resume the leader process so it picks up the updated shared parameters during
+# the heap scan (i.e. in vacuum_delay_point()) and launches a parallel vacuum
+# worker; the leader then stops before vacuuming indexes at the injection point.
+$node->safe_psql(
+	'postgres', qq{
+	SELECT injection_points_wakeup('autovacuum-start-parallel-vacuum');
+});
+$node->wait_for_event('autovacuum worker',
+	'autovacuum-leader-before-indexes-processing');
+
+# Check whether the parallel worker successfully updated all parameters during
+# index processing.
+$node->wait_for_log(
+	qr/parallel autovacuum worker updated cost params: cost_limit=500, cost_delay=2, cost_page_miss=10, cost_page_dirty=10, cost_page_hit=10/,
+	$log_offset);
+
+$node->safe_psql(
+	'postgres', qq{
+	SELECT injection_points_wakeup('autovacuum-leader-before-indexes-processing');
+});
+
+wait_for_autovacuum_complete($node, $av_count);
+
+# Cleanup
+$node->safe_psql(
+	'postgres', qq{
+	SELECT injection_points_detach('autovacuum-start-parallel-vacuum');
+	SELECT injection_points_detach('autovacuum-leader-before-indexes-processing');
+});
+
+$node->stop;
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index e3c1007abdf..aeaf9307558 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2097,6 +2097,7 @@ PVIndStats
 PVIndVacStatus
 PVOID
 PVShared
+PVSharedCostParams
 PVWorkerUsage
 PVWorkerStats
 PX_Alias
-- 
2.43.0

From 3954e445a3e794ef19dbccb699a0e61d36a33733 Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Sat, 28 Mar 2026 15:32:01 +0700
Subject: [PATCH 2/2] fixes for 0001 patch

---
 src/backend/access/common/reloptions.c        |  4 +-
 src/backend/access/heap/vacuumlazy.c          |  5 +-
 src/backend/commands/vacuumparallel.c         | 52 +++++++++++++------
 src/backend/postmaster/autovacuum.c           | 36 +++++++------
 src/backend/utils/init/globals.c              |  2 +-
 src/backend/utils/misc/guc.c                  |  7 +--
 src/backend/utils/misc/guc_parameters.dat     |  2 +-
 src/backend/utils/misc/postgresql.conf.sample |  2 +-
 .../t/001_parallel_autovacuum.pl              | 22 ++++----
 9 files changed, 83 insertions(+), 49 deletions(-)

diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index ce41b015b32..56dc99bdbe1 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -239,11 +239,11 @@ static relopt_int intRelOpts[] =
 	{
 		{
 			"autovacuum_parallel_workers",
-			"Overrides value of the autovacuum_max_parallel_workers parameter for this table, if > -1.",
+			"Maximum number of parallel autovacuum workers that can be used for processing this table.",
 			RELOPT_KIND_HEAP,
 			ShareUpdateExclusiveLock
 		},
-		0, -1, 1024
+		-1, -1, 1024
 	},
 	{
 		{
diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 8c7de657976..9fd4f6febbe 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -864,8 +864,11 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
 	dead_items_alloc(vacrel, params.nworkers);
 
 #ifdef USE_INJECTION_POINTS
+
 	/*
-	 * Trigger injection point, if parallel autovacuum is about to be started.
+	 * Used by tests to pause before parallel vacuum is launched, allowing
+	 * test code to modify configuration that the leader then propagates to
+	 * workers.
 	 */
 	if (AmAutoVacuumWorkerProcess() && ParallelVacuumIsActive(vacrel))
 		INJECTION_POINT("autovacuum-start-parallel-vacuum", NULL);
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 62b6f50b538..13544de5b93 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -69,10 +69,12 @@ typedef struct PVSharedCostParams
 {
 	/*
 	 * The generation counter is incremented by the leader process each time
-	 * it updates the shared cost-based vacuum delay parameters. Paralell
+	 * it updates the shared cost-based vacuum delay parameters. Parallel
 	 * vacuum workers compares it with their local generation,
 	 * shared_params_generation_local, to detect whether they need to refresh
-	 * their local parameters.
+	 * their local parameters. The generation starts from 1 so that a freshly
+	 * started worker (whose local copy is 0) will always load the initial
+	 * parameters on its first check.
 	 */
 	pg_atomic_uint32 generation;
 
@@ -158,13 +160,13 @@ typedef struct PVShared
 
 	/*
 	 * If 'true' then we are running parallel autovacuum. Otherwise, we are
-	 * running parallel maintenence VACUUM.
+	 * running parallel maintenance VACUUM.
 	 */
 	bool		is_autovacuum;
 
 	/*
-	 * Struct for syncing cost-based vacuum delay parameters between
-	 * supportive parallel autovacuum workers with leader worker.
+	 * Cost-based vacuum delay parameters shared between the autovacuum leader
+	 * and its parallel workers.
 	 */
 	PVSharedCostParams cost_params;
 } PVShared;
@@ -271,7 +273,13 @@ struct ParallelVacuumState
 
 static PVSharedCostParams *pv_shared_cost_params = NULL;
 
-/* See comments in the PVSharedCostParams for the details */
+/*
+ * Worker-local copy of the last cost-parameter generation this worker has
+ * applied.  Initialized to 0; since the leader initializes the shared
+ * generation counter to 1, the first call to
+ * parallel_vacuum_update_shared_delay_params() will always detect a
+ * mismatch and read the initial parameters from shared memory.
+ */
 static uint32 shared_params_generation_local = 0;
 
 static int	parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
@@ -455,7 +463,7 @@ parallel_vacuum_init(Relation rel, Relation *indrels, int nindexes,
 	if (shared->is_autovacuum)
 	{
 		parallel_vacuum_set_cost_parameters(&shared->cost_params);
-		pg_atomic_init_u32(&shared->cost_params.generation, 0);
+		pg_atomic_init_u32(&shared->cost_params.generation, 1);
 		SpinLockInit(&shared->cost_params.mutex);
 
 		pv_shared_cost_params = &(shared->cost_params);
@@ -623,7 +631,7 @@ parallel_vacuum_set_cost_parameters(PVSharedCostParams *params)
  * Updates the cost-based vacuum delay parameters for parallel autovacuum
  * workers.
  *
- * For non-autovacuum parallel worker this function will have no effect.
+ * For non-autovacuum parallel workers, this function will have no effect.
  */
 void
 parallel_vacuum_update_shared_delay_params(void)
@@ -632,7 +640,7 @@ parallel_vacuum_update_shared_delay_params(void)
 
 	Assert(IsParallelWorker());
 
-	/* Quick return if the wokrer is not running for the autovacuum */
+	/* Quick return if this worker is not part of a parallel autovacuum */
 	if (pv_shared_cost_params == NULL)
 		return;
 
@@ -681,7 +689,7 @@ parallel_vacuum_propagate_shared_delay_params(void)
 		return;
 
 	/*
-	 * Check if any delay parameters has changed. We can read them without
+	 * Check if any delay parameters have changed. We can read them without
 	 * locks as only the leader can modify them.
 	 */
 	if (vacuum_cost_delay == pv_shared_cost_params->cost_delay &&
@@ -905,9 +913,10 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 	}
 
 #ifdef USE_INJECTION_POINTS
+
 	/*
-	 * This injection point is used to wait until parallel autovacuum workers
-	 * finishes their part of index processing.
+	 * Used by tests to pause after workers are launched but before index
+	 * vacuuming begins.
 	 */
 	if (nworkers > 0)
 		INJECTION_POINT("autovacuum-leader-before-indexes-processing", NULL);
@@ -1247,15 +1256,26 @@ parallel_vacuum_main(dsm_segment *seg, shm_toc *toc)
 								shared->dead_items_handle);
 
 	/* Set cost-based vacuum delay */
-	VacuumUpdateCosts();
+	if (shared->is_autovacuum)
+	{
+		/*
+		 * Parallel autovacuum workers initialize cost-based delay parameters
+		 * from the leader's shared state rather than GUC defaults, because
+		 * the leader may have applied per-table or autovacuum-specific
+		 * overrides.  pv_shared_cost_params must be set before calling
+		 * parallel_vacuum_update_shared_delay_params().
+		 */
+		pv_shared_cost_params = &(shared->cost_params);
+		parallel_vacuum_update_shared_delay_params();
+	}
+	else
+		VacuumUpdateCosts();
+
 	VacuumCostBalance = 0;
 	VacuumCostBalanceLocal = 0;
 	VacuumSharedCostBalance = &(shared->cost_balance);
 	VacuumActiveNWorkers = &(shared->active_nworkers);
 
-	if (shared->is_autovacuum)
-		pv_shared_cost_params = &(shared->cost_params);
-
 	/* Set parallel vacuum state */
 	pvs.indrels = indrels;
 	pvs.nindexes = nindexes;
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index 3124341721c..ced4970a3a8 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -2869,7 +2869,6 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 		int			multixact_freeze_table_age;
 		int			log_vacuum_min_duration;
 		int			log_analyze_min_duration;
-		int			nparallel_workers = -1; /* disabled by default */
 
 		/*
 		 * Calculate the vacuum cost parameters and the freeze ages.  If there
@@ -2930,19 +2929,6 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 		tab->at_params.index_cleanup = VACOPTVALUE_UNSPECIFIED;
 		tab->at_params.truncate = VACOPTVALUE_UNSPECIFIED;
 
-		/* Decide whether we need to process indexes of table in parallel. */
-		if (avopts)
-		{
-			if (avopts->autovacuum_parallel_workers > 0)
-				nparallel_workers = avopts->autovacuum_parallel_workers;
-			else if (avopts->autovacuum_parallel_workers == -1)
-			{
-				nparallel_workers = autovacuum_max_parallel_workers > 0
-					? autovacuum_max_parallel_workers
-					: -1; /* disable parallelism if parameter's value is 0 */
-			}
-		}
-
 		tab->at_params.freeze_min_age = freeze_min_age;
 		tab->at_params.freeze_table_age = freeze_table_age;
 		tab->at_params.multixact_freeze_min_age = multixact_freeze_min_age;
@@ -2951,7 +2937,27 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 		tab->at_params.log_vacuum_min_duration = log_vacuum_min_duration;
 		tab->at_params.log_analyze_min_duration = log_analyze_min_duration;
 		tab->at_params.toast_parent = InvalidOid;
-		tab->at_params.nworkers = nparallel_workers;
+
+		/* Determine the number of parallel vacuum workers to use */
+		tab->at_params.nworkers = 0;
+		if (avopts)
+		{
+			if (avopts->autovacuum_parallel_workers == 0)
+			{
+				/*
+				 * Disable parallel vacuum if the reloption sets the parallel
+				 * degree to zero.
+				 */
+				tab->at_params.nworkers = -1;
+			}
+			else if (avopts->autovacuum_parallel_workers > 0)
+				tab->at_params.nworkers = avopts->autovacuum_parallel_workers;
+
+			/*
+			 * autovacuum_parallel_workers == -1 falls through, keeping
+			 * nworkers = 0.
+			 */
+		}
 
 		/*
 		 * Later, in vacuum_rel(), we check reloptions for any
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 8265a82b639..24ddb276f0c 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -143,7 +143,7 @@ int			NBuffers = 16384;
 int			MaxConnections = 100;
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
-int			autovacuum_max_parallel_workers = 2;
+int			autovacuum_max_parallel_workers = 0;
 int			MaxBackends = 0;
 
 /* GUC parameters for vacuum */
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 45b39b7c47f..1ac8e8fc3be 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -3359,9 +3359,10 @@ set_config_with_handle(const char *name, config_handle *handle,
 	 * Also allow normal setting if the GUC is marked GUC_ALLOW_IN_PARALLEL.
 	 *
 	 * Other changes might need to affect other workers, so forbid them. Note,
-	 * that parallel autovacuum leader is an exception, because only
-	 * cost-based delays need to be affected also to parallel autovacuum
-	 * workers, and we will handle it elsewhere if appropriate.
+	 * that the parallel autovacuum leader is an exception: only the
+	 * cost-based delay parameters need to reach its parallel workers, and
+	 * they are propagated during parallel vacuum (see vacuumparallel.c for
+	 * details).
 	 */
 	if (IsInParallelMode() && !AmAutoVacuumWorkerProcess() && changeVal &&
 		action != GUC_ACTION_SAVE &&
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index e16c756e2cc..6ef46d88155 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -173,7 +173,7 @@
 { name => 'autovacuum_max_parallel_workers', type => 'int', context => 'PGC_SIGHUP', group => 'VACUUM_AUTOVACUUM',
   short_desc => 'Maximum number of parallel workers that can be used by a single autovacuum worker.',
   variable => 'autovacuum_max_parallel_workers',
-  boot_val => '2',
+  boot_val => '0',
   min => '0',
   max => 'MAX_PARALLEL_WORKER_LIMIT',
 },
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index 6097d967ec1..73c49f09aef 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -713,7 +713,7 @@
 #autovacuum_worker_slots = 16           # autovacuum worker slots to allocate
                                         # (change requires restart)
 #autovacuum_max_workers = 3             # max number of autovacuum subprocesses
-#autovacuum_max_parallel_workers = 2    # limited by max_parallel_workers
+#autovacuum_max_parallel_workers = 0    # limited by max_parallel_workers
 #autovacuum_naptime = 1min              # time between autovacuum runs
 #autovacuum_vacuum_threshold = 50       # min number of row updates before
                                         # vacuum
diff --git a/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl b/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
index 0364019d5f0..2aca32374a2 100644
--- a/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
+++ b/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
@@ -12,7 +12,7 @@ if ($ENV{enable_injection_points} ne 'yes')
 
 # Before each test we should disable autovacuum for 'test_autovac' table and
 # generate some dead tuples in it. Returns the current autovacuum_count of
-# the table tset_autovac.
+# the table test_autovac.
 sub prepare_for_next_test
 {
 	my ($node, $test_number) = @_;
@@ -47,16 +47,20 @@ my $psql_out;
 my $node = PostgreSQL::Test::Cluster->new('main');
 $node->init;
 
-# Configure postgres, so it can launch parallel autovacuum workers, log all
-# information we are interested in and autovacuum works frequently
+# Limit to one autovacuum worker and disable autovacuum logging globally
+# (enabled only on the test table) so that log checks below match only
+# activity on the expected table.
 $node->append_conf(
 	'postgresql.conf', qq{
-	max_worker_processes = 20
-	max_parallel_workers = 20
-	autovacuum_max_parallel_workers = 4
-	log_min_messages = debug2
-	autovacuum_naptime = '1s'
-	min_parallel_index_scan_size = 0
+autovacuum_max_workers = 1
+autovacuum_worker_slots = 1
+autovacuum_max_parallel_workers = 2
+max_worker_processes = 10
+max_parallel_workers = 10
+log_min_messages = debug2
+autovacuum_naptime = '1s'
+min_parallel_index_scan_size = 0
+log_autovacuum_min_duration = -1
 });
 $node->start;
 
-- 
2.43.0
