Hi,

On Thu, Feb 26, 2026 at 6:59 AM Masahiko Sawada <[email protected]> wrote:
>
> For example, if users want to disable all parallel queries, they can do
> that by setting max_parallel_workers to 0. If parallel vacuum workers
> for autovacuums are taken from max_worker_processes pool (i.e.,
> without max_paralle_workers limit), users would need to set both
> max_parallel_workers and autovacuum_max_parallel_workers to 0.
>

This is a bit off-topic by now, but I really want to clarify this question.

If parallel a/v workers are not limited by max_parallel_workers and the
user wants to disable all parallel operations, it is still enough to set
max_parallel_workers to 0. In this case parallel a/v cannot acquire any
workers from the bgworkers pool, so the user's goal is achieved (and there
is no need to also set autovacuum_max_parallel_workers to 0).

**Comments on the 0002 patch**

>
> +                                       /* Worker usage stats for
> parallel autovacuum. */
> +                                       appendStringInfo(&buf,
> +
>   _("parallel index vacuum: %d workers were planned, %d workers were
> reserved and %d workers were launched in total\n"),
> +
>   vacrel->workers_usage.vacuum.nplanned,
> +
>   vacrel->workers_usage.vacuum.nreserved,
> +
>   vacrel->workers_usage.vacuum.nlaunched);
>
> These log messages need to take care of plural forms but it seems to
> be too long if we use errmsg_plural() for each number. So how about
> something like:
>
> parallel workers: index: %d planned, %d reserved, %d launched in total
> parallel workers: cleanup %d planned, %d reserved, %d launched
>
> (Index cleanup is executed at most once so we don't need "in total" in
> the message.)

Oh, I forgot about handling plural forms. I agree with your suggestion.
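The suggested wording can be sketched in plain C (snprintf stands in for appendStringInfo here; the message text follows the suggestion above):

```c
#include <stdio.h>

/*
 * Sketch of the suggested neutral wording: labeling the counts
 * ("%d planned") instead of embedding them in a sentence ("%d workers
 * were planned") avoids needing errmsg_plural() for every number.
 */
static void
format_worker_usage(char *buf, size_t buflen,
					int nplanned, int nreserved, int nlaunched)
{
	snprintf(buf, buflen,
			 "parallel workers: index vacuum: %d planned, %d reserved, %d launched in total",
			 nplanned, nreserved, nlaunched);
}
```

Since no count is the grammatical subject of a verb, the message reads the same for 1 worker as for N workers.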

**Comments on the 0003 patch**

>
> +typedef struct CostParamsData
> +{
> +        double         cost_delay;
> +        int                    cost_limit;
> +        int                    cost_page_dirty;
> +        int                    cost_page_hit;
> +        int                    cost_page_miss;
> +} CostParamsData;
>
> The name CostParamsData sounds too generic and I guess it could
> conflict with optimizer-related struct names in the future. How about
> renaming it to VacuumDelayParams?

I agree with the idea of renaming this structure. But maybe we should rename
it to "VacuumCostParams"? This name conveys the contents of the structure
better, because the flag that enables these parameters is called
"VacuumCostActive".

> +        SpinLockAcquire(&pv_shared_cost_params->mutex);
> +
> +        shared_params_data = pv_shared_cost_params->params_data;
> +
> +        VacuumCostDelay = shared_params_data.cost_delay;
> +        VacuumCostLimit = shared_params_data.cost_limit;
> +        VacuumCostPageDirty = shared_params_data.cost_page_dirty;
> +        VacuumCostPageHit = shared_params_data.cost_page_hit;
> +        VacuumCostPageMiss = shared_params_data.cost_page_miss;
> +
> +        SpinLockRelease(&pv_shared_cost_params->mutex);
>
> If we copy the shared values in pv_shared_cost_params, we should
> release the spinlock earlier, i.e., before updating VacuumCostXXX
> variables. But I don't think we would even need to set these values in
> the local variables in this case as updating 4 local variables is
> fairly cheap.
>

Do you mean that we can release the spinlock earlier because we have already
copied the values from the shared state into the local variable
"shared_params_data"? I added this variable as an alias for the long
expression "pv_shared_cost_params->params_data", and I expect the compiler
to optimize it away.

But this no longer seems like a good solution to me. I'll get rid of the
local variable and copy the values directly from the shared state
(under the spinlock).
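The direct-copy approach can be sketched in plain C, with a pthread mutex standing in for the PostgreSQL spinlock (the struct and variable names are illustrative, mirroring the patch):

```c
#include <pthread.h>

/* Stand-in for the shared cost parameters protected by a spinlock. */
typedef struct SharedCostParams
{
	pthread_mutex_t mutex;
	double		cost_delay;
	int			cost_limit;
} SharedCostParams;

static double VacuumCostDelay;
static int	VacuumCostLimit;

/*
 * Copy the shared values straight into the process-local variables while
 * holding the lock: with only a few cheap assignments there is no need
 * for an intermediate local copy, and the critical section stays short.
 */
static void
update_local_cost_params(SharedCostParams *shared)
{
	pthread_mutex_lock(&shared->mutex);
	VacuumCostDelay = shared->cost_delay;
	VacuumCostLimit = shared->cost_limit;
	pthread_mutex_unlock(&shared->mutex);
}
```

This keeps the update atomic with respect to concurrent writers without the extra struct copy.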

> ---
> +        FillCostParamsData(&local_params_data);
> +        SpinLockAcquire(&pv_shared_cost_params->mutex);
> +
> +        if (CostParamsDataEqual(pv_shared_cost_params->params_data,
> +                                                        local_params_data))
> +        {
>
> IIUC it stores cost-based vacuum delay parameters into the
> local_params_data only for using CostParamsDataEqual() macro. I think
> it's better to directly compare values in pv_shared_cost_params and
> the cost-based vacuum delay parameters.

I agree.

> > > How about renaming it to use_shared_delay_params? I think it conveys
> > > better what the field is used for.
> >
> > I think that we should leave this name, because in the future some other
> > behavior differences may occur between manual VACUUM and autovacuum.
> > If so, we will already have an "am_autovacuum" field which we can use in
> > the code.
> > The existing logic with the "am_autovacuum" name is also LGTM - we should
> > use shared delay params only because we are running parallel autovacuum.
>
> It may occur but we can change the field name when it really comes.
>
> I'm slightly concerned that we've been using am_xxx variables in a
> different way. For instance, am_walsender is a global variable that is
> set to true only in wal sender processes. Also we have a bunch of
> AmXXProcess() macros that checks the global variable MyBackendType, to
> check the kinds of the current process. That is, the subject of 'am'
> is typically the process, I guess. On the other hand,
> am_parallel_autovacuum is stored in DSM space and indicates whether a
> parallel vacuum is invoked by manual VACUUM or autovacuum.

Yeah, I agree that "am_xxx" is not the best choice.
What about a simple "bool is_autovacuum"?

**Comments on the 0004 patch**

> If we write the log "%d parallel autovacuum workers have been
> released" in AutoVacuumReleaseParallelWorkres(), can we simplify both
> tests (4 and 5) further?
>

It won't help the 4th test, because ReleaseParallelWorkers is called on
both ERROR and shmem_exit, but we want to be sure that the workers are
released in the try/catch block (i.e. before the shmem_exit).
I thought that we could pass some additional info to
"ReleaseAllParallelWorkers", such as "bool error_occurred", but I decided
not to do so.

Also, I don't know whether the 5th test needs this log at all, because in
the end we check the number of free parallel workers. If a killed
a/v leader doesn't release its parallel workers, we'll notice it.

> +        if (nworkers > 0)
> +
> INJECTION_POINT("autovacuum-leader-before-indexes-processing", NULL);
>
> I think it's better to use #ifdef USE_INJECTION_POINTS here.
>

Agree. I'll also fix it in vacuumlazy.c.

> +#ifdef USE_INJECTION_POINTS
> +/*
> + * Log values of the related to cost-based delay parameters. It is used for
>
> s/values of the related to/values related to/
>

OK

> + * testing purpose.
> + */
> +static void
> +parallel_vacuum_report_cost_based_params(void)
> +{
> +       StringInfoData buf;
> +
> +       /* Simulate config reload during normal processing */
> +       pg_atomic_add_fetch_u32(VacuumActiveNWorkers, 1);
> +       vacuum_delay_point(false);
> +       pg_atomic_sub_fetch_u32(VacuumActiveNWorkers, 1);
>
> Calling vacuum_delay_point() here feels a bit arbitrary to me. Since
> parallel vacuum workers are calling
> parallel_vacuum_report_cost_based_params() after
> parallel_vacuum_process_safe_indexes(), I think we don't necessarily
> call vacuum_delay_point() here.
>

Sure! It is left from the previous implementation of the test. I'll remove
this call.

> +       appendStringInfo(&buf, "Vacuum cost-based delay parameters of
> parallel worker:\n");
> +       appendStringInfo(&buf, "vacuum_cost_limit = %d\n",vacuum_cost_limit);
> +       appendStringInfo(&buf, "vacuum_cost_delay = %g\n", vacuum_cost_delay);
> +       appendStringInfo(&buf, "vacuum_cost_page_miss = %d\n",
> VacuumCostPageMiss);
> +       appendStringInfo(&buf, "vacuum_cost_page_dirty = %d\n",
> VacuumCostPageDirty);
> +       appendStringInfo(&buf, "vacuum_cost_page_hit = %d\n",
> VacuumCostPageHit);
>
> I'd write these messages directly in elog() instead of using
> StringInfoData, which is simpler and can save palloc()/pfree().
>

OK

> +       ereport(DEBUG2, errmsg("%s", buf.data));
>
> Let's use elog() instead of ereport().
>

I suppose this is suggested because we don't want to translate messages at
DEBUG level. Did I understand you correctly?

> +# Create role with pg_signal_autovacuum_worker for terminating
> autovacuum worker.
> +$node->safe_psql('postgres', qq{
> +        CREATE ROLE regress_worker_role;
> +        GRANT pg_signal_autovacuum_worker TO regress_worker_role;
> +        SET ROLE regress_worker_role;
> +});
> +
> +$node->safe_psql('postgres', qq{
> +        SELECT pg_terminate_backend('$av_pid');
> +});
>
> These two safe_psql calls use separate connections, meaning that
> pg_terminate_backend() is executed by the superuser rather than
> regress_worker_role. I think we don't need to create the
> regrss_worker_role and we can use the superuser in this test case.
>

Hm, this looks like another piece of code left over from my previous
attempts to implement this test. I'll remove it.

> We would add more autovacuum related tests to the test_autovacuum
> directory in the future. Given that the 001_basic.pl focuses on
> parallel vacuum tests, how about renaming it to 001_parallel_vacuum.pl
> or something?
>

Agree, I'll rename it.

> > This time I'll try something experimental - besides the patches I'll also
> > post differences between corresponding patches from v20 and v21.
> > I.e. you can apply v20--v21-diff-for-0001 on the v20-0001 patch and
> > get the v21-0001 patch. There are a lot of changes, so I guess it will
> > help you during review.  Please, let me know whether it is useful for you.
>
> It was helpful to easily see the changes from the previous version. Thank you!
>

I'm glad to hear it :) I will keep this tradition alive.


Thank you very much for the review!
Please see the updated set of patches and the diffs between v21 and v22.

--
Best regards,
Daniil Davydov
From 68db56a95032518bf527376e152540cc11ddbb31 Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Sun, 23 Nov 2025 01:08:14 +0700
Subject: [PATCH v22 4/5] Tests for parallel autovacuum

---
 src/backend/access/heap/vacuumlazy.c          |   9 +
 src/backend/commands/vacuumparallel.c         |  49 +++
 src/backend/postmaster/autovacuum.c           |  28 ++
 src/include/postmaster/autovacuum.h           |   1 +
 src/test/modules/Makefile                     |   1 +
 src/test/modules/meson.build                  |   1 +
 src/test/modules/test_autovacuum/.gitignore   |   2 +
 src/test/modules/test_autovacuum/Makefile     |  28 ++
 src/test/modules/test_autovacuum/meson.build  |  36 ++
 .../t/001_parallel_autovacuum.pl              | 319 ++++++++++++++++++
 .../test_autovacuum/test_autovacuum--1.0.sql  |  12 +
 .../modules/test_autovacuum/test_autovacuum.c |  41 +++
 .../test_autovacuum/test_autovacuum.control   |   3 +
 13 files changed, 530 insertions(+)
 create mode 100644 src/test/modules/test_autovacuum/.gitignore
 create mode 100644 src/test/modules/test_autovacuum/Makefile
 create mode 100644 src/test/modules/test_autovacuum/meson.build
 create mode 100644 src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
 create mode 100644 src/test/modules/test_autovacuum/test_autovacuum--1.0.sql
 create mode 100644 src/test/modules/test_autovacuum/test_autovacuum.c
 create mode 100644 src/test/modules/test_autovacuum/test_autovacuum.control

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 91be2502c09..6407c10524b 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -151,6 +151,7 @@
 #include "storage/freespace.h"
 #include "storage/lmgr.h"
 #include "storage/read_stream.h"
+#include "utils/injection_point.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_rusage.h"
 #include "utils/timestamp.h"
@@ -869,6 +870,14 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
 	lazy_check_wraparound_failsafe(vacrel);
 	dead_items_alloc(vacrel, params.nworkers);
 
+#ifdef USE_INJECTION_POINTS
+	/*
+	 * Trigger the injection point if a parallel autovacuum is about to start.
+	 */
+	if (AmAutoVacuumWorkerProcess() && ParallelVacuumIsActive(vacrel))
+		INJECTION_POINT("autovacuum-start-parallel-vacuum", NULL);
+#endif
+
 	/*
 	 * Call lazy_scan_heap to perform all required heap pruning, index
 	 * vacuuming, and heap vacuuming (plus related processing)
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 27a6120b0e3..78ccfede031 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -39,6 +39,7 @@
 #include "postmaster/autovacuum.h"
 #include "storage/bufmgr.h"
 #include "tcop/tcopprot.h"
+#include "utils/injection_point.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
 
@@ -306,6 +307,10 @@ static bool parallel_vacuum_index_is_parallel_safe(Relation indrel, int num_inde
 												   bool vacuum);
 static void parallel_vacuum_error_callback(void *arg);
 
+#ifdef USE_INJECTION_POINTS
+static inline void parallel_vacuum_report_cost_based_params(void);
+#endif
+
 /*
  * Try to enter parallel mode and create a parallel context.  Then initialize
  * shared memory state.
@@ -918,6 +923,19 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 							pvs->pcxt->nworkers_launched, nworkers)));
 	}
 
+#ifdef USE_INJECTION_POINTS
+	/*
+	 * To be able to check that all reserved parallel workers are released
+	 * even on failure, allow injection points to trigger an error at this
+	 * point.
+	 *
+	 * This injection point is also used to wait until parallel workers
+	 * finish their part of index processing.
+	 */
+	if (nworkers > 0)
+		INJECTION_POINT("autovacuum-leader-before-indexes-processing", NULL);
+#endif
+
 	/* Vacuum the indexes that can be processed by only leader process */
 	parallel_vacuum_process_unsafe_indexes(pvs);
 
@@ -1295,6 +1313,16 @@ parallel_vacuum_main(dsm_segment *seg, shm_toc *toc)
 	/* Process indexes to perform vacuum/cleanup */
 	parallel_vacuum_process_safe_indexes(&pvs);
 
+#ifdef USE_INJECTION_POINTS
+	/*
+	 * If we are a parallel autovacuum worker, we consume updated delay
+	 * parameters during index processing (via vacuum_delay_point() calls).
+	 * This logging allows tests to verify that.
+	 */
+	if (shared->is_autovacuum)
+		parallel_vacuum_report_cost_based_params();
+#endif
+
 	/* Report buffer/WAL usage during parallel execution */
 	buffer_usage = shm_toc_lookup(toc, PARALLEL_VACUUM_KEY_BUFFER_USAGE, false);
 	wal_usage = shm_toc_lookup(toc, PARALLEL_VACUUM_KEY_WAL_USAGE, false);
@@ -1347,3 +1375,24 @@ parallel_vacuum_error_callback(void *arg)
 			return;
 	}
 }
+
+#ifdef USE_INJECTION_POINTS
+/*
+ * Log values related to cost-based vacuum delay parameters. It is used for
+ * testing purposes.
+ */
+static inline void
+parallel_vacuum_report_cost_based_params(void)
+{
+	const char *msg_format =
+		"Parallel autovacuum worker cost params: cost_limit=%d, cost_delay=%g, cost_page_miss=%d, cost_page_dirty=%d, cost_page_hit=%d";
+
+	elog(DEBUG2,
+		 msg_format,
+		 vacuum_cost_limit,
+		 vacuum_cost_delay,
+		 VacuumCostPageMiss,
+		 VacuumCostPageDirty,
+		 VacuumCostPageHit);
+}
+#endif
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index 0d78d02bd09..7b24a5d6e67 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -2495,12 +2495,20 @@ do_autovacuum(void)
 		}
 		PG_CATCH();
 		{
+			int	nreserved_workers = av_nworkers_reserved;
+
 			/*
 			 * Parallel autovacuum can reserve parallel workers. Make sure
 			 * that all reserved workers are released.
 			 */
 			AutoVacuumReleaseAllParallelWorkers();
 
+			if (nreserved_workers > 0)
+				ereport(DEBUG2,
+						(errmsg("%d parallel autovacuum workers have been released after an error",
+								nreserved_workers),
+						 errhidecontext(true)));
+
 			/*
 			 * Abort the transaction, start a new one, and proceed with the
 			 * next table in our list.
@@ -3465,6 +3473,21 @@ AutoVacuumReleaseAllParallelWorkers(void)
 	Assert(av_nworkers_reserved == 0);
 }
 
+/*
+ * Get number of free autovacuum parallel workers.
+ */
+uint32
+AutoVacuumGetFreeParallelWorkers(void)
+{
+	uint32		nfree_workers;
+
+	LWLockAcquire(AutovacuumLock, LW_SHARED);
+	nfree_workers = AutoVacuumShmem->av_freeParallelWorkers;
+	LWLockRelease(AutovacuumLock);
+
+	return nfree_workers;
+}
+
 /*
  * autovac_init
  *		This is called at postmaster initialization.
@@ -3633,5 +3656,10 @@ adjust_free_parallel_workers(int prev_max_parallel_workers)
 	AutoVacuumShmem->av_freeParallelWorkers = Max(nfree_workers, 0);
 	AutoVacuumShmem->av_maxParallelWorkers = autovacuum_max_parallel_workers;
 
+	ereport(DEBUG2,
+			(errmsg("number of free parallel autovacuum workers is set to %u due to config reload",
+					AutoVacuumShmem->av_freeParallelWorkers),
+			 errhidecontext(true)));
+
 	LWLockRelease(AutovacuumLock);
 }
diff --git a/src/include/postmaster/autovacuum.h b/src/include/postmaster/autovacuum.h
index f3783afb51b..52be260e15f 100644
--- a/src/include/postmaster/autovacuum.h
+++ b/src/include/postmaster/autovacuum.h
@@ -66,6 +66,7 @@ extern bool AutoVacuumRequestWork(AutoVacuumWorkItemType type,
 extern void	AutoVacuumReserveParallelWorkers(int *nworkers);
 extern void AutoVacuumReleaseParallelWorkers(int nworkers);
 extern void AutoVacuumReleaseAllParallelWorkers(void);
+extern uint32 AutoVacuumGetFreeParallelWorkers(void);
 
 /* shared memory stuff */
 extern Size AutoVacuumShmemSize(void);
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index 44c7163c1cd..937dbb64fd2 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -16,6 +16,7 @@ SUBDIRS = \
 		  plsample \
 		  spgist_name_ops \
 		  test_aio \
+		  test_autovacuum \
 		  test_binaryheap \
 		  test_bitmapset \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index 2634a519935..5ac8d87702d 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -16,6 +16,7 @@ subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
 subdir('test_aio')
+subdir('test_autovacuum')
 subdir('test_binaryheap')
 subdir('test_bitmapset')
 subdir('test_bloomfilter')
diff --git a/src/test/modules/test_autovacuum/.gitignore b/src/test/modules/test_autovacuum/.gitignore
new file mode 100644
index 00000000000..716e17f5a2a
--- /dev/null
+++ b/src/test/modules/test_autovacuum/.gitignore
@@ -0,0 +1,2 @@
+# Generated subdirectories
+/tmp_check/
diff --git a/src/test/modules/test_autovacuum/Makefile b/src/test/modules/test_autovacuum/Makefile
new file mode 100644
index 00000000000..32254c53a5d
--- /dev/null
+++ b/src/test/modules/test_autovacuum/Makefile
@@ -0,0 +1,28 @@
+# src/test/modules/test_autovacuum/Makefile
+
+PGFILEDESC = "test_autovacuum - test code for parallel autovacuum"
+
+MODULE_big = test_autovacuum
+OBJS = \
+	$(WIN32RES) \
+	test_autovacuum.o
+
+EXTENSION = test_autovacuum
+DATA = test_autovacuum--1.0.sql
+
+TAP_TESTS = 1
+
+EXTRA_INSTALL = src/test/modules/injection_points
+
+export enable_injection_points
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/test_autovacuum
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/src/test/modules/test_autovacuum/meson.build b/src/test/modules/test_autovacuum/meson.build
new file mode 100644
index 00000000000..3441e5e49cf
--- /dev/null
+++ b/src/test/modules/test_autovacuum/meson.build
@@ -0,0 +1,36 @@
+# Copyright (c) 2024-2025, PostgreSQL Global Development Group
+
+test_autovacuum_sources = files(
+  'test_autovacuum.c',
+)
+
+if host_system == 'windows'
+  test_autovacuum_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'test_autovacuum',
+    '--FILEDESC', 'test_autovacuum - test code for parallel autovacuum',])
+endif
+
+test_autovacuum = shared_module('test_autovacuum',
+  test_autovacuum_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += test_autovacuum
+
+test_install_data += files(
+  'test_autovacuum.control',
+  'test_autovacuum--1.0.sql',
+)
+
+tests += {
+  'name': 'test_autovacuum',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'tap': {
+    'env': {
+       'enable_injection_points': get_option('injection_points') ? 'yes' : 'no',
+    },
+    'tests': [
      't/001_parallel_autovacuum.pl',
+    ],
+  },
+}
diff --git a/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl b/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
new file mode 100644
index 00000000000..9b80d371f5c
--- /dev/null
+++ b/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
@@ -0,0 +1,319 @@
+# Test parallel autovacuum behavior
+
+use warnings FATAL => 'all';
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+if ($ENV{enable_injection_points} ne 'yes')
+{
+	plan skip_all => 'Injection points not supported by this build';
+}
+
+# Before each test we disable autovacuum for the 'test_autovac' table and
+# generate some dead tuples in it.
+
+sub prepare_for_next_test
+{
+	my ($node, $test_number) = @_;
+
+	$node->safe_psql('postgres', qq{
+		ALTER TABLE test_autovac SET (autovacuum_enabled = false);
+	});
+
+	$node->safe_psql('postgres', qq{
+		UPDATE test_autovac SET col_1 = $test_number;
+	});
+}
+
+
+my ($psql_out, $log_start);
+
+my $node = PostgreSQL::Test::Cluster->new('node1');
+$node->init;
+
+# Configure the server so that it can launch parallel autovacuum workers,
+# logs all the information we are interested in, and runs autovacuum frequently.
+$node->append_conf('postgresql.conf', qq{
+	max_worker_processes = 20
+	max_parallel_workers = 20
+	max_parallel_maintenance_workers = 20
+	autovacuum_max_parallel_workers = 20
+	log_min_messages = debug2
+	log_autovacuum_min_duration = 0
+	autovacuum_naptime = '1s'
+	min_parallel_index_scan_size = 0
+	shared_preload_libraries=test_autovacuum
+});
+$node->start;
+
+# Check if the extension injection_points is available, as it may be
+# possible that this script is run with installcheck, where the module
+# would not be installed by default.
+if (!$node->check_extension('injection_points'))
+{
+	plan skip_all => 'Extension injection_points not installed';
+}
+
+# Create all functions needed for testing
+$node->safe_psql('postgres', qq{
+	CREATE EXTENSION test_autovacuum;
+	CREATE EXTENSION injection_points;
+});
+
+my $indexes_num = 4;
+my $initial_rows_num = 10_000;
+my $autovacuum_parallel_workers = 2;
+
+# Create table and fill it with some data
+$node->safe_psql('postgres', qq{
+	CREATE TABLE test_autovac (
+		id SERIAL PRIMARY KEY,
+		col_1 INTEGER,  col_2 INTEGER,  col_3 INTEGER,  col_4 INTEGER
+	) WITH (autovacuum_parallel_workers = $autovacuum_parallel_workers);
+
+	INSERT INTO test_autovac
+	SELECT
+		g AS col1,
+		g + 1 AS col2,
+		g + 2 AS col3,
+		g + 3 AS col4
+	FROM generate_series(1, $initial_rows_num) AS g;
+});
+
+# Create specified number of b-tree indexes on the table
+$node->safe_psql('postgres', qq{
+	DO \$\$
+	DECLARE
+		i INTEGER;
+	BEGIN
+		FOR i IN 1..$indexes_num LOOP
+			EXECUTE format('CREATE INDEX idx_col_\%s ON test_autovac (col_\%s);', i, i);
+		END LOOP;
+	END \$\$;
+});
+
+# Test 1:
+# Our table has enough indexes and appropriate reloptions, so autovacuum must
+# be able to process it in parallel mode. Check that it can.
+# Also check that all requested workers are:
+# 	1) launched
+# 	2) correctly released
+
+prepare_for_next_test($node, 1);
+
+$node->safe_psql('postgres', qq{
+	ALTER TABLE test_autovac SET (autovacuum_enabled = true);
+});
+
+# Wait until the parallel autovacuum on the table completes. At the same time,
+# check that the required number of parallel workers has been launched.
+$log_start = $node->wait_for_log(
+	qr/parallel workers: index vacuum: 2 planned, 2 reserved, 2 launched/,
+	$log_start
+);
+
+$psql_out = $node->safe_psql('postgres', qq{
+	SELECT get_parallel_autovacuum_free_workers();
+});
+is($psql_out, 20, 'All parallel workers have been released by the leader');
+
+# Test 2:
+# Check whether parallel autovacuum leader can propagate cost-based parameters
+# to parallel workers.
+
+prepare_for_next_test($node, 2);
+
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_attach('autovacuum-start-parallel-vacuum', 'wait');
+	SELECT injection_points_attach('autovacuum-leader-before-indexes-processing', 'wait');
+
+	ALTER TABLE test_autovac SET (autovacuum_parallel_workers = 1, autovacuum_enabled = true);
+});
+
+# Wait until parallel autovacuum is initialized
+$node->wait_for_event(
+	'autovacuum worker',
+	'autovacuum-start-parallel-vacuum'
+);
+
+# Reload config - the leader must update its own parameters during index
+# processing
+$node->safe_psql('postgres', qq{
+	ALTER SYSTEM SET vacuum_cost_limit = 500;
+	ALTER SYSTEM SET vacuum_cost_page_miss = 10;
+	ALTER SYSTEM SET vacuum_cost_page_dirty = 10;
+	ALTER SYSTEM SET vacuum_cost_page_hit = 10;
+	SELECT pg_reload_conf();
+});
+
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_wakeup('autovacuum-start-parallel-vacuum');
+});
+
+# Now wait until the parallel autovacuum leader finishes processing the table
+# (i.e. is guaranteed to call vacuum_delay_point) and launches a parallel worker.
+$node->wait_for_event(
+	'autovacuum worker',
+	'autovacuum-leader-before-indexes-processing'
+);
+
+# Check whether the parallel worker successfully updated all parameters during
+# index processing
+$log_start = $node->wait_for_log(
+	qr/Parallel autovacuum worker cost params: cost_limit=500, cost_delay=2, / .
+	qr/cost_page_miss=10, cost_page_dirty=10, cost_page_hit=10/,
+	$log_start
+);
+
+# Cleanup
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_wakeup('autovacuum-leader-before-indexes-processing');
+
+	SELECT injection_points_detach('autovacuum-start-parallel-vacuum');
+	SELECT injection_points_detach('autovacuum-leader-before-indexes-processing');
+
+	ALTER TABLE test_autovac SET (autovacuum_parallel_workers = $autovacuum_parallel_workers);
+});
+
+# Test 3:
+# Test adjustment of the number of free parallel workers when the
+# autovacuum_max_parallel_workers parameter changes
+
+prepare_for_next_test($node, 3);
+
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_attach('autovacuum-leader-before-indexes-processing', 'wait');
+	ALTER TABLE test_autovac SET (autovacuum_enabled = true);
+});
+
+$node->wait_for_event(
+	'autovacuum worker',
+	'autovacuum-leader-before-indexes-processing'
+);
+
+$node->safe_psql('postgres', qq{
+	ALTER SYSTEM SET autovacuum_max_parallel_workers = 1;
+	SELECT pg_reload_conf();
+});
+
+# Since 2 parallel workers are already launched and will be released later,
+# we expect that:
+# 1) number of free workers will be '0' after config reload
+# 2) number of free workers will be '1' after releasing workers
+
+# Check statement (1)
+$log_start = $node->wait_for_log(
+	qr/number of free parallel autovacuum workers is set to 0 due to config reload/,
+	$log_start
+);
+
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_wakeup('autovacuum-leader-before-indexes-processing');
+});
+
+# Wait until the end of parallel processing
+$log_start = $node->wait_for_log(
+	qr/parallel workers: index vacuum: 2 planned, 2 reserved, 2 launched/,
+	$log_start
+);
+
+# Check statement (2)
+$psql_out = $node->safe_psql('postgres', qq{
+	SELECT get_parallel_autovacuum_free_workers();
+});
+is($psql_out, 1, 'Number of free parallel workers is consistent');
+
+# Cleanup
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_detach('autovacuum-leader-before-indexes-processing');
+	ALTER SYSTEM SET autovacuum_max_parallel_workers = 10;
+	SELECT pg_reload_conf();
+});
+
+# Test 4:
+# We want parallel autovacuum workers to be released even if the leader hits an
+# error. First, simulate the situation where the leader exits due to an ERROR.
+
+prepare_for_next_test($node, 4);
+
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_attach('autovacuum-leader-before-indexes-processing', 'error');
+	ALTER TABLE test_autovac SET (autovacuum_enabled = true);
+});
+
+$log_start = $node->wait_for_log(
+	qr/error triggered for injection point / .
+	qr/autovacuum-leader-before-indexes-processing/,
+	$log_start
+);
+
+$log_start = $node->wait_for_log(
+	qr/2 parallel autovacuum workers have been released after an error/,
+	$log_start
+);
+
+# Cleanup
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_detach('autovacuum-leader-before-indexes-processing');
+});
+
+# Test 5:
+# Same as the test above, but simulate the situation where the leader exits due to FATAL.
+
+prepare_for_next_test($node, 5);
+
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_attach('autovacuum-start-parallel-vacuum', 'wait');
+	SELECT injection_points_attach('autovacuum-leader-before-indexes-processing', 'wait');
+	ALTER TABLE test_autovac SET (autovacuum_enabled = true);
+});
+
+# Wait until parallel autovacuum is initialized and wake up the leader
+$node->wait_for_event(
+	'autovacuum worker',
+	'autovacuum-start-parallel-vacuum'
+);
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_wakeup('autovacuum-start-parallel-vacuum');
+});
+
+$node->wait_for_event(
+	'autovacuum worker',
+	'autovacuum-leader-before-indexes-processing'
+);
+
+my $av_pid = $node->safe_psql('postgres', qq{
+	SELECT pid FROM pg_stat_activity
+	WHERE backend_type = 'autovacuum worker'
+	  AND wait_event = 'autovacuum-leader-before-indexes-processing'
+	LIMIT 1;
+});
+
+$node->safe_psql('postgres', qq{
+	SELECT pg_terminate_backend('$av_pid');
+});
+
+$log_start = $node->wait_for_log(
+	qr/terminating autovacuum process due to administrator command/,
+	$log_start
+);
+
+# Now it is safe to check the number of free parallel workers, because even if
+# autovacuum tries to vacuum the table in parallel mode again, the leader
+# cannot go any further than the "autovacuum-start-parallel-vacuum" point.
+# I.e. no one can interfere and change the number of free parallel workers.
+
+$psql_out = $node->safe_psql('postgres', qq{
+	SELECT get_parallel_autovacuum_free_workers();
+});
+is($psql_out, 10, 'All parallel workers have been released by the leader after FATAL');
+
+# Cleanup
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_detach('autovacuum-start-parallel-vacuum');
+	SELECT injection_points_detach('autovacuum-leader-before-indexes-processing');
+});
+
+$node->stop;
+done_testing();
diff --git a/src/test/modules/test_autovacuum/test_autovacuum--1.0.sql b/src/test/modules/test_autovacuum/test_autovacuum--1.0.sql
new file mode 100644
index 00000000000..e5646e0def5
--- /dev/null
+++ b/src/test/modules/test_autovacuum/test_autovacuum--1.0.sql
@@ -0,0 +1,12 @@
+/* src/test/modules/test_autovacuum/test_autovacuum--1.0.sql */
+
+-- complain if script is sourced in psql, rather than via CREATE EXTENSION
+\echo Use "CREATE EXTENSION test_autovacuum" to load this file. \quit
+
+/*
+ * Functions for inspecting shared autovacuum state
+ */
+
+CREATE FUNCTION get_parallel_autovacuum_free_workers()
+RETURNS INTEGER STRICT
+AS 'MODULE_PATHNAME' LANGUAGE C;
diff --git a/src/test/modules/test_autovacuum/test_autovacuum.c b/src/test/modules/test_autovacuum/test_autovacuum.c
new file mode 100644
index 00000000000..959629c7685
--- /dev/null
+++ b/src/test/modules/test_autovacuum/test_autovacuum.c
@@ -0,0 +1,41 @@
+/*-------------------------------------------------------------------------
+ *
+ * test_autovacuum.c
+ *		Helpers to write tests for parallel autovacuum
+ *
+ * Copyright (c) 2020-2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/test/modules/test_autovacuum/test_autovacuum.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "commands/vacuum.h"
+#include "fmgr.h"
+#include "miscadmin.h"
+#include "postmaster/autovacuum.h"
+#include "storage/shmem.h"
+#include "storage/ipc.h"
+#include "storage/lwlock.h"
+#include "utils/builtins.h"
+#include "utils/injection_point.h"
+
+PG_MODULE_MAGIC;
+
+PG_FUNCTION_INFO_V1(get_parallel_autovacuum_free_workers);
+Datum
+get_parallel_autovacuum_free_workers(PG_FUNCTION_ARGS)
+{
+	uint32		nfree_workers;
+
+#ifndef USE_INJECTION_POINTS
+	ereport(ERROR, errmsg("injection points not supported"));
+#endif
+
+	nfree_workers = AutoVacuumGetFreeParallelWorkers();
+
+	PG_RETURN_UINT32(nfree_workers);
+}
diff --git a/src/test/modules/test_autovacuum/test_autovacuum.control b/src/test/modules/test_autovacuum/test_autovacuum.control
new file mode 100644
index 00000000000..1b7fad258f0
--- /dev/null
+++ b/src/test/modules/test_autovacuum/test_autovacuum.control
@@ -0,0 +1,3 @@
+comment = 'Test code for parallel autovacuum'
+default_version = '1.0'
+module_pathname = '$libdir/test_autovacuum'
-- 
2.43.0

From f535c603f11233d5ae6eb3ca441027d5196e20ee Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Thu, 15 Jan 2026 23:15:48 +0700
Subject: [PATCH v22 3/5] Cost based parameters propagation for parallel
 autovacuum

---
 src/backend/commands/vacuum.c         |  23 +++-
 src/backend/commands/vacuumparallel.c | 160 ++++++++++++++++++++++++++
 src/backend/postmaster/autovacuum.c   |   2 +-
 src/include/commands/vacuum.h         |   2 +
 src/tools/pgindent/typedefs.list      |   2 +
 5 files changed, 186 insertions(+), 3 deletions(-)

diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c
index 03932f45c8a..70882544d05 100644
--- a/src/backend/commands/vacuum.c
+++ b/src/backend/commands/vacuum.c
@@ -2430,8 +2430,21 @@ vacuum_delay_point(bool is_analyze)
 	/* Always check for interrupts */
 	CHECK_FOR_INTERRUPTS();
 
-	if (InterruptPending ||
-		(!VacuumCostActive && !ConfigReloadPending))
+	if (InterruptPending)
+		return;
+
+	if (IsParallelWorker())
+	{
+		/*
+		 * Possibly update cost-based delay parameters.
+		 *
+		 * Do it before checking VacuumCostActive, because its value might be
+		 * changed after calling this function.
+		 */
+		parallel_vacuum_update_shared_delay_params();
+	}
+
+	if (!VacuumCostActive && !ConfigReloadPending)
 		return;
 
 	/*
@@ -2445,6 +2458,12 @@ vacuum_delay_point(bool is_analyze)
 		ConfigReloadPending = false;
 		ProcessConfigFile(PGC_SIGHUP);
 		VacuumUpdateCosts();
+
+		/*
+		 * If we are the parallel autovacuum leader and any of the cost-based
+		 * parameters have changed, let the other parallel workers know.
+		 */
+		parallel_vacuum_propagate_shared_delay_params();
 	}
 
 	/*
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 86d9f2b74c9..27a6120b0e3 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -53,6 +53,59 @@
 #define PARALLEL_VACUUM_KEY_WAL_USAGE		4
 #define PARALLEL_VACUUM_KEY_INDEX_STATS		5
 
+/*
+ * Helper for the PVSharedCostParams structure (see below), to avoid
+ * repetition.
+ */
+typedef struct VacuumCostParams
+{
+	double		cost_delay;
+	int			cost_limit;
+	int			cost_page_dirty;
+	int			cost_page_hit;
+	int			cost_page_miss;
+} VacuumCostParams;
+
+#define	FillVacCostParams(cost_params) \
+	(cost_params)->cost_delay = vacuum_cost_delay; \
+	(cost_params)->cost_limit = vacuum_cost_limit; \
+	(cost_params)->cost_page_dirty = VacuumCostPageDirty; \
+	(cost_params)->cost_page_hit = VacuumCostPageHit; \
+	(cost_params)->cost_page_miss = VacuumCostPageMiss
+
+#define VacCostParamsEquals(params) \
+	(vacuum_cost_delay == (params).cost_delay && \
+	 vacuum_cost_limit == (params).cost_limit && \
+	 VacuumCostPageDirty == (params).cost_page_dirty && \
+	 VacuumCostPageHit == (params).cost_page_hit && \
+	 VacuumCostPageMiss == (params).cost_page_miss)
+
+/*
+ * Struct for cost-based vacuum delay related parameters to share among an
+ * autovacuum worker and its parallel vacuum workers.
+ */
+typedef struct PVSharedCostParams
+{
+	/*
+	 * Each time the leader updates its parameters, it must increment the
+	 * generation. Every parallel worker keeps the generation
+	 * (shared_params_generation_local) at which it last received parameters
+	 * from the leader.
+	 *
+	 * A worker only needs to compare its local generation with the field
+	 * below to determine whether it must fetch new parameter values.
+	 */
+	pg_atomic_uint32 generation;
+
+	slock_t		mutex;			/* protects all fields below */
+
+	/*
+	 * Copies of the corresponding cost-based vacuum delay parameters from
+	 * autovacuum leader process.
+	 */
+	VacuumCostParams params_data;
+} PVSharedCostParams;
+
 /*
  * Shared information among parallel workers.  So this is allocated in the DSM
  * segment.
@@ -122,6 +175,18 @@ typedef struct PVShared
 
 	/* Statistics of shared dead items */
 	VacDeadItemsInfo dead_items_info;
+
+	/*
+	 * If 'true' then we are running parallel autovacuum. Otherwise, we are
+	 * running parallel maintenance VACUUM.
+	 */
+	bool		is_autovacuum;
+
+	/*
+	 * Struct for syncing cost-based vacuum delay parameters between the
+	 * parallel autovacuum workers and the leader.
+	 */
+	PVSharedCostParams cost_params;
 } PVShared;
 
 /* Status used during parallel index vacuum or cleanup */
@@ -224,6 +289,11 @@ struct ParallelVacuumState
 	PVIndVacStatus status;
 };
 
+static PVSharedCostParams *pv_shared_cost_params = NULL;
+
+/* See comments for the PVSharedCostParams structure for the explanation. */
+static uint32 shared_params_generation_local = 0;
+
 static int	parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 											bool *will_parallel_vacuum);
 static void parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scans,
@@ -395,6 +465,17 @@ parallel_vacuum_init(Relation rel, Relation *indrels, int nindexes,
 	pg_atomic_init_u32(&(shared->active_nworkers), 0);
 	pg_atomic_init_u32(&(shared->idx), 0);
 
+	shared->is_autovacuum = AmAutoVacuumWorkerProcess();
+
+	if (shared->is_autovacuum)
+	{
+		FillVacCostParams(&shared->cost_params.params_data);
+		pg_atomic_init_u32(&shared->cost_params.generation, 0);
+		SpinLockInit(&shared->cost_params.mutex);
+
+		pv_shared_cost_params = &(shared->cost_params);
+	}
+
 	shm_toc_insert(pcxt->toc, PARALLEL_VACUUM_KEY_SHARED, shared);
 	pvs->shared = shared;
 
@@ -539,6 +620,82 @@ parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs, long num_table_tup
 										&wusage->cleanup);
 }
 
+/*
+ * If we are a parallel *autovacuum* worker, check whether the cost-based
+ * vacuum delay parameters have changed in the leader. If so, the
+ * corresponding local parameters are updated to the values the leader is
+ * operating on.
+ *
+ * For a non-autovacuum parallel worker this function has no effect.
+ */
+void
+parallel_vacuum_update_shared_delay_params(void)
+{
+	uint32		params_generation;
+
+	Assert(IsParallelWorker());
+
+	/* Check whether we are running parallel autovacuum */
+	if (pv_shared_cost_params == NULL)
+		return;
+
+	params_generation = pg_atomic_read_u32(&pv_shared_cost_params->generation);
+	Assert(shared_params_generation_local <= params_generation);
+
+	/* Return if the parameters have not changed in the leader */
+	if (params_generation == shared_params_generation_local)
+		return;
+
+	SpinLockAcquire(&pv_shared_cost_params->mutex);
+
+	VacuumCostDelay = pv_shared_cost_params->params_data.cost_delay;
+	VacuumCostLimit = pv_shared_cost_params->params_data.cost_limit;
+	VacuumCostPageDirty = pv_shared_cost_params->params_data.cost_page_dirty;
+	VacuumCostPageHit = pv_shared_cost_params->params_data.cost_page_hit;
+	VacuumCostPageMiss = pv_shared_cost_params->params_data.cost_page_miss;
+
+	SpinLockRelease(&pv_shared_cost_params->mutex);
+
+	VacuumUpdateCosts();
+
+	shared_params_generation_local = params_generation;
+}
+
+/*
+ * To be called from the parallel autovacuum leader in order to propagate
+ * the cost-based vacuum delay parameters to its parallel workers.
+ */
+void
+parallel_vacuum_propagate_shared_delay_params(void)
+{
+	Assert(AmAutoVacuumWorkerProcess());
+
+	/* Check whether we are running parallel autovacuum */
+	if (pv_shared_cost_params == NULL)
+		return;
+
+	SpinLockAcquire(&pv_shared_cost_params->mutex);
+
+	if (VacCostParamsEquals(pv_shared_cost_params->params_data))
+	{
+		/*
+		 * We don't need to update shared cost-based vacuum delay params if
+		 * they haven't changed.
+		 */
+		SpinLockRelease(&pv_shared_cost_params->mutex);
+		return;
+	}
+
+	FillVacCostParams(&pv_shared_cost_params->params_data);
+	SpinLockRelease(&pv_shared_cost_params->mutex);
+
+	/*
+	 * Increase generation of the parameters, i.e. let parallel workers know
+	 * that they should re-read shared cost params.
+	 */
+	pg_atomic_fetch_add_u32(&pv_shared_cost_params->generation, 1);
+}
+
 /*
  * Compute the number of parallel worker processes to request.  Both index
  * vacuum and index cleanup can be executed with parallel workers.
@@ -1105,6 +1262,9 @@ parallel_vacuum_main(dsm_segment *seg, shm_toc *toc)
 	VacuumSharedCostBalance = &(shared->cost_balance);
 	VacuumActiveNWorkers = &(shared->active_nworkers);
 
+	if (shared->is_autovacuum)
+		pv_shared_cost_params = &(shared->cost_params);
+
 	/* Set parallel vacuum state */
 	pvs.indrels = indrels;
 	pvs.nindexes = nindexes;
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index f40abe90ed5..0d78d02bd09 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -1690,7 +1690,7 @@ VacuumUpdateCosts(void)
 	}
 	else
 	{
-		/* Must be explicit VACUUM or ANALYZE */
+		/* Must be explicit VACUUM or ANALYZE or parallel autovacuum worker */
 		vacuum_cost_delay = VacuumCostDelay;
 		vacuum_cost_limit = VacuumCostLimit;
 	}
diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h
index d3dc4e8cc67..b10829a9379 100644
--- a/src/include/commands/vacuum.h
+++ b/src/include/commands/vacuum.h
@@ -423,6 +423,8 @@ extern void parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs,
 												int num_index_scans,
 												bool estimated_count,
 												PVWorkersUsage *wusage);
+extern void parallel_vacuum_update_shared_delay_params(void);
+extern void parallel_vacuum_propagate_shared_delay_params(void);
 extern void parallel_vacuum_main(dsm_segment *seg, shm_toc *toc);
 
 /* in commands/analyze.c */
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index ae1047ddf5d..20fe34f8cc7 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2069,6 +2069,7 @@ PVIndStats
 PVIndVacStatus
 PVOID
 PVShared
+PVSharedCostParams
 PVWorkersUsage
 PVWorkersStats
 PX_Alias
@@ -3249,6 +3250,7 @@ VacAttrStatsP
 VacDeadItemsInfo
 VacErrPhase
 VacOptValue
+VacuumCostParams
 VacuumParams
 VacuumRelation
 VacuumStmt
-- 
2.43.0

From e8ecbc65ef61acdc8d3184ec93ac4f4877358fc1 Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Sun, 23 Nov 2025 01:07:47 +0700
Subject: [PATCH v22 2/5] Logging for parallel autovacuum

---
 src/backend/access/heap/vacuumlazy.c  | 61 ++++++++++++++++++++++++++-
 src/backend/commands/vacuumparallel.c | 29 ++++++++++---
 src/include/commands/vacuum.h         | 28 +++++++++++-
 src/tools/pgindent/typedefs.list      |  3 ++
 4 files changed, 111 insertions(+), 10 deletions(-)

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 4be267ff657..91be2502c09 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -340,6 +340,12 @@ typedef struct LVRelState
 	int			num_index_scans;
 	int			num_dead_items_resets;
 	Size		total_dead_items_bytes;
+
+	/*
+	 * Total number of planned and actually launched parallel workers for
+	 * index scans.
+	 */
+	PVWorkersUsage workers_usage;
 	/* Counters that follow are only for scanned_pages */
 	int64		tuples_deleted; /* # deleted from table */
 	int64		tuples_frozen;	/* # newly frozen */
@@ -778,6 +784,11 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
 	vacrel->vm_new_visible_frozen_pages = 0;
 	vacrel->vm_new_frozen_pages = 0;
 
+	vacrel->workers_usage.vacuum.nlaunched = 0;
+	vacrel->workers_usage.vacuum.nplanned = 0;
+	vacrel->workers_usage.cleanup.nlaunched = 0;
+	vacrel->workers_usage.cleanup.nplanned = 0;
+
 	/*
 	 * Get cutoffs that determine which deleted tuples are considered DEAD,
 	 * not just RECENTLY_DEAD, and which XIDs/MXIDs to freeze.  Then determine
@@ -1120,6 +1131,50 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
 							 orig_rel_pages == 0 ? 100.0 :
 							 100.0 * vacrel->lpdead_item_pages / orig_rel_pages,
 							 vacrel->lpdead_items);
+			if (vacrel->workers_usage.vacuum.nplanned > 0)
+			{
+				/* Stats for vacuum phase of index vacuuming. */
+
+				if (AmAutoVacuumWorkerProcess())
+				{
+					/* Worker usage stats for parallel autovacuum. */
+					appendStringInfo(&buf,
+									 _("parallel workers: index vacuum: %d planned, %d reserved, %d launched in total\n"),
+									 vacrel->workers_usage.vacuum.nplanned,
+									 vacrel->workers_usage.vacuum.nreserved,
+									 vacrel->workers_usage.vacuum.nlaunched);
+				}
+				else
+				{
+					/* Worker usage stats for manual VACUUM (PARALLEL). */
+					appendStringInfo(&buf,
+									 _("parallel workers: index vacuum: %d planned, %d launched in total\n"),
+									 vacrel->workers_usage.vacuum.nplanned,
+									 vacrel->workers_usage.vacuum.nlaunched);
+				}
+			}
+			if (vacrel->workers_usage.cleanup.nplanned > 0)
+			{
+				/* Stats for cleanup phase of index vacuuming. */
+
+				if (AmAutoVacuumWorkerProcess())
+				{
+					/* Worker usage stats for parallel autovacuum. */
+					appendStringInfo(&buf,
+									 _("parallel workers: index cleanup: %d planned, %d reserved, %d launched\n"),
+									 vacrel->workers_usage.cleanup.nplanned,
+									 vacrel->workers_usage.cleanup.nreserved,
+									 vacrel->workers_usage.cleanup.nlaunched);
+				}
+				else
+				{
+					/* Worker usage stats for manual VACUUM (PARALLEL). */
+					appendStringInfo(&buf,
+									 _("parallel workers: index cleanup: %d planned, %d launched\n"),
+									 vacrel->workers_usage.cleanup.nplanned,
+									 vacrel->workers_usage.cleanup.nlaunched);
+				}
+			}
 			for (int i = 0; i < vacrel->nindexes; i++)
 			{
 				IndexBulkDeleteResult *istat = vacrel->indstats[i];
@@ -2664,7 +2719,8 @@ lazy_vacuum_all_indexes(LVRelState *vacrel)
 	{
 		/* Outsource everything to parallel variant */
 		parallel_vacuum_bulkdel_all_indexes(vacrel->pvs, old_live_tuples,
-											vacrel->num_index_scans);
+											vacrel->num_index_scans,
+											&vacrel->workers_usage);
 
 		/*
 		 * Do a postcheck to consider applying wraparound failsafe now.  Note
@@ -3097,7 +3153,8 @@ lazy_cleanup_all_indexes(LVRelState *vacrel)
 		/* Outsource everything to parallel variant */
 		parallel_vacuum_cleanup_all_indexes(vacrel->pvs, reltuples,
 											vacrel->num_index_scans,
-											estimated_count);
+											estimated_count,
+											&vacrel->workers_usage);
 	}
 
 	/* Reset the progress counters */
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index d3e0c32b7ee..86d9f2b74c9 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -227,7 +227,7 @@ struct ParallelVacuumState
 static int	parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 											bool *will_parallel_vacuum);
 static void parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scans,
-												bool vacuum);
+												bool vacuum, PVWorkersStats *wstats);
 static void parallel_vacuum_process_safe_indexes(ParallelVacuumState *pvs);
 static void parallel_vacuum_process_unsafe_indexes(ParallelVacuumState *pvs);
 static void parallel_vacuum_process_one_index(ParallelVacuumState *pvs, Relation indrel,
@@ -502,7 +502,7 @@ parallel_vacuum_reset_dead_items(ParallelVacuumState *pvs)
  */
 void
 parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs, long num_table_tuples,
-									int num_index_scans)
+									int num_index_scans, PVWorkersUsage *wusage)
 {
 	Assert(!IsParallelWorker());
 
@@ -513,7 +513,8 @@ parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs, long num_table_tup
 	pvs->shared->reltuples = num_table_tuples;
 	pvs->shared->estimated_count = true;
 
-	parallel_vacuum_process_all_indexes(pvs, num_index_scans, true);
+	parallel_vacuum_process_all_indexes(pvs, num_index_scans, true,
+										&wusage->vacuum);
 }
 
 /*
@@ -521,7 +522,8 @@ parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs, long num_table_tup
  */
 void
 parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs, long num_table_tuples,
-									int num_index_scans, bool estimated_count)
+									int num_index_scans, bool estimated_count,
+									PVWorkersUsage *wusage)
 {
 	Assert(!IsParallelWorker());
 
@@ -533,7 +535,8 @@ parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs, long num_table_tup
 	pvs->shared->reltuples = num_table_tuples;
 	pvs->shared->estimated_count = estimated_count;
 
-	parallel_vacuum_process_all_indexes(pvs, num_index_scans, false);
+	parallel_vacuum_process_all_indexes(pvs, num_index_scans, false,
+										&wusage->cleanup);
 }
 
 /*
@@ -618,7 +621,7 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
  */
 static void
 parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scans,
-									bool vacuum)
+									bool vacuum, PVWorkersStats *wstats)
 {
 	int			nworkers;
 	PVIndVacStatus new_status;
@@ -655,13 +658,23 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 	 */
 	nworkers = Min(nworkers, pvs->pcxt->nworkers);
 
+	/* Remember this value, if we were asked to */
+	if (wstats != NULL && nworkers > 0)
+		wstats->nplanned += nworkers;
+
 	/*
 	 * Reserve workers in autovacuum global state. Note that we may be given
 	 * fewer workers than we requested.
 	 */
 	if (AmAutoVacuumWorkerProcess() && nworkers > 0)
+	{
 		AutoVacuumReserveParallelWorkers(&nworkers);
 
+		/* Remember this value, if we were asked to */
+		if (wstats != NULL)
+			wstats->nreserved += nworkers;
+	}
+
 	/*
 	 * Set index vacuum status and mark whether parallel vacuum worker can
 	 * process it.
@@ -728,6 +741,10 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 			/* Enable shared cost balance for leader backend */
 			VacuumSharedCostBalance = &(pvs->shared->cost_balance);
 			VacuumActiveNWorkers = &(pvs->shared->active_nworkers);
+
+			/* Remember this value, if we were asked to */
+			if (wstats != NULL)
+				wstats->nlaunched += pvs->pcxt->nworkers_launched;
 		}
 
 		if (vacuum)
diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h
index e885a4b9c77..d3dc4e8cc67 100644
--- a/src/include/commands/vacuum.h
+++ b/src/include/commands/vacuum.h
@@ -300,6 +300,28 @@ typedef struct VacDeadItemsInfo
 	int64		num_items;		/* current # of entries */
 } VacDeadItemsInfo;
 
+/*
+ * Helper for the PVWorkersUsage structure (see below), to avoid repetition.
+ */
+typedef struct PVWorkersStats
+{
+	int			nplanned;		/* # of parallel workers we planned to
+								 * launch */
+	int			nreserved;		/* for autovacuum only - # of parallel workers
+								 * we have managed to reserve */
+	int			nlaunched;		/* # of launched parallel workers */
+} PVWorkersStats;
+
+/*
+ * PVWorkersUsage stores information about total number of launched, reserved
+ * and planned workers during parallel vacuum (both for vacuum and cleanup).
+ */
+typedef struct PVWorkersUsage
+{
+	PVWorkersStats vacuum;
+	PVWorkersStats cleanup;
+} PVWorkersUsage;
+
 /* GUC parameters */
 extern PGDLLIMPORT int default_statistics_target;	/* PGDLLIMPORT for PostGIS */
 extern PGDLLIMPORT int vacuum_freeze_min_age;
@@ -394,11 +416,13 @@ extern TidStore *parallel_vacuum_get_dead_items(ParallelVacuumState *pvs,
 extern void parallel_vacuum_reset_dead_items(ParallelVacuumState *pvs);
 extern void parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs,
 												long num_table_tuples,
-												int num_index_scans);
+												int num_index_scans,
+												PVWorkersUsage *wusage);
 extern void parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs,
 												long num_table_tuples,
 												int num_index_scans,
-												bool estimated_count);
+												bool estimated_count,
+												PVWorkersUsage *wusage);
 extern void parallel_vacuum_main(dsm_segment *seg, shm_toc *toc);
 
 /* in commands/analyze.c */
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 241945734ec..ae1047ddf5d 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2069,6 +2069,8 @@ PVIndStats
 PVIndVacStatus
 PVOID
 PVShared
+PVWorkersUsage
+PVWorkersStats
 PX_Alias
 PX_Cipher
 PX_Combo
-- 
2.43.0

From e192de925dcfc932d1c223c051b71835dceded0e Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Sun, 23 Nov 2025 02:32:44 +0700
Subject: [PATCH v22 5/5] Documentation for parallel autovacuum

---
 doc/src/sgml/config.sgml           | 17 +++++++++++++++++
 doc/src/sgml/maintenance.sgml      | 12 ++++++++++++
 doc/src/sgml/ref/create_table.sgml | 20 ++++++++++++++++++++
 3 files changed, 49 insertions(+)

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index f670e2d4c31..07139ec7ff2 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -2918,6 +2918,7 @@ include_dir 'conf.d'
         <para>
          When changing this value, consider also adjusting
          <xref linkend="guc-max-parallel-workers"/>,
+         <xref linkend="guc-autovacuum-max-parallel-workers"/>,
          <xref linkend="guc-max-parallel-maintenance-workers"/>, and
          <xref linkend="guc-max-parallel-workers-per-gather"/>.
         </para>
@@ -9380,6 +9381,22 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
        </listitem>
       </varlistentry>
 
+      <varlistentry id="guc-autovacuum-max-parallel-workers" xreflabel="autovacuum_max_parallel_workers">
+        <term><varname>autovacuum_max_parallel_workers</varname> (<type>integer</type>)
+        <indexterm>
+         <primary><varname>autovacuum_max_parallel_workers</varname></primary>
+         <secondary>configuration parameter</secondary>
+        </indexterm>
+        </term>
+        <listitem>
+         <para>
+          Sets the maximum number of parallel autovacuum workers that
+          can be used for parallel index vacuuming at one time. This value is
+          capped by <xref linkend="guc-max-parallel-workers"/>. The default is 2.
+         </para>
+        </listitem>
+     </varlistentry>
+
      </variablelist>
     </sect2>
 
diff --git a/doc/src/sgml/maintenance.sgml b/doc/src/sgml/maintenance.sgml
index 7c958b06273..c9f9163c551 100644
--- a/doc/src/sgml/maintenance.sgml
+++ b/doc/src/sgml/maintenance.sgml
@@ -926,6 +926,18 @@ HINT:  Execute a database-wide VACUUM in that database.
     autovacuum workers' activity.
    </para>
 
+   <para>
+    If an autovacuum worker process encounters a table whose
+    <xref linkend="reloption-autovacuum-parallel-workers"/> storage parameter
+    is enabled, it will launch parallel workers to vacuum the table's
+    indexes in parallel. Parallel workers are taken from the pool of processes
+    established by <xref linkend="guc-max-worker-processes"/>, limited by
+    <xref linkend="guc-max-parallel-workers"/>.
+    The total number of parallel autovacuum workers that can be active at one
+    time is limited by the <xref linkend="guc-autovacuum-max-parallel-workers"/>
+    configuration parameter.
+   </para>
+
    <para>
     If several large tables all become eligible for vacuuming in a short
     amount of time, all autovacuum workers might become occupied with
diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml
index 982532fe725..4894de021cd 100644
--- a/doc/src/sgml/ref/create_table.sgml
+++ b/doc/src/sgml/ref/create_table.sgml
@@ -1718,6 +1718,26 @@ WITH ( MODULUS <replaceable class="parameter">numeric_literal</replaceable>, REM
     </listitem>
    </varlistentry>
 
+  <varlistentry id="reloption-autovacuum-parallel-workers" xreflabel="autovacuum_parallel_workers">
+    <term><literal>autovacuum_parallel_workers</literal> (<type>integer</type>)
+    <indexterm>
+     <primary><varname>autovacuum_parallel_workers</varname> storage parameter</primary>
+    </indexterm>
+    </term>
+    <listitem>
+     <para>
+      Sets the maximum number of parallel autovacuum workers that can process
+      indexes of this table.
+      The default value is -1, which means no parallel index vacuuming for
+      this table. If the value is 0, the parallel degree will be computed
+      based on the number of indexes.
+      Note that the computed number of workers may not actually be available
+      at run time. If this occurs, autovacuum will run with fewer workers
+      than expected.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="reloption-autovacuum-vacuum-threshold" xreflabel="autovacuum_vacuum_threshold">
     <term><literal>autovacuum_vacuum_threshold</literal>, <literal>toast.autovacuum_vacuum_threshold</literal> (<type>integer</type>)
     <indexterm>
-- 
2.43.0

From d312736690ffe1df7ae73c73dbb2ef334dfa3249 Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Sun, 23 Nov 2025 01:03:24 +0700
Subject: [PATCH v22 1/5] Parallel autovacuum

---
 src/backend/access/common/reloptions.c        |  11 ++
 src/backend/commands/vacuumparallel.c         |  42 ++++-
 src/backend/postmaster/autovacuum.c           | 164 +++++++++++++++++-
 src/backend/utils/init/globals.c              |   1 +
 src/backend/utils/misc/guc.c                  |   8 +-
 src/backend/utils/misc/guc_parameters.dat     |   8 +
 src/backend/utils/misc/postgresql.conf.sample |   1 +
 src/bin/psql/tab-complete.in.c                |   1 +
 src/include/miscadmin.h                       |   1 +
 src/include/postmaster/autovacuum.h           |   5 +
 src/include/utils/rel.h                       |   7 +
 11 files changed, 239 insertions(+), 10 deletions(-)

diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index 237ab8d0ed9..9459a010cc3 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -235,6 +235,15 @@ static relopt_int intRelOpts[] =
 		},
 		SPGIST_DEFAULT_FILLFACTOR, SPGIST_MIN_FILLFACTOR, 100
 	},
+	{
+		{
+			"autovacuum_parallel_workers",
+			"Maximum number of parallel autovacuum workers that can be used for processing this table.",
+			RELOPT_KIND_HEAP,
+			ShareUpdateExclusiveLock
+		},
+		-1, -1, 1024
+	},
 	{
 		{
 			"autovacuum_vacuum_threshold",
@@ -1968,6 +1977,8 @@ default_reloptions(Datum reloptions, bool validate, relopt_kind kind)
 		{"fillfactor", RELOPT_TYPE_INT, offsetof(StdRdOptions, fillfactor)},
 		{"autovacuum_enabled", RELOPT_TYPE_BOOL,
 		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, enabled)},
+		{"autovacuum_parallel_workers", RELOPT_TYPE_INT,
+		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, autovacuum_parallel_workers)},
 		{"autovacuum_vacuum_threshold", RELOPT_TYPE_INT,
 		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_threshold)},
 		{"autovacuum_vacuum_max_threshold", RELOPT_TYPE_INT,
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index c3b3c9ea21a..d3e0c32b7ee 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -1,7 +1,9 @@
 /*-------------------------------------------------------------------------
  *
  * vacuumparallel.c
- *	  Support routines for parallel vacuum execution.
+ *	  Support routines for parallel vacuum and autovacuum execution. In the
+ *	  comments below, the word "vacuum" will refer to both vacuum and
+ *	  autovacuum.
  *
  * This file contains routines that are intended to support setting up, using,
  * and tearing down a ParallelVacuumState.
@@ -34,6 +36,7 @@
 #include "executor/instrument.h"
 #include "optimizer/paths.h"
 #include "pgstat.h"
+#include "postmaster/autovacuum.h"
 #include "storage/bufmgr.h"
 #include "tcop/tcopprot.h"
 #include "utils/lsyscache.h"
@@ -373,8 +376,9 @@ parallel_vacuum_init(Relation rel, Relation *indrels, int nindexes,
 	shared->queryid = pgstat_get_my_query_id();
 	shared->maintenance_work_mem_worker =
 		(nindexes_mwm > 0) ?
-		maintenance_work_mem / Min(parallel_workers, nindexes_mwm) :
-		maintenance_work_mem;
+		vac_work_mem / Min(parallel_workers, nindexes_mwm) :
+		vac_work_mem;
+
 	shared->dead_items_info.max_bytes = vac_work_mem * (size_t) 1024;
 
 	/* Prepare DSA space for dead items */
@@ -553,12 +557,17 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 	int			nindexes_parallel_bulkdel = 0;
 	int			nindexes_parallel_cleanup = 0;
 	int			parallel_workers;
+	int			max_workers;
+
+	max_workers = AmAutoVacuumWorkerProcess() ?
+		autovacuum_max_parallel_workers :
+		max_parallel_maintenance_workers;
 
 	/*
 	 * We don't allow performing parallel operation in standalone backend or
 	 * when parallelism is disabled.
 	 */
-	if (!IsUnderPostmaster || max_parallel_maintenance_workers == 0)
+	if (!IsUnderPostmaster || max_workers == 0)
 		return 0;
 
 	/*
@@ -597,8 +606,8 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 	parallel_workers = (nrequested > 0) ?
 		Min(nrequested, nindexes_parallel) : nindexes_parallel;
 
-	/* Cap by max_parallel_maintenance_workers */
-	parallel_workers = Min(parallel_workers, max_parallel_maintenance_workers);
+	/* Cap by GUC variable */
+	parallel_workers = Min(parallel_workers, max_workers);
 
 	return parallel_workers;
 }
@@ -646,6 +655,13 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 	 */
 	nworkers = Min(nworkers, pvs->pcxt->nworkers);
 
+	/*
+	 * Reserve workers in autovacuum global state. Note that we may be given
+	 * fewer workers than we requested.
+	 */
+	if (AmAutoVacuumWorkerProcess() && nworkers > 0)
+		AutoVacuumReserveParallelWorkers(&nworkers);
+
 	/*
 	 * Set index vacuum status and mark whether parallel vacuum worker can
 	 * process it.
@@ -690,6 +706,16 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 
 		LaunchParallelWorkers(pvs->pcxt);
 
+		/*
+		 * Tell autovacuum that we could not launch all the previously
+		 * reserved workers.
+		 */
+		if (AmAutoVacuumWorkerProcess() &&
+			pvs->pcxt->nworkers_launched < nworkers)
+		{
+			AutoVacuumReleaseParallelWorkers(nworkers - pvs->pcxt->nworkers_launched);
+		}
+
 		if (pvs->pcxt->nworkers_launched > 0)
 		{
 			/*
@@ -738,6 +764,10 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 
 		for (int i = 0; i < pvs->pcxt->nworkers_launched; i++)
 			InstrAccumParallelQuery(&pvs->buffer_usage[i], &pvs->wal_usage[i]);
+
+		/* Release all the reserved parallel workers for autovacuum */
+		if (AmAutoVacuumWorkerProcess() && pvs->pcxt->nworkers_launched > 0)
+			AutoVacuumReleaseAllParallelWorkers();
 	}
 
 	/*
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index 6fde740465f..f40abe90ed5 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -151,6 +151,13 @@ int			Log_autoanalyze_min_duration = 600000;
 static double av_storage_param_cost_delay = -1;
 static int	av_storage_param_cost_limit = -1;
 
+/*
+ * Tracks the number of parallel workers currently reserved by the
+ * autovacuum worker. This is non-zero only for the parallel autovacuum
+ * leader process.
+ */
+static int	av_nworkers_reserved = 0;
+
 /* Flags set by signal handlers */
 static volatile sig_atomic_t got_SIGUSR2 = false;
 
@@ -285,6 +292,8 @@ typedef struct AutoVacuumWorkItem
  * av_workItems		work item array
  * av_nworkersForBalance the number of autovacuum workers to use when
  * 					calculating the per worker cost limit
+ * av_freeParallelWorkers the number of free parallel autovacuum workers
+ * av_maxParallelWorkers the maximum number of parallel autovacuum workers
  *
  * This struct is protected by AutovacuumLock, except for av_signal and parts
  * of the worker list (see above).
@@ -299,6 +308,8 @@ typedef struct
 	WorkerInfo	av_startingWorker;
 	AutoVacuumWorkItem av_workItems[NUM_WORKITEMS];
 	pg_atomic_uint32 av_nworkersForBalance;
+	uint32		av_freeParallelWorkers;
+	uint32		av_maxParallelWorkers;
 } AutoVacuumShmemStruct;
 
 static AutoVacuumShmemStruct *AutoVacuumShmem;
@@ -361,6 +372,7 @@ static void autovac_report_workitem(AutoVacuumWorkItem *workitem,
 static void avl_sigusr2_handler(SIGNAL_ARGS);
 static bool av_worker_available(void);
 static void check_av_worker_gucs(void);
+static void adjust_free_parallel_workers(int prev_max_parallel_workers);
 
 
 
@@ -759,6 +771,8 @@ ProcessAutoVacLauncherInterrupts(void)
 	if (ConfigReloadPending)
 	{
 		int			autovacuum_max_workers_prev = autovacuum_max_workers;
+		int			autovacuum_max_parallel_workers_prev =
+			autovacuum_max_parallel_workers;
 
 		ConfigReloadPending = false;
 		ProcessConfigFile(PGC_SIGHUP);
@@ -775,6 +789,15 @@ ProcessAutoVacLauncherInterrupts(void)
 		if (autovacuum_max_workers_prev != autovacuum_max_workers)
 			check_av_worker_gucs();
 
+		/*
+	 * If autovacuum_max_parallel_workers changed, we must adjust the
+	 * number of available parallel autovacuum workers in shmem
+	 * accordingly.
+		 */
+		if (autovacuum_max_parallel_workers_prev !=
+			autovacuum_max_parallel_workers)
+			adjust_free_parallel_workers(autovacuum_max_parallel_workers_prev);
+
 		/* rebuild the list in case the naptime changed */
 		rebuild_database_list(InvalidOid);
 	}
@@ -1379,6 +1402,16 @@ avl_sigusr2_handler(SIGNAL_ARGS)
  *					  AUTOVACUUM WORKER CODE
  ********************************************************************/
 
+/*
+ * Make sure that all reserved workers are released, even if the parallel
+ * autovacuum leader is exiting due to a FATAL error.
+ */
+static void
+autovacuum_worker_before_shmem_exit(int code, Datum arg)
+{
+	AutoVacuumReleaseAllParallelWorkers();
+}
+
 /*
  * Main entry point for autovacuum worker processes.
  */
@@ -2275,6 +2308,12 @@ do_autovacuum(void)
 										  "Autovacuum Portal",
 										  ALLOCSET_DEFAULT_SIZES);
 
+	/*
+	 * Parallel autovacuum can reserve parallel workers. Make sure that all
+	 * reserved workers are released even after a FATAL error.
+	 */
+	before_shmem_exit(autovacuum_worker_before_shmem_exit, 0);
+
 	/*
 	 * Perform operations on collected tables.
 	 */
@@ -2456,6 +2495,12 @@ do_autovacuum(void)
 		}
 		PG_CATCH();
 		{
+			/*
+			 * Parallel autovacuum can reserve parallel workers. Make sure
+			 * that all reserved workers are released.
+			 */
+			AutoVacuumReleaseAllParallelWorkers();
+
 			/*
 			 * Abort the transaction, start a new one, and proceed with the
 			 * next table in our list.
@@ -2856,8 +2901,12 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 		 */
 		tab->at_params.index_cleanup = VACOPTVALUE_UNSPECIFIED;
 		tab->at_params.truncate = VACOPTVALUE_UNSPECIFIED;
-		/* As of now, we don't support parallel vacuum for autovacuum */
-		tab->at_params.nworkers = -1;
+
+		/* Decide whether we need to process the table's indexes in parallel. */
+		tab->at_params.nworkers = avopts
+			? avopts->autovacuum_parallel_workers
+			: -1;
+
 		tab->at_params.freeze_min_age = freeze_min_age;
 		tab->at_params.freeze_table_age = freeze_table_age;
 		tab->at_params.multixact_freeze_min_age = multixact_freeze_min_age;
@@ -3334,6 +3383,88 @@ AutoVacuumRequestWork(AutoVacuumWorkItemType type, Oid relationId,
 	return result;
 }
 
+/*
+ * Reserves parallel workers for autovacuum.
+ *
+ * nworkers is an in/out parameter; the requested number of parallel workers
+ * to reserve by the caller, and set to the actual number of reserved workers.
+ *
+ * The caller must call AutoVacuumRelease[All]ParallelWorkers() to release the
+ * reserved workers.
+ *
+ * NOTE: We try to provide as many workers as requested, even if the
+ * caller ends up occupying all available workers.
+ */
+void
+AutoVacuumReserveParallelWorkers(int *nworkers)
+{
+	/* Only leader autovacuum worker can call this function. */
+	Assert(AmAutoVacuumWorkerProcess());
+
+	/* The worker must not have any reserved workers yet */
+	Assert(av_nworkers_reserved == 0);
+
+	LWLockAcquire(AutovacuumLock, LW_EXCLUSIVE);
+
+	/* Provide as many workers as we can. */
+	*nworkers = Min(AutoVacuumShmem->av_freeParallelWorkers, *nworkers);
+	AutoVacuumShmem->av_freeParallelWorkers -= *nworkers;
+
+	LWLockRelease(AutovacuumLock);
+
+	/* Remember how many workers we have reserved. */
+	av_nworkers_reserved = *nworkers;
+}
+
+/*
+ * Releases the reserved parallel workers for autovacuum.
+ *
+ * This function should be used to release the parallel workers that an
+ * autovacuum worker reserved by AutoVacuumReserveParallelWorkers(). nworkers
+ * is the number of workers to release, which must not be greater than the
+ * number of workers currently reserved, av_nworkers_reserved.
+ */
+void
+AutoVacuumReleaseParallelWorkers(int nworkers)
+{
+	/* Only leader worker can call this function. */
+	Assert(AmAutoVacuumWorkerProcess());
+
+	/* Cannot release more workers than reserved */
+	Assert(nworkers <= av_nworkers_reserved);
+
+	LWLockAcquire(AutovacuumLock, LW_EXCLUSIVE);
+
+	/*
+	 * If the maximum number of parallel workers was reduced during execution,
+	 * we must cap the number of available workers at the new maximum.
+	 */
+	AutoVacuumShmem->av_freeParallelWorkers =
+		Min(AutoVacuumShmem->av_freeParallelWorkers + nworkers,
+			AutoVacuumShmem->av_maxParallelWorkers);
+
+	LWLockRelease(AutovacuumLock);
+
+	/* Don't have to remember these workers anymore. */
+	av_nworkers_reserved -= nworkers;
+}
+
+/*
+ * Same as above, but this function releases all the parallel workers that
+ * this autovacuum worker reserved.
+ */
+void
+AutoVacuumReleaseAllParallelWorkers(void)
+{
+	/* Only leader worker can call this function. */
+	Assert(AmAutoVacuumWorkerProcess());
+
+	if (av_nworkers_reserved > 0)
+		AutoVacuumReleaseParallelWorkers(av_nworkers_reserved);
+
+	Assert(av_nworkers_reserved == 0);
+}
+
 /*
  * autovac_init
  *		This is called at postmaster initialization.
@@ -3394,6 +3525,10 @@ AutoVacuumShmemInit(void)
 		Assert(!found);
 
 		AutoVacuumShmem->av_launcherpid = 0;
+		AutoVacuumShmem->av_maxParallelWorkers =
+			Min(autovacuum_max_parallel_workers, max_parallel_workers);
+		AutoVacuumShmem->av_freeParallelWorkers =
+			AutoVacuumShmem->av_maxParallelWorkers;
 		dclist_init(&AutoVacuumShmem->av_freeWorkers);
 		dlist_init(&AutoVacuumShmem->av_runningWorkers);
 		AutoVacuumShmem->av_startingWorker = NULL;
@@ -3475,3 +3610,28 @@ check_av_worker_gucs(void)
 				 errdetail("The server will only start up to \"autovacuum_worker_slots\" (%d) autovacuum workers at a given time.",
 						   autovacuum_worker_slots)));
 }
+
+/*
+ * Adjust the number of free parallel workers to match the new
+ * autovacuum_max_parallel_workers value.
+ */
+static void
+adjust_free_parallel_workers(int prev_max_parallel_workers)
+{
+	int	nfree_workers;
+
+	LWLockAcquire(AutovacuumLock, LW_EXCLUSIVE);
+
+	/*
+	 * Cap or increase the number of free parallel workers according to the
+	 * parameter change.
+	 */
+	nfree_workers =
+		autovacuum_max_parallel_workers - prev_max_parallel_workers +
+		AutoVacuumShmem->av_freeParallelWorkers;
+
+	AutoVacuumShmem->av_freeParallelWorkers = Max(nfree_workers, 0);
+	AutoVacuumShmem->av_maxParallelWorkers = autovacuum_max_parallel_workers;
+
+	LWLockRelease(AutovacuumLock);
+}
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 36ad708b360..8265a82b639 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -143,6 +143,7 @@ int			NBuffers = 16384;
 int			MaxConnections = 100;
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
+int			autovacuum_max_parallel_workers = 2;
 int			MaxBackends = 0;
 
 /* GUC parameters for vacuum */
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index d77502838c4..4a5c73a9e33 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -3326,9 +3326,13 @@ set_config_with_handle(const char *name, config_handle *handle,
 	 *
 	 * Also allow normal setting if the GUC is marked GUC_ALLOW_IN_PARALLEL.
 	 *
-	 * Other changes might need to affect other workers, so forbid them.
+	 * Other changes might need to affect other workers, so forbid them. Note
+	 * that the parallel autovacuum leader is an exception: only cost-based
+	 * delay parameters need to be propagated to parallel vacuum workers, and
+	 * we handle that elsewhere as appropriate.
 	 */
-	if (IsInParallelMode() && changeVal && action != GUC_ACTION_SAVE &&
+	if (IsInParallelMode() && !AmAutoVacuumWorkerProcess() && changeVal &&
+		action != GUC_ACTION_SAVE &&
 		(record->flags & GUC_ALLOW_IN_PARALLEL) == 0)
 	{
 		ereport(elevel,
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index 9507778415d..92b69c65e83 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -154,6 +154,14 @@
   max => '2000000000',
 },
 
+{ name => 'autovacuum_max_parallel_workers', type => 'int', context => 'PGC_SIGHUP', group => 'VACUUM_AUTOVACUUM',
  short_desc => 'Sets the maximum number of parallel autovacuum workers that can be taken from the bgworkers pool.',
+  variable => 'autovacuum_max_parallel_workers',
+  boot_val => '2',
+  min => '0',
+  max => 'MAX_BACKENDS',
+},
+
 { name => 'autovacuum_max_workers', type => 'int', context => 'PGC_SIGHUP', group => 'VACUUM_AUTOVACUUM',
   short_desc => 'Sets the maximum number of simultaneously running autovacuum worker processes.',
   variable => 'autovacuum_max_workers',
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index f938cc65a3a..ef8126f3790 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -710,6 +710,7 @@
 #autovacuum_worker_slots = 16           # autovacuum worker slots to allocate
                                         # (change requires restart)
 #autovacuum_max_workers = 3             # max number of autovacuum subprocesses
+#autovacuum_max_parallel_workers = 2    # limited by max_parallel_workers
 #autovacuum_naptime = 1min              # time between autovacuum runs
 #autovacuum_vacuum_threshold = 50       # min number of row updates before
                                         # vacuum
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 8b91bc00062..ed59a21289c 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -1423,6 +1423,7 @@ static const char *const table_storage_parameters[] = {
 	"autovacuum_multixact_freeze_max_age",
 	"autovacuum_multixact_freeze_min_age",
 	"autovacuum_multixact_freeze_table_age",
+	"autovacuum_parallel_workers",
 	"autovacuum_vacuum_cost_delay",
 	"autovacuum_vacuum_cost_limit",
 	"autovacuum_vacuum_insert_scale_factor",
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index f16f35659b9..00190c67ecf 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -178,6 +178,7 @@ extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
+extern PGDLLIMPORT int autovacuum_max_parallel_workers;
 
 extern PGDLLIMPORT int commit_timestamp_buffers;
 extern PGDLLIMPORT int multixact_member_buffers;
diff --git a/src/include/postmaster/autovacuum.h b/src/include/postmaster/autovacuum.h
index 5aa0f3a8ac1..f3783afb51b 100644
--- a/src/include/postmaster/autovacuum.h
+++ b/src/include/postmaster/autovacuum.h
@@ -62,6 +62,11 @@ pg_noreturn extern void AutoVacWorkerMain(const void *startup_data, size_t start
 extern bool AutoVacuumRequestWork(AutoVacuumWorkItemType type,
 								  Oid relationId, BlockNumber blkno);
 
+/* parallel autovacuum stuff */
+extern void	AutoVacuumReserveParallelWorkers(int *nworkers);
+extern void AutoVacuumReleaseParallelWorkers(int nworkers);
+extern void AutoVacuumReleaseAllParallelWorkers(void);
+
 /* shared memory stuff */
 extern Size AutoVacuumShmemSize(void);
 extern void AutoVacuumShmemInit(void);
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index 236830f6b93..7c5e35a486c 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -311,6 +311,13 @@ typedef struct ForeignKeyCacheInfo
 typedef struct AutoVacOpts
 {
 	bool		enabled;
+
+	/*
+	 * Max number of parallel autovacuum workers. If the value is 0, the
+	 * parallel degree is computed based on the number of indexes.
+	 */
+	int			autovacuum_parallel_workers;
+
 	int			vacuum_threshold;
 	int			vacuum_max_threshold;
 	int			vacuum_ins_threshold;
-- 
2.43.0

From 2e5ab0a4f025900a61a1e34f5d2d163b6ff23f0d Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Fri, 27 Feb 2026 14:45:22 +0700
Subject: [PATCH 1/3] fixes for 0003 patch

---
 src/backend/commands/vacuumparallel.c | 74 +++++++++++++--------------
 src/tools/pgindent/typedefs.list      |  2 +-
 2 files changed, 36 insertions(+), 40 deletions(-)

diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index ccb3812165c..27a6120b0e3 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -57,28 +57,28 @@
  * Helper for the PVSharedCostParams structure (see below), to avoid
  * repetition.
  */
-typedef struct CostParamsData
+typedef struct VacuumCostParams
 {
 	double		cost_delay;
 	int			cost_limit;
 	int			cost_page_dirty;
 	int			cost_page_hit;
 	int			cost_page_miss;
-} CostParamsData;
+} VacuumCostParams;
 
-#define	FillCostParamsData(cost_params) \
+#define	FillVacCostParams(cost_params) \
 	(cost_params)->cost_delay = vacuum_cost_delay; \
 	(cost_params)->cost_limit = vacuum_cost_limit; \
 	(cost_params)->cost_page_dirty = VacuumCostPageDirty; \
 	(cost_params)->cost_page_hit = VacuumCostPageHit; \
 	(cost_params)->cost_page_miss = VacuumCostPageMiss
 
-#define CostParamsDataEqual(params_1, params_2) \
-	((params_1).cost_delay == (params_2).cost_delay && \
-	 (params_1).cost_limit == (params_2).cost_limit && \
-	 (params_1).cost_page_dirty == (params_2).cost_page_dirty && \
-	 (params_1).cost_page_hit == (params_2).cost_page_hit && \
-	 (params_1).cost_page_miss == (params_2).cost_page_miss)
+#define VacCostParamsEquals(params) \
+	(vacuum_cost_delay == (params).cost_delay && \
+	 vacuum_cost_limit == (params).cost_limit && \
+	 VacuumCostPageDirty == (params).cost_page_dirty && \
+	 VacuumCostPageHit == (params).cost_page_hit && \
+	 VacuumCostPageMiss == (params).cost_page_miss)
 
 /*
  * Struct for cost-based vacuum delay related parameters to share among an
@@ -99,8 +99,11 @@ typedef struct PVSharedCostParams
 
 	slock_t		mutex;			/* protects all fields below */
 
-	/* Copies of corresponding parameters from autovacuum leader process */
-	CostParamsData params_data;
+	/*
+	 * Copies of the corresponding cost-based vacuum delay parameters from
+	 * autovacuum leader process.
+	 */
+	VacuumCostParams params_data;
 } PVSharedCostParams;
 
 /*
@@ -177,11 +180,11 @@ typedef struct PVShared
 	 * If 'true' then we are running parallel autovacuum. Otherwise, we are
 	 * running parallel maintenance VACUUM.
 	 */
-	bool		am_parallel_autovacuum;
+	bool		is_autovacuum;
 
 	/*
-	 * Struct for syncing parameters between supportive parallel autovacuum
-	 * workers with leader worker.
+	 * Struct for syncing cost-based vacuum delay parameters between
+	 * supportive parallel autovacuum workers and the leader worker.
 	 */
 	PVSharedCostParams cost_params;
 } PVShared;
@@ -462,11 +465,11 @@ parallel_vacuum_init(Relation rel, Relation *indrels, int nindexes,
 	pg_atomic_init_u32(&(shared->active_nworkers), 0);
 	pg_atomic_init_u32(&(shared->idx), 0);
 
-	shared->am_parallel_autovacuum = AmAutoVacuumWorkerProcess();
+	shared->is_autovacuum = AmAutoVacuumWorkerProcess();
 
-	if (shared->am_parallel_autovacuum)
+	if (shared->is_autovacuum)
 	{
-		FillCostParamsData(&shared->cost_params.params_data);
+		FillVacCostParams(&shared->cost_params.params_data);
 		pg_atomic_init_u32(&shared->cost_params.generation, 0);
 		SpinLockInit(&shared->cost_params.mutex);
 
@@ -618,10 +621,10 @@ parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs, long num_table_tup
 }
 
 /*
- * If we are parallel *autovacuum* worker, check whether related to
- * cost-based delay parameters had changed in the leader worker. If
- * so, corresponding parameters will be updated to the values which
- * leader worker is operating on.
+ * If we are a parallel *autovacuum* worker, check whether the cost-based
+ * vacuum delay parameters have changed in the leader worker. If so, update
+ * the corresponding local parameters to the values the leader worker is
+ * operating on.
  *
  * For non-autovacuum parallel worker this function will have no effect.
  */
@@ -629,7 +632,6 @@ void
 parallel_vacuum_update_shared_delay_params(void)
 {
 	uint32		params_generation;
-	CostParamsData shared_params_data;
 
 	Assert(IsParallelWorker());
 
@@ -646,13 +648,11 @@ parallel_vacuum_update_shared_delay_params(void)
 
 	SpinLockAcquire(&pv_shared_cost_params->mutex);
 
-	shared_params_data = pv_shared_cost_params->params_data;
-
-	VacuumCostDelay = shared_params_data.cost_delay;
-	VacuumCostLimit = shared_params_data.cost_limit;
-	VacuumCostPageDirty = shared_params_data.cost_page_dirty;
-	VacuumCostPageHit = shared_params_data.cost_page_hit;
-	VacuumCostPageMiss = shared_params_data.cost_page_miss;
+	VacuumCostDelay = pv_shared_cost_params->params_data.cost_delay;
+	VacuumCostLimit = pv_shared_cost_params->params_data.cost_limit;
+	VacuumCostPageDirty = pv_shared_cost_params->params_data.cost_page_dirty;
+	VacuumCostPageHit = pv_shared_cost_params->params_data.cost_page_hit;
+	VacuumCostPageMiss = pv_shared_cost_params->params_data.cost_page_miss;
 
 	SpinLockRelease(&pv_shared_cost_params->mutex);
 
@@ -663,34 +663,30 @@ parallel_vacuum_update_shared_delay_params(void)
 
 /*
  * Function to be called from parallel autovacuum leader in order to propagate
- * some cost-based parameters to the supportive workers.
+ * some cost-based vacuum delay parameters to the supportive workers.
  */
 void
 parallel_vacuum_propagate_shared_delay_params(void)
 {
-	CostParamsData local_params_data;
-
 	Assert(AmAutoVacuumWorkerProcess());
 
 	/* Check whether we are running parallel autovacuum */
 	if (pv_shared_cost_params == NULL)
 		return;
 
-	FillCostParamsData(&local_params_data);
 	SpinLockAcquire(&pv_shared_cost_params->mutex);
 
-	if (CostParamsDataEqual(pv_shared_cost_params->params_data,
-							local_params_data))
+	if (VacCostParamsEquals(pv_shared_cost_params->params_data))
 	{
 		/*
-		 * We don't need to update shared delay params if they haven't
-		 * changed.
+		 * We don't need to update shared cost-based vacuum delay params if
+		 * they haven't changed.
 		 */
 		SpinLockRelease(&pv_shared_cost_params->mutex);
 		return;
 	}
 
-	FillCostParamsData(&pv_shared_cost_params->params_data);
+	FillVacCostParams(&pv_shared_cost_params->params_data);
 	SpinLockRelease(&pv_shared_cost_params->mutex);
 
 	/*
@@ -1266,7 +1262,7 @@ parallel_vacuum_main(dsm_segment *seg, shm_toc *toc)
 	VacuumSharedCostBalance = &(shared->cost_balance);
 	VacuumActiveNWorkers = &(shared->active_nworkers);
 
-	if (shared->am_parallel_autovacuum)
+	if (shared->is_autovacuum)
 		pv_shared_cost_params = &(shared->cost_params);
 
 	/* Set parallel vacuum state */
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 2d6b57232e6..20fe34f8cc7 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -545,7 +545,6 @@ CopyToRoutine
 CopyToState
 CopyToStateData
 Cost
-CostParamsData
 CostSelector
 Counters
 CoverExt
@@ -3251,6 +3250,7 @@ VacAttrStatsP
 VacDeadItemsInfo
 VacErrPhase
 VacOptValue
+VacuumCostParams
 VacuumParams
 VacuumRelation
 VacuumStmt
-- 
2.43.0

From dd5df106946a188342992f50e587f269881cacae Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Fri, 27 Feb 2026 14:03:51 +0700
Subject: [PATCH] fixes for 0002 patch

---
 src/backend/access/heap/vacuumlazy.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index d19e15cbcce..91be2502c09 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -1139,7 +1139,7 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
 				{
 					/* Worker usage stats for parallel autovacuum. */
 					appendStringInfo(&buf,
-									 _("parallel index vacuum: %d workers were planned, %d workers were reserved and %d workers were launched in total\n"),
+									 _("parallel workers: index vacuum: %d planned, %d reserved, %d launched in total\n"),
 									 vacrel->workers_usage.vacuum.nplanned,
 									 vacrel->workers_usage.vacuum.nreserved,
 									 vacrel->workers_usage.vacuum.nlaunched);
@@ -1148,7 +1148,7 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
 				{
 					/* Worker usage stats for manual VACUUM (PARALLEL). */
 					appendStringInfo(&buf,
-									 _("parallel index vacuum: %d workers were planned and %d workers were launched in total\n"),
+									 _("parallel workers: index vacuum: %d planned, %d launched in total\n"),
 									 vacrel->workers_usage.vacuum.nplanned,
 									 vacrel->workers_usage.vacuum.nlaunched);
 				}
@@ -1161,7 +1161,7 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
 				{
 					/* Worker usage stats for parallel autovacuum. */
 					appendStringInfo(&buf,
-									 _("parallel index cleanup: %d workers were planned, %d workers were reserved and %d workers were launched in total\n"),
+									 _("parallel workers: index cleanup: %d planned, %d reserved, %d launched\n"),
 									 vacrel->workers_usage.cleanup.nplanned,
 									 vacrel->workers_usage.cleanup.nreserved,
 									 vacrel->workers_usage.cleanup.nlaunched);
@@ -1170,7 +1170,7 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
 				{
 					/* Worker usage stats for manual VACUUM (PARALLEL). */
 					appendStringInfo(&buf,
-									 _("parallel index cleanup: %d workers were planned and %d workers were launched in total\n"),
+									 _("parallel workers: index cleanup: %d planned, %d launched\n"),
 									 vacrel->workers_usage.cleanup.nplanned,
 									 vacrel->workers_usage.cleanup.nlaunched);
 				}
-- 
2.43.0

From d38013b4abe14b69f4058337cd7231ab1150e12f Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Fri, 27 Feb 2026 16:15:34 +0700
Subject: [PATCH 3/3] fixes for 0004 patch

---
 src/backend/access/heap/vacuumlazy.c          |  2 +
 src/backend/commands/vacuumparallel.c         | 38 ++++++++-----------
 .../modules/test_autovacuum/t/001_basic.pl    | 21 ++--------
 3 files changed, 22 insertions(+), 39 deletions(-)

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 2498edcc0d5..6407c10524b 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -870,11 +870,13 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
 	lazy_check_wraparound_failsafe(vacrel);
 	dead_items_alloc(vacrel, params.nworkers);
 
+#ifdef USE_INJECTION_POINTS
 	/*
 	 * Trigger injection point, if parallel autovacuum is about to be started.
 	 */
 	if (AmAutoVacuumWorkerProcess() && ParallelVacuumIsActive(vacrel))
 		INJECTION_POINT("autovacuum-start-parallel-vacuum", NULL);
+#endif
 
 	/*
 	 * Call lazy_scan_heap to perform all required heap pruning, index
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 88842c5cec9..78ccfede031 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -308,7 +308,7 @@ static bool parallel_vacuum_index_is_parallel_safe(Relation indrel, int num_inde
 static void parallel_vacuum_error_callback(void *arg);
 
 #ifdef USE_INJECTION_POINTS
-static void parallel_vacuum_report_cost_based_params(void);
+static inline void parallel_vacuum_report_cost_based_params(void);
 #endif
 
 /*
@@ -923,6 +923,7 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 							pvs->pcxt->nworkers_launched, nworkers)));
 	}
 
+#ifdef USE_INJECTION_POINTS
 	/*
 	 * To be able to exercise whether all reserved parallel workers are being
 	 * released anyway, allow injection points to trigger a failure at this
@@ -933,6 +934,7 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 	 */
 	if (nworkers > 0)
 		INJECTION_POINT("autovacuum-leader-before-indexes-processing", NULL);
+#endif
 
 	/* Vacuum the indexes that can be processed by only leader process */
 	parallel_vacuum_process_unsafe_indexes(pvs);
@@ -1317,7 +1319,7 @@ parallel_vacuum_main(dsm_segment *seg, shm_toc *toc)
 	 * during index processing (via vacuum_delay_point call). This logging
 	 * allows tests to ensure this.
 	 */
-	if (shared->am_parallel_autovacuum)
+	if (shared->is_autovacuum)
 		parallel_vacuum_report_cost_based_params();
 #endif
 
@@ -1376,29 +1378,21 @@ parallel_vacuum_error_callback(void *arg)
 
 #ifdef USE_INJECTION_POINTS
 /*
- * Log values of the related to cost-based delay parameters. It is used for
+ * Log values related to cost-based vacuum delay parameters. It is used for
  * testing purpose.
  */
-static void
+static inline void
 parallel_vacuum_report_cost_based_params(void)
 {
-	StringInfoData buf;
-
-	/* Simulate config reload during normal processing */
-	pg_atomic_add_fetch_u32(VacuumActiveNWorkers, 1);
-	vacuum_delay_point(false);
-	pg_atomic_sub_fetch_u32(VacuumActiveNWorkers, 1);
-
-	initStringInfo(&buf);
-
-	appendStringInfo(&buf, "Vacuum cost-based delay parameters of parallel worker:\n");
-	appendStringInfo(&buf, "vacuum_cost_limit = %d\n",vacuum_cost_limit);
-	appendStringInfo(&buf, "vacuum_cost_delay = %g\n", vacuum_cost_delay);
-	appendStringInfo(&buf, "vacuum_cost_page_miss = %d\n", VacuumCostPageMiss);
-	appendStringInfo(&buf, "vacuum_cost_page_dirty = %d\n", VacuumCostPageDirty);
-	appendStringInfo(&buf, "vacuum_cost_page_hit = %d\n", VacuumCostPageHit);
-
-	ereport(DEBUG2, errmsg("%s", buf.data));
-	pfree(buf.data);
+	const char *msg_format =
+		_("Parallel autovacuum worker cost params: cost_limit=%d, cost_delay=%g, cost_page_miss=%d, cost_page_dirty=%d, cost_page_hit=%d");
+
+	elog(DEBUG2,
+		 msg_format,
+		 vacuum_cost_limit,
+		 vacuum_cost_delay,
+		 VacuumCostPageMiss,
+		 VacuumCostPageDirty,
+		 VacuumCostPageHit);
 }
 #endif
diff --git a/src/test/modules/test_autovacuum/t/001_basic.pl b/src/test/modules/test_autovacuum/t/001_basic.pl
index b3d22361dcf..9b80d371f5c 100644
--- a/src/test/modules/test_autovacuum/t/001_basic.pl
+++ b/src/test/modules/test_autovacuum/t/001_basic.pl
@@ -109,8 +109,7 @@ $node->safe_psql('postgres', qq{
 # Wait until the parallel autovacuum on table is completed. At the same time,
 # we check that the required number of parallel workers has been started.
 $log_start = $node->wait_for_log(
-	qr/parallel index vacuum: 2 workers were planned, / .
-	qr/2 workers were reserved and 2 workers were launched in total/,
+	qr/parallel workers: index vacuum: 2 planned, 2 reserved, 2 launched/,
 	$log_start
 );
 
@@ -162,12 +161,8 @@ $node->wait_for_event(
 # Check whether parallel worker successfully updated all parameters during
 # index processing
 $log_start = $node->wait_for_log(
-	qr/Vacuum cost-based delay parameters of parallel worker:\n/ .
-	qr/\tvacuum_cost_limit = 500\n/ .
-	qr/\tvacuum_cost_delay = 2\n/ .
-	qr/\tvacuum_cost_page_miss = 10\n/ .
-	qr/\tvacuum_cost_page_dirty = 10\n/ .
-	qr/\tvacuum_cost_page_hit = 10\n/,
+	qr/Parallel autovacuum worker cost params: cost_limit=500, cost_delay=2, / .
+	qr/cost_page_miss=10, cost_page_dirty=10, cost_page_hit=10/,
 	$log_start
 );
 
@@ -219,8 +214,7 @@ $node->safe_psql('postgres', qq{
 
 # Wait until the end of parallel processing
 $log_start = $node->wait_for_log(
-	qr/parallel index vacuum: 2 workers were planned, / .
-	qr/2 workers were reserved and 2 workers were launched in total/,
+	qr/parallel workers: index vacuum: 2 planned, 2 reserved, 2 launched/,
 	$log_start
 );
 
@@ -296,13 +290,6 @@ my $av_pid = $node->safe_psql('postgres', qq{
 	LIMIT 1;
 });
 
-# Create role with pg_signal_autovacuum_worker for terminating autovacuum worker.
-$node->safe_psql('postgres', qq{
-	CREATE ROLE regress_worker_role;
-	GRANT pg_signal_autovacuum_worker TO regress_worker_role;
-	SET ROLE regress_worker_role;
-});
-
 $node->safe_psql('postgres', qq{
 	SELECT pg_terminate_backend('$av_pid');
 });
-- 
2.43.0
