On Tue, Nov 27, 2018 at 11:26 AM Amit Kapila <amit.kapil...@gmail.com> wrote: > > On Mon, Nov 26, 2018 at 2:08 PM Masahiko Sawada <sawada.m...@gmail.com> wrote: > > > > On Sun, Nov 25, 2018 at 2:35 PM Amit Kapila <amit.kapil...@gmail.com> wrote: > > > > > > On Sat, Nov 24, 2018 at 5:47 PM Amit Kapila <amit.kapil...@gmail.com> > > > wrote: > > > > On Tue, Oct 30, 2018 at 2:04 PM Masahiko Sawada <sawada.m...@gmail.com> > > > > wrote: > > > > > > > > > > > > > Thank you for the comment. > > > > > > I could see that you have put a lot of effort on this patch and still > > > > we are not able to make much progress mainly I guess because of > > > > relation extension lock problem. I think we can park that problem for > > > > some time (as already we have invested quite some time on it), discuss > > > > a bit about actual parallel vacuum patch and then come back to it. > > > > > > > > > > Today, I was reading this and previous related thread [1] and it seems > > > to me multiple people Andres [2], Simon [3] have pointed out that > > > parallelization for index portion is more valuable. Also, some of the > > > results [4] indicate the same. Now, when there are no indexes, > > > parallelizing heap scans also have benefit, but I think in practice we > > > will see more cases where the user wants to vacuum tables with > > > indexes. So how about if we break this problem in the following way > > > where each piece give the benefit of its own: > > > (a) Parallelize index scans wherein the workers will be launched only > > > to vacuum indexes. Only one worker per index will be spawned. > > > (b) Parallelize per-index vacuum. Each index can be vacuumed by > > > multiple workers. > > > (c) Parallelize heap scans where multiple workers will scan the heap, > > > collect dead TIDs and then launch multiple workers for indexes. > > > > > > I think if we break this problem into multiple patches, it will reduce > > > the scope of each patch and help us in making progress. Now, it's > > > been more than 2 years that we are trying to solve this problem, but > > > still didn't make much progress. I understand there are various > > > genuine reasons and all of that work will help us in solving all the > > > problems in this area. How about if we first target problem (a) and > > > once we are done with that we can see which of (b) or (c) we want to > > > do first? > > > > Thank you for suggestion. It seems good to me. We would get a nice > > performance scalability even by only (a), and vacuum will get more > > powerful by (b) or (c). Also, (a) would not require to resovle the > > relation extension lock issue IIUC. > > > > Yes, I also think so. We do acquire 'relation extension lock' during > index vacuum, but as part of (a), we are talking one worker per-index, > so there shouldn't be a problem with respect to deadlocks. > > > I'll change the patch and submit > > to the next CF. > > > > Okay. >
Attached are the updated patches. I scaled back the scope of this patch: it now includes only feature (a), that is, it executes both index vacuum and index cleanup in parallel. It also doesn't include autovacuum support for now.

The PARALLEL option works almost the same as in the previous patch. In the VACUUM command, we can specify the 'PARALLEL n' option, where n is the number of parallel workers to request. If n is omitted, the number of parallel workers is # of indexes - 1. We can also specify the parallel degree with the parallel_workers reloption. The number of parallel workers is capped by Min(# of indexes - 1, max_parallel_maintenance_workers). That is, parallel vacuum can be executed for a table only if it has more than one index. (Example invocations are sketched below my signature.)

As for the internal design, the details are written in the comment at the top of vacuumlazy.c. In parallel vacuum mode, we allocate a DSM segment at the beginning of lazy vacuum, which stores shared information as well as the dead tuples. When starting either index vacuum or index cleanup, we launch the parallel workers. The parallel workers perform either index vacuum or index cleanup for the individual indexes and exit once all indexes have been processed. The leader process then re-initializes the DSM and re-launches the workers the next time; it does not destroy the parallel context in between. After lazy vacuum is done, the leader process exits parallel mode and updates the index statistics, since no writes are allowed during parallel mode.

I've also attached the 0002 patch to support parallel lazy vacuum in the vacuumdb command; example invocations follow that patch as well.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
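For clarity, here is a minimal usage sketch of the SQL-level interface described above, run from a shell. The database name "postgres" and the table name "tbl" are placeholders; tbl is assumed to have two indexes.

    # Request 2 parallel workers explicitly (capped by
    # max_parallel_maintenance_workers and by # of indexes - 1):
    psql -d postgres -c "VACUUM (PARALLEL 2) tbl;"

    # Degree omitted: the number of workers requested defaults to
    # (# of indexes - 1), here 1:
    psql -d postgres -c "VACUUM (PARALLEL) tbl;"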
From 33a9a44fa4f090d7dd6dd319edcb1cb754064de8 Mon Sep 17 00:00:00 2001
From: Masahiko Sawada <sawada.mshk@gmail.com>
Date: Tue, 18 Dec 2018 16:08:24 +0900
Subject: [PATCH v9 2/2] Add -P option to vacuumdb command.

---
 doc/src/sgml/ref/vacuumdb.sgml    | 16 +++++++++++++
 src/bin/scripts/t/100_vacuumdb.pl | 10 +++++++-
 src/bin/scripts/vacuumdb.c        | 50 ++++++++++++++++++++++++++++++++++++++-
 3 files changed, 74 insertions(+), 2 deletions(-)

diff --git a/doc/src/sgml/ref/vacuumdb.sgml b/doc/src/sgml/ref/vacuumdb.sgml
index 955a17a..0d085a6 100644
--- a/doc/src/sgml/ref/vacuumdb.sgml
+++ b/doc/src/sgml/ref/vacuumdb.sgml
@@ -158,6 +158,22 @@ PostgreSQL documentation
      </varlistentry>

      <varlistentry>
+      <term><option>-P <replaceable class="parameter">workers</replaceable></option></term>
+      <term><option>--parallel=<replaceable class="parameter">workers</replaceable></option></term>
+      <listitem>
+       <para>
+        Execute parallel vacuum with
+        <replaceable class="parameter">workers</replaceable> background workers.
+       </para>
+       <para>
+        <application>vacuumdb</application> will require background workers,
+        so make sure your <xref linkend="guc-max-parallel-maintenance-workers"/>
+        setting is at least one.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
       <term><option>-q</option></term>
       <term><option>--quiet</option></term>
       <listitem>
diff --git a/src/bin/scripts/t/100_vacuumdb.pl b/src/bin/scripts/t/100_vacuumdb.pl
index 4c477a2..4d513a1 100644
--- a/src/bin/scripts/t/100_vacuumdb.pl
+++ b/src/bin/scripts/t/100_vacuumdb.pl
@@ -3,7 +3,7 @@ use warnings;

 use PostgresNode;
 use TestLib;
-use Test::More tests => 23;
+use Test::More tests => 27;

 program_help_ok('vacuumdb');
 program_version_ok('vacuumdb');
@@ -33,6 +33,14 @@ $node->issues_sql_like(
 	[ 'vacuumdb', '-Z', 'postgres' ],
 	qr/statement: ANALYZE;/,
 	'vacuumdb -Z');
+$node->issues_sql_like(
+	[ 'vacuumdb', '-P2', 'postgres' ],
+	qr/statement: VACUUM \(PARALLEL 2\);/,
+	'vacuumdb -P2');
+$node->issues_sql_like(
+	[ 'vacuumdb', '-P', 'postgres' ],
+	qr/statement: VACUUM \(PARALLEL\);/,
+	'vacuumdb -P');
 $node->command_ok([qw(vacuumdb -Z --table=pg_am dbname=template1)],
 	'vacuumdb with connection string');
diff --git a/src/bin/scripts/vacuumdb.c b/src/bin/scripts/vacuumdb.c
index bcea9e5..ee7bd7e 100644
--- a/src/bin/scripts/vacuumdb.c
+++ b/src/bin/scripts/vacuumdb.c
@@ -40,6 +40,9 @@ typedef struct vacuumingOptions
 	bool		and_analyze;
 	bool		full;
 	bool		freeze;
+	int			parallel_workers;	/* -1: disabled, 0: PARALLEL without
+									 * number of workers */
 } vacuumingOptions;


@@ -108,6 +111,7 @@ main(int argc, char *argv[])
 		{"full", no_argument, NULL, 'f'},
 		{"verbose", no_argument, NULL, 'v'},
 		{"jobs", required_argument, NULL, 'j'},
+		{"parallel", optional_argument, NULL, 'P'},
 		{"maintenance-db", required_argument, NULL, 2},
 		{"analyze-in-stages", no_argument, NULL, 3},
 		{NULL, 0, NULL, 0}
@@ -133,6 +137,7 @@ main(int argc, char *argv[])

 	/* initialize options to all false */
 	memset(&vacopts, 0, sizeof(vacopts));
+	vacopts.parallel_workers = -1;

 	progname = get_progname(argv[0]);

@@ -140,7 +145,7 @@ main(int argc, char *argv[])

 	handle_help_version_opts(argc, argv, "vacuumdb", help);

-	while ((c = getopt_long(argc, argv, "h:p:U:wWeqd:zZFat:fvj:", long_options, &optindex)) != -1)
+	while ((c = getopt_long(argc, argv, "h:p:P::U:wWeqd:zZFat:fvj:", long_options, &optindex)) != -1)
 	{
 		switch (c)
 		{
@@ -207,6 +212,25 @@ main(int argc, char *argv[])
 					exit(1);
 				}
 				break;
+			case 'P':
+				{
+					int			parallel_workers = 0;
+
+					if (optarg != NULL)
+					{
+						parallel_workers = atoi(optarg);
+						if (parallel_workers <= 0)
+						{
+							fprintf(stderr, _("%s: number of parallel workers must be at least 1\n"),
+									progname);
+							exit(1);
+						}
+					}
+
+					/* 0 means PARALLEL without the parallel degree */
+					vacopts.parallel_workers = parallel_workers;
+					break;
+				}
 			case 2:
 				maintenance_db = pg_strdup(optarg);
 				break;
@@ -251,9 +275,22 @@ main(int argc, char *argv[])
 					progname, "freeze");
 			exit(1);
 		}
+		if (vacopts.parallel_workers >= 0)
+		{
+			fprintf(stderr, _("%s: cannot use the \"%s\" option when performing only analyze\n"),
+					progname, "parallel");
+			exit(1);
+		}
 		/* allow 'and_analyze' with 'analyze_only' */
 	}

+	if (vacopts.full && vacopts.parallel_workers >= 0)
+	{
+		fprintf(stderr, _("%s: cannot use the \"%s\" option with the \"%s\" option\n"),
+				progname, "full", "parallel");
+		exit(1);
+	}
+
 	setup_cancel_handler();

 	/* Avoid opening extra connections. */
@@ -667,6 +704,16 @@ prepare_vacuum_command(PQExpBuffer sql, PGconn *conn,
 			appendPQExpBuffer(sql, "%sANALYZE", sep);
 			sep = comma;
 		}
+		if (vacopts->parallel_workers > 0)
+		{
+			appendPQExpBuffer(sql, "%sPARALLEL %d", sep, vacopts->parallel_workers);
+			sep = comma;
+		}
+		if (vacopts->parallel_workers == 0)
+		{
+			appendPQExpBuffer(sql, "%sPARALLEL", sep);
+			sep = comma;
+		}
 		if (sep != paren)
 			appendPQExpBufferChar(sql, ')');
 	}
@@ -1004,6 +1051,7 @@ help(const char *progname)
 	printf(_("  -f, --full                      do full vacuuming\n"));
 	printf(_("  -F, --freeze                    freeze row transaction information\n"));
 	printf(_("  -j, --jobs=NUM                  use this many concurrent connections to vacuum\n"));
+	printf(_("  -P, --parallel=NUM              do parallel vacuuming\n"));
 	printf(_("  -q, --quiet                     don't write any messages\n"));
 	printf(_("  -t, --table='TABLE[(COLUMNS)]'  vacuum specific table(s) only\n"));
 	printf(_("  -v, --verbose                   write a lot of output\n"));
--
2.10.5
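For reference, the same operations through vacuumdb with the 0002 patch applied; these invocations mirror the statements checked in 100_vacuumdb.pl above (the database name is a placeholder):

    vacuumdb -P2 postgres     # issues: VACUUM (PARALLEL 2);
    vacuumdb -P postgres      # issues: VACUUM (PARALLEL);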
From 291fb83c321c720b45b3eda227af28dd8350d2ed Mon Sep 17 00:00:00 2001
From: Masahiko Sawada <sawada.mshk@gmail.com>
Date: Tue, 18 Dec 2018 14:48:34 +0900
Subject: [PATCH v9 1/2] Add parallel option to VACUUM command

In parallel vacuum, we do both index vacuum and index cleanup in
parallel with parallel worker processes if the table has more than
one index. All processes, including the leader process, process the
indexes one by one.

Parallel vacuum can be performed by specifying, for example,
VACUUM (PARALLEL 2) tbl, meaning that vacuum is performed with 2
parallel worker processes. Setting the parallel_workers reloption to
more than 0 also invokes parallel vacuum. The parallel vacuum degree
is limited by both the number of indexes the table has and
max_parallel_maintenance_workers.
---
 doc/src/sgml/config.sgml              |  11 +-
 doc/src/sgml/ref/vacuum.sgml          |  17 +
 src/backend/access/transam/parallel.c |   4 +
 src/backend/commands/vacuum.c         |  76 ++--
 src/backend/commands/vacuumlazy.c     | 815 +++++++++++++++++++++++++++++-----
 src/backend/nodes/equalfuncs.c        |   6 +-
 src/backend/parser/gram.y             |  73 ++-
 src/backend/postmaster/autovacuum.c   |   8 +-
 src/backend/tcop/utility.c            |   4 +-
 src/include/commands/vacuum.h         |   7 +-
 src/include/nodes/parsenodes.h        |  17 +-
 src/test/regress/expected/vacuum.out  |   2 +
 src/test/regress/sql/vacuum.sql       |   3 +
 13 files changed, 851 insertions(+), 192 deletions(-)

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 4a7121a..5c3cd09 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -2185,11 +2185,12 @@ include_dir 'conf.d'
        <listitem>
         <para>
          Sets the maximum number of parallel workers that can be
-         started by a single utility command.  Currently, the only
-         parallel utility command that supports the use of parallel
-         workers is <command>CREATE INDEX</command>, and only when
-         building a B-tree index.  Parallel workers are taken from the
-         pool of processes established by <xref
+         started by a single utility command.  Currently, the parallel
+         utility commands that support the use of parallel workers are
+         <command>CREATE INDEX</command> (only when building a B-tree
+         index) and <command>VACUUM</command> without the
+         <literal>FULL</literal> option.  Parallel workers are taken
+         from the pool of processes established by <xref
          linkend="guc-max-worker-processes"/>, limited by <xref
          linkend="guc-max-parallel-workers"/>.  Note that the requested
          number of workers may not actually be available at run time.
diff --git a/doc/src/sgml/ref/vacuum.sgml b/doc/src/sgml/ref/vacuum.sgml
index fd911f5..453890d 100644
--- a/doc/src/sgml/ref/vacuum.sgml
+++ b/doc/src/sgml/ref/vacuum.sgml
@@ -30,6 +30,7 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ <replaceable class="paramet
     FREEZE
     VERBOSE
     ANALYZE
+    PARALLEL [ <replaceable class="parameter">N</replaceable> ]
     DISABLE_PAGE_SKIPPING
     SKIP_LOCKED

@@ -143,6 +144,22 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ <replaceable class="paramet
   </varlistentry>

   <varlistentry>
+   <term><literal>PARALLEL <replaceable class="parameter">N</replaceable></literal></term>
+   <listitem>
+    <para>
+     Execute index vacuum and index cleanup in parallel with
+     <replaceable class="parameter">N</replaceable> background workers.  If the
+     parallel degree <replaceable class="parameter">N</replaceable> is omitted,
+     <command>VACUUM</command> requests (number of indexes - 1) processes,
+     which is the maximum number of parallel vacuum workers, since each index
+     is processed by
one process.  The actual number of parallel vacuum
+     workers may be less due to the setting of
+     <xref linkend="guc-max-parallel-maintenance-workers"/>.
+     This option cannot be used with the <literal>FULL</literal> option.
+    </para>
+   </listitem>
+  </varlistentry>
+
+  <varlistentry>
    <term><literal>DISABLE_PAGE_SKIPPING</literal></term>
    <listitem>
     <para>
diff --git a/src/backend/access/transam/parallel.c b/src/backend/access/transam/parallel.c
index b9a9ae5..33c46e0 100644
--- a/src/backend/access/transam/parallel.c
+++ b/src/backend/access/transam/parallel.c
@@ -23,6 +23,7 @@
 #include "catalog/index.h"
 #include "catalog/namespace.h"
 #include "commands/async.h"
+#include "commands/vacuum.h"
 #include "executor/execParallel.h"
 #include "libpq/libpq.h"
 #include "libpq/pqformat.h"
@@ -138,6 +139,9 @@ static const struct
 	},
 	{
 		"_bt_parallel_build_main", _bt_parallel_build_main
+	},
+	{
+		"lazy_parallel_vacuum_main", lazy_parallel_vacuum_main
 	}
 };

diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c
index 25b3b03..401262e 100644
--- a/src/backend/commands/vacuum.c
+++ b/src/backend/commands/vacuum.c
@@ -68,13 +68,13 @@ static BufferAccessStrategy vac_strategy;


 /* non-export function prototypes */
-static List *expand_vacuum_rel(VacuumRelation *vrel, int options);
-static List *get_all_vacuum_rels(int options);
+static List *expand_vacuum_rel(VacuumRelation *vrel, VacuumOption options);
+static List *get_all_vacuum_rels(VacuumOption options);
 static void vac_truncate_clog(TransactionId frozenXID,
 				  MultiXactId minMulti,
 				  TransactionId lastSaneFrozenXid,
 				  MultiXactId lastSaneMinMulti);
-static bool vacuum_rel(Oid relid, RangeVar *relation, int options,
+static bool vacuum_rel(Oid relid, RangeVar *relation, VacuumOption options,
 		   VacuumParams *params);

 /*
@@ -89,15 +89,15 @@ ExecVacuum(VacuumStmt *vacstmt, bool isTopLevel)
 	VacuumParams params;

 	/* sanity checks on options */
-	Assert(vacstmt->options & (VACOPT_VACUUM | VACOPT_ANALYZE));
-	Assert((vacstmt->options & VACOPT_VACUUM) ||
-		   !(vacstmt->options & (VACOPT_FULL | VACOPT_FREEZE)));
-	Assert(!(vacstmt->options & VACOPT_SKIPTOAST));
+	Assert(vacstmt->options.flags & (VACOPT_VACUUM | VACOPT_ANALYZE));
+	Assert((vacstmt->options.flags & VACOPT_VACUUM) ||
+		   !(vacstmt->options.flags & (VACOPT_FULL | VACOPT_FREEZE)));
+	Assert(!(vacstmt->options.flags & VACOPT_SKIPTOAST));

 	/*
 	 * Make sure VACOPT_ANALYZE is specified if any column lists are present.
 	 */
-	if (!(vacstmt->options & VACOPT_ANALYZE))
+	if (!(vacstmt->options.flags & VACOPT_ANALYZE))
 	{
 		ListCell   *lc;

@@ -112,11 +112,17 @@ ExecVacuum(VacuumStmt *vacstmt, bool isTopLevel)
 		}
 	}

+	if ((vacstmt->options.flags & VACOPT_FULL) &&
+		(vacstmt->options.flags & VACOPT_PARALLEL))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("cannot specify FULL option with PARALLEL option")));
+
 	/*
 	 * All freeze ages are zero if the FREEZE option is given; otherwise pass
 	 * them as -1 which means to use the default values.
 	 */
-	if (vacstmt->options & VACOPT_FREEZE)
+	if (vacstmt->options.flags & VACOPT_FREEZE)
 	{
 		params.freeze_min_age = 0;
 		params.freeze_table_age = 0;
@@ -163,7 +169,7 @@ ExecVacuum(VacuumStmt *vacstmt, bool isTopLevel)
 * memory context that will not disappear at transaction commit.
*/ void -vacuum(int options, List *relations, VacuumParams *params, +vacuum(VacuumOption options, List *relations, VacuumParams *params, BufferAccessStrategy bstrategy, bool isTopLevel) { static bool in_vacuum = false; @@ -174,7 +180,7 @@ vacuum(int options, List *relations, VacuumParams *params, Assert(params != NULL); - stmttype = (options & VACOPT_VACUUM) ? "VACUUM" : "ANALYZE"; + stmttype = (options.flags & VACOPT_VACUUM) ? "VACUUM" : "ANALYZE"; /* * We cannot run VACUUM inside a user transaction block; if we were inside @@ -184,7 +190,7 @@ vacuum(int options, List *relations, VacuumParams *params, * * ANALYZE (without VACUUM) can run either way. */ - if (options & VACOPT_VACUUM) + if (options.flags & VACOPT_VACUUM) { PreventInTransactionBlock(isTopLevel, stmttype); in_outer_xact = false; @@ -206,8 +212,8 @@ vacuum(int options, List *relations, VacuumParams *params, /* * Sanity check DISABLE_PAGE_SKIPPING option. */ - if ((options & VACOPT_FULL) != 0 && - (options & VACOPT_DISABLE_PAGE_SKIPPING) != 0) + if ((options.flags & VACOPT_FULL) != 0 && + (options.flags & VACOPT_DISABLE_PAGE_SKIPPING) != 0) ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("VACUUM option DISABLE_PAGE_SKIPPING cannot be used with FULL"))); @@ -216,7 +222,7 @@ vacuum(int options, List *relations, VacuumParams *params, * Send info about dead objects to the statistics collector, unless we are * in autovacuum --- autovacuum.c does this for itself. */ - if ((options & VACOPT_VACUUM) && !IsAutoVacuumWorkerProcess()) + if ((options.flags & VACOPT_VACUUM) && !IsAutoVacuumWorkerProcess()) pgstat_vacuum_stat(); /* @@ -281,11 +287,11 @@ vacuum(int options, List *relations, VacuumParams *params, * transaction block, and also in an autovacuum worker, use own * transactions so we can release locks sooner. */ - if (options & VACOPT_VACUUM) + if (options.flags & VACOPT_VACUUM) use_own_xacts = true; else { - Assert(options & VACOPT_ANALYZE); + Assert(options.flags & VACOPT_ANALYZE); if (IsAutoVacuumWorkerProcess()) use_own_xacts = true; else if (in_outer_xact) @@ -335,13 +341,13 @@ vacuum(int options, List *relations, VacuumParams *params, { VacuumRelation *vrel = lfirst_node(VacuumRelation, cur); - if (options & VACOPT_VACUUM) + if (options.flags & VACOPT_VACUUM) { if (!vacuum_rel(vrel->oid, vrel->relation, options, params)) continue; } - if (options & VACOPT_ANALYZE) + if (options.flags & VACOPT_ANALYZE) { /* * If using separate xacts, start one for analyze. Otherwise, @@ -354,7 +360,7 @@ vacuum(int options, List *relations, VacuumParams *params, PushActiveSnapshot(GetTransactionSnapshot()); } - analyze_rel(vrel->oid, vrel->relation, options, params, + analyze_rel(vrel->oid, vrel->relation, options.flags, params, vrel->va_cols, in_outer_xact, vac_strategy); if (use_own_xacts) @@ -390,7 +396,7 @@ vacuum(int options, List *relations, VacuumParams *params, StartTransactionCommand(); } - if ((options & VACOPT_VACUUM) && !IsAutoVacuumWorkerProcess()) + if ((options.flags & VACOPT_VACUUM) && !IsAutoVacuumWorkerProcess()) { /* * Update pg_database.datfrozenxid, and truncate pg_xact if possible. @@ -603,7 +609,7 @@ vacuum_open_relation(Oid relid, RangeVar *relation, VacuumParams *params, * are made in vac_context. 
*/ static List * -expand_vacuum_rel(VacuumRelation *vrel, int options) +expand_vacuum_rel(VacuumRelation *vrel, VacuumOption options) { List *vacrels = NIL; MemoryContext oldcontext; @@ -635,7 +641,7 @@ expand_vacuum_rel(VacuumRelation *vrel, int options) * below, as well as find_all_inheritors's expectation that the caller * holds some lock on the starting relation. */ - rvr_opts = (options & VACOPT_SKIP_LOCKED) ? RVR_SKIP_LOCKED : 0; + rvr_opts = (options.flags & VACOPT_SKIP_LOCKED) ? RVR_SKIP_LOCKED : 0; relid = RangeVarGetRelidExtended(vrel->relation, AccessShareLock, rvr_opts, @@ -647,7 +653,7 @@ expand_vacuum_rel(VacuumRelation *vrel, int options) */ if (!OidIsValid(relid)) { - if (options & VACOPT_VACUUM) + if (options.flags & VACOPT_VACUUM) ereport(WARNING, (errcode(ERRCODE_LOCK_NOT_AVAILABLE), errmsg("skipping vacuum of \"%s\" --- lock not available", @@ -673,7 +679,7 @@ expand_vacuum_rel(VacuumRelation *vrel, int options) * Make a returnable VacuumRelation for this rel if user is a proper * owner. */ - if (vacuum_is_relation_owner(relid, classForm, options)) + if (vacuum_is_relation_owner(relid, classForm, options.flags)) { oldcontext = MemoryContextSwitchTo(vac_context); vacrels = lappend(vacrels, makeVacuumRelation(vrel->relation, @@ -742,7 +748,7 @@ expand_vacuum_rel(VacuumRelation *vrel, int options) * the current database. The list is built in vac_context. */ static List * -get_all_vacuum_rels(int options) +get_all_vacuum_rels(VacuumOption options) { List *vacrels = NIL; Relation pgclass; @@ -760,7 +766,7 @@ get_all_vacuum_rels(int options) Oid relid = classForm->oid; /* check permissions of relation */ - if (!vacuum_is_relation_owner(relid, classForm, options)) + if (!vacuum_is_relation_owner(relid, classForm, options.flags)) continue; /* @@ -1521,7 +1527,7 @@ vac_truncate_clog(TransactionId frozenXID, * At entry and exit, we are not inside a transaction. */ static bool -vacuum_rel(Oid relid, RangeVar *relation, int options, VacuumParams *params) +vacuum_rel(Oid relid, RangeVar *relation, VacuumOption options, VacuumParams *params) { LOCKMODE lmode; Relation onerel; @@ -1542,7 +1548,7 @@ vacuum_rel(Oid relid, RangeVar *relation, int options, VacuumParams *params) */ PushActiveSnapshot(GetTransactionSnapshot()); - if (!(options & VACOPT_FULL)) + if (!(options.flags & VACOPT_FULL)) { /* * In lazy vacuum, we can set the PROC_IN_VACUUM flag, which lets @@ -1582,10 +1588,10 @@ vacuum_rel(Oid relid, RangeVar *relation, int options, VacuumParams *params) * vacuum, but just ShareUpdateExclusiveLock for concurrent vacuum. Either * way, we can be sure that no other backend is vacuuming the same table. */ - lmode = (options & VACOPT_FULL) ? AccessExclusiveLock : ShareUpdateExclusiveLock; + lmode = (options.flags & VACOPT_FULL) ? AccessExclusiveLock : ShareUpdateExclusiveLock; /* open the relation and get the appropriate lock on it */ - onerel = vacuum_open_relation(relid, relation, params, options, lmode); + onerel = vacuum_open_relation(relid, relation, params, options.flags, lmode); /* leave if relation could not be opened or locked */ if (!onerel) @@ -1605,7 +1611,7 @@ vacuum_rel(Oid relid, RangeVar *relation, int options, VacuumParams *params) */ if (!vacuum_is_relation_owner(RelationGetRelid(onerel), onerel->rd_rel, - options & VACOPT_VACUUM)) + options.flags & VACOPT_VACUUM)) { relation_close(onerel, lmode); PopActiveSnapshot(); @@ -1677,7 +1683,7 @@ vacuum_rel(Oid relid, RangeVar *relation, int options, VacuumParams *params) * us to process it. 
In VACUUM FULL, though, the toast table is
	 * automatically rebuilt by cluster_rel so we shouldn't recurse to it.
	 */
-	if (!(options & VACOPT_SKIPTOAST) && !(options & VACOPT_FULL))
+	if (!(options.flags & VACOPT_SKIPTOAST) && !(options.flags & VACOPT_FULL))
 		toast_relid = onerel->rd_rel->reltoastrelid;
 	else
 		toast_relid = InvalidOid;
@@ -1696,7 +1702,7 @@ vacuum_rel(Oid relid, RangeVar *relation, int options, VacuumParams *params)
 	/*
 	 * Do the actual work --- either FULL or "lazy" vacuum
 	 */
-	if (options & VACOPT_FULL)
+	if (options.flags & VACOPT_FULL)
 	{
 		int			cluster_options = 0;

@@ -1704,7 +1710,7 @@ vacuum_rel(Oid relid, RangeVar *relation, int options, VacuumParams *params)
 		relation_close(onerel, NoLock);
 		onerel = NULL;

-		if ((options & VACOPT_VERBOSE) != 0)
+		if ((options.flags & VACOPT_VERBOSE) != 0)
 			cluster_options |= CLUOPT_VERBOSE;

 		/* VACUUM FULL is now a variant of CLUSTER; see cluster.c */
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index 8134c52..d4acb47 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -22,6 +22,19 @@
 * of index scans performed.  So we don't use maintenance_work_mem memory for
 * the TID array, just enough to hold as many heap tuples as fit on one page.
 *
+ * Lazy vacuum supports parallel execution with parallel worker processes.  In
+ * parallel vacuum, we perform both index vacuum and index cleanup in parallel.
+ * Each index is processed by one vacuum process.  At the beginning of lazy
+ * vacuum (at lazy_scan_heap) we prepare the parallel context and initialize
+ * the shared memory segment that contains shared information as well as the
+ * memory space for dead tuples.  When starting either index vacuum or index
+ * cleanup, we launch parallel worker processes.  Once all indexes are
+ * processed, the parallel worker processes exit and the leader process
+ * re-initializes the shared memory segment.  Note that the parallel workers
+ * live only during one index vacuum or index cleanup pass, whereas the leader
+ * process neither exits from parallel mode nor destroys the parallel context
+ * until the lazy vacuum is done.  Since no updates are allowed during parallel
+ * mode, we update the index statistics after exiting from parallel mode.
 *
 * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
@@ -41,8 +54,10 @@
 #include "access/heapam_xlog.h"
 #include "access/htup_details.h"
 #include "access/multixact.h"
+#include "access/parallel.h"
 #include "access/transam.h"
 #include "access/visibilitymap.h"
+#include "access/xact.h"
 #include "access/xlog.h"
 #include "catalog/storage.h"
 #include "commands/dbcommands.h"
@@ -55,6 +70,8 @@
 #include "storage/bufmgr.h"
 #include "storage/freespace.h"
 #include "storage/lmgr.h"
+#include "storage/spin.h"
+#include "tcop/tcopprot.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/pg_rusage.h"
@@ -111,10 +128,79 @@
 */
 #define PREFETCH_SIZE			((BlockNumber) 32)

+/* DSM keys for parallel lazy vacuum */
+#define PARALLEL_VACUUM_KEY_SHARED			UINT64CONST(0xFFFFFFFFFFF00001)
+#define PARALLEL_VACUUM_KEY_DEAD_TUPLES		UINT64CONST(0xFFFFFFFFFFF00002)
+#define PARALLEL_VACUUM_KEY_QUERY_TEXT		UINT64CONST(0xFFFFFFFFFFF00003)
+
+/*
+ * Struct for an index bulk-deletion statistic that is used for parallel
+ * lazy vacuum.  This is allocated in a dynamic shared memory segment.
+ */
+typedef struct LVIndStats
+{
+	bool		updated;		/* are the stats updated? */
+	IndexBulkDeleteResult stats;
+} LVIndStats;
+
+/*
+ * LVTidMap controls the dead tuple TIDs collected during the heap scan.
+ * This is allocated in a dynamic shared memory segment in parallel lazy
+ * vacuum mode, or in local memory otherwise.
+ */
+typedef struct LVTidMap
+{
+	int			max_dead_tuples;	/* # slots allocated in array */
+	int			num_dead_tuples;	/* current # of entries */
+	/* List of TIDs of tuples we intend to delete */
+	/* NB: this list is ordered by TID address */
+	ItemPointerData itemptrs[FLEXIBLE_ARRAY_MEMBER];	/* array of ItemPointerData */
+} LVTidMap;
+#define SizeOfLVTidMap offsetof(LVTidMap, itemptrs) + sizeof(ItemPointerData)
+
+/*
+ * Status for parallel index vacuum and index cleanup.  This is allocated in
+ * a dynamic shared memory segment.
+ */
+typedef struct LVShared
+{
+	/*
+	 * Target table relid and vacuum settings.  These fields are not modified
+	 * during the lazy vacuum.
+	 */
+	Oid			relid;
+	bool		is_wraparound;
+	int			elevel;
+
+	/*
+	 * Tells the vacuum workers whether to do index vacuum or index cleanup.
+	 */
+	bool		for_cleanup;
+
+	/*
+	 * Fields for index vacuum or index cleanup, or both, necessary for
+	 * IndexVacuumInfo.
+	 *
+	 * reltuples is the total number of input heap tuples.  We set it to the
+	 * old live tuples for index vacuum, or to the new live tuples for index
+	 * cleanup.
+	 *
+	 * estimated_count is true if reltuples is an estimated value.
+	 */
+	double		reltuples;
+	bool		estimated_count;
+
+	/*
+	 * Variables to control parallel index vacuum.  The variable-sized field
+	 * 'indstats' must come last.
+	 */
+	pg_atomic_uint32 nprocessed;
+	LVIndStats	indstats[FLEXIBLE_ARRAY_MEMBER];
+} LVShared;
+#define SizeOfLVShared offsetof(LVShared, indstats) + sizeof(LVIndStats)
+
 typedef struct LVRelStats
 {
-	/* hasindex = true means two-pass strategy; false means one-pass */
-	bool		hasindex;
 	/* Overall statistics about rel */
 	BlockNumber old_rel_pages;	/* previous value of pg_class.relpages */
 	BlockNumber rel_pages;		/* total number of pages */
@@ -129,16 +215,34 @@ typedef struct LVRelStats
 	BlockNumber pages_removed;
 	double		tuples_deleted;
 	BlockNumber nonempty_pages; /* actually, last nonempty page + 1 */
-	/* List of TIDs of tuples we intend to delete */
-	/* NB: this list is ordered by TID address */
-	int			num_dead_tuples;	/* current # of entries */
-	int			max_dead_tuples;	/* # slots allocated in array */
-	ItemPointer dead_tuples;	/* array of ItemPointerData */
 	int			num_index_scans;
 	TransactionId latestRemovedXid;
 	bool		lock_waiter_detected;
 } LVRelStats;

+/*
+ * Working state for lazy vacuum execution.  This is present only in the
+ * leader process.  In parallel lazy vacuum, 'lvshared' and 'pcxt' are not
+ * NULL and point into a dynamic shared memory segment.
+ */
+typedef struct LVState
+{
+	Oid			relid;
+	Relation	relation;
+	LVRelStats *vacrelstats;
+	Relation   *indRels;
+	/* nindexes > 0 means two-pass strategy; nindexes == 0 means one-pass */
+	int			nindexes;
+
+	/* Lazy vacuum options and scan status */
+	VacuumOption options;
+	bool		is_wraparound;
+	bool		aggressive;
+
+	/* Variables for parallel lazy vacuum */
+	LVShared   *lvshared;
+	ParallelContext *pcxt;
+} LVState;

 /* A few variables that don't seem worth passing around as parameters */
 static int	elevel = -1;
@@ -151,31 +255,44 @@ static BufferAccessStrategy vac_strategy;


 /* non-export function prototypes */
-static void lazy_scan_heap(Relation onerel, int options,
-			   LVRelStats *vacrelstats, Relation *Irel, int nindexes,
-			   bool aggressive);
-static void lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats);
+static void lazy_scan_heap(LVState *lvstate);
+static void lazy_vacuum_heap(LVState *lvstate, LVTidMap *dead_tuples);
 static bool lazy_check_needs_freeze(Buffer buf, bool *hastup);
-static void lazy_vacuum_index(Relation indrel,
-				  IndexBulkDeleteResult **stats,
-				  LVRelStats *vacrelstats);
-static void lazy_cleanup_index(Relation indrel,
-				   IndexBulkDeleteResult *stats,
-				   LVRelStats *vacrelstats);
-static int lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,
-				 int tupindex, LVRelStats *vacrelstats, Buffer *vmbuffer);
+static IndexBulkDeleteResult *lazy_vacuum_index(Relation indrel,
+				  IndexBulkDeleteResult *stats,
+				  double reltuples,
+				  LVTidMap *dead_tuples);
+static IndexBulkDeleteResult *lazy_cleanup_index(Relation indrel,
+				   IndexBulkDeleteResult *stats,
+				   double reltuples, bool estimated_count,
+				   bool update_stats);
+static int lazy_vacuum_page(LVState *lvstate, Relation onerel, BlockNumber blkno,
+				 Buffer buffer, int tupindex, Buffer *vmbuffer,
+				 TransactionId latestRemovedXid, LVTidMap *dead_tuples);
 static bool should_attempt_truncation(LVRelStats *vacrelstats);
 static void lazy_truncate_heap(Relation onerel, LVRelStats *vacrelstats);
 static BlockNumber count_nondeletable_pages(Relation onerel,
 						 LVRelStats *vacrelstats);
-static void lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks);
-static void lazy_record_dead_tuple(LVRelStats *vacrelstats,
-					   ItemPointer itemptr);
+static LVTidMap *lazy_space_alloc(LVState *lvstate, BlockNumber relblocks,
+					   int parallel_workers);
+static void lazy_record_dead_tuple(LVTidMap *dead_tuples, ItemPointer itemptr);
 static bool lazy_tid_reaped(ItemPointer itemptr, void *state);
 static int	vac_cmp_itemptr(const void *left, const void *right);
 static bool heap_page_is_all_visible(Relation rel, Buffer buf,
 						 TransactionId *visibility_cutoff_xid, bool *all_frozen);
-
+static int lazy_compute_parallel_workers(Relation rel, int nrequests, int nindexes);
+static LVTidMap *lazy_prepare_parallel(LVState *lvstate, long maxtuples, int request);
+static void lazy_end_parallel(LVState *lvstate, bool update_indstats);
+static void lazy_begin_parallel_vacuum_index(LVState *lvstate, bool for_cleanup);
+static void lazy_end_parallel_vacuum_index(LVState *lvstate);
+static void lazy_vacuum_all_indexes_for_leader(LVState *lvstate,
+						 IndexBulkDeleteResult **stats,
+						 LVTidMap *dead_tuples,
+						 bool do_parallel,
+						 bool for_cleanup);
+static void lazy_vacuum_all_indexes_for_worker(Relation *indrels, int nindexes,
+						 LVShared *lvshared, LVTidMap *dead_tuples,
+						 bool for_cleanup);

 /*
 *	lazy_vacuum_rel() -- perform LAZY VACUUM for one heap relation
@@ -187,9 +304,10 @@ static bool heap_page_is_all_visible(Relation rel, Buffer buf,
 *		and locked the relation.
 */
 void
-lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
+lazy_vacuum_rel(Relation onerel, VacuumOption options, VacuumParams *params,
 				BufferAccessStrategy bstrategy)
 {
+	LVState    *lvstate;
 	LVRelStats *vacrelstats;
 	Relation   *Irel;
 	int			nindexes;
@@ -201,6 +319,7 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 				write_rate;
 	bool		aggressive;		/* should we scan all unfrozen pages? */
 	bool		scanned_all_unfrozen;	/* actually scanned all such pages? */
+	bool		hasindex;
 	TransactionId xidFullScanLimit;
 	MultiXactId mxactFullScanLimit;
 	BlockNumber new_rel_pages;
@@ -218,7 +337,7 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 		starttime = GetCurrentTimestamp();
 	}

-	if (options & VACOPT_VERBOSE)
+	if (options.flags & VACOPT_VERBOSE)
 		elevel = INFO;
 	else
 		elevel = DEBUG2;
@@ -246,7 +365,7 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 											   xidFullScanLimit);
 	aggressive |= MultiXactIdPrecedesOrEquals(onerel->rd_rel->relminmxid,
 											  mxactFullScanLimit);
-	if (options & VACOPT_DISABLE_PAGE_SKIPPING)
+	if (options.flags & VACOPT_DISABLE_PAGE_SKIPPING)
 		aggressive = true;

 	vacrelstats = (LVRelStats *) palloc0(sizeof(LVRelStats));
@@ -259,10 +378,20 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,

 	/* Open all indexes of the relation */
 	vac_open_indexes(onerel, RowExclusiveLock, &nindexes, &Irel);
-	vacrelstats->hasindex = (nindexes > 0);
+	hasindex = (nindexes > 0);
+
+	/* Create a lazy vacuum working state */
+	lvstate = (LVState *) palloc0(sizeof(LVState));
+	lvstate->vacrelstats = vacrelstats;
+	lvstate->relation = onerel;
+	lvstate->indRels = Irel;
+	lvstate->nindexes = nindexes;
+	lvstate->options = options;
+	lvstate->aggressive = aggressive;
+	lvstate->is_wraparound = params->is_wraparound;

 	/* Do the vacuuming */
-	lazy_scan_heap(onerel, options, vacrelstats, Irel, nindexes, aggressive);
+	lazy_scan_heap(lvstate);

 	/* Done with indexes */
 	vac_close_indexes(nindexes, Irel, NoLock);
@@ -333,7 +462,7 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 						new_rel_pages,
 						new_live_tuples,
 						new_rel_allvisible,
-						vacrelstats->hasindex,
+						hasindex,
 						new_frozen_xid,
 						new_min_multi,
 						false);
@@ -465,14 +594,29 @@ vacuum_log_cleanup_info(Relation rel, LVRelStats *vacrelstats)
 *		dead-tuple TIDs, invoke vacuuming of indexes and call lazy_vacuum_heap
 *		to reclaim dead line pointers.
 *
+ *		If the table has more than one index and parallel lazy vacuum is
+ *		requested, we run both index vacuum and index cleanup in parallel.
+ *		When allocating the space for the lazy heap scan, we enter parallel
+ *		mode, create the parallel context and initialize a dynamic shared
+ *		memory segment for the dead tuples.  dead_tuples points either to the
+ *		dynamic shared memory segment in parallel vacuum, or to local memory
+ *		in single process vacuum.  Before starting parallel index vacuum and
+ *		parallel index cleanup we launch parallel workers.  All parallel
+ *		workers exit after all indexes are processed, and the leader process
+ *		re-initializes the parallel context and re-launches them the next
+ *		time.  The index statistics are updated by the leader after exiting
+ *		from parallel mode, since no writes are allowed during parallel mode.
+ *
 *		If there are no indexes then we can reclaim line pointers on the fly;
 *		dead line pointers need only be retained until all index pointers that
 *		reference them have been killed.
 */
static void
-lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
-			   Relation *Irel, int nindexes, bool aggressive)
+lazy_scan_heap(LVState *lvstate)
 {
+	Relation	onerel = lvstate->relation;
+	LVRelStats *vacrelstats = lvstate->vacrelstats;
+	LVTidMap   *dead_tuples = NULL;
 	BlockNumber nblocks,
 				blkno;
 	HeapTupleData tuple;
@@ -495,6 +639,8 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 	bool		skipping_blocks;
 	xl_heap_freeze_tuple *frozen;
 	StringInfoData buf;
+	bool		do_parallel = false;
+	int			parallel_workers = 0;
 	const int	initprog_index[] = {
 		PROGRESS_VACUUM_PHASE,
 		PROGRESS_VACUUM_TOTAL_HEAP_BLKS,
@@ -505,7 +651,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 	pg_rusage_init(&ru0);

 	relname = RelationGetRelationName(onerel);
-	if (aggressive)
+	if (lvstate->aggressive)
 		ereport(elevel,
 				(errmsg("aggressively vacuuming \"%s.%s\"",
 						get_namespace_name(RelationGetNamespace(onerel)),
@@ -521,7 +667,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 	num_tuples = live_tuples = tups_vacuumed = nkeep = nunused = 0;

 	indstats = (IndexBulkDeleteResult **)
-		palloc0(nindexes * sizeof(IndexBulkDeleteResult *));
+		palloc0(lvstate->nindexes * sizeof(IndexBulkDeleteResult *));

 	nblocks = RelationGetNumberOfBlocks(onerel);
 	vacrelstats->rel_pages = nblocks;
@@ -530,13 +676,26 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 	vacrelstats->nonempty_pages = 0;
 	vacrelstats->latestRemovedXid = InvalidTransactionId;

-	lazy_space_alloc(vacrelstats, nblocks);
+	/*
+	 * Compute the number of parallel vacuum workers to request and then
+	 * enable parallel lazy vacuum.
+	 */
+	if ((lvstate->options.flags & VACOPT_PARALLEL) != 0)
+	{
+		parallel_workers = lazy_compute_parallel_workers(lvstate->relation,
+														 lvstate->options.nworkers,
+														 lvstate->nindexes);
+		if (parallel_workers > 0)
+			do_parallel = true;
+	}
+
+	dead_tuples = lazy_space_alloc(lvstate, nblocks, parallel_workers);
 	frozen = palloc(sizeof(xl_heap_freeze_tuple) * MaxHeapTuplesPerPage);

 	/* Report that we're scanning the heap, advertising total # of blocks */
 	initprog_val[0] = PROGRESS_VACUUM_PHASE_SCAN_HEAP;
 	initprog_val[1] = nblocks;
-	initprog_val[2] = vacrelstats->max_dead_tuples;
+	initprog_val[2] = dead_tuples->max_dead_tuples;
 	pgstat_progress_update_multi_param(3, initprog_index, initprog_val);

 	/*
@@ -584,7 +743,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 	 * be replayed on any hot standby, where it can be disruptive.
*/ next_unskippable_block = 0; - if ((options & VACOPT_DISABLE_PAGE_SKIPPING) == 0) + if ((lvstate->options.flags & VACOPT_DISABLE_PAGE_SKIPPING) == 0) { while (next_unskippable_block < nblocks) { @@ -592,7 +751,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats, vmstatus = visibilitymap_get_status(onerel, next_unskippable_block, &vmbuffer); - if (aggressive) + if (lvstate->aggressive) { if ((vmstatus & VISIBILITYMAP_ALL_FROZEN) == 0) break; @@ -639,7 +798,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats, { /* Time to advance next_unskippable_block */ next_unskippable_block++; - if ((options & VACOPT_DISABLE_PAGE_SKIPPING) == 0) + if ((lvstate->options.flags & VACOPT_DISABLE_PAGE_SKIPPING) == 0) { while (next_unskippable_block < nblocks) { @@ -648,7 +807,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats, vmskipflags = visibilitymap_get_status(onerel, next_unskippable_block, &vmbuffer); - if (aggressive) + if (lvstate->aggressive) { if ((vmskipflags & VISIBILITYMAP_ALL_FROZEN) == 0) break; @@ -677,7 +836,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats, * it's not all-visible. But in an aggressive vacuum we know only * that it's not all-frozen, so it might still be all-visible. */ - if (aggressive && VM_ALL_VISIBLE(onerel, blkno, &vmbuffer)) + if (lvstate->aggressive && VM_ALL_VISIBLE(onerel, blkno, &vmbuffer)) all_visible_according_to_vm = true; } else @@ -701,7 +860,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats, * know whether it was all-frozen, so we have to recheck; but * in this case an approximate answer is OK. */ - if (aggressive || VM_ALL_FROZEN(onerel, blkno, &vmbuffer)) + if (lvstate->aggressive || VM_ALL_FROZEN(onerel, blkno, &vmbuffer)) vacrelstats->frozenskipped_pages++; continue; } @@ -714,8 +873,8 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats, * If we are close to overrunning the available space for dead-tuple * TIDs, pause and do a cycle of vacuuming before we tackle this page. */ - if ((vacrelstats->max_dead_tuples - vacrelstats->num_dead_tuples) < MaxHeapTuplesPerPage && - vacrelstats->num_dead_tuples > 0) + if ((dead_tuples->max_dead_tuples - dead_tuples->num_dead_tuples) < MaxHeapTuplesPerPage && + dead_tuples->num_dead_tuples > 0) { const int hvp_index[] = { PROGRESS_VACUUM_PHASE, @@ -743,10 +902,8 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats, PROGRESS_VACUUM_PHASE_VACUUM_INDEX); /* Remove index entries */ - for (i = 0; i < nindexes; i++) - lazy_vacuum_index(Irel[i], - &indstats[i], - vacrelstats); + lazy_vacuum_all_indexes_for_leader(lvstate, indstats, dead_tuples, + do_parallel, false); /* * Report that we are now vacuuming the heap. We also increase @@ -759,14 +916,14 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats, pgstat_progress_update_multi_param(2, hvp_index, hvp_val); /* Remove tuples from heap */ - lazy_vacuum_heap(onerel, vacrelstats); + lazy_vacuum_heap(lvstate, dead_tuples); /* * Forget the now-vacuumed tuples, and press on, but be careful * not to reset latestRemovedXid since we want that value to be * valid. */ - vacrelstats->num_dead_tuples = 0; + dead_tuples->num_dead_tuples = 0; vacrelstats->num_index_scans++; /* @@ -804,7 +961,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats, * it's OK to skip vacuuming pages we get a lock conflict on. They * will be dealt with in some future vacuum. 
*/ - if (!aggressive && !FORCE_CHECK_PAGE()) + if (!lvstate->aggressive && !FORCE_CHECK_PAGE()) { ReleaseBuffer(buf); vacrelstats->pinskipped_pages++; @@ -837,7 +994,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats, vacrelstats->nonempty_pages = blkno + 1; continue; } - if (!aggressive) + if (!lvstate->aggressive) { /* * Here, we must not advance scanned_pages; that would amount @@ -956,7 +1113,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats, has_dead_tuples = false; nfrozen = 0; hastup = false; - prev_dead_count = vacrelstats->num_dead_tuples; + prev_dead_count = dead_tuples->num_dead_tuples; maxoff = PageGetMaxOffsetNumber(page); /* @@ -995,7 +1152,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats, */ if (ItemIdIsDead(itemid)) { - lazy_record_dead_tuple(vacrelstats, &(tuple.t_self)); + lazy_record_dead_tuple(dead_tuples, &(tuple.t_self)); all_visible = false; continue; } @@ -1135,7 +1292,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats, if (tupgone) { - lazy_record_dead_tuple(vacrelstats, &(tuple.t_self)); + lazy_record_dead_tuple(dead_tuples, &(tuple.t_self)); HeapTupleHeaderAdvanceLatestRemovedXid(tuple.t_data, &vacrelstats->latestRemovedXid); tups_vacuumed += 1; @@ -1204,11 +1361,12 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats, * If there are no indexes then we can vacuum the page right now * instead of doing a second scan. */ - if (nindexes == 0 && - vacrelstats->num_dead_tuples > 0) + if (lvstate->nindexes == 0 && dead_tuples->num_dead_tuples > 0) { /* Remove tuples from heap */ - lazy_vacuum_page(onerel, blkno, buf, 0, vacrelstats, &vmbuffer); + lazy_vacuum_page(lvstate, onerel, blkno, buf, 0, &vmbuffer, + lvstate->vacrelstats->latestRemovedXid, + dead_tuples); has_dead_tuples = false; /* @@ -1216,7 +1374,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats, * not to reset latestRemovedXid since we want that value to be * valid. */ - vacrelstats->num_dead_tuples = 0; + dead_tuples->num_dead_tuples = 0; vacuumed_pages++; /* @@ -1332,7 +1490,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats, * page, so remember its free space as-is. (This path will always be * taken if there are no indexes.) */ - if (vacrelstats->num_dead_tuples == prev_dead_count) + if (dead_tuples->num_dead_tuples == prev_dead_count) RecordPageWithFreeSpace(onerel, blkno, freespace); } @@ -1366,7 +1524,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats, /* If any tuples need to be deleted, perform final vacuum cycle */ /* XXX put a threshold on min number of tuples here? 
*/ - if (vacrelstats->num_dead_tuples > 0) + if (dead_tuples->num_dead_tuples > 0) { const int hvp_index[] = { PROGRESS_VACUUM_PHASE, @@ -1382,10 +1540,8 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats, PROGRESS_VACUUM_PHASE_VACUUM_INDEX); /* Remove index entries */ - for (i = 0; i < nindexes; i++) - lazy_vacuum_index(Irel[i], - &indstats[i], - vacrelstats); + lazy_vacuum_all_indexes_for_leader(lvstate, indstats, dead_tuples, + do_parallel, false); /* Report that we are now vacuuming the heap */ hvp_val[0] = PROGRESS_VACUUM_PHASE_VACUUM_HEAP; @@ -1395,7 +1551,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats, /* Remove tuples from heap */ pgstat_progress_update_param(PROGRESS_VACUUM_PHASE, PROGRESS_VACUUM_PHASE_VACUUM_HEAP); - lazy_vacuum_heap(onerel, vacrelstats); + lazy_vacuum_heap(lvstate, dead_tuples); vacrelstats->num_index_scans++; } @@ -1412,8 +1568,11 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats, PROGRESS_VACUUM_PHASE_INDEX_CLEANUP); /* Do post-vacuum cleanup and statistics update for each index */ - for (i = 0; i < nindexes; i++) - lazy_cleanup_index(Irel[i], indstats[i], vacrelstats); + lazy_vacuum_all_indexes_for_leader(lvstate, indstats, dead_tuples, + do_parallel, true); + + if (do_parallel) + lazy_end_parallel(lvstate, true); /* If no indexes, make log report that lazy_vacuum_heap would've made */ if (vacuumed_pages) @@ -1468,8 +1627,9 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats, * process index entry removal in batches as large as possible. */ static void -lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats) +lazy_vacuum_heap(LVState *lvstate, LVTidMap *dead_tuples) { + Relation onerel = lvstate->relation; int tupindex; int npages; PGRUsage ru0; @@ -1479,7 +1639,7 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats) npages = 0; tupindex = 0; - while (tupindex < vacrelstats->num_dead_tuples) + while (tupindex < dead_tuples->num_dead_tuples) { BlockNumber tblk; Buffer buf; @@ -1488,7 +1648,7 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats) vacuum_delay_point(); - tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]); + tblk = ItemPointerGetBlockNumber(&dead_tuples->itemptrs[tupindex]); buf = ReadBufferExtended(onerel, MAIN_FORKNUM, tblk, RBM_NORMAL, vac_strategy); if (!ConditionalLockBufferForCleanup(buf)) @@ -1497,8 +1657,9 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats) ++tupindex; continue; } - tupindex = lazy_vacuum_page(onerel, tblk, buf, tupindex, vacrelstats, - &vmbuffer); + tupindex = lazy_vacuum_page(lvstate, onerel, tblk, buf, tupindex, + &vmbuffer, lvstate->vacrelstats->latestRemovedXid, + dead_tuples); /* Now that we've compacted the page, record its available space */ page = BufferGetPage(buf); @@ -1533,8 +1694,9 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats) * The return value is the first tupindex after the tuples of this page. 
 */
static int
-lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,
-				 int tupindex, LVRelStats *vacrelstats, Buffer *vmbuffer)
+lazy_vacuum_page(LVState *lvstate, Relation onerel, BlockNumber blkno,
+				 Buffer buffer, int tupindex, Buffer *vmbuffer,
+				 TransactionId latestRemovedXid, LVTidMap *dead_tuples)
 {
 	Page		page = BufferGetPage(buffer);
 	OffsetNumber unused[MaxOffsetNumber];
@@ -1546,16 +1708,16 @@ lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,

 	START_CRIT_SECTION();

-	for (; tupindex < vacrelstats->num_dead_tuples; tupindex++)
+	for (; tupindex < dead_tuples->num_dead_tuples; tupindex++)
 	{
 		BlockNumber tblk;
 		OffsetNumber toff;
 		ItemId		itemid;

-		tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]);
+		tblk = ItemPointerGetBlockNumber(&dead_tuples->itemptrs[tupindex]);
 		if (tblk != blkno)
 			break;				/* past end of tuples for this block */
-		toff = ItemPointerGetOffsetNumber(&vacrelstats->dead_tuples[tupindex]);
+		toff = ItemPointerGetOffsetNumber(&dead_tuples->itemptrs[tupindex]);
 		itemid = PageGetItemId(page, toff);
 		ItemIdSetUnused(itemid);
 		unused[uncnt++] = toff;
@@ -1576,7 +1738,7 @@ lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,
 		recptr = log_heap_clean(onerel, buffer,
 								NULL, 0, NULL, 0,
 								unused, uncnt,
-								vacrelstats->latestRemovedXid);
+								latestRemovedXid);
 		PageSetLSN(page, recptr);
 	}

@@ -1675,6 +1837,86 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)
 	return false;
 }

+/*
+ * Vacuum or clean up all indexes, with parallel workers if requested.  This
+ * function is used by the parallel vacuum leader process.  In parallel lazy
+ * vacuum, we save the index bulk-deletion results to the shared memory
+ * space.  Since all vacuum workers process different indexes, we can write
+ * them without locking.
+ */
+static void
+lazy_vacuum_all_indexes_for_leader(LVState *lvstate, IndexBulkDeleteResult **stats,
+								   LVTidMap *dead_tuples, bool do_parallel,
+								   bool for_cleanup)
+{
+	LVShared   *lvshared = lvstate->lvshared;
+	LVRelStats *vacrelstats = lvstate->vacrelstats;
+	int			nprocessed = 0;
+	int			idx;
+
+	Assert(!IsParallelWorker());
+
+	/* no job if the table has no index */
+	if (lvstate->nindexes < 1)
+		return;
+
+	if (do_parallel)
+		lazy_begin_parallel_vacuum_index(lvstate, for_cleanup);
+
+	for (;;)
+	{
+		IndexBulkDeleteResult *r = NULL;
+
+		/* Get the next index number to vacuum */
+		if (do_parallel)
+			idx = pg_atomic_fetch_add_u32(&(lvshared->nprocessed), 1);
+		else
+			idx = nprocessed++;
+
+		/* all indexes are processed? */
+		if (idx >= lvstate->nindexes)
+			break;
+
+		/*
+		 * Set the index statistics.  In parallel lazy vacuum, the index
+		 * bulk-deletion results are stored in the shared memory segment; if
+		 * a result is already updated we use it rather than setting it to
+		 * NULL.  In single process vacuum, we can always use an element of
+		 * the 'stats' array.  Note that we must check the bounds above
+		 * before touching either array.
+		 */
+		if (do_parallel)
+		{
+			if (lvshared->indstats[idx].updated)
+				r = &(lvshared->indstats[idx].stats);
+		}
+		else
+			r = stats[idx];
+
+		/*
+		 * Do vacuuming or cleanup of one index.  For index cleanup, we
+		 * don't update index statistics during parallel mode.
+		 */
+		if (!for_cleanup)
+			r = lazy_vacuum_index(lvstate->indRels[idx], r,
+								  vacrelstats->old_live_tuples,
+								  dead_tuples);
+		else
+			r = lazy_cleanup_index(lvstate->indRels[idx], r,
+								   vacrelstats->new_rel_tuples,
+								   vacrelstats->tupcount_pages < vacrelstats->rel_pages,
+								   !do_parallel);
+
+		if (do_parallel && r)
+		{
+			/* save index bulk-deletion result to the shared memory space */
+			lvshared->indstats[idx].updated = true;
+			memcpy(&(lvshared->indstats[idx].stats), r, sizeof(IndexBulkDeleteResult));
+
+			/* save pointer to the shared memory segment */
+			r = &(lvshared->indstats[idx].stats);
+		}
+	}
+
+	if (do_parallel)
+		lazy_end_parallel_vacuum_index(lvstate);
+}

 /*
 *	lazy_vacuum_index() -- vacuum one index relation.
@@ -1682,11 +1924,11 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)
 *		Delete all the index entries pointing to tuples listed in
 *		vacrelstats->dead_tuples, and update running statistics.
 */
-static void
-lazy_vacuum_index(Relation indrel,
-				  IndexBulkDeleteResult **stats,
-				  LVRelStats *vacrelstats)
+static IndexBulkDeleteResult *
+lazy_vacuum_index(Relation indrel, IndexBulkDeleteResult *stats,
+				  double reltuples, LVTidMap *dead_tuples)
 {
+	IndexBulkDeleteResult *res;
 	IndexVacuumInfo ivinfo;
 	PGRUsage	ru0;

@@ -1696,28 +1938,29 @@ lazy_vacuum_index(Relation indrel,
 	ivinfo.analyze_only = false;
 	ivinfo.estimated_count = true;
 	ivinfo.message_level = elevel;
-	/* We can only provide an approximate value of num_heap_tuples here */
-	ivinfo.num_heap_tuples = vacrelstats->old_live_tuples;
+	ivinfo.num_heap_tuples = reltuples;
 	ivinfo.strategy = vac_strategy;

 	/* Do bulk deletion */
-	*stats = index_bulk_delete(&ivinfo, *stats,
-							   lazy_tid_reaped, (void *) vacrelstats);
+	res = index_bulk_delete(&ivinfo, stats,
+							lazy_tid_reaped, (void *) dead_tuples);

 	ereport(elevel,
-			(errmsg("scanned index \"%s\" to remove %d row versions",
+			(errmsg("scanned index \"%s\" to remove %d row versions%s",
 					RelationGetRelationName(indrel),
-					vacrelstats->num_dead_tuples),
+					dead_tuples->num_dead_tuples,
+					IsParallelWorker() ? " by vacuum worker" : ""),
 			 errdetail_internal("%s", pg_rusage_show(&ru0))));
+
+	return res;
 }

 /*
 *	lazy_cleanup_index() -- do post-vacuum cleanup for one index relation.
 */
-static void
-lazy_cleanup_index(Relation indrel,
-				   IndexBulkDeleteResult *stats,
-				   LVRelStats *vacrelstats)
+static IndexBulkDeleteResult *
+lazy_cleanup_index(Relation indrel, IndexBulkDeleteResult *stats,
+				   double reltuples, bool estimated_count, bool update_stats)
 {
 	IndexVacuumInfo ivinfo;
 	PGRUsage	ru0;
@@ -1726,27 +1969,21 @@ lazy_cleanup_index(Relation indrel,

 	ivinfo.index = indrel;
 	ivinfo.analyze_only = false;
-	ivinfo.estimated_count = (vacrelstats->tupcount_pages < vacrelstats->rel_pages);
+	ivinfo.estimated_count = estimated_count;
 	ivinfo.message_level = elevel;
-
-	/*
-	 * Now we can provide a better estimate of total number of surviving
-	 * tuples (we assume indexes are more interested in that than in the
-	 * number of nominally live tuples).
-	 */
-	ivinfo.num_heap_tuples = vacrelstats->new_rel_tuples;
+	ivinfo.num_heap_tuples = reltuples;
 	ivinfo.strategy = vac_strategy;

 	stats = index_vacuum_cleanup(&ivinfo, stats);

 	if (!stats)
-		return;
+		return NULL;

 	/*
 	 * Now update statistics in pg_class, but only if the index says the count
 	 * is accurate.
*/ - if (!stats->estimated_count) + if (!stats->estimated_count && update_stats) vac_update_relstats(indrel, stats->num_pages, stats->num_index_tuples, @@ -1767,8 +2004,7 @@ lazy_cleanup_index(Relation indrel, stats->tuples_removed, stats->pages_deleted, stats->pages_free, pg_rusage_show(&ru0)))); - - pfree(stats); + return stats; } /* @@ -2078,15 +2314,16 @@ count_nondeletable_pages(Relation onerel, LVRelStats *vacrelstats) * * See the comments at the head of this file for rationale. */ -static void -lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks) +static LVTidMap * +lazy_space_alloc(LVState *lvstate, BlockNumber relblocks, int parallel_workers) { + LVTidMap *dead_tuples = NULL; long maxtuples; int vac_work_mem = IsAutoVacuumWorkerProcess() && autovacuum_work_mem != -1 ? autovacuum_work_mem : maintenance_work_mem; - if (vacrelstats->hasindex) + if (lvstate->nindexes > 0) { maxtuples = (vac_work_mem * 1024L) / sizeof(ItemPointerData); maxtuples = Min(maxtuples, INT_MAX); @@ -2100,34 +2337,44 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks) maxtuples = Max(maxtuples, MaxHeapTuplesPerPage); } else - { maxtuples = MaxHeapTuplesPerPage; + + /* + * Allocate memory for dead tuples. In parallel lazy vacuum, we enter the parallel + * mode and prepare all memory necessary for executing parallel lazy vacuum + * including the space to store dead tuples. In single process vacuum, we allocate + * them in local memory. + */ + if (parallel_workers > 0) + dead_tuples = lazy_prepare_parallel(lvstate, maxtuples, parallel_workers); + else + { + dead_tuples = (LVTidMap *) + palloc(SizeOfLVTidMap + maxtuples * sizeof(ItemPointerData)); + dead_tuples->num_dead_tuples = 0; + dead_tuples->max_dead_tuples = (int) maxtuples; } - vacrelstats->num_dead_tuples = 0; - vacrelstats->max_dead_tuples = (int) maxtuples; - vacrelstats->dead_tuples = (ItemPointer) - palloc(maxtuples * sizeof(ItemPointerData)); + return dead_tuples; } /* * lazy_record_dead_tuple - remember one deletable tuple */ static void -lazy_record_dead_tuple(LVRelStats *vacrelstats, - ItemPointer itemptr) +lazy_record_dead_tuple(LVTidMap *dead_tuples, ItemPointer itemptr) { /* * The array shouldn't overflow under normal behavior, but perhaps it * could if we are given a really small maintenance_work_mem. In that * case, just forget the last few tuples (we'll get 'em next time). */ - if (vacrelstats->num_dead_tuples < vacrelstats->max_dead_tuples) + if (dead_tuples->num_dead_tuples < dead_tuples->max_dead_tuples) { - vacrelstats->dead_tuples[vacrelstats->num_dead_tuples] = *itemptr; - vacrelstats->num_dead_tuples++; + dead_tuples->itemptrs[dead_tuples->num_dead_tuples] = *itemptr; + dead_tuples->num_dead_tuples++; pgstat_progress_update_param(PROGRESS_VACUUM_NUM_DEAD_TUPLES, - vacrelstats->num_dead_tuples); + dead_tuples->num_dead_tuples); } } @@ -2141,12 +2388,12 @@ lazy_record_dead_tuple(LVRelStats *vacrelstats, static bool lazy_tid_reaped(ItemPointer itemptr, void *state) { - LVRelStats *vacrelstats = (LVRelStats *) state; + LVTidMap *dead_tuples = (LVTidMap *) state; ItemPointer res; res = (ItemPointer) bsearch((void *) itemptr, - (void *) vacrelstats->dead_tuples, - vacrelstats->num_dead_tuples, + (void *) dead_tuples->itemptrs, + dead_tuples->num_dead_tuples, sizeof(ItemPointerData), vac_cmp_itemptr); @@ -2294,3 +2541,329 @@ heap_page_is_all_visible(Relation rel, Buffer buf, return all_visible; } + +/* + * Compute the number of parallel worker process to request. 
@@ -2294,3 +2541,329 @@ heap_page_is_all_visible(Relation rel, Buffer buf,

 	return all_visible;
 }
+
+/*
+ * Compute the number of parallel worker processes to request. Vacuum can
+ * run in parallel only if the table has more than one index, since the
+ * parallel index vacuum assigns each index to a single vacuum process.
+ * The sizes of the table and its indexes don't affect the parallel degree.
+ */
+static int
+lazy_compute_parallel_workers(Relation rel, int nrequests, int nindexes)
+{
+	int			parallel_workers = nindexes - 1;
+
+	if (nindexes < 2)
+		return 0;
+
+	if (nrequests)
+		parallel_workers = Min(nrequests, nindexes - 1);
+	else if (rel->rd_options)
+	{
+		StdRdOptions *relopts = (StdRdOptions *) rel->rd_options;
+
+		parallel_workers = Min(relopts->parallel_workers, nindexes - 1);
+	}
+
+	/* cap by max_parallel_maintenance_workers */
+	parallel_workers = Min(parallel_workers, max_parallel_maintenance_workers);
+
+	return parallel_workers;
+}
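To make the clamping rule concrete: with 5 indexes and PARALLEL 8, the
request is first clamped to nindexes - 1 = 4 and then to
max_parallel_maintenance_workers. A minimal standalone sketch of that
arithmetic (plain C; MAX_MAINT_WORKERS is a hypothetical stand-in for the
GUC, and the reloption branch is omitted):

#include <stdio.h>

#define MAX_MAINT_WORKERS 2		/* stand-in for max_parallel_maintenance_workers */
#define Min(a, b) ((a) < (b) ? (a) : (b))

static int
compute_workers(int nrequests, int nindexes)
{
	int			nworkers = nindexes - 1;	/* the leader takes one index itself */

	if (nindexes < 2)
		return 0;				/* nothing to parallelize */
	if (nrequests > 0)
		nworkers = Min(nrequests, nindexes - 1);
	/* reloption branch omitted for brevity */
	return Min(nworkers, MAX_MAINT_WORKERS);
}

int
main(void)
{
	/* 5 indexes, PARALLEL 8: min(8, 4) = 4, then capped to 2 */
	printf("%d\n", compute_workers(8, 5));	/* prints 2 */
	return 0;
}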
+
+/*
+ * Enter parallel mode, then allocate and initialize a DSM segment. Returns
+ * the shared memory space for storing dead tuples.
+ */
+static LVTidMap *
+lazy_prepare_parallel(LVState *lvstate, long maxtuples, int request)
+{
+	LVShared   *shared;
+	ParallelContext *pcxt;
+	LVTidMap   *tidmap;
+	char	   *sharedquery;
+	Size		estshared;
+	Size		estdt;
+	int			querylen;
+	int			i;
+	int			keys = 0;
+
+	EnterParallelMode();
+	pcxt = CreateParallelContext("postgres", "lazy_parallel_vacuum_main",
+								 request, true);
+	lvstate->pcxt = pcxt;
+
+	/* Estimate size for shared information -- PARALLEL_VACUUM_KEY_SHARED */
+	estshared = MAXALIGN(add_size(SizeOfLVShared,
+								  mul_size(sizeof(LVIndStats), lvstate->nindexes)));
+	shm_toc_estimate_chunk(&pcxt->estimator, estshared);
+	keys++;
+
+	/* Estimate size for dead tuples -- PARALLEL_VACUUM_KEY_DEAD_TUPLES */
+	estdt = MAXALIGN(add_size(sizeof(LVTidMap),
+							  mul_size(sizeof(ItemPointerData), maxtuples)));
+	shm_toc_estimate_chunk(&pcxt->estimator, estdt);
+	keys++;
+
+	shm_toc_estimate_keys(&pcxt->estimator, keys);
+
+	/*
+	 * Finally, estimate PARALLEL_VACUUM_KEY_QUERY_TEXT space. Autovacuum
+	 * doesn't have a debug_query_string.
+	 */
+	if (debug_query_string)
+	{
+		querylen = strlen(debug_query_string);
+		shm_toc_estimate_chunk(&pcxt->estimator, querylen + 1);
+		shm_toc_estimate_keys(&pcxt->estimator, 1);
+	}
+
+	/* create the DSM */
+	InitializeParallelDSM(pcxt);
+
+	/* prepare shared information */
+	shared = (LVShared *) shm_toc_allocate(pcxt->toc, estshared);
+	shared->relid = RelationGetRelid(lvstate->relation);
+	shared->is_wraparound = lvstate->is_wraparound;
+	shared->elevel = elevel;
+	pg_atomic_init_u32(&(shared->nprocessed), 0);
+
+	for (i = 0; i < lvstate->nindexes; i++)
+	{
+		LVIndStats *s = &(shared->indstats[i]);
+
+		s->updated = false;
+		MemSet(&(s->stats), 0, sizeof(IndexBulkDeleteResult));
+	}
+
+	shm_toc_insert(pcxt->toc, PARALLEL_VACUUM_KEY_SHARED, shared);
+	lvstate->lvshared = shared;
+
+	/* prepare the dead tuple space */
+	tidmap = (LVTidMap *) shm_toc_allocate(pcxt->toc, estdt);
+	tidmap->max_dead_tuples = (int) maxtuples;
+	tidmap->num_dead_tuples = 0;
+	MemSet(tidmap->itemptrs, 0, sizeof(ItemPointerData) * maxtuples);
+	shm_toc_insert(pcxt->toc, PARALLEL_VACUUM_KEY_DEAD_TUPLES, tidmap);
+
+	/* Store the query string for workers, if we have one */
+	if (debug_query_string)
+	{
+		sharedquery = (char *) shm_toc_allocate(pcxt->toc, querylen + 1);
+		memcpy(sharedquery, debug_query_string, querylen + 1);
+		shm_toc_insert(pcxt->toc, PARALLEL_VACUUM_KEY_QUERY_TEXT, sharedquery);
+	}
+
+	return tidmap;
+}
+
+/*
+ * Shut down workers, destroy the parallel context, and end parallel mode.
+ * If 'update_indstats' is true, we copy the statistics of all indexes
+ * before destroying the parallel context, and then update them after
+ * exiting parallel mode.
+ */
+static void
+lazy_end_parallel(LVState *lvstate, bool update_indstats)
+{
+	LVIndStats *copied_indstats = NULL;
+
+	Assert(!IsParallelWorker());
+
+	if (update_indstats && lvstate->nindexes > 0)
+	{
+		/* copy the index statistics to a temporary space */
+		copied_indstats = palloc(sizeof(LVIndStats) * lvstate->nindexes);
+		memcpy(copied_indstats, lvstate->lvshared->indstats,
+			   sizeof(LVIndStats) * lvstate->nindexes);
+	}
+
+	/* Shut down worker processes and destroy the parallel context */
+	WaitForParallelWorkersToFinish(lvstate->pcxt);
+	DestroyParallelContext(lvstate->pcxt);
+	ExitParallelMode();
+
+	if (copied_indstats)
+	{
+		int			i;
+
+		for (i = 0; i < lvstate->nindexes; i++)
+		{
+			LVIndStats *s = &(copied_indstats[i]);
+
+			/* Update index statistics */
+			if (s->updated && !s->stats.estimated_count)
+				vac_update_relstats(lvstate->indRels[i],
+									s->stats.num_pages,
+									s->stats.num_index_tuples,
+									0,
+									false,
+									InvalidTransactionId,
+									InvalidMultiXactId,
+									false);
+		}
+
+		pfree(copied_indstats);
+	}
+}
+
+/*
+ * Begin a parallel index vacuum or index cleanup. Set the shared information
+ * and launch parallel worker processes.
+ */
+static void
+lazy_begin_parallel_vacuum_index(LVState *lvstate, bool for_cleanup)
+{
+	LVRelStats *vacrelstats = lvstate->vacrelstats;
+
+	Assert(!IsParallelWorker());
+
+	/*
+	 * Request workers to do either vacuuming indexes or cleaning indexes.
+	 */
+	lvstate->lvshared->for_cleanup = for_cleanup;
+
+	if (for_cleanup)
+	{
+		/*
+		 * Now we can provide a better estimate of total number of surviving
+		 * tuples (we assume indexes are more interested in that than in the
+		 * number of nominally live tuples).
+		 */
+		lvstate->lvshared->reltuples = vacrelstats->new_rel_tuples;
+		lvstate->lvshared->estimated_count =
+			(vacrelstats->tupcount_pages < vacrelstats->rel_pages);
+	}
+	else
+	{
+		/* We can only provide an approximate value of num_heap_tuples here */
+		lvstate->lvshared->reltuples = vacrelstats->old_live_tuples;
+		lvstate->lvshared->estimated_count = true;
+	}
+
+	LaunchParallelWorkers(lvstate->pcxt);
+
+	/*
+	 * If no workers were launched, the leader process vacuums all indexes
+	 * alone. Since we may be able to launch workers for the next execution,
+	 * we don't end parallel mode yet.
+	 */
+	if (lvstate->pcxt->nworkers_launched == 0)
+		return;
+
+	WaitForParallelWorkersToAttach(lvstate->pcxt);
+}
+
+/*
+ * Wait for all worker processes to finish and reinitialize the DSM for
+ * the next execution.
+ */
+static void
+lazy_end_parallel_vacuum_index(LVState *lvstate)
+{
+	Assert(!IsParallelWorker());
+
+	WaitForParallelWorkersToFinish(lvstate->pcxt);
+
+	/* Reset the processing count */
+	pg_atomic_write_u32(&(lvstate->lvshared->nprocessed), 0);
+
+	/*
+	 * Reinitialize the DSM space so that we can relaunch parallel workers
+	 * for the next execution.
+	 */
+	ReinitializeParallelDSM(lvstate->pcxt);
+}
+
+/*
+ * Perform work within a launched parallel process.
+ *
+ * Parallel vacuum worker processes don't report the vacuum progress
+ * information.
+ */
+void
+lazy_parallel_vacuum_main(dsm_segment *seg, shm_toc *toc)
+{
+	Relation	onerel;
+	Relation   *indrels;
+	LVShared   *lvshared;
+	LVTidMap   *dead_tuples;
+	int			nindexes;
+	char	   *sharedquery;
+
+	/* Set the lazy vacuum state and open relations */
+	lvshared = (LVShared *) shm_toc_lookup(toc, PARALLEL_VACUUM_KEY_SHARED, false);
+	onerel = heap_open(lvshared->relid, ShareUpdateExclusiveLock);
+	vac_open_indexes(onerel, RowExclusiveLock, &nindexes, &indrels);
+	elevel = lvshared->elevel;
+
+	ereport(DEBUG1,
+			(errmsg("starting parallel lazy vacuum worker for %s",
+					lvshared->for_cleanup ? "cleanup" : "vacuuming")));
+
+	/* Set debug_query_string for individual workers */
+	sharedquery = shm_toc_lookup(toc, PARALLEL_VACUUM_KEY_QUERY_TEXT, true);
+
+	/* Report the query string from the leader */
+	debug_query_string = sharedquery;
+	pgstat_report_activity(STATE_RUNNING, debug_query_string);
+
+	/* Set the dead tuple space within the worker */
+	dead_tuples = (LVTidMap *) shm_toc_lookup(toc, PARALLEL_VACUUM_KEY_DEAD_TUPLES, false);
+
+	/* Set cost-based vacuum delay */
+	VacuumCostActive = (VacuumCostDelay > 0);
+	VacuumCostBalance = 0;
+	VacuumPageHit = 0;
+	VacuumPageMiss = 0;
+	VacuumPageDirty = 0;
+
+	/* Do either vacuuming indexes or cleaning indexes */
+	lazy_vacuum_all_indexes_for_worker(indrels, nindexes, lvshared,
+									   dead_tuples,
+									   lvshared->for_cleanup);
+
+	vac_close_indexes(nindexes, indrels, RowExclusiveLock);
+	heap_close(onerel, ShareUpdateExclusiveLock);
+}
+
+/*
+ * Vacuum or clean up indexes. This function is used by the parallel vacuum
+ * worker processes. Similar to the leader process, we save the bulk-deletion
+ * result to the shared memory space.
+ */
+static void
+lazy_vacuum_all_indexes_for_worker(Relation *indrels, int nindexes,
+								   LVShared *lvshared, LVTidMap *dead_tuples,
+								   bool for_cleanup)
+{
+	int			idx = 0;
+
+	Assert(IsParallelWorker());
+
+	for (;;)
+	{
+		IndexBulkDeleteResult *stats = NULL;
+
+		/* Get the next index to process */
+		idx = pg_atomic_fetch_add_u32(&(lvshared->nprocessed), 1);
+
+		/* Have all indexes been processed? */
+		if (idx >= nindexes)
+			break;
+
+		/* If this index has already been processed before, get the pointer */
+		if (lvshared->indstats[idx].updated)
+			stats = &(lvshared->indstats[idx].stats);
+
+		if (!for_cleanup)
+			stats = lazy_vacuum_index(indrels[idx], stats, lvshared->reltuples,
+									  dead_tuples);
+		else
+			stats = lazy_cleanup_index(indrels[idx], stats, lvshared->reltuples,
+									   lvshared->estimated_count, false);
+
+		if (stats)
+		{
+			/* Update the shared index statistics */
+			lvshared->indstats[idx].updated = true;
+			memcpy(&(lvshared->indstats[idx].stats), stats,
+				   sizeof(IndexBulkDeleteResult));
+		}
+	}
+}
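The nprocessed counter above is the entire scheduling mechanism: each
participant atomically claims the next unprocessed index until the counter
passes nindexes, so every index is handled exactly once per pass no matter
how many workers attach. A minimal standalone model of the claim loop using
C11 atomics (hypothetical names, one participant; the real code uses
pg_atomic_fetch_add_u32 on the DSM-resident counter):

#include <stdatomic.h>
#include <stdio.h>

/* shared counter: each participant claims the next unprocessed index */
static atomic_uint nprocessed;

static int
claim_and_process(int nindexes, const char *who)
{
	int			nclaimed = 0;

	for (;;)
	{
		unsigned int idx = atomic_fetch_add(&nprocessed, 1);

		if (idx >= (unsigned int) nindexes)
			break;				/* all indexes handed out */
		printf("%s claims index %u\n", who, idx);
		nclaimed++;
	}
	return nclaimed;
}

int
main(void)
{
	atomic_init(&nprocessed, 0);
	claim_and_process(3, "worker-0");	/* claims indexes 0, 1, 2 */
	return 0;
}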
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 273e275..2d27af5 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -1668,8 +1668,10 @@ _equalDropdbStmt(const DropdbStmt *a, const DropdbStmt *b)
 static bool
 _equalVacuumStmt(const VacuumStmt *a, const VacuumStmt *b)
 {
-	COMPARE_SCALAR_FIELD(options);
-	COMPARE_NODE_FIELD(rels);
+	if (a->options.flags != b->options.flags)
+		return false;
+	if (a->options.nworkers != b->options.nworkers)
+		return false;
+	COMPARE_NODE_FIELD(rels);

 	return true;
 }
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 2c2208f..1707959 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -187,6 +187,7 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 			   bool *deferrable, bool *initdeferred, bool *not_valid,
 			   bool *no_inherit, core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
+static VacuumOption *makeVacOpt(VacuumOptionFlag flag, int nworkers);

 %}
@@ -237,6 +238,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	struct ImportQual	*importqual;
 	InsertStmt			*istmt;
 	VariableSetStmt		*vsetstmt;
+	VacuumOption		*vacopt;
 	PartitionElem		*partelem;
 	PartitionSpec		*partspec;
 	PartitionBoundSpec	*partboundspec;
@@ -305,8 +307,8 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				create_extension_opt_item alter_extension_opt_item

 %type <ival>	opt_lock lock_type cast_context
-%type <ival>	vacuum_option_list vacuum_option_elem
-				analyze_option_list analyze_option_elem
+%type <vacopt>	vacuum_option_list vacuum_option_elem
+%type <ival>	analyze_option_list analyze_option_elem
 %type <boolean>	opt_or_replace
 				opt_grant_grant_option opt_grant_admin_option
 				opt_nowait opt_if_exists opt_with_data
@@ -10478,22 +10480,24 @@ cluster_index_specification:

 VacuumStmt: VACUUM opt_full opt_freeze opt_verbose opt_analyze opt_vacuum_relation_list
 				{
 					VacuumStmt *n = makeNode(VacuumStmt);
-					n->options = VACOPT_VACUUM;
+					n->options.flags = VACOPT_VACUUM;
 					if ($2)
-						n->options |= VACOPT_FULL;
+						n->options.flags |= VACOPT_FULL;
 					if ($3)
-						n->options |= VACOPT_FREEZE;
+						n->options.flags |= VACOPT_FREEZE;
 					if ($4)
-						n->options |= VACOPT_VERBOSE;
+						n->options.flags |= VACOPT_VERBOSE;
 					if ($5)
-						n->options |= VACOPT_ANALYZE;
+						n->options.flags |= VACOPT_ANALYZE;
+					n->options.nworkers = 0;
 					n->rels = $6;
 					$$ = (Node *)n;
 				}
 			| VACUUM '(' vacuum_option_list ')' opt_vacuum_relation_list
 				{
 					VacuumStmt *n = makeNode(VacuumStmt);
-					n->options = VACOPT_VACUUM | $3;
+					n->options.flags = VACOPT_VACUUM | $3->flags;
+					n->options.nworkers = $3->nworkers;
 					n->rels = $5;
 					$$ = (Node *) n;
 				}
@@ -10501,20 +10505,40 @@ VacuumStmt: VACUUM opt_full opt_freeze opt_verbose opt_analyze opt_vacuum_relati

 vacuum_option_list:
 			vacuum_option_elem								{ $$ = $1; }
-			| vacuum_option_list ',' vacuum_option_elem		{ $$ = $1 | $3; }
+			| vacuum_option_list ',' vacuum_option_elem
+				{
+					VacuumOption *vacopt1 = $1;
+					VacuumOption *vacopt2 = $3;
+
+					vacopt1->flags |= vacopt2->flags;
+					if (vacopt2->flags == VACOPT_PARALLEL)
+						vacopt1->nworkers = vacopt2->nworkers;
+					pfree(vacopt2);
+					$$ = vacopt1;
+				}
 		;

 vacuum_option_elem:
-			analyze_keyword		{ $$ = VACOPT_ANALYZE; }
-			| VERBOSE			{ $$ = VACOPT_VERBOSE; }
-			| FREEZE			{ $$ = VACOPT_FREEZE; }
-			| FULL				{ $$ = VACOPT_FULL; }
+			analyze_keyword		{ $$ = makeVacOpt(VACOPT_ANALYZE, 0); }
+			| VERBOSE			{ $$ = makeVacOpt(VACOPT_VERBOSE, 0); }
+			| FREEZE			{ $$ = makeVacOpt(VACOPT_FREEZE, 0); }
+			| FULL				{ $$ = makeVacOpt(VACOPT_FULL, 0); }
+			| PARALLEL			{ $$ = makeVacOpt(VACOPT_PARALLEL, 0); }
+			| PARALLEL ICONST
+				{
+					if ($2 < 1)
+						ereport(ERROR,
+								(errcode(ERRCODE_SYNTAX_ERROR),
+								 errmsg("parallel vacuum degree must be at least 1"),
+								 parser_errposition(@1)));
+					$$ = makeVacOpt(VACOPT_PARALLEL, $2);
+				}
 			| IDENT
 				{
 					if (strcmp($1, "disable_page_skipping") == 0)
-						$$ = VACOPT_DISABLE_PAGE_SKIPPING;
+						$$ = makeVacOpt(VACOPT_DISABLE_PAGE_SKIPPING, 0);
 					else if (strcmp($1, "skip_locked") == 0)
-						$$ = VACOPT_SKIP_LOCKED;
+						$$ = makeVacOpt(VACOPT_SKIP_LOCKED, 0);
 					else
 						ereport(ERROR,
 								(errcode(ERRCODE_SYNTAX_ERROR),
@@ -10526,16 +10550,16 @@ vacuum_option_elem:

 AnalyzeStmt: analyze_keyword opt_verbose opt_vacuum_relation_list
 				{
 					VacuumStmt *n = makeNode(VacuumStmt);
-					n->options = VACOPT_ANALYZE;
+					n->options.flags = VACOPT_ANALYZE;
 					if ($2)
-						n->options |= VACOPT_VERBOSE;
+						n->options.flags |= VACOPT_VERBOSE;
 					n->rels = $3;
 					$$ = (Node *)n;
 				}
 			| analyze_keyword '(' analyze_option_list ')' opt_vacuum_relation_list
 				{
 					VacuumStmt *n = makeNode(VacuumStmt);
-					n->options = VACOPT_ANALYZE | $3;
+					n->options.flags = VACOPT_ANALYZE | $3;
 					n->rels = $5;
 					$$ = (Node *) n;
 				}
@@ -16033,6 +16057,19 @@ makeXmlExpr(XmlExprOp op, char *name, List *named_args, List *args,
 	return (Node *) x;
 }
+
+/*
+ * Create a VacuumOption with the given options.
+ */
+static VacuumOption *
+makeVacOpt(VacuumOptionFlag flag, int nworkers)
+{
+	VacuumOption *vacopt = palloc(sizeof(VacuumOption));
+
+	vacopt->flags = flag;
+	vacopt->nworkers = nworkers;
+	return vacopt;
+}

 /*
  * Merge the input and output parameters of a table function.
  */
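The list rule above boils down to a merge over a (flags, nworkers) pair in
which only a PARALLEL element contributes a worker count. A minimal
standalone model (plain C, hypothetical Opt/OptFlag types, not part of the
patch):

#include <stdio.h>

typedef enum { OPT_VERBOSE = 1 << 0, OPT_PARALLEL = 1 << 1 } OptFlag;
typedef struct Opt { int flags; int nworkers; } Opt;

/* mirror of the list rule: OR the flags; a PARALLEL element brings nworkers */
static Opt
merge(Opt a, Opt b)
{
	a.flags |= b.flags;
	if (b.flags == OPT_PARALLEL)
		a.nworkers = b.nworkers;
	return a;
}

int
main(void)
{
	Opt			verbose = {OPT_VERBOSE, 0};
	Opt			parallel = {OPT_PARALLEL, 4};
	Opt			merged = merge(verbose, parallel);

	/* prints flags=3 nworkers=4 */
	printf("flags=%d nworkers=%d\n", merged.flags, merged.nworkers);
	return 0;
}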
"VACUUM" : "ANALYZE"); /* forbidden in parallel mode due to CommandIsReadOnly */ ExecVacuum(stmt, isTopLevel); @@ -2570,7 +2570,7 @@ CreateCommandTag(Node *parsetree) break; case T_VacuumStmt: - if (((VacuumStmt *) parsetree)->options & VACOPT_VACUUM) + if (((VacuumStmt *) parsetree)->options.flags & VACOPT_VACUUM) tag = "VACUUM"; else tag = "ANALYZE"; diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h index dfff23a..5b87241 100644 --- a/src/include/commands/vacuum.h +++ b/src/include/commands/vacuum.h @@ -15,6 +15,7 @@ #define VACUUM_H #include "access/htup.h" +#include "access/parallel.h" #include "catalog/pg_class.h" #include "catalog/pg_statistic.h" #include "catalog/pg_type.h" @@ -163,7 +164,7 @@ extern int vacuum_multixact_freeze_table_age; /* in commands/vacuum.c */ extern void ExecVacuum(VacuumStmt *vacstmt, bool isTopLevel); -extern void vacuum(int options, List *relations, VacuumParams *params, +extern void vacuum(VacuumOption options, List *relations, VacuumParams *params, BufferAccessStrategy bstrategy, bool isTopLevel); extern void vac_open_indexes(Relation relation, LOCKMODE lockmode, int *nindexes, Relation **Irel); @@ -197,8 +198,10 @@ extern Relation vacuum_open_relation(Oid relid, RangeVar *relation, VacuumParams *params, int options, LOCKMODE lmode); /* in commands/vacuumlazy.c */ -extern void lazy_vacuum_rel(Relation onerel, int options, +extern void lazy_vacuum_rel(Relation onerel, VacuumOption options, VacuumParams *params, BufferAccessStrategy bstrategy); +extern void lazy_parallel_vacuum_main(dsm_segment *seg, shm_toc *toc); + /* in commands/analyze.c */ extern void analyze_rel(Oid relid, RangeVar *relation, int options, diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h index e5bdc1c..a2b4662 100644 --- a/src/include/nodes/parsenodes.h +++ b/src/include/nodes/parsenodes.h @@ -3144,7 +3144,7 @@ typedef struct ClusterStmt * and VACOPT_ANALYZE must be set in options. 
 * ----------------------
 */
-typedef enum VacuumOption
+typedef enum VacuumOptionFlag
 {
 	VACOPT_VACUUM = 1 << 0,		/* do VACUUM */
 	VACOPT_ANALYZE = 1 << 1,	/* do ANALYZE */
@@ -3153,7 +3153,14 @@ typedef enum VacuumOption
 	VACOPT_FULL = 1 << 4,		/* FULL (non-concurrent) vacuum */
 	VACOPT_SKIP_LOCKED = 1 << 5,	/* skip if cannot get lock */
 	VACOPT_SKIPTOAST = 1 << 6,	/* don't process the TOAST table, if any */
-	VACOPT_DISABLE_PAGE_SKIPPING = 1 << 7	/* don't skip any pages */
+	VACOPT_PARALLEL = 1 << 7,	/* do lazy VACUUM in parallel */
+	VACOPT_DISABLE_PAGE_SKIPPING = 1 << 8	/* don't skip any pages */
+} VacuumOptionFlag;
+
+typedef struct VacuumOption
+{
+	VacuumOptionFlag flags;		/* OR of VacuumOptionFlag */
+	int			nworkers;		/* # of parallel vacuum workers */
 } VacuumOption;

 /*
@@ -3173,9 +3180,9 @@ typedef struct VacuumRelation

 typedef struct VacuumStmt
 {
-	NodeTag		type;
-	int			options;		/* OR of VacuumOption flags */
-	List	   *rels;			/* list of VacuumRelation, or NIL for all */
+	NodeTag		type;
+	VacuumOption options;
+	List	   *rels;			/* list of VacuumRelation, or NIL for all */
 } VacuumStmt;

 /* ----------------------
diff --git a/src/test/regress/expected/vacuum.out b/src/test/regress/expected/vacuum.out
index fa9d663..9b5b7dc 100644
--- a/src/test/regress/expected/vacuum.out
+++ b/src/test/regress/expected/vacuum.out
@@ -80,6 +80,8 @@ CONTEXT:  SQL function "do_analyze" statement 1
 SQL function "wrap_do_analyze" statement 1
 VACUUM FULL vactst;
 VACUUM (DISABLE_PAGE_SKIPPING) vaccluster;
+VACUUM (PARALLEL) vaccluster;
+VACUUM (PARALLEL 2) vaccluster;
 -- partitioned table
 CREATE TABLE vacparted (a int, b char) PARTITION BY LIST (a);
 CREATE TABLE vacparted1 PARTITION OF vacparted FOR VALUES IN (1);
diff --git a/src/test/regress/sql/vacuum.sql b/src/test/regress/sql/vacuum.sql
index 9defa0d..f92c4e5 100644
--- a/src/test/regress/sql/vacuum.sql
+++ b/src/test/regress/sql/vacuum.sql
@@ -61,6 +61,9 @@ VACUUM FULL vaccluster;
 VACUUM FULL vactst;

 VACUUM (DISABLE_PAGE_SKIPPING) vaccluster;
+VACUUM (PARALLEL) vaccluster;
+VACUUM (PARALLEL 2) vaccluster;
+
 -- partitioned table
 CREATE TABLE vacparted (a int, b char) PARTITION BY LIST (a);
-- 
2.10.5