On Thu, Nov 7, 2024 at 10:42 AM Andres Freund <and...@anarazel.de> wrote:
>
> Hi,

Thanks for the review!
Attached v2 should address your feedback and also fixes a few bugs with v1.

I have yet to run very long-running benchmarks. I did start running more
varied benchmark scenarios -- but all still under two hours. So far, the
behavior is as expected.

> On 2024-11-01 19:35:22 -0400, Melanie Plageman wrote:
> > Because we want to amortize our eager scanning across a few vacuums,
> > we cap the maximum number of successful eager scans to a percentage of
> > the total number of all-visible but not all-frozen pages in the table
> > (currently 20%).
>
> One thing worth mentioning around here seems that we currently can't
> partially-aggressively freeze tuples that are "too young" and how that
> interacts with everything else.

I'm not sure I know what you mean. Are you talking about how we don't freeze
tuples that are visible to everyone but younger than the freeze limit?

> > In the attached chart.png, you can see the vm_page_freezes climbing
> > steadily with the patch, whereas on master, there are sudden spikes
> > aligned with the aggressive vacuums. You can also see that the number
> > of pages that are all-visible but not all-frozen grows steadily on
> > master until the aggressive vacuum. This is vacuum's "backlog" of
> > freezing work.
>
> What's the reason for all-visible-but-not-all-frozen to increase to a higher
> value initially than where it later settles?

My guess is that it has to do with shorter, more frequent vacuums at the
beginning of the benchmark when the relation is smaller (and we haven't
exceeded shared buffers or memory yet). They are setting pages all-visible, but
we haven't used up enough xids yet to qualify for an eager vacuum.

The peak of AVnAF pages aligns with the start of the first eager vacuum. We
don't do any eager scanning until we are sure there is some data requiring
freezing (see this criterion):

    if (TransactionIdIsNormal(vacrel->cutoffs.relfrozenxid) &&
        TransactionIdPrecedesOrEquals(vacrel->cutoffs.relfrozenxid,
                                      vacrel->cutoffs.FreezeLimit))

Once we have used up enough xids to qualify for the first eager vacuum, the
number of AVnAF pages starts to go down.

It would follow from this theory that we would see a build-up like this after
each relfrozenxid advancement (so after the next aggressive vacuum).

But I think we don't see this because the vacuums take longer by the time
aggressive vacuums have started, so we end up using enough XIDs between vacuums
to qualify for eager vacuuming in the vacuums that follow the aggressive one.

That is just my theory though.

> > Below is the comparative WAL volume, checkpointer and background
> > writer writes, reads and writes done by all other backend types, time
> > spent vacuuming in milliseconds, and p99 latency. Notice that overall
> > vacuum IO time is substantially lower with the patch.
> >
> >    version     wal  cptr_bgwriter_w   other_rw  vac_io_time  p99_lat
> >     patch   770 GB          5903264  235073744   513722         1
> >     master  767 GB          5908523  216887764  1003654        16
>
> Hm. It's not clear to me why other_rw is higher with the patch? After all,
> given the workload, there's no chance of unnecessarily freezing tuples? Is
> that just because at the end of the benchmark there's leftover work?

So other_rw is mostly client backend and autovacuum reads and writes. It is
higher with the patch because there are actually more vacuum reads and writes
with the patch than on master. However, the autovacuum worker read and write
time is much lower. Those blocks are more often in shared buffers, I would
guess.

> > From 67781cc2511bb7d62ccc9461f1787272820abcc4 Mon Sep 17 00:00:00 2001
> > From: Melanie Plageman <melanieplage...@gmail.com>
> > Date: Mon, 28 Oct 2024 11:07:50 -0400
> > Subject: [PATCH v1 4/9] Replace uses of blkno local variable in
> >  lazy_scan_heap()
>
> Largely LGTM, but I'm not sure that it's worth having as a separate commit.

I've squashed it into the commit that makes heap_vac_scan_next_block() return
the next block number.

> > From 67b5565ad57d3b196695f85811dde2044ba79f3e Mon Sep 17 00:00:00 2001
> > From: Melanie Plageman <melanieplage...@gmail.com>
> > Date: Mon, 28 Oct 2024 11:14:24 -0400
> > Subject: [PATCH v1 5/9] Move vacuum VM buffer release
> >
> > The VM buffer for the next unskippable block can be released after the
> > main loop in lazy_scan_heap(). Doing so de-clutters
> > heap_vac_scan_next_block() and opens up more refactoring options.
>
> That's vague...

I've changed the commit message to justify the move by the fact that all the
other vmbuffer releases in the vacuum code are in the body of lazy_scan_heap()
too (not in helpers), so this is more consistent.

> > From 8485dc400b3d4e9f895170af4f5fb1bb959b8495 Mon Sep 17 00:00:00 2001
> > From: Melanie Plageman <melanieplage...@gmail.com>
> > Date: Mon, 28 Oct 2024 11:36:58 -0400
> > Subject: [PATCH v1 6/9] Remove superfluous next_block local variable in 
> > vacuum
> >  code
> >
> > Reduce the number of block related variables in lazy_scan_heap() and its
> > helpers by removing the next_block local variable from
> > heap_vac_scan_next_block().
>
> I don't mind this change, but I also don't get how it's related to anything
> else here or why it's really better than the status quo.

Because this feature adds more complexity to the already complex vacuum code
that selects which blocks to scan, I thought it was important to reduce the
number of variables.
I think the patches in this set that seek to streamline
heap_vac_scan_next_block() help overall clarity.

> > diff --git a/src/backend/access/heap/vacuumlazy.c 
> > b/src/backend/access/heap/vacuumlazy.c
> > index 4b1eadea1f2..52c9d49f2b1 100644
> > --- a/src/backend/access/heap/vacuumlazy.c
> > +++ b/src/backend/access/heap/vacuumlazy.c
> > @@ -1112,19 +1112,17 @@ static bool
> >  heap_vac_scan_next_block(LVRelState *vacrel, BlockNumber *blkno,
> >                                                bool 
> > *all_visible_according_to_vm)
> >  {
> > -     BlockNumber next_block;
> > -
> >       /* relies on InvalidBlockNumber + 1 overflowing to 0 on first call */
> > -     next_block = vacrel->current_block + 1;
> > +     vacrel->current_block++;
>
> I realize this isn't introduced in this commit, but darn, that's ugly.

I didn't like having special cases for block 0 in heap_vac_scan_next_block()
and personally prefer it this way. I thought the special-casing made the code
more error-prone and harder to understand.

> > From 78ad9e022b95e024ff5bfa96af78e9e44730c970 Mon Sep 17 00:00:00 2001
> > From: Melanie Plageman <melanieplage...@gmail.com>
> > Date: Mon, 28 Oct 2024 11:42:10 -0400
> > Subject: [PATCH v1 7/9] Make heap_vac_scan_next_block() return BlockNumber
>
>
> > @@ -857,7 +857,8 @@ lazy_scan_heap(LVRelState *vacrel)
> >       vacrel->next_unskippable_allvis = false;
> >       vacrel->next_unskippable_vmbuffer = InvalidBuffer;
> >
> > -     while (heap_vac_scan_next_block(vacrel, &blkno, 
> > &all_visible_according_to_vm))
> > +     while (BlockNumberIsValid(blkno = heap_vac_scan_next_block(vacrel,
> > +                                                                           
> >                                              &all_visible_according_to_vm)))
>
> Personally I'd write this as
>
> while (true)
> {
>     BlockNumber blkno;
>
>     blkno = heap_vac_scan_next_block(vacrel, ...);
>
>     if (!BlockNumberIsValid(blkno))
>        break;
>
> Mostly because it's good to use more minimal scopes when possible,
> particularly when previously the scope intentionally was larger. But also
> partially because I don't love variable assignments inside a macro call,
> inside a while().

I changed it to be as you suggest. I will concede that variable assignments
inside a macro call inside a while() are a bit much.

> > From 818d1c3b068c6705611256cfc3eb1f10bdc0b684 Mon Sep 17 00:00:00 2001
> > From: Melanie Plageman <melanieplage...@gmail.com>
> > Date: Fri, 1 Nov 2024 18:25:05 -0400
> > Subject: [PATCH v1 8/9] WIP: Add more general summary to vacuumlazy.c
> >
> > Currently the summary at the top of vacuumlazy.c provides some specific
> > details related to the new dead TID storage in 17. I plan to add a
> > summary and maybe some sub-sections to contextualize it.
>
> I like this idea. It's hard to understand vacuumlazy.c without already
> understanding vacuumlazy.c, which isn't a good situation.

I've added a bit more to it in this version, but I likely could use some more
text on index vacuuming. I'm thinking I'll commit something minimal but correct
and let people elaborate more later.

> > ---
> >  src/backend/access/heap/vacuumlazy.c | 11 +++++++++++
> >  1 file changed, 11 insertions(+)
> >
> > diff --git a/src/backend/access/heap/vacuumlazy.c 
> > b/src/backend/access/heap/vacuumlazy.c
> > index 7ce69953ba0..15a04c6b10b 100644
> > --- a/src/backend/access/heap/vacuumlazy.c
> > +++ b/src/backend/access/heap/vacuumlazy.c
> > @@ -3,6 +3,17 @@
> >   * vacuumlazy.c
> >   *     Concurrent ("lazy") vacuuming.
> >   *
> > + * Heap relations are vacuumed in three main phases. In the first phase,
> > + * vacuum scans relation pages, pruning and freezing tuples and saving dead
> > + * tuples' TIDs in a TID store. If that TID store fills up or vacuum 
> > finishes
> > + * scanning the relation, it progresses to the second phase: index 
> > vacuuming.
> > + * After index vacuuming is complete, vacuum scans the blocks of the 
> > relation
> > + * indicated by the TIDs in the TID store and reaps the dead tuples, 
> > freeing
> > + * that space for future tuples. Finally, vacuum may truncate the relation 
> > if
> > + * it has emptied pages at the end. XXX: this summary needs work.
>
> Yea, at least we ought to mention that the phasing can be different when there
> are no indexes and that the later phases can heuristically be omitted when
> there aren't enough dead items.

I've done this.

> > From f21f0bab1dbe675be4b4dddcb2eea486d8a69d36 Mon Sep 17 00:00:00 2001
> > From: Melanie Plageman <melanieplage...@gmail.com>
> > Date: Mon, 28 Oct 2024 12:15:08 -0400
> > Subject: [PATCH v1 9/9] Eagerly scan all-visible pages to amortize 
> > aggressive
> >  vacuum
> >
> > Introduce semi-aggressive vacuums, which scan some of the all-visible
> > but not all-frozen pages in the relation to amortize the cost of an
> > aggressive vacuum.
>
> I wonder if "aggressive" is really the right terminology going
> forward... Somehow it doesn't seem particularly descriptive anymore if, in
> many workloads, almost all vacuums are going to be aggressive-ish.

I've changed it to normal, eager, and aggressive.

> > diff --git a/src/backend/access/heap/vacuumlazy.c 
> > b/src/backend/access/heap/vacuumlazy.c
> > index 15a04c6b10b..adabb5ff5f1 100644
> > --- a/src/backend/access/heap/vacuumlazy.c
> > +++ b/src/backend/access/heap/vacuumlazy.c
> > + *
> > + * On the assumption that different regions of the table are likely to 
> > contain
> > + * similarly aged data, we use a localized failure cap instead of a global 
> > cap
> > + * for the whole relation. The failure count is reset on each region of the
> > + * table, comprised of RELSEG_SIZE blocks (or 1/4 of the table size for a
> > + * small table). In each region, we tolerate 
> > MAX_SUCCESSIVE_EAGER_SCAN_FAILS
> > + * before suspending eager scanning until the end of the region.
>
> I'm a bit surprised to see such large regions. Why not something finer, in the
> range of a few megabytes?  The FSM steers new tuples quite aggressively to the
> start of the table, which means that in many workloads there will be old and
> new data interspersed at the start of the table. Using RELSEG_SIZE sized
> regions for semi-aggressive vacuuming will mean that we'll often not do any
> semi-aggressive processing beyond the start of the relation, as we'll reach
> the failure rate quickly.

I've changed the region size to 32 MB, but I also decreased the allowed
failures to 128 blocks per region (to avoid eagerly scanning too many blocks
when we are failing to freeze them).
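
To make that concrete, here is a minimal sketch of the constants I have in mind
(the macro names and exact expressions are my shorthand here, not necessarily
what the patch uses):

    /* Each eager scan region covers 32 MB worth of blocks */
    #define EAGER_SCAN_REGION_SIZE ((BlockNumber) ((32 * 1024 * 1024) / BLCKSZ))

    /* Stop eager scanning a region after this many failed freeze attempts */
    #define EAGER_SCAN_MAX_FAILS_PER_REGION 128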

This doesn't completely address your concern about missing freezing
opportunities.

However, this version does randomize the eager scan start block selection in
the first region. The first eager scan block will be somewhere in the first
region to avoid re-scanning unfreezable blocks across multiple vacuums. I will
note that this problem is unlikely to persist across multiple vacuums. If the
page is being modified frequently, it won't be all-visible. You would have to
have this pattern for it to be an issue: modify the page, vacuum, vacuum,
modify, vacuum, vacuum (since the first vacuum after the modification will set
the page all-visible).
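
A rough sketch of the randomization (the field name and call site are my
assumptions; pg_prng is just one way to do it):

    /*
     * Pick a random block within the first region as the point where eager
     * scanning may begin, so successive vacuums don't keep re-scanning the
     * same unfreezable blocks at the start of the table.
     */
    vacrel->next_eager_scan_region_start = (BlockNumber)
        pg_prng_uint64_range(&pg_global_prng_state,
                             0, EAGER_SCAN_REGION_SIZE - 1);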

> I also find it layering-wise a bit weird to use RELSEG_SIZE, that's really imo
> is just an md.c concept.

Makes sense. The new version has a dedicated macro.

> > +/*
> > + * Semi-aggressive vacuums eagerly scan some all-visible but not all-frozen
> > + * pages. Since our goal is to freeze these pages, an eager scan that 
> > fails to
> > + * set the page all-frozen in the VM is considered to have "failed".
> > + *
> > + * On the assumption that different regions of the table tend to have
> > + * similarly aged data, once we fail to freeze 
> > MAX_SUCCESSIVE_EAGER_SCAN_FAILS
> > + * blocks, we suspend eager scanning until vacuum has progressed to another
> > + * region of the table with potentially older data.
> > + */
> > +#define MAX_SUCCESSIVE_EAGER_SCAN_FAILS 1024
>
> Can this really be a constant, given that the semi-aggressive regions are
> shrunk for small tables?

Good point. This version actually disables eager scans for relations smaller
than a single region.
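
Roughly like this (a sketch; the exact condition and the field/value names are
my assumptions):

    /* Relations smaller than one region are never eagerly scanned */
    if (vacrel->rel_pages < EAGER_SCAN_REGION_SIZE)
        vacrel->eagerness = VAC_NORMAL;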

> >  {
> >       /* Target heap relation and its indexes */
> > @@ -153,8 +208,22 @@ typedef struct LVRelState
> > +     /*
> > +      * Whether or not this is an aggressive, semi-aggressive, or 
> > unaggressive
> > +      * VACUUM. A fully aggressive vacuum must set relfrozenxid >= 
> > FreezeLimit
> > +      * and therefore must scan every unfrozen tuple. A semi-aggressive 
> > vacuum
> > +      * will scan a certain number of all-visible pages until it is 
> > downgraded
> > +      * to an unaggressive vacuum.
> > +      */
> > +     VacAggressive aggressive;
>
> - why is VacAggressive defined in vacuum.h? Isn't this fairly tightly coupled
>   to heapam?

It was because I had vacuum_get_cutoffs() return the aggressiveness. I've
changed this so that the enum can be defined in vacuumlazy.c.

> - Kinda feels like the type should be named VacAggressivness or such?

I changed it to VacEagerness.
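
So the levels end up looking something like this (the enumerator names are my
guess; only the type name is settled):

    typedef enum VacEagerness
    {
        VAC_NORMAL,
        VAC_EAGER,
        VAC_AGGRESSIVE,
    } VacEagerness;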

> > +     /*
> > +      * A semi-aggressive vacuum that has failed to freeze too many eagerly
> > +      * scanned blocks in a row suspends eager scanning. unaggressive_to 
> > is the
> > +      * block number of the first block eligible for resumed eager 
> > scanning.
> > +      */
> > +     BlockNumber unaggressive_to;
>
> What's it set to otherwise? What is it set to in aggressive vacuums?

The idea was to set it to 0 for aggressive vacuum and never advance it.

However, for eager vacuum, there was actually a problem with this version of
the patch set that meant that we weren't actually enabling and disabling eager
scanning per region. Instead we were waiting until we hit the fail limit and
then disabling eager scanning for region-size # of blocks. This was effectively
a cooling off period as opposed to a region-based approach.

I've changed this in the current version. Now, for eager vacuum, we save the
block number of the next eager scan region in next_eager_scan_region_start.
Then when we cross over into the next region, we advance it. Eager scanning is
enabled as long as eager_pages.remaining_fails is > 0. When we cross into a new
region, we reset it to re-enable eager scanning if it was disabled.
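
In code, the per-block bookkeeping looks roughly like this (a sketch with
assumed field and macro names, not a verbatim excerpt from the patch):

    /* Crossing into a new region re-enables eager scanning for that region */
    if (blkno >= vacrel->next_eager_scan_region_start)
    {
        vacrel->eager_pages.remaining_fails = EAGER_SCAN_MAX_FAILS_PER_REGION;
        vacrel->next_eager_scan_region_start += EAGER_SCAN_REGION_SIZE;
    }

    /*
     * Eager scanning of all-visible but not all-frozen pages is permitted
     * only while the current region still has failures to spare.
     */
    eager_scan_enabled = (vacrel->eager_pages.remaining_fails > 0);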

For normal and aggressive vacuum, I set next_eager_scan_region_start to
InvalidBlockNumber to ensure we never trigger region calculations. However, for
aggressive vacuums, I do keep track of all-visible pages scanned using the same
counter in LVRelState that counts eagerly scanned pages. I'm not sure whether it
is confusing to reuse accounting that is labeled as eager scan accounting for
aggressive vacuums. In fact, in the logs, I print out the number of all-visible
pages scanned -- which will be > 0 for both aggressive and eager vacuums.

There is a tradeoff between using the eager scan counters for all vacuum types
(initializing them to different values based on the vacuum eagerness level) and
guarding every reference to them by vacuum type (so they are never initialized
to valid-but-special values).

Let me know what you think about using the counters for aggressive vacuum too.

On the topic of the region-based method vs the cool-off method, with the region
method, if all of the failures are concentrated at the end of the region, we
will start eager scanning again as soon as we start the next region. With the
cool-off method we would wait a consistent number of blocks. But I think the
region method is still better. The region cutoff may be arbitrary but it
produces a consistent amount of extra scanning. What do you think?

> > +     /*
> > +      * The number of eagerly scanned blocks a semi-aggressive vacuum 
> > failed to
> > +      * freeze (due to age) in the current eager scan region. It is reset 
> > each
> > +      * time we hit MAX_SUCCESSIVE_EAGER_SCAN_FAILS.
> > +      */
> > +     BlockNumber eager_scanned_failed_frozen;
> > +
> > +     /*
> > +      * The remaining number of blocks a semi-aggressive vacuum will 
> > consider
> > +      * eager scanning. This is initialized to EAGER_SCAN_SUCCESS_RATE of 
> > the
> > +      * total number of all-visible but not all-frozen pages.
> > +      */
> > +     BlockNumber remaining_eager_scan_successes;
>
> I think it might look better if you just bundled these into a struct like
>
>       struct
>       {
>         BlockNumber scanned;
>         BlockNumber failed_frozen;
>         BlockNumber remaining_successes;
>       } eager_pages;

Done

> > +     visibilitymap_count(rel, &orig_rel_allvisible, &orig_rel_allfrozen);
> > +     vacrel->remaining_eager_scan_successes =
> > +             (BlockNumber) (EAGER_SCAN_SUCCESS_RATE * (orig_rel_allvisible 
> > - orig_rel_allfrozen));
> >
> >       if (verbose)
> >       {
> > -             if (vacrel->aggressive)
> > -                     ereport(INFO,
> > -                                     (errmsg("aggressively vacuuming 
> > \"%s.%s.%s\"",
> > -                                                     vacrel->dbname, 
> > vacrel->relnamespace,
> > -                                                     vacrel->relname)));
> > -             else
> > -                     ereport(INFO,
> > -                                     (errmsg("vacuuming \"%s.%s.%s\"",
> > -                                                     vacrel->dbname, 
> > vacrel->relnamespace,
> > -                                                     vacrel->relname)));
> > +             switch (vacrel->aggressive)
> > +             {
> > +                     case VAC_UNAGGRESSIVE:
> > +                             ereport(INFO,
> > +                                             (errmsg("vacuuming 
> > \"%s.%s.%s\"",
> > +                                                             
> > vacrel->dbname, vacrel->relnamespace,
> > +                                                             
> > vacrel->relname)));
> > +                             break;
> > +
> > +                     case VAC_AGGRESSIVE:
> > +                             ereport(INFO,
> > +                                             (errmsg("aggressively 
> > vacuuming \"%s.%s.%s\"",
> > +                                                             
> > vacrel->dbname, vacrel->relnamespace,
> > +                                                             
> > vacrel->relname)));
> > +                             break;
> > +
> > +                     case VAC_SEMIAGGRESSIVE:
> > +                             ereport(INFO,
> > +                                             (errmsg("semiaggressively 
> > vacuuming \"%s.%s.%s\"",
> > +                                                             
> > vacrel->dbname, vacrel->relnamespace,
> > +                                                             
> > vacrel->relname)));
> > +                             break;
> > +             }
>
> Wonder if we should have a function that returns the aggressiveness of a
> vacuum as an already translated string. There are other places where we emit
> the aggressiveness as part of a message, and it's pretty silly to duplicate
> most of the message.

I've added a helper to return text with the vacuum eagerness level. I used
gettext_noop() to mark it for translation later because I think the autovacuum
logging uses _() and ereports are translated. But I'm not sure this is
completely right.
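
The helper is along these lines (the function name and the exact strings here
are a sketch, not a quote from the patch):

    static const char *
    vac_eagerness_msg(VacEagerness eagerness)
    {
        switch (eagerness)
        {
            case VAC_AGGRESSIVE:
                return gettext_noop("aggressively vacuuming");
            case VAC_EAGER:
                return gettext_noop("eagerly vacuuming");
            case VAC_NORMAL:
                return gettext_noop("vacuuming");
        }
        return NULL;            /* keep compiler quiet */
    }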

> > @@ -545,11 +668,13 @@ heap_vacuum_rel(Relation rel, VacuumParams *params,
> >        * Non-aggressive VACUUMs may advance them by any amount, or not at 
> > all.
> >        */
> >       Assert(vacrel->NewRelfrozenXid == vacrel->cutoffs.OldestXmin ||
> > -                TransactionIdPrecedesOrEquals(vacrel->aggressive ? 
> > vacrel->cutoffs.FreezeLimit :
> > +                TransactionIdPrecedesOrEquals(vacrel->aggressive == 
> > VAC_AGGRESSIVE ?
> > +                                                                           
> >    vacrel->cutoffs.FreezeLimit :
> >                                                                             
> >    vacrel->cutoffs.relfrozenxid,
> >                                                                             
> >    vacrel->NewRelfrozenXid));
> >       Assert(vacrel->NewRelminMxid == vacrel->cutoffs.OldestMxact ||
> > -                MultiXactIdPrecedesOrEquals(vacrel->aggressive ? 
> > vacrel->cutoffs.MultiXactCutoff :
> > +                MultiXactIdPrecedesOrEquals(vacrel->aggressive == 
> > VAC_AGGRESSIVE ?
> > +                                                                        
> > vacrel->cutoffs.MultiXactCutoff :
> >                                                                          
> > vacrel->cutoffs.relminmxid,
> >                                                                          
> > vacrel->NewRelminMxid));
> >       if (vacrel->skippedallvis)
>
> These are starting to feel somewhat complicated. Wonder if it'd be easier to
> read if they were written as normal ifs.

Did this.

> > +/*
> > + * Helper to decrement a block number to 0 without wrapping around.
> > + */
> > +static void
> > +decrement_blkno(BlockNumber *block)
> > +{
> > +     if ((*block) > 0)
> > +             (*block)--;
> > +}

I've removed it.

> > @@ -956,11 +1094,23 @@ lazy_scan_heap(LVRelState *vacrel)
> >               if (!got_cleanup_lock)
> >                       LockBuffer(buf, BUFFER_LOCK_SHARE);
> >
> > +             page_freezes = vacrel->vm_page_freezes;
> > +
> >               /* Check for new or empty pages before lazy_scan_[no]prune 
> > call */
> >               if (lazy_scan_new_or_empty(vacrel, buf, blkno, page, 
> > !got_cleanup_lock,
> >                                                                  vmbuffer))
> >               {
> >                       /* Processed as new/empty page (lock and pin 
> > released) */
> > +
> > +                     /* count an eagerly scanned page as a failure or a 
> > success */
> > +                     if (was_eager_scanned)
> > +                     {
> > +                             if (vacrel->vm_page_freezes > page_freezes)
> > +                                     
> > decrement_blkno(&vacrel->remaining_eager_scan_successes);
> > +                             else
> > +                                     vacrel->eager_scanned_failed_frozen++;
> > +                     }
> > +
> >                       continue;
> >               }
>
> Maybe I'm confused, but ISTM that remaining_eager_scan_successes shouldn't
> actually be a BlockNumber, given it doesn't actually indicates a specific
> block...

All of the other counters like this in LVRelState are BlockNumbers (see
lpdead_item_pages, missed_dead_pages, etc.). I'm fine with not using a
BlockNumber for this, but I would want a justification for why it is
different from the other cumulative block counters.

> I don't understand why we would sometimes want to treat empty pages as a
> failure? They can't fail to be frozen, no?
>
> I'm not sure it makes sense to count them as progress towards the success
> limit either - afaict we just rediscovered free space in the table. That's imo
> separate from semi-aggressive freezing.

That's a great point. In fact, I don't think we could ever have exercised this
code anyway. Since we always freeze empty pages, there shouldn't be any
all-visible but not all-frozen empty pages. I've removed this code.

> Storing page_freezes as a copy of vacrel->vm_page_freezes and then checking if
> that increased feels like a somewhat ugly way of tracking if freezing
> happend. There's no more direct way.

This version adds an output parameter to lazy_scan_prune() for that.
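
In the caller that looks roughly like this (the parameter name and its exact
placement in the signature are my assumptions):

    bool        vm_page_frozen = false;

    lazy_scan_prune(vacrel, buf, blkno, page, vmbuffer,
                    all_visible_according_to_vm,
                    &has_lpdead_items, &vm_page_frozen);

    /* Classify an eagerly scanned page as a success or a failure */
    if (was_eager_scanned)
    {
        if (vm_page_frozen)
            vacrel->eager_pages.remaining_successes--;
        else
            vacrel->eager_pages.failed_frozen++;
    }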

> Why is decrement_blkno() needed? How can we ever get into negative territory?
> Shouldn't eager scanning have been disabled when
> remaining_eager_scan_successes reaches zero and thus prevent
> remaining_eager_scan_successes from ever going below zero?

Right, I've removed this.

> > @@ -1144,7 +1310,65 @@ heap_vac_scan_next_block(LVRelState *vacrel,
> >                */
> >               bool            skipsallvis;
> >
> > -             find_next_unskippable_block(vacrel, &skipsallvis);
> > +             /*
> > +              * Figure out if we should disable eager scan going forward or
> > +              * downgrade to an unaggressive vacuum altogether.
> > +              */
> > +             if (vacrel->aggressive == VAC_SEMIAGGRESSIVE)
> > +             {
> > +                     /*
> > +                      * If we hit our success limit, there is no need to 
> > eagerly scan
> > +                      * any additional pages. Downgrade the vacuum to 
> > unaggressive.
> > +                      */
> > +                     if (vacrel->remaining_eager_scan_successes == 0)
> > +                             vacrel->aggressive = VAC_UNAGGRESSIVE;
> > +
> > +                     /*
> > +                      * If we hit the max number of failed eager scans for 
> > this region
> > +                      * of the table, figure out where the next eager scan 
> > region
> > +                      * should start. Eager scanning is effectively 
> > disabled until we
> > +                      * scan a block in that new region.
> > +                      */
> > +                     else if (vacrel->eager_scanned_failed_frozen >=
> > +                                      MAX_SUCCESSIVE_EAGER_SCAN_FAILS)
> > +                     {
> > +                             BlockNumber region_size,
> > +                                                     offset;
> > +
>
> Why are we doing this logic here, rather than after incrementing
> eager_scanned_failed_frozen? Seems that'd limit the amount of times we need to
> run through this logic substantially?

I've moved it there. Actually, all of the logic has been moved around to make
the region method work.

> Hm - isn't consider_eager_scan potentially outdated after
> find_next_unskippable_block() iterated over a bunch of blocks? If
> consider_eager_scan is false, this could very well go into the next
> "semi-aggressive region", where consider_eager_scan should have been
> re-enabled, no?

Yep. Great catch. I believe I've fixed this by removing consider_eager_scan and
instead checking whether we have remaining failures in the current region,
resetting the remaining-failures count whenever we enter a new region.

It also turns out that was_eager_scanned had some issues, which I believe I've
now fixed as well.

- Melanie
From 1995eff8c5e6d3f9d3e0dcb0482b5a652d8d3c75 Mon Sep 17 00:00:00 2001
From: Melanie Plageman <melanieplage...@gmail.com>
Date: Mon, 28 Oct 2024 10:53:37 -0400
Subject: [PATCH v2 01/10] Rename LVRelState->frozen_pages

Rename frozen_pages to new_frozen_tuple_pages in LVRelState, the struct
used for tracking state during vacuuming of a heap relation.
frozen_pages sounds like it includes every all-frozen page. That is a
misnomer. It does not include pages with already frozen tuples. It also
includes pages that are not actually all-frozen.

Author: Melanie Plageman
Reviewed-by: Andres Freund

Discussion: https://postgr.es/m/ctdjzroezaxmiyah3gwbwm67defsrwj2b5fpfs4ku6msfpxeia%40mwjyqlhwr2wu
---
 src/backend/access/heap/vacuumlazy.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 6a3588cf817..3077ee8ec32 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -188,7 +188,7 @@ typedef struct LVRelState
 	BlockNumber rel_pages;		/* total number of pages */
 	BlockNumber scanned_pages;	/* # pages examined (not skipped via VM) */
 	BlockNumber removed_pages;	/* # pages removed by relation truncation */
-	BlockNumber frozen_pages;	/* # pages with newly frozen tuples */
+	BlockNumber new_frozen_tuple_pages; /* # pages with newly frozen tuples */
 	BlockNumber lpdead_item_pages;	/* # pages with LP_DEAD items */
 	BlockNumber missed_dead_pages;	/* # pages with missed dead tuples */
 	BlockNumber nonempty_pages; /* actually, last nonempty page + 1 */
@@ -407,7 +407,7 @@ heap_vacuum_rel(Relation rel, VacuumParams *params,
 	/* Initialize page counters explicitly (be tidy) */
 	vacrel->scanned_pages = 0;
 	vacrel->removed_pages = 0;
-	vacrel->frozen_pages = 0;
+	vacrel->new_frozen_tuple_pages = 0;
 	vacrel->lpdead_item_pages = 0;
 	vacrel->missed_dead_pages = 0;
 	vacrel->nonempty_pages = 0;
@@ -696,9 +696,10 @@ heap_vacuum_rel(Relation rel, VacuumParams *params,
 								 vacrel->NewRelminMxid, diff);
 			}
 			appendStringInfo(&buf, _("frozen: %u pages from table (%.2f%% of total) had %lld tuples frozen\n"),
-							 vacrel->frozen_pages,
+							 vacrel->new_frozen_tuple_pages,
 							 orig_rel_pages == 0 ? 100.0 :
-							 100.0 * vacrel->frozen_pages / orig_rel_pages,
+							 100.0 * vacrel->new_frozen_tuple_pages /
+							 orig_rel_pages,
 							 (long long) vacrel->tuples_frozen);
 			if (vacrel->do_index_vacuuming)
 			{
@@ -1453,11 +1454,12 @@ lazy_scan_prune(LVRelState *vacrel,
 	if (presult.nfrozen > 0)
 	{
 		/*
-		 * We don't increment the frozen_pages instrumentation counter when
-		 * nfrozen == 0, since it only counts pages with newly frozen tuples
-		 * (don't confuse that with pages newly set all-frozen in VM).
+		 * We don't increment the new_frozen_tuple_pages instrumentation
+		 * counter when nfrozen == 0, since it only counts pages with newly
+		 * frozen tuples (don't confuse that with pages newly set all-frozen
+		 * in VM).
 		 */
-		vacrel->frozen_pages++;
+		vacrel->new_frozen_tuple_pages++;
 	}
 
 	/*
-- 
2.34.1

From c57df90d531dabd2a1383f67e82fdd2feef7a862 Mon Sep 17 00:00:00 2001
From: Melanie Plageman <melanieplage...@gmail.com>
Date: Mon, 28 Oct 2024 11:14:24 -0400
Subject: [PATCH v2 05/10] Move vacuum VM buffer release

The VM buffer for the next unskippable block can be released after the
main loop in lazy_scan_heap(). Doing so de-clutters
heap_vac_scan_next_block() and is more consistent. All other VM buffer
releases happen in lazy_scan_heap().
---
 src/backend/access/heap/vacuumlazy.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index dac40f2f7fd..272ddee22c5 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -1054,6 +1054,14 @@ lazy_scan_heap(LVRelState *vacrel)
 	}
 
 	vacrel->blkno = InvalidBlockNumber;
+
+	/* Release VM buffers */
+	if (BufferIsValid(vacrel->next_unskippable_vmbuffer))
+	{
+		ReleaseBuffer(vacrel->next_unskippable_vmbuffer);
+		vacrel->next_unskippable_vmbuffer = InvalidBuffer;
+	}
+
 	if (BufferIsValid(vmbuffer))
 		ReleaseBuffer(vmbuffer);
 
@@ -1125,11 +1133,6 @@ heap_vac_scan_next_block(LVRelState *vacrel, BlockNumber *blkno,
 	/* Have we reached the end of the relation? */
 	if (next_block >= vacrel->rel_pages)
 	{
-		if (BufferIsValid(vacrel->next_unskippable_vmbuffer))
-		{
-			ReleaseBuffer(vacrel->next_unskippable_vmbuffer);
-			vacrel->next_unskippable_vmbuffer = InvalidBuffer;
-		}
 		*blkno = vacrel->rel_pages;
 		return false;
 	}
-- 
2.34.1

From b85054fdaad430e4581718f75ed591285eb22aa6 Mon Sep 17 00:00:00 2001
From: Melanie Plageman <melanieplage...@gmail.com>
Date: Thu, 21 Nov 2024 18:36:05 -0500
Subject: [PATCH v2 02/10] Make visibilitymap_set() return previous state of
 vmbits

It can be useful to know the state of a relation page's VM bits before
visibilitymap_set(). visibilitymap_set() has the old value on hand, so
returning it is simple. This commit does not use visibilitymap_set()'s
new return value.

Author: Melanie Plageman
Reviewed-by: Masahiko Sawada, Andres Freund, Nitin Jadhav
Discussion: https://postgr.es/m/flat/CAAKRu_ZQe26xdvAqo4weHLR%3DivQ8J4xrSfDDD8uXnh-O-6P6Lg%40mail.gmail.com#6d8d2b4219394f774889509bf3bdc13d,
https://postgr.es/m/ctdjzroezaxmiyah3gwbwm67defsrwj2b5fpfs4ku6msfpxeia%40mwjyqlhwr2wu
---
 src/backend/access/heap/visibilitymap.c | 9 +++++++--
 src/include/access/visibilitymap.h      | 9 ++++++---
 2 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/src/backend/access/heap/visibilitymap.c b/src/backend/access/heap/visibilitymap.c
index 8b24e7bc33c..5f71fafaa37 100644
--- a/src/backend/access/heap/visibilitymap.c
+++ b/src/backend/access/heap/visibilitymap.c
@@ -239,8 +239,10 @@ visibilitymap_pin_ok(BlockNumber heapBlk, Buffer vmbuf)
  * You must pass a buffer containing the correct map page to this function.
  * Call visibilitymap_pin first to pin the right one. This function doesn't do
  * any I/O.
+ *
+ * Returns the state of the page's VM bits before setting flags.
  */
-void
+uint8
 visibilitymap_set(Relation rel, BlockNumber heapBlk, Buffer heapBuf,
 				  XLogRecPtr recptr, Buffer vmBuf, TransactionId cutoff_xid,
 				  uint8 flags)
@@ -250,6 +252,7 @@ visibilitymap_set(Relation rel, BlockNumber heapBlk, Buffer heapBuf,
 	uint8		mapOffset = HEAPBLK_TO_OFFSET(heapBlk);
 	Page		page;
 	uint8	   *map;
+	uint8		status;
 
 #ifdef TRACE_VISIBILITYMAP
 	elog(DEBUG1, "vm_set %s %d", RelationGetRelationName(rel), heapBlk);
@@ -274,7 +277,8 @@ visibilitymap_set(Relation rel, BlockNumber heapBlk, Buffer heapBuf,
 	map = (uint8 *) PageGetContents(page);
 	LockBuffer(vmBuf, BUFFER_LOCK_EXCLUSIVE);
 
-	if (flags != (map[mapByte] >> mapOffset & VISIBILITYMAP_VALID_BITS))
+	status = ((map[mapByte] >> mapOffset) & VISIBILITYMAP_VALID_BITS);
+	if (flags != status)
 	{
 		START_CRIT_SECTION();
 
@@ -311,6 +315,7 @@ visibilitymap_set(Relation rel, BlockNumber heapBlk, Buffer heapBuf,
 	}
 
 	LockBuffer(vmBuf, BUFFER_LOCK_UNLOCK);
+	return status;
 }
 
 /*
diff --git a/src/include/access/visibilitymap.h b/src/include/access/visibilitymap.h
index 1a4d467e6f0..f7779a0fe19 100644
--- a/src/include/access/visibilitymap.h
+++ b/src/include/access/visibilitymap.h
@@ -31,9 +31,12 @@ extern bool visibilitymap_clear(Relation rel, BlockNumber heapBlk,
 extern void visibilitymap_pin(Relation rel, BlockNumber heapBlk,
 							  Buffer *vmbuf);
 extern bool visibilitymap_pin_ok(BlockNumber heapBlk, Buffer vmbuf);
-extern void visibilitymap_set(Relation rel, BlockNumber heapBlk, Buffer heapBuf,
-							  XLogRecPtr recptr, Buffer vmBuf, TransactionId cutoff_xid,
-							  uint8 flags);
+extern uint8 visibilitymap_set(Relation rel,
+							   BlockNumber heapBlk, Buffer heapBuf,
+							   XLogRecPtr recptr,
+							   Buffer vmBuf,
+							   TransactionId cutoff_xid,
+							   uint8 flags);
 extern uint8 visibilitymap_get_status(Relation rel, BlockNumber heapBlk, Buffer *vmbuf);
 extern void visibilitymap_count(Relation rel, BlockNumber *all_visible, BlockNumber *all_frozen);
 extern BlockNumber visibilitymap_prepare_truncate(Relation rel,
-- 
2.34.1

From 2a998f8ffad95c5cf7057d8ad76eefd47e0d671f Mon Sep 17 00:00:00 2001
From: Melanie Plageman <melanieplage...@gmail.com>
Date: Wed, 11 Dec 2024 13:36:29 -0500
Subject: [PATCH v2 04/10] Remove leftover mentions of XLOG_HEAP2_FREEZE_PAGE
 records

f83d709760d merged the separate XLOG_HEAP2_FREEZE_PAGE records into a
new combined prune, freeze, and vacuum record with opcode
XLOG_HEAP2_PRUNE_VACUUM_SCAN. Remove the last few references to
XLOG_HEAP2_FREEZE_PAGE records which were accidentally left behind.
---
 src/backend/access/heap/pruneheap.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index 869d82ad667..11c9532719d 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -1906,7 +1906,7 @@ heap_log_freeze_eq(xlhp_freeze_plan *plan, HeapTupleFreeze *frz)
 }
 
 /*
- * Comparator used to deduplicate XLOG_HEAP2_FREEZE_PAGE freeze plans
+ * Comparator used to deduplicate the freeze plans used in WAL records.
  */
 static int
 heap_log_freeze_cmp(const void *arg1, const void *arg2)
@@ -1966,7 +1966,7 @@ heap_log_freeze_new_plan(xlhp_freeze_plan *plan, HeapTupleFreeze *frz)
 
 /*
  * Deduplicate tuple-based freeze plans so that each distinct set of
- * processing steps is only stored once in XLOG_HEAP2_FREEZE_PAGE records.
+ * processing steps is only stored once in the WAL record.
  * Called during original execution of freezing (for logged relations).
  *
  * Return value is number of plans set in *plans_out for caller.  Also writes
-- 
2.34.1

From 7e52c4c20a8b07ebf2bb40462e2780f25a060b38 Mon Sep 17 00:00:00 2001
From: Melanie Plageman <melanieplage...@gmail.com>
Date: Thu, 31 Oct 2024 18:19:18 -0400
Subject: [PATCH v2 03/10] Count pages set all-visible and all-frozen in VM
 during vacuum

Vacuum already counts and logs pages with newly frozen tuples.
Now count and log the number of pages newly set all-visible and
all-frozen in the visibility map.

Pages that are all-visible but not all-frozen are debt for future
aggressive vacuums. The counts of newly all-visible and all-frozen pages
give us visibility into the rate at which this debt is being accrued and
paid down.

Author: Melanie Plageman
Reviewed-by: Masahiko Sawada, Alastair Turner, Nitin Jadhav, Andres Freund
Discussion: https://postgr.es/m/flat/CAAKRu_ZQe26xdvAqo4weHLR%3DivQ8J4xrSfDDD8uXnh-O-6P6Lg%40mail.gmail.com#6d8d2b4219394f774889509bf3bdc13d,
https://postgr.es/m/ctdjzroezaxmiyah3gwbwm67defsrwj2b5fpfs4ku6msfpxeia%40mwjyqlhwr2wu

ci-os-only:
---
 src/backend/access/heap/vacuumlazy.c | 125 ++++++++++++++++++++++++---
 1 file changed, 113 insertions(+), 12 deletions(-)

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 3077ee8ec32..dac40f2f7fd 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -189,6 +189,21 @@ typedef struct LVRelState
 	BlockNumber scanned_pages;	/* # pages examined (not skipped via VM) */
 	BlockNumber removed_pages;	/* # pages removed by relation truncation */
 	BlockNumber new_frozen_tuple_pages; /* # pages with newly frozen tuples */
+
+	/* # pages newly set all-visible in the VM */
+	BlockNumber vm_new_visible_pages;
+
+	/*
+	 * # pages newly set both all-visible and all-frozen in the VM. This is a
+	 * subset of vm_new_visible_pages. That is, vm_new_visible_frozen_pages
+	 * includes only pages previously neither all-visible nor all-frozen in
+	 * the VM but which this vacuum set all-visible and all-frozen.
+	 */
+	BlockNumber vm_new_visible_frozen_pages;
+
+	/* # all-visible pages newly set all-frozen in the VM */
+	BlockNumber vm_new_frozen_pages;
+
 	BlockNumber lpdead_item_pages;	/* # pages with LP_DEAD items */
 	BlockNumber missed_dead_pages;	/* # pages with missed dead tuples */
 	BlockNumber nonempty_pages; /* actually, last nonempty page + 1 */
@@ -428,6 +443,10 @@ heap_vacuum_rel(Relation rel, VacuumParams *params,
 	vacrel->recently_dead_tuples = 0;
 	vacrel->missed_dead_tuples = 0;
 
+	vacrel->vm_new_visible_pages = 0;
+	vacrel->vm_new_visible_frozen_pages = 0;
+	vacrel->vm_new_frozen_pages = 0;
+
 	/*
 	 * Get cutoffs that determine which deleted tuples are considered DEAD,
 	 * not just RECENTLY_DEAD, and which XIDs/MXIDs to freeze.  Then determine
@@ -701,6 +720,11 @@ heap_vacuum_rel(Relation rel, VacuumParams *params,
 							 100.0 * vacrel->new_frozen_tuple_pages /
 							 orig_rel_pages,
 							 (long long) vacrel->tuples_frozen);
+
+			appendStringInfo(&buf, _("visibility map: %u pages newly set all-visible, of which %u set all-frozen. %u all-visible pages newly set all-frozen.\n"),
+							 vacrel->vm_new_visible_pages,
+							 vacrel->vm_new_visible_frozen_pages,
+							 vacrel->vm_new_frozen_pages);
 			if (vacrel->do_index_vacuuming)
 			{
 				if (vacrel->nindexes == 0 || vacrel->num_index_scans == 0)
@@ -1354,6 +1378,8 @@ lazy_scan_new_or_empty(LVRelState *vacrel, Buffer buf, BlockNumber blkno,
 		 */
 		if (!PageIsAllVisible(page))
 		{
+			uint8		old_vmbits;
+
 			START_CRIT_SECTION();
 
 			/* mark buffer dirty before writing a WAL record */
@@ -1373,10 +1399,25 @@ lazy_scan_new_or_empty(LVRelState *vacrel, Buffer buf, BlockNumber blkno,
 				log_newpage_buffer(buf, true);
 
 			PageSetAllVisible(page);
-			visibilitymap_set(vacrel->rel, blkno, buf, InvalidXLogRecPtr,
-							  vmbuffer, InvalidTransactionId,
-							  VISIBILITYMAP_ALL_VISIBLE | VISIBILITYMAP_ALL_FROZEN);
+			old_vmbits = visibilitymap_set(vacrel->rel, blkno, buf,
+										   InvalidXLogRecPtr,
+										   vmbuffer, InvalidTransactionId,
+										   VISIBILITYMAP_ALL_VISIBLE |
+										   VISIBILITYMAP_ALL_FROZEN);
 			END_CRIT_SECTION();
+
+			/*
+			 * If the page wasn't already set all-visible and all-frozen in
+			 * the VM, count it as newly set for logging.
+			 */
+			if ((old_vmbits & VISIBILITYMAP_ALL_VISIBLE) == 0 &&
+				(old_vmbits & VISIBILITYMAP_ALL_FROZEN) == 0)
+			{
+				vacrel->vm_new_visible_pages++;
+				vacrel->vm_new_visible_frozen_pages++;
+			}
+			else if ((old_vmbits & VISIBILITYMAP_ALL_FROZEN) == 0)
+				vacrel->vm_new_frozen_pages++;
 		}
 
 		freespace = PageGetHeapFreeSpace(page);
@@ -1531,6 +1572,7 @@ lazy_scan_prune(LVRelState *vacrel,
 	 */
 	if (!all_visible_according_to_vm && presult.all_visible)
 	{
+		uint8		old_vmbits;
 		uint8		flags = VISIBILITYMAP_ALL_VISIBLE;
 
 		if (presult.all_frozen)
@@ -1554,9 +1596,25 @@ lazy_scan_prune(LVRelState *vacrel,
 		 */
 		PageSetAllVisible(page);
 		MarkBufferDirty(buf);
-		visibilitymap_set(vacrel->rel, blkno, buf, InvalidXLogRecPtr,
-						  vmbuffer, presult.vm_conflict_horizon,
-						  flags);
+		old_vmbits = visibilitymap_set(vacrel->rel, blkno, buf,
+									   InvalidXLogRecPtr,
+									   vmbuffer, presult.vm_conflict_horizon,
+									   flags);
+
+		/*
+		 * If the page wasn't already set all-visible and all-frozen in the
+		 * VM, count it as newly set for logging.
+		 */
+		if ((old_vmbits & VISIBILITYMAP_ALL_VISIBLE) == 0 &&
+			(old_vmbits & VISIBILITYMAP_ALL_FROZEN) == 0)
+		{
+			vacrel->vm_new_visible_pages++;
+			if (presult.all_frozen)
+				vacrel->vm_new_visible_frozen_pages++;
+		}
+		else if ((old_vmbits & VISIBILITYMAP_ALL_FROZEN) == 0 &&
+				 presult.all_frozen)
+			vacrel->vm_new_frozen_pages++;
 	}
 
 	/*
@@ -1606,6 +1664,8 @@ lazy_scan_prune(LVRelState *vacrel,
 	else if (all_visible_according_to_vm && presult.all_visible &&
 			 presult.all_frozen && !VM_ALL_FROZEN(vacrel->rel, blkno, &vmbuffer))
 	{
+		uint8		old_vmbits;
+
 		/*
 		 * Avoid relying on all_visible_according_to_vm as a proxy for the
 		 * page-level PD_ALL_VISIBLE bit being set, since it might have become
@@ -1625,10 +1685,33 @@ lazy_scan_prune(LVRelState *vacrel,
 		 * was logged when the page's tuples were frozen.
 		 */
 		Assert(!TransactionIdIsValid(presult.vm_conflict_horizon));
-		visibilitymap_set(vacrel->rel, blkno, buf, InvalidXLogRecPtr,
-						  vmbuffer, InvalidTransactionId,
-						  VISIBILITYMAP_ALL_VISIBLE |
-						  VISIBILITYMAP_ALL_FROZEN);
+		old_vmbits = visibilitymap_set(vacrel->rel, blkno, buf,
+									   InvalidXLogRecPtr,
+									   vmbuffer, InvalidTransactionId,
+									   VISIBILITYMAP_ALL_VISIBLE |
+									   VISIBILITYMAP_ALL_FROZEN);
+
+		/*
+		 * The page was likely already set all-visible in the VM. However,
+		 * there is a small chance that it was modified sometime between
+		 * setting all_visible_according_to_vm and checking the visibility
+		 * during pruning. Check the return value of old_vmbits anyway to
+		 * ensure the visibility map counters used for logging are accurate.
+		 */
+		if ((old_vmbits & VISIBILITYMAP_ALL_VISIBLE) == 0 &&
+			(old_vmbits & VISIBILITYMAP_ALL_FROZEN) == 0)
+		{
+			vacrel->vm_new_visible_pages++;
+			vacrel->vm_new_visible_frozen_pages++;
+		}
+
+		/*
+		 * We already checked that the page was not set all-frozen in the VM
+		 * above, so we don't need to test the value of old_vmbits.
+		 */
+		else
+			vacrel->vm_new_frozen_pages++;
+
 	}
 }
 
@@ -2274,6 +2357,7 @@ lazy_vacuum_heap_page(LVRelState *vacrel, BlockNumber blkno, Buffer buffer,
 	if (heap_page_is_all_visible(vacrel, buffer, &visibility_cutoff_xid,
 								 &all_frozen))
 	{
+		uint8		old_vmbits;
 		uint8		flags = VISIBILITYMAP_ALL_VISIBLE;
 
 		if (all_frozen)
@@ -2283,8 +2367,25 @@ lazy_vacuum_heap_page(LVRelState *vacrel, BlockNumber blkno, Buffer buffer,
 		}
 
 		PageSetAllVisible(page);
-		visibilitymap_set(vacrel->rel, blkno, buffer, InvalidXLogRecPtr,
-						  vmbuffer, visibility_cutoff_xid, flags);
+		old_vmbits = visibilitymap_set(vacrel->rel, blkno, buffer,
+									   InvalidXLogRecPtr,
+									   vmbuffer, visibility_cutoff_xid,
+									   flags);
+
+		/*
+		 * If the page wasn't already set all-visible and all-frozen in the
+		 * VM, count it as newly set for logging.
+		 */
+		if ((old_vmbits & VISIBILITYMAP_ALL_VISIBLE) == 0 &&
+			(old_vmbits & VISIBILITYMAP_ALL_FROZEN) == 0)
+		{
+			vacrel->vm_new_visible_pages++;
+			if (all_frozen)
+				vacrel->vm_new_visible_frozen_pages++;
+		}
+
+		else if ((old_vmbits & VISIBILITYMAP_ALL_FROZEN) == 0 && all_frozen)
+			vacrel->vm_new_frozen_pages++;
 	}
 
 	/* Revert to the previous phase information for error traceback */
-- 
2.34.1

From d0c8d809c668ffb577d7e62b340c62a6b822bb97 Mon Sep 17 00:00:00 2001
From: Melanie Plageman <melanieplage...@gmail.com>
Date: Mon, 28 Oct 2024 11:36:58 -0400
Subject: [PATCH v2 06/10] Remove superfluous next_block local variable in
 vacuum code

Reduce the number of block related variables in lazy_scan_heap() and its
helpers by removing the next_block local variable from
heap_vac_scan_next_block().
---
 src/backend/access/heap/vacuumlazy.c | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 272ddee22c5..0f03bbd951b 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -1125,13 +1125,11 @@ static bool
 heap_vac_scan_next_block(LVRelState *vacrel, BlockNumber *blkno,
 						 bool *all_visible_according_to_vm)
 {
-	BlockNumber next_block;
-
 	/* relies on InvalidBlockNumber + 1 overflowing to 0 on first call */
-	next_block = vacrel->current_block + 1;
+	vacrel->current_block++;
 
 	/* Have we reached the end of the relation? */
-	if (next_block >= vacrel->rel_pages)
+	if (vacrel->current_block >= vacrel->rel_pages)
 	{
 		*blkno = vacrel->rel_pages;
 		return false;
@@ -1140,7 +1138,7 @@ heap_vac_scan_next_block(LVRelState *vacrel, BlockNumber *blkno,
 	/*
 	 * We must be in one of the three following states:
 	 */
-	if (next_block > vacrel->next_unskippable_block ||
+	if (vacrel->current_block > vacrel->next_unskippable_block ||
 		vacrel->next_unskippable_block == InvalidBlockNumber)
 	{
 		/*
@@ -1167,23 +1165,24 @@ heap_vac_scan_next_block(LVRelState *vacrel, BlockNumber *blkno,
 		 * pages then skipping makes updating relfrozenxid unsafe, which is a
 		 * real downside.
 		 */
-		if (vacrel->next_unskippable_block - next_block >= SKIP_PAGES_THRESHOLD)
+		if (vacrel->next_unskippable_block - vacrel->current_block >=
+			SKIP_PAGES_THRESHOLD)
 		{
-			next_block = vacrel->next_unskippable_block;
+			vacrel->current_block = vacrel->next_unskippable_block;
 			if (skipsallvis)
 				vacrel->skippedallvis = true;
 		}
 	}
 
 	/* Now we must be in one of the two remaining states: */
-	if (next_block < vacrel->next_unskippable_block)
+	if (vacrel->current_block < vacrel->next_unskippable_block)
 	{
 		/*
 		 * 2. We are processing a range of blocks that we could have skipped
 		 * but chose not to.  We know that they are all-visible in the VM,
 		 * otherwise they would've been unskippable.
 		 */
-		*blkno = vacrel->current_block = next_block;
+		*blkno = vacrel->current_block;
 		*all_visible_according_to_vm = true;
 		return true;
 	}
@@ -1193,9 +1192,9 @@ heap_vac_scan_next_block(LVRelState *vacrel, BlockNumber *blkno,
 		 * 3. We reached the next unskippable block.  Process it.  On next
 		 * iteration, we will be back in state 1.
 		 */
-		Assert(next_block == vacrel->next_unskippable_block);
+		Assert(vacrel->current_block == vacrel->next_unskippable_block);
 
-		*blkno = vacrel->current_block = next_block;
+		*blkno = vacrel->current_block;
 		*all_visible_according_to_vm = vacrel->next_unskippable_allvis;
 		return true;
 	}
-- 
2.34.1

From 285becfb434d6e20fd72c02c781729c6fa5573c1 Mon Sep 17 00:00:00 2001
From: Melanie Plageman <melanieplage...@gmail.com>
Date: Mon, 28 Oct 2024 11:07:50 -0400
Subject: [PATCH v2 07/10] Make heap_vac_scan_next_block() return BlockNumber

Pass rel_pages instead of blkno to vacuum progress reporting and free
space map vacuuming outside of the main loop in lazy_scan_heap(). This
allows us to reduce the scope of blkno and refactor
heap_vac_scan_next_block() to return the next block number.

This makes the interface more straightforward as well as paving the way
for heap_vac_scan_next_block() to be used by the read stream API as a
callback to implement streaming vacuum.
---
 src/backend/access/heap/vacuumlazy.c | 48 +++++++++++++++-------------
 1 file changed, 26 insertions(+), 22 deletions(-)

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 0f03bbd951b..0d3f6e67e45 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -242,8 +242,8 @@ typedef struct LVSavedErrInfo
 
 /* non-export function prototypes */
 static void lazy_scan_heap(LVRelState *vacrel);
-static bool heap_vac_scan_next_block(LVRelState *vacrel, BlockNumber *blkno,
-									 bool *all_visible_according_to_vm);
+static BlockNumber heap_vac_scan_next_block(LVRelState *vacrel,
+											bool *all_visible_according_to_vm);
 static void find_next_unskippable_block(LVRelState *vacrel, bool *skipsallvis);
 static bool lazy_scan_new_or_empty(LVRelState *vacrel, Buffer buf,
 								   BlockNumber blkno, Page page,
@@ -849,7 +849,6 @@ static void
 lazy_scan_heap(LVRelState *vacrel)
 {
 	BlockNumber rel_pages = vacrel->rel_pages,
-				blkno,
 				next_fsm_block_to_vacuum = 0;
 	bool		all_visible_according_to_vm;
 
@@ -873,13 +872,20 @@ lazy_scan_heap(LVRelState *vacrel)
 	vacrel->next_unskippable_allvis = false;
 	vacrel->next_unskippable_vmbuffer = InvalidBuffer;
 
-	while (heap_vac_scan_next_block(vacrel, &blkno, &all_visible_according_to_vm))
+	while (true)
 	{
 		Buffer		buf;
+		BlockNumber blkno;
 		Page		page;
 		bool		has_lpdead_items;
 		bool		got_cleanup_lock = false;
 
+		blkno = heap_vac_scan_next_block(vacrel,
+										 &all_visible_according_to_vm);
+
+		if (!BlockNumberIsValid(blkno))
+			break;
+
 		vacrel->scanned_pages++;
 
 		/* Report as block scanned, update error traceback information */
@@ -1066,7 +1072,8 @@ lazy_scan_heap(LVRelState *vacrel)
 		ReleaseBuffer(vmbuffer);
 
 	/* report that everything is now scanned */
-	pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_SCANNED, blkno);
+	pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_SCANNED,
+								 rel_pages);
 
 	/* now we can compute the new value for pg_class.reltuples */
 	vacrel->new_live_tuples = vac_estimate_reltuples(vacrel->rel, rel_pages,
@@ -1092,11 +1099,13 @@ lazy_scan_heap(LVRelState *vacrel)
 	 * Vacuum the remainder of the Free Space Map.  We must do this whether or
 	 * not there were indexes, and whether or not we bypassed index vacuuming.
 	 */
-	if (blkno > next_fsm_block_to_vacuum)
-		FreeSpaceMapVacuumRange(vacrel->rel, next_fsm_block_to_vacuum, blkno);
+	if (rel_pages > next_fsm_block_to_vacuum)
+		FreeSpaceMapVacuumRange(vacrel->rel, next_fsm_block_to_vacuum,
+								rel_pages);
 
 	/* report all blocks vacuumed */
-	pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_VACUUMED, blkno);
+	pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_VACUUMED,
+								 rel_pages);
 
 	/* Do final index cleanup (call each index's amvacuumcleanup routine) */
 	if (vacrel->nindexes > 0 && vacrel->do_index_cleanup)
@@ -1109,11 +1118,11 @@ lazy_scan_heap(LVRelState *vacrel)
  * lazy_scan_heap() calls here every time it needs to get the next block to
  * prune and vacuum.  The function uses the visibility map, vacuum options,
  * and various thresholds to skip blocks which do not need to be processed and
- * sets blkno to the next block to process.
+ * returns the next block to process.
  *
- * The block number and visibility status of the next block to process are set
- * in *blkno and *all_visible_according_to_vm.  The return value is false if
- * there are no further blocks to process.
+ * The block number and visibility status of the next block to process are
+ * returned and set in *all_visible_according_to_vm.  The return value is
+ * InvalidBlockNumber if there are no further blocks to process.
  *
  * vacrel is an in/out parameter here.  Vacuum options and information about
  * the relation are read.  vacrel->skippedallvis is set if we skip a block
@@ -1121,8 +1130,8 @@ lazy_scan_heap(LVRelState *vacrel)
  * relfrozenxid in that case.  vacrel also holds information about the next
  * unskippable block, as bookkeeping for this function.
  */
-static bool
-heap_vac_scan_next_block(LVRelState *vacrel, BlockNumber *blkno,
+static BlockNumber
+heap_vac_scan_next_block(LVRelState *vacrel,
 						 bool *all_visible_according_to_vm)
 {
 	/* relies on InvalidBlockNumber + 1 overflowing to 0 on first call */
@@ -1130,10 +1139,7 @@ heap_vac_scan_next_block(LVRelState *vacrel, BlockNumber *blkno,
 
 	/* Have we reached the end of the relation? */
 	if (vacrel->current_block >= vacrel->rel_pages)
-	{
-		*blkno = vacrel->rel_pages;
-		return false;
-	}
+		return InvalidBlockNumber;
 
 	/*
 	 * We must be in one of the three following states:
@@ -1182,9 +1188,8 @@ heap_vac_scan_next_block(LVRelState *vacrel, BlockNumber *blkno,
 		 * but chose not to.  We know that they are all-visible in the VM,
 		 * otherwise they would've been unskippable.
 		 */
-		*blkno = vacrel->current_block;
 		*all_visible_according_to_vm = true;
-		return true;
+		return vacrel->current_block;
 	}
 	else
 	{
@@ -1194,9 +1199,8 @@ heap_vac_scan_next_block(LVRelState *vacrel, BlockNumber *blkno,
 		 */
 		Assert(vacrel->current_block == vacrel->next_unskippable_block);
 
-		*blkno = vacrel->current_block;
 		*all_visible_according_to_vm = vacrel->next_unskippable_allvis;
-		return true;
+		return vacrel->current_block;
 	}
 }
 
-- 
2.34.1

From 852b876a35e07d3712baf11295e30ae68c835a61 Mon Sep 17 00:00:00 2001
From: Melanie Plageman <melanieplage...@gmail.com>
Date: Wed, 11 Dec 2024 14:13:34 -0500
Subject: [PATCH v2 09/10] Add more general summary to vacuumlazy.c

Add more details about how vacuuming heap relations works to vacuumlazy.c.
Previously, the top of vacuumlazy.c only had details related to the dead
TID storage added in Postgres 17. This commit adds a more general
summary to help future developers understand the heap relation vacuuming
implementation at a high level.

It would be good to add another sentence or two on index vacuuming.
---
 src/backend/access/heap/vacuumlazy.c | 39 ++++++++++++++++++++++++++++
 1 file changed, 39 insertions(+)

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 49c5c24f63b..4bfb5b3c5d2 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -3,6 +3,45 @@
  * vacuumlazy.c
  *	  Concurrent ("lazy") vacuuming.
  *
+ * Heap relations are vacuumed in three main phases. In phase I,
+ * vacuum scans relation pages, pruning and freezing tuples and saving dead
+ * tuples' TIDs in a TID store. If that TID store fills up or vacuum finishes
+ * scanning the relation, it progresses to the phase II: index vacuuming.
+ *
+ * If there are no indexes or index scanning is disabled, phase II may be
+ * skipped. If phase I identified very few dead tuples, vacuum may skip phases
+ * II and III. Index vacuuming removes the index entries pointing to the dead
+ * TIDs collected in the TID store.
+ *
+ * After index vacuuming is complete, vacuum scans the blocks of the relation
+ * indicated by the TIDs in the TID store and reaps the dead tuples, freeing
+ * that space for future tuples. Finally, vacuum may truncate the relation if
+ * it has emptied pages at the end.
+ *
+ * After finishing all phases of work, vacuum updates relation statistics in
+ * pg_class and the cumulative statistics subsystem.
+ *
+ * Relation Scanning:
+ *
+ * Vacuum scans the heap relation, starting at the beginning and progressing
+ * to the end, skipping pages as permitted by their visibility status, vacuum
+ * options, and the eagerness level of the vacuum.
+ *
+ * When page skipping is enabled, non-aggressive vacuums may skip scanning
+ * pages that are marked all-visible in the visibility map. We may choose not
+ * to skip pages if the range of skippable pages is below
+ * SKIP_PAGES_THRESHOLD.
+ *
+ * Once vacuum has decided to scan a given block, it must read in the block
+ * and obtain a cleanup lock to prune tuples on the page. A non-aggressive
+ * vacuums may choose to skip pruning and freezing if it cannot acquire a
+ * cleanup lock on the buffer right away.
+ *
+ * After pruning and freezing, pages that are newly all-visible and all-frozen
+ * are marked as such in the visibility map.
+ *
+ * Dead TID Storage:
+ *
  * The major space usage for vacuuming is storage for the dead tuple IDs that
  * are to be removed from indexes.  We want to ensure we can vacuum even the
  * very largest relations with finite memory space usage.  To do that, we set
-- 
2.34.1

From 153261061f28f05696daf1ddc74898a8e395ed6e Mon Sep 17 00:00:00 2001
From: Melanie Plageman <melanieplage...@gmail.com>
Date: Fri, 13 Dec 2024 13:48:09 -0500
Subject: [PATCH v2 08/10] Refactor vacuum assert into multiple if statements

The assert in heap_vacuum_rel() before updating relfrozenxid and/or
relminmxid in pg_class was long and complicated. This commit refactors
it into several if statements for clarity.
---
 src/backend/access/heap/vacuumlazy.c | 45 +++++++++++++++++++++++-----
 1 file changed, 37 insertions(+), 8 deletions(-)

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 0d3f6e67e45..49c5c24f63b 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -550,14 +550,43 @@ heap_vacuum_rel(Relation rel, VacuumParams *params,
 	 * value >= FreezeLimit, and relminmxid to a value >= MultiXactCutoff.
 	 * Non-aggressive VACUUMs may advance them by any amount, or not at all.
 	 */
-	Assert(vacrel->NewRelfrozenXid == vacrel->cutoffs.OldestXmin ||
-		   TransactionIdPrecedesOrEquals(vacrel->aggressive ? vacrel->cutoffs.FreezeLimit :
-										 vacrel->cutoffs.relfrozenxid,
-										 vacrel->NewRelfrozenXid));
-	Assert(vacrel->NewRelminMxid == vacrel->cutoffs.OldestMxact ||
-		   MultiXactIdPrecedesOrEquals(vacrel->aggressive ? vacrel->cutoffs.MultiXactCutoff :
-									   vacrel->cutoffs.relminmxid,
-									   vacrel->NewRelminMxid));
+
+#ifdef USE_ASSERT_CHECKING
+	if (vacrel->NewRelfrozenXid == vacrel->cutoffs.OldestXmin)
+	{
+		/* No new relfrozenxid identified */
+	}
+	else if (vacrel->aggressive)
+	{
+		/*
+		 * Aggressive vacuum must have frozen all tuples older than the freeze
+		 * limit.
+		 */
+		Assert(TransactionIdPrecedesOrEquals(vacrel->cutoffs.FreezeLimit,
+											 vacrel->NewRelfrozenXid));
+	}
+	else
+		Assert(TransactionIdPrecedesOrEquals(vacrel->cutoffs.relfrozenxid,
+											 vacrel->NewRelfrozenXid));
+
+	if (vacrel->NewRelminMxid == vacrel->cutoffs.OldestMxact)
+	{
+		/* No new relminmxid identified */
+	}
+	else if (vacrel->aggressive)
+	{
+		/*
+		 * Aggressive vacuum must have frozen all tuples older than the
+		 * multixact cutoff.
+		 */
+		Assert(MultiXactIdPrecedesOrEquals(vacrel->cutoffs.MultiXactCutoff,
+										   vacrel->NewRelminMxid));
+	}
+	else
+		Assert(MultiXactIdPrecedesOrEquals(vacrel->cutoffs.relminmxid,
+										   vacrel->NewRelminMxid));
+#endif
+
 	if (vacrel->skippedallvis)
 	{
 		/*
-- 
2.34.1

From e36b4fac345be44954410c4f0e61467dc0f49a72 Mon Sep 17 00:00:00 2001
From: Melanie Plageman <melanieplage...@gmail.com>
Date: Thu, 12 Dec 2024 16:44:37 -0500
Subject: [PATCH v2 10/10] Eagerly scan all-visible pages to amortize
 aggressive vacuum

Introduce eager vacuums, which scan some of the all-visible but not
all-frozen pages in the relation to amortize the cost of an aggressive
vacuum.

Because the goal is to freeze these all-visible pages, all-visible pages
that are eagerly scanned and set all-frozen in the visibility map are
considered successful eager scans and those not frozen are considered
failed eager scans.

If too many eager scans fail in a row, eager scanning is temporarily
suspended until vacuum reaches a later portion of the relation. To
effectively amortize aggressive vacuums, we cap the number of successes
as well. Once we reach the maximum number of blocks successfully eager
scanned and frozen, the eager vacuum is downgraded to a normal vacuum.
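
For illustration, with EAGER_SCAN_SUCCESS_RATE of 0.2, a table with
100,000 all-visible but not all-frozen pages permits at most 20,000
successful eager scans before the vacuum is downgraded to a normal
vacuum.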

All-visible pages scanned by both eager and aggressive vacuums are
counted and logged.
---
 src/backend/access/heap/vacuumlazy.c | 426 +++++++++++++++++++++++----
 src/tools/pgindent/typedefs.list     |   1 +
 2 files changed, 373 insertions(+), 54 deletions(-)

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 4bfb5b3c5d2..6806926c459 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -27,11 +27,37 @@
  * to the end, skipping pages as permitted by their visibility status, vacuum
  * options, and the eagerness level of the vacuum.
  *
+ * There are three vacuum eagerness levels: normal vacuum, eager vacuum, and
+ * aggressive vacuum.
+ *
  * When page skipping is enabled, non-aggressive vacuums may skip scanning
- * pages that are marked all-visible in the visibility map. We may choose not
+ * pages that are marked all-visible in the visibility map. They may choose not
  * to skip pages if the range of skippable pages is below
  * SKIP_PAGES_THRESHOLD.
  *
+ * Eager vacuums will scan skippable pages in an effort to freeze them and
+ * decrease the backlog of all-visible but not all-frozen pages that have to
+ * be processed to advance relfrozenxid and avoid transaction ID wraparound.
+ *
+ * Eager vacuums count it as a success when they are able to set an eagerly
+ * scanned page all-frozen in the VM and a failure when they are not able to
+ * set the page all-frozen.
+ *
+ * Because we want to amortize the overhead of freezing pages over multiple
+ * vacuums, eager vacuums cap the number of successful eager scans to
+ * EAGER_SCAN_SUCCESS_RATE of the number of all-visible but not all-frozen
+ * pages at the beginning of the vacuum.
+ *
+ * On the assumption that different regions of the table are likely to contain
+ * similarly aged data, eager vacuums use a localized failure cap
+ * instead of a global cap for the whole relation. The failure count is reset
+ * for each region of the table -- each consisting of EAGER_SCAN_REGION_SIZE
+ * blocks. In each region, we tolerate EAGER_SCAN_MAX_FAILS_PER_REGION
+ * failures before suspending eager scanning until the end of the region.
+ *
+ * Aggressive vacuums must examine every unfrozen tuple and are thus not
+ * subject to failure or success caps when eagerly scanning all-visible pages.
+ *
  * Once vacuum has decided to scan a given block, it must read in the block
  * and obtain a cleanup lock to prune tuples on the page. A non-aggressive
 * vacuum may choose to skip pruning and freezing if it cannot acquire a
@@ -85,6 +111,7 @@
 #include "commands/progress.h"
 #include "commands/vacuum.h"
 #include "common/int.h"
+#include "common/pg_prng.h"
 #include "executor/instrument.h"
 #include "miscadmin.h"
 #include "pgstat.h"
@@ -170,6 +197,51 @@ typedef enum
 	VACUUM_ERRCB_PHASE_TRUNCATE,
 } VacErrPhase;
 
+/*
+ * Eager vacuums scan some all-visible but not all-frozen pages. Since our
+ * goal is to freeze these pages, an eager scan that fails to set the page
+ * all-frozen in the VM is considered to have "failed".
+ *
+ * On the assumption that different regions of the table tend to have
+ * similarly aged data, once we fail to freeze EAGER_SCAN_MAX_FAILS_PER_REGION
+ * blocks in a region of size EAGER_SCAN_REGION_SIZE, we suspend eager
+ * scanning until vacuum has progressed to another region of the table with
+ * potentially older data.
+ */
+#define EAGER_SCAN_REGION_SIZE 4096
+#define EAGER_SCAN_MAX_FAILS_PER_REGION 128
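+
+/*
+ * Illustration only: with the defaults above and the default 8kB block size,
+ * each region covers 32MB of the heap and tolerates up to 128 failed freeze
+ * attempts (roughly 3% of the region) before eager scanning is suspended
+ * until the next region.
+ */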
+
+/*
+ * An eager scan of a page that is set all-frozen in the VM is considered
+ * "successful". To spread out eager scanning across multiple eager vacuums,
+ * we limit the number of successful eager page scans. The maximum number of
+ * successful eager page scans is calculated as EAGER_SCAN_SUCCESS_RATE of the
+ * all-visible but not all-frozen pages at the beginning of the vacuum.
+ */
+#define EAGER_SCAN_SUCCESS_RATE 0.2
+
+/*
+ * The eagerness level of a vacuum determines how many all-visible but
+ * not all-frozen pages it eagerly scans.
+ *
+ * A normal vacuum (eagerness VAC_NORMAL) scans no all-visible pages (with the
+ * exception of those scanned due to SKIP_PAGES_THRESHOLD).
+ *
+ * An eager vacuum (eagerness VAC_EAGER) scans all-visible pages subject to
+ * the success and failure caps described above. An eager vacuum is
+ * downgraded to a normal vacuum when it hits its success quota. An aggressive
+ * vacuum cannot be downgraded. No eagerness level is ever upgraded.
+ *
+ * An aggressive vacuum (eagerness VAC_AGGRESSIVE) must scan all all-visible but
+ * not all-frozen pages.
+ */
+typedef enum VacEagerness
+{
+	VAC_NORMAL,
+	VAC_EAGER,
+	VAC_AGGRESSIVE,
+} VacEagerness;
+
 typedef struct LVRelState
 {
 	/* Target heap relation and its indexes */
@@ -181,8 +253,6 @@ typedef struct LVRelState
 	BufferAccessStrategy bstrategy;
 	ParallelVacuumState *pvs;
 
-	/* Aggressive VACUUM? (must set relfrozenxid >= FreezeLimit) */
-	bool		aggressive;
 	/* Use visibility map to skip? (disabled by DISABLE_PAGE_SKIPPING) */
 	bool		skipwithvm;
 	/* Consider index vacuuming bypass optimization? */
@@ -267,7 +337,48 @@ typedef struct LVRelState
 	BlockNumber current_block;	/* last block returned */
 	BlockNumber next_unskippable_block; /* next unskippable block */
 	bool		next_unskippable_allvis;	/* its visibility status */
+	bool		next_unskippable_eager_scanned; /* whether it was eagerly scanned */
 	Buffer		next_unskippable_vmbuffer;	/* buffer containing its VM bit */
+
+	/*
+	 * Whether this is a normal, eager, or aggressive VACUUM. An aggressive
+	 * vacuum must set relfrozenxid >= FreezeLimit and therefore must scan
+	 * every unfrozen tuple. An eager vacuum will scan some number of
+	 * all-visible pages until it is downgraded to a normal vacuum.
+	 */
+	VacEagerness eagerness;
+
+	/*
+	 * An eager vacuum that has failed to freeze too many eagerly scanned
+	 * blocks in a row suspends eager scanning. next_eager_scan_region_start
+	 * is the block number of the first block eligible for resumed eager
+	 * scanning. Normal and aggressive vacuums do not use this.
+	 */
+	BlockNumber next_eager_scan_region_start;
+
+	struct
+	{
+		/*
+		 * The remaining number of blocks an eager vacuum will consider eager
+		 * scanning. This is initialized to EAGER_SCAN_SUCCESS_RATE of the
+		 * total number of all-visible but not all-frozen pages. Aggressive
+		 * vacuums also decrement this counter but it is initialized to #
+		 * blocks in the relation.
+		 */
+		BlockNumber remaining_successes;
+
+		/*
+		 * The remaining number of eagerly scanned blocks an eager vacuum may
+		 * fail to freeze (due to age) in the current eager scan region.
+		 * Eager vacuums reset it to EAGER_SCAN_MAX_FAILS_PER_REGION each
+		 * time they enter a new region of the relation. Aggressive vacuums
+		 * also decrement this counter but it is initialized to # blocks in
+		 * the relation.
+		 */
+		BlockNumber remaining_fails;
+
+		/* Count of all-visible blocks scanned (for logging only). */
+		BlockNumber scanned;
+	}			eager_pages;
 } LVRelState;
 
 /* Struct for saving and restoring vacuum error information. */
@@ -280,9 +391,14 @@ typedef struct LVSavedErrInfo
 
 
 /* non-export function prototypes */
+
 static void lazy_scan_heap(LVRelState *vacrel);
+static const char *vac_eagerness_description(VacEagerness eagerness);
+static void heap_vacuum_set_up_eagerness(LVRelState *vacrel,
+										 bool aggressive);
 static BlockNumber heap_vac_scan_next_block(LVRelState *vacrel,
-											bool *all_visible_according_to_vm);
+											bool *all_visible_according_to_vm,
+											bool *was_eager_scanned);
 static void find_next_unskippable_block(LVRelState *vacrel, bool *skipsallvis);
 static bool lazy_scan_new_or_empty(LVRelState *vacrel, Buffer buf,
 								   BlockNumber blkno, Page page,
@@ -290,7 +406,7 @@ static bool lazy_scan_new_or_empty(LVRelState *vacrel, Buffer buf,
 static void lazy_scan_prune(LVRelState *vacrel, Buffer buf,
 							BlockNumber blkno, Page page,
 							Buffer vmbuffer, bool all_visible_according_to_vm,
-							bool *has_lpdead_items);
+							bool *has_lpdead_items, bool *vm_page_frozen);
 static bool lazy_scan_noprune(LVRelState *vacrel, Buffer buf,
 							  BlockNumber blkno, Page page,
 							  bool *has_lpdead_items);
@@ -332,6 +448,144 @@ static void restore_vacuum_error_info(LVRelState *vacrel,
 									  const LVSavedErrInfo *saved_vacrel);
 
 
+/*
+ * Helper to return a text description of the vacuum eagerness level for
+ * logging output. The string is not localized but is marked for
+ * translation later.
+ */
+static const char *
+vac_eagerness_description(VacEagerness eagerness)
+{
+	switch (eagerness)
+	{
+		case VAC_NORMAL:
+			return gettext_noop("vacuum");
+		case VAC_EAGER:
+			return gettext_noop("eager vacuum");
+		case VAC_AGGRESSIVE:
+			return gettext_noop("aggressive vacuum");
+		default:
+			elog(ERROR, "unknown vacuum eagerness level: %d", eagerness);
+	}
+}
+
+/*
+ * Helper to set up the eager scanning state for vacuuming a single relation.
+ * Initializes the eager scanning related members of the LVRelState.
+ *
+ * Caller provides whether or not an aggressive vacuum is required due to
+ * vacuum options or for relfrozenxid/relminmxid advancement.
+ */
+static void
+heap_vacuum_set_up_eagerness(LVRelState *vacrel,
+							 bool aggressive)
+{
+	uint32		randseed;
+	BlockNumber allvisible;
+	BlockNumber allfrozen;
+	bool		oldest_unfrozen_requires_freeze = false;
+
+	vacrel->eager_pages.scanned = 0;
+
+	/* Normal and aggressive vacuums don't have eager scan regions */
+	vacrel->next_eager_scan_region_start = InvalidBlockNumber;
+
+	/*
+	 * The caller will have determined whether or not an aggressive vacuum is
+	 * required by either the vacuum parameters or the relative age of the
+	 * oldest unfrozen transaction IDs.
+	 */
+	if (aggressive)
+	{
+		vacrel->eagerness = VAC_AGGRESSIVE;
+
+		/*
+		 * An aggressive vacuum must scan every all-visible page to safely
+		 * advance the relfrozenxid and/or relminmxid. As such, there is no
+		 * cap to the number of allowed successes or failures.
+		 */
+		vacrel->eager_pages.remaining_fails = vacrel->rel_pages + 1;
+		vacrel->eager_pages.remaining_successes = vacrel->rel_pages + 1;
+		return;
+	}
+
+	/*
+	 * We only want to enable eager scanning if we are likely to be able to
+	 * freeze some of the pages in the relation. We are only guaranteed to
+	 * freeze a freezable page if some of the tuples require freezing. Tuples
+	 * require freezing if any of their xids precede the freeze limit or
+	 * multixact cutoff. So, if the oldest unfrozen xid
+	 * (relfrozenxid/relminmxid) does not precede the freeze cutoff, we won't
+	 * find tuples requiring freezing.
+	 */
+	if (TransactionIdIsNormal(vacrel->cutoffs.relfrozenxid) &&
+		TransactionIdPrecedesOrEquals(vacrel->cutoffs.relfrozenxid,
+									  vacrel->cutoffs.FreezeLimit))
+		oldest_unfrozen_requires_freeze = true;
+
+	if (MultiXactIdIsValid(vacrel->cutoffs.relminmxid) &&
+		MultiXactIdPrecedesOrEquals(vacrel->cutoffs.relminmxid,
+									vacrel->cutoffs.MultiXactCutoff))
+		oldest_unfrozen_requires_freeze = true;
+
+	/*
+	 * If the relation is smaller than a single region, we won't bother eager
+	 * scanning it, as an aggressive vacuum shouldn't take very long anyway so
+	 * there is no point in amortization.
+	 *
+	 * Also, if the oldest unfrozen XID is not old enough to require freezing,
+	 * we won't bother eager scanning, as it will likely not succeed in
+	 * freezing pages.
+	 *
+	 * In both of these cases, we set up a non-eager vacuum. This will not
+	 * intentionally scan all-visible pages, so the success and failure limits
+	 * are initialized to 0.
+	 */
+	if (vacrel->rel_pages < EAGER_SCAN_REGION_SIZE ||
+		!oldest_unfrozen_requires_freeze)
+	{
+		vacrel->eagerness = VAC_NORMAL;
+
+		vacrel->eager_pages.remaining_fails = 0;
+		vacrel->eager_pages.remaining_successes = 0;
+		return;
+	}
+
+	/*
+	 * We are not required to do an aggressive vacuum and we have met the
+	 * criteria to do an eager vacuum.
+	 */
+	vacrel->eagerness = VAC_EAGER;
+
+	/*
+	 * Start at a random spot somewhere within the first eager scan region.
+	 * This avoids eager scanning and failing to freeze the exact same blocks
+	 * each vacuum of the relation.
+	 */
+	randseed = pg_prng_uint32(&pg_global_prng_state);
+
+	vacrel->next_eager_scan_region_start = randseed % EAGER_SCAN_REGION_SIZE;
+
+	/*
+	 * The first region will be smaller than subsequent regions. As such,
+	 * adjust the eager scan failures tolerated for this region.
+	 */
+	vacrel->eager_pages.remaining_fails = (BlockNumber)
+		(EAGER_SCAN_MAX_FAILS_PER_REGION *
+		 (1 - (double) vacrel->next_eager_scan_region_start /
+		  EAGER_SCAN_REGION_SIZE));
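+
+	/*
+	 * Illustration only: if the randomly chosen start block is 1024, only
+	 * the last three quarters of the first region are eligible, so with
+	 * EAGER_SCAN_MAX_FAILS_PER_REGION of 128 we would tolerate
+	 * 128 * (1 - 1024/4096) = 96 failures in that region.
+	 */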
+
+	/*
+	 * Our success cap is EAGER_SCAN_SUCCESS_RATE of the number of all-visible
+	 * but not all-frozen blocks in the relation.
+	 */
+	visibilitymap_count(vacrel->rel,
+						&allvisible,
+						&allfrozen);
+
+	vacrel->eager_pages.remaining_successes =
+		(BlockNumber) (EAGER_SCAN_SUCCESS_RATE *
+					   (allvisible - allfrozen));
+}
+
 /*
  *	heap_vacuum_rel() -- perform VACUUM for one heap relation
  *
@@ -364,6 +618,7 @@ heap_vacuum_rel(Relation rel, VacuumParams *params,
 	BufferUsage startbufferusage = pgBufferUsage;
 	ErrorContextCallback errcallback;
 	char	  **indnames = NULL;
+	bool		aggressive = false;
 
 	verbose = (params->options & VACOPT_VERBOSE) != 0;
 	instrument = (verbose || (AmAutoVacuumWorkerProcess() &&
@@ -502,7 +757,8 @@ heap_vacuum_rel(Relation rel, VacuumParams *params,
 	 * want to teach lazy_scan_prune to recompute vistest from time to time,
 	 * to increase the number of dead tuples it can prune away.)
 	 */
-	vacrel->aggressive = vacuum_get_cutoffs(rel, params, &vacrel->cutoffs);
+	aggressive = vacuum_get_cutoffs(rel, params, &vacrel->cutoffs);
+
 	vacrel->rel_pages = orig_rel_pages = RelationGetNumberOfBlocks(rel);
 	vacrel->vistest = GlobalVisTestFor(rel);
 	/* Initialize state used to track oldest extant XID/MXID */
@@ -516,25 +772,20 @@ heap_vacuum_rel(Relation rel, VacuumParams *params,
 		 * Force aggressive mode, and disable skipping blocks using the
 		 * visibility map (even those set all-frozen)
 		 */
-		vacrel->aggressive = true;
+		aggressive = true;
 		skipwithvm = false;
 	}
 
 	vacrel->skipwithvm = skipwithvm;
 
+	heap_vacuum_set_up_eagerness(vacrel, aggressive);
+
 	if (verbose)
-	{
-		if (vacrel->aggressive)
-			ereport(INFO,
-					(errmsg("aggressively vacuuming \"%s.%s.%s\"",
-							vacrel->dbname, vacrel->relnamespace,
-							vacrel->relname)));
-		else
-			ereport(INFO,
-					(errmsg("vacuuming \"%s.%s.%s\"",
-							vacrel->dbname, vacrel->relnamespace,
-							vacrel->relname)));
-	}
+		ereport(INFO,
+				(errmsg("%s of \"%s.%s.%s\"",
+						vac_eagerness_description(vacrel->eagerness),
+						vacrel->dbname, vacrel->relnamespace,
+						vacrel->relname)));
 
 	/*
 	 * Allocate dead_items memory using dead_items_alloc.  This handles
@@ -595,7 +846,7 @@ heap_vacuum_rel(Relation rel, VacuumParams *params,
 	{
 		/* No new relfrozenxid identified */
 	}
-	else if (vacrel->aggressive)
+	else if (vacrel->eagerness == VAC_AGGRESSIVE)
 	{
 		/*
 		 * Aggressive vacuum must have frozen all tuples older than the freeze
@@ -612,7 +863,7 @@ heap_vacuum_rel(Relation rel, VacuumParams *params,
 	{
 		/* No new relminmxid identified */
 	}
-	else if (vacrel->aggressive)
+	else if (vacrel->eagerness == VAC_AGGRESSIVE)
 	{
 		/*
 		 * Aggressive vacuum must have frozen all tuples older than the
@@ -633,7 +884,7 @@ heap_vacuum_rel(Relation rel, VacuumParams *params,
 		 * chose to skip an all-visible page range.  The state that tracks new
 		 * values will have missed unfrozen XIDs from the pages we skipped.
 		 */
-		Assert(!vacrel->aggressive);
+		Assert(vacrel->eagerness != VAC_AGGRESSIVE);
 		vacrel->NewRelfrozenXid = InvalidTransactionId;
 		vacrel->NewRelminMxid = InvalidMultiXactId;
 	}
@@ -718,7 +969,7 @@ heap_vacuum_rel(Relation rel, VacuumParams *params,
 				 * VACUUM VERBOSE ereport
 				 */
 				Assert(!params->is_wraparound);
-				msgfmt = _("finished vacuuming \"%s.%s.%s\": index scans: %d\n");
+				msgfmt = _("finished %s of \"%s.%s.%s\": index scans: %d\n");
 			}
 			else if (params->is_wraparound)
 			{
@@ -728,29 +979,25 @@ heap_vacuum_rel(Relation rel, VacuumParams *params,
 				 * implies aggressive.  Produce distinct output for the corner
 				 * case all the same, just in case.
 				 */
-				if (vacrel->aggressive)
-					msgfmt = _("automatic aggressive vacuum to prevent wraparound of table \"%s.%s.%s\": index scans: %d\n");
-				else
-					msgfmt = _("automatic vacuum to prevent wraparound of table \"%s.%s.%s\": index scans: %d\n");
+				msgfmt = _("automatic %s to prevent wraparound of table \"%s.%s.%s\": index scans: %d\n");
 			}
 			else
-			{
-				if (vacrel->aggressive)
-					msgfmt = _("automatic aggressive vacuum of table \"%s.%s.%s\": index scans: %d\n");
-				else
-					msgfmt = _("automatic vacuum of table \"%s.%s.%s\": index scans: %d\n");
-			}
+				msgfmt = _("automatic %s of table \"%s.%s.%s\": index scans: %d\n");
+
 			appendStringInfo(&buf, msgfmt,
+							 vac_eagerness_description(vacrel->eagerness),
 							 vacrel->dbname,
 							 vacrel->relnamespace,
 							 vacrel->relname,
 							 vacrel->num_index_scans);
-			appendStringInfo(&buf, _("pages: %u removed, %u remain, %u scanned (%.2f%% of total)\n"),
+			appendStringInfo(&buf, _("pages: %u removed, %u remain, %u scanned (%.2f%% of total), %u all-visible scanned\n"),
 							 vacrel->removed_pages,
 							 new_rel_pages,
 							 vacrel->scanned_pages,
 							 orig_rel_pages == 0 ? 100.0 :
-							 100.0 * vacrel->scanned_pages / orig_rel_pages);
+							 100.0 * vacrel->scanned_pages /
+							 orig_rel_pages,
+							 vacrel->eager_pages.scanned);
 			appendStringInfo(&buf,
 							 _("tuples: %lld removed, %lld remain, %lld are dead but not yet removable\n"),
 							 (long long) vacrel->tuples_deleted,
@@ -918,7 +1165,8 @@ lazy_scan_heap(LVRelState *vacrel)
 {
 	BlockNumber rel_pages = vacrel->rel_pages,
 				next_fsm_block_to_vacuum = 0;
-	bool		all_visible_according_to_vm;
+	bool		all_visible_according_to_vm,
+				was_eager_scanned = false;
 
 	Buffer		vmbuffer = InvalidBuffer;
 	const int	initprog_index[] = {
@@ -938,6 +1186,7 @@ lazy_scan_heap(LVRelState *vacrel)
 	vacrel->current_block = InvalidBlockNumber;
 	vacrel->next_unskippable_block = InvalidBlockNumber;
 	vacrel->next_unskippable_allvis = false;
+	vacrel->next_unskippable_eager_scanned = false;
 	vacrel->next_unskippable_vmbuffer = InvalidBuffer;
 
 	while (true)
@@ -946,10 +1195,12 @@ lazy_scan_heap(LVRelState *vacrel)
 		BlockNumber blkno;
 		Page		page;
 		bool		has_lpdead_items;
+		bool		vm_page_frozen = false;
 		bool		got_cleanup_lock = false;
 
 		blkno = heap_vac_scan_next_block(vacrel,
-										 &all_visible_according_to_vm);
+										 &all_visible_according_to_vm,
+										 &was_eager_scanned);
 
 		if (!BlockNumberIsValid(blkno))
 			break;
@@ -1057,7 +1308,7 @@ lazy_scan_heap(LVRelState *vacrel)
 			 * lazy_scan_noprune could not do all required processing.  Wait
 			 * for a cleanup lock, and call lazy_scan_prune in the usual way.
 			 */
-			Assert(vacrel->aggressive);
+			Assert(vacrel->eagerness == VAC_AGGRESSIVE);
 			LockBuffer(buf, BUFFER_LOCK_UNLOCK);
 			LockBufferForCleanup(buf);
 			got_cleanup_lock = true;
@@ -1079,7 +1330,38 @@ lazy_scan_heap(LVRelState *vacrel)
 		if (got_cleanup_lock)
 			lazy_scan_prune(vacrel, buf, blkno, page,
 							vmbuffer, all_visible_according_to_vm,
-							&has_lpdead_items);
+							&has_lpdead_items, &vm_page_frozen);
+
+		/*
+		 * Count an eagerly scanned page as a failure or a success.
+		 */
+		if (was_eager_scanned)
+		{
+			if (vm_page_frozen)
+			{
+				Assert(vacrel->eager_pages.remaining_successes > 0);
+				vacrel->eager_pages.remaining_successes--;
+
+				if (vacrel->eager_pages.remaining_successes == 0)
+				{
+					Assert(vacrel->eagerness == VAC_EAGER);
+
+					/*
+					 * If we hit our success limit, there is no need to
+					 * eagerly scan any additional pages. Downgrade the vacuum
+					 * to a normal vacuum.
+					 */
+					vacrel->eagerness = VAC_NORMAL;
+					vacrel->eager_pages.remaining_fails = 0;
+					vacrel->next_eager_scan_region_start = InvalidBlockNumber;
+				}
+			}
+			else
+			{
+				Assert(vacrel->eager_pages.remaining_fails > 0);
+				vacrel->eager_pages.remaining_fails--;
+			}
+		}
 
 		/*
 		 * Now drop the buffer lock and, potentially, update the FSM.
@@ -1190,7 +1472,9 @@ lazy_scan_heap(LVRelState *vacrel)
  *
  * The block number and visibility status of the next block to process are
  * returned and set in *all_visible_according_to_vm.  The return value is
- * InvalidBlockNumber if there are no further blocks to process.
+ * InvalidBlockNumber if there are no further blocks to process. If the block
+ * will be eagerly scanned, *was_eager_scanned is set to true so that the
+ * caller can count whether or not it is successfully frozen.
  *
  * vacrel is an in/out parameter here.  Vacuum options and information about
  * the relation are read.  vacrel->skippedallvis is set if we skip a block
@@ -1200,11 +1484,14 @@ lazy_scan_heap(LVRelState *vacrel)
  */
 static BlockNumber
 heap_vac_scan_next_block(LVRelState *vacrel,
-						 bool *all_visible_according_to_vm)
+						 bool *all_visible_according_to_vm,
+						 bool *was_eager_scanned)
 {
 	/* relies on InvalidBlockNumber + 1 overflowing to 0 on first call */
 	vacrel->current_block++;
 
+	*was_eager_scanned = false;
+
 	/* Have we reached the end of the relation? */
 	if (vacrel->current_block >= vacrel->rel_pages)
 		return InvalidBlockNumber;
@@ -1268,6 +1555,9 @@ heap_vac_scan_next_block(LVRelState *vacrel,
 		Assert(vacrel->current_block == vacrel->next_unskippable_block);
 
 		*all_visible_according_to_vm = vacrel->next_unskippable_allvis;
+		*was_eager_scanned = vacrel->next_unskippable_eager_scanned;
+		if (*was_eager_scanned)
+			vacrel->eager_pages.scanned++;
 		return vacrel->current_block;
 	}
 }
@@ -1291,11 +1581,12 @@ find_next_unskippable_block(LVRelState *vacrel, bool *skipsallvis)
 	BlockNumber rel_pages = vacrel->rel_pages;
 	BlockNumber next_unskippable_block = vacrel->next_unskippable_block + 1;
 	Buffer		next_unskippable_vmbuffer = vacrel->next_unskippable_vmbuffer;
+	bool		next_unskippable_eager_scanned = false;
 	bool		next_unskippable_allvis;
 
 	*skipsallvis = false;
 
-	for (;;)
+	for (;; next_unskippable_block++)
 	{
 		uint8		mapbits = visibilitymap_get_status(vacrel->rel,
 													   next_unskippable_block,
@@ -1303,6 +1594,18 @@ find_next_unskippable_block(LVRelState *vacrel, bool *skipsallvis)
 
 		next_unskippable_allvis = (mapbits & VISIBILITYMAP_ALL_VISIBLE) != 0;
 
+		/*
+		 * At the start of each eager scan region, eager vacuums reset the
+		 * failure counter, allowing them to resume eager scanning if it had
+		 * been disabled.
+		 */
+		if (next_unskippable_block >= vacrel->next_eager_scan_region_start)
+		{
+			vacrel->eager_pages.remaining_fails =
+				EAGER_SCAN_MAX_FAILS_PER_REGION;
+			vacrel->next_eager_scan_region_start += EAGER_SCAN_REGION_SIZE;
+		}
+
 		/*
 		 * A block is unskippable if it is not all visible according to the
 		 * visibility map.
@@ -1335,24 +1638,31 @@ find_next_unskippable_block(LVRelState *vacrel, bool *skipsallvis)
 		 * all-visible.  They may still skip all-frozen pages, which can't
 		 * contain XIDs < OldestXmin (XIDs that aren't already frozen by now).
 		 */
-		if ((mapbits & VISIBILITYMAP_ALL_FROZEN) == 0)
-		{
-			if (vacrel->aggressive)
-				break;
+		if (mapbits & VISIBILITYMAP_ALL_FROZEN)
+			continue;
 
-			/*
-			 * All-visible block is safe to skip in non-aggressive case.  But
-			 * remember that the final range contains such a block for later.
-			 */
-			*skipsallvis = true;
+		/*
+		 * Aggressive vacuums cannot skip all-visible pages that are not also
+		 * all-frozen. Eager vacuums only skip such pages if they have hit the
+		 * failure limit for the current eager scan region.
+		 */
+		if (vacrel->eager_pages.remaining_fails > 0)
+		{
+			next_unskippable_eager_scanned = true;
+			break;
 		}
 
-		next_unskippable_block++;
+		/*
+		 * All-visible block is safe to skip in a normal or eager vacuum. But
+		 * remember that the final range contains such a block for later.
+		 */
+		*skipsallvis = true;
 	}
 
 	/* write the local variables back to vacrel */
 	vacrel->next_unskippable_block = next_unskippable_block;
 	vacrel->next_unskippable_allvis = next_unskippable_allvis;
+	vacrel->next_unskippable_eager_scanned = next_unskippable_eager_scanned;
 	vacrel->next_unskippable_vmbuffer = next_unskippable_vmbuffer;
 }
 
@@ -1531,7 +1841,8 @@ lazy_scan_prune(LVRelState *vacrel,
 				Page page,
 				Buffer vmbuffer,
 				bool all_visible_according_to_vm,
-				bool *has_lpdead_items)
+				bool *has_lpdead_items,
+				bool *vm_page_frozen)
 {
 	Relation	rel = vacrel->rel;
 	PruneFreezeResult presult;
@@ -1684,11 +1995,17 @@ lazy_scan_prune(LVRelState *vacrel,
 		{
 			vacrel->vm_new_visible_pages++;
 			if (presult.all_frozen)
+			{
 				vacrel->vm_new_visible_frozen_pages++;
+				*vm_page_frozen = true;
+			}
 		}
 		else if ((old_vmbits & VISIBILITYMAP_ALL_FROZEN) == 0 &&
 				 presult.all_frozen)
+		{
 			vacrel->vm_new_frozen_pages++;
+			*vm_page_frozen = true;
+		}
 	}
 
 	/*
@@ -1786,6 +2103,7 @@ lazy_scan_prune(LVRelState *vacrel,
 		else
 			vacrel->vm_new_frozen_pages++;
 
+		*vm_page_frozen = true;
 	}
 }
 
@@ -1874,7 +2192,7 @@ lazy_scan_noprune(LVRelState *vacrel,
 									 &NoFreezePageRelminMxid))
 		{
 			/* Tuple with XID < FreezeLimit (or MXID < MultiXactCutoff) */
-			if (vacrel->aggressive)
+			if (vacrel->eagerness == VAC_AGGRESSIVE)
 			{
 				/*
 				 * Aggressive VACUUMs must always be able to advance rel's
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index ce33e55bf1d..728ceb6441c 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -3063,6 +3063,7 @@ UserOpts
 VacAttrStats
 VacAttrStatsP
 VacDeadItemsInfo
+VacEagerness
 VacErrPhase
 VacObjFilter
 VacOptValue
-- 
2.34.1
