On Thu, Mar 3, 2022 at 2:08 PM Imseih (AWS), Sami <sims...@amazon.com> wrote:
>
>    >> If the failsafe kicks in midway through a vacuum, the number
>    >> indexes_total will not be reset to 0. If INDEX_CLEANUP is turned
>    >> off, then the value will be 0 at the start of the vacuum.
> >    >
> >    > The way that this works with num_index_scans is that we "round up"
> >    > when there has been non-zero work in lazy_vacuum_all_indexes(), but
> >    > not if the precheck in lazy_vacuum_all_indexes() fails. That seems
> >    > like a good model to generalize from here. Note that this makes
> >    > INDEX_CLEANUP=off affect num_index_scans in much the same way as a
> >    > VACUUM where the failsafe kicks in very early, during the initial heap
> >    > pass. That is, if the failsafe kicks in before we reach lazy_vacuum()
> >    > for the first time (which is not unlikely), or even in the
> >    > lazy_vacuum_all_indexes() precheck, then num_index_scans will remain
> >    > at 0, just like INDEX_CLEANUP=off.
> >    >
> >    > The actual failsafe WARNING shows num_index_scans, possibly before it
> >    > gets incremented one last time (by "rounding up"). So it's reasonably
> >    > clear how this all works from that context (assuming that the
> >    > autovacuum logging stuff, which reports num_index_scans, outputs a
> >    > report for a table where the failsafe kicked in).
>
> >    I am confused.  If failsafe kicks in during the middle of a vacuum, I
> >    (perhaps naively) would expect indexes_total and indexes_processed to
> >    go to zero, and I'd expect to no longer see the "vacuuming indexes"
> >    and "cleaning up indexes" phases.  Otherwise, how would I know that we
> >    are now skipping indexes?  Of course, you won't have any historical
> >    context about the index work done before failsafe kicked in, but IMO
> >    it is misleading to still include it in the progress view.
>
> After speaking with Nathan offline, the best way forward is to reset
> indexes_total and indexes_processed to 0 when the failsafe kicks in after
> the start of the "vacuuming indexes" or "cleaning up indexes" phase.

+1
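
Something along these lines is what I'd imagine for the reset (just a
rough sketch to illustrate, not code from the posted patch; the
PROGRESS_VACUUM_INDEXES_* symbols are the columns proposed in this
thread, and the function name is made up):

/*
 * Sketch only: when the failsafe kicks in, zero both proposed counters so
 * the view no longer suggests index vacuuming is still going to happen.
 * pgstat_progress_update_multi_param() is the existing backend-progress
 * API; PROGRESS_VACUUM_INDEXES_TOTAL and PROGRESS_VACUUM_INDEXES_PROCESSED
 * are assumed from the patch under discussion.
 */
#include "postgres.h"
#include "commands/progress.h"
#include "utils/backend_progress.h"

static void
reset_index_progress_on_failsafe(void)
{
	const int	progress_index[] = {
		PROGRESS_VACUUM_INDEXES_TOTAL,
		PROGRESS_VACUUM_INDEXES_PROCESSED
	};
	const int64 progress_val[] = {0, 0};

	/* clear both counters in one shot */
	pgstat_progress_update_multi_param(2, progress_index, progress_val);
}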

+/*
+ * vacuum_worker_init --- initialize this module's shared memory hash
+ * to track the progress of a vacuum worker
+ */
+void
+vacuum_worker_init(void)
+{
+       HASHCTL     info;
+       long        max_table_size = GetMaxBackends();
+
+       VacuumWorkerProgressHash = NULL;
+
+       info.keysize = sizeof(pid_t);
+       info.entrysize = sizeof(VacProgressEntry);
+
+       VacuumWorkerProgressHash = ShmemInitHash("Vacuum Progress Hash",
+                                                max_table_size,
+                                                max_table_size,
+                                                &info,
+                                                HASH_ELEM | HASH_BLOBS);
+}

It seems to me that creating a shmem hash with max_table_size entries
for parallel vacuum process tracking is too much. IIRC an old patch
had parallel vacuum workers advertise their own progress and changed
the pg_stat_progress_vacuum view so that it aggregates the results,
including the workers' stats. I think that approach is better than the
current one. Why did you change it?
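
To illustrate what I mean, roughly (a sketch only, assuming the patch's
PROGRESS_VACUUM_INDEXES_PROCESSED column; the helper function names are
made up): each worker can report through the existing backend-progress
machinery for the same heap relation as the leader, and the view can
aggregate the per-backend rows, so no extra shmem hash is needed.

#include "postgres.h"
#include "commands/progress.h"
#include "utils/backend_progress.h"

/*
 * Sketch: called once when a parallel vacuum worker starts up, so the
 * worker gets its own row in pg_stat_progress_vacuum for the heap
 * relation being vacuumed.
 */
static void
parallel_vacuum_worker_begin_progress(Oid heaprelid)
{
	pgstat_progress_start_command(PROGRESS_COMMAND_VACUUM, heaprelid);
}

/*
 * Sketch: called by the worker each time it finishes vacuuming one index;
 * the view (or the leader) would sum these per-backend counters.
 */
static void
parallel_vacuum_worker_index_done(void)
{
	pgstat_progress_incr_param(PROGRESS_VACUUM_INDEXES_PROCESSED, 1);
}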

Regards,

-- 
Masahiko Sawada
EDB:  https://www.enterprisedb.com/

