On Mon, 27 Jul 2020 at 06:43, Nasby, Jim <nas...@amazon.com> wrote:
>
> A database with a very large number of tables eligible for autovacuum can 
> result in autovacuum workers getting “stuck” in a tight loop of 
> table_recheck_autovac() constantly reporting nothing to do for the table. 
> This is because, with a very large number of tables, it takes a while to 
> search the statistics hash to verify that a table still needs to be 
> processed[1]. If a worker spends some time processing a table, once it’s 
> done it can spend a significant amount of time rechecking each table that 
> it identified at launch (I’ve seen a worker in this state for over an 
> hour). A simple work-around in this scenario is to kill the worker; the 
> launcher will quickly fire up a new worker on the same database, and that 
> worker will build a new list of tables.
>
>
>
> That’s not a complete solution, though… if the database contains a large 
> number of very small tables, you can end up in a state where 1 or 2 
> workers are busy chugging through those small tables so quickly that any 
> additional workers spend all their time in table_recheck_autovac(), 
> because that takes long enough that the additional workers are never able 
> to “leapfrog” the workers that are doing useful work.
>
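
For context, the loop in question in do_autovacuum() is roughly the
following (a heavily simplified paraphrase, not the actual source; locking,
skip checks, and error handling are all omitted):

    foreach(cell, table_oids)
    {
        Oid         relid = lfirst_oid(cell);
        autovac_table *tab;

        /*
         * Re-search the statistics hash to check whether relid still needs
         * vacuum/analyze.  With a very large number of tables this lookup
         * is slow, and a worker whose list has largely been handled by
         * other workers does little else but these lookups.
         */
        tab = table_recheck_autovac(relid, table_toast_map, pg_class_desc,
                                    effective_multixact_freeze_max_age);
        if (tab == NULL)
            continue;       /* someone else did it, or it's gone */

        /* ... otherwise claim the table and vacuum/analyze it ... */
    }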

As another solution, I've been considering adding a queue of the table
OIDs that need to be vacuumed/analyzed, kept in shared memory (e.g., in
DSA). Since all autovacuum workers running on the same database would see
a consistent queue, the issue explained above won't happen, and it would
probably also make it easier to implement prioritization of the tables
being vacuumed, which is sometimes discussed on pgsql-hackers. I guess it
might be worth discussing this idea here as well.
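
To sketch the idea (all the names below are just placeholders made up for
this mail, and details such as how the queue is built and resized in DSA
are left out), the shared structure and the worker side could look
something like this:

    #include "postgres.h"

    #include "storage/spin.h"

    /*
     * Hypothetical shared work queue, filled with the OIDs of tables that
     * appear to need vacuum/analyze and kept in memory that every worker
     * on the database can attach to (e.g. DSA).
     */
    typedef struct AutoVacuumWorkQueue
    {
        slock_t     mutex;          /* protects next_item */
        int         num_items;      /* number of OIDs in items[] */
        int         next_item;      /* index of the next OID to hand out */
        Oid         items[FLEXIBLE_ARRAY_MEMBER];
    } AutoVacuumWorkQueue;

    /*
     * Hand the next unclaimed table to the calling worker, or InvalidOid
     * if the queue is exhausted.  Because all workers pop from the same
     * shared queue, each table is handed out only once, so an idle worker
     * no longer re-walks tables that faster workers already processed.
     */
    static Oid
    avq_pop_next_table(AutoVacuumWorkQueue *queue)
    {
        Oid         result = InvalidOid;

        SpinLockAcquire(&queue->mutex);
        if (queue->next_item < queue->num_items)
            result = queue->items[queue->next_item++];
        SpinLockRelease(&queue->mutex);

        return result;
    }

The recheck against the statistics hash would then happen at most once per
table, by whichever worker pops the entry, rather than once per table per
worker.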

Regards,

-- 
Masahiko Sawada            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

