Hi,

On 2019-07-20 15:35:57 +0300, Michail Nikolaev wrote:
> Currently I am working a lot with a cluster consisting of a few big tables,
> about 2-3 TB each. These tables are heavily updated: some rows are removed,
> new rows are inserted... a fairly typical OLTP workload.
> 
> The physical table size stays mostly stable while regular VACUUM is running;
> it is fast enough to reclaim the space left by removed rows.
> 
> But from time to time an autovacuum "to prevent wraparound" kicks in, and it
> runs for something like 8-9 days. During that time the relation size starts
> to grow quickly: freezing all blocks in such a table takes a long time, and
> bloat is generated much faster than it can be cleaned up.

Several questions:
- Which version of postgres is this? Newer versions avoid scanning
  unchanged parts of the heap even for freezing (9.6+, with additional
  smaller improvements in 11).
- Have you increased the vacuum cost limits? Before PG 12 the defaults are so
  low that they're entirely unsuitable for larger databases, and even on 12
  you should likely increase them for a multi-TB database.
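For illustration, relaxing the cost-based throttling could look like the
following (the GUC names are real, but the values are only a sketch and should
be tuned for the actual workload and I/O capacity):

```sql
-- Illustrative values only: relax autovacuum's cost-based throttling.
-- Before PG 12, autovacuum_vacuum_cost_delay defaults to 20ms; lowering
-- the delay and/or raising the cost limit lets vacuum do more work per
-- unit of time.
ALTER SYSTEM SET autovacuum_vacuum_cost_delay = '2ms';
ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 2000;
SELECT pg_reload_conf();
```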

Unfortunately, even if those are addressed, the indexes are still likely to be
scanned in their entirety - but most of the time they are not modified much,
so that's not as bad.
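On 9.6 and later, the progress of such a long-running vacuum, including the
index scan passes, can be observed via pg_stat_progress_vacuum; a minimal
query sketch:

```sql
-- Watch a long-running (e.g. anti-wraparound) vacuum; index_vacuum_count
-- shows how many index-cleanup passes have been completed so far.
SELECT p.pid, c.relname, p.phase,
       p.heap_blks_scanned, p.heap_blks_total,
       p.index_vacuum_count
FROM pg_stat_progress_vacuum p
JOIN pg_class c ON c.oid = p.relid;
```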

Greetings,

Andres Freund
