Per-Table vacuum_freeze_min_age

2022-04-06 Thread Chris Bisnett
Hi all!

I have several large tables (1-2 TB) that are 99.9% writes (with a small
number of updates) and a decent commit rate (20K/sec). The basic idea is
that they're generating a lot of data continuously. Whenever a table reached
the autovacuum thresholds, a vacuum would kick off and begin generating WAL
write lock wait events. Once I set the freeze age to 500,000 (the default is
50,000,000), the vacuums touch far fewer pages and finish significantly
faster without causing any write lock wait events.
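
Roughly, the change I applied looks like this (a sketch - the same value can
also be set directly in postgresql.conf):

    -- Global setting; takes effect on configuration reload
    ALTER SYSTEM SET vacuum_freeze_min_age = 500000;
    SELECT pg_reload_conf();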

The only downside I've seen is that this is a global setting, and my
understanding is that it would hurt performance on tables with a lot of
writes and deletes. Is there a technical reason this setting cannot be
applied at the database or table level like the other autovacuum settings?

- chris


Re: Per-Table vacuum_freeze_min_age

2022-04-06 Thread Chris Bisnett
On Wed, Apr 6, 2022 at 6:24 PM Adrian Klaver wrote:

> On 4/6/22 3:13 PM, Chris Bisnett wrote:
> > Hi all!
> >
> > I have several large tables (1-2 TB) that are 99.9% writes (with a small
> > number of updates) and a decent commit rate (20K/sec). The basic idea is
> > that they're generating a lot of data continuously. Whenever a table
> > reached the autovacuum thresholds, a vacuum would kick off and begin
> > generating WAL write lock wait events. Once I set the freeze age to
> > 500,000 (the default is 50,000,000), the vacuums touch far fewer pages
> > and finish significantly faster without causing any write lock wait
> > events.
> >
> > The only downside I've seen is that this is a global setting, and my
> > understanding is that it would hurt performance on tables with a lot of
> > writes and deletes. Is there a technical reason this setting cannot be
> > applied at the database or table level like the other autovacuum
> > settings?
>
> It can:
>
>
> https://www.postgresql.org/docs/current/sql-createtable.html#SQL-CREATETABLE-STORAGE-PARAMETERS
>
> Per-table value for vacuum_freeze_min_age parameter.
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com


Oh, I must have missed this. Is this missing from the documentation here
(https://www.postgresql.org/docs/current/runtime-config-autovacuum.html)?

I can try again, but I'm pretty sure this option was rejected when I
attempted to set it via an ALTER TABLE command.


Re: Per-Table vacuum_freeze_min_age

2022-04-06 Thread Chris Bisnett
On Wed, Apr 6, 2022 at 6:31 PM Adrian Klaver wrote:

> On 4/6/22 3:28 PM, Chris Bisnett wrote:
> > On Wed, Apr 6, 2022 at 6:24 PM Adrian Klaver wrote:
> >
> > > It can:
> > >
> > > https://www.postgresql.org/docs/current/sql-createtable.html#SQL-CREATETABLE-STORAGE-PARAMETERS
> > >
> > > Per-table value for vacuum_freeze_min_age parameter.
> > >
> > > --
> > > Adrian Klaver
> > > adrian.kla...@aklaver.com
> >
> > Oh, I must have missed this. Is this missing from the documentation here
> > (https://www.postgresql.org/docs/current/runtime-config-autovacuum.html)?
>
> That describes autovacuum in general. The per-table options are
> attributes of a table.
>
> >
> > I can try again, but I'm pretty sure this option was rejected when I
> > attempted to set it via an ALTER TABLE command.
>
> What was the command you used?
>
> What was the error you got?
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com


Either I was trying to use vacuum_freeze_min_age on the table or I forgot
the parentheses around the options in the ALTER TABLE command. Either way,
I just tried it and it worked. Thanks for the help!
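
For anyone who hits the same rejection: the per-table storage parameter is
named autovacuum_freeze_min_age (not vacuum_freeze_min_age), and the options
must be wrapped in parentheses. A sketch, with a hypothetical table name:

    -- Per-table override; note the autovacuum_ prefix and the parentheses
    ALTER TABLE event_log SET (autovacuum_freeze_min_age = 500000);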

- chris


Re: LwLocks contention

2022-04-21 Thread Chris Bisnett
> We are occasionally seeing heavy CPU contention with hundreds of processes
> active but waiting on a lightweight lock - usually lock manager or buffer
> mapping, it seems. This is happening on VMs configured with about 64 CPUs
> and 350 GB of RAM. While we would typically have only 30-100 concurrent
> processes, there will suddenly be ~300, many showing active with an LWLock
> wait, and they take much longer than usual. Any suggested options to
> monitor for such issues, or logging to set up, so the next incident can be
> debugged properly?
>
> It has seemed to me that this occurs when there are more than the usual
> number of a particular process type and also something that is a bit heavy
> in its usage of memory/disk. It has happened on various tenant instances
> and with different application processes as well.
>
> Would/how might the use of huge pages (or transparent huge pages, or OFF)
> play into this scenario?

I've also been contending with a good bit of lightweight lock
contention that causes performance issues. Most often we see this with
the WAL write lock, but when we get too many parallel queries running
we end up in a "thundering herd" type of issue where contention for
the lock manager lock consumes significant CPU resources, causing the
number of parallel queries to increase as more clients back up behind
the lock contention, leading to even more contention. When this
happens we have to pause our background workers long enough for the
contention to subside, and then we can resume them. When we hit the
lock contention it's not a gradual degradation - it goes immediately
from nothing to 100% CPU usage. The same is true when the contention
clears - it goes from 100% back to nothing.
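
One low-overhead way to watch for this (a sketch - wait events are exposed
in pg_stat_activity since PostgreSQL 9.6) is to sample the wait-event
columns periodically and graph the counts:

    -- Count active backends by wait event; spikes in LWLock waits stand out
    SELECT wait_event_type, wait_event, count(*)
      FROM pg_stat_activity
     WHERE state = 'active'
     GROUP BY wait_event_type, wait_event
     ORDER BY count(*) DESC;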

I've been working under the assumption that this has to do with our
native partitioning scheme and the fact that some queries cannot take
advantage of partition pruning because they don't filter on the
partition column. My understanding is that when this happens, ACCESS
SHARE locks have to be taken on all of the partitions as well as their
associated objects (indexes, sequences, etc.), and the act of taking
and releasing all of those locks increases the lock contention
significantly. We're working to update our application so that we can
take advantage of the pruning. Are you also using native partitioning?
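
A quick way to see that fan-out (a sketch - the table and column names are
hypothetical) is to run a query that lacks the partition key inside a
transaction and then count the relation locks it is holding:

    -- Without a filter on the partition key, every partition (and its
    -- indexes) takes an ACCESS SHARE lock held until end of transaction
    BEGIN;
    SELECT * FROM events WHERE customer_id = 42 LIMIT 1;
    SELECT count(*) FROM pg_locks
     WHERE pid = pg_backend_pid() AND locktype = 'relation';
    ROLLBACK;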

- Chris