Greetings,

On Thu, Feb 14, 2019 at 10:15 Peter Eisentraut <peter.eisentr...@2ndquadrant.com> wrote:

> On 14/02/2019 11:03, Tomas Vondra wrote:
> > But if you add extra sleep() calls somewhere (say because there's also
> > limit on WAL throughput), it will affect how fast VACUUM works in
> > general. Yet it'll continue with the cost-based throttling, but it will
> > never reach the limits. Say you do another 20ms sleep somewhere.
> > Suddenly it means it only does 25 rounds/second, and the actual write
> > limit drops to 4 MB/s.
>
> I think at a first approximation, you probably don't want to add WAL
> delays to vacuum jobs, since they are already slowed down, so the rate
> of WAL they produce might not be your first problem.  The problem is
> more things like CREATE INDEX CONCURRENTLY that run at full speed.
>
> That leads to an alternative idea of expanding the existing cost-based
> vacuum delay system to other commands.
>
> We could even enhance the cost system by taking WAL into account as an
> additional factor.
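
The arithmetic in Tomas's example above can be sketched as follows. This is just an illustration, not PostgreSQL code; the per-round write budget of 160 kB is an assumed number chosen so the figures match the 4 MB/s in the example:

```python
def effective_write_rate_mb_s(cost_delay_ms, extra_sleep_ms,
                              bytes_per_round=160 * 1024):
    """Effective write limit: rounds per second times bytes written
    per throttling round, expressed in MB/s. bytes_per_round is an
    assumed illustrative budget, not an actual PostgreSQL setting."""
    rounds_per_sec = 1000.0 / (cost_delay_ms + extra_sleep_ms)
    return rounds_per_sec * bytes_per_round / (1024 * 1024)

# 20 ms cost delay alone: 50 rounds/s -> ~8 MB/s
print(effective_write_rate_mb_s(20, 0))   # 7.8125

# Add an independent 20 ms WAL sleep: 25 rounds/s -> limit halves to ~4 MB/s
print(effective_write_rate_mb_s(20, 20))  # 3.90625
```

The point being that a second, independent sleep silently rescales the limit the cost-based system thinks it is enforcing.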


This is really what I was thinking: let's not have multiple independent
ways of slowing down maintenance and similar jobs to reduce their impact on
I/O to the heap and to the WAL.

Thanks!

Stephen
