On 3/9/19 4:28 AM, David Rowley wrote:
> On Sat, 9 Mar 2019 at 16:11, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> I propose therefore that instead of increasing vacuum_cost_limit,
>> what we ought to be doing is reducing vacuum_cost_delay by a similar
>> factor. And, to provide some daylight for people to reduce it even
>> more, we ought to arrange for it to be specifiable in microseconds
>> not milliseconds. There's no GUC_UNIT_US right now, but it's time.
>> (Perhaps we should also look into using other delay APIs, such as
>> nanosleep(2), where available.)
>
> It does seem like a genuine concern that there might be too much
> all-or-nothing. It's no good being on a high-speed train if it stops
> at every platform.
>
> I agree that vacuum_cost_delay might not be granular enough. However,
> if we're going to change vacuum_cost_delay into microseconds, then
> I'm a little concerned that it'll silently break existing code that
> sets it. Scripts that do manual off-peak vacuums are pretty common
> out in the wild.
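That breakage is easy to picture. The off-peak scripts I have in mind look
roughly like this (table name made up, numbers just typical), and today the
bare number on vacuum_cost_delay is read as milliseconds:

    -- off-peak maintenance script: throttle vacuum gently
    SET vacuum_cost_delay = 20;    -- currently means 20ms
    SET vacuum_cost_limit = 2000;
    VACUUM ANALYZE big_table;

If the base unit quietly became microseconds, that script would still run
without any error, but the delay would drop from 20ms to 20us, i.e. vacuum
would be throttled about a thousand times less than intended. Anyone who
spelled it '20ms' explicitly would be fine; bare numbers are the common case
though.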
Maybe we could leave the default units as msec but store it and allow
specifying as usec. Not sure how well the GUC mechanism would cope with
that.

[other good ideas]

>> I don't have any particular objection to kicking up the maximum
>> value of vacuum_cost_limit by 10X or so, if anyone's hot to do that.
>> But that's not where we ought to be focusing our concern. And there
>> really is a good reason, not just nannyism, not to make that
>> setting huge --- it's just the wrong thing to do, as compared to
>> reducing vacuum_cost_delay.
>
> My vote is to 10x the maximum for vacuum_cost_limit and consider
> changing how it all works in PG13. If nothing happens before this
> time next year then we can consider making vacuum_cost_delay a
> microseconds GUC.

+1.

cheers

andrew

-- 
Andrew Dunstan                https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services