On Sat, Sep 29, 2018 at 10:51:23AM +0200, Tomas Vondra wrote:
> One more thought - when running similar tools on a live system, it's
> usually a good idea to limit the impact by throttling the throughput.
> As the verification runs in an independent process it can't reuse the
> vacuum-like cost limit directly, but perhaps it could do something
> similar? Like, limit the number of blocks read/second, or so?
When it comes to such parameters, not using a number of blocks but
throttling with a value in bytes (kB or MB, of course) speaks more to
the user.  The past experience with checkpoint_segments is one example
of that.  Converting that to a number of blocks internally would
definitely make the most sense.  +1 for this idea.
--
Michael