Greetings,

* Michael Paquier (mich...@paquier.xyz) wrote:
> On Sat, Sep 29, 2018 at 10:51:23AM +0200, Tomas Vondra wrote:
> > One more thought - when running similar tools on a live system, it's
> > usually a good idea to limit the impact by throttling the throughput. As
> > the verification runs in an independent process it can't reuse the
> > vacuum-like cost limit directly, but perhaps it could do something
> > similar? Like, limit the number of blocks read/second, or so?
> 
> When it comes to such parameters, not using a number of blocks but
> throttling with a value in bytes (kB or MB of course) speaks more to the
> user.  The past experience with checkpoint_segments is one example of
> that.  Converting that to a number of blocks internally would definitely
> make the most sense.  +1 for this idea.

While I agree this would be a nice additional feature to have, it seems
like something which could certainly be added later and doesn't
necessarily have to be included in the initial patch.  If Michael has
time to add that, great, if not, I'd rather have this as-is than not.
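
Just to sketch what I have in mind (this isn't patch code, the names
set_throttle and throttle_block are made up here, and BLCKSZ is assumed
to be the stock 8kB), a byte-based rate could be converted to blocks
internally and enforced with a simple per-second sleep loop, something
like:

/*
 * Rough sketch only: a bytes-per-second throttle for a block-scanning
 * loop.  The helper names are invented for illustration.
 */
#include <stdint.h>
#include <time.h>
#include <unistd.h>

#define BLCKSZ 8192

static int64_t limit_blocks_per_sec;   /* derived from a byte-based option */
static int64_t blocks_this_second;
static time_t  second_start;

/* Convert a user-facing byte value (e.g. 256MB) into blocks per second. */
static void
set_throttle(int64_t bytes_per_sec)
{
    limit_blocks_per_sec = bytes_per_sec / BLCKSZ;
    second_start = time(NULL);
    blocks_this_second = 0;
}

/* Call once per block read; sleep out the second once the budget is used. */
static void
throttle_block(void)
{
    if (limit_blocks_per_sec <= 0)
        return;                     /* throttling disabled */

    blocks_this_second++;
    if (blocks_this_second >= limit_blocks_per_sec)
    {
        if (time(NULL) == second_start)
            sleep(1);               /* budget exhausted, wait for next second */
        second_start = time(NULL);
        blocks_this_second = 0;
    }
}

int
main(void)
{
    set_throttle(256 * 1024 * 1024);    /* e.g. a hypothetical --max-rate=256MB */

    for (int i = 0; i < 100000; i++)
    {
        /* ... read and verify one block here ... */
        throttle_block();
    }
    return 0;
}

The real thing would presumably want finer-grained sleeps, more along
the lines of the vacuum cost delay, but the conversion from a byte
value to blocks is the trivial part.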

I do tend to agree with Michael that having the parameter be specified
as (or at least able to accept) a byte-based value is a good idea.  As
another feature idea, having this able to work in parallel across
tablespaces would be nice too.  I can certainly imagine a point where
this becomes a default background process which scans the database at a
slow pace across all the tablespaces, more or less all the time,
checking for corruption.
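
Just to illustrate that idea (again only a sketch with made-up names;
scan_tablespace is a placeholder for the real per-file verification
loop, and the paths are invented), one worker could be forked per
tablespace and the parent would simply wait for all of them:

/*
 * Sketch only: one child process per tablespace, each scanning its own
 * directory.
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static void
scan_tablespace(const char *path)
{
    /* placeholder: walk the directory and verify each block's checksum */
    printf("pid %d scanning %s\n", (int) getpid(), path);
}

int
main(void)
{
    /* made-up example paths; the real list would come from pg_tblspc */
    const char *tablespaces[] = {"base", "pg_tblspc/16385", "pg_tblspc/16386"};
    int         n = sizeof(tablespaces) / sizeof(tablespaces[0]);

    for (int i = 0; i < n; i++)
    {
        pid_t pid = fork();

        if (pid == 0)
        {
            scan_tablespace(tablespaces[i]);    /* child handles one tablespace */
            exit(0);
        }
        else if (pid < 0)
        {
            perror("fork");
            exit(1);
        }
    }

    /* parent waits for all workers before reporting overall success */
    while (wait(NULL) > 0)
        ;
    return 0;
}

The interesting part would be deciding how to split any throttle budget
across the workers, but that seems tractable.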

Thanks!

Stephen
