On 09/29/2018 02:14 PM, Stephen Frost wrote:
> Greetings,
>
> * Michael Paquier (mich...@paquier.xyz) wrote:
>> On Sat, Sep 29, 2018 at 10:51:23AM +0200, Tomas Vondra wrote:
>>> One more thought - when running similar tools on a live system, it's
>>> usually a good idea to limit the impact by throttling the throughput. As
>>> the verification runs in an independent process it can't reuse the
>>> vacuum-like cost limit directly, but perhaps it could do something
>>> similar? Like, limit the number of blocks read/second, or so?
>>
>> When it comes to such parameters, not using a number of blocks but
>> throttling with a value in bytes (kB or MB of course) speaks more to the
>> user. The past experience with checkpoint_segments is one example of
>> that. Converting that to a number of blocks internally would definitely
>> make the most sense. +1 for this idea.
>
> While I agree this would be a nice additional feature to have, it seems
> like something which could certainly be added later and doesn't
> necessarily have to be included in the initial patch. If Michael has
> time to add that, great, if not, I'd rather have this as-is than not.
>

True, although I don't think it'd be particularly difficult.

> I do tend to agree with Michael that having the parameter be specified
> as (or at least able to accept) a byte-based value is a good idea.

Sure, I was not really expecting it to be exposed as a raw block count. I
agree it should accept a byte-based value (i.e. just like --max-rate in
pg_basebackup).
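
Roughly what I have in mind is below -- a minimal sketch only, not meant as
the actual patch; throttle_init()/throttle_block() are made-up names. The
user-facing knob would be in bytes per second, converted to whole blocks
internally, and the loop simply sleeps whenever it gets ahead of the target
rate (IIRC the server-side throttling behind pg_basebackup --max-rate works
along similar lines):

#include <stdint.h>
#include <time.h>
#include <unistd.h>

#define BLCKSZ 8192				/* default PostgreSQL block size */

static uint64_t throttle_rate;		/* bytes per second, 0 = no throttling */
static uint64_t throttled_bytes;	/* bytes verified so far */
static struct timespec throttle_start;

static void
throttle_init(uint64_t rate_bytes_per_sec)
{
	throttle_rate = rate_bytes_per_sec;
	throttled_bytes = 0;
	clock_gettime(CLOCK_MONOTONIC, &throttle_start);
}

/* call once per block read; sleeps if we're ahead of the allowed rate */
static void
throttle_block(void)
{
	struct timespec now;
	double		elapsed;
	double		expected;

	if (throttle_rate == 0)
		return;

	throttled_bytes += BLCKSZ;

	clock_gettime(CLOCK_MONOTONIC, &now);
	elapsed = (now.tv_sec - throttle_start.tv_sec) +
		(now.tv_nsec - throttle_start.tv_nsec) / 1e9;

	/* how long this many bytes should have taken at the target rate */
	expected = (double) throttled_bytes / (double) throttle_rate;

	if (expected > elapsed)
		usleep((useconds_t) ((expected - elapsed) * 1e6));
}
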
> As another feature idea, having this able to work in parallel across
> tablespaces would be nice too. I can certainly imagine some point where
> this is a default process which scans the database at a slow pace across
> all the tablespaces more-or-less all the time checking for corruption.
>

Maybe, but that's certainly a non-trivial feature.
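
Just to illustrate the shape of it (a hypothetical sketch, nothing more --
verify_tablespace() and the hard-coded paths are stand-ins): one worker per
tablespace, each scanning its own directory independently. Coordinating the
rate limit and error reporting across workers is where the non-trivial part
starts.

#include <pthread.h>
#include <stdio.h>

static void *
verify_tablespace(void *arg)
{
	const char *path = arg;

	/* walk the tablespace directory and verify each relation file here */
	printf("verifying tablespace at %s\n", path);
	return NULL;
}

int
main(void)
{
	/* "base" plus one entry per symlink in pg_tblspc, hard-coded here */
	const char *tablespaces[] = {"base", "pg_tblspc/16385", "pg_tblspc/16386"};
	pthread_t	workers[3];

	for (int i = 0; i < 3; i++)
		pthread_create(&workers[i], NULL, verify_tablespace,
					   (void *) tablespaces[i]);

	for (int i = 0; i < 3; i++)
		pthread_join(workers[i], NULL);

	return 0;
}
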
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services