Thanks. This is the first value I tried and it works well. In the archive I 
have, all blocks seem to be between 8 and 20KB, so the jump forward before the 
change never even got close to 1MB. Could blocks be bigger in an uncompressed 
archive? Or in a future pg_dump that raises the block size? I don't really 
know, so it is difficult to test such a scenario, but it made sense to guard 
against those cases too.

I chose 1MB with a crude back-of-the-envelope calculation: at what length does 
seeking forward beat reading? On a very slow drive, 60MB/s sequential 
throughput and 60 IOPS for random reads is a plausible speed. There, one random 
seek costs about as much as reading 1MB sequentially (60MB/s / 60 IOPS = 1MB), 
so even in that worst case it is better to seek() forward for lengths over 1MB.
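
For illustration, a minimal sketch of that heuristic (the function name, the 
constant and the error handling below are mine, not the actual patch):

    #include <stdio.h>
    #include <sys/types.h>

    /* Break-even length: ~60 MB/s sequential / 60 IOPS = 1 MB per seek. */
    #define SEEK_THRESHOLD (1024 * 1024)

    /* Skip "len" bytes forward in an archive opened for reading on "fp". */
    static void
    skip_data(FILE *fp, long len)
    {
        if (len > SEEK_THRESHOLD)
        {
            /* Large gap: one random seek is cheaper than reading through it. */
            if (fseeko(fp, (off_t) len, SEEK_CUR) != 0)
                perror("fseeko");
        }
        else
        {
            /* Small gap: reading sequentially avoids the cost of a seek. */
            char    buf[8192];

            while (len > 0)
            {
                size_t  chunk = (len > (long) sizeof(buf)) ? sizeof(buf) : (size_t) len;

                if (fread(buf, 1, chunk, fp) != chunk)
                    break;          /* EOF or read error */
                len -= (long) chunk;
            }
        }
    }

With the 8-20KB blocks seen in practice, this always takes the read path; the 
seek path only kicks in for the hypothetical large-block archives mentioned 
above.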

On 1 April 2025 22:04:00 CEST, Nathan Bossart <nathandboss...@gmail.com> wrote:
>On Tue, Apr 01, 2025 at 09:33:32PM +0200, Dimitrios Apostolou wrote:
>> It didn't break any test, but I also don't see any difference, the
>> performance boost is noticeable only when restoring a huge archive that is
>> missing offsets.
>
>This seems generally reasonable to me, but how did you decide on 1MB as the
>threshold?  Have you tested other values?  Could the best threshold vary
>based on the workload and hardware?
>

