On Sat, Feb 24, 2018 at 6:14 PM, Michael Banck <michael.ba...@credativ.de>
wrote:

> Hi,
>
> On Wed, Feb 21, 2018 at 09:53:31PM +0100, Magnus Hagander wrote:
> > We’ve also included a small commandline tool, bin/pg_verify_checksums,
> > that can be run against an offline cluster to validate all checksums.
>
> The way it is coded in the patch will make pg_verify_checksums fail for
> heap files with multiple segments, i.e. tables over 1 GB, because the
> block number is consecutive and you start over from 0:
>
> $ pgbench -i -s 80 -h /tmp
> [...]
> $ pg_verify_checksums -D data1
> pg_verify_checksums: data1/base/12364/16396.1, block 0, invalid checksum
> in file 6D61, calculated 6D5F
> pg_verify_checksums: data1/base/12364/16396.1, block 1, invalid checksum
> in file 7BE5, calculated 7BE7
> [...]
> Checksum scan completed
> Data checksum version: 1
> Files scanned:  943
> Blocks scanned: 155925
> Bad checksums:  76
>

Yikes. I could've sworn I tested that, but it's pretty obvious I didn't, at
least not in this version. Thanks for the note, will fix and post a new
version!

-- 
 Magnus Hagander
 Me: https://www.hagander.net/
 Work: https://www.redpill-linpro.com/
