On Tue, Nov 15, 2011 at 8:07 AM, <sbre...@hotmail.com> wrote:

> Thanks everyone for the help. In the end I removed the corrupt files from
> the "current view" of the file system and left the snapshots as they were.
> This way at least the incremental backup continues. (It is sad that
> snapshots are so rigid that even corruption is permanent. What's more
> interesting is that, if snapshots are read-only, how can they become
> corrupted?)
The snapshot is read-only, meaning users cannot modify the data in the
snapshot. However, there's nothing to prevent random bit flips in the
underlying storage. Maybe the physical hard drive has a bad block and
gmirror copied the bad data to both disks, which flipped a bit or two in
the file you are using to back the ZFS pool. Since ZFS only sees a single
device, it has no internal redundancy and can't fix the corrupted bits; it
can only report that it found a block where the on-disk checksum doesn't
match the computed checksum of the block.

This is why you need to let ZFS handle redundancy itself, via mirror
vdevs, raidz vdevs, or (at the very least) the copies=2 property on the
ZFS filesystem. If there's redundancy in the pool, ZFS can correct the
corruption.

> Would it make sense to do "zfs scrub" regularly and have a report sent,
> i.e. once a day, so discrepancy would be noticed beforehand? Is there
> anything readily available in the FreeBSD ZFS package for this?

Without any redundancy in the pool, all a scrub will do is let you know
there is corrupted data in the pool. It can't fix it. Neither can the
gmirror below the pool fix it. All you can do is delete the corrupted
file and restore it from backups.

You really should get rid of the gmirror setup, dedicate the entire disks
to ZFS, and create a pool using a mirror vdev. File-backed ZFS vdevs
really should only be used for testing purposes.
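
To make that last suggestion concrete, here's a rough sketch of what the
rebuild might look like once the data is safely backed up elsewhere (the
device names ada0/ada1 and the pool name "tank" are just placeholders for
whatever your system uses; tear down the gmirror first with gmirror stop):

    # hand the whole disks to ZFS as a mirror vdev
    zpool create tank mirror ada0 ada1

    # or, if you're stuck with a single-device pool, at least keep
    # two copies of every block so checksum errors can be self-healed
    zfs set copies=2 tank

With redundancy in place, ZFS repairs a bad block from a good copy during
normal reads or a scrub, instead of just flagging it.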
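
As for scrubbing regularly and getting a report: FreeBSD's periodic(8)
can include pool health in the daily mail, and a cron entry can start the
scrub itself. Something along these lines (again, "tank" is a placeholder,
and a weekly scrub is usually often enough):

    # /etc/periodic.conf -- add 'zpool status -x' output to the daily report
    daily_status_zfs_enable="YES"

    # root's crontab -- kick off a scrub every Sunday at 03:00
    0 3 * * 0   /sbin/zpool scrub tank

Any checksum errors the scrub finds will then show up in the next daily
mail, or on demand with 'zpool status -v tank'. Just remember that on a
pool with no redundancy this only tells you something is broken; it still
can't fix it.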

--
Freddie Cash
fjwc...@gmail.com
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss