On Wed, Aug 6, 2008 at 13:57, Miles Nordin <[EMAIL PROTECTED]> wrote:
>>>>>> "re" == Richard Elling <[EMAIL PROTECTED]> writes:
>>>>>> "tb" == Tom Bird <[EMAIL PROTECTED]> writes:
>
> tb> There was a problem with the SAS bus which caused various
> tb> errors including the inevitable kernel panic, the thing came
> tb> back up with 3 out of 4 zfs mounted.
>
> re> In general, ZFS can only repair conditions for which it owns
> re> data redundancy.
>
> If that's really the excuse for this situation, then ZFS is not
> ``always consistent on the disk'' for single-VDEV pools.

Well, yes.  If good data is sent but corruption somewhere along the
path (here, apparently the SAS bus) causes bad data to be written, ZFS
can generally detect that corruption, but it cannot fix it.

It might be nice to have a "verifywrites" mode or something similar to
make sure that good data has actually ended up on disk (at least as of
the time it checks), but failing that, there's not much ZFS (or any
filesystem) can do.  Using a pool with some level of redundancy
(mirroring, raidz) at least gives ZFS a chance to reconstruct the bad
pieces from the redundant copies it keeps.
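To make that concrete, here's a rough sketch of the difference (pool
and device names below are placeholders; substitute your own, and of
course pick one layout per pool):

    # Single-vdev pool: ZFS can detect checksum errors on read,
    # but it has no second copy to repair them from.
    zpool create tank c1t0d0

    # Mirrored pool: a block that fails its checksum on one side
    # can be reread from the other side and rewritten in place.
    zpool create tank mirror c1t0d0 c1t1d0

    # raidz behaves similarly, reconstructing bad blocks from
    # parity rather than from a full second copy.
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0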
> How about the scenario where you lose power suddenly, but only half of
> a mirrored VDEV is available when power is restored?  Is ZFS
> vulnerable to this type of unfixable corruption in that scenario,
> too?

Every filesystem is vulnerable to corruption, all the time; I'm willing
to dispute any claim otherwise.  Some filesystems are just more likely
than others to hit their error conditions.  I've personally run into
UFS's problems more often than ZFS's... but that doesn't mean I think
I'm safe.

Will
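P.S.  The closest thing we have to that "verifywrites" mode today is
probably a periodic scrub, which rereads every allocated block in the
pool, verifies it against its checksum, and repairs it where redundancy
allows.  Something like (pool name again a placeholder):

    zpool scrub tank
    zpool status -v tank    # progress, plus per-device CKSUM counts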