On Sat, Nov 7, 2015 at 12:55 PM, Philip Robar <philip.ro...@gmail.com> wrote:
> Please correct me if I'm wrong, but the thing that both Jerry's
> administrator friend and David are missing is that ZFS data redundancy
> isn't just a "sexy" form of reliability. It also provides data
> integrity, i.e. with redundancy ZFS will not just notice that a file is
> corrupt, it can fix the problem. With a single-drive ZFS pool you give
> up that integrity, and there's a good chance that any data corruption
> will then be passed on to your backup before ZFS flags it, resulting in
> the loss of that data.

Redundant is always better than non-redundant. In general, though, I don't
see a lot of people losing files to data corruption. Most losses I've seen
are due to hardware failure, unrepairable levels of filesystem corruption,
or operator error (overwriting files, deleting the wrong files). I think
this is probably because if the hardware is so marginal that it's writing
corrupted data, it will rapidly corrupt the filesystem beyond repair, too.
I have yet to see a data checksum error during a scrub of an otherwise
healthy pool.

Basically, I think redundancy has some data safety benefits, but the best
solution to your scenario is to keep more than one backup at different
points in time -- especially since ZFS streams are pretty fragile as a
backup format.

Operator error is actually by far the most common way to lose data, in my
experience, and it's one where redundancy won't help you. It's also hard to
protect against unless you keep multiple backups, since you may not realize
what happened for a while.

--
D. Brodbeck
System Administrator, Linguistics
University of Washington
GPG key fingerprint: 0DB7 4B50 8910 DBC5 B510 79C4 3970 2BC3 2078 D875
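
P.S. To make the "more than one backup at different points in time" part
concrete, here's a rough sketch of the sort of thing I mean (the pool and
dataset names are made up -- adjust to your setup). Rather than storing raw
send streams, receive them into a second pool, keep several dated snapshots
on that side, and scrub both pools periodically so checksum errors get
noticed early:

  # dated snapshot of the live data
  zfs snapshot tank/home@2015-11-07

  # incremental replication into a separate backup pool, instead of keeping
  # a fragile raw stream file around (assumes an earlier full send/receive
  # of tank/home@2015-11-01 to backup/home)
  zfs send -i tank/home@2015-11-01 tank/home@2015-11-07 | zfs receive backup/home

  # keep a handful of older snapshots on the backup side as restore points
  zfs list -t snapshot -r backup/home

  # periodic scrubs of both pools
  zpool scrub tank
  zpool scrub backup
  zpool status -v

Because the backup side retains its own snapshots, an accidental deletion or
overwrite that gets replicated doesn't wipe out the older restore points.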