On Mon, Feb 20, 2012 at 7:38 AM, Robin Axelsson <gu99r...@student.chalmers.se> wrote:
> It is evident that ZFS is not very good to use without disk redundancy.
In your case, you would have silent data corruption on disk. This corrupted data would get passed to programs, which would try to work with what they have. In some cases you might be lucky; in others, your system would crash randomly. If you are frustrated about being informed about disk errors and would prefer the system not to check, it is possible to set "checksum=off". This is not recommended.

On Mon, Feb 20, 2012 at 8:05 AM, Gregory Youngblood <greg...@youngblood.me> wrote:
>> It would be great if there were some kind of software that could be set up
>> to generate .par2 files (with x% data redundancy) on-the-fly to protect
>> files on hard drives without disk redundancy (RAID=0).
>
> What about telling zfs to maintain more than one copy? Not sure how well data
> is spread out if there is only one drive though. Anyone know?

Yes, there is a copies=2 (or 3) property that stores a "ditto block" for each data block somewhere else on the filesystem; you need twice (or three times) the capacity to do this. Both copies are still physically on the same drive, so while this protects against random data corruption or a few bad sectors, it does not protect against the single drive failing. (A command sketch for this property is appended at the end of this message.)

Gregory is talking about generating something like "ECC" for each block. Such an algorithm could, for example, store an additional 10% of recovery information with the checksum and use it to attempt recovery of the target block. I'm not aware of any such option at the present, but adding it would require a new zpool version. (The standalone par2 tool behind the .par2 files in the quote above can do something like this per file today; a sketch is also appended below.)

Jan
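
For reference, a rough sketch of setting the dataset properties discussed above; the dataset name "tank/data" is only a placeholder:

  # keep two copies of every newly written block on this dataset
  # (existing data is not rewritten; the dataset needs roughly twice the space)
  zfs set copies=2 tank/data

  # confirm the setting
  zfs get copies tank/data

  # checksumming is controlled the same way (turning it off is not recommended)
  zfs set checksum=off tank/data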
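
The on-the-fly .par2 generation asked about in the quote does not exist as a ZFS feature, but the standalone par2 tool (par2cmdline) can produce and use that kind of recovery data by hand or from a script; the 10% figure and the file name below are only examples:

  # create recovery blocks with about 10% redundancy for one file
  par2 create -r10 important.dat.par2 important.dat

  # later: check the file against the recovery data, and repair it if damaged
  par2 verify important.dat.par2
  par2 repair important.dat.par2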