On 2012-02-20 16:57, Jan Owoc wrote:
On Mon, Feb 20, 2012 at 7:38 AM, Robin Axelsson
<gu99r...@student.chalmers.se>  wrote:
It is evident that ZFS is not very good to use without disk redundancy.
In your case, you would have silent data corruption on-disk. This
corrupted data would get passed to programs, that would try to work
with what they have. In some cases, you might be lucky - in others,
your system would randomly crash.

If you are frustrated about being informed about disk errors, and
would prefer the system to not check, it is possible to set
"checksum=off". This is not recommended.
I'm not frustrated about it. I have acknowledged the error, and all I want(ed) to do is get ZFS to loosen its grip so that I can fix it by other means. Temporarily disabling the checksum property on the dataset didn't make the corrupt part of the file readable again; cp still halted with an I/O error.

In this case it was the hard drive image of a virtual machine that was corrupted. I trust the operating system of that VM to restore enough integrity to remain stable; there are no vital files in it that cannot be replaced. If I can see the data surrounding the corrupt data block with a hex editor, I may even figure out which data file is affected and simply replace it manually.
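As a sketch of such "other means": where cp aborts on the first I/O error, dd can be told to continue past unreadable blocks and zero-pad them, which keeps file offsets aligned for inspection in a hex editor. The file names here are hypothetical stand-ins (the sample file is created only so the commands are self-contained):

```shell
# Create a 1 MiB sample file standing in for the damaged VM image
# (in the real case you would point dd at the actual image path).
dd if=/dev/urandom of=vm-disk.img bs=128k count=8 2>/dev/null

# Copy while continuing past read errors: conv=noerror keeps going
# after a failed read, and conv=sync pads short reads with zeros so
# offsets in the copy line up with the original.
dd if=vm-disk.img of=vm-disk.rescued bs=128k conv=noerror,sync 2>/dev/null
```

GNU ddrescue is a more capable tool for the same job, if available.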


On Mon, Feb 20, 2012 at 8:05 AM, Gregory Youngblood
<greg...@youngblood.me>  wrote:
It would be great if there were some kind of software that could generate .par2 files (with x% data redundancy) on the fly to protect files on hard drives without disk redundancy (i.e., no RAID).
What about telling zfs to maintain more than one copy? Not sure how well data 
is spread out if there is only one drive though. Anyone know?
Yes, there is an option copies=2 (or 3) that stores a "ditto block" for each data block somewhere else on the filesystem. You need twice (or three times) the capacity to do this. Both copies are still physically on the same drive, so while this protects against random data corruption or a few bad sectors, it does not protect against the single drive failing.
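For reference, ditto blocks are also a per-dataset property; a minimal sketch, again assuming a hypothetical dataset name:

```shell
# Keep two copies of every data block on the (single-disk) pool.
# Like other ZFS properties, this applies only to blocks written
# after the property is set, not retroactively.
zfs set copies=2 tank/data

# Confirm the setting.
zfs get copies tank/data
```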

Robin is talking about generating something like ECC for each block. Such an algorithm could, for example, store an additional 10% of information alongside the checksum and use it to attempt recovery of the damaged block. I'm not aware of any such option at present; adding one would require a new zpool version.
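In the meantime, something close to this can be done in userland with the par2cmdline tool mentioned above, assuming it is installed (the file name is a hypothetical example):

```shell
# Create parity files with 10% redundancy alongside the image.
par2 create -r10 vm-disk.img

# Later, check the file against the parity data, and repair
# damaged blocks from it if verification fails.
par2 verify vm-disk.img.par2
par2 repair vm-disk.img.par2
```

The trade-off versus a hypothetical in-filesystem ECC is that the parity files must be regenerated by hand whenever the protected file changes.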


Jan

_______________________________________________
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss




