I think people can understand the concept of missing flushes.  The big 
conceptual problem is how this manages to hose an entire filesystem, which is 
assumed to have rather a lot of data which ZFS has already verified to be ok.

Hardware ignoring flushes and losing recent data is understandable; I don't 
think anybody would argue with that.  Losing access to your entire pool and 
multiple gigabytes of data because a few writes failed is a whole different 
story, and while I understand how it happens, ZFS appears to be unique among 
modern filesystems in suffering such a catastrophic failure so often.

To give a quick personal example:  I can plug a FAT32 USB disk into a Windows 
system, drag some files to it, and pull that drive at any point.  I might lose 
a few files, but I've never lost the entire filesystem.  Even if the absolute 
worst happened, I know I could run scandisk, chkdsk, or any number of file 
recovery tools and get my data back.

I would never, ever attempt this with ZFS.

For a filesystem like ZFS, where its integrity and stability are sold as being 
way better than existing filesystems, losing your entire pool is a bit of a 
shock.  I know that work is going on to be able to recover pools, and I'll 
sleep a lot sounder at night once it is available.
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss