Bob Friesenhahn wrote:
> On Tue, 19 Oct 2010, Cindy Swearingen wrote:
>>> unless you use copies=2 or 3, in which case your data is still safe
>>> for those datasets that have this option set.
>>
>> This advice is a little too optimistic. Increasing the copies property
>> value on datasets might help in some failure scenarios, but probably
>> not in more catastrophic failures, such as multiple device or hardware
>> failures.
>
> It is 100% too optimistic. The copies option only duplicates the user
> data. zfs already duplicates the metadata regardless of the copies
> setting, but either way zfs is not designed to keep functioning if an
> entire vdev fails.
>
> Bob
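(For anyone following along: copies is a per-dataset property, and it
only applies to blocks written after it is set, so it is not retroactive.
With a hypothetical pool/dataset name, setting and checking it looks
like:

    zfs set copies=2 tank/home
    zfs get copies tank/home

Existing data only gains the extra copies if it is rewritten.)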
Some future filesystem (not zfs as currently implemented) could be
designed to survive certain vdev failures even where multiple vdevs were
used without redundancy at the vdev level. In that scenario, the
redundant metadata, and any user data written with copies=2 or higher,
would still be accessible because the copies had been spread across the
vdevs, leaving at least one copy intact. Expanding on this design would
allow raw space to be added freely, with redundancy set purely by a
'copies' parameter.
I understand the copies parameter, as currently designed and intended,
to be an extra assurance against failures that affect single blocks but
not whole devices. For example: run ZFS on a laptop with a single hard
drive, and use copies=2 to protect against bad sectors, but not against
complete drive failure. I have not tested this, but I imagine
performance is the reason to use copies=2 rather than
partitioning/slicing the drive into two halves and mirroring them back
together. I also recall reading that the copies code attempts to spread
the copies across different devices as much as possible.
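To make that comparison concrete, the two single-disk layouts I have in
mind would look roughly like this (the Solaris-style device names are
hypothetical, and I have not benchmarked either setup):

    # one pool on the whole disk, two copies of each user-data block
    zpool create tank c0t0d0
    zfs set copies=2 tank

    # versus: slice the disk in half and mirror the two slices
    zpool create tank mirror c0t0d0s0 c0t0d0s1

Both layouts write everything twice and halve the usable space; one
difference is that copies can be enabled per dataset, while the mirror
applies to the whole pool.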