Large sites that have centralized their data with a SAN typically have
a storage device export block-oriented storage to a server over a
Fibre Channel or iSCSI connection.  The server sees
this as a single virtual disk.  On the storage device, the blocks of
data may be spread across many physical disks.  The storage device
looks after redundancy and management of the physical disks.  It may
even phone home when a disk fails and needs to be replaced.  The
storage device provides reliability and integrity for the blocks of
data that it serves, and does this well.

On the server, a variety of filesystems can be created on this virtual
disk.  UFS is most common, but ZFS has a number of advantages over
UFS.  Two of these are dynamic space management and snapshots.  There
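As an illustration of those two advantages, a session on such a virtual
disk might look something like this (the pool, dataset, and device
names here are hypothetical, not taken from any real configuration):

```shell
# Create a pool on the single virtual disk that the SAN exports
# (device name is made up for the example).
zpool create tank c2t0d0

# Dynamic space management: datasets draw from the shared pool and
# can be constrained or guaranteed space on the fly, with no
# partitioning or newfs step.
zfs create tank/home
zfs set quota=50G tank/home
zfs set reservation=10G tank/home

# Snapshots: an instant, read-only copy of the filesystem state,
# which can later be rolled back to.
zfs snapshot tank/home@before-upgrade
zfs rollback tank/home@before-upgrade
```

UFS on the same virtual disk would need separate slices or volumes to
get comparable space controls, and has no native snapshot rollback.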
are also a number of objections to employing ZFS in this manner.
"ZFS cannot correct errors", and "you will lose all of your data"
are two of the alarming ones.  Isn't ZFS supposed to ensure that data
written to the disk are always correct?  What's the real problem here?

This is a split responsibility configuration where the storage device
is responsible for integrity of the storage and ZFS is responsible for
integrity of the filesystem.  How can it be made to behave in a
reliable manner?  Can ZFS be better than UFS in this configuration?
Is a different form of communication between the two components
necessary in this case?
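One sketch of an answer, assuming the storage device can export more
than one LUN: give ZFS redundancy of its own, so that it can repair a
block that fails its checksum instead of merely detecting the damage
(device names below are hypothetical):

```shell
# Mirror two SAN LUNs: when a block fails its checksum on one side,
# ZFS reads the good copy from the other side and rewrites the bad one.
zpool create tank mirror c2t0d0 c3t0d0

# Alternative on a single LUN: store two copies of each block, which
# protects against isolated bad blocks but not loss of the whole LUN.
zpool create tank c2t0d0
zfs set copies=2 tank

# Walk every block, verify checksums, and repair where possible.
zpool scrub tank
zpool status -v tank
```

Without one of these, ZFS on a lone virtual disk can still detect
corruption that the storage device passes through, but has no second
copy from which to correct it.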

-- 
-Gary Mills-    -Unix Support-    -U of M Academic Computing and Networking-
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
