Thanks for the info, folks.

In addition to the two replies shown above, I got the following very 
knowledgeable reply from Jim Dunham (for some reason it has not shown up here 
yet, so I'm going to paste it in).

----
Chris,

For the purposes of isolating corruption, separating data into two or more 
filesystems carved from the same ZFS storage pool does not help. The entire 
ZFS storage pool is the unit of I/O consistency, as all ZFS filesystems 
created within a single storage pool share the same physical storage.
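
To make this concrete, a minimal sketch with hypothetical device and 
filesystem names; every filesystem created in the pool draws on the same 
physical storage, so pool-level corruption can touch any of them:

    # One pool on a single (hypothetical) disk, with two filesystems in it
    zpool create tank c0t0d0
    zfs create tank/home
    zfs create tank/mail

    # Both filesystems share the pool's storage and free space
    zfs list -r tank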

When configuring a ZFS storage pool, the [poor] decision to choose a 
non-redundant layout (a single disk or a concatenation of disks) versus a 
redundant one (mirror, raidz, raidz2) leaves ZFS with no means to 
automatically recover from some forms of corruption.
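
For illustration, hedged sketches of the two kinds of pool layout (disk 
names are hypothetical, and each zpool create line is an alternative, not a 
sequence):

    # Non-redundant: a single disk, or a concatenation/stripe of disks.
    # ZFS can detect corruption via checksums, but has no second copy
    # to repair from.
    zpool create tank c0t0d0
    zpool create tank c0t0d0 c0t1d0

    # Redundant: ZFS can both detect and automatically repair corruption
    # from a surviving copy or from parity.
    zpool create tank mirror c0t0d0 c0t1d0
    zpool create tank raidz c0t0d0 c0t1d0 c0t2d0
    zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0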

Even when using a redundant storage pool, there are scenarios in which this is 
not good enough. This is when the filesystem's needs shift to availability, 
such as when the loss or inaccessibility of two or more disks renders 
mirroring or raidz ineffective.

As of Solaris Express build 68, Availability Suite 
[http://www.opensolaris.org/os/project/avs/] is part of base Solaris, offering 
both local snapshots and remote mirrors, each of which works with ZFS.

Locally, on a single Solaris host, snapshots of the entire ZFS storage pool 
can be taken at intervals of one's choosing, and with multiple snapshots of a 
single master, collections of snapshots, say at one-hour intervals, can be 
retained. Options allow for 100% independent snapshots (much like your UFS 
analogy above), dependent snapshots where only the copy-on-write data is 
retained, or compact dependent snapshots where the snapshot's physical storage 
is some percentage of the master's.
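
A hedged sketch of what this looks like with the Availability Suite 
point-in-time copy CLI (iiadm); the volume paths are hypothetical, and the 
arguments are the master, shadow, and bitmap volumes, in that order:

    # Independent snapshot: the shadow is a full standalone copy of the master
    iiadm -e ind /dev/rdsk/c1t0d0s0 /dev/rdsk/c2t0d0s0 /dev/rdsk/c3t0d0s0

    # Dependent snapshot: the shadow holds only the copy-on-write data
    # (a compact dependent set is a dependent set whose shadow volume is
    # smaller than the master)
    iiadm -e dep /dev/rdsk/c1t0d0s0 /dev/rdsk/c2t0d0s1 /dev/rdsk/c3t0d0s1

    # Refresh an existing snapshot (e.g. hourly from cron): update the
    # shadow from the master
    iiadm -u s /dev/rdsk/c2t0d0s0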

Remotely, between two or more Solaris hosts, remote mirrors of the entire ZFS 
storage pool can be configured, where synchronous replication can offer zero 
data loss and asynchronous replication can offer near-zero data loss, both 
offering write-ordered, on-disk consistency. A key aspect of remote 
replication with Availability Suite is that the replicated ZFS storage pool 
can be quiesced on the remote node and accessed, or, in a disaster recovery 
scenario, can take over instantly where the primary left off. When the primary 
site is restored, the MTTR (Mean Time To Recovery) is essentially zero, since 
Availability Suite supports on-demand pull: yet-to-be-replicated blocks are 
retrieved synchronously, allowing the ZFS filesystem and applications to be 
resumed without waiting for a potentially lengthy resynchronization.
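
A hedged sketch of the remote-mirror side with the sndradm CLI (host names 
and volume paths are hypothetical; each side supplies a data volume and a 
bitmap volume that tracks changes for resynchronization):

    # Enable a replicated set: primary host/volume/bitmap, secondary
    # host/volume/bitmap, IP transport, synchronous mode ('async' gives
    # near-zero data loss instead)
    sndradm -e primary /dev/rdsk/c1t0d0s0 /dev/rdsk/c1t0d0s1 \
            secondary /dev/rdsk/c1t0d0s0 /dev/rdsk/c1t0d0s1 ip sync

    # Quiesce the set (logging mode) so the pool can be imported and
    # accessed on the remote node
    sndradm -l

    # After the primary is restored, resynchronize only the changed blocks
    # (add -r to reverse direction and pull from the secondary)
    sndradm -u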
----

Thanks, Jim!
 
 