On Thu, Sep 11, 2008 at 04:28:03PM -0400, Jim Dunham wrote:
>
> On Sep 11, 2008, at 11:19 AM, A Darren Dunham wrote:
>
>> On Thu, Sep 11, 2008 at 10:33:00AM -0400, Jim Dunham wrote:
>>> The issue with any form of RAID >1 is that the instant a disk fails
>>> out of the RAID set, the next write I/O to the remaining members of
>>> the RAID set leaves the failed disk (and its replica) instantly out
>>> of sync.
>>
>> Does raidz fall into that category?
>
> Yes. The key reason is that as soon as ZFS (or other mirroring software)
> detects a disk failure in a RAID >1 set, it will stop writing to the
> failed disk, which means it will also stop writing to the replica of the
> failed disk. From the point of view of the remote node, the replica of
> the failed disk is no longer being updated.
>
> Now if replication is stopped, or the primary node is powered off or
> panics, then during the import of the ZFS storage pool on the secondary
> node the replica of the failed disk must not be part of the ZFS storage
> pool, as its data is stale. This happens automatically, since the ZFS
> metadata on the remaining disks has already given up on this member of
> the RAID set.

Then I misunderstood what you were talking about.  Why restrict your
statement to RAID >1?  Even for a mirror, the failed disk's data is stale
and it's removed from the active set.  I thought you were talking about
block parity run across columns...
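
For reference, here's roughly what I'd expect an import of the degraded
pool to look like on the secondary node (pool and device names are made
up, and the output is only approximate):

    # zpool import -f tank
    # zpool status tank
      pool: tank
     state: DEGRADED
    config:

            NAME        STATE     READ WRITE CKSUM
            tank        DEGRADED     0     0     0
              raidz1    DEGRADED     0     0     0
                c1t0d0  ONLINE       0     0     0
                c1t1d0  ONLINE       0     0     0
                c1t2d0  UNAVAIL      0     0     0  cannot open

In other words, the stale replica is never read; the pool just comes up
degraded on the remaining columns, so nothing has to be excluded by hand.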

-- 
Darren