Just to bring closure to this discussion about how CR 6565042 and CR 6322646 change 
how ZFS functions in the scenario below. 

>ZFS no longer has the issue where loss of a single device (even
>intermittently) causes pool corruption. That's been fixed.
>
>That is, there used to be an issue in this scenario:
>
>(1) zpool constructed from a single LUN on a SAN device
>(2) SAN experiences temporary outage, while ZFS host remains running.
>(3) zpool is permanently corrupted, even if no I/O occurred during outage
>
>This is fixed. (around b101, IIRC)
>
>I went back and dug through some of my email, and the issue showed up as
>CR 6565042.
>
>That was fixed in b77 and s10 update 6.

After doing further research and speaking with the CR engineers, the CR 
changes appear to be part of an overall fix for ZFS panic situations. The 
zpool can still go into a degraded or faulted state, which requires manual 
intervention by the user. 
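As a rough sketch of what that manual intervention looks like (assuming a hypothetical non-redundant pool named "tank" on a single SAN LUN; the device name c2t0d0 is a placeholder):

```shell
# Check the pool state after the SAN outage; a non-redundant pool
# may report DEGRADED or FAULTED rather than panicking the host.
zpool status -x tank

# Once the LUN is reachable again, clear the device error counters
# and fault state so ZFS can resume using the pool.
zpool clear tank

# If the device was taken offline, bring it back online explicitly.
zpool online tank c2t0d0
```

Whether a clear is enough, or the pool must be recovered from backup, depends on what state the pool was left in when the outage hit.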

This fix was discussed above in information from infodoc 211349, Solaris[TM] ZFS 
& Write Failure:

 "ZFS will handle the drive failures gracefully as part of the BUG 6322646 fix 
in the case of non-redundant configurations by degrading the pool instead of 
initiating a system panic with the help of Solaris[TM] FMA framework."
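Since the degrade path goes through FMA, the diagnosis the framework recorded can be inspected afterwards; a hedged sketch of the usual commands:

```shell
# List faults FMA currently considers active; a degraded non-redundant
# pool should show up here with a ZFS-related fault class.
fmadm faulty

# Dump the fault log history in verbose form, including the
# diagnosis events that led FMA to degrade the pool.
fmdump -v
```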
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss