>I went back and dug through some of my email, and the issue showed up as
>CR 6565042.
>
>That was fixed in b77 and s10 update 6.
>
>I looked at this CR; forgive me, but I am not a ZFS engineer. Can you explain,
>in simple terms, how ZFS now reacts to this? If it does not panic, how does
>it ensure data is safe?

I found some conflicting information.

Infodoc: 211349 Solaris[TM] ZFS & Write Failure. 

"ZFS will handle the drive failures gracefully as part of the BUG 6322646 fix 
in the case of non-redundant configurations by degrading the pool instead of 
initiating a system panic with the help of Solaris[TM] FMA framework."
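
If the infodoc is accurate, that degradation should be visible through the
standard zpool and FMA tooling rather than as a panic. A quick way to check
(commands from stock Solaris; exact output varies by release):

  # zpool status -x     (summarizes only pools that have problems)
  # fmadm faulty        (lists faults the FMA framework has diagnosed)
  # fmdump -eV          (dumps the underlying error telemetry events)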

From Richard's post above:
"NB definitions of the pool states, including "degraded" are in the
zpool(1m)
man page.
-- richard"

From the zpool man page, located below:
http://docs.sun.com/app/docs/doc/819-2240/zpool-1m?l=en&a=view&q=zpool

"Device Failure and Recovery

      ZFS supports a rich set of mechanisms for handling device failure and 
data corruption. All metadata and data is checksummed, and ZFS automatically 
repairs bad data from a good copy when corruption is detected.

      In order to take advantage of these features, a pool must make use of 
some form of redundancy, using either mirrored or raidz groups. While ZFS 
supports running in a non-redundant configuration, where each root vdev is 
simply a disk or file, this is strongly discouraged. A single case of bit 
corruption can render some or all of your data unavailable.

      A pool's health status is described by one of three states: online, 
degraded, or faulted. An online pool has all devices operating normally. A 
degraded pool is one in which one or more devices have failed, but the data is 
still available due to a redundant configuration. A faulted pool has corrupted 
metadata, or one or more faulted devices, and insufficient replicas to continue 
functioning.

      The health of the top-level vdev, such as mirror or raidz device, is 
potentially impacted by the state of its associated vdevs, or component 
devices. A top-level vdev or component device is in one of the following 
states:"

So, going by the zpool man page, it seems it is not possible to put a
single-device zpool in a degraded state. Is this correct, or does the fix in
bugs 6565042 and 6322646 change this behavior?
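
One way to answer that empirically, without risking a real disk, is to build a
throwaway pool on a file-backed vdev, damage the backing file, and see what
state ZFS reports. A rough sketch (paths arbitrary; scratch machine only):

  # mkfile 128m /var/tmp/vdev0
  # zpool create testpool /var/tmp/vdev0
  # zpool status testpool      (should report ONLINE)
  # dd if=/dev/urandom of=/var/tmp/vdev0 bs=512 seek=100000 count=2000 conv=notrunc
  # zpool scrub testpool
  # zpool status testpool      (does it go DEGRADED, FAULTED, or panic the box?)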


>
>Also, just want to ensure everyone is on the same page here. There seem to be
>some mixed messages in this thread about how sensitive ZFS is to SAN issues.
>
>Do we all agree that creating a zpool out of one device in a SAN environment
>is not recommended? One should always construct a ZFS mirror or raidz device
>out of SAN-attached devices, as posted in the ZFS FAQ?

The zpool man page seems to agree with this. Is this correct?
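
If it is, then for the archives, the FAQ-style layout for SAN LUNs would look
something like this (LUN names hypothetical), so ZFS always has a second copy
to repair from:

  # zpool create sanpool mirror c4t0d0 c5t0d0         (mirror across two LUNs)
  # zpool create sanpool raidz c4t0d0 c4t1d0 c4t2d0   (or raidz across three)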