Background: We have a ZFS pool built from LUNs served by a SAN-connected StorageTek/Engenio Flexline 380 storage system. This past Friday the storage environment went down, taking the system down with it.
After looking at the storage environment, we found several volume groups that needed to be carefully put back together to prevent corruption. One of the volume groups, and the volumes/LUNs coming from it, did get corrupted. Since our ZFS pool is set up to take only one LUN from each volume group, we effectively ended up with a single-disk loss in our RAIDZ group, so I believe we should be able to recover from this.

My question is how to replace this disk (LUN). The LUN itself is fine again, but the data on it is not. I have tried a zpool replace, but ZFS seems to know that the disk/LUN is the same device. Using -f (force) didn't work either (I've listed the exact commands I tried in a P.S. below). How does one replace a LUN with ZFS?

I'm currently running a scrub, but I don't know whether that will help. At first I only had read errors on one LUN in the RAIDZ group, but tonight I noticed that I now have a checksum error on another LUN as well (see the zpool status output below).

Below is the zpool status -x output. Can anyone advise how to recover from this?

# zpool status -x
  pool: mypool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: scrub in progress, 66.00% done, 10h45m to go
config:

        NAME                                        STATE     READ WRITE CKSUM
        mypool                                      ONLINE       0     0     0
          raidz                                     ONLINE       0     0     0
            c10t600A0B800011730E000066C544C5EBB8d0  ONLINE       0     0     0
            c10t600A0B800011730E000066CA44C5EBEAd0  ONLINE       0     0     0
            c10t600A0B800011730E000066CF44C5EC1Cd0  ONLINE       0     0     0
            c10t600A0B800011730E000066D444C5EC5Cd0  ONLINE       0     0     0
            c10t600A0B800011730E000066D944C5ECA0d0  ONLINE       0     0     0
            c10t600A0B800011652E0000E5C144C5ECDFd0  ONLINE       0     0     0
            c10t600A0B800011730E000066E244C5ED2Cd0  ONLINE       0     0     0
            c10t600A0B800011652E0000E5C644C5ED87d0  ONLINE       0     0     0
            c10t600A0B800011730E000066EB44C5EDD8d0  ONLINE       0     0     0
            c10t600A0B800011652E0000E5CB44C5EE29d0  ONLINE       0     0     0
            c10t600A0B800011730E000066F444C5EE7Ed0  ONLINE       0     0     9
            c10t600A0B800011652E0000E5D044C5EEC9d0  ONLINE       0     0     0
            c10t600A0B800011730E000066FD44C5EF1Ad0  ONLINE      50     0     0
            c10t600A0B800011652E0000E5D544C5EF63d0  ONLINE       0     0     0
          raidz                                     ONLINE       0     0     0
            c10t600A0B800011652E0000E5B844C5EBCBd0  ONLINE       0     0     0
            c10t600A0B800011652E0000E5BA44C5EBF5d0  ONLINE       0     0     0
            c10t600A0B800011652E0000E5BC44C5EC2Dd0  ONLINE       0     0     0
            c10t600A0B800011652E0000E5BE44C5EC6Bd0  ONLINE       0     0     0
            c10t600A0B800011730E000066DB44C5ECB4d0  ONLINE       0     0     0
            c10t600A0B800011652E0000E5C344C5ECF9d0  ONLINE       0     0     0
            c10t600A0B800011730E000066E444C5ED5Ad0  ONLINE       0     0     0
            c10t600A0B800011652E0000E5C844C5EDA1d0  ONLINE       0     0     0
            c10t600A0B800011730E000066ED44C5EDFAd0  ONLINE       0     0     0
            c10t600A0B800011652E0000E5CD44C5EE47d0  ONLINE       0     0     0
            c10t600A0B800011730E000066F644C5EE96d0  ONLINE       0     0     6
            c10t600A0B800011652E0000E5D244C5EEE7d0  ONLINE       0     0     0
            c10t600A0B800011730E000066FF44C5EF32d0  ONLINE      70     0     0
            c10t600A0B800011652E0000E5D744C5EF7Fd0  ONLINE       0     0     0

This system is at Solaris 10, U2.

Thank you,
David
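P.S. For reference, this is roughly what I tried (quoting from memory, so the exact invocation may be slightly off; the device shown is simply the LUN currently reporting read errors in the output above, and I'm not certain it is the same one I actually targeted):

        # zpool replace mypool c10t600A0B800011730E000066FD44C5EF1Ad0
        # zpool replace -f mypool c10t600A0B800011730E000066FD44C5EF1Ad0
        # zpool scrub mypool

Both replace attempts failed because ZFS recognized the LUN as the same device it already has; the last command started the scrub that is still running in the status output above.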