Peter, I'll first check /var/adm/messages to see if there are any problems with the following disks:

c10t600A0B800011730E000066F444C5EE7Ed0
c10t600A0B800011730E000066F644C5EE96d0
c10t600A0B800011652E0000E5CF44C5EEA7d0
c10t600A0B800011730E000066F844C5EEBAd0

The checksum errors seem to be concentrated around these.

--
Just me,
Wire ...

On 9/20/06, Peter Wilk <[EMAIL PROTECTED]> wrote:
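A minimal sketch of that first check, scanning the system log for the suspect disks (the disk names are taken from this thread; /var/adm/messages assumes a Solaris system, and matching log entries may appear under shortened device names as well):

```shell
# Scan /var/adm/messages for entries mentioning each suspect disk.
for d in c10t600A0B800011730E000066F444C5EE7Ed0 \
         c10t600A0B800011730E000066F644C5EE96d0 \
         c10t600A0B800011652E0000E5CF44C5EEA7d0 \
         c10t600A0B800011730E000066F844C5EEBAd0
do
    echo "=== $d ==="
    grep "$d" /var/adm/messages
done
```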
All,

IHAC who called in an issue with the following description:

I have a system which has two ZFS storage pools. One of the pools is on hardware which is having problems, so I wanted to start the system with only one of the two ZFS storage pools. How do I NOT mount the second ZFS storage pool?

engineer response:

ZFS has a number of commands for this. If you want the system not to use the pool, you can offline the device until you have a chance to repair it. From the ZFS manual:

zpool offline <pool name> <device>
zpool offline myzfspool c1t1d0

Note, however, that you may not be able to offline it if it is the only device in the pool, in which case you would have to add another device so that data can be transferred until the bad drive is replaced; otherwise there would be data loss. You may want to check the status with:

zpool status <pool name>

Depending on what you find here, you may be able to remove the bad device and replace it, or you may have to back up the data, destroy the pool, and recreate it on the new device. If you reference the ZFS Administration Manual, the full information is listed on pages 135-140.

customer response:

Since all my disks for the entire pool were not available, I ended up exporting the entire zpool. At that point I could bring up the system with the zpool which was operational. After we got the storage subsystem fixed, we brought the second zpool back online with zpool import. Since we had problems with the storage, we ran zpool scrub on the pool. Checksum errors were found on a single device in each of the raidz groups. I have been told that ZFS will correct these errors. After zpool scrub ran to completion, we cleared the errors, and we are now in the process of running it again. There are several hours to go, but it has already flagged additional checksum errors. I would have thought the original run of zpool scrub would have fixed these.
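The export/import workflow the customer describes can be sketched as follows. "faultpool" is a hypothetical pool name standing in for the pool on the failing hardware; these commands require a live system with ZFS and the actual pool present:

```shell
# Detach the damaged pool so the system can come up without it.
zpool export faultpool

# ... repair the storage subsystem, then bring the pool back:
zpool import faultpool

# Verify all data in the pool, repairing from redundancy where possible.
zpool scrub faultpool

# Watch scrub progress and per-device error counts.
zpool status -v faultpool
```

Note that on a redundant (e.g. raidz) pool, a scrub repairs bad blocks it finds from the surviving copies, but the per-device error counters are not reset until zpool clear is run.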
Not fully understanding ZFS, and having just learned of the zpool scrub command, I believe zpool scrub is similar to an fsck. It appears that zpool scrub is not resolving the issue; any suggestion would be helpful.

Thanks,
Peter

Please respond to me directly, as I may not be on this alias.

The customer was told to run the following commands, and I believe it did not clear up his issue; see below:

zpool status -v (should list the status and what errors it found)
zpool scrub (one more time to see if more errors are found)
zpool status -v (this should show us an after picture)

Sorry, I left that out: yes, you would want to run a zpool clear before the scrub. You may also want to output a zpool status after the clear to make sure the count cleared out. When you run the commands, you may just want to do a script session to capture everything.

latest email from customer:

I am attaching some output files which show "zpool scrub" running multiple times and catching checksums each time. Remember, I have now run zpool scrub about 3 - 4 times.

=============================================================================
Peter Wilk - OS/Security Support
Sun Microsystems
1 Network Drive, P.O. Box 4004
Burlington, Massachusetts 01803-0904
1-800-USA-4SUN, opt 1, opt 1, <case number>#
Email: [EMAIL PROTECTED]
=============================================================================
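The suggested diagnostic sequence, with the clear added in and everything captured via script(1), can be sketched like this. "mypool" is a hypothetical pool name; this assumes a live Solaris system with the pool imported:

```shell
# Capture the whole session to a file for the support case.
script /tmp/zfs-session.txt

zpool status -v mypool   # before picture: current error counts
zpool clear mypool       # reset the error counters on all devices
zpool status -v mypool   # confirm the counts are now zero
zpool scrub mypool       # re-verify every block in the pool
zpool status -v mypool   # after picture: any newly found errors

exit                     # end the script(1) session
```

If the after picture shows fresh checksum errors on the same devices after a clear and a full scrub, that points to errors still being introduced (by the device, cabling, or controller) rather than to stale counts from the earlier incident.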
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss