Dear List,

I am struggling with a storage pool on a server where I would like to offline a device for replacement. The pool consists of mirrors whose members are two-disk hardware stripes from the controller (yep, stupid, but we were running out of VDs on the controller at the time, and that's where we are now...).
Here's the pool config:

root@storage:~# zpool status -v tank
  pool: tank
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
  scan: resilvered 321G in 36h58m with 1 errors on Wed Apr 4 06:46:10 2012
config:

        NAME          STATE     READ WRITE CKSUM
        tank          ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c1t14d0   ONLINE       0     0     0
            c1t15d0   ONLINE       0     0     0
          mirror-1    ONLINE       0     0     0
            c1t19d0   ONLINE       0     0     0
            c1t18d0   ONLINE       0     0     0
          mirror-2    ONLINE       0     0     0
            c1t20d0   ONLINE       0     0     0
            c1t21d0   ONLINE       0     0     0
          mirror-3    ONLINE       0     0     0
            c1t22d0   ONLINE       0     0     0
            c1t23d0   ONLINE       0     0     0
        logs
          mirror-4    ONLINE       0     0     0
            c2t2d0p7  ONLINE       0     0     0
            c2t3d0p7  ONLINE       0     0     0
        cache
          c2t2d0p11   ONLINE       0     0     0
          c2t3d0p11   ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        <0xeb78a>:<0xa8be6b>

What I would like to do is offline or detach c1t19d0, which the server won't let me do:

root@storage:~# zpool offline tank c1t19d0
cannot offline c1t19d0: no valid replicas

The errored file above is not important to me; it belonged to a snapshot that has since been deleted. Could that be related to this? And how can I find more information about why ZFS simultaneously reports mirror-1 as ONLINE yet refuses to offline one of its sides for lack of a valid replica?

Any ideas would be greatly appreciated. Thanks in advance for your kind assistance.

Best regards
Jan
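P.S. In case it helps frame an answer: my current, unverified plan is sketched below. It rests on my reading that the permanent-error list can keep referencing objects in already-deleted snapshots until a scrub completes (on older pool versions, reportedly two scrubs), and on the guess that the stale entry is what's behind "no valid replicas". Please correct me if either assumption is wrong.

root@storage:~# fmdump -eV | less           # review FMA error reports for more detail on the original fault
root@storage:~# zpool scrub tank            # re-validate the pool; should drop the stale error entry
root@storage:~# zpool status -v tank        # confirm the permanent error is gone after the scrub
root@storage:~# zpool clear tank            # reset the device error counters once the scrub comes back clean
root@storage:~# zpool offline tank c1t19d0  # then retry the offline

The scrub will take a while on this pool (the last resilver took 36h58m), so I'd rather hear whether this is the right direction before starting it.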