Hi all, yesterday I had to remove a zpool device due to controller errors (I tried to replace the hard disk, but checksum errors occurred again), so I connected a fresh hard disk to another controller port.
Now I have the problem that zpool status looks as follows:

r...@storage:~# zpool status
  pool: performance
 state: DEGRADED
 scrub: none requested
config:

        NAME              STATE     READ WRITE CKSUM
        performance       DEGRADED     0     0     0
          mirror          ONLINE       0     0     0
            c1t1d0        ONLINE       0     0     0
            c2d0          ONLINE       0     0     0
          mirror          DEGRADED     0     0     0
            replacing     UNAVAIL      0     0     0  insufficient replicas
              c1t3d0s0/o  UNAVAIL      0     0     0  cannot open
              c1t3d0      UNAVAIL      0     0     0  cannot open
            c2d1          ONLINE       0     0     0

c1t3d0 is the disk that was replaced (it should now be c1t0d0; that is how it shows up in format). After attaching the new device a resilver ran, but it did not show what was being resilvered, only the remaining time.

zpool status -x also says that all pools are healthy, which I cannot believe:

r...@storage:~# zpool status -x
all pools are healthy

Can anybody tell me why I cannot replace the dead c1t3d0 with c1t0d0? I tried zpool replace, tried to add c1t0d0 as a hot spare (which worked, but it did not resilver), and tried to "zpool clear" the pool, but c1t3d0 remains.

Can anybody tell me how to get rid of c1t3d0 and heal my zpool?
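
For completeness, this is roughly what I ran (reconstructed from memory, so the exact device arguments are approximate):

r...@storage:~# zpool replace performance c1t3d0 c1t0d0
r...@storage:~# zpool add performance spare c1t0d0
r...@storage:~# zpool clear performance

After all of these, c1t3d0 still shows up in the pool as UNAVAIL under the "replacing" vdev.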