This is for newbies like myself: I was using 'zdb -l' wrong. Passing just the drive name shown by 'zpool status' or format (e.g. c6d1) didn't work; I needed the full device path with s0 added to the end:
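For anyone else checking their labels, something like this loop (the device names are the ones from my pool below; adjust for your own system) dumps the pool_guid and children lines from each disk's label:

    for d in c13d0 c4d0 c7d0 c4d1 c11t1d0 c6d0; do
        echo "== $d =="
        # zdb -l wants the slice device, not the bare drive name
        zdb -l /dev/dsk/${d}s0 | egrep 'pool_guid|children'
    done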
zdb -l /dev/dsk/c6d1s0 gives me a good-looking label (I think). The pool_guid values are the same for all the drives. I see the first 500GB drive I replaced has "children" that are all 500GB drives. The second 500GB drive I replaced has one 2TB child. All the other drives have two 2TB children.

I managed to detach one of the drives being replaced, but I could not detach the other two 2TB drives. I exported and imported, and now my pool looks like:

      pool: brick
     state: DEGRADED
    status: One or more devices has experienced an error resulting in data
            corruption.  Applications may be affected.
    action: Restore the file in question if possible.  Otherwise restore the
            entire pool from backup.
       see: http://www.sun.com/msg/ZFS-8000-8A
     scrub: none requested
    config:

            NAME                      STATE     READ WRITE CKSUM
            brick                     DEGRADED     0     0     0
              raidz1                  DEGRADED     0     0     0
                c13d0                 ONLINE       0     0     0
                c4d0                  ONLINE       0     0     0
                c7d0                  ONLINE       0     0     0
                c4d1                  ONLINE       0     0     0
                14607330800900413650  UNAVAIL      0     0     0  was /dev/dsk/c15t0d0s0
                c11t1d0               ONLINE       0     0     0
                c6d0                  ONLINE       0     0     0

    errors: 352808 data errors, use '-v' for a list

Is there some way I can take the original zpool label from the first 500GB drive I replaced and use it to fix up the other drives in the pool? What are my options here...
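For reference, the commands the status output itself points at (the UNAVAIL device path is taken from the 'was /dev/dsk/c15t0d0s0' line above) would be:

    zpool status -v brick          # list the corrupted files, as '-v' suggests
    zdb -l /dev/dsk/c15t0d0s0      # see what label the missing disk has now

I'm holding off on a scrub until I understand the label situation better.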