Zpool split is a wonderful feature and it seems to work well; the choice of which disk got which name was perfect! But there seems to be an odd anomaly (at least with b132).
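For reference, here is roughly the sequence involved (a sketch, not a cut-and-paste; I'm only assuming the standard zpool attach/split/export syntax, and I'm glossing over making the second disk bootable):

  # zpool attach rpool c0t1d0s0 c0t0d0s0    (attach second disk, wait for resilver)
  (reboot, booting from c0t0d0s0)
  # zpool split rpool spool                 (split c0t0d0s0 off into a new pool "spool")
  (reboot from c0t0d0s0 - this is where rpool shows up UNAVAIL)
  # zpool export rpool                      (workaround - complains, but clears it)
  (reboot from either disk - all clean from here on)

In more detail: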
Started with c0t1d0s0 running b132 (root pool is called rpool).
Attached c0t0d0s0 and waited for it to resilver.
Rebooted from c0t0d0s0.
zpool split rpool spool
Rebooted from c0t0d0s0 - both rpool and spool were mounted.
Rebooted from c0t1d0s0 - only rpool was mounted.

It seems to me that, for consistency, rpool should not have been mounted
when booting from c0t0d0s0; however, that's pretty harmless. But:

Rebooted from c0t0d0s0 - a couple of verbose errors on the console...

# zpool status rpool
  pool: rpool
 state: UNAVAIL
status: One or more devices could not be used because the label is
        missing or invalid.  There are insufficient replicas for the
        pool to continue functioning.
action: Destroy and re-create the pool from a backup source.
   see: http://www.sun.com/msg/ZFS-8000-5E
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         UNAVAIL      0     0     0  insufficient replicas
          mirror-0    UNAVAIL      0     0     0  insufficient replicas
            c0t1d0s0  FAULTED      0     0     0  corrupted data
            c0t0d0s0  FAULTED      0     0     0  corrupted data

# zpool status spool
  pool: spool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        spool       ONLINE       0     0     0
          c0t0d0s0  ONLINE       0     0     0

It seems that ZFS thinks c0t0d0s0 is still part of rpool as well as
being a separate pool (spool).

# zpool export rpool
cannot open 'rpool': I/O error

This worked, since zpool list doesn't show rpool any more.

Rebooted from c0t1d0s0 - no problem (no spool).
Rebooted from c0t0d0s0 - no problem (no rpool).

So the workaround seems to be to export rpool the first time you boot
from c0t0d0s0. No big deal, but it's a bit scary when it happens.

Has this been fixed in a later release?

Thanks
-- 
Frank