Hello All,

  I recently upgraded a test system that had a zpool (test_pool) from S10u5 to 
S10U6-zfsroot by simply replacing the root disks.  I exported the zpool before 
I init 5'ed the system.  On S10u5, the zpool vdevs were on c2t#d#.  On 
S10U6-zfsroot, the zpool vdevs were on c4t#d#.  I ran zpool import to see the 
pool and everything showed up ok.  
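
  For reference, the sequence around the upgrade was roughly this (the # in the 
device names stands in for the actual target/disk numbers):

    zpool export test_pool   # on S10u5, before running init 5
    # (swapped the root disks, booted S10U6-zfsroot)
    zpool import             # list importable pools; test_pool and its c4t#d# devices showed up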

  I then ran zpool import test_pool and the import was successful.  I was asked 
to upgrade the zpool version, so I performed a zpool upgrade test_pool.  Next, I 
ran zpool status test_pool.  To my surprise, my hot spare still had the old 
c2t#d# vdev name and was unavailable.  All the other zpool vdevs had the new 
c4t#d# names.  I tried exporting the zpool one more time.  I reviewed the output 
of zpool import; it showed zpool test_pool with all the correct vdev names 
(c4t#d#) as online.
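
  Spelled out, the commands were along these lines (again, # stands in for the 
real target/disk numbers):

    zpool import test_pool    # import succeeded
    zpool upgrade test_pool   # upgrade the on-disk pool version
    zpool status test_pool    # hot spare still listed by its old c2t#d# name, unavailable
    zpool export test_pool    # exported the pool one more time
    zpool import              # listing showed every vdev, spare included, as c4t#d# and online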

  I then pulled the spare, and the spare vdev name changed from c4t#d# online to 
c2t#d# unavailable.  I re-inserted the spare, and all the zpool vdevs were 
c4t#d# and online.

  I re-imported the zpool, but zpool status still showed the spare with the old 
vdev name as unavailable.  Has anyone seen this?  If so, how can I 
clean/reset/update/fix the zpool vdev names?
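
  The only fix I can think of, assuming a hot spare can simply be removed and 
re-added while the pool is imported (c2t5d0 and c4t5d0 below are made-up 
placeholders for the spare's stale and current names), is:

    zpool remove test_pool c2t5d0      # drop the spare entry that still carries the old name
    zpool add test_pool spare c4t5d0   # re-add the same disk under its new name

Is that the right approach, or is there a cleaner way to get the names updated?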

Thanks for your replies.