Hi,
I hope someone can help, because at the moment ZFS's logic seems a little askew.
I just swapped out a failing 200 GB drive that was one half of a 400 GB gstripe 
device, which I was using as one of the devices in a three-device raidz1. When the 
OS came back up after the drive had been changed, the necessary metadata was of 
course not on the new drive, so the stripe didn't exist. ZFS understandably 
complained that it couldn't open the stripe; however, it did not show the array as 
degraded. I didn't save the output, but it was just as described in this 
thread:

http://www.nabble.com/Shooting-yourself-in-the-foot-with-ZFS:-is-quite-easy-t4512790.html

I recreated the gstripe device under the same name (stripe/str1) and assumed I 
could just:

# zpool replace pool stripe/str1
invalid vdev specification
stripe/str1 is in use (r1w1e1)

It also told me to try -f, which I did, but I was greeted with the same error.
Why can I not replace a device with itself?
As the man page describes exactly this procedure, I'm a little confused.
Try as I might (online, offline, scrub), I could not get the array to rebuild, 
just as happened to the guy in that thread above. I eventually resorted to 
recreating the stripe under a different name, stripe/str2. I could then perform a:

# zpool replace pool stripe/str1 stripe/str2

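For anyone hitting the same problem, the full workaround I ended up with looked roughly like this. Note the underlying disk names (/dev/ad4, /dev/ad6) are just placeholders for my setup; substitute your own providers:

```shell
# Recreate the stripe under a NEW name -- reusing the old name
# (stripe/str1) trips the "is in use (r1w1e1)" check.
# /dev/ad4 and /dev/ad6 are placeholders for the two 200 GB halves.
gstripe label -v str2 /dev/ad4 /dev/ad6

# Replace the unopenable vdev with the freshly labelled stripe:
zpool replace pool stripe/str1 stripe/str2

# Resilvering should start; watch it with:
zpool status pool
```

It would obviously be nicer if `zpool replace pool stripe/str1` worked as the man page suggests, but the rename sidesteps the in-use check.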
Is there a reason I have to jump through these seemingly pointless hoops to 
replace a device with itself?
Many thanks.
 
 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
