Tim, I think you're looking for zpool offline.
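Something along these lines should let you swap the drive without exporting the pool. This is only a sketch; the pool name (tank), device (c1t5d0), and cfgadm attachment point (sata0/5) are placeholders for whatever your system actually uses:

    # Stop ZFS from issuing I/O to the old disk. -t makes the offline
    # state temporary, so it won't persist across a reboot.
    zpool offline -t tank c1t5d0

    # Unconfigure the now-idle device so it can be pulled safely,
    # then physically swap in the new drive.
    cfgadm -c unconfigure sata0/5

    # Bring the new drive back under Solaris control.
    cfgadm -c configure sata0/5

    # Resilver onto the new disk in the same slot.
    zpool replace tank c1t5d0

For reference, here's the zpool offline entry from the zpool(1M) man page: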
     zpool offline [-t] pool device ...

         Takes the specified physical device offline. While the
         device is offline, no attempt is made to read or write to
         the device. This command is not applicable to spares or
         cache devices.

         -t      Temporary. Upon reboot, the specified physical
                 device reverts to its previous state.


Ed Plese


On Wed, Nov 11, 2009 at 12:15 PM, Tim Cook <t...@cook.ms> wrote:
> So, I've done a bit of research and RTFM, and haven't found an answer. If
> I've missed something obvious, please point me in the right direction.
>
> Is there a way to manually fail a drive via ZFS? (this is a raid-z2
> raidset) In my case, I'm pre-emptively replacing old drives with newer,
> faster, larger drives. So far, I've only been able to come up with two
> solutions to the issue, neither of which is very graceful.
>
> The first option is to simply yank the old drive out of the chassis. I
> could go on at length about why I dislike doing that, but I think it's safe
> to say everyone agrees this isn't a good option.
>
> The second option is to export the zpool, then I can cfgadm -c disconnect
> the drive, and finally gracefully pull it from the system. Unfortunately,
> this means my data has to go offline. While that's not a big deal for a
> home box, it is for something in the enterprise with uptime concerns.
>
> From my experimentation, you can't disconnect or unconfigure a drive that
> is part of a live zpool. So, is there a way to tell zfs to pre-emptively
> fail it so that you can use cfgadm to put the drive into a state for a
> graceful hotswap? Am I just missing something obvious? Detach seems to
> only apply to mirrors and hot spares.
>
> --Tim

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss