Nenad,
I've seen this solution offered before, but I would not recommend it
except as a last resort, or unless you don't care about the health of
the original pool.
Removing a device from an exported pool can be very bad, depending
on the pool's redundancy. You might not get all your data back unless
you put the disk back.
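If the goal is just to take one disk out of a mirror, a safer approach is
to tell ZFS about it first, while the pool is still imported. A rough
sketch, using the device names from the output below (pick one, depending
on whether the disk should come back later):

# zpool offline epool c6t7d0
# zpool detach epool c6t7d0

'zpool offline' keeps the disk as a pool member but stops I/O to it, so it
can be reattached later with 'zpool online'; 'zpool detach' removes it from
the mirror permanently.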
See the output below.
Definitely not something to do to a pool with data on a production system.
Cindy
# zpool status epool
  pool: epool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        epool       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c7t6d0  ONLINE       0     0     0
            c7t5d0  ONLINE       0     0     0
            c5t5d0  ONLINE       0     0     0
            c6t6d0  ONLINE       0     0     0
            c6t5d0  ONLINE       0     0     0
            c6t7d0  ONLINE       0     0     0

errors: No known data errors
# cfgadm | grep c6t7d0
sata4/7::dsk/c6t7d0 disk connected configured ok
# zpool export epool
# cfgadm -c unconfigure sata4/7
Unconfigure the device at: /devices/[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]:7
This operation will suspend activity on the SATA device
Continue (yes/no)? y
# zpool import epool
# zpool status epool
  pool: epool
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: resilver completed with 0 errors on Thu Apr 26 11:38:21 2007
config:

        NAME        STATE     READ WRITE CKSUM
        epool       DEGRADED     0     0     0
          mirror    DEGRADED     0     0     0
            c7t6d0  ONLINE       0     0     0
            c7t5d0  ONLINE       0     0     0
            c5t5d0  ONLINE       0     0     0
            c6t6d0  ONLINE       0     0     0
            c6t5d0  ONLINE       0     0     0
            c6t7d0  UNAVAIL      0     0     0  cannot open

errors: No known data errors
#
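For completeness, recovering from that state is what the 'action:' line
suggests. Roughly, using the same attachment point and device as above:

# cfgadm -c configure sata4/7
# zpool online epool c6t7d0
# zpool scrub epool

Once the device is back online, ZFS resilvers it; the scrub is optional but
a reasonable sanity check afterwards.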
Nenad Cimerman wrote:
You can - easily:
# zpool export mypool
Then you take out one of the disks and put it into another system or a safe
place.
Afterwards you simply import the pool again:
# zpool import mypool
Note - you can NOT import both disks separately, as they are both tagged as
belonging to the same zpool.
I just tried this using files as pool devices. I didn't test it with real
disks/slices, but it shouldn't make any difference.
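A quick file-backed test along those lines might look like this (the sizes
and file names here are only examples, not the exact commands used):

# mkfile 100m /tmp/d1 /tmp/d2
# zpool create testpool mirror /tmp/d1 /tmp/d2
# zpool export testpool
# mv /tmp/d2 /some/safe/place
# zpool import -d /tmp testpool

Note that a pool backed by plain files needs 'zpool import -d <dir>' to
point the import at the directory holding the files.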
HTH,
Nenad.
PS: I know, the reply is pretty late... I just read this thread.
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss