OK, now it seems to be doing what I wanted:
bash-3.00# zpool status
  pool: mypool
 state: ONLINE
 scrub: resilver completed with 0 errors on Thu May 31 23:01:09 2007
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0

errors: No known data errors
bash-3.00# zpool detach mypool c1t3d0
bash-3.00# zpool status
  pool: mypool
 state: ONLINE
 scrub: resilver completed with 0 errors on Thu May 31 23:01:09 2007
config:

        NAME      STATE     READ WRITE CKSUM
        mypool    ONLINE       0     0     0
          c1t2d0  ONLINE       0     0     0

errors: No known data errors
bash-3.00# zpool replace mypool c1t2d0 emcpower0a
bash-3.00# zpool status
  pool: mypool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress, 0.00% done, 17h50m to go
config:

        NAME              STATE     READ WRITE CKSUM
        mypool            ONLINE       0     0     0
          replacing       ONLINE       0     0     0
            c1t2d0        ONLINE       0     0     0
            emcpower0a    ONLINE       0     0     0

errors: No known data errors
bash-3.00#
Thank you to everyone who helped me with this...
Chris
On Fri, 1 Jun 2007, Will Murnane wrote:
On 6/1/07, Krzys <[EMAIL PROTECTED]> wrote:
bash-3.00# zpool list
NAME      SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
mypool     68G   53.1G   14.9G    78%  ONLINE     -
mypool2   123M   83.5K    123M     0%  ONLINE     -
Are you sure you've allocated as large a LUN as you thought initially?
Perhaps ZFS is doing something funky with it; does putting UFS on it
show a large filesystem or a small one?
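A quick way to sanity-check this is to compare the SIZE column that `zpool list` reports against what was actually provisioned. Below is a minimal sketch in Python that parses output like the listing above and flags a pool that came up smaller than expected; the `expected` sizes are hypothetical figures for illustration, not values from this thread.

```python
def parse_size(s):
    """Convert a zpool size string like '68G' or '123M' to bytes."""
    units = {'K': 1 << 10, 'M': 1 << 20, 'G': 1 << 30, 'T': 1 << 40}
    if s[-1] in units:
        return float(s[:-1]) * units[s[-1]]
    return float(s)

# Captured `zpool list` output (matches the listing quoted above).
listing = """\
NAME      SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
mypool     68G   53.1G   14.9G    78%  ONLINE     -
mypool2   123M   83.5K    123M     0%  ONLINE     -"""

pools = {}
for line in listing.splitlines()[1:]:          # skip the header row
    name, size, used, avail, cap, health, altroot = line.split()
    pools[name] = parse_size(size)

# Hypothetical provisioned sizes for comparison (assumed, not from the thread).
expected = {'mypool': 68 * (1 << 30), 'mypool2': 100 * (1 << 30)}

for name, want in expected.items():
    got = pools[name]
    # Allow a little slack for metadata overhead before flagging a mismatch.
    status = 'ok' if got >= want * 0.95 else 'SMALLER than provisioned'
    print(f'{name}: {got / (1 << 30):.1f} GiB reported, {status}')
```

Run against the listing above, this flags mypool2 as far smaller than a large LUN would be, which is the kind of discrepancy Will is asking about.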
Will
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss