Hey Richard,
I believe 6844090 would be a candidate for an s10 backport.
The 6844090 behavior worked nicely when I replaced a disk of the same
physical size, even though the disks were not identical.
Another flexible storage feature is George's autoexpand property (Nevada
build 117): you can attach or replace a disk in a pool with a LUN that is
larger than the existing pool size, but keep the LUN size constrained by
leaving autoexpand set to off.
Then, if you decide that you want to use the expanded LUN, you can set
autoexpand to on, or you can detach the LUN and use it in another pool
where you need the expanded size.
(The autoexpand feature description is in the ZFS Admin Guide on the
opensolaris/...zfs/docs site.)
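Here is a minimal sketch of that autoexpand workflow; the pool name tank
and the device names are placeholders only, not taken from the example
later in this mail:

# zpool replace tank c2t3d0 c2t5d0
<larger LUN in place; with autoexpand=off the pool size stays the same>
# zpool set autoexpand=on tank
<the pool now expands to use the full size of the larger LUN>
# zpool list tank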
In contrast to the autoexpand behavior, I noticed recently that on current
Solaris 10 releases you can use zpool attach/detach to attach a larger
disk for eventual replacement purposes, and the pool size is expanded
automatically, even on a live root pool, without the autoexpand feature
and with no import/export/reboot needed. (Well, I always reboot to make
sure the new disk boots before detaching the existing disk.)
I did this recently to expand a 16-GB root pool to a 68-GB root pool.
See the example below.
Cindy
# zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
rpool  16.8G  5.61G  11.1G  33%  ONLINE  -
# zpool status
pool: rpool
state: ONLINE
scrub: none requested
config:
        NAME         STATE     READ WRITE CKSUM
        rpool        ONLINE       0     0     0
          c1t18d0s0  ONLINE       0     0     0
errors: No known data errors
# zpool attach rpool c1t18d0s0 c1t1d0s0
# zpool status rpool
pool: rpool
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress for 0h3m, 51.35% done, 0h3m to go
config:
        NAME           STATE     READ WRITE CKSUM
        rpool          ONLINE       0     0     0
          mirror       ONLINE       0     0     0
            c1t18d0s0  ONLINE       0     0     0
            c1t1d0s0   ONLINE       0     0     0
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
/dev/rdsk/c1t1d0s0
<boot from new disk to make sure replacement disk boots>
# init 0
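A sketch of the boot test, run from the OpenBoot prompt that init 0 drops
you to (the disk1 alias is only an illustration; the right alias or device
path depends on your hardware):

ok boot disk1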
# zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
rpool  16.8G  5.62G  11.1G  33%  ONLINE  -
# zpool status
pool: rpool
state: ONLINE
scrub: none requested
config:
        NAME           STATE     READ WRITE CKSUM
        rpool          ONLINE       0     0     0
          mirror       ONLINE       0     0     0
            c1t18d0s0  ONLINE       0     0     0
            c1t1d0s0   ONLINE       0     0     0
errors: No known data errors
# zpool detach rpool c1t18d0s0
# zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
rpool  68.2G  5.62G  62.6G   8%  ONLINE  -
# cat /etc/release
Solaris 10 5/09 s10s_u7wos_08 SPARC
Copyright 2009 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 30 March 2009
On 08/05/09 17:20, Richard Elling wrote:
> On Aug 5, 2009, at 4:06 PM, cindy.swearin...@sun.com wrote:
>> Brian,
>> CR 4852783 was updated again this week so you might add yourself or
>> your customer to continue to be updated.
>> In the meantime, a reminder is that a mirrored ZFS configuration
>> is flexible in that devices can be detached (as long as the redundancy
>> is not compromised) or replaced as long as the replacement disk is an
>> equivalent size or larger. So, you can move storage around if you
>> need to in a mirrored ZFS config until 4852783 integrates.
>
> Thanks Cindy,
> This is another way to skin the cat. It works for simple volumes, too.
> But there are some restrictions, which could impact the operation when a
> large change in vdev size is needed. Is this planned to be backported
> to Solaris 10?
>
> CR 6844090 has more details.
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6844090
>
> -- richard