2012-05-03 9:44, Jordi Espasa Clofent wrote:
> Note, as you can see, slice 0 is used for 'rpool' and slice 7
> is used for 'opt'. The autoexpand property is enabled in 'rpool' but
> is disabled in 'opt'.
>
> This machine is a virtual one (VMware), so I can enlarge the disk
> easily if I need to. Let's say I enlarge the disk by 10 GB:

> PS. I know perfectly well how to expand any zpool by just adding a new
> device; actually I think that is even better, but that's not the point.

The rpool has some limitations compared to other pools; for example,
it cannot be concatenated or striped across several devices. Each
component of the rpool (a single device, or one side of the mirror)
must be self-sufficient, so the system can still boot from it on its
own after a catastrophic failure.

So adding a new device to rpool won't help.
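For example (the second device name here is made up), you can mirror
the root pool, but you cannot add a second top-level device to it:

  zpool attach rpool c0t0d0s0 c0t1d0s0   # OK: attaches a mirror half to the root pool
  zpool add rpool c0t1d0s0               # should be refused: a root pool can't have multiple vdevs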

As for autoexpansion, it works "in place": if the device which
contains the pool becomes larger, the pool can grow too. In the case
of rpool, that device is the c0t0d0s0 slice. After you enlarge the
virtual disk, you also need tools like format, fdisk and/or parted
to grow the Solaris partition (in the outer MBR partition table),
then relocate the "opt" pool's sectors toward the end of the disk
while that pool is exported and inactive, then relabel the Solaris
slices with format so that s7 gets its new "address" and s0 is
expanded. At that point the single device backing rpool becomes
bigger and the pool should autoexpand.
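Once the slice itself has been grown and relabeled, the ZFS side of
the expansion should amount to just this (assuming the device really
is c0t0d0s0):

  zpool get autoexpand rpool        # confirm the property is really on
  zpool set autoexpand=on rpool     # set it if it is not
  zpool online -e rpool c0t0d0s0    # ask ZFS to use the expanded slice right away
  zpool list rpool                  # SIZE should now show the larger capacity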

The tricky part is the relocation of opt. I think you can do this
with a series of dd invocations moving chunks of, say, 1 GB each,
starting from the end of its slice (end minus 1 GB) and working
backwards, because by the time you're done, 2/3 of the original opt
slice would be overwritten by its own relocated data. A rough sketch
follows below.
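Something along these lines, in ksh or bash; every number and the
device path are placeholders, to be computed from prtvtoc and the
fdisk layout of your real disk, and only ever run against the
exported pool with a verified backup at hand:

  SRC_GB=30    # hypothetical: old s7 starts 30 GB into the disk
  DST_GB=40    # hypothetical: new s7 location starts 40 GB into the disk
  SIZE_GB=10   # hypothetical: the opt slice is 10 GB long
  i=$SIZE_GB
  while [ $i -gt 0 ]; do
      i=$((i - 1))
      # copy the last chunk first, so the overlapping tail of the old
      # slice is read before its relocated copy lands on top of it
      dd if=/dev/rdsk/c0t0d0p0 of=/dev/rdsk/c0t0d0p0 bs=1024k count=1024 \
         skip=$(( (SRC_GB + i) * 1024 )) seek=$(( (DST_GB + i) * 1024 ))
  done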

It would likely be simpler and safer to just back up the data
from your nearly empty opt pool (i.e. zfs send | zfs recv its
datasets into rpool), destroy opt, relabel the Solaris slices
with format, expand rpool and create a new opt. You should back
the data up anyway before such dangerous experiments.
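A minimal sketch of that route, assuming the pool really is named opt
and using a made-up rpool/optbackup dataset as the parking spot:

  zfs snapshot -r opt@move                          # recursive snapshot of opt's datasets
  zfs send -R opt@move | zfs recv rpool/optbackup   # park them inside rpool
  zpool destroy opt                                 # free slice s7
  # ... grow the Solaris partition, relabel s7/s0 with format, let rpool expand ...
  zpool create opt c0t0d0s7                         # recreate opt on the relocated s7
  # then send/recv the datasets back out of rpool/optbackup the same way, if desired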

But for the sheer excitement of the experiment, you can give
the dd series a try and tell us how it goes.

HTH,
//Jim

