On Wed, 2008-06-11 at 07:40 -0700, Richard L. Hamilton wrote:
> > I'm not even trying to stripe it across multiple
> > disks, I just want to add another partition (from the
> > same physical disk) to the root pool.  Perhaps that
> > is a distinction without a difference, but my goal is
> > to grow my root pool, not stripe it across disks or
> > enable raid features (for now).
> > 
> > Currently, my root pool is using c1t0d0s4 and I want
> > to add c1t0d0s0 to the pool, but can't.
> > 
> > -Wyllys
> 
> Right, that's how it is right now (which the other guy seemed to
> be suggesting might change eventually, but nobody knows when
> because it's just not that important compared to other things).
> 
> AFAIK, if you could shrink the partition whose data is after
> c1t0d0s4 on the disk, you could grow c1t0d0s4 by that much,
> and I _think_ zfs would pick up the growth of the device automatically.

This works.  ZFS doesn't notice the size increase until you reboot.
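A minimal sketch of that sequence, assuming the pool is named rpool and
sits on c1t0d0s4 as in Wyllys' layout (slice names here are only
illustrative):

        zpool list rpool        (note the current SIZE)
        format
        format> partition
                <shrink or delete the slice after s4, then grow s4>
        init 6  (reboot)
        zpool list rpool        (SIZE now reflects the larger slice)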

I've been installing systems over the past year with a slice arrangement
intended to make it easy to go to zfs root:

        s0 with a ZFS pool at the start of the disk (creation sketched below)
        s1 swap
        s3 UFS boot environment #1
        s4 UFS boot environment #2
        s7 SVM metadb (if mirrored root)
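
(The pool on s0 has to exist before lucreate can migrate a BE into it.
A minimal sketch, assuming the pool name rpool; the mirrored form only
applies if a second disk is laid out the same way:

        zpool create rpool c1t0d0s0
        zpool create rpool mirror c1t0d0s0 c1t1d0s0     (mirrored variant)

ZFS root also wants an SMI-labeled slice rather than whole-disk EFI.)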

I was happy to discover that this paid off.  Once I upgraded a BE to
nv_90 and was running on it, it was a matter of:

        lucreate -p $pool -n nv_90zfs
        luactivate nv_90zfs

        init 6  (reboot)

        ludelete the other BEs

        format
        format> partition
                <delete slices other than s0>
                <grow s0 to full disk>

        reboot

and you're all ZFS all the time.
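
To sanity-check the result, assuming the BE and pool names above:

        lustatus                (only nv_90zfs should remain)
        zpool status rpool      (pool still on c1t0d0s0, now the whole disk)
        zfs list                (root filesystem is a dataset in rpool)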

                                                - Bill
