[moved from request-sponsor to zfs-discuss]

Start of thread:
  http://mail.opensolaris.org/pipermail/request-sponsor/2007-April/001661.html
ARC proposal
  http://mail.opensolaris.org/pipermail/request-sponsor/2007-April/001677.html

On 4/14/07, Jeremy Teo <[EMAIL PROTECTED]> wrote:
Attached is my preliminary ARC proposal. Your comments and opinion are
highly appreciated.

In your proposal you say...

In order to provide the functionality we require, we perform all
the other operations we perform currently during a detach, and omit
the erasure of the vdev label on the target vdev.

This provides the basic functionality administrators require when
splitting and importing a mirror. However, it may be necessary
to rewrite the vdev label on the removed vdev to restructure the
zpool into a single vdev zpool. The current prototype allows the
import of the split vdev, but the imported zpool's vdev topology
is identical to the original zpool: i.e. zpool status will show
that the imported zpool is missing mirror vdevs.

What happens if the pool is subsequently deported?  In the typical
usage case, how does one know which copy to bring in the next time
"zpool import" is run?  It seems as though both copies of the zpool
are equally valid.  What about on reboot?

Based upon the description in Section 1.3.3 of the ZFS On-Disk
Specification Draft[1], it seems as though it would be essential for
normal operation to, at a minimum, assign and write a new pool_guid.
Ideally the administrator would be required to assign a (likely
new) name at split time.  That is:

zpool detach pool -s new_name device ...

[1] http://opensolaris.org/os/community/zfs/docs/ondiskformat0822.pdf

  4.5. Interfaces:

A new "-s" ("split") flag will be added to the "zpool detach" command.
Issuing a zpool detach command with the -s flag will detach the target
vdev from the zpool, without erasing the vdev label. This will allow
the import of the detached vdev on another host.

There should be some sanity check to be sure that after the split that
you end up with a workable zpool.  That is, if you have the following

# zpool create tank1 mirror c2t0d0 c3t0d0 \
       mirror c2t1d0 c3t1d0

Then the following should fail:

# zpool detach tank1 -s tank2 c3t0d0

And the following should succeed:

# zpool detach tank1 -s tank2 c3t0d0 c3t1d0
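To make the rule behind those two examples concrete, here is a rough
sketch in Python of the sanity check I have in mind (this is purely
illustrative, not zpool code; the topology representation and the
split_ok() helper are my own invention): a split is workable only if
the detached set takes exactly one side of every top-level mirror, so
that both the remaining pool and the new pool keep a complete topology.

```python
# Hypothetical sketch of the proposed sanity check; not actual zpool code.
def split_ok(mirrors, detach):
    """mirrors: one tuple of leaf devices per top-level mirror vdev.
    detach: the devices named on the 'zpool detach -s' command line."""
    detach = set(detach)
    # refuse devices that are not part of the pool at all
    if detach - {d for m in mirrors for d in m}:
        return False
    # exactly one leaf of every mirror must be detached
    return all(len(detach & set(m)) == 1 for m in mirrors)

pool = [("c2t0d0", "c3t0d0"), ("c2t1d0", "c3t1d0")]

print(split_ok(pool, ["c3t0d0"]))            # False: second mirror untouched
print(split_ok(pool, ["c3t0d0", "c3t1d0"]))  # True: one side of each mirror
```

The first call corresponds to the command that should fail above, the
second to the one that should succeed.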

And when you get to larger pools, it would be really nice for the
following to be supported:

# zpool detach tank1 -s tank2 c3

That would detach all devices on controller 3.   An alternative syntax
would likely be to specify "c3*" (quoted to avoid shell expansion),
which may be more powerful in the cases where the devices to be split
are on the same controller but with just a different target or LUN
range.
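The two selection syntaxes differ only in how devices are matched; a
quick Python sketch shows the distinction (again just an illustration
of the matching semantics, using an assumed device list, not anything
the command itself would run):

```python
# Sketch of the two device-selection styles discussed above; hypothetical.
from fnmatch import fnmatch

devices = ["c2t0d0", "c2t1d0", "c3t0d0", "c3t1d0"]

# Bare controller name "c3": a simple prefix match
by_controller = [d for d in devices if d.startswith("c3")]

# Quoted glob such as "c3t[01]d0": can narrow to a target/LUN range
# on one controller, which a bare controller name cannot express
by_glob = [d for d in devices if fnmatch(d, "c3t[01]d0")]

print(by_controller)  # ['c3t0d0', 'c3t1d0']
print(by_glob)        # ['c3t0d0', 'c3t1d0']
```

Here the two selections happen to coincide, but with more targets per
controller the glob form could pick out a subset the controller name
alone could not.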

--
Mike Gerdts
http://mgerdts.blogspot.com/
_______________________________________________
zfs-discuss mailing list
[EMAIL PROTECTED]
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
