On Wed, Jul 2, 2008 at 2:10 AM, Jeff Bonwick <[EMAIL PROTECTED]> wrote:
>> How difficult would it be to write some code to change the GUID of a pool?
>
> As a recreational hack, not hard at all.  But I cannot recommend it
> in good conscience, because if the pool contains more than one disk,
> the GUID change cannot possibly be atomic.  If you were to crash or
> lose power in the middle of the operation, your data would be gone.
>
> What problem are you trying to solve?

I've been trying to figure out how to do this with iSCSI LUNs
cloned on a storage device.  The basic flow is shown below.
Note that things get really sticky around step 4a.

If I don't have a way to change the GUID I think that I am stuck
cloning via zfs send|receive or cpio.
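For reference, the send|receive fallback would look roughly like the
sketch below.  Pool and dataset names are placeholders of mine, not
anything from an actual deployment:

```shell
# Hypothetical fallback if the pool GUID cannot be changed:
# replicate the master zone root into a per-zone dataset on an
# already-imported pool, so no duplicate-GUID import is needed.
zfs snapshot master/zoneroot@golden
zfs send master/zoneroot@golden | zfs receive tank/zones/newzone
zfs set mountpoint=/zones/newzone tank/zones/newzone
```

The downside is that the copy happens on the host rather than as an
instant clone on the storage device, which is the whole thing I am
trying to avoid.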


1. On Vendor X's storage device (X may or may not be Sun)
   a. Create an iSCSI LUN
   b. Grant sun1 access to this LUN
2. On Solaris box named sun1 create and customize a master zone (or ldom)
   a. Make LUN available
   b. zpool create master /dev/dsk/$whatever
   c. zfs set mountpoint=/zones/master master
   d. zonecfg -z master create
   e. zonecfg -z master set zonepath=/zones/master
   f. zoneadm -z master install
   g. Customize master zone as needed
   h. zoneadm -z master detach
   i. zpool export master
3. On storage device make clones of master device
   a. Make many clones of master, making each into a LUN
   b. Provision each LUN to several servers
4. Final customization on one of servers from 3b
   a. Import each LUN with a new zpool name
   b. Set mountpoint to /zones/$newzonename
   c. Attach zone (fix zonepath, sysidcfg, etc.)
   d. Detach zone
   e. Export zpool
5. Configure HA for each zone
   a. Each zone should be able to fail over independently of others
   b. Set start-up host based on load, priorities, etc.
   c. Start all zone workloads
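Step 4 is where the duplicate GUID bites: every clone carries the
master pool's GUID, so zpool import cannot tell the clones apart.  If
the GUID problem were solved, step 4 for one clone might look roughly
like this (zone and pool names are placeholders, and the sysidcfg
handling is only sketched):

```shell
# Hypothetical per-clone customization (step 4), one clone at a time.
NEWZONE=zone01

# 4a. Import the cloned LUN's pool under a new name.
zpool import -d /dev/dsk master $NEWZONE
# 4b. Point the mountpoint at the new zonepath.
zfs set mountpoint=/zones/$NEWZONE $NEWZONE
# 4c. Recreate the zone configuration from the detached zone,
#     drop in a sysidcfg, and attach.
zonecfg -z $NEWZONE create -a /zones/$NEWZONE
cp sysidcfg /zones/$NEWZONE/root/etc/sysidcfg
zoneadm -z $NEWZONE attach
# 4d/4e. Detach and export so HA can place the zone anywhere.
zoneadm -z $NEWZONE detach
zpool export $NEWZONE
```

Without a GUID change, the import in the first command fails (or picks
an arbitrary clone) as soon as more than one clone of the master is
visible to the host.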

While the various zones are running, steps 3-5 will likely be repeated
from time to time as new zones need to be provisioned.

Notice that in this arrangement the only thing holding important data
is the shared storage - each server is a dataless FRU.  If Vendor X
supports deduplication of live data (hint), I only need about 25% of
the space that I would need without clones + deduplication.

-- 
Mike Gerdts
http://mgerdts.blogspot.com/
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss