So let me work through a scenario of how clone promotion might work
in conjunction with liveupgrade once we have bootable zfs datasets:

1.  We are booted off the dataset  pool/root_sol10_u4

2.  We want to upgrade to U5.  So we begin by lucreating a new
      boot environment (BE) as a clone of the current root

       #  lucreate -n root_sol10_u5 -m /:pool/root_sol10_u4:zfs

       By default, liveupgrade will use zfs cloning when creating a new
       BE from an existing zfs dataset.  So behind the scenes, lucreate
       will execute:

       # zfs snapshot pool/root_sol10_u4@snap
       # zfs clone pool/root_sol10_u4@snap pool/root_sol10_u5

        (this will take only seconds, and require no pre-allocated space)

3.  Now we do the luupgrade of the newly lucreate'd BE to U5.
     Note that the only space required for the upgrade is the
     space needed for packages that are new or modified in U5.
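
      For example, with the U5 install image at some hypothetical
      location:

       # luupgrade -u -n root_sol10_u5 -s /net/installserver/export/s10u5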

4.  So the administrator tries out the new BE by luactivate'ing it
     and booting it.  (This is where we need a menuing interface
     at boot time, so we can choose among the various bootable
     datasets in the pool.  Conveniently, GRUB provides us with one.)
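
      Presumably the usual Live Upgrade activation sequence applies:

       # luactivate root_sol10_u5
       # init 6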

5.  The new BE works fine, so the administrator decides to promote
     the BE's dataset (which is still a clone) to primary dataset status.
     Here I'm not sure what's best:  should liveupgrade promote the
     dataset as part of its management of boot environments?  Or
     should the administrator have to (or be able to) promote a
     bootable dataset explicitly?  I'll have to give that one a bit of
     thought, but one way or another, this happens:

     #  zfs promote pool/root_sol10_u5

6.   We can rename the newly promoted BE if we want, but let's
      assume we leave it with its name "root_sol10_u5".  Now if we
      want to get rid of the old U4 root dataset, we should do the
      following:

       # ludelete root_sol10_u4

       which, in addition to the usual liveupgrade tasks to delete the
       BE, will do this:

       # zfs destroy pool/root_sol10_u4

So, for the purposes of zfs boot and liveupgrade, I think your new
"promote" function works very well.  Am I missing anything?

Lori
     

Matthew Ahrens wrote:
FYI folks, I have implemented "clone promotion", also known as "clone
swap" or "clone pivot", as described in this bug report:

	6276916 support for "clone swap"

Look for it in an upcoming release...

Here is a copy of the PSARC case, which is currently under review.

1. Introduction
    1.1. Project/Component Working Name:
	 ZFS Clone Promotion
    1.2. Name of Document Author/Supplier:
	 Author:  Matt Ahrens
    1.3  Date of This Document:
	06 May, 2006
4. Technical Description
ZFS provides the ability to create read-only snapshots of any filesystem,
and to create writeable clones of any snapshot.  Suppose that F is a
filesystem, S is a snapshot of F, and C is a clone of S.  Topologically,
F and C are peers: that is, S is a common origin point from which F and C
diverge.  F and C differ only in how their space is accounted and where
they appear in the namespace.

After using a clone to explore some alternate reality (e.g. to test a patch),
it's often desirable to 'promote' the clone to 'main' filesystem status --
that is, to swap F and C in the namespace.  This is what 'zfs promote' does.
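
A small sketch with hypothetical names (filesystem pool/F, snapshot S,
clone pool/C) shows the swap through the "origin" property:

    # zfs snapshot pool/F@S       # S: common origin point
    # zfs clone pool/F@S pool/C   # C diverges from F at S
    # zfs get origin pool/C       # value: pool/F@S
    # zfs promote pool/C          # reverse the dependency; S now belongs to C
    # zfs get origin pool/F       # value: pool/C@S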

Here are man page changes:

in the SYNOPSIS section (after 'zfs clone'):
  
     zfs promote <clone filesystem>
    

in the DESCRIPTION - Clones section (only last paragraph is added):
  Clones
     A clone is a writable volume or file  system  whose  initial
     contents are the same as another dataset. As with snapshots,
     creating a clone is nearly instantaneous, and initially con-
     sumes no additional space.

     Clones can only be created from a snapshot. When a  snapshot
     is  cloned,  it  creates  an implicit dependency between the
     parent and child. Even though the clone is created somewhere
     else  in the dataset hierarchy, the original snapshot cannot
     be destroyed as long as a clone exists.  The  "origin"  pro-
     perty exposes this dependency, and the destroy command lists
     any such dependencies, if they exist.

  
   The clone parent-child dependency relationship can be reversed by
   using the _promote_ subcommand.  This causes the "origin"
   filesystem to become a clone of the specified filesystem, which
   makes it possible to destroy the filesystem that the clone was
   created from.
    

in the SUBCOMMANDS section (after 'zfs clone'):
  
   zfs promote <clone filesystem>

      Promotes a clone filesystem to no longer be dependent on its
      "origin" snapshot.  This makes it possible to destroy the
      filesystem that the clone was created from.  The dependency
      relationship is reversed, so that the "origin" filesystem
      becomes a clone of the specified filesystem.

      The snapshot that was cloned, and any snapshots previous to
      this snapshot, will now be owned by the promoted clone.  The
      space they use will move from the "origin" filesystem to the
      promoted clone, so it must have enough space available to
      accommodate these snapshots.  Note: no new space is consumed
      by this operation, but the space accounting is adjusted.  Also
      note that the promoted clone must not have any conflicting
      snapshot names of its own.  The _rename_ subcommand can be
      used to rename any conflicting snapshots.
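
For instance, reusing the names from the example below: if the clone
pool/project/beta had its own snapshot named @today, colliding with
the origin's @today being transferred, the clone's snapshot could be
renamed before promoting:

      # zfs rename pool/project/beta@today pool/project/beta@beta-today
      # zfs promote pool/project/beta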
    
 
in the EXAMPLES section (after 'Example 8: Creating a clone'):
  
     Example 9: Promoting a Clone

     The following commands illustrate how to test out changes to a
     filesystem, and then replace the original filesystem with the
     changed one, using clones, clone promotion, and renaming.

      # zfs create pool/project/production
        <populate /pool/project/production with data>
      # zfs snapshot pool/project/production@today
      # zfs clone pool/project/production@today pool/project/beta
        <make changes to /pool/project/beta and test them>
      # zfs promote pool/project/beta
      # zfs rename pool/project/production pool/project/legacy
      # zfs rename pool/project/beta pool/project/production
        <once the legacy version is no longer needed, it can be
        destroyed>
      # zfs destroy pool/project/legacy
    

6. Resources and Schedule
    6.4. Steering Committee requested information
   	6.4.1. Consolidation C-team Name:
		ON
    6.5. ARC review type: FastTrack

----- End forwarded message -----