I've installed an s10u6 machine with no UFS partitions at all.  I created a
dataset for zones and one for a zone named "default", then ran lucreate and
luactivate and booted off the new BE (the rough command sequence is sketched
after the error output below).  All of that appears to go just fine, though
I've found that I MUST call the zone dataset zoneds for some reason, or Live
Upgrade will rename it to that for me.  When I try to delete the old BE, it
fails with the following message:

# ludelete s10-RC
ERROR: cannot mount '/zoneds': directory is not empty
ERROR: cannot mount mount point </.alt.tmp.b-VK.mnt/zoneds> device <rpool/ROOT/s10-RC/zoneds>
ERROR: failed to mount file system <rpool/ROOT/s10-RC/zoneds> on </.alt.tmp.b-VK.mnt/zoneds>
ERROR: unmounting partially mounted boot environment file systems
ERROR: cannot mount boot environment by icf file </etc/lu/ICF.1>
ERROR: Cannot mount BE <s10-RC>.
Unable to delete boot environment.
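
For reference, the rough sequence was as follows; the zfs create lines are my
reconstruction of the zone-dataset setup (only the lucreate options below are
verbatim from what I actually ran):

# zfs create rpool/ROOT/s10-RC/zoneds           # zone dataset (apparently must be named zoneds)
# zfs create rpool/ROOT/s10-RC/zoneds/default   # dataset for the zone "default"
# lucreate -n <BE> -p rpool
# luactivate <BE>
# init 6                                        # boot the new BE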

It's obvious that luactivate is not correctly resetting the mount points of
/zoneds and /zoneds/default (the zone named default) in the old BE so that
they sit under /.alt like the rest of the datasets under rpool/ROOT/s10-RC:

# zfs list |grep s10-RC
rpool/ROOT/s10-RC                  14.6M  57.3G  1.29G  /.alt.tmp.b-VK.mnt/
rpool/ROOT/s10-RC/var              2.69M  57.3G  21.1M  /.alt.tmp.b-VK.mnt//var
rpool/ROOT/s10-RC/zoneds           5.56M  57.3G    19K  /zoneds
rpool/ROOT/s10-RC/zoneds/default   5.55M  57.3G  29.9M  /zoneds/default

Obviously I can reset the mount points by hand with "zfs set mountpoint," but
this seems like something that luactivate and the subsequent boot should
handle.  Is this a bug, or am I missing a step or misconfiguring something?

Also, once I run ludelete on a BE, it seems like it should clean up that BE's
old ZFS filesystems itself (here, everything under rpool/ROOT/s10-RC) instead
of me having to do an explicit zfs destroy.
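
(Concretely, after ludelete I still have to run something like this to reclaim
the space; -r recursively destroys the BE's child datasets and snapshots:)

# zfs destroy -r rpool/ROOT/s10-RC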

The very weird thing is that, if I run lucreate again (new BE named bar) and
boot off the new BE, it does the right thing with the old BE (foo):

rpool/ROOT/bar                     1.52G  57.2G  1.29G  /
rpool/ROOT/bar@foo                 89.1M      -  1.29G  -
rpool/ROOT/bar@bar                 84.1M      -  1.29G  -
rpool/ROOT/bar/var                 24.7M  57.2G  21.2M  /var
rpool/ROOT/bar/var@foo             2.64M      -  21.0M  -
rpool/ROOT/bar/var@bar              923K      -  21.2M  -
rpool/ROOT/bar/zoneds              32.7M  57.2G    20K  /zoneds
rpool/ROOT/bar/zoneds@foo            18K      -    19K  -
rpool/ROOT/bar/zoneds@bar            19K      -    20K  -
rpool/ROOT/bar/zoneds/default      32.6M  57.2G  29.9M  /zoneds/default
rpool/ROOT/bar/zoneds/default@foo  2.61M      -  27.0M  -
rpool/ROOT/bar/zoneds/default@bar   162K      -  29.9M  -
rpool/ROOT/foo                     2.93M  57.2G  1.29G  /.alt.foo
rpool/ROOT/foo/var                  818K  57.2G  21.2M  /.alt.foo/var
rpool/ROOT/foo/zoneds               270K  57.2G    20K  /.alt.foo/zoneds
rpool/ROOT/foo/zoneds/default       253K  57.2G  29.9M  /.alt.foo/zoneds/default

And then it DOES clean up the ZFS filesystems when I run ludelete.  Does
anyone know where the discrepancy comes from?  The same lucreate command
(-n <BE> -p rpool) was used both times.
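
(To spell that out, the two runs were as follows, with the BE names
reconstructed from the listing above, so treat them as illustrative:)

# lucreate -n foo -p rpool   # from the original s10-RC BE; ludelete s10-RC later fails
# lucreate -n bar -p rpool   # from foo; ludelete foo later succeeds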




