On Fri, 28 Aug 2009 casper....@sun.com wrote:

> >luactivate has been running for about 45 minutes. I'm assuming it will
> >probably take at least the 1.5 hours of the lumount (particularly
> >considering it appears to be running a lumount process under the hood) if
> >not the 3.5 hours of lucreate.

Eeeek, the luactivate command ended up taking about *7 hours* to complete.
And I'm not sure it was even successful; output excerpts are at the end of
this message.

> Do you have a lot of files in /etc/mnttab, including nfs filesystems
> mounted from "server1,server2:/path"?

There's only one nfs filesystem in vfstab, and it's always mounted; user
home directories are automounted and would show up in mnttab if accessed,
but no users were on the box during the lu process.

On the other hand, there are a *lot* of zfs filesystems in mnttab:

# grep zfs /etc/mnttab  | wc -l
    8145
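
Nearly all of those come from the data pool; something like this would
confirm the breakdown (substitute the actual pool name for "datapool",
which is just a placeholder here):

# zfs list -H -o name -r datapool | wc -l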

> And you're using lucreate for a ZFS root?  It should be "quick"; we are
> changing a number of things in Solaris 10 update 8 and we hope it will be
> faster.

lucreate on a system with *only* an os root pool is blazing fast (the
magic of clones). The problem occurs when my data pool (with 6k-odd
filesystems) is also there. The live upgrade process analyzes all 6k of
those filesystems, mounts them all in the alternate root, unmounts them
all, and who knows what else. This is totally wasted effort; those
filesystems have nothing to do with the OS or patching, and I'm really
hoping they can just be ignored completely (one untested idea sketched
below).
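
The only workaround I can think of (completely untested, and only
practical in a maintenance window since it takes the data offline for the
duration) would be to get the data pool out of live upgrade's sight
entirely; again, "datapool" is a placeholder name:

# zpool export datapool
# lucreate -n patch-YYYYMMDD
# luactivate patch-YYYYMMDD
# zpool import datapool

With only the root pool visible, lu should be back on the fast
clone-based path.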

So, after 7 hours, here is the last bit of output from luactivate. Other
than taking forever and a day, all of the output up to this point seemed
normal. The BE s10u6 is neither the currently active BE nor the one being
made active, but these errors have me concerned something _bad_ might
happen if I reboot :(. Any thoughts?


Modifying boot archive service
Propagating findroot GRUB for menu conversion.
ERROR: Read-only file system: cannot create mount point
</.alt.s10u6/export/group/ceis>
ERROR: failed to create mount point </.alt.s10u6/export/group/ceis> for
file system <export/group/ceis>
ERROR: unmounting partially mounted boot environment file systems
ERROR: No such file or directory: error unmounting <ospool/ROOT/s10u6>
ERROR: umount: warning: ospool/ROOT/s10u6 not in mnttab
umount: ospool/ROOT/s10u6 no such file or directory
ERROR: cannot unmount <ospool/ROOT/s10u6>
ERROR: cannot mount boot environment by name <s10u6>
ERROR: Failed to mount BE <s10u6>.
ERROR: Failed to mount BE <s10u6>. Cannot propagate file
</etc/lu/installgrub.findroot> to BE
File propagation was incomplete
ERROR: Failed to propagate installgrub
ERROR: Could not propagate GRUB that supports the findroot command.
Activation of boot environment <patch-20090817> successful.
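
Before touching anything I'll probably check whether any of the s10u6
datasets got left half-mounted under the alternate root (just my guess at
a reasonable sanity check, not anything the lu docs prescribe):

# grep alt /etc/mnttab
# zfs get -r mountpoint,mounted ospool/ROOT/s10u6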

According to lustatus everything is good, but <shiver>... These boxes
have only been in full production for about a month; it would not be good
for them to die during the first scheduled round of patches.


# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
s10u6                      yes      no     no        yes    -
s10u6-20090413             yes      yes    no        no     -
patch-20090817             yes      no     yes       no     -
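
Given the GRUB propagation errors, before rebooting I'll at least eyeball
the boot menu to confirm the patch-20090817 entry is present and uses
findroot (assuming the menu is in the usual spot for a zfs root,
/ospool/boot/grub/menu.lst):

# bootadm list-menu
# grep findroot /ospool/boot/grub/menu.lst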


Thanks...


-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  hen...@csupomona.edu
California State Polytechnic University  |  Pomona CA 91768
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
