It's perfectly safe to do an installworld on a multi-user system
provided:
(1) you've kicked any other users off, and
(2) you've killed any daemons that might exec something on a
    regular basis: sendmail, cron, the web server, etc. (not sshd,
    but make sure nobody ssh's in while the installworld is
    running).
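A minimal sketch of quiescing a box first (the daemon names are just
examples -- kill whatever the machine actually runs):

    # warn and clear off other users, then stop anything that execs periodically
    wall "installworld starting -- please log off"
    killall cron sendmail httpd
    ps ax        # double-check nothing exec-happy is still running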
The issue here is that the installworld does not use a 'create the file
under a temporary name and rename it' scheme. It uses a 'remove the old
file, create the new file' scheme, so an exec() at the wrong time can
cause a program to try to load a partially written shared library (e.g.
libc). Some daemons really take exception to this and wind up getting
into fork/exec/core loops, which can make the machine unusable.
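To illustrate the difference for a single file (the libc path is just an
example), note that mv within one filesystem is an atomic rename(2), so a
reader either gets the old file or the new one, never a partial one:

    # what installworld effectively does: remove, then re-create in place
    rm /usr/lib/libc.so.4
    cp libc.so.4 /usr/lib/libc.so.4      # an exec() during this copy sees a partial file

    # the safer scheme: write under a temporary name, then rename into place
    cp libc.so.4 /usr/lib/libc.so.4.new
    mv /usr/lib/libc.so.4.new /usr/lib/libc.so.4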
--
I always update my remote machines by building all necessary kernels,
building the world, and installing it all on a build machine first to
make sure I've got the upgrade procedure down. Then I NFS-export
/usr/src and /usr/obj read-only to the remote machines and do the
kernel install and the installworld on each remote machine.
(Note: /usr/src and /usr/obj should be part of the /usr partition,
without any symlink tricks, or running installworld on the remote
machines will not work as expected.)
I never build the world directly on a remote machine.
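A rough sketch of that setup (the hostnames and kernel config name are
made up; the make targets are the standard ones run from /usr/src):

    # on the build machine, in /etc/exports: export source and objects read-only
    /usr/src /usr/obj -ro remote1 remote2

    # on each remote machine
    mount -t nfs buildhost:/usr/src /usr/src
    mount -t nfs buildhost:/usr/obj /usr/obj
    cd /usr/src
    make installkernel KERNCONF=REMOTE1    # reboot and test the new kernel first
    make installworld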
NOTE!!!! DANGER!!! When doing an installworld over NFS, it takes much
longer for the installworld to copy any given file (such as files in
/usr/lib), which increases the chance of a daemon trying to fork/exec
a program and dying a horrible death, possibly making the machine
unusable. All remote machines should have some sort of serial console
and power cycler setup to allow recovery from these and other potential
problems.
-Matt