@Clint Byrum - bug 603363 seems to describe an issue where sshd is not
being stopped properly; that was fixed for maverick, but apparently not
for lucid. I ran lsof from the umountroot script, i.e. right before the
root fs is supposed to be remounted read-only (see the first snippet
below), and I don't think anything gets killed between my lsof and the
actual remount. The lsof consistently shows two things still running:
init and sshd. Modifying the ssh.conf upstart job so sshd is stopped on
anything but runlevels 2345 solves that part (second snippet below).
However, I think we're also still running into bug 188925: right after
a libc6 upgrade the remount of / fails because the filesystem is busy,
which leaves orphaned inodes and possibly some other mess. This doesn't
happen on a completely clean install with no updates applied, and it
all goes away after the first reboot following the updates - so it has
to be something related to the updates.
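
For reference, the diagnostic was roughly something like this (the
exact spot in /etc/init.d/umountroot and the log destination are
illustrative, not the literal change I made):

    # hypothetical diagnostic in do_stop() of /etc/init.d/umountroot,
    # placed just before the remount,ro of /: list whatever still has
    # files open on the root filesystem
    lsof / > /dev/.lsof-umountroot 2>&1 || true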
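
And the ssh.conf change amounts to making the upstart job stop outside
the normal runlevels (as far as I can tell this is the same stanza the
maverick fix for bug 603363 adds):

    # in /etc/init/ssh.conf: stop sshd on anything but runlevels 2345,
    # so it is already gone by the time umountroot runs
    stop on runlevel [!2345]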

As for whether this is entirely related to this bug, I'm not sure. I'm
just seeing this behaviour and I'm still not 100% sure what to blame.
Nonetheless, ending up with a dirty filesystem on a fresh install (a
production server, even!) right after applying the initial updates
doesn't exactly fill me with confidence ;-)

-- 
umountfs doesn't cleanly unmount / on reboot
https://bugs.launchpad.net/bugs/616287