On 08/06/2013 23:37, Tanstaafl wrote:
> Hi everyone,
>
> What is best practice for doing this?
>
> If I reboot in single user mode, will my lvm volumes (ie, /var) be
> available for fsck'ing, or do I have to mount them first?
>
> The current problem started after a different problem required me to do
> a hard reset on the server - had to do with a mounted QNAP device being
> unavailable when I initiated a reboot, and everything just hung.
>
> Ever since I did this hard reset, the server hangs at unmounting /var.
> I've let it sit there for at least an hour, and it never goes past that.
>
> Then after I hard reset it, it fsck's the /var partition again, maybe
> fixes minor problems very quickly, and everything works fine until I
> have to reboot or shutdown again.
>
> This became a major problem this weekend when we had one extended power
> outage (about 8 hours) yesterday evening, then another one (about 4
> hours) this morning right after I got everything back up and running
> from last night's outage.
>
> Anyway, I need to do this this weekend if at all possible, so...
>
> Anyone have any pointers to detailed docs and/or willing to hold my
> hand through this a little?
fsck'ing that filesystem should be no different from any other fsck - it
should find what it finds and fix what it can. The fs must be unmounted
of course, which means you have to do it in single-user mode, or by
booting a rescue system (I prefer the second; I find it easier as none
of the production filesystems are required to be mounted). From a rescue
system you just activate the LVM volumes first, then fsck them directly -
see the first sketch below.

fsck.reiserfs has several modes; IIRC there's --rebuild-tree or similar
that does an extensive check but takes ages. I needed to do this 2 or 3
times when I was still using reiser. There's also an option to do no
writes if you want a sanity check first (second sketch below).

I'm not convinced a power outage broke the fs so that you now can't
umount it; I'm having a hard time imagining how that would happen. More
likely some other script or file elsewhere is damaged and leaves files
open when the system wants to umount /var (the third sketch shows how to
see what is holding /var open).

You have some options. You should do a full check and repair on all
filesystems to be 100% certain, but this requires considerable downtime,
easily an hour or more. You can dd /var somewhere to get a copy you can
experiment on with another host (fourth sketch) - at least you will then
know how much downtime to schedule.

For the umount issue, that is trickier as you won't have log files in
/var after the fact. Any clues on the Alt-F12 console whilst shutting
down? Try configuring your syslogger to send logs to another host (last
sketch); you might be lucky enough to get some logs that way that
describe what is going on.
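First sketch - to answer the single-user question directly: the LVs
exist but may not be active after booting rescue media, so activate the
volume group, then run fsck against the raw LV device. Untested outline;
vg0 and var are example names, substitute your real VG/LV:

    vgscan                 # scan for volume groups
    vgchange -ay           # activate them all
    lvscan                 # confirm the LVs are listed as ACTIVE
    fsck /dev/vg0/var      # check the still-unmounted LV directly

Do NOT mount the LV first - fsck wants the filesystem unmounted.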
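Second sketch - the reiserfsck modes I was thinking of, from memory
(run against the unmounted device, and check the man page before
trusting any of this):

    reiserfsck --check /dev/vg0/var          # read-only pass, reports problems only
    reiserfsck --fix-fixable /dev/vg0/var    # repairs minor corruption
    reiserfsck --rebuild-tree /dev/vg0/var   # full tree rebuild - SLOW, back up first

--check is the no-writes sanity check I mentioned; only move on to
--rebuild-tree if --check tells you to, and never interrupt a rebuild
once it is running.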
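Third sketch - to see what is holding /var open, run something like
this before you initiate the shutdown (both commands are standard, the
output format just differs between them):

    fuser -vm /var     # every process with open files on the /var filesystem
    lsof /var          # same idea via lsof; naming a mount point selects the whole fs

If a daemon shows up there that your shutdown scripts are not stopping,
that is very likely your hang.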
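Fourth sketch - for the dd copy, something along these lines. Paths and
device names are examples, and the LV should be unmounted (or the box in
rescue mode) while you copy so you get a consistent image:

    dd if=/dev/vg0/var of=/mnt/usbdisk/var.img bs=4M
    # then, on the test host:
    losetup /dev/loop0 var.img
    reiserfsck --check /dev/loop0      # experiment freely, it's only a copy
    losetup -d /dev/loop0

Time the reiserfsck run on the copy and you have your downtime estimate.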
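Last sketch - for remote logging, with a classic syslog.conf (or rsyslog
in compatibility mode) a single line forwards everything to another box
over UDP. 192.168.1.10 is an example address, use your other host:

    *.*    @192.168.1.10

The receiving host has to accept remote messages, e.g. with the legacy
rsyslog directives:

    $ModLoad imudp
    $UDPServerRun 514

No guarantee the interesting messages get out before the network is torn
down during shutdown, but it's cheap to try.

--
Alan McKinnon
alan.mckin...@gmail.com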