Alan Shutko wrote:
> Paolo Falcone <[EMAIL PROTECTED]> writes:
>
>>> blaubaer:~# e2fsck /dev/hdb1
>>> e2fsck 1.25 (20-Sep-2001)
>>> /dev/hdb1 is mounted.
>>> WARNING!!! Running e2fsck on a mounted filesystem may cause
>>> SEVERE filesystem damage.
>>
>> This is default behavior. But you need to delete the journal
>> file first, else you wreck your ext3fs partition, before committing
>> to fsck.
>
> No, e2fsck works fine on ext3 partitions. It just doesn't want to
> work on mounted partitions. Remount root as read-only (mount / -o
> ro,remount) and try it again.
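For the record, the sequence Alan suggests would look roughly like this (a sketch only; I'm assuming /dev/hdb1 is the root partition in question, and the remount may refuse if something still has files open for writing):

    mount -o remount,ro /      # remount the root filesystem read-only
    e2fsck -f /dev/hdb1        # force a full check while nothing can write to it
    mount -o remount,rw /      # put it back read-write once e2fsck reports clean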
OK... I've tried remounting root read-only as well as deleting the journal. Deleting the .journal file is the easier route (you can have the journal reinstated later with tune2fs), but remounting read-only and keeping the journal is the safer one: since the whole of the data for a transaction is committed to the journal (with full data journalling, at least), the journal can be replayed up to the last correct commit to disk. Deleting the journal gives you a fresh start, so fsck checks the disk the usual way, without consulting the journal. (It is theoretically possible for an erroneous transaction that the system regarded as a correct commit to be committed to disk; replay the journal and you replay that potential problem.) Note that you lose any means of replaying the previous journal transactions this way, hence the warnings. (AFAIK, .journal is the file that stands in for the additional journal inode.)

You're right that it's very dangerous to run fsck on a mounted partition (but sometimes you don't have a choice, especially when your hard drive goes flaky). Anyway, I'm offering an alternative.

>> The second one is to tweak your ext3fs partition. issue:
>>
>> tune2fs -c0 -i0 /dev/hdb1
>
> Bad idea. From the tune2fs man page:
>
>      You should strongly consider the consequences of
>      disabling mount-count-dependent checking entirely.
>      Bad disk drives, cables, memory, and kernel bugs
>      could all corrupt a filesystem without marking the
>      filesystem dirty or in error. If you are using
>      journaling on your filesystem, your filesystem will
>      never be marked dirty, so it will not normally be
>      checked. A filesystem error detected by the kernel
>      will still force an fsck on the next reboot, but it
>      may already be too late to prevent data loss at
>      that point.
>
> and
>
>      It is strongly recommended that either -c (mount-
>      count-dependent) or -i (time-dependent) checking be
>      enabled to force periodic full e2fsck(8) checking
>      of the filesystem. Failure to do so may lead to
>      filesystem corruption due to bad disks, cables,
>      memory, or kernel bugs to go unnoticed until they
>      cause data loss or corruption.

Well, I'm fully aware of that; that's why they put those warnings there (I read the man page before I posted). However, since ext3 commits the whole of the data (and not just metadata) to the journal, you might as well take advantage of the journalling mechanism. You can still force checks via the recommended method; normally you won't have to rely on the maximum-mount-count check (but, as always, it remains your responsibility to see that checks happen). I'm just offering a fast solution to the problem, using the advantage journalling gives ext3 (as implied by the manual entry you quoted).

Generally, I'll agree that omitting the filesystem checks is quite a bad idea if you aren't using journalling. But who wants a full maximum-mount-count check over gigabytes of disk space? That's far too slow and tedious when you have the better alternative of simply replaying the journal to the last correct commit to disk. If you're going to treat ext3 like ext2, you might as well not use journalling at all. It's more convenient to run that full check at a time of your own choosing than to have a long fsck kick in at some inopportune moment during the day. To make it all clear: what I recommended is not an excuse to skip the full filesystem check.
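If you'd rather drop the journal and let fsck do a plain ext2-style check, a cleaner way than rm'ing .journal by hand (sketched here, assuming a reasonably recent e2fsprogs and the same /dev/hdb1, unmounted or mounted read-only) would be:

    tune2fs -O ^has_journal /dev/hdb1   # clear the has_journal feature, removing the journal
    e2fsck -f /dev/hdb1                 # full check with no journal to replay
    tune2fs -j /dev/hdb1                # recreate the journal once the check is clean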
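And if you do set -c0 -i0, you can still make the full check happen on your own schedule rather than the machine's. A couple of possibilities (Debian-flavoured; treat them as suggestions, not gospel):

    tune2fs -c 30 -i 1m /dev/hdb1   # or simply keep periodic checks: every 30 mounts or once a month
    touch /forcefsck                # on Debian, schedules a full fsck at the next boot
    shutdown -rF now                # or have shutdown itself force fsck on reboot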
It just transfers responsibility for the task from the machine to the system administrator (which is part of his job anyway). I won't start a flame war over this, though (technical papers can be thrown around, but that won't help solve the problem at hand), and I do believe you've got a point too. A good one.

>> You won't really need fsck unless you screw up big time (playing
>> around as root most of the time does that...).
>
> Untrue.

I was referring to the frequent checks triggered by the maximum mount count. The fsck program, aside from fixing filesystem errors, is what performs the boot-time filesystem checks (at least, that's what the manual and the OS books say). The journal does the work it was designed for; there's no need to overdo fsck. But, as I've said, taking the fast route commits you to another task in your system administration.

That's also why it's not recommended to play around as root: one wrong, accidental move can wreck your filesystem. Better not to mess with fsck, or you might end up crying, like fsck'ing a mounted partition and discovering, after the long fsck finishes, that you've just fscked it up. I know, because I did it quite often back when I worked in an environment with frequent power fluctuations and no UPS available.

I hope that makes my point clear, if it was misconstrued. I'm not refuting anyone's point; as I said, there are lots of solutions (BUT NOT THE MICROSOFT SOLUTION).

Paolo Falcone
__________________________________
www.edsamail.com