> From: David Lang [mailto:da...@lang.hm]
> 
> > The whole point of journaling is that the filesystem effectively does "fsck"
> > on the fly: every time it accesses an inode, it checks the consistency.  That
> > way, the work of fsck is spread out during normal operation, rather than
> > requiring manual intervention, or a really long wait for the system to
> > reboot after a crash.
> 
> this is one of those theory vs practice things (in theory, theory and practice
> are the same, in practice they are not)
> 
> in theory a journaled filesystem never needs to be checked.
> 
> in practice it's not always true. It's almost always true that the filesystem
> will be usable after an unexpected shutdown, but usable != clean.

I agree, except that "usable != clean" is irrelevant.  The whole point of the 
journal is that your filesystem doesn't *need* to be clean.  Any part of the FS 
that isn't clean is, by definition, a part you are not using.  As soon as you 
use it, it becomes clean.
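To make that concrete, here's a toy sketch of the write-ahead idea a journal is built on.  The file names (`journal.log`, `data.json`) and the JSON format are made up for illustration; no real filesystem journals this way, but the ordering guarantee is the same: the intent hits stable storage before the update is applied, so a replay at mount time can always finish (or safely redo) the work.

```python
import json
import os

JOURNAL = "journal.log"   # hypothetical names, for illustration only
DATA = "data.json"

def commit(update: dict) -> None:
    """Write the intent to the journal, fsync it, then apply the update."""
    with open(JOURNAL, "w") as j:
        json.dump(update, j)
        j.flush()
        os.fsync(j.fileno())   # intent is durable before we touch the data
    # A crash here is harmless: replay() redoes the update from the journal.
    _apply(update)
    os.remove(JOURNAL)         # update landed; journal entry no longer needed

def _apply(update: dict) -> None:
    """Apply an update to the data file via an atomic rename."""
    state = {}
    if os.path.exists(DATA):
        with open(DATA) as f:
            state = json.load(f)
    state.update(update)
    tmp = DATA + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
        f.flush()
        os.fsync(f.fileno())
    os.replace(tmp, DATA)      # atomic: readers see old state or new, never a mix

def replay() -> None:
    """Run at 'mount' time: redo any journaled update that may not have landed."""
    if os.path.exists(JOURNAL):
        with open(JOURNAL) as j:
            _apply(json.load(j))
        os.remove(JOURNAL)
```

Note that replaying an update that already landed is safe because the apply step is idempotent; that's exactly why "usable != clean" doesn't matter in practice.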

Taking it a step further, btrfs and zfs, by design, cannot ever become 
inconsistent.  Think of it as journaling on steroids.  That's not to say it's 
impossible to lose data; it's only to say that the total data at risk is 
whatever is buffered in RAM, waiting to be flushed to disk, at the time of the 
crash.  By design, the filesystem on disk is at all times guaranteed to be a 
fully consistent snapshot of the whole filesystem at a point in time.
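The mechanism behind that guarantee is copy-on-write with an atomic root-pointer flip.  Here's a toy sketch of the idea; the file names (`root.ptr`, the `tree-*.json` versions) are invented for illustration and are nothing like the actual btrfs/zfs on-disk layout.  The point is that the old version is never modified in place, so the durable state is always one complete snapshot or the other.

```python
import json
import os

ROOT = "root.ptr"   # hypothetical name: points at the current snapshot

def cow_commit(state: dict, base_dir: str = ".") -> None:
    """Write a complete new version, then atomically repoint the root."""
    version = "tree-" + os.urandom(4).hex() + ".json"
    path = os.path.join(base_dir, version)
    with open(path, "w") as f:
        json.dump(state, f)
        f.flush()
        os.fsync(f.fileno())   # new snapshot is fully on disk first
    tmp = os.path.join(base_dir, ROOT + ".tmp")
    with open(tmp, "w") as f:
        f.write(version)
        f.flush()
        os.fsync(f.fileno())
    # The single atomic operation: flip the root pointer to the new snapshot.
    # A crash on either side of this line leaves a consistent filesystem.
    os.replace(tmp, os.path.join(base_dir, ROOT))

def cow_read(base_dir: str = ".") -> dict:
    """Follow the root pointer to whichever snapshot is current."""
    with open(os.path.join(base_dir, ROOT)) as f:
        version = f.read()
    with open(os.path.join(base_dir, version)) as f:
        return json.load(f)
```

Because old snapshots are never overwritten, this structure also gives you cheap snapshots essentially for free, which is exactly the trick btrfs and zfs exploit.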
_______________________________________________
Tech mailing list
Tech@lists.lopsa.org
https://lists.lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/