If push comes to shove, the journaling file system may lose more data, but it will be consistent. FFS will have written as much as possible, sometimes leaving data that is no longer reachable through a directory entry; that is when people encounter full lost+found directories.
Neither file system will correctly record the most recent additions, but both will most likely hold on to all the old stuff. Backup is therefore of little help in most situations like this.

From this perspective the main difference is that a journaling FS will be consistent and bootable after each and every crash (and I've seen hundreds), whereas I have positively seen instances in which FFS would throw me into a root console and ask for a manual fsck. The latter is not much of a problem on a desktop (assuming you know how to clean up), but it is definitely a big nuisance for routers or off-site machines. I do agree that home routers (like my EdgeRouter) should run on a read-only FS to avoid this problem the right way, rather than scream for FS journals.

On Wed, 2023-09-06 at 08:05 +0200, Janne Johansson wrote:
> On Tue, 5 Sep 2023 at 20:53, John Holland <johnbholl...@icloud.com> wrote:
> >
> > I have a backup that is at least 2 days old offsite at a friend’s house.
> > It would be a bit of a pain to go retrieve it, but I could do that.
> >
> > Short of that, I have 4000+ files in lost+found with names like #1094827.
> > What can I do with those? I tried running “file” on the first 50 via
> > xargs and they mostly at least purport to be some sort of intact file.
> > How can I determine what they are? Please don’t suggest that I manually
> > use “file” and then an appropriate program to examine each one in turn
>
> Those "files" are fragments of files, named after the inode number,
> which you get when fsck finds a not-complete chain of
> directory-entry/filename -> inode -> linked list of file-contents.
>
> While fsck can't figure out the filename and where in the directory
> structure it is meant to belong, or possibly if it is only some part
> of a whole file, it will give you a chance to recover at least partial
> contents from the lost+found folder. Sometimes this might be awesome
> if you can dig out some key or pw needed for something super
> important; sometimes you get half of a database file, and that is
> probably close to zero usefulness.
>
> That said, if it was (as written later) browser cache and partial
> downloads, it is not very surprising that data files exist which are
> not yet linked during the download, or temp files unlinked for later
> deletion by the FS, had the computer not crashed. If you had something
> like zfs, those half-written or half-deleted files might just have
> been totally missing instead of ending up in lost+found, since they
> represent a point in time at which the FS is not in a consistent
> state, so the end result would mostly have been the same: this data is
> not visible under your home account after the crash.
>
> Journaling has some great advantages, like write aggregation if your
> journal can be placed on a faster device, and when it comes to quick
> checkups after crashes, an empty journal often means the fs was not in
> a broken state and probably needs little or no full checkup by fsck
> tools, which is nice.
> It will not fix a half-downloaded ISO or unlinked temp files that you
> for some reason want to look at afterwards, nor will the journal fix
> any kind of broken sectors, though checksumming file systems will of
> course help you find the errors before handing the bad sectors over to
> your applications.
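To make the naming point above concrete: in FFS the filename lives only in the directory entry, never in the inode, so once the entry is gone fsck has nothing but the inode number to label the file with. A toy sketch of the relationship in Python (invented field names, nothing like the real on-disk layout):

from dataclasses import dataclass, field

@dataclass
class DirEntry:
    name: str          # "thesis.pdf" -- the only place the name exists
    inode_number: int  # points at the inode below

@dataclass
class Inode:
    number: int        # all fsck has left to label the file with
    size: int
    data_blocks: list[int] = field(default_factory=list)  # where the contents live
    # note: no filename field here -- names live only in directory entries

# Lose the DirEntry but keep the Inode, and fsck can still salvage the
# contents, but only as lost+found/#<inode number>.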
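And for the 4000+ files question: rather than running "file" on each one by hand, a rough triage sketch that groups everything in lost+found by what file(1) thinks it is, so you only have to look at a handful of buckets instead of each file in turn. Assumes Python 3 and file(1) on PATH; the /mnt/lost+found path is made up, point it at your actual lost+found:

import subprocess
from collections import Counter
from pathlib import Path

LF = Path("/mnt/lost+found")   # hypothetical path -- adjust to your mount

counts = Counter()
by_type = {}                   # MIME type -> list of file names

for p in sorted(LF.iterdir()):
    if not p.is_file():
        continue
    # "file -b --mime-type" prints only the MIME type, e.g. "image/jpeg"
    mime = subprocess.run(["file", "-b", "--mime-type", str(p)],
                          capture_output=True, text=True).stdout.strip()
    counts[mime] += 1
    by_type.setdefault(mime, []).append(p.name)

for mime, n in counts.most_common():
    print(f"{n:6d}  {mime}")

From there you can bulk-move whole buckets (say, everything identified as text/html browser cache) into a junk directory and concentrate on the few types actually worth keeping.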