Janne-
Thanks for all that useful information.
Others: this is a ThinkPad that isn't on all the time, so a cron-based
backup doesn't work well. I actually back up manually, currently using
"borg" for that. I mostly just do email and web on it, so there's probably
nothing serious lost. In a few days I will have the external disk with
the backup back here, and I may see what I can find on it. My /home
partition has a lot of data on it because I built an AWS OpenBSD machine
image on it. But it would be good to see whether my system is working
correctly.
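For reference, my manual borg run is roughly the sketch below (the
repository path and retention numbers are placeholders here, not my actual
setup):

    # one-time setup: create an encrypted repository on the external disk
    borg init --encryption=repokey /mnt/external/borg-repo

    # manual backup of /home, archive named by date
    borg create --stats /mnt/external/borg-repo::home-{now:%Y-%m-%d} /home

    # trim old archives, keeping 7 daily and 4 weekly
    borg prune --keep-daily 7 --keep-weekly 4 /mnt/external/borg-repo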
I still think it would be nice if OpenBSD could use a journaling
filesystem, but I do not have the expertise to do anything to contribute
to that.
Regards,
John
On 9/6/23 02:05, Janne Johansson wrote:
On Tue, 5 Sep 2023 at 20:53, John Holland <johnbholl...@icloud.com> wrote:
I have a backup that is at least 2 days old offsite at a friend’s house. It
would be a bit of a pain to go retrieve it, but I could do that.
Short of that, I have 4000+ files in lost+found with names like #1094827.
What can I do with those? I tried running “file” on the first 50 via xargs and
they mostly at least purport to be some sort of intact file. How can I
determine what they are? Please don’t suggest that I manually use “file” and
then an appropriate program to examine each one in turn.
Those "files" are fragments of files, named after the inode number,
which you get when fsck finds a not-complete chain of
directory-entry/filename -> inode -> linked list of file-contents.
While fsck can't figure out the filename and where in the directory
structure it is meant to belong, or possibly if it is only some part
of a whole file, it will give you a chance to recover at least partial
contents from the lost+found folder. Sometimes this might be awesome
if you can dig out some key or pw needed for something super
important, sometimes you get half of a database file and that is
probably close to zero usefulness.
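If you want a rough inventory instead of looking at each one by hand,
something like this untested sketch (the paths are just examples) will
group the fragments by what file(1) thinks they are, and then copy out
one category you care about:

    cd /home/lost+found
    # count how many fragments fall into each detected file type
    file * | sed 's/^[^:]*: *//' | sort | uniq -c | sort -rn

    # e.g. pull everything file(1) calls a JPEG into a scratch directory
    mkdir -p /tmp/recovered-jpeg
    file * | grep 'JPEG image' | cut -d: -f1 | xargs -I{} cp {} /tmp/recovered-jpeg/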
That said, if these were (as you wrote later) browser cache and partial
downloads, it is not very surprising that there are data files which were
not yet linked into a directory during the download, or temp files already
unlinked and due for deletion by the FS, had the computer not crashed. If
you had something like ZFS, those half-written or half-deleted files might
simply have been missing instead of ending up in lost+found, since they
represent a point in time at which the FS was not in a consistent state.
So the end result would mostly have been the same: this data is not
visible under your home account after the crash.
Journaling has some great advantages, such as write aggregation if the
journal can be placed on a faster device, and quick checks after crashes:
an empty journal usually means the fs was not left in a broken state and
needs little or no full check by fsck tools, which is nice.
It will not bring back a half-downloaded ISO or unlinked temp files that
you for some reason want to look at afterwards, nor will the journal fix
any kind of broken sectors, though checksumming filesystems will of
course help you find the errors before the bad sectors are handed over to
your applications.
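As a hypothetical example (ZFS isn't an option on OpenBSD, and "tank" is
just a placeholder pool name), on a system that does run a checksumming
filesystem you would ask it to re-read and verify every block with
something like:

    zpool scrub tank
    zpool status -v tank    # scrub progress plus any files with detected errors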