Thank you Christian,
I think I managed to repair my system.
Here is how I did it, in case it helps others.
By the way, Jonas, it is not possible to remove the broken files/folders, so the 
strategy I suggest is to destroy the dataset and restore it from a backup, 
while running from bootable media.
You can back up everything in the dataset except the corrupted files, and 
then try to restore those by other means: reinstalling the affected packages, or 
restoring personal files from any existing backups.

I scanned every dataset with find and stat, as suggested in this thread, until 
stat stalled, for example on /var (I did this for /, /var, /opt and /home, 
which each had their own dataset):
```
sudo find /var -mount -exec echo '{}' \; -exec stat {} \;
```
At the same time I monitored kernel errors:
```
tail -f /var/log/kern.log
```
When the scan freezes on a file, its name is the last thing printed by the 
echo command, and a stack trace appears in the kernel log.

I was lucky: only one file was corrupted: `/var/lib/app-info/icons/ubuntu-impish-universe/48x48/plasma-workspace_preferences-desktop-color.png`.

Each time a corrupted file is found, it is necessary to restart the scan from 
the beginning while excluding it, for example:
```
sudo find /var -mount -not -path '/var/lib/app-info/icons/ubuntu-impish-universe/*' \
  -exec echo '{}' \; -exec stat {} \;
```


Apparently, my corrupted file did not belong to any package (I checked with 
`apt-file search <filepath>`), and in the end I found out it was recreated 
automatically, I don't know how.
Otherwise, I would have reinstalled the package after restoring the rest.
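
If the file does belong to a package, something like this should identify and 
reinstall it (the package name below is only a placeholder, not one from my case):
```
# Check whether an installed package ships the file (dpkg), or search all packages (apt-file)
dpkg -S /var/lib/app-info/icons/ubuntu-impish-universe/48x48/plasma-workspace_preferences-desktop-color.png
apt-file search plasma-workspace_preferences-desktop-color.png

# Reinstall the owning package once the system is repaired (placeholder name)
sudo apt-get install --reinstall <package-name>
```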

I backed up the whole /var with tar:
```
sudo tar --exclude=/var/lib/app-info/icons/ubuntu-impish-universe/48x48 --acls \
  --xattrs --numeric-owner --one-file-system -zcpvf backup_var.tar.gz /var
```
At first I did not use --numeric-owner, but after restoring, the owners were all 
messed up, and the system could not reach graphical mode (GDM complained it did 
not have write access to files under /var/lib/gdm3/.config/).
This is probably because, by default, owner/group are stored as names, which get 
mapped to different uid/gid values on the bootable media.
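
If in doubt, you can check how ownership was stored before extracting anything; 
a minimal check, assuming the archive name from above:
```
# List the archive with owners as names (default) and as numeric uid/gid,
# to see what tar actually stored and compare with the installed system
tar -tvzf backup_var.tar.gz | head
tar --numeric-owner -tvzf backup_var.tar.gz | head
```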

The backup process should not stall; if it does, there may be other corrupted 
files that the stat scan did not catch (I don't know whether that is possible).

To be extra sure my root dir (/) was not corrupted, I also created a backup of 
it and watched for a possible freeze, but none occurred.
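
For reference, a sketch of the equivalent command for /, with the same flags as 
for /var (the archive name is just an example):
```
# --one-file-system keeps tar from descending into the other datasets
# (/var, /opt, /home) mounted below /
sudo tar --acls --xattrs --numeric-owner --one-file-system \
  -zcpvf backup_root.tar.gz /
```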

I created a bootable USB drive with Ubuntu 21.04 and booted from it.
I then imported my ZFS pool:
```
sudo mkdir /mnt/install
sudo zpool import -f -R /mnt/install rpool
zfs list
```
I destroyed and recreated the dataset for /var (with options from my 
installation notes):
```
sudo zfs destroy -r rpool/root/var
sudo zfs create -o quota=16G -o mountpoint=/var rpool/root/var
```
It is necessary to re-import the pool; otherwise a simple mount does not allow 
populating the new dataset (in my case, /var got created inside the root dataset):
```
sudo zpool export -a
sudo zpool import -R /mnt/install rpool
sudo zfs mount -l -a
zfs list
```

Now we can restore the backup:
```
sudo tar --acls --xattrs -zxpvf /home/user/backup_var.tar.gz -C /mnt/install
```
Check if the new dataset has the correct size and content:
```
zfs list
ll /mnt/install/var
```
Close and reboot:
```
sudo zfs umount -a
sudo reboot
```

Of course, it can get more complicated if the corrupted files are more sensitive 
system files.
It might be necessary to chroot into the installed system in order to reinstall 
the packages the corrupted files come from.
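
A minimal sketch of how that could look from the live environment, assuming the 
pool is still imported under /mnt/install as above (the package name is a placeholder):
```
# Bind-mount the pseudo filesystems so apt/dpkg work inside the chroot
for d in proc sys dev dev/pts; do sudo mount --bind /$d /mnt/install/$d; done
sudo chroot /mnt/install /bin/bash

# Inside the chroot: reinstall the package owning the corrupted file (placeholder name)
apt-get update
apt-get install --reinstall <package-name>
exit
```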

Hope it helps...
