On 02/25/2022 2:11 am, Alexander Leidinger wrote:
Quoting Larry Rosenman <l...@lerctr.org> (from Thu, 24 Feb 2022 20:19:45 -0600):
I tried a scrub -- it panic'd on a fatal double fault.
Suggestions?
The safest / cleanest (but not fastest) approach is to export the data and
re-create the pool. If you export dataset by dataset (instead of everything
recursively), you can also see which dataset is causing the issue. If this
per-dataset export narrows down the issue and it is a dataset you don't
care about (as in: 1) it is no problem to recreate it from scratch, or
2) a backup is available), you could delete this (or each such) dataset
and re-create it in place (i.e. without re-creating the entire pool).
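For that second step, a minimal sketch, assuming the culprit turns out to be
a dataset named pool/baddata and that a stream of it was saved on a host
called backuphost (both names are placeholders):

# Hypothetical names: pool/baddata, backuphost, pool_baddata.zstream.
zfs destroy -r pool/baddata
# Restore from the saved stream, or just "zfs create pool/baddata"
# if rebuilding it from scratch:
ssh backuphost "cat pool_baddata.zstream" | zfs receive pool/baddata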
Bye,
Alexander.
http://www.Leidinger.net alexan...@leidinger.net: PGP 0x8F31830F9F2772BF
http://www.FreeBSD.org netch...@freebsd.org : PGP 0x8F31830F9F2772BF
I'm running this script:
#!/bin/sh
# Send each dataset's @REPAIR_SNAP snapshot to the backup host,
# one file per dataset, so a failing dataset can be identified.
for i in $(zfs list -H | awk '{print $1}')
do
    FS=${i}
    FN=$(echo "${FS}" | sed -e 's@/@_@g')
    sudo zfs send -vecLep "${FS}@REPAIR_SNAP" | \
        ssh l...@freenas.lerctr.org "cat > ${FN}"
done
How will I recognize a "problem" dataset?
--
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 214-642-9640 E-Mail: l...@lerctr.org
US Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106