Good Day,
A week or so ago I hit an error trying to compile openoffice.org
from ports. The build failed with an error I have since been able to resolve.
This machine is running:

# uname -a
FreeBSD rainey 8.2-RELEASE-p1 FreeBSD 8.2-RELEASE-p1 #2: Wed Apr 27 04:37:38 UTC 2011
    michael@rainey:/usr/obj/usr/src/sys/KERNEL_042511  amd64
During that episode the workstation locked up, and the only way to
recover was a hard reset (power off). During the next weekly scrub I
then noticed a growing list of errors on the root pool:
# zpool status
  pool: tank
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          ad2p3     ONLINE       0     0     0
          ad3p1     ONLINE       0     0     0

errors: 604 data errors, use '-v' for a list

  pool: tank1
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank1       ONLINE       0     0     0
          ad12p2    ONLINE       0     0     0

errors: No known data errors
Tank consists of two 300 GB PATA drives in a mirror. Tank1 is a 500 GB
SATA drive I added recently for data archiving. The data I want to
protect is backed up manually to a network file server over NFS. I have
scrubbed the pool showing errors several times now, with no increase or
decrease in the error count. I have also issued #>zpool clear tank a
number of times with no change in the error count. The document
referenced (www.sun.com/msg/ZFS-8000-8A) was of no apparent help for my
situation. I have spare drives I could export to and import from, but I
am unclear whether I would just be moving the bad blocks around.
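For reference, the scrub-and-clear cycle I have been repeating looks
roughly like this (nothing exotic, just the stock commands against the
pool named above):

# zpool scrub tank
# zpool status tank       (re-run until the scrub shows as completed)
# zpool clear tank
# zpool status -v tank    (error count is still 604 afterwards)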
A sample of the output of #>zpool status -v tank:

tank/root:<0x1097e7>
tank/root:<0x1096e8>
tank/root:<0x1097e8>
tank/root:<0x1096e9>
tank/root:<0x1097e9>
tank/root:<0x1095ea>
tank/root:<0x1097ea>
tank/root:<0x1096eb>
tank/root:<0x1097eb>
tank/root:<0x1096ec>
tank/root:<0x1097ec>
tank/root:<0x1095ed>
tank/root:<0x1096ed>
tank/root:<0x1097ed>
tank/root:<0x1094ee>
tank/root:<0x1096ee>
tank/root:<0x1095ef>
I am not sure what to do with these entries. Why doesn't #>zpool clear
tank remove them? The directory /usr/ports/editors/openoffice.org-3/work
could not be deleted after the failed build, so I moved it to /oldwork
to get the port to build. /oldwork still cannot be deleted:
rainey# rm -Rf ./oldwork
rm: ./oldwork/OOO330_m20/dictionaries: Directory not empty
rm: ./oldwork/OOO330_m20/lucene/unxfbsdx.pro/bin: Directory not empty
rm: ./oldwork/OOO330_m20/lucene/unxfbsdx.pro/misc/build: Directory not empty
rm: ./oldwork/OOO330_m20/lucene/unxfbsdx.pro/misc: Directory not empty
rm: ./oldwork/OOO330_m20/lucene/unxfbsdx.pro: Directory not empty
rm: ./oldwork/OOO330_m20/lucene: Directory not empty
rm: ./oldwork/OOO330_m20/jfreereport: Directory not empty
rm: ./oldwork/OOO330_m20/libxslt: Directory not empty
rm: ./oldwork/OOO330_m20/sal: Directory not empty
rm: ./oldwork/OOO330_m20: Directory not empty
rm: ./oldwork: Directory not empty
Attempts to delete the above directories fail.
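For completeness, trying them one at a time looks roughly like this
(run as root, paths taken from the rm output above):

# ls -al /oldwork/OOO330_m20/dictionaries    (to see what is left behind)
# rm -Rf /oldwork/OOO330_m20/dictionaries    (fails with the same "Directory not empty")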
I've read articles about 'bit rot' and similar corruption in ZFS
metadata, but memtest86 completes without error on this machine's 3 GB
of RAM. I see no applicable information in dmesg or /var/log/messages.
The drives have been running 24/7 since the initial incident with no
increase in the error count.
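For completeness, the log checks were along these lines (device names
taken from the zpool output above):

# dmesg | grep -i -e ad2 -e ad3
# grep -i -e ad2 -e ad3 -e zfs /var/log/messages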
Thank You,
Michael