Hi,

On 09/29/07 22:00, Gavin Maltby wrote:
> Hi,
>
> Our zfs nfs build server running snv_73 (pool created back before zfs
> integrated to ON) panicked, I guess from zfs, the first time, and now
> panics on attempted boot every time as below. Is this a known issue
> and, more importantly (2TB of data in the pool), are there any
> suggestions on how to recover (other than from backup)?
>
> panic[cpu0]/thread=ffffff003cc8dc80: zfs: allocating allocated segment(offset=24872013824 size=4096)
So in desperation I set 'zfs_recover', which just produced an assertion failure moments after the original panic location. Setting 'aok' as well, to blast through assertions, has allowed me to import the pool again (I had booted with -m milestone=none and blown away /etc/zfs/zpool.cache to be able to boot at all).

Luckily just a single corruption is apparent at the moment, i.e. just one assertion caught after running for half a day like this:

Sep 30 17:01:53 tb3 genunix: [ID 415322 kern.warning] WARNING: zfs: allocating allocated segment(offset=24872013824 size=4096)
Sep 30 17:01:53 tb3 genunix: [ID 411747 kern.notice] ASSERTION CAUGHT: sm->sm_space == space (0xc4896c00 == 0xc4897c00), file: ../../common/fs/zfs/space_map.c, line: 355

What I'd really like to know is whether/how I can map from that assertion at the pool level back down to a single filesystem, or even the file using this segment - perhaps I can recycle that file to free the segment and set the world straight again?

A scrub is only 20% complete but has found no errors so far. I checked the T3 pair and there are no complaints there either - I did reboot them just for luck (the last reboot was two years ago, apparently!).

Gavin
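For anyone following along, a sketch of how the two tunables above are typically set on Solaris/OpenSolaris - either persistently via /etc/system or on the live kernel with mdb. Verify the names against your release before using them; both bypass safety checks and can make corruption worse, so they belong in data-rescue situations only:

```shell
# /etc/system entries to set the tunables at boot:
# zfs_recover relaxes some ZFS consistency checks (e.g. turns the
# "allocating allocated segment" panic into a warning); aok converts
# fatal ASSERT failures into warnings.
set zfs:zfs_recover = 1
set aok = 1

# Or poke them into a running kernel with mdb (takes effect immediately,
# does not survive a reboot):
#   echo 'zfs_recover/W 1' | mdb -kw
#   echo 'aok/W 1' | mdb -kw
```

Remember to remove the /etc/system entries once the pool's data has been evacuated, so later corruption is not silently papered over.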
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss