Hello all,

I am not sure my original mail got through to the list
(I haven't received it back), so I attach it below.

Anyhow, now I have a saved kernel crash dump of the system
panicking when it tries to - I believe - perform the deferred
release of the corrupted deduped blocks which are no longer
referenced by the userdata/blockpointer tree.
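
For what it is worth, the guess I want to verify from the dump
(assuming I read the ddt_phys_t layout in sys/ddt.h correctly -
three 16-byte DVAs followed by the refcount) is that
ddt_phys_decref() was handed a NULL ddt_phys_t pointer, so
touching ddp_refcnt at offset 0x30 gives exactly the "addr=30"
page fault quoted below. Something like this over the extracted
dump should confirm or refute that (file names/numbers are
whatever savecore produced on this box):

# cd /var/crash/bofh-sol
# mdb unix.0 vmcore.0
> ::offsetof ddt_phys_t ddp_refcnt    (expecting 0x30)
> ddt_phys_decref::dis                (which member the faulting instruction touches)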

As I previously wrote in my thread on unfixable corruptions
vs. dedup, there may be some single- or multiple-block
references from the DDT pointing into "nowhere" - the block
pointed to contains non-matching data, and now the BP
tree does not point to those blocks either. And there
are other blocks with the same recorded checksums (repaired
from another source).
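
If it helps, I suppose the DDT itself can be inspected with zdb
without importing the pool (which is literally named "pool"),
along these lines - though I do not know how gracefully zdb
handles the same corruption:

# zdb -e -DD pool       (DDT statistics/histogram for the exported pool)
# zdb -e -DDDD pool     (dump individual DDT entries - huge output)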

I am not really sure what exactly breaks the system into a
panic, or how. I would like to analyze the kernel dump to see
if my assumptions are true, and perhaps catch the data block
reference from the dump - but I don't know how and ask
for a crash-course ;)
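
To show where I am starting from: from the man pages I gather
the general procedure is roughly the one below, but I do not
know which dcmds actually get me from the panic stack to the
offending block pointer (the angle-bracket addresses are
placeholders, and the vmdump step only applies if savecore
wrote a compressed image):

# savecore -vf /var/crash/bofh-sol/vmdump.0 /var/crash/bofh-sol
# cd /var/crash/bofh-sol
# mdb unix.0 vmcore.0
> ::status                     (panic summary and dump consistency)
> ::msgbuf                     (console messages leading up to the panic)
> ::stacks -c zio_ddt_free     (find the taskq thread that tripped)
> <thread-addr>::findstack -v  (same stack, with frame pointers and args)
> <zio-addr>::print -t zio_t io_bp io_spa
> <bp-addr>::blkptr            (decode the block pointer being freed)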

Ultimately I think that if the current system fails to
safely process whatever data it does find on-disk, then
it may be possible to fix that data (or the system) so
that the error is removed and the fatal failures no
longer occur.

In the current state of things the pool is practically
unimportable - within a minute after import the system
crashes.
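
One thing I am tempted to try, if anyone can confirm it is
sane, is a read-only import - as far as I understand, with no
writes there are no deferred frees either, so the box might
stay up long enough to look around:

# zpool import -o readonly=on pool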

I did not try rolling back yet... and without the deferred
release of blocks (i.e. with no guarantee that I can safely
roll back to a consistent BP tree where all nodes exist),
I am actually reluctant to try rollbacks until required to...
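
If it does come to that, I would probably start with the
dry-run flavour of the recovery import which, as I understand
it, only reports whether a rewind to an earlier txg looks
feasible without actually rewriting anything:

# zpool import -nF pool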

Thanks,
//Jim

2012-02-04 4:28, Jim Klimov wrote:
I got the machine with my 6-disk raidz2 pool booted again
into oi_151a, but it reboots soon after importing the pool.
The kernel hits a NULL pointer dereference in DDT-related
routines and crashes.

According to fmdump, the error and stack trace are more or
less the same each time. It seems that "repairing" the
corrupted deduped data by overwriting blocks or whole files
with good copies did not go too well, even though all of my
deduped datasets now use the "dedup=verify" mode.
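
For what it is worth, that was double-checked over all the
datasets with:

# zfs get -r dedup pool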

The uptimes are too short for "savecore" to complete :(
I'll try to catch the system at a good moment to prevent
the "pool" pool from being imported, and hope to get the
kernel dump.

What should I look for with ZDB, MDB or in dump files?

Any suggestions on how to analyze and ultimately fix this
problem on-disk (without destroying and remaking the pool)?



Here's the latest fmdump:

# fmdump -Vp -u 4f6725c1-509f-eba4-8774-e627e1925461
TIME UUID SUNW-MSG-ID
Feb 04 2012 04:11:20.930300000 4f6725c1-509f-eba4-8774-e627e1925461
SUNOS-8000-KL

TIME CLASS ENA
Feb 04 04:11:11.3891 ireport.os.sunos.panic.dump_pending_on_device
0x0000000000000000

nvlist version: 0
version = 0x0
class = list.suspect
uuid = 4f6725c1-509f-eba4-8774-e627e1925461
code = SUNOS-8000-KL
diag-time = 1328314280 870353
de = fmd:///module/software-diagnosis
fault-list-sz = 0x1
fault-list = (array of embedded nvlists)
(start fault-list[0])
nvlist version: 0
version = 0x0
class = defect.sunos.kernel.panic
certainty = 0x64
asru =
sw:///:path=/var/crash/bofh-sol/.4f6725c1-509f-eba4-8774-e627e1925461
resource =
sw:///:path=/var/crash/bofh-sol/.4f6725c1-509f-eba4-8774-e627e1925461
savecore-succcess = 0
os-instance-uuid = 4f6725c1-509f-eba4-8774-e627e1925461
panicstr = BAD TRAP: type=e (#pf Page fault) rp=ffffff0010a5e920 addr=30
occurred in module "zfs" due to a NULL pointer dereference
panicstack = unix:die+dd () | unix:trap+1799 () | unix:cmntrap+e6 () |
zfs:ddt_phys_decref+c () | zfs:zio_ddt_free+5c () | zfs:zio_execute+8d
() | genunix:taskq_thread+285 () | unix:thread_start+8 () |
crashtime = 1328314064
panic-time = February 4, 2012 04:07:44 AM MSK MSK
(end fault-list[0])

fault-status = 0x1
severity = Major
__ttl = 0x1
__tod = 0x4f2c77a8 0x37734060

And here's some dmesg leading up to that fmdump reference:

Feb 4 04:11:10 bofh-sol zfs: [ID 249136 kern.info] imported version 28
pool pool using 28
Feb 4 04:11:11 bofh-sol savecore: [ID 570001 auth.error] reboot after
panic: BAD TRAP: type=e (#pf Page fault) rp=ffffff0010a5e920 addr=30
occurred in module "zfs" due to a NULL pointer dereference
Feb 4 04:11:11 bofh-sol savecore: [ID 564761 auth.error] Panic crashdump
pending on dump device but dumpadm -n in effect; run savecore(1M)
manually to extract. Image UUID 4f6725c1-509f-eba4-8774-e627e1925461.
Feb 4 04:11:13 bofh-sol unix: [ID 954099 kern.info] NOTICE: IRQ17 is
being shared by drivers with different interrupt levels.
Feb 4 04:11:13 bofh-sol This may result in reduced system performance.
Feb 4 04:11:20 bofh-sol fmd: [ID 377184 daemon.error] SUNW-MSG-ID:
SUNOS-8000-KL, TYPE: Defect, VER: 1, SEVERITY: Major
Feb 4 04:11:20 bofh-sol EVENT-TIME: Sat Feb 4 04:11:20 MSK 2012
Feb 4 04:11:20 bofh-sol PLATFORM: System-Product-Name, CSN:
System-Serial-Number, HOSTNAME: bofh-sol
Feb 4 04:11:20 bofh-sol SOURCE: software-diagnosis, REV: 0.1
Feb 4 04:11:20 bofh-sol EVENT-ID: 4f6725c1-509f-eba4-8774-e627e1925461
Feb 4 04:11:20 bofh-sol DESC: The system has rebooted after a kernel
panic. Refer to http://sun.com/msg/SUNOS-8000-KL for more information.
Feb 4 04:11:20 bofh-sol AUTO-RESPONSE: The failed system image was
dumped to the dump device. If savecore is enabled (see dumpadm(1M)) a
copy of the dump will be written to the savecore directory .
Feb 4 04:11:20 bofh-sol IMPACT: There may be some performance impact
while the panic is copied to the savecore directory. Disk space usage by
panics can be substantial.
Feb 4 04:11:20 bofh-sol REC-ACTION: If savecore is not enabled then
please take steps to preserve the crash image.
Feb 4 04:11:20 bofh-sol Use 'fmdump -Vp -u
4f6725c1-509f-eba4-8774-e627e1925461' to view more panic detail. Please
refer to the knowledge article for additional information.


Thanks in advance,
//Jim Klimov

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
