Actually, here is the first panic message:
Sep 13 23:33:22 netra2 unix: [ID 603766 kern.notice] assertion failed: 
dmu_read(os, smo->smo_object, offset, size, entry_map) == 0 (0x5 == 0x0), file: 
../../common/fs/zfs/space_map.c, line: 307
Sep 13 23:33:22 netra2 unix: [ID 100000 kern.notice]
Sep 13 23:33:22 netra2 genunix: [ID 723222 kern.notice] 000002a103e6b000 
genunix:assfail3+94 (7b7706d0, 5, 7b770710, 0, 7b770718, 133)
Sep 13 23:33:22 netra2 genunix: [ID 179002 kern.notice]   %l0-3: 
0000000000002000 0000000000000133 0000000000000000 000000000186f800
Sep 13 23:33:22 netra2   %l4-7: 0000000000000000 000000000183d400 
00000000011eb400 0000000000000000
Sep 13 23:33:22 netra2 genunix: [ID 723222 kern.notice] 000002a103e6b0c0 
zfs:space_map_load+1a4 (30007cc2c38, 70450058, 1000, 30007cc2908, 380000000, 1)
Sep 13 23:33:22 netra2 genunix: [ID 179002 kern.notice]   %l0-3: 
0000000000001a60 000003000ce3b000 0000000000000000 000000007b73ead0
Sep 13 23:33:22 netra2   %l4-7: 000000007b73e86c 00007fffffffffff 
0000000000007fff 0000000000001000
Sep 13 23:33:22 netra2 genunix: [ID 723222 kern.notice] 000002a103e6b190 
zfs:metaslab_activate+3c (30007cc2900, 8000000000000000, c000000000000000, 
e75efe6c, 30007cc2900, c0000000)
Sep 13 23:33:23 netra2 genunix: [ID 179002 kern.notice]   %l0-3: 
000002a103e6b308 0000000000000003 0000000000000002 00000000006dd004
Sep 13 23:33:23 netra2   %l4-7: 0000000070450000 0000030010834940 
00000300080eba40 00000300106c9748
Sep 13 23:33:23 netra2 genunix: [ID 723222 kern.notice] 000002a103e6b240 
zfs:metaslab_group_alloc+1bc (3fffffffffffffff, 400, 8000000000000000, 
32dc18000, 30003387d88, ffffffffffffffff)
Sep 13 23:33:23 netra2 genunix: [ID 179002 kern.notice]   %l0-3: 
0000000000000000 00000300106c9750 0000000000000001 0000030007cc2900
Sep 13 23:33:23 netra2   %l4-7: 8000000000000000 0000000000000000 
0000000196e0c000 4000000000000000
Sep 13 23:33:23 netra2 genunix: [ID 723222 kern.notice] 000002a103e6b320 
zfs:metaslab_alloc_dva+114 (0, 32dc18000, 30003387d88, 400, 300080eba40, 3fd0f1)
Sep 13 23:33:23 netra2 genunix: [ID 179002 kern.notice]   %l0-3: 
0000000000000001 0000000000000000 0000000000000003 0000030011c068e0
Sep 13 23:33:23 netra2   %l4-7: 0000000000000000 00000300106c9748 
0000000000000000 00000300106c9748
Sep 13 23:33:23 netra2 genunix: [ID 723222 kern.notice] 000002a103e6b3f0 
zfs:metaslab_alloc+2c (30010834940, 200, 30003387d88, 3, 3fd0f1, 0)
Sep 13 23:33:23 netra2 genunix: [ID 179002 kern.notice]   %l0-3: 
0000030003387de8 00000300139e1800 00000000704506a0 0000000000000000
Sep 13 23:33:23 netra2   %l4-7: 0000030013fca7be 0000000000000000 
0000030010834940 0000000000000001
Sep 13 23:33:24 netra2 genunix: [ID 723222 kern.notice] 000002a103e6b4a0 
zfs:zio_dva_allocate+4c (30010eafcc0, 7b7515a8, 30003387d88, 70450508, 
70450400, 20001)
Sep 13 23:33:24 netra2 genunix: [ID 179002 kern.notice]   %l0-3: 
0000000070450400 0000070300000001 0000070300000001 0000000000000000
Sep 13 23:33:24 netra2   %l4-7: 0000000000000000 00000000018a5c00 
0000000000000003 0000000000000007
Sep 13 23:33:24 netra2 genunix: [ID 723222 kern.notice] 000002a103e6b550 
zfs:zio_write_compress+1ec (30010eafcc0, 23e20b, 23e000, 10001, 3, 30003387d88)
Sep 13 23:33:24 netra2 genunix: [ID 179002 kern.notice]   %l0-3: 
000000000000ffff 0000000000000000 0000000000000001 0000000000000200
Sep 13 23:33:24 netra2   %l4-7: 0000000000000000 0000000000010000 
000000000000fc00 0000000000000001
Sep 13 23:33:24 netra2 genunix: [ID 723222 kern.notice] 000002a103e6b620 
zfs:zio_wait+c (30010eafcc0, 30010834940, 7, 30010eaff20, 3, 3fd0f1)
Sep 13 23:33:24 netra2 genunix: [ID 179002 kern.notice]   %l0-3: 
ffffffffffffffff 000000007b7297d0 0000030003387d40 000003000be9edf8
Sep 13 23:33:24 netra2   %l4-7: 000002a103e6b7c0 0000000000000002 
0000000000000002 000003000a799920
Sep 13 23:33:24 netra2 genunix: [ID 723222 kern.notice] 000002a103e6b6d0 
zfs:dmu_objset_sync+12c (30003387d40, 3000a762c80, 1, 1, 3000be9edf8, 0)
Sep 13 23:33:24 netra2 genunix: [ID 179002 kern.notice]   %l0-3: 
0000030003387d88 ffffffffffffffff 0000000000000002 00000000003be93a
Sep 13 23:33:24 netra2   %l4-7: 0000030003387e40 0000000000000020 
0000030003387e20 0000030003387ea0
Sep 13 23:33:25 netra2 genunix: [ID 723222 kern.notice] 000002a103e6b7e0 
zfs:dsl_dataset_sync+c (30007609480, 3000a762c80, 30007609510, 30005c475b8, 
30005c475b8, 30007609480)
Sep 13 23:33:25 netra2 genunix: [ID 179002 kern.notice]   %l0-3: 
0000000000000001 0000000000000007 0000030005c47638 0000000000000001
Sep 13 23:33:25 netra2   %l4-7: 0000030007609508 0000000000000000 
0000030005c4caa8 0000000000000000
Sep 13 23:33:25 netra2 genunix: [ID 723222 kern.notice] 000002a103e6b890 
zfs:dsl_pool_sync+64 (30005c47500, 3fd0f1, 30007609480, 3000f904380, 
300032bb7c0, 300032bb7e8)
Sep 13 23:33:25 netra2 genunix: [ID 179002 kern.notice]   %l0-3: 
0000000000000000 0000030010834d00 000003000a762c80 0000030005c47698
Sep 13 23:33:25 netra2   %l4-7: 0000030005c47668 0000030005c47638 
0000030005c475a8 0000030010eafcc0
Sep 13 23:33:25 netra2 genunix: [ID 723222 kern.notice] 000002a103e6b940 
zfs:spa_sync+1b0 (30010834940, 3fd0f1, 0, 0, 2a103e6bcc4, 1)
Sep 13 23:33:25 netra2 genunix: [ID 179002 kern.notice]   %l0-3: 
ffffffffffffffff 000000000180c000 0000030010834a28 000003000f904380
Sep 13 23:33:25 netra2   %l4-7: 0000000000000000 00000300080eb500 
0000030005c47500 0000030010834ac0
Sep 13 23:33:25 netra2 genunix: [ID 723222 kern.notice] 000002a103e6ba00 
zfs:txg_sync_thread+134 (30005c47500, 3fd0f1, 0, 2a103e6bab0, 30005c47610, 
30005c47612)
Sep 13 23:33:25 netra2 genunix: [ID 179002 kern.notice]   %l0-3: 
0000030005c47620 0000030005c475d0 0000000000000000 0000030005c475d8
Sep 13 23:33:25 netra2   %l4-7: 0000030005c47616 0000030005c47614 
0000030005c475c8 00000000003fd0f2
Sep 13 23:33:26 netra2 unix: [ID 100000 kern.notice]
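For context, the first line is a VERIFY3U-style check around dmu_read() in
space_map_load() firing: dmu_read() returned 0x5 instead of 0, and
assfail3() panicked the box. A minimal user-space sketch of the mechanism
(the macro body and dmu_read_stub() are illustrative stand-ins, not the
actual kernel source):

#include <stdio.h>
#include <stdlib.h>

/*
 * Rough user-space imitation of the kernel's three-argument VERIFY:
 * evaluate both sides, and if the comparison fails, print the same
 * style of "assertion failed: ... (0x5 == 0x0)" line and abort.
 * (In the kernel, assfail3() panics instead.)
 */
#define VERIFY3U(left, op, right)                                       \
        do {                                                            \
                unsigned long long _l = (left), _r = (right);           \
                if (!(_l op _r)) {                                      \
                        fprintf(stderr, "assertion failed: %s %s %s "   \
                            "(0x%llx %s 0x%llx), file: %s, line: %d\n", \
                            #left, #op, #right, _l, #op, _r,            \
                            __FILE__, __LINE__);                        \
                        abort();                                        \
                }                                                       \
        } while (0)

/* Stand-in for dmu_read(); returns 5 (EIO) to mimic the failing read. */
static int
dmu_read_stub(void)
{
        return (5);
}

int
main(void)
{
        VERIFY3U(dmu_read_stub(), ==, 0);
        return (0);
}

Reading the stack bottom-up (txg_sync_thread -> spa_sync -> dsl_pool_sync
-> ... -> metaslab_activate -> space_map_load), the read failed while the
sync thread was loading a metaslab's space map to allocate blocks, which
is why the pool dies in the middle of a transaction group sync.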


So the 0x5 is EIO, an I/O error. But now I am getting 0x6 instead. To
reproduce the problem and free up the production server, I dd'd the disks
over to another server (actually a VMware guest), and I got the 0x6 there
as well. It looks like the pool is corrupt. I will read the
ondiskformat0822.pdf document to get a better understanding of the
on-disk format.
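
As for the error numbers: on Solaris, errno 5 is EIO and errno 6 is ENXIO
("no such device or address"), which may point at the device/path rather
than a media error. A trivial sketch to decode them (nothing ZFS-specific,
just strerror()):

#include <stdio.h>
#include <string.h>

int
main(void)
{
        int codes[] = { 0x5, 0x6 };     /* EIO, ENXIO on Solaris */
        for (size_t i = 0; i < sizeof (codes) / sizeof (codes[0]); i++)
                printf("0x%x = %s\n", codes[i], strerror(codes[i]));
        return (0);
}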
 
 