Howdy,

We are using ZFS on one of our Solaris 10 servers, and the box panicked
this evening with the following stack trace:

Nov 24 04:03:35 foo unix: [ID 100000 kern.notice]
Nov 24 04:03:35 foo genunix: [ID 802836 kern.notice] fffffe80004a14d0
fffffffffb9b49f3 ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1550
zfs:space_map_remove+239 ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1580
zfs:space_map_claim+32 ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a15c0
zfs:zfsctl_ops_root+2f95204d ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1600
zfs:zfsctl_ops_root+2f9522bc ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1620
zfs:zio_dva_claim+1d ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1630
zfs:zio_next_stage+72 ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1640
zfs:zio_gang_pipeline+1e ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1650
zfs:zio_next_stage+72 ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1680
zfs:zio_wait_for_children+49 ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1690
zfs:zio_wait_children_ready+15 ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a16a0
zfs:zfsctl_ops_root+2f96de26 ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a16d0
zfs:zio_wait+2d ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1700
zfs:zil_claim_log_block+60 ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1800
zfs:zil_parse+181 ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1840
zfs:zil_claim+d0 ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a19f0
zfs:dmu_objset_find+176 ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1ba0
zfs:dmu_objset_find+10d ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1c10
zfs:spa_load+5d4 ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1c80
zfs:spa_load+32d ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1cc0
zfs:spa_open_common+15b ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1d00
zfs:spa_get_stats+42 ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1d40
zfs:zfs_ioc_pool_stats+3f ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1d80
zfs:zfsdev_ioctl+146 ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1d90
genunix:cdev_ioctl+1d ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1db0
specfs:spec_ioctl+50 ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1de0
genunix:fop_ioctl+25 ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1ec0
genunix:ioctl+ac ()
Nov 24 04:03:35 foo.com genunix: [ID 655072 kern.notice]
fffffe80004a1f10 unix:sys_syscall32+101 ()
Nov 24 04:03:35 foo.com unix: [ID 100000 kern.notice]

It appears the ZFS pool on the host is toast, since the box panics
each time we try to import it. :( The stack trace from bug #6458218 is
similar, but there are enough differences to make me question whether
that is actually the underlying problem. Does anyone happen to know if
this is a known bug?
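In case it helps anyone compare against other reports, here is a rough
sketch of how we pulled just the ZFS frames out of the messages file.
The log path and the helper name are only illustrative assumptions:

```shell
# Summarize the zfs: stack frames recorded in the panic messages.
# /var/adm/messages is the usual Solaris 10 location; override MSGFILE
# if your syslog goes elsewhere. "zfs_frames" is just a name we made up.
MSGFILE="${MSGFILE:-/var/adm/messages}"

zfs_frames() {
  # Each frame looks like "zfs:space_map_remove+239"; -o prints only
  # the matching token, and sort -u collapses duplicates.
  grep -o 'zfs:[A-Za-z0-9_]*+[0-9a-f]*' "$1" | sort -u
}
```

Running `zfs_frames "$MSGFILE"` on our box produces the frame list you
see in the trace above, which made eyeballing it against the bug report
a little easier.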

Thanks,
- Ryan
-- 
UNIX Administrator
http://prefetch.net
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss