On Mon, Jun 16, 2008 at 5:33 PM, Robert Milkowski <[EMAIL PROTECTED]> wrote:
> Have you got more details or at least bug ids?
> Is it only (I doubt) FC related?

I ran into something that looks like

6594621 dangling dbufs (dn=ffffff056a5ad0a8, dbuf=ffffff0520303300)
during stress

with LDoms 1.0.  It seems as though data that ZFS in a guest LDom
thought was committed was not really committed.  It is not FC related,
but it is quite frustrating to deal with a panic loop caused by a file
system (zpool) that is not even required to boot the system to
single-user mode.  That one has since been fixed.

More recently I reported:

6709336 panic in mzap_open(): avl_find() succeeded inside avl_add()

If the file that triggered this panic were in a place that was read at
boot, it would be a panic loop.

I asked on the list[1] if anyone was interested in a dump to dig into
it more, with no takers.  Earlier today I noticed that Jeff Bonwick
said that not getting dumps was criminal[2], so a special cc goes out
to him.  :)

1. http://mail.opensolaris.org/pipermail/zfs-discuss/2008-May/047869.html
2. http://mail.opensolaris.org/pipermail/caiman-discuss/2008-June/004405.html

I've run into many other problems with I/O errors when doing a stat()
of a file.  Repeated tries fail, but a reboot seems to clear it.
zpool scrub reports no errors and the pool consists of a single mirror
vdev.  I haven't filed a bug on this yet.
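For what it's worth, the behavior is easy to see with a trivial stat()
loop along the lines of the sketch below (the path is just a
placeholder, not the actual file from my setup): the call keeps failing
with an I/O error until the box is rebooted.

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void)
{
        /* Placeholder path -- substitute a file that shows the problem. */
        const char *path = "/tank/some/file";
        struct stat sb;
        int i;

        for (i = 0; i < 5; i++) {
                if (stat(path, &sb) == 0)
                        printf("try %d: ok, size=%lld\n",
                            i, (long long)sb.st_size);
                else
                        printf("try %d: stat failed: %s (errno=%d)\n",
                            i, strerror(errno), errno);
                sleep(1);
        }
        return (0);
}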

-- 
Mike Gerdts
http://mgerdts.blogspot.com/