Thanks for the list.

Phi

Eric Schrock wrote:
Yes, there are three incremental fixes that we plan in this area:

6417772 need nicer message on write failure

        This just cleans up the failure mode so that we get a nice
        FMA failure message and can distinguish this from a random
        failed assert.

6417779 ZFS: I/O failure (write on ...) -- need to reallocate writes

        In a multi-vdev pool, this would take a failed write and attempt
        to do the write on another toplevel vdev (sketched below).  This
        would all but eliminate the problem for multi-vdev pools.

6322646 ZFS should gracefully handle all devices failing (when writing)

        This is the "real" fix.  Unfortunately, it's also really hard.
        Even if we manage to abort the current transaction group,
        dealing with the semantics of a filesystem which has lost an
        arbitrary amount of change and notifying the user in a
        meaningful way is difficult at best.
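
To make 6417779 concrete, here is a minimal, self-contained sketch of the
reallocation idea.  It is not the actual ZFS code; the vdev_t/pool_t
structures and the vdev_write()/pool_write_reallocate() helpers below are
made up purely for illustration:

/*
 * Hypothetical sketch of the 6417779 idea: when a write to one
 * top-level vdev fails in a multi-vdev pool, retry the write on a
 * different top-level vdev before declaring the I/O failed.
 * These types and helpers are illustrative only, not ZFS internals.
 */
#include <stdio.h>
#include <stddef.h>

typedef struct vdev {
	int	id;
	int	healthy;	/* nonzero if the device accepts writes */
} vdev_t;

typedef struct pool {
	vdev_t	*vdevs;		/* top-level vdevs */
	int	nvdevs;
} pool_t;

/* Pretend device write: fails if the vdev is unhealthy. */
static int
vdev_write(vdev_t *vd, const void *buf, size_t len)
{
	(void)buf;
	(void)len;
	return (vd->healthy ? 0 : -1);
}

/*
 * Try the preferred vdev first; on failure, walk the remaining
 * top-level vdevs and reissue the write there.  Only if every vdev
 * fails does the pool-wide failure path (6322646) come into play.
 */
static int
pool_write_reallocate(pool_t *pp, int preferred, const void *buf, size_t len)
{
	if (vdev_write(&pp->vdevs[preferred], buf, len) == 0)
		return (preferred);

	for (int i = 0; i < pp->nvdevs; i++) {
		if (i == preferred)
			continue;
		if (vdev_write(&pp->vdevs[i], buf, len) == 0) {
			printf("write reallocated from vdev %d to vdev %d\n",
			    preferred, i);
			return (i);
		}
	}
	return (-1);	/* all top-level vdevs failed */
}

int
main(void)
{
	vdev_t	vdevs[] = { { 0, 0 }, { 1, 1 }, { 2, 1 } };	/* vdev 0 is dead */
	pool_t	pool = { vdevs, 3 };
	char	block[512] = { 0 };

	if (pool_write_reallocate(&pool, 0, block, sizeof (block)) < 0)
		fprintf(stderr, "I/O failure: no top-level vdev accepted the write\n");
	return (0);
}

Presumably the real fix would retry inside the I/O pipeline and only for
allocating writes, but the control flow is the gist: only when every
top-level vdev refuses the write does the much harder all-devices-failed
case (6322646) apply.
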

Hope that helps.

- Eric


On Thu, Aug 10, 2006 at 02:55:51PM -0700, Phi Tran wrote:

I remember a discussion about I/O write failures causing a panic for a
non-replicated pool and a plan to fix this in the future.  I couldn't
find a bug for this work, though.  Is there still a plan to fix this?

Phi



--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
