You have created an unreplicated pool of the form:

        pool
            raidz
                /export/sl1
                /export/sl2
                /export/sl3
            /export/sl4
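
'zpool status pool' would show the stray device as a second top-level
vdev alongside the raidz, roughly like this (output approximate):

        NAME             STATE     READ WRITE CKSUM
        pool             ONLINE       0     0     0
          raidz1         ONLINE       0     0     0
            /export/sl1  ONLINE       0     0     0
            /export/sl2  ONLINE       0     0     0
            /export/sl3  ONLINE       0     0     0
          /export/sl4    ONLINE       0     0     0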

I believe 'zpool add' will warn you about this mismatched replication
level, hence the need for '-f'.
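
Without '-f', the add should fail with something like the following
(quoted from memory, so the exact text may differ slightly):

        # zpool add test1 /export/sl4
        invalid vdev specification
        use '-f' to override the following errors:
        mismatched replication level: pool uses raidz and new vdev is file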

You then overwrote the entire contents of /export/sl4, causing ZFS to
reopen the device, fail to recognize it, and mark it FAULTED.  Because
the pool is unreplicated, you then tripped over:

6413847 vdev label write failure should be handled more gracefully

Which is just one manifestation of a series of issues currently being
worked on, where a failed write in an unreplicated config can cause
ZFS to panic.

Using a replicated config will prevent this problem from happening in
the future.
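
For example (the extra file names here are made up for illustration),
you could keep every top-level vdev redundant when growing the pool:

        zpool add test1 raidz /export/sl4 /export/sl5 /export/sl6

or, since the lone device is already in the pool, attach a mirror to
it:

        zpool attach test1 /export/sl4 /export/sl5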

- Eric

On Sat, Jun 09, 2007 at 11:50:43AM -0700, Fyodor Ustinov wrote:
> dd if=/dev/zero of=sl1 bs=512 count=256000
> dd if=/dev/zero of=sl2 bs=512 count=256000
> dd if=/dev/zero of=sl3 bs=512 count=256000
> dd if=/dev/zero of=sl4 bs=512 count=256000
> zpool create -m /export/test1 test1 raidz /export/sl1 /export/sl2 /export/sl3
> zpool add -f test1 /export/sl4
> dd if=/dev/zero of=sl4 bs=512 count=256000
> zpool scrub test1
> 
> Panic, with a message like the one in the attached image.


--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock