Well, it shows that you're not suffering from a known bug.  The symptoms
you were describing match those seen when a device spontaneously
shrinks within a raid-z vdev.  But it looks like the sizes agree
("config asize" = "asize"), so I'm at a loss.
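
For reference, one rough way to cross-check this outside the DTrace
script is to compare the asize recorded in the on-disk vdev labels with
what each slice currently reports.  A minimal sketch, assuming the
disks are c1t0d0s0 through c1t3d0s0 (substitute your actual device
names):

  # zdb -l /dev/dsk/c1t0d0s0 | grep asize
  # prtvtoc /dev/rdsk/c1t0d0s0

The label dump should show the asize the pool expects, and prtvtoc
reports the current slice size in 512-byte sectors, so a slice that had
somehow shrunk would stand out.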

- Eric

On Sun, Dec 07, 2008 at 05:52:10PM -0800, Brett wrote:
> Here is the requested output of raidz_open2.d upon running a zpool status:
> 
> [EMAIL PROTECTED]:/export/home/brett# ./raidz_open2.d
> run 'zpool import' to generate trace
> 
> 60027449049959 BEGIN RAIDZ OPEN
> 60027449049959 config asize = 4000755744768
> 60027449049959 config ashift = 9
> 60027507681841 child[3]: asize = 1000193768960, ashift = 9
> 60027508294854 asize = 4000755744768
> 60027508294854 ashift = 9
> 60027508294854 END RAIDZ OPEN
> 60027472787344 child[0]: asize = 1000193768960, ashift = 9
> 60027498558501 child[1]: asize = 1000193768960, ashift = 9
> 60027505063285 child[2]: asize = 1000193768960, ashift = 9
> 
> I hope that helps; it means little to me.
> 
> One thought I had was that maybe I somehow messed up the cables and the
> devices are not in their original sequence. Would that make any
> difference? I have seen examples suggesting that a raid-z import should
> figure out the devices regardless of their order or of new device
> numbers, so I was hoping it didn't matter.
> 
> Thanks, Rep
> -- 
> This message posted from opensolaris.org
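
As for device ordering: 'zpool import' identifies pool members by the
GUIDs stored in their on-disk labels, not by controller/target numbers,
so reshuffled cables shouldn't matter by themselves.  As a rough check,
again assuming hypothetical device names, you can dump a label and
compare the guid/path entries across the disks:

  # zdb -l /dev/dsk/c1t0d0s0 | egrep 'guid|path'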

--
Eric Schrock, Fishworks                        http://blogs.sun.com/eschrock
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
