Hi Bob

On Friday 27 November 2009 17:19:22 Bob Friesenhahn wrote:
> 
> It is interesting that in addition to being in the same vdev, the
> disks encountering serious problems are all target 6.  Besides
> something at the zfs level, there could be some issue at the
> device driver, or underlying hardware level.  Or maybe just bad luck.
> 
> As I recall, Albert Chin-A-Young posted about a pool failure where
> many devices in the same raidz2 vdev spontaneously failed somehow (in
> his case the whole pool was lost).  He is using different hardware but
> this looks somewhat similar.

It looks quite similar to this one:

http://www.mail-archive.com/storage-disc...@opensolaris.org/msg06125.html

We swapped the drive; resilvering is almost through, and the vdev is showing
a large number of errors:

          raidz1            DEGRADED     0     0     1
            spare           DEGRADED     0     0 8.81M
              replacing     DEGRADED     0     0     0
                c1t6d0s0/o  FAULTED      6     0    17  corrupted data
                c1t6d0      ONLINE       0     0     0  120G resilvered
              c8t7d0        ONLINE       0     0     0  120G resilvered
            c5t6d0          ONLINE       0     0     0
            c6t6d0          DEGRADED     0     0    41  too many errors
            c7t6d0          DEGRADED     1     0    14  too many errors
            c8t6d0          ONLINE       0     0     1


If having all sixes is a problem, maybe we should try a diagonal layout next 
time (or solve the n-queens problem on a rectangular Thumper layout)...

I guess after resilvering the next step will be a zpool clear and a new scrub, 
but I fear that will show errors again.
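Concretely, something along these lines (a sketch only; the pool name "tank" is a placeholder, substitute your own, and double-check the resilver has finished in "zpool status" before clearing):

```shell
# Wait until "zpool status tank" no longer reports a resilver in progress,
# then reset the error counters on all devices in the pool:
zpool clear tank

# Kick off a fresh scrub to re-verify every block:
zpool scrub tank

# Watch the scrub progress and see whether new checksum errors appear:
zpool status -v tank
```

If the scrub turns up errors again on the same targets, that would point at the controller/cabling path rather than the individual drives.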

Cheers

Carsten
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
