Hello again,

I'm not making progress on this.

Every time I run a zpool scrub rpool I see:

$ zpool status -vx
  pool: rpool
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: scrub in progress for 0h0m, 0.01% done, 177h43m to go
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       DEGRADED     0     0     8
          raidz1    DEGRADED     0     0     8
            c0t0d0  DEGRADED     0     0     0  too many errors
            c0t1d0  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        /export/duke/test/Acoustic/3466/88832/09 - Check.mp3


I popped in a brand-new disk of the same size and ran a zpool replace of the
persistently degraded drive with the new one:

$ zpool replace rpool c0t0d0 c0t7d0

But that simply transferred the issue to the new drive:

$ zpool status -xv rpool
  pool: rpool
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: resilver completed after 2h41m with 1 errors on Wed Jun  4 20:22:27 2008
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         DEGRADED     0     0     8
          raidz1      DEGRADED     0     0     8
            spare     DEGRADED     0     0     0
              c0t0d0  DEGRADED     0     0     0  too many errors
              c0t7d0  ONLINE       0     0     0
            c0t1d0    ONLINE       0     0     0
            c0t2d0    ONLINE       0     0     0
        spares
          c0t7d0      INUSE     currently in use


Detaching the old drive then promoted c0t7d0 into its slot:

$ zpool detach rpool c0t0d0

$ zpool status -vx rpool
  pool: rpool
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: resilver completed after 2h41m with 1 errors on Wed Jun  4 20:22:27 2008
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     8
          raidz1    ONLINE       0     0     8
            c0t7d0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        <0xc3>:<0x1c0>

$ zpool scrub rpool

...

$ zpool status -vx rpool

  pool: rpool
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: scrub in progress for 0h0m, 0.00% done, 0h0m to go
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       DEGRADED     0     0     4
          raidz1    DEGRADED     0     0     4
            c0t7d0  DEGRADED     0     0     0  too many errors
            c0t1d0  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        /export/duke/test/Acoustic/3466/88832/09 - Check.mp3

$ rm -f "/export/duke/test/Acoustic/3466/88832/09 - Check.mp3"

rm: cannot remove `/export/duke/test/Acoustic/3466/88832/09 - Check.mp3': I/O error
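
The next thing I was planning to try, though I'm not confident it will help
given that the error keeps coming back after every scrub, is clearing the
error counters and scrubbing once more:

$ zpool clear rpool
$ zpool scrub rpool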


I'm guessing this isn't a hardware fault but rather a glitch in ZFS, though
I'm hoping to be proved wrong.
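
(To try to rule hardware in or out, I was also going to look at the FMA error
log and the per-device driver error counters, something along the lines of:

$ fmdump -eV | more
$ iostat -En

but I haven't dug through those yet.)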

Any ideas before I rebuild the pool from scratch? And if I do, is there 
anything I can do to prevent this problem in the future?
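
If it does come to a rebuild, my rough plan - assuming the rest of the data is
still readable and my build supports recursive send - was to snapshot
everything and stream it off to a scratch pool, roughly:

$ zfs snapshot -r rpool@evac
$ zfs send -R rpool@evac | zfs receive -d scratchpool

("scratchpool" being whatever spare pool I can stage the data on), then
destroy and recreate rpool and send it all back. But if there's a less drastic
fix I'd much rather hear about it.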

B
 
 