One update to this: I tried a scrub.  It found a number of errors on old 
snapshots (long story; I'd once done a zpool replace from an old disk with 
hardware errors onto this disk).  I destroyed those snapshots since they weren't 
needed; the snapshot I was trying to send did not have any errors.  After 
getting rid of the bad snapshots, I ran zpool clear on the device, but another 
zpool status still shows the errors (now without file names, just object numbers):

bash-3.2# zpool status -v
  pool: oldspace
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        oldspace    ONLINE       0     0     0
          c3t0d0s3  ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        <0x33>:<0x2e00>
        <0x33>:<0x6000>
        <0x33>:<0x8a00>
        <0x33>:<0x2e01>
        <0x33>:<0x6001>
        <0x33>:<0x2e02>
(with much more output....)

Despite the zpool clear and an export/import, those errors stuck around in the 
zpool status output.
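
For reference, the full sequence I'd run was roughly this (the snapshot name 
below is just a placeholder, not the real one):

bash-3.2# zpool scrub oldspace
bash-3.2# zpool status -v oldspace        # after the scrub finished
bash-3.2# zfs destroy oldspace@somesnap   # for each snapshot that showed errors
bash-3.2# zpool clear oldspace c3t0d0s3
bash-3.2# zpool export oldspace
bash-3.2# zpool import oldspace
bash-3.2# zpool status -v oldspace        # errors from above still listed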

Looking at the zpool man page, it seemed that setting failmode=continue might 
help, but it wasn't clear how that would affect a zfs send.
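
For reference, the setting in question would just be:

bash-3.2# zpool set failmode=continue oldspace

(per the zpool man page the valid values are wait, continue, and panic, with 
wait being the default), but I don't know whether flipping it actually changes 
anything for sending a snapshot that status says is clean.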

Another attempt at the zfs send/receive failed, with this in /var/adm/messages:
Mar 22 19:38:47 hancock zfs: [ID 664491 kern.warning] WARNING: Pool 'oldspace' 
has encountered an uncorrectable I/O error. Manual intervention is required.
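
For completeness, the send/receive attempt looks essentially like this (the 
dataset and snapshot names are stand-ins for the real ones):

bash-3.2# zfs send oldspace/data@snap | zfs receive newspace/data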

Any pointers on what sort of "manual intervention" is called for here would be 
greatly appreciated.
 

- Matt
 
 