We have a situation where all of the spares in a set of pools have
gone into a faulted state and now, apparently, we can't remove them
or otherwise de-fault them. I'm confident that the underlying disks
are fine, but ZFS seems quite unwilling to do anything with the spares
situation.

(The specific faulted state is 'FAULTED   corrupted data' in
'zpool status' output.)
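
 For illustration, the spares section of 'zpool status' currently looks
roughly like this; the device name is a made-up placeholder:

        spares
          c4t6d0    FAULTED   corrupted data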

 Environment: Solaris 10 U6 on x86 hardware. The disks are iSCSI LUNs
from backend storage devices.

 I have tried the following (sketched concretely after this list):
- 'zpool remove': it produces no errors, but doesn't remove anything.
- 'zpool replace <pool> <drive>': it reports that the device is reserved
  as a hot spare.
- 'zpool replace <pool> <drive> <unused-drive>': also reports 'device
  is reserved as a hot spare'.
- 'zpool clear': reports that it can't clear errors because the device
  is reserved as a hot spare.
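
 Concretely, the attempts look roughly like this (pool and device names
are placeholders, and the error wording is paraphrased):

        zpool remove tank c4t6d0
           -> no error, but the spare is still listed
        zpool replace tank c4t6d0
           -> "device is reserved as a hot spare"
        zpool replace tank c4t6d0 c4t7d0
           -> "device is reserved as a hot spare"
        zpool clear tank c4t6d0
           -> "cannot clear errors ... device is reserved as a hot spare"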

 Because these are iSCSI LUNs, I can actually de-configure them (on the
Solaris side); would that make ZFS change its mind about the situation
and move to a state where I could remove them from the pools?
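
 If de-configuring is the way to go, on the Solaris initiator side it
would presumably look something like the following; the target and
portal names are placeholders and I have not run this against these
pools:

        iscsiadm remove static-config <target-iqn>,<portal-ip>:3260
                (for a statically configured target)
        iscsiadm remove discovery-address <portal-ip>:3260
                (if the LUN comes via sendtargets discovery instead)
        devfsadm -C
                (clean up the now-dangling /dev links)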

(Would exporting and then importing the pools make any difference,
especially if the iSCSI LUNs of the spares were removed? These are
production pools, so I can't just try it and see; it would create
user-visible downtime.)
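
 For completeness, what I'm contemplating but can't casually test is
roughly the following, with 'tank' as a placeholder pool name; the
downtime would come from the pool being unavailable between the export
and the import:

        zpool export tank
                (de-configure the spare's iSCSI LUN here, if that helps)
        zpool import tank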

        - cks