A couple of questions/comments --

Why is the REMOVED state not persistent? It seems that, if ZFS knows that an 
administrator pulled a disk deliberately, that's still useful information after 
a reboot. Changing the state to FAULTED is non-intuitive, at least to me.
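
Roughly the sequence I have in mind, as a sketch (pool and device names are
made up):

    # disk c1t2d0 is pulled deliberately
    zpool status tank     # the vdev shows up as REMOVED
    # ...reboot...
    zpool status tank     # the same vdev now shows FAULTED, and the fact
                          # that it was pulled on purpose has been lost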

What happens with autoreplace after a system reconfiguration? If controller 
numbers change, is it possible that autoreplace would grab a drive which was 
not part of a ZFS pool and try to use it? I recall some posts here where the 
"old" path was preserved in the pool and it was fairly difficult to get ZFS to 
recognize the "new" path.

In general, I don't like the idea of autoreplace being tied to the device path. 
It would be both safer and more general if the underlying frameworks exported a 
physical location identifier to the node. I suspect that this isn't currently 
done by Solaris, and I'm sure it's not done even for devices in enclosures
which support (say) SES; but it seems like the right long-term direction. For
Sun-supplied hardware, it would even be possible to use readable device names 
(e.g. "Slot A connector 2").

Autoreplace probably needs a lot of warnings except in the particular case of 
appliances and other highly controlled environments. Consider a server with
three drives, A, B, and C, in which A and B are mirrored and C is not. Pull
out A, B,
and C, and re-insert them as A, C, and B. If B is slow to come up for some 
reason, ZFS will see "C" in place of "B", and happily reformat it into a mirror 
of "A".  (Or am I reading this incorrectly?)

I hope that there's a way to disable the periodic probing of hot spares.
Frequently spinning those drives up could be highly annoying in some
environments (though useful in others, since it would also verify that the
disk is responding normally).
 
 