On Dec 13, 2009, at 11:28 PM, Yaverot wrote:

Been lurking for about a week and a half and this is my first post...

--- bfrie...@simple.dallas.tx.us wrote:
On Fri, 11 Dec 2009, Bob wrote:

Thanks. Any alternatives, other than using enterprise-level drives?

You can of course use normal consumer drives.  Just don't expect them
to recover from a read error very quickly.
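
A side note (my addition, not from the thread): on Solaris the sd
driver's per-command timeout is the sd_io_time tunable, which defaults
to 60 seconds. A minimal sketch of lowering it in /etc/system, assuming
the disks attach through the sd driver:

    # /etc/system: cut the per-command disk timeout from 60s to 10s
    # (assumes the sd driver; takes effect after a reboot)
    set sd:sd_io_time = 10

A shorter timeout means a stalled consumer drive gets reset and retried
sooner; the trade-off is more spurious resets when a healthy drive is
merely slow under load.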

Any way to tell ZFS that these drives are of "lower quality" and shouldn't be kicked out as faulted so quickly? I personally set up my home server to use OpenSolaris so I could have ZFS safeguard my data. I am willing to trade away performance for more stability, and less "yes, that drive is perfectly fine" type management.

FMA (not ZFS directly) looks for a number of failures over a period of
time; by default, 10 failures in 10 minutes. If you hit an error that
trips on TLER, the most FMA can see is 2-3 failures in 10 minutes, which
stays under that threshold. The symptom you will see is that these
errors take a long time to resolve, because, by default, the drive is
reset and the I/O retried after 60 seconds.
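
To see what FMA is actually counting, the stock FMA observability tools
help (my addition, not part of Richard's reply):

    # dump the raw error reports (ereports) FMA has received
    fmdump -eV

    # show per-module statistics for the fault manager daemon,
    # including the zfs-diagnosis engine
    fmstat

    # list anything FMA has actually diagnosed as faulty
    fmadm faulty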


I'm also willing to have more redundancy and less storage with the same number of drives, but that has to wait until I have enough unused drives to set up a new pool with the new layout (either raidz3 or full mirroring) and copy the data over, since there is no way to make this change in place.

This is a good idea anyway :-)
 -- richard
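
For anyone planning the same move, here is a minimal sketch of the copy
step, assuming a hypothetical old pool named tank, a new pool named
newpool, and placeholder device names:

    # create the new, more redundant pool (devices are placeholders)
    zpool create newpool raidz3 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0

    # snapshot the old pool recursively, then replicate everything,
    # properties included, into the new pool
    zfs snapshot -r tank@migrate
    zfs send -R tank@migrate | zfs recv -F newpool

If the old pool stays in use during the copy, a second, incremental pass
(zfs send -R -i tank@migrate tank@migrate2) can pick up the changes.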
