On Tue, 2 Mar 2010, Jeffrey Johnson wrote:

> We have put together a 25T ZFS raidz2 zpool (16x2TB 5900 RPM 32MB
> Cache SATA 3.0Gb/s drives with 2x LSI SAS3081E-R SAS RAID Controllers
> presenting the drives as JBOD straight thru to the backplane) with 2
> hot-spares on OpenSolaris snv_133. The pool contains roughly 800
> million files which are all very small (~10-200k map tiles). We had a
> hiccup with one of the drives and the resilvering process was
> initiated ... the problem is that zpool status is estimating something
> like 650 hours currently. This estimate has varied from 400 to 1800 as

Oh, dear! 16 slow drives in one raidz2 vdev is just plain too many! It should be perhaps half that (at most) per raidz2 vdev. With the super-huge drives you will want to dial down the number of drives per vdev. The slow seek times and long rotational delays are a killer.
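
To sketch what that looks like (device names below are made up; substitute whatever your controllers actually enumerate), the same 16 data drives laid out as two 8-drive raidz2 vdevs plus the two spares would be created along these lines:

  zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 \
    spare c2t8d0 c2t9d0

A resilver then only has to read from the surviving drives of the one affected vdev rather than all 16. Note that you cannot reshape an existing raidz vdev, so getting there means recreating the pool and restoring the data.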

> it has run over the last couple of days, but it seems to have settled
> around 650 now. That is just WAY too long ... we fear that if the end
> user of this device ever has to replace a drive in the pool, it will
> take this long to rebuild again.

This fear is well founded.

Regardless, it is wise to use 'iostat -x 30' to see if you have a slow drive in the mix. The drives should be pretty uniformly loaded.
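
Something along these lines while the resilver is running (the first report covers the interval since boot, so skip it and watch the later samples):

  iostat -x 30

A drive whose svc_t (average service time) and %b (percent busy) sit well above its peers is the one to be suspicious of.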

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
