On Wed, Sep 8, 2010 at 6:27 AM, Edward Ned Harvey <sh...@nedharvey.com> wrote:
> Both of the above situations resilver in equal time, unless there is a bus
> bottleneck.  21 disks in a single raidz3 will resilver just as fast as 7
> disks in a raidz1, as long as you are avoiding the bus bottleneck.  But 21
> disks in a single raidz3 provides better redundancy than 3 vdev's each
> containing a 7 disk raidz1.

No, it (a 21-disk raidz3 vdev) most certainly will not resilver in
the same amount of time.  In fact, I highly doubt it would resilver
at all.

My first foray into ZFS resulted in a 24-disk raidz2 vdev using 500 GB
Seagate ES.2 and WD RE3 drives connected to 3Ware 9550SXU and 9650SE
multilane controllers.  Nice 10 TB storage pool.  Worked beautifully as
we filled it with data.  Had less than 50% usage when a disk died.

No problem, it's ZFS, it's meant to be easy to replace a drive, just
offline, swap, replace, wait for it to resilver.
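
For anyone unfamiliar, the whole procedure is only a few commands.  A
minimal sketch, assuming a pool named "tank" and placeholder device
names (yours will differ):

  # take the failed disk out of service
  zpool offline tank c0t5d0
  # ...physically swap the drive...
  # rebuild onto the replacement in the same slot
  zpool replace tank c0t5d0
  # watch resilver progress
  zpool status tank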

Well, 3 days later, it was still under 10%, and every disk light was
still solid green.  SNMP showed over 100 MB/s of disk I/O continuously,
and the box was basically unusable (5 minutes just to get the password
prompt to appear on the console).

Tried rebooting a few times, stopped all disk I/O to the machine (it
was our backups box, running rsync every night for - at the time - 50+
remote servers), let it do its thing.

After 3 weeks of trying to get the resilver to complete (or even reach
50%), we pulled the plug and destroyed the pool, rebuilding it using
3x 8-drive raidz2 vdevs (sketched below).  Things have been a lot
smoother ever since.  We've since swapped 8 of the drives (1 vdev) up
to 1.5 TB models, and have replaced multiple dead drives.  Resilvers,
while running outgoing rsync all day and incoming rsync all night,
take 3 days for a 1.5 TB drive (with SNMP showing 300 MB/s disk I/O).
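
For reference, the rebuilt layout looks roughly like this (device
names here are placeholders, not our actual ones):

  # three 8-disk raidz2 top-level vdevs in one pool
  zpool create tank \
    raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0

Each vdev can lose any 2 disks, and a resilver only has to involve the
8 disks in the affected vdev instead of all 24.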

You most definitely do not want to use a single super-wide raidz vdev.
It just won't work: in our case, the resilver never came close to
finishing.

> Instead of the Best Practices Guide saying "Don't put more than ___ disks
> into a single vdev," the BPG should say "Avoid the bus bandwidth bottleneck
> by constructing your vdev's using physical disks which are distributed
> across multiple buses, as necessary per the speed of your disks and buses."

Yeah, I still don't buy it.  Even spreading disks out such that you
have 4 SATA drives per PCI-X/PCIe bus, I don't think you'd be able to
get a 500 GB SATA disk to resilver in a 24-disk raidz vdev (even a
raidz1) in a 50% full pool.  Especially if you are using the pool for
anything at the same time.
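
Some rough, back-of-the-envelope numbers (a simplified model, assuming
the resilver is read-bound and using the ~100 MB/s aggregate we
actually observed):

  data to rebuild      ~ 500 GB x 50% full        = 250 GB
  useful rebuild rate  ~ 100 MB/s / 23 survivors  = ~4.3 MB/s
  best-case duration   ~ 250 GB / 4.3 MB/s        = ~16 hours

And that's the sequential best case.  Because the resilver walks the
pool in metadata order, the reads are effectively random, per-disk
throughput collapses, and hours turn into weeks.  A narrower vdev
needs far fewer reads per reconstructed block (7 survivors instead of
23), which is exactly why the 8-disk raidz2 vdevs behave so much
better.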


-- 
Freddie Cash
fjwc...@gmail.com