On Wed, Apr 7, 2010 at 12:09 PM, Jason S <j.sin...@shaw.ca> wrote:

> I was actually already planning to get another 4 GB of RAM for the box
> right away anyway, but thank you for mentioning it! As there appear to be a
> couple of ways to "skin the cat" here, I think I am going to try both a
> 14-spindle raidz2 and a 2x 7-drive raidz2 configuration and see what the
> performance is like. I have a few days of grace before I need to have this
> server ready for duty.
>
Don't bother with the 14-drive raidz2.  I can attest to just how horrible
the performance of a single, large raidz2 vdev is:  atrocious.
 Especially when it comes time to scrub or resilver.  You'll end up
thrashing all the disks, taking close to a week to resilver a dead drive (if
you can actually get it to complete), and pulling your hair out in
frustration.
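(A quick sketch for keeping an eye on that, with "tank" as a placeholder
pool name:

  zpool status tank   # status output reports scrub/resilver progress
  zpool scrub tank    # start a manual scrub of the whole pool

The status output also gives an estimate of time remaining, for whatever
that estimate is worth during a thrashing resilver.)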

Our original configuration in our storage servers used a single 24-drive
raidz2 vdev built from 7200 RPM SATA drives.  It worked, not well, but it
worked ... until the first drive died.  After 3 weeks, the resilver still
hadn't finished, the backup processes weren't completing overnight due to
the resilver, and things just went downhill.  We redid the pool as 3x raidz2
vdevs using 8 drives each, and things are much better.  (If I had to do it
over again today, I'd use 4x raidz2 vdevs using 6 drives each.)
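
For reference, a pool like that is created by listing multiple raidz2
groups in a single zpool create; the device names below are placeholders
for whatever your controller presents:

  # one pool, three 8-disk raidz2 vdevs (24 disks total)
  zpool create tank \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
      raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 \
      raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0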

The more vdevs you add to a pool, the better the pool's raw I/O performance
will be.  Go with lots of smaller vdevs.  With 14 drives, play around with
the following (example commands after the list):
  2x raidz2 vdevs using 7 drives each
  3x raidz2 vdevs using 4 drives each (with two hot-spares, or a mirror vdev
for root?)
  2x raidz2 vdevs using 6 drives each (with two hot-spares, perhaps?)
  4x raidz1 vdevs using 3 drives each (with two hot-spares; maybe not enough
redundancy?)
  4x 3-way mirror vdevs (with two hot-spares; maybe too much space lost to
redundancy?)
  7x mirror vdevs using 2 drives each
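
As an example of how the first and last of those look (device names again
placeholders):

  # 2x raidz2 vdevs, 7 disks each
  zpool create tank \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
      raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0

  # 7x mirror vdevs, 2 disks each
  zpool create tank \
      mirror c1t0d0 c2t0d0  mirror c1t1d0 c2t1d0 \
      mirror c1t2d0 c2t2d0  mirror c1t3d0 c2t3d0 \
      mirror c1t4d0 c2t4d0  mirror c1t5d0 c2t5d0 \
      mirror c1t6d0 c2t6d0

Hot-spares can be added after the fact with "zpool add tank spare <disk>".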

You really need to decide which is more important:  raw storage space or raw
I/O throughput.  They're almost (not quite, but almost) mutually exclusive.
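
To put rough numbers on it, assume (purely for illustration) 1 TB drives:
the 2x 7-disk raidz2 layout gives you 10 data disks, so ~10 TB usable, but
only 2 vdevs' worth of I/O; the 7x 2-disk mirror layout gives you ~7 TB
usable but 7 vdevs' worth of I/O.  Everything else on the list lands
somewhere in between.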


> Something I forgot to note in my original post is that the performance
> numbers I am concerned with are primarily for reads. There could be at
> any one point 4 media players attempting to stream media from this server.
> The media players all have 100 Mb interfaces, so as long as I can reliably
> stream 400 Mb/s it should be OK (this is assuming all the media players are
> playing high-bitrate Blu-ray streams at one time). Any writing to this array
> would happen pretty infrequently, and I normally schedule any file transfers
> for the wee hours of the morning anyway.
>
>
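
For what it's worth, the arithmetic there is easy to satisfy:  4 players x
100 Mb/s = 400 Mb/s aggregate, which is only ~50 MB/s of sequential reads.
 Any of the layouts above should sustain that; the vdev layout matters far
more for scrub/resilver times and random I/O than for a handful of
sequential streams.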
-- 
Freddie Cash
fjwc...@gmail.com
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
