> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of MLR
> 
>      Say we get a decent ssd, ~500MB/s read/write. If we have a 20 HDD zpool
> setup shouldn't we be reading at least at the 500MB/s read/write range? Why
> would we want a ~500MB/s cache?

You don't add an L2ARC (cache device) because you care about MB/sec.  You add
it because you care about read IOPS.
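
For illustration only (the pool name "tank" and the device name below are
placeholders, not from the original post), a cache device is attached to an
existing pool like this:

    # add an SSD as an L2ARC (read cache) device to pool "tank"
    zpool add tank cache c2t0d0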

Similarly, you don't add a dedicated log device for MB/sec.  You add it for
sync-write IOPS.
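
Again just a sketch with placeholder device names; a log device is added the
same way, and mirroring it is the usual recommendation so a failed SSD doesn't
take in-flight sync writes with it:

    # add a mirrored SLOG (separate ZIL device) to pool "tank"
    zpool add tank log mirror c2t1d0 c2t2d0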

Any pool layout - raidz, raidz2, mirror - will give you optimum *sequential*
throughput.  All the performance enhancements are for random IO.  Mirrors
outperform raidzN on random IO, but in either case, you get improvements by
adding log & cache devices.


> Am I correct in thinking this means, for example, I have a single 14 disk
> raidz2 vdev zpool,

It's not advisable to put more than ~8 disks in a single raidz vdev, because
it really hurts resilver time.  A vdev that wide might take a week or two to
resilver.


> the disks will go ~100MB/s each, this zpool would theoretically read/write at

No matter which configuration you choose, you can expect optimum throughput
from all drives in sequential operations.  Random IO is a different story.
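
As a rough rule of thumb (my numbers, assuming ~100 random IOPS per 7200rpm
disk, which isn't in the original question): each raidz vdev delivers roughly
the random IOPS of one member disk, so

    1 x 14-disk raidz2 vdev:  ~1 x 100 = ~100 random IOPS
    5 x  4-disk raidz vdevs:  ~5 x 100 = ~500 random IOPS

which is why more, narrower vdevs win for random IO.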


> What would be the best setup? I'm thinking one of the following:
>     a. 1vdev of 8 1.5TB disks (raidz2). 1vdev of 12 3TB disks (raidz3)?
> (~200MB/s reading, best reliability)

No.  12 in a single vdev is too much.


>     b. 1vdev of 8 1.5TB disks (raidz2). 3vdev of 4 3TB disks (raidz)?
> (~400MB/s reading, evens out size across vdevs)

Not bad, but different-sized vdevs will perform differently (8 disks vs 4),
so...  See below.


>     c. 2vdev of 4 1.5TB disks (raidz). 3vdev of 4 3TB disks (raidz)?
> (~500MB/s reading, maximize vdevs for performance)

This would be your optimal configuration.
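
Just as a sketch (the pool and disk names are placeholders), option c would be
built something like this, giving ZFS five vdevs to stripe across:

    # 2 raidz vdevs of 4 x 1.5TB disks + 3 raidz vdevs of 4 x 3TB disks
    zpool create tank \
        raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
        raidz c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
        raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 \
        raidz c2t4d0 c2t5d0 c2t6d0 c2t7d0 \
        raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0

Then add log & cache on top of whichever layout you pick.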
