Anton B. Rang wrote:
>> Careful here.  If your workload is unpredictable, RAID 6 (and RAID 5,
>> for that matter) will break down under highly randomized write loads.

> Oh?  What precisely do you mean by "break down"?  RAID 5's write
> performance is well understood, and it's used successfully in many
> installations for random write loads. Clearly, if you need the very
> highest performance from a given amount of hardware, RAID 1 will perform
> better for random writes, but RAID 5 can be quite good. (RAID 6 is
> slightly worse, since a random write requires access to 3 disks instead
> of 2.)
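
To put rough numbers on that 2-vs-3-disk point: the classic small-write penalty says a sub-stripe random write costs 2 disk ops on a mirror, 4 on RAID 5, and 6 on RAID 6. A back-of-the-envelope sketch in Python; the spindle count and per-disk IOPS are made-up assumptions, and real controllers with write-back caches will do better:

    # Classic small-write penalty: disk ops per random write.
    #   RAID 1/10: write both mirror halves                -> 2 ops
    #   RAID 5:    read data, read parity, write both      -> 4 ops
    #   RAID 6:    RAID 5 plus read/write of second parity -> 6 ops
    DISK_IOPS = 150        # assumption: one 7200 rpm spindle
    N_DISKS   = 8          # assumption: spindles in the array

    for level, penalty in {"raid10": 2, "raid5": 4, "raid6": 6}.items():
        write_iops = N_DISKS * DISK_IOPS / penalty
        print("%-7s ~%4.0f random-write IOPS" % (level, write_iops))

With those made-up numbers, mirroring sustains roughly twice the random-write rate of RAID 5 and three times that of RAID 6, which is the disk-count argument above expressed as IOPS.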

> There are certainly bad implementations out there, but in general RAID 5
> is a reasonable choice for many random-access workloads.

> (For those who haven't been paying attention, note that RAIDZ and RAIDZ2
> are closer to RAID 3 in implementation and performance than to RAID 5;
> neither is a good choice for random-write workloads.)
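
The reason, roughly: RAIDZ and RAIDZ2 write one variable-width stripe per block, so every disk in the vdev participates in every I/O, and the vdev delivers about one disk's worth of random-read IOPS, while mirrored pairs scale with the number of spindles. A sketch under the same made-up assumptions as above, ignoring the ARC, prefetch, and caching entirely:

    # Rough random-read model for 8 disks (illustrative only).
    DISK_IOPS = 150
    N_DISKS   = 8

    # RAIDZ/RAIDZ2: each block spans the whole vdev, so a random read
    # touches every disk -- the vdev acts like one big spindle.
    raidz_read_iops = 1 * DISK_IOPS

    # Mirrored pairs: each read is served by a single disk, and reads
    # spread across all spindles in the pool.
    mirror_read_iops = N_DISKS * DISK_IOPS

    print("one 8-disk RAIDZ2 vdev: ~%d IOPS" % raidz_read_iops)
    print("four 2-way mirrors:     ~%d IOPS" % mirror_read_iops)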

In my testing, if you have a lot of IO queues spread widely across your array, you do better with RAID 1 or 10. RAIDZ and RAIDZ2 are much worse, yes. If you add large transfers on top of this, as happens in multi-purpose pools, small reads can get starved out. The throughput curve (IO rate vs. queues x transfer size) flattens out a lot faster with RAID 5/6 than with RAID 10.
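
For what it's worth, even a toy saturation model reproduces that shape: delivered IOPS climb with the number of outstanding queues until the spindles hit the write-penalty-limited ceiling, and the RAID 5/6 ceilings sit much lower. The numbers below are the same made-up assumptions as above, not my measurements:

    # Toy model of the flattening throughput curve (illustrative only).
    DISK_IOPS, N_DISKS = 150, 8
    PER_QUEUE_IOPS     = 100   # assumption: demand of one outstanding stream

    def delivered(queues, penalty):
        ceiling = N_DISKS * DISK_IOPS / penalty   # array saturates here
        return min(queues * PER_QUEUE_IOPS, ceiling)

    for q in (1, 2, 4, 8, 16):
        print("queues=%2d  raid10=%4.0f  raid5=%4.0f  raid6=%4.0f"
              % (q, delivered(q, 2), delivered(q, 4), delivered(q, 6)))

RAID 6 flattens almost immediately, RAID 5 shortly after, and the mirrors keep climbing, which is the curve shape I saw in testing, if not these exact numbers.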

Here's the scoop: on multipurpose pools, ZFS often takes the place of many individual file systems. Those had the advantage of I/O separation, and some tuning was available to each file system individually. My experience, or should I say my theory, is that hardware-accelerated RAID 5/6 arrays work pretty well under more predictable IO patterns, sometimes even great, and I use RAID 5/6 a lot for those.
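
Part of that lost separation can be recovered with per-dataset properties. A minimal sketch follows; the pool and dataset names are hypothetical, but recordsize is a standard ZFS dataset property and "zfs create -o" is the standard syntax for setting it:

    # Sketch: carve one pool into datasets tuned per workload,
    # recovering some of the old per-filesystem separation.
    # Pool/dataset names are made up for the example.
    import subprocess

    datasets = {
        "tank/db":   "8K",     # small random I/O: match the DB page size
        "tank/logs": "128K",   # large sequential writes
    }
    for name, recsize in datasets.items():
        subprocess.run(["zfs", "create", "-o", "recordsize=" + recsize, name],
                       check=True)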

Don't get me wrong, I love ZFS and I ain't going back. Don't start flaming me; I just think we have to be aware of the limitations and engineer our storage carefully. I recently made the mistake of putting too much faith in hardware RAID 6, and as our user load grew, performance went through the floor faster than I thought it would.

My 2 cents.

Jon