Rich put forth on 11/10/2010 1:52 AM:
> The only difference I would have on this server is I would make it a
> RAID10 and not RAID5. That is much higher performing with all the
> writes to maildir, and it also offers better fault tolerance.
I typically use RAID10 for most high-load, transaction-heavy systems as well. I rarely recommend it to others, as they usually have trouble grasping that losing half the space of the disks is a good thing, regardless of the additional redundancy and performance. :)

Modern quality caching controllers, either PCI-X/e HBAs or SAN controllers, with decent parity ASICs and 1GB or more of cache, can often get RAID5/6 relatively close to RAID10 in IOPS and throughput.

The OP is currently planning on using a single mirror pair for his mail store. Anything is going to be better than that. An 8-disk RAID5 will have about 6-7x the IOPS of his mirror set WRT writes, maybe a little less WRT reads if his RAID controller intelligently interleaves block reads.

An 8-disk RAID10 will give 4 spindles of IOPS compared to 7 spindles for the RAID5 using the same 8 disks (rough numbers are sketched below). Assuming the card has a decent parity ASIC, the write performance should be similar, though it will be lower for the RAID5. The read performance of the RAID5 will be quite a bit higher due to the extra 3 spindles and no parity calculations on read operations.

The one thing I really, really like about RAID10 is the rebuild time. Simply mirroring one disk during the rebuild is much faster than any parity scheme. The only downside is that it creates a huge IO hot spot on the healthy drive of the failed pair. Configurable rebuild priority helps mitigate this, though. Regardless, rebuild times for RAID10 are typically dramatically lower than for RAID5/50/6/60.

--
Stan
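For anyone who wants to see the spindle arithmetic behind the figures above, here is a minimal back-of-the-envelope sketch in Python. The per-disk IOPS value (150) and the "effective spindle" counts are assumptions that follow the post's reasoning (a caching controller with a parity ASIC absorbing most of the RAID5 write penalty, RAID10 reads counted from one side of each mirror pair); they are illustrative, not benchmark results.

# Rough spindle/IOPS comparison for the layouts discussed above.
# The per-disk IOPS figure and the effective spindle counts are
# simplifying assumptions taken from the post, not measurements.

DISK_IOPS = 150  # assumed random IOPS per spindle (e.g. a 10k SAS drive)

layouts = {
    # name: (effective write spindles, effective read spindles)
    "2-disk mirror pair": (1, 2),  # writes hit both disks; reads may interleave
    "8-disk RAID5":       (7, 7),  # 7 data spindles, parity offloaded to the ASIC
    "8-disk RAID10":      (4, 4),  # 4 mirror pairs; reads counted one side per pair
}

base_write = layouts["2-disk mirror pair"][0] * DISK_IOPS

for name, (w, r) in layouts.items():
    write_iops = w * DISK_IOPS
    read_iops = r * DISK_IOPS
    print(f"{name:20s} ~{write_iops:5d} write IOPS "
          f"({write_iops / base_write:.0f}x the mirror pair), "
          f"~{read_iops:5d} read IOPS")

Under these assumptions the 8-disk RAID5 comes out at roughly 7x the mirror pair's write IOPS, in line with the 6-7x estimate above, and the 8-disk RAID10 at 4x. Real throughput will also depend on controller cache, stripe size, and access pattern, which is why the RAID5 write figure is hedged in the post.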