On Mon, Jul 9, 2012 at 3:12 PM, Florian Philipp <li...@binarywings.net> wrote:
> On 2012-07-09 19:26, Jarry wrote:
>> Hi,
>>
>> I'm building a small server for VPS hosting (hypervisor not decided
>> yet, probably KVM or ESXi). Although it is non-production,
>> reliability is still an issue for me. I'm considering using two
>> small SSDs (e.g. Intel 313, 24GB, SLC) in hw-raid1 (Adaptec 3805)
>> just for the hypervisor, and 2xHDD/raid1 for the VPS.
>>
>> I started looking for info on the internet but the results are not
>> conclusive. Opinions vary from "no problem, use SSD&raid1"
>> to "using SSD in raid is not recommended no matter if it is
>> SW or HW". The missing trim command is the reason most often given
>> for not using SSDs in any raid.
>>
>> Does anyone have any experience with SSDs in raid (SW or HW)?
>> Is it safe to use on a server?
>>
>> Jarry
>
> I don't think trim is currently supported in mdraid. AFAIK dmraid
> supports it as long as the RAID controller plays along. I'm unsure about
> LVM mirrors.
>
> But trim is not your only issue: mdraid and LVM, as well as most (all?)
> hardware RAIDs, will mirror all blocks from one disk to the other when
> you assemble the RAID-1. This means that all blocks on your second SSD
> will immediately be marked dirty, which incurs a significant performance
> penalty.
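Rather than guessing, you can check for yourself whether discard/trim
survives each layer of the stack. A rough sketch (the device names are
only examples; substitute your own disks and array):

```shell
# Non-zero DISC-GRAN / DISC-MAX columns mean the device advertises
# discard (trim) support at that layer:
lsblk --discard

# The same information per device via sysfs; 0 means no discard support
# at that layer. If the raw SSDs report non-zero values but the md
# array reports 0, the RAID layer is swallowing trim:
for f in /sys/block/*/queue/discard_max_bytes; do
    printf '%s: %s\n' "$f" "$(cat "$f")"
done
```

On the initial-sync point: `mdadm --create` does have an
`--assume-clean` flag that skips the initial mirror pass, but use it
with care; it is only safe if you are certain both members really are
identical (or the array will never be read before being written).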
Well, yes and no. Yes, there will be a performance penalty once all the
blocks are marked dirty. But that would eventually happen anyway! On the
other hand, once all blocks are dirty you have performance _stability_:
the drive is permanently in its worst-case, fully-used state, so it
cannot degrade further. If you're in a position to care about
predictable performance metrics, that stability is quite useful.
-- 
:wq