> > This is a very good point that I totally overlooked. I concentrated
> > more on the IOPS alignment plus write durability, and forgot to check
> > the sequential write bandwidth. The 400GB Intel S3700 is a lot
> > faster, but at around $950 it's double the price of the 200GB.
> Indeed, thus my suggestion of 4x 200GB drives (at about 1.4GB/s combined).
> Still not a total match, but a lot closer. Also gives you a nice 1:3
> ratio of SSDs to HDDs, meaning that the load and thus endurance is
> balanced.
> With an uneven number of HDDs per SSD, one of your journal SSDs will
> wear out noticeably earlier than the others.
> A dead SSD will also only bring down 3 OSDs in that case (of course the
> HDDs are much more likely to fail, but it's a number to remember).
> 

Thanks, that 1:3 ratio with 200GB SSDs may still fit within our budget. Also, 
good point on the unbalanced journal partitions.
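
For reference, here is the back-of-the-envelope math as a small Python 
sketch. The 365MB/s per 200GB S3700 just follows from the ~1.4GB/s you 
quoted for four of them; the ~110MB/s per HDD is my assumption for a 
7200rpm SATA drive, not a measured figure:

# rough journal bandwidth check; per-drive figures are assumptions
SSD_SEQ_WRITE_MBS = 365   # 200GB Intel DC S3700 (4 x 365 ~= the 1.4GB/s above)
HDD_SEQ_WRITE_MBS = 110   # assumed sustained write of a 7200rpm SATA HDD
ssds, hdds = 4, 12        # the proposed 1:3 ratio

journal_bw = ssds * SSD_SEQ_WRITE_MBS   # 1460 MB/s on the journal side
osd_bw = hdds * HDD_SEQ_WRITE_MBS       # 1320 MB/s of HDD write demand
print(f"journals: {journal_bw} MB/s vs OSDs: {osd_bw} MB/s")
print(f"HDDs per journal SSD: {hdds // ssds}")  # even split -> even wear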

> There's one thing I forgot to mention which makes 2) a bit inferior to
> 1), and that is density: 3U cases are less dense than 2U or 4U
> ones.
> For example 12U of 2) will give you 64 drive bays instead of 72 for 1).
> 
> > Maybe I would
> > be better off using enterprise SLC SSDs for journals? For example OCZ
> > Deneva 2 C 60GB SLC costs around $640, and has 75K write IOPS and
> > ~510MB/s write bandwidth per spec.
> >
> The fact that they don't have power-loss protection will result in loud
> cries of woe and doom in this ML. ^o^
> 

Whoa, I didn't know that! It would be "funny" to lose the entire Ceph 
cluster's data after a power cut, due to corruption of the majority of the 
journal filesystems...


> As they don't give that data on their homepage, I would try to find
> benchmarks that show its latency and variance; the DC S3700s
> deliver their IOPS without any stutters.
> 

The eMLC version of the OCZ Deneva 2 didn't perform that well under stress 
testing; the actual results were well below the spec:
http://www.storagereview.com/ocz_deneva_2_enterprise_ssd_review
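
If a drive we shortlist has no published latency data, a quick probe of 
what the Ceph journal actually does (single-threaded synchronous 4k 
writes) should expose any stuttering. A minimal Python sketch, assuming 
Linux; the /dev/sdX path is a placeholder, and the test overwrites data 
on that device:

import os, time, mmap

DEV = "/dev/sdX"   # placeholder scratch device; its contents get overwritten
RUNS = 10000

buf = mmap.mmap(-1, 4096)      # page-aligned buffer, required by O_DIRECT
buf.write(b"\xff" * 4096)

fd = os.open(DEV, os.O_WRONLY | os.O_DIRECT | os.O_DSYNC)  # Linux-only flags
lat_us = []
for _ in range(RUNS):
    os.lseek(fd, 0, os.SEEK_SET)   # hammer the same 4k block, journal-style
    t0 = time.perf_counter()
    os.write(fd, buf)
    lat_us.append((time.perf_counter() - t0) * 1e6)
os.close(fd)

lat_us.sort()
p = lambda q: lat_us[int(len(lat_us) * q)]
print(f"median {p(0.5):.0f}us  p99 {p(0.99):.0f}us  max {lat_us[-1]:.0f}us")

A p99 and max that stay close to the median is exactly what the S3700 is 
praised for; a drive that stutters will show up immediately in those two 
numbers.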



Regards,
Benjamin
