On Tue, 3 Jun 2014 10:46:36 +0000 Benjamin Somhegyi wrote:

> Hello Robert & Christian,
> 
> First, thank you for the general considerations; 3 and 3.extra have been
> ruled out. 
> 
> 
> > A simple way to make 1) and 2) cheaper is to use AMD CPUs; they will do
> > just fine at half the price with these loads.
> > If you're that tight on budget, 64GB RAM will do fine, too.
> > 
> > I assume you're committed to 10GbE in your environment, at least when
> > it comes to the public side.
> > I have found Infiniband cheaper (especially when it comes to switches)
> > and faster than 10GbE.
> > 
> 
> We decided to go with 10GbE on the storage side to consolidate the 10GbE
> external network connectivity requirement with the storage networking,
> and not use two separate technologies/switches/NICs in the compute and
> storage nodes.
> 
Perfectly fine and understandable, just wanted to point out an often
overlooked alternative.

> > Looking purely at bandwidth (sequential writes), your proposals are all
> > underpowered when it comes to the ratio of SSD journals to HDDs and the
> > available network bandwidth.
> > For example with 1) you have up to 2GB/s of inbound writes from the
> > network and about 1.7GB/s worth on your HDDs, but just 700MB/s on your
> > SSDs.
> > Even if you're more interested in IOPS (as you probably should be), it
> > feels like a waste.
> > 2) with 4 SSDs (or bigger ones that are faster) would make a decent
> > storage node in my book.
> 
> This is a very good point that I totally overlooked. I concentrated more
> on the IOPS alignment plus write durability, and forgot to check the
> sequential write bandwidth. The 400GB Intel S3700 is a lot faster, but
> double the price (around $950) compared to the 200GB model. 
Indeed, thus my suggestion of 4x 200GB ones (about 1.4GB/s combined). 
Still not a total match, but a lot closer. It also gives you a nice 1:3
ratio of SSDs to HDDs, meaning that the load and thus the endurance is
evenly balanced. 
With an uneven number of HDDs per SSD, one of your journal SSDs will wear
out noticeably earlier than the others.
A dead SSD will also only bring down 3 OSDs in that case (of course the
HDDs are much more likely to fail, but it's a number to remember).
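
If you want to sanity-check that balance yourself, here's a quick
back-of-the-envelope script. The per-device throughput figures are
assumptions on my part (roughly datasheet values for a 200GB DC S3700 and
a 7.2k SATA HDD), so plug in whatever your gear actually does:

#!/usr/bin/env python3
# Back-of-the-envelope check of the journal/HDD/network bandwidth balance.
# The per-device numbers below are assumptions, not measurements.

GBPS_NETWORK = 2.0    # ~2 GB/s inbound, e.g. 2x 10GbE
HDD_MBPS     = 140    # assumed sequential write per 7.2k SATA HDD
SSD_MBPS     = 365    # assumed sequential write per 200GB Intel DC S3700
NUM_HDDS     = 12
NUM_SSDS     = 4      # journal SSDs (2 in the original proposal 1)

hdd_total = NUM_HDDS * HDD_MBPS / 1000.0   # GB/s the OSD disks can sink
ssd_total = NUM_SSDS * SSD_MBPS / 1000.0   # GB/s the journals can sink

# With filestore every write hits the journal first, so the node can only
# accept the minimum of the three paths.
effective = min(GBPS_NETWORK, hdd_total, ssd_total)

print(f"network  : {GBPS_NETWORK:.2f} GB/s")
print(f"HDDs     : {hdd_total:.2f} GB/s ({NUM_HDDS} x {HDD_MBPS} MB/s)")
print(f"journals : {ssd_total:.2f} GB/s ({NUM_SSDS} x {SSD_MBPS} MB/s)")
print(f"effective sequential write ceiling: {effective:.2f} GB/s")
print(f"OSDs lost if one journal SSD dies : {NUM_HDDS // NUM_SSDS}")

With NUM_SSDS = 2 you get the ~700MB/s ceiling mentioned above, with 4 it
moves to roughly 1.4GB/s.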

There's one thing I forgot to mention which makes 2) a bit inferior to 1),
and that is density: 3U cases are less dense than 2U or 4U ones.
For example, 12U worth of 2) will give you 64 drive bays instead of 72 for 1).
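
The drive bay math, assuming the usual front-loader sizes (12 bays in 2U,
16 in 3U, 24 in 4U; adjust if your chassis differ):

# Rough density comparison for a 12U slice of rack.
# Bay counts per chassis height are assumptions.

RACK_UNITS = 12
chassis = {"2U x 12 bays": (2, 12),
           "3U x 16 bays": (3, 16),
           "4U x 24 bays": (4, 24)}

for name, (height, bays) in chassis.items():
    nodes = RACK_UNITS // height
    print(f"{name}: {nodes} nodes, {nodes * bays} drive bays in {RACK_UNITS}U")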

> Maybe I would
> be better off using enterprise SLC SSDs for journals? For example OCZ
> Deneva 2 C 60GB SLC costs around $640, and has 75K write IOPS and
> ~510MB/s write bandwidth by spec.
> 
The fact that they don't have power-loss protection will result in loud
cries of woe and doom in this ML. ^o^

As they don't give that data on their homepage, I would try to find
benchmarks that show what its latency and variance look like; the DC S3700s
deliver their IOPS without any stutter.
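
fio is the usual tool for that kind of measurement, but if you just want a
quick and dirty look at sync write latency (the journal-style workload),
something along these lines works too. The path is of course a placeholder;
point it at a scratch file on the SSD in question:

#!/usr/bin/env python3
# Minimal sketch: synchronous 4k writes with per-operation latency, so you
# see the variance and not just the average.

import os, time, statistics

PATH    = "/mnt/ssd-under-test/journal-probe"   # placeholder path, adjust
BLOCK   = b"\0" * 4096
SAMPLES = 10000

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o600)
lat = []
for _ in range(SAMPLES):
    t0 = time.perf_counter()
    os.write(fd, BLOCK)
    lat.append((time.perf_counter() - t0) * 1e6)   # microseconds
os.close(fd)

lat.sort()
print(f"avg {statistics.mean(lat):.0f} us, median {lat[len(lat)//2]:.0f} us, "
      f"99th {lat[int(len(lat)*0.99)]:.0f} us, max {lat[-1]:.0f} us")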

Regards,

Christian
> 
> Cheers,
> Benjamin
> 


-- 
Christian Balzer        Network/Systems Engineer                
ch...@gol.com           Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com