On 04/14/2013 08:38 AM, Stas Oskin wrote:
Hi.
I mean the sl4500 can have 2 nodes with 25 drives each (50 total)
instead of 1 node with 60 drives. See this:
http://h18000.www1.hp.com/products/quickspecs/14406_div/14406_div.HTML
That lets you use cheaper CPUs and also gives you better
performance, but is slightly less dense and probably more expensive
than the 1x60 model. The more important thing though is that when a
node fails you only lose 25 drives instead of 60 so recovery will be
smoother.
Aha, but supermicro has a new king:
http://www.supermicro.com/products/system/4u/6047/ssg-6047r-e1r72l.cfm
72 3.5" drives in 4U along with a motherboard and triple redundant
power supplies! Crazy! I have no idea how well ceph would run on
such a beast, but you'd probably need the fastest CPUs you could get
your hands on and a lot of RAM.
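Just to put a very rough number on "a lot": going by the ballpark rules
of thumb of roughly 1GB of RAM and about one core-GHz per OSD daemon
(those are loose assumptions, not hard requirements), a 72-drive box
adds up quickly:

    # back-of-the-envelope sizing for a 72-OSD node; all figures are
    # ballpark assumptions, not recommendations
    osds_per_node = 72      # one OSD daemon per drive
    ram_gb_per_osd = 1      # rough ~1GB of RAM per OSD rule of thumb
    ghz_per_osd = 1.0       # rough ~1GHz of CPU per OSD rule of thumb

    print("RAM floor: ~%d GB" % (osds_per_node * ram_gb_per_osd))
    print("CPU floor: ~%d GHz total across all cores"
          % (osds_per_node * ghz_per_osd))

Treat that as a floor; recovery and scrubbing can push CPU and memory
use well past it.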
Back to this topic, what would your general advice be regarding the below:
1) If data availability and redundancy are most important, would you go
with multiple 2U boxes to minimize the impact on the cluster in case of
any downtime?
My general feeling here is that it depends on the size of the cluster.
For small clusters, 2U or even 1U boxes may be ideal. For very large
clusters, it is probably fine to use a denser chassis. It's all about
the ratio of how many nodes are left to absorb the loss when a single
node fails. I don't have any hard numbers, but my instinct is that in
production I wouldn't want to lose more than about 10% of my cluster if
a node dies.
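To put some toy numbers on that 10% gut feeling (cluster size and
densities below are just examples, not recommendations):

    # fraction of the cluster lost when a single node dies, for a few
    # chassis densities; 720 total drives is just an example figure
    total_drives = 720
    for drives_per_node in (12, 25, 60, 72):
        nodes = total_drives // drives_per_node
        lost = 100.0 / nodes   # percent of OSDs gone with one node down
        print("%2d drives/node -> %2d nodes, one failure takes out %4.1f%%"
              % (drives_per_node, nodes, lost))

So the denser the chassis, the more nodes you want before a single
failure stays comfortably under that ~10% line, and the surviving nodes
also need enough free space and network headroom to re-replicate
everything the dead node held.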
2) Setting aside service and SLAs, is it really worth taking HP over
Supermicro, or is it simply overpaying for a brand?
I've had great experiences with both Supermicro hardware and HP
hardware. Products from both companies can work great for Ceph, but
each has its own benefits and downsides.
Service, support, and price are all things that may make one or the
other a better fit depending on your needs.
Thanks,
Stas.
--
Mark Nelson
Performance Engineer
Inktank