For our part, we have seen a Supermicro machine which is 2U with 2 CPUs, 24 2.5-inch SATA/SAS drives, and 2 onboard 10Gb NICs. I think it's good enough for both density and computing power.
At the other end, we are also planning to evaluate small nodes for Ceph, say an Atom with 2-4 disks per node; you could fit 6 such nodes in a 2U space.

On 2013-3-18, at 6:39, "Mark Nelson" <mark.nel...@inktank.com> wrote:

> Hi Stas,
>
> The SL4500 series looks like it should be a good option for large
> deployments, though you may want to consider going with the 2-node
> configuration with 25 drives each. The drive density is a bit lower but
> you'll have a better CPU/drive ratio and can get away with much cheaper
> processors (dual E5-2620s should be sufficient for 25 drives).
>
> It's important to keep in mind that unless you are talking about deploying
> multiple racks of OSDs, you are likely better off with smaller nodes with
> fewer drives (say 2U 12-drive boxes). That helps keep the penalty for losing
> a node from being too dramatic.
>
> Both the SL4500 and the Dell C8000 allow you to have configurations with
> multiple nodes in 1 chassis with fewer drives, so they are kind of an
> interesting compromise between high density and keeping the drives-per-node
> count lower. Granted, they both tend to be more expensive than Supermicro
> gear, so like always it's a giant balancing act. :)
>
> Mark
>
> On 03/17/2013 04:31 PM, Stas Oskin wrote:
>> Hi.
>>
>> First of all, nice to meet you, and thanks for the great software!
>>
>> I've thoroughly read the benchmarks on the SuperMicro hardware with and
>> without SSD combinations, and wondered if there were any tests done on
>> the HP file server.
>>
>> According to this article:
>> http://www.theregister.co.uk/2012/11/15/hp_proliant_sl4500_big_data_servers/
>>
>> this server in a single-node configuration is ideal for clustered systems
>> (OpenStack in this case), holds 60 3.5-inch drives, and can push up to 1M
>> IOPS. Priced at $7,643, it seems to offer serious competition to
>> SuperMicro's hardware.
>>
>> Any idea what throughput can be achieved on this machine with Ceph?
>>
>> Regards.

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
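As a rough illustration of the node-loss penalty Mark describes: with a fixed total drive count, larger nodes mean a bigger slice of the cluster has to be re-replicated when a single node fails. This is a minimal sketch only, assuming a hypothetical 240-drive cluster and simple proportional re-replication; none of the numbers come from the thread.

    # Back-of-the-envelope illustration of node-loss impact vs. drives per node.
    # All figures (240 total drives, 12/25/60 drives per node) are hypothetical.

    def node_loss_fraction(total_drives: int, drives_per_node: int) -> float:
        """Fraction of the cluster's drives (and thus roughly of its data)
        that must be re-replicated elsewhere when one node fails."""
        nodes = total_drives / drives_per_node
        return 1.0 / nodes  # one node's share of the cluster

    if __name__ == "__main__":
        TOTAL_DRIVES = 240  # hypothetical cluster size
        for per_node in (12, 25, 60):
            frac = node_loss_fraction(TOTAL_DRIVES, per_node)
            print(f"{per_node:>2} drives/node -> losing one node affects "
                  f"{frac:.1%} of the cluster")

With these assumed numbers, 12-drive nodes put about 5% of the cluster at stake per node failure, 25-drive nodes about 10%, and 60-drive nodes about 25%, which is why the drives-per-node count matters so much for smaller deployments.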