Hello,

you are indeed facing the problem of balancing density (and with it cost,
though really dense storage pods get more expensive again) against
performance.

I would definitely rule out 3) for the reason you're giving, and 3.extra
for the reason Robert gives: if one of those nodes crashes, your cluster
will be very busy for a while, unless you have the means to determine that
it will be back soon and can set "noout" in time.
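
A minimal sketch of that "noout" dance, assuming you script it in Python
around the stock ceph CLI (the wrapper name and the actual maintenance
step are placeholders):

  import subprocess

  def with_noout(maintenance):
      # Keep the mons from marking this node's OSDs out while it is down.
      subprocess.check_call(["ceph", "osd", "set", "noout"])
      try:
          maintenance()  # e.g. reboot or repair the node
      finally:
          # Restore normal recovery behaviour once the node is back.
          subprocess.check_call(["ceph", "osd", "unset", "noout"])

The same two commands are what you'd run by hand after a crash, provided
you get there before the mons start marking the OSDs out.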

A simple way to make 1) and 2) cheaper is to use AMD CPUs; they will do
just fine at half the price with these loads.
If you're that tight on budget, 64GB of RAM will do fine, too.

I assume you're committed to 10GbE in your environment, at least when it
comes to the public side.
I have found Infiniband cheaper (especially when it comes to switches) and
faster than 10GbE.

Looking purely at bandwidth (sequential writes), your proposals are all
underpowered when it comes to the ratio of SSD journals to HDDs and the
available network bandwidth. 
For example, with 1) you have up to 2GB/s of inbound writes from the
network and about 1.7GB/s of write capacity on your HDDs, but only about
700MB/s on your journal SSDs.
Even if you're more interested in IOPS (as you probably should be), it
feels like a waste.
2) with 4 SSDs (or bigger ones that are faster) would make a decent storage
node in my book.
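
To make the ratio explicit, here is the back-of-envelope arithmetic for
1); the per-device figures are rough assumptions on my part, not
measurements:

  # Rough sequential-write budget for option 1)
  hdd_write_mbs = 170                  # ~ per 4TB Constellation ES.3
  ssd_write_mbs = 365                  # ~ per 200GB Intel DC S3700
  network_mbs   = 2 * 10 * 1000 / 8    # 2x10GbE in LAG: ~2500 MB/s raw, ~2GB/s usable
  hdd_total_mbs = 10 * hdd_write_mbs   # ~1700 MB/s across the spinners
  ssd_total_mbs = 2 * ssd_write_mbs    # ~730 MB/s; every write lands on a journal first,
                                       # so this is what caps the node
  print(network_mbs, hdd_total_mbs, ssd_total_mbs)

Run the same numbers for 2) and you can see why a fourth (or faster)
journal SSD makes the ratio much less lopsided.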

Regards,

Christian

On Tue, 3 Jun 2014 08:33:43 +0000 Benjamin Somhegyi wrote:

> Hi,
> 
> We are at the end of the process of designing and purchasing storage to
> provide Ceph based backend for VM images, VM boot (ephemeral) disks,
> persistent volumes (and possibly object storage) for our future
> Openstack cloud. We considered many options and we chose to prefer
> commodity storage server vendors over 'brand vendors'. There are 3 (+1
> extra) types of storage server options we consider at the moment:
> 
> 1.) 2U, 12x 3.5" bay storage server, 24 pieces
>   - Dual Intel E5-2620v2, 128GB RAM
>  - Integrated LSI 2308 controller, cabled with 4-lane iPass-to-iPass
> cable to backplane (single LSI expander chip)
>  - 2x Intel DC S3700 200GB SSDs for journal, 10x4TB Seagate
> Constellation ES.3
>  - 2x10GbE connectivity (in LAG, client and cluster network in separate
> VLANs)
> 
> 2.) 3U, 16x 3.5" bay storage server, 18 pieces
>  - Dual Intel E5-2630v2, 128GB RAM
>   - Integrated LSI 2308 controller, cabled with 4-lane iPass-to-iPass
> cable to backplane (single LSI expander chip)
>   - 3x Intel DC S3700 200GB SSDs for journal, 13x4TB Seagate
> Constellation ES.3
>  - 2x10GbE connectivity
>  - Possibly use 1 out of 13 bays with 400 GB SSD in cache tier in front
> of RBD pools.
> 
> 3.) 4U 24x 3.5" bay, 12 pieces
>   - Dual Intel E5-2630v2, 256GB RAM
>  - Integrated LSI 2308 controller, single 4-lane iPass can be a limiting
> factor here with SSD journals!
>  - 4x journal SSD, 20x4TB HDD
>   - 2x10GbE for client, 2x10GbE for replication
>  - Possibly use 1 out of 20 bays with 400 GB SSD in cache tier in front
> of RBD pools.
> 
> 3.extra) 8 pieces
>  - 3U 36x 3.5" bay, supermicro recommended:
> http://www.supermicro.com/products/nfo/storage_ceph.cfm
>   - Dual Intel E5-2630v2, 256GB RAM
>  - 24 front disks served by onboard LSI 2308, 12 rear served by HBA with
> LSI 2308
>  - 30 HDDs (24 in front, 6 in rear) + 6 SSDs for journal
>  - 2x10GbE for client, 2x10GbE for replication
>   - Possibly use 1-2 out of 30 bays with 400 GB SSD in cache tier in
> front of RBD pools.
> 
> So, the question is: which one would you prefer? Of course the best
> would be 1.) in terms of performance and reliability but we'd rather
> avoid that if possible due to budget constraints (48x Intel CPU is
> pricy). Or maybe do you have alternative suggestions for this cluster
> size? Many thanks for the tips!
> 
> Cheers,
> Ben
> 


-- 
Christian Balzer        Network/Systems Engineer                
ch...@gol.com           Global OnLine Japan/Fusion Communications
http://www.gol.com/