Hi,

We are at the end of the process of designing and purchasing storage to provide
a Ceph-based backend for VM images, VM boot (ephemeral) disks, persistent
volumes (and possibly object storage) for our future OpenStack cloud. After
considering many options, we chose to go with commodity storage server vendors
rather than 'brand' vendors. There are three (plus one extra) types of storage
server we are considering at the moment:

1.) 2U, 12x 3.5" bay storage server, 24 pieces
 - Dual Intel E5-2620v2, 128GB RAM
 - Integrated LSI 2308 controller, cabled with a 4-lane iPass-to-iPass cable to
the backplane (single LSI expander chip)
 - 2x Intel DC S3700 200GB SSDs for journal, 10x 4TB Seagate Constellation ES.3
(quick throughput sketch below)
 - 2x 10GbE connectivity (in LAG, client and cluster network in separate VLANs)
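
For option 1, a quick sanity check on where the per-node write ceiling sits
(a rough sketch in Python; the ~365 MB/s per DC S3700 200GB and ~150 MB/s per
7200 rpm HDD figures are assumptions, not measurements):

# Rough per-node write ceiling for option 1 (2 journal SSDs, 10 HDDs).
ssd_mb_s = 365        # assumed sequential write of one DC S3700 200GB
hdd_mb_s = 150        # assumed sequential write of one 4TB 7200 rpm HDD
nic_mb_s = 2 * 1250   # 2x 10GbE in LAG (client and cluster traffic share it)

journal_ceiling = 2 * ssd_mb_s   # every write lands on a journal SSD first
data_ceiling = 10 * hdd_mb_s     # and then on the filestore HDDs
node_ceiling = min(journal_ceiling, data_ceiling, nic_mb_s)
print(journal_ceiling, data_ceiling, nic_mb_s, node_ceiling)
# -> 730 1500 2500 730

So with a 5:1 HDD-to-journal ratio the two SSDs, not the spindles or the
network, would set the per-node write ceiling at roughly 730 MB/s.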

2.) 3U, 16x 3.5" bay storage server, 18 pieces
 - Dual Intel E5-2630v2, 128GB RAM
 - Integrated LSI 2308 controller, cabled with a 4-lane iPass-to-iPass cable to
the backplane (single LSI expander chip)
 - 3x Intel DC S3700 200GB SSDs for journal, 13x 4TB Seagate Constellation ES.3
 - 2x 10GbE connectivity
 - Possibly use 1 of the 13 HDD bays for a 400 GB SSD as a cache tier in front
of the RBD pools (rough sizing sketch below).
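
To get a feeling for how much cache one SSD per node would actually buy us, a
back-of-the-envelope sketch (assuming a 400 GB cache SSD per node, 3x
replication on both pools, and 12 remaining HDDs per node):

# Option 2: rough cache-tier vs. backing-pool sizing (assumptions only).
nodes = 18
cache_ssd_tb = 0.4     # one 400 GB SSD per node for the cache tier
hdds_per_node = 12     # 13 HDD bays minus the one given up for the cache SSD
hdd_tb = 4
replicas = 3

cache_usable_tb = nodes * cache_ssd_tb / replicas
backing_usable_tb = nodes * hdds_per_node * hdd_tb / replicas
print(round(cache_usable_tb, 1), round(backing_usable_tb),
      round(100 * cache_usable_tb / backing_usable_tb, 2))
# -> 2.4 288 0.83

So the cache tier would be roughly 2.4 TB usable, well under 1% of the backing
RBD pool; whether that is enough obviously depends on the VMs' working set.
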
3.) 4U, 24x 3.5" bay storage server, 12 pieces
 - Dual Intel E5-2630v2, 256GB RAM
 - Integrated LSI 2308 controller; the single 4-lane iPass link can be a
limiting factor here with SSD journals (see the arithmetic below)!
 - 4x journal SSDs, 20x 4TB HDDs
 - 2x 10GbE for client, 2x 10GbE for replication
 - Possibly use 1 of the 20 HDD bays for a 400 GB SSD as a cache tier in front
of the RBD pools.
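
To put the iPass concern for option 3 in numbers (a rough sketch; the ~600
MB/s per 6 Gb/s SAS lane and the SSD/HDD figures are assumptions):

# Option 3: is the single 4-lane iPass/expander link the bottleneck?
lane_mb_s = 600              # ~6 Gb/s SAS lane after 8b/10b encoding
link_mb_s = 4 * lane_mb_s    # one 4-lane iPass cable to the expander

journal_ceiling = 4 * 365    # 4 journal SSDs, assumed 365 MB/s each
hdd_ceiling = 20 * 150       # 20 HDDs, assumed 150 MB/s each

# With the journals behind the same expander, every byte written crosses the
# link twice: once into the journal SSD, once into the data HDD.
link_ceiling = link_mb_s / 2
print(link_mb_s, link_ceiling, journal_ceiling, hdd_ceiling)
# -> 2400 1200.0 1460 3000

So the shared link would saturate at roughly 1.2 GB/s of writes per node,
before the journal SSDs or the spindles do, which is why we flag it above.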

3.extra) 3U, 36x 3.5" bay storage server (the Supermicro-recommended Ceph
chassis: http://www.supermicro.com/products/nfo/storage_ceph.cfm), 8 pieces
 - Dual Intel E5-2630v2, 256GB RAM
 - 24 front disks served by the onboard LSI 2308, 12 rear disks served by an
add-on HBA with an LSI 2308
 - 30 HDDs (24 in front, 6 in rear) + 6 SSDs for journal
 - 2x 10GbE for client, 2x 10GbE for replication
 - Possibly use 1-2 of the 30 HDD bays for 400 GB SSDs as a cache tier in front
of the RBD pools.
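
For comparison, the raw numbers come out quite close across the four options
(quick sketch, assuming 4 TB per HDD, 3x replication, no near-full headroom,
and 12 cores per node since both CPU models are 6-core parts):

# Cluster-wide comparison: OSD count, capacity, RAM and cores per OSD.
options = {
    "1) 2U x24": {"nodes": 24, "hdds": 10, "ram_gb": 128},
    "2) 3U x18": {"nodes": 18, "hdds": 13, "ram_gb": 128},
    "3) 4U x12": {"nodes": 12, "hdds": 20, "ram_gb": 256},
    "3e) 3U x8": {"nodes": 8,  "hdds": 30, "ram_gb": 256},
}
cores_per_node = 12  # dual 6-core E5-26xx v2
for name, o in options.items():
    osds = o["nodes"] * o["hdds"]
    raw_tb = osds * 4
    print(name, osds, raw_tb, round(raw_tb / 3),
          round(o["ram_gb"] / o["hdds"], 1),     # GB RAM per OSD
          round(cores_per_node / o["hdds"], 2))  # cores per OSD
# 1)  240 OSDs, 960 TB raw, ~320 TB usable, 12.8 GB RAM and 1.2 cores per OSD
# 2)  234 OSDs, 936 TB raw, ~312 TB usable,  9.8 GB RAM and 0.92 cores per OSD
# 3)  240 OSDs, 960 TB raw, ~320 TB usable, 12.8 GB RAM and 0.6 cores per OSD
# 3e) 240 OSDs, 960 TB raw, ~320 TB usable,  8.5 GB RAM and 0.4 cores per OSD

Usable capacity is nearly identical everywhere; the real trade-off is how many
spindles each pair of CPUs and each NIC has to feed.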

So, the question is: which one would you prefer? Option 1.) would of course be
the best in terms of performance and reliability, but we would rather avoid it
if possible due to budget constraints (48 Intel CPUs are pricey). Or do you
have alternative suggestions for this cluster size?
Many thanks for the tips!

Cheers,
Ben
