Martin,

Thank you very much for sharing your insight on hardware options.  This will be 
very useful for us going forward.

Shain

Shain Miley | Manager of Systems and Infrastructure, Digital Media | 
smi...@npr.org | 202.513.3649
________________________________
From: Martin B Nielsen [mar...@unity3d.com]
Sent: Monday, August 26, 2013 1:13 PM
To: Shain Miley
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Hardware recommendations

Hi Shain,

Those R515s seem to mimic our servers (2U Supermicro with 12x 3.5" bays and 2x
2.5" in the rear for the OS).

Since we need a mix of SSD and platter, each node has 8x 4TB drives and 4x
500GB SSDs, plus 2x 250GB SSDs for the OS (2x 8-port LSI 2308 controllers in IT
mode).

We've partitioned 10GB from each of the 4x 500GB SSDs to use as a journal for 4
of the 4TB drives, and each of the two OS disks holds 2 journals for the
remaining 4 platter disks.
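
To illustrate, that journal layout in ceph.conf terms would look roughly like
this (the OSD numbers and device paths are made up):

    [osd]
    # 10 GB journal partitions
    osd journal size = 10240

    # platter OSD with its journal on a 10GB partition of a 500GB SSD
    [osd.0]
    osd journal = /dev/sdi1

    # platter OSD with its journal on a partition of one of the OS SSDs
    [osd.4]
    osd journal = /dev/sdm3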

We tested journal placement quite a bit, and this arrangement seemed to fit our
setup best (pure VM block storage, 3x replica).
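
For reference, the 3x replica is just the usual pool size setting, e.g. in
ceph.conf (or set per pool afterwards):

    [global]
    # keep three copies of every object by default
    osd pool default size = 3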

Everything is connected via 10GbE (one cluster network, one public network),
and we have 3 standalone monitor servers.
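
In ceph.conf terms the cluster/public split is just the standard options,
something like this (the subnets are placeholders):

    [global]
    # client and monitor traffic
    public network = 10.10.0.0/24
    # replication and recovery traffic between OSDs
    cluster network = 10.10.1.0/24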

For storage nodes we use an E5-2620 with 32GB RAM, and for monitor nodes an
E3-1260L with 16GB RAM. We've tested with both 1 and 2 nodes going down and
data starting to redistribute, and they seem to cope more than fine.

Overall I find these nodes a good compromise between capacity, price and
performance. We looked into getting 2U servers with 8x 3.5" bays and buying
more of them, but ultimately went with this.

We also have some boxes from Coraid (SR and SRX, with and without
flashcache/etherflash), so we've been able to do some direct comparisons, and
so far Ceph is looking good, especially on the price-to-storage ratio.

At any rate, back to your mail: I think the most important factor is looking at
all the pieces and making sure you're not hard-bottlenecked somewhere. We found
24GB of RAM to be a little on the low side when all 12 disks started to
redistribute, but 32GB is fine. Also, not having journals on SSD in front of
the platters really hurt when we tested it; that can probably be mitigated
somewhat with better RAID controllers. CPU-wise, the E5-2620 hardly breaks a
sweat, even with the extra bit of work it has to do when a node goes down.

Good luck with your HW adventure :).

Cheers,
Martin


On Mon, Aug 26, 2013 at 3:56 PM, Shain Miley <smi...@npr.org> wrote:
Good morning,

I am in the process of deciding what hardware we are going to purchase for our
new Ceph-based storage cluster.

I have been informed that I must submit my purchase needs by the end of this
week in order to meet our FY13 budget requirements (which does not leave me
much time).

We are planning to build multiple clusters (one primarily for radosgw at 
location 1; the other for vm block storage at location 2).

We will be building out our radosgw storage first, so that is the main focus of
this email thread.

I have read all the docs, white papers, etc. on hardware suggestions, and we
have an existing relationship with Dell, so I have been planning on buying a
bunch of Dell R515s with 4TB drives and using 10GigE networking for this
radosgw setup (although this will be primarily used for radosgw purposes, I
will also be testing a limited number of VMs on this infrastructure in order to
see what kind of performance we can achieve).

I am just wondering if anyone else has any quick thoughts on these hardware 
choices, or any alternative suggestions that I might look at as I seek to 
finalize our purchasing this week.

Thanks in advance,

Shain

Sent from my iPhone
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
