Just an FYI...we have a Ceph cluster setup for archiving audio and video using 
the following Dell hardware:

6 x Dell R720xd, 64 GB of RAM, for OSD nodes
72 x 4TB SAS drives as OSDs
3 x Dell R420, 32 GB of RAM, for MON/RADOSGW/MDS nodes
2 x Force10 S4810 switches
4 x 10 GbE LACP-bonded Intel cards
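For anyone curious how the LACP bonding side of a setup like this looks, here is a minimal sketch in Debian-style ifupdown syntax (interface names and addresses are illustrative, not from our actual config):

```
# /etc/network/interfaces -- bond two 10GbE ports with LACP (802.3ad)
auto bond0
iface bond0 inet static
    address 10.0.0.10
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer3+4
```

The layer3+4 hash policy spreads traffic across the bond members per flow, which tends to matter once many OSDs are talking at once. The switch ports on the S4810s need a matching LACP port-channel.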

This provides us with about 260 TB of usable space. With rados bench we are
able to get the following on some of the pools we tested:

1 replica - 1175 MB/s
2 replicas - 850 MB/s
3 replicas - 625 MB/s
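For context, the usable-capacity figure follows from simple arithmetic on the hardware above (the ~10% overhead factor for filesystems/journals is an illustrative assumption, not a measured number):

```python
# Back-of-the-envelope capacity math for the cluster described above.
drives_per_node, nodes, drive_tb = 12, 6, 4
raw_tb = nodes * drives_per_node * drive_tb      # 288 TB raw
usable_tb = raw_tb * 0.90                        # ~259 TB after ~10% overhead
print(f"raw: {raw_tb} TB, usable: {usable_tb:.0f} TB")
for replicas in (1, 2, 3):
    print(f"{replicas}x replication -> {usable_tb / replicas:.0f} TB effective")
```

Note that the bench numbers above don't scale exactly 1/N with replica count, since replication writes overlap in time.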

If we decide to build a second cluster in the future for RBD-backed VMs, we
will either look into the new Ceph 'ssd tiering' options, or use somewhat
less dense Dell nodes for OSDs with SSDs for the journals, in order to
maximize performance.
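For the journals-on-SSD option, the relevant ceph.conf knobs look roughly like this (partition labels are illustrative; Ceph expands $id per OSD):

```
# ceph.conf sketch: point OSD journals at dedicated SSD partitions
[osd]
osd journal size = 10240                          ; 10 GB journal
osd journal = /dev/disk/by-partlabel/journal-$id  ; one SSD partition per OSD
```

The usual caution applies: each journal SSD becomes a single point of failure for every OSD journaling to it, so the SSD-to-OSD ratio is worth thinking through.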

Shain


Shain Miley | Manager of Systems and Infrastructure, Digital Media | 
smi...@npr.org | 202.513.3649

________________________________________
From: ceph-users-boun...@lists.ceph.com [ceph-users-boun...@lists.ceph.com] on 
behalf of Lincoln Bryant [linco...@uchicago.edu]
Sent: Thursday, January 16, 2014 1:10 PM
To: Cedric Lemarchand
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph / Dell hardware recommendation

For our ~400 TB Ceph deployment, we bought:
        (2) R720s w/ dual X5660s and 96 GB of RAM
        (1) 10Gb NIC (2 interfaces per card)
        (4) MD1200s per machine
        ...and a boat load of 4TB disks!

In retrospect, I almost certainly would have gotten more servers. During
heavy writes we see the load spiking up to ~50 on Emperor, along with warnings
about slow OSDs, but we are clearly on the extreme end with something like 60
OSDs per box :)
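A quick sanity check on that density, using the common rule of thumb of roughly 1 GB of RAM and 1 CPU thread per OSD daemon (the rule itself is a rough guideline, not a hard spec):

```python
# Rule-of-thumb density check for ~60 OSDs on a dual-X5660 box with 96 GB RAM.
osds, ram_gb, cores = 60, 96, 2 * 6   # dual X5660 = 12 physical cores
print(f"RAM per OSD:   {ram_gb / osds:.1f} GB")   # comfortable
print(f"cores per OSD: {cores / osds:.2f}")       # very thin under recovery
```

RAM comes out fine, but at a fifth of a core per OSD the boxes are CPU-starved during heavy writes or recovery, which is consistent with the load spikes described above.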

Cheers,
Lincoln

On Jan 16, 2014, at 4:09 AM, Cedric Lemarchand wrote:

>
> Le 16/01/2014 10:16, NEVEU Stephane a écrit :
>> Thank you all for comments,
>>
>> So to sum up a bit, it's a reasonable compromise to buy:
>> 2 x R720 with 2 x Intel E5-2660v2 (2.2GHz, 25M cache), 48 GB RAM,
>> 2 x 146GB SAS 6Gbps 2.5-in 15K RPM hot-plug drives (Flex Bay) for the OS,
>> 24 x 1.2TB SAS 6Gbps 2.5-in 10K RPM drives for OSDs (journal co-located
>> on each OSD), and a PERC H710p integrated RAID controller with 1GB NV cache?
>> Or is it a better idea to buy 4 less powerful servers instead of 2?
> I think you are facing the well-known trade-off between
> price/performance/usable storage size.
>
> More, less powerful servers will give you better compute power and better
> IOPS per usable TB, but will be more expensive. An extrapolation of that
> would be to use a blade for each TB => very powerful / very expensive.
>
>
> The choice really depends on the workload you need to handle, which is not
> an easy thing to estimate.
>
> Cheers
>
> --
> Cédric
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
