For our ~400 TB Ceph deployment, we bought:
        (2) R720s w/ dual X5660s and 96 GB of RAM
        (1) 10Gb NIC (2 interfaces per card)
        (4) MD1200s per machine
        ...and a boatload of 4 TB disks!
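
For anyone checking the math, that works out roughly as follows (a quick sketch 
in Python; I'm assuming every MD1200 bay is populated with one of the 4 TB 
disks, and the 3x replication in the usable figure is just an example, not 
necessarily what we run):

        # Back-of-envelope capacity for the hardware above.
        # Assumes each MD1200 shelf has all 12 bays filled with 4 TB disks;
        # the 3x replication used for the usable figure is only an example.
        hosts = 2
        shelves_per_host = 4        # MD1200 enclosures per machine
        bays_per_shelf = 12         # MD1200 is a 12-bay enclosure
        disk_tb = 4

        raw_tb = hosts * shelves_per_host * bays_per_shelf * disk_tb
        print(raw_tb)               # 384 TB raw -- the "~400 TB" above
        print(raw_tb / 3)           # ~128 TB usable at 3x replication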

In retrospect, I almost certainly would have gotten more servers. During 
heavy writes we see the load spiking up to ~50 on Emperor and warnings about 
slow OSDs, but we are clearly on the extreme end with something like 60 OSDs 
per box :)
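
If it helps anyone sizing similar boxes, here's the rough per-OSD arithmetic 
(the ~1 core and ~1-2 GB RAM per OSD targets are the usual rules of thumb, 
not measurements from our cluster):

        # Per-OSD CPU/RAM back-of-envelope for one of the R720s above.
        # The "roughly 1 core and 1-2 GB RAM per OSD" targets are common
        # rules of thumb, not hard requirements -- treat them as assumptions.
        osds_per_host = 60
        cores_per_host = 2 * 6          # dual X5660, 6 cores each (ignoring HT)
        ram_gb_per_host = 96

        print(cores_per_host / osds_per_host)   # 0.2 cores per OSD
        print(ram_gb_per_host / osds_per_host)  # 1.6 GB per OSD, tight in recovery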

Cheers,
Lincoln

On Jan 16, 2014, at 4:09 AM, Cedric Lemarchand wrote:

> 
> On 16/01/2014 10:16, NEVEU Stephane wrote:
>> Thank you all for the comments.
>> 
>> So to sum up a bit, is it a reasonable compromise to buy:
>> 2 x R720 with 2 x Intel E5-2660v2 (2.2 GHz, 25M cache), 48 GB RAM, 2 x 146 GB 
>> SAS 6Gbps 2.5-in 15K RPM hard drives (hot-plug, Flex Bay) for the OS, 24 x 
>> 1.2 TB SAS 6Gbps 2.5-in 10K RPM hard drives for OSDs (journal located on 
>> each OSD), and a PERC H710p integrated RAID controller with 1 GB NV cache?
>> Or is it a better idea to buy 4 less powerful servers instead of 2?
> I think you are facing the well-known trade-off between 
> price, performance, and usable storage size.
> 
> More, less powerful servers will give you more computational power and better 
> IOPS per usable TB, but will be more expensive. An extreme extrapolation of 
> that would be to use a blade for each TB => very powerful / very expensive.
> 
> 
> The choice really depends on the workload you need to handle, which is not 
> an easy thing to estimate.
> 
> Cheers
> 
> -- 
> Cédric
> 

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
