I think this is it:
https://engage.redhat.com/inktank-ceph-reference-architecture-s-201409080939

You can also check out a presentation on CERN's Ceph cluster:
http://www.slideshare.net/Inktank_Ceph/scaling-ceph-at-cern


At large scale, the biggest problem will likely be network I/O on the
inter-switch links.
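
As a rough back-of-the-envelope (my own numbers, not from the reference
architecture): assuming 3x replication and CRUSH placing each replica in
a different rack, every client write is forwarded by the primary OSD to
two replica OSDs, which usually sit behind other top-of-rack switches.
So the inter-switch links carry roughly

    2 x (aggregate client write bandwidth)

on top of normal read traffic and whatever backfill/recovery happens to
be running, which is why those uplinks tend to saturate before the OSD
nodes themselves do.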



On Thu, Dec 18, 2014 at 3:29 PM, Robert LeBlanc <rob...@leblancnet.us>
wrote:
>
> I'm interested to know if there is a published reference for this
> reference architecture. It would help alleviate some of the fears we
> have about scaling this thing out to a massive size (tens of thousands
> of OSDs).
>
> Thanks,
> Robert LeBlanc
>
> On Thu, Dec 18, 2014 at 3:43 PM, Craig Lewis <cle...@centraldesktop.com>
> wrote:
>
>>
>>
>> On Thu, Dec 18, 2014 at 5:16 AM, Patrick McGarry <patr...@inktank.com>
>> wrote:
>>>
>>>
>>> > 2. What should be the minimum hardware requirements for the servers
>>> > (CPU, memory, NIC, etc.)?
>>>
>>> There is no real "minimum" to run Ceph; it all depends on what your
>>> workload looks like and what kind of performance you need. We have
>>> seen Ceph run on Raspberry Pis.
>>
>>
>> Technically, the smallest cluster is a single node with a 10 GiB disk.
>> Anything smaller won't work.
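>>
>> If you actually want to try a single-node cluster, the main things to
>> change from the defaults are roughly these in ceph.conf, so that CRUSH
>> doesn't try to spread replicas across hosts that don't exist (the
>> values are only illustrative):
>>
>>     [global]
>>         osd pool default size = 1
>>         osd crush chooseleaf type = 0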
>>
>> That said, Ceph was envisioned to run on large clusters.  IIRC, the
>> reference architecture has 7 rows, each row having 10 racks, all full.
>>
>> Those of us running small clusters (fewer than 10 nodes) are noticing
>> that it doesn't work quite as well.  We have to significantly scale back
>> the amount of backfilling and recovery that is allowed.  I try to keep
>> all backfill/recovery operations touching less than 20% of my OSDs.  The
>> reference architecture could lose a whole row and still stay under that
>> limit.  My 5-node cluster is noticeably better than the 3-node cluster:
>> it's faster, has lower latency, and latency doesn't increase as much
>> during recovery operations.
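>>
>> For what it's worth, by "scale back backfilling and recovery" I mean
>> the usual OSD throttles, along these lines (the exact values are
>> illustrative and depend on the cluster):
>>
>>     [osd]
>>         osd max backfills = 1
>>         osd recovery max active = 1
>>         osd recovery op priority = 1
>>
>> or injected into a running cluster with:
>>
>>     ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'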
>>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
