Take a look at CERN's "Scaling Ceph at CERN" slides
<http://www.slideshare.net/Inktank_Ceph/scaling-ceph-at-cern>, as well
as Inktank's Hardware Configuration Guide
<http://www.inktank.com/resources/inktank-hardware-configuration-guide/>.


You need at least 3 MONs for production.  You might want more,
depending on the size of your cluster and your failure domains.

Since you're doing RBD, you're going to be more concerned with latency
than raw storage space.  Look at the number of IOPS each disk can do,
divide by the number of replicas you want, and then reduce that by the
percentage of the cluster you're willing to lose while still having
acceptable performance.  Add more OSDs until you get the IOPS you want.
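
Back-of-the-envelope, that works out to something like the Python
sketch below.  Every number here is a made-up placeholder -- plug in
your own measurements and targets.

    # Rough OSD count estimate for an RBD pool.  All figures are
    # assumptions; substitute your own measured values.
    target_client_iops = 10000   # aggregate IOPS your VMs need
    iops_per_disk      = 150     # measured random-write IOPS per spinner
    replicas           = 3       # replicated pool size
    tolerable_loss     = 0.10    # fraction of the cluster you can lose

    # Each client write becomes `replicas` backend writes, so usable
    # client IOPS per disk is roughly iops_per_disk / replicas, less the
    # headroom you keep for failures.
    usable_iops_per_osd = iops_per_disk / replicas * (1 - tolerable_loss)
    osds_needed = target_client_iops / usable_iops_per_osd
    print(f"~{osds_needed:.0f} OSDs needed")   # ~222 with these numbers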

SSD journals will really help to get the full IOPS out of each disk.
Make sure the SSD has enough write bandwidth to keep up with the OSDs
using it; i.e., if your SSDs can write 400 MB/s and the OSDs can write
100 MB/s, then you only want 4 OSDs sharing an SSD for journals.
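
A quick sanity check on that ratio (both speeds are assumptions;
measure your own devices):

    # How many OSD journals one SSD can absorb, by sustained write speed.
    ssd_write_mb_s = 400   # journal SSD sequential write speed
    osd_write_mb_s = 100   # write speed of one spinning OSD

    osds_per_ssd = ssd_write_mb_s // osd_write_mb_s
    print(f"At most {osds_per_ssd} OSD journals per SSD")   # 4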

Make sure you have enough network bandwidth to handle all of the OSDs.
Ten disks at 100 MB/s is 1 GB/s; you'll need 10GigE to handle that.
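
Same arithmetic in Python, with the per-node figures as assumptions:

    # Aggregate disk throughput vs. network capacity for one node.
    disks_per_node = 10
    disk_mb_s      = 100                          # per-disk throughput
    aggregate_mb_s = disks_per_node * disk_mb_s   # 1000 MB/s ~= 1 GB/s
    aggregate_gbit = aggregate_mb_s * 8 / 1000    # ~8 Gbit/s

    # 1GigE tops out around 125 MB/s, so this calls for 10GigE (or bonding).
    print(f"{aggregate_mb_s} MB/s is about {aggregate_gbit:.0f} Gbit/s")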


If you're concerned about latency, you probably want a dedicated cluster
network.
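
In ceph.conf that's just two settings under [global]; the subnets below
are placeholders for whatever you actually use:

    [global]
        # client and MON traffic
        public network  = 192.168.1.0/24
        # OSD replication and recovery traffic
        cluster network = 192.168.2.0/24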


To really get the best performance, you need more money and a lot of
testing.  :-)  It's up to you to determine whether you need those SSDs
and battery-backed write-caching RAID cards to meet your performance
numbers.  A larger cluster is a faster cluster (until you bottleneck on
network IO).  More spindles are faster.  If you favor speed over space,
you're better off with twice as many 1TB disks than half as many 2TB
disks.  That'll cost more, though, because you need twice as many nodes
to hold those twice as many disks.
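
To put rough numbers on the spindle-count point (per-disk IOPS is an
assumed figure):

    # Same raw capacity, different spindle counts.
    iops_per_disk = 150
    raw_tb        = 40

    disks_1tb = raw_tb // 1   # 40 spindles -> 6000 aggregate IOPS
    disks_2tb = raw_tb // 2   # 20 spindles -> 3000 aggregate IOPS
    print(disks_1tb * iops_per_disk, disks_2tb * iops_per_disk)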

Consider a cache tier using SSDs.
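
Setting one up looks roughly like the commands below; the pool names
are hypothetical, and you'll want to read the cache tiering docs for
the hit_set and sizing options before relying on it:

    # put a small SSD-backed pool ("rbd-cache") in front of the "rbd" pool
    ceph osd tier add rbd rbd-cache
    ceph osd tier cache-mode rbd-cache writeback
    ceph osd tier set-overlay rbd rbd-cache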




On Mon, Aug 11, 2014 at 6:23 PM, yuelongguang <fasts...@163.com> wrote:

> hi all,
> I am using ceph-rbd with OpenStack as its backend storage.
> Is there a best practice?
> 1. How many OSDs and MONs does it need at minimum, and in what
> proportion?
>
> 2. How do you deploy the network? Public, cluster network...
>
> 3. As for performance, what do you do? Journal...
>
> 4. Anything else that improves Ceph performance?
> Thanks.
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
