I'm using my existing HAProxy server to also balance my RadosGW nodes.  I'm
not going to run into bandwidth problems on that link any time soon, but
I'll split RadosGW off onto its own HAProxy instance when it does become
congested.
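
For anyone setting that up, a minimal sketch of what the RadosGW pieces of
an haproxy.cfg might look like (hostnames, addresses, and the backend port
are placeholders; the port depends on which frontend your gateways run,
e.g. Apache/FastCGI listening on 80):

    frontend rgw_http
        bind *:80
        default_backend rgw_nodes

    backend rgw_nodes
        balance roundrobin
        # Mark a gateway down if it stops answering plain HTTP requests.
        option httpchk GET /
        server rgw1 192.0.2.11:80 check
        server rgw2 192.0.2.12:80 check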

I have a smaller cluster, only 5 nodes.  I'm running mon on the first 3
nodes, and osd and rgw on all 5 nodes.  rgw and mon have very little
overhead.  I don't plan to put those services on dedicated nodes anytime
soon.

MONs need a decent amount of Disk I/O; bad things happen if the disks can't
keep up with the MONMAP updates.  Virtual machines with dedicated IOPS
should work fine for them.  RadosGW doesn't need much CPU or Disk I/O.  I
would have no problem testing virtual machines as RadosGW nodes.
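
If you want to sanity-check a candidate MON disk before committing to it,
a quick synchronous small-write fio run gives a rough idea of whether it
can keep up (the file path and sizes are just placeholders; run it on the
disk that will hold the mon store, and delete the test file afterwards):

    # Small fsync'd random writes approximate the mon store's update
    # pattern better than a streaming test would.
    fio --name=mon-disk-test --filename=/var/lib/ceph/mon/fio-test \
        --rw=randwrite --bs=4k --iodepth=1 --fsync=1 \
        --size=256m --runtime=60 --time_based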

As far as few-and-big versus many-and-small goes, my gut feeling is that
Apache + FastCGI isn't going to scale up to 10 GigE speeds.  Obviously you
should test this.  I don't see much downside to going many-and-small,
unless you're planning to go crazy with the "many".  But that also depends
on your network, and how the GigE and 10 GigE networks switch/route.
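
One quick way to run that test: upload a large object through the gateway,
then fetch it repeatedly with ApacheBench and see where the transfer rate
flattens out relative to the NIC's line rate (the URL, bucket, and object
names below are made up, and anonymous GETs assume a public-read ACL on
the object):

    # 200 fetches of a large object, 8 concurrent; ab reports the
    # aggregate transfer rate at the end.
    ab -n 200 -c 8 http://rgw1.example.com/testbucket/1GB.bin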

If you have some spare CPU on your MON machines, I see no reason they can't
double as RadosGW nodes too.
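
Configuration-wise that's just an extra rgw section in ceph.conf pointed
at the mon host.  A rough sketch, assuming the firefly-era Apache/FastCGI
setup (instance name, host, and paths are placeholders):

    [client.radosgw.gw1]
        host = mon1
        keyring = /etc/ceph/ceph.client.radosgw.keyring
        rgw socket path = /var/run/ceph/ceph.radosgw.gw1.fastcgi.sock
        log file = /var/log/ceph/client.radosgw.gw1.log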

On Tue, Oct 14, 2014 at 11:41 AM, Simone Spinelli <simone.spine...@unipi.it>
wrote:

>  Dear all,
>
> we are going to add rados-gw to our ceph cluster (144 OSDs on 12 servers
> + 3 monitors, connected via a 10 GigE network) and we have a couple of
> questions.
>
>
> The first question is about the load balancer, do you have some advice
> based on real-world experience?
>
> Second question is about the number of gateway instances: is it better to
> have many small GigE-connected servers or fewer fat 10 GigE-connected
> servers, considering that the total bandwidth available is 10 GigE anyway?
> Do you use real or virtual servers? Any advice in terms of performance and
> reliability?
>
> Many thanks!
>
> Simone
>
>
>
> --
> Simone Spinelli <simone.spine...@unipi.it>
> Università di Pisa
> Direzione ICT - Servizi di Rete