Is this your Graham?  

> On Feb 14, 2021, at 4:31 PM, Graham Allan <g...@umn.edu> wrote:
> 
> On Tue, Feb 9, 2021 at 11:00 AM Matthew Vernon <m...@sanger.ac.uk> wrote:
> 
>> On 07/02/2021 22:19, Marc wrote:
>>> 
>>> I was wondering if someone could post a config for haproxy. Is there
>> something specific to configure? Like binding clients to a specific backend
>> server, client timeouts, security specific to rgw etc.
>> 
>> Ours is templated out by ceph-ansible; to try and condense out just the
>> interesting bits:
>> 
>> (snipped the config...)
>> 
>> The aim is to use all available CPU on the RGWs at peak load, but to
>> also try and prevent one user overwhelming the service for everyone else
>> - hence the dropping of idle connections and soft (and then hard) limits
>> on per-IP connections.
>> 
> 
> Can I ask a followup question to this: how many haproxy instances do you
> then run - one on each of your gateways, with keepalived to manage which is
> active?
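> 
> The arrangement I mean would look something like this on each gateway (a
> hypothetical minimal keepalived fragment - the interface, router id,
> priorities and VIP are all placeholders):

```
vrrp_script chk_haproxy {
    script "killall -0 haproxy"    # succeeds while an haproxy process exists
    interval 2
    weight -20                     # demote this node if the check fails
}

vrrp_instance RGW_VIP {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100                   # different on each gateway
    virtual_ipaddress {
        192.0.2.10/24
    }
    track_script {
        chk_haproxy
    }
}
```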
> 
> I ask because, since before I was involved with our ceph object store, it
> has been load-balanced between multiple rgw servers directly using
> bgp-ecmp. It doesn't sound like this is common practice in the ceph
> community, and I'm wondering what the pros and cons are.
> 
> The bgp-ecmp load balancing has the flaw that it's not truly fault
> tolerant, at least without additional checks to shut down the local quagga
> instance when rgw stops responding. It's only fault tolerant when an
> entire server goes down, which meets our original goal of rolling
> maintenance/updates, but not when a radosgw process goes unresponsive. In
> addition, I think we have always seen some background level of clients
> being sent "connection reset by peer" errors, which I have never tracked
> down within radosgw; I wonder if these might be masked by an haproxy
> frontend?
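> 
> For what it's worth, the additional check could be as small as a periodic
> probe that stops quagga when radosgw stops answering - a hypothetical
> sketch (the port, service name and the 200-on-anonymous-GET assumption
> are mine):

```shell
#!/bin/sh
# Hypothetical watchdog for the bgp-ecmp case: probe the local radosgw
# and, if it stops answering, stop quagga so this node's BGP route is
# withdrawn. Port and service name are assumptions, not from the thread.

RGW_URL="http://127.0.0.1:7480/"

# An anonymous GET against radosgw normally returns HTTP 200 (an empty
# ListAllMyBuckets); any other status, or a connect/read failure,
# counts as unhealthy.
rgw_healthy() {
    code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$1") || return 1
    [ "$code" = "200" ]
}

# Wire it up from cron or a systemd timer, e.g.:
#   rgw_healthy "$RGW_URL" || systemctl stop quagga
```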
> 
> The converse is that all client gateway traffic must generally pass through
> a single haproxy instance, while bgp-ecmp distributes the connections
> across all nodes. Perhaps haproxy is lightweight and efficient enough that
> this makes little difference to performance?
> 
> Graham
> _______________________________________________
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io


                
Chip Cox
Director, Sales  |  SoftIron
770.314.8300
c...@softiron.com



