Hi Willy,

Good day to you.
It has been a while since my last query, and I have another one: is HAProxy able to load-balance UDP traffic?

Best Regards,
Shehan Jayawardane
Head of Engineering
sheh...@nvision.lk
www.thryvz.com

________________________________
From: Willy Tarreau <w...@1wt.eu>
Sent: 04 October 2024 12:47
To: Shehan Jayawardane <sheh...@nvision.lk>
Cc: Joshua Turnbull <jos...@loadbalancer.org>; haproxy@formilux.org 
<haproxy@formilux.org>; Dev Ops <dev...@nvision.lk>; Sathiska Udayanga 
<sathi...@nvision.lk>
Subject: Re: HAproxy load balancing query

Hi Shehan,

On Fri, Oct 04, 2024 at 06:41:23AM +0000, Shehan Jayawardane wrote:
> Hi Josh,
>
> Nice. Thank you for the information.
> And we are going to deploy this in one of our critical production
> environments, where there will be around 7000 TPS. We plan to run
> HAProxy on a virtual machine. Can we know how much server resource we
> need for such a deployment?

That's always very hard to say, especially since TPS don't mean much these
days in terms of sizing:
  - if that's just HTTP requests per second over persistent connections,
    the smallest RaspberryPi is sufficient as long as the network link
    is not saturated

  - if you need to renew TCP connections for each request ("close mode"),
    then you may need a more robust system (RPi4B or above). But VMs are
    notoriously not great for creating/tearing down TCP connections, and
    you may observe significant differences between hypervisors and other
    VMs running on them. You *will* need to test.

  - if these are SSL and you set up / tear down SSL connections with each
    request, then 7k/s can be quite significant especially if rekeying is
    needed. In this case, for a modern x86 CPU, you can count on no more
    than 10k conn/s/core for resume or 2k conn/s/core for rekeying. VMs
    have little to no impact on SSL processing costs, but that doesn't
    remove the TCP setup/teardown costs. If you use SSL on both sides,
    just double your estimates (even if that's normally not exact), and
    stay away from OpenSSL 3.0.x.
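Using the per-core figures above, a quick back-of-envelope for the 7000/s case
(these are rough planning numbers from this thread, not benchmarks of any
specific machine):

```python
# Core-count estimate for 7000 SSL connections/s, using the rule-of-thumb
# figures quoted above: ~10k conn/s/core with session resumption,
# ~2k conn/s/core with full rekeying (assumptions, not measurements).
import math

TPS = 7000
RESUME_PER_CORE = 10_000
REKEY_PER_CORE = 2_000

cores_resume = math.ceil(TPS / RESUME_PER_CORE)   # 1 core
cores_rekey = math.ceil(TPS / REKEY_PER_CORE)     # 4 cores

# SSL on both sides: roughly double the estimate, as suggested above.
cores_rekey_both = math.ceil(2 * TPS / REKEY_PER_CORE)  # 7 cores

print(cores_resume, cores_rekey, cores_rekey_both)
```

So resumption keeps it on a single core, while full rekeying on both sides
already calls for a 7-8 core machine before any headroom is added.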

And the next point is that usually with a load balancer, you need to take
into account not the initial load but the long-term target. We almost
never see LBs whose load is fading away over time, quite the opposite. I
tend to consider that the system load at deployment time shouldn't be
higher than 25%. That leaves enough margin for traffic progression, and
will leave you some room to try new features, apply some rules to work
around temporary security issues etc.
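Applied to the numbers in this thread, the 25%-at-deployment rule works out as
(a sketch, assuming the 7000 TPS figure quoted above):

```python
# Size the LB so that day-one load uses no more than a quarter of capacity.
DEPLOY_LOAD_TPS = 7000          # assumed initial traffic from the thread
MAX_DEPLOY_UTILIZATION = 0.25   # rule of thumb from above

required_capacity_tps = DEPLOY_LOAD_TPS / MAX_DEPLOY_UTILIZATION
print(required_capacity_tps)    # size the machine for ~28000 TPS
```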

Finally, never under-estimate the total network traffic. Object sizes
tend to increase over time, especially on well-working sites where it's
common for people to feel at ease with large images and videos. Sometimes
you could just monitor your TPS without realizing that your LB has a
physical 1Gbps connection to the local network or ISP and that this sets a
hard limit (in practice you'll start to get negative feedback above 70%
average usage due to short peaks). You don't want to upgrade in an
emergency once it starts failing, because that means you'll have to do it after
failing in front of the largest possible number of witnesses, which is
never desirable! Just like for the rest, make sure that your initial
deployment doesn't use more than 25% of your absolute maximum capacity,
or make sure to have short-term plans to smoothly improve the situation.
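To make the 1Gbps / 70% point concrete, here is what that ceiling implies for
average response size at 7000 req/s (again a back-of-envelope sketch using the
numbers from this thread):

```python
# Practical throughput ceiling of a 1 Gbps link at the ~70% rule above,
# and the mean response size that 7000 req/s can sustain within it.
LINK_BPS = 1_000_000_000    # physical 1 Gbps link
PRACTICAL_FRACTION = 0.70   # negative feedback starts above ~70% average use
REQ_PER_S = 7000            # assumed traffic from the thread

usable_bps = LINK_BPS * PRACTICAL_FRACTION       # 700 Mbit/s usable
avg_object_bytes = usable_bps / REQ_PER_S / 8    # bits -> bytes per response
print(avg_object_bytes)                          # 12500.0, i.e. ~12.5 kB
```

In other words, objects averaging only ~12.5 kB are enough to hit the
practical limit of a 1Gbps link at that request rate, which is why growing
object sizes deserve as much monitoring as TPS.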

Good luck!
Willy
