On 5/23/23 15:59, Felix Hüttner via discuss wrote:
> Hi everyone,

Hi, Felix.

> 
> we are currently running an OVN deployment with 450 nodes. We run a 3-node 
> cluster for the northbound database and a 3-node cluster for the southbound 
> database.
> Between the southbound cluster and the ovn-controllers we have a layer of 24 
> ovsdb relays.
> The setup uses TLS for all connections; however, the TLS server side is 
> handled by a traefik reverse proxy to offload this from ovsdb-server.

The most important missing part of the system description is which
versions of OVS and OVN you are using in this setup.  If it's not the
latest 3.1 and 23.03, then it's hard to say what performance
improvements, if any, are actually needed.
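
For anyone reproducing a relay layer like the one described above, a
relay is typically started along the following lines.  This is only a
sketch; the hostnames, ports, and the choice of plain TCP towards
clients are illustrative, not taken from the original message:

  # Serve clients on a local port while mirroring the Southbound
  # database from the 3-node cluster.  The relay:<DB>:<remotes>
  # syntax is documented in ovsdb(7).
  ovsdb-server --remote=ptcp:16642 \
      relay:OVN_Southbound:ssl:sb1.example.com:6642,ssl:sb2.example.com:6642,ssl:sb3.example.com:6642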

> Northd and Neutron connect directly to the north- and southbound databases 
> without the relays.

One of the big annoyances is that Neutron connects to the
Southbound database at all.  There are some reasons to do that,
but ideally that should be avoided.  I know that in the past limiting
the number of metadata agents was one of the mitigation strategies
for scaling issues.  Also, why can't it connect to the relays?  There
shouldn't be too many transactions flowing towards the Southbound DB
from Neutron.
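
If connecting to the relays is an option, in an ML2/OVN deployment that
would roughly amount to pointing the southbound connection option at the
relay endpoints instead of the cluster members.  A sketch; the file
path, hostnames, and port are illustrative:

  # /etc/neutron/plugins/ml2/ml2_conf.ini
  [ovn]
  # Point Neutron at the relay layer instead of the SB cluster members.
  ovn_sb_connection = ssl:relay1.example.com:16642,ssl:relay2.example.com:16642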

> 
> We needed to increase various timeouts on the ovsdb-server and client side to 
> get this to a mostly stable state:
> * inactivity probes of 60 seconds (for all connections between ovsdb-server, 
> relay and clients)
> * cluster election time of 50 seconds
> 
> As long as none of the relays restarts, the environment is quite stable.
> However, we quite regularly see "Unreasonably long xxx ms poll interval" 
> messages, ranging from 1000 ms up to 40000 ms.
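
For reference, the timeouts listed above are typically configured along
these lines.  A sketch only: it assumes OVN's default control socket
paths, values in milliseconds, and a single pre-existing connection row:

  # Server side: inactivity probe on the Southbound listener.
  ovn-sbctl set connection . inactivity_probe=60000

  # Raft election timer.  Note: each call may at most double the
  # current value, so reaching 50000 ms from the 1000 ms default
  # takes several invocations.
  ovs-appctl -t /var/run/ovn/ovnsb_db.ctl \
      cluster/change-election-timer OVN_Southbound 50000

  # Client side: ovn-controller's probe interval towards the SB DB.
  ovs-vsctl set open . external_ids:ovn-remote-probe-interval=60000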

With the latest versions of OVS/OVN, the CPU usage on Southbound DB
servers without relays in our weekly 500-node ovn-heater runs
stays below 10% during the test phase.  No large poll intervals
are registered.

Do you have more details on the circumstances under which these
large poll intervals occur?
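
If it helps narrowing this down: when a long poll interval is logged,
the server's runtime counters can be sampled to see what it was busy
with (assuming the default control socket path):

  # Event counters for the poll loop and various subsystems.
  ovs-appctl -t /var/run/ovn/ovnsb_db.ctl coverage/show
  # Memory details, including the number of monitors and database cells.
  ovs-appctl -t /var/run/ovn/ovnsb_db.ctl memory/show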

> 
> If a large number of relays restart simultaneously, they can also bring the 
> ovsdb cluster down, as the poll interval exceeds the cluster election time.
> This happens even though the relays already sync the data from all 3 ovsdb 
> servers.

There was a performance issue with upgrades and simultaneous
reconnections, but it should be mostly fixed on the current master
branch, i.e. in the upcoming 3.2 release:
  https://patchwork.ozlabs.org/project/openvswitch/list/?series=348259&state=*

> 
> We would like to improve this significantly, to ensure on the one hand that 
> our ovsdb clusters survive unplanned load without issues and on the 
> other hand that poll intervals stay short.
> Short poll intervals would allow us to act on failovers of 
> distributed gateway ports and virtual ports in a timely 
> manner (ideally below 1 second).

These are good goals.  But are you sure they are not already
addressed by the most recent versions of OVS/OVN?

> 
> To do this we found the following solutions that were discussed in the past:
> 1. Implementing multithreading for ovsdb 
> https://patchwork.ozlabs.org/project/openvswitch/list/?series=&submitter=&state=*&q=multithreading&archive=&delegate=

We moved the compaction process to a separate thread in 3.0.
This partially addressed the multi-threading topic.  General
handling of client requests/updates in separate threads would
require significant changes to the internal architecture, AFAICT.
So, I'd like to avoid doing that unless necessary.  So far we
have been able to overcome almost all performance challenges
with simple algorithmic changes instead.
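
For completeness: since 3.0 the snapshotting happens off the main
thread, and an on-demand compaction can still be triggered manually,
e.g. for testing (control socket path assumed):

  # Ask the Southbound server to compact its database file now.
  ovs-appctl -t /var/run/ovn/ovnsb_db.ctl ovsdb-server/compact OVN_Southbound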

> 2. Changing the storage backend of OVN to an alternative (e.g. etcd) 
> https://mail.openvswitch.org/pipermail/ovs-discuss/2016-July/041733.html

There was an ovsdb-etcd project, but it didn't manage to provide
better performance in comparison with ovsdb-server.  So it was
ultimately abandoned: https://github.com/IBM/ovsdb-etcd

> 
> Both of these discussions are from 2016; not sure if more up-to-date ones 
> exist.
> 
> I would like to ask whether there are existing discussions on scaling 
> ovsdb further/faster?

This again comes down to the question of which versions you're using.  I'm
currently not aware of any major performance issues for ovsdb-server
on the most recent code, besides conditional monitoring, which is
not entirely the OVSDB server's issue.  And it is also likely to become
a bit better in 3.2:
  https://patchwork.ozlabs.org/project/openvswitch/patch/20230518121425.550048-1-i.maxim...@ovn.org/
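
For context: with conditional monitoring, clients such as ovn-controller
subscribe with "monitor_cond" and only receive rows matching their
conditions, which the server has to evaluate per client.  Roughly, a
request looks like the following; the table, column, and UUID here are
illustrative only:

  {"id": 1,
   "method": "monitor_cond",
   "params": ["OVN_Southbound",
              ["monid", "OVN_Southbound"],
              {"Port_Binding":
                 [{"where": [["chassis", "==",
                    ["uuid", "8f1b7d0c-2e4a-4c33-9f5e-6a1b2c3d4e5f"]]]}]}]}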

> 
> From my perspective, whatever such a solution might be, it would no longer 
> require relays and would allow the ovsdb servers to handle load gracefully.
> I personally think that multithreading for ovsdb sounds quite promising, as 
> it would allow us to separate the raft/cluster communication from the 
> client connections.
> This should allow us to keep the cluster healthy even under significant 
> pressure from clients.

Again, good goals.  I'm just not sure whether we actually need to do
something or whether they are already achievable with the most recent code.

I understand that testing on prod is not an option, so it's unlikely
we'll have an accurate test.  But maybe you can participate in the
initiative [1] to create ovn-heater OpenStack scenarios that
might be close to the workloads you have?  This way upstream will be able
to test your use cases, or at least something similar.

Most of our current efforts are focused on the ovn-kubernetes use case,
because we don't have many details on what high-scale OpenStack
deployments look like.

[1] https://mail.openvswitch.org/pipermail/ovs-dev/2023-May/404488.html

Best regards, Ilya Maximets.

> 
> Thank you
> 
> --
> Felix Huettner
