I had reason to recreate this experiment, and here's what I found after
it completed...

NB db:

[root@oc-syd01-prod-compute-110 ~]# ovs-appctl \
    -t /usr/local/var/run/openvswitch/ovsdb-server.53752.ctl memory/show
cells:534698 monitors:1 sessions:17

SB db:

[root@oc-syd01-prod-compute-110 ~]# ovs-appctl \
    -t /usr/local/var/run/openvswitch/ovsdb-server.53754.ctl memory/show
backlog:563735228 cells:4140112 monitors:2 sessions:6
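
(For anyone wanting to repeat this, those figures come from pointing
ovs-appctl at each ovsdb-server's control socket; the PID embedded in
the socket name and the run directory will differ on other systems. A
rough sketch, assuming a source build installed under /usr/local; my
reading of the fields is in the comments:)

# Each ovsdb-server creates a control socket named after its PID.
for ctl in /usr/local/var/run/openvswitch/ovsdb-server.*.ctl; do
    echo "== $ctl =="
    # memory/show reports cells (database cells held in memory),
    # monitors, sessions (client connections), and, when nonzero,
    # backlog (bytes queued but not yet written to clients).
    ovs-appctl -t "$ctl" memory/show
done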

Ryan

"discuss" <discuss-boun...@openvswitch.org> wrote on 02/08/2016 03:19:58
PM:

> From: Ryan Moats/Omaha/IBM@IBMUS
> To: Andy Zhou <az...@ovn.org>
> Cc: discuss@openvswitch.org
> Date: 02/08/2016 03:20 PM
> Subject: Re: [ovs-discuss] Some more scaling test results...
> Sent by: "discuss" <discuss-boun...@openvswitch.org>
>
> Andy Zhou <az...@ovn.org> wrote on 02/08/2016 01:54:06 PM:
>
> > From: Andy Zhou <az...@ovn.org>
> > To: Ryan Moats/Omaha/IBM@IBMUS
> > Cc: discuss@openvswitch.org
> > Date: 02/08/2016 01:54 PM
> > Subject: Re: [ovs-discuss] Some more scaling test results...
> >
> > On Fri, Feb 5, 2016 at 5:26 PM, Ryan Moats <rmo...@us.ibm.com> wrote:
> > Today, I stood up a five node openstack cloud on machines with 56
> > cores and 256GB of memory and ran a scaling test to see if I could
> > stamp out 8000 copies of the following pattern in a single project
> > (tenant): n1 --- r1 --- n2 (in other words, create 8000 routers,
> > 16000 networks, 16000 subnets, and 32000 ports). Since both n1 and
> > n2 had subnets that were configured to use DHCP, the controller ended
> > up with 16000 namespaces and dnsmasq processes. The controller was set
> > up to use separate processes to handle the OVN NB DB, OVN SB DB, and
> > Open vSwitch DBs.
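> >
> > (A sketch of a single iteration, in case anyone wants to reproduce the
> > pattern; the CLI names and CIDRs here are illustrative rather than my
> > exact script. Each pass creates 1 router, 2 networks, 2 DHCP-enabled
> > subnets, and 4 ports (2 router interfaces plus 2 DHCP ports), which is
> > where the 32000 ports come from:)
> >
> > # illustrative single iteration; a real run needs unique names and
> > # CIDRs on each of the 8000 passes
> > neutron router-create r1
> > neutron net-create n1
> > neutron subnet-create --name n1-sub n1 10.0.1.0/24
> > neutron router-interface-add r1 n1-sub
> > neutron net-create n2
> > neutron subnet-create --name n2-sub n2 10.0.2.0/24
> > neutron router-interface-add r1 n2-sub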
> >
> > So, what happened?
> >
> > The neutron log (q-svc.log) showed zero OVSDB timeouts, which means
> > that the ovsdb-server process handling the OVN NB db could keep up
> > with the scale test. Looking at the controller node at the end of the
> > experiment, it was using about 70GB of memory, with the top twenty
> > memory consumers being:
> >
> > ovsdb-server process handling the OVN SB db at 25G
> > ovsdb-server process handling the vswitch DB at 2.7G
> > ovn-controller process at 879M
> > each of the 17 neutron-server processes at around 825M
> > (this totals up to slightly more than 42.5G)
> >
> > For those interested, the OVSDB file sizes on disk are 138M for
> > ovnsb.db, 14.9M for ovnnb.db, and 18.4M for conf.db.
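> >
> > (Those sizes came from listing the database directory, something like
> > the following; the path is an assumption based on a source build
> > installed under /usr/local:)
> >
> > ls -lh /usr/local/etc/openvswitch/*.db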
> >
> > I admit that this test didn't include the stress that putting a
> > bunch of ports onto a single network would create, but I still
> > believe that if one uses separate ovsdb-server processes, then the
> > long poles in the tent become the OVN SB database and the processes
> > that are driven by it.
> >
> > Have a great weekend,
> > Ryan
> >
> > Thanks for sharing.
> >
> > May I know how many connections the SB ovsdb-server hosts?  On a live
> > system, you can find out by typing:  "ovs-appctl -t ovsdb-server
> > memory/show"
>
> Unfortunately, the experiment has been torn down to allow others to
> run their own tests, so I can no longer provide that information...
>
_______________________________________________
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss
