As per the comment above, I will close this as Fix Released (the patch noted above) and create an RFE bug to track the further improvements needed.
I will also create an RFE bug to provide an option to run neutron agents in a "profiled" mode.

** Changed in: neutron
       Status: Triaged => Fix Released

https://bugs.launchpad.net/bugs/1492456

Title:
  cProfile - fix Security Groups hotfunctions

Status in neutron:
  Fix Released

Bug description:
  I used cProfile to profile neutron-ovs-agent (from neutron kilo
  2015.1.0) as VMs are provisioned (see the code sample below to
  reproduce). A couple of functions in IptablesManager scale poorly
  with the number of VMs (_modify_rules and its callee
  _find_last_entry): as the number of existing VMs doubles, the time
  spent in these functions to provision 10 new VMs also roughly
  doubles.

  While we wait for the new iptables firewall driver
  (https://blueprints.launchpad.net/neutron/+spec/new-iptables-driver),
  can we improve the performance of the current iptables firewall in
  those two functions, which do a lot of string processing on iptables
  rule strings while checking for duplicates? (A rough sketch of one
  possible direction is included at the end of this report.)

  Current: #VMs: 20, # iptables rules: 657, provision 10 new VMs

    ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
        60    0.143    0.002    3.979    0.066  iptables_manager.py:511(_modify_rules)
     25989    2.752    0.000    3.332    0.000  iptables_manager.py:504(_find_last_entry)

    Cumulative time spent in _find_last_entry: 3.3 sec

  Current: #VMs: 40, # iptables rules: 1277, provision 10 new VMs

        65    0.220    0.003    7.974    0.123  iptables_manager.py:511(_modify_rules)
     38891    5.782    0.000    6.986    0.000  iptables_manager.py:504(_find_last_entry)

    Cumulative time spent in _find_last_entry: 6.9 sec

  Current: #VMs: 80, # iptables rules: 2517, provision 10 new VMs

        30    0.274    0.009   20.496    0.683  iptables_manager.py:511(_modify_rules)
     43862   15.920    0.000   19.292    0.000  iptables_manager.py:504(_find_last_entry)

    Cumulative time spent in _find_last_entry: 19.2 sec

  Current: #VMs: 160, # iptables rules: 4997, provision 10 new VMs

        20    0.375    0.019   49.255    2.463  iptables_manager.py:511(_modify_rules)
     56478   39.275    0.001   47.629    0.001  iptables_manager.py:504(_find_last_entry)

    Cumulative time spent in _find_last_entry: 47.6 sec

  --------------------
  To Reproduce:

  This is one way to control the start/stop of profiling based on the
  presence of a file (/tmp/cprof) in the file system. Make the
  following change to ovs_neutron_agent.py to enable/disable cProfile
  for a given scenario:

    import cProfile
    import os.path

    # Module-level profiler state shared by toggle_cprofile().
    pr_enabled = False
    pr = None

  In OVSNeutronAgent, add the method:

    def toggle_cprofile(self):
        """Profile while /tmp/cprof exists; dump stats once it is removed."""
        global pr, pr_enabled
        start = False
        fname = "vm.profile"
        try:
            if os.path.isfile("/tmp/cprof"):
                start = True
        except IOError as e:
            LOG.warn("Error %s", e.strerror)
        if start and not pr_enabled:
            # Marker file appeared: begin collecting profile data.
            pr = cProfile.Profile()
            pr.enable()
            pr_enabled = True
            LOG.warn("enabled cprofile")
        if not start and pr_enabled:
            # Marker file removed: stop profiling and write /tmp/vm.profile.
            pr.disable()
            pr.create_stats()
            pr.dump_stats("/tmp/%s" % fname)
            pr_enabled = False
            LOG.warn("disabled cprofile")

  In the polling loop, call:

    self.toggle_cprofile()
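  With the change above, profiling is started by creating the marker
  file (e.g. touch /tmp/cprof) and stopped, with the stats dumped to
  /tmp/vm.profile, by removing it. The dump can then be inspected
  offline with the standard pstats module. A minimal sketch follows;
  the path and the row count shown are just examples, not part of the
  patch above:

    import pstats

    # Load the dump written by toggle_cprofile() and list the most
    # expensive calls by cumulative time.
    stats = pstats.Stats("/tmp/vm.profile")
    stats.sort_stats("cumulative").print_stats(20)

    # Restrict the report to the iptables manager hot spots discussed above.
    stats.print_stats("iptables_manager")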
  This is another way to run cProfile; however, there is no way to
  control the start/stop of profiling here, and the profile also
  includes the agent's initialization. Run neutron-openvswitch-agent
  under cProfile as follows:

    sudo -u neutron bash -c "/usr/bin/python -m cProfile -o /tmp/vm_1.profile /usr/bin/neutron-openvswitch-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini --log-file /var/log/neutron/openvswitch-agent.log"
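  For illustration of the kind of improvement asked about earlier
  (avoiding a re-scan of all existing rules for every duplicate check,
  which is roughly what the _find_last_entry hot spot spends its time
  on per the description above), here is a minimal, hypothetical
  sketch. The function name dedup_rules and the sample chain names are
  made up for this example, and this is not the patch that was actually
  merged:

    # Hypothetical sketch only -- not the merged patch. Idea: index the
    # existing rules in a set once, then answer each duplicate check in
    # constant time instead of re-walking the rule list per new rule.

    def dedup_rules(existing_lines, new_rules):
        """Return the entries of new_rules not already in existing_lines."""
        # Normalize whitespace once so formatting differences do not
        # defeat the duplicate check.
        seen = set(" ".join(line.split()) for line in existing_lines)
        kept = []
        for rule in new_rules:
            normalized = " ".join(rule.split())
            if normalized not in seen:
                seen.add(normalized)
                kept.append(rule)
        return kept

    if __name__ == "__main__":
        existing = [
            "-A neutron-openvswi-i1234abcd-5 -j RETURN",
            "-A neutron-openvswi-sg-fallback -j DROP",
        ]
        candidates = [
            "-A neutron-openvswi-i1234abcd-5 -j RETURN",   # duplicate, dropped
            "-A neutron-openvswi-o1234abcd-5 -j RETURN",   # new, kept
        ]
        print(dedup_rules(existing, candidates))
        # ['-A neutron-openvswi-o1234abcd-5 -j RETURN']

  Membership in a set of normalized rule strings is a constant-time
  check, whereas re-walking the rule list for every rule being added
  grows with the number of rules, which is consistent with the growth
  seen in the profile output above.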