Public bug reported:

https://0e01ca15cd4d54a0e684-4b29bd9ae378cf0db90ada93d04207ec.ssl.cf1.rackcdn.com/938106/4/check/neutron-functional/61f4695/testr_results.html
Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/plugins/ml2/drivers/ovn/mech_driver/ovsdb/test_ovsdb_monitor.py", line 112, in setUp
    self.chassis = self.add_fake_chassis('ovs-host1')
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/base.py", line 446, in add_fake_chassis
    other_config=other_config).execute(check_error=True)
                               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional-gate/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/command.py", line 49, in execute
    with self.api.transaction(check_error, log_errors, **kwargs) as t:
  File "/usr/lib/python3.12/contextlib.py", line 144, in __exit__
    next(self.gen)
  File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional-gate/lib/python3.12/site-packages/ovsdbapp/api.py", line 114, in transaction
    with self.create_transaction(
  File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional-gate/lib/python3.12/site-packages/ovsdbapp/api.py", line 71, in __exit__
    self.result = self.commit()
                  ^^^^^^^^^^^^^
  File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional-gate/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py", line 56, in commit
    raise exceptions.TimeoutException(
ovsdbapp.exceptions.TimeoutException: Commands [ChassisAddCommand(_result=<ovsdbapp.backend.ovs_idl.rowview.RowView object at 0x71deceeff110>, chassis=2af26162-7085-4746-9a60-b0337c6ab9af, encap_types=['geneve'], encap_ip=172.24.4.1, may_exist=False, columns={'external_ids': {}, 'hostname': 'ovs-host1', 'other_config': {'ovn-cms-options': 'availability-zones=ovn', 'ovn-bridge-mappings': ''}})] exceeded timeout 30 seconds, cause: Result queue is empty

The timestamp from the test log is:

2025-01-17 15:59:21.234 299796 ERROR ovsdbapp.backend.ovs_idl.command [None req-610afa3d-c07c-4b59-8806-8e0cedea05fe - 46f70361-ba71-4bd0-9769-3573fd227c4b - - - -] Error executing command (ChassisAddCommand): ovsdbapp.exceptions.TimeoutException: Commands [ChassisAddCommand(_result=<ovsdbapp.backend.ovs_idl.rowview.RowView object at 0x71deceeff110>, chassis=2af26162-7085-4746-9a60-b0337c6ab9af, encap_types=['geneve'], encap_ip=172.24.4.1, may_exist=False, columns={'external_ids': {}, 'hostname': 'ovs-host1', 'other_config': {'ovn-cms-options': 'availability-zones=ovn', 'ovn-bridge-mappings': ''}})] exceeded timeout 30 seconds, cause: Result queue is empty
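For context, the failing helper (add_fake_chassis in neutron's functional base) boils down to an ovsdbapp chassis_add transaction against the SB database. A minimal sketch of that path, assuming ovsdbapp's OVN_Southbound IDL API; the socket path and chassis name below are made up for illustration:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    # Connect to the OVN Southbound DB (socket path is illustrative).
    idl = connection.OvsdbIdl.from_server(
        'unix:/tmp/ovnsb_db.sock', 'OVN_Southbound')
    # The "exceeded timeout 30 seconds" in the traceback is this
    # per-connection transaction timeout: commit() raises
    # TimeoutException when no result arrives within it.
    conn = connection.Connection(idl=idl, timeout=30)
    sb_api = impl_idl.OvnSbApiIdlImpl(conn)

    # Equivalent of the ChassisAddCommand shown in the traceback.
    sb_api.chassis_add(
        'fake-chassis', ['geneve'], '172.24.4.1',
        hostname='ovs-host1',
        other_config={'ovn-cms-options': 'availability-zones=ovn',
                      'ovn-bridge-mappings': ''},
    ).execute(check_error=True)

In other words, the test side issues a single small transaction; the timeout means the server simply did not answer within 30 seconds.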
Around this time, the following messages can be found in the SB db server log:

2025-01-17T15:59:21.499Z|00097|poll_loop|DBG|wakeup due to 2414-ms timeout at ovsdb/ovsdb-server.c:400 (1% CPU usage)
2025-01-17T15:59:21.954Z|00098|timeval|WARN|Unreasonably long 25512ms poll interval (0ms user, 26ms system)
2025-01-17T15:59:21.971Z|00099|timeval|WARN|faults: 22 minor, 71 major
2025-01-17T15:59:21.978Z|00100|timeval|WARN|disk: 14328 reads, 8 writes
2025-01-17T15:59:21.984Z|00101|timeval|WARN|context switches: 70 voluntary, 57 involuntary
2025-01-17T15:59:21.990Z|00102|coverage|INFO|Event coverage, avg rate over last: 5 seconds, last minute, last hour, hash=758a1984:
2025-01-17T15:59:21.994Z|00103|coverage|INFO|hmap_pathological 0.0/sec 0.000/sec 0.0025/sec total: 9
2025-01-17T15:59:21.994Z|00104|coverage|INFO|hmap_expand 0.2/sec 0.450/sec 0.4164/sec total: 1499
2025-01-17T15:59:21.994Z|00105|coverage|INFO|lockfile_lock 0.0/sec 0.000/sec 0.0003/sec total: 1
2025-01-17T15:59:21.998Z|00106|coverage|INFO|poll_create_node 0.2/sec 1.417/sec 0.0483/sec total: 174
2025-01-17T15:59:22.004Z|00107|coverage|INFO|poll_zero_timeout 0.0/sec 0.000/sec 0.0017/sec total: 6
2025-01-17T15:59:22.004Z|00108|coverage|INFO|seq_change 0.0/sec 0.000/sec 0.0008/sec total: 3
2025-01-17T15:59:22.004Z|00109|coverage|INFO|pstream_open 0.0/sec 0.000/sec 0.0006/sec total: 2
2025-01-17T15:59:22.004Z|00110|coverage|INFO|util_xalloc 1.8/sec 18.900/sec 8.4275/sec total: 30339
2025-01-17T15:59:22.007Z|00111|coverage|INFO|14 events never hit
2025-01-17T15:59:22.061Z|00112|poll_loop|DBG|wakeup due to [POLLIN][POLLHUP] on fd 18 (/tmp/tmpsolc6f0_/ovnsb_db.sock<->) at lib/stream-fd.c:157 (0% CPU usage)
2025-01-17T15:59:22.061Z|00113|poll_loop|DBG|wakeup due to 0-ms timeout at unix#2 (0% CPU usage)
2025-01-17T15:59:22.061Z|00114|reconnect|DBG|unix#2: connection closed by peer
2025-01-17T15:59:22.061Z|00115|reconnect|DBG|unix#2: entering VOID
2025-01-17T15:59:22.066Z|00116|poll_loop|DBG|wakeup due to [POLLIN][POLLHUP] on fd 17 (/tmp/tmpsolc6f0_/ovnsb_db.sock<->) at lib/stream-fd.c:157 (0% CPU usage)
2025-01-17T15:59:22.066Z|00117|poll_loop|DBG|wakeup due to 0-ms timeout at unix#1 (0% CPU usage)

Is something wrong with ovsdb-server? The warning's own accounting (0ms user, 26ms system, 22 minor/71 major page faults, 14328 disk reads) suggests the process spent the ~25-second stall blocked on disk I/O rather than computing, which by itself nearly exhausts the 30-second transaction timeout. Not sure if it's a neutron issue, slow infra, or a bug in ovsdb-server.

** Affects: neutron
     Importance: Undecided
         Status: New

** Tags: functional-tests gate-failure ovs

https://bugs.launchpad.net/bugs/2095196

Title:
  functional: test_virtual_port_host_update failed due to ovsdb SB transaction timeout, "Unreasonably long 25512ms poll interval (0ms user, 26ms system)" in ovsdb-server log

Status in neutron:
  New
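When triaging recurring gate failures of this kind, it can help to scan the ovsdb-server log for timeval stall warnings and correlate them with test timestamps. A hypothetical helper sketch (the regex and the 10-second threshold are assumptions, not an existing neutron or OVS tool):

    import re
    import sys

    # Matches ovsdb-server warnings such as:
    #   2025-01-17T15:59:21.954Z|00098|timeval|WARN|Unreasonably long
    #   25512ms poll interval (0ms user, 26ms system)
    STALL_RE = re.compile(
        r'^(?P<ts>\S+)\|\d+\|timeval\|WARN\|'
        r'Unreasonably long (?P<ms>\d+)ms poll interval')

    def find_stalls(path, threshold_ms=10000):
        """Yield (timestamp, stall_ms) for stalls of at least threshold_ms."""
        with open(path) as f:
            for line in f:
                m = STALL_RE.match(line)
                if m and int(m.group('ms')) >= threshold_ms:
                    yield m.group('ts'), int(m.group('ms'))

    if __name__ == '__main__':
        for ts, ms in find_stalls(sys.argv[1]):
            print(f'{ts}: ovsdb-server stalled for {ms}ms')

Any stall approaching the 30-second ovsdbapp timeout is a likely culprit for transaction failures in the surrounding tests.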