Some new occurrences after the fixes merged, so reopening to check further
https://8f5091142d0d93e54bb3-66f76d6fb4c84b410723fddf17d0dbe7.ssl.cf5.rackcdn.com/periodic/opendev.org/openstack/neutron/master/neutron-ovn-tempest-mariadb-full/933107e/testr_results.html
https://32e6a1c8c02fba10617a-55d9f2ceb0
Public bug reported:
With a couple of recent fixes since the WSGI switch we are still seeing some
random failures; this bug is to track fixes for these remaining issues.
Example failures:-
1) test_router_interface_port_update_with_fixed_ip
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zu
Public bug reported:
When the nova endpoint for the configured endpoint_type (public/internal/admin)
does not exist, the following traceback is raised:-
2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova [-] Failed to notify
nova on events: [{'name': 'network-changed', 'server_uuid':
'3c634df2-eb78-4f49-bb01-ae1c54
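For reference, a minimal sketch (not the actual neutron.notifiers.nova code) of
how such an endpoint lookup can be guarded with keystoneauth so a missing
endpoint for the configured endpoint_type is logged instead of raising; the
function name and log message are illustrative assumptions:

import logging

from keystoneauth1 import exceptions as ks_exc

LOG = logging.getLogger(__name__)


def find_nova_endpoint(session, endpoint_type='public', region=None):
    """Return the nova endpoint URL, or None if the catalog has no match."""
    try:
        return session.get_endpoint(service_type='compute',
                                    interface=endpoint_type,
                                    region_name=region)
    except ks_exc.EndpointNotFound:
        # No nova endpoint registered for this interface/region: log and
        # skip the notification instead of failing with a traceback.
        LOG.warning("No nova endpoint of type %s found, skipping "
                    "server event notification", endpoint_type)
        return None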
Still seeing it, latest failure:-
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_ae5/periodic/opendev.org/openstack/neutron/master/neutron-ovn-tempest-mariadb-full/ae590b4/testr_results.html
2024-11-25 03:28:51,737 73074 WARNING [urllib3.connectionpoo
** Changed in: neutron
Status: New => Fix Released
--
https://bugs.launchpad.net/bugs/2083482
Title:
"neutron-fullstack-with-uwsgi-fips" failing in stable releases (Ze
Public bug reported:
Fails like:-
ft1.3:
neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_ovn_db_sync.TestOvnNbSyncOverSsl.test_ovn_nb_sync_off
testtools.testresult.real._StringException:
Traceback (most recent call last):
File "/home/zuul/src/opendev.org/openstack/neutro
Public bug reported:
Originally noticed in a TripleO job[1]; after enabling the log service
plugin in devstack, a similar error is seen in the neutron service log.
The following traceback is seen:-
ERROR neutron_lib.callbacks.manager [None
req-131d215c-1d03-48ce-a16e-28175c0f58ba
tempest-DefaultSnatToE
Seems it's fixed with
https://github.com/openstack/neutron/commit/7f063223553a18345891bf42e88989edb67038e7,
no longer seeing the issue in the TripleO job:-
https://d0e3fd572414115f797b-39524de8c5a1fb89d206195b6f692473.ssl.cf5.rackcdn.com/808056/1/check/tripleo-
ci-
centos-8-standalone/6cb888b/logs/underclo
Public bug reported:
The job is failing consistently with the below error since
https://review.opendev.org/c/openstack/devstack/+/806858:-
2021-11-26 05:58:40.377912 | controller | +
functions-common:test_with_retry:2339: timeout 60 sh -c 'while ! test -e
/usr/local/var/run/openvswitch/ovn-northd.pi
** Changed in: neutron
Status: In Progress => Fix Released
--
https://bugs.launchpad.net/bugs/1952393
Title:
[OVN] neutron-tempest-plugin-scenario-ovn broken with "ovn
Seems you are following the wrong documentation for CentOS Stream. You need to
refer to https://docs.openstack.org/neutron/xena/install/install-rdo.html
instead of https://docs.openstack.org/neutron/xena/install/install-obs.html;
the latter is for openSUSE. Marking the bug as invalid.
https://docs.opensta
Public bug reported:
When deployed with octavia-ovn-provider using the below local.conf,
loadbalancer create (openstack loadbalancer create --vip-network-id
public --provider ovn) goes into ERROR state.
From o-api logs:-
ERROR ovn_octavia_provider.helper Traceback (most recent call last):
ERROR ovn_oc
Public bug reported:
Neutron fullstack and functional jobs are timing out randomly across all
branches; there are only a few such failures though, 1 or 2 per week.
Build references:-
-
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-functional-with-uwsgi&result=TIMED_OUT&skip=0
-
htt
Public bug reported:
Seen multiple similar occurrences in stable/wallaby patches, where tempest tests
fail with SSH-to-VM timeouts; some examples:-
-
https://cfaa2d1e4f6a936642aa-ae5561c9d080274a217713c4553af257.ssl.cf5.rackcdn.com/824022/2/check/neutron-tempest-plugin-scenario-openvswitch-wal
It's happening because the master variant of the openvswitch jobs is running in
the stable/xena branch and that feature is only available in the master branch.
This needs to be fixed by switching to the xena jobs in the stable/xena branch.
** Affects: neutron
Importance: Undecided
Assignee: yatin (yatin
Closing it as neutron-ovn-tempest-slow has moved to the periodic pipeline in the
master branch and is no longer seeing SSH failures; if it happens again this can
be reopened or a new issue can be created.
** Changed in: neutron
Status: Confirmed => Fix Released
--
without reintroducing the previous
bug:-
openstack subnet create --ip-version 6 --ipv6-ra-mode slaac --ipv6-address-mode
slaac --use-default-subnet-pool --network ipv6-pd --gateway :: ipv6-pd-1
** Affects: neutron
Importance: Undecided
Assignee: yatin (yatinkarel)
Status: New
Closing it as the workaround is in place, we have not seen recent failures in
this test, and the test code has a NOTE added for future cleanup. If we see the
failure again the bug can be reopened.
** Changed in: neutron
Status: Confirmed => Fix Released
--
Closing it as we are not seeing this issue recently since the timeout was
increased long back.
** Changed in: neutron
Status: In Progress => Fix Released
--
https://bugs.launchpad.net/bug
It's no longer an issue now as I see OVS 2.16 in the test and the same is
compiled in the job, so closing the bug.
OVS_BRANCH=v2.16.0
2022-02-25T11:12:50.733Z|3|ovsdb_server|INFO|ovsdb-server (Open vSwitch)
2.16.0
https://91ac997443c049736e18-7ff9e89a9bc5ab4bd512192457a69ff2.ssl.cf1.rackcdn.com/828687/
Fixed in tempest, closing it.
** Changed in: neutron
Status: In Progress => Fix Released
--
https://bugs.launchpad.net/bugs/1960022
Title:
neutron-tempest-with-uwsgi
The job is much more stable now[1], with a single failure since the patch[2]
merged, and that in a different test. Closing for now; if we see the issue
again, it can be reopened.
[1]
https://zuul.openstack.org/builds?job_name=neutron-fullstack&branch=stable%2Fwallaby&skip=0
[2] https://review.opendev.org/c/openstac
*** This bug is a duplicate of bug 1904117 ***
https://bugs.launchpad.net/bugs/1904117
** This bug is no longer a duplicate of bug 1885898
test connectivity through 2 routers fails in
neutron-ovn-tempest-full-multinode-ovs-master job
** This bug has been marked a duplicate of bug 1904117
** Changed in: neutron
Status: In Progress => Fix Released
--
https://bugs.launchpad.net/bugs/1874447
Title:
[OVN] Tempest test
neutron_tempest_plugin.scenario.test
Public bug reported:
This is failing after
https://review.opendev.org/c/openstack/neutron/+/751110, detected in a
Packstack job; it fails with the below traceback:-
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource Traceback (most
recent call last):
2020-12-01 13:05:53.257 85629 ERROR neutr
Public bug reported:
The job (neutron-ovn-tempest-with-uwsgi-loki) running the neutron-loki service[1]
fails tempest tests randomly with tracebacks like the below:-
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]: File
"/usr/
ron&branch=master&skip=0
[2] https://review.opendev.org/c/openstack/requirements/+/880738
[3] https://review.opendev.org/c/openstack/tooz/+/879930
[4] https://review.opendev.org/c/openstack/governance/+/872232
** Affects: neutron
Importance: Critical
Assignee: yatin (yatinkare
Public bug reported:
Fails as below:-
2023-04-27 12:36:47.535483 | controller | ++
functions-common:apt_get_update:1155 : timeout 300 sh -c 'while ! sudo
http_proxy= https_proxy= no_proxy= apt-get update; do sleep 30; done'
2023-04-27 12:36:50.877357 | controller | Err:1
https://mirror.c
Public bug reported:
With the ovsdbapp==2.3.0 [1] release the functional test
neutron.tests.functional.agent.test_ovs_lib.OVSBridgeTestCase.test_cascading_del_in_txn
fails consistently[2] as below:-
ft1.7:
neutron.tests.functional.agent.test_ovs_lib.OVSBridgeTestCase.test_cascading_del_in_txn
testtools.
Public bug reported:
Seen twice recently so far:-
-
https://a78793e982809689fe25-25fa16d377ec97c08c4e6ce3af683bd9.ssl.cf5.rackcdn.com/881232/1/check/neutron-tempest-plugin-fwaas/b0730f9/testr_results.html
-
https://53a7c53d508ecea7485c-f8ccc2b7c32dd8ba5caab7dc1c36a741.ssl.cf5.rackcdn.com/88123
We got an update[1] from the Vexxhost team and the vexxhost node provider is now
re-enabled[2].
Since it was re-enabled we are not seeing the issue. So far the jobs have run on
3 previously impacted nodes and are passing, so the issue can be considered
resolved. Closing the bug.
[1] We have performed some optimizati
Public bug reported:
For the last couple of days only a few tests (just 6) have been running in the
neutron-fullstack-with-uwsgi job.
Example:-
https://d16311159baa9c9fc692-58e8a805a242f8a07eac2fd1c3f6b11b.ssl.cf1.rackcdn.com/880867/6/gate/neutron-
fullstack-with-uwsgi/6b4c3ba/testr_results.html
Buil
Public bug reported:
The issue is noticed in the RDO openstack-neutron package build[1]; the package
build fails as unit tests fail randomly with the below traceback:-
DEBUG:
neutron.tests.unit.services.trunk.test_utils.UtilsTestCase.test_is_driver_compatible_multiple_drivers
DEBUG:
-
Seen it today in
https://c83b20527acf2b0f8494-4a0455790e56cb733d68b35ced7c28e7.ssl.cf5.rackcdn.com/886250/2/check/nova-
ceph-multistore/2231918/testr_results.html
** Changed in: nova
Status: Expired => New
--
Public bug reported:
Broken since https://review.opendev.org/c/openstack/tempest/+/887237
merged, as neutron slow jobs inherit from tempest-slow-py3. These fail
with ERROR: unknown environment 'slow'
In releases before Xena tempest is pinned and doesn't have the patch[1]
which added the slow tox env. T
There were many fixes related to the reported issue in neutron and OVS since
this bug report; some that I quickly caught are:-
-
https://patchwork.ozlabs.org/project/openvswitch/patch/20220819230810.2626573-1-i.maxim...@ovn.org/
- https://review.opendev.org/c/openstack/ovsdbapp/+/856200
Fix proposed https://review.opendev.org/c/openstack/devstack/+/888906
** Also affects: neutron
Importance: Undecided
Status: New
--
https://bugs.launchpad.net/bugs/20
Public bug reported:
Can be reproduced by just running:-
tox -epy3 -- test_port_deletion_prevention
or run any of the below tests individually:-
neutron.tests.unit.extensions.test_l3.L3NatDBSepTestCase.test_port_deletion_prevention_handles_missing_port
neutron.tests.unit.extensions.test_extraroute
Public bug reported:
Seen many occurrences recently, fails as below:-
Traceback (most recent call last):
File "/opt/stack/tempest/tempest/api/compute/servers/test_server_actions.py",
line 259, in test_reboot_server_hard
self._test_reboot_server('HARD')
File "/opt/stack/tempest/tempest/ap
Closed for tempest and neutron, as the regression is fixed in nova with
https://review.opendev.org/c/openstack/nova/+/893502; jobs are back to
green.
** Changed in: neutron
Status: Confirmed => Invalid
** Changed in: tempest
Status: New => Invalid
--
Public bug reported:
The job runs with the latest alembic/sqlalchemy commits and is broken by a
recent alembic commit[1].
Test
neutron.tests.unit.db.test_migration.TestCli.test_autogen_process_directives
fails as below:-
ft1.27:
neutron.tests.unit.db.test_migration.TestCli.test_autogen_process_dir
empest-ipv6-only-ovs-master&job_name=neutron-ovn-tempest-ovs-master-
centos-9-stream&project=openstack%2Fneutron&skip=0
Broken with https://github.com/ovn-
org/ovn/commit/558da0cd21ad0172405f7d93c5d0e7533edbf653
Need to update OVS_BRANCH in jobs to fix it.
** Affects: neutron
Imp
Hi Alex,
<< Can someone take a look why the above patch
https://review.opendev.org/c/openstack/kolla/+/761182 mentioned here has
been excluded from the neutron image?
It would have been just missed; since the Train release TripleO builds
container images natively and does not use kolla. You can propose a
The required patch merged long back and the jobs are green, closing it:-
*
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-ovs-grenade-multinode-skip-level&skip=0
*
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-ovn-grenade-multinode-skip-level&skip=0
** Changed in: neutron
Fixed with https://review.opendev.org/c/openstack/nova/+/868419
** Changed in: nova
Status: In Progress => Fix Released
--
https://bugs.launchpad.net/bu
Public bug reported:
It started failing[1] since the job switched to ubuntu-jammy[2].
Fails as below:-
2023-09-13 16:46:18.124882 | TASK [tobiko-tox : run sanity test cases before
creating resources]
2023-09-13 16:46:19.463567 | controller | neutron_sanity create:
/home/zuul/src/opendev.org/x/t
Public bug reported:
Tests fail while running ebtables(['-D', chain] + rule.split()) with:-
2023-10-05 12:09:19.307 41358 ERROR neutron.agent.linux.utils [None
req-defd197a-c4e2-4761-a4cc-cc960a3ff71a - - - - - -] Exit code: 4; Cmd: ['ip',
'netns', 'exec', 'test-b58b5cf9-5018-4801-aacb-8b00fae3
Public bug reported:
Seen a couple of occurrences across different releases:-
Failures:-
- https://zuul.opendev.org/t/openstack/build/30c625cd86aa40e6b6252689a7e88910
neutron-tempest-plugin-fwaas-2023-1
Traceback (most recent call last):
File
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site
%2Fyoga&branch=stable%2Fzed
** Affects: neutron
Importance: Medium
Assignee: yatin (yatinkarel)
Status: Triaged
--
https://bugs.launchpad.net/bugs/2039066
Title:
Public bug reported:
The functional test fails randomly as:-
ft1.3:
neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_maintenance.TestMaintenance.test_port
testtools.testresult.real._StringException:
Traceback (most recent call last):
File "/home/zuul/src/opendev.org/opens
Public bug reported:
tempest.api.compute.servers.test_server_actions.ServerActionsTestOtherA.test_resize_volume_backed_server_confirm
fails randomly with:-
Traceback (most recent call last):
File "/opt/stack/tempest/tempest/lib/common/ssh.py", line 136, in
_get_ssh_connection
ssh.connect(s
Public bug reported:
After deleting an agent, there is a stale entry for the host in the
'ml2_vxlan_endpoints' table. A use case is node scale-down: an agent is
deleted, but the host entry is not removed from ml2_vxlan_endpoints.
I have not checked other topologies but the same should apply to other
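As an illustration only, a minimal sketch that lists such stale rows, assuming
the usual ml2_vxlan_endpoints columns (host, ip_address, udp_port) and the
agents table; the database URL is a placeholder:

from sqlalchemy import create_engine, text

# Placeholder connection URL; point it at the real neutron database.
engine = create_engine("mysql+pymysql://neutron:secret@127.0.0.1/neutron")

# Endpoint rows whose host no longer has any registered agent.
STALE_ENDPOINTS = text("""
    SELECT e.host, e.ip_address, e.udp_port
      FROM ml2_vxlan_endpoints AS e
     WHERE e.host IS NOT NULL
       AND e.host NOT IN (SELECT DISTINCT host FROM agents)
""")

with engine.connect() as conn:
    for host, ip_address, udp_port in conn.execute(STALE_ENDPOINTS):
        print(f"stale vxlan endpoint: host={host} ip={ip_address} port={udp_port}")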
Public bug reported:
Broken since https://github.com/sqlalchemy/sqlalchemy/commit/e93a5e89.
Fails like:-
2023-11-07 10:51:53.328284 | ubuntu-jammy | Failed to import test module:
neutron.tests.unit.agent.common.test_ovs_lib
2023-11-07 10:51:53.328310 | ubuntu-jammy | Traceback (most recent call
Public bug reported:
The neutron-ovn-tempest-with-sqlalchemy-master and
neutron-ovs-tempest-with-sqlalchemy-master jobs are expected to install
sqlalchemy and alembic from the main branch as defined in required-projects,
but they install released versions instead:-
required-projects:
- name: gi
Public bug reported:
These jobs run in the periodic pipeline and are broken[1]; they inherit from
the OVS master jobs instead of the stable variant. This needs to be
fixed.
[1]
https://zuul.openstack.org/builds?job_name=neutron-linuxbridge-tempest-plugin-scenario-nftables
** Affects: neutron
I
Public bug reported:
Since https://review.opendev.org/c/openstack/neutron-lib/+/895940 the job fails
as:-
ft1.2:
neutron.tests.unit.plugins.ml2.drivers.agent.test_capabilities.CapabilitiesTest.test_register
testtools.testresult.real._StringException:
Traceback (most recent call last):
File "/h
Public bug reported:
These jobs are broken as they are running on Focal and running tests
which shouldn't run on Focal as per
https://review.opendev.org/c/openstack/neutron/+/871982 (included in
Antelope+)
Example failure
https://d5a9a78dc7c742990ec8-242e23f01f4a3c50d38acf4a24e0c600.ssl.cf1.rackc
Public bug reported:
Since 18th Nov the nftables jobs have been failing with RETRY_LIMIT and no
logs are available. These jobs fail quickly and the last task seen running in
the console is "Preparing job workspace".
This looks like a regression from zuul change
https://review.opendev.org/c/zuul/zuul/+/900489.
@fungi chec
Thanks folks, the jobs are back to green now:-
https://zuul.openstack.org/builds?job_name=neutron-linuxbridge-tempest-
plugin-scenario-nftables&job_name=neutron-ovs-tempest-plugin-scenario-
iptables_hybrid-
nftables&project=openstack%2Fneutron&branch=stable%2Fxena&skip=0
** Changed in: neutron
S
Public bug reported:
Seen a couple of hits recently; tests fail as:-
Traceback (most recent call last):
File "/opt/stack/tempest/tempest/lib/common/ssh.py", line 136, in
_get_ssh_connection
ssh.connect(self.host, port=self.port, username=self.username,
File
"/opt/stack/tempest/.tox/tempes
Public bug reported:
Jobs fail as:-
2023-11-30 10:17:01.182 | +
/opt/stack/neutron-tempest-plugin/devstack/functions.sh:overridden_upload_image:36
: wget --progress=dot:giga -c
https://cloud-images.ubuntu.com/minimal/releases/focal/release/ubuntu-20.04-minimal-cloudimg-amd64.img
-O /opt/sta
Public bug reported:
test_unshelve_to_specific_host fails randomly at
_shelve_offload_then_unshelve_to_host otherhost like:-
Traceback (most recent call last):
File
"/opt/stack/tempest/tempest/api/compute/admin/test_servers_on_multinodes.py",
line 178, in test_unshelve_to_specific_host
se
Still seeing a similar issue as below, reopening the bug
-
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_ed8/901717/1/gate/neutron-
functional-with-uwsgi/ed8e900/testr_results.html
Fails as:-
neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFou
It's not an issue in neutron but was just a tracking bug for the issue in our
jobs.
https://review.opendev.org/q/I7163aea4d121cb27620e4f2a083a543abfc286bf handles
the random issue. So closing this for neutron now.
** Changed in: neutron
Status: New => Invalid
--
*** This bug is a duplicate of bug 1958643 ***
https://bugs.launchpad.net/bugs/1958643
Thanks @Stanislav for the confirmation, will close it as Duplicate.
** This bug has been marked a duplicate of bug 1958643
Unicast RA messages for a VM are filtered out by ovs rules
--
Public bug reported:
Fails as:-
2024-01-16 03:02:53.651183 | controller | gcc -DHAVE_CONFIG_H -I. -I
./include -I ./include -I ./ovn -I ./include -I ./lib -I ./lib -I
/opt/stack/ovs/include -I /opt/stack/ovs/include -I /opt/stack/ovs/lib -I
/opt/stack/ovs/lib -I /opt/stack/ovs -I /opt/stack/
Public bug reported:
Example failure:-
https://2f4a32f753edcd6fd518-38c49964a79149719549049b602122d6.ssl.cf5.rackcdn.com/906628/1/experimental/neutron-
ovn-tempest-slow/1b35fb8/testr_results.html
Fails as:-
Traceback (most recent call last):
File "/opt/stack/tempest/tempest/scenario/test_securi
/neutron/master/neutron-
ovn-tempest-ipv6-only-ovs-master/3c5404e/job-output.txt
Failing since
https://github.com/ovn-org/ovn/commit/dc34b4d9f7f3efb4e7547f9850f6086a7e1a2338;
need to update OVS_BRANCH.
** Affects: neutron
Importance: High
Assignee: yatin (yatinkarel)
Status: Tri
Public bug reported:
It was raised while evaluating ovs-vswitchd debug logs in CI[1] that we
should also document in the neutron docs the steps for enabling/disabling
debug logs for ovs-vswitchd.
[1] https://review.opendev.org/c/openstack/neutron/+/907037
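For context, the toggle those docs would need to cover is the ovs-appctl vlog
interface; below is a small illustrative helper (the function name is mine, not
from neutron) that switches the ovs-vswitchd log level:

import subprocess


def set_ovs_vswitchd_log_level(level="dbg"):
    """Set ovs-vswitchd logging to 'dbg' (enable) or back to 'info' (disable)."""
    subprocess.run(
        ["ovs-appctl", "-t", "ovs-vswitchd", "vlog/set", level],
        check=True,
    )

For example, call set_ovs_vswitchd_log_level("dbg") before reproducing an issue
and set_ovs_vswitchd_log_level("info") afterwards.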
** Affects: neutron
Importance: Undecide
Public bug reported:
Fails like:-
2024-02-06 02:28:34.618579 | controller | ++
inc/python:_setup_package_with_constraints_edit:400 : cd /opt/stack/sqlalchemy
2024-02-06 02:28:34.621804 | controller | ++
inc/python:_setup_package_with_constraints_edit:400 : pwd
2024-02-06 02:28:34.624859 | co
Public bug reported:
A couple of jobs are failing in the unmaintained/yoga periodic pipeline.
https://zuul.openstack.org/buildsets?project=openstack%2Fneutron&branch=unmaintained%2Fyoga&branch=stable%2Fyoga&pipeline=periodic&skip=0
Before switching to unmaintained/yoga the jobs were green:-
https://zuul.opens
Public bug reported:
The job is broken in stable/zed and stable/2023.1, where it's running on
Ubuntu Focal since patch https://review.opendev.org/c/x/tobiko/+/910589.
Fails like:-
2024-03-12 02:33:20.806847 | controller | interpreter =
self.creator.interpreter
2024-03-12 02:33:20.806858 | cont
Public bug reported:
Fails like:-
2024-03-17 03:08:11.011222 | controller | ++
/opt/stack/devstack-plugin-tobiko/devstack/plugin.sh:install_tobiko_deps:14 :
pip_install 'tox>=4.13'
2024-03-17 03:08:11.036778 | controller | Using python 3.10 to install tox>=4.13
2024-03-17 03:08:11.040275 | con
Public bug reported:
One of the tests fails like:-
testtools.testresult.real._StringException: Traceback (most recent call last):
File
"/home/zuul/src/opendev.org/x/tobiko/tobiko/tests/scenario/neutron/test_security_groups.py",
line 227, in test_security_group_stateful_to_stateless_switch
s
l.openstack.org/builds?job_name=neutron-ovn-tempest-
ipv6-only-ovs-master&project=openstack/neutron
need to use OVS_BRANCH=branch3.4 to be compatible with OVN_BRANCH=main
** Affects: neutron
Importance: High
Assignee: yatin (yatinkarel)
Status: In Progress
** Changed in: ne
Public bug reported:
Test fails as:-
2024-08-07 17:06:05.650302 | controller | 2024-08-07 17:06:05.647 76504 ERROR
rally_openstack.common.osclients [-] Unable to authenticate for user
c_rally_927546a8_h6aDLbnK in project c_rally_927546a8_ahUUmlp9:
keystoneauth1.exceptions.http.InternalServerErr
<< Testing keystone revert in
https://review.opendev.org/c/openstack/neutron/+/925968
The job passed with the revert.
<< Looks like the issue is with osprofiler being enabled for keystone in the
neutron gate, but it's not being enabled in keystone gates, so that's why we
didn't see the issue th
Closing for neutron too as zed moved to unmaintained and we need not run
grenade jobs there.
** Changed in: neutron
Status: Triaged => Won't Fix
--
https://bugs.launchpa
*** This bug is a duplicate of bug 2000163 ***
https://bugs.launchpad.net/bugs/2000163
** This bug has been marked a duplicate of bug 2000163
[FT] Error in "test_get_datapath_id"
--
** Changed in: neutron
Status: Confirmed => Fix Released
--
https://bugs.launchpad.net/bugs/2024160
Title:
[OVN][Trunk] subport doesn't reach status ACTIVE
Status in
** Changed in: neutron
Status: Confirmed => Fix Released
--
https://bugs.launchpad.net/bugs/2007353
Title:
Functional test
neutron.tests.functional.services.ovn_l3.
Public bug reported:
The test_log_deleted_with_corresponding_security_group API test randomly
fails with the below trace:-
Traceback (most recent call last):
File
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/api/admin/test_logging.py",
line 99, in test_log_de
@Modyn, OK, thanks for confirming; will close the bug as INVALID for
neutron as I don't see anything to fix for this on the neutron side.
** Changed in: neutron
Status: New => Incomplete
** Changed in: neutron
Status: Incomplete => Invalid
--
ded in
https://review.opendev.org/q/I0c4d492887216cad7a8155dceb738389f2886376
and backported till Wallaby. Xena+ are OK; only Wallaby is impacted because
before Xena the old notification format is used, where arguments are passed
as kwargs.
** Affects: neutron
Importance: Undecided
Assignee: yatin (yat
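To illustrate the difference (handler names here are hypothetical): on Xena and
later a neutron-lib callback receives a single payload object, while on Wallaby
and older the same data arrives as keyword arguments, so a handler written only
for the new style breaks when backported:

def handle_event_payload_style(resource, event, trigger, payload):
    """Xena+ style: data travels in a DBEventPayload object."""
    context = payload.context
    resource_id = payload.resource_id
    return context, resource_id


def handle_event_kwargs_style(resource, event, trigger, **kwargs):
    """Pre-Xena (e.g. Wallaby) style: the same data is passed as kwargs."""
    context = kwargs.get('context')
    resource_id = kwargs.get('resource_id')  # key names vary per event
    return context, resource_id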
Public bug reported:
After the https://review.opendev.org/c/openstack/neutron/+/806246 patch the
pep8 job is taking more than 30 minutes and TIMES_OUT. Specifically, flake8 is
taking much longer now: approximately 2 minutes before the patch and more
than 12 minutes after.
builds:- https://zuul.opendev.org/t/openstac
plugin is required to determine endpoint URL
Ex. log:-
https://8430494fc120d4e2add1-92777588630241a74fd2839fb5cc6a5d.ssl.cf5.rackcdn.com/841118/1/gate/neutron-
tempest-plugin-scenario-openvswitch/92d6c29/controller/logs/screen-q-
svc.txt
This is happening as the placement client is not configured with
Public bug reported:
Currently these periodic jobs are running on CentOS 8-Stream (with Python
3.6) and failing[1][2]. These are failing as master no longer supports
py3.6. To unblock, switching these jobs to run functional/fullstack tests
with Python 3.8[3] and disabling the dbcounter installation.
Ide
This should have been opened against networking-sfc; it's fixed with
https://review.opendev.org/c/openstack/networking-sfc/+/844251. Will
update the project.
** Project changed: neutron => networking-sfc
** Changed in: networking-sfc
Status: Confirmed => Fix Committed
--
*** This bug is a duplicate of bug 1973347 ***
https://bugs.launchpad.net/bugs/1973347
This looks like a duplicate of https://bugs.launchpad.net/neutron/+bug/1973347
and is fixed with
https://review.opendev.org/c/openstack/neutron/+/842147. This should be
backported to the stable branches as well.
@Amma
Reopening the bug as we are seeing a few failures (5 in the last 15 days per
https://opensearch.logs.openstack.org) in the linuxbridge and openvswitch
scenario jobs:-
https://1b33868f301e2201a22c-a64bb815b8796eabf8a53948331bd878.ssl.cf5.rackcdn.com/845366/2/check/neutron-tempest-plugin-openvswitch/aa310
release-fips&project=openstack/neutron
-
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-ovs-tempest-fips&project=openstack/neutron
** Affects: neutron
Importance: High
Assignee: yatin (yatinkarel)
Status: In Progress
** Changed in: neutron
Status: New =>
Public bug reported:
After the pyroute2-0.6.12 update[1], multiple tests in the fullstack/functional
jobs are failing[2][3].
Noting some failures here:-
functional:-
AssertionError: CIDR 2001:db8::/64 not found in the list of routes
AssertionError: Route not found: {'table': 'main', 'cidr': '192.168.0.0/24',
Public bug reported:
pyroute2 was updated to 0.7.1 with
https://review.opendev.org/c/openstack/requirements/+/849790; since then a
couple of jobs are broken, like:-
-
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-ovs-grenade-dvr-multinode&branch=master
-
https://zuul.opendev.org/t/open
Public bug reported:
Fails as below:-
ft1.12:
neutron.tests.functional.services.l3_router.test_l3_dvr_ha_router_plugin.L3DvrHATestCase.test__get_dvr_subnet_ids_on_host_query
testtools.testresult.real._StringException:
Traceback (most recent call last):
File "/home/zuul/src/opendev.org/openstack
Public bug reported:
As some of the operations rely on messaging callbacks[1][2], these
requests get stuck when a messaging driver like RabbitMQ is not
available. For OVN without any other agent running, there is no consumer
for these messages, so these operations should skip the messaging callbacks.
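A minimal, hypothetical sketch of that guard (helper names are mine, not
neutron code): only touch the messaging layer when some agent is registered to
consume the message, which is never the case for a pure ML2/OVN deployment:

def maybe_notify_agents(plugin, context, notify_agents, payload):
    """Skip the RPC fan-out when no agent can consume it (pure-OVN case)."""
    if not plugin.get_agents(context):
        # No AMQP consumers (e.g. ML2/OVN without DHCP/L3/OVS agents):
        # calling into oslo.messaging here would just block until timeout.
        return False
    notify_agents(context, payload)
    return True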
Based on https://bugs.launchpad.net/neutron/+bug/1970679/comments/5 and
https://review.opendev.org/c/openstack/devstack/+/848548 closing the
issue.
** Changed in: neutron
Status: Confirmed => Fix Released
--
** Changed in: neutron
Status: Fix Committed => Fix Released
--
https://bugs.launchpad.net/bugs/1979047
Title:
Interface attach fails with libvirt.libvirtError: intern
** Changed in: neutron
Status: Fix Committed => Fix Released
--
https://bugs.launchpad.net/bugs/1973783
Title:
[devstack] Segment plugin reports Traceback as placement
** Changed in: neutron
Status: Fix Committed => Fix Released
--
https://bugs.launchpad.net/bugs/1972764
Title:
[Wallaby] OVNPortForwarding._handle_lb_on_ls fails with