[Bug 1915072] Re: [plugin][ovn-central][ovn-host] include logs
I've tested this on focal and there is an issue, but it might not be related to this patchset: I tested running sosreport with -a on both an ovn-controller and an ovn-central host. This all works fine until I run it on an ovn-central host that is not the DB leader (you can only run commands on a leader host). In that case the ovn_central plugin hangs - https://pastebin.ubuntu.com/p/MJg3VHmS3y/. These commands were added in a previous commit (cc63749) so they are already in sosreport. I'll open a bug on sosreport to see if there is a way to work around this.
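For anyone hitting this in the meantime, a rough way to confirm whether leadership is the trigger is to run the cluster-status queries with a timeout before collecting the report. This is only a sketch; the ctl socket paths differ between OVN packaging versions, so adjust them to your install:

  # these may hang or fail on a node that is not the RAFT leader
  timeout 30 ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound \
      || echo "NB cluster/status timed out (possibly not the leader)"
  timeout 30 ovs-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound \
      || echo "SB cluster/status timed out (possibly not the leader)"
  # then collect the report as usual
  sosreport -a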
[Bug 1915072] Re: [plugin][ovn-central][ovn-host] include logs
Tested with focal-updates 4.0-1~ubuntu0.20.04.3 and it has the same issue.
[Bug 1915072] Re: [plugin][ovn-central][ovn-host] include logs
I've created https://github.com/sosreport/sos/issues/2418 to address this problem. Perhaps we should wait for that to be fixed and then bundle it with this SRU.

** Bug watch added: github.com/sosreport/sos/issues #2418
   https://github.com/sosreport/sos/issues/2418
[Bug 1915072] Re: [plugin][ovn-central][ovn-host] include logs
Patch submitted - https://github.com/sosreport/sos/pull/2419
[Bug 1894843] Re: [dvr_snat] Router update deletes rfp interface from qrouter even when VM port is present on this host
** Changed in: neutron
   Status: In Progress => Fix Released

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/ussuri
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/victoria
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Groovy)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Hirsute)
   Importance: Undecided
   Status: New
[Bug 1894843] Re: [dvr_snat] Router update deletes rfp interface from qrouter even when VM port is present on this host
** Patch added: "lp1894843-victoria.debdiff" https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1894843/+attachment/5466582/+files/lp1894843-victoria.debdiff -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1894843 Title: [dvr_snat] Router update deletes rfp interface from qrouter even when VM port is present on this host To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1894843/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1894843] Re: [dvr_snat] Router update deletes rfp interface from qrouter even when VM port is present on this host
** Patch added: "lp1894843-ussuri.debdiff" https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1894843/+attachment/5466583/+files/lp1894843-ussuri.debdiff -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1894843 Title: [dvr_snat] Router update deletes rfp interface from qrouter even when VM port is present on this host To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1894843/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1894843] Re: [dvr_snat] Router update deletes rfp interface from qrouter even when VM port is present on this host
** Description changed:

+ [Impact]
+ When neutron schedules snat namespaces it sometimes deletes the rfp interface from qrouter namespaces which breaks external network (fip) connectivity. The fix prevents this from happening.
+
+ [Test Case]
+ * deploy Openstack (Ussuri or above) with dvr_snat enabled in compute hosts.
+ * ensure min. 2 compute hosts
+ * create one ext network and one private network
+ * add private subnet to router and ext as gateway
+ * check which compute has the snat ns (ip netns| grep snat)
+ * create a vm on each compute host
+ * check that qrouter ns on both computes has rfp interface
+ * ip netns| grep qrouter; ip netns exec ip a s| grep rfp
+ * disable and re-enable router
+ * openstack router set --disable ; openstack router set --enable
+ * check again
+ * ip netns| grep qrouter; ip netns exec ip a s| grep rfp
+
+ [Regression Potential]
+ This patch is in fact restoring expected behaviour and is not expected to
+ introduce any new regressions.
+
+ -
+

Hello, In the case of dvr_snat l3 agents are deployed on hypervisors there can be race condition. The agent creates snat namespaces on each scheduled host and removes them at second step. At this second step agent removes the rfp interface from qrouter even when there is VM with floating IP on the host. When VM is deployed at the time of second step we can lost external access to VMs floating IP.

The issue can be reproduced by hand:
1. Create tenant network and router with external gateway
2. Create VM with floating ip
3. Ensure that VM on the hypervisor without snat-* namespace
4. Set the router to disabled state (openstack router set --disable )
5. Set the router to enabled state (openstack router set --enabled )
6. The external access to VMs FIP have lost because L3 agent creates the qrouter namespace without rfp interface.
-
Environment:
1. Neutron with ML2 OVS plugin.
2. L3 agents in dvr_snat mode on each hypervisor
3. openstack-neutron-common-15.1.1-0.2020061910.7d97420.el8ost.noarch

** Changed in: neutron (Ubuntu Hirsute)
   Status: New => Fix Released
[Bug 1894843] Re: [dvr_snat] Router update deletes rfp interface from qrouter even when VM port is present on this host
** Patch added: "lp1894843-groovy.debdiff" https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1894843/+attachment/5466598/+files/lp1894843-groovy.debdiff -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1894843 Title: [dvr_snat] Router update deletes rfp interface from qrouter even when VM port is present on this host To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1894843/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1894843] Re: [dvr_snat] Router update deletes rfp interface from qrouter even when VM port is present on this host
** Description changed:

[Impact]
When neutron schedules snat namespaces it sometimes deletes the rfp interface from qrouter namespaces which breaks external network (fip) connectivity. The fix prevents this from happening.

[Test Case]
- * deploy Openstack (Ussuri or above) with dvr_snat enabled in compute hosts.
- * ensure min. 2 compute hosts
- * create one ext network and one private network
- * add private subnet to router and ext as gateway
- * check which compute has the snat ns (ip netns| grep snat)
- * create a vm on each compute host
- * check that qrouter ns on both computes has rfp interface
- * ip netns| grep qrouter; ip netns exec ip a s| grep rfp
- * disable and re-enable router
- * openstack router set --disable ; openstack router set --enable
- * check again
- * ip netns| grep qrouter; ip netns exec ip a s| grep rfp
+ * deploy Openstack (Ussuri or above) with dvr_snat enabled in compute hosts.
+ * ensure min. 2 compute hosts
+ * create one ext network and one private network
+ * add private subnet to router and ext as gateway
+ * check which compute has the snat ns (ip netns| grep snat)
+ * create a vm on each compute host
+ * check that qrouter ns on both computes has rfp interface
+ * ip netns| grep qrouter; ip netns exec ip a s| grep rfp
+ * disable and re-enable router
+ * openstack router set --disable ; openstack router set --enable
+ * check again
+ * ip netns| grep qrouter; ip netns exec ip a s| grep rfp

- [Regression Potential]
+ [Where problems could occur]
This patch is in fact restoring expected behaviour and is not expected to
introduce any new regressions.

-

Hello, In the case of dvr_snat l3 agents are deployed on hypervisors there can be race condition. The agent creates snat namespaces on each scheduled host and removes them at second step. At this second step agent removes the rfp interface from qrouter even when there is VM with floating IP on the host. When VM is deployed at the time of second step we can lost external access to VMs floating IP.

The issue can be reproduced by hand:
1. Create tenant network and router with external gateway
2. Create VM with floating ip
3. Ensure that VM on the hypervisor without snat-* namespace
4. Set the router to disabled state (openstack router set --disable )
5. Set the router to enabled state (openstack router set --enabled )
6. The external access to VMs FIP have lost because L3 agent creates the qrouter namespace without rfp interface.

Environment:
1. Neutron with ML2 OVS plugin.
2. L3 agents in dvr_snat mode on each hypervisor
3. openstack-neutron-common-15.1.1-0.2020061910.7d97420.el8ost.noarch
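For convenience, the check in the [Test Case] above can be scripted roughly like this on each compute host (a sketch only; the router name is a placeholder for whatever router you created):

  # toggle the router
  openstack router set --disable test-router
  openstack router set --enable test-router

  # every qrouter namespace hosting a VM with a floating IP should still
  # have an rfp- interface after the toggle
  for ns in $(ip netns list | awk '/^qrouter-/ {print $1}'); do
      echo "== $ns =="
      sudo ip netns exec "$ns" ip -o link show | grep -q rfp \
          && echo "rfp interface present" || echo "rfp interface MISSING"
  done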
[Bug 1869808] Re: reboot neutron-ovs-agent introduces a short interrupt of vlan traffic
After a further review of the patches surrounding this issue I decided to pull what looks more like a complete set of the associated patches from stable/queens and have been testing a build that I am now happy with. It behaves no differently to the current upload but hopefully supports all the edge cases around ovs agent restart and resync that have been resolved in stable/queens. I will attach a debdiff and would like to request that the currently uploaded package be replaced with this one.
[Bug 1869808] Re: reboot neutron-ovs-agent introduces a short interrupt of vlan traffic
** Patch added: "lp1869808-bionic-queens.debdiff" https://bugs.launchpad.net/neutron/+bug/1869808/+attachment/5471546/+files/lp1869808-bionic-queens.debdiff -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1869808 Title: reboot neutron-ovs-agent introduces a short interrupt of vlan traffic To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1869808/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1869808] Re: reboot neutron-ovs-agent introduces a short interrupt of vlan traffic
The new set of patches is as follows:

d/p/0001-ovs-agent-signal-to-plugin-if-tunnel-refresh-needed.patch (LP: #1853613)
d/p/0002-Do-not-block-connection-between-br-int-and-br-phys-o.patch (LP: #1869808)
d/p/0003-Ensure-that-stale-flows-are-cleaned-from-phys_bridge.patch (LP: #1864822)
d/p/0004-DVR-Reconfigure-re-created-physical-bridges-for-dvr-.patch (LP: #1864822)
d/p/0005-Ensure-drop-flows-on-br-int-at-agent-startup-for-DVR.patch (LP: #1887148)
d/p/0006-Don-t-check-if-any-bridges-were-recrected-when-OVS-w.patch (LP: #1864822)
d/p/0007-Not-remove-the-running-router-when-MQ-is-unreachable.patch (LP: #1871850)

Same test case etc. as the current SRU.
[Bug 1751923] Re: [SRU]_heal_instance_info_cache periodic task bases on port list from nova db, not from neutron server
Since Queens is populating the virtual_interfaces table as standard I think we should proceed with this SRU - https://pastebin.ubuntu.com/p/BdCPsVKGk5/ - since it will provide a clean fix for Queens clouds.
[Bug 1907686] Re: ovn: instance unable to retrieve metadata
** Changed in: cloud-archive/victoria
   Status: New => Won't Fix
[Bug 1929832] Re: stable/ussuri py38 support for keepalived-state-change monitor
stable/ussuri backport - https://review.opendev.org/c/openstack/neutron/+/793417
[Bug 1929832] Re: stable/ussuri py38 support for keepalived-state-change monitor
focal-proposed verified using [Test Plan] and with the following output:

# apt-cache policy neutron-common
neutron-common:
  Installed: 2:16.3.2-0ubuntu2
  Candidate: 2:16.3.2-0ubuntu2
  Version table:
 *** 2:16.3.2-0ubuntu2 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu focal-proposed/main amd64 Packages
        100 /var/lib/dpkg/status
     2:16.3.1-0ubuntu1.1 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages
     2:16.0.0~b3~git2020041516.5f42488a9a-0ubuntu2 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu focal/main amd64 Packages

$ grep kill_keepalived_monitor_py38 /etc/neutron/rootwrap.d/l3.filters
kill_keepalived_monitor_py38: KillFilter, root, python3.8, -15, -9

** Description changed:

[Impact]
+ Please see original bug description. Without this fix, the neutron-l3-agent is unable to teardown an HA router and leaves it partially configured on every node it was running on.
+
+ [Test Case]
- The victoria release of Openstack received patch [1] which allows the neutron-l3-agent to SIGKILL or SIGTERM the keepalived-state-change monitor when running under py38. This patch is needed in Ussuri for users running with py38 so we need to backport it.
+ * deploy Openstack ussuri on Ubuntu Focal
+ * enable L3 HA
+ * create a router and vm on network attached to router
+ * disable or delete the router and check for errors like the one below
+ * ensure that the following line exists in /etc/neutron/rootwrap.d/l3.filters:
+
+ kill_keepalived_monitor_py38: KillFilter, root, python3.8, -15, -9
+
+ -
+
+ The victoria release of Openstack received patch [1] which allows the
+ neutron-l3-agent to SIGKILL or SIGTERM the keepalived-state-change
+ monitor when running under py38. This patch is needed in Ussuri for
+ users running with py38 so we need to backport it.

The consequence of not having this is that you get the following when you delete or disable a router:

2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent [req-8c69af29-8f9c-4721-9cba-81ff4e9be92c - 9320f5ac55a04fb280d9ceb0b1106a6e - - -] Error while deleting router ab63ccd8-1197-48d0-815e-31adc40e5193: neutron_lib.exceptions.ProcessExecutionError: Exit code: 99; Stdin: ; Stdout: ; Stderr: /usr/bin/neutron-rootwrap: Unauthorized command: kill -15 2516433 (no filter matched)
2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent Traceback (most recent call last):
2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent   File "/usr/lib/python3/dist-packages/neutron/agent/l3/agent.py", line 512, in _safe_router_removed
2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent     self._router_removed(ri, router_id)
2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent   File "/usr/lib/python3/dist-packages/neutron/agent/l3/agent.py", line 548, in _router_removed
2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent     self.router_info[router_id] = ri
2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent     self.force_reraise()
2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent     six.reraise(self.type_, self.value, self.tb)
2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent   File "/usr/lib/python3/dist-packages/six.py", line 703, in reraise
2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent     raise value
2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent   File "/usr/lib/python3/dist-packages/neutron/agent/l3/agent.py", line 545, in _router_removed
2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent     ri.delete()
2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent   File "/usr/lib/python3/dist-packages/neutron/agent/l3/dvr_edge_router.py", line 236, in delete
2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent     super(DvrEdgeRouter, self).delete()
2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent   File "/usr/lib/python3/dist-packages/neutron/agent/l3/ha_router.py", line 492, in delete
2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent     self.destroy_state_change_monitor(self.process_monitor)
2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent   File "/usr/lib/python3/dist-packages/neutron/agent/l3/ha_router.py", line 438, in destroy_state_change_monitor
2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent     pm.disable(sig=str(int(signal.SIGTERM)))
2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent   File "/usr/lib/python3/dist-packages/neutron/agent/linux/ext
[Bug 1929832] Re: stable/ussuri py38 support for keepalived-state-change monitor
bionic-ussuri-proposed verified using [Test Plan] and with the following output:

root@juju-9c4cdb-lp1929832-verify-6:~# apt-cache policy neutron-common
neutron-common:
  Installed: 2:16.3.2-0ubuntu2~cloud0
  Candidate: 2:16.3.2-0ubuntu2~cloud0
  Version table:
 *** 2:16.3.2-0ubuntu2~cloud0 500
        500 http://ubuntu-cloud.archive.canonical.com/ubuntu bionic-proposed/ussuri/main amd64 Packages
        100 /var/lib/dpkg/status
     2:12.1.1-0ubuntu7 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages
     2:12.0.1-0ubuntu1 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu bionic/main amd64 Packages

root@juju-9c4cdb-lp1929832-verify-6:~# grep py38 /etc/neutron/rootwrap.d/l3.filters
kill_keepalived_monitor_py38: KillFilter, root, python3.8, -15, -9

** Tags removed: verification-needed verification-ussuri-needed
** Tags added: verification-done verification-ussuri-done
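The [Test Plan] check can be approximated with something like the following on an l3-agent unit (a sketch; the router name and the systemd unit name are assumptions for a typical Ubuntu deployment):

  # the rootwrap filter added by the fix
  grep kill_keepalived_monitor_py38 /etc/neutron/rootwrap.d/l3.filters

  # tear down an HA router and make sure no rootwrap "Unauthorized command"
  # errors show up in the l3-agent log
  openstack router set --disable test-ha-router
  sudo journalctl -u neutron-l3-agent --since "10 min ago" | grep -i "Unauthorized command" \
      && echo "rootwrap filter still missing" || echo "no rootwrap errors seen"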
[Bug 1908375] Re: ceph-volume lvm list calls blkid numerous times for differrent devices
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/queens
   Importance: Undecided
   Status: New
[Bug 1908375] Re: ceph-volume lvm list calls blkid numerous times for differrent devices
Uploaded to the bionic unapproved queue on 11th May - https://launchpadlibrarian.net/538166201/ceph_12.2.13-0ubuntu0.18.04.8_source.changes - and still awaiting SRU team approval.
[Bug 1892361] Re: SRIOV instance gets type-PF interface, libvirt kvm fails
** Changed in: nova/rocky
   Status: Fix Committed => New

** Changed in: nova/queens
   Status: Fix Committed => New
[Bug 1931244] Re: ovn sriov broken from ussuri onwards
I believe the following bug may also be related - https://bugs.launchpad.net/neutron/+bug/1927977
[Bug 1931244] Re: ovn sriov broken from ussuri onwards
I think the issue here is basically that the new code relies on [1] to get the number of worker threads, but that count does not include things like RPC workers.

[1] https://github.com/openstack/neutron/blob/df94641b43964834ba14c69eb4fb17cc45349117/neutron/service.py#L313
[Bug 1931244] Re: ovn sriov broken from ussuri onwards
Verified bionic-ussuri-proposed with output:

# apt-cache policy neutron-common
neutron-common:
  Installed: 2:16.3.2-0ubuntu3~cloud0
  Candidate: 2:16.3.2-0ubuntu3~cloud0
  Version table:
 *** 2:16.3.2-0ubuntu3~cloud0 500
        500 http://ubuntu-cloud.archive.canonical.com/ubuntu bionic-proposed/ussuri/main amd64 Packages
        100 /var/lib/dpkg/status
     2:16.3.1-0ubuntu1~cloud0 500
        500 http://ubuntu-cloud.archive.canonical.com/ubuntu bionic-updates/ussuri/main amd64 Packages
     2:12.1.1-0ubuntu7 500
        500 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages
     2:12.0.1-0ubuntu1 500
        500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages

I created a two-port sriov vm on bionic-ussuri and it came up in seconds.

** Tags removed: verification-ussuri-needed
** Tags added: verification-ussuri-done
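For reference, the two-port sriov vm check above amounts to something like this (a sketch; network, image and flavor names are placeholders for whatever exists in the cloud under test):

  P1=$(openstack port create --network sriov-net --vnic-type direct sriov-port-1 -f value -c id)
  P2=$(openstack port create --network sriov-net --vnic-type direct sriov-port-2 -f value -c id)
  openstack server create --flavor m1.small --image focal \
      --nic port-id="$P1" --nic port-id="$P2" sriov-test-vm
  # before the fix the server could sit in BUILD for a long time; with the
  # fix it should go ACTIVE within seconds
  openstack server show sriov-test-vm -c status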
[Bug 1929832] Re: stable/ussuri py38 support for keepalived-state-change monitor
This has been released to the ussuri cloud archive (which is currently on 2:16.3.2-0ubuntu3~cloud0) so marking Fix Released.

** Changed in: cloud-archive/ussuri
   Status: Fix Committed => Fix Released
[Bug 1927868] Re: vRouter not working after update to 16.3.1
I've had a go at deploying Train and upgrading Neutron to latest Ussuri and I see the same issue. Looking closer, what I see is that post-upgrade the Neutron l3-agent has not spawned any keepalived processes, hence no router goes active. When the agent is restarted it would normally receive two router updates: the first to spawn_state_change_monitor and a second to spawn keepalived. On my non-working nodes the second router update is never received by the l3-agent. Here is an example of a working agent https://pastebin.ubuntu.com/p/PFb594wkhB vs. a non-working one https://pastebin.ubuntu.com/p/MtDNrXmvZB/.

I tested restarting all agents and this did not fix things. I then rebooted one of my upgraded nodes and it resolved the issue for that node, i.e. two updates received, both spawned, and the router goes active.

I also noticed that on a non-rebooted node, following an ovs agent restart I see https://pastebin.ubuntu.com/p/2n4KxBv8S2/ which again is not resolved by an agent restart and is fixed by the node reboot. This latter issue is described in old bugs e.g. https://bugs.launchpad.net/neutron/+bug/1625305
[Bug 1900851] Re: Cannot Create Port with Fixed IP Address
** Also affects: cloud-archive/xena
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/ussuri
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/victoria
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/wallaby
   Importance: Undecided
   Status: New

** Also affects: horizon (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: horizon (Ubuntu Hirsute)
   Importance: Undecided
   Status: New

** Also affects: horizon (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Also affects: horizon (Ubuntu Impish)
   Importance: Undecided
   Status: New

** Also affects: horizon (Ubuntu Groovy)
   Importance: Undecided
   Status: New
[Bug 1900851] Re: Cannot Create Port with Fixed IP Address
** Changed in: cloud-archive/xena
   Status: New => Fix Released

** Changed in: cloud-archive/wallaby
   Status: New => Fix Released

** Changed in: cloud-archive/victoria
   Status: New => Fix Released

** Changed in: horizon (Ubuntu Impish)
   Status: New => Fix Released

** Changed in: horizon (Ubuntu Hirsute)
   Status: New => Fix Released

** Changed in: horizon (Ubuntu Groovy)
   Status: New => Fix Released
[Bug 1927868] Re: vRouter not working after update to 16.3.1
I have just re-tested all of this as follows:

* deployed Openstack Train (on Bionic i.e. 2:15.3.3-0ubuntu1~cloud0) with 3 gateway nodes
* created one HA router, one vm with one fip
* can ping fip and confirm single active router
* upgraded neutron-server (api) to 16.3.0-0ubuntu3~cloud0 (ussuri), stopped server, neutron-db-manage upgrade head, start server
* ping still works
* upgraded all compute hosts to 16.3.0-0ubuntu3~cloud0, observed vrrp failover and short interruption
* ping still works
* upgraded one compute to 2:16.3.2-0ubuntu3~cloud0
* ping still works
* upgraded neutron-server (api) to 2:16.3.2-0ubuntu3~cloud0, stopped server, neutron-db-manage upgrade head (observed no migrations), start server
* ping still works
* upgraded remaining compute to 2:16.3.2-0ubuntu3~cloud0
* ping still works

I noticed that after upgrading to 2:16.3.2-0ubuntu3~cloud0 my interfaces went from:

root@juju-f0dfb3-lp1927868-6:~# ip netns exec qrouter-8b5e4130-6688-45c5-bc8e-ee3781d8719c ip a s; pgrep -alf keepalived| grep -v state
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ha-bd1bd9ab-f8@if11: mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fa:16:3e:6a:ae:8c brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 169.254.195.91/18 brd 169.254.255.255 scope global ha-bd1bd9ab-f8
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe6a:ae8c/64 scope link
       valid_lft forever preferred_lft forever
3: qg-9e134c20-1f@if13: mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fa:16:3e:c4:cc:84 brd ff:ff:ff:ff:ff:ff link-netnsid 0
4: qr-a125b622-2d@if14: mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fa:16:3e:0b:d3:74 brd ff:ff:ff:ff:ff:ff link-netnsid 0

to:

1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ha-bd1bd9ab-f8@if11: mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fa:16:3e:6a:ae:8c brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 169.254.195.91/18 brd 169.254.255.255 scope global ha-bd1bd9ab-f8
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe6a:ae8c/64 scope link
       valid_lft forever preferred_lft forever
3: qg-9e134c20-1f@if13: mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether fa:16:3e:c4:cc:84 brd ff:ff:ff:ff:ff:ff link-netnsid 0
4: qr-a125b622-2d@if14: mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fa:16:3e:0b:d3:74 brd ff:ff:ff:ff:ff:ff link-netnsid 0

And it remained like that until the router went vrrp master:

1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ha-bd1bd9ab-f8@if11: mtu 1500 qdisc n
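For reference, the neutron-server upgrade step used in the test above is roughly the following (a sketch; the config-file paths are the standard Ubuntu ones and the package list may differ per deployment):

  sudo systemctl stop neutron-server
  sudo apt install neutron-server neutron-common python3-neutron   # pulls the proposed packages
  sudo neutron-db-manage --config-file /etc/neutron/neutron.conf \
      --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head
  sudo systemctl start neutron-server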
[Bug 1751923] Re: [SRU]_heal_instance_info_cache periodic task bases on port list from nova db, not from neutron server
@coreycb I think we have everything we need to proceed with this SRU now. Since Queens is the oldest release currently supported on Ubuntu, and support for populating the vif attach ordering required to rebuild the cache has been available since Newton, I think the risk of anyone being impacted is very small. VMs created prior to Newton would need patch [1] and eventually [2] backported from Stein, but I don't see them as essential, and given the impact of not having this fix asap I think it supersedes those, which we can handle separately.

[1] https://github.com/openstack/nova/commit/3534471c578eda6236e79f43153788c4725a5634
[2] https://bugs.launchpad.net/nova/+bug/1825034
[Bug 1832021] Re: Checksum drop of metadata traffic on isolated networks with DPDK
** Tags added: verification-needed-queens
[Bug 1879798] Re: designate-manage pool update doesn't reflects targets master dns servers into zones.
Not currently available in an upstream point release prior to Victoria:

$ git branch -r --contains b967e9f706373f1aad6db882c2295fbbe1fadfc9
gerrit/stable/ussuri
$ git tag --contains b967e9f706373f1aad6db882c2295fbbe1fadfc9
$

** Changed in: cloud-archive/victoria
   Status: Fix Committed => New

** Changed in: cloud-archive/ussuri
   Status: Fix Committed => New
[Bug 1751923] Re: [SRU]_heal_instance_info_cache periodic task bases on port list from nova db, not from neutron server
Restored the bug description to its original format and updated SRU info.

** Description changed:

[Impact]
* During periodic task _heal_instance_info_cache the instance_info_caches are not updated using instance port_ids taken from neutron, but from nova db.
* This causes that existing VMs to loose their network interfaces after reboot.

[Test Plan]
* This bug is reproducible on Bionic/Queens clouds.
1) Deploy the following Juju bundle: https://paste.ubuntu.com/p/HgsqZfsDGh/
2) Run the following script: https://paste.ubuntu.com/p/c4VDkqyR2z/
3) If the script finishes with "Port not found", the bug is still present.

[Where problems could occur]
- ** No specific regression potential has been identified.
- ** Check the other info section ***
-
- [Other Info]
+ Instances created prior to the Openstack Newton release that have more
+ than one interface will not have associated information in the
+ virtual_interfaces table that is required to repopulate the cache with
+ interfaces in the same order they were attached prior. In the unlikely
+ event that this occurs and you are using Openstack release Queen or
+ Rocky, it will be necessary to either manually populate this table.
+ Openstack Stein has a patch that adds support for generating this data.
+ Since as things stand the guest will be unable to identify it's network
+ information at all in the event the cache gets purged and given the
+ hopefully low risk that a vm was created prior to Newton we hope the
+ potential for this regression is very low.
+
+ --
+
+ Description
+ ===
+
+ During periodic task _heal_instance_info_cache the
+ instance_info_caches are not updated using instance port_ids taken
+ from neutron, but from nova db.
+
+ Sometimes, perhaps because of some race-condition, its possible to
+ lose some ports from instance_info_caches. Periodic task
+ _heal_instance_info_cache should clean this up (add missing records),
+ but in fact it's not working this way.

How it looks now?
=

_heal_instance_info_cache during crontask:
https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/compute/manager.py#L6525

is using network_api to get instance_nw_info (instance_info_caches):

-             try:
-                 # Call to network API to get instance info.. this will
-                 # force an update to the instance's info_cache
-                 self.network_api.get_instance_nw_info(context, instance)
+ try:
+ # Call to network API to get instance info.. this will
+ # force an update to the instance's info_cache
+ self.network_api.get_instance_nw_info(context, instance)

self.network_api.get_instance_nw_info() is listed below:
https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L1377

and it uses _build_network_info_model() without networks and port_ids parameters (because we're not adding any new interface to instance):
https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L2356

Next: _gather_port_ids_and_networks() generates the list of instance networks and port_ids:

-       networks, port_ids = self._gather_port_ids_and_networks(
-           context, instance, networks, port_ids, client)
+ networks, port_ids = self._gather_port_ids_and_networks(
+ context, instance, networks, port_ids, client)

https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L2389-L2390
https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L1393

- As we see that _gather_port_ids_and_networks() takes the port list from
- DB:
+ As we see that _gather_port_ids_and_networks() takes the port list
+ from DB:

https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/objects/instance.py#L1173-L1176

And thats it. When we lose a port its not possible to add it again with this periodic task. The only way is to clean device_id field in neutron port object and re-attach the interface using `nova interface-attach`.

- When the interface is missing and there is no port configured on compute
- host (for example after compute reboot) - interface is not added to
- instance and from neutron point of vi
[Bug 1849098] Re: ovs agent is stuck with OVSFWTagNotFound when dealing with unbound port
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/queens
   Importance: Undecided
   Status: New
[Bug 1927729] Re: Cinder packages should have sysfsutils as a dependency
** Also affects: python-cinderclient
   Importance: Undecided
   Status: New

** No longer affects: python-cinderclient

** Also affects: python-cinderclient
   Importance: Undecided
   Status: New

** No longer affects: python-cinderclient

** Also affects: charm-cinder
   Importance: Undecided
   Status: New

** Also affects: charm-nova-compute
   Importance: Undecided
   Status: New
[Bug 1911900] Re: [SRU] Active scrub blocks upmap balancer
Hi Pon, if you still need a Bionic SRU for this one, can you attach a debdiff for bionic? Thanks.

** Changed in: ceph (Ubuntu Bionic)
   Status: In Progress => New

** Changed in: cloud-archive
   Assignee: Ponnuvel Palaniyappan (pponnuvel) => (unassigned)
[Bug 1895727] Re: OpenSSL.SSL.SysCallError: (111, 'ECONNREFUSED') and Connection thread stops
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/victoria
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/ussuri
   Importance: Undecided
   Status: New
[Bug 1883089] Re: [L3] floating IP failed to bind due to no agent gateway port(fip-ns)
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/victoria
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/ussuri
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/victoria
   Status: New => Fix Released
[Bug 1883089] Re: [L3] floating IP failed to bind due to no agent gateway port(fip-ns)
** Description changed:

In patch [1] it introduced a binding of DB uniq constraint for L3 agent gateway. In some extreme case the DvrFipGatewayPortAgentBinding is in DB while the gateway port not. The current code path only checks the binding existence which will pass a "None" port to the following code path that results an AttributeError.

[1] https://review.opendev.org/#/c/702547/

Exception log:

2020-06-11 15:39:28.361 1285214 INFO neutron.db.l3_dvr_db [None req-d6a41187-2495-46bf-a424-ab7195c0ecb1 - - - - -] Floating IP Agent Gateway port for network 3fcb7702-ae0b-46b4-807f-8ae94d656dd3 does not exist on host host-compute-1. Creating one.
2020-06-11 15:39:28.370 1285214 DEBUG neutron.db.l3_dvr_db [None req-d6a41187-2495-46bf-a424-ab7195c0ecb1 - - - - -] Floating IP Agent Gateway port for network 3fcb7702-ae0b-46b4-807f-8ae94d656dd3 already exists on host host-compute-1. Probably it was just created by other worker. create_fip_agent_gw_port_if_not_exists /usr/lib/python2.7/site-packages/neutron/db/l3_dvr_db.py:927
2020-06-11 15:39:28.390 1285214 DEBUG neutron.db.l3_dvr_db [None req-d6a41187-2495-46bf-a424-ab7195c0ecb1 - - - - -] Floating IP Agent Gateway port None found for the destination host: host-compute-1 create_fip_agent_gw_port_if_not_exists /usr/lib/python2.7/site-packages/neutron/db/l3_dvr_db.py:933
2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server [None req-d6a41187-2495-46bf-a424-ab7195c0ecb1 - - - - -] Exception during message handling: AttributeError: 'NoneType' object has no attribute 'get'
2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 170, in _process_incoming
2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 220, in dispatch
2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 190, in _do_dispatch
2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 91, in wrapped
2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server     setattr(e, '_RETRY_EXCEEDED', True)
2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server     self.force_reraise()
2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server     six.reraise(self.type_, self.value, self.tb)
2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 87, in wrapped
2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server     return f(*args, **kwargs)
2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_db/api.py", line 147, in wrapper
2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server     ectxt.value = e.inner_exc
2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server     self.force_reraise()
2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server     six.reraise(self.type_, self.value, self.tb)
2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_db/api.py", line 135, in wrapper
2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server     return f(*args, **kwargs)
2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 126, in wrapped
2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server     LOG.debug("Retry wrapper got retriable exception: %s", e)
2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in _
[Bug 1885169] Re: Some arping version only accept integer number as -w argument
I just tested arping on Focal and I don't see this issue:

ubuntu@arping:~$ sudo arping -U -I eth0 -c 1 -w 1.5 10.48.98.1
ARPING 10.48.98.1
42 bytes from fe:10:17:12:6a:9c (10.48.98.1): index=0 time=14.516 usec

--- 10.48.98.1 statistics ---
1 packets transmitted, 1 packets received, 0% unanswered (0 extra)
rtt min/avg/max/std-dev = 0.015/0.015/0.015/0.000 ms

ubuntu@arping:~$ dpkg -l| grep arping
ii  arping  2.20-1  amd64  sends IP and/or ARP pings (to the MAC address)

Not sure what I'm missing.
[Bug 1885169] Re: Some arping version only accept integer number as -w argument
For context, this somewhat explains what changed in iputils to lead to this issue - https://github.com/iputils/iputils/issues/267

** Bug watch added: github.com/iputils/iputils/issues #267
   https://github.com/iputils/iputils/issues/267
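To illustrate the difference in behaviour, a caller can probe at runtime whether the installed arping accepts a fractional -w timeout and fall back to whole seconds if not (a sketch only; the interface and target are placeholders, and note that an unanswered ping also returns non-zero, so this is only a rough probe):

  IFACE=eth0
  TARGET=10.48.98.1
  if sudo arping -U -I "$IFACE" -c 1 -w 1.5 "$TARGET" >/dev/null 2>&1; then
      echo "arping accepts fractional -w timeouts"
  else
      # iputils builds that only take integer seconds for -w
      sudo arping -U -I "$IFACE" -c 1 -w 1 "$TARGET"
  fi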
[Bug 1899964] Re: Failover of loadbalancer fails when Amphora master is missing
** Changed in: cloud-archive/ussuri
   Status: Triaged => Fix Released

** Changed in: cloud-archive/train
   Status: Triaged => Fix Released

** Changed in: octavia (Ubuntu Focal)
   Status: Triaged => Fix Released
[Bug 1789045] Re: keepalived 1:1.2.24-1ubuntu0.16.04.1 breaks Neutron stable branches
** Changed in: keepalived (Ubuntu)
   Status: Confirmed => Incomplete
[Bug 1778771] Re: Backups panel is visible even if enable_backup is False
** Changed in: charm-openstack-dashboard
   Status: In Progress => Invalid

** Changed in: charm-openstack-dashboard
   Assignee: Seyeong Kim (xtrusia) => (unassigned)

** Changed in: charm-openstack-dashboard
   Milestone: 18.11 => None
[Bug 1737866] Re: Too many open files when large number of routers on a host
Re the trusty-mitaka failure: from the comment in http://upstart.ubuntu.com/wiki/Stanzas#limit it looks like this should fix it - https://pastebin.ubuntu.com/p/PMmQTNQsxZ/
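For reference, the upstart `limit` stanza referred to above is applied via an override along these lines (a sketch; the job name and values here are assumptions - the pastebin has the actual change, and the later openvswitch upload used 1048576):

  # on trusty, raise the open-file limit for the affected upstart job
  echo "limit nofile 1048576 1048576" | sudo tee /etc/init/openvswitch-switch.override
  sudo service openvswitch-switch restart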
[Bug 1744062] Re: [SRU] L3 HA: multiple agents are active at the same time
** Tags removed: sts-sru-needed
** Tags added: sts-sru-done
[Bug 1799648] Re: Fibre Channel attachments incorrectly scans for local WWN instead of target WWN (Upstream Backport)
Hi Trent, please use the original bug report for SRU/backport submission. -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1799648 Title: Fibre Channel attachments incorrectly scans for local WWN instead of target WWN (Upstream Backport) To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1799648/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1778771] Re: Backups panel is visible even if enable_backup is False
I have tested the bionic-proposed build using the provided testcase, i.e. deploy openstack-dashboard with enable_backup set to False and check that the backups panel is not there. I then set enable_backup to True and confirmed that the backups panel appears. ** Tags removed: verification-needed verification-needed-bionic ** Tags added: verification-done verification-done-bionic -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1778771 Title: Backups panel is visible even if enable_backup is False To manage notifications about this bug go to: https://bugs.launchpad.net/charm-openstack-dashboard/+bug/1778771/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1713499] Re: Cannot delete a neutron network, if the currently configured MTU is lower than the network's MTU
** Changed in: cloud-archive Status: New => Fix Released -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1713499 Title: Cannot delete a neutron network, if the currently configured MTU is lower than the network's MTU To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1713499/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1737866] Re: Too many open files when large number of routers on a host
Looks like the upload to pike-proposed didn't update this LP bug but it is definitely there:

openvswitch (2.8.4-0ubuntu0.17.10.1) xenial; urgency=medium

  * Bump nofiles to 1048576 for ovs daemons (LP: #1737866).
  * New upstream point release (LP: #1787519):
    - d/p/s390x-stp-timeout.patch: Dropped, equivalent change upstream.

 -- James Page  Fri, 17 Aug 2018 08:01:11 +0100

** Tags added: verification-pike-needed

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1737866 Title: Too many open files when large number of routers on a host To manage notifications about this bug go to: https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1737866/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1737866] Re: Too many open files when large number of routers on a host
** Changed in: cloud-archive/pike Status: Fix Released => Fix Committed -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1737866 Title: Too many open files when large number of routers on a host To manage notifications about this bug go to: https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1737866/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1776375] Re: [SRU] Making gnocchi resource support multiple projects with the same name
** Tags removed: sts-sru-needed ** Tags added: sts-sru-done -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1776375 Title: [SRU] Making gnocchi resource support multiple projects with the same name To manage notifications about this bug go to: https://bugs.launchpad.net/aodh/+bug/1776375/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1775170] Re: [SRU] Fixing empty create swift container dialog after upgrading horizon from newton to ocata
** Tags removed: sts-sru-needed ** Tags added: sts-sru-done -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1775170 Title: [SRU] Fixing empty create swift container dialog after upgrading horizon from newton to ocata To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1775170/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1737866] Re: Too many open files when large number of routers on a host
** Tags removed: sts-sru-needed ** Tags added: sts-sru-done -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1737866 Title: Too many open files when large number of routers on a host To manage notifications about this bug go to: https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1737866/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1713499] Re: Cannot delete a neutron network, if the currently configured MTU is lower than the network's MTU
This is now Fix Released for Queens UCA since the patch is in the 12.0.4 release and UCA now has 12.0.5 (from bug 1795424) ** Changed in: cloud-archive/queens Status: In Progress => Fix Released ** Changed in: neutron (Ubuntu Bionic) Status: In Progress => Fix Released -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1713499 Title: Cannot delete a neutron network, if the currently configured MTU is lower than the network's MTU To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1713499/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1778771] Re: Backups panel is visible even if enable_backup is False
I have tested the xenial-proposed/queens build using the provided testcase i.e. deploy openstack-dashboard with enable_backup set to False and check that the backups panel is not there. I then set enable_backup to true and confirmed that the backups panel appears. ** Tags removed: verification-queens-needed ** Tags added: sts-sru-done verification-queens-done -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1778771 Title: Backups panel is visible even if enable_backup is False To manage notifications about this bug go to: https://bugs.launchpad.net/charm-openstack-dashboard/+bug/1778771/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1789045] Re: keepalived 1:1.2.24-1ubuntu0.16.04.1 breaks Neutron stable branches
We have potentially hit similar behaviour to the mentioned issue and it seems to occur when we enable the ha_vrrp_health_check_interval. I see that this is enabled by default in the upstream tests [1] so I wonder if this is somehow related. [1] https://github.com/openstack/neutron/blob/stable/queens/neutron/tests/functional/agent/l3/test_ha_router.py#L394 -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1789045 Title: keepalived 1:1.2.24-1ubuntu0.16.04.1 breaks Neutron stable branches To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/keepalived/+bug/1789045/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1737866] Re: Too many open files when large number of routers on a host
** Tags added: sts-sru-needed -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1737866 Title: Too many open files when large number of routers on a host To manage notifications about this bug go to: https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1737866/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1781039] Re: GCE cloudinit and ubuntu keys from metadata to ubuntu authorized_keys
** Tags removed: sts-sponser ** Tags added: sts-sponsor -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1781039 Title: GCE cloudinit and ubuntu keys from metadata to ubuntu authorized_keys To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-init/+bug/1781039/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1744062] Re: [SRU] L3 HA: multiple agents are active at the same time
** Changed in: cloud-archive Status: Triaged => Fix Released -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1744062 Title: [SRU] L3 HA: multiple agents are active at the same time To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1744062/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1789045] Re: keepalived 1:1.2.24-1ubuntu0.16.04.1 breaks Neutron stable branches
Looks like this is the real cause of these problems - https://bugs.launchpad.net/neutron/+bug/1793102 -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1789045 Title: keepalived 1:1.2.24-1ubuntu0.16.04.1 breaks Neutron stable branches To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/keepalived/+bug/1789045/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1818614] Re: [SRU] Various L3HA functional tests fails often
Patch submitted to fix this issue - https://review.openstack.org/#/c/649991/ -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1818614 Title: [SRU] Various L3HA functional tests fails often To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1818614/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1818614] Re: [SRU] Various L3HA functional tests fails often
Cause of this issue: the problem is that the following is being passed to neutron-keepalived-state-change when it is spawned:

'--AGENT-root_helper_daemon=%s' % self.agent_conf.AGENT.root_helper_daemon

In my env root_helper_daemon is not configured or running (which is the neutron default fwiw), so we need to not pass that option when it is not set, so that neutron-keepalived-state-change won't try to use it. For example, right now in ps I have:

... --AGENT-root_helper=sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf --AGENT-root_helper_daemon=None

The "None" is what is breaking it.

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1818614 Title: [SRU] Various L3HA functional tests fails often To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1818614/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
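To make the failure mode described above concrete, here is a minimal standalone Python sketch (illustrative only, not the actual neutron source; the variable names are assumptions) of how an unset root_helper_daemon ends up as the literal string "None" on the command line, and the obvious guard of only appending the flag when the option is actually set:

    # root_helper_daemon is None when the operator has not configured a
    # root helper daemon (the neutron default).
    root_helper_daemon = None

    args = ['--AGENT-root_helper=sudo /usr/bin/neutron-rootwrap '
            '/etc/neutron/rootwrap.conf']

    # Buggy behaviour: the option is always formatted in, so None is
    # stringified and the child process receives the bogus value "None".
    args.append('--AGENT-root_helper_daemon=%s' % root_helper_daemon)
    assert args[-1] == '--AGENT-root_helper_daemon=None'

    # Guarded behaviour: only pass the flag when a daemon command is set.
    args = args[:-1]
    if root_helper_daemon:
        args.append('--AGENT-root_helper_daemon=%s' % root_helper_daemon)

Either way, the key point from the comment above is that the literal string "None" must never reach neutron-keepalived-state-change.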
[Bug 1813007] Re: [SRU] Unable to install new flows on compute nodes when having broken security group rules
@coreycb please note that the SRU from bug 1818614 that is bundled with this one in proposed has failed verification due to a regression (see that bug for details) -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1813007 Title: [SRU] Unable to install new flows on compute nodes when having broken security group rules To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1813007/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1818614] Re: [SRU] Various L3HA functional tests fails often
This regressed SRU is being replaced (updated) by the SRU package uploaded in https://bugs.launchpad.net/neutron/+bug/1823038 i.e. the original -proposed package has been updated to also include the recently landed and backported patch to fix the regression found in the original sru. -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1818614 Title: [SRU] Various L3HA functional tests fails often To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1818614/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1863704] Re: wrongly used a string type as int value for CEPH_VOLUME_SYSTEMD_TRIES and CEPH_VOLUME_SYSTEMD_INTERVAL
@taodd can you please tell me which releases of Ubuntu Ceph this patch already exists in, and which releases you are targeting this SRU at? You have set Bionic, but is it already in Focal, Eoan etc.? ** Also affects: cloud-archive Importance: Undecided Status: New -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1863704 Title: wrongly used a string type as int value for CEPH_VOLUME_SYSTEMD_TRIES and CEPH_VOLUME_SYSTEMD_INTERVAL To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1863704/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1863704] Re: wrongly used a string type as int value for CEPH_VOLUME_SYSTEMD_TRIES and CEPH_VOLUME_SYSTEMD_INTERVAL
** Also affects: cloud-archive/rocky Importance: Undecided Status: New ** Also affects: cloud-archive/train Importance: Undecided Status: New ** Also affects: cloud-archive/queens Importance: Undecided Status: New ** Also affects: cloud-archive/stein Importance: Undecided Status: New -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1863704 Title: wrongly used a string type as int value for CEPH_VOLUME_SYSTEMD_TRIES and CEPH_VOLUME_SYSTEMD_INTERVAL To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1863704/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1818770] Re: Unexpected graybar on top of openstack-dashboard login page
** Also affects: cloud-archive/pike Importance: Undecided Status: New ** Also affects: cloud-archive/stein Importance: Undecided Status: New ** Also affects: cloud-archive/rocky Importance: Undecided Status: New ** Also affects: cloud-archive/queens Importance: Undecided Status: New -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1818770 Title: Unexpected graybar on top of openstack-dashboard login page To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1818770/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1818880] Re: Deadlock when detaching network interface
** Also affects: cloud-archive/mitaka Importance: Undecided Status: New -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1818880 Title: Deadlock when detaching network interface To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1818880/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1818880] Re: Deadlock when detaching network interface
@corey.bryant I just spoke to @halves and he said that the series targets above Bionic are an oversight, since this patch has already landed in anything newer than 2.11 (i.e. the bionic version). We do also need this for Trusty-Mitaka though, so I have added that as a UCA target. I'll let @halves reply about O/P. -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1818880 Title: Deadlock when detaching network interface To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1818880/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1818614] Re: [SRU] Various L3HA functional tests fails often
** Tags added: sts-sru-needed -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1818614 Title: [SRU] Various L3HA functional tests fails often To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1818614/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1818614] Re: [SRU] Various L3HA functional tests fails often
Hi @slawek, while verifying this sru I seem to have hit a bug in your patch - https://pastebin.ubuntu.com/p/4h9bhtB7DF/

My test does the following:
* on master VR host, kill neutron-keepalived-state-change for router
* on master VR host, kill keepalived for router VR
* check that master moved to other node - confirmed
* wait for neutron-l3-agent to respawn neutron-keepalived-state-change etc
* then I hit this bug

** Tags removed: verification-needed-cosmic ** Tags added: verification-failed-cosmic

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1818614 Title: [SRU] Various L3HA functional tests fails often To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1818614/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1818614] Re: [SRU] Various L3HA functional tests fails often
The code does catch the exception but the result is that the local ha_conf//state file remains set to "master" -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1818614 Title: [SRU] Various L3HA functional tests fails often To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1818614/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1843085] Re: Backport of zero-length gc chain fixes to Luminous
** Tags added: sts-sru-needed -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1843085 Title: Backport of zero-length gc chain fixes to Luminous To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1843085/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1847822] Re: CephFS authorize fails with unknown cap type
** Tags added: sts-sru-needed -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1847822 Title: CephFS authorize fails with unknown cap type To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1847822/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1838607] Re: vaultlocker service fails when some interface are DOWN with NO-CARRIER
I'm trying to understand why I do not see this issue. I have several interfaces DOWN and vaultlocker does not have this issue on boot:

root@chespin:~# ip a s| grep ": eno"
2: eno1: mtu 1500 qdisc mq state DOWN group default qlen 1000
3: eno2: mtu 1500 qdisc mq state DOWN group default qlen 1000
4: eno3: mtu 1500 qdisc mq state DOWN group default qlen 1000
5: eno4: mtu 1500 qdisc mq state DOWN group default qlen 1000
6: eno49: mtu 9000 qdisc mq master br-eno49 state UP group default qlen 1000
7: eno50: mtu 1500 qdisc mq state UP group default qlen 1000
root@chespin:~# dpkg -l| grep vaultlocker
ii vaultlocker 1.0.3-0ubuntu1.18.10.1~ubuntu18.04.1 all Secure storage of dm-crypt keys in Hashicorp Vault
root@chespin:~# grep "Dependency failed" /var/log/syslog*
root@chespin:~#

It also appears you are using a VM, so I wonder if that somehow impacts your issue. The only other issue with vaultlocker on boot that I am aware of is bug 1804261, where it can time out reaching the vault api, but that is a different problem.

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1838607 Title: vaultlocker service fails when some interface are DOWN with NO-CARRIER To manage notifications about this bug go to: https://bugs.launchpad.net/bionic-backports/+bug/1838607/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1723030] Re: Under certain conditions check_rules is very sluggish
** No longer affects: cloud-archive/mitaka ** No longer affects: python-oslo.policy (Ubuntu Xenial) -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1723030 Title: Under certain conditions check_rules is very sluggish To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1723030/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1782922] Re: LDAP: changing user_id_attribute bricks group mapping
Hi @dorina-t, this patch is already released in Bionic (Queens) and is ready to be released for the xenial Queens UCA, so let's ping @corey.bryant to see if he can get it released. -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1782922 Title: LDAP: changing user_id_attribute bricks group mapping To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1782922/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1840465] Re: [SRU] Fails to list security groups if one or more exists without rules
** Tags removed: verification-needed ** Tags added: verification-done -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1840465 Title: [SRU] Fails to list security groups if one or more exists without rules To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1840465/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1804261] Re: Ceph OSD units requires reboot if they boot before vault (and if not unsealed with 150s)
** Also affects: ceph (Ubuntu) Importance: Undecided Status: New -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1804261 Title: Ceph OSD units requires reboot if they boot before vault (and if not unsealed with 150s) To manage notifications about this bug go to: https://bugs.launchpad.net/charm-ceph-osd/+bug/1804261/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1804261] Re: Ceph OSD units requires reboot if they boot before vault (and if not unsealed with 150s)
Side note, I was initially unable to manually recover because I was restarting the wrong ceph-volume service:

root@cephtest:~# systemctl -a| grep ceph-volume
ceph-volume@bbfc0235-f8fd-458b-9c3d-21803b72f4bc.service loaded activating start start Ceph Volume activation: bbfc0235-f8fd-458b-9c3d-21803b72f4bc
ceph-volume@lvm-2-bbfc0235-f8fd-458b-9c3d-21803b72f4bc.service loaded inactive dead Ceph Volume activation: lvm-2-bbfc0235-f8fd-458b-9c3d-21803b72f4bc

i.e. there are two and it is the lvm* one that needs restarting (I tried to restart the other, which didn't work).

** Changed in: charm-ceph-osd Assignee: dongdong tao (taodd) => (unassigned) ** Changed in: charm-ceph-osd Status: Triaged => Invalid ** Changed in: charm-ceph-osd Importance: High => Undecided ** Changed in: ceph (Ubuntu) Importance: Undecided => High ** Changed in: ceph (Ubuntu) Assignee: (unassigned) => dongdong tao (taodd) ** Also affects: cloud-archive Importance: Undecided Status: New ** Also affects: cloud-archive/queens Importance: Undecided Status: New ** Also affects: cloud-archive/ussuri Importance: Undecided Status: New ** Also affects: cloud-archive/rocky Importance: Undecided Status: New ** Also affects: cloud-archive/train Importance: Undecided Status: New ** Also affects: cloud-archive/stein Importance: Undecided Status: New -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1804261 Title: Ceph OSD units requires reboot if they boot before vault (and if not unsealed with 150s) To manage notifications about this bug go to: https://bugs.launchpad.net/charm-ceph-osd/+bug/1804261/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1804261] Re: Ceph OSD units requires reboot if they boot before vault (and if not unsealed with 150s)
** Also affects: ceph (Ubuntu Focal) Importance: High Assignee: dongdong tao (taodd) Status: New ** Also affects: ceph (Ubuntu Disco) Importance: Undecided Status: New ** Also affects: ceph (Ubuntu Bionic) Importance: Undecided Status: New ** Also affects: ceph (Ubuntu Eoan) Importance: Undecided Status: New -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1804261 Title: Ceph OSD units requires reboot if they boot before vault (and if not unsealed with 150s) To manage notifications about this bug go to: https://bugs.launchpad.net/charm-ceph-osd/+bug/1804261/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1799737] Re: l3 agent external_network_bridge broken with ovs
@slaweq the bug description says this issue was observed in Queens, which is currently under Extended Maintenance, so it is presumably still eligible for fixes if there is sufficient consensus on their criticality and enough people to review. We also need to consider upgrades from Q -> R -> S where people are still using this config. -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1799737 Title: l3 agent external_network_bridge broken with ovs To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1799737/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1799737] Re: l3 agent external_network_bridge broken with ovs
@axino I assume the environment you have that is using external_network_bridge/external_network_id is quite old and was originally deployed with a version older than Queens? Using these options to configure external networks is really deprecated, and since at least Juno we have used bridge_mappings for this purpose (and to allow > 1 external network). There is an annoying quirk here though (and perhaps this is why you have not switched), which is that with the old way the network will likely not have a provider name (in the db) and therefore migrating it as-is to a bridge_mappings type config will break the network (unless perhaps one can be set manually in the database). -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1799737 Title: l3 agent external_network_bridge broken with ovs To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1799737/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1799737] Re: l3 agent external_network_bridge broken with ovs
Does feel like the code change in https://review.opendev.org/#/c/564825/10/neutron/agent/l3/router_info.py could be reverted though since it only affects the legacy config and is also breaking it. -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1799737 Title: l3 agent external_network_bridge broken with ovs To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1799737/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1776622] Re: snapd updates on focal never finish installing. Can't install any other updates.
I can confirm I hit this with a fresh install of Focal Desktop today. -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1776622 Title: snapd updates on focal never finish installing. Can't install any other updates. To manage notifications about this bug go to: https://bugs.launchpad.net/snapd/+bug/1776622/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1725421] Re: [mitaka] Hide nova lock/unlock if nova api <2.9 and when >= 2.9, "Lock/Unlock Instance" should not be shown at same time.
for ref: sru cancelled ** Changed in: cloud-archive/mitaka Status: Fix Committed => Invalid ** Changed in: horizon (Ubuntu Xenial) Status: Fix Committed => Invalid ** Tags removed: sts-sru-needed -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1725421 Title: [mitaka] Hide nova lock/unlock if nova api <2.9 and when >= 2.9, "Lock/Unlock Instance" should not be shown at same time. To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1725421/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1763442] Re: instance creation fails with "Failed to allocate the network(s), not rescheduling." because neutron-ovs-agent rpc_loop took too long
** Tags added: sts -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1763442 Title: instance creation fails with "Failed to allocate the network(s), not rescheduling." because neutron-ovs-agent rpc_loop took too long To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1763442/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1611987] Re: [SRU] glance-simplestreams-sync charm doesn't support keystone v3
** Tags added: sts -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1611987 Title: [SRU] glance-simplestreams-sync charm doesn't support keystone v3 To manage notifications about this bug go to: https://bugs.launchpad.net/charm-glance-simplestreams-sync/+bug/1611987/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1662804] Re: [SRU] Agent is failing to process HA router if initialize() fails
yakkety-proposed verified, lgtm ** Tags added: verification-done-yakkety -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1662804 Title: [SRU] Agent is failing to process HA router if initialize() fails To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1662804/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1662804] Re: [SRU] Agent is failing to process HA router if initialize() fails
xenial-newton-proposed verified and lgtm ** Tags removed: verification-newton-needed ** Tags added: verification-newton-done -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1662804 Title: [SRU] Agent is failing to process HA router if initialize() fails To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1662804/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1662804] Re: [SRU] Agent is failing to process HA router if initialize() fails
trusty-mitaka-proposed verified and lgtm -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1662804 Title: [SRU] Agent is failing to process HA router if initialize() fails To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1662804/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1662804] Re: [SRU] Agent is failing to process HA router if initialize() fails
xenial-mitaka-proposed verified and lgtm ** Tags removed: verification-mitaka-needed verification-needed ** Tags added: verification-done-xenial verification-mitaka-done -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1662804 Title: [SRU] Agent is failing to process HA router if initialize() fails To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1662804/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1662804] Re: [SRU] Agent is failing to process HA router if initialize() fails
@sil2100 the tests performed are exactly as detailed in the [Test Case] in the description of this bug and I performed a test against a deployment for each proposed series/release i.e. trusty mitaka uca proposed, xenial proposed, yakkety proposed and xenial newton uca proposed. -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1662804 Title: [SRU] Agent is failing to process HA router if initialize() fails To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1662804/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1694337] Re: Port information (binding:host_id) not updated for network:router_gateway after qRouter failover
** Also affects: cloud-archive Importance: Undecided Status: New -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1694337 Title: Port information (binding:host_id) not updated for network:router_gateway after qRouter failover To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1694337/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1681073] Re: Create Consistency Group form has an exception
** Tags removed: sts-sru ** Tags added: sts-sru-needed -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1681073 Title: Create Consistency Group form has an exception To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1681073/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1715254] Re: nova-novncproxy process gets wedged, requiring kill -HUP
** No longer affects: cloud-archive/icehouse -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1715254 Title: nova-novncproxy process gets wedged, requiring kill -HUP To manage notifications about this bug go to: https://bugs.launchpad.net/charm-nova-cloud-controller/+bug/1715254/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1573766] Re: Enable the paste filter HTTPProxyToWSGI by default
Seems ok - http://pastebin.ubuntu.com/26163893/ -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1573766 Title: Enable the paste filter HTTPProxyToWSGI by default To manage notifications about this bug go to: https://bugs.launchpad.net/charm-nova-cloud-controller/+bug/1573766/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1692397] Re: hypervisor statistics could be incorrect
Verified using testcase in description. ** Tags removed: verification-needed-xenial ** Tags added: verification-done-xenial ** Tags removed: verification-needed ** Tags added: verification-done -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1692397 Title: hypervisor statistics could be incorrect To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1692397/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1758868] Re: ovs restart can lead to critical ovs flows missing
I believe that part or all of this issue is resolved by https://bugs.launchpad.net/neutron/+bug/1584647 which is currently being backported to Xenial/Mitaka -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1758868 Title: ovs restart can lead to critical ovs flows missing To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1758868/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs