Public bug reported:
This bug is the same as [1], but that bug wasn't solved completely: it also fails
occasionally during the REVERT_RESIZE operation.
More detail:
Description
===
All of this happens only when Neutron is using OVS with hybrid plugging.
When reverting a resized instance from
Reviewed: https://review.opendev.org/741712
Committed:
https://git.openstack.org/cgit/openstack/nova/commit/?id=806575cfd5327f96e62462f484118d06d17cbe8d
Submitter: Zuul
Branch: master
commit 806575cfd5327f96e62462f484118d06d17cbe8d
Author: Alex Deiter
Date: Fri Jul 17 20:38:55 2020 +
Reviewed: https://review.opendev.org/745752
Committed:
https://git.openstack.org/cgit/openstack/keystone/commit/?id=7d6c71ba26694c21110280e741b9ffe2d36a94ca
Submitter: Zuul
Branch: master
commit 7d6c71ba26694c21110280e741b9ffe2d36a94ca
Author: melanie witt
Date: Tue Aug 11 21:19:01 2020 +0
** Description changed:
- This issue is being treated as a potential security risk under
- embargo. Please do not make any public mention of embargoed
- (private) security vulnerabilities before their coordinated
- publication by the OpenStack Vulnerability Management Team in the
- form of an offi
Public bug reported:
We have a system with some dead DVR hypervisors. Ports of type
network:floatingip_agent_gateway are still associated with them and remain in the
ACTIVE state. Deleting the L3 agent doesn't delete the ports, which
means extra floating IPs are still consumed.
* Version
OpenStack Train de
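Until the underlying cleanup behavior is fixed, stale gateway ports like these can be located and removed by hand. A minimal sketch of that cleanup, assuming the --device-owner filter is available in the installed client and using a placeholder port ID:
# List floating IP agent gateway ports
$ openstack port list --device-owner network:floatingip_agent_gateway --long
# Delete a port confirmed to belong to a dead hypervisor (placeholder ID)
$ openstack port delete <port-id>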
Public bug reported:
If import is called with all_stores_must_succeed=True and a store fails
during set_image_data(), the store will remain in
os_glance_importing_stores forever, never going into the
os_glance_failed_import list. This means a polling client will never
notice that the import failed
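For context, a polling client in this flow typically re-reads the image and waits for the store to move from the importing list to the failed list; a minimal sketch of such a loop, assuming the property names as written above and a placeholder $IMAGE_ID:
$ while true; do \
    openstack image show $IMAGE_ID -c properties -f json; \
    sleep 5; \
  done
With the behavior described here, the failed store never leaves os_glance_importing_stores, so the condition a client would normally wait for (the store showing up in os_glance_failed_import, or the import finishing) is never observed.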
Public bug reported:
I am trying to delete a nova-compute service for a retired hypervisor:
$ openstack compute service delete 124
Failed to delete compute service with ID '124': Service id 124 refers to
mult
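When a service ID resolves ambiguously like this, a common first step (a sketch, not necessarily what was done here) is to list the service records and the cells they live in before retrying the delete:
$ openstack compute service list --service nova-compute --long
$ nova-manage cell_v2 list_cells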
Public bug reported:
I have deployed OpenStack using openstack-ansible (Ussuri on CentOS 8). I
have integrated Designate with Neutron and am trying to verify my setup,
so I did the following.
# Mapping network with dns_domain = foo.com.
openstack network set e7b11bae-e7fa-42c8-9739-862b60d5acce --dns-doma
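A typical way to verify a mapping like this afterwards (an illustration, not necessarily the reporter's exact steps) is to create a port or floating IP on the network and then check that the matching recordsets appear in the zone:
$ openstack recordset list foo.com.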
Public bug reported:
[Request]
Reporting this RFE on behalf of a customer, who would like to inquire about the
possibility of changing the CIDR of a subnet in Neutron.
As of today, the only alternative for expansion is to create new
subnets/gateways to accommodate more hosts. The customer's desire
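For reference, the workaround mentioned above is to attach an additional subnet to the same network rather than resizing the existing one; a minimal sketch with placeholder names and an example range:
$ openstack subnet create --network <network> --subnet-range 192.0.2.0/24 <new-subnet>
This adds addresses for new hosts but leaves the original subnet's CIDR untouched, which is why an in-place CIDR change is being requested.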
Public bug reported:
Using NoCloud.
The following output from ip addr show is not processed properly (see the
can0 link/can entry), causing the setup of other interfaces to fail.
ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00
Public bug reported:
I found in the neutron-tempest-plugin-designate-scenario job that the test
neutron_tempest_plugin.scenario.test_dns_integration.DNSIntegrationTests.test_fip
failed due to an internal server error in the Neutron server.
Failed job: https://fc098855e3982674115d-9f073b45b43178f46f80f0c5ee
Public bug reported:
Most of the tests in the neutron-ovn-tripleo-ci-centos-8-containers-multinode
job are failing due to ssh issues.
Errors example:
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/tempest/common/utils/__init__.py",
line 89, in wrapper
return f(*
*** This bug is a duplicate of bug 1891307 ***
https://bugs.launchpad.net/bugs/1891307
Public bug reported:
Most of the tests in the neutron-ovn-tripleo-ci-centos-8-containers-multinode
job are failing due to ssh issues.
Errors example:
Traceback (most recent call last):
File "/usr/lib/py
Public bug reported:
In my environment, the Cinder volume backend is Ceph RBD.
# Original instance boot with volume-backend from the image01
$ nova boot --block-device
source=image,id=$image01,dest=volume,size=50,shutdown=remove,bootindex=0
--flavor m1.tiny --nic net-id=$net vm01
# create instance image
$
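The truncated command above creates an image from the instance; a typical way to do that with the same client (an assumption, not necessarily the exact command used here) is:
$ nova image-create vm01 image02
where image02 is a placeholder name for the new image. With a volume-backed instance this produces a snapshot image whose block device mapping points at a Cinder volume snapshot rather than image data stored in Glance.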
Public bug reported:
In the https://review.opendev.org/#/c/744280/ patch, 100% of the nova-
grenade-multinode job runs failed in Zuul. The log is:
https://zuul.opendev.org/t/openstack/build/dfce6a7688274766bfaff4816f3dab97/log/job-output.txt#4375-4476
Some details:
2020-08-12 04:23:46.473309 | pri