Public bug reported:
I ran into a problem where the network inside a newly created VM is not
working.
* Pre-conditions:
- the neutron ovs agent has not yet seen any ports from the VM network;
- some other bridge (not the one serving the network in which the VM is
created) is recreated on the node.
* S
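The steps section is cut off above. Purely as an illustration, recreating an
unrelated bridge on the node might look like this (the bridge name br-ex is
an assumption, not from the report):
$ sudo ovs-vsctl del-br br-ex
$ sudo ovs-vsctl add-br br-ex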
Public bug reported:
I ran into a problem where the L3 agent fails to process an external network
change on a router and hits the retry limit.
I'm using a devstack deployment over the master branch.
* Pre-conditions:
- L3 agent in DVR mode
- mechanism driver is openvswitch
* Step-by-step repro
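The repro steps are truncated. A plausible sketch of changing the external
network on a router (the router and network names are assumptions):
$ openstack router set router1 --external-gateway public
$ openstack router set router1 --external-gateway public2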
Public bug reported:
I ran into a problem where unicast RA messages are not accepted by the
OpenFlow rules.
In my configuration I'm using the radvd daemon to send RA messages in my IPv6
network.
Here is the radvd config with the `clients` directive to turn off multicast
messages:
[root@radvd ~]# cat /etc
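The config listing above is cut off. A minimal illustrative radvd.conf using
the `clients` directive (the interface name, prefix, and client address are
placeholders) could look like:
interface eth0
{
    AdvSendAdvert on;
    clients
    {
        fe80::5054:ff:fe12:3456;   # send unicast RAs only to this client
    };
    prefix 2001:db8:1::/64
    {
    };
};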
Public bug reported:
I ran into a problem where the neutron dhcp-agent still replies to
confirmation requests for the old address.
Simple steps to reproduce:
- create a port with IPv6 address in dhcpv6-stateful subnet
- create a VM with cloud-init inside
- change the IPv6 port address
- reboot the VM
Here are
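The demo output is cut off. A hedged sketch of the reproduction commands (the
names and addresses are assumptions):
$ openstack port create --network net6 --fixed-ip subnet=sub6 port1
$ openstack server create --image cirros --flavor m1.tiny --nic port-id=port1 vm1
$ openstack port set --no-fixed-ip --fixed-ip subnet=sub6,ip-address=2001:db8::20 port1
$ openstack server reboot vm1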
Public bug reported:
I can't find a way to set up VPN quotas using the CLI tools: neither the
openstack CLI nor the deprecated neutron CLI has this feature.
I can only update VPN quotas using a direct API request (e.g. via curl), and
I can only list VPN quotas using the neutron CLI tool.
[root@node4578 ~]
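The console output is cut off. For reference, updating VPN quotas through the
API directly might look like the following (the endpoint URL, project ID, and
quota keys are assumptions based on the standard neutron quotas API):
$ curl -s -X PUT -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
      -d '{"quota": {"vpnservice": 20, "ipsec_site_connection": 20}}' \
      http://controller:9696/v2.0/quotas/$PROJECT_ID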
Public bug reported:
We hit an issue where keepalived stops running after an MTU update on the
internal network of a DVR-HA router.
It turned out that the keepalived config references an interface from
qrouter-ns, although the keepalived process itself runs in snat-ns.
Here is a simple demo o
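The demo is cut off. A sketch of a setup that should trigger the problem (the
names and MTU value are assumptions):
$ openstack router create r1 --distributed --ha
$ openstack router add subnet r1 private-subnet
$ openstack network set private --mtu 1400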
Public bug reported:
I found out that after VM creation and after a VM stop/start, the set of OVS
rules in br-int table=60 (TRANSIENT_TABLE) is different.
I have a flat network in which I create a VM. After the VM stop/start, the
set of rules in table 60 for this VM is different from the one t
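To compare the rule sets before and after the stop/start, the transient table
can be dumped directly:
$ sudo ovs-ofctl dump-flows br-int table=60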
Public bug reported:
We discovered that if the nova metadata service is down, the neutron metadata
service starts returning stack traces with a 500 HTTP code to the user.
Demo on a newly installed devstack
$ systemctl stop devstack@n-api-meta.service
Then inside a VM:
$ curl http://169.254
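The command is cut off. The full request against the well-known metadata
address would be along these lines (per the report, it returns an HTTP 500
with a stack trace instead of a clean error):
$ curl -i http://169.254.169.254/latest/meta-data/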
Public bug reported:
When trying to GET a non-existent metadata key within the VM, like
'/latest/meta-data/hostname/abc', the Nova metadata service responds with a
500 HTTP status code:
Inside a VM:
$ curl http://169.254.169.254/latest/meta-data/hostname/abc
500 Internal Server Error
Public bug reported:
I see endless attempts to configure the network in the neutron-dhcp-agent
logs.
Conditions:
- devstack setup with the extra `force_metadata = true` config option;
- DHCP-enabled vxlan network with MTU=1000
$ openstack network create net1000 --mtu 1000
$ openstack subnet create s
Public bug reported:
I encountered very strange behavior when trying to add and then delete an
"access_as_shared" RBAC policy.
I can add it successfully, but the subsequent delete doesn't work:
openstack network rbac create ... # SUCCESS
openstack network rbac delete $ID # FAIL
Prerequisites:
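The commands above are abstract. A concrete hedged example of the failing
pair (the project and network names are assumptions):
$ openstack network rbac create --type network --action access_as_shared --target-project demo2 net1
$ openstack network rbac delete <rbac-policy-id>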
Public bug reported:
Our customer reported an issue where he could not understand what was wrong
with the DHCP service.
As it turned out, while managing the subnet allocation pools, he accidentally
deleted the allocation pool (thinking that the entire range would then be
used).
Then he reset (off
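For illustration, clearing a subnet's allocation pool can be done like this
(assuming the --no-allocation-pool option of openstack subnet set; the subnet
name is a placeholder):
$ openstack subnet set --no-allocation-pool sub1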
Public bug reported:
During the processing of a router by the L3 agent in
`agent_mode={dvr|dvr_snat}`, we create a floatingip_agent_gateway port.
The code in the L3-agent [0] and the neutron-server [1] assumes that there
should be only one such port per L3-agent per network. However, this is no
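The text is truncated above. To inspect these ports, one can list them by
device owner (the network name is an assumption):
$ openstack port list --device-owner network:floatingip_agent_gateway --network public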
Public bug reported:
I investigated the customer's issue and concluded that this code:
https://opendev.org/openstack/neutron/src/commit/0807c94dc9843fff318c21d1f6f7b8838f948f5f/neutron/agent/l3/dvr_fip_ns.py#L155-L160
which deletes the fip-namespace during router processing, leads to connectivity
Public bug reported:
I ran into a problem where the list of networks filtered by segment ID does
not match the expected list.
An important condition is the parallel removal of another network.
Here is a demo:
Console 1:
$ while :; do openstack network create test-net --provider-segment 200
--pr
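The demo is truncated. Presumably the second console runs the filtered
listing in a loop, something like this (assuming the --provider-segment
filter of openstack network list):
$ while :; do openstack network list --provider-segment 200; done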
Public bug reported:
We are testing the network availability of VMs during HA events and ran into
a problem where aborting the live migration of a VM can later break
communication with that VM at the OVS rules level.
The cause of the wrong OVS rules is the stuck INACTIVE port binding
Public bug reported:
We ran into a problem in our OpenStack cluster where traffic does not go
through the virtual network on the node on which the neutron-openvswitch-agent
was restarted.
We had updated from one version of OpenStack to another, and by chance we had
an inconsistency of the D
Public bug reported:
The nova-api raises an exception on an attempt to list VMs sorted by, e.g.,
the task_state key.
Here are the steps to reproduce:
- create two VMs: vm1 in ACTIVE state (cell1) and vm2 in ERROR state (cell0)
- try to list servers sorted by sort_key=task_state
[root@node0 ~]# openstack server
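The CLI command is cut off. At the API level, the failing request is
presumably along these lines (the endpoint URL is an assumption; sort_key is
a standard Nova list-servers query parameter):
$ curl -s -H "X-Auth-Token: $TOKEN" "http://controller:8774/v2.1/servers/detail?sort_key=task_state"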
Public bug reported:
I found that neutron-server does not wait for successful port provisioning
from the dhcp agent in the case of VM creation: the DHCP entity is not added
into a provisioning_block by neutron-server for such a port.
As a result, nova receives a notification that the port is plugged, wh
Public bug reported:
In my CI run I got an error in the test_resize_server_revert test case [1]
{3}
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_resize_server_revert
[401.454625s] ... FAILED
Captured traceback:
~~~
Traceback (most recent call last):
Public bug reported:
I got an error in the test_distributed_port_binding_deleted_by_port_deletion
test on my CI run [1].
I also found the same failure in another CI run [2]:
FAIL:
neutron.tests.unit.plugins.ml2.test_db.Ml2DvrDBTestCase.test_distributed_port_binding_deleted_by_port_deletion
tags:
Public bug reported:
I ran into a problem where a static route just gets stuck in the snat
namespace, even after removing all static routes from a distributed router
with HA enabled.
Here is a simple demo from my devstack setup:
[root@node0 ~]# openstack network create private
[root@node0 ~]# op
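The demo is cut off. A hedged sketch of the rest of the scenario (the names
and CIDRs are assumptions):
$ openstack router set r1 --route destination=10.200.0.0/24,gateway=10.0.0.254
$ openstack router set r1 --no-route
$ sudo ip netns exec snat-<router-id> ip route   # the static route is expected to be gone here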
Public bug reported:
I'm trying to do an INACTIVE port binding cleanup using the
neutron-remove-duplicated-port-bindings tool from bug #1979072.
But I found an issue with this helper tool: it doesn't remove entries from
the ml2_port_binding_levels table, which still block new port bindings to the
host.
Demo
Public bug reported:
We found an issue where a newly created HA DVR router gets stuck in the
backup state and does not transition to the primary state.
Preconditions:
1) there is no router with a specific external network yet
2) the router needs to go through a quick creation->deletion, and then the next
creati
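The preconditions are cut off. A hedged sketch of the quick
create->delete->create sequence (the names are assumptions):
$ openstack router create r1 --distributed --ha
$ openstack router set r1 --external-gateway public
$ openstack router delete r1
$ openstack router create r1 --distributed --ha
$ openstack router set r1 --external-gateway public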
Public bug reported:
We encountered a problem where a floating IP is not removed from the snat-ns
when the FIP moves from the centralized to the distributed state (i.e. when a
host is bound to the associated fixed IP address).
This happens when the fixed IP was originally created with a
Public bug reported:
We ran into a problem with a customer where some external integration tries
to remove all ports using the neutron API, including router ports.
It seems only the router ports with the router_ha_interface device owner are
allowed to be deleted; all other router ports cannot be dele
Public bug reported:
I am using the Queens release, and the VMs' tap interfaces are plugged into
the OVS br-int.
I'm seeing a case where OpenFlow entries are not completely removed when I
stop my VM (name='my-vm').
It is only reproducible when there is some other activity on the node for
different VMs: in my c
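The report is truncated here. To check for leftover flows of a stopped VM,
something along these lines can be used (the MAC address is a placeholder):
$ sudo ovs-ofctl dump-flows br-int | grep fa:16:3e:12:34:56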
[0] https://bugs.launchpad.net/neutron/+bug/1780370
** Affects: neutron
Importance: Undecided
Assignee: Anton Kurbatov (akurbatov)
Status: New
** Changed in: neutron
Assignee: (unassigned) => Anton Kurbatov (akurbatov)