Public bug reported:
In a VLAN provider environment, when the physical devices are not
configured for multicast, VRRP multicast is actually broadcast
(flooded) throughout the network [1][2]. In a VXLAN environment, multicast
packets enter the full-mesh tunnels (flood) [3] at each node, consum
** Changed in: neutron
Status: Won't Fix => In Progress
** Changed in: neutron
Importance: Undecided => Medium
** Changed in: neutron
Assignee: (unassigned) => LIU Yulong (dragon889)
Public bug reported:
Because DVR was not enabled, the neutron_tempest_plugin.scenario.test_migration
tests did not run:
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_319/936036/3/check/neutron-tempest-plugin-openvswitch/3193b41/job-output.txt
https://a882c2
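If the job is meant to exercise those tests, DVR has to be enabled in its devstack configuration. A minimal sketch, assuming devstack's Q_DVR_MODE variable is the relevant switch (not verified against this job's definition):
# local.conf on the devstack nodes
Q_DVR_MODE=dvr_snat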
Public bug reported:
When a router is going to perform migration actions, the router ports' device owner
is changed after admin state down. But these actions will not trigger a port update
event, so the ovs-agent side has wrong router port information after the router admin
state is brought up again. So, l3_db should send such por
Public bug reported:
Ports are removed from the br-int, but the ovs-agent resource_cache
still has them. During this removal period, the port attributes may be
changed/updated. It is even worse if the update event is not received by the
agent before the port is plugged into the br-int again. The ovs-agent wil
Setting the service type of the IPv6 subnet of the external network should solve
the problem. The service type should be "network:router_gateway". See the
doc for more details:
https://docs.openstack.org/neutron/latest/admin/config-service-subnets.html
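For instance, a hedged CLI sketch (the subnet name here is hypothetical):
$ openstack subnet set --service-type network:router_gateway ipv6-ext-subnet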
** Changed in: neutron
Status: In Progress => Invalid
Public bug reported:
OVS-agent wants to clean flow tables one by one during restart, but
actually it does not. [1] If one table shares the same cookie with other
tables, all related flows will be cleaned at once, which is a bit
heavy-handed.
[1]
https://github.com/openstack/neutron/blob/master/neutron/plu
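For reference, deleting by cookie alone spans every table, while adding a table match restricts the deletion. A minimal ovs-ofctl sketch (bridge, cookie and table values are made up):
# deletes matching flows in ALL tables
$ ovs-ofctl del-flows br-int "cookie=0xabc/-1"
# deletes matching flows only in table 21
$ ovs-ofctl del-flows br-int "cookie=0xabc/-1,table=21"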
** Changed in: neutron
Status: Invalid => New
https://bugs.launchpad.net/bugs/2052681
Title:
Many stale neutron-keepalived-state-change processes left after upgrade
Public bug reported:
A post-upgrade script is needed to remove those stale "ip -o monitor" and
traditional "neutron-keepalived-state-change" processes.
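A minimal cleanup sketch (hypothetical; the match patterns should be double-checked against a real host before running):
$ pkill -f "ip -o monitor"
$ pkill -f neutron-keepalived-state-change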
** Affects: neutron
Importance: Undecided
Status: New
Public bug reported:
neutron-keepalived-state-change runs neutron-rootwrap-daemon anyway,
regardless of the config
https://review.opendev.org/q/topic:%22bug/1680183%22
This is the series of patches which replaces "ip -o monitor" with a pyroute2
native Python process.
But we noticed that the neutron-keepaliv
This looks more like a libvirt error, or a nova-side problem. Neutron is not
responsible for creating the tap-XXX device; it is plugged by
nova-compute. We need to find out why the tap device is not created before
the TC rules are created.
** Also affects: nova
Importance: Undecided
Status: New
** Changed in: neutron
Status: New => Won't Fix
https://bugs.launchpad.net/bugs/1847747
Title:
[RPC] digging RPC timeout for client and server
Status in neutron:
Public bug reported:
We recently met an issue during VM live migration:
1. nova starts live migration
2. ports are plugged on the new host
3. neutron-ovs-agent starts to process the port, but the port is in the 'added' and
'updated' sets at the same time.
4. because nova has still not activated the destination port b
Public bug reported:
1. The agent resource cache has an infinitely growing set: _satisfied_server_queries
https://github.com/openstack/neutron/blob/master/neutron/agent/resource_cache.py#L41
there is no entry removal for this set.
2. Because this set has a non-standard structure, for instance:
set([('P
Public bug reported:
USER    PID   %CPU %MEM VSZ      RSS      TTY STAT START TIME    COMMAND
neutron 11686 6.8  7.3  29205516 28869568 ?   S    Nov30 1438:32 /usr/bin/python2 /usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server
Public bug reported:
If an external network has many routers on it, then creating a subnet will
be time-consuming. Neutron will try to update all the routers' external
gateway ports anyway, ignoring the subnet service_type.
** Affects: neutron
Importance: Undecided
Status: New
Public bug reported:
For instance:
neutron.tests.fullstack.test_securitygroup.TestSecurityGroupsSameNetwork.test_securitygroup
https://f9c28e74a4e26f2c90e8-fa663b3b43bb6eacc0d3184a52007f13.ssl.cf5.rackcdn.com/888098/11/check/neutron-
fullstack-with-uwsgi/e34e362/testr_results.html
Code:
https://
Public bug reported:
Recently we met a strange issue between neutron-server and neutron-dhcp-agents.
In a long-running deployment, we just restarted all neutron-servers, and then we
failed to boot VMs. Yes, it is vif-plug-timeout! We noticed that the DHCP
provisioning block was not deleted.
Our operator
Public bug reported:
When a tenant creates the first HA router-1, neutron will try to create an HA
network for this project.
But during the HA network creation procedure, which we assume has 2 steps:
(1) create the HA network
(2) create the subnet for this HA network
another router-2 creation API call is com
Public bug reported:
# openstack network create test-network
+----------------+-------+
| Field          | Value |
+----------------+-------+
| admin_state_up |
Public bug reported:
Port arp_spoofing_protection will install flows like this:
table=0, priority=9,in_port=2 actions=goto_table:25
table=25, priority=2,in_port=2,dl_src=fa:16:3e:54:f0:71 actions=goto_table:60
For network ports, or ports with port_security_enabled = False, those flows
will be deleted by setu
Public bug reported:
For L3 DVR with VLAN networks, the HA VRRP traffic from VLAN HA networks
will keep flooding the physical bridges on compute nodes forever.
On compute nodes, the physical bridges can directly drop the multicast
packets of CIDR "l3_ha_net_cidr".
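A hedged sketch of such a drop flow (the bridge name and priority are made up; 169.254.192.0/18 is neutron's default l3_ha_net_cidr):
$ ovs-ofctl add-flow br-physnet "priority=200,ip,nw_src=169.254.192.0/18,actions=drop"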
** Affects: neutron
I
Public bug reported:
For L3 DVR with VLAN networks, the east-west traffic between router
subnets will flood the physical bridge.
Assuming we have resources like this:
1. subnet-1 10.10.10.0/24 with gateway-1 10.10.10.1, mac-address-[01]
2. subnet-2 20.20.20.0/24 with gateway-2 20.20.20.1,
*** This bug is a duplicate of bug 1988077 ***
https://bugs.launchpad.net/bugs/1988077
** Changed in: neutron
Importance: Undecided => Wishlist
** Changed in: neutron
Status: New => Opinion
** This bug has been marked a duplicate of bug 1988077
Noisy neutron-openvswitch-agent se
Public bug reported:
We are going to implement packet rate limiting on the ovs bridge by using meter
rules [1]; at the same time, meters can also be used to limit the
bandwidth. os-ken (ryu) supports the rule type OFPMF_KBPS [2]. And
usually, some smart-NICs for ovs offloading will support offloading
m
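For illustration, a bandwidth meter can be sketched with plain ovs-ofctl rather than the os-ken API (OpenFlow 1.3; bridge, meter id, port and rate are made up):
$ ovs-ofctl -O OpenFlow13 add-meter br-int "meter=1,kbps,bands=type=drop,rate=10000"
$ ovs-ofctl -O OpenFlow13 add-flow br-int "table=0,in_port=10,actions=meter:1,normal"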
Public bug reported:
Currently the neutron L3 router runs radvd to send out RA packets carrying
the ManagedFlag, LinkMTU and prefix of the IPv6 subnet. But remember that we
have a distributed SDN controller, aka the ovs-agent, which can do this work
more naturally and gracefully.
The current radvd config looks like
Public bug reported:
In a real cloud production environment there are many hosts; some can
access the external network, some cannot. Some have enough NICs to serve
different networks, while some lack NICs.
For instance, an external network, provider:network_type is ``vlan``,
provider:phys
related to these problems.
** Affects: neutron
Importance: High
Assignee: LIU Yulong (dragon889)
Status: In Progress
** Changed in: neutron
Importance: Undecided => High
** Changed in: neutron
Status: New => In Progress
** Changed in: neutron
Assignee: (unas
Public bug reported:
OVN related cases keep failing recently; examples:
[1]
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_d0a/804378/3/check/neutron-functional-with-uwsgi/d0a39b4/testr_results.html
[2]
https://d94b4cd84b6a46f0b36c-82f789b81dd
Public bug reported:
Since neutron supports packet rate limit rules [1][2], it's time for us
to support real pps limitation on the agent side for neutron ports and IPs.
So this RFE is for the real pps limitation functionality. We are going to
implement the neutron port's pps limitation with ovs meters first. IP
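A hedged pps sketch with plain ovs-ofctl, analogous to the bandwidth meter above but using the pktps flag (values are made up):
$ ovs-ofctl -O OpenFlow13 add-meter br-int "meter=2,pktps,bands=type=drop,rate=5000"
$ ovs-ofctl -O OpenFlow13 add-flow br-int "table=0,in_port=11,actions=meter:2,normal"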
Public bug reported:
The dsvm-functional tests cannot run, even for test cases not involving
OVN.
$ tox -e dsvm-functional
neutron.tests.functional.db.test_migrations.TestModelsMigrationsMysql
dsvm-functional develop-inst-noop: /opt/stack/neutron
dsvm-functional installed:
...
...
Public bug reported:
In the L3 meeting 2021-06-30, I mentioned this topic.
https://meetings.opendev.org/meetings/neutron_l3/2021/neutron_l3.2021-06-30-14.00.log.html#l-28
The current L3 resources (floating IPs, router interfaces, router external gateways)
processing procedure is a bit heavy, and somet
Public bug reported:
Each case shares the common lease path for dhclient;
for instance, on CentOS it is: /var/lib/dhclient/dhclient.leases.
That means all fullstack cases will use this file to store
the fake VMs' NIC DHCP lease information.
After running the fullstack cases several times, the dhclie
-conf
** Affects: neutron
Importance: Undecided
Assignee: LIU Yulong (dragon889)
Status: In Progress
** Changed in: neutron
Assignee: (unassigned) => LIU Yulong (dragon889)
** Changed in: neutron
Status: New => In Progress
Public bug reported:
When instances are booting, they will try to retrieve metadata from
Nova via the path of Neutron virtual switches (bridges), virtual devices,
namespaces and metadata agents. After that, the metadata agent has no other
functionality. In large-scale scenarios, a large number of depl
Public bug reported:
These 4 cases fail frequently:
neutron.tests.functional.services.l3_router.test_l3_dvr_ha_router_plugin.L3DvrHATestCase
1) test_agent_gw_port_delete_when_last_gateway_for_ext_net_removed
2) test_delete_agent_gw_port_for_network
neutron.tests.functional.services.l3_router
"ovs datapath_type netdev" should only be used for VM, neutron router
related virtual network devices are not compatible with it, [1] has
those limitations. The only way you can run L3 routers with VMs (using
DPDK) is to run l3-agents and ovs-agents in dedicated nodes with data
patch type system, w
** Changed in: neutron
Status: In Progress => Fix Released
https://bugs.launchpad.net/bugs/1811352
Title:
[RFE] Include neutron CLI floatingip port-forwarding support
Public bug reported:
provisioning_blocks are mostly used for compute ports, to notify that
Neutron has finished the networking setup so the VM can power on. But
now, Neutron does not check the port's device owner; it adds a
provisioning_block to all types of ports, even ports used by neutron itself.
For
Public bug reported:
Recently, nothing changed in the test case, but we got this failure:
https://04a9f9fdd9afdf12de4e-f889a65b4dfb1f628c8309e9eb44b225.ssl.cf2.rackcdn.com/787304/5/check/neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid/bb9e9a9/testr_results.html
LOG:
Traceback (most r
Public bug reported:
Code: neutron master
Env: a new deployment by devstack
$ tox -e dsvm-functional
neutron.tests.functional.agent.test_firewall.FirewallTestCase.test_rule_ordering_correct
==========================
Failures during discovery
==========================
--- import errors ---
Fail
Public bug reported:
The floating IP port forwardings table has constraints:
TABLE_NAME = 'portforwardings'
op.create_unique_constraint(
    constraint_name=('uniq_port_forwardings0floatingip_id0'
                     'external_port0protocol'),
    table_name=TABLE_NAME,
    colu
*** This bug is a duplicate of bug 1883089 ***
https://bugs.launchpad.net/bugs/1883089
** This bug has been marked a duplicate of bug 1883089
[L3] floating IP failed to bind due to no agent gateway port(fip-ns)
Public bug reported:
Recently I played with the ovs-agent plus bagpipe_bgpvpn. It has a
mechanism which installs the same flows to br-tun by using BGP. But
now arp_responder and l2_population are enforced, after this patch
https://review.opendev.org/c/openstack/neutron/+/669938.
** Affects: neu
Adding a service type [1] to the subnets of your public network will overcome
the 'problem'.
It's a bit complicated because there are several deployment and usage situations:
1. Non-DVR and IPv4 only
(1) the public network has only one subnet, which serves both floating IPs and
external gateways.
set
Public bug reported:
The cache needs to be cleaned when the router is down, otherwise the port
forwarding extension will skip all object processing because the cache is hit.
** Affects: neutron
Importance: High
Status: Confirmed
For a DVR router, the snat node should be configured with "dvr_snat". For
now, a "legacy/ha router" can run on a "dvr_snat" node. But a DVR router
cannot run on a "legacy" node, since the RouterInfo instance type is
based on the agent mode [1].
[1]
https://github.com/openstack/neutron/blob/master/neutron
Public bug reported:
Create a dvr router on a 'legacy' agent node, and we get AttributeError:
'DvrLocalRouter' object has no attribute 'snat_namespace'.
ERROR LOG:
.agent.l3.agent [-] Failed to process compatible router:
b247f145-569a-4d5a-bdd8-31a5213641ea: AttributeError: 'DvrLocalRouter' object
Public bug reported:
Deleting a port failed with a final DB error:
Feb 25 19:24:34 devstack neutron-server[15279]: DEBUG
neutron.api.rpc.handlers.l3_rpc [None req-a6ccb04c-401f-4e23-bc16-e7fc9cfc9ae6
None None] New status for floating IP f681d60c-edf9-41e9-b8b3-70c7cf3d8d42:
ERROR {{(pid=15361) upda
bd488dcf6af-
941e362261845d27ff2c0dd3e3a521f3.ssl.cf1.rackcdn.com/773283/6/check
/openstack-tox-py36/4e6fea3/testr_results.html
Log search:
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22AssertionError%3A%201%20not%20greater%20than%201%5C%22
** Affects: neutron
Public bug reported:
Examples:
Build failed (check pipeline). For information on how to proceed, see
https://docs.opendev.org/opendev/infra-manual/latest/developers.html#automated-testing
openstack-tox-docs
https://zuul.opendev.org/t/openstack/build/888eec88174946a89e4725e216f4aef
** Changed in: neutron
Status: In Progress => Fix Committed
** Changed in: neutron
Status: Fix Committed => Fix Released
https://bugs.launchpad.net/bugs/173206
Public bug reported:
These are the neutron-tempest-plugin-scenario-openvswitch job's cases:
https://812aefd7f17477a1c0dc-8bc1c0202523f17b73621207314548bd.ssl.cf5.rackcdn.com/772255/6/check/neutron-tempest-plugin-scenario-openvswitch/5221232/testr_results.html
This is neutron-tempest-dvr-ha-multinod
** Changed in: neutron
Status: New => Won't Fix
https://bugs.launchpad.net/bugs/1907175
Title:
intermittently ALL VM's floating IP connection is disconnected, and
Public bug reported:
For cloud providers, limiting the packets per second (pps) of a VM NIC is
popular and sometimes essential. Transmitting a large set of packets for a VM
on a physical compute host will consume CPU/physical-NIC capacity. And for
small packets, even if the bandwidth is low, the pps can still be h
Public bug reported:
Neutron creates N NetworkDhcpAgentBindings (N is equal to
dhcp_agents_per_network) for a network even if its subnets have DHCP
disabled. This means that no matter the DHCP state, the dhcp_scheduler will
schedule the network anyway.
Reproduce steps:
$ source demo_rc
$ openstack networ
Public bug reported:
Neutron routers now support SNAT when the attribute ``enable_snat`` of the
gateway is set to True.
This enables all the VMs which have no bound floating IP to access the
public world.
But, generally, the DataCenter bandwidths for cloud providers are not free. And
some
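A hedged CLI sketch of toggling SNAT on a router gateway (router and network names are made up; requires the ext-gw-mode extension):
$ openstack router set --external-gateway public --disable-snat router1
$ openstack router set --external-gateway public --enable-snat router1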
last 0 s (1000 adds)
2020-12-02T08:13:47.834Z|01322|connmgr|INFO|br-int<->unix#1334: 222 flow_mods
in the last 0 s (222 adds)
So maybe we should increase the trunk (step) size for the ovs-ofctl bundle
installation. We do not want to hard-code a fixed value because the vswitchd ma
2020-11-30 16:46:55.942 62088 ERROR oslo_messaging.rpc.server for level in
levels:
2020-11-30 16:46:55.942 62088 ERROR oslo_messaging.rpc.server TypeError:
'NoneType' object is not iterable
2020-11-30 16:46:55.942 62088 ERROR oslo_messaging.rpc.server
** Affects: neutron
Importance
raise n_exc.PortNotFound(port_id=id)
2020-12-01 10:52:46.738 62077 ERROR oslo_messaging.rpc.server PortNotFound:
Port 3f838c59-e84a-49de-a381-f3328d47a69f could not be found.
2020-12-01 10:52:46.738 62077 ERROR oslo_messaging.rpc.server
2020-12-01 10:53:03.921 62076 ERROR oslo_messaging.rpc.ser
Neutron has covered such an issue [1][2][3].
[1]
https://github.com/openstack/neutron/blob/master/neutron/services/portforwarding/pf_plugin.py#L433-L439
[2]
https://github.com/openstack/neutron/blob/master/neutron/db/migration/alembic_migrations/versions/rocky/expand/867d39095bf4_port_forwarding.py
Public bug reported:
Add a new ovs agent extension to support fully distributed DHCP for VMs on
compute nodes, especially for large-scale clouds. We had some discussions
during the Shanghai PTG:
https://etherpad.opendev.org/p/Shanghai-Neutron-Planning-restored
http://lists.openstack.org/pipermail/op
Public bug reported:
Some agent extension implementations may need the router_info to
do cleanup work. So the L3 agent extensions' delete action can be moved
ahead of the L3 agent cache deletion.
** Affects: neutron
Importance: Low
Assignee: LIU Yulong (dragon889
OK, let's mark this as "Won't Fix". Moving to the state-change approach can be
a way for newly installed environments. But for a running cloud, the existing
keepalived-state-change processes may need to be re-created.
** Changed in: neutron
Status: New => Won't Fix
** Affects: neutron
Importance: Low
Assignee: LIU Yulong (dragon889)
Status: In Progress
** Changed in: neutron
Importance: Undecided => Low
** Changed in: neutron
Status: New => In Progress
Public bug reported:
For now, if the L3 agent changes the agent mode (dvr to dvr_no_external,
or dvr_no_external to dvr), the floating IP traffic will not recover.
There is a manual workflow to achieve the transition, but it needs
to shut down the router during the transition. The drawback is o
*** This bug is a duplicate of bug 1861674 ***
https://bugs.launchpad.net/bugs/1861674
** Changed in: neutron
Status: New => Incomplete
** Changed in: neutron
Status: Incomplete => Confirmed
** This bug has been marked a duplicate of bug 1861674
Gateway which is not in subne
.rpc.server fixed_ips = port.get('fixed_ips', [])
2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server AttributeError:
'NoneType' object has no attribute 'get'
2020-06-11 15:39:28.391 1285214 ERROR oslo_messaging.rpc.server
** Affects: neutron
Impo
Public bug reported:
commit 62bbc262c3c7f633eac1d09ec78c055eef05166a changes the default code
branch condition, which breaks existing cloud static network configs.
[1]
https://github.com/canonical/cloud-init/commit/62bbc262c3c7f633eac1d09ec78c055eef05166a#r39437585
** Affects: cloud-init
Public bug reported:
ENV: stable/queens, but master branch basically has the same code.
An unexpected HA router scheduled instance shows up after manual
scheduling and admin-state down/up.
Step to reproduce:
$ openstack network agent list --router c0f96d58-5521-40fa-9536-205635facc69
--long
Public bug reported:
ENV: we met this issue on our stable/queens deployment, but the master branch
has the same code logic.
When the L3 agent gets a router update notification, it will try to
retrieve the router info from the DB server [1]. But at this time, if the
message queue is down/unreachable, it will
Public bug reported:
Code branch: master
Assuming you have 5 nodes to run a multi-node devstack deployment with neutron
and OVN: one node for the "ovn-northd" DB only, two chassis for compute, and
two for gateway.
For the DB-only node, if you do not add "ovn-controller" to the enable_services
list
** This bug is no longer a duplicate of bug 1793029
adding 0.0.0.0/0 address pair to a port bypasses all other vm security
groups
https://bugs.launchpad.net/bugs/1867119
Public bug reported:
[security] Adding allowed-address-pair 0.0.0.0/0 to one port will open all
other ports' protocols under the same security group
When adding allowed-address-pair 0.0.0.0/0 to one port, it will unexpectedly open
all other ports' protocols under the same security group. First found in stable/queens,
bu
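A hedged reproduction sketch (the port id is a placeholder):
$ openstack port set --allowed-address ip-address=0.0.0.0/0 <port-id>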
Public bug reported:
ENV: devstack, master
$ openstack security group list
+----+------+-------------+---------+
| ID | Name | Description | Project |
** This bug is no longer a duplicate of bug 1732067
openvswitch firewall flows cause flooding on integration bridge
https://bugs.launchpad.net/bugs/1866445
Title:
br-in
Public bug reported:
One question: how can the IPv6 traffic to the outside world run directly from
the compute host?
We had a BP for this before:
https://blueprints.launchpad.net/neutron/+spec/ipv6-router-and-dvr
And one spec for it:
https://review.opendev.org/#/c/136878/
** Affects: neutron
Impo
Public bug reported:
Branch:
neutron master, HEAD: commit 7a0e5185c6cf7b5f8bcfe50576e86798947a7ba7
Exception:
File "/home/yulong/github/neutron/neutron/agent/l3/dvr_edge_router.py",
line 160, in initialize
self._create_snat_namespace()
File "/home/yulong/github/neutron/neutro
Public bug reported:
ENV: devstack with master branch neutron
HEAD: ab24a11f13cdfdf623a4b696f469aa621d59405b
Reproduce:
1. network1 + subnet1: 192.168.1.0/24
2. network2 + subnet2: 192.168.2.0/24
3. dvr router with attached subnets: subnet1, subnet2
4. VM1 (192.168.1.10) and VM2 (192.168.1.11) cr
** Changed in: neutron
Status: Invalid => New
https://bugs.launchpad.net/bugs/1858262
Title:
"CREATE TABLE ovn_hash_ring" Specified key was too long; max key
length
** Changed in: neutron
Status: New => Invalid
Public bug reported:
During the DB upgrading of neutron, the following error raised:
CREATE TABLE ovn_hash_ring (
node_uuid VARCHAR(36) NOT NULL,
group_name VARCHAR(256) NOT NULL,
hostname VARCHAR(256) NOT NULL,
created_at DATETIME NOT NULL,
updated_at
Public bug reported:
neutron-tempest-plugin-dvr-multinode-scenario FAILURE in 1h 22m 34s
(non-voting)
neutron-tempest-plugin-scenario-linuxbridge FAILURE in 1h 01m 47s
neutron-tempest-plugin-scenario-openvswitch FAILURE in 1h 02m 16s
neutron-tempest-plugin-scenario-openvswitch-iptables_
/blob/master/neutron/agent/l3/dvr_local_router.py#L260
** Affects: neutron
Importance: Medium
Assignee: LIU Yulong (dragon889)
Status: New
** Changed in: neutron
Assignee: (unassigned) => LIU Yulong (dragon889)
** Changed in: neutron
Status: Won't Fix => New
Public bug reported:
This should be a NOTE rather than a bug, in case someone meets this issue
someday, since the minimum supported Python version of neutron is now 3.7.
Branch: master
heads:
2a8b70d Merge "Update security group rule if port range is all ports"
fd5e292 Merge "Remove neutron-grenade jo
Public bug reported:
If a DVR+HA router has an external gateway, the snat-namespace will be
initialized twice during agent restart.
And that initialization function runs many [1][2] external resource processing
actions, which will definitely increase the starting time of the agent.
https://github.c
Public bug reported:
For one network with multiple SLAAC IPv6 subnets, a created port will
automatically get an address from every IPv6 subnet by default. For some use
cases, we do not want the port to have addresses from all the IPv6 subnets,
but only one of them. It is a behavior for neutron no
Public bug reported:
RPC timeouts can be found frequently, but we have no statistical data for
them. A simple log can help. Since all the projects use oslo.messaging
as middleware between services and the message queue, we can add a log in
it, something like this, from a local test:
2019-10-11 19:23:05.7
High
Assignee: LIU Yulong (dragon889)
Status: New
https://bugs.launchpad.net/bugs/1845145
Title:
[L3] add ability for iptables_manager to ensure rule was
Public bug reported:
Code: master with nothing changed.
Exception:
Sep 17 00:41:16 controller neutron-server[10222]: ERROR
oslo_messaging.rpc.server [None req-9b3e8e62-b6b3-4506-8950-f73c3e5e2be3 None
None] Exception during message handling: TooManyExternalNetworks: More than one
external netw
Public bug reported:
When a port is installed on the agent, it will be processed in rpc_loop X as
"added". In the next X+1 rpc_loop, it will be processed again as
"updated". This is unnecessary, and it can very probably increase the
processing time of newly "added" ports in this X+1 loop.
We have done som
Public bug reported:
Bug https://bugs.launchpad.net/neutron/+bug/1732067 has a bad impact on VM
traffic, and all the fixes have some potential risk of data-plane outage. So we
filed a new bug for the new solution:
It will add a flow table, something like a switch FDB table. The accepted egress
flows w
Public bug reported:
When the ovs-agent is done processing a port, it will call neutron-server to make
some DB updates.
Especially when the ovs-agent is restarted, all ports on one agent will do such RPC
and DB updates again to make the port status consistent. When a large number of
concurrent agent restarts ha
A really old neutron version.
But anyway, IMO, you met this bug:
https://bugs.launchpad.net/neutron/+bug/1682094
This is where the error comes from:
https://github.com/openstack/neutron/blob/master/neutron/db/db_base_plugin_v2.py#L611-L616
According to the exception, I did the following tests:
>>> ip =
Public bug reported:
ENV: stable/queens
But master has basically the same code, so the issue may also exist there.
Config: L2 ovs-agent with openflow based security group enabled.
Recently I ran one extreme test locally, booting 2700 instances for one single
tenant.
The instances will be booted in 2000 ne
Public bug reported:
Examples:
http://logs.openstack.org/11/669111/4/check/neutron-tempest-plugin-dvr-multinode-scenario/dc3af26/controller/logs/screen-q-l3.txt.gz#_Jul_07_04_18_11_791730
http://logs.openstack.org/11/669111/4/check/neutron-tempest-plugin-dvr-multinode-scenario/dc3af26/controller/l
** Changed in: neutron
Assignee: (unassigned) => LIU Yulong (dragon889)
** Changed in: neutron
Status: Opinion => New