As commented by you here:
https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/936318
This change moved the dvr jobs to the experimental queue:
https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/736177
So this is likely not a bug, but the consequence of an earlier decision
As gibi said above, this is unlikely to be either a nova or a neutron
problem, but more likely a deployment problem. I don't believe the
various neutron log lines quoted have anything to do with the root
cause.
To help with the debugging:
What deployment software did you use?
Are you using devsta
user@debian:~$ python3
Python 3.7.3 (default, Jan 22 2021, 20:04:44)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> 4/3
1.3333333333333333
>>>
Python3 doesn't round to integers, and OVS d
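Just to spell out the standard Python 3 behaviour (nothing specific to this bug): / is true division and always yields a float, while the integer-division counterpart is //:
>>> 4 // 3
1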
*** This bug is a duplicate of bug 1904188 ***
https://bugs.launchpad.net/bugs/1904188
I am marking this as duplicate. Let me know if you think differently.
Also don't hesitate to propose a backport to stable/ussuri.
** This bug has been marked a duplicate of bug 1904188
Include standard a
formation to an attacker. On the other
hand I would not consider this in itself a vulnerability.
Pushing a trivial fix in a minute.
** Affects: neutron
Importance: Low
Assignee: Bence Romsics (bence-romsics)
Status: In Progress
** Tags: api
--
eceived physnet-segment information into the db.
This means we multiply the load on the db and rpc workers by a factor of the
total rpc worker count.
Pushing a fix attempt soon.
** Affects: neutron
Importance: High
Assignee: Bence Romsics (bence-romsics)
Status: In Progress
-
Rodolfo, based on your analysis I moved this report to tripleo. Of
course if it also has a neutron part, just add that back please.
** Project changed: neutron => tripleo
--
Hi,
There's a long history here, but I would actually recommend that you
switch back to using the legacy devstack plugin.
The new neutron devstack plugin AFAICT worked quite well in a simple dev
environment. Despite the legacy one being deprecated for a long time,
the work on the new one stalled
** Changed in: neutron
Status: In Progress => Won't Fix
** Changed in: neutron
Status: Won't Fix => Triaged
** Changed in: neutron
Importance: Undecided => High
--
Hi,
Are you sure you wanted to post this bug report to the neutron project's
bug tracker?
** Changed in: neutron
Status: New => Invalid
--
if it's feasible implement it in Train.
** Affects: neutron
Importance: Undecided
Assignee: Bence Romsics (bence-romsics)
Status: New
** Tags: qos rfe
--
he compute host RP. That case should have been detected
as an error, but it was not.
I'll upload a proposed fix right away.
** Affects: neutron
Importance: Undecided
Assignee: Bence Romsics (bence-romsics)
Status: New
** Tags: qos
--
operations like these should be rejected now, otherwise we may set up
false expectations in our users.
** Affects: neutron
Importance: Undecided
Assignee: Bence Romsics (bence-romsics)
Status: New
** Tags: qos stein-rc-potential
--
Since we have two contradictory bug reports about the preferred form, I'm
marking this as Opinion.
** Changed in: neutron
Status: New => Opinion
** Changed in: neutron
Importance: Undecided => Wishlist
--
y that a random IP+1 is
occasionally the subnet broadcast address which is invalid as a
fixed_ip.
https://opendev.org/openstack/neutron/src/commit/1ea9326fda303b48905d7f7748d320ba8e9322aa/neutron/tests/unit/services/revisions/test_revision_plugin.py#L169
I'm going to upload an attempted fix soon.
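To illustrate with made-up example data (not the test's actual values): in a /24 subnet, if the randomly picked fixed_ip happens to be .254, then "IP + 1" lands on the broadcast address:

import ipaddress

net = ipaddress.ip_network("10.0.0.0/24")
random_ip = ipaddress.ip_address("10.0.0.254")  # the randomly picked fixed_ip
candidate = random_ip + 1                       # the "IP + 1" the test computes

# 10.0.0.255 is the subnet broadcast address, which is rejected as a fixed_ip.
print(candidate == net.broadcast_address)       # True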
.
[1] http://lists.openstack.org/pipermail/openstack-discuss/2019-April/005121.html
** Affects: neutron
Importance: Undecided
Assignee: Bence Romsics (bence-romsics)
Status: New
** Tags: rfe
** Summary changed:
- Add atomic extraroute API
+ Atomic Extraroute API
--
ing
* Profiling coverage for vif plugging
This work is also driven by the discoveries made while interpreting
profiler reports so I expect further changes here and there.
** Affects: neutron
Importance: Wishlist
Assignee: Bence Romsics (bence-romsics)
Status: In Progress
** Tags:
I don't know when William will read my previous comment, but overall
what I found is this:
The metadata-agent's cache was designed to be invalidated by time-based
expiry. That method has the reported kind of side effect if a client is
too fast. Which is not perfect, but usually can be addressed b
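To illustrate the kind of side effect a purely time-based expiry has on a fast client, here is a generic sketch of the mechanism (my own toy code, not the metadata-agent's actual implementation):

import time

class TtlCache:
    """Minimal cache that is invalidated only by time-based expiry."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, stored_at)

    def get(self, key, fetch):
        entry = self.store.get(key)
        if entry is not None:
            value, stored_at = entry
            if time.monotonic() - stored_at < self.ttl:
                return value  # served from cache, possibly already stale
        value = fetch(key)
        self.store[key] = (value, time.monotonic())
        return value

A client that changes the backend data and re-queries within the TTL (the "too fast" case) is still served the old, cached answer until the entry expires.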
Alternatives
1) Using 4 logical routers with 1 external gateway each. However in this
case the API misses the information about which (2 or 4) logical routers
represent the same backend router.
2) Using a VRRP HA router. However this provides a different level of
High Availability plus it is active-
this, please see the details
there:
https://review.opendev.org/c/openstack/neutron-specs/+/781475
** Affects: neutron
Importance: Wishlist
Assignee: Bence Romsics (bence-romsics)
Status: New
** Tags: rfe
--
| egrep -v ^NXST_FLOW | sed -r -e 's/(cookie|duration|n_packets|n_bytes|idle_age|hard_age)=[^ ]+ //g' -e 's/^ *//' -e 's/, +/ /g' | sort ) <( cat ~/$base$b | egrep -v ^NXST_FLOW | sed -r -e 's/(cookie|duration|n_packets|n_bytes|idle_age|hard_age)=[^ ]+ //g
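The (partially quoted) pipeline above strips the volatile per-flow fields (cookie, duration, n_packets, n_bytes, idle_age, hard_age) from two ovs-ofctl dump-flows outputs, sorts the remaining flow entries and diffs them. A rough Python sketch of the same normalization, with file names assumed by me, would be:

import difflib
import re

VOLATILE = re.compile(
    r'(cookie|duration|n_packets|n_bytes|idle_age|hard_age)=[^ ]+,? *')

def normalize(dump_text):
    """Drop the header and the volatile counters so two dumps can be diffed."""
    flows = []
    for line in dump_text.splitlines():
        if line.startswith('NXST_FLOW'):
            continue
        line = VOLATILE.sub('', line).strip()
        line = re.sub(r', +', ' ', line)
        flows.append(line)
    return sorted(flows)

with open('flows-before.txt') as f1, open('flows-after.txt') as f2:
    before, after = normalize(f1.read()), normalize(f2.read())
print('\n'.join(difflib.unified_diff(before, after, lineterm='')))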
Public bug reported:
It seems that after unshelve, occasionally the request for a dedicated
CPU is ignored. More precisely the first pinned CPU does not seem to be
marked as consumed, so the second may end up on the same CPU. This was
first observed on victoria (6 times out of 46 tries), but then
** Changed in: neutron
Status: New => Invalid
--
https://bugs.launchpad.net/bugs/2028544
Title:
dhcp agent binding count greater than dhcp_agents_per_network
I believe that, regarding this bug report, what could be done has been done.
Other fixes are not going to happen, therefore I'm setting this to Won't
Fix, to clean up the open bug list.
** Changed in: neutron
Status: Confirmed => Won't Fix
--
Hi,
Thanks for the report!
I'm not sure if the behavior you describe is a bug. If multiple projects
are actually using a shared network, why would you expect it to be
unshared without an error? How should such a network work when it's
shared=False but it has multiple tenants on it?
Maybe I'm mis
Hi,
Thanks for the report!
At first glance this looks like a deployment problem, not a neutron bug.
From a neutron perspective there's no clear error symptom described (other
than "networking does not work"). And no neutron log (the attached "log
from neutron_server" stops right when neutron-server
port. Which means that the latter trunk
bridge learned the traffic generator's source MAC now on the wrong port.
I have a suspicion that this may have led to the unexpectedly double-tagged
packets in the other direction.
** Affects: neutron
Importance: Undecided
Assignee: Bence Romsi
I'm reopening this because I believe the fix committed fixes only part
of the problem. With firewall_driver=noop the unnecessary ingress
flooding on br-int is gone. However we still have the same unnecessary
flooding with firewall_driver=openvswitch. For details and a full
reproduction please comme
** Changed in: neutron
Status: New => Fix Released
--
https://bugs.launchpad.net/bugs/1884708
Title:
explicity_egress_direct prevents learning of local MACs and causes
Public bug reported:
I believe this issue was already reported earlier:
https://bugs.launchpad.net/neutron/+bug/1884708
That bug has a fix committed:
https://review.opendev.org/c/openstack/neutron/+/738551
However I believe the above change fixed only part of the issue (with
firewall_driver=n
Public bug reported:
The original problem observed in a downstream deployment was overcommit
on dedicated PCPUs and a CPUPinningInvalid exception breaking the
update_available_resource periodic job.
The following reproduction is not an end-to-end reproduction, but I hope
I can demonstrate where thin
Hi Bartosz,
Yes, by default this is prohibited. However oslo.policy based policies
are configurable.
For example, in my devstack I don't have ironic deployed, but I
reproduced the problem using the unprivileged 'demo' user:
$ source openrc demo demo
$ openstack network create net0
$ openstack su
** Changed in: neutron
Status: Invalid => Triaged
--
https://bugs.launchpad.net/bugs/2052937
Title:
Policy: binding operations are prohibited for service role
Public bug reported:
We observed a scheduling failure when using ovs sriov offload
(https://docs.openstack.org/neutron/latest/admin/config-ovs-offload.html
) in combination with multisegment networks. The problem seems to affect the
case when the port should be bound to a tunneled network segmen
Public bug reported:
Our users found a bug while POSTing to /v3/ec2tokens. I could simplify
the reproduction to this script:
$ cat keystone-post-ec2tokens.sh
#! /bin/sh
# source openrc admin admin
# keystone-post-ec2tokens.sh http://127.0.0.1/identity/v3
keystone_base_url="${1:?}"
cleanup ()
Public bug reported:
Reproduction:
Boot two vms (each with one pinned cpu) on devstack0.
Then evacuate them to devstack0a.
devstack0a has two dedicated cpus, so both vms should fit.
However sometimes (for example 6 out of 10 times) the evacuation of one vm
fails with this error message: 'CPU set
Public bug reported:
Tracking a bug seen in the gate:
zuul report:
https://50aa58668700125588f9-69e8ab9908c85e150921aaa267a6677d.ssl.cf1.rackcdn.com/855198/1/gate/keystone-protection-functional/edeae8a/testr_results.html
zuul log:
https://50aa58668700125588f9-69e8ab9908c85e150921aaa267a6677d.ss
e-of-port0 other_config:foo=bar
other_config:bar=baz
** Affects: neutron
Importance: Wishlist
Assignee: Bence Romsics (bence-romsics)
Status: New
** Tags: rfe
--
Public bug reported:
I'm trying to track here a bug I have seen in nova gate appearing
randomly through rechecks.
Typical stack traces:
Traceback (most recent call last):
File "/opt/stack/tempest/tempest/common/utils/__init__.py", line 90, in
wrapper
return f(*func_args, **func_kwargs)
Public bug reported:
source openrc admin admin
export TOKEN="$( openstack token issue -f value -c id )"
A single port create succeeds:
curl -s -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN" -d
"{\"port\":{\"name\":\"port0\",\"network_id\":\"$( openstack net show private
-f value
Public bug reported:
Opening this report to track the following test that fails occasionally
in the gate:
job neutron-functional-with-uwsgi
test
neutron.tests.functional.agent.l3.extensions.qos.test_fip_qos_extension.TestL3AgentFipQosExtensionDVR.test_dvr_router_lifecycle_ha_with_snat_with_fipst
Public bug reported:
Colleagues working downstream found a slight discrepancy in quota
enforcement while working with the new unified quota system.
If we set the image_size_total quota to 1 MiB, the actual limit where
quota enforcement turns on is 2 MiB - 1 byte:
openstack --os-cloud devstack-sy
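For reference, the two boundaries in bytes (plain arithmetic, my own restatement of the numbers quoted above):

MiB = 1024 * 1024
configured_limit = 1 * MiB       # image_size_total quota set to 1 MiB
reported_boundary = 2 * MiB - 1  # where enforcement was reported to turn on
print(configured_limit, reported_boundary)  # 1048576 2097151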
Public bug reported:
Recently we seem to hit the same devstack build failure in many
different gate jobs. The usual error message is:
+ lib/neutron_plugins/ovn_agent:start_ovn:714 : wait_for_db_file
/var/lib/ovn/ovnsb_db.db
+ lib/neutron_plugins/ovn_agent:wait_for_db_file:175 : local c
Removing neutron from the affected projects, since Yatin found the cause
in devstack.
** No longer affects: neutron
--
https://bugs.launchpad.net/bugs/2002629
Title:
devstac
\"$( openstack net show private
-f value -c id
)\",\"extra_dhcp_opts\":[{\"opt_name\":\"domain-name-servers\",\"opt_value\":\"10.0.0.1\",\"ip_version\":\"4\"}]}]}"
-X POST http://127.0.0.1:9696/networking/v2.0/ports | j
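For readability, here is a rough Python equivalent of the port create request, using only the endpoint and payload shape visible in the curl fragments above; the token and network id are assumed to come from the same openstack CLI calls, and the extra_dhcp_opts are shown on a single port purely for illustration:

import json
import requests

TOKEN = "..."       # e.g. from: openstack token issue -f value -c id
NETWORK_ID = "..."  # e.g. from: openstack net show private -f value -c id

payload = {
    "port": {
        "name": "port0",
        "network_id": NETWORK_ID,
        "extra_dhcp_opts": [
            {"opt_name": "domain-name-servers",
             "opt_value": "10.0.0.1",
             "ip_version": "4"},
        ],
    }
}

resp = requests.post(
    "http://127.0.0.1:9696/networking/v2.0/ports",
    headers={"Content-Type": "application/json", "X-Auth-Token": TOKEN},
    data=json.dumps(payload),
)
print(resp.status_code)
print(resp.json())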
Public bug reported:
devstack 7533276c
neutron aa40aef70f
This reproduction uses the openvswitch ml2 mechanism_driver and
firewall_driver, but I believe this bug affects all mechanism_drivers.
# Choose a port number no other rule uses on the test host.
$ sudo ovs-ofctl dump-flows br-int | egrep
Please note that the following two lines are NOT the same: one config
option ends in urI, the other ends in urL. In later versions keystone
folks renamed auth_uri to www_authenticate_uri so it's easier to
distinguish these config options. But in queens we have to live with
this.
auth_uri = http://c
** Changed in: neutron
Status: New => Invalid
--
https://bugs.launchpad.net/bugs/1866353
Title:
Neutron API returning HTTP 201 for SG rule create when not fully
cre
Public bug reported:
It seems we have a gate failure in neutron-rally-task. It fails in
rally_openstack.task.scenarios.neutron.trunk.CreateAndListTrunks. For
example:
https://zuul.opendev.org/t/openstack/build/9c9970da456d4145a174f73c90529dd2/log/job-output.txt#41274
https://zuul.opendev.org/t/op
Public bug reported:
Seemingly starting from the 1st of April
neutron.tests.functional.agent.ovn.metadata.test_metadata_agent.TestMetadataAgent.test_agent_registration_at_chassis_create_event
fails randomly in the gate with the error message:
2020-04-06 08:55:57.302891 | controller |
deleted: The segment
is still bound with port(s) 8cf8f188-5ea4-41b0-aa3a-fb8a8802888d.
May 14 14:37:11 devstack1 heat-engine[12508]: ERROR heat.engine.resource
# a few seconds later a second delete succeeds
$ openstack stack delete s0 --yes --wait
2020-05-14 14:24:26Z [s0]: DELETE_IN_PROGRES
Thank you for your bug report!
I believe this typo was fixed in the change below:
https://review.opendev.org/565289
So the command has been correct since the Rocky version of our docs, for example:
https://docs.openstack.org/neutron/latest/admin/config-ovs-dpdk.html
** Changed in: neutron
Statu
While I agree that it would be way more user-friendly to give a
warning/error in the problematic API workflow, that would entail some
cross-project changes, because today:
* nova does not know when an already bound port is added to a trunk
* neutron does not know if nova is supposed to auto-delete a
Public bug reported:
While exploring the newer microversions (here 1.4) of the placement API
I found this part of the API reference unclear to me
(https://developer.openstack.org/api-ref/placement/#list-resource-
providers, 'resources' parameter):
"A comma-separated list of strings indicating an
Public bug reported:
The following inventory was reported after a fresh devstack build:
curl --silent \
--header "Accept: application/json" \
--header "Content-Type: application/json" \
--header "OpenStack-API-Version: placement latest" \
--header "X-Auth-Token: ${TOKEN:?}" \
pi.v2.resource
AttributeError: 'QosMinimumBandwidthRule' object has no attribute
'_obj_direction'
Feb 12 12:20:19 devstack0 neutron-server[31565]: ERROR neutron.api.v2.resource
The version used to reproduce the bug:
neutron 2f3cc51784
neutron-lib aceb7c50ed
devstack ee4b6a01
Public bug reported:
Enable bringup of subports via exposing trunk/subport details over
the metadata API
With the completion of the trunk port feature in Newton (Neutron
bp/vlan-aware-vms [1]), trunk and subports are now available. But the
bringup of the subports' VLAN interfaces inside an instan
Public bug reported:
When you boot a vm with a trunk using the ovs trunk driver, the boot
fails while allocating the network, and you get this ovs-agent error log:
neutron-openvswitch-agent[12170]: CallbackFailure: Callback
neutron.services.trunk.drivers.openvswitch.agent.driver.OVSTrunkSkeleton.c
Public bug reported:
Reproduction:
local_settings:
ANGULAR_FEATURES={
'images_panel': True,
...
}
devstack commit b79531a9f96736225a8991052a0be5767c217377
horizon commit d5779eae0ad267533001cb7dae6ca7dbc5becb27
Go to detail page of an image eg: /ngdetails/OS::Glance::Image/90cc
': True,
...
}
A proposed fix is on the way.
** Affects: horizon
Importance: Undecided
Assignee: Bence Romsics (bence-romsics)
Status: In Progress
** Tags: angularjs
--
Public bug reported:
It seems we have a case where the openvswitch firewall driver and the use
of trunks interfere with each other. I tried using the parent's MAC
address for a subport. Like this:
openstack network create net0
openstack network create net1
openstack subnet create --network net0
Public bug reported:
It seems the 'native' and the 'vsctl' ovsdb drivers behave differently.
The native/idl driver seems to lose some ovsdb transactions, at least
the transactions setting the 'other_config' ovs port attribute.
I have written about this in a comment of an earlier bug report
(https
Public bug reported:
Config option 'use_veth_interconnection' should be deprecated. Instead
we can always use Open vSwitch patch ports.
The discussion started in a review here:
https://review.openstack.org/#/c/318317/2
openstack/neutron/doc/source/devref/openvswitch_agent.rst
line 471
AFAICT th
Public bug reported:
I opened this bug to track the following failure seen in various gate
jobs:
test:
tempest.serial_tests.scenario.test_aggregates_basic_ops.TestAggregatesBasicOps
error message:
tempest.lib.exceptions.Conflict: Conflict with state of target resource
Details: {'code': 409, 'mes
Public bug reported:
We have seen tpi- and spi- interfaces in ovs not deleted by ovs-agent
when they should have been deleted already.
At the moment I only have a reproduction based on chance with wildly
varying frequency of the error symptoms:
ovs-dump() {
for bridge in $( sudo ovs-vsctl li
from ovs-agent to os-vif.
So far I did not find any resources leaked, so we probably only have a
spurious error message we could suppress.
devstack 2f3440dc
neutron 8cca47f2e7
** Affects: neutron
Importance: High
Assignee: Bence Romsics (bence-romsics)
Status: Confirmed