[Yahoo-eng-team] [Bug 2017748] Re: [SRU] OVN: ovnmeta namespaces missing during scalability test causing DHCP issues

2024-11-05 Thread Brian Haley
** Changed in: cloud-archive/dalmatian
   Status: New => Fix Released

** Changed in: cloud-archive/caracal
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2017748

Title:
  [SRU] OVN:  ovnmeta namespaces missing during scalability test causing
  DHCP issues

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive antelope series:
  New
Status in Ubuntu Cloud Archive bobcat series:
  New
Status in Ubuntu Cloud Archive caracal series:
  Fix Released
Status in Ubuntu Cloud Archive dalmatian series:
  Fix Released
Status in Ubuntu Cloud Archive epoxy series:
  Fix Released
Status in Ubuntu Cloud Archive yoga series:
  New
Status in Ubuntu Cloud Archive zed series:
  New
Status in neutron:
  New
Status in neutron ussuri series:
  Fix Released
Status in neutron victoria series:
  New
Status in neutron wallaby series:
  New
Status in neutron xena series:
  New
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Focal:
  New
Status in neutron source package in Jammy:
  New
Status in neutron source package in Noble:
  New
Status in neutron source package in Oracular:
  New

Bug description:
  [Impact]

  ovnmeta- namespaces are intermittently missing, so the affected VMs
  cannot be reached

  [Test Case]
  Not able to reproduce this easily, so I ran charmed-openstack-tester;
  the results are below:

  ==
  Totals
  ==
  Ran: 469 tests in 4273.6309 sec.
   - Passed: 398
   - Skipped: 69
   - Expected Fail: 0
   - Unexpected Success: 0
   - Failed: 2
  Sum of execute time for each test: 4387.2727 sec.

  The 2 failed tests
  (tempest.api.object_storage.test_account_quotas.AccountQuotasTest and
  octavia_tempest_plugin.tests.scenario.v2.test_traffic_ops.TrafficOperationsScenarioTest)
  are not related to the fix.

  [Where problems could occur]
  These patches are related to the OVN metadata agent on compute nodes.
  VM connectivity could be affected by this patch when OVN is used.
  Binding a port to a datapath could be affected.

  [Others]

  == ORIGINAL DESCRIPTION ==

  Reported at: https://bugzilla.redhat.com/show_bug.cgi?id=2187650

  During a scalability test it was noted that a few VMs were having
  issues being pinged (2 out of ~5000 VMs in the test conducted). After
  some investigation it was found that the VMs in question did not
  receive a DHCP lease:

  udhcpc: no lease, failing
  FAIL
  checking http://169.254.169.254/2009-04-04/instance-id
  failed 1/20: up 181.90. request failed

  And the ovnmeta- namespaces for the networks that the VMs were booting
  from were missing. Looking into the ovn-metadata-agent.log:

  2023-04-18 06:56:09.864 353474 DEBUG neutron.agent.ovn.metadata.agent
  [-] There is no metadata port for network
  9029c393-5c40-4bf2-beec-27413417eafa or it has no MAC or IP addresses
  configured, tearing the namespace down if needed _get_provision_params
  /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:495

  Apparently, when the system is under stress (scalability tests) there
  are some edge cases where the metadata port information has not yet
  been propagated by OVN to the Southbound database; when the
  PortBindingChassisEvent event is handled and tries to find either the
  metadata port or the IP information on it (which is updated by ML2/OVN
  during subnet creation), it cannot be found and the agent fails
  silently with the error shown above.

  Note that running the same tests with less concurrency did not
  trigger this issue, so it only happens when the system is overloaded.
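
  A hedged sketch of the general mitigation (helper names hypothetical,
  not the actual Neutron patch): retry the Southbound lookup briefly
  instead of tearing the namespace down on the first miss.

      import time

      def get_metadata_port_info(lookup, network_id, retries=5, delay=1.0):
          """Retry a racy lookup; return None only after all attempts fail.

          'lookup' is any callable returning the metadata port (with its
          MAC/IP addresses) or None while OVN has not yet propagated it
          to the Southbound database.
          """
          for _ in range(retries):
              port_info = lookup(network_id)
              if port_info is not None:
                  return port_info
              time.sleep(delay)  # give ovn-northd time to propagate
          return None  # caller can now log a real error instead of failing silently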

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/2017748/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2065821] Re: cover job started to fail with Killed

2024-09-20 Thread Brian Haley
I originally marked my change Related-bug, but I believe all the changes
above have addressed the problem, so I will close this bug.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2065821

Title:
  cover job started to fail with Killed

Status in neutron:
  Fix Released

Bug description:
  Pipeline here:
  https://zuul.opendev.org/t/openstack/builds?job_name=openstack-tox-cover&project=openstack/neutron

  First failure is May 14:
  https://zuul.opendev.org/t/openstack/build/6899085a449248ed8b017eb4e9f231ab

  In logs, it looks like this:

  2024-05-14 16:33:32.050334 | ubuntu-jammy | Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
  2024-05-14 16:33:32.050424 | ubuntu-jammy |   declare_namespace(pkg)
  2024-05-14 16:33:32.050451 | ubuntu-jammy | /home/zuul/src/opendev.org/openstack/neutron/.tox/cover/lib/python3.10/site-packages/pkg_resources/__init__.py:2832: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('repoze')`.
  2024-05-14 16:33:32.050472 | ubuntu-jammy | Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
  2024-05-14 16:33:32.050490 | ubuntu-jammy |   declare_namespace(pkg)
  2024-05-14 16:33:32.050516 | ubuntu-jammy | /home/zuul/src/opendev.org/openstack/neutron/.tox/cover/lib/python3.10/site-packages/pkg_resources/__init__.py:2832: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('repoze')`.
  2024-05-14 16:59:58.794881 | ubuntu-jammy | Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
  2024-05-14 16:59:58.796083 | ubuntu-jammy |   declare_namespace(pkg)
  2024-05-14 16:59:58.796171 | ubuntu-jammy | Killed
  2024-05-14 17:03:29.030113 | ubuntu-jammy | Ran 20812 tests in 1777.707s
  2024-05-14 17:03:29.174365 | ubuntu-jammy | FAILED (id=0, failures=1, skips=1701)

  Could it be that the job no longer has enough memory and gets OOM
  killed?

  I've compared versions of packages updated between older good and
  newer bad runs, and I only see these bumped: sqlalchemy 1.4.51 ->
  2.0.29 and alembic 1.9.4 -> 1.13.1.

  Different runs have different unit tests reported as failed (all
  failed runs claim a single test case failed).

  Examples of different failed tests:

  
  https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_218/917262/3/check/openstack-tox-cover/2180cc3/testr_results.html -
  neutron.tests.unit.services.network_segment_range.test_plugin.TestNetworkSegmentRange.test_delete_network_segment_range_failed_with_segment_referenced

  https://9b86ab5bbc6be76c9905-30f46d6ec556e6b2dd47ea35fedbb1ac.ssl.cf5.rackcdn.com/919699/4/check/openstack-tox-cover/ce9baa9/testr_results.html -
  neutron.tests.unit.services.ovn_l3.test_plugin.OVNL3ExtrarouteTests.test_floatingip_update_different_port_owner_as_admin

  https://6eed35a50c35f284b4d2-bf433abff5f8b85f7f80257b72ac6f67.ssl.cf2.rackcdn.com/919632/1/check/openstack-tox-cover/3b1c5fa/testr_results.html -
  neutron.tests.unit.services.placement_report.test_plugin.PlacementReportPluginTestCases.test__sync_placement_state_legacy

  I suspect the specific unit test cases are not relevant - the test
  runner process dies for some reason and whatever test it was running
  at that moment gets reported as failed.
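
  One way to check that theory on a held node (a sketch, not part of the
  job itself): scan the kernel log for OOM-killer entries around the
  failure time.

      import re
      import subprocess

      def find_oom_kills():
          """Return dmesg lines that mention the OOM killer, if any."""
          out = subprocess.run(['dmesg'], capture_output=True,
                               text=True).stdout
          pattern = re.compile(r'oom-kill|Out of memory|Killed process', re.I)
          return [line for line in out.splitlines() if pattern.search(line)]

      for line in find_oom_kills():
          print(line)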

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2065821/+subscriptions




[Yahoo-eng-team] [Bug 2081643] Re: Neutron OVN support for CARP

2024-09-30 Thread Brian Haley
I am going to close this as any discussion would first have to happen on
the OVN mailing list. If, after that, there is any change required in
Neutron, you can open an RFE for that.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2081643

Title:
  Neutron OVN support for CARP

Status in neutron:
  Invalid

Bug description:
  Does Neutron ML2/OVN support CARP as a virtual IP synchronization
  protocol, without disabling port security?

  I've been trying to make it work; here is what I managed to understand.
  CARP uses a MAC address with the format 00-00-5E-00-01-{VRID}. It
  answers ARP requests for virtual addresses with the source MAC of the
  main interface, but uses the MAC address above as arp.sha, and from
  what I could read of the OVN source code, OVN does not seem to match
  ARP responses whose eth.source and arp.sha fields differ.

  
  Link to OVN code: https://github.com/ovn-org/ovn/blob/16836c3796f7af68437f9f834b40d87c801dc27c/controller/lflow.c#L2707

  https://datatracker.ietf.org/doc/html/rfc5798#section-7.3
  https://datatracker.ietf.org/doc/html/rfc5798#section-8.1.2

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2081643/+subscriptions




[Yahoo-eng-team] [Bug 1507078] [NEW] arping for floating IPs fail on newer kernels

2015-10-16 Thread Brian Haley
Public bug reported:

The code to send gratuitous ARPs changed in Liberty to be simpler
because we started setting the sysctl net.ipv4.ip_nonlocal_bind to 1 in
the root namespace.  It seems that in newer kernels (3.19 or so) this
sysctl attribute was added to the namespaces, so now the arping call
fails because we are only enabling non-local binds in the root
namespace.

This is an example when run by hand:

$ sudo ip netns exec fip-311e3d4a-00ec-46cc-9928-dbc1a2fe3f9a arping -A -I fg-bb6b6721-78 -c 3 -w 4.5 172.18.128.7
bind: Cannot assign requested address

Failing to get that ARP out can affect connectivity to the floating IP.

In order to support either kernel, the code should change to try setting
it in the namespace, and if it fails, then set it in the root namespace.
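
A minimal sketch of that fallback, using plain subprocess calls rather
than the actual Neutron helpers:

    import subprocess

    def enable_nonlocal_bind(namespace):
        """Set ip_nonlocal_bind inside the namespace, else in the root one."""
        sysctl = ['sysctl', '-w', 'net.ipv4.ip_nonlocal_bind=1']
        ns_cmd = ['ip', 'netns', 'exec', namespace] + sysctl
        if subprocess.run(ns_cmd).returncode != 0:
            # Older kernels have no per-namespace knob; fall back to root.
            subprocess.run(sysctl, check=True)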

This is backport potential to stable/liberty.

** Affects: neutron
 Importance: Undecided
 Assignee: Brian Haley (brian-haley)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Brian Haley (brian-haley)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507078

Title:
  arping for floating IPs fail on newer kernels

Status in neutron:
  New

Bug description:
  The code to send gratuitous ARPs changed in Liberty to be simpler
  because we started setting the sysctl net.ipv4.ip_nonlocal_bind to 1
  in the root namespace.  It seems that in newer kernels (3.19 or so)
  this sysctl attribute was added to the namespaces, so now the arping
  call fails because we are only enabling non-local binds in the root
  namespace.

  This is an example when run by hand:

  $ sudo ip netns exec fip-311e3d4a-00ec-46cc-9928-dbc1a2fe3f9a arping -A -I fg-bb6b6721-78 -c 3 -w 4.5 172.18.128.7
  bind: Cannot assign requested address

  Failing to get that ARP out can affect connectivity to the floating
  IP.

  In order to support either kernel, the code should change to try
  setting it in the namespace, and if it fails, then set it in the root
  namespace.

  This is backport potential to stable/liberty.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1507078/+subscriptions



[Yahoo-eng-team] [Bug 1356021] [NEW] Tempest tests for router interfaces need updating to support DVR

2014-08-12 Thread Brian Haley
*** This bug is a duplicate of bug 1355537 ***
https://bugs.launchpad.net/bugs/1355537

Public bug reported:

These Tempest tests are failing when the "check experimental" Jenkins
job is run, since that job enables DVR in devstack:

tempest.scenario.test_security_groups_basic_ops.TestSecurityGroupsBasicOps
test_cross_tenant_traffic[compute,gate,network,smoke] FAIL
test_in_tenant_traffic[compute,gate,network,smoke]FAIL

tempest/scenario/test_security_groups_basic_ops.py._verify_network_details()
has this check:

if i['device_owner'] == 'network:router_interface']

But a DVR router has device_owner
'network:router_interface_distributed', so the loop returns []

Something like this will catch both:

if i['device_owner'].startswith('network:router')]

tempest/common/isolated_creds.py has a similar check that needs
updating.

A quick check with the above change saw the test pass in my environment.
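
For illustration, a standalone version of the proposed filter (the
sample port data here is invented):

    ports = [
        {'id': 'p1', 'device_owner': 'network:router_interface'},
        {'id': 'p2', 'device_owner': 'network:router_interface_distributed'},
        {'id': 'p3', 'device_owner': 'compute:nova'},
    ]
    # startswith() catches both legacy and DVR router ports:
    router_ports = [p for p in ports
                    if p['device_owner'].startswith('network:router')]
    assert [p['id'] for p in router_ports] == ['p1', 'p2']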

** Affects: neutron
 Importance: Undecided
 Assignee: Brian Haley (brian-haley)
 Status: New


** Tags: l3-dvr-backlog

** Changed in: neutron
 Assignee: (unassigned) => Brian Haley (brian-haley)

** Tags added: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1356021

Title:
  Tempest tests for router interfaces need updating to support DVR

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  These Tempest tests are failing when the "check experimental" Jenkins
  job is run, since that job enables DVR in devstack:

  tempest.scenario.test_security_groups_basic_ops.TestSecurityGroupsBasicOps
  test_cross_tenant_traffic[compute,gate,network,smoke] FAIL
  test_in_tenant_traffic[compute,gate,network,smoke]FAIL

  tempest/scenario/test_security_groups_basic_ops.py._verify_network_details()
  has this check:

  if i['device_owner'] == 'network:router_interface']

  But a DVR router has device_owner
  'network:router_interface_distributed', so the loop returns []

  Something like this will catch both:

  if i['device_owner'].startswith('network:router')]

  tempest/common/isolated_creds.py has a similar check that needs
  updating.

  A quick check with the above change saw the test pass in my
  environment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1356021/+subscriptions



[Yahoo-eng-team] [Bug 1367881] [NEW] l2pop RPC code throwing an exception in fdb_chg_ip_tun()

2014-09-10 Thread Brian Haley
Public bug reported:

I'm seeing an error in the l2pop code where it's failing to add a flow
for the ARP responder entry.

This is sometimes leading to DHCP failures for VMs, although a soft
reboot typically fixes that problem.
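
A hedged sketch of the kind of guard that would avoid the TypeError
seen in the trace below (not necessarily the actual fix that merged):

    def iter_mac_ip(entries):
        """Yield (mac, ip) pairs, tolerating a None/missing list."""
        for mac, ip in entries or []:
            yield mac, ip

    # list(iter_mac_ip(None)) == [] instead of raising
    # TypeError: 'NoneType' object is not iterable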

Here is the trace:

2014-09-10 15:10:36.954 9351 ERROR neutron.agent.linux.ovs_lib [req-de0c2985-1fac-46a8-a42b-f0bad5a43805 None] OVS flows could not be applied on bridge br-tun
2014-09-10 15:10:36.954 9351 TRACE neutron.agent.linux.ovs_lib Traceback (most recent call last):
2014-09-10 15:10:36.954 9351 TRACE neutron.agent.linux.ovs_lib   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 407, in _fdb_chg_ip
2014-09-10 15:10:36.954 9351 TRACE neutron.agent.linux.ovs_lib     self.local_ip, self.local_vlan_map)
2014-09-10 15:10:36.954 9351 TRACE neutron.agent.linux.ovs_lib   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/common/log.py", line 36, in wrapper
2014-09-10 15:10:36.954 9351 TRACE neutron.agent.linux.ovs_lib     return method(*args, **kwargs)
2014-09-10 15:10:36.954 9351 TRACE neutron.agent.linux.ovs_lib   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/l2population_rpc.py", line 250, in fdb_chg_ip_tun
2014-09-10 15:10:36.954 9351 TRACE neutron.agent.linux.ovs_lib     for mac, ip in after:
2014-09-10 15:10:36.954 9351 TRACE neutron.agent.linux.ovs_lib TypeError: 'NoneType' object is not iterable
2014-09-10 15:10:36.954 9351 TRACE neutron.agent.linux.ovs_lib
2014-09-10 15:10:36.955 9351 ERROR oslo.messaging.rpc.dispatcher [req-de0c2985-1fac-46a8-a42b-f0bad5a43805 ] Exception during message handling: 'NoneType' object is not iterable
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/common/log.py", line 36, in wrapper
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher     return method(*args, **kwargs)
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/l2population_rpc.py", line 55, in update_fdb_entries
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher     self.fdb_update(context, fdb_entries)
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/common/log.py", line 36, in wrapper
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher     return method(*args, **kwargs)
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/l2population_rpc.py", line 212, in fdb_update
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher     getattr(self, method)(context, values)
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 407, in _fdb_chg_ip
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher     self.local_ip, self.local_vlan_map)
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/common/log.py", line 36, in wrapper
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher     return method(*args, **kwargs)
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/l2population_rpc.py", line 250, in fdb_chg_ip_tun
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher     for mac, ip in after:
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher TypeError: 'NoneType' object is not iterable
2014-09-10 15:10:36.955 9351 TRA

[Yahoo-eng-team] [Bug 1370795] [NEW] neutron floatingip-associate on port can cause server exception

2014-09-17 Thread Brian Haley
Public bug reported:

Associating a floating IP address with a port, when it's not itself
associated with an instance, can cause the neutron server to throw an
exception, leaving neutron completely unusable.

Here's how to reproduce it:

1. Start up devstack, having it clone the latest upstream code, making sure to enable DVR by setting Q_DVR_MODE=dvr_snat
   (this will create a network, subnet, and router and attach it to private and ext-nets)
2. neutron net-list
3. neutron port-create $private_network_id
4. neutron floatingip-create $public_network_id
5. neutron floatingip-associate $floatingip_id $port_id

You'll start seeing this in screen-q-svc.log:

2014-09-17 20:56:17.758 5423 DEBUG neutron.db.l3_dvr_db [req-3faea024-ab6c-46f2-8706-e8b1028616ab None] Floating IP host: None _process_floating_ips /opt/stack/neutron/neutron/db/l3_dvr_db.py:296
2014-09-17 20:56:17.760 5423 ERROR oslo.messaging.rpc.dispatcher [req-3faea024-ab6c-46f2-8706-e8b1028616ab ] Exception during message handling: Agent with agent_type=L3 agent and host=None could not be found
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/neutron/neutron/api/rpc/handlers/l3_rpc.py", line 78, in sync_routers
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher     context, host, router_ids))
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/neutron/neutron/db/l3_agentschedulers_db.py", line 299, in list_active_sync_routers_on_active_l3_agent
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher     active=True)
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/neutron/neutron/db/l3_hamode_db.py", line 458, in get_ha_sync_data_for_host
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher     active)
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/neutron/neutron/db/l3_dvr_db.py", line 330, in get_sync_data
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher     self._process_floating_ips(context, routers_dict, floating_ips)
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/neutron/neutron/db/l3_dvr_db.py", line 299, in _process_floating_ips
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher     floating_ip['host'])
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/neutron/neutron/db/agents_db.py", line 157, in _get_agent_by_type_and_host
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher     host=host)
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher AgentNotFoundByTypeHost: Agent with agent_type=L3 agent and host=None could not be found
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher
2014-09-17 20:56:17.768 5423 ERROR oslo.messaging._drivers.common [req-3faea024-ab6c-46f2-8706-e8b1028616ab ] Returning exception Agent with agent_type=L3 agent and host=None could not be found to caller

And it will just keep repeating as the l3-agent retries the call.

The result is the l3-agent won't be able to do any work.

I have a fix I'll send out for review.
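
A hedged guess at the shape of such a fix (hypothetical until the
review is posted): skip the L3-agent lookup while the floating IP is
not yet bound to a host.

    def lookup_fip_agent(get_agent_by_type_and_host, floating_ip):
        """Resolve the hosting L3 agent only when the FIP has a host set."""
        host = floating_ip.get('host')
        if not host:
            return None  # FIP exists but is not attached to an instance yet
        return get_agent_by_type_and_host('L3 agent', host)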

** Affects: neutron
 Importance: Undecided
     Assignee: Brian Haley (brian-haley)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Brian Haley (brian-haley)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1370795

Title:
  neutron floatingip-associate on port can cause server exception

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Associating a floating IP address with a port, when it's not itself
  associated with an instance, can cause the neutron server to throw an
  exception, leaving neutron completely unusable.

[Yahoo-eng-team] [Bug 1257775] [NEW] Neutron metadata proxy should be checked for liveness periodically

2013-12-04 Thread Brian Haley
Public bug reported:

We've seen at least one occurrence where the metadata proxy in the
namespace has died, leading to unusable instances since their metadata
requests fail to get required objects such as ssh keys.

The l3-agent can easily check that all of its sub-processes are still
running. I have a small patch that can do this, along with a
configuration option to enable it so users can choose not to run it.
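
A minimal sketch of such a liveness check (standalone, not the actual
patch):

    import os

    def is_alive(pid):
        """Probe a PID with signal 0; no signal is actually delivered."""
        try:
            os.kill(pid, 0)
        except ProcessLookupError:
            return False
        except PermissionError:
            return True  # process exists but is owned by another user
        return True

    def check_children(pids, respawn):
        """Respawn any child process that has died."""
        for pid in pids:
            if not is_alive(pid):
                respawn(pid)  # caller-supplied restart hook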

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1257775

Title:
  Neutron metadata proxy should be checked for liveness periodically

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  We've seen at least one occurrence where the metadata proxy in the
  namespace has died, leading to unusable instances since their metadata
  requests fail to get required objects such as ssh keys.

  The l3-agent can easily check that all of its sub-processes are still
  running. I have a small patch that can do this, along with a
  configuration option to enable it so users can choose not to run it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1257775/+subscriptions



[Yahoo-eng-team] [Bug 1932093] [NEW] "oslo_config.cfg.DuplicateOptError: duplicate option: host" using OVN Octavia provider on stable/train

2021-06-15 Thread Brian Haley
Jun 15 13:54:56.164409 ubuntu-bionic-inap-mtl01-0025122610 devstack@o-api.service[1675]: ERROR octavia     from neutron import service
Jun 15 13:54:56.164409 ubuntu-bionic-inap-mtl01-0025122610 devstack@o-api.service[1675]: ERROR octavia   File "/opt/stack/neutron/neutron/service.py", line 37, in <module>
Jun 15 13:54:56.164409 ubuntu-bionic-inap-mtl01-0025122610 devstack@o-api.service[1675]: ERROR octavia     from neutron.common import config
Jun 15 13:54:56.164409 ubuntu-bionic-inap-mtl01-0025122610 devstack@o-api.service[1675]: ERROR octavia   File "/opt/stack/neutron/neutron/common/config.py", line 49, in <module>
Jun 15 13:54:56.164409 ubuntu-bionic-inap-mtl01-0025122610 devstack@o-api.service[1675]: ERROR octavia     common_config.register_core_common_config_opts()
Jun 15 13:54:56.164409 ubuntu-bionic-inap-mtl01-0025122610 devstack@o-api.service[1675]: ERROR octavia   File "/opt/stack/neutron/neutron/conf/common.py", line 160, in register_core_common_config_opts
Jun 15 13:54:56.164409 ubuntu-bionic-inap-mtl01-0025122610 devstack@o-api.service[1675]: ERROR octavia     cfg.register_opts(core_opts)
Jun 15 13:54:56.164409 ubuntu-bionic-inap-mtl01-0025122610 devstack@o-api.service[1675]: ERROR octavia   File "/usr/local/lib/python3.6/dist-packages/oslo_config/cfg.py", line 2051, in __inner
Jun 15 13:54:56.164409 ubuntu-bionic-inap-mtl01-0025122610 devstack@o-api.service[1675]: ERROR octavia     result = f(self, *args, **kwargs)
Jun 15 13:54:56.164409 ubuntu-bionic-inap-mtl01-0025122610 devstack@o-api.service[1675]: ERROR octavia   File "/usr/local/lib/python3.6/dist-packages/oslo_config/cfg.py", line 2313, in register_opts
Jun 15 13:54:56.164409 ubuntu-bionic-inap-mtl01-0025122610 devstack@o-api.service[1675]: ERROR octavia     self.register_opt(opt, group, clear_cache=False)
Jun 15 13:54:56.164409 ubuntu-bionic-inap-mtl01-0025122610 devstack@o-api.service[1675]: ERROR octavia   File "/usr/local/lib/python3.6/dist-packages/oslo_config/cfg.py", line 2055, in __inner
Jun 15 13:54:56.164409 ubuntu-bionic-inap-mtl01-0025122610 devstack@o-api.service[1675]: ERROR octavia     return f(self, *args, **kwargs)
Jun 15 13:54:56.164409 ubuntu-bionic-inap-mtl01-0025122610 devstack@o-api.service[1675]: ERROR octavia   File "/usr/local/lib/python3.6/dist-packages/oslo_config/cfg.py", line 2302, in register_opt
Jun 15 13:54:56.164409 ubuntu-bionic-inap-mtl01-0025122610 devstack@o-api.service[1675]: ERROR octavia     if _is_opt_registered(self._opts, opt):
Jun 15 13:54:56.164409 ubuntu-bionic-inap-mtl01-0025122610 devstack@o-api.service[1675]: ERROR octavia   File "/usr/local/lib/python3.6/dist-packages/oslo_config/cfg.py", line 364, in _is_opt_registered
Jun 15 13:54:56.164409 ubuntu-bionic-inap-mtl01-0025122610 devstack@o-api.service[1675]: ERROR octavia     raise DuplicateOptError(opt.name)
Jun 15 13:54:56.164409 ubuntu-bionic-inap-mtl01-0025122610 devstack@o-api.service[1675]: ERROR octavia oslo_config.cfg.DuplicateOptError: duplicate option: host

Since there are multiple changes implicated here, and they are desired,
we'll need a workaround to get past it; work in progress.
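
For reference, a standalone reproduction of how oslo.config raises this
error: registering an option name twice with differing definitions
(re-registering an identical option is a no-op).

    from oslo_config import cfg

    conf = cfg.ConfigOpts()
    conf.register_opts([cfg.StrOpt('host', default='a')])
    # Same name, different definition -> DuplicateOptError: duplicate option: host
    conf.register_opts([cfg.StrOpt('host', default='b')])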

** Affects: neutron
 Importance: Critical
 Assignee: Brian Haley (brian-haley)
 Status: Confirmed


** Tags: ovn ovn-octavia-provider

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1932093

Title:
  "oslo_config.cfg.DuplicateOptError: duplicate option: host" using OVN
  Octavia provider on stable/train

Status in neutron:
  Confirmed

Bug description:
  Some recent changes to the networking-ovn repository have broken the
  OVN Octavia provider that is in-tree.  When starting the octavia-api
  process for tempest scenario tests we get:

  Jun 15 13:54:56.154717 ubuntu-bionic-inap-mtl01-0025122610 devstack@o-api.service[1675]: ERROR octavia.api.drivers.driver_factory [-] Unable to load provider driver ovn due to: duplicate option: host: oslo_config.cfg.DuplicateOptError: duplicate option: host
  Jun 15 13:54:56.163102 ubuntu-bionic-inap-mtl01-0025122610 devstack@o-api.service[1675]: CRITICAL octavia [-] Unhandled error: octavia.common.exceptions.ProviderNotFound: Provider 'ovn' was not found.
  Jun 15 13:54:56.163102 ubuntu-bionic-inap-mtl01-0025122610 devstack@o-api.service[1675]: ERROR octavia Traceback (most recent call last):
  Jun 15 13:54:56.163102 ubuntu-bionic-inap-mtl01-0025122610 devstack@o-api.service[1675]: ERROR octavia   File "/opt/stack/octavia/octavia/api/drivers/driver_factory.py", line 44, in get_driver
  Jun 15 13:54:56.163102 ubuntu-bionic-inap-mtl01-0025122610 devstack@o-api.service[1675]: ERROR octavia     invoke_on_load=True).driver
  Jun 15 13:54:56.163102 ubuntu-bionic-inap-mtl01-0025122610 devstack@o-api.service[1675]: ERROR octavia   File "/usr/local/lib/python3.6/dist-packages/stevedore

[Yahoo-eng-team] [Bug 1934915] [NEW] [OVN Octavia Provider] Investigate tempest failures

2021-07-07 Thread Brian Haley
Public bug reported:

There are two tempest tests that are failing a significant fraction of
the time, preventing us from merging code into the ovn-octavia-provider
repo:

Class:

octavia_tempest_plugin.tests.scenario.v2.test_traffic_ops.TrafficOperationsScenarioTest

Tests:

  test_source_ip_port_tcp_traffic
  test_source_ip_port_udp_traffic

I plan on disabling them while I investigate the failures, filing this
bug to keep track of things and to add any debugging notes.

** Affects: neutron
 Importance: High
 Assignee: Brian Haley (brian-haley)
 Status: New


** Tags: ovn-octavia-provider

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1934915

Title:
  [OVN Octavia Provider] Investigate tempest failures

Status in neutron:
  New

Bug description:
  There are two tempest tests that are failing a significant fraction of
  the time, preventing us from merging code into the ovn-octavia-provider
  repo:

  Class:

  
octavia_tempest_plugin.tests.scenario.v2.test_traffic_ops.TrafficOperationsScenarioTest

  Tests:

test_source_ip_port_tcp_traffic
test_source_ip_port_udp_traffic

  I plan on disabling them while I investigate the failures, filing this
  bug to keep track of things and to add any debugging notes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1934915/+subscriptions



[Yahoo-eng-team] [Bug 1936959] [NEW] [OVN Octavia Provider] Unable to delete Load Balancer with PENDING_DELETE

2021-07-20 Thread Brian Haley
Public bug reported:

While attempting to delete a Load Balancer, the provisioning status is
moved to PENDING_DELETE and remains that way, preventing the deletion
process from finalizing.

The following tracebacks were found in the logs regarding that specific
lb:

2021-07-17 13:49:26.131 19 INFO octavia.api.v2.controllers.load_balancer [req-b8b3cbd8-3014-4c45-9680-d4c67346ed1c - 1e38d4dfbfb7427787725df69fabc22b - default default] Sending delete Load Balancer 19d8e465-c704-40a9-b1fd-5b0824408e5d to provider ovn
2021-07-17 13:49:26.139 19 DEBUG ovn_octavia_provider.helper [-] Handling request lb_delete with info {'id': '19d8e465-c704-40a9-b1fd-5b0824408e5d', 'cascade': True} request_handler /usr/lib/python3.6/site-packages/ovn_octavia_provider/helper.py:303
2021-07-17 13:49:26.196 19 ERROR ovn_octavia_provider.helper [-] Exception occurred during deletion of loadbalancer: RuntimeError: dictionary changed size during iteration
2021-07-17 13:49:26.196 19 ERROR ovn_octavia_provider.helper Traceback (most recent call last):
2021-07-17 13:49:26.196 19 ERROR ovn_octavia_provider.helper   File "/usr/lib/python3.6/site-packages/ovn_octavia_provider/helper.py", line 907, in lb_delete
2021-07-17 13:49:26.196 19 ERROR ovn_octavia_provider.helper     status = self._lb_delete(loadbalancer, ovn_lb, status)
2021-07-17 13:49:26.196 19 ERROR ovn_octavia_provider.helper   File "/usr/lib/python3.6/site-packages/ovn_octavia_provider/helper.py", line 960, in _lb_delete
2021-07-17 13:49:26.196 19 ERROR ovn_octavia_provider.helper     for ls in self._find_lb_in_table(ovn_lb, 'Logical_Switch'):
2021-07-17 13:49:26.196 19 ERROR ovn_octavia_provider.helper   File "/usr/lib/python3.6/site-packages/ovn_octavia_provider/helper.py", line 289, in _find_lb_in_table
2021-07-17 13:49:26.196 19 ERROR ovn_octavia_provider.helper     return [item for item in self.ovn_nbdb_api.tables[table].rows.values()
2021-07-17 13:49:26.196 19 ERROR ovn_octavia_provider.helper   File "/usr/lib/python3.6/site-packages/ovn_octavia_provider/helper.py", line 289, in <listcomp>
2021-07-17 13:49:26.196 19 ERROR ovn_octavia_provider.helper     return [item for item in self.ovn_nbdb_api.tables[table].rows.values()
2021-07-17 13:49:26.196 19 ERROR ovn_octavia_provider.helper   File "/usr/lib64/python3.6/_collections_abc.py", line 761, in __iter__
2021-07-17 13:49:26.196 19 ERROR ovn_octavia_provider.helper     for key in self._mapping:
2021-07-17 13:49:26.196 19 ERROR ovn_octavia_provider.helper RuntimeError: dictionary changed size during iteration
2021-07-17 13:49:26.196 19 ERROR ovn_octavia_provider.helper
2021-07-17 13:49:26.446 13 DEBUG octavia.common.keystone [req-267feb7e-2235-43d9-bec8-88ff532b9019 - 1e38d4dfbfb7427787725df69fabc22b - default default] Request path is / and it does not require keystone authentication process_request /usr/lib/python3.6/site-packages/octavia/common/keystone.py:77
2021-07-17 13:49:26.554 19 DEBUG ovn_octavia_provider.helper [-] Updating status to octavia: {'loadbalancers': [{'id': '19d8e465-c704-40a9-b1fd-5b0824408e5d', 'provisioning_status': 'ERROR', 'operating_status': 'ERROR'}], 'listeners': [{'id': '0806594a-4ed7-4889-81fa-6fd8d02b0d80', 'provisioning_status': 'DELETED', 'operating_status': 'OFFLINE'}], 'pools': [{'id': 'b8a98db0-6d2e-4745-b533-d2eb3548d1b9', 'provisioning_status': 'DELETED'}], 'members': [{'id': '08464181-728b-425a-b690-d3eb656f7e0a', 'provisioning_status': 'DELETED'}]} _update_status_to_octavia /usr/lib/python3.6/site-packages/ovn_octavia_provider/helper.py:32

The problem here is that using rows.values() is inherently racy: if
there are multiple threads running, this can eventually happen.
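
The usual remedy (a sketch of the general pattern, not necessarily the
merged fix) is to iterate over a snapshot of the mapping:

    def find_rows(table_rows, wanted):
        """Filter an ovsdbapp rows mapping without racing the IDL.

        list() copies the values up front, so concurrent row insertions
        or deletions can no longer raise 'dictionary changed size during
        iteration' mid-loop.
        """
        return [row for row in list(table_rows.values()) if wanted(row)]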

** Affects: neutron
 Importance: High
 Assignee: Brian Haley (brian-haley)
 Status: In Progress


** Tags: ovn-octavia-provider

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1936959

Title:
  [OVN Octavia Provider] Unable to delete Load Balancer with
  PENDING_DELETE

Status in neutron:
  In Progress

Bug description:
  While attempting to delete a Load Balancer, the provisioning status is
  moved to PENDING_DELETE and remains that way, preventing the deletion
  process from finalizing.

  The following tracebacks were found in the logs regarding that
  specific lb:

  2021-07-17 13:49:26.131 19 INFO octavia.api.v2.controllers.load_balancer [req-b8b3cbd8-3014-4c45-9680-d4c67346ed1c - 1e38d4dfbfb7427787725df69fabc22b - default default] Sending delete Load Balancer 19d8e465-c704-40a9-b1fd-5b0824408e5d to provider ovn
  2021-07-17 13:49:26.139 19 D

[Yahoo-eng-team] [Bug 1856600] Re: Unit test jobs are failing with ImportError: cannot import name 'engine' from 'flake8'

2021-07-28 Thread Brian Haley
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1856600

Title:
  Unit test jobs are failing with ImportError: cannot import name
  'engine' from 'flake8'

Status in neutron:
  Invalid

Bug description:
  Neutron unit test CI jobs are failing with the following error:

  =
  Failures during discovery
  =
  --- import errors ---
  Failed to import test module: neutron.tests.unit.hacking.test_checks
  Traceback (most recent call last):
    File "/usr/lib/python3.7/unittest/loader.py", line 436, in _find_test_path
      module = self._get_module_from_name(name)
    File "/usr/lib/python3.7/unittest/loader.py", line 377, in _get_module_from_name
      __import__(name)
    File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/unit/hacking/test_checks.py", line 15, in <module>
      from flake8 import engine
  ImportError: cannot import name 'engine' from 'flake8' (/home/zuul/src/opendev.org/openstack/neutron/.tox/py37/lib/python3.7/site-packages/flake8/__init__.py)

  Example:
  https://e859f0a6f5995c9142c5-a232ce3bdc50fca913ceba9a1c600c62.ssl.cf5.rackcdn.com/572767/23/check/openstack-tox-py37/1d036e0/job-output.txt

  It looks like flake8 no longer has an engine, but they had kept the
  API for backward compatibility [1]; perhaps they broke it somehow.

  [1] based on comment in
  https://gitlab.com/pycqa/flake8/blob/master/src/flake8/api/legacy.py#L3
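
  For comparison, the entry point flake8 kept supporting is the legacy
  API module; a minimal usage sketch:

      from flake8.api import legacy as flake8

      style_guide = flake8.get_style_guide(select=['E', 'W'])
      report = style_guide.check_files(['neutron/'])
      print(report.total_errors)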

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1856600/+subscriptions




[Yahoo-eng-team] [Bug 1816485] Re: [rfe] change neutron process names to match their role

2021-07-28 Thread Brian Haley
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1816485

Title:
  [rfe] change neutron process names to match their role

Status in neutron:
  Fix Released

Bug description:
  See the commit message description here:
  https://review.openstack.org/#/c/637019/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1816485/+subscriptions




[Yahoo-eng-team] [Bug 1642770] Re: Security group code is doing unnecessary work removing chains

2021-07-28 Thread Brian Haley
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1642770

Title:
  Security group code is doing unnecessary work removing chains

Status in neutron:
  Fix Released

Bug description:
  The security group code is generating a lot of these messages when
  trying to boot VMs:

  Attempted to remove chain sg-chain which does not exist

  There are also ones specific to the port.  It seems to be calling
  remove_chain() even when it's a new port that is initially setting up
  its filter.  I dropped a print_stack() into remove_chain() and see
  tracebacks like this:

  Prepare port filter for e8f41910-c24e-41f1-ae7f-355e9bb1d18a _apply_port_filter /opt/stack/neutron/neutron/agent/securitygroups_rpc.py:163
  Preparing device (e8f41910-c24e-41f1-ae7f-355e9bb1d18a) filter prepare_port_filter /opt/stack/neutron/neutron/agent/linux/iptables_firewall.py:170
  Attempted to remove chain sg-chain which does not exist remove_chain /opt/stack/neutron/neutron/agent/linux/iptables_manager.py:177
    File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 214, in main
      result = function(*args, **kwargs)
    File "/usr/local/lib/python2.7/dist-packages/ryu/lib/hub.py", line 54, in _launch
      return func(*args, **kwargs)
    File "/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_ryuapp.py", line 37, in agent_main_wrapper
      ovs_agent.main(bridge_classes)
    File "/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 2177, in main
      agent.daemon_loop()
    File "/usr/local/lib/python2.7/dist-packages/osprofiler/profiler.py", line 154, in wrapper
      return f(*args, **kwargs)
    File "/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 2098, in daemon_loop
      self.rpc_loop(polling_manager=pm)
    File "/usr/local/lib/python2.7/dist-packages/osprofiler/profiler.py", line 154, in wrapper
      return f(*args, **kwargs)
    File "/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 2049, in rpc_loop
      port_info, ovs_restarted)
    File "/usr/local/lib/python2.7/dist-packages/osprofiler/profiler.py", line 154, in wrapper
      return f(*args, **kwargs)
    File "/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 1657, in process_network_ports
      port_info.get('updated', set()))
    File "/opt/stack/neutron/neutron/agent/securitygroups_rpc.py", line 266, in setup_port_filters
      self.prepare_devices_filter(new_devices)
    File "/opt/stack/neutron/neutron/agent/securitygroups_rpc.py", line 131, in decorated_function
      *args, **kwargs)
    File "/opt/stack/neutron/neutron/agent/securitygroups_rpc.py", line 139, in prepare_devices_filter
      self._apply_port_filter(device_ids)
    File "/opt/stack/neutron/neutron/agent/securitygroups_rpc.py", line 164, in _apply_port_filter
      self.firewall.prepare_port_filter(device)
    File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
      self.gen.next()
    File "/opt/stack/neutron/neutron/agent/firewall.py", line 139, in defer_apply
      self.filter_defer_apply_off()
    File "/opt/stack/neutron/neutron/agent/linux/iptables_firewall.py", line 838, in filter_defer_apply_off
      self._pre_defer_unfiltered_ports)
    File "/opt/stack/neutron/neutron/agent/linux/iptables_firewall.py", line 248, in _remove_chains_apply
      self._remove_chain_by_name_v4v6(SG_CHAIN)
    File "/opt/stack/neutron/neutron/agent/linux/iptables_firewall.py", line 279, in _remove_chain_by_name_v4v6
      self.iptables.ipv4['filter'].remove_chain(chain_name)
    File "/opt/stack/neutron/neutron/agent/linux/iptables_manager.py", line 178, in remove_chain
      traceback.print_stack()

  Looking at the code, there's a couple of things that are interesting:

  1) prepare_port_filter() calls self._remove_chains() - why?
  2) in the "defer" case above we always do _remove_chains_apply()/_setup_chains_apply() - is there some way to skip the remove? (see the sketch below)
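
  One cheap mitigation for (2), sketched here as a hypothetical helper
  rather than the actual change: make chain removal a no-op when the
  chain was never created, so the defer path stops scanning every rule
  for chains that do not exist.

      def remove_chain(chains, chain_name):
          """Remove a chain, skipping the expensive path when it is absent."""
          if chain_name not in chains:
              return  # previously: log a warning and still walk all rules
          chains.remove(chain_name)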

  This also led to us timing how long it's taking in the remove_chain()
  code, since that's where the message is getting printed.  As the
  number of ports and rules grows, it's spending more time spinning
  through chains and rules.  It looks like that can be helped with a
  small code change, which is just fallout from the real problem.  I'll
  send that out since it helps a little.

  More work still required.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1642770/+subscriptions



[Yahoo-eng-team] [Bug 1563069] Re: Centralize Configuration Options

2021-07-28 Thread Brian Haley
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1563069

Title:
  Centralize Configuration Options

Status in neutron:
  Fix Released

Bug description:
  [Overview]
  Refactor Neutron configuration options to be in one place 'neutron/conf' 
similar to the Nova implementation found here: 
http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/centralize-config-options.html

  This would allow for centralization of all configuration options and
  provide an easy way to import and gain access to the wide breadth of
  configuration options available to Neutron.

  [Proposal]

  1. Introduce a new package: neutron/conf

  Neutron Quotas Example:

  2. Group modules logically under new package:
2a. Example: options from neutron/quotas
2b. Move to neutron/conf/quotas/common.py
2c. Aggregate quota options in __init__.py
  3. Import neutron.conf.quotas for usage

  Neutron DB Example /w Agent Options:

  2. Group modules logically under new package:
2a. Example: options from neutron/db/agents_db.py
2b. Move to neutron/conf/db/agents.py
2c. Aggregate db options in __init__.py
  3. Import neutron.conf.db for usage

  Neutron DB Example /w Migration CLI:

  2. Group modules logically under new package:
2a. Example: options from neutron/db/migrations/cli.py
2b. Move to neutron/conf/db/migrations_cli.py
2c. Migrations CLI does not get aggregated in __init__.py
  3. Import neutron.conf.db.migrations_cli

  ** The neutron.opts list-options methods all get moved to neutron/conf
  as well, in their respective modules, and setup.cfg is modified for
  this adjustment.
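
  A minimal sketch of the proposed pattern (module and option names are
  illustrative only):

      # neutron/conf/quotas/__init__.py (hypothetical)
      from oslo_config import cfg

      quota_opts = [
          cfg.IntOpt('quota_network', default=100,
                     help='Number of networks allowed per project.'),
      ]

      def register_quota_opts(conf=cfg.CONF):
          conf.register_opts(quota_opts, 'QUOTAS')

      # Consumers then do:
      #   from neutron.conf import quotas
      #   quotas.register_quota_opts()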

  [Benefits]

  - As a developer I will find all config options in one place and will add 
further config options to that central place.
  - End user is not affected by this change.

  [Related information]
  [1] Nova Implementation: 
http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/centralize-config-options.html
  [2] Cross Project Spec: https://review.openstack.org/#/c/295543

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1563069/+subscriptions




[Yahoo-eng-team] [Bug 1942185] Re: Report a bug for practice

2021-08-31 Thread Brian Haley
Please only file valid bugs against projects.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1942185

Title:
  Report a bug for practice

Status in neutron:
  Invalid

Bug description:
  ERROR: test suite for 
  --
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 227, in run
  self.tearDown()
File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 350, in tearDown
  self.teardownContext(ancestor)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1942185/+subscriptions




[Yahoo-eng-team] [Bug 1942033] Re: Mechanism driver 'ovn' failed in update_port_postcommit: ovsdbapp.backend.ovs_idl.idlutils.RowNotFound

2021-09-03 Thread Brian Haley
** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1942033

Title:
  Mechanism driver 'ovn' failed in update_port_postcommit:
  ovsdbapp.backend.ovs_idl.idlutils.RowNotFound

Status in neutron:
  Invalid

Bug description:
  Tempest scenario test cases failed with the errors below in the neutron log:
  ==
  Aug 30 12:06:36.037228 e2e-os-scfc067 neutron-server[1910582]: ERROR neutron.plugins.ml2.managers [None req-99cc0c3f-60b3-4b7f-ab46-b98029a3e3c2 None None] Mechanism driver 'ovn' failed in update_port_postcommit: ovsdbapp.backend.ovs_idl.idlutils.RowNotFound: Cannot find Logical_Switch_Port with name=10ea-7e5d-4fb4-8b19-1bcb72f33566
  Aug 30 12:06:36.037228 e2e-os-scfc067 neutron-server[1910582]: ERROR neutron.plugins.ml2.managers Traceback (most recent call last):
  Aug 30 12:06:36.037228 e2e-os-scfc067 neutron-server[1910582]: ERROR neutron.plugins.ml2.managers   File "/opt/stack/new/neutron/neutron/plugins/ml2/managers.py", line 477, in _call_on_drivers
  Aug 30 12:06:36.037228 e2e-os-scfc067 neutron-server[1910582]: ERROR neutron.plugins.ml2.managers     getattr(driver.obj, method_name)(context)
  Aug 30 12:06:36.037228 e2e-os-scfc067 neutron-server[1910582]: ERROR neutron.plugins.ml2.managers   File "/opt/stack/new/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", line 791, in update_port_postcommit
  Aug 30 12:06:36.037228 e2e-os-scfc067 neutron-server[1910582]: ERROR neutron.plugins.ml2.managers     self._ovn_client.update_port(context._plugin_context, port,
  Aug 30 12:06:36.037228 e2e-os-scfc067 neutron-server[1910582]: ERROR neutron.plugins.ml2.managers   File "/opt/stack/new/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py", line 518, in update_port
  Aug 30 12:06:36.037228 e2e-os-scfc067 neutron-server[1910582]: ERROR neutron.plugins.ml2.managers     ovn_port = self._nb_idl.lookup('Logical_Switch_Port', port['id'])
  Aug 30 12:06:36.037228 e2e-os-scfc067 neutron-server[1910582]: ERROR neutron.plugins.ml2.managers   File "/opt/stack/new/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/impl_idl_ovn.py", line 207, in lookup
  Aug 30 12:06:36.037228 e2e-os-scfc067 neutron-server[1910582]: ERROR neutron.plugins.ml2.managers     return super().lookup(table, record, default=default, timeout=timeout,
  Aug 30 12:06:36.037228 e2e-os-scfc067 neutron-server[1910582]: ERROR neutron.plugins.ml2.managers   File "/usr/local/lib/python3.8/dist-packages/ovsdbapp/backend/ovs_idl/__init__.py", line 208, in lookup
  Aug 30 12:06:36.037228 e2e-os-scfc067 neutron-server[1910582]: ERROR neutron.plugins.ml2.managers     return self._lookup(table, record)
  Aug 30 12:06:36.037228 e2e-os-scfc067 neutron-server[1910582]: ERROR neutron.plugins.ml2.managers   File "/usr/local/lib/python3.8/dist-packages/ovsdbapp/backend/ovs_idl/__init__.py", line 268, in _lookup
  Aug 30 12:06:36.037228 e2e-os-scfc067 neutron-server[1910582]: ERROR neutron.plugins.ml2.managers     row = idlutils.row_by_value(self, rl.table, rl.column, record)
  Aug 30 12:06:36.037228 e2e-os-scfc067 neutron-server[1910582]: ERROR neutron.plugins.ml2.managers   File "/usr/local/lib/python3.8/dist-packages/ovsdbapp/backend/ovs_idl/idlutils.py", line 114, in row_by_value
  Aug 30 12:06:36.037228 e2e-os-scfc067 neutron-server[1910582]: ERROR neutron.plugins.ml2.managers     raise RowNotFound(table=table, col=column, match=match)
  Aug 30 12:06:36.037228 e2e-os-scfc067 neutron-server[1910582]: ERROR neutron.plugins.ml2.managers ovsdbapp.backend.ovs_idl.idlutils.RowNotFound: Cannot find Logical_Switch_Port with name=10ea-7e5d-4fb4-8b19-1bcb72f33566
  Aug 30 12:06:36.037228 e2e-os-scfc067 neutron-server[1910582]: ERROR neutron.plugins.ml2.managers
  ==

  For more detailed logs, please check: https://elab-os-logsrv.delllabs.net/33/786933/6/check/DellEMC_SC_FC/89b4591/DellEMC_SC_FC/377/logs

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1942033/+subscriptions




[Yahoo-eng-team] [Bug 1827489] Re: Wrong IPV6 address provided by openstack server create

2022-03-23 Thread Brian Haley
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1827489

Title:
  Wrong IPV6 address provided by openstack server create

Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  The IPv6 address of an interface doesn't have to be derived from its
  MAC address. Newer kernels have an addr_gen_mode option which controls
  how the IPv6 address is calculated, see
  https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt
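
  For reference, the per-interface mode can be read from /proc inside
  the guest (a sketch; the interface name is just an example):

      def addr_gen_mode(ifname='eth1'):
          """Read the IPv6 address-generation mode for one interface.

          0 means EUI-64 (address derived from the MAC); see the
          kernel's ip-sysctl.txt for the meaning of modes 1-3.
          """
          path = '/proc/sys/net/ipv6/conf/%s/addr_gen_mode' % ifname
          with open(path) as f:
              return int(f.read())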

  I've encountered the problem when I booted up an image (RHEL8 in my
  case) which had the addr_gen_mode option set to 1 (meaning the IPv6
  address is randomized) by default. OpenStack (I had a Rocky
  deployment) didn't recognize this, and 'openstack server create'
  returned a wrong address, which led to tempest failures because,
  based on the 'openstack server create' output, the tests expected
  different addresses on the interfaces.

  Steps to reproduce:

  $ openstack server create --image <image> --flavor <flavor> --network <network1> --network <network2> --key-name <key> instance_name
  +-------------+--------------------------------------------------------------------------------------------------------+
  | Field       | Value                                                                                                  |
  +-------------+--------------------------------------------------------------------------------------------------------+
  ...
  | accessIPv4  |                                                                                                        |
  | accessIPv6  |                                                                                                        |
  | addresses   | tempest-network-smoke--884367252=10.100.0.5; tempest-network-smoke--18828977=2003::f816:3eff:febb:7456 |
  ...
  Then ssh to the instance and run the 'ip a' command:
  1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
      link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
      inet 127.0.0.1/8 scope host lo
         valid_lft forever preferred_lft forever
      inet6 ::1/128 scope host
         valid_lft forever preferred_lft forever
  2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc fq_codel state UP group default qlen 1000
      link/ether fa:16:3e:48:e8:b5 brd ff:ff:ff:ff:ff:ff
      inet 10.100.0.3/28 brd 10.100.0.15 scope global dynamic noprefixroute eth0
         valid_lft 86363sec preferred_lft 86363sec
      inet6 fe80::f816:3eff:fe48:e8b5/64 scope link
         valid_lft forever preferred_lft forever
  3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc fq_codel state UP group default qlen 1000
      link/ether fa:16:3e:bb:74:56 brd ff:ff:ff:ff:ff:ff
      inet6 2003::b47f:f400:ecca:2a55/64 scope global dynamic noprefixroute
         valid_lft 86385sec preferred_lft 14385sec
      inet6 fe80::7615:8d57:775d:fae/64 scope link noprefixroute
         valid_lft forever preferred_lft forever

  Notice that the eth1 interface has an IPv6 address which seems not to be
  derived from its MAC address. Also notice that the output of
  'openstack server create' returned the wrong address, a different one
  than what is actually set for eth1. It expected that the IPv6 address
  would be derived from the MAC address, but it wasn't.

  'openstack server create' should be able to detect the option in the
  image and behave accordingly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1827489/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1817657] Re: Neutron dhcpv6 windows 2008r2 and 2012r2 cannot get ipv6 address

2019-02-28 Thread Brian Haley
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1817657

Title:
  Neutron dhcpv6 windows 2008r2 and 2012r2 cannot get ipv6 address

Status in neutron:
  Invalid

Bug description:
  openstack queens

  controller Capture:
  [root@controller03 ~]# ip netns exec 
qdhcp-61d4061c-74dc-4d2e-a70c-3c86bfd8d5ef tcpdump -t -vv -n -i tapdf0252fa-7e  
-ee  -s 512 '(udp port 546 or 547) or icmp6'
  fa:16:3e:2a:d0:b8 > 33:33:ff:49:5a:ff, ethertype IPv6 (0x86dd), length 78: 
(hlim 255, next-header ICMPv6 (58) payload length: 24) :: > ff02::1:ff49:5aff: 
[icmp6 sum ok] ICMP6, neighbor solicitation, length 24, who has 
fe80::9b7:2b0b:5349:5aff
  fa:16:3e:2a:d0:b8 > 33:33:ff:49:5a:ff, ethertype IPv6 (0x86dd), length 78: 
(hlim 255, next-header ICMPv6 (58) payload length: 24) :: > ff02::1:ff49:5aff: 
[icmp6 sum ok] ICMP6, neighbor solicitation, length 24, who has 
fe80::9b7:2b0b:5349:5aff

  
  compute Capture:
  [root@compute15 ~]# tcpdump -t -vv -n -i tap2d111b01-eb  -ee  -s 512 '(udp 
port 546 or 547) or icmp6'
  fa:16:3e:2a:d0:b8 > 33:33:ff:49:5a:ff, ethertype IPv6 (0x86dd), length 78: 
(hlim 255, next-header ICMPv6 (58) payload length: 24) :: > ff02::1:ff49:5aff: 
[icmp6 sum ok] ICMP6, neighbor solicitation, length 24, who has 
fe80::9b7:2b0b:5349:5aff
  fa:16:3e:2a:d0:b8 > 33:33:00:00:00:02, ethertype IPv6 (0x86dd), length 70: 
(hlim 255, next-header ICMPv6 (58) payload length: 16) fe80::9b7:2b0b:5349:5aff 
> ff02::2: [icmp6 sum ok] ICMP6, router solicitation, length 16
source link-address option (1), length 8 (1): fa:16:3e:2a:d0:b8
  0x:  fa16 3e2a d0b8
  fa:16:3e:2a:d0:b8 > 33:33:00:00:00:01, ethertype IPv6 (0x86dd), length 86: 
(hlim 255, next-header ICMPv6 (58) payload length: 32) fe80::9b7:2b0b:5349:5aff 
> ff02::1: [icmp6 sum ok] ICMP6, neighbor advertisement, length 32, tgt is 
fe80::9b7:2b0b:5349:5aff, Flags [override]
destination link-address option (2), length 8 (1): fa:16:3e:2a:d0:b8
  0x:  fa16 3e2a d0b8
  fa:16:3e:2a:d0:b8 > 33:33:00:00:00:02, ethertype IPv6 (0x86dd), length 70: 
(hlim 255, next-header ICMPv6 (58) payload length: 16) fe80::9b7:2b0b:5349:5aff 
> ff02::2: [icmp6 sum ok] ICMP6, router solicitation, length 16
source link-address option (1), length 8 (1): fa:16:3e:2a:d0:b8
  0x:  fa16 3e2a d0b8
  fa:16:3e:2a:d0:b8 > 33:33:00:00:00:02, ethertype IPv6 (0x86dd), length 70: 
(hlim 255, next-header ICMPv6 (58) payload length: 16) fe80::9b7:2b0b:5349:5aff 
> ff02::2: [icmp6 sum ok] ICMP6, router solicitation, length 16
source link-address option (1), length 8 (1): fa:16:3e:2a:d0:b8
  0x:  fa16 3e2a d0b8
  fa:16:3e:2a:d0:b8 > 33:33:00:01:00:02, ethertype IPv6 (0x86dd), length 144: 
(hlim 1, next-header UDP (17) payload length: 90) 
fe80::9b7:2b0b:5349:5aff.dhcpv6-client > ff02::1:2.dhcpv6-server: [bad udp 
cksum 0xe0fc -> 0x023c!] dhcp6 solicit (xid=f66f58 (elapsed-time 0) (client-ID 
hwaddr/time type 1 time 604290846 fa163e2ad0b8) (IA_NA IAID:240276480 T1:0 
T2:0) (Client-FQDN) (vendor-class) (option-request DNS-search-list DNS-server 
vendor-specific-info Client-FQDN))
  fa:16:3e:2a:d0:b8 > 33:33:00:01:00:02, ethertype IPv6 (0x86dd), length 144: 
(hlim 1, next-header UDP (17) payload length: 90) 
fe80::9b7:2b0b:5349:5aff.dhcpv6-client > ff02::1:2.dhcpv6-server: [bad udp 
cksum 0xe0fc -> 0x01d8!] dhcp6 solicit (xid=f66f58 (elapsed-time 100) 
(client-ID hwaddr/time type 1 time 604290846 fa163e2ad0b8) (IA_NA 
IAID:240276480 T1:0 T2:0) (Client-FQDN) (vendor-class) (option-request 
DNS-search-list DNS-server vendor-specific-info Client-FQDN))
  fa:16:3e:2a:d0:b8 > 33:33:00:01:00:02, ethertype IPv6 (0x86dd), length 144: 
(hlim 1, next-header UDP (17) payload length: 90) 
fe80::9b7:2b0b:5349:5aff.dhcpv6-client > ff02::1:2.dhcpv6-server: [bad udp 
cksum 0xe0fc -> 0x0110!] dhcp6 solicit (xid=f66f58 (elapsed-time 300) 
(client-ID hwaddr/time type 1 time 604290846 fa163e2ad0b8) (IA_NA 
IAID:240276480 T1:0 T2:0) (Client-FQDN) (vendor-class) (option-request 
DNS-search-list DNS-server vendor-specific-info Client-FQDN))
  fa:16:3e:2a:d0:b8 > 33:33:00:01:00:02, ethertype IPv6 (0x86dd), length 144: 
(hlim 1, next-header UDP (17) payload length: 90) 
fe80::9b7:2b0b:5349:5aff.dhcpv6-client > ff02::1:2.dhcpv6-server: [bad udp 
cksum 0xe0fc -> 0xff7f!] dhcp6 solicit (xid=f66f58 (elapsed-time 700) 
(client-ID hwaddr/time type 1 time 604290846 fa163e2ad0b8) (IA_NA 
IAID:240276480 T1:0 T2:0) (Client-FQDN) (vendor-class) (option-request 
DNS-search-list DNS-server vendor-specific-info Client-FQDN))


  1、Centos7 and 6 can get the ipv6 address normally
  2、Windows server can not get

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1817657/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1818566] Re: If ipv6 is disabled through the kernel, neutron-dhcp-agent fails to create the tap devices due to error regarding ipv6

2019-03-04 Thread Brian Haley
*** This bug is a duplicate of bug 1618878 ***
https://bugs.launchpad.net/bugs/1618878

** This bug has been marked a duplicate of bug 1618878
   Disabling IPv6 on an interface fails if IPv6 is completely disabled in the 
kernel

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1818566

Title:
  If ipv6 is disabled through the kernel, neutron-dhcp-agent fails to
  create the tap devices due to error regarding ipv6

Status in neutron:
  New

Bug description:
  If we disable ipv6 using ipv6.disable=1 at the kernel runtime,
  neutron-dhcp-agent stops creating the tap devices and fails here:

  
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.linux.utils 
[-] Exit code: 255; Stdin: ; Stdout: ; Stderr: sysctl: cannot stat 
/proc/sys/net/ipv6/conf/default/accept_ra: No such file or directory
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent 
[-] Unable to enable dhcp for 310b9752-06a5-4d7b-98ae-1ba8536e22fa.
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent 
Traceback (most recent call last):
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent   
File "/usr/lib/python2.7/site-packages/neutron/agent/dhcp/agent.py", line 140, 
in call_driver
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent
 getattr(driver, action)(**action_kwargs)
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent   
File "/usr/lib/python2.7/site-packages/neutron/agent/linux/dhcp.py", line 213, 
in enable
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent
 interface_name = self.device_manager.setup(self.network)
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent   
File "/usr/lib/python2.7/site-packages/neutron/agent/linux/dhcp.py", line 1441, 
in setup
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent
 n_const.ACCEPT_RA_DISABLED)
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent   
File "/usr/lib/python2.7/site-packages/neutron/agent/linux/interface.py", line 
260, in configure_ipv6_ra
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent
 'value': value}])
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent   
File "/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 
912, in execute
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent
 log_fail_as_error=log_fail_as_error, **kwargs)
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent   
File "/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 148, 
in execute
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent
 raise ProcessExecutionError(msg, returncode=returncode)
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent 
ProcessExecutionError: Exit code: 255; Stdin: ; Stdout: ; Stderr: sysctl: 
cannot stat /proc/sys/net/ipv6/conf/default/accept_ra: No such file or directory
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent
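
  A minimal sketch of the kind of guard that avoids this crash (names and
  placement are illustrative, not the actual fix tracked in bug 1618878):

    import os

    def ipv6_supported():
        # /proc/sys/net/ipv6 is absent when booted with ipv6.disable=1
        return os.path.exists('/proc/sys/net/ipv6')

    def set_default_accept_ra(value):
        if not ipv6_supported():
            return  # skip the sysctl instead of failing with ENOENT
        with open('/proc/sys/net/ipv6/conf/default/accept_ra', 'w') as f:
            f.write(str(value))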

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1818566/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1765691] Re: OVN vlan networks use geneve tunneling for SNAT traffic

2019-05-13 Thread Brian Haley
** Project changed: neutron => networking-ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1765691

Title:
  OVN vlan networks use geneve tunneling for SNAT traffic

Status in networking-ovn:
  Incomplete

Bug description:
  In OVN driver, traffic from vlan (or any other) tenant network to
  external network uses a geneve tunnel between the compute node and the
  gateway node. So MTU for the VLAN networks needs to account for geneve
  tunnel overhead.
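
  As a back-of-the-envelope illustration (the overhead constant below is
  an assumption; the exact value depends on the Geneve options OVN
  attaches):

    # Outer Ethernet is not counted against the inner MTU; a commonly
    # used allowance for outer IPv4 + UDP + Geneve header/options is
    # 58 bytes.
    GENEVE_OVERHEAD = 58

    def max_tenant_mtu(physical_mtu):
        return physical_mtu - GENEVE_OVERHEAD

    print(max_tenant_mtu(1500))  # 1442 on a standard 1500-byte physnet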

  This doc [1] explains about OVN vlan networks and current issue and future 
enhancements.
  There is ovs-discuss mailing list thread [2] discussing the surprising geneve 
tunnel usage.

  [1] 
https://docs.google.com/document/d/1JecGIXPH0RAqfGvD0nmtBdEU1zflHACp8WSRnKCFSgg/edit#heading=h.st3xgs77kfx4
  [2] https://mail.openvswitch.org/pipermail/ovs-discuss/2018-April/046543.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-ovn/+bug/1765691/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1617452] Re: Can't update MAC address for direct-physical ports

2019-06-11 Thread Brian Haley
*** This bug is a duplicate of bug 1830383 ***
https://bugs.launchpad.net/bugs/1830383

This bug was fixed in a different way in
https://review.opendev.org/#/c/661298/ - I'll mark as a duplicate of the
bug mentioned there.

** This bug has been marked a duplicate of bug 1830383
   SRIOV: MAC address in use error

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1617452

Title:
  Can't update MAC address for direct-physical ports

Status in neutron:
  Confirmed

Bug description:
  This bug also affect nova and is described in details there:
  https://bugs.launchpad.net/nova/+bug/1617429

  Nova needs to be fixed in order to update the MAC address of the
  neutron ports of type direct-physical.

  A fix has been proposed for the nova issue.  However, sending a MAC
  address update to neutron-server reports the following error:

  Unable to complete operation on port
  d19c4cef-7415-4113-ba92-2495f00384d2, port is already bound, port
  type: hostdev_physical, old_mac 90:e2:ba:48:27:ed, new_mac
  00:1e:67:51:36:71.

  
  Description:
  

  Booting a guest with a neutron port of type 'direct-physical' will
  cause nova to allocate a PCI passthrough device for the port. The MAC
  address of the PCI passthrough device in the guest is not a virtual
  MAC address (fa:16:...) but the MAC address of the physical device
  since the full device is allocated to the guest (compared to SR-IOV
  where a virtual MAC address is arbitrarily chosen for the port).

  When resizing the guest (to another flavor), nova will allocate a new
  PCI device for the guest. After the resize, the guest will be bound to
  another PCI device which has a different MAC address. However the MAC
  address on the neutron port is not updated, causing DHCP to not work
  because the MAC address is unknown.

  The same issue can be observed when migrating a guest to another host.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1617452/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1832603] [NEW] BGP dynamic routing config doc should use openstack client in examples

2019-06-12 Thread Brian Haley
Public bug reported:

The BGP dynamic routing config doc at doc/source/admin/config-bgp-
dynamic-routing.rst in the neutron tree uses the deprecated neutron
client in its examples.  It should use the openstack client if
possible; looking at it, it seems all the options are supported there now.

** Affects: neutron
 Importance: Low
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1832603

Title:
  BGP dynamic routing config doc should use openstack client in examples

Status in neutron:
  Confirmed

Bug description:
  The BGP dynamic routing config doc at doc/source/admin/config-bgp-
  dynamic-routing.rst in the neutron tree uses the deprecated neutron
  client in its examples.  It should use the openstack client if
  possible; looking at it, it seems all the options are supported there
  now.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1832603/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1831726] Re: neutron-cli port-update ipv6 fixed_ips Covering previous

2019-06-20 Thread Brian Haley
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1831726

Title:
  neutron-cli port-update ipv6 fixed_ips Covering previous

Status in neutron:
  Invalid

Bug description:
  I created an IPv6 network, ipv6_net, which has two subnets:
  A: 2010:::3:/64 and B: 2011:::6:/64. I added subnet A to the
  router ipv6_router; the gateway of subnet A is 2010:::3::9. So far,
  there was no problem. Then I added the gateway of subnet B,
  2011:::6::9, to router ipv6_router using the command
     "neutron port-update --fixed-ip subnet_id=subnet B,
  ip_address=2011:::6::9 PORT (router interface of subnet A)".
  The result is that the gateway address of subnet A is overwritten by the
  gateway address of subnet B, but I hoped that the gateway address of
  subnet B would be appended to the PORT (because multiple IPv6 subnets
  attach to one internal router port). The command
     "openstack port set --fixed-ip subnet=subnet B,
  ip_address=2011:::6:9 PORT (router interface of subnet A)"
  does add the gateway address of subnet B to the PORT and does not
  overwrite the gateway address of subnet A.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1831726/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1834257] [NEW] dhcp-agent can overwhelm neutron server with dhcp_ready_on_ports RPC calls

2019-06-25 Thread Brian Haley
Public bug reported:

The Neutron dhcp-agent reports all ready ports to the Neutron server
via the dhcp_ready_on_ports() RPC call. When the dhcp-agent gets ports
ready faster than the server can process them, the amount of ports per
RPC call can grow so high (e.g. 1 Ports) that the neutron server
never has a chance of processing the request before the RPC timeout
kills the request, leading to the dhcp-agent sending the request again,
possibly with even more ports than before, resulting in an endless loop
of dhcp_ready_on_ports() calls. This happens especially on agent
startup.

We should use either a smaller fixed amount, or use an algorithm to
reduce the number being sent in the event a message timeout is received.

** Affects: neutron
 Importance: High
 Assignee: Brian Haley (brian-haley)
 Status: In Progress


** Tags: l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1834257

Title:
  dhcp-agent can overwhelm neutron server with dhcp_ready_on_ports RPC
  calls

Status in neutron:
  In Progress

Bug description:
  The Neutron dhcp-agent reports all ready ports to the Neutron server
  via the dhcp_ready_on_ports() RPC call. When the dhcp-agent gets ports
  ready faster than the server can process them, the amount of ports per
  RPC call can grow so high (e.g. 1 Ports) that the neutron server
  never has a chance of processing the request before the RPC timeout
  kills the request, leading to the dhcp-agent sending the request
  again, possibly with even more ports than before, resulting in an
  endless loop of dhcp_ready_on_ports() calls. This happens especially
  on agent startup.

  We should use either a smaller fixed amount, or use an algorithm to
  reduce the number being sent in the event a message timeout is
  received.
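
  A minimal sketch of the batching idea (names and sizes are illustrative,
  not the merged patch):

    MAX_BATCH = 64

    def send_ready_ports(rpc_client, port_ids):
        pending = list(port_ids)
        batch_size = MAX_BATCH
        while pending:
            batch, pending = pending[:batch_size], pending[batch_size:]
            try:
                rpc_client.dhcp_ready_on_ports(batch)
            except TimeoutError:  # stand-in for oslo.messaging's timeout
                batch_size = max(1, batch_size // 2)  # back off on timeout
                pending = batch + pending             # re-queue and retry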

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1834257/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1719494] Re: When create a fip with subnet_id and ip specified, the created fip won't be the ip specified

2019-06-27 Thread Brian Haley
*** This bug is a duplicate of bug 1738612 ***
https://bugs.launchpad.net/bugs/1738612

This was fixed in https://review.opendev.org/#/c/528535/

** This bug has been marked a duplicate of bug 1738612
   Floating IP update breakes decomposed plugins

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1719494

Title:
  When create a fip with subnet_id and ip specified, the created fip
  won't be the ip specified

Status in neutron:
  New

Bug description:
  When creating a fip with both the subnet_id and ip options specified,
  the created fip won't have the IP specified; in the original code it is
  replaced based on the subnet_id, which confuses users. For example:

  #neutron floatingip-create --floating-ip-address 10.10.0.10 --subnet 
 
  then 10.10.0.2 was allocated, not 10.10.0.10

  If only floating-ip-address is specified, the result is fine
  #neutron floatingip-create --floating-ip-address 10.10.0.10 
  then 10.10.0.10 was allocated

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1719494/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1759773] Re: FWaaS: Invalid port error on associating L3 ports (Router in HA) to firewall group

2019-08-27 Thread Brian Haley
*** This bug is a duplicate of bug 1762454 ***
https://bugs.launchpad.net/bugs/1762454

** This bug has been marked a duplicate of bug 1762454
   FWaaS: Invalid port error on associating ports (distributed router) to 
firewall group

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1759773

Title:
  FWaaS: Invalid port error on associating L3 ports  (Router in HA) to
  firewall group

Status in neutron:
  Confirmed

Bug description:
  From: Ignazio Cassano:

  I am trying to use fwaas v2 on centos 7 openstack ocata.
  After creating firewall rules and a policy I am trying to create a
  firewall group.
  I am able to create the firewall group, but it does not work when I try to
  set the ports into it.

  openstack firewall group set --port
  87173e27-c2b3-4a67-83d0-d8645d9f309b  prova
  Failed to set firewall group 'prova': Firewall Group Port
  87173e27-c2b3-4a67-83d0-d8645d9f309b is invalid
  Neutron server returns request_ids:
  ['req-9ef8ad1e-9fad-4956-8aff-907c32d01e1f']

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1759773/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1840952] Re: no tenant network is available for allocation

2019-08-29 Thread Brian Haley
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1840952

Title:
  no tenant network is available for allocation

Status in neutron:
  Invalid

Bug description:
  Hi, I don't know if it is a bug, but I spent 3 days with this error.
  I read and redid the neutron option 2 (networking as a service) setup.
  . I use the Stein version
  . I installed a br0 bridge

  My questions are:
  . in linuxbridge_agent.ini, must I set provider:eno1 or provider:br0?
  . how do I create a provider tenant with the new openstack client CLI?

  This message was raised by: openstack network create edge --enable
  --internal

  (All previous commands are OK (horizon, controller, compute node,
  domain, project, user, endpoint, etc.). At startup and verification all
  documentation commands are OK.)

  I don't know where to investigate.
  Thanks,
  Bruno

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1840952/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1843269] [NEW] Nova notifier called even if set to False

2019-09-09 Thread Brian Haley
Public bug reported:

If the config option notify_nova_on_port_status_changes is set to False,
the Nova notifier can still be called in update_device_up().  This could
throw an exception because plugin.nova_notifier is not initialized
otherwise.  Since it defaults to True we've never seen this failure.

** Affects: neutron
 Importance: Low
 Assignee: Brian Haley (brian-haley)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1843269

Title:
  Nova notifier called even if set to False

Status in neutron:
  In Progress

Bug description:
  If the config option notify_nova_on_port_status_changes is set to
  False, the Nova notifier can still be called in update_device_up().
  This could throw an exception because plugin.nova_notifier is not
  initialized otherwise.  Since it defaults to True we've never seen
  this failure.
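
  The fix amounts to a guard along these lines (a sketch; the exact call
  site in update_device_up() may differ):

    from oslo_config import cfg

    def maybe_notify_nova(plugin, port):
        if not cfg.CONF.notify_nova_on_port_status_changes:
            return  # nova_notifier was never initialized, don't touch it
        plugin.nova_notifier.notify_port_active_direct(port)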

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1843269/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1843025] Re: FWaaS v2 fails to add ICMPv6 rules via horizon

2019-09-11 Thread Brian Haley
*** This bug is a duplicate of bug 1799904 ***
https://bugs.launchpad.net/bugs/1799904

** This bug has been marked a duplicate of bug 1799904
   ICMPv6 is not an available protocol when creating Firewall-Rule

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1843025

Title:
  FWaaS v2 fails to add ICMPv6 rules via horizon

Status in neutron:
  In Progress

Bug description:
  In rocky, FWaaS v2 fails to add the correct ip6tables rules for
  ICMPv6.

  Steps to reproduce:
  * Create rule with Protocol ICMP, IP version 6 in horizon
  * Add the rule to a policy, and make sure the firewall group with that policy 
is attached to a port
  * Login to the neutron network node that has the netns for your router and 
run ip6tables-save

  Observe that your rule is added like:
  -A neutron-l3-agent-iv63872a6fc -s 2001:db8:1d00:13::/64 -p icmp -j 
neutron-l3-agent-accepted

  It should've added:
  -A neutron-l3-agent-iv63872a6fc -s 2001:db8:1d00:13::/64 -p ipv6-icmp -j 
neutron-l3-agent-accepted
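
  The missing translation boils down to something like this sketch (a
  hypothetical helper, not the actual FWaaS code):

    def iptables_protocol(protocol, ip_version):
        # ip6tables expects 'ipv6-icmp'; a plain 'icmp' match is wrong
        # in an IPv6 chain.
        if protocol == 'icmp' and ip_version == 6:
            return 'ipv6-icmp'
        return protocol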

  Ubuntu 18.04
  neutron-l3-agent  2:13.0.4-0ubuntu1~cloud0
  python-neutron-fwaas  1:13.0.2-0ubuntu1~cloud0

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1843025/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1844822] Re: linuxbridge agent crash after R ->S upgrade

2019-09-20 Thread Brian Haley
msgpack is actually included as a requirement of oslo.privsep, with the
following requirement:

msgpack>=0.5.0 # Apache-2.0

From the changelog at
https://github.com/msgpack/msgpack-python/blob/master/ChangeLog.rst it
looks like max_bin_len was removed in 0.6.1, so perhaps an upper limit
needs to be set on the version (e.g. msgpack>=0.5.0,<0.6.1).

Will re-assign to oslo.privsep for further investigation.

** Project changed: neutron => oslo.privsep

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1844822

Title:
  linuxbridge agent crash after R ->S upgrade

Status in oslo.privsep:
  New

Bug description:
  After upgrading neutron from Rocky to Stein (openstack-ansible
  deployment on ubuntu 16) I ran into an issue where the linuxbridge
  agent would crash on startup:

  root@bctlpicrouter01:/var/log/neutron#
  /openstack/venvs/neutron-19.0.4.dev1/bin/neutron-linuxbridge-agent
  --config-file /etc/neutron/neutron.conf --config-file
  /etc/neutron/plugins/ml2/ml2_conf.ini --config-file
  /etc/neutron/plugins/ml2/linuxbridge_agent.ini

  Exception in thread privsep_reader:

  Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
  self.run()
File "/usr/lib/python2.7/threading.py", line 754, in run
  self.__target(*self.__args, **self.__kwargs)
File 
"/openstack/venvs/neutron-19.0.4.dev1/lib/python2.7/site-packages/oslo_privsep/comm.py",
 line 130, in _reader_main
  for msg in reader:
File 
"/openstack/venvs/neutron-19.0.4.dev1/lib/python2.7/site-packages/six.py", line 
564, in next
  return type(self).__next__(self)
File 
"/openstack/venvs/neutron-19.0.4.dev1/lib/python2.7/site-packages/oslo_privsep/comm.py",
 line 77, in __next__
  return next(self.unpacker)
File "msgpack/_unpacker.pyx", line 562, in 
msgpack._cmsgpack.Unpacker.__next__
File "msgpack/_unpacker.pyx", line 493, in 
msgpack._cmsgpack.Unpacker._unpack
  ValueError: 1870054 exceeds max_bin_len(1048576)


  I was able to get around this problem by downgrading msgpack from
  0.6.1 to 0.5.6

To manage notifications about this bug go to:
https://bugs.launchpad.net/oslo.privsep/+bug/1844822/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1845324] [NEW] Openvswitch kernel module is sometimes not loaded running devstack

2019-09-25 Thread Brian Haley
Public bug reported:

In change https://review.opendev.org/#/c/661065/ we stopped
compiling openvswitch from source, which always did
a reload of the kernel module.  We've seen cases where
the module isn't loaded, so we need to always load the
module unconditionally to avoid this.

I'm working on a patch already, and it will need to go to the stable
branches as far as queens.

** Affects: neutron
 Importance: High
 Assignee: Brian Haley (brian-haley)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1845324

Title:
  Openvswitch kernel module is sometimes not loaded running devstack

Status in neutron:
  In Progress

Bug description:
  In change https://review.opendev.org/#/c/661065/ we stopped
  compiling openvswitch from source, which always did
  a reload of the kernel module.  We've seen cases where
  the module isn't loaded, so we need to always load the
  module unconditionally to avoid this.

  I'm working on a patch already, and it will need to go to the stable
  branches as far as queens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1845324/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1886909] Re: selection_fields for udp and sctp case doesn't work correctly

2020-07-09 Thread Brian Haley
I can confirm this is a bug in core OVN; we should be able to bump the
version used in neutron and the OVN provider driver to 20.06.1 and make
sure it's fixed.  We are still working on updates to Octavia to support
SCTP, so that should hopefully be supported soon.

** Project changed: networking-ovn => neutron

** Tags added: ovn ovn-octavia-provider

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1886909

Title:
  selection_fields for udp and sctp case doesn't work correctly

Status in neutron:
  Confirmed

Bug description:
  From https://bugzilla.redhat.com/show_bug.cgi?id=1846189

  Description of problem:
  [ovn 20.E]selection_fields for udp and sctp case doesn't work correctly

  Version-Release number of selected component (if applicable):
  # rpm -qa|grep ovn
  ovn2.13-central-2.13.0-34.el7fdp.x86_64
  ovn2.13-2.13.0-34.el7fdp.x86_64
  ovn2.13-host-2.13.0-34.el7fdp.x86_64

  
  How reproducible:
  always

  Steps to Reproduce:
  server:
rlRun "ovn-nbctl ls-add ls1"
  rlRun "ovn-nbctl lsp-add ls1 ls1p1"

  rlRun "ovn-nbctl lsp-set-addresses ls1p1 00:01:02:01:01:01"
  rlRun "ovn-nbctl lsp-add ls1 ls1p2"

  rlRun "ovn-nbctl lsp-set-addresses ls1p2
  00:01:02:01:01:02"

  rlRun "ovn-nbctl lsp-add ls1 ls1p3"
  rlRun "ovn-nbctl lsp-set-addresses ls1p3 00:01:02:01:01:04"

  rlRun "ovn-nbctl ls-add ls2"
  rlRun "ovn-nbctl lsp-add ls2 ls2p1"
  rlRun "ovn-nbctl lsp-set-addresses ls2p1 00:01:02:01:01:03"

  rlRun "ovs-vsctl add-port br-int vm1 -- set interface vm1 
type=internal"
  rlRun "ip netns add server0"
  rlRun "ip link set vm1 netns server0"
  rlRun "ip netns exec server0 ip link set lo up"
  rlRun "ip netns exec server0 ip link set vm1 up"
  rlRun "ip netns exec server0 ip link set vm1 address 
00:01:02:01:01:01"
  rlRun "ip netns exec server0 ip addr add 192.168.0.1/24 dev 
vm1"
  rlRun "ip netns exec server0 ip addr add 3001::1/64 dev vm1"
  rlRun "ip netns exec server0 ip route add default via 
192.168.0.254 dev vm1"
  rlRun "ip netns exec server0 ip -6 route add default via 
3001::a dev vm1"
  rlRun "ovs-vsctl set Interface vm1 
external_ids:iface-id=ls1p1"
rlRun "ovs-vsctl add-port br-int vm2 -- set interface vm2 
type=internal"
  rlRun "ip netns add server1"
  rlRun "ip link set vm2 netns server1"
  rlRun "ip netns exec server1 ip link set lo up"
  rlRun "ip netns exec server1 ip link set vm2 up"
  rlRun "ip netns exec server1 ip link set vm2 address 
00:01:02:01:01:02"
  rlRun "ip netns exec server1 ip addr add 192.168.0.2/24 dev 
vm2"
  rlRun "ip netns exec server1 ip addr add 3001::2/64 dev vm2"
  rlRun "ip netns exec server1 ip route add default via 
192.168.0.254 dev vm2"
  rlRun "ip netns exec server1 ip -6 route add default via 
3001::a dev vm2"
  rlRun "ovs-vsctl set Interface vm2 
external_ids:iface-id=ls1p2"

  rlRun "ovn-nbctl lr-add lr1"
  rlRun "ovn-nbctl lrp-add lr1 lr1ls1 00:01:02:0d:01:01 
192.168.0.254/24 3001::a/64"
  rlRun "ovn-nbctl lrp-add lr1 lr1ls2 00:01:02:0d:01:02 
192.168.1.254/24 3001:1::a/64"

  rlRun "ovn-nbctl lsp-add ls1 ls1lr1"
  rlRun "ovn-nbctl lsp-set-type ls1lr1 router"
  rlRun "ovn-nbctl lsp-set-options ls1lr1 router-port=lr1ls1"
  rlRun "ovn-nbctl lsp-set-addresses ls1lr1 \"00:01:02:0d:01:01 
192.168.0.254 3001::a\""
  rlRun "ovn-nbctl lsp-add ls2 ls2lr1"
  rlRun "ovn-nbctl lsp-set-type ls2lr1 router"
  rlRun "ovn-nbctl lsp-set-options ls2lr1 router-port=lr1ls2"
  rlRun "ovn-nbctl lsp-set-addresses ls2lr1 \"00:01:02:0d:01:02 
192.168.1.254 3001:1::a\""
rlRun "ovn-nbctl lrp-add lr1 lr1p 00:01:02:0d:0f:01 
172.16.1.254/24 2002::a/64"

  rlRun "ovn-nbctl lb-add lb0 192.168.2.1:12345 
192.168.0.1:12345,192.168.0.2:12345"
  rlRun "ovn-nbctl lb-add lb0 [3000::100]:12345 
[3001::1]:12345,[3001::2]:12345"
uuid=`ovn-nbctl list Load_Balancer |grep uuid|awk 
'{printf $3}'`

  rlRun "ovn-nbctl set load_balancer $uuid 
selection_fields=\"ip_src,ip_dst\""
  rlRun "ovn-nbctl show"
  rlRun "ovn-sbctl show"
ovn-nbctl set  Logical_Router lr1

[Yahoo-eng-team] [Bug 1887147] Re: neutron-linuxbridge-agent looping same as dhcp

2020-08-19 Thread Brian Haley
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1887147

Title:
  neutron-linuxbridge-agent looping same as dhcp

Status in neutron:
  Invalid

Bug description:
  I am trying to install https://docs.openstack.org/install-guide
  /openstack-services.html#minimal-deployment-for-ussuri on CentOS 8,
  with network provider option 1.

  
  For the reproduction steps I followed the install guide's minimal
  deployment for Ussuri.

  Other components did work after some trial and error, but the neutron
  linuxbridge-agent keeps looping and returning errors.

  Logs from linuxbridge-agent.log in the attachment additionally dhcp
  agent log returns similar problem in logs:


  
  2020-07-10 12:21:15.060 68787 DEBUG neutron.agent.dhcp.agent [-] Calling 
driver for network: 34dc4390-9448-4eba-8be2-a5c3f4cb94a5 action: enable 
call_driver /usr/lib/python3.6/site-packages/neutron/agent/dhcp/agent.py:163
  2020-07-10 12:21:15.060 68787 DEBUG neutron.agent.linux.utils [-] Unable to 
access /var/lib/neutron/dhcp/34dc4390-9448-4eba-8be2-a5c3f4cb94a5/pid; Error: 
[Errno 2] No such file or directory: 
'/var/lib/neutron/dhcp/34dc4390-9448-4eba-8be2-a5c3f4cb94a5/pid' 
get_value_from_file 
/usr/lib/python3.6/site-packages/neutron/agent/linux/utils.py:262
  2020-07-10 12:21:15.061 68787 INFO oslo.privsep.daemon [-] Running privsep 
helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 
'privsep-helper', '--config-file', '/usr/share/neutron/neutron-dist.conf', 
'--config-file', '/etc/neutron/neutron.conf', '--config-file', 
'/etc/neutron/dhcp_agent.ini', '--config-dir', 
'/etc/neutron/conf.d/neutron-dhcp-agent', '--privsep_context', 
'neutron.privileged.default', '--privsep_sock_path', 
'/tmp/tmphzark8uo/privsep.sock']
  2020-07-10 12:21:16.353 68787 CRITICAL oslo.privsep.daemon [-] privsep helper 
command exited non-zero (1)
  2020-07-10 12:21:16.354 68787 ERROR neutron.agent.dhcp.agent [-] Unable to 
enable dhcp for 34dc4390-9448-4eba-8be2-a5c3f4cb94a5.: 
oslo_privsep.daemon.FailedToDropPrivileges: privsep helper command exited 
non-zero (1)
  2020-07-10 12:21:16.354 68787 ERROR neutron.agent.dhcp.agent Traceback (most 
recent call last):
  2020-07-10 12:21:16.354 68787 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python3.6/site-packages/neutron/agent/dhcp/agent.py", line 178, in 
call_driver
  2020-07-10 12:21:16.354 68787 ERROR neutron.agent.dhcp.agent 
getattr(driver, action)(**action_kwargs)
  2020-07-10 12:21:16.354 68787 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python3.6/site-packages/neutron/agent/linux/dhcp.py", line 256, in 
enable
  2020-07-10 12:21:16.354 68787 ERROR neutron.agent.dhcp.agent 
common_utils.wait_until_true(self._enable, timeout=300)
  2020-07-10 12:21:16.354 68787 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python3.6/site-packages/neutron/common/utils.py", line 703, in 
wait_until_true
  2020-07-10 12:21:16.354 68787 ERROR neutron.agent.dhcp.agent while not 
predicate():
  2020-07-10 12:21:16.354 68787 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python3.6/site-packages/neutron/agent/linux/dhcp.py", line 268, in 
_enable
  2020-07-10 12:21:16.354 68787 ERROR neutron.agent.dhcp.agent 
interface_name = self.device_manager.setup(self.network)
  2020-07-10 12:21:16.354 68787 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python3.6/site-packages/neutron/agent/linux/dhcp.py", line 1652, in 
setup
  2020-07-10 12:21:16.354 68787 ERROR neutron.agent.dhcp.agent 
ip_lib.IPWrapper().ensure_namespace(network.namespace)
  2020-07-10 12:21:16.354 68787 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python3.6/site-packages/neutron/agent/linux/ip_lib.py", line 249, in 
ensure_namespace
  2020-07-10 12:21:16.354 68787 ERROR neutron.agent.dhcp.agent if not 
self.netns.exists(name):
  2020-07-10 12:21:16.354 68787 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python3.6/site-packages/neutron/agent/linux/ip_lib.py", line 728, in 
exists
  2020-07-10 12:21:16.354 68787 ERROR neutron.agent.dhcp.agent return 
network_namespace_exists(name)
  2020-07-10 12:21:16.354 68787 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python3.6/site-packages/neutron/agent/linux/ip_lib.py", line 936, in 
network_namespace_exists
  2020-07-10 12:21:16.354 68787 ERROR neutron.agent.dhcp.agent output = 
list_network_namespaces(**kwargs)
  2020-07-10 12:21:16.354 68787 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python3.6/site-packages/neutron/agent/linux/ip_lib.py", line 922, in 
list_network_namespaces
  2020-07-10 12:21:16.354 68787 ERROR neutron.agent.dhcp.agent return 
privileged.list_netns(**kwargs)
  2020-07-10 12:21:16.354 68787 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python3.6/site-packages/oslo_privsep/priv_context.py", line 246, in 
_wrap
  2020-07-10 12:21:16.354 68787 ERROR neutron.agent.dhcp.agent 

[Yahoo-eng-team] [Bug 1896592] Re: [neutron-tempest-plugin] test_dhcpv6_stateless_* clashing when creating a IPv6 subnet

2020-09-22 Thread Brian Haley
This is actually a bug in the Tempest repo, we changed the code in
neutron-tempest-plugin to be better at creating unique subnets in
https://review.opendev.org/#/c/560465/

Wondering if we need to have a similar change in Tempest.

Should we change the component or just add it as affected?  Either way it
seems to be affecting the neutron gate.

** Also affects: tempest
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1896592

Title:
  [neutron-tempest-plugin] test_dhcpv6_stateless_* clashing when
  creating a IPv6 subnet

Status in neutron:
  New
Status in tempest:
  New

Bug description:
  The following three test cases are clashing when creating, at the same time, 
an IPv6 subnet with the same CIDR:
  - test_dhcpv6_stateless_eui64
  - test_dhcpv6_stateless_no_ra
  - test_dhcpv6_stateless_no_ra_no_dhcp

  Log: https://61069af11b09b96273ad-d5a2c2135ef34e5fcff72992ca5eb476.ssl.cf2.rackcdn.com/662869/6/check/neutron-tempest-with-uwsgi/9b9c086/controller/logs/tempest_log.txt

  Snippet: http://paste.openstack.org/show/798195/

  Error:
  "Invalid input for operation: Requested subnet with cidr: 2001:db8::/64 for 
network: 31e04aec-34df-49dc-8a05-05813a37be98 overlaps with another subnet."
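
  A loose sketch of the randomization approach from the neutron-tempest-
  plugin change referenced above (names and the base prefix are
  illustrative):

    import random

    import netaddr

    def random_ipv6_cidr(base='2001:db8::/32', prefixlen=64):
        net = netaddr.IPNetwork(base)
        span = prefixlen - net.prefixlen
        offset = random.getrandbits(span) << (128 - prefixlen)
        addr = netaddr.IPAddress(int(net.network) + offset, version=6)
        return '%s/%d' % (addr, prefixlen)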

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1896592/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1896677] [NEW] [OVN Octavia Provider] OVN provider fails creating pool without subnet ID

2020-09-22 Thread Brian Haley
Public bug reported:

The OVN Octavia Provider driver requires a subnet ID be present in the
API call to create a pool.  Since this is an optional parameter
according to the API, it should be able to work without one.  The
octavia-lib DriverLibrary class provides methods to look it up given
other information we have in the API call, so we should support it.

** Affects: neutron
 Importance: High
 Assignee: Brian Haley (brian-haley)
 Status: New


** Tags: ovn-octavia-provider

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1896677

Title:
  [OVN Octavia Provider] OVN provider fails creating pool without subnet
  ID

Status in neutron:
  New

Bug description:
  The OVN Octavia Provider driver requires a subnet ID be present in the
  API call to create a pool.  Since this is an optional parameter
  according to the API, it should be able to work without one.  The
  octavia-lib DriverLibrary class provides methods to look it up given
  other information we have in the API call, so we should support it.
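
  A sketch of that lookup (hedged: the method comes from octavia-lib's
  DriverLibrary, but the surrounding flow is illustrative):

    from octavia_lib.api.drivers import driver_lib

    def fallback_member_subnet(pool):
        # When pool-create carries no subnet ID, fall back to the VIP
        # subnet of the owning load balancer.
        lb = driver_lib.DriverLibrary().get_loadbalancer(
            pool.loadbalancer_id)
        return lb.vip_subnet_id if lb else None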

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1896677/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1896678] [NEW] [OVN Octavia Provider] test_port_forwarding failing in gate

2020-09-22 Thread Brian Haley
Public bug reported:

ovn_octavia_provider.tests.functional.test_integration.TestOvnOctaviaProviderIntegration.test_port_forwarding
is currently failing in the check and gate queues and is under
investigation.

Will mark unstable to get the gate Green.

** Affects: neutron
 Importance: Undecided
 Assignee: Brian Haley (brian-haley)
 Status: In Progress


** Tags: ovn-octavia-provider

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1896678

Title:
  [OVN Octavia Provider] test_port_forwarding failing in gate

Status in neutron:
  In Progress

Bug description:
  
ovn_octavia_provider.tests.functional.test_integration.TestOvnOctaviaProviderIntegration.test_port_forwarding
  is currently failing in the check and gate queues and is under
  investigation.

  Will mark unstable to get the gate Green.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1896678/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1896827] Re: port update api should not lose foreign external_ids

2020-09-23 Thread Brian Haley
** Changed in: networking-ovn
   Status: New => Confirmed

** Project changed: networking-ovn => neutron

** Tags added: ovn

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1896827

Title:
  port update api should not lose foreign external_ids

Status in neutron:
  Confirmed

Bug description:
  Currently, the update_port api in networking_ovn does not seem to
  retain values in the external_ids that it may not know about.

  That is not proper behavior as external ids in the lsp may be
  storing data used by another entity in neutron, such as
  ovn-octavia-provider.

  Pseudo example:

  1- create neutron port (which creates lsp in ovn)
  2- add a new value to lsp's external_ids
  3- invoke neutron: port_update
  4- check an verify that after port update the key/value pair added in step 2 
is still present

  Ref code to networking-ovn port_update:

  https://github.com/openstack/networking-ovn/blob/51e4351309c1f38c2ed353e6547c60ae9d5d50f5/networking_ovn/common/ovn_client.py#L456
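
  A merge along these lines would preserve foreign keys (a sketch, not the
  eventual patch):

    def merged_external_ids(current, ours):
        # Start from what is already on the Logical_Switch_Port so keys
        # owned by others (e.g. ovn-octavia-provider) survive, then let
        # neutron overwrite only the keys it owns.
        new = dict(current)
        new.update(ours)
        return new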

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1896827/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1900763] [NEW] [OVN Octavia Provider] OVN provider status update failures can leave orphaned resources

2020-10-20 Thread Brian Haley
Public bug reported:

This is related to Octavia issue
https://storyboard.openstack.org/#!/story/2008254

When the OVN Octavia Provider driver calls into octavia-lib to update
the status of a loadbalancer, for example, the code in
helper:_update_status_to_octavia() might fail:

DEBUG ovn_octavia_provider.helper [-] Updating status to octavia: {'listeners': 
[{'id': '7033179d-2ddb-4714-9c06-b7d399498238', 'provisioning_status>
ERROR ovn_octavia_provider.helper [-] Error while updating the load balancer 
status: 'NoneType' object has no attribute 'update': octavia_lib.api.dr>
ERROR ovn_octavia_provider.helper [-] Unexpected exception in request_handler: 
octavia_lib.api.drivers.exceptions.UpdateStatusError: ('The status up>
ERROR ovn_octavia_provider.helper Traceback (most recent call last):
ERROR ovn_octavia_provider.helper   File 
"/opt/stack/ovn-octavia-provider/ovn_octavia_provider/helper.py", line 318, in 
_update_status_to_octavia
ERROR ovn_octavia_provider.helper 
self._octavia_driver_lib.update_loadbalancer_status(status)
ERROR ovn_octavia_provider.helper   File 
"/usr/local/lib/python3.8/dist-packages/octavia_lib/api/drivers/driver_lib.py", 
line 121, in update_loadbal>
ERROR ovn_octavia_provider.helper raise driver_exceptions.UpdateStatusError(
ERROR ovn_octavia_provider.helper 
octavia_lib.api.drivers.exceptions.UpdateStatusError: 'NoneType' object has no 
attribute 'update'

This is failing because the listener associated with the loadbalancer
was not found; its DB transaction was still in flight.  That's the related
Octavia issue from above, but a fix for that will not solve the problem.

A side-effect is this listener is now "stuck":

$ openstack loadbalancer listener delete 7033179d-2ddb-4714-9c06-b7d399498238
Load Balancer 2cc1d429-b176-48e5-adaa-946be2af0d51 is immutable and cannot be 
updated. (HTTP 409) (Request-ID: req-0e1e53ac-4db9-4779-b1f3-11210fe46f7f)

The provider driver needs to retry the operation, typically even the
very next call succeeds.

** Affects: neutron
 Importance: High
 Assignee: Brian Haley (brian-haley)
 Status: New


** Tags: ovn-octavia-provider

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1900763

Title:
  [OVN Octavia Provider] OVN provider status update failures can leave
  orphaned resources

Status in neutron:
  New

Bug description:
  This is related to Octavia issue
  https://storyboard.openstack.org/#!/story/2008254

  When the OVN Octavia Provider driver calls into octavia-lib to update
  the status of a loadbalancer, for example, the code in
helper:_update_status_to_octavia() might fail:

  DEBUG ovn_octavia_provider.helper [-] Updating status to octavia: 
{'listeners': [{'id': '7033179d-2ddb-4714-9c06-b7d399498238', 
'provisioning_status>
  ERROR ovn_octavia_provider.helper [-] Error while updating the load balancer 
status: 'NoneType' object has no attribute 'update': octavia_lib.api.dr>
  ERROR ovn_octavia_provider.helper [-] Unexpected exception in 
request_handler: octavia_lib.api.drivers.exceptions.UpdateStatusError: ('The 
status up>
  ERROR ovn_octavia_provider.helper Traceback (most recent call last):
  ERROR ovn_octavia_provider.helper   File 
"/opt/stack/ovn-octavia-provider/ovn_octavia_provider/helper.py", line 318, in 
_update_status_to_octavia
  ERROR ovn_octavia_provider.helper 
self._octavia_driver_lib.update_loadbalancer_status(status)
  ERROR ovn_octavia_provider.helper   File 
"/usr/local/lib/python3.8/dist-packages/octavia_lib/api/drivers/driver_lib.py", 
line 121, in update_loadbal>
  ERROR ovn_octavia_provider.helper raise 
driver_exceptions.UpdateStatusError(
  ERROR ovn_octavia_provider.helper 
octavia_lib.api.drivers.exceptions.UpdateStatusError: 'NoneType' object has no 
attribute 'update'

  This is failing because the listener associated with the loadbalancer
was not found; its DB transaction was still in flight.  That's the related
  Octavia issue from above, but a fix for that will not solve the
  problem.

  A side-effect is this listener is now "stuck":

  $ openstack loadbalancer listener delete 7033179d-2ddb-4714-9c06-b7d399498238
  Load Balancer 2cc1d429-b176-48e5-adaa-946be2af0d51 is immutable and cannot be 
updated. (HTTP 409) (Request-ID: req-0e1e53ac-4db9-4779-b1f3-11210fe46f7f)

  The provider driver needs to retry the operation, typically even the
  very next call succeeds.
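
  An illustrative retry wrapper (the attempt count and delay are
  assumptions, not the merged fix):

    import time

    from octavia_lib.api.drivers import exceptions as driver_exceptions

    def update_status_with_retry(driver_lib, status, attempts=3, delay=1):
        for i in range(attempts):
            try:
                driver_lib.update_loadbalancer_status(status)
                return
            except driver_exceptions.UpdateStatusError:
                if i == attempts - 1:
                    raise  # give up and surface the original error
                time.sleep(delay)  # the very next call typically succeeds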

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1900763/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1901936] [NEW] [OVN Octavia Provider] OVN provider loadbalancer failover should fail as unsupported

2020-10-28 Thread Brian Haley
Public bug reported:

The core OVN code for Loadbalancers does not support a manual failover
from one gateway node to another.  But running the command with the OVN
provider driver seems to succeed:

$ openstack loadbalancer failover $ID
(no output)

The code actually does nothing and just returns the provisioning status
as ACTIVE.

Since it's unsupported by the underlying technology, the provider driver
should return an UnsupportedOptionError() to the caller.

** Affects: neutron
 Importance: Low
 Assignee: Brian Haley (brian-haley)
 Status: In Progress


** Tags: ovn-octavia-provider

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1901936

Title:
  [OVN Octavia Provider] OVN provider loadbalancer failover should fail
  as unsupported

Status in neutron:
  In Progress

Bug description:
  The core OVN code for Loadbalancers does not support a manual failover
  from one gateway node to another.  But running the command with the
  OVN provider driver seems to succeed:

  $ openstack loadbalancer failover $ID
  (no output)

  The code actually does nothing and just returns the provisioning
  status as ACTIVE.

  Since it's unsupported by the underlying technology, the provider
  driver should return an UnsupportedOptionError() to the caller.
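
  Roughly like this sketch (the fault strings are placeholders):

    from octavia_lib.api.drivers import exceptions

    def loadbalancer_failover_start(self, loadbalancer):
        raise exceptions.UnsupportedOptionError(
            user_fault_string='OVN provider does not support failover',
            operator_fault_string='OVN provider does not support failover')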

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1901936/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1906568] [NEW] [OVN Octavia Provider] OVN provider not setting member offline correctly on create when admin_state_up=False

2020-12-02 Thread Brian Haley
Public bug reported:

According to the Octavia API, a provider driver should set the member
operating_status field to OFFLINE on a create if admin_state_up=False in
the call.  The OVN provider doesn't look at that flag, so always has
operating_status set to NO_MONITOR (the default when health monitors are
not supported).

Need to fix this in order to enable and pass the tempest API tests.

** Affects: neutron
 Importance: High
 Assignee: Brian Haley (brian-haley)
 Status: In Progress


** Tags: ovn-octavia-provider

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1906568

Title:
  [OVN Octavia Provider] OVN provider not setting member offline
  correctly on create when admin_state_up=False

Status in neutron:
  In Progress

Bug description:
  According to the Octavia API, a provider driver should set the member
  operating_status field to OFFLINE on a create if admin_state_up=False
  in the call.  The OVN provider doesn't look at that flag, so always
  has operating_status set to NO_MONITOR (the default when health
  monitors are not supported).

  Need to fix this in order to enable and pass the tempest API tests.
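
  The intended behavior boils down to this sketch (constants are from
  octavia-lib's common constants module):

    from octavia_lib.common import constants

    def member_operating_status(member):
        if getattr(member, 'admin_state_up', True) is False:
            return constants.OFFLINE
        return constants.NO_MONITOR  # no health-monitor support yet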

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1906568/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1907710] Re: deprecated resource links on the page: OVN information i

2020-12-11 Thread Brian Haley
This was fixed in commit e92e31123297a9cbdbbc45ed71e430f3df9cc3ae March
2020 on the neutron master branch.

** Project changed: networking-ovn => neutron

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1907710

Title:
  deprecated resource links on the page: OVN information i

Status in neutron:
  Fix Released

Bug description:
  Towards the top of the page, there are a few links to resources for
  OVN tutorials, none of which exist (or were relocated):

  http://blog.spinhirne.com/p/blog-series.html#introToOVN

  https://docs.openvswitch.org/en/latest/tutorials/ovn-sandbox/

  https://docs.openvswitch.org/en/latest/tutorials/ovn-openstack/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1907710/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1918914] Re: Toggling dhcp on and off in a subnet causes new instances to be unreachable

2021-03-12 Thread Brian Haley
Since this report is against Ussuri or later I'm moving to the neutron
component since that's where the OVN driver code is now.

** Project changed: networking-ovn => neutron

** Tags added: ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1918914

Title:
  Toggling dhcp on and off in a subnet causes new instances to be
  unreachable

Status in neutron:
  New

Bug description:
  After DHCP was turned on and off again on our network, new instances
were not reachable. We found that they still tried to get their
  network config via DHCP after that.

  We run Openstack Ussuri installed with Openstack Kolla with OVN
  networking enabled. Also force_config_drive is set to true.

  Steps to reproduce:

    openstack network create test
    openstack subnet create --no-dhcp --subnet-range 192.168.0.0/24 --network test test
    openstack router create test
    openstack router set test --external-gateway public
    openstack router add subnet test test

    openstack server create --network test --image e83d66e7-776a-4b59-a583-97dfcc5799f6 --flavor s3.small --key-name noudssh test-1

  Network metadata:

  {
    "links" : [
      {
        "ethernet_mac_address" : "fa:16:3e:b1:f6:ee",
        "id" : "tap7608d5b5-bd",
        "mtu" : 8942,
        "type" : "ovs",
        "vif_id" : "7608d5b5-bdc5-4215-a39c-acd8fa1318c2"
      }
    ],
    "networks" : [
      {
        "id" : "network0",
        "ip_address" : "192.168.0.237",
        "link" : "tap7608d5b5-bd",
        "netmask" : "255.255.255.0",
        "network_id" : "66a6378c-3e2d-4814-9412-4a784a81e516",
        "routes" : [
          {
            "gateway" : "192.168.0.1",
            "netmask" : "0.0.0.0",
            "network" : "0.0.0.0"
          }
        ],
        "services" : [],
        "type" : "ipv4"
      }
    ],
    "services" : []
  }

  Toggle DHCP and create new server:

    openstack subnet set --dhcp test
    openstack subnet set --no-dhcp test
    openstack server create --network test --image e83d66e7-776a-4b59-a583-97dfcc5799f6 --flavor s3.small --key-name noudssh test-2

  Network metadata:

  {
    "links" : [
      {
        "type" : "ovs",
        "id" : "tapee8f020a-1f",
        "vif_id" : "ee8f020a-1f2e-4db3-aab5-f6387fb45ba6",
        "ethernet_mac_address" : "fa:16:3e:94:05:35",
        "mtu" : 8942
      }
    ],
    "services" : [],
    "networks" : [
      {
        "network_id" : "66a6378c-3e2d-4814-9412-4a784a81e516",
        "link" : "tapee8f020a-1f",
        "type" : "ipv4_dhcp",
        "id" : "network0"
      }
    ]
  }

  As DHCP is now off, this instance stays unreachable.

  I tried the same in a cluster with OVN disabled and that worked
  without any problem. So this seems to be OVN related.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1918914/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1925396] [NEW] With dibbler retirement, neutron needs a new IPv6 prefix delegation driver

2021-04-21 Thread Brian Haley
Public bug reported:

The dibbler IPv6 prefix delegation client and server were retired last
year and are now unmaintained.  Unfortunately neutron uses
dibbler-client for its default prefix delegation driver and has no other
options.  As distros start to remove the package this could become an
issue, so a suitable replacement will need to be developed.  I am
creating this bug as a placeholder so the issue can be tracked and found
by others, but more work is required to determine what options we have.
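
For context, any replacement will need to plug into neutron's prefix
delegation driver interface.  A rough skeleton is sketched below; it is
modeled on neutron's PDDriverBase (neutron.agent.linux.pd_driver), but
the exact method signatures are from memory and may differ:

```
from neutron.agent.linux import pd_driver


class ExamplePDDriver(pd_driver.PDDriverBase):
    """Hypothetical replacement for the dibbler-based driver."""

    def enable(self, pmon, router_ns, ex_gw_ifname, lla):
        # Start a DHCPv6-PD client in the router namespace, requesting
        # a prefix over the external gateway interface.
        pass

    def disable(self, pmon, router_ns, switch_over=False):
        # Stop the client and clean up any state it left behind.
        pass

    def get_prefix(self):
        # Return the delegated prefix (or the provisional ::/64).
        return "::/64"

    @staticmethod
    def get_sync_data():
        # Return saved PD state so the L3 agent can resync on restart.
        return []
```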

** Affects: neutron
 Importance: Wishlist
 Status: Confirmed


** Tags: ipv6

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1925396

Title:
  With dibbler retirement, neutron needs a new IPv6 prefix delegation
  driver

Status in neutron:
  Confirmed

Bug description:
  The dibbler IPv6 prefix delegation client and server were retired last
  year and are now unmaintained.  Unfortunately neutron uses
  dibbler-client for its default prefix delegation driver and has no
  other options.  As distros start to remove the package this could
  become an issue, so a suitable replacement will need to be developed.
  I am creating this bug as a placeholder so the issue can be tracked
  and found by others, but more work is required to determine what
  options we have.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1925396/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2017518] Re: Neutron Linux bridge agents wildly fluctuating

2023-04-28 Thread Brian Haley
The patch would not have fixed the issue; it just added some logging so
it was obvious what the agent was doing.

And yes, keeping time synced is important; it could have just been the
clock difference between the agents and the server making things seem
broken. Glad you solved your issue.
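
For reference, the server decides liveness by comparing each agent's
last heartbeat timestamp against a timeout, so any clock skew between
hosts shifts the result directly.  A minimal sketch of that check,
modeled on neutron's is_agent_down logic (the constant is neutron's
default agent_down_time):

```
from oslo_utils import timeutils

AGENT_DOWN_TIME = 75  # seconds, neutron's default agent_down_time


def is_agent_down(heartbeat_timestamp):
    # If the server's clock runs N seconds ahead of the agent's, the
    # heartbeat already looks N seconds stale when it arrives, so
    # agents flap between up and down even though they are healthy.
    return timeutils.is_older_than(heartbeat_timestamp, AGENT_DOWN_TIME)
```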

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2017518

Title:
  Neutron Linux bridge agents wildly fluctuating

Status in neutron:
  Invalid

Bug description:
  Hi All,

  We have an OSA Yoga setup. The neutron Linux bridge agents are wildly
  fluctuating, going up and down in the `neutron agent-list` output.
  The count of agents which are down is very intermittent, changing
  every few seconds as shown below:

  
  38
  root@utility-container-:~# neutron agent-list | grep Linux | grep xxx | wc -l
  neutron CLI is deprecated and will be removed in the Z cycle. Use openstack CLI instead.
  34
  root@utility-container-:~# neutron agent-list | grep Linux | grep xxx | wc -l
  neutron CLI is deprecated and will be removed in the Z cycle. Use openstack CLI instead.
  43
  root@utility-container-:~# neutron agent-list | grep Linux | grep xxx | wc -l
  neutron CLI is deprecated and will be removed in the Z cycle. Use openstack CLI instead.
  2
  root@utility-container-:~# neutron agent-list | grep Linux | grep xxx | wc -l
  neutron CLI is deprecated and will be removed in the Z cycle. Use openstack CLI instead.
  2
  root@utility-container-:~# neutron agent-list | grep Linux | grep xxx | wc -l
  neutron CLI is deprecated and will be removed in the Z cycle. Use openstack CLI instead.
  82
  root@utility-container-:~# neutron agent-list | grep Linux | grep xxx | wc -l
  neutron CLI is deprecated and will be removed in the Z cycle. Use openstack CLI instead.
  54
  ---

  As shown above, the count of down agents fluctuates within a few
  seconds between executions of the command. The logs on the network
  nodes do not indicate anything wrong. Why is this happening?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2017518/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1993628] Re: Designate synchronisation inconsistencies with Neutron-API

2023-05-16 Thread Brian Haley
Added neutron since I don't think this is specific to charms.

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1993628

Title:
  Designate synchronisation inconsistencies with Neutron-API

Status in OpenStack Designate Charm:
  New
Status in neutron:
  New

Bug description:
  When configuring a network to automatically use a dns-domain, some
  inconsistencies were observed when deleting and recreating new
  instances sharing the same names and associating them to the same
  floating IPs from before.

  This has been reproduced on :
  * Focal Ussuri (Neutron-api and Designate charms with Ussuri/edge branch)
  * Focal Yoga  (Neutron-api and Designate charms with Yoga/stable branch)

  
  Reproducible steps :
  * create a domain zone with "openstack zone create"
  * configure an existing self-service network with the newly created domain: "openstack network set --dns-domain ..."
  * create a router on the self-service network with an external gateway on the provider network
  * create an instance on the self-service network
  * create a floating ip address on the provider network
  * associate the floating ip to the instance
  --> the DNS entry gets created

  * delete the instance *WITH* the floating ip still attached
  --> the DNS entry is deleted

  * recreate a new instance with exactly the *same* name and re-use the *same* floating ip
  --> the DNS entry doesn't get created
  --> it doesn't seem to be related to TTL, since the issue stays permanent even after a day of testing while TTL is set by default to 1 hour

  Worse inconsistencies can be seen when, instead of deleting an instance, the floating ip is moved directly to another instance:
  * have 2 instances vm-1 and vm-2
  * attach the floating ip to vm-1: "openstack server add floating ip XXX vm-1"
  --> the DNS entry is created
  * attach the same floating ip to vm-2: "openstack server add floating ip XXX vm-2" (this is permitted by the CLI and simply moves the fIP to vm-2)
  --> the DNS entry still uses vm-1; an entry for vm-2 doesn't get created

  When you combine these 2 issues, you can be left with either stale
  records being kept or automatic records silently failing to be
  created.

  
  Workaround :
  * either always remove floating ip *before* deleting an instance
  or
  * remove floating ip on instance
  * then re-add floating ip on instance

  
  Eventually, when deleting the floating ip in order to reassign it, we are
  greeted with this error on the neutron-api unit (on Ussuri, but the error
  is similar on Yoga):

  2022-10-19 02:24:12.497 67548 ERROR neutron.db.dns_db 
[req-e6d270d2-fbde-42d7-a75b-2c8a67c42fcb 2dc4151f6dba4c3e8ba8537c9c354c13 
f548268d5255424591baa8783f1cf277 - 6a71047e7d7f4e01945ec58df06ae63f 
6a71047e7d7f4e01945ec58df06ae63f] Error deleting Floating IP data from external 
DNS service. Name: 'vm-2'. Domain: 'compute.stack.vpn.'. IP addresses 
'192.168.21.217'. DNS service driver message 'Name vm-2.compute.stack.vpn. is 
duplicated in the external DNS service': 
neutron_lib.exceptions.dns.DuplicateRecordSet: Name vm-2.compute.stack.vpn. is 
duplicated in the external DNS service
  2022-10-19 02:24:12.497 67548 ERROR neutron.db.dns_db Traceback (most recent 
call last):
  2022-10-19 02:24:12.497 67548 ERROR neutron.db.dns_db   File 
"/usr/lib/python3/dist-packages/neutron/db/dns_db.py", line 214, in 
_delete_floatingip_from_external_dns_service
  2022-10-19 02:24:12.497 67548 ERROR neutron.db.dns_db 
self.dns_driver.delete_record_set(context, dns_domain, dns_name,
  2022-10-19 02:24:12.497 67548 ERROR neutron.db.dns_db   File 
"/usr/lib/python3/dist-packages/neutron/services/externaldns/drivers/designate/driver.py",
 line 172, in delete_record_set
  2022-10-19 02:24:12.497 67548 ERROR neutron.db.dns_db ids_to_delete = 
self._get_ids_ips_to_delete(
  2022-10-19 02:24:12.497 67548 ERROR neutron.db.dns_db   File 
"/usr/lib/python3/dist-packages/neutron/services/externaldns/drivers/designate/driver.py",
 line 200, in _get_ids_ips_to_delete
  2022-10-19 02:24:12.497 67548 ERROR neutron.db.dns_db raise 
dns_exc.DuplicateRecordSet(dns_name=name)
  2022-10-19 02:24:12.497 67548 ERROR neutron.db.dns_db 
neutron_lib.exceptions.dns.DuplicateRecordSet: Name vm-2.compute.stack.vpn. is 
duplicated in the external DNS service
  2022-10-19 02:24:12.497 67548 ERROR neutron.db.dns_db

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-designate/+bug/1993628/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2020698] [NEW] neutron-tempest-plugin-bgpvpn-bagpipe job unstable

2023-05-24 Thread Brian Haley
Public bug reported:

The neutron-tempest-plugin-bgpvpn-bagpipe job has been unstable for over
a week, and yesterday it got worse: half the tests are failing now.

I thought increasing the job timeout would help, but it has not:

https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/883991

I realize there are changes in-flight wrt sRBAC which might fix the
issue, but until they all merge I think we should just make the job
non-voting on the master branch. The other branches don't seem to have
any problems.

** Affects: neutron
 Importance: High
 Assignee: Brian Haley (brian-haley)
 Status: Confirmed


** Tags: l3-bgp tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2020698

Title:
  neutron-tempest-plugin-bgpvpn-bagpipe job unstable

Status in neutron:
  Confirmed

Bug description:
  The neutron-tempest-plugin-bgpvpn-bagpipe job has been unstable for
  over a week, and yesterday it got worse: half the tests are failing
  now.

  I thought increasing the job timeout would help, but it has not:

  https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/883991

  I realize there are changes in-flight wrt sRBAC which might fix the
  issue, but until they all merge I think we should just make the job
  non-voting on the master branch. The other branches don't seem to have
  any problems.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2020698/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2025264] Re: [ovn][DVR]FIP traffic centralized in DVR environments

2023-07-05 Thread Brian Haley
** Changed in: neutron
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2025264

Title:
  [ovn][DVR]FIP traffic centralized in DVR environments

Status in neutron:
  Fix Committed

Bug description:
  When a port is down, the FIP associated with it gets centralized
  (external_mac removed from the NAT table entry) despite DVR being
  enabled.  This also happens when deleting a VM with a FIP associated,
  where for some period of time the FIP gets centralized -- the window
  between removing the external_mac from the NAT table entry and the
  deletion of the NAT table entry.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2025264/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2026775] [NEW] Metadata agents do not parse X-Forwarded-For headers properly

2023-07-10 Thread Brian Haley
Public bug reported:

While looking at an unrelated issue I noticed log lines like this in the
neutron-ovn-metadata-agent log file:

  No port found in network b62452f3-ec93-4cd7-af2d-9f9eabb33b12 with IP
address 10.246.166.21,10.131.84.23

While it might seem harmless, looking at the code it only showed a
single value being logged:

  LOG.error("No port found in network %s with IP address %s",
network_id, remote_address)

The code in question is looking for a matching IP address, but will
never match the concatenated string.

Google shows the additional IP address(es) that might be present in this
header are actually proxies:

  https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Forwarded-
For

And sure enough in my case the second IP was always the same.

The code needs to be changed to account for proxies; their addresses
aren't actually necessary to look up which port is making the request,
but they could be logged for posterity.

I'll send a change for that soon.
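
The parsing fix is essentially the following (a self-contained sketch of
the intended behavior, not the actual patch): treat the header as a
comma-separated list whose left-most entry is the client, with any
remaining entries being proxies.

```
def split_x_forwarded_for(header_value):
    # "10.246.166.21,10.131.84.23" -> ("10.246.166.21", ["10.131.84.23"])
    addresses = [addr.strip() for addr in header_value.split(',')]
    return addresses[0], addresses[1:]
```

The port lookup then uses only the first address, while the proxy list
can simply be logged.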

** Affects: neutron
 Importance: Medium
 Assignee: Brian Haley (brian-haley)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2026775

Title:
  Metadata agents do not parse X-Forwarded-For headers properly

Status in neutron:
  In Progress

Bug description:
  While looking at an unrelated issue I noticed log lines like this in
  the neutron-ovn-metadata-agent log file:

No port found in network b62452f3-ec93-4cd7-af2d-9f9eabb33b12 with
  IP address 10.246.166.21,10.131.84.23

  While it might seem harmless, looking at the code it only showed a
  single value being logged:

LOG.error("No port found in network %s with IP address %s",
  network_id, remote_address)

  The code in question is looking for a matching IP address, but will
  never match the concatenated string.

  Google shows the additional IP address(es) that might be present in
  this header are actually proxies:

https://developer.mozilla.org/en-
  US/docs/Web/HTTP/Headers/X-Forwarded-For

  And sure enough in my case the second IP was always the same.

  The code needs to be changed to account for proxies; their addresses
  aren't actually necessary to look up which port is making the request,
  but they could be logged for posterity.

  I'll send a change for that soon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2026775/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2028112] Re: Unable to create VM when using the sriov agent with the ml2/ovn driver.

2023-07-19 Thread Brian Haley
*** This bug is a duplicate of bug 1975743 ***
https://bugs.launchpad.net/bugs/1975743

** This bug has been marked a duplicate of bug 1975743
   ML2 OVN - Creating an instance with hardware offloaded port is broken

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2028112

Title:
  Unable to create VM when using the sriov agent with the ml2/ovn
  driver.

Status in neutron:
  New

Bug description:
  I am planning to operate nodes using HWOL and OVN Controller and nodes
  using SR-IOV simultaneously in the OVN environment.

  The test content is as follows.

  ** Controller server **

  The ml2 mechanism_drivers were specified as follows:

  ```
  [ml2]
  mechanism_drivers = sriovnicswitch,ovn
  ```

  Upon checking the log, the driver was confirmed to be loaded normally.

  ```
  2023-07-19 00:44:37.403 1697414 INFO neutron.plugins.ml2.managers [-] 
Configured mechanism driver names: ['ovn', 'sriovnicswitch']
  2023-07-19 00:44:37.464 1697414 INFO neutron.plugins.ml2.managers [-] Loaded 
mechanism driver names: ['ovn', 'sriovnicswitch']
  2023-07-19 00:44:37.464 1697414 INFO neutron.plugins.ml2.managers [-] 
Registered mechanism drivers: ['ovn', 'sriovnicswitch']
  2023-07-19 00:44:37.464 1697414 INFO neutron.plugins.ml2.managers [-] No 
mechanism drivers provide segment reachability information for agent scheduling.
  2023-07-19 00:44:38.358 1697414 INFO neutron.plugins.ml2.managers 
[req-bc634856-2d9a-44d0-ae0e-351b440a2a0b - - - - -] Initializing mechanism 
driver 'ovn'
  2023-07-19 00:44:38.378 1697414 INFO neutron.plugins.ml2.managers 
[req-bc634856-2d9a-44d0-ae0e-351b440a2a0b - - - - -] Initializing mechanism 
driver 'sriovnicswitch'
  ```

  ** Compute **

  nova.conf

  ```
  [pci]
  passthrough_whitelist = {  "devname": "enp94s0f1np1", "physical_network": 
"physnet1" }
  ```

  plugin/ml2/sriov-agent.ini
  ```
  [DEFAULT]
  debug = true

  [securitygroup]
  firewall_driver = neutron.agent.firewall.NoopFirewallDriver

  [sriov_nic]
  physical_device_mappings = physnet1:enp94s0f1np1
  ```

  Neutron Agent status
  ```
  
  +--------------------------------------+------------------------------+---------------+-------------------+-------+-------+----------------------------+
  | ID                                   | Agent Type                   | Host          | Availability Zone | Alive | State | Binary                     |
  +--------------------------------------+------------------------------+---------------+-------------------+-------+-------+----------------------------+
  | 24e9395c-379f-4afd-aa84-ae0d970794ff | NIC Switch agent             | Qacloudhost06 | None              | :-)   | UP    | neutron-sriov-nic-agent    |
  | 43ba481c-c0f2-49bc-a34a-c94faa284ac7 | NIC Switch agent             | Qaamdhost02   | None              | :-)   | UP    | neutron-sriov-nic-agent    |
  | 4c1a6c78-e58a-48d9-aa4a-abdf44d2f359 | NIC Switch agent             | Qacloudhost07 | None              | :-)   | UP    | neutron-sriov-nic-agent    |
  | 534f0946-6eb3-491f-a57d-65cbc0133399 | NIC Switch agent             | Qacloudhost02 | None              | :-)   | UP    | neutron-sriov-nic-agent    |
  | 2275f9d4-7c69-51db-ae71-b6e0be15e9b8 | OVN Metadata agent           | Qacloudhost05 |                   | :-)   | UP    | neutron-ovn-metadata-agent |
  | 92a7b8dc-e122-49c8-a3bc-ae6a38b56cc0 | OVN Controller Gateway agent | Qacloudhost05 |                   | :-)   | UP    | ovn-controller             |
  | c3a1e8fe-8669-5e7a-a3d7-3a2b638fae26 | OVN Metadata agent           | Qaamdhost02   |                   | :-)   | UP    | neutron-ovn-metadata-agent |
  | d203ff10-0835-4d7e-bc63-5ff274ade5a3 | OVN Controller agent         | Qaamdhost02   |                   | :-)   | UP    | ovn-controller             |
  | fc4c5075-9b44-5c21-a24d-f86dfd0009f9 | OVN Metadata agent           | Qacloudhost02 |                   | :-)   | UP    | neutron-ovn-metadata-agent |
  | bed0-1519-47f8-b52f-3a9116e1408f     | OVN Controller Gateway agent | Qacloudhost02 |                   | :-)   | UP    | ovn-controller             |
  +--------------------------------------+------------------------------+---------------+-------------------+-------+-------+----------------------------+
  ```

  When creating a vm, Neutron error log
  ```
  2023-07-19 02:44:30.463 1725695 ERROR neutron.plugins.ml2.managers 
[req-9204d6f7-ddc3-44e2-878c-bfa9c3f761ef fbec686e249e4818be7a686833140326 
7a4dd87db099460795d775b055a648ea - default default] Mechanism driver 'ovn' 
failed in update_port_precommit: neutron_lib.exceptions.InvalidInput: Invalid 
input for operation: Invalid binding:profile. too many parameters.
  2023-07-19 02:44:30.463 1725695 ERROR neutron.plugins.ml2.managers Traceback 
(most recent call last):
  2023-07-19 02:44:30.463 1725695 ERROR neutron.pl

[Yahoo-eng-team] [Bug 2037239] [NEW] neutron-tempest-plugin-openvswitch-* jobs randomly failing in gate

2023-09-24 Thread Brian Haley
Public bug reported:

A number of different scenario tests seem to be failing randomly in the
same way:

Details: Router 01dda41e-67ed-4af0-ac56-72fd895cef9a is not active on
any of the L3 agents

One example is in
https://review.opendev.org/c/openstack/neutron/+/895832 where these
three jobs are failing:

neutron-tempest-plugin-openvswitch-iptables_hybrid  FAILURE
neutron-tempest-plugin-openvswitch  FAILURE
neutron-tempest-plugin-openvswitch-enforce-scope-new-defaults   FAILURE

I see combinations of these three failing in other recent checks as
well.

Further investigation required.

** Affects: neutron
 Importance: Critical
 Status: New


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2037239

Title:
  neutron-tempest-plugin-openvswitch-* jobs randomly failing in gate

Status in neutron:
  New

Bug description:
  A number of different scenario tests seem to be failing randomly in
  the same way:

  Details: Router 01dda41e-67ed-4af0-ac56-72fd895cef9a is not active on
  any of the L3 agents

  One example is in
  https://review.opendev.org/c/openstack/neutron/+/895832 where these
  three jobs are failing:

  neutron-tempest-plugin-openvswitch-iptables_hybrid  FAILURE
  neutron-tempest-plugin-openvswitch  FAILURE
  neutron-tempest-plugin-openvswitch-enforce-scope-new-defaults   FAILURE

  I see combinations of these three failing in other recent checks as
  well.

  Further investigation required.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2037239/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2037500] Re: OVSDB transaction returned TRY_AGAIN, retrying do_commit

2023-10-03 Thread Brian Haley
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2037500

Title:
  OVSDB transaction returned TRY_AGAIN, retrying do_commit

Status in neutron:
  Invalid

Bug description:
  When trying to create an instance, an error occurred while attaching
  the port to the instance; details from the neutron server are below.

  2023-09-27 09:58:10.725 716 DEBUG ovsdbapp.backend.ovs_idl.transaction 
[req-2df7a23e-8b9f-409c-a35e-9b78edb6bce1 - - - - -] OVSDB transaction returned 
TRY_AGAIN, retrying do_commit 
/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/transaction.py:97
  2023-09-27 09:58:10.724 723 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.impl_idl_ovn 
[req-ca61167b-aca9-46a2-81fb-8f8e3ebba349 - - - - -] OVS database connection to 
OVN_Northbound failed with error: 'Timeout'. Verify that the OVS and OVN 
services are available and that the 'ovn_nb_connection' and 'ovn_sb_connection' 
configuration options are correct.: Exception: Timeout
  2023-09-27 09:58:10.724 723 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.impl_idl_ovn Traceback (most 
recent call last):
  2023-09-27 09:58:10.724 723 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.impl_idl_ovn   File 
"/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/impl_idl_ovn.py",
 line 68, in start_connection
  2023-09-27 09:58:10.724 723 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.impl_idl_ovn 
self.ovsdb_connection.start()
  2023-09-27 09:58:10.724 723 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.impl_idl_ovn   File 
"/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/connection.py", line 
79, in start
  2023-09-27 09:58:10.724 723 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.impl_idl_ovn 
idlutils.wait_for_change(self.idl, self.timeout)
  2023-09-27 09:58:10.724 723 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.impl_idl_ovn   File 
"/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/idlutils.py", line 
219, in wait_for_change
  2023-09-27 09:58:10.724 723 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.impl_idl_ovn raise 
Exception("Timeout")  # TODO(twilson) use TimeoutException?
  2023-09-27 09:58:10.724 723 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.impl_idl_ovn Exception: 
Timeout
  2023-09-27 09:58:10.724 723 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.impl_idl_ovn

  and later the error still appears, as shown below.

  
  2023-09-27 12:07:36.849 747 ERROR ovsdbapp.backend.ovs_idl.transaction [-] 
OVSDB Error: The transaction failed because the IDL has been configured to 
require a database lock but didn't get it yet or has already lost it
  2023-09-27 12:07:36.849 747 ERROR ovsdbapp.backend.ovs_idl.transaction 
[req-7f9163da-8faf-4509-b650-aedfdf4ff303 - - - - -] Traceback (most recent 
call last):
File 
"/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/connection.py", line 
122, in run
  txn.results.put(txn.do_commit())
File 
"/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/transaction.py", line 
119, in do_commit
  raise RuntimeError(msg)
  RuntimeError: OVSDB Error: The transaction failed because the IDL has been 
configured to require a database lock but didn't get it yet or has already lost 
it

  2023-09-27 12:07:36.849 747 ERROR futurist.periodics 
[req-7f9163da-8faf-4509-b650-aedfdf4ff303 - - - - -] Failed to call periodic 
'neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance.DBInconsistenciesPeriodics.check_for_ha_chassis_group_address'
 (it runs every 600.00 seconds): RuntimeError: OVSDB Error: The transaction 
failed because the IDL has been configured to require a database lock but 
didn't get it yet or has already lost it
  2023-09-27 12:07:36.849 747 ERROR futurist.periodics Traceback (most recent 
call last):
  2023-09-27 12:07:36.849 747 ERROR futurist.periodics   File 
"/usr/lib/python3/dist-packages/futurist/periodics.py", line 293, in run
  2023-09-27 12:07:36.849 747 ERROR futurist.periodics work()
  2023-09-27 12:07:36.849 747 ERROR futurist.periodics   File 
"/usr/lib/python3/dist-packages/futurist/periodics.py", line 67, in __call__
  2023-09-27 12:07:36.849 747 ERROR futurist.periodics return 
self.callback(*self.args, **self.kwargs)
  2023-09-27 12:07:36.849 747 ERROR futurist.periodics   File 
"/usr/lib/python3/dist-packages/futurist/periodics.py", line 181, in decorator
  2023-09-27 12:07:36.849 747 ERROR futurist.periodics return f(*args, 
**kwargs)
  2023-09-27 12:07:36.849 747 ERROR futurist.periodics   File 
"/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/maintenance.py",
 line 622, in check_for_ha_chassis_group_address
  2023-09-27 12:07:36.849 747 ERROR futurist.periodics priority -= 1
  2023-09-27 12:07:36.849 747 ERRO

[Yahoo-eng-team] [Bug 2038373] [NEW] Segment unit tests are not mocking properly

2023-10-03 Thread Brian Haley
Public bug reported:

Running the segment unit tests -
neutron/tests/unit/extensions/test_segment.py generates a lot of extra
noise, like:

{0}
neutron.tests.unit.extensions.test_segment.TestNovaSegmentNotifier.test_delete_network_and_owned_segments
[1.185650s] ... ok

Captured stderr:


/home/bhaley/git/neutron.dev/.tox/py311/lib/python3.11/site-packages/kombu/utils/compat.py:82:
 DeprecationWarning: SelectableGroups dict interface is deprecated. Use select.
  for ep in importlib_metadata.entry_points().get(namespace, [])
Traceback (most recent call last):
  File 
"/home/bhaley/git/neutron.dev/.tox/py311/lib/python3.11/site-packages/eventlet/hubs/hub.py",
 line 476, in fire_timers
timer()
  File 
"/home/bhaley/git/neutron.dev/.tox/py311/lib/python3.11/site-packages/eventlet/hubs/timer.py",
 line 59, in __call__
cb(*args, **kw)
  File "/home/bhaley/git/neutron.dev/neutron/common/utils.py", line 956, in 
wrapper
return func(*args, **kwargs)
   ^
  File "/home/bhaley/git/neutron.dev/neutron/notifiers/batch_notifier.py", line 
58, in synced_send
self._notify()
  File "/home/bhaley/git/neutron.dev/neutron/notifiers/batch_notifier.py", line 
70, in _notify
self.callback(batched_events)
  File "/home/bhaley/git/neutron.dev/neutron/services/segments/plugin.py", line 
212, in _send_notifications
event.method(event)
  File "/home/bhaley/git/neutron.dev/neutron/services/segments/plugin.py", line 
384, in _delete_nova_inventory
aggregate_id = self._get_aggregate_id(event.segment_id)
   
  File "/home/bhaley/git/neutron.dev/neutron/services/segments/plugin.py", line 
378, in _get_aggregate_id
for aggregate in self.n_client.aggregates.list():
 ^^^
  File 
"/home/bhaley/git/neutron.dev/.tox/py311/lib/python3.11/site-packages/novaclient/v2/aggregates.py",
 line 59, in list
return self._list('/os-aggregates', 'aggregates')
   ^^
  File 
"/home/bhaley/git/neutron.dev/.tox/py311/lib/python3.11/site-packages/novaclient/base.py",
 line 253, in _list
resp, body = self.api.client.get(url)
 
  File 
"/home/bhaley/git/neutron.dev/.tox/py311/lib/python3.11/site-packages/keystoneauth1/adapter.py",
 line 395, in get
return self.request(url, 'GET', **kwargs)
   ^^
  File 
"/home/bhaley/git/neutron.dev/.tox/py311/lib/python3.11/site-packages/novaclient/client.py",
 line 77, in request
if raise_exc and resp.status_code >= 400:
 ^^^
TypeError: '>=' not supported between instances of 'MagicMock' and 'int'


From looking at the code it's not mocking things properly; for example,
it does this in TestNovaSegmentNotifier.setUp():

self.batch_notifier._waiting_to_send = True

That code was removed in 2016 in
255e8a839db0be10c98b5d9f480ce476e2f2e171 :-/

The noise doesn't seem to cause the test to fail, but it should be
fixed.

There are also keystone auth exceptions in other tests, and again,
nothing seems to fail because of it:

   raise exceptions.MissingAuthPlugin(msg_fmt % msg)
keystoneauth1.exceptions.auth_plugins.MissingAuthPlugin: An auth plugin is 
required to determine endpoint URL
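
The TypeError above comes from a MagicMock leaking into novaclient's
HTTP layer, where it gets compared against an int.  The usual fix is to
give the mocked client concrete return values so no call falls through
to real request handling.  A sketch (attribute names here are
assumptions, not the actual test code):

```
from unittest import mock

# In the test's setUp(), stub the nova client the notifier uses:
n_client = mock.Mock()
n_client.aggregates.list.return_value = []           # no aggregates
n_client.aggregates.create.return_value = mock.Mock(id=1)
# segment_plugin.n_client = n_client  # attach wherever it is looked up
```

With concrete return values, comparisons like "resp.status_code >= 400"
never see a MagicMock, and the keystoneauth MissingAuthPlugin errors go
away because no real endpoint lookup is attempted.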

** Affects: neutron
 Importance: Low
 Status: New


** Tags: unittest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2038373

Title:
  Segment unit tests are not mocking properly

Status in neutron:
  New

Bug description:
  Running the segment unit tests -
  neutron/tests/unit/extensions/test_segment.py generates a lot of extra
  noise, like:

  {0}
  
neutron.tests.unit.extensions.test_segment.TestNovaSegmentNotifier.test_delete_network_and_owned_segments
  [1.185650s] ... ok

  Captured stderr:
  
  
/home/bhaley/git/neutron.dev/.tox/py311/lib/python3.11/site-packages/kombu/utils/compat.py:82:
 DeprecationWarning: SelectableGroups dict interface is deprecated. Use select.
for ep in importlib_metadata.entry_points().get(namespace, [])
  Traceback (most recent call last):
File 
"/home/bhaley/git/neutron.dev/.tox/py311/lib/python3.11/site-packages/eventlet/hubs/hub.py",
 line 476, in fire_timers
  timer()
File 
"/home/bhaley/git/neutron.dev/.tox/py311/lib/python3.11/site-packages/eventlet/hubs/timer.py",
 line 59, in __call__
  cb(*args, **kw)
File "/home/bhaley/git/neutron.dev/neutron/common/utils.py", line 956, in 
wrapper
  return func(*args, **kwargs)
 ^
File "/home/bhaley/git/neutron.dev/neutron/notifiers/batch_notifier.py", 
line 58, in synced_send
  self._notify()
File "/home/bhaley/git/neutron.dev/neutron/notifiers/batch_notifier.py", 
line 70, in _notify
  self.call

[Yahoo-eng-team] [Bug 1998517] Re: Floating IP not reachable from instance in other project

2023-10-24 Thread Brian Haley
Moving this to the neutron project as networking-ovn has been retired
for a while.

My first question is: are you able to test this with a later release?
Since it's been 10 months since this was filed, I just want to make sure
it hasn't already been fixed.

** Project changed: networking-ovn => neutron

** Tags added: ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1998517

Title:
  Floating IP not reachable from instance in other project

Status in neutron:
  New

Bug description:
  We noticed a strange behavior regarding Floating IPs in an OpenStack
  environment using ML2/OVN with DVR. Consider the provided test setup
  consisting of 3 projects. Each project has exactly one Network with
  two subnets, one for IPv4 one for IPv6, associated with it. Each
  project’s network is connected to the provider network through a
  router which has two ports facing the provider network and two
  internal ones for the respective subnets.

  The VM (Instance) Layout is also included. The first instance (a1) in Project 
A also has a FIP associated with it. Trying to ping this FIP from outside 
OpenStack's context works without any problems. This is also true when we want 
to ping the FIP from instance a2 in the same project.
  However, trying to do so from any of the other instances in a different 
project does not work. This changes, however, when a FIP is assigned to an 
instance in a different project: assigning a FIP to instance b, for example, 
will result in b being able to ping the FIP of a1. After removing the FIP, 
this behavior persists.

  The following observations regarding this have been made.
  When a FIP is assigned, new entries in OVN's SB DB (specifically the 
MAC_Binding table) show up, some of which disappear again when the FIP is 
released from b. The one entry that persists is a MAC binding of the MAC 
address and IPv4 address associated with project b's router port facing the 
provider network, with the logical port being the provider-net-facing port of 
project a's router. We are not sure if this is relevant to the problem; we 
are just putting it out here.

  In addition, when we were looking for other solutions we came across
  this old bug: https://bugzilla.redhat.com/show_bug.cgi?id=1836963 with
  a possible workaround, this however, lead to pinging not being
  possible afterwards.

  The Overcloud has been deployed using the `/usr/share/openstack-
  tripleo-heat-templates/environments/services/neutron-ovn-dvr-ha.yaml`
  template for OVN and the following additional settings were added to
  neutron:

  parameter_defaults:
OVNEmitNeedToFrag: true
NeutronGlobalPhysnetMtu: 9000

  Furthermore, all nodes use a Linux bond for the `br-ex` interface on
  on which the different node networks (Internal API, Storage, ...)
  reside. These networks also use VLANs.

  If you need any additional Information of the setup, please let me know.
  Best Regards

  
  Version Info

  - TripleO Wallaby

  - puppet-ovn-18.5.0-0.20220216211819.d496e5a.el9.noarch
  - ContainerImageTag: ecab4196e43c16aaea91ebb25fb25ab1

  inside ovn_controller container:
  - ovn22.06-22.06.0-24.el8s.x86_64
  - rdo-ovn-host-22.06-3.el8.noarch
  - rdo-ovn-22.06-3.el8.noarch
  - ovn22.06-host-22.06.0-24.el8s.x86_64

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1998517/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2043141] [NEW] neutron-lib unit tests need update for sqlalchemy 2.0

2023-11-09 Thread Brian Haley
Public bug reported:

Some of the neutron-lib unit tests do not support sqlalchemy 2.0.

Thomas Goirand ran them on a Debian system and this test file fails:

  neutron_lib/tests/unit/db/test_sqlalchemytypes.py

There are 8 failures, all basically the same:

105s FAIL: neutron_lib.tests.unit.db.test_sqlalchemytypes.CIDRTestCase.test_crud
105s neutron_lib.tests.unit.db.test_sqlalchemytypes.CIDRTestCase.test_crud
105s --
105s testtools.testresult.real._StringException: Traceback (most recent call 
last):
105s   File 
"/tmp/autopkgtest-lxc.jvth6_27/downtmp/build.pBL/src/neutron_lib/tests/unit/db/test_sqlalchemytypes.py",
 line 36, in setUp
105s meta = sa.MetaData(bind=self.engine)
105s^
105s TypeError: MetaData.__init__() got an unexpected keyword argument 'bind'

From looking at the functional tests and Nova code, this should be a
straightforward fix.

We should also look at creating a test job that tests both the
sqlalchemy 2.0 and neutron-lib main/master branches so we don't regress.
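
The 1.x-to-2.0 change behind this failure is small and mechanical:
MetaData can no longer be bound to an engine, so the engine is passed
explicitly wherever it is needed.  A minimal standalone illustration
(not the neutron-lib test code):

```
import sqlalchemy as sa

engine = sa.create_engine("sqlite://")

# 1.x style, removed in 2.0 -- raises the TypeError seen above:
#   meta = sa.MetaData(bind=engine)

# 2.0 style: create MetaData unbound, pass the engine explicitly.
meta = sa.MetaData()
table = sa.Table("example", meta,
                 sa.Column("id", sa.Integer, primary_key=True))
meta.create_all(engine)
table.drop(engine)
```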

** Affects: neutron
 Importance: High
 Assignee: Brian Haley (brian-haley)
 Status: In Progress


** Tags: unittest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2043141

Title:
  neutron-lib unit tests need update for sqlalchemy 2.0

Status in neutron:
  In Progress

Bug description:
  Some of the neutron-lib unit tests do not support sqlalchemy 2.0.

  Thomas Goirand ran them on a Debian system and this test file fails:

neutron_lib/tests/unit/db/test_sqlalchemytypes.py

  There are 8 failures, all basically the same:

  105s FAIL: 
neutron_lib.tests.unit.db.test_sqlalchemytypes.CIDRTestCase.test_crud
  105s neutron_lib.tests.unit.db.test_sqlalchemytypes.CIDRTestCase.test_crud
  105s --
  105s testtools.testresult.real._StringException: Traceback (most recent call 
last):
  105s   File 
"/tmp/autopkgtest-lxc.jvth6_27/downtmp/build.pBL/src/neutron_lib/tests/unit/db/test_sqlalchemytypes.py",
 line 36, in setUp
  105s meta = sa.MetaData(bind=self.engine)
  105s^
  105s TypeError: MetaData.__init__() got an unexpected keyword argument 'bind'

  From looking at the functional tests and Nova code, should be a
  straightforward fix.

  We should also look at creating a test job that tests both the
  sqlalchemy 2.0 and neutron-lib main/master branches so we don't
  regress.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2043141/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1975828] Re: difference in execution time between admin/non-admin call

2023-11-28 Thread Brian Haley
** Changed in: neutron
   Status: Expired => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1975828

Title:
  difference in execution time between admin/non-admin call

Status in neutron:
  Triaged

Bug description:
  Part of https://bugs.launchpad.net/neutron/+bug/1973349 :
  Another interesting thing is the difference in execution time between an 
admin and a non-admin call:
  (openstack) dmitriy@6BT6XT2:~$ . Documents/openrc/admin.rc
  (openstack) dmitriy@6BT6XT2:~$ time openstack port list --project  | 
wc -l
  2142

  real 0m5,401s
  user 0m1,565s
  sys 0m0,086s
  (openstack) dmitriy@6BT6XT2:~$ . Documents/openrc/.rc
  (openstack) dmitriy@6BT6XT2:~$ time openstack port list | wc -l
  2142

  real 2m38,101s
  user 0m1,626s
  sys 0m0,083s
  (openstack) dmitriy@6BT6XT2:~$
  (openstack) dmitriy@6BT6XT2:~$ time openstack port list --project  | 
wc -l
  2142

  real 1m17,029s
  user 0m1,541s
  sys 0m0,085s
  (openstack) dmitriy@6BT6XT2:~$

  So basically if you provide a tenant_id to the query, it will execute
  twice as fast. But it won't look through networks owned by the tenant
  (which would kind of explain the difference in speed).

  Environment:
  Neutron SHA: 97180b01837638bd0476c28bdda2340eccd649af
  Backend: ovs
  OS: Ubuntu 20.04
  Mariadb: 10.6.5
  SQLalchemy: 1.4.23
  Backend: openvswitch
  Plugins: router vpnaas metering 
neutron_dynamic_routing.services.bgp.bgp_plugin.BgpPlugin

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1975828/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2028003] Re: neutron fails with postgres on subnet_id

2023-12-01 Thread Brian Haley
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2028003

Title:
  neutron fails with postgres on subnet_id

Status in neutron:
  Fix Released

Bug description:
  Ironic's postgres CI test job has started to fail with an error rooted
  in Neutron's database API layer. Specifically how that API is
  interacting with SQLAlchemy to interact with postgres.

  Error:

  DBAPIError exception wrapped.: psycopg2.errors.GroupingError: column
  "subnet_service_types.subnet_id" must appear in the GROUP BY clause or
  be used in an aggregate function

  This is likely just a command formatting issue in the database
  interaction, and should be easily fixed.
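
  For background, Postgres enforces the SQL rule that every selected
  column must either appear in GROUP BY or be aggregated, while MySQL's
  default modes let such queries pass.  A standalone sketch of the
  pattern (table definitions abbreviated; not the actual neutron query):

```
import sqlalchemy as sa

metadata = sa.MetaData()
subnets = sa.Table(
    "subnets", metadata,
    sa.Column("id", sa.String, primary_key=True),
    sa.Column("name", sa.String))
svc_types = sa.Table(
    "subnet_service_types", metadata,
    sa.Column("subnet_id", sa.String),
    sa.Column("service_type", sa.String))

# Postgres rejects this: subnet_service_types.subnet_id is selected
# but neither grouped nor aggregated.
bad = sa.select(subnets, svc_types.c.subnet_id).group_by(subnets.c.id)

# Accepted: every selected column is grouped or aggregated.
good = (
    sa.select(subnets.c.id, sa.func.count(svc_types.c.subnet_id))
    .select_from(subnets.outerjoin(
        svc_types, svc_types.c.subnet_id == subnets.c.id))
    .group_by(subnets.c.id))
```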

  Job Logs:
  
https://96a560a38139b70cb224-e9f29c7afce5197c5c20e02f6b6da59e.ssl.cf5.rackcdn.com/888500/7/check/ironic-
  tempest-pxe_ipmitool-postgres/7eeffae/controller/logs/screen-q-svc.txt

  Full error:

  
  Jul 17 15:02:54.203622 np0034696541 neutron-server[69958]: DEBUG 
neutron.pecan_wsgi.hooks.quota_enforcement 
[req-36ab3b86-999f-48a0-87f8-e2613909b6c4 
req-8aa9b5ab-4403-42dc-b82e-c28f1a37c843 tempest-BaremetalBasicOps-471932799 
tempest-BaremetalBasicOps-471932799-project-member] Made reservation on behalf 
of 9d6bf2710477411887e0dcc4386b458a for: {'port': 1} {{(pid=69958) before 
/opt/stack/neutron/neutron/pecan_wsgi/hooks/quota_enforcement.py:53}}
  Jul 17 15:02:54.206063 np0034696541 neutron-server[69958]: DEBUG 
neutron_lib.callbacks.manager [req-36ab3b86-999f-48a0-87f8-e2613909b6c4 
req-8aa9b5ab-4403-42dc-b82e-c28f1a37c843 tempest-BaremetalBasicOps-471932799 
tempest-BaremetalBasicOps-471932799-project-member] Publish callbacks 
['neutron.plugins.ml2.plugin.SecurityGroupDbMixin._ensure_default_security_group_handler-595366']
 for port (None), before_create {{(pid=69958) _notify_loop 
/usr/local/lib/python3.10/dist-packages/neutron_lib/callbacks/manager.py:176}}
  Jul 17 15:02:54.215796 np0034696541 neutron-server[69958]: WARNING 
oslo_db.sqlalchemy.exc_filters [req-36ab3b86-999f-48a0-87f8-e2613909b6c4 
req-8aa9b5ab-4403-42dc-b82e-c28f1a37c843 tempest-BaremetalBasicOps-471932799 
tempest-BaremetalBasicOps-471932799-project-member] DBAPIError exception 
wrapped.: psycopg2.errors.GroupingError: column 
"subnet_service_types.subnet_id" must appear in the GROUP BY clause or be used 
in an aggregate function
  Jul 17 15:02:54.215796 np0034696541 neutron-server[69958]: LINE 2: ...de, 
subnets.standard_attr_id AS standard_attr_id, subnet_ser...
  Jul 17 15:02:54.215796 np0034696541 neutron-server[69958]:
  ^
  Jul 17 15:02:54.215796 np0034696541 neutron-server[69958]: ERROR 
oslo_db.sqlalchemy.exc_filters Traceback (most recent call last):
  Jul 17 15:02:54.215796 np0034696541 neutron-server[69958]: ERROR 
oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.10/dist-packages/sqlalchemy/engine/base.py", line 1900, 
in _execute_context
  Jul 17 15:02:54.215796 np0034696541 neutron-server[69958]: ERROR 
oslo_db.sqlalchemy.exc_filters self.dialect.do_execute(
  Jul 17 15:02:54.215796 np0034696541 neutron-server[69958]: ERROR 
oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.10/dist-packages/sqlalchemy/engine/default.py", line 
736, in do_execute
  Jul 17 15:02:54.215796 np0034696541 neutron-server[69958]: ERROR 
oslo_db.sqlalchemy.exc_filters cursor.execute(statement, parameters)
  Jul 17 15:02:54.215796 np0034696541 neutron-server[69958]: ERROR 
oslo_db.sqlalchemy.exc_filters psycopg2.errors.GroupingError: column 
"subnet_service_types.subnet_id" must appear in the GROUP BY clause or be used 
in an aggregate function
  Jul 17 15:02:54.215796 np0034696541 neutron-server[69958]: ERROR 
oslo_db.sqlalchemy.exc_filters LINE 2: ...de, subnets.standard_attr_id AS 
standard_attr_id, subnet_ser...
  Jul 17 15:02:54.215796 np0034696541 neutron-server[69958]: ERROR 
oslo_db.sqlalchemy.exc_filters  
^
  Jul 17 15:02:54.215796 np0034696541 neutron-server[69958]: ERROR 
oslo_db.sqlalchemy.exc_filters 
  Jul 17 15:02:54.215796 np0034696541 neutron-server[69958]: ERROR 
oslo_db.sqlalchemy.exc_filters 
  Jul 17 15:02:54.218977 np0034696541 neutron-server[69958]: ERROR 
neutron.pecan_wsgi.hooks.translation [req-36ab3b86-999f-48a0-87f8-e2613909b6c4 
req-8aa9b5ab-4403-42dc-b82e-c28f1a37c843 tempest-BaremetalBasicOps-471932799 
tempest-BaremetalBasicOps-471932799-project-member] POST failed.: 
oslo_db.exception.DBError: (psycopg2.errors.GroupingError) column 
"subnet_service_types.subnet_id" must appear in the GROUP BY clause or be used 
in an aggregate function
  Jul 17 15:02:54.218977 np0034696541 neutron-server[69958]: LINE 2: ...de, 
subnets.standard_attr_id AS standard_attr_id, subnet_ser...
  Jul 17 15:02:54.218977 np0034696541 neutron-server[69958]:   

[Yahoo-eng-team] [Bug 1849463] Re: linuxbridge packet forwarding issue with vlan backed networks

2023-12-01 Thread Brian Haley
I am going to mark this as won't fix as the linuxbridge agent is
unmaintained and experimental on the master branch.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1849463

Title:
  linuxbridge packet forwarding issue with vlan backed networks

Status in neutron:
  Won't Fix

Bug description:
  This is related to: https://bugs.launchpad.net/os-vif/+bug/1837252

  In Ubuntu 18.04 using Ubuntu Cloud Archives (UCA) and Stein os-vif
  version 1.15.1 is deployed.

  According to bug #1837252/OSSA-2019-004/CVE-2019-15753, this version
  is vulnerable to unicast packets being broadcast to all bridge
  members, resulting in possible traffic interception, due to disabled
  MAC learning (ageing set to 0). The fix is to set ageing to the
  default of 300.

  With this vulnerable setup, instances using vlan-backed networks have
  working traffic flows as expected, since all packets are being
  distributed to all members.

  The FDB entries show:
  # bridge fdb | grep -e tapb2b8c5ff-8c -e brqa50c5b7b-db -e ens256.3002 | grep -v -e ^01:00:5e -e ^33:33
  00:16:3e:ba:fa:33 dev ens256.3002 vlan 1 master brqa50c5b7b-db permanent
  00:16:3e:ba:fa:33 dev ens256.3002 master brqa50c5b7b-db permanent
  fe:16:3e:0d:c0:42 dev tapb2b8c5ff-8c vlan 1 master brqa50c5b7b-db permanent
  fe:16:3e:0d:c0:42 dev tapb2b8c5ff-8c master brqa50c5b7b-db permanent

  Showmacs confirm:
  # brctl showmacs brqa50c5b7b-db
  port no   mac addr            is local?   ageing timer
    2       00:16:3e:ba:fa:33   yes         0.00
    2       00:16:3e:ba:fa:33   yes         0.00
    1       fe:16:3e:0d:c0:42   yes         0.00
    1       fe:16:3e:0d:c0:42   yes         0.00

  However, once ageing is enabled, by either `brctl setageing
  brqa50c5b7b-db 300` or upgrading to UCA/Train with os-vif 1.17.0,
  traffic flows directed towards tapb2b8c5ff-8c are not being forwarded.

  Traffic coming from tapb2b8c5ff-8c is being forwarded correctly
  through the bridge and exits ens236.3002.

  Only incoming traffic destined for tapb2b8c5ff-8c' MAC is being
  dropped or not forwarded.

  the FDB entries show:
  # bridge fdb | grep -e tapb2b8c5ff-8c -e brqa50c5b7b-db -e ens256.3002 | grep -v -e ^01:00:5e -e ^33:33
  00:50:56:89:64:e0 dev ens256.3002 master brqa50c5b7b-db 
  00:16:3e:ba:fa:33 dev ens256.3002 vlan 1 master brqa50c5b7b-db permanent
  fa:16:3e:f8:76:cf dev ens256.3002 master brqa50c5b7b-db 
  00:16:35:bf:5f:e5 dev ens256.3002 master brqa50c5b7b-db 
  fa:16:3e:0d:c0:42 dev ens256.3002 master brqa50c5b7b-db 
  00:50:56:89:69:d9 dev ens256.3002 master brqa50c5b7b-db 
  9e:dc:1b:a2:9b:2e dev ens256.3002 master brqa50c5b7b-db 
  00:16:3e:ba:fa:33 dev ens256.3002 master brqa50c5b7b-db permanent
  0e:c7:c3:cd:8d:fa dev ens256.3002 master brqa50c5b7b-db 
  fe:16:3e:0d:c0:42 dev tapb2b8c5ff-8c vlan 1 master brqa50c5b7b-db permanent
  fe:16:3e:0d:c0:42 dev tapb2b8c5ff-8c master brqa50c5b7b-db permanent

  Showmacs confirm:
  # brctl showmacs brqa50c5b7b-db
  port no   mac addr            is local?   ageing timer
    2       00:16:35:bf:5f:e5   no          0.16
    2       00:16:3e:ba:fa:33   yes         0.00
    2       00:16:3e:ba:fa:33   yes         0.00
    2       00:50:56:89:64:e0   no          0.10
    2       00:50:56:89:69:d9   no          0.20
    2       0e:c7:c3:cd:8d:fa   no          0.10
    2       9e:dc:1b:a2:9b:2e   no          0.12
    2       fa:16:3e:0d:c0:42   no         20.00
    2       fa:16:3e:f8:76:cf   no         13.33
    1       fe:16:3e:0d:c0:42   yes         0.00
    1       fe:16:3e:0d:c0:42   yes         0.00

  This shows the guest (fa:16:3e:0d:c0:42) as non-local, originating
  from ens256.3002 instead of tapb2b8c5ff-8c, which I suspect causes
  packets not to be forwarded into tapb2b8c5ff-8c.

  The VM has now no means of ingress connectivity to the vlan backed
  network but outgoing packets are still being forwarded fine.

  It's important to note that instances using vxlan-backed networks
  function without issues when ageing is set. The issue therefore seems
  limited to vlan-backed networks.

  One significant difference in the FDB table between vlan- and vxlan-
  backed networks is the device which holds the guest MAC. On vxlan-
  backed networks, this MAC is mapped to the tap device inside the FDB.

  I have 2 pcap recordings of DHCP traffic, one from the bridge and one
  from the tap showing traffic flowing out of the tap but not returning
  despite replies arriving on the bridge interface.

  iptables has been ruled out by prepending a -j ACCEPT at the top of
  the neutron-linuxbri-ib2b8c5ff-8 chain.

  I talked to @ralonsoh and @seam-k-mooney on IRC yesterday about this
  issue and both suggested I open this bug report.

  Let me k

[Yahoo-eng-team] [Bug 1845145] Re: [L3] add ability for iptables_manager to ensure rule was added only once

2023-12-01 Thread Brian Haley
Since the patch on master was abandoned manually, I am going to close
this.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1845145

Title:
  [L3] add ability for iptables_manager to ensure rule was added only
  once

Status in neutron:
  Won't Fix

Bug description:
  iptables_manager should have the ability to ensure a rule is added
  only once. In function [1], it just adds the new rule to the cache
  list regardless of whether it is a duplicate, and eventually the
  warning LOG [2] will be raised. Sometimes multiple threads add rules
  for the same resource, and it may not be easy for users to guarantee
  that their rule-generation code runs only once, so a rule can end up
  duplicated in the cache. During removal the cache then holds
  duplicated rules; removing one still leaves the same rule behind. As
  a result, the Linux netfilter rules may be left unchanged after the
  user's removal action.

  [1] 
https://github.com/openstack/neutron/blob/master/neutron/agent/linux/iptables_manager.py#L205-L225
  [2] 
https://github.com/openstack/neutron/blob/master/neutron/agent/linux/iptables_manager.py#L718-L725
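
  The requested behavior amounts to making rule addition idempotent.  A
  minimal sketch of the idea (illustrative only, not the actual
  IptablesTable code):

```
class RuleCache:
    """Cache of rules where duplicates are rejected at add time."""

    def __init__(self):
        self.rules = []

    def add_rule(self, rule):
        if rule in self.rules:
            return False  # already present; do not add a duplicate
        self.rules.append(rule)
        return True

    def remove_rule(self, rule):
        # With duplicates impossible, one removal really removes it.
        if rule in self.rules:
            self.rules.remove(rule)
```

  With this in place, concurrent callers adding the same rule leave only
  one cache entry, so a later removal actually changes the netfilter
  state.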

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1845145/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2015364] Re: [skip-level] OVN tests constantly failing

2023-12-01 Thread Brian Haley
Since the skip-level job is now passing and voting in our gate I am
going to close this bug.

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2015364

Title:
  [skip-level] OVN tests constantly failing

Status in devstack:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  In the new Zed-Bobcat skip-level jobs [1], the OVN job has 4 tests constantly 
failing (1 fail is actually a setup class method):
  
*tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_network_basic_ops
  
*tempest.api.compute.servers.test_attach_interfaces.AttachInterfacesUnderV243Test.test_add_remove_fixed_ip
  *setUpClass 
(tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON)
  
*tempest.scenario.test_server_basic_ops.TestServerBasicOps.test_server_basic_ops

  Logs:
  
*https://fd50651997fbb0337883-282d0b18354725863279cd3ebda4ab44.ssl.cf5.rackcdn.com/878632/6/experimental/neutron-ovn-grenade-multinode-skip-level/baf4ed5/controller/logs/grenade.sh_log.txt
  
*https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_607/878632/6/experimental/neutron-ovn-grenade-multinode-skip-level/6072d85/controller/logs/grenade.sh_log.txt

  [1]https://review.opendev.org/c/openstack/neutron/+/878632

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/2015364/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2024205] Re: [OVN] Hash Ring nodes removed when "periodic worker" is killed

2023-12-01 Thread Brian Haley
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2024205

Title:
  [OVN] Hash Ring nodes removed when "periodic worker" is killed

Status in neutron:
  Fix Released

Bug description:
  Reported at: https://bugzilla.redhat.com/show_bug.cgi?id=2213910

  In the ML2/OVN driver we set a signal handler for SIGTERM to remove
  the hash ring nodes upon service exit [0] but, during the
  investigation of a bug with a customer, we identified that if an
  unrelated Neutron worker is killed (such as the "periodic worker" in
  this case), that process can remove the entries from the
  ovn_hash_ring table for that hostname.

  If this happens on all controllers, the ovn_hash_ring table is
  rendered empty and OVSDB events are no longer processed by ML2/OVN.

  Proposed solution:

  This LP proposes to make this more reliable: instead of removing the
  nodes from the ovn_hash_ring table on exit, we would mark them as
  offline. That way, if a worker dies the nodes will remain registered
  in the table and the heartbeat thread will set them as online again
  on the next beat. If the service is properly stopped, the heartbeat
  won't be running and the nodes will be seen as offline by the Hash
  Ring manager.

  As a note, upon the next startup of the service the nodes matching the
  server hostname will be removed from the ovn_hash_ring table and added
  again accordingly as Neutron worker are spawned [1].

  [0] 
https://github.com/openstack/neutron/blob/cbb89fdb1414a1b3a8e8b3a9a4154ef627bb9d1a/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py#L295-L296
  [1] 
https://github.com/openstack/neutron/blob/cbb89fdb1414a1b3a8e8b3a9a4154ef627bb9d1a/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py#L316
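
  The following is a minimal sketch of the proposed behavior; the
  db_hash_ring helper name and functions below are illustrative
  assumptions, not the actual Neutron API:

      def handle_sigterm(context, hostname, db_hash_ring):
          # Old behavior: delete this host's rows outright -- a killed
          # worker could thereby empty ovn_hash_ring for the host.
          # db_hash_ring.remove_nodes_from_host(context, hostname)

          # Proposed behavior: mark the rows offline; the heartbeat
          # thread sets them online again on its next beat if the
          # service is still alive.
          db_hash_ring.set_nodes_from_host_as_offline(context, hostname)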

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2024205/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1779335] Re: neutron-vpnaas doesn't support local tox targets

2023-12-01 Thread Brian Haley
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1779335

Title:
  neutron-vpnaas doesn't support local tox targets

Status in neutron:
  Fix Released

Bug description:
  Today it appears that neutron-vpnaas doesn't support the proper
  environment setup for running tox targets locally. For more details
  see [1].

  
  [1] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131801.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1779335/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1853873] Re: The /v2.0/ports/{port_id}/bindings APIs are not documented

2023-12-01 Thread Brian Haley
https://docs.openstack.org/api-ref/network/v2/#port-binding shows these
APIs are now present, closing bug.

** Changed in: neutron
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1853873

Title:
  The /v2.0/ports/{port_id}/bindings APIs are not documented

Status in neutron:
  Fix Released

Bug description:
  The following APIs are not documented in the networking api-ref [1]:
  * GET /v2.0/ports/{port_id}/bindings
  * POST /v2.0/ports/{port_id}/bindings
  * PUT /v2.0/ports/{port_id}/bindings/{host}/activate

  
  [1] https://docs.openstack.org/api-ref/network/v2/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1853873/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2045811] [NEW] neutron-ovn-db-sync-util can fail with KeyError

2023-12-06 Thread Brian Haley
Public bug reported:

If the neutron-ovn-db-sync-util is run while neutron-server is active
(which is not recommended), it can randomly fail if there are active API
calls in flight to create networks and/or subnets.

This is an example traceback I've seen many times in a production
environment:

WARNING neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.ovn_db_sync 
[req-7fc12422-6fae-4ec9-98bc-8a114f30c9e3 - - - - -] DHCP options for subnet 
0662e4fd-f8b4-4d29-8ba7-5846bd19e45d is present in Neutron but out of sync for 
OVN
CRITICAL neutron_ovn_db_sync_util [req-7fc12422-6fae-4ec9-98bc-8a114f30c9e3 - - 
- - -] Unhandled error: KeyError: 'neutron-93ad1c21-d2cf-448a-8fae-21c71f44dc5c'
ERROR neutron_ovn_db_sync_util Traceback (most recent call last):
ERROR neutron_ovn_db_sync_util   File "/usr/bin/neutron-ovn-db-sync-util", line 
10, in <module>
ERROR neutron_ovn_db_sync_util sys.exit(main())
ERROR neutron_ovn_db_sync_util   File 
"/usr/lib/python3/dist-packages/neutron/cmd/ovn/neutron_ovn_db_sync_util.py", 
line 219, in main
ERROR neutron_ovn_db_sync_util synchronizer.do_sync()
ERROR neutron_ovn_db_sync_util   File 
"/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_db_sync.py",
 line 98, in do_sync
ERROR neutron_ovn_db_sync_util self.sync_networks_ports_and_dhcp_opts(ctx)
ERROR neutron_ovn_db_sync_util   File 
"/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_db_sync.py",
 line 871, in sync_networks_ports_and_dhcp_opts
ERROR neutron_ovn_db_sync_util self._sync_subnet_dhcp_options(
ERROR neutron_ovn_db_sync_util   File 
"/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_db_sync.py",
 line 645, in _sync_subnet_dhcp_options
ERROR neutron_ovn_db_sync_util network = 
db_networks[utils.ovn_name(subnet['network_id'])]
ERROR neutron_ovn_db_sync_util KeyError: 
'neutron-93ad1c21-d2cf-448a-8fae-21c71f44dc5c'
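
A defensive fix could tolerate the race by skipping subnets whose
network is missing from the snapshot. A minimal sketch of such a check
inside the subnet loop of _sync_subnet_dhcp_options (illustrative only,
not the actual merged fix):

    # db_networks is the snapshot of networks taken earlier in the
    # sync run; an in-flight API call may have added a network since.
    network = db_networks.get(utils.ovn_name(subnet['network_id']))
    if network is None:
        LOG.warning("Network for subnet %s not in sync snapshot, "
                    "skipping", subnet['id'])
        continue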

** Affects: neutron
 Importance: Medium
 Assignee: Brian Haley (brian-haley)
 Status: Confirmed


** Tags: ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2045811

Title:
  neutron-ovn-db-sync-util can fail with KeyError

Status in neutron:
  Confirmed

Bug description:
  If the neutron-ovn-db-sync-util is run while neutron-server is active
  (which is not recommended), it can randomly fail if there are active
  API calls in flight to create networks and/or subnets.

  This is an example traceback I've seen many times in a production
  environment:

  WARNING neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.ovn_db_sync 
[req-7fc12422-6fae-4ec9-98bc-8a114f30c9e3 - - - - -] DHCP options for subnet 
0662e4fd-f8b4-4d29-8ba7-5846bd19e45d is present in Neutron but out of sync for 
OVN
  CRITICAL neutron_ovn_db_sync_util [req-7fc12422-6fae-4ec9-98bc-8a114f30c9e3 - 
- - - -] Unhandled error: KeyError: 
'neutron-93ad1c21-d2cf-448a-8fae-21c71f44dc5c'
  ERROR neutron_ovn_db_sync_util Traceback (most recent call last):
  ERROR neutron_ovn_db_sync_util   File "/usr/bin/neutron-ovn-db-sync-util", 
line 10, in <module>
  ERROR neutron_ovn_db_sync_util sys.exit(main())
  ERROR neutron_ovn_db_sync_util   File 
"/usr/lib/python3/dist-packages/neutron/cmd/ovn/neutron_ovn_db_sync_util.py", 
line 219, in main
  ERROR neutron_ovn_db_sync_util synchronizer.do_sync()
  ERROR neutron_ovn_db_sync_util   File 
"/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_db_sync.py",
 line 98, in do_sync
  ERROR neutron_ovn_db_sync_util self.sync_networks_ports_and_dhcp_opts(ctx)
  ERROR neutron_ovn_db_sync_util   File 
"/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_db_sync.py",
 line 871, in sync_networks_ports_and_dhcp_opts
  ERROR neutron_ovn_db_sync_util self._sync_subnet_dhcp_options(
  ERROR neutron_ovn_db_sync_util   File 
"/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_db_sync.py",
 line 645, in _sync_subnet_dhcp_options
  ERROR neutron_ovn_db_sync_util network = 
db_networks[utils.ovn_name(subnet['network_id'])]
  ERROR neutron_ovn_db_sync_util KeyError: 
'neutron-93ad1c21-d2cf-448a-8fae-21c71f44dc5c'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2045811/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2049546] Re: neutron-linuxbridge-agent ebtables RULE_DELETE failed (Invalid argument)

2024-01-16 Thread Brian Haley
*** This bug is a duplicate of bug 2038541 ***
https://bugs.launchpad.net/bugs/2038541

This was fixed with
https://review.opendev.org/c/openstack/neutron/+/898832 and is a
duplicate of https://bugs.launchpad.net/neutron/+bug/2038541 - please
try the fix there.

** This bug has been marked a duplicate of bug 2038541
   LinuxBridgeARPSpoofTestCase functional tests fails with latest jammy kernel 
5.15.0-86.96

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2049546

Title:
  neutron-linuxbridge-agent ebtables RULE_DELETE failed (Invalid
  argument)

Status in neutron:
  New

Bug description:
  neutron-linuxbridge-agent fails and gets stuck when cleaning up ARP
  protection rules:

   neutron-linuxbridge-agent[3049824]: Exit code: 4; Cmd:
  ['ebtables', '-t', 'nat', '--concurrent', '-D', 'neutronMAC-
  tap50f1af99-28', '-i', 'tap50f1af99-28', '--among-src',
  'fa:16:3e:ba:10:2a', '-j', 'RETURN']; Stdin: ; Stdout: ; Stderr:
  ebtables v1.8.7 (nf_tables):  RULE_DELETE failed (Invalid argument):
  rule in chain neutronMAC-tap50f1af99-28

  Afterward, it stops responding to RPC messages and nova-compute times
  out waiting for vif-plugged events.

  Version:

* OpenStack Zed from Ubuntu cloud archive
* Ubuntu 22.04 LTS
* 5.15.0-91-generic #101-Ubuntu
* Deployed via Ubuntu cloud archive packages

  Context:

  The document
  https://github.com/openstack/neutron/blob/stable/zed/doc/source/admin/deploy-
  lb.rst mentions some resolved issues with ebtables based on nftables,
  and the scenarios from the linked bug reports do work. The issue here
  appears to only happen when removing ARP spoofing rules. We have a
  few compute hosts with high churn, where many instances are created
  and deleted. On these, neutron-linuxbridge-agent works visibly fine
  until it gets stuck.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2049546/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2051171] [NEW] SQLalchemy 2.0 warning in neutron-lib

2024-01-24 Thread Brian Haley
Public bug reported:

Running 'tox -e pep8' in neutron-lib or neutron repo generates this new
warning:

/home/bhaley/git/neutron-lib/neutron_lib/db/model_base.py:113: 
MovedIn20Warning: Deprecated API features detected! These feature(s) are not 
compatible with SQLAlchemy 2.0. To prevent incompatible upgrades prior to 
updating applications, ensure requirements files are pinned to 
"sqlalchemy<2.0". Set environment variable SQLALCHEMY_WARN_20=1 to show all 
deprecation warnings.  Set environment variable 
SQLALCHEMY_SILENCE_UBER_WARNING=1 to silence this message. (Background on 
SQLAlchemy 2.0 at: https://sqlalche.me/e/b8d9)
  BASEV2 = declarative.declarative_base(cls=NeutronBaseV2)

Google eventually points in this direction:

https://docs.sqlalchemy.org/en/20/changelog/whatsnew_20.html#step-one-
orm-declarative-base-is-superseded-by-orm-declarativebase

So moving to the sqlalchemy.orm.DeclarativeBase class is the way forward.

Might be a little tricky to implement as sqlalchemy is currently pinned
in upper-constraints (UC):

sqlalchemy===1.4.50
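
For reference, a minimal sketch of the 2.0-style declaration, assuming
the existing NeutronBaseV2 mixin (this is not the actual neutron-lib
patch):

    from sqlalchemy.orm import DeclarativeBase

    class NeutronBaseV2:
        pass  # stand-in for neutron-lib's real mixin class

    # Old 1.x style, which triggers MovedIn20Warning:
    #   BASEV2 = declarative.declarative_base(cls=NeutronBaseV2)

    # New 2.0 style: subclass DeclarativeBase directly.
    class BASEV2(DeclarativeBase, NeutronBaseV2):
        pass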

** Affects: neutron
 Importance: High
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2051171

Title:
  SQLalchemy 2.0 warning in neutron-lib

Status in neutron:
  Confirmed

Bug description:
  Running 'tox -e pep8' in neutron-lib or neutron repo generates this
  new warning:

  /home/bhaley/git/neutron-lib/neutron_lib/db/model_base.py:113: 
MovedIn20Warning: Deprecated API features detected! These feature(s) are not 
compatible with SQLAlchemy 2.0. To prevent incompatible upgrades prior to 
updating applications, ensure requirements files are pinned to 
"sqlalchemy<2.0". Set environment variable SQLALCHEMY_WARN_20=1 to show all 
deprecation warnings.  Set environment variable 
SQLALCHEMY_SILENCE_UBER_WARNING=1 to silence this message. (Background on 
SQLAlchemy 2.0 at: https://sqlalche.me/e/b8d9)
BASEV2 = declarative.declarative_base(cls=NeutronBaseV2)

  Google eventually points in this direction:

  https://docs.sqlalchemy.org/en/20/changelog/whatsnew_20.html#step-one-
  orm-declarative-base-is-superseded-by-orm-declarativebase

  So moving to the sqlalchemy.orm.DeclarativeBase class is the way forward.

  Might be a little tricky to implement as sqlalchemy is currently
  pinned in upper-constraints (UC):

  sqlalchemy===1.4.50

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2051171/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1999677] Re: Defunct nodes are reported as happy in network agent list

2024-02-13 Thread Brian Haley
Since this has been fixed in later Ussuri and/or later neutron code I'm
going to close this. Please re-open if necessary.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1999677

Title:
  Defunct nodes are reported as happy in network agent list

Status in OpenStack Neutron API Charm:
  New
Status in networking-ovn:
  Invalid
Status in neutron:
  Invalid

Bug description:
  When decommissioning a node from a cloud using Neutron and OVN, the 
Chassis is not removed from the OVN SB db, and it always shows as happy in 
"openstack network agent list", which is a bit weird; the operator would 
expect to see it as XXX in the agent list.

  This is more for the upstream neutron but adding the charm for
  visibility.

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-neutron-api/+bug/1999677/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1742187] Re: osc client missing extra-dhcp-opts option

2024-02-14 Thread Brian Haley
** Changed in: python-openstackclient
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1742187

Title:
  osc client missing extra-dhcp-opts option

Status in neutron:
  Invalid
Status in python-openstackclient:
  Fix Released

Bug description:
  An option to use the extra-dhcp-opt API extension seems to be missing
  from the osc plugin for neutron:

  stack@tm-devstack-master-01:~$ openstack extension list |grep extra_dhcp_opt
  | Neutron Extra DHCP options  
 | extra_dhcp_opt  | Extra options 
configuration for DHCP. For example PXE boot options to DHCP clients can be 
specified (e.g. tftp-server, server-ip-address, bootfile-name) |

  => the corresponding API extension is enabled in this setup

  stack@tm-devstack-master-01:~$ openstack port create 2>&1 |grep extra

  => nothing about extra dhcp opt in the CLI help

  stack@tm-devstack-master-01:~$ openstack port create --network foo 
--extra-dhcp-opt opt_name=42,opt_value=55
  usage: openstack port create [-h] [-f {json,shell,table,value,yaml}]
   [-c COLUMN] [--max-width ] [--fit-width]
   [--print-empty] [--noindent] [--prefix PREFIX]
   --network  [--description ]
   [--device ]
   [--mac-address ]
   [--device-owner ]
   [--vnic-type ] [--host ]
   [--dns-name dns-name]
   [--fixed-ip 
subnet=,ip-address=]
   [--binding-profile ]
   [--enable | --disable] [--project ]
   [--project-domain ]
   [--security-group  | 
--no-security-group]
   [--qos-policy ]
   [--enable-port-security | 
--disable-port-security]
   [--allowed-address 
ip-address=[,mac-address=]]
   [--tag  | --no-tag]
   
  openstack port create: error: unrecognized arguments: --extra-dhcp-opt
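
  For reference, later python-openstackclient releases added this as
  --extra-dhcp-option; usage looks roughly like the following (a hedged
  example from memory, the exact key names may differ by release):

  $ openstack port create --network foo \
      --extra-dhcp-option name=bootfile-name,value=pxelinux.0 testport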

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1742187/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2028285] Re: [unit test][xena+] test_port_deletion_prevention fails when runs in isolation

2024-02-19 Thread Brian Haley
The comment on a failure in Zed looked to not have the fix - it was on
version 21.1.2, and version 21.2.0 or greater is required. Will close
this as the fixes have been released.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2028285

Title:
  [unit test][xena+] test_port_deletion_prevention fails when runs in
  isolation

Status in neutron:
  Fix Released

Bug description:
  Can be reproduced by Just running:-
  tox -epy3 -- test_port_deletion_prevention
  or run any of the below tests individually:-
  
neutron.tests.unit.extensions.test_l3.L3NatDBSepTestCase.test_port_deletion_prevention_handles_missing_port
  
neutron.tests.unit.extensions.test_extraroute.ExtraRouteDBSepTestCase.test_port_deletion_prevention_handles_missing_port

  Fails as below:-
  
neutron.tests.unit.extensions.test_extraroute.ExtraRouteDBSepTestCase.test_port_deletion_prevention_handles_missing_port
  


  Captured traceback:
  ~~~
  Traceback (most recent call last):

File "/home/ykarel/work/openstack/neutron/neutron/tests/base.py", line 
178, in func
  return f(self, *args, **kwargs)

File "/home/ykarel/work/openstack/neutron/neutron/tests/base.py", line 
178, in func
  return f(self, *args, **kwargs)

File 
"/home/ykarel/work/openstack/neutron/neutron/tests/unit/extensions/test_l3.py", 
line 4491, in test_port_deletion_prevention_handles_missing_port
  pl.prevent_l3_port_deletion(context.get_admin_context(), 'fakeid')

File "/home/ykarel/work/openstack/neutron/neutron/db/l3_db.py", line 
1742, in prevent_l3_port_deletion
  port = port or self._core_plugin.get_port(context, port_id)

File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/neutron_lib/db/api.py",
 line 223, in wrapped
  return f_with_retry(*args, **kwargs,

File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/neutron_lib/db/api.py",
 line 137, in wrapped
  with excutils.save_and_reraise_exception():

File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/oslo_utils/excutils.py",
 line 227, in __exit__
  self.force_reraise()

File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/oslo_utils/excutils.py",
 line 200, in force_reraise
  raise self.value

File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/neutron_lib/db/api.py",
 line 135, in wrapped
  return f(*args, **kwargs)

File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/oslo_db/api.py",
 line 144, in wrapper
  with excutils.save_and_reraise_exception() as ectxt:

File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/oslo_utils/excutils.py",
 line 227, in __exit__
  self.force_reraise()

File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/oslo_utils/excutils.py",
 line 200, in force_reraise
  raise self.value

File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/oslo_db/api.py",
 line 142, in wrapper
  return f(*args, **kwargs)

File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/neutron_lib/db/api.py",
 line 183, in wrapped
  with excutils.save_and_reraise_exception():

File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/oslo_utils/excutils.py",
 line 227, in __exit__
  self.force_reraise()

File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/oslo_utils/excutils.py",
 line 200, in force_reraise
  raise self.value

File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/neutron_lib/db/api.py",
 line 181, in wrapped
  return f(*dup_args, **dup_kwargs)

File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/oslo_db/sqlalchemy/enginefacade.py",
 line 1022, in wrapper
  return fn(*args, **kwargs)

File 
"/home/ykarel/work/openstack/neutron/neutron/db/db_base_plugin_v2.py", line 
1628, in get_port
  lazy_fields = [models_v2.Port.port_forwardings,

  AttributeError: type object 'Port' has no attribute
  'port_forwardings'

  It's reproducible since Xena+, after the inclusion of patch
  https://review.opendev.org/c/openstack/neutron/+/790691

  It does not reproduce if there are other test runs (from the test
  class) before this test which involve other requests (like network
  get/create etc.) apart from the ones modified in the above patch.

  Considering above point if this test is modified to run other requests like 
be

[Yahoo-eng-team] [Bug 2042941] Re: neutron-{ovn, ovs}-tempest-with-sqlalchemy-master jobs not installing sqlalchemy/alembic from source

2024-02-19 Thread Brian Haley
** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2042941

Title:
  neutron-{ovn,ovs}-tempest-with-sqlalchemy-master jobs not installing
  sqlalchemy/alembic from source

Status in neutron:
  Invalid

Bug description:
  neutron-ovn-tempest-with-sqlalchemy-master and 
neutron-ovs-tempest-with-sqlalchemy-master jobs are expected to install 
sqlalchemy and alembic from the main branch as defined in 
required-projects, but these install released versions instead:-
  required-projects:
- name: github.com/sqlalchemy/sqlalchemy
  override-checkout: main
- openstack/oslo.db
- openstack/neutron-lib
- name: github.com/sqlalchemy/alembic
  override-checkout: main

  
  Builds:- 
https://zuul.openstack.org/builds?job_name=neutron-ovn-tempest-with-sqlalchemy-master&job_name=neutron-ovs-tempest-with-sqlalchemy-master&skip=0

  Noticed it when other jobs running with sqlalchemy master are broken
  but not these https://bugs.launchpad.net/neutron/+bug/2042939

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2042941/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2035578] Re: [stable branches] devstack-tobiko-neutron job Fails with InvocationError('could not find executable python', None)

2024-02-19 Thread Brian Haley
As this was not a neutron bug and the tobiko patch has merged will close
this bug.

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2035578

Title:
  [stable branches] devstack-tobiko-neutron job Fails with
  InvocationError('could not find executable python', None)

Status in neutron:
  Fix Released

Bug description:
  It started failing[1] since the job switched to ubuntu-jammy[2].

  Fails as below:-
  2023-09-13 16:46:18.124882 | TASK [tobiko-tox : run sanity test cases before 
creating resources]
  2023-09-13 16:46:19.463567 | controller | neutron_sanity create: 
/home/zuul/src/opendev.org/x/tobiko/.tox/py3
  2023-09-13 16:46:20.518574 | controller | neutron_sanity installdeps: 
-c/home/zuul/src/opendev.org/x/tobiko/upper-constraints.txt, 
-r/home/zuul/src/opendev.org/x/tobiko/requirements.txt, 
-r/home/zuul/src/opendev.org/x/tobiko/test-requirements.txt, 
-r/home/zuul/src/opendev.org/x/tobiko/extra-requirements.txt
  2023-09-13 16:46:20.519390 | controller | ERROR: could not install deps 
[-c/home/zuul/src/opendev.org/x/tobiko/upper-constraints.txt, 
-r/home/zuul/src/opendev.org/x/tobiko/requirements.txt, 
-r/home/zuul/src/opendev.org/x/tobiko/test-requirements.txt, 
-r/home/zuul/src/opendev.org/x/tobiko/extra-requirements.txt]; v = 
InvocationError('could not find executable python', None)
  2023-09-13 16:46:20.520263 | controller | ___ 
summary 
  2023-09-13 16:46:20.555843 | controller | ERROR:   neutron_sanity: could not 
install deps [-c/home/zuul/src/opendev.org/x/tobiko/upper-constraints.txt, 
-r/home/zuul/src/opendev.org/x/tobiko/requirements.txt, 
-r/home/zuul/src/opendev.org/x/tobiko/test-requirements.txt, 
-r/home/zuul/src/opendev.org/x/tobiko/extra-requirements.txt]; v = 
InvocationError('could not find executable python', None)
  2023-09-13 16:46:21.141713 | controller | ERROR
  2023-09-13 16:46:21.142024 | controller | {
  2023-09-13 16:46:21.142117 | controller |   "delta": "0:00:01.484351",
  2023-09-13 16:46:21.142197 | controller |   "end": "2023-09-13 
16:46:20.556249",
  2023-09-13 16:46:21.142276 | controller |   "failed_when_result": true,
  2023-09-13 16:46:21.142353 | controller |   "msg": "non-zero return code",
  2023-09-13 16:46:21.142688 | controller |   "rc": 1,
  2023-09-13 16:46:21.142770 | controller |   "start": "2023-09-13 
16:46:19.071898"
  2023-09-13 16:46:21.142879 | controller | }
  2023-09-13 16:46:21.142972 | controller | ERROR: Ignoring Errors

  
  Example failures zed/stable2023.1:-
  - https://zuul.opendev.org/t/openstack/build/591dae67122444daa35195f7458ffafe
  - https://zuul.opendev.org/t/openstack/build/5838bf0704b247dc8f1eb12367b1d33e
  - https://zuul.opendev.org/t/openstack/build/8d2e22ff171944b0b549c12e1aaac476

  Wallaby/Xena/Yoga builds started failing with:-
  ++ functions:write_devstack_version:852 :   git log '--format=%H %s %ci' 
-1
  + ./stack.sh:main:230  :   
SUPPORTED_DISTROS='bullseye|focal|f35|opensuse-15.2|opensuse-tumbleweed|rhel8|rhel9|openEuler-20.03'
  + ./stack.sh:main:232  :   [[ ! jammy =~ 
bullseye|focal|f35|opensuse-15.2|opensuse-tumbleweed|rhel8|rhel9|openEuler-20.03
 ]]
  + ./stack.sh:main:233  :   echo 'WARNING: this script has 
not been tested on jammy'

  Example:-
  - https://zuul.opendev.org/t/openstack/build/0bd0421e30804b7aa9b6ea032d271be7
  - https://zuul.opendev.org/t/openstack/build/8e06dfc0ccd940f3ab71edc0ec93466c
  - https://zuul.opendev.org/t/openstack/build/899634e90ee94e0294985747075fb26c

  Even before this these jobs were broken, but their tests used to fail
  rather than the test setup; that can be handled once the current
  issues are cleared.

  
  [1] 
https://zuul.opendev.org/t/openstack/builds?job_name=devstack-tobiko-neutron&branch=stable%2F2023.1
  [2] https://review.opendev.org/c/x/devstack-plugin-tobiko/+/893662?usp=search

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2035578/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1685352] Re: Can't invoke function 'get_bind' from alembic.op in expand_drop_exceptions function in alembic migration scripts

2024-02-19 Thread Brian Haley
Looks like this code was changed for SQLAlchemy 2.0 in
d7ba5948ffe4ff4ec760a2774c699774b065cdfb as from_engine() is deprecated,
will close this bug.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1685352

Title:
  Can't invoke function 'get_bind' from alembic.op in
  expand_drop_exceptions function in alembic migration scripts

Status in neutron:
  Invalid

Bug description:
  If something like:

  inspector = reflection.Inspector.from_engine(op.get_bind())

  is used in alembic migration scripts in functions
  expand_drop_exceptions() or contract_creation_exceptions() then there
  is error like:

  NameError: Can't invoke function 'get_bind', as the proxy object
  has not yet been established for the Alembic 'Operations' class.  Try
  placing this code inside a callable.

  Those 2 functions are used only in functional tests but it would be
  nice to have possibility to use this Inspector class for example to
  get names of constraints from database.
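
  For reference, inside an actual migration function the modern idiom
  avoids the deprecated Inspector.from_engine(); a minimal sketch
  (SQLAlchemy 1.4+, table name is illustrative):

      import sqlalchemy as sa
      from alembic import op

      def upgrade():
          # This works because the 'op' proxy is established while the
          # migration runs; calling op.get_bind() outside a migration
          # context raises the NameError quoted above.
          inspector = sa.inspect(op.get_bind())
          names = [c['name']
                   for c in inspector.get_unique_constraints('ports')]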

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1685352/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1794870] Re: NetworkNotFound failures on network test teardown because of retries due to the initial request taking >60 seconds

2024-02-19 Thread Brian Haley
Looks like this was fixed in commit
748dd8df737d28aad7dfd0a1e32659e0256126e2 in the tempest tree, will
close.

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1794870

Title:
  NetworkNotFound failures on network test teardown because of retries
  due to the initial request taking >60 seconds

Status in neutron:
  Fix Released
Status in tempest:
  Invalid

Bug description:
  I've seen this in a few different tests and branches, network tests
  are tearing down and hitting NetworkNotFound presumably because the
  test already deleted the network and we're racing on teardown:

  http://logs.openstack.org/70/605270/1/gate/tempest-full-
  py3/f18bf28/testr_results.html.gz

  Traceback (most recent call last):
File "/opt/stack/tempest/tempest/lib/services/network/networks_client.py", 
line 52, in delete_network
  return self.delete_resource(uri)
File "/opt/stack/tempest/tempest/lib/services/network/base.py", line 41, in 
delete_resource
  resp, body = self.delete(req_uri)
File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 310, in 
delete
  return self.request('DELETE', url, extra_headers, headers, body)
File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 675, in 
request
  self._error_checker(resp, resp_body)
File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 781, in 
_error_checker
  raise exceptions.NotFound(resp_body, resp=resp)
  tempest.lib.exceptions.NotFound: Object not found
  Details: {'detail': '', 'type': 'NetworkNotFound', 'message': 'Network 
0574d093-73f1-4a7c-b0d8-49c9f43d44fa could not be found.'}

  We should just handle the 404 and ignore it since we're trying to
  delete the network anyway.
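
  A minimal sketch of that pattern (an illustrative helper, not the
  actual tempest fix):

      from tempest.lib import exceptions as lib_exc

      def delete_network_ignore_notfound(client, network_id):
          # Treat NotFound as success: the network is gone either way,
          # whether we deleted it or a raced/retried request did.
          try:
              client.delete_network(network_id)
          except lib_exc.NotFound:
              pass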

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1794870/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1849479] Re: neutron l2 to dhcp lost when migrating in stable/stein 14.0.2

2024-02-19 Thread Brian Haley
I'm going to close this as Stein has been EOL for quite a while. If
this is happening on a newer, supported release please open a new bug.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1849479

Title:
  neutron l2 to dhcp lost when migrating in stable/stein 14.0.2

Status in neutron:
  Invalid

Bug description:
  Info about the environment:

  3x controller nodes
  50+ compute nodes

  all in stable stein, neutron is 14.0.2 using OVS 2.11.0

  neutron settings:
- max_l3_agents_per_router = 3
- dhcp_agents_per_network = 2
- router_distributed = true
- interface_driver = openvswitch
- l3_ha = true

  l3 agent:
- agent_mode = dvr

  ml2:
- type_drivers = flat,vlan,vxlan
- tenant_network_types = vxlan
- mechanism_drivers = openvswitch,l2population
- extension_drivers = port_security,dns
- external_network_type = vlan

  tenants may have multiple external networks
  instances may have multiple interfaces

  tests have been performed on 10 instances launched in a tenant network
  connected to a router in an external network. all instances have
  floating ip's assigned. these instances had only 1 interface. this
  particular testing tenant has rbac's for 4 external networks in which
  only 1 is used.

  migrations have been done via cli with admin:
  openstack server migrate --live  
  have also tested using evacuate with same results

  expected behavior:
  when _multiple_ (in the range of 10+) instances are migrated 
simultaneously from one compute host to another, they should come up with 
only a minor network service drop. All L2 connectivity should resume.

  what actually happens:
  instances are migrated, some errors pop up in neutron/nova and then 
instances come up with a minor network service drop. However, L2 toward 
the dhcp-servers is totally severed in OVS. The migrated instances will, 
as expected, start trying to renew their lease half-way through the 
current lease and at the end of it drop the IP. An easy test is to try 
renewing a lease on an instance, or ICMP to any dhcp-server in that vxlan 
L2.

  current workaround:
  once the instance is migrated the l2 to dhcp-servers can be re-established by 
restarting neutron-openvswitch-agent on the destination host.

  how to test:
  create instances (10+), migrate and then try to ping neutron dhcp-server in 
the vxlan (tenant created network) or simply renew dhcp-leases.

  error messages:

  Exception during message handling: TooManyExternalNetworks: More than
  one external network exists. TooManyExternalNetworks: More than one
  external network exists.

  other oddities:
  when migrating a small number of instances (i.e. 1-4), migrations are 
successful and L2 to the dhcp-servers is not lost.

  when looking through debug logs I can't really find anything of
  relevance. No other large errors/warnings occur other than the one
  above.

  I will perform more tests when migrations are successful and/or
  neutron-openvswitch-agent is restarted, and see if L2 to the
  dhcp-servers survives 24h.

  This resembles a 14.0.0 regression bug which should be fixed in 14.0.2
  (this bug report is for 14.0.2), but it could possibly not work with
  this combination of settings(?).

  Please let me know if any versions to api/services is required for
  this or any configurations or other info.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1849479/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1821137] Re: neutron_tempest_plugin.api test_show_network_segment_range fails

2024-02-19 Thread Brian Haley
I'm going to close this as the logs to know what the exact error was are
long gone. If it happens again we can open a new bug.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1821137

Title:
  neutron_tempest_plugin.api test_show_network_segment_range fails

Status in neutron:
  Invalid

Bug description:
  Example:
  
http://logs.openstack.org/42/644842/2/check/neutron-tempest-plugin-api/1c82227/testr_results.html.gz

  log search:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22observed_range%5B'project_id'%5D)%5C%22

  Exception:
  
http://logs.openstack.org/42/644842/2/check/neutron-tempest-plugin-api/1c82227/controller/logs/screen-q-svc.txt.gz?level=ERROR

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1821137/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1865223] Re: [scale issue] regression for security group list between Newton and Rocky+

2024-02-19 Thread Brian Haley
Looks like this was fixed, will close.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1865223

Title:
  [scale issue] regression for security group list between Newton and
  Rocky+

Status in neutron:
  Fix Released

Bug description:
  We recently upgraded an environment from Newton -> Rocky, and
  experienced a dramatic increase in the amount of time it takes to
  return a full security group list. For ~8,000 security groups, it
  takes nearly 75 seconds. This was not observed in Newton.

  I was able to replicate this in the following 4 environments:

  Newton (virtual machine)
  Rocky (baremetal)
  Stein (virtual machine)
  Train (baremetal)

  Command: openstack security group list

  > Sec Grps vs. Seconds

  Qty    Newton VM  Rocky BM  Stein VM  Train BM
  200    4.1        3.7       5.4       5.2
  500    5.3        7         11        9.4
  1000   7.2        12.4      19.2      16
  2000   9.2        24.2      35.3      30.7
  3000   12.1       36.5      52        44
  4000   16.1       47.2      73        58.9

  At this time, we do not know if this increase in time extends to other
  'list' commands at scale. The 'show' commands appear to be fairly
  performant. This increase in time does have a negative impact on user
  perception, scripts, other dependent resources, etc. The Stein VM is
  slower than Train, but could be due to VM vs BM. The Newton
  environment is virtual, too, so I would expect even better performance
  on bare metal.

  Any assistance or insight into what might have changed between
  releases to cause this would be helpful.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1865223/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1867214] Re: MTU too large error presented on create but not update

2024-02-19 Thread Brian Haley
Could not reproduce, marking invalid.

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1867214

Title:
  MTU too large error presented on create but not update

Status in neutron:
  Invalid

Bug description:
  If an MTU is supplied when creating a network it is rejected if it is
  above global_physnet_mtu.  If an MTU is supplied when updating a
  network it is not rejected even if the value is too large.  When
  global_physnet_mtu is 1500 I can easily set MTU 9000 or even beyond
  through update.  This is not valid.

  ~~~
  (overcloud) [stack@undercloud-0 ~]$ openstack network show private1
  +---------------------------+--------------------------------------+
  | Field                     | Value                                |
  +---------------------------+--------------------------------------+
  | admin_state_up            | UP                                   |
  | availability_zone_hints   |                                      |
  | availability_zones        | nova                                 |
  | created_at                | 2020-03-09T15:55:38Z                 |
  | description               |                                      |
  | dns_domain                | None                                 |
  | id                        | bffac18a-ceaa-4eeb-9a19-800de150def5 |
  | ipv4_address_scope        | None                                 |
  | ipv6_address_scope        | None                                 |
  | is_default                | False                                |
  | is_vlan_transparent       | None                                 |
  | mtu                       | 1500                                 |
  | name                      | private1                             |
  | port_security_enabled     | True                                 |
  | project_id                | d69c1c6601c741deaa205fa1a7e9c632     |
  | provider:network_type     | vlan                                 |
  | provider:physical_network | tenant                               |
  | provider:segmentation_id  | 106                                  |
  | qos_policy_id             | None                                 |
  | revision_number           | 8                                    |
  | router:external           | External                             |
  | segments                  | None                                 |
  | shared                    | True                                 |
  | status                    | ACTIVE                               |
  | subnets                   | 51fc6508-313f-41c4-839c-bcbe2fa8795d, 7b6fcbe1-b064-4660-b04a-e433ab18ba73 |
  | tags                      |                                      |
  | updated_at                | 2020-03-09T15:56:41Z                 |
  +---------------------------+--------------------------------------+
  (overcloud) [stack@undercloud-0 ~]$ openstack network set private1 --mtu 9000
  (overcloud) [stack@undercloud-0 ~]$ openstack network set private1 --mtu 9500
  (overcloud) [stack@undercloud-0 ~]$ openstack network show private1

  | admin_state_up            | UP                                   |
  | availability_zone_hints   |                                      |
  | availability_zones        | nova

[Yahoo-eng-team] [Bug 2055245] Re: DHCP Option is not passed to VM via Cloud-init

2024-02-28 Thread Brian Haley
Neutron started using network:distributed for both DHCP and metadata
ports in Victoria [0]

Looking at the change proposed, Nova only ever looks for ports with
network:dhcp in the device_owner field, it also needs to do a lookup of
ports with network:distributed in this field. Unfortunately they can't
be combined in one query at the moment, I might try to fix that.

So I don't think this a valid bug for Neutron.

[0] https://review.opendev.org/c/openstack/neutron/+/732364

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2055245

Title:
  DHCP Option is not passed to VM via Cloud-init

Status in neutron:
  Invalid
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===

  Nova-Metadata-API doesn't provide the ipv4_dhcp type for OVN (native
  OVN DHCP feature, no DHCP agents) networks with dhcp_enabled but no
  default gateway.

  Problem seems to be in
  
https://opendev.org/openstack/nova/src/branch/master/nova/network/neutron.py#L3617

  There is just an exception for networks without a device_owner:
  network:dhcp port, where the default gateway is used, and it doesn't
  cover this case.

  Steps to reproduce
  ==

  Create an OVN network in an environment where the native DHCP feature
  is provided by OVN (no ml2/ovs DHCP agents). In addition, this network
  needs to have no default gateway enabled.

  Create a VM in this network and observe the cloud-init process
  (network_data.json)

  Expected result
  ===

  network_data.json
  (http://169.254.169.254/openstack/2018-08-27/network_data.json) should
  return something like:

  {
"links": [
  {
"id": "tapddc91085-96",
"vif_id": "ddc91085-9650-4b7b-ad9d-b475bac8ec8b",
"type": "ovs",
"mtu": 1442,
"ethernet_mac_address": "fa:16:3e:93:49:fa"
  }
],
"networks": [
  {
"id": "network0",
"type": "ipv4_dhcp",
"link": "tapddc91085-96",
"network_id": "9f61a3a7-26d3-4013-b61d-12880b325ea9"
  }
],
"services": []
  }

  Actual result
  =

  {
"links": [
  {
"id": "tapddc91085-96",
"vif_id": "ddc91085-9650-4b7b-ad9d-b475bac8ec8b",
"type": "ovs",
"mtu": 1442,
"ethernet_mac_address": "fa:16:3e:93:49:fa"
  }
],
"networks": [
  {
"id": "network0",
"type": "ipv4",
"link": "tapddc91085-96",
"ip_address": "10.0.0.40",
"netmask": "255.255.255.0",
"routes": [],
"network_id": "9f61a3a7-26d3-4013-b61d-12880b325ea9",
"services": []
  }
],
"services": []
  }

  Environment
  ===

  Openstack Zed with Neutron OVN feature enabled

  Nova: 26.2.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2055245/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1973347] Re: OVN revision_number infinite update loop

2024-03-01 Thread Brian Haley
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1973347

Title:
  OVN revision_number infinite update loop

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive ussuri series:
  New
Status in Ubuntu Cloud Archive victoria series:
  New
Status in Ubuntu Cloud Archive wallaby series:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  New
Status in neutron source package in Focal:
  New
Status in neutron source package in Jammy:
  Fix Released

Bug description:
  After the change described in
  https://mail.openvswitch.org/pipermail/ovs-dev/2022-May/393966.html
  was merged and released in stable OVN 22.03, it is possible to create
  an endless loop of revision_number updates in the external_ids of
  ports and router_ports. We have confirmed the bug in Ussuri and Yoga.
  When the problem happens, the Neutron log would look like this:

  2022-05-13 09:30:56.318 25 ... Successfully bumped revision number for 
resource 8af189bd-c5bf-48a9-b072-3fb6c69ae592 (type: router_ports) to 4815
  2022-05-13 09:30:56.366 25 ... Running txn n=1 command(idx=0): 
CheckRevisionNumberCommand(...)
  2022-05-13 09:30:56.367 25 ... Running txn n=1 command(idx=1): 
SetLSwitchPortCommand(...)
  2022-05-13 09:30:56.367 25 ... Running txn n=1 command(idx=2): 
PgDelPortCommand(...)
  2022-05-13 09:30:56.467 25 ... Successfully bumped revision number for 
resource 8af189bd-c5bf-48a9-b072-3fb6c69ae592 (type: ports) to 4815
  2022-05-13 09:30:56.880 25 ... Running txn n=1 command(idx=0): 
CheckRevisionNumberCommand(...)
  2022-05-13 09:30:56.881 25 ... Running txn n=1 command(idx=1): 
UpdateLRouterPortCommand(...)
  2022-05-13 09:30:56.881 25 ... Running txn n=1 command(idx=2): 
SetLRouterPortInLSwitchPortCommand(...)
  2022-05-13 09:30:56.984 25 ... Successfully bumped revision number for 
resource 8af189bd-c5bf-48a9-b072-3fb6c69ae592 (type: router_ports) to 4816
  2022-05-13 09:30:57.057 25 ... Running txn n=1 command(idx=0): 
CheckRevisionNumberCommand(...)
  2022-05-13 09:30:57.057 25 ... Running txn n=1 command(idx=1): 
SetLSwitchPortCommand(...)
  2022-05-13 09:30:57.058 25 ... Running txn n=1 command(idx=2): 
PgDelPortCommand(...)
  2022-05-13 09:30:57.159 25 ... Successfully bumped revision number for 
resource 8af189bd-c5bf-48a9-b072-3fb6c69ae592 (type: ports) to 4816
  2022-05-13 09:30:57.523 25 ... Running txn n=1 command(idx=0): 
CheckRevisionNumberCommand(...)
  2022-05-13 09:30:57.523 25 ... Running txn n=1 command(idx=1): 
UpdateLRouterPortCommand(...)
  2022-05-13 09:30:57.524 25 ... Running txn n=1 command(idx=2): 
SetLRouterPortInLSwitchPortCommand(...)
  2022-05-13 09:30:57.627 25 ... Successfully bumped revision number for 
resource 8af189bd-c5bf-48a9-b072-3fb6c69ae592 (type: router_ports) to 4817
  2022-05-13 09:30:57.674 25 ... Running txn n=1 command(idx=0): 
CheckRevisionNumberCommand(...)
  2022-05-13 09:30:57.674 25 ... Running txn n=1 command(idx=1): 
SetLSwitchPortCommand(...)
  2022-05-13 09:30:57.675 25 ... Running txn n=1 command(idx=2): 
PgDelPortCommand(...)
  2022-05-13 09:30:57.765 25 ... Successfully bumped revision number for 
resource 8af189bd-c5bf-48a9-b072-3fb6c69ae592 (type: ports) to 4817

  (full version here: https://pastebin.com/raw/NLP1b6Qm).

  In our lab environment we have confirmed that the problem is gone
  after the mentioned change is rolled back.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1973347/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2037717] Re: [OVN] ``PortBindingChassisEvent`` event is not executing the conditions check

2024-03-01 Thread Brian Haley
** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Jammy)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Changed in: neutron (Ubuntu Jammy)
   Status: New => Fix Released

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/ussuri
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/victoria
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/wallaby
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/wallaby
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2037717

Title:
  [OVN] ``PortBindingChassisEvent`` event is not executing the
  conditions check

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive ussuri series:
  New
Status in Ubuntu Cloud Archive victoria series:
  New
Status in Ubuntu Cloud Archive wallaby series:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  New
Status in neutron source package in Focal:
  New
Status in neutron source package in Jammy:
  Fix Released

Bug description:
  Since [1], which overrides the "match_fn" method, the event is not 
checking the conditions defined at initialization, which are:
    ('type', '=', ovn_const.OVN_CHASSIS_REDIRECT)

  [1]https://review.opendev.org/q/I3b7c5d73d2b0d20fb06527ade30af8939b249d75

  Related bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2241824
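
  In other words, an override of match_fn() bypasses the base class's
  condition matching, so the override must re-check the conditions
  itself. A minimal sketch (illustrative, not the actual fix):

      from neutron.common.ovn import constants as ovn_const
      from ovsdbapp.backend.ovs_idl import event as row_event

      class PortBindingChassisEvent(row_event.RowEvent):
          def match_fn(self, event, row, old):
              # Re-apply the check that the constructor's conditions
              # used to enforce before match_fn() was overridden.
              if getattr(row, 'type', None) != \
                      ovn_const.OVN_CHASSIS_REDIRECT:
                  return False
              # ... plus the checks the override was added for ...
              return True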

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/2037717/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1865453] Re: neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestVirtualPorts.test_virtual_port_created_before fails randomly

2024-03-05 Thread Brian Haley
** Changed in: neutron
 Assignee: Adil Ishaq (iradvisor) => (unassigned)

** Changed in: identity-management
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1865453

Title:
  
neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestVirtualPorts.test_virtual_port_created_before
  fails randomly

Status in Identity Management:
  Invalid
Status in neutron:
  Confirmed

Bug description:
  Sometimes we see random failures of the test:

  
neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestVirtualPorts.test_virtual_port_created_before

  
  
neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestVirtualPorts.test_virtual_port_created_beforetesttools.testresult.real._StringException:
 Traceback (most recent call last):
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/mock/mock.py",
 line 1330, in patched
  return func(*args, **keywargs)
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 182, in func
  return f(self, *args, **kwargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/plugins/ml2/drivers/ovn/mech_driver/test_mech_driver.py",
 line 280, in test_virtual_port_created_before
  ovn_vport.options[ovn_const.LSP_OPTIONS_VIRTUAL_PARENTS_KEY])
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/testtools/testcase.py",
 line 417, in assertIn
  self.assertThat(haystack, Contains(needle), message)
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/testtools/testcase.py",
 line 498, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: 
'88c0378b-71bd-454b-a0df-8c70b57d257a' not in 
'49043b88-554f-48d0-888d-eeaa749e752f'

To manage notifications about this bug go to:
https://bugs.launchpad.net/identity-management/+bug/1865453/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1701410] Re: different behavior when deleting with request body (no BadRequest with core resources in case of pecan)

2024-03-05 Thread Brian Haley
I am going to close this as it is over 6 years old and no one has
stepped forward to fix it, so it's just not a priority. Please re-open
if necessary.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1701410

Title:
  different behavior when deleting with request body (no BadRequest with
  core resources in case of pecan)

Status in neutron:
  Won't Fix

Bug description:
  In a master environment, the behavior differs when we try to delete
  with a request body. I fixed it in [1], but the core resources
  (network/subnet/port/subnetpool) don't pass through this code when
  web_framework = pecan is set in /etc/neutron/neutron.conf

  [1]
  https://github.com/openstack/neutron/blame/master/neutron/api/v2/base.py#L555

  [FloatingIP, Router]
  $ source ~/devstack/openrc admin admin; export TOKEN=`openstack token issue | 
grep ' id ' | get_field 2`
  $ curl -i -X DELETE -d '{"floatingip":{"description": "aaa"}}' -H 
"content-type:application/json" -H 'accept:application/json' -H 
"x-auth-token:$TOKEN" 
192.168.122.33:9696/v2.0/floatingips/f4e9b845-4472-4806-bd7a-bec8f7618af2
  HTTP/1.1 400 Bad Request
  Content-Length: 113
  Content-Type: application/json
  X-Openstack-Request-Id: req-deaffdb3-7c13-4604-89d0-78fbcc184ef5
  Date: Fri, 30 Jun 2017 00:56:56 GMT

  {"NeutronError": {"message": "Request body is not supported in
  DELETE.", "type": "HTTPBadRequest", "detail": ""}}

  $ curl -i -X DELETE -d '{"router": {"name": "aaa"}}' -H 
"content-type:application/json" -H 'accept:application/json' -H 
"x-auth-token:$TOKEN" 
192.168.122.33:9696/v2.0/routers/1d0ea30e-c481-4be3-a548-a659d9e3787c
  HTTP/1.1 400 Bad Request
  Content-Length: 113
  Content-Type: application/json
  X-Openstack-Request-Id: req-a2f9babb-4eb3-471e-9b42-ccfe722c44f0
  Date: Fri, 30 Jun 2017 01:44:40 GMT

  {"NeutronError": {"message": "Request body is not supported in
  DELETE.", "type": "HTTPBadRequest", "detail": ""}}

  [Core resources: Network/Subnet/Port/Subnetpool]
  $ source ~/devstack/openrc admin admin; export TOKEN=`openstack token issue | 
grep ' id ' | get_field 2`
  $ curl -i -X DELETE -d '{"network":{"name": ""}}' -H 
"content-type:application/json" -H 'accept:application/json' -H 
"x-auth-token:$TOKEN" 
192.168.122.33:9696/v2.0/networks/1fb94931-dabe-49dc-bce4-68c8bafea8b0

  HTTP/1.1 204 No Content
  Content-Length: 0
  X-Openstack-Request-Id: req-7e838c38-e6cd-46c3-8703-c93f5bb4a503
  Date: Fri, 30 Jun 2017 01:32:12 GMT

  $ curl -i -X DELETE -d '{"subnet": {"name": "aaa"}}' -H "content-
  type:application/json" -H 'accept:application/json' -H "x-auth-
  token:$TOKEN"
  192.168.122.33:9696/v2.0/subnets/a18fb191-2a89-4193-80d1-5330a8052d64

  HTTP/1.1 204 No Content
  Content-Length: 0
  X-Openstack-Request-Id: req-901476cf-7e87-4b7c-ab20-209b81d2eb25
  Date: Fri, 30 Jun 2017 01:37:01 GMT

  $ curl -i -X DELETE -d '{"port": {"name": "aaa"}}' -H "content-
  type:application/json" -H 'accept:application/json' -H "x-auth-
  token:$TOKEN"
  192.168.122.33:9696/v2.0/ports/47f2c36a-7461-4c1a-a23e-931d5aee3f9c

  HTTP/1.1 204 No Content
  Content-Length: 0
  X-Openstack-Request-Id: req-48452706-6309-42c2-ac80-f0f4e387060e
  Date: Fri, 30 Jun 2017 01:37:33 GMT

  $ curl -i -X DELETE -d '{"subnetpool": {"description": "aaa"}}' -H
  "content-type:application/json" -H 'accept:application/json' -H
  "x-auth-token:$TOKEN"
  192.168.122.33:9696/v2.0/subnetpools/e0e09ffc-a4af-4cf0-ac2e-7a8b1475cef6

  HTTP/1.1 204 No Content
  Content-Length: 0
  X-Openstack-Request-Id: req-9601a3ae-74a0-49ca-9f99-02ad624ceacb
  Date: Fri, 30 Jun 2017 06:24:58 GMT

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1701410/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1897928] Re: TestOvnDbNotifyHandler test cases failing due to missing attribute "_RowEventHandler__watched_events"

2024-03-05 Thread Brian Haley
Seems to have been fixed with
https://review.opendev.org/c/openstack/neutron/+/820911 will close this
bug.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1897928

Title:
  TestOvnDbNotifyHandler test cases failing due to missing attribute
  "_RowEventHandler__watched_events"

Status in neutron:
  Fix Released

Bug description:
  Some neutron.tests.unit.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_ovsdb_monitor.TestOvnDbNotifyHandler test cases are failing:
  * test_shutdown
  * test_watch_and_unwatch_events

  The error [1] is caused by a missing attribute:
  AttributeError: 'OvnDbNotifyHandler' object has no attribute '_RowEventHandler__watched_events'

  
  
  [1] https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_bec/periodic/opendev.org/openstack/neutron/master/openstack-tox-py36-with-ovsdbapp-master/becf062/testr_results.html
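
  For context, the mangled name comes from Python's double-underscore
  privacy: an attribute named __watched_events on RowEventHandler is
  stored as _RowEventHandler__watched_events, so any subclass spelling
  out the mangled name breaks as soon as the parent renames or removes
  the attribute. A minimal illustration (not the actual ovsdbapp or
  neutron code):

    class RowEventHandler:
        def __init__(self):
            # Stored under the mangled name
            # _RowEventHandler__watched_events.
            self.__watched_events = set()

    class OvnDbNotifyHandler(RowEventHandler):
        def watched_events(self):
            # Works only while the parent keeps that exact attribute;
            # raises AttributeError once it is renamed upstream.
            return self._RowEventHandler__watched_events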

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1897928/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2022914] Re: [neutron-api] remove leader_only for maintenance worker

2024-03-05 Thread Brian Haley
Patches have merged, will close.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2022914

Title:
  [neutron-api] remove leader_only for maintenance worker

Status in neutron:
  Fix Released

Bug description:
  Currently if you want to connect the neutron-api to the southbound
  database you cannot use relays, because the maintenance worker has a
  condition set that it requires a leader_only connection.

  This leader_only connection is not necessary, since the maintenance
  tasks of the neutron-api only read information from the southbound
  database and do not push information into it.

  If you adjust the neutron-api to use relays, it will log something
  like "relay database, cannot be leader" every time the maintenance
  task should run.

  I would expect to be able to set the southbound connection for the
  neutron-api to use the relays.
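
  For reference, pointing neutron-api at SB relays is just a matter of
  the [ovn] connection string; a sketch with placeholder addresses
  (6642 being the usual southbound/relay port):

    [ovn]
    ovn_sb_connection = tcp:192.0.2.11:6642,tcp:192.0.2.12:6642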

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2022914/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2008912] Re: "_validate_create_network_callback" failing with 'NoneType' object has no attribute 'qos_policy_id'

2024-03-05 Thread Brian Haley
Change merged, will close this.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2008912

Title:
  "_validate_create_network_callback" failing with 'NoneType' object has
  no attribute 'qos_policy_id'

Status in neutron:
  Fix Released

Bug description:
  Logs:
  
https://e138a887655b8fda005f-ea1d911c7c7db668a9aa6765a743313b.ssl.cf5.rackcdn.com/874133/2/check/neutron-tempest-plugin-openvswitch-enforce-scope-new-defaults/7e5cbf9/controller/logs/screen-q-svc.txt

  Error (snippet): https://paste.opendev.org/show/bYMju0ckz5GK5BYq0yhN/
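
  The error boils down to the callback dereferencing a network object
  that can already be gone by the time the event fires. A hedged sketch
  of the kind of guard that addresses such a race (get_network and
  validate_qos_policy are hypothetical placeholders, not the merged
  patch):

    def _validate_create_network_callback(resource, event, trigger,
                                          payload=None):
        network = get_network(payload.context, payload.resource_id)
        if network is None:
            # The network may be deleted concurrently before the
            # callback fires; nothing left to validate.
            return
        if network.qos_policy_id is not None:
            validate_qos_policy(payload.context, network.qos_policy_id)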

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2008912/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1999154] Re: ovs/ovn source deployment broken with ovs_branch=master

2024-03-05 Thread Brian Haley
Seems fixed, closing.

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1999154

Title:
  ovs/ovn source deployment broken with ovs_branch=master

Status in neutron:
  Fix Released

Bug description:
  Since [1], jobs running with OVS_BRANCH=master are broken, failing as below:

  utilities/ovn-dbctl.c: In function ‘do_dbctl’:
  utilities/ovn-dbctl.c:724:9: error: too few arguments to function ‘ctl_context_init_command’
    724 | ctl_context_init_command(ctx, c);
        | ^~~~
  In file included from utilities/ovn-dbctl.c:23:
  /opt/stack/ovs/lib/db-ctl-base.h:249:6: note: declared here
    249 | void ctl_context_init_command(struct ctl_context *, struct ctl_command *,
        |  ^~~~
  make[1]: *** [Makefile:2352: utilities/ovn-dbctl.o] Error 1
  make[1]: *** Waiting for unfinished jobs
  make[1]: Leaving directory '/opt/stack/ovn'
  make: *** [Makefile:1548: all] Error 2
  + lib/neutron_plugins/ovn_agent:compile_ovn:1 :   exit_trap

  Failure builds example:-
  - https://zuul.opendev.org/t/openstack/build/3a900a1cfe824746ac8ffc6a27fc8ec4
  - https://zuul.opendev.org/t/openstack/build/7d862338d6194a4fb3a34e8c3c67f532
  - https://zuul.opendev.org/t/openstack/build/ae092f4985af41908697240e3f64f522

  
  Until the OVN repo [2] gets updated to work with ovs master, we have to pin ovs to a working version to get these experimental jobs back to green.

  [1] 
https://github.com/openvswitch/ovs/commit/b8bf410a5c94173da02279b369d75875c4035959
  [2] https://github.com/ovn-org/ovn
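
  A pinned setup would look roughly like this in the job/devstack
  variables (the commit reference is a placeholder, not the actual pin
  that merged):

    OVS_BRANCH=<last-known-good-ovs-commit>
    OVN_BRANCH=main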

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1999154/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2025056] Re: Router ports without IP addresses shouldn't be allowed to deletion using port's API directly

2024-03-05 Thread Brian Haley
Patches merged, will close.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2025056

Title:
  Router ports without IP addresses shouldn't be allowed to deletion
  using port's API directly

Status in neutron:
  Fix Released

Bug description:
  A long time ago there was bug https://bugs.launchpad.net/neutron/+bug/1104337, and as a fix for it the patch https://review.opendev.org/c/openstack/neutron/+/20424 was proposed. This patch allowed removing router ports without fixed IPs directly with the "port delete" command.
  But this may cause error 500 if the port really belongs to an existing router. Steps to reproduce the issue:

  1. Create network (external) and do NOT create subnet for it,
  2. Create router,
  3. Set network from p. 1 as external gateway for the router,
  4. Try to delete external gateway's port using "openstack port delete" command - it will fail with error 500. Stacktrace in neutron server log is as below:

  2023-06-22 05:41:06.672 16 DEBUG neutron.db.l3_db [req-a261d22f-9243-4b22-8d40-a5e7bcd63453 abd0fab2837040f383c986b6a723fbec 39e32a986a4d4f42bce967634a308f99 - default default] Port 9978f00d-4be2-474d-89a7-07d9b1e797df has owner network:router_gateway, but no IP address, so it can be deleted prevent_l3_port_deletion /usr/lib/python3.9/site-packages/neutron/db/l3_db.py:1675
  2023-06-22 05:41:07.085 16 DEBUG neutron.plugins.ml2.plugin [req-a261d22f-9243-4b22-8d40-a5e7bcd63453 abd0fab2837040f383c986b6a723fbec 39e32a986a4d4f42bce967634a308f99 - default default] Calling delete_port for 9978f00d-4be2-474d-89a7-07d9b1e797df owned by network:router_gateway delete_port /usr/lib/python3.9/site-packages/neutron/plugins/ml2/plugin.py:2069
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation [req-a261d22f-9243-4b22-8d40-a5e7bcd63453 abd0fab2837040f383c986b6a723fbec 39e32a986a4d4f42bce967634a308f99 - default default] DELETE failed.: oslo_db.exception.DBReferenceError: (pymysql.err.IntegrityError) (1451, 'Cannot delete or update a parent row: a foreign key constraint fails (`ovs_neutron`.`routers`, CONSTRAINT `routers_ibfk_1` FOREIGN KEY (`gw_port_id`) REFERENCES `ports` (`id`))')
  [SQL: DELETE FROM ports WHERE ports.id = %(id)s]
  [parameters: {'id': '9978f00d-4be2-474d-89a7-07d9b1e797df'}]
  (Background on this error at: http://sqlalche.me/e/13/gkpj)
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation Traceback (most recent call last):
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1276, in _execute_context
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation     self.dialect.do_execute(
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 609, in do_execute
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation     cursor.execute(statement, parameters)
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 163, in execute
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation     result = self._query(query)
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 321, in _query
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation     conn.query(q)
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 505, in query
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation     self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 724, in _read_query_result
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation     result.read()
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 1069, in read
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation     first_packet = self.connection._read_packet()
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 676, in _read_packet
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation     packet.raise_for_error()
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python3.9/site-packages/pymysql/protocol.py", line 223, in raise_for_error
  2023-06-22 05:41:07
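
  In short, prevent_l3_port_deletion lets the port through because it
  has no fixed IPs, and the DELETE on the ports row then violates the
  routers.gw_port_id foreign key. A hedged sketch of the missing guard
  (port_obj, router_obj and the exception are illustrative
  placeholders, not the merged fix):

    def prevent_l3_port_deletion(context, port_id):
        port = port_obj.Port.get_object(context, id=port_id)
        if port is None or port.device_owner not in ROUTER_PORT_OWNERS:
            return
        # Even with no fixed IPs, a gateway port can still be
        # referenced by routers.gw_port_id, so deleting it directly
        # must stay forbidden.
        if router_obj.Router.objects_exist(context, gw_port_id=port_id):
            raise ServicePortInUse(port_id=port_id,
                                   reason='router gateway port')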

[Yahoo-eng-team] [Bug 1779978] Re: [fwaas] FWaaS instance stuck in PENDING_CREATE when devstack enable fwaas-v1

2024-03-06 Thread Brian Haley
I am going to close this as fwaas-v1 has been deprecated. Please open a
new bug if this also affects fwaas-v2. Thanks.

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1779978

Title:
  [fwaas] FWaaS instance stuck in PENDING_CREATE when devstack enable
  fwaas-v1

Status in neutron:
  Invalid

Bug description:
  When we deploy OpenStack using devstack and enable FW v1 in
  local.conf with "enable_service neutron-fwaas-v1", the deployment
  succeeds, but when we create a FW instance it stays stuck in
  "PENDING_CREATE" status forever. I found a related bug
  https://bugs.launchpad.net/charm-neutron-gateway/+bug/1680164, but
  that only addresses the charm project; the problem still exists in
  the devstack fwaas plugin. I added those options in my local
  environment, restarted the neutron services, then created a FW
  instance and it went ACTIVE.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1779978/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1866615] Re: Packets incorrectly marked as martian

2024-03-07 Thread Brian Haley
I am going to close this since moving to the OVS firewall driver has
helped, and I'm not sure anyone will take the time to investigate
further as OVN is now the default driver. Someone can re-open if they
intend on working on it.

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1866615

Title:
  Packets incorrectly marked as martian

Status in neutron:
  Won't Fix

Bug description:
  Problem:
  The following behaviour is observed:

  The deployment has 2 provider networks. One of them is the public one
  and the other reaches outside through NAT. The second one is the one
  the hypervisors use, and this is what we have as "openstack public"
  (10.20.6.X). The VMs that are launched are attached to the fabric on
  the 10.10 public network, therefore that network is not present on
  the hypervisor NICs. What we observe is that the switch is
  (correctly) sending ARP requests from the .250 active-standby IP, but
  the kernel is marking them as martian despite the fact that neutron
  knows this network.

  System:
  TripleO-based Rocky deployment. VXLAN tunneling, DVR enabled, with bond interfaces on 2 switches. Open vSwitch 2.11.0, Neutron 13.0.5
  kernel: 3.10.0-957.21.3.el7.x86_64
  Host OS: CentOS
  Switches: Arista

  ----------------     ---------------     ----------------
  | SWITCH       |     | HYPERVISOR  |     | VM           |
  | 10.10.91.250 | --- | 10.20.6.X   | --- | 10.10.X.Y/23 |
  ----------------     ---------------     ----------------

  Subnet details :

  +---+--+
  | Field | Value|
  +---+--+
  | allocation_pools  | 10.10.90.10-10.10.91.240 |
  | cidr  | 10.10.90.0/23|
  | created_at| 2019-09-24T08:43:54Z |
  | description   |  |
  | dns_nameservers   | 10.10.D.Y|
  | enable_dhcp   | True |
  | gateway_ip| 10.10.91.254 |
  | host_routes   |  |
  | id| f91d725a-89d1-4a32-97b5-95409177e8eb |
  | ip_version| 4|
  | ipv6_address_mode | None |
  | ipv6_ra_mode  | None |
  | name  | public-subnet|
  | network_id| a1a3280b-9c78-4e5f-883a-9b4bc4e72b1f |
  | project_id| ec9851ba91854e10bb8d5e752260f5fd |
  | revision_number   | 14   |
  | segment_id| None |
  | service_types |  |
  | subnetpool_id | None |
  | tags  |  |
  | updated_at| 2020-03-03T14:34:26Z |
  +---+--+

  cat openvswitch_agent.ini
  [agent]
  l2_population=True
  arp_responder=True
  enable_distributed_routing=True
  drop_flows_on_start=False
  extensions=qos
  tunnel_csum=False
  tunnel_types=vxlan
  vxlan_udp_port=4789

  [securitygroup]
  firewall_driver=iptables_hybrid

  Expected output:
  No Martian Packets observed

  Actual output:
  Since the extra provider network is configured, I would expect the Linux kernel not to mark the incoming packets as martian.

  However,

  Mar  9 10:45:41 compute0 kernel: IPv4: martian source 10.10.90.74 from 10.10.91.250 on dev qbrff08c591-e2
  Mar  9 10:45:41 compute0 kernel: ll header: : ff ff ff ff ff ff 98 5d 82 a1 a6 cd 08 06...]..
  Mar  9 10:45:42 compute0 kernel: IPv4: martian source 10.10.90.74 from 10.10.91.250 on dev qbrff08c591-e2
  Mar  9 10:45:42 compute0 kernel: ll header: : ff ff ff ff ff ff 98 5d 82 a1 a6 cd 08 06...]..
  Mar  9 10:45:43 compute0 kernel: IPv4: martian source 10.10.91.203 from 10.10.91.250 on dev qbrff08c591-e2
  Mar  9 10:45:43 compute0 kernel: ll header: : ff ff ff ff ff ff 98 5d 82 a1 a6 cd 08 06...]..
  Mar  9 10:45:44 compute0 kernel: IPv4: martian source 10.10.90.74 from 10.10.91.250 on dev qbrff08c591-e2
  Mar  9 10:45:44 compute0 kernel: ll header: : ff ff ff ff ff ff 98 5d 82 a1 a6 cd 08 06...]..
  Mar  9 10:45:44 compute0 kernel: IPv4: martian source 10.10.91.203 from 10.10.91.250 on dev qbrff08c591-e2
  Mar  9 10:45:44 compute0 kernel: ll header: : ff ff ff ff ff ff 98 5d 82 a1 a6 cd 08 06...]..

  Perceived severity:
  Minor annoyance since /var/log/messages is flooded.
  Minor security vuln
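
  A possible workaround on the affected computes (hedged: it only
  silences the logging or loosens source validation, it does not change
  neutron's behaviour):

    # stop logging the expected ARP traffic as martian
    sysctl -w net.ipv4.conf.all.log_martians=0
    # or relax reverse-path filtering on the bridge (2 = loose mode)
    sysctl -w net.ipv4.conf.qbrff08c591-e2.rp_filter=2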

[Yahoo-eng-team] [Bug 1879407] Re: [OVN] Modifying FIP that is no associated causes ovn_revision_numbers to go stale

2024-03-07 Thread Brian Haley
I am inclined to leave this as-is since there are other resources that
follow the same pattern, and either the maintenance task will fix it,
otherwise when it's associated to a port.

Thanks for the bug Flavio :)

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1879407

Title:
  [OVN] Modifying FIP that is no associated causes ovn_revision_numbers
  to go stale

Status in neutron:
  Won't Fix

Bug description:
  NOTE: This is a low priority issue, mostly because it eventually gets fixed by the maintenance task, and because while the fip is not associated there is no real harm done to the NAT functionality.

  CheckRevisionNumberCommand relies on finding a corresponding entry in OVN's NAT table in order to update OVN_REV_NUM_EXT_ID_KEY and keep the ovn and neutron databases in sync.

  Ref: http://lucasgom.es/posts/neutron_ovn_database_consistency.html

  Trouble is that unless the floating ip is associated, there will be no
  entries in OVN's NAT table, causing the call to

   db_rev.bump_revision(context, floatingip, ovn_const.TYPE_FLOATINGIPS)

  to not take place.

  Steps to reproduce it:

  # create a floating ip but do not associate it with anything so router_id is None
  FIP=172.24.4.8
  openstack floating ip create --floating-ip-address ${FIP} public
  FIP_UUID=$(openstack floating ip show ${FIP} -f value -c id) ; echo $FIP_UUID

  # Mess with its name, which will bump revision on fip object
  openstack floating ip set --description foo ${FIP_UUID}

  When there is no NAT entry for a given FIP, the check at line 1044 skips line 1045:

  
https://github.com/openstack/neutron/blob/15088b39bab715e40d8161a85c95ca400708c83f/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py#L1044

  check_rev_cmd.result is None

  The DBs are now in an inconsistent state:

  mysql> use neutron;
  Reading table information for completion of table and column names
  You can turn off this feature to get a quicker startup with -A

  Database changed
  mysql> select * from standardattributes where resource_type="floatingips";
  +----+---------------+---------------------+---------------------+-------------+-----------------+
  | id | resource_type | created_at          | updated_at          | description | revision_number |
  +----+---------------+---------------------+---------------------+-------------+-----------------+
  | 49 | floatingips   | 2020-05-18 20:56:51 | 2020-05-18 20:58:58 | foo2        |               2 |
  +----+---------------+---------------------+---------------------+-------------+-----------------+
  1 row in set (0.01 sec)

  mysql> select * from ovn_revision_numbers where resource_type="floatingips";
  +------------------+--------------------------------------+---------------+-----------------+---------------------+---------------------+
  | standard_attr_id | resource_uuid                        | resource_type | revision_number | created_at          | updated_at          |
  +------------------+--------------------------------------+---------------+-----------------+---------------------+---------------------+
  |               49 | 5a1e1ffa-0312-4e78-b7a0-551c396bcf6b | floatingips   |               0 | 2020-05-18 20:56:51 | 2020-05-18 20:57:08 |
  +------------------+--------------------------------------+---------------+-----------------+---------------------+---------------------+
  1 row in set (0.00 sec)

  Maintenance task fixes it up later

  May 18 21:50:29 stack neutron-server[909]: DEBUG futurist.periodics [None req-35091ee8-f2fe-47cc-b757-8bb70f750b47 None None] Submitting periodic callback 'neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance.DBInconsistenciesPeriodics.check_for_inconsistencies' {{(pid=3186) _process_scheduled /usr/local/lib/python3.6/dist-packages/futurist/periodics.py:642}}
  May 18 21:50:29 stack neutron-server[909]: DEBUG neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance [None req-35091ee8-f2fe-47cc-b757-8bb70f750b47 None None] Maintenance task: Synchronizing Neutron and OVN databases {{(pid=3186) check_for_inconsistencies /opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/maintenance.py:347}}
  May 18 21:50:29 stack neutron-server[909]: DEBUG neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance [None req-35091ee8-f2fe-47cc-b757-8bb70f750b47 None None] Maintenance task: Number of inconsistencies found at create/update: floatingips=1 {{(pid=3186) _log /opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/maintenance.py:325}}
  May 18 21:50:29 stack neutron-server[909]: DEBUG neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance [None req-35091ee8-f2fe-47cc-b757-8bb70f750b47 None None] Maintenance task: Fixing resource 6b876a35-d286-4407-b538-9ce07ab1a281 (type: floatingips) at create/update {{

[Yahoo-eng-team] [Bug 1880845] Re: [fullstack] Error assigning IPv4 (network address) in "test_gateway_ip_changed"

2024-03-07 Thread Brian Haley
I think I figured out the issue here, so will close this.

Here's my thinking.

The referenced log above was from a change on stable/train:

  https://review.opendev.org/c/openstack/neutron/+/730888

Lajos fixed a bug in _find_available_ips that seems related:

  https://review.opendev.org/c/openstack/neutron/+/692135

commit 3c9b0a5fac2e3a1321eadc272c8ed46aa61efd3e
Author: elajkat 
Date:   Wed Oct 30 13:38:30 2019 +0100

[fullstack] find ip based on allocation_pool

_find_available_ips tried to find available ips based on the given
subnet's cidr field, which can be misleading if random selection goes
out-of allocation-pool. This patch changes this behaviour to use
cidr's allocation_pool field.

Closes-Bug: #1850292
Change-Id: Ied2ffb5ed58007789b0f5157731687dc2e0b9bb1

That change is only included in these versions:

  master stable/2023.1 stable/2023.2 stable/victoria stable/wallaby stable/xena 
stable/zed
  unmaintained/victoria unmaintained/wallaby unmaintained/xena unmaintained/yoga

So I'm guessing merged in Victoria.

Since the change was from stable/train we could have had an issue with
the subnet['cidr'] being used, which would have included IP addresses
outside the start/end allocation pool.
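
A rough sketch of the behavioural difference (a hypothetical helper
following Lajos' commit message, not the literal fullstack code): drawing
candidates from the allocation pools means a random pick can no longer
land on e.g. the network address.

    import netaddr

    def _find_available_ips(subnet, used_ips, count):
        # Only consider addresses inside the allocation pools, not the
        # whole CIDR, so reserved addresses are never selected.
        available = []
        for pool in subnet['allocation_pools']:
            for ip in netaddr.IPRange(pool['start'], pool['end']):
                if str(ip) not in used_ips:
                    available.append(str(ip))
                    if len(available) == count:
                        return available
        return available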

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1880845

Title:
  [fullstack] Error assigning IPv4 (network address) in
  "test_gateway_ip_changed"

Status in neutron:
  Invalid

Bug description:
  Error assigning IPv4 (network address) in "test_gateway_ip_changed".

  LOG:
  
https://8e3d76ba7bcafd7367d8-a42dfacf856f2ce428049edff149969f.ssl.cf1.rackcdn.com/730888/1/check/neutron-fullstack/31482ea/testr_results.html

  ERROR MESSAGE: http://paste.openstack.org/show/794029/
  """
  neutronclient.common.exceptions.InvalidIpForNetworkClient: IP address 
240.135.228.0 is not a valid IP for any of the subnets on the specified network.
  """

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1880845/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1880969] Re: Creating FIP takes time

2024-03-07 Thread Brian Haley
Looking at some recent logs these values seem Ok now, so will close
this. If we see the issue again can open a new bug.

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1880969

Title:
  Creating FIP takes time

Status in neutron:
  Fix Released

Bug description:
  I noticed on upstream and downstream gates that creating a FloatingIP
  with an action like:

  neutron floatingip-create public

  For ml2/ovs and ml2/ovn this operation takes a minimum of ~4 seconds.

  The same we can find on u/s gates from rally jobs [1].

  When we put load on the Neutron server it normally takes more than 10
  seconds.

  For ML2/OVN, creating a FIP doesn't end with a NAT entry being created
  in an OVN NBDB row, so it is clearly an API-only operation.

  Maybe we can consider profiling it?
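
  For what it's worth, osprofiler can be enabled in neutron.conf to see
  where those seconds go; a sketch (the hmac key and any trace targets
  are deployment-specific):

    [profiler]
    enabled = True
    trace_sqlalchemy = True
    hmac_keys = SECRET_KEY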

  [1] https://98a898dcf3dfb1090155-da3b599be5166de1dcb38898c60ea3c9.ssl.cf5.rackcdn.com/729588/1/check/neutron-rally-task/dd55aa7/results/report.html#/NeutronNetworks.associate_and_dissociate_floating_ips/overview

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1880969/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1894799] Re: For existing ovs interface, the ovs_use_veth parameter don't take effect

2024-03-07 Thread Brian Haley
I am going to close this as it has been un-assigned for almost 3 years
and the change abandoned. If you wish to work on it please re-open.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1894799

Title:
  For existing ovs interface, the ovs_use_veth parameter don't take
  effect

Status in neutron:
  Won't Fix

Bug description:
  For an existing router, the qr- interface already exists in the
  qrouter namespace, so when changing ovs_use_veth from false to true,
  the veth interface can't be created. Just like with the
  use_veth_interconnection
  parameter (https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L1513),
  we also need to drop ports if the interface type doesn't match the
  configuration value.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1894799/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1898015] Re: neutron-db-manage returns SUCCESS on wrong subproject name

2024-03-07 Thread Brian Haley
Going to mark invalid for Neutron as it seems like an oslo.config bug.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1898015

Title:
  neutron-db-manage returns SUCCESS on wrong subproject name

Status in neutron:
  Invalid
Status in oslo.config:
  Confirmed

Bug description:
  (neutron-server)[neutron@os-controller-1 /]$ neutron-db-manage --subproject neutron-sfc upgrade --contract
  argument --subproject: Invalid String(choices=['vmware-nsx', 'networking-sfc', 'neutron-vpnaas', 'networking-l2gw', 'neutron-fwaas', 'neutron', 'neutron-dynamic-routing']) value: neutron-sfc
  (neutron-server)[neutron@os-controller-1 /]$ echo $?
  0

  Tested Train and Victoria, possibly behaved like this since forever.
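
  For contrast, a bare argparse parser already exits non-zero on an
  invalid choice, which is the behaviour one would expect here; a
  minimal illustration, unrelated to the actual oslo.config code path:

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('--subproject',
                        choices=['neutron', 'networking-sfc'])
    try:
        parser.parse_args(['--subproject', 'neutron-sfc'])
    except SystemExit as e:
        print(e.code)  # argparse signals the bad choice with status 2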

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1898015/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1913664] Re: [CI] neutron multinode jobs does not run neutron_tempest_plugin scenario cases

2024-03-07 Thread Brian Haley
From the review it seems the decision was to not do this, so I will
close this bug.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1913664

Title:
  [CI] neutron multinode jobs does not run neutron_tempest_plugin
  scenario cases

Status in neutron:
  Invalid

Bug description:
  This is the job neutron-tempest-plugin-scenario-openvswitch's cases:
  
https://812aefd7f17477a1c0dc-8bc1c0202523f17b73621207314548bd.ssl.cf5.rackcdn.com/772255/6/check/neutron-tempest-plugin-scenario-openvswitch/5221232/testr_results.html

  This is neutron-tempest-dvr-ha-multinode-full cases:
  
https://87e09d95af4c4ee8cb65-839132c9f2f257823716e8f40ef80a9a.ssl.cf1.rackcdn.com/772255/6/check/neutron-tempest-dvr-ha-multinode-full/0e428cd/testr_results.html

  IMO, neutron-tempest-*-multinode-full should contain all the neutron-
  tempest-plugin-scenario-* cases. But it does not.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1913664/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1666779] Re: Expose neutron API via a WSGI script

2024-03-08 Thread Brian Haley
Seems this fix is released, will close.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1666779

Title:
  Expose neutron API via a WSGI script

Status in neutron:
  Fix Released

Bug description:
  As per Pike goal [1], we should expose neutron API via a WSGI script,
  and make devstack installation use a web server for default
  deployment. This bug is a RFE/tracker for the feature.

  [1] https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html
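
  For illustration, a mod_wsgi vhost serving the generated script could
  look like this (hedged: the script path, port and process counts vary
  per deployment):

    <VirtualHost *:9696>
        WSGIDaemonProcess neutron-api processes=4 threads=1 user=neutron
        WSGIProcessGroup neutron-api
        WSGIScriptAlias / /usr/bin/neutron-api
        WSGIApplicationGroup %{GLOBAL}
    </VirtualHost>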

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1666779/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1694165] Re: Improve Neutron documentation for simpler deployments

2024-03-08 Thread Brian Haley
The documents have been updated many times over the past 6+ years, I'm
going to close this as they are much better now. If there is something
specific please open a new bug.

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1694165

Title:
  Improve Neutron documentation for simpler deployments

Status in neutron:
  Won't Fix

Bug description:
  During Boston Summit session, an issue was raised that Neutron
  documentation for simpler deployments should be improved/simplified.

  Couple of observations were noted:

  1) For non-neutron-savvy users, it is not very intuitive to specify/configure networking requirements.
  2) The basic default configuration (as documented) is very OVS-centric. It should discuss other, non-OVS deployments as well.

  Here is the etherpad with the details of the discussion -
  https://etherpad.openstack.org/p/pike-neutron-making-it-easy

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1694165/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1797663] Re: refactor def _get_dvr_sync_data from neutron/db/l3_dvr_db.py

2024-03-08 Thread Brian Haley
As this has never been worked on am going to close. If anyone wants to
pick it up please re-open.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1797663

Title:
  refactor def _get_dvr_sync_data from neutron/db/l3_dvr_db.py

Status in neutron:
  Won't Fix

Bug description:
  The function _get_dvr_sync_data in neutron/db/l3_dvr_db.py both
  fetches and processes router data, and since it is called for each
  DVR HA router on update, it becomes very hard to pinpoint issues in
  such a massive method, so I propose breaking it into two methods:
  _get_dvr_sync_data and _process_dvr_sync_data. This will make future
  debugging easier.
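
  A hedged sketch of the proposed shape (the helper names and bodies
  are illustrative, not neutron's actual code):

    def _get_dvr_sync_data(self, context, host, agent, router_ids=None):
        # Only fetch: collect the raw router data for this host/agent.
        routers = self._get_router_dicts(context, router_ids)
        return self._process_dvr_sync_data(context, host, agent, routers)

    def _process_dvr_sync_data(self, context, host, agent, routers):
        # Only process: decorate each router with the DVR/HA specific
        # data, so failures can be pinned to one of the two halves.
        for router in routers:
            self._attach_dvr_details(context, host, agent, router)
        return routers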

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1797663/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1786226] Re: Use sqlalchemy baked query

2024-03-08 Thread Brian Haley
From the comment in the change linked above:

"BakedQuery is a legacy extension that no longer does too much beyond
what SQLAlchemy 1.4 does in most cases automatically. new development w/
BakedQuery is a non-starter, this is a legacy module we would eventually
remove."

For that reason I'm going to close this bug.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1786226

Title:
  Use sqlalchemy baked query

Status in neutron:
  Won't Fix

Bug description:
  I am running rally scenario test create_and_list_ports on a
  3-controller setup (each controller has 8 CPUs, i.e. 4 cores * 2 HTs),
  with a function call trace enabled on the neutron server processes, at
  a concurrency of 8 for 400 iterations.

  Average time taken to create a port is 7.207 seconds (when 400 ports are created); the function call trace for this run is at http://paste.openstack.org/show/727718/ and the rally results are
  
  +------------------------+-----------+--------------+--------------+--------------+-----------+-----------+---------+-------+
  |                                                    Response Times (sec)                                                    |
  +------------------------+-----------+--------------+--------------+--------------+-----------+-----------+---------+-------+
  | Action                 | Min (sec) | Median (sec) | 90%ile (sec) | 95%ile (sec) | Max (sec) | Avg (sec) | Success | Count |
  +------------------------+-----------+--------------+--------------+--------------+-----------+-----------+---------+-------+
  | neutron.create_network | 2.085     | 2.491        | 3.01         | 3.29         | 7.558     | 2.611     | 100.0%  | 400   |
  | neutron.create_port    | 5.69      | 6.878        | 7.755        | 9.394        | 17.0      | 7.207     | 100.0%  | 400   |
  | neutron.list_ports     | 0.72      | 5.552        | 9.123        | 9.599        | 11.165    | 5.559     | 100.0%  | 400   |
  | total                  | 10.085    | 15.263       | 18.789       | 19.734       | 28.712    | 15.377    | 100.0%  | 400   |
  |  -> duration           | 10.085    | 15.263       | 18.789       | 19.734       | 28.712    | 15.377    | 100.0%  | 400   |
  |  -> idle_duration      | 0.0       | 0.0          | 0.0          | 0.0          | 0.0       | 0.0       | 100.0%  | 400   |
  +------------------------+-----------+--------------+--------------+--------------+-----------+-----------+---------+-------+


  Michael Bayer (zzzeek) has analysed this callgraph and had some
  suggestions. One suggestion is to use baked query i.e
  https://review.openstack.org/#/c/430973/2

  This is his analysis - "But looking at the profile I see here, it is
  clear that the vast majority of time is spent doing lots and lots of
  small queries, and all of the mechanics involved with turning them
  into SQL strings and invoking them.   SQLAlchemy has a very effective
  optimization for this but it must be coded into Neutron.

  Here is the total time spent for Query to convert its state into SQL:

  148029/356073   15.232    0.000 4583.820    0.013 /usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py:3372(Query._compile_context)

  that's 4583 seconds spent in Query compilation, which if Neutron were
  modified  to use baked queries, would be vastly reduced.  I
  demonstrated the beginning of this work in 2017 here:
  https://review.openstack.org/#/c/430973/1  , which illustrates how to
  first start to create a base query method in neutron that other
  functions can begin to make use of.  As more queries start using the
  baked form, this 4500 seconds number will begin to drop."
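
  For reference, the pattern looked like this with the legacy
  SQLAlchemy 1.x extension (Port is a placeholder mapped class, not
  neutron's model):

    from sqlalchemy import bindparam
    from sqlalchemy.ext import baked

    bakery = baked.bakery()

    def get_port(session, port_id):
        # The lambda's query is compiled to a SQL string once and
        # cached; later calls skip Query._compile_context entirely.
        baked_query = bakery(
            lambda s: s.query(Port).filter(Port.id == bindparam('port_id')))
        return baked_query(session).params(port_id=port_id).first()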

  
  I have restored his patch https://review.openstack.org/#/c/430973/2; with this, the average time taken to create a port is 5.196 seconds (when 400 ports are created), the function call trace for this run is at http://paste.openstack.org/show/727719/, and the total time spent on Query compilation (Query._compile_context) is only 1675 seconds.

  83696/169062    7.308    0.000 1675.140    0.010 /usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py:3372(Query._compile_context)
   
  Rally results for this run are

  
  +------------------------+-----------+--------------+--------------+--------------+-----------+-----------+---------+-------+
  |                                                    Response Times (sec)                                                    |
  +------------------------+-----------+--------------+--------------+--------------+-----------+-----------+---------+-------+
  | Action                 | Min (sec) | Median (sec) | 90%ile (sec) | 95%ile (sec) | Max (sec) | Avg (sec) | Success | Count |
  +------------------------+-----------+--------------+--------------+--------------+-----------+-----------+---------+-------+
  | neutron.cre

[Yahoo-eng-team] [Bug 1764738] Re: routed provider networks limit to one host

2024-03-08 Thread Brian Haley
>From all the changes that have merged this seems to be complete, will
close.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1764738

Title:
  routed provider networks limit to one host

Status in neutron:
  Fix Released

Bug description:
  There seems to be a limitation that a compute node can only have an
  interface on one segment of a multi-segment network. This feels wrong
  and limits the compute resources, since they can only be part of one
  segment.

  The purpose of multi-segment networks is to group multiple segments
  under one network name, i.e. operators should be able to expand the
  IP pool without having to create multiple networks for it like
  internet1, internet2, etc.

  The way it should work is that a compute node can belong to one or
  more segments. It should be up to the operator to decide how they want
  to segment the compute resources or not. It should not be enforced by
  the simple need to add IP ranges to a network.

  Way to reproduce:
  1. configure compute nodes to have bridges configured on 2 segments
  2. create a network with 2 segments
  3. create the segments
  2018-04-17 15:17:59.545 25 ERROR oslo_messaging.rpc.server
  2018-04-17 15:18:18.836 25 ERROR oslo_messaging.rpc.server [req-4fdf6ee1-2be3-49c5-b3cb-62a2194465ab - - - - -] Exception during message handling: HostConnectedToMultipleSegments: Host eselde03u02s04 is connected to multiple segments on routed provider network '5c1f4dd4-baff-4c59-ba56-bd9cc2c59fa4'.  It should be connected to one.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1764738/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

