[Yahoo-eng-team] [Bug 2044272] Re: Inconsistent IGMP configuration across drivers

2023-12-18 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/901753
Committed: https://opendev.org/openstack/neutron/commit/114ca0f1be8c10915d7e755e68ac2117f7db78e7
Submitter: "Zuul (22348)"
Branch: master

commit 114ca0f1be8c10915d7e755e68ac2117f7db78e7
Author: Lucas Alvares Gomes 
Date:   Wed Nov 22 13:05:24 2023 +

Fix IGMP inconsistency across drivers

Prior to this patch, ML2/OVS and ML2/OVN had inconsistent IGMP
configurations. Neutron only exposed one configuration option for IGMP:
igmp_snooping_enabled.

Other features such as IGMP flood, IGMP flood reports and IGMP flood
unregistered were hardcoded differently on each driver (see LP#2044272
for more details).

These hardcoded values have led to many changes over the years,
tweaking them to work in different scenarios, but the changes were
never final because the fix for one case would break another.

This patch introduces 3 new configuration options for these other IGMP
features that can be enabled or disabled on both backends. Operators
can now fine-tune their deployments in the way that works for them.

As a consequence of the hardcoded values for each driver, we had to
break some defaults and, in the case of ML2/OVS, operators who want to
keep things as they were before this patch will need to enable the new
mcast_flood and mcast_flood_unregistered configuration options.

That said, for ML2/OVS there was also an inconsistency in the help
string of the igmp_snooping_enabled configuration option: it stated
that enabling snooping would disable flooding to unregistered ports,
but that was no longer true after the fix [0].

[0] https://bugs.launchpad.net/neutron/+bug/1884723

Closes-Bug: #2044272
Change-Id: Ic4dde46aa0ea2b03362329c87341c83b24d32176
Signed-off-by: Lucas Alvares Gomes 


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2044272

Title:
  Inconsistent IGMP configuration across drivers

Status in neutron:
  Fix Released

Bug description:
  Currently there is only one configuration option available for IGMP
  in Neutron: [ovs]/igmp_snooping_enabled.

  By enabling it we get different behaviors on ML2/OVN and ML2/OVS,
  because the rest of the IGMP configuration ("mcast-snooping-flood",
  "mcast-snooping-flood-reports" and "mcast-snooping-disable-flood-
  unregistered") is hardcoded with different values in the two drivers.
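
  For reference, these are regular Open vSwitch database knobs. Below
  is a minimal sketch of setting them by hand with ovs-vsctl; the
  bridge and port names are examples only, and ML2/OVN applies the
  equivalents as Logical_Switch_Port options rather than via ovs-vsctl:

    # Bridge-level: enable snooping; stop flooding unregistered ports
    $ ovs-vsctl set Bridge br-int mcast_snooping_enable=true
    $ ovs-vsctl set Bridge br-int other_config:mcast-snooping-disable-flood-unregistered=true
    # Port-level: always flood multicast traffic / IGMP reports on a port
    $ ovs-vsctl set Port patch-provnet-0 other_config:mcast-snooping-flood=true
    $ ovs-vsctl set Port patch-provnet-0 other_config:mcast-snooping-flood-reports=true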

  For example, the help string for the [ovs]/igmp_snooping_enabled
  option says [0]:

  """
  ... Setting this option to True will also enable the Open vSwitch
  mcast-snooping-disable-flood-unregistered flag...
  """

  But nowadays that is only true for ML2/OVN, where the behavior was
  changed in 2020 [1] to match ML2/OVS. In 2021, ML2/OVS changed the
  behavior again [2], and this has now caused another issue with one of
  our customers.

  Right now, ML2/OVN disables flooding to unregistered ports and
  ML2/OVS enables it.

  This back and forth over IGMP values is not new [3]: that patch, for
  example, disables "mcast-snooping-flood-reports" in ML2/OVN, where it
  was previously hardcoded as enabled. Patch [4] is what enabled
  "mcast-snooping-flood" and "mcast-snooping-flood-reports" for OVN
  provnet ports. Then patch [1] disabled "mcast-snooping-flood-reports"
  for OVN provnet ports again... and so on. It's messy.

  The fact is that, since Neutron exposes only one configuration option
  for IGMP while the backend offers a total of four, we will never get
  it right. There will always be a use case that has problems with
  these hardcoded settings, and we will have to keep changing them
  indefinitely.

  This LP proposes a definitive, final change for IGMP in Neutron:
  exposing all of these knobs to operators via config options. I know
  that in OpenStack nowadays we strive to have fewer configuration
  options where possible, but I think this is one case where that goal
  should not apply, given the many ways multicast can be configured in
  each deployment.

  As part of this work, though, we will have to change the defaults of
  one of the drivers to make them consistent again, and I would argue,
  given the help string for igmp_snooping_enabled, that everything
  should be disabled by default.
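
  To make the proposal concrete, here is a hypothetical sketch of what
  the operator-facing tuning could look like after such a change. The
  option names follow the commit message above (mcast_flood,
  mcast_flood_unregistered); the exact names, section and file are
  assumptions, so check the release notes of the actual fix:

    # Hypothetical example: restore the pre-fix ML2/OVS flooding behavior
    $ cat >> /etc/neutron/plugins/ml2/openvswitch_agent.ini <<'EOF'
    [ovs]
    igmp_snooping_enabled = true
    mcast_flood = true
    mcast_flood_unregistered = true
    EOF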

  [0] https://github.com/openstack/neutron/blob/2be4343756863f252c8289e2ca3e7afe71f566c4/neutron/conf/agent/ovs_conf.py#L41-L46
  [1] https://review.opendev.org/c/openstack/neutron/+/762818
  [2] https://review.opendev.org/c/openstack/neutron/+/766360
  [3] https://review.opendev.org/c/openstack/neutron/+/888127
  [4] https://review.opendev.org/c/openstack/neutron/+/779258

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2044272/+subscriptions

[Yahoo-eng-team] [Bug 1671011] Re: Live migration of paused instance fails when post copy is enabled

2023-12-18 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/444517
Committed: https://opendev.org/openstack/nova/commit/33fa92b6cb1dfeb88a4188c0e4e4ce51be1f7a4b
Submitter: "Zuul (22348)"
Branch: master

commit 33fa92b6cb1dfeb88a4188c0e4e4ce51be1f7a4b
Author: Sivasathurappan Radhakrishnan 
Date:   Fri Mar 10 22:16:42 2017 +

Allow live migrate paused instance when post copy is enabled

Live migration of a paused instance fails when the VIR_MIGRATE_POSTCOPY
flag is set. In this patch, the flag is unset to permit live migration
of paused instances.

Change-Id: Ib5cbc948cb953e35a22bcbb859976f0afddcb662
Closes-Bug: #1671011


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1671011

Title:
  Live migration of paused instance fails when post copy is enabled

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Live migration of a paused instance fails when post copy is enabled.

  Steps to Reproduce:

  * Spin up an instance and pause it:
    nova pause <server>

  * Live migrate the instance:
    nova live-migration <server> <host>

  Expected result
  ===============
  Live migration should go through without any errors.

  Actual result
  =============
  The live migration command returns 202, but a libvirt failure shows
  up in the compute logs while the live migration runs.

  Environment:

  Multinode devstack environment with 2 compute nodes.
  1) Current master
  2) Networking: Neutron
  3) Hypervisor: Libvirt-KVM
  4) Post copy enabled, which requires a libvirt version greater than
     or equal to 1.3.3 (see the sketch below).
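
  For context, a sketch of how post copy is typically enabled on a
  compute node; live_migration_permit_post_copy is the Nova [libvirt]
  option, and the nova.conf path below is the usual default:

    # Post copy requires libvirt >= 1.3.3
    $ virsh --version
    # Ask Nova to request post-copy live migration when available
    $ cat >> /etc/nova/nova.conf <<'EOF'
    [libvirt]
    live_migration_permit_post_copy = true
    EOF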

  Logs:
  Following error found in compute log
  http://paste.openstack.org/show/601362/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1671011/+subscriptions



[Yahoo-eng-team] [Bug 1905391] Re: [RFE] VPNaaS support for OVN

2023-12-18 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron-vpnaas/+/765353
Committed: https://opendev.org/openstack/neutron-vpnaas/commit/256464aea691f8b4957ba668a117963353f34e4c
Submitter: "Zuul (22348)"
Branch: master

commit 256464aea691f8b4957ba668a117963353f34e4c
Author: Bodo Petermann 
Date:   Thu Dec 3 17:56:27 2020 +0100

VPNaaS support for OVN

Adds VPNaaS support for OVN.
Add a new stand-alone VPN agent to support OVN+VPN, along with
OVN-specific service and device drivers for this new agent. This has
no impact on the existing VPN solution for ML2/OVS; the existing L3
agent and its VPN extension will still work.

Add a new VPN agent scheduler that will schedule VPN services to VPN
agents on a per-router basis.

Add two new database tables: vpn_ext_gws (to store extra port IDs)
and routervpnagentbindings (to store VPN agent ID per router).

For more details, see the spec (neutron-specs/specs/xena/vpnaas-ovn.rst).

This work is based on the work of MingShuan Xian (xia...@cn.ibm.com),
see https://bugs.launchpad.net/networking-ovn/+bug/1586253

Depends-On: https://review.opendev.org/c/openstack/neutron/+/847005
Depends-On: https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/847007

Closes-Bug: #1905391
Change-Id: I632f86762d63edbfe225727db11ea21bbb1ffc25


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1905391

Title:
  [RFE] VPNaaS support for OVN

Status in neutron:
  Fix Released

Bug description:
  Problem Description

  The current VPNaaS plugin only supports L3 routers and relies on the
  L3 agent. It does not support OVN distributed routers, which do not
  use an L3 agent.

  Proposed Change

  Implement VPN functionality in a new stand-alone VPN agent and a new
  service driver to support it. On the agent side, a new device driver
  will deal with namespace management.
  The existing VPN solution will not be impacted: one may choose between
  the existing VPN service driver (for non-OVN) and the new one (for
  OVN) in the neutron server configuration, as sketched below.
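
  For illustration, a sketch of how the new driver might be selected in
  the neutron server configuration. The driver class path below is an
  assumption based on the neutron-vpnaas tree; verify it against the
  spec and the vpnaas documentation:

    # Hypothetical neutron.conf excerpt; the driver path is an assumption
    $ cat >> /etc/neutron/neutron.conf <<'EOF'
    [service_providers]
    service_provider = VPN:strongswan:neutron_vpnaas.services.vpn.service_drivers.ovn_ipsec.IPsecOvnVPNDriver:default
    EOF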

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1905391/+subscriptions



[Yahoo-eng-team] [Bug 2039464] Re: disallowed by policy error when a user tries to create_port with fixed_ips

2023-12-18 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2039464

Title:
  disallowed by policy error when a user tries to create_port with fixed_ips

Status in neutron:
  Expired

Bug description:
  OS: Ubuntu 22.04
  OpenStack Release: Zed
  Deployment tool: Kolla-ansible
  Neutron Plugin: OVN 

  
  I have set up an RBAC policy on my external network; here is the
  policy.yaml file (a quick way to sanity-check these rules is sketched
  after the excerpt):

  "create_port:fixed_ips": "rule:context_is_advsvc or rule:network_owner or rule:admin_only or rule:shared"
  "create_port:fixed_ips:ip_address": "rule:context_is_advsvc or rule:network_owner or rule:admin_only or rule:shared"
  "create_port:fixed_ips:subnet_id": "rule:context_is_advsvc or rule:network_owner or rule:admin_only or rule:shared"
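
  As that sanity check, oslo.policy ships an oslopolicy-checker tool
  that evaluates a policy rule against sample credentials; the access
  file below is a hypothetical JSON dump of the caller's context:

    # sample_creds.json is hypothetical: the requesting user's credentials
    $ oslopolicy-checker --policy /etc/neutron/policy.yaml \
        --access sample_creds.json --rule create_port:fixed_ips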

  I have RBAC set up on the following network to allow a specific
  project to access it.

  # openstack network show public-network-948
  
  +---------------------------+----------------------------------------------------------------------------+
  | Field                     | Value                                                                      |
  +---------------------------+----------------------------------------------------------------------------+
  | admin_state_up            | UP                                                                         |
  | availability_zone_hints   |                                                                            |
  | availability_zones        |                                                                            |
  | created_at                | 2023-09-01T20:31:36Z                                                       |
  | description               |                                                                            |
  | dns_domain                |                                                                            |
  | id                        | 5aacb586-c234-449e-a209-45fc63c8de26                                       |
  | ipv4_address_scope        | None                                                                       |
  | ipv6_address_scope        | None                                                                       |
  | is_default                | False                                                                      |
  | is_vlan_transparent       | None                                                                       |
  | mtu                       | 1500                                                                       |
  | name                      | public-network-948                                                         |
  | port_security_enabled     | True                                                                       |
  | project_id                | 1ed68ab792854dc99c1b2d31bf90019b                                           |
  | provider:network_type     | None                                                                       |
  | provider:physical_network | None                                                                       |
  | provider:segmentation_id  | None                                                                       |
  | qos_policy_id             | None                                                                       |
  | revision_number           | 9                                                                          |
  | router:external           | External                                                                   |
  | segments                  | None                                                                       |
  | shared                    | True                                                                       |
  | status                    | ACTIVE                                                                     |
  | subnets                   | d36886a2-99d3-4e2b-93ed-9e3cfabf5817, dba7a427-dccb-4a5a-a8e0-23fcda64666d |
  | tags                      |                                                                            |
  | tenant_id                 | 1ed68ab792854dc99c1b2d31bf90019b                                           |
  | updated_at                | 2023-10-15T18:13:52Z                                                       |
  +---------------------------+----------------------------------------------------------------------------+

  When a normal user tries to create a port, the following error is
  returned:

  # openstack port create --network public-network-1 --fixed-ip subnet=dba7a427-dccb-4a5a-a8e0-23fcda64666d,ip-address=204.247.186.133 test1
  ForbiddenException: 403: Client Error for url: http://192.168.18.100:9696/v2.0/ports, (rule:create_port and (rule:create_port:fixed_ips and (rule:create_

[Yahoo-eng-team] [Bug 2046866] [NEW] rebuild_instance of volume-backed server wait_for_instance_event eventlet.timeout.Timeout leads to "ValueError: Circular reference detected"

2023-12-18 Thread melanie witt
Public bug reported:

Seen in the CI gate, where the ServerActionsV293TestJSON test
test_rebuild_volume_backed_server fails during rebuild with
"ValueError: Circular reference detected".

While rebuilding a volume-backed instance, if there is a timeout while
waiting for the external event from Cinder [1], the exception handling
eventually leads to a "ValueError: Circular reference detected". It
appears to happen in the @wrap_instance_event decorator, when the
objects.InstanceActionEvent.event_finish_with_failure result is being
serialized for RPC.

[1] https://github.com/openstack/nova/blob/55a27f0ac4badee439b03c8c52ae217767aa88fc/nova/compute/manager.py#L3633
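
For illustration only (this is not the Nova code path): the same error
class is what Python's json module raises when asked to serialize a
self-referencing structure, which is presumably what the RPC
serialization of the failure result runs into:

  $ python3 -c "import json; a = {}; a['self'] = a; json.dumps(a)"
  ...
  ValueError: Circular reference detected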

Full traceback:

Dec 19 02:48:08.261374 np0036191650 nova-compute[85399]: INFO nova.compute.manager [None req-5ca3de1f-3ad5-4075-8cee-d060978d3797 tempest-ServerActionsV293TestJSON-1486319881 tempest-ServerActionsV293TestJSON-1486319881-project-member] [instance: f6a12b05-3e12-430e-b60a-eea2d8015815] Successfully reverted task state from rebuilding on failure for instance.
Dec 19 02:48:08.335604 np0036191650 nova-compute[85399]: ERROR oslo_messaging.rpc.server [None req-5ca3de1f-3ad5-4075-8cee-d060978d3797 tempest-ServerActionsV293TestJSON-1486319881 tempest-ServerActionsV293TestJSON-1486319881-project-member] Exception during message handling: ValueError: Circular reference detected
Dec 19 02:48:08.335604 np0036191650 nova-compute[85399]: ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Dec 19 02:48:08.335604 np0036191650 nova-compute[85399]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/compute/utils.py", line 1439, in decorated_function
Dec 19 02:48:08.335604 np0036191650 nova-compute[85399]: ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
Dec 19 02:48:08.335604 np0036191650 nova-compute[85399]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/compute/manager.py", line 203, in decorated_function
Dec 19 02:48:08.335604 np0036191650 nova-compute[85399]: ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
Dec 19 02:48:08.335604 np0036191650 nova-compute[85399]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/compute/manager.py", line 3858, in rebuild_instance
Dec 19 02:48:08.335604 np0036191650 nova-compute[85399]: ERROR oslo_messaging.rpc.server     self._do_rebuild_instance_with_claim(
Dec 19 02:48:08.335604 np0036191650 nova-compute[85399]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/compute/manager.py", line 3944, in _do_rebuild_instance_with_claim
Dec 19 02:48:08.335604 np0036191650 nova-compute[85399]: ERROR oslo_messaging.rpc.server     self._do_rebuild_instance(
Dec 19 02:48:08.335604 np0036191650 nova-compute[85399]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/compute/manager.py", line 4136, in _do_rebuild_instance
Dec 19 02:48:08.335604 np0036191650 nova-compute[85399]: ERROR oslo_messaging.rpc.server     self._rebuild_default_impl(**kwargs)
Dec 19 02:48:08.335604 np0036191650 nova-compute[85399]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/compute/manager.py", line 3713, in _rebuild_default_impl
Dec 19 02:48:08.335604 np0036191650 nova-compute[85399]: ERROR oslo_messaging.rpc.server     self._rebuild_volume_backed_instance(
Dec 19 02:48:08.335604 np0036191650 nova-compute[85399]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/compute/manager.py", line 3633, in _rebuild_volume_backed_instance
Dec 19 02:48:08.335604 np0036191650 nova-compute[85399]: ERROR oslo_messaging.rpc.server     with self.virtapi.wait_for_instance_event(instance, events,
Dec 19 02:48:08.335604 np0036191650 nova-compute[85399]: ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.10/contextlib.py", line 142, in __exit__
Dec 19 02:48:08.335604 np0036191650 nova-compute[85399]: ERROR oslo_messaging.rpc.server     next(self.gen)
Dec 19 02:48:08.335604 np0036191650 nova-compute[85399]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/compute/manager.py", line 559, in wait_for_instance_event
Dec 19 02:48:08.335604 np0036191650 nova-compute[85399]: ERROR oslo_messaging.rpc.server     self._wait_for_instance_events(
Dec 19 02:48:08.335604 np0036191650 nova-compute[85399]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/compute/manager.py", line 471, in _wait_for_instance_events
Dec 19 02:48:08.335604 np0036191650 nova-compute[85399]: ERROR oslo_messaging.rpc.server     actual_event = event.wait()
Dec 19 02:48:08.335604 np0036191650 nova-compute[85399]: ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/compute/manager.py", line 436, in wait
Dec 19 02:48:08.335604 np0036191650 nova-compute[85399]: ERROR oslo_messaging.rpc.server     instance_event = self.event.wait()
Dec 19 02:48:08.335604 np0036191650 nova-compute[85399]: ERROR oslo_messaging.rpc.server   File "/opt/stack/data/venv/lib/python3.10/site-packages/eventle