[Yahoo-eng-team] [Bug 1763608] Re: Netplan ignores Interfaces without IP Addresses

2020-05-15 Thread Lukas Märdian
** Changed in: netplan
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1763608

Title:
  Netplan ignores Interfaces without IP Addresses

Status in netplan:
  Fix Released
Status in neutron:
  Invalid
Status in netplan.io package in Ubuntu:
  Fix Released
Status in netplan.io source package in Bionic:
  Fix Released
Status in netplan.io source package in Cosmic:
  Fix Released
Status in netplan.io source package in Disco:
  Fix Released

Bug description:
  [Impact]
  Netplan users who use the networkd renderer may need to bring up an
interface (set its IFF_UP flag) without defining an address on it, because
the interface is further managed via another tool.

  [Test case]
  1) Install Ubuntu
  2) Set up netplan with the following different use cases:

  == New VLAN ==

  network:
    version: 2
    renderer: networkd
    ethernets:
      [... whatever is already configured for the system...]
    vlans:
      vlan100:
        link:
        id: 100

  == Bring up an existing secondary interface ==

  network:
    version: 2
    renderer: networkd
    ethernets:
      eth0: {}

  
  3) Verify that in both cases, the interface (ethernet or VLAN) is brought up 
and shows UP, LOWER_UP flags in the output of 'ip link'.

  
  [Regression potential]
  This changes netplan's behavior so that as soon as an interface is
listed in the netplan YAML, it will be brought up. Care should therefore be
taken with existing configurations that currently work: devices that are
listed but not assigned an IP address will now be brought up by networkd.
This is expected to affect only a limited number of cases, and the impact
on network installations is minimal.

  
  

  The "manual" method in /etc/network/interfaces resulted in an
  interface being brought up, but not having an IP address assigned.
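
  For reference, a minimal ifupdown stanza of that kind (illustrative;
  the interface name is only an example) looked like:

    # /etc/network/interfaces
    auto eth1
    iface eth1 inet manual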

  When configuring an interface without an IP address, netplan ignores
  the interface instead of bringing it up.

  ---
  network:
    version: 2
    renderer: networkd
    ethernets:
      eth1: {}

  Expected result from `netplan apply`: eth1 is brought up.
  Actual result: eth1 is still down.

  Similarly `netplan generate` does not generate any file in
  /run/systemd/network for eth1.
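
  With the fix, `netplan generate` is expected to emit a minimal networkd
  unit for eth1, roughly like this sketch (the exact file name and any
  extra keys may differ):

    # /run/systemd/network/10-netplan-eth1.network
    [Match]
    Name=eth1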

To manage notifications about this bug go to:
https://bugs.launchpad.net/netplan/+bug/1763608/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1878916] [NEW] When deleting a network, delete the segment RP only when the segment is deleted

2020-05-15 Thread Rodolfo Alonso
Public bug reported:

When a network is deleted, these are some of the operations executed (in order):
- First we check that the network is not used.
- Then the subnets are deleted.
- The segments are deleted.
- The network is deleted.

For each network, the segment plugin updates the Placement resource
provider (RP) of the segment. When no subnets are allocated in this
segment, the segment RP is deleted.

Having more than one subnet per segment leads to unnecessary Placement
API load. When the network is being deleted, instead of updating the
segment RP, we can wait until the segment is deleted and then delete the
RP. This will save some time in the Neutron server call "network delete"
and will reduce the load on the Placement server.

As an example, here are some figures. Starting from an existing network,
I created another segment and 10 subnets in this new segment.

  CLI time (s)    Neutron API time (s)
Code as is now
  9.71            8.23
  9.63            8.19
  9.62            8.11

Skipping the subnet RP update
  7.42            5.96
  7.49            6.05

Skipping the subnet route update (host_routes_after_delete) too
  5.49            4.05
  5.74            4.26

Now adding the segment RP deletion when the segment is deleted
  5.99            4.46
  5.79            4.31

During a network deletion, we can save time and Placement calls by
deleting the segment RP only when the segment is already deleted
(AFTER_DELETE event).
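
A minimal sketch of that approach, assuming neutron-lib's callbacks
registry (the class, helper, and client attribute names below are
hypothetical):

    from neutron_lib.callbacks import events, registry, resources

    @registry.has_registry_receivers
    class SegmentPlacementSync(object):

        @registry.receives(resources.SEGMENT, [events.AFTER_DELETE])
        def _delete_segment_rp(self, resource, event, trigger,
                               payload=None):
            # One Placement call per deleted segment, instead of one RP
            # update per subnet removed while the network is torn down.
            self._placement_client.delete_resource_provider(
                payload.resource_id)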

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1878916

Title:
  When deleting a network, delete the segment RP only when the segment
  is deleted

Status in neutron:
  New

Bug description:
  When a network is deleted, these are some of the operations executed
  (in order):
  - First we check that the network is not used.
  - Then the subnets are deleted.
  - The segments are deleted.
  - The network is deleted.

  For each network, the segment plugin updates the Placement resource
  provider (RP) of the segment. When no subnets are allocated in this
  segment, the segment RP is deleted.

  Having more than one subnet per segment leads to unnecessary Placement
  API load. When the network is being deleted, instead of updating the
  segment RP, we can wait until the segment is deleted and then delete
  the RP. This will save some time in the Neutron server call "network
  delete" and will reduce the load on the Placement server.

  As an example, so

[Yahoo-eng-team] [Bug 1878024] Re: disk usage of the nova image cache is not counted as used disk space

2020-05-15 Thread Balazs Gibizer
** Also affects: nova/stein
   Importance: Undecided
   Status: New

** Also affects: nova/ussuri
   Importance: Undecided
   Status: New

** Also affects: nova/train
   Importance: Undecided
   Status: New

** Changed in: nova/ussuri
 Assignee: (unassigned) => Balazs Gibizer (balazs-gibizer)

** Changed in: nova/train
 Assignee: (unassigned) => Balazs Gibizer (balazs-gibizer)

** Changed in: nova/stein
 Assignee: (unassigned) => Alexandre arents (aarents)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1878024

Title:
  disk usage of the nova image cache is not counted as used disk space

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) stein series:
  New
Status in OpenStack Compute (nova) train series:
  New
Status in OpenStack Compute (nova) ussuri series:
  New

Bug description:
  Description
  ===========
  The nova-compute service keeps a local cache of the glance images used
  for nova servers, to avoid downloading the same image from glance
  multiple times. The disk usage of this cache is not counted as local
  disk usage in nova and is not reported to placement as used DISK_GB.
  This leads to disk over-allocation.

  Also, the size of that cache cannot be limited by nova configuration,
  so the deployer cannot reserve disk space for the cache with the
  reserved_host_disk_mb config option.

  Steps to reproduce
  ==================
  * Set up a single node devstack
  * Create and upload an image with a non-trivial physical size, e.g. an
    image with 1G physical size.
  * Check the current disk usage of the host OS and configure
    reserved_host_disk_mb in nova-cpu.conf accordingly.
  * Boot two servers from that image with a flavor like d1 (disk=5G)
  * Nova will download the glance image once into the local cache, which
    results in 1GB of disk usage
  * Nova will create two root file systems, one for each VM. These disks
    initially have a minimal physical size, but a 5G virtual size.
  * At this point nova has allocated 5G + 5G of DISK_GB in placement, but
    due to the image in the cache, the total disk usage of the two VMs +
    cache can reach 5G + 5G + 1G if both VMs overwrite and fill the
    content of their own disks. (One way to observe this is shown below.)
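
  One way to observe the mismatch (a sketch; the osc-placement plugin and
  the default devstack instances path are assumed, and the RP UUID is a
  placeholder):

    # DISK_GB usage as placement sees it
    $ openstack resource provider usage show <compute-node-rp-uuid>
    # actual space consumed by the image cache on the host
    $ du -sh /opt/stack/data/nova/instances/_base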

  Expected result
  ===============
  Option A)
  Nova maintains a DISK_GB allocation in placement for the images in its
  cache. This way the expected DISK_GB allocation in placement is
  5G + 5G + 1G at the end.

  Option B)
  Nova provides a config option to limit the maximum size of the image
  cache, so the deployer can include the maximum image cache size in
  reserved_host_disk_mb when dimensioning the disk space of the compute.

  Actual result
  =============
  Only 5G + 5G was allocated in placement, so disk space is
  over-allocated by the image cache.

  Environment
  ===========

  Devstack from recent master

  stack@aio:/opt/stack/nova$ git log --oneline | head -n 1
  4b62c90063 Merge "Remove stale nested backport from InstancePCIRequests"

  libvirt driver with file based image backend

  Logs & Configs
  ==============
  http://paste.openstack.org/show/793388/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1878024/+subscriptions


[Yahoo-eng-team] [Bug 1878622] Re: Open vSwitch with DPDK datapath in neutron

2020-05-15 Thread Bence Romsics
Thank you for your bug report!

I believe this typo was fixed in the change below:
https://review.opendev.org/565289

So the command has been correct since the Rocky version of our docs, for example:
https://docs.openstack.org/neutron/latest/admin/config-ovs-dpdk.html

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1878622

Title:
  Open vSwitch with DPDK datapath in neutron

Status in neutron:
  Fix Released

Bug description:
  - [x] I have a fix to the document that I can paste below, including
  example input and output.

  There is a typo in the following documentation page:

https://docs.openstack.org/neutron/queens/admin/config-ovs-dpdk.html

  $ openstack image set --property hw_vif_mutliqueue_enabled=true
  IMAGE_NAME

  should read:

  $ openstack image set --property hw_vif_multiqueue_enabled=true
  IMAGE_NAME

  (i.e. multi not mutli)

  ---
  Release: 12.1.2.dev96 on 2020-05-11 17:10
  SHA: ed413939fcd134ee616078c017272f229b09f1d9
  Source: 
https://git.openstack.org/cgit/openstack/neutron/tree/doc/source/admin/config-ovs-dpdk.rst
  URL: https://docs.openstack.org/neutron/queens/admin/config-ovs-dpdk.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1878622/+subscriptions


[Yahoo-eng-team] [Bug 1878929] [NEW] LDAP user issue

2020-05-15 Thread YG Kumar
Public bug reported:

Hi,

We have a Rocky setup in which we have integrated our LDAP with
keystone. All LDAP users are able to log into horizon without any issues
except for one user. He is an LDAP member, but when he tries logging into
horizon, we observe the following errors in the keystone log:


May 15 07:43:39 c1w-keystone-container-d7c676b4 keystone-wsgi-public[17692]: 
2020-05-15 07:43:39.362 17692 WARNING py.warnings 
[req-38586df4-b1f2-4443-a5b4-208d76e241e8 9ca30f42033f4e93b72f9be304f66726 
e12b8e37797b4fbf8d0d6b28d4b61848 - default default] 
/openstack/venvs/keystone-18.1.9/lib/python2.7/site-packages/oslo_policy/policy.py:896:
 UserWarning: Policy identity:list_domains failed scope check. The token used 
to make the request was project scoped but the policy requires ['system'] 
scope. This behavior may change in the future where using the intended scope is 
required
   
warnings.warn(msg)
May 15 07:43:39 c1w-keystone-container-d7c676b4 uwsgi[17682]: [pid: 17692|app: 
0|req: 9761/156161] 172.29.239.225 () {42 vars in 750 bytes} [Fri May 15 
07:43:39 2020] GET /v3/domains?name=example.com => generated 348 bytes in 49 
msecs (HTTP/1.1 200) 5 headers in 177 bytes (1 switches on core 0)
May 15 07:43:39 c1w-keystone-container-d7c676b4 keystone-wsgi-public[17697]: 
2020-05-15 07:43:39.603 17697 INFO keystone.common.wsgi 
[req-988f0421-6720-460a-a976-6db5ed2f2ba6 9ca30f42033f4e93b72f9be304f66726 
e12b8e37797b4fbf8d0d6b28d4b61848 - default default] GET 
http://wtl-int.example.cloud:5000/v3/users/eb32979cbb97bc64051b32290186dc0a0cd583bd8f54c18879ca2543fca40b20/projects?domain_id=f7834cb0083b4f8f81184b6595b46b34
May 15 07:43:39 c1w-keystone-container-d7c676b4 keystone-wsgi-public[17697]: 
2020-05-15 07:43:39.625 17697 ERROR keystone.common.wsgi 
[req-988f0421-6720-460a-a976-6db5ed2f2ba6 9ca30f42033f4e93b72f9be304f66726 
e12b8e37797b4fbf8d0d6b28d4b61848 - default default] 'ascii' codec can't decode 
byte 0xc3 in position 27: ordinal not in range(128): UnicodeDecodeError: 
'ascii' codec can't decode byte 0xc3 in position 27: ordinal not in range(128)
 
2020-05-15 07:43:39.625 17697 ERROR keystone.common.wsgi Traceback (most recent 
call last):
 
2020-05-15 07:43:39.625 17697 ERROR keystone.common.wsgi   File 
"/openstack/venvs/keystone-18.1.9/lib/python2.7/site-packages/keystone/common/wsgi.py",
 line 148, in __call__
 
2020-05-15 07:43:39.625 17697 ERROR keystone.common.wsgi result = 
method(req, **params)
 
2020-05-15 07:43:39.625 17697 ERROR keystone.common.wsgi   File 
"/openstack/venvs/keystone-18.1.9/lib/python2.7/site-packages/keystone/common/controller.py",
 line 103, in wrapper
 
2020-05-15 07:43:39.625 17697 ERROR keystone.common.wsgi return f(self, 
request, filters, **kwargs)
 
2020-05-15 07:43:39.625 17697 ERROR keystone.common.wsgi   File 
"/openstack/venvs/keystone-18.1.9/lib/python2.7/site-packages/keystone/assignment/controllers.py",
 line 50, in list_user_projects
 
2020-05-15 07:43:39.625 17697 ERROR keystone.common.wsgi refs = 
PROVIDERS.assignment_api.list_projects_for_user(user_id)
 
2020-05-15 07:43:39.625 17697 ERROR keystone.common.wsgi   File 
"/openstack/venvs/keystone-18.1.9/lib/python2.7/site-packages/keystone/common/manager.py",
 line 116, in wrapped
 
2020-05-15 07:43:39.625 17697 ERROR keystone.common.wsgi __ret_val = 
__f(*args, **kwargs)
 
2020-05-15 07:43:39.625 17697 ERROR keystone.common.wsgi   File 
"/openstack/venvs/keystone-18.1.9/lib/python2.7/site-packages/dogpile/cache/region.py",
 line 1270, in decorate
 
2020-05-15 07:43:39.625 17697 ERROR keystone.common.wsgi should_cache_fn)
 
2020-05-15 07:43:39.625 17697 ERROR keystone.common.wsgi   File 
"/openstack/venvs/keystone-18.1.9/lib/python2.7/site-packages/dogpile/cache/region.py",
 line 864, in get_or_create
 
2020-05-15 07:43:39.625 17697 ERROR keystone.common.wsgi async_creator) as 
value:
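
The 0xc3 byte suggests a UTF-8 encoded non-ASCII character (for example
an accented letter) in one of this user's LDAP attributes; under the
Python 2.7 interpreter shown in the paths above, implicitly decoding such
bytes with the default ASCII codec raises exactly this error. A minimal
illustration (Python 2; the attribute value is made up):

    >>> name = 'Jos\xc3\xa9'   # UTF-8 bytes for u'José'
    >>> name.decode('ascii')   # implicit ASCII decode fails
    UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 3:
    ordinal not in range(128)
    >>> name.decode('utf-8')   # explicit UTF-8 decode succeeds
    u'Jos\xe9'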
  

[Yahoo-eng-team] [Bug 1878031] Re: Unable to delete an instance | Conflict: Port [port-id] is currently a parent port for trunk [trunk-id]

2020-05-15 Thread Bence Romsics
While I agree that it would be way more user friendly to give a
warning/error in the problematic API workflow, that would entail some
cross-project changes, because today:

* nova does not know when an already bound port is added to a trunk
* neutron does not know if nova is supposed to auto-delete a port

That means neither nova nor neutron can detect the error condition by
itself.

Again, I believe changing the workflow to pre-create the parent port for
the server stops the problem described in this bug report completely.
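
For reference, a sketch of that recommended workflow (resource names are
illustrative):

    # Pre-create the parent port, attach it to a trunk, then pass the
    # port explicitly to the server, so nova never picks it up as a
    # free port and never tries to auto-delete it.
    $ openstack port create --network NET parent0
    $ openstack network trunk create --parent-port parent0 trunk0
    $ openstack server create --port parent0 --image IMG --flavor FLAVOR vm0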

So I'm setting this bug as Invalid. But let me know if you see other
alternatives.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1878031

Title:
   Unable to delete an instance | Conflict: Port [port-id] is currently
  a parent port for trunk [trunk-id]

Status in neutron:
  Invalid

Bug description:
  When you create a trunk in Neutron you create a parent port for the
  trunk and attach the trunk to the parent.  Then subports can be
  created on the trunk.  When instances are created on the trunk, first
  a port is created and then an instance is associated with a free port.
  It looks to me that this is the oversight in the logic.

  From the perspective of the code, the parent port looks like any other
  port attached to the trunk bridge.  It doesn't have an instance
  attached to it so it looks like it's not being used for anything
  (which is technically correct).  So it becomes an eligible port for an
  instance to bind to.  That is all fine and dandy until you go to
  delete the instance and you get the "Port [port-id] is currently a
  parent port for trunk [trunk-id]" exception just as happened here.
  Anecdotally, it seems rare that an instance will actually bind to
  it, but that is what happened for the user in this case, and I have
  had several pings over the past year from people in a similar state.

  I propose that when a port is made the parent port of a trunk, the
  trunk be established as the owner of the port. That way it will be
  ineligible for instances seeking to bind to it.

  See also old bug: https://bugs.launchpad.net/neutron/+bug/1700428

  Description of problem:

  Attempting to delete instance failed with error in nova-compute

  ~~~
  2020-03-04 09:52:46.257 1 WARNING nova.network.neutronv2.api 
[req-0dd45fe4-861c-46d3-a5ec-7db36352da58 02c6d1bc10fe4ffaa289c786cd09b146 
695c417810ac460480055b074bc41817 - default default] [instance: 
2f9e3740-b425-4f00-a949-e1aacf2239c4] Failed to delete port 
991e4e50-481a-4ca6-9ea6-69f848c4ca9f for instance.: Conflict: Port 
991e4e50-481a-4ca6-9ea6-69f848c4ca9f is currently a parent port for trunk 
5800ee0f-b558-46cb-bb0b-92799dbe02cf.
  ~~~

  ~~~
  [stack@migration-host ~]$ openstack network trunk show 5800ee0f-b558-46cb-bb0b-92799dbe02cf
  +-----------------+--------------------------------------+
  | Field           | Value                                |
  +-----------------+--------------------------------------+
  | admin_state_up  | UP                                   |
  | created_at      | 2020-03-04T09:01:23Z                 |
  | description     |                                      |
  | id              | 5800ee0f-b558-46cb-bb0b-92799dbe02cf |
  | name            | WIN-TRUNK                            |
  | port_id         | 991e4e50-481a-4ca6-9ea6-69f848c4ca9f |
  | project_id      | 695c417810ac460480055b074bc41817     |
  | revision_number | 3                                    |
  | status          | ACTIVE                               |
  | sub_ports       |                                      |
  | tags            | []                                   |
  | tenant_id       | 695c417810ac460480055b074bc41817     |
  | updated_at      | 2020-03-04T10:20:46Z                 |
  +-----------------+--------------------------------------+

  
  [stack@migration-host ~]$ nova interface-list 2f9e3740-b425-4f00-a949-e1aacf2239c4
  +------------+--------------------------------------+--------------------------------------+--------------+-------------------+
  | Port State | Port ID                              | Net ID                               | IP addresses | MAC Addr          |
  +------------+--------------------------------------+--------------------------------------+--------------+-------------------+
  | DOWN       | 991e4e50-481a-4ca6-9ea6-69f848c4ca9f | 9be62c82-4274-48b4-bba0-39ccbdd5bb1b | 192.168.0.19 | fa:16:3e:0a:2b:9b |
  +------------+--------------------------------------+--------------------------------------+--------------+-------------------+
  [stack@migration-host ~]$ openstack port show 991e4e50-481a-4ca6-9ea6-69f848c4ca9f
  +-------+-------+
  | Field | Value

[Yahoo-eng-team] [Bug 1863021] Re: [SRU] eventlet monkey patch results in assert len(_active) == 1 AssertionError

2020-05-15 Thread Launchpad Bug Tracker
This bug was fixed in the package networking-mlnx - 1:15.0.2-0ubuntu1

---
networking-mlnx (1:15.0.2-0ubuntu1) groovy; urgency=medium

  * New upstream release for OpenStack Ussuri (LP: #1877642).
  * d/control: Align (Build-)Depends with upstream.
  * d/p/monkey-patch-original-current-thread.patch: Cherry-picked
from upstream review (https://review.opendev.org/725365)
to fix Python 3.8 monkey patching (LP: #1863021).

 -- Corey Bryant   Thu, 14 May 2020 15:15:09
-0400

** Changed in: networking-mlnx (Ubuntu Groovy)
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1863021

Title:
  [SRU] eventlet monkey patch results in assert len(_active) == 1
  AssertionError

Status in Cinder:
  Fix Released
Status in Designate:
  In Progress
Status in Glance:
  Fix Released
Status in OpenStack Shared File Systems Service (Manila):
  In Progress
Status in masakari:
  In Progress
Status in Mistral:
  Fix Released
Status in Murano:
  Fix Released
Status in BaGPipe:
  Fix Released
Status in networking-hyperv:
  Fix Released
Status in networking-l2gw:
  Fix Released
Status in Mellanox backend  integration with Neutron (networking-mlnx):
  In Progress
Status in networking-sfc:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.service:
  Fix Released
Status in senlin:
  Fix Released
Status in OpenStack Object Storage (swift):
  In Progress
Status in OpenStack DBaaS (Trove):
  In Progress
Status in watcher:
  Fix Released
Status in barbican package in Ubuntu:
  Fix Released
Status in cinder package in Ubuntu:
  Fix Released
Status in designate package in Ubuntu:
  Fix Released
Status in glance package in Ubuntu:
  Fix Released
Status in heat package in Ubuntu:
  Triaged
Status in ironic package in Ubuntu:
  Fix Released
Status in ironic-inspector package in Ubuntu:
  Fix Released
Status in magnum package in Ubuntu:
  Triaged
Status in manila package in Ubuntu:
  Fix Released
Status in masakari package in Ubuntu:
  Triaged
Status in mistral package in Ubuntu:
  Triaged
Status in murano package in Ubuntu:
  Triaged
Status in murano-agent package in Ubuntu:
  Triaged
Status in networking-bagpipe package in Ubuntu:
  Fix Released
Status in networking-hyperv package in Ubuntu:
  Fix Released
Status in networking-l2gw package in Ubuntu:
  Triaged
Status in networking-mlnx package in Ubuntu:
  Fix Released
Status in networking-sfc package in Ubuntu:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron-dynamic-routing package in Ubuntu:
  Triaged
Status in nova package in Ubuntu:
  Fix Released
Status in openstack-trove package in Ubuntu:
  Triaged
Status in python-os-ken package in Ubuntu:
  Fix Released
Status in python-oslo.service package in Ubuntu:
  Fix Released
Status in sahara package in Ubuntu:
  Triaged
Status in senlin package in Ubuntu:
  Triaged
Status in swift package in Ubuntu:
  Triaged
Status in watcher package in Ubuntu:
  Triaged
Status in barbican source package in Focal:
  Triaged
Status in cinder source package in Focal:
  Triaged
Status in designate source package in Focal:
  Triaged
Status in glance source package in Focal:
  Fix Released
Status in heat source package in Focal:
  Triaged
Status in ironic source package in Focal:
  Triaged
Status in ironic-inspector source package in Focal:
  Triaged
Status in magnum source package in Focal:
  Triaged
Status in manila source package in Focal:
  Triaged
Status in masakari source package in Focal:
  Triaged
Status in mistral source package in Focal:
  Triaged
Status in murano source package in Focal:
  Triaged
Status in murano-agent source package in Focal:
  Triaged
Status in networking-bagpipe source package in Focal:
  Triaged
Status in networking-hyperv source package in Focal:
  Triaged
Status in networking-l2gw source package in Focal:
  Triaged
Status in networking-mlnx source package in Focal:
  Triaged
Status in networking-sfc source package in Focal:
  Triaged
Status in neutron source package in Focal:
  Triaged
Status in neutron-dynamic-routing source package in Focal:
  Triaged
Status in nova source package in Focal:
  Fix Released
Status in openstack-trove source package in Focal:
  Triaged
Status in python-os-ken source package in Focal:
  Triaged
Status in python-oslo.service source package in Focal:
  Triaged
Status in sahara source package in Focal:
  Triaged
Status in senlin source package in Focal:
  Triaged
Status in swift source package in Focal:
  Triaged
Status in watcher source package in Focal:
  Triaged
Status in barbican source package in Groovy:
  Fix Released
Status in cinder source package in Groovy:
  Fix Released
Status in designate source package in Groovy:
  Fix Released
Status in glance source package in Groovy:
  Fix Released
Status in heat source package in Groovy:
  T

[Yahoo-eng-team] [Bug 1878938] [NEW] System role assignments exist after system role delete

2020-05-15 Thread s10
Public bug reported:

How to reproduce:

1. Create role:
openstack role create dumb_reader
2. Create system role assignment
openstack role add --system all --user admin dumb_reader
3. Check role:
openstack role assignment list --system all
4. Delete role:
openstack role delete dumb_reader

What is expected:
All role assignments with the deleted role are removed.

What happens in reality:
System role assignments are left behind in the keystone.system_assignment table.

Version of the Keystone: stable/train
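
A quick way to observe the leftover state (a sketch; the role name comes
from the steps above, and whether the stale row surfaces in the CLI
listing may vary):

    $ openstack role delete dumb_reader
    $ openstack role assignment list --system all --names
    # the dumb_reader assignment row can still be present, referencing a
    # role id that no longer exists in the role table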

** Affects: keystone
 Importance: Undecided
 Status: New

** Summary changed:

- System role assignments are left after system role delete
+ System role assignments exist after system role delete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1878938

Title:
  System role assignments exist after system role delete

Status in OpenStack Identity (keystone):
  New

Bug description:
  How to reproduce:

  1. Create role:
  openstack role create dumb_reader
  2. Create system role assignment
  openstack role add --system all --user admin dumb_reader
  3. Check role:
  openstack role assignment list --system all
  4. Delete role:
  openstack role delete dumb_reader

  What is expected:
  All role assignments with the deleted role are removed.

  What happens in reality:
  System role assignments are left behind in the keystone.system_assignment table.

  Version of the Keystone: stable/train

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1878938/+subscriptions


[Yahoo-eng-team] [Bug 1863021] Re: [SRU] eventlet monkey patch results in assert len(_active) == 1 AssertionError

2020-05-15 Thread Launchpad Bug Tracker
This bug was fixed in the package mistral - 10.0.0-0ubuntu1

---
mistral (10.0.0-0ubuntu1) groovy; urgency=medium

  * New upstream release for OpenStack Ussuri (LP: #1877642).
  * d/p/monkey-patch-original-current-thread.patch: Cherry-picked from
https://review.opendev.org/#/c/728369/. This fixes service failures
with Python 3.8 (LP: #1863021)
  * d/watch: Scope to 10.x series.
  * d/watch: Update to point at tarballs.opendev.org.
  * d/control: Align (Build-)Depends with upstream.

 -- Chris MacNaughton   Fri, 15 May
2020 06:53:28 +

** Changed in: mistral (Ubuntu Groovy)
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1863021

Title:
  [SRU] eventlet monkey patch results in assert len(_active) == 1
  AssertionError

Status in Cinder:
  Fix Released
Status in Designate:
  In Progress
Status in Glance:
  Fix Released
Status in OpenStack Shared File Systems Service (Manila):
  In Progress
Status in masakari:
  In Progress
Status in Mistral:
  Fix Released
Status in Murano:
  Fix Released
Status in BaGPipe:
  Fix Released
Status in networking-hyperv:
  Fix Released
Status in networking-l2gw:
  Fix Released
Status in Mellanox backend  integration with Neutron (networking-mlnx):
  In Progress
Status in networking-sfc:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.service:
  Fix Released
Status in senlin:
  Fix Released
Status in OpenStack Object Storage (swift):
  In Progress
Status in OpenStack DBaaS (Trove):
  In Progress
Status in watcher:
  Fix Released
Status in barbican package in Ubuntu:
  Fix Released
Status in cinder package in Ubuntu:
  Fix Released
Status in designate package in Ubuntu:
  Fix Released
Status in glance package in Ubuntu:
  Fix Released
Status in heat package in Ubuntu:
  Triaged
Status in ironic package in Ubuntu:
  Fix Released
Status in ironic-inspector package in Ubuntu:
  Fix Released
Status in magnum package in Ubuntu:
  Triaged
Status in manila package in Ubuntu:
  Fix Released
Status in masakari package in Ubuntu:
  Triaged
Status in mistral package in Ubuntu:
  Fix Released
Status in murano package in Ubuntu:
  Triaged
Status in murano-agent package in Ubuntu:
  Triaged
Status in networking-bagpipe package in Ubuntu:
  Fix Released
Status in networking-hyperv package in Ubuntu:
  Fix Released
Status in networking-l2gw package in Ubuntu:
  Triaged
Status in networking-mlnx package in Ubuntu:
  Fix Released
Status in networking-sfc package in Ubuntu:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron-dynamic-routing package in Ubuntu:
  Triaged
Status in nova package in Ubuntu:
  Fix Released
Status in openstack-trove package in Ubuntu:
  Triaged
Status in python-os-ken package in Ubuntu:
  Fix Released
Status in python-oslo.service package in Ubuntu:
  Fix Released
Status in sahara package in Ubuntu:
  Triaged
Status in senlin package in Ubuntu:
  Triaged
Status in swift package in Ubuntu:
  Triaged
Status in watcher package in Ubuntu:
  Triaged
Status in barbican source package in Focal:
  Triaged
Status in cinder source package in Focal:
  Triaged
Status in designate source package in Focal:
  Triaged
Status in glance source package in Focal:
  Fix Released
Status in heat source package in Focal:
  Triaged
Status in ironic source package in Focal:
  Triaged
Status in ironic-inspector source package in Focal:
  Triaged
Status in magnum source package in Focal:
  Triaged
Status in manila source package in Focal:
  Triaged
Status in masakari source package in Focal:
  Triaged
Status in mistral source package in Focal:
  Triaged
Status in murano source package in Focal:
  Triaged
Status in murano-agent source package in Focal:
  Triaged
Status in networking-bagpipe source package in Focal:
  Triaged
Status in networking-hyperv source package in Focal:
  Triaged
Status in networking-l2gw source package in Focal:
  Triaged
Status in networking-mlnx source package in Focal:
  Triaged
Status in networking-sfc source package in Focal:
  Triaged
Status in neutron source package in Focal:
  Triaged
Status in neutron-dynamic-routing source package in Focal:
  Triaged
Status in nova source package in Focal:
  Fix Released
Status in openstack-trove source package in Focal:
  Triaged
Status in python-os-ken source package in Focal:
  Triaged
Status in python-oslo.service source package in Focal:
  Triaged
Status in sahara source package in Focal:
  Triaged
Status in senlin source package in Focal:
  Triaged
Status in swift source package in Focal:
  Triaged
Status in watcher source package in Focal:
  Triaged
Status in barbican source package in Groovy:
  Fix Released
Status in cinder source package in Groovy:
  Fix Released
Status in designate source package in Groovy:
  Fix Released
Status in glance source package in

[Yahoo-eng-team] [Bug 1863021] Re: [SRU] eventlet monkey patch results in assert len(_active) == 1 AssertionError

2020-05-15 Thread Launchpad Bug Tracker
This bug was fixed in the package senlin - 9.0.0-0ubuntu1

---
senlin (9.0.0-0ubuntu1) groovy; urgency=medium

  * New upstream release for OpenStack Ussuri (LP: #1877642).
  * d/p/monkey-patch-original-current-thread.patch: Cherry-picked from
https://review.opendev.org/#/c/727186/. This fixes service failures
with Python 3.8 (LP: #1863021)
  * d/watch: Scope to 9.x series.
  * d/watch: Update to point at tarballs.opendev.org.

 -- Chris MacNaughton   Fri, 15 May
2020 09:11:06 +

** Changed in: senlin (Ubuntu Groovy)
   Status: Triaged => Fix Released

** Changed in: watcher (Ubuntu Groovy)
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1863021

Title:
  [SRU] eventlet monkey patch results in assert len(_active) == 1
  AssertionError

Status in Cinder:
  Fix Released
Status in Designate:
  In Progress
Status in Glance:
  Fix Released
Status in OpenStack Shared File Systems Service (Manila):
  In Progress
Status in masakari:
  In Progress
Status in Mistral:
  Fix Released
Status in Murano:
  Fix Released
Status in BaGPipe:
  Fix Released
Status in networking-hyperv:
  Fix Released
Status in networking-l2gw:
  Fix Released
Status in Mellanox backend  integration with Neutron (networking-mlnx):
  In Progress
Status in networking-sfc:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.service:
  Fix Released
Status in senlin:
  Fix Released
Status in OpenStack Object Storage (swift):
  In Progress
Status in OpenStack DBaaS (Trove):
  In Progress
Status in watcher:
  Fix Released
Status in barbican package in Ubuntu:
  Fix Released
Status in cinder package in Ubuntu:
  Fix Released
Status in designate package in Ubuntu:
  Fix Released
Status in glance package in Ubuntu:
  Fix Released
Status in heat package in Ubuntu:
  Triaged
Status in ironic package in Ubuntu:
  Fix Released
Status in ironic-inspector package in Ubuntu:
  Fix Released
Status in magnum package in Ubuntu:
  Triaged
Status in manila package in Ubuntu:
  Fix Released
Status in masakari package in Ubuntu:
  Triaged
Status in mistral package in Ubuntu:
  Fix Released
Status in murano package in Ubuntu:
  Triaged
Status in murano-agent package in Ubuntu:
  Triaged
Status in networking-bagpipe package in Ubuntu:
  Fix Released
Status in networking-hyperv package in Ubuntu:
  Fix Released
Status in networking-l2gw package in Ubuntu:
  Triaged
Status in networking-mlnx package in Ubuntu:
  Fix Released
Status in networking-sfc package in Ubuntu:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron-dynamic-routing package in Ubuntu:
  Triaged
Status in nova package in Ubuntu:
  Fix Released
Status in openstack-trove package in Ubuntu:
  Fix Released
Status in python-os-ken package in Ubuntu:
  Fix Released
Status in python-oslo.service package in Ubuntu:
  Fix Released
Status in sahara package in Ubuntu:
  Triaged
Status in senlin package in Ubuntu:
  Fix Released
Status in swift package in Ubuntu:
  Triaged
Status in watcher package in Ubuntu:
  Fix Released
Status in barbican source package in Focal:
  Triaged
Status in cinder source package in Focal:
  Triaged
Status in designate source package in Focal:
  Triaged
Status in glance source package in Focal:
  Fix Released
Status in heat source package in Focal:
  Triaged
Status in ironic source package in Focal:
  Triaged
Status in ironic-inspector source package in Focal:
  Triaged
Status in magnum source package in Focal:
  Triaged
Status in manila source package in Focal:
  Triaged
Status in masakari source package in Focal:
  Triaged
Status in mistral source package in Focal:
  Triaged
Status in murano source package in Focal:
  Triaged
Status in murano-agent source package in Focal:
  Triaged
Status in networking-bagpipe source package in Focal:
  Triaged
Status in networking-hyperv source package in Focal:
  Triaged
Status in networking-l2gw source package in Focal:
  Triaged
Status in networking-mlnx source package in Focal:
  Triaged
Status in networking-sfc source package in Focal:
  Triaged
Status in neutron source package in Focal:
  Triaged
Status in neutron-dynamic-routing source package in Focal:
  Triaged
Status in nova source package in Focal:
  Fix Released
Status in openstack-trove source package in Focal:
  Triaged
Status in python-os-ken source package in Focal:
  Triaged
Status in python-oslo.service source package in Focal:
  Triaged
Status in sahara source package in Focal:
  Triaged
Status in senlin source package in Focal:
  Triaged
Status in swift source package in Focal:
  Triaged
Status in watcher source package in Focal:
  Triaged
Status in barbican source package in Groovy:
  Fix Released
Status in cinder source package in Groovy:
  Fix Released
Status in designate source package in Groovy:
  Fix Release

[Yahoo-eng-team] [Bug 1863021] Re: [SRU] eventlet monkey patch results in assert len(_active) == 1 AssertionError

2020-05-15 Thread Launchpad Bug Tracker
This bug was fixed in the package watcher - 1:4.0.0-0ubuntu1

---
watcher (1:4.0.0-0ubuntu1) groovy; urgency=medium

  * New upstream release for OpenStack Ussuri (LP: #1877642).
  * d/p/monkey-patch-original-current-thread.patch: Cherry-picked from
https://review.opendev.org/#/c/728397/. This fixes service failures
with Python 3.8 (LP: #1863021)
  * d/watch: Scope to 4.x series.
  * d/watch: Update to point at tarballs.opendev.org.

 -- Chris MacNaughton   Fri, 15 May
2020 09:28:34 +

** Changed in: openstack-trove (Ubuntu Groovy)
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1863021

Title:
  [SRU] eventlet monkey patch results in assert len(_active) == 1
  AssertionError

Status in Cinder:
  Fix Released
Status in Designate:
  In Progress
Status in Glance:
  Fix Released
Status in OpenStack Shared File Systems Service (Manila):
  In Progress
Status in masakari:
  In Progress
Status in Mistral:
  Fix Released
Status in Murano:
  Fix Released
Status in BaGPipe:
  Fix Released
Status in networking-hyperv:
  Fix Released
Status in networking-l2gw:
  Fix Released
Status in Mellanox backend  integration with Neutron (networking-mlnx):
  In Progress
Status in networking-sfc:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.service:
  Fix Released
Status in senlin:
  Fix Released
Status in OpenStack Object Storage (swift):
  In Progress
Status in OpenStack DBaaS (Trove):
  In Progress
Status in watcher:
  Fix Released
Status in barbican package in Ubuntu:
  Fix Released
Status in cinder package in Ubuntu:
  Fix Released
Status in designate package in Ubuntu:
  Fix Released
Status in glance package in Ubuntu:
  Fix Released
Status in heat package in Ubuntu:
  Triaged
Status in ironic package in Ubuntu:
  Fix Released
Status in ironic-inspector package in Ubuntu:
  Fix Released
Status in magnum package in Ubuntu:
  Triaged
Status in manila package in Ubuntu:
  Fix Released
Status in masakari package in Ubuntu:
  Triaged
Status in mistral package in Ubuntu:
  Fix Released
Status in murano package in Ubuntu:
  Triaged
Status in murano-agent package in Ubuntu:
  Triaged
Status in networking-bagpipe package in Ubuntu:
  Fix Released
Status in networking-hyperv package in Ubuntu:
  Fix Released
Status in networking-l2gw package in Ubuntu:
  Triaged
Status in networking-mlnx package in Ubuntu:
  Fix Released
Status in networking-sfc package in Ubuntu:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron-dynamic-routing package in Ubuntu:
  Triaged
Status in nova package in Ubuntu:
  Fix Released
Status in openstack-trove package in Ubuntu:
  Fix Released
Status in python-os-ken package in Ubuntu:
  Fix Released
Status in python-oslo.service package in Ubuntu:
  Fix Released
Status in sahara package in Ubuntu:
  Triaged
Status in senlin package in Ubuntu:
  Fix Released
Status in swift package in Ubuntu:
  Triaged
Status in watcher package in Ubuntu:
  Fix Released
Status in barbican source package in Focal:
  Triaged
Status in cinder source package in Focal:
  Triaged
Status in designate source package in Focal:
  Triaged
Status in glance source package in Focal:
  Fix Released
Status in heat source package in Focal:
  Triaged
Status in ironic source package in Focal:
  Triaged
Status in ironic-inspector source package in Focal:
  Triaged
Status in magnum source package in Focal:
  Triaged
Status in manila source package in Focal:
  Triaged
Status in masakari source package in Focal:
  Triaged
Status in mistral source package in Focal:
  Triaged
Status in murano source package in Focal:
  Triaged
Status in murano-agent source package in Focal:
  Triaged
Status in networking-bagpipe source package in Focal:
  Triaged
Status in networking-hyperv source package in Focal:
  Triaged
Status in networking-l2gw source package in Focal:
  Triaged
Status in networking-mlnx source package in Focal:
  Triaged
Status in networking-sfc source package in Focal:
  Triaged
Status in neutron source package in Focal:
  Triaged
Status in neutron-dynamic-routing source package in Focal:
  Triaged
Status in nova source package in Focal:
  Fix Released
Status in openstack-trove source package in Focal:
  Triaged
Status in python-os-ken source package in Focal:
  Triaged
Status in python-oslo.service source package in Focal:
  Triaged
Status in sahara source package in Focal:
  Triaged
Status in senlin source package in Focal:
  Triaged
Status in swift source package in Focal:
  Triaged
Status in watcher source package in Focal:
  Triaged
Status in barbican source package in Groovy:
  Fix Released
Status in cinder source package in Groovy:
  Fix Released
Status in designate source package in Groovy:
  Fix Released
Status in glance source package in Groovy:
  Fix Released
Stat

[Yahoo-eng-team] [Bug 1863021] Re: [SRU] eventlet monkey patch results in assert len(_active) == 1 AssertionError

2020-05-15 Thread Corey Bryant
** Changed in: murano (Ubuntu Groovy)
   Status: Triaged => Fix Released

** Changed in: murano-agent (Ubuntu Groovy)
   Status: Triaged => Fix Released

** Changed in: networking-l2gw (Ubuntu Groovy)
   Status: Triaged => Fix Released

** Changed in: neutron-dynamic-routing (Ubuntu Groovy)
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1863021

Title:
  [SRU] eventlet monkey patch results in assert len(_active) == 1
  AssertionError

Status in Cinder:
  Fix Released
Status in Designate:
  In Progress
Status in Glance:
  Fix Released
Status in OpenStack Shared File Systems Service (Manila):
  In Progress
Status in masakari:
  In Progress
Status in Mistral:
  Fix Released
Status in Murano:
  Fix Released
Status in BaGPipe:
  Fix Released
Status in networking-hyperv:
  Fix Released
Status in networking-l2gw:
  Fix Released
Status in Mellanox backend  integration with Neutron (networking-mlnx):
  In Progress
Status in networking-sfc:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.service:
  Fix Released
Status in senlin:
  Fix Released
Status in OpenStack Object Storage (swift):
  In Progress
Status in OpenStack DBaaS (Trove):
  In Progress
Status in watcher:
  Fix Released
Status in barbican package in Ubuntu:
  Fix Released
Status in cinder package in Ubuntu:
  Fix Released
Status in designate package in Ubuntu:
  Fix Released
Status in glance package in Ubuntu:
  Fix Released
Status in heat package in Ubuntu:
  Triaged
Status in ironic package in Ubuntu:
  Fix Released
Status in ironic-inspector package in Ubuntu:
  Fix Released
Status in magnum package in Ubuntu:
  Triaged
Status in manila package in Ubuntu:
  Fix Released
Status in masakari package in Ubuntu:
  Triaged
Status in mistral package in Ubuntu:
  Fix Released
Status in murano package in Ubuntu:
  Fix Released
Status in murano-agent package in Ubuntu:
  Fix Released
Status in networking-bagpipe package in Ubuntu:
  Fix Released
Status in networking-hyperv package in Ubuntu:
  Fix Released
Status in networking-l2gw package in Ubuntu:
  Fix Released
Status in networking-mlnx package in Ubuntu:
  Fix Released
Status in networking-sfc package in Ubuntu:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron-dynamic-routing package in Ubuntu:
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in openstack-trove package in Ubuntu:
  Fix Released
Status in python-os-ken package in Ubuntu:
  Fix Released
Status in python-oslo.service package in Ubuntu:
  Fix Released
Status in sahara package in Ubuntu:
  Triaged
Status in senlin package in Ubuntu:
  Fix Released
Status in swift package in Ubuntu:
  Triaged
Status in watcher package in Ubuntu:
  Fix Released
Status in barbican source package in Focal:
  Triaged
Status in cinder source package in Focal:
  Triaged
Status in designate source package in Focal:
  Triaged
Status in glance source package in Focal:
  Fix Released
Status in heat source package in Focal:
  Triaged
Status in ironic source package in Focal:
  Triaged
Status in ironic-inspector source package in Focal:
  Triaged
Status in magnum source package in Focal:
  Triaged
Status in manila source package in Focal:
  Triaged
Status in masakari source package in Focal:
  Triaged
Status in mistral source package in Focal:
  Triaged
Status in murano source package in Focal:
  Triaged
Status in murano-agent source package in Focal:
  Triaged
Status in networking-bagpipe source package in Focal:
  Triaged
Status in networking-hyperv source package in Focal:
  Triaged
Status in networking-l2gw source package in Focal:
  Triaged
Status in networking-mlnx source package in Focal:
  Triaged
Status in networking-sfc source package in Focal:
  Triaged
Status in neutron source package in Focal:
  Triaged
Status in neutron-dynamic-routing source package in Focal:
  Triaged
Status in nova source package in Focal:
  Fix Released
Status in openstack-trove source package in Focal:
  Triaged
Status in python-os-ken source package in Focal:
  Triaged
Status in python-oslo.service source package in Focal:
  Triaged
Status in sahara source package in Focal:
  Triaged
Status in senlin source package in Focal:
  Triaged
Status in swift source package in Focal:
  Triaged
Status in watcher source package in Focal:
  Triaged
Status in barbican source package in Groovy:
  Fix Released
Status in cinder source package in Groovy:
  Fix Released
Status in designate source package in Groovy:
  Fix Released
Status in glance source package in Groovy:
  Fix Released
Status in heat source package in Groovy:
  Triaged
Status in ironic source package in Groovy:
  Fix Released
Status in ironic-inspector source package in Groovy:
  Fix Released
Status in magnum source package in Groovy:
  Triaged
Status in ma

[Yahoo-eng-team] [Bug 1878979] [NEW] Quota code does not respect [api]/instance_list_per_project_cells

2020-05-15 Thread Mohammed Naser
Public bug reported:

The function which counts resources using the legacy method involves
getting a list of all cell mappings assigned to a specific project:

https://github.com/openstack/nova/blob/575a91ff5be79ac35aef4b61d84c78c693693304/nova/quota.py#L1170-L1209

This code can be very heavy on a database which contains a lot of
instances (but not a lot of mappings), potentially scanning millions of
rows to gather 1-2 cell mappings.  In a single cell environment, it is
just extra CPU usage with exactly the same outcome.

The [api]/instance_list_per_project_cells option was introduced to work
around this:

https://github.com/openstack/nova/blob/575a91ff5be79ac35aef4b61d84c78c693693304/nova/compute/instance_list.py#L146-L153

However, the quota code does not honor it, which means quota counting
can take a big toll on the database server. We should ideally mirror the
same behaviour in the quota code.
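
For reference, the existing nova.conf knob that this report asks the
quota path to honor as well:

    [api]
    # Query only the cells that actually contain instances for the
    # project, instead of iterating over every cell mapping.
    instance_list_per_project_cells = True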

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1878979

Title:
  Quota code does not respect [api]/instance_list_per_project_cells

Status in OpenStack Compute (nova):
  New

Bug description:
  The function which counts resources using the legacy method involves
  getting a list of all cell mappings assigned to a specific project:

  
https://github.com/openstack/nova/blob/575a91ff5be79ac35aef4b61d84c78c693693304/nova/quota.py#L1170-L1209

  This code can be very heavy on a database which contains a lot of
  instances (but not a lot of mappings), potentially scanning millions
  of rows to gather 1-2 cell mappings.  In a single cell environment, it
  is just extra CPU usage with exactly the same outcome.

  The [api]/instance_list_per_project_cells option was introduced to
  work around this:

  
https://github.com/openstack/nova/blob/575a91ff5be79ac35aef4b61d84c78c693693304/nova/compute/instance_list.py#L146-L153

  However, the quota code does not honor it, which means quota counting
  can take a big toll on the database server. We should ideally mirror
  the same behaviour in the quota code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1878979/+subscriptions


[Yahoo-eng-team] [Bug 1863021] Re: [SRU] eventlet monkey patch results in assert len(_active) == 1 AssertionError

2020-05-15 Thread Launchpad Bug Tracker
This bug was fixed in the package magnum - 10.0.0-0ubuntu1

---
magnum (10.0.0-0ubuntu1) groovy; urgency=medium

  * New upstream release for OpenStack Ussuri (LP: #1877642).
  * d/p/monkey-patch-original-current-thread.patch: Cherry-picked from
https://review.opendev.org/#/c/728010. This fixes service failures
with Python 3.8 (LP: #1863021).

 -- Chris MacNaughton   Thu, 14 May
2020 10:11:58 +

** Changed in: magnum (Ubuntu Groovy)
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1863021

Title:
  [SRU] eventlet monkey patch results in assert len(_active) == 1
  AssertionError

Status in Cinder:
  Fix Released
Status in Designate:
  In Progress
Status in Glance:
  Fix Released
Status in OpenStack Shared File Systems Service (Manila):
  In Progress
Status in masakari:
  In Progress
Status in Mistral:
  Fix Released
Status in Murano:
  Fix Released
Status in BaGPipe:
  Fix Released
Status in networking-hyperv:
  Fix Released
Status in networking-l2gw:
  Fix Released
Status in Mellanox backend  integration with Neutron (networking-mlnx):
  In Progress
Status in networking-sfc:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.service:
  Fix Released
Status in senlin:
  Fix Released
Status in OpenStack Object Storage (swift):
  In Progress
Status in OpenStack DBaaS (Trove):
  In Progress
Status in watcher:
  Fix Released
Status in barbican package in Ubuntu:
  Fix Released
Status in cinder package in Ubuntu:
  Fix Released
Status in designate package in Ubuntu:
  Fix Released
Status in glance package in Ubuntu:
  Fix Released
Status in heat package in Ubuntu:
  Triaged
Status in ironic package in Ubuntu:
  Fix Released
Status in ironic-inspector package in Ubuntu:
  Fix Released
Status in magnum package in Ubuntu:
  Fix Released
Status in manila package in Ubuntu:
  Fix Released
Status in masakari package in Ubuntu:
  Triaged
Status in mistral package in Ubuntu:
  Fix Released
Status in murano package in Ubuntu:
  Fix Released
Status in murano-agent package in Ubuntu:
  Fix Released
Status in networking-bagpipe package in Ubuntu:
  Fix Released
Status in networking-hyperv package in Ubuntu:
  Fix Released
Status in networking-l2gw package in Ubuntu:
  Fix Released
Status in networking-mlnx package in Ubuntu:
  Fix Released
Status in networking-sfc package in Ubuntu:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron-dynamic-routing package in Ubuntu:
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in openstack-trove package in Ubuntu:
  Fix Released
Status in python-os-ken package in Ubuntu:
  Fix Released
Status in python-oslo.service package in Ubuntu:
  Fix Released
Status in sahara package in Ubuntu:
  Triaged
Status in senlin package in Ubuntu:
  Fix Released
Status in swift package in Ubuntu:
  Triaged
Status in watcher package in Ubuntu:
  Fix Released
Status in barbican source package in Focal:
  Triaged
Status in cinder source package in Focal:
  Triaged
Status in designate source package in Focal:
  Triaged
Status in glance source package in Focal:
  Fix Released
Status in heat source package in Focal:
  Triaged
Status in ironic source package in Focal:
  Fix Committed
Status in ironic-inspector source package in Focal:
  Triaged
Status in magnum source package in Focal:
  Triaged
Status in manila source package in Focal:
  Triaged
Status in masakari source package in Focal:
  Triaged
Status in mistral source package in Focal:
  Triaged
Status in murano source package in Focal:
  Triaged
Status in murano-agent source package in Focal:
  Triaged
Status in networking-bagpipe source package in Focal:
  Triaged
Status in networking-hyperv source package in Focal:
  Triaged
Status in networking-l2gw source package in Focal:
  Triaged
Status in networking-mlnx source package in Focal:
  Triaged
Status in networking-sfc source package in Focal:
  Triaged
Status in neutron source package in Focal:
  Triaged
Status in neutron-dynamic-routing source package in Focal:
  Triaged
Status in nova source package in Focal:
  Fix Released
Status in openstack-trove source package in Focal:
  Triaged
Status in python-os-ken source package in Focal:
  Triaged
Status in python-oslo.service source package in Focal:
  Triaged
Status in sahara source package in Focal:
  Triaged
Status in senlin source package in Focal:
  Triaged
Status in swift source package in Focal:
  Triaged
Status in watcher source package in Focal:
  Triaged
Status in barbican source package in Groovy:
  Fix Released
Status in cinder source package in Groovy:
  Fix Released
Status in designate source package in Groovy:
  Fix Released
Status in glance source package in Groovy:
  Fix Released
Status in heat source package in Groovy:
  Triaged
Status in ironic source

[Yahoo-eng-team] [Bug 1863021] Re: [SRU] eventlet monkey patch results in assert len(_active) == 1 AssertionError

2020-05-15 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/725393
Committed: 
https://git.openstack.org/cgit/openstack/manila/commit/?id=5e9f694a5a8f90c72680acb1181318930f55aa30
Submitter: Zuul
Branch:master

commit 5e9f694a5a8f90c72680acb1181318930f55aa30
Author: Corey Bryant 
Date:   Mon May 4 17:04:40 2020 -0400

Monkey patch original current_thread _active

Monkey patch the original current_thread to use the up-to-date _active
global variable. This solution is based on that documented at:
https://github.com/eventlet/eventlet/issues/592

Change-Id: Ifc6420d927c0ce9e04ff3b3253e81a474591e9bb
Closes-Bug: #1863021
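
Based on the workaround documented in the eventlet issue referenced in
the commit, the cherry-picked patch amounts to the following sketch
(applied where the service monkey patches eventlet):

    import eventlet
    eventlet.monkey_patch()

    # eventlet saves the unpatched module as __original_module_threading;
    # point its current_thread helper at the patched module's _active
    # dict so thread cleanup no longer trips
    # "assert len(_active) == 1" on Python 3.8.
    import __original_module_threading as orig_threading
    import threading  # noqa
    orig_threading.current_thread.__globals__['_active'] = threading._active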


** Changed in: manila
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1863021

Title:
  [SRU] eventlet monkey patch results in assert len(_active) == 1
  AssertionError

Status in Cinder:
  Fix Released
Status in Designate:
  In Progress
Status in Glance:
  Fix Released
Status in OpenStack Shared File Systems Service (Manila):
  Fix Released
Status in masakari:
  In Progress
Status in Mistral:
  Fix Released
Status in Murano:
  Fix Released
Status in BaGPipe:
  Fix Released
Status in networking-hyperv:
  Fix Released
Status in networking-l2gw:
  Fix Released
Status in Mellanox backend  integration with Neutron (networking-mlnx):
  In Progress
Status in networking-sfc:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.service:
  Fix Released
Status in senlin:
  Fix Released
Status in OpenStack Object Storage (swift):
  In Progress
Status in OpenStack DBaaS (Trove):
  In Progress
Status in watcher:
  Fix Released
Status in barbican package in Ubuntu:
  Fix Released
Status in cinder package in Ubuntu:
  Fix Released
Status in designate package in Ubuntu:
  Fix Released
Status in glance package in Ubuntu:
  Fix Released
Status in heat package in Ubuntu:
  Triaged
Status in ironic package in Ubuntu:
  Fix Released
Status in ironic-inspector package in Ubuntu:
  Fix Released
Status in magnum package in Ubuntu:
  Fix Released
Status in manila package in Ubuntu:
  Fix Released
Status in masakari package in Ubuntu:
  Triaged
Status in mistral package in Ubuntu:
  Fix Released
Status in murano package in Ubuntu:
  Fix Released
Status in murano-agent package in Ubuntu:
  Fix Released
Status in networking-bagpipe package in Ubuntu:
  Fix Released
Status in networking-hyperv package in Ubuntu:
  Fix Released
Status in networking-l2gw package in Ubuntu:
  Fix Released
Status in networking-mlnx package in Ubuntu:
  Fix Released
Status in networking-sfc package in Ubuntu:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron-dynamic-routing package in Ubuntu:
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in openstack-trove package in Ubuntu:
  Fix Released
Status in python-os-ken package in Ubuntu:
  Fix Released
Status in python-oslo.service package in Ubuntu:
  Fix Released
Status in sahara package in Ubuntu:
  Triaged
Status in senlin package in Ubuntu:
  Fix Released
Status in swift package in Ubuntu:
  Triaged
Status in watcher package in Ubuntu:
  Fix Released
Status in barbican source package in Focal:
  Triaged
Status in cinder source package in Focal:
  Triaged
Status in designate source package in Focal:
  Triaged
Status in glance source package in Focal:
  Fix Released
Status in heat source package in Focal:
  Fix Committed
Status in ironic source package in Focal:
  Fix Committed
Status in ironic-inspector source package in Focal:
  Triaged
Status in magnum source package in Focal:
  Triaged
Status in manila source package in Focal:
  Triaged
Status in masakari source package in Focal:
  Triaged
Status in mistral source package in Focal:
  Triaged
Status in murano source package in Focal:
  Triaged
Status in murano-agent source package in Focal:
  Triaged
Status in networking-bagpipe source package in Focal:
  Triaged
Status in networking-hyperv source package in Focal:
  Triaged
Status in networking-l2gw source package in Focal:
  Triaged
Status in networking-mlnx source package in Focal:
  Triaged
Status in networking-sfc source package in Focal:
  Triaged
Status in neutron source package in Focal:
  Triaged
Status in neutron-dynamic-routing source package in Focal:
  Triaged
Status in nova source package in Focal:
  Fix Released
Status in openstack-trove source package in Focal:
  Triaged
Status in python-os-ken source package in Focal:
  Triaged
Status in python-oslo.service source package in Focal:
  Triaged
Status in sahara source package in Focal:
  Triaged
Status in senlin source package in Focal:
  Triaged
Status in swift source package in Focal:
  Triaged
Status in watcher source package in Focal:
  Triaged
Status in barbican source package in Groovy:
  Fix Released
Status in cinder source package in Groovy

[Yahoo-eng-team] [Bug 1879009] [NEW] attaching extra port to server raise duplicate dns-name error

2020-05-15 Thread hamza
Public bug reported:

If a user has the Designate extension enabled in the Neutron config
(dns-domain: example.com) and creates a server serv1, that server gets a DNS
record in Designate such as serv1.example.com.
If the user then runs `openstack server add port serv1 port2`, the new port
is assigned the same dns-name already used by the first port on serv1, which
causes Designate to raise a duplicate dns-name error.
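
A minimal reproduction sketch with openstacksdk (the cloud name, image,
flavor and network IDs below are placeholders, not values from this
report):

    import openstack

    conn = openstack.connect(cloud='mycloud')  # placeholder cloud entry

    # Booting serv1 creates its first port with dns_name='serv1', which
    # Designate publishes as serv1.example.com.
    server = conn.compute.create_server(
        name='serv1', image_id='IMAGE_ID', flavor_id='FLAVOR_ID',
        networks=[{'uuid': 'NET_ID'}])
    server = conn.compute.wait_for_server(server)

    # Attaching a second port: Neutron gives it the same dns_name
    # ('serv1'), and Designate rejects the duplicate recordset.
    port = conn.network.create_port(network_id='NET_ID')
    conn.compute.create_server_interface(server, port_id=port.id)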

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1879009

Title:
  attaching extra port to server raise duplicate dns-name error

Status in neutron:
  New

Bug description:
  If a user has the Designate extension enabled in the Neutron config
(dns-domain: example.com) and creates a server serv1, that server gets a DNS
record in Designate such as serv1.example.com.
  If the user then runs `openstack server add port serv1 port2`, the new port
is assigned the same dns-name already used by the first port on serv1, which
causes Designate to raise a duplicate dns-name error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1879009/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1878708] Re: mock.patch.stopall called twice on tests inheriting from ovsdbapp

2020-05-15 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/728306
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=ccb0cddd4af6ceb4ed4e8a2612d6c705d16a8e64
Submitter: Zuul
Branch: master

commit ccb0cddd4af6ceb4ed4e8a2612d6c705d16a8e64
Author: Terry Wilson 
Date:   Thu May 14 22:27:10 2020 +

Fix mock.patch.stopall issue with ovsdbapp

After I876919dfc1fa0ae36bd99e3d760e38d207ee6ef3, two test classes
that inherit from both neutron's oslotest-based base classes and
ovsdbapp's unittest-based base class would fail due to
mock.patch.stopall being called twice. This appears to be because
of some special handling in oslotest addCleanup that checks a
private _cleanups variable before adding a cleanup to stopall.
Changing the order of the imports so that neutron can register its
cleanups first seems to fix the issue.

Closes-Bug: #1878708
Change-Id: I5b3812a9765a37b3e66d6c8ca0cb42ee1b7a2b9a
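
For background, the failure mode is a patcher being stopped twice; a
minimal illustration (simplified — in the bug the second stop arrives
via oslotest's cleanup tracking rather than a direct call):

    from unittest import mock

    patcher = mock.patch('os.getcwd')
    patcher.start()
    patcher.stop()
    patcher.stop()  # RuntimeError: stop called on unstarted patcher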


** Changed in: neutron
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1878708

Title:
  mock.patch.stopall called twice on tests inheriting from ovsdbapp

Status in neutron:
  Fix Released

Bug description:
  After I876919dfc1fa0ae36bd99e3d760e38d207ee6ef3, two test classes that
  inherit from both neutron's oslotest-based base classes and ovsdbapp's
  unittest-based base class would fail due to mock.patch.stopall being
  called twice. This appears to be because of some special handling in
  oslotest addCleanup that checks a private _cleanups variable before
  adding a cleanup to stopall. Changing the order of the imports so that
  neutron can register its cleanups first seems to fix the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1878708/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1869050] Re: migration of anti-affinity server fails due to stale scheduler instance info

2020-05-15 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/714998
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=738110db7492b1360f5f197e8ecafd69a3b141b4
Submitter: Zuul
Branch: master

commit 738110db7492b1360f5f197e8ecafd69a3b141b4
Author: Balazs Gibizer 
Date:   Wed Mar 25 17:48:23 2020 +0100

Update scheduler instance info at confirm resize

When a resize is confirmed the instance does not belong to the source
compute any more. Previously the scheduler instance info was only
updated by the _sync_scheduler_instance_info periodic task, so server
boots with anti-affinity did not consider the source host. Now, at the
end of the confirm_resize call, the compute also updates the scheduler
about the move.

Change-Id: Ic50e72e289b56ac54720ad0b719ceeb32487b8c8
Closes-Bug: #1869050
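
Heavily simplified, the shape of the fix (method names taken from
nova's compute manager; see the linked commit for the real change):

    class ComputeManager:  # sketch of the relevant slice only
        def _confirm_resize(self, context, instance, migration):
            ...  # existing cleanup of the source host's resources
            # New: push the updated instance info to the scheduler now,
            # instead of waiting for the _sync_scheduler_instance_info
            # periodic task to catch up.
            self._delete_scheduler_instance_info(context, instance.uuid)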


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1869050

Title:
  migration of anti-affinity server fails due to stale scheduler
  instance info

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) pike series:
  Triaged
Status in OpenStack Compute (nova) queens series:
  Triaged
Status in OpenStack Compute (nova) rocky series:
  Triaged
Status in OpenStack Compute (nova) stein series:
  Triaged
Status in OpenStack Compute (nova) train series:
  Triaged

Bug description:
  Description
  ===

  
  Steps to reproduce
  ==
  Have a deployment with 3 compute nodes

  * make sure that the deployment is configured with
track_instance_changes=True (True is the default)
  * create and server group with anti-affinity policy
  * boot server1 into the group
  * boot server2 into the group
  * migrate server2
  * confirm the migration
  * boot server3

  Make sure that between the last two steps there was no run of the
  periodic _sync_scheduler_instance_info task on the compute that
  hosted server2 before the migration. This can be done by performing
  the last two steps back to back without waiting, as the interval of
  that periodic task (scheduler_instance_sync_interval) defaults to
  120 sec.
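
  The same steps, scripted with openstacksdk (image, flavor, network
  and cloud names are placeholders; microversion handling is omitted):

      import openstack

      conn = openstack.connect(cloud='mycloud')  # placeholder

      group = conn.compute.create_server_group(
          name='anti', policies=['anti-affinity'])

      def boot(name):
          server = conn.compute.create_server(
              name=name, image_id='IMAGE_ID', flavor_id='FLAVOR_ID',
              networks=[{'uuid': 'NET_ID'}],
              scheduler_hints={'group': group.id})
          return conn.compute.wait_for_server(server)

      server1 = boot('server1')
      server2 = boot('server2')

      # Cold-migrate server2 and confirm; the bug is that the source
      # host's scheduler instance info still lists server2 there.
      conn.compute.migrate_server(server2)
      server2 = conn.compute.wait_for_server(
          conn.compute.get_server(server2.id), status='VERIFY_RESIZE')
      conn.compute.confirm_server_resize(server2)

      # Booted quickly after the confirm, this may fail with
      # NoValidHost even though the source host is actually free.
      server3 = boot('server3')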

  Expected result
  ===
  server3 is booted on the host where server2 is moved away

  Actual result
  =
  server3 cannot be booted (NoValidHost)

  Triage
  ==

  The confirm resize call on the source compute does not update the
  scheduler that the instance has been removed from this host. This
  leaves the scheduler instance info stale, causing the subsequent
  scheduling error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1869050/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1878979] Re: Quota code does not respect [api]/instance_list_per_project_cells

2020-05-15 Thread melanie witt
Patch proposed here, not sure why the bot didn't add it:

https://review.opendev.org/728575

** Also affects: nova/stein
   Importance: Undecided
   Status: New

** Also affects: nova/ussuri
   Importance: Undecided
   Status: New

** Also affects: nova/train
   Importance: Undecided
   Status: New

** Changed in: nova/stein
   Importance: Undecided => Medium

** Changed in: nova/ussuri
   Importance: Undecided => Medium

** Changed in: nova/train
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1878979

Title:
  Quota code does not respect [api]/instance_list_per_project_cells

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) stein series:
  New
Status in OpenStack Compute (nova) train series:
  New
Status in OpenStack Compute (nova) ussuri series:
  New

Bug description:
  The function which counts resources using the legacy method involves
  getting a list of all cell mappings assigned to a specific project:

  
https://github.com/openstack/nova/blob/575a91ff5be79ac35aef4b61d84c78c693693304/nova/quota.py#L1170-L1209

  This code can be very heavy on a database which contains a lot of
  instances (but not a lot of mappings), potentially scanning millions
  of rows to gather 1-2 cell mappings.  In a single cell environment, it
  is just extra CPU usage with exactly the same outcome.

  The [api]/instance_list_per_project_cells option was introduced to
  work around this:

  
https://github.com/openstack/nova/blob/575a91ff5be79ac35aef4b61d84c78c693693304/nova/compute/instance_list.py#L146-L153

  However, the quota code does not honour it, which means quota
  counting can take a big toll on the database server. We should
  ideally mirror the same behaviour in the quota code.
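
  A rough sketch of mirroring that behaviour in the quota path (the
  helper name is made up; the CellMappingList calls are the ones the
  instance-list code already uses):

      from oslo_config import cfg

      from nova import objects

      CONF = cfg.CONF

      def _cells_for_quota_count(context, project_id):
          # Hypothetical helper: only do the expensive per-project scan
          # of instance_mappings when the operator opted in; otherwise
          # count across all cells, as the instance-list code does.
          if CONF.api.instance_list_per_project_cells:
              return objects.CellMappingList.get_by_project_id(
                  context, project_id)
          return objects.CellMappingList.get_all(context)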

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1878979/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp