[Yahoo-eng-team] [Bug 1512267] Re: HTTP Unexpected API Error during network-create
** Changed in: nova
   Importance: Undecided => Low
** Changed in: nova
   Status: New => Invalid

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1512267
Title: HTTP Unexpected API Error during network-create
Status in OpenStack Compute (nova): Invalid

Bug description:
1. reedip@reedip-VirtualBox:/opt/stack/nova$ git log -1
   commit 542552754d8da0a97cde32f07a777179c4be608f
   Merge: 1dfc36d b08b20b
   Author: Jenkins
   Date: Tue Oct 20 15:21:38 2015 +
   Merge "Make secgroup rules refresh with refresh_instance_security_rules()"
2. Please see http://paste.openstack.org/show/477745/ for debug logs and a devstack log snippet.
3. Execute network create as: nova network-create --mtu 1500 --mtu 9000 --fixed-range-v4 1.1.1.1/24 M
4. Expected result: a network with label M and MTU 9000 should be created.
5. Actual result: see the paste link above.

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1512267/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team
Post to: yahoo-eng-team@lists.launchpad.net
Unsubscribe: https://launchpad.net/~yahoo-eng-team
More help: https://help.launchpad.net/ListHelp
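The repro in step 3 passes `--mtu` twice; with most CLI parsers the last occurrence of a scalar option wins, which is why the reporter expects a network with MTU 9000. A minimal sketch of that behavior with Python's argparse (an illustrative parser, not novaclient's actual code):

```python
import argparse

# argparse keeps only the last value for a repeated scalar option,
# mirroring the "--mtu 1500 --mtu 9000" repro above.
parser = argparse.ArgumentParser()
parser.add_argument("--mtu", type=int)
parser.add_argument("--fixed-range-v4", dest="cidr")
parser.add_argument("label")

args = parser.parse_args(
    ["--mtu", "1500", "--mtu", "9000", "--fixed-range-v4", "1.1.1.1/24", "M"]
)
print(args.mtu)  # the later --mtu overrides the earlier one
```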
[Yahoo-eng-team] [Bug 1513390] [NEW] rule change via GUI/CLI puts FW in ERROR mode when no routers exist
Public bug reported:
Create a FW rule, create a policy and attach the rule to it, then create a FW and attach the policy to it. Verify that NO ROUTERS exist. Editing the attached rule puts the FW into ERROR state: http://pastebin.com/uxsTPrAc

** Affects: neutron
   Importance: Undecided
   Status: New
** Tags: fwaas
** Tags added: fwaas

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513390
Title: rule change via GUI/CLI puts FW in ERROR mode when no routers exist
Status in neutron: New

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1513390/+subscriptions
[Yahoo-eng-team] [Bug 1513396] [NEW] Incorrect success message of remove routers with firewall in Horizon
Public bug reported:
Steps to reproduce:
1) Create an allow-ICMP rule
2) Create a policy with this rule
3) Create a firewall with the policy and associate router1 in Horizon
4) Click "remove router", then either do nothing, or unselect router1 and select it again.

Result: a success message says "Router(s) was successfully removed from firewall fw." In fact, only an unselected router will be removed.

** Affects: horizon
   Importance: Undecided
   Status: New

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1513396
Title: Incorrect success message of remove routers with firewall in Horizon
Status in OpenStack Dashboard (Horizon): New

To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1513396/+subscriptions
[Yahoo-eng-team] [Bug 1513410] [NEW] A space inserted at the end of the instance name results in an error
Public bug reported:
To reproduce: create an instance from an image in Horizon and add a space after the instance name. See the screenshots with the error:
http://imagebin.suse.de/1952
http://imagebin.suse.de/1953

** Affects: horizon
   Importance: Undecided
   Status: Confirmed
** Tags: space

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1513410
Title: A space inserted at the end of the instance name results in an error
Status in OpenStack Dashboard (Horizon): Confirmed

To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1513410/+subscriptions
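A defensive fix for this class of bug is to normalize the name before submitting it to the API. A minimal sketch of such validation (hypothetical helper and name policy, not Horizon's actual form code):

```python
import re

# Assumption: a simple illustrative name policy (word chars, spaces, dots, dashes).
NAME_RE = re.compile(r"^[\w][\w .\-]*$")

def clean_instance_name(raw):
    """Trim surrounding whitespace, then validate what remains."""
    name = raw.strip()
    if not NAME_RE.match(name):
        raise ValueError("invalid instance name: %r" % raw)
    return name

print(clean_instance_name("  my-vm 1  "))  # trailing/leading spaces are dropped
```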
[Yahoo-eng-team] [Bug 1513412] [NEW] FwaaS extension quota exceeded at the gate
Public bug reported:
IRC log:
[20:40] | sc68cal: how do we go about debugging these failures? http://logs.openstack.org/19/225319/20/gate/gate-neutron-dsvm-api/9d9b806/console.html#_2015-11-05_00_01_58_095 - there are a bunch of failures in the q-svc logs but they don't result in 100% job fails in logstash
[20:40] <mriedem> where is the fwaas stuff logged? i assume q-svc
[20:40] <mriedem> which is confusing if it's run as its own service
[20:43] <mriedem> i guess i can use this http://logs.openstack.org/96/237896/4/gate/gate-neutron-dsvm-api/6e1d5c7/logs/screen-q-svc.txt.gz#_2015-11-04_20_14_34_810

** Affects: neutron
   Importance: Undecided
   Status: New
** Tags: fwaas

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513412
Title: FwaaS extension quota exceeded at the gate
Status in neutron: New

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1513412/+subscriptions
[Yahoo-eng-team] [Bug 1498370] Re: DHCP agent: interface unplug leads to exception
** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New
** Also affects: neutron (Ubuntu Vivid)
   Importance: Undecided
   Status: New
** Also affects: neutron (Ubuntu Xenial)
   Importance: Undecided
   Status: New
** Also affects: neutron (Ubuntu Wily)
   Importance: Undecided
   Status: New
** Changed in: neutron (Ubuntu Vivid)
   Status: New => Fix Released
** Changed in: neutron (Ubuntu Wily)
   Status: New => Fix Released
** Changed in: neutron (Ubuntu Vivid)
   Status: Fix Released => In Progress
** Changed in: neutron (Ubuntu Xenial)
   Status: New => Fix Released
** Changed in: neutron (Ubuntu Vivid)
   Assignee: (unassigned) => Edward Hope-Morley (hopem)

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1498370
Title: DHCP agent: interface unplug leads to exception
Status in neutron: Fix Released
Status in neutron package in Ubuntu: Fix Released
Status in neutron source package in Vivid: In Progress
Status in neutron source package in Wily: Fix Released
Status in neutron source package in Xenial: Fix Released

Bug description:
2015-09-22 01:23:42.612 ERROR neutron.agent.dhcp.agent [-] Unable to disable dhcp for c543db4d-e077-488f-b58c-5805f63f86b6.
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent Traceback (most recent call last):
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/dhcp/agent.py", line 115, in call_driver
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent     getattr(driver, action)(**action_kwargs)
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 221, in disable
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent     self._destroy_namespace_and_port()
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 226, in _destroy_namespace_and_port
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent     self.device_manager.destroy(self.network, self.interface_name)
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 1223, in destroy
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent     self.driver.unplug(device_name, namespace=network.namespace)
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/interface.py", line 358, in unplug
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent     tap_name = self._get_tap_name(device_name, prefix)
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/interface.py", line 299, in _get_tap_name
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent     dev_name = dev_name.replace(prefix or self.DEV_NAME_PREFIX,
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent AttributeError: 'NoneType' object has no attribute 'replace'
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent
2015-09-22 01:23:42.616 INFO neutron.agent.dhcp.agent [-] Synchronizing state complete

The root cause is that the device name is None.

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1498370/+subscriptions
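The traceback ends in `'NoneType' object has no attribute 'replace'` because `device_name` is `None` by the time it reaches `_get_tap_name`. A minimal sketch of the failure and a defensive guard (hypothetical standalone helper, not the actual Neutron fix):

```python
# Assumption: mirrors the "ns-" style device-name prefix used by
# Neutron interface drivers; illustrative only.
DEV_NAME_PREFIX = "ns-"

def get_tap_name(dev_name, prefix=None):
    """Translate a device name to its tap name, tolerating None."""
    if dev_name is None:
        # Guard: unplug may be invoked for a device that was never plugged.
        return None
    return dev_name.replace(prefix or DEV_NAME_PREFIX, "tap")

print(get_tap_name("ns-1234abcd"))  # normal case
print(get_tap_name(None))           # returns None instead of raising AttributeError
```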
[Yahoo-eng-team] [Bug 1513464] [NEW] wrong description in developer doc
Public bug reported:
Doc address: http://docs.openstack.org/developer/keystone/key_terms.html#resources
The doc says: "The Identity portion of keystone includes Projects and Domains, and are commonly stored in an SQL backend." It is NOT Identity but Resources.

** Affects: keystone
   Importance: Undecided
   Status: New

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1513464
Title: wrong description in developer doc
Status in OpenStack Identity (keystone): New

To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1513464/+subscriptions
[Yahoo-eng-team] [Bug 1513467] [NEW] resource tracker incorrect log of pci stats
Public bug reported:
In nova-compute, the resource tracker logs the PCI stats as pci_stats=PciDevicePoolList(objects=[PciDevicePool] without showing the PciDevicePool values. This is on the nova master branch.

** Affects: nova
   Importance: Undecided
   Assignee: Moshe Levi (moshele)
   Status: In Progress

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1513467
Title: resource tracker incorrect log of pci stats
Status in OpenStack Compute (nova): In Progress

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1513467/+subscriptions
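The unhelpful log line comes from formatting objects whose representation does not expand nested members. A minimal sketch of the symptom and a `__repr__`-based fix with plain Python classes (hypothetical class shape; nova's real objects are oslo.versionedobjects):

```python
class PciDevicePool:
    def __init__(self, vendor_id, product_id, count):
        self.vendor_id = vendor_id
        self.product_id = product_id
        self.count = count

    def __repr__(self):
        # Without a __repr__ like this, logging shows only the class
        # name (as in the bug), not the pool's actual values.
        return ("PciDevicePool(vendor_id=%r, product_id=%r, count=%r)"
                % (self.vendor_id, self.product_id, self.count))

pool = PciDevicePool("8086", "0443", 2)
print("pci_stats=%r" % [pool])  # now shows the values, not just the type
```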
[Yahoo-eng-team] [Bug 1513473] [NEW] Introduce Functionality to Replace "location" and "copy-from" Flags for Glance Image Creation
You have been subscribed to a public bug:

Since the "location" and "copy-from" flags are being deprecated / reserved in the newest version of the Glance CLI for creating images, it would be useful to at least replace their functionality with something similar.

Suggest adding a flag called "--image-url" that eliminates the need to copy an image to an OpenStack account in order to use it, similar to how "--location" worked.

Suggest adding a flag called "--copy-url" that allows the user to provide a URL to an existing image (e.g. on S3) from which it can be copied, similar to how "--copy-from" worked.

** Affects: glance
   Importance: Undecided
   Status: New

-- Introduce Functionality to Replace "location" and "copy-from" Flags for Glance Image Creation
https://bugs.launchpad.net/bugs/1513473
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance.
[Yahoo-eng-team] [Bug 1513473] [NEW] Introduce Functionality to Replace "location" and "copy-from" Flags for Glance Image Creation
Public bug reported:
Since the "location" and "copy-from" flags are being deprecated / reserved in the newest version of the Glance CLI for creating images, it would be useful to at least replace their functionality with something similar.

Suggest adding a flag called "--image-url" that eliminates the need to copy an image to an OpenStack account in order to use it, similar to how "--location" worked.

Suggest adding a flag called "--copy-url" that allows the user to provide a URL to an existing image (e.g. on S3) from which it can be copied, similar to how "--copy-from" worked.

Since some developers' needs currently depend on these features, they are forced to use the older version of the CLI, exposing them to past bugs and potential security flaws.

** Affects: glance
   Importance: Undecided
   Status: New
** Project changed: openstack-manuals => glance
** Description changed: added the final paragraph above (developers forced onto the older CLI).

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1513473
Title: Introduce Functionality to Replace "location" and "copy-from" Flags for Glance Image Creation
Status in Glance: New

To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1513473/+subscriptions
[Yahoo-eng-team] [Bug 1513484] [NEW] networking-bgpvpn juno branch
Public bug reported:
While we are working on a first Liberty release, we would also like to prepare a Juno backport. To do this we would like to create a stable/juno branch from the current master head, but *without* doing a release (there is a bit of work to do to make the code Juno-compatible).

** Affects: bgpvpn
   Importance: Undecided
   Status: New
** Affects: neutron
   Importance: Undecided
   Status: New
** Tags: release-subproject
** Also affects: neutron
   Importance: Undecided
   Status: New

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513484
Title: networking-bgpvpn juno branch
Status in bgpvpn: New
Status in neutron: New

To manage notifications about this bug go to: https://bugs.launchpad.net/bgpvpn/+bug/1513484/+subscriptions
[Yahoo-eng-team] [Bug 1513513] [NEW] Support configuration of multiple pci_alias with an array
Public bug reported:
Nova code doesn't currently support defining multiple pci_alias entries as a single JSON array. This is not aligned with packstack; see manifests/api.pp:

# [*pci_alias*]
#   (optional) Pci passthrough for controller:
#   Defaults to undef
#   Example
#   "[ {'vendor_id':'1234', 'product_id':'5678', 'name':'default'}, {...} ]"

Version:
commit e52d236a3f1740997890cad9d4726df01d5a7e5d
Merge: 961e330 86fe90f
Author: Jenkins
Date: Thu Nov 5 01:22:54 2015 +
Merge "cells: add debug logging to bdm_update_or_create_at_top"

Log:
ERROR (BadRequest): Invalid PCI alias definition: [{u'vendor_id': u'8086', u'product_id': u'0443', u'name': u'a1'}, {u'vendor_id': u'8086', u'product_id': u'0443', u'name': u'a2'}] is not of type 'object'
Failed validating 'type' in schema:
    {'additionalProperties': False,
     'properties': {'capability_type': {'enum': ['pci'], 'type': 'string'},
                    'device_type': {'enum': ['NIC', 'ACCEL', 'GPU'], 'type': 'string'},
                    'name': {'maxLength': 256, 'minLength': 1, 'type': 'string'},
                    'product_id': {'pattern': '^([\\da-fA-F]{4})$', 'type': 'string'},
                    'vendor_id': {'pattern': '^([\\da-fA-F]{4})$', 'type': 'string'}},
     'required': ['name'],
     'type': 'object'}
On instance:
    [{u'name': u'a1', u'product_id': u'0443', u'vendor_id': u'8086'},
     {u'name': u'a2', u'product_id': u'0443', u'vendor_id': u'8086'}]
(HTTP 400) (Request-ID: req-3fe994bc-6a99-4c0c-be98-1a22703c58ee)

Reproduce steps:
1) Configure pci_alias in nova.conf:
   pci_alias=[{"vendor_id":"8086", "product_id":"0443", "name":"a1"}, {"vendor_id":"8086", "product_id":"0443", "name":"a2"}]
2) Create a flavor with "pci_passthrough:alias=a1:1".
3) Boot an instance with this flavor.

Expected result: the instance boots successfully.
Actual result: the instance fails to start.

Workaround: multiple pci_alias entries can be configured by putting each one on its own config line:
pci_alias={"vendor_id":"8086", "product_id":"0443", "name":"a1"}
pci_alias={"vendor_id":"8086", "product_id":"0443", "name":"a2"}
But this is still not aligned with packstack.

** Affects: nova
   Importance: Undecided
   Status: New

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1513513
Title: Support configuration of multiple pci_alias with an array
Status in OpenStack Compute (nova): New
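The validation error arises because each pci_alias config value is checked against a schema of type 'object' (a JSON object, i.e. a Python dict), while the array-style value parses as a list. A stdlib-only sketch of the distinction (the `is_valid_alias` helper is illustrative, not nova's validator):

```python
import json

single = '{"vendor_id": "8086", "product_id": "0443", "name": "a1"}'
multi = ('[{"vendor_id": "8086", "product_id": "0443", "name": "a1"},'
         ' {"vendor_id": "8086", "product_id": "0443", "name": "a2"}]')

def is_valid_alias(raw):
    # JSON Schema type 'object' maps to a Python dict; a JSON array
    # parses to a list and therefore fails the check, as in the bug.
    return isinstance(json.loads(raw), dict)

print(is_valid_alias(single))  # one alias per config line passes
print(is_valid_alias(multi))   # an array of aliases is rejected
```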
[Yahoo-eng-team] [Bug 1500830] Re: setting COMPRESS_ENABLED = False and restarting Apache leads to every xstatic library being NOT FOUND
Not a Horizon bug; it's Apache working as intended. Let me explain how the Apache web server works in conjunction with the django-compressor package.

First, Apache serves dynamic (Django) and static (CSS/JS) Horizon content in two different ways:
* https://github.com/openstack-dev/devstack/blob/stable/liberty/files/apache-horizon.template#L2 - dynamic content is served through Apache mod_wsgi, where the Apache process needs access only to the django.wsgi script
* https://github.com/openstack-dev/devstack/blob/stable/liberty/files/apache-horizon.template#L12 - static content is served through the Alias directive, which ties a web location to a filesystem location, but the filesystem location also needs to be made explicitly available to Apache (see [2])

Then, how would Apache be able to serve static content outside of %HORIZON_DIR% (which usually resides in /usr/share/openstack_dashboard), like our XStatic / libjs assets? In fact it wouldn't :) - and that's what we saw on your lab with COMPRESS_ENABLED = False. But when COMPRESS_ENABLED = True, compression puts the compressed static assets into STATIC_ROOT [3], which is already inside %HORIZON_DIR% - the same thing that collectstatic does!

So the bottom line: in order to serve static assets from Apache you _have_ to put them in a place with explicitly enabled read access. The single (and least confusing) such place is Horizon's STATIC_ROOT. Either the collectstatic or the compress command of manage.py does this. If you don't want to multiply the number of files, collectstatic --link is the only viable option. Otherwise, we won't be able to debug uncompressed statics on production environments.

** Changed in: horizon
   Status: New => Invalid

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1500830
Title: setting COMPRESS_ENABLED = False and restarting Apache leads to every xstatic library being NOT FOUND
Status in OpenStack Dashboard (Horizon): Invalid

Bug description:
Hi,
Trying to see if it is possible to debug Horizon in production, one of my colleagues tried to disable compression. The result isn't nice at all: setting COMPRESS_ENABLED = False and restarting Apache leads to every xstatic library being NOT FOUND, and pages taking forever to load.

To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1500830/+subscriptions
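The explanation above boils down to: a filesystem path mapped with `Alias` must also be granted read access with a `<Directory>` block. A hedged sketch of what that looks like in an Apache vhost (paths are illustrative and not the exact devstack template):

```apache
# Dynamic Django content served via mod_wsgi
WSGIScriptAlias / /usr/share/openstack_dashboard/wsgi/django.wsgi

# Static content: Alias maps the URL prefix to a filesystem location...
Alias /static /usr/share/openstack_dashboard/static

# ...but Apache still needs explicit read access to that location.
<Directory /usr/share/openstack_dashboard/static>
  Require all granted
</Directory>
```

Anything outside a directory granted this way (e.g. XStatic packages in site-packages) is invisible to Apache, which is why collecting assets into STATIC_ROOT is required.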
[Yahoo-eng-team] [Bug 1489126] Re: Filtering by tags is broken in v3
** Changed in: glance
   Status: In Progress => Opinion

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1489126
Title: Filtering by tags is broken in v3
Status in Glance: Opinion

Bug description:
When I want to filter a list of artifacts by tag, I get a 500 error:
http://localhost:9292/v3/artifacts/myartifact/v2.0/drafts?tag=hyhyhy
500 Internal Server Error: The server has either erred or is incapable of performing the requested operation.

To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1489126/+subscriptions
[Yahoo-eng-team] [Bug 1513538] [NEW] Replace SQL's DATETIME format with integer timestamps
Public bug reported:
Keystone's current schema uses SQL's DATETIME type. Depending on the SQL version, it may or may not support sub-second accuracy/precision. We should replace keystone's use of DATETIME with an integer timestamp; with integer timestamps we can support sub-second accuracy regardless of the version of SQL being used.

** Affects: keystone
   Importance: Undecided
   Status: New

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1513538
Title: Replace SQL's DATETIME format with integer timestamps
Status in OpenStack Identity (keystone): New

To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1513538/+subscriptions
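The proposal above can be sketched with stdlib datetime: storing microseconds since the epoch as a plain integer keeps sub-second precision regardless of the backend's DATETIME support (helper names here are illustrative, not keystone code):

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def to_int_us(dt):
    """Encode an aware datetime as integer microseconds since the epoch."""
    delta = dt - EPOCH
    # Exact integer arithmetic; avoids float rounding in total_seconds().
    return (delta.days * 86_400 + delta.seconds) * 1_000_000 + delta.microseconds

def from_int_us(us):
    """Decode integer microseconds back into an aware datetime."""
    return EPOCH + timedelta(microseconds=us)

issued_at = datetime(2015, 11, 5, 12, 30, 45, 123456, tzinfo=timezone.utc)
stored = to_int_us(issued_at)  # a plain integer, portable across SQL backends
```

A round trip through the integer preserves the microseconds that a second-granularity DATETIME column would drop.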
[Yahoo-eng-team] [Bug 1513541] [NEW] Support sub-second accuracy in Fernet's creation timestamp
Public bug reported:
The fernet token provider has a sub-second timestamp format, but it is currently truncated to .00Z. This is because the library (pyca/cryptography [0]) that keystone relies on for generating fernet tokens uses integer timestamps instead of floats, which loses sub-second accuracy. We should find a way to support sub-second accuracy in Fernet's creation timestamp so that we don't hit token revocation edge cases, like the ones documented here: https://review.openstack.org/#/c/227995/ . This will likely have to be a coordinated effort between the cryptography development community and the maintainers of the Fernet specification [1]. This bug is to track that we include the corresponding fix (via a version bump of cryptography) in keystone.

[0] https://github.com/pyca/cryptography
[1] https://github.com/fernet/spec

** Affects: keystone
   Importance: Undecided
   Status: New
** Tags: fernet
** Tags added: fernet

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1513541
Title: Support sub-second accuracy in Fernet's creation timestamp
Status in OpenStack Identity (keystone): New
To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1513541/+subscriptions
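Per the Fernet specification, the token's timestamp field is a big-endian 64-bit unsigned integer of whole seconds, which is exactly where the sub-second precision is lost. A stdlib sketch of packing that field (illustrative of the spec's timestamp encoding; real tokens are built by pyca/cryptography):

```python
import struct
import time

now = time.time()                       # float, with sub-second precision
ts_field = struct.pack(">Q", int(now))  # Fernet spec: 64-bit unsigned seconds

recovered = struct.unpack(">Q", ts_field)[0]
print(now - recovered)  # the fractional part is gone: always < 1 second
```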
[Yahoo-eng-team] [Bug 1513558] [NEW] test_create_ebs_image_and_check_boot failing with ceph job on stable/kilo
Public bug reported: After https://review.openstack.org/#/c/230937/ merged stable/kilo gate seems to be broken in the ceph job gate-tempest-dsvm-full-ceph The tests fail with an error like: 2015-11-04 19:20:07.224 | Captured traceback-2: 2015-11-04 19:20:07.224 | ~ 2015-11-04 19:20:07.224 | Traceback (most recent call last): 2015-11-04 19:20:07.224 | File "/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py", line 791, in wait_for_resource_deletion 2015-11-04 19:20:07.224 | raise exceptions.TimeoutException(message) 2015-11-04 19:20:07.224 | tempest_lib.exceptions.TimeoutException: Request timed out 2015-11-04 19:20:07.224 | Details: (TestVolumeBootPattern:_run_cleanups) Failed to delete volume 1da0ba45-a4e6-49c6-8d47-ca522d7acabb within the required time (196 s). 2015-11-04 19:20:07.225 | 2015-11-04 19:20:07.225 | 2015-11-04 19:20:07.225 | Captured traceback-1: 2015-11-04 19:20:07.225 | ~ 2015-11-04 19:20:07.225 | Traceback (most recent call last): 2015-11-04 19:20:07.225 | File "tempest/scenario/manager.py", line 100, in delete_wrapper 2015-11-04 19:20:07.225 | delete_thing(*args, **kwargs) 2015-11-04 19:20:07.225 | File "tempest/services/volume/json/volumes_client.py", line 108, in delete_volume 2015-11-04 19:20:07.225 | resp, body = self.delete("volumes/%s" % str(volume_id)) 2015-11-04 19:20:07.225 | File "/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py", line 290, in delete 2015-11-04 19:20:07.225 | return self.request('DELETE', url, extra_headers, headers, body) 2015-11-04 19:20:07.226 | File "/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py", line 639, in request 2015-11-04 19:20:07.226 | resp, resp_body) 2015-11-04 19:20:07.226 | File "/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py", line 697, in _error_checker 2015-11-04 19:20:07.226 | raise 
exceptions.BadRequest(resp_body, resp=resp)
2015-11-04 19:20:07.226 | tempest_lib.exceptions.BadRequest: Bad request
2015-11-04 19:20:07.226 | Details: {u'code': 400, u'message': u'Invalid volume: Volume still has 1 dependent snapshots.'}

Full logs here: http://logs.openstack.org/52/229152/11/check/gate-tempest-dsvm-full-ceph/11bddbf/console.html#_2015-11-04_19_20_07_224

This seems to be similar to https://bugs.launchpad.net/tempest/+bug/1489581 but isn't in the cells job. ** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1513558 Title: test_create_ebs_image_and_check_boot failing with ceph job on stable/kilo Status in OpenStack Compute (nova): New
[Yahoo-eng-team] [Bug 1387552] Re: Should add a new method to get volumes from block_device_info
This isn't a bug, it's a style nit. Marking as invalid. I don't really consider this worth the time. ** Tags added: volumes ** Changed in: nova Status: Confirmed => Opinion ** Changed in: nova Status: Opinion => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1387552 Title: Should add a new method to get volumes from block_device_info Status in OpenStack Compute (nova): Invalid Bug description: Instead of block_device_mapping = driver.block_device_info_get_mapping( block_device_info) for vol in block_device_mapping: use driver.block_device_info_get_volumes(block_device_info) To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1387552/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
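The accessor suggested in bug 1387552 does not exist in nova's virt driver module; a minimal sketch of what such a convenience helper could look like, assuming the block_device_info dict layout used by nova.virt.driver (the helper name comes from the report, everything else here is illustrative):

```python
# Hypothetical convenience wrapper, modeled on nova.virt.driver's
# block_device_info_get_mapping(). Illustrative only, not real nova code.

def block_device_info_get_mapping(block_device_info):
    # Existing-style accessor: tolerate a missing or None dict.
    block_device_info = block_device_info or {}
    return block_device_info.get('block_device_mapping') or []

def block_device_info_get_volumes(block_device_info):
    # Proposed helper: let callers iterate volumes directly instead of
    # fetching the mapping themselves first.
    return block_device_info_get_mapping(block_device_info)

# Callers would then write:
info = {'block_device_mapping': [{'mount_device': '/dev/vdb'}]}
for vol in block_device_info_get_volumes(info):
    print(vol['mount_device'])
```

As the closing comment notes, this is purely a readability change; both spellings do the same work.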
[Yahoo-eng-team] [Bug 1513574] [NEW] firewall rules on DVR FIP fails to work for ingress traffic
Public bug reported:

= my env =
controller + network node (dvr_snat) + 2 compute nodes (dvr)
DVR: enabled when using devstack to deploy this env
FWaaS: neutron-fwaas manually cloned and configured, using iptables as the driver

steps
1) create net and subnet, boot VM-1 on CN-1 and VM-2 on CN-2, create a router, and attach the subnet to the router.
2) create an external network, set it as the router gateway net, create 2 floating IPs and associate them with the two VMs.
3) confirm DVR FIP works: fip ns created, iptables rules updated in the qrouter ns, both VMs pingable by floating IP. floating IPs: 192.168.0.4 and 192.168.0.5
4) create firewall rules and a firewall policy, and create a firewall on the router. firewall rules:
fw-r1: ICMP, source: 192.168.0.184/29(none), dest: 192.168.0.0/28(none), allow
fw-r2: ICMP, source: 192.168.0.0/28(none), dest: 192.168.0.184/29(none), allow
5) confirm the firewall rules are updated in the qrouter ns.
6) from a host with an IP such as 192.168.0.190, try to ping the floating IPs from step 3.

expected: the floating IPs should be pingable (192.168.0.190 is in 192.168.0.184/29, and the two firewall rules allow ICMP both ways)
observed: no response, "100% packet loss" from the ping command. the floating IPs fail to ping.
more details

firewall iptable rules:
-A INPUT -j neutron-l3-agent-INPUT
-A FORWARD -j neutron-filter-top
-A FORWARD -j neutron-l3-agent-FORWARD
-A OUTPUT -j neutron-filter-top
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A neutron-filter-top -j neutron-l3-agent-local
-A neutron-l3-agent-FORWARD -o rfp-+ -j neutron-l3-agent-iv4322a9b15
-A neutron-l3-agent-FORWARD -i rfp-+ -j neutron-l3-agent-ov4322a9b15
-A neutron-l3-agent-FORWARD -o rfp-+ -j neutron-l3-agent-fwaas-defau
-A neutron-l3-agent-FORWARD -i rfp-+ -j neutron-l3-agent-fwaas-defau
-A neutron-l3-agent-INPUT -m mark --mark 0x1/0x -j ACCEPT
-A neutron-l3-agent-INPUT -p tcp -m tcp --dport 9697 -j DROP
-A neutron-l3-agent-fwaas-defau -j DROP
-A neutron-l3-agent-iv4322a9b15 -m state --state INVALID -j DROP
-A neutron-l3-agent-iv4322a9b15 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A neutron-l3-agent-iv4322a9b15 -s 192.168.0.0/28 -d 192.168.0.184/29 -p icmp -j ACCEPT
-A neutron-l3-agent-iv4322a9b15 -s 192.168.0.184/29 -d 192.168.0.0/28 -p icmp -j ACCEPT
-A neutron-l3-agent-ov4322a9b15 -m state --state INVALID -j DROP
-A neutron-l3-agent-ov4322a9b15 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A neutron-l3-agent-ov4322a9b15 -s 192.168.0.0/28 -d 192.168.0.184/29 -p icmp -j ACCEPT
-A neutron-l3-agent-ov4322a9b15 -s 192.168.0.184/29 -d 192.168.0.0/28 -p icmp -j ACCEPT

--- DVR FIP nat iptable rules: ---
1) for 192.168.0.4:
-A PREROUTING -j neutron-l3-agent-PREROUTING
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A POSTROUTING -j neutron-l3-agent-POSTROUTING
-A POSTROUTING -j neutron-postrouting-bottom
-A neutron-l3-agent-OUTPUT -d 192.168.0.4/32 -j DNAT --to-destination 20.0.1.7
-A neutron-l3-agent-POSTROUTING ! -i rfp-4bf3186c-d ! -o rfp-4bf3186c-d -m conntrack ! --ctstate DNAT -j ACCEPT
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
-A neutron-l3-agent-PREROUTING -d 192.168.0.4/32 -j DNAT --to-destination 20.0.1.7
-A neutron-l3-agent-float-snat -s 20.0.1.7/32 -j SNAT --to-source 192.168.0.4
-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat
-A neutron-postrouting-bottom -m comment --comment "Perform source NAT on outgoing traffic." -j neutron-l3-agent-snat

2) for 192.168.0.5:
-A PREROUTING -j neutron-l3-agent-PREROUTING
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A POSTROUTING -j neutron-l3-agent-POSTROUTING
-A POSTROUTING -j neutron-postrouting-bottom
-A neutron-l3-agent-OUTPUT -d 192.168.0.5/32 -j DNAT --to-destination 20.0.1.6
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
-A neutron-l3-agent-PREROUTING -d 192.168.0.5/32 ! -i qr-+ -j DNAT --to-destination 20.0.1.6
-A neutron-l3-agent-float-snat -s 20.0.1.6/32 -j SNAT --to-source 192.168.0.5
-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat
-A neutron-postrouting-bottom -m comment --comment "Perform source NAT on outgoing traffic." -j neutron-l3-agent-snat

-- tcpdump result: (192.168.0.190 ping 192.168.0.4) --
1) on fg in fip ns, ingress traffic caught:
fa:16:3e:b3:3e:8c > fa:16:3e:9d:ea:ed, ethertype IPv4 (0x0800), length 98: 192.168.0.190 > 192.168.0.4: ICMP echo request, id 28356, seq 31, length 64
and fg:
40: fg-59c9ce49-3a: mtu 1500 qdisc noqueue state UNKNOWN group default
link/ether fa:16:3e:9d:ea:ed brd ff:ff:ff:ff:ff:ff
inet 192.168.0.133/24 brd 192.168.0.255 scope global fg-59c9ce49-3a
valid_lft forever preferred_lft forever
inet6 fe80::f816:3ef
[Yahoo-eng-team] [Bug 1177432] Re: [SRU] Enable backports in cloud-init archive template
This bug was fixed in the package cloud-init - 0.7.7~bzr1154-0ubuntu1

---
cloud-init (0.7.7~bzr1154-0ubuntu1) xenial; urgency=medium
* New upstream snapshot.
* create the same /etc/apt/sources.list that is present in default server ISO installs. This change adds restricted, multiverse, and -backports (LP: #1177432).
-- Scott Moser Thu, 05 Nov 2015 12:10:00 -0500

** Changed in: cloud-init (Ubuntu Xenial) Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1177432 Title: [SRU] cloud-init archive template should match Ubuntu Server Status in cloud-init: New Status in cloud-init package in Ubuntu: Fix Released Status in cloud-init source package in Precise: New Status in cloud-init source package in Trusty: Fix Committed Status in cloud-init source package in Vivid: Fix Committed Status in cloud-init source package in Wily: Fix Committed Status in cloud-init source package in Xenial: Fix Released Bug description: [SRU Justification] Ubuntu Cloud Images are inconsistent with desktop and bare-metal server installations since backports, restricted and multiverse are not enabled. This is effected via cloud-init, which uses a template to select an in-cloud archive. [FIX] Make the cloud-init template match that of Ubuntu Server. [REGRESSION] The potential for regression is low. However, all users will experience slower fetch times on apt-get updates, especially on slower or high-latency networks. [TEST] 1. Build image from -proposed 2. Boot up image 3. Confirm that "sudo apt-get update" pulls in backports, restricted and multiverse. Backports are currently not enabled in the cloud-init template. This is needed in order to get the backport kernels on cloud images.
Related bugs: * bug 997371: Create command to add "multiverse" and "-backports" to apt sources * bug 1513529: cloud image built-in /etc/apt/sources.list needs updating To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-init/+bug/1177432/+subscriptions
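For reference, the components the fix enables mirror a default server install's /etc/apt/sources.list; a sketch of the resulting xenial entries (the mirror URL and exact line ordering are assumptions here; cloud images substitute a region-local mirror via the template):

```
deb http://archive.ubuntu.com/ubuntu xenial main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu xenial-updates main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu xenial-backports main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu xenial-security main restricted universe multiverse
```

The [TEST] step amounts to checking that `sudo apt-get update` fetches indexes for all four components, including -backports.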
[Yahoo-eng-team] [Bug 1512654] Re: Unable to create VM from horizon due to neutron error
Sure, please file a separate bug for 4. ** Changed in: neutron Status: Triaged => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1512654 Title: Unable to create VM from horizon due to neutron error Status in neutron: Invalid Bug description: Ubuntu 15.10 : Openstack Liberty I have ML2 plugin deployed as flat network without floating ip address When I'm creating VM from Horizon there are 404 not found errors in logs and error popping as Danger: There was an error submitting the form. Please try again. Nov 3 16:19:50 ubuntu neutron-server[2441]: 2015-11-03 16:19:50.517 2644 INFO neutron.wsgi [req-f1825b0c-9347-4212-bef1-84ce9942782f 8073992ecba44af28fc7aa32b20bdb72 21018e833b7e4093a1b5a0c933509c4e - - -] x.x.x.x - - [03/Nov/2015 16:19:50] "GET /v2.0/floatingips.json?fixed_ip_address=y.y.y.y&port_id=6a9eb8f8-735b-403a-8abb-3b2cd14a97cd HTTP/1.1" 404 266 0.002497 If one is not using floating ips then neutron/horizon should allow VM creation for flat network. I'm able to create VM from nova CLI but this error keeps popping. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1512654/+subscriptions
[Yahoo-eng-team] [Bug 1297451] Re: Image uploads to the filesystem driver are not fully atomic
** Project changed: glance => glance-store ** Changed in: glance-store Importance: Undecided => Wishlist ** Tags added: lite-spec -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1297451 Title: Image uploads to the filesystem driver are not fully atomic Status in glance_store: In Progress Bug description: When uploading to the filesystem store, the image is uploaded, checksummed, and added to the database, with appropriate rollback operations in place in case of failures. This is all good, but there are various corner-cases where partially-uploaded files can be left lying around in the filesystem_store_datadir. Mostly this kind of thing interferes with operational processes we have that are external to Glance and not Glance itself. e.g., we have a monitoring process to reconcile the list/checksums of glance images against what's on-disk, and this occasionally goes funny on us. Additionally, we have a side-band backup/replication process that ends up occasionally transferring partial files for no reason. Using a temporary upload space and moving the file into place on the filesystem store would cause a lot of our co-processes to work more cleanly, as well as reducing the risk of the _delete_partial job deleting the wrong thing should it go sideways. To manage notifications about this bug go to: https://bugs.launchpad.net/glance-store/+bug/1297451/+subscriptions
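The temporary-upload-plus-rename approach the reporter asks for can be sketched as follows. This is illustrative, not glance_store's actual filesystem driver (the function name, chunk iterable, and `.part` suffix are all assumptions); the key property is that rename within a single filesystem is atomic, so external processes never observe a partial image file:

```python
import os
import tempfile

def write_image_atomically(datadir, image_id, chunks):
    # Stage the upload in the same directory (hence the same
    # filesystem), so the final rename cannot cross a mount point
    # and remains a single atomic rename(2).
    fd, tmp_path = tempfile.mkstemp(dir=datadir, suffix='.part')
    try:
        with os.fdopen(fd, 'wb') as f:
            for chunk in chunks:
                f.write(chunk)
        final_path = os.path.join(datadir, image_id)
        # os.replace() maps to rename(2): scanners see either no image
        # file or a complete one, never a partially written upload.
        os.replace(tmp_path, final_path)
        return final_path
    except Exception:
        # Roll back the staging file so no '.part' debris is left behind.
        os.unlink(tmp_path)
        raise
```

Side-band backup, checksum, and cleanup jobs could then safely ignore `*.part` names, which addresses the reconciliation problems described above.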
[Yahoo-eng-team] [Bug 1498536] Re: After clear router gateway the vpnaas site connection still show the "active"
*** This bug is a duplicate of bug 1261598 *** https://bugs.launchpad.net/bugs/1261598 ** This bug has been marked a duplicate of bug 1261598 VPNaaS doesn't consider subnet interface or router gateway removal operation after vpnservice is created -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1498536 Title: After clear router gateway the vpnaas site connection still show the "active" Status in neutron: New Bug description: After clearing the router gateway, the VPNaaS site connection still shows "active". Setup: OS: ubuntu 14.04, based on Juno vm1-Router1(tenant1) ---router2(tenant2)|-vm2 1 controller + 1 network node + 2 nova compute nodes + 1 docker node Bring up one tunnel between two tenants in the same OpenStack environment based on Juno. The VPN site connection shows active in both tenants. Then clear one of the router external gateways; after a few minutes the VPN status still shows active, although the previous gateway IP address is no longer reachable. The issue: the VPN tunnel is established between the two router external gateways, so if one of them is cleared, the VPN site connection status should go down, because that router gateway is not reachable. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1498536/+subscriptions
[Yahoo-eng-team] [Bug 1513601] [NEW] add Italian localization
Public bug reported: Our Horizon Italian localization is at approximately 100%. https://translate.openstack.org/webtrans/translate?project=horizon&iteration=master&localeId=it&locale=en%20#doc:horizon/locale/django The criterion for inclusion is 90% completion, so we should include Italian in the list of languages. ** Affects: horizon Importance: Undecided Assignee: Doug Fish (drfish) Status: In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1513601 Title: add Italian localization Status in OpenStack Dashboard (Horizon): In Progress To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1513601/+subscriptions
[Yahoo-eng-team] [Bug 1513609] [NEW] easily disable cloud-init
Public bug reported: cloud-init should be easy to disable. cloud-init slows down system boot for 2 reasons: a.) by design, to offer users the ability to do things at defined points in boot and to block other things; b.) by nature of being python code, loading libraries, and just being "one more thing" that occurs in boot. We should fix performance wherever we can, but should also make it such that cloud-init can be installed (in an image) and not negatively affect boot performance if none of its functionality is needed. ** Affects: cloud-init Importance: Medium Status: Triaged ** Changed in: cloud-init Status: New => Triaged ** Changed in: cloud-init Importance: Undecided => Medium -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1513609 Title: easily disable cloud-init Status in cloud-init: Triaged To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-init/+bug/1513609/+subscriptions
[Yahoo-eng-team] [Bug 1512305] Re: keystone api-site is out of date
Marking this as invalid based on Steve's comment. If there are things that need to be addressed in http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html we can either reopen this bug, or better yet, open a separate bug that is specific to the issue. ** Changed in: keystone Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone). https://bugs.launchpad.net/bugs/1512305 Title: keystone api-site is out of date Status in OpenStack Identity (keystone): Invalid Status in openstack-api-site: Confirmed Bug description: http://docs.openstack.org/developer/keystone/api_curl_examples.html http://developer.openstack.org/api-ref-identity-v3.html Comparing the content of these links, you will find some differences; some attributes are missing in the docs. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1512305/+subscriptions
[Yahoo-eng-team] [Bug 1512969] Re: DHCP-HA-RFE- there is no command that show us with network node is the DHCP proviider
According to the discussion, the RFE seems invalid, as the feature already exists. ** Changed in: neutron Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1512969 Title: DHCP-HA-RFE- there is no command that show us with network node is the DHCP proviider Status in neutron: Invalid Bug description: version : [root@cougar16 ~(keystone_admin)]# rpm -qa |grep neutron python-neutronclient-3.1.1-dev1.el7.centos.noarch python-neutron-7.0.0.0-rc2.dev21.el7.centos.noarch openstack-neutron-7.0.0.0-rc2.dev21.el7.centos.noarch openstack-neutron-ml2-7.0.0.0-rc2.dev21.el7.centos.noarch openstack-neutron-common-7.0.0.0-rc2.dev21.el7.centos.noarch openstack-neutron-openvswitch-7.0.0.0-rc2.dev21.el7.centos.noarch When deploying an HA environment we have HA of the DHCP service. There is no command that gives us information about which network node provides the DHCP service. The only way to get this info is to search /var/log/messages for which network node sent an ack for the DHCP request. I think we need to add a neutron command that shows which network node is providing DHCP services, for example: $ neutron dhcp-show To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1512969/+subscriptions
[Yahoo-eng-team] [Bug 1513645] [NEW] Inconsistent use of admin/non-admin context in neutron network api while preparing network info
Public bug reported: In allocate_for_instance, the Neutron network API calls Neutron to create the port(s) [1]. Once the port is created, it formats network info in the network info model before returning it to the Compute Manager [2]. To form network info, the API makes several calls to Neutron: 1. List ports for device id [3] 2. Get associated floating IPs [4] 3. Get subnets from port [5] & [6] Notice that in 3 & 4 the API uses an admin context to talk to Neutron, whereas in 6 it doesn't. This is inconsistent. Is this intentional?

[1] https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L716
[2] https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L739-L741
[3] https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L1671-L1676
[4] https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L1555-L1566
[5] https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L1568-L1573
[6] https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L1755-L1756

** Affects: nova Importance: Undecided Assignee: Shraddha Pandhe (shraddha-pandhe) Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1513645 Title: Inconsistent use of admin/non-admin context in neutron network api while preparing network info Status in OpenStack Compute (nova): New To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1513645/+subscriptions
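A toy illustration of the inconsistency described above. RequestContext.elevated() mirrors the real nova.context method; the two helper functions are schematic stand-ins for the call sites, not nova's actual code:

```python
class RequestContext:
    """Minimal stand-in for nova.context.RequestContext."""

    def __init__(self, user, is_admin=False):
        self.user = user
        self.is_admin = is_admin

    def elevated(self):
        # The real method returns an admin-privileged copy of the context.
        return RequestContext(self.user, is_admin=True)

def list_ports(context):
    # Call sites like steps 1-2 pass context.elevated() to the client.
    return 'admin' if context.is_admin else 'user'

def get_subnets_from_port(context):
    # The second call site in step 3 passes the caller's context unchanged.
    return 'admin' if context.is_admin else 'user'

ctx = RequestContext('demo')
assert list_ports(ctx.elevated()) == 'admin'
assert get_subnets_from_port(ctx) == 'user'  # the inconsistency
```

Whether the non-admin lookup is a deliberate scoping decision or an oversight is exactly the question the report raises; the sketch only makes the asymmetry visible.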
[Yahoo-eng-team] [Bug 1513654] [NEW] scheduler: disk_filter permits scheduling on full drives
Public bug reported: I use qcow images and have disk_allocation_ratio == 2.1 to allow large amounts of overcommitting of disk space. To quote the nova config reference:

> If the value is set to >1, we recommend keeping track of the free disk space, as the value approaching 0 may result in the incorrect functioning of instances using it at the moment.

Good advice, but 'keeping track' can be a bit impractical at times. I just now had the scheduler drop a large instance onto a server with a 98% full drive, since the behavior of disk_allocation_ratio intentionally ignores the actual free space on the drive. I propose that we add an additional config setting to the disk scheduler so that I can overschedule but can /also/ request that the scheduler stop piling things onto an already groaning server. ** Affects: nova Importance: Undecided Status: In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1513654 Title: scheduler: disk_filter permits scheduling on full drives Status in OpenStack Compute (nova): In Progress To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1513654/+subscriptions
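The proposal boils down to a second predicate next to the existing overcommit check; a sketch of the filter arithmetic, where min_free_disk_gb is the hypothetical new setting (not an existing nova option) and the function is a simplified stand-in for the scheduler's DiskFilter:

```python
def host_passes(free_disk_gb, total_usable_gb, used_virtual_gb,
                requested_gb, disk_allocation_ratio=2.1,
                min_free_disk_gb=20):
    # Standard overcommit check: virtual capacity vs. virtual usage.
    virtual_free = total_usable_gb * disk_allocation_ratio - used_virtual_gb
    if requested_gb > virtual_free:
        return False
    # Proposed extra check: refuse hosts whose *actual* free space would
    # drop below a floor, regardless of remaining virtual capacity.
    if free_disk_gb - requested_gb < min_free_disk_gb:
        return False
    return True

# A 1000 GB drive at 98% full still has ample virtual capacity at
# ratio 2.1 (2100 - 980 = 1120 GB), but the floor check rejects it:
nearly_full = host_passes(free_disk_gb=20, total_usable_gb=1000,
                          used_virtual_gb=980, requested_gb=80)
assert nearly_full is False
```

This keeps the overcommit ratio useful for thin-provisioned qcow images while capping how "groaning" a host is allowed to get, which is what the report asks for.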
[Yahoo-eng-team] [Bug 1513160] [NEW] UnsupportedObjectError on launching instance
You have been subscribed to a public bug: I set up OpenStack on a single machine about a month ago; it worked fine until I added a new nova-compute node. After adding the new nova-compute node I am not able to launch an instance; the launch gets stuck in the build state. I found the following exception in the new node's nova-compute log:

2015-11-04 23:02:20.460 2164 ERROR object [req-1696a514-fc24-49c0-af84-ec35bf67f7b1 af26d0f550b242428e8600f8a90a0d79 ae1eb9a146ed4c3a9bf030c73567330e] Unable to instantiate unregistered object type NetworkRequestList
2015-11-04 23:02:20.461 2164 ERROR oslo.messaging.rpc.dispatcher [-] Exception during message handling: Unsupported object type NetworkRequestList
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher incoming.message))
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher return self._do_dispatch(endpoint, method, ctxt, args)
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 121, in _do_dispatch
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher new_args[argname] = self.serializer.deserialize_entity(ctxt, arg)
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/rpc.py", line 111, in deserialize_entity
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher return self._base.deserialize_entity(context, entity)
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher File
"/usr/lib/python2.7/dist-packages/nova/objects/base.py", line 575, in deserialize_entity
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher entity = self._process_object(context, entity)
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/objects/base.py", line 542, in _process_object
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher objinst = NovaObject.obj_from_primitive(objprim, context=context)
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/objects/base.py", line 251, in obj_from_primitive
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher objclass = cls.obj_class_from_name(objname, objver)
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/objects/base.py", line 201, in obj_class_from_name
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher raise exception.UnsupportedObjectError(objtype=objname)
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher UnsupportedObjectError: Unsupported object type NetworkRequestList
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher
2015-11-04 23:02:20.463 2164 ERROR oslo.messaging._drivers.common [-] Returning exception Unsupported object type NetworkRequestList to caller
2015-11-04 23:02:20.464 2164 ERROR oslo.messaging._drivers.common [-] ['Traceback (most recent call last):\n', ' File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply\nincoming.message))\n', ' File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, args)\n', ' File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 121, in _do_dispatch\nnew_args[argname] = self.serializer.deserialize_entity(ctxt, arg)\n', ' File
"/usr/lib/python2.7/dist-packages/nova/rpc.py", line 111, in deserialize_entity\nreturn self._base.deserialize_entity(context, entity)\n', ' File "/usr/lib/python2.7/dist-packages/nova/objects/base.py", line 575, in deserialize_entity\nentity = self._process_object(context, entity)\n', ' File "/usr/lib/python2.7/dist-packages/nova/objects/base.py", line 542, in _process_object\nobjinst = NovaObject.obj_from_primitive(objprim, context=context)\n', ' File "/usr/lib/python2.7/dist-packages/nova/objects/base.py", line 251, in obj_from_primitive\nobjclass = cls.obj_class_from_name(objname, objver)\n', ' File "/usr/lib/python2.7/dist-packages/nova/objects/base.py", line 201, in obj_class_from_name\nraise exception.UnsupportedObjectError(objtype=objname)\n', 'UnsupportedObjectError: Unsupported object type NetworkRequestList\n']

nova conf of new node:
---
[DEFAULT]
dhcpbridge_flagfile=
[Yahoo-eng-team] [Bug 1513678] [NEW] At scale router scheduling takes a long time with DVR routers with multiple compute nodes hosting thousands of VMs
Public bug reported: At scale, with hundreds of compute nodes and thousands of VMs on networks routed by a Distributed Virtual Router, we are seeing a control-plane performance issue: it takes a long time for all the routers to be scheduled onto the nodes. _schedule_router calls _get_candidates, which internally calls get_l3_agent_candidates. For DVR routers, all the active agents are passed to get_l3_agent_candidates, which iterates through the agents and, for each agent, checks whether any DVR service ports exist on the routed subnet. This per-agent check appears to account for most of the time, so we need to pin down the issue and reduce the time taken for scheduling. ** Affects: neutron Importance: Undecided Assignee: Swaminathan Vasudevan (swaminathan-vasudevan) Status: In Progress ** Tags: l3-dvr-backlog -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1513678 Title: At scale router scheduling takes a long time with DVR routers with multiple compute nodes hosting thousands of VMs Status in neutron: In Progress Bug description: At scale, with hundreds of compute nodes and thousands of VMs on networks routed by a Distributed Virtual Router, we are seeing a control-plane performance issue: it takes a long time for all the routers to be scheduled onto the nodes. _schedule_router calls _get_candidates, which internally calls get_l3_agent_candidates. For DVR routers, all the active agents are passed to get_l3_agent_candidates, which iterates through the agents and, for each agent, checks whether any DVR service ports exist on the routed subnet. This per-agent check appears to account for most of the time, so we need to pin down the issue and reduce the time taken for scheduling.
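The per-agent lookup pattern described in the report can be sketched as follows. This is a minimal illustration, not neutron code: `candidates_per_agent`, `candidates_batched`, and the `ports_by_host` mapping are hypothetical stand-ins for the L3 agents and the per-agent DB query for DVR service ports.

```python
# Illustrative sketch only: agent dicts and ports_by_host are hypothetical
# stand-ins for neutron's L3 agents and its per-agent DVR-port DB query.

def candidates_per_agent(agents, ports_by_host):
    """Pattern from the report: one lookup per agent (N round trips)."""
    candidates = []
    for agent in agents:
        # In neutron this would be a DB query issued once per agent.
        if ports_by_host.get(agent["host"]):
            candidates.append(agent)
    return candidates

def candidates_batched(agents, ports_by_host):
    """Batched alternative: compute hosts with DVR ports in one pass."""
    hosts_with_ports = {h for h, ports in ports_by_host.items() if ports}
    return [a for a in agents if a["host"] in hosts_with_ports]
```

With hundreds of agents, replacing the per-agent query with a single batched lookup turns N database round trips into one, which is the kind of reduction this bug is after.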
To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1513678/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1513160] Re: UnsupportedObjectError on launching instance
** Project changed: oslo.messaging => nova -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1513160 Title: UnsupportedObjectError on launching instance Status in OpenStack Compute (nova): New Bug description: I had set up OpenStack on a single machine a month ago, and it worked fine until I added a new nova-compute node. Since then I am not able to launch an instance; the launch gets stuck in the build state. I found the following exception in the nova-compute log of the new node:
2015-11-04 23:02:20.460 2164 ERROR object [req-1696a514-fc24-49c0-af84-ec35bf67f7b1 af26d0f550b242428e8600f8a90a0d79 ae1eb9a146ed4c3a9bf030c73567330e] Unable to instantiate unregistered object type NetworkRequestList
2015-11-04 23:02:20.461 2164 ERROR oslo.messaging.rpc.dispatcher [-] Exception during message handling: Unsupported object type NetworkRequestList
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher incoming.message))
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher return self._do_dispatch(endpoint, method, ctxt, args)
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 121, in _do_dispatch
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher new_args[argname] = self.serializer.deserialize_entity(ctxt, arg)
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/rpc.py", line 111, in deserialize_entity
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher return self._base.deserialize_entity(context, entity)
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/objects/base.py", line 575, in deserialize_entity
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher entity = self._process_object(context, entity)
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/objects/base.py", line 542, in _process_object
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher objinst = NovaObject.obj_from_primitive(objprim, context=context)
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/objects/base.py", line 251, in obj_from_primitive
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher objclass = cls.obj_class_from_name(objname, objver)
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/objects/base.py", line 201, in obj_class_from_name
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher raise exception.UnsupportedObjectError(objtype=objname)
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher UnsupportedObjectError: Unsupported object type NetworkRequestList
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher
2015-11-04 23:02:20.463 2164 ERROR oslo.messaging._drivers.common [-] Returning exception Unsupported object type NetworkRequestList to caller
2015-11-04 23:02:20.464 2164 ERROR oslo.messaging._drivers.common [-] ['Traceback (most recent call last):\n', ' File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply\nincoming.message))\n', ' File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, args)\n', ' File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 121, in _do_dispatch\nnew_args[argname] = self.serializer.deserialize_entity(ctxt, arg)\n', ' File "/usr/lib/python2.7/dist-packages/nova/rpc.py", line 111, in deserialize_entity\nreturn self._base.deserialize_entity(context, entity)\n', ' File "/usr/lib/python2.7/dist-packages/nova/objects/base.py", line 575, in deserialize_entity\nentity = self._process_object(context, entity)\n', ' File "/usr/lib/python2.7/dist-packages/nova/objects/base.py", line 542, in _process_object\nobjinst = NovaObject.obj_from_primitive(objprim, context=context)\n', ' File "/usr/lib/python2.7/dist-packages/nova/objects/base.py", line 251, in obj_from_primitive\nobjclass = cls.obj_class_from_name
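The failure mode here is version skew between the nodes: the newer controller serializes a NetworkRequestList object over RPC, but the older nova on the new compute node has no such class in its object registry, so obj_class_from_name raises. A simplified sketch of that lookup follows; the dict-based registry and this UnsupportedObjectError class are stand-ins for illustration, not nova's real implementation.

```python
# Simplified stand-in for nova's versioned-object registry lookup. On an
# older node the registry simply never learned about NetworkRequestList,
# so resolving the name sent by a newer node fails.

class UnsupportedObjectError(Exception):
    pass

# Hypothetical registry: an old node that predates NetworkRequestList.
_registry = {"Instance": object}

def obj_class_from_name(objname):
    """Resolve an object name received over RPC to a local class."""
    try:
        return _registry[objname]
    except KeyError:
        raise UnsupportedObjectError(
            "Unsupported object type %s" % objname)
```

Under this model, the fix is to run matching nova versions on all nodes (or a version that knows the object), so the receiving registry can resolve every name the sender emits.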
[Yahoo-eng-team] [Bug 1513733] [NEW] nova api wsgi request lost, system messages show 'ValueError: I/O operation on closed file'
Public bug reported:
1. Exact version of Nova/OpenStack you are running: icehouse
2. Relevant log files: the log in /var/log/messages
Oct 28 17:30:32 controller1 nova-api[4206]: Traceback (most recent call last):
Oct 28 17:30:32 controller1 nova-api[4206]: File "/usr/lib/python2.7/site-packages/eventlet/greenpool.py", line 80, in _spawn_n_impl
Oct 28 17:30:32 controller1 nova-api[4206]: func(*args, **kwargs)
Oct 28 17:30:32 controller1 nova-api[4206]: File "/usr/lib/python2.7/site-packages/eventlet/wsgi.py", line 594, in process_request
Oct 28 17:30:32 controller1 nova-api[4206]: proto.__init__(sock, address, self)
Oct 28 17:30:32 controller1 nova-api[4206]: File "/usr/lib64/python2.7/SocketServer.py", line 649, in __init__
Oct 28 17:30:32 controller1 nova-api[4206]: self.handle()
Oct 28 17:30:32 controller1 nova-api[4206]: File "/usr/lib64/python2.7/BaseHTTPServer.py", line 340, in handle
Oct 28 17:30:32 controller1 nova-api[4206]: self.handle_one_request()
Oct 28 17:30:32 controller1 nova-api[4206]: File "/usr/lib/python2.7/site-packages/eventlet/wsgi.py", line 285, in handle_one_request
Oct 28 17:30:32 controller1 nova-api[4206]: self.handle_one_response()
Oct 28 17:30:32 controller1 nova-api[4206]: File "/usr/lib/python2.7/site-packages/eventlet/wsgi.py", line 453, in handle_one_response
Oct 28 17:30:32 controller1 nova-api[4206]: 'wall_seconds': finish - start,
Oct 28 17:30:32 controller1 nova-api[4206]: File "/usr/lib/python2.7/site-packages/eventlet/wsgi.py", line 603, in log_message
Oct 28 17:30:32 controller1 nova-api[4206]: self.log.write(message + '\n')
Oct 28 17:30:32 controller1 nova-api[4206]: File "/usr/lib/python2.7/site-packages/nova/openstack/common/log.py", line 575, in write
Oct 28 17:30:32 controller1 nova-api[4206]: self.logger.log(self.level, msg.rstrip())
Oct 28 17:30:32 controller1 nova-api[4206]: File "/usr/lib64/python2.7/logging/__init__.py", line 1471, in log
Oct 28 17:30:32 controller1 nova-api[4206]: self.logger.log(level, msg, *args, **kwargs)
Oct 28 17:30:32 controller1 nova-api[4206]: File "/usr/lib64/python2.7/logging/__init__.py", line 1213, in log
Oct 28 17:30:32 controller1 nova-api[4206]: self._log(level, msg, args, **kwargs)
Oct 28 17:30:32 controller1 nova-api[4206]: File "/usr/lib64/python2.7/logging/__init__.py", line 1268, in _log
Oct 28 17:30:32 controller1 nova-api[4206]: self.handle(record)
Oct 28 17:30:32 controller1 nova-api[4206]: File "/usr/lib64/python2.7/logging/__init__.py", line 1278, in handle
Oct 28 17:30:32 controller1 nova-api[4206]: self.callHandlers(record)
Oct 28 17:30:32 controller1 nova-api[4206]: File "/usr/lib64/python2.7/logging/__init__.py", line 1318, in callHandlers
Oct 28 17:30:32 controller1 nova-api[4206]: hdlr.handle(record)
Oct 28 17:30:32 controller1 nova-api[4206]: File "/usr/lib64/python2.7/logging/__init__.py", line 749, in handle
Oct 28 17:30:32 controller1 nova-api[4206]: self.emit(record)
Oct 28 17:30:32 controller1 nova-api[4206]: File "/usr/lib64/python2.7/logging/handlers.py", line 425, in emit
Oct 28 17:30:32 controller1 nova-api[4206]: self.stream.flush()
Oct 28 17:30:32 controller1 nova-api[4206]: ValueError: I/O operation on closed file
3. Reproduce steps: build many instances.
** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1513733 Title: nova api wsgi request lost, system messages show 'ValueError: I/O operation on closed file' Status in OpenStack Compute (nova): New Bug description:
1. Exact version of Nova/OpenStack you are running: icehouse
2. Relevant log files: the log in /var/log/messages
Oct 28 17:30:32 controller1 nova-api[4206]: Traceback (most recent call last):
Oct 28 17:30:32 controller1 nova-api[4206]: File "/usr/lib/python2.7/site-packages/eventlet/greenpool.py", line 80, in _spawn_n_impl
Oct 28 17:30:32 controller1 nova-api[4206]: func(*args, **kwargs)
Oct 28 17:30:32 controller1 nova-api[4206]: File "/usr/lib/python2.7/site-packages/eventlet/wsgi.py", line 594, in process_request
Oct 28 17:30:32 controller1 nova-api[4206]: proto.__init__(sock, address, self)
Oct 28 17:30:32 controller1 nova-api[4206]: File "/usr/lib64/python2.7/SocketServer.py", line 649, in __init__
Oct 28 17:30:32 controller1 nova-api[4206]: self.handle()
Oct 28 17:30:32 controller1 nova-api[4206]: File "/usr/lib64/python2.7/BaseHTTPServer.py", line 340, in handle
Oct 28 17:30:32 controller1 nova-api[4206]: self.handle_one_request()
Oct 28 17:30:32 controller1 nova-api[4206]: File "/usr/lib/python2.7/site-packages/eventlet/wsgi.py", line 285, in handle_one_request
Oct 28 17:30:32 controller1 nova-api[4206]: self.handle_one_response()
Oct 28 17:30:32 controller1 nova-api[4206]: File "/usr/lib/python2.7/site-packages/eventlet/wsgi.py", line 453, in handle_one_respo
[Yahoo-eng-team] [Bug 1513735] [NEW] VMware VCDriver: the _init_host and period power sync take too long time when there are thousands VMs
Public bug reported: When the nova-compute service starts, it runs _init_host, which initializes instances one by one, fetching VM info from vCenter one VM at a time. With thousands of VMs in the OpenStack deployment, this means thousands of requests to vCenter, which is a heavy network burden. The same pattern occurs when the compute manager performs its periodic power-state sync. If the power-state sync takes too long and consumes too much network bandwidth, it blocks ongoing normal VM deployment. ** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1513735 Title: VMware VCDriver: the _init_host and period power sync take too long time when there are thousands VMs Status in OpenStack Compute (nova): New Bug description: When the nova-compute service starts, it runs _init_host, which initializes instances one by one, fetching VM info from vCenter one VM at a time. With thousands of VMs in the OpenStack deployment, this means thousands of requests to vCenter, which is a heavy network burden. The same pattern occurs when the compute manager performs its periodic power-state sync. If the power-state sync takes too long and consumes too much network bandwidth, it blocks ongoing normal VM deployment. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1513735/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
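The likely direction for a fix is batching the vCenter lookups. The sketch below is schematic only: `fetch_vm_info` and `fetch_vm_info_bulk` are hypothetical callables standing in for per-VM retrieval versus a single bulk property query (in the real driver this would be something like a vSphere PropertyCollector retrieval across many managed objects); the point is reducing N round trips to one.

```python
# Schematic comparison of per-VM vs. batched vCenter queries. The fetch
# callables are hypothetical stand-ins, not the VMware driver's API.

def sync_power_states_naive(vm_ids, fetch_vm_info):
    """One vCenter round trip per VM: N requests for N VMs."""
    return {vm: fetch_vm_info(vm)["power_state"] for vm in vm_ids}

def sync_power_states_batched(vm_ids, fetch_vm_info_bulk):
    """A single batched request returns properties for every VM at once."""
    infos = fetch_vm_info_bulk(vm_ids)   # one request for all VMs
    return {vm: infos[vm]["power_state"] for vm in vm_ids}
```

With thousands of VMs, collapsing the per-VM requests into one (or a few paged) bulk queries would address both the _init_host startup cost and the periodic power-state sync described above.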