[Yahoo-eng-team] [Bug 1212625] Re: 'str' object has no attribute 'AndReturn'
In milestone-proposed at https://review.openstack.org/#/c/50832/

** Changed in: glance
       Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1212625

Title:
  'str' object has no attribute 'AndReturn'

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released

Bug description:
  Fix up for review here: https://review.openstack.org/#/c/49526/

  ==
  ERROR: glance.tests.unit.test_store_image.TestStoreAddToBackend.test_bad_metadata_not_dict
  --
  _StringException: Traceback (most recent call last):
    File "/opt/stack/glance/glance/tests/unit/test_store_image.py", line 788, in test_bad_metadata_not_dict
      store.__str__().AndReturn(('hello'))
  AttributeError: 'str' object has no attribute 'AndReturn'

  ==
  ERROR: glance.tests.unit.test_store_image.TestStoreAddToBackend.test_bad_nonunicode_dict_list
  --
  _StringException: Traceback (most recent call last):
    File "/opt/stack/glance/glance/tests/unit/test_store_image.py", line 782, in test_bad_nonunicode_dict_list
      self._bad_metadata(m)
    File "/opt/stack/glance/glance/tests/unit/test_store_image.py", line 713, in _bad_metadata
      store.__str__().AndReturn(('hello'))
  AttributeError: 'str' object has no attribute 'AndReturn'

  ==
  ERROR: glance.tests.unit.test_store_image.TestStoreAddToBackend.test_bad_top_level_nonunicode
  --
  _StringException: Traceback (most recent call last):
    File "/opt/stack/glance/glance/tests/unit/test_store_image.py", line 776, in test_bad_top_level_nonunicode
      self._bad_metadata(metadata)
    File "/opt/stack/glance/glance/tests/unit/test_store_image.py", line 713, in _bad_metadata
      store.__str__().AndReturn(('hello'))
  AttributeError: 'str' object has no attribute 'AndReturn'

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1212625/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
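[Editor's note] The error above is easy to reproduce outside of mox and Glance. mox's `AndReturn()` only exists on a *recorded* mock call; here the test invoked a real `__str__()`, got back an ordinary `str`, and then tried to call `AndReturn()` on it. A minimal sketch (FakeStore is a stand-in, not the real test fixture):

```python
# Minimal reproduction of the failure, independent of mox/glance.
# FakeStore is illustrative only.

class FakeStore(object):
    def __str__(self):
        return "hello"

store = FakeStore()

# The broken test called .AndReturn() on the *result* of __str__(),
# which is a plain str, not a mox-recorded call:
try:
    store.__str__().AndReturn(('hello'))
except AttributeError as e:
    # AttributeError: 'str' object has no attribute 'AndReturn'
    error_message = str(e)
```

The fix in the review linked above is to record the expectation on a mocked store object instead of calling the method on a real string.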
[Yahoo-eng-team] [Bug 1236339] Re: Glance functional tests fail against swift backend
Reviewed: https://review.openstack.org/51048
Committed: http://github.com/openstack/glance/commit/c6fd948243dc42722b8ff89cdbeba1d90cb76851
Submitter: Jenkins
Branch: milestone-proposed

commit c6fd948243dc42722b8ff89cdbeba1d90cb76851
Author: Thomas Leaman
Date: Mon Oct 7 14:24:03 2013 +

    Update functional tests for swift changes

    It seems that a couple of changes to the store calls have slipped
    through unnoticed:
    * store.add() now returns a 4-tuple
    * x-container-read and x-container-write headers use tenant:user
      rather than just tenant.

    fixes bug 1236339

    Change-Id: Ic5c311bcc2999e13203e0d3ad37bce5ac5f27e2d

** Changed in: glance
       Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1236339

Title:
  Glance functional tests fail against swift backend

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released

Bug description:
  ==
  ERROR: glance.tests.functional.store.test_swift.TestSwiftStore.test_multitenant
  --
  _StringException: Traceback (most recent call last):
    File "/home/ubuntu/glance/glance/tests/functional/store/test_swift.py", line 338, in test_multitenant
      uri, _, _ = store.add(image_id, image_data, 3)
  ValueError: too many values to unpack

  ==
  ERROR: glance.tests.functional.store.test_swift.TestSwiftStore.test_object_chunking
  --
  _StringException: Traceback (most recent call last):
    File "/home/ubuntu/glance/glance/tests/functional/store/test_swift.py", line 201, in test_object_chunking
      image_size)
  ValueError: too many values to unpack

  It looks like the store.add() call that both of these are complaining
  about started returning a fourth value (see commit
  74e4fe25dc7c2fa9868d0b1f09a13d4cf3e8a57f).

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1236339/+subscriptions
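[Editor's note] The `ValueError: too many values to unpack` is the classic symptom of a return-arity change. A hedged sketch of the before/after (the stand-in functions below are illustrative; the real signature change is in the Glance commit referenced above):

```python
# Illustrative stand-ins for the old and new store.add() return shapes:
# the old call returned (location, size, checksum); the new one also
# returns a metadata dict, so three-name unpacking now fails.

def add_old(image_id, data, size):
    return ('swift://host/container/obj', size, 'abc123')       # 3-tuple

def add_new(image_id, data, size):
    return ('swift://host/container/obj', size, 'abc123', {})   # 4-tuple

uri, size, checksum = add_old('img', b'x', 1)        # fine

try:
    uri, size, checksum = add_new('img', b'x', 1)    # ValueError
except ValueError:
    pass

# The fix in the functional tests is to unpack the fourth value too:
uri, size, checksum, metadata = add_new('img', b'x', 1)
```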
[Yahoo-eng-team] [Bug 1231255] Re: Glance GET /v2/images fails with 500 due to erroneous policy check
Reviewed: https://review.openstack.org/51044
Committed: http://github.com/openstack/glance/commit/005904da775a809d4319310d6e3a79104aa27ba1
Submitter: Jenkins
Branch: milestone-proposed

commit 005904da775a809d4319310d6e3a79104aa27ba1
Author: Fei Long Wang
Date: Thu Sep 26 15:49:01 2013 +0800

    Glance GET /v2/images fails with 500 due to erroneous policy check

    This patch fixes the following two issues of the V2
    ResponseSerializer for images-list, image-show, image-update and
    image-download:
    1. A user should be able to list/show/update/download an image
       without needing permission on get_image_location.
    2. A policy failure should result in a 403 return code. We're
       getting a 500.

    Fixes bug 1231255

    Change-Id: Ie0ec2d574eea4433c4f610ec66a22cb16cae6dc6

** Changed in: glance
       Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1231255

Title:
  Glance GET /v2/images fails with 500 due to erroneous policy check

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released

Bug description:
  A user with 'viewer' authority per the following policy receives a
  500 error when calling glance v2/images. The user is successfully
  able to get a list of images and details when calling
  /v1/images/detail.

  Policy:
  {
      "admin_only": "role:admin",
      "admin_or_deployer": "role:admin or role:deployer",
      "admin_or_deployer_or_viewer": "role:admin or role:deployer or role:viewer",
      "default": "rule:admin_or_deployer",
      "get_images": "rule:admin_or_deployer_or_viewer",
      "get_image": "rule:admin_or_deployer_or_viewer",
      "download_image": "rule:admin_or_deployer",
      "add_image": "rule:admin_or_deployer",
      "modify_image": "rule:admin_or_deployer",
      "publicize_image": "rule:admin_or_deployer",
      "delete_image": "rule:admin_or_deployer",
      "manage_image_cache": "role:admin"
  }

  Based on the investigation, it is due to a failed policy check on the
  'get_image_location' rule while the REST response is being
  serialized. There are several things wrong with this:

  1. A user should be able to list images without needing permission on
     get_image_location.
  2. Image location output on the image detail APIs is controlled by
     the Glance CONF settings CONF.show_multiple_location and
     CONF.show_image_direct_url. By default, both of them are False, so
     the location would not be returned anyway, and there would be no
     need to do the policy check in this particular case.
  3. A policy failure should result in a 403 return code. We're getting
     a 500.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1231255/+subscriptions
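[Editor's note] The three problems listed in the description can be sketched as one serializer pattern: only enforce the location policy when a location would actually be emitted, and translate a policy failure into a 403 rather than letting it escape as a 500. This is a hedged sketch under stand-in names (`PolicyNotAuthorized`, `HTTPForbidden`, `serialize_image` are illustrative, not the exact Glance symbols):

```python
# Stand-ins for the policy and webob exception types.
class PolicyNotAuthorized(Exception):
    pass

class HTTPForbidden(Exception):
    code = 403  # a policy failure should surface as 403, not 500

def enforce(rule, allowed_rules):
    """Raise PolicyNotAuthorized unless 'rule' is granted."""
    if not allowed_rules.get(rule):
        raise PolicyNotAuthorized(rule)

def serialize_image(image, allowed_rules, show_location=False):
    body = {'id': image['id']}
    # Only check get_image_location when a location would actually be
    # returned (i.e. show_multiple_location / show_image_direct_url).
    # A plain list/show needs no location permission at all.
    if show_location:
        try:
            enforce('get_image_location', allowed_rules)
            body['location'] = image['location']
        except PolicyNotAuthorized:
            raise HTTPForbidden()  # 403 instead of an unhandled 500
    return body
```

With this shape, a 'viewer' who holds only `get_images`/`get_image` can list images, and the location check fires only when location output is enabled.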
[Yahoo-eng-team] [Bug 1233097] Re: rbd delete_image does not catch ImageNotFound when deleting snap
Reviewed: https://review.openstack.org/51045
Committed: http://github.com/openstack/glance/commit/1fd64662fe66eda5fee404a37586f30cae2bf785
Submitter: Jenkins
Branch: milestone-proposed

commit 1fd64662fe66eda5fee404a37586f30cae2bf785
Author: Edward Hope-Morley
Date: Mon Sep 30 12:22:30 2013 +0100

    Fixes rbd _delete_image snapshot with missing image

    Also ignore if _delete_image returns NotFound when cleaning up
    following a failed attempt to create a new image with add().

    Also added unit tests.

    Change-Id: Id66866b4260385a6324cc277c5ac665f81493c89
    Fixes: bug 1233097

** Changed in: glance
       Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1233097

Title:
  rbd delete_image does not catch ImageNotFound when deleting snap

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released

Bug description:
  The store.rbd._delete_image() method does not catch rbd.ImageNotFound
  and return exception.NotFound when trying to delete a snapshot. The
  behaviour should be the same as when deleting the image itself.

  This produces errors like this:

  2013-09-30 14:30:10.139 442 ERROR glance.api.v2.image_data [0be083f3-4d9f-4d88-9264-b69ca9bbd2c2 18ab8e3ef26b499a8c581b8f18f2d33f 133f3a73a0294e98aaee00e8139bb922] Failed to upload image data due to internal error
  2013-09-30 14:30:10.139 442 TRACE glance.api.v2.image_data Traceback (most recent call last):
  2013-09-30 14:30:10.139 442 TRACE glance.api.v2.image_data   File "/mnt/glance/glance/api/v2/image_data.py", line 55, in upload
  2013-09-30 14:30:10.139 442 TRACE glance.api.v2.image_data     image.set_data(data, size)
  2013-09-30 14:30:10.139 442 TRACE glance.api.v2.image_data   File "/mnt/glance/glance/domain/proxy.py", line 126, in set_data
  2013-09-30 14:30:10.139 442 TRACE glance.api.v2.image_data     self.base.set_data(data, size)
  2013-09-30 14:30:10.139 442 TRACE glance.api.v2.image_data   File "/mnt/glance/glance/notifier/__init__.py", line 202, in set_data
  2013-09-30 14:30:10.139 442 TRACE glance.api.v2.image_data     self.image.set_data(data, size)
  2013-09-30 14:30:10.139 442 TRACE glance.api.v2.image_data   File "/mnt/glance/glance/domain/proxy.py", line 126, in set_data
  2013-09-30 14:30:10.139 442 TRACE glance.api.v2.image_data     self.base.set_data(data, size)
  2013-09-30 14:30:10.139 442 TRACE glance.api.v2.image_data   File "/mnt/glance/glance/quota/__init__.py", line 140, in set_data
  2013-09-30 14:30:10.139 442 TRACE glance.api.v2.image_data     self.image.set_data(data, size=size)
  2013-09-30 14:30:10.139 442 TRACE glance.api.v2.image_data   File "/mnt/glance/glance/store/__init__.py", line 644, in set_data
  2013-09-30 14:30:10.139 442 TRACE glance.api.v2.image_data     self.image.image_id, utils.CooperativeReader(data), size)
  2013-09-30 14:30:10.139 442 TRACE glance.api.v2.image_data   File "/mnt/glance/glance/store/__init__.py", line 355, in add_to_backend
  2013-09-30 14:30:10.139 442 TRACE glance.api.v2.image_data     return store_add_to_backend(image_id, data, size, store)
  2013-09-30 14:30:10.139 442 TRACE glance.api.v2.image_data   File "/mnt/glance/glance/store/__init__.py", line 333, in store_add_to_backend
  2013-09-30 14:30:10.139 442 TRACE glance.api.v2.image_data     (location, size, checksum, metadata) = store.add(image_id, data, size)
  2013-09-30 14:30:10.139 442 TRACE glance.api.v2.image_data   File "/mnt/glance/glance/store/rbd.py", line 330, in add
  2013-09-30 14:30:10.139 442 TRACE glance.api.v2.image_data     self._delete_image(loc.image, loc.snapshot)
  2013-09-30 14:30:10.139 442 TRACE glance.api.v2.image_data   File "/mnt/glance/glance/store/rbd.py", line 267, in _delete_image
  2013-09-30 14:30:10.139 442 TRACE glance.api.v2.image_data     image.unprotect_snap(snapshot_name)
  2013-09-30 14:30:10.139 442 TRACE glance.api.v2.image_data   File "/usr/lib/python2.7/dist-packages/rbd.py", line 578, in unprotect_snap
  2013-09-30 14:30:10.139 442 TRACE glance.api.v2.image_data     raise make_ex(ret, 'error unprotecting snapshot %s@%s' % (self.name, name))
  2013-09-30 14:30:10.139 442 TRACE glance.api.v2.image_data ImageNotFound: error unprotecting snapshot bccb3478-e7cf-4661-86ba-d405cffe0912@snap

  Here we are doing a cleanup but the image may not exist, in which
  case we would want to ignore the error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1233097/+subscriptions
[Yahoo-eng-team] [Bug 1235111] Re: nec-agent: port_added message can be dropped when RPC timeout occurs
In milestone-proposed: https://review.openstack.org/#/c/50383/

** Changed in: neutron
       Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1235111

Title:
  nec-agent: port_added message can be dropped when RPC timeout occurs

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron grizzly series:
  Fix Committed

Bug description:
  The NEC agent reports OpenFlow ports added on OVS through RPC. If an
  RPC exception occurs while sending the RPC message, the added port
  information is not retransmitted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1235111/+subscriptions
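[Editor's note] The general remedy for "notification dropped on RPC failure" is to keep the ports in a pending set and only discard each one after the RPC call succeeds, so the next loop iteration retries instead of losing the event. A minimal sketch under assumed names (`PortReporter` and its callback are illustrative, not the actual nec-agent code):

```python
class PortReporter(object):
    """Report added ports over an unreliable RPC callable, retrying
    any ports whose report failed (illustrative sketch)."""

    def __init__(self, rpc_call):
        self.rpc_call = rpc_call
        self.pending = set()   # ports not yet acknowledged by the server

    def port_added(self, port_id):
        self.pending.add(port_id)
        self.flush()

    def flush(self):
        for port_id in list(self.pending):
            try:
                self.rpc_call('port_added', port_id)
            except Exception:
                # Keep the port in self.pending: the next daemon-loop
                # iteration retries it instead of dropping the event.
                return
            self.pending.discard(port_id)
```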
[Yahoo-eng-team] [Bug 1235106] Re: nec-agent dies with RPC timeout in secgroup RPC
In milestone-proposed: https://review.openstack.org/#/c/50383/

** Changed in: neutron
       Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1235106

Title:
  nec-agent dies with RPC timeout in secgroup RPC

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron grizzly series:
  Fix Committed

Bug description:
  nec-agent dies when an RPC timeout in security group RPC is detected.

  2013-10-04 16:14:38.230 24923 DEBUG neutron.openstack.common.rpc.amqp [-] UNIQUE_ID is d2b295dac2f249c2bace8ce834f0e4fc. _add_unique_id /opt/stack/neutron/neutron/openstack/common/rpc/amqp.py:339
  2013-10-04 16:14:38.231 24923 DEBUG amqp [-] Closed channel #1 _do_close /usr/local/lib/python2.7/dist-packages/amqp/channel.py:88
  2013-10-04 16:14:38.232 24923 DEBUG amqp [-] using channel_id: 1 __init__ /usr/local/lib/python2.7/dist-packages/amqp/channel.py:70
  2013-10-04 16:14:38.233 24923 DEBUG amqp [-] Channel open _open_ok /usr/local/lib/python2.7/dist-packages/amqp/channel.py:420
  2013-10-04 16:14:42.127 24923 CRITICAL neutron [-] Timeout while waiting on RPC response - topic: "q-plugin", RPC method: "security_group_rules_for_devices" info: ""
  2013-10-04 16:14:42.127 24923 TRACE neutron Traceback (most recent call last):
  2013-10-04 16:14:42.127 24923 TRACE neutron   File "/usr/local/bin/neutron-nec-agent", line 10, in <module>
  2013-10-04 16:14:42.127 24923 TRACE neutron     sys.exit(main())
  2013-10-04 16:14:42.127 24923 TRACE neutron   File "/opt/stack/neutron/neutron/plugins/nec/agent/nec_neutron_agent.py", line 242, in main
  2013-10-04 16:14:42.127 24923 TRACE neutron     agent.daemon_loop()
  2013-10-04 16:14:42.127 24923 TRACE neutron   File "/opt/stack/neutron/neutron/plugins/nec/agent/nec_neutron_agent.py", line 219, in daemon_loop
  2013-10-04 16:14:42.127 24923 TRACE neutron     self._process_security_group(port_added, port_removed)
  2013-10-04 16:14:42.127 24923 TRACE neutron   File "/opt/stack/neutron/neutron/plugins/nec/agent/nec_neutron_agent.py", line 193, in _process_security_group
  2013-10-04 16:14:42.127 24923 TRACE neutron     self.sg_agent.prepare_devices_filter(devices_added)
  2013-10-04 16:14:42.127 24923 TRACE neutron   File "/opt/stack/neutron/neutron/agent/securitygroups_rpc.py", line 120, in prepare_devices_filter
  2013-10-04 16:14:42.127 24923 TRACE neutron     self.context, list(device_ids))
  2013-10-04 16:14:42.127 24923 TRACE neutron   File "/opt/stack/neutron/neutron/agent/securitygroups_rpc.py", line 58, in security_group_rules_for_devices
  2013-10-04 16:14:42.127 24923 TRACE neutron     topic=self.topic)
  2013-10-04 16:14:42.127 24923 TRACE neutron   File "/opt/stack/neutron/neutron/openstack/common/rpc/proxy.py", line 130, in call
  2013-10-04 16:14:42.127 24923 TRACE neutron     exc.info, real_topic, msg.get('method'))
  2013-10-04 16:14:42.127 24923 TRACE neutron Timeout: Timeout while waiting on RPC response - topic: "q-plugin", RPC method: "security_group_rules_for_devices" info: ""
  2013-10-04 16:14:42.127 24923 TRACE neutron

  q-agt failed to start

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1235106/+subscriptions
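[Editor's note] The traceback shows the `Timeout` propagating all the way out of `daemon_loop()` and killing the process. The usual hardening is to catch the timeout inside the loop, log it, and retry on the next iteration. A hedged sketch under stand-in names (`RPCTimeout`, `daemon_loop`, and the `max_iterations` knob are illustrative, not the nec_neutron_agent API):

```python
import logging
import time

LOG = logging.getLogger(__name__)

class RPCTimeout(Exception):
    """Stand-in for the RPC layer's Timeout exception."""

def daemon_loop(process_once, interval=0, max_iterations=None):
    """Run process_once() repeatedly; an RPC timeout is logged and
    retried instead of terminating the agent."""
    iterations = 0
    while max_iterations is None or iterations < max_iterations:
        iterations += 1
        try:
            process_once()
        except RPCTimeout:
            LOG.warning("RPC timeout in agent loop; will retry")
        time.sleep(interval)
```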
[Yahoo-eng-team] [Bug 1233275] Re: Avoid printing URIs which can contain credentials
milestone-proposed: https://review.openstack.org/#/c/51127/

** Changed in: glance
       Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1233275

Title:
  Avoid printing URIs which can contain credentials

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released

Bug description:
  This is a recurrence of https://bugs.launchpad.net/glance/+bug/1171851

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1233275/+subscriptions
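[Editor's note] The class of leak here is a store URI such as `swift://user:password@host/container/obj` landing in a log line. A minimal sketch of the general mitigation, masking the userinfo part before logging (`scrub_uri` is an illustrative helper, not the Glance function):

```python
from urllib.parse import urlsplit, urlunsplit

def scrub_uri(uri):
    """Return the URI with any user:password userinfo masked."""
    parts = urlsplit(uri)
    if parts.username is None and parts.password is None:
        return uri  # nothing sensitive in the netloc
    host = parts.hostname or ''
    if parts.port:
        host = '%s:%d' % (host, parts.port)
    netloc = '***:***@' + host
    return urlunsplit((parts.scheme, netloc, parts.path,
                       parts.query, parts.fragment))
```

Logging then uses `scrub_uri(location)` instead of the raw location string.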
[Yahoo-eng-team] [Bug 1237102] Re: Conductor does not properly copy objects during change tracking
Reviewed: https://review.openstack.org/51076
Committed: http://github.com/openstack/nova/commit/157249a69f5e99a94df36f00adb139c353cac25e
Submitter: Jenkins
Branch: milestone-proposed

commit 157249a69f5e99a94df36f00adb139c353cac25e
Author: Dan Smith
Date: Mon Oct 7 13:02:09 2013 -0700

    Fix conductor's object change detection

    Conductor was doing a copy.copy() on the inbound object to later
    detect changes that should be sent back to the caller. This does not
    copy things like Instance.system_metadata and thus is incapable of
    properly detecting changes that should be tracked. This patch makes
    conductor use obj_clone(), and imports Chris Behrens' __deepcopy__
    fix for objects so that deepcopy works.

    Closes-bug: #1237102
    Change-Id: I46ae8b0694dc31a90c1a5cdf76757d877877f072
    (cherry picked from commit 73b3bf91df00059c69dc1dd81e4554ec24c647b1)

** Changed in: nova
       Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1237102

Title:
  Conductor does not properly copy objects during change tracking

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The conductor object_action() method does a shallow copy of the
  instance in order to do change tracking after the method is called.
  This is not sufficient, as complex types like dicts and lists will
  not be copied and the change detection logic will then think those
  fields have not changed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1237102/+subscriptions
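[Editor's note] Why a shallow copy defeats change detection is worth a five-line demonstration. A shallow copy shares nested containers with the original, so mutating something like `system_metadata` mutates the "snapshot" too, and a later comparison sees no difference (plain dicts stand in for the Nova object here):

```python
import copy

instance = {'host': 'node1', 'system_metadata': {'image_os': 'linux'}}

# Shallow copy: the nested system_metadata dict is *shared*.
snapshot = copy.copy(instance)
instance['system_metadata']['image_os'] = 'windows'
shallow_sees_change = snapshot['system_metadata'] != instance['system_metadata']
# shallow_sees_change is False: the snapshot mutated along with the
# original, so the conductor would report "nothing changed".

# Deep copy: the nested dict is duplicated, so mutation is detectable.
snapshot = copy.deepcopy(instance)
instance['system_metadata']['image_os'] = 'linux'
deep_sees_change = snapshot['system_metadata'] != instance['system_metadata']
# deep_sees_change is True: change tracking works.
```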
[Yahoo-eng-team] [Bug 1233789] Re: Object actions via conductor will result in verbose exception logging
Reviewed: https://review.openstack.org/51075
Committed: http://github.com/openstack/nova/commit/b64ea7c2cb76c476a178deeed6ab9e83676faf05
Submitter: Jenkins
Branch: milestone-proposed

commit b64ea7c2cb76c476a178deeed6ab9e83676faf05
Author: Dan Smith
Date: Tue Oct 1 12:12:13 2013 -0700

    Avoid spamming conductor logs with object exceptions

    Conductor's logs should include tracebacks only when something
    unexpected happened, which is why we have the client_exceptions()
    decorator. The object_action() and object_class_action() methods are
    used for direct remoting of object methods, and thus really should
    forward *any* exception to the client. This patch does that, and
    also adds missing tests for these two methods to verify the normal
    and exception-wrapped behavior.

    Closes-bug: #1233789
    Closes-bug: #1084706
    Change-Id: I505462fa429a6aa68e7b8a08ec2b704bf18d029c
    (cherry picked from commit 3a5e1faee04671f2e88b28d805b191b480054254)

** Changed in: nova
       Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1233789

Title:
  Object actions via conductor will result in verbose exception logging

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  See http://logs.openstack.org/87/44287/9/check/gate-tempest-devstack-
  vm-full/c3a07eb/logs/screen-n-cond.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1233789/+subscriptions
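[Editor's note] The pattern the commit message describes, forwarding any exception from a remoted object method as an "expected" client exception so the server skips traceback logging, can be sketched as follows. `ClientException` and `object_action` here are illustrative stand-ins for the wrapping done by the `client_exceptions()` decorator, not the actual Nova code:

```python
class ClientException(Exception):
    """Marks an exception as expected: forwarded to the caller,
    not logged with a server-side traceback."""
    def __init__(self, inner):
        super().__init__(str(inner))
        self.inner = inner

def object_action(obj, method, *args, **kwargs):
    """Directly remote a method call on an object; *any* exception the
    method raises belongs to the caller, so wrap it as expected."""
    try:
        return getattr(obj, method)(*args, **kwargs)
    except Exception as exc:
        raise ClientException(exc)
```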
[Yahoo-eng-team] [Bug 1237028] Re: Fix dhcp_release lease race condition
Reviewed: https://review.openstack.org/51005
Committed: http://github.com/openstack/neutron/commit/ba99783a80ee0f92135dcfd562f3cfc9dad46d86
Submitter: Jenkins
Branch: milestone-proposed

commit ba99783a80ee0f92135dcfd562f3cfc9dad46d86
Author: Aaron Rosen
Date: Tue Oct 8 12:24:21 2013 -0700

    Fix dhcp_release lease race condition

    There is a possible race condition when deleting or updating
    fixed_ips on ports, where an instance could renew its ip address
    again after dhcp_release has already been executed. To fix this, the
    order of reload_allocation and release_lease needs to be switched.
    This way an instance will not be able to renew its ip address after
    it is removed from the host file.

    Fixes bug: 1237028
    Change-Id: If05ec2be507378c634f5c1856dab0fbd396f43cc

** Changed in: neutron
       Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1237028

Title:
  Fix dhcp_release lease race condition

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  There is a possible race condition when deleting or updating
  fixed_ips on ports, where an instance could renew its ip address
  again after dhcp_release has already been executed. To fix this, the
  order of reload_allocation and release_lease needs to be switched.
  This way an instance will not be able to renew its ip address after
  it is removed from the host file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1237028/+subscriptions
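[Editor's note] The ordering argument is compact enough to state in code. Releasing the lease first leaves a window in which the instance can renew against a host file that still offers its old address; reloading the allocations first closes that window. A hedged sketch with stand-in names (`update_fixed_ip` and the fake driver interface are illustrative, not the dhcp-agent API):

```python
def update_fixed_ip(dnsmasq, port_mac, old_ip):
    """Remove old_ip for port_mac without a renew race (sketch).

    Fixed order per the commit above:
    1. reload_allocations() - rewrite dnsmasq's host file so the old
       address is no longer offered;
    2. release_lease() - force-expire the existing lease.

    Doing release_lease() first would leave a window where the
    instance renews old_ip before the host file is rewritten.
    """
    dnsmasq.reload_allocations()
    dnsmasq.release_lease(port_mac, old_ip)
```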
[Yahoo-eng-team] [Bug 1236176] Re: Bad parameter to ConfigFilesNotFoundError
Reviewed: https://review.openstack.org/50939
Committed: http://github.com/openstack/nova/commit/d120cedb4e5b9ab055e0f47562eb461e0805daba
Submitter: Jenkins
Branch: milestone-proposed

commit d120cedb4e5b9ab055e0f47562eb461e0805daba
Author: Arata Notsu
Date: Tue Oct 8 13:18:04 2013 +0900

    Correct use of ConfigFilesNotFoundError

    ConfigFilesNotFoundError.__init__() takes "config_files", not
    "path". And pass the config value itself rather than the result of
    find_file(), which is always None.

    Change-Id: Ia5285d252d5636892c4fbeb9191a6c7ed4923b78
    Closes-Bug: 1236176
    (cherry picked from commit 84b02ca1d54fd058c68345068832e84d2f80b9a5)

** Changed in: nova
       Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1236176

Title:
  Bad parameter to ConfigFilesNotFoundError

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  In nova/cells/state.py, ConfigFilesNotFoundError is created as below:

      raise cfg.ConfigFilesNotFoundError(path=config_path)

  However, ConfigFilesNotFoundError.__init__() is defined as below:

      def __init__(self, config_files):
          self.config_files = config_files

  So it fails.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1236176/+subscriptions
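[Editor's note] The failure mode is a plain keyword-argument mismatch: the constructor rejects `path=` with a `TypeError` before the intended `ConfigFilesNotFoundError` can even be raised. A minimal reproduction with a local stand-in class (the message format and the example path are illustrative, not oslo.config's exact text):

```python
class ConfigFilesNotFoundError(Exception):
    """Local stand-in mirroring the real signature: takes
    'config_files', not 'path'."""
    def __init__(self, config_files):
        super().__init__('Failed to read some config files: %s'
                         % ','.join(config_files))
        self.config_files = config_files

config_path = '/etc/nova/cells.json'   # hypothetical path

# Broken call as in nova/cells/state.py: unexpected keyword 'path',
# so a TypeError fires instead of the intended exception.
try:
    raise ConfigFilesNotFoundError(path=config_path)
except TypeError:
    pass

# Corrected call: pass the config path(s) as 'config_files'.
err = ConfigFilesNotFoundError(config_files=[config_path])
```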
[Yahoo-eng-team] [Bug 1238488] Re: Can't launch windows instance VMware 5.1
OK sorry guys, false alarm. After I found this
http://www.thinkvirt.com/?q=node/181 and updated the metadata to
vmware_ostype=windows7Server64Guest, that seems to have resolved the
current issue and allowed it to spawn. Thanks.

** Changed in: nova
       Status: New => Invalid

** Changed in: openstack-vmwareapi-team
       Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1238488

Title:
  Can't launch windows instance VMware 5.1

Status in OpenStack Compute (Nova):
  Invalid
Status in The OpenStack VMwareAPI subTeam:
  Invalid

Bug description:
  Uploaded a Windows Server 2012 image to glance with the following
  attributes (same we are using for our Linux images):

  +-------------------------------+--------------------------------------+
  | Property                      | Value                                |
  +-------------------------------+--------------------------------------+
  | Property 'vmware_adaptertype' | lsiLogic                             |
  | Property 'vmware_disktype'    | eagerZeroedThick                     |
  | Property 'vmware_ostype'      | windowsGuest                         |
  | checksum                      | e4554ab96614ce8e897c1fba0742ccf0     |
  | container_format              | ovf                                  |
  | created_at                    | 2013-10-11T07:12:16                  |
  | deleted                       | False                                |
  | disk_format                   | vmdk                                 |
  | id                            | e7a874af-ce41-4d86-a80e-c675460d77de |
  | is_public                     | True                                 |
  | min_disk                      | 0                                    |
  | min_ram                       | 0                                    |
  | name                          | WS2012SQL                            |
  | owner                         | bde4b0c3645c49f9a0a2788c6685e40c     |
  | protected                     | False                                |
  | size                          | 2882216448                           |
  | status                        | active                               |
  | updated_at                    | 2013-10-11T07:13:26                  |
  +-------------------------------+--------------------------------------+

  nova refuses to boot the image however. The log is quite long so I've
  attached it below. Running Ubuntu 12.04.2 LTS; everything is running
  Grizzly from the Ubuntu Cloud Archive except nova-compute's
  /usr/share/pyshared/nova/virt/vmwareapi, which is running from
  VMware-provided source.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1238488/+subscriptions
[Yahoo-eng-team] [Bug 1233817] Re: python-django-horizon (havana) does not require python-openstack-auth >= 1.1.1
** Changed in: cloud-archive
       Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1233817

Title:
  python-django-horizon (havana) does not require python-openstack-auth
  >= 1.1.1

Status in Ubuntu Cloud Archive:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Using the openstack-dashboard package from the cloud archive,
  openstack havana release, I am unable to log in to horizon. After
  submitting the login form, I am simply redirected back to the login
  form. There is no error displayed by the browser, or logged by
  apache.

  It is first necessary to address #1210253 to get horizon to load at
  all, which I did by applying a patch similar to
  http://launchpadlibrarian.net/148082203/horizon_1%3A2013.2~b2-0ubuntu2_1%3A2013.2~b2-0ubuntu3.diff.gz
  to the UCA package. I also set DEBUG=True in local_settings.py, and
  changed the SECRET_KEY path to /tmp, so that www-data is able to
  write the generated file.

  When I submit the login form, Apache logs the following:

  ==> /var/log/apache2/error.log <==
  [Tue Oct 01 19:12:08 2013] [error] DEBUG:openstack_auth.backend:Beginning user authentication for user "admin".
  [Tue Oct 01 19:12:08 2013] [error] INFO:urllib3.connectionpool:Starting new HTTP connection (1): 127.0.0.1
  [Tue Oct 01 19:12:08 2013] [error] DEBUG:urllib3.connectionpool:"POST /v2.0/tokens HTTP/1.1" 200 1309
  [Tue Oct 01 19:12:08 2013] [error] INFO:urllib3.connectionpool:Starting new HTTP connection (1): 127.0.0.1
  [Tue Oct 01 19:12:08 2013] [error] DEBUG:urllib3.connectionpool:"GET /v2.0/tenants HTTP/1.1" 200 143
  [Tue Oct 01 19:12:08 2013] [error] INFO:urllib3.connectionpool:Starting new HTTP connection (1): 127.0.0.1
  [Tue Oct 01 19:12:08 2013] [error] DEBUG:urllib3.connectionpool:"POST /v2.0/tokens HTTP/1.1" 200 5427
  [Tue Oct 01 19:12:08 2013] [error] INFO:urllib3.connectionpool:Starting new HTTP connection (1): 172.20.21.7
  [Tue Oct 01 19:12:09 2013] [error] DEBUG:urllib3.connectionpool:"POST /v2.0/tokens HTTP/1.1" 200 5427
  [Tue Oct 01 19:12:09 2013] [error] DEBUG:openstack_auth.backend:Authentication completed for user "admin".

  ==> /var/log/apache2/access.log <==
  172.20.33.199 - - [01/Oct/2013:19:12:08 +] "POST /horizon/auth/login/ HTTP/1.1" 302 20 "http://xen10.macprofessionals.lan/horizon/auth/login/" "Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130917 Firefox/17.0 Iceweasel/17.0.9"
  172.20.33.199 - - [01/Oct/2013:19:12:09 +] "GET /horizon HTTP/1.1" 200 733 "http://xen10.macprofessionals.lan/horizon/auth/login/" "Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130917 Firefox/17.0 Iceweasel/17.0.9"

  I see that my browser is sending sessionid and csrftoken cookies with
  the request. I believe that authentication is working, because if I
  enter invalid credentials, I do get an error.

  I am also able to successfully authenticate with the command line API
  clients (nova, glance, etc) with the following environment:

  # env | grep OS_
  OS_REGION_NAME=RegionOne
  OS_PASSWORD=admin
  OS_AUTH_STRATEGY=keystone
  OS_AUTH_URL=http://xen10.macprofessionals.lan:5000/v2.0/
  OS_USERNAME=admin
  OS_TENANT_NAME=admin
  OS_NO_CACHE=true

  ProblemType: Bug
  DistroRelease: Ubuntu 12.04
  Package: openstack-dashboard 1:2013.2~b2-0ubuntu2~cloud0 [origin: Canonical]
  ProcVersionSignature: Ubuntu 3.8.0-31.46~precise1-generic 3.8.13.8
  Uname: Linux 3.8.0-31-generic x86_64
  ApportVersion: 2.0.1-0ubuntu17.4
  Architecture: amd64
  CrashDB: cloud_archive
  Date: Tue Oct 1 15:06:04 2013
  InstallationMedia: Ubuntu-Server 12.04.3 LTS "Precise Pangolin" - Release amd64 (20130820.2)
  MarkForUpload: True
  PackageArchitecture: all
  ProcEnviron:
   TERM=screen
   PATH=(custom, no user)
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  SourcePackage: horizon
  UpgradeStatus: No upgrade log present (probably fresh install)
  mtime.conffile..etc.openstack.dashboard.local.settings.py: 2013-10-01T14:52:27.653153

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1233817/+subscriptions
[Yahoo-eng-team] [Bug 1210253] Re: With Havana 2 installed, Launching horizon UI results in the error " NameError: name 'Dashboard' is not defined"
All packages now synced to updates pocket; marking Fix Released.

** Changed in: cloud-archive
       Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1210253

Title:
  With Havana 2 installed, Launching horizon UI results in the error
  "NameError: name 'Dashboard' is not defined"

Status in Ubuntu Cloud Archive:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Invalid
Status in “horizon” package in Ubuntu:
  Fix Released
Status in “horizon” source package in Saucy:
  Fix Released
Status in “horizon” package in Debian:
  New

Bug description:
  Hi All,

  I installed Havana 2; after installation, launching the horizon UI
  results in the error "NameError: name 'Dashboard' is not defined".

  Havana 2 was installed from "deb
  http://ppa.launchpad.net/ubuntu-cloud-archive/havana-staging/ubuntu
  precise main".

  Below is the log snippet from apache2:

  [Thu Aug 08 09:44:41 2013] [error] Warning: Could not import Horizon dependencies. This is normal during installation.
  [Thu Aug 08 09:44:41 2013] [error] [client 10.1.6.45] mod_wsgi (pid=16851): Exception occurred processing WSGI script '/usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi'.
  [Thu Aug 08 09:44:41 2013] [error] [client 10.1.6.45] Traceback (most recent call last):
  [Thu Aug 08 09:44:41 2013] [error] [client 10.1.6.45]   File "/usr/lib/python2.7/dist-packages/django/core/handlers/wsgi.py", line 236, in __call__
  [Thu Aug 08 09:44:41 2013] [error] [client 10.1.6.45]     self.load_middleware()
  [Thu Aug 08 09:44:41 2013] [error] [client 10.1.6.45]   File "/usr/lib/python2.7/dist-packages/django/core/handlers/base.py", line 45, in load_middleware
  [Thu Aug 08 09:44:41 2013] [error] [client 10.1.6.45]     for middleware_path in settings.MIDDLEWARE_CLASSES:
  [Thu Aug 08 09:44:41 2013] [error] [client 10.1.6.45]   File "/usr/lib/python2.7/dist-packages/django/conf/__init__.py", line 53, in __getattr__
  [Thu Aug 08 09:44:41 2013] [error] [client 10.1.6.45]     self._setup(name)
  [Thu Aug 08 09:44:41 2013] [error] [client 10.1.6.45]   File "/usr/lib/python2.7/dist-packages/django/conf/__init__.py", line 48, in _setup
  [Thu Aug 08 09:44:41 2013] [error] [client 10.1.6.45]     self._wrapped = Settings(settings_module)
  [Thu Aug 08 09:44:41 2013] [error] [client 10.1.6.45]   File "/usr/lib/python2.7/dist-packages/django/conf/__init__.py", line 132, in __init__
  [Thu Aug 08 09:44:41 2013] [error] [client 10.1.6.45]     mod = importlib.import_module(self.SETTINGS_MODULE)
  [Thu Aug 08 09:44:41 2013] [error] [client 10.1.6.45]   File "/usr/lib/python2.7/dist-packages/django/utils/importlib.py", line 35, in import_module
  [Thu Aug 08 09:44:41 2013] [error] [client 10.1.6.45]     __import__(name)
  [Thu Aug 08 09:44:41 2013] [error] [client 10.1.6.45]   File "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/settings.py", line 182, in <module>
  [Thu Aug 08 09:44:41 2013] [error] [client 10.1.6.45]     from local.local_settings import *
  [Thu Aug 08 09:44:41 2013] [error] [client 10.1.6.45]   File "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/local/local_settings.py", line 91, in <module>
  [Thu Aug 08 09:44:41 2013] [error] [client 10.1.6.45]     from horizon.utils import secret_key
  [Thu Aug 08 09:44:41 2013] [error] [client 10.1.6.45]   File "/usr/lib/python2.7/dist-packages/horizon/__init__.py", line 55, in <module>
  [Thu Aug 08 09:44:41 2013] [error] [client 10.1.6.45]     assert Dashboard
  [Thu Aug 08 09:44:41 2013] [error] [client 10.1.6.45] NameError: name 'Dashboard' is not defined
  [Thu Aug 08 09:44:41 2013] [error] [client 10.1.6.45] File does not exist: /var/www/favicon.ico

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1210253/+subscriptions
[Yahoo-eng-team] [Bug 1087735] Re: Attaching volume fails if keystone has multiple endpoints of Cinder
** Changed in: cloud-archive Status: New => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1087735 Title: Attaching volume fails if keystone has multiple endpoints of Cinder Status in Cinder: Opinion Status in Ubuntu Cloud Archive: Fix Released Status in OpenStack Compute (Nova): Fix Released Bug description: Cinder fails to attach a volume to an instance if multiple endpoints of Cinder are configured in Keystone and the endpoints are unique to a region. Pre-config: Create a Cinder endpoint for two different regions (say, RegionOne and RegionTwo) in Keystone Create a volume and try to attach it to an instance fails with the following error: AmbiguousEndpoints: [{u'adminURL': u'http://10.2.3.102:8776/v1/4f272085adb34eef8a8b6e61e15f5009', u'region': u'RegionOne', u'internalURL': u'http://10.2.3.102:8776/v1/4f272085adb34eef8a8b6e61e15f5009', 'serviceName': u'cinder', u'id': u'1ebcb30a728948a29e594cdabe1d3ca5', u'publicURL': u'http://10.2.3.102:8776/v1/4f272085adb34eef8a8b6e61e15f5009'}, {u'adminURL': u'http://10.2.3.102:8776/v1/4f272085adb34eef8a8b6e61e15f5009', u'region': u'RegionTwo', u'internalURL': u'http://10.2.3.102:8776/v1/4f272085adb34eef8a8b6e61e15f5009', 'serviceName': u'cinder', u'id': u'6f17a81430cb4409af0026d543d362f7', u'publicURL': u'http://10.2.3.102:8776/v1/4f272085adb34eef8a8b6e61e15f5009'}] Nova API Log snip: http://paste.openstack.org/show/27572/ To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1087735/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
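The fix on the nova side amounts to filtering the service catalog by region before picking an endpoint. A minimal sketch of that idea follows; `pick_endpoint` and the catalog dict shape are illustrative stand-ins, not the actual novaclient API:

```python
def pick_endpoint(endpoints, region):
    """Return the publicURL of the single endpoint matching `region`.

    Raises ValueError when zero or multiple endpoints match, mirroring
    the AmbiguousEndpoints failure described in the bug report.
    """
    matches = [e for e in endpoints if e.get('region') == region]
    if len(matches) != 1:
        raise ValueError('expected exactly one endpoint for %s, got %d'
                         % (region, len(matches)))
    return matches[0]['publicURL']

# A catalog with one Cinder endpoint per region, as in the report.
catalog = [
    {'region': 'RegionOne', 'publicURL': 'http://10.2.3.102:8776/v1/abc'},
    {'region': 'RegionTwo', 'publicURL': 'http://10.2.3.102:8776/v1/abc'},
]
```

With the region constraint applied, two same-named services in different regions no longer collide; only an actual duplicate within one region remains ambiguous.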
[Yahoo-eng-team] [Bug 1055929] Re: Can not display usage data for Quota Summary
** Changed in: cloud-archive Status: Confirmed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1055929 Title: Can not display usage data for Quota Summary Status in Ubuntu Cloud Archive: Fix Released Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Dashboard (Horizon) folsom series: Fix Released Status in “horizon” package in Ubuntu: Fix Released Status in “horizon” source package in Quantal: Fix Released Bug description: No data display on Quota Summary (overview page). 1. All components are got from github(master). 2. Quotas data display properly when i launch a instance from Intance page. To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1055929/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1220692] Re: LBaaS HAProxy agent outputs traceback in get_stats
** Changed in: cloud-archive Status: Triaged => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1220692 Title: LBaaS HAProxy agent outputs traceback in get_stats Status in Ubuntu Cloud Archive: Fix Released Status in OpenStack Neutron (virtual network service): Fix Released Bug description: I found the following error in q-lbaas log after creating a vip on a pool. 2013-09-04 21:41:59.830 10678 DEBUG neutron.openstack.common.periodic_task [-] Running periodic task LbaasAgentManager.collect_stats run_periodic_tasks /opt/stack/neutron/neutron/openstack/common/periodic_task.py:176 2013-09-04 21:41:59.831 10678 ERROR neutron.services.loadbalancer.drivers.haproxy.agent_manager [-] Error upating stats 2013-09-04 21:41:59.831 10678 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager Traceback (most recent call last): 2013-09-04 21:41:59.831 10678 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager File "/opt/stack/neutron/neutron/services/loadbalancer/drivers/haproxy/agent_manager.py", line 137, in collect_stats 2013-09-04 21:41:59.831 10678 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager stats = driver.get_stats(pool_id) 2013-09-04 21:41:59.831 10678 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager File "/opt/stack/neutron/neutron/services/loadbalancer/drivers/haproxy/namespace_driver.py", line 168, in get_stats 2013-09-04 21:41:59.831 10678 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager pool_stats['members'] = self._get_servers_stats(parsed_stats) 2013-09-04 21:41:59.831 10678 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager File "/opt/stack/neutron/neutron/services/loadbalancer/drivers/haproxy/namespace_driver.py", line 188, in _get_servers_stats 2013-09-04 21:41:59.831 10678 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager if 
stats['type'] == TYPE_SERVER_RESPONSE: 2013-09-04 21:41:59.831 10678 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager KeyError: 'type' 2013-09-04 21:41:59.831 10678 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1220692/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
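The KeyError arises because not every parsed HAProxy stats row carries a 'type' column, so direct indexing fails. A minimal sketch of the guard, assuming the parsed CSV rows arrive as dicts; the value of `TYPE_SERVER_RESPONSE` and the row fields here are illustrative, not the real neutron constants:

```python
TYPE_SERVER_RESPONSE = '2'  # illustrative value, not the real constant


def get_servers_stats(parsed_stats):
    """Collect per-server stats, skipping rows without a 'type' field.

    Using dict.get() instead of stats['type'] avoids the KeyError shown
    in the traceback when a row lacks the column.
    """
    res = {}
    for stats in parsed_stats:
        if stats.get('type') == TYPE_SERVER_RESPONSE:
            res[stats['svname']] = {'status': stats.get('status')}
    return res
```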
[Yahoo-eng-team] [Bug 1233178] Re: novncproxy broken in grizzly cloud-archive
Havana should be fixed now as well; we reverted the version bump and associated code changes, as this was noticed quite late in the cycle. ** Changed in: cloud-archive Status: New => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1233178 Title: novncproxy broken in grizzly cloud-archive Status in Ubuntu Cloud Archive: Fix Released Status in OpenStack Compute (Nova): Invalid Bug description: https://github.com/openstack/nova/commit/3eb67b811ae2442bd86781d9f1c4078a982cfe84 changes the requirements for websockify. It appears the cloud archive hasn't updated to match. Can you please pull a newer websockify in? To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1233178/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1187613] Re: nova-novncproxy crashes on Debian Wheezy
Package not from an official Ubuntu source; marking Invalid. ** Changed in: novnc (Ubuntu) Status: Confirmed => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1187613 Title: nova-novncproxy crashes on Debian Wheezy Status in OpenStack Compute (Nova): Invalid Status in “novnc” package in Ubuntu: Invalid Bug description: Hi, I use the GPLHost repository for Grizzly on Debian Wheezy (http://www.gplhost.com/software-openstack.html). When I try to start the nova-novncproxy service, it crashes with this traceback: Traceback (most recent call last): File "/usr/bin/nova-novncproxy", line 32, in from nova import flags ImportError: cannot import name flags I found this link: https://lists.launchpad.net/openstack/msg23774.html The important part of the thread is this: "flags.py was removed from Grizzly, you may want to update the code for nova dns by using “from oslo.config import cfg” instead." Package information: - Package: novnc - Version: 1:0.4+dfsg+1-7 - Architecture: amd64 So I wrote a patch that solves the issue for me; you will find it attached. Gaëtan (goldyfruit) To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1187613/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1091939] Re: nova-network applies too liberal a SNAT rule
nova (2012.1.3+stable-20130423-e52e6912-0ubuntu1) precise-proposed; urgency=low * Resynchronize with stable/essex (e52e6912) (LP: #1089488): - [48e81f1] VNC proxy can be made to connect to wrong VM LP: 1125378 - [3bf5a58] snat rule too broad for some network configurations LP: 1048765 - [efaacda] DOS by allocating all fixed ips LP: 1125468 - [b683ced] Add nosehtmloutput as a test dependency. - [45274c8] Nova unit tests not running, but still passing for stable/essex LP: 1132835 - [e02b459] vnc unit-test fixes - [87361d3] Jenkins jobs fail because of incompatibility between sqlalchemy- migrate and the newest sqlalchemy-0.8.0b1 (LP: #1073569) - [e98928c] VNC proxy can be made to connect to wrong VM LP: 1125378 - [c0a10db] DoS through XML entity expansion (CVE-2013-1664) LP: 1100282 - [243d516] No authentication on block device used for os-volume_boot LP: 1069904 - [80fefe5] use_single_default_gateway does not function correctly (LP: #1075859) - [bd10241] Essex 2012.1.3 : Error deleting instance with 2 Nova Volumes attached (LP: #1079745) - [86a5937] do_refresh_security_group_rules in nova.virt.firewall is very slow (LP: #1062314) - [ae9c5f4] deallocate_fixed_ip attempts to update an already deleted fixed_ip (LP: #1017633) - [20f98c5] failed to allocate fixed ip because old deleted one exists (LP: #996482) - [75f6922] snapshot stays in saving state if the vm base image is deleted (LP: #921774) - [1076699] lock files may be removed in error dues to permissions issues (LP: #1051924) - [40c5e94] ensure_default_security_group() does not call sgh (LP: #1050982) - [4eebe76] At termination, LXC rootfs is not always unmounted before rmtree() is called (LP: #1046313) - [47dabb3] Heavily loaded nova-compute instances don't sent reports frequently enough (LP: #1045152) - [b375b4f] When attach volume lost attach when node restart (LP: #1004791) - [4ac2dcc] nova usage-list returns wrong usage (LP: #1043999) - [014fcbc] Bridge port's hairpin mode not set after resuming a machine 
(LP: #1040537) - [2f35f8e] Nova flavor ephemeral space size reported incorrectly (LP: #1026210) * Dropped, superseded by new snapshot: - debian/patches/CVE-2013-0335.patch: [48e81f1] - debian/patches/CVE-2013-1838.patch: [efaacda] - debian/patches/CVE-2013-1664.patch: [c0a10db] - debian/patches/CVE-2013-0208.patch: [243d516] -- Yolanda Mon, 22 Apr 2013 12:37:08 +0200 ** CVE added: http://www.cve.mitre.org/cgi-bin/cvename.cgi?name=2013-0208 ** CVE added: http://www.cve.mitre.org/cgi-bin/cvename.cgi?name=2013-0335 ** CVE added: http://www.cve.mitre.org/cgi-bin/cvename.cgi?name=2013-1664 ** CVE added: http://www.cve.mitre.org/cgi-bin/cvename.cgi?name=2013-1838 ** Changed in: nova (Ubuntu Precise) Status: In Progress => Fix Released ** Changed in: nova (Ubuntu) Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1091939 Title: nova-network applies too liberal a SNAT rule Status in OpenStack Compute (Nova): Invalid Status in “nova” package in Ubuntu: Fix Released Status in “nova” source package in Precise: Fix Released Bug description: Version: 2012.1.3+stable-20120827-4d2a4afe-0ubuntu1 We recently set up a new Nova cluster on precise + essex with Juju and MaaS, and ran into a problem where instances could not communicate with the swift-proxy node on the MaaS network. This turned out to be due to nova-network installing a SNAT rule for the cluster's public IP that applied to all network traffic, not just that traffic destined to exit towards the Internet. This problem has been fixed upstream in https://github.com/openstack/nova/commit/959c93f6d3572a189fc3fe73f1811c12323db857 Please consider applying this change to Ubuntu 12.04 LTS in an SRU.
To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1091939/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1238547] Re: differentiate between missing and non-enabled users
This is by design, to avoid unnecessarily leaking security-related details. However, if debug is enabled, then the error message should be specific about what caused the 401 -- but it should still be a 401. ** Changed in: keystone Status: In Progress => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1238547 Title: differentiate between missing and non-enabled users Status in OpenStack Identity (Keystone): Won't Fix Bug description: The current implementation returns HTTP error code 401 (Unauthorized) even when the user exists and the password is correct, but the actual user is disabled. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1238547/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
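The design described in the reply above can be sketched as follows; `Unauthorized`, `USERS`, and `authenticate` are hypothetical stand-ins for the keystone internals, shown only to illustrate the uniform-401 behaviour:

```python
class Unauthorized(Exception):
    """Illustrative stand-in for keystone's 401 exception."""
    status_int = 401


# Toy user store: one disabled user.
USERS = {'alice': {'password': 's3cret', 'enabled': False}}


def authenticate(username, password, debug=False):
    user = USERS.get(username)
    if user is None:
        reason = 'no such user'
    elif not user['enabled']:
        reason = 'user is disabled'
    elif user['password'] != password:
        reason = 'bad password'
    else:
        return True
    if debug:
        # The specific cause appears only in server-side debug logs.
        print('authentication failed: %s' % reason)
    # The client-visible response is identical in every failure case,
    # so callers cannot probe which usernames exist or are disabled.
    raise Unauthorized()
```

An attacker probing the API sees the same 401 for a missing user, a disabled user, and a wrong password, which is exactly the non-leaking behaviour the Won't Fix rationale describes.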
[Yahoo-eng-team] [Bug 1224991] Re: ml2 plugin may let hyperv agents ports to build status
Reviewed: https://review.openstack.org/51139 Committed: http://github.com/openstack/neutron/commit/16b501f5f0fc7577418a3d4a0d1bd62218e41d39 Submitter: Jenkins Branch:milestone-proposed commit 16b501f5f0fc7577418a3d4a0d1bd62218e41d39 Author: Petrut Lucian Date: Wed Sep 25 20:07:01 2013 +0300 Fixes port status hanging to build status ML2 plugin changes the port status to "build" when get_device_details is called. For this reason, the port status must be updated once the port details are processed. Fixes bug: #1224991 Change-Id: I2c0321073cc07e1764fedbfbecbc844557ac6bc9 (cherry picked from commit 01194b356e39e3b0affca67015efb7634bf28697) ** Changed in: neutron Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1224991 Title: ml2 plugin may let hyperv agents ports to build status Status in OpenStack Neutron (virtual network service): Fix Released Bug description: ml2 implementation of l2-population changed port status management workflow: when get_device_details is called on the plugin, corresponding port status is changed to "build", and agents are expected to call update_device_up/down once port details have been processed. OVS and LB agents have been changed to update port status, but not hyperv agent, as a consequence, port plugged on hyperv agents may stay to 'build' status. Two solutions can be investigated to fix this issue: -update_port_status plugin method should be called when port status changes to 'build'. 
hyperv mechanism driver would be notified and could re-set port status to 'active' -hyperv agent could be changed to call update_device_up/down once a port has been plugged (but in that case, unchanged would still face the issue) To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1224991/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
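The second option above (the agent reporting the device up after plugging) can be sketched as a simple state flow; `FakeMl2Plugin` and `process_port` are illustrative stand-ins for the agent RPC interaction, not neutron code:

```python
class FakeMl2Plugin:
    """Toy model of the ML2 plugin's port-status bookkeeping."""

    def __init__(self):
        self.port_status = {}

    def get_device_details(self, port_id):
        # As described in the bug: fetching details moves the port
        # into BUILD status.
        self.port_status[port_id] = 'BUILD'
        return {'port_id': port_id}

    def update_device_up(self, port_id):
        # The agent's callback moves the port back to ACTIVE.
        self.port_status[port_id] = 'ACTIVE'


def process_port(plugin, port_id):
    details = plugin.get_device_details(port_id)   # status -> BUILD
    # ... plug the port on the Hyper-V vswitch here ...
    plugin.update_device_up(details['port_id'])    # status -> ACTIVE
```

Without the final `update_device_up` call, the port would remain in BUILD forever, which is the hang the bug describes for the hyperv agent.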
[Yahoo-eng-team] [Bug 1213927] Re: flavor extra spec api fails with XML content type if key contains a colon
I don't think there is a fix to make in tempest for this so once the problem is resolved on the nova side it should fix any test issues in tempest, right? ** Changed in: tempest Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1213927 Title: flavor extra spec api fails with XML content type if key contains a colon Status in OpenStack Compute (Nova): In Progress Status in Tempest: Invalid Bug description: The flavor extra spec API extension (os-extra_specs) fails with "HTTP 500" when content-type application/xml is requested if the extra spec key contains a colon. For example: curl [endpoint]/flavors/[ID]/os-extra_specs -H "Accept: application/json" -H "X-Auth-Token: $TOKEN" {"extra_specs": {"foo:bar": "999"}} curl -i [endpoint]/flavors/[ID]/os-extra_specs -H "Accept: application/xml" -H "X-Auth-Token: $TOKEN" {"extra_specs": {"foo:bar": "999"}} HTTP/1.1 500 Internal Server Error The stack trace shows that the XML parser tries to interpret the ":" in key as if it would be a XML namespace, which fails, as the namespace is not valid: 2013-08-19 13:08:14.374 27521 DEBUG nova.api.openstack.wsgi [req-afe0c3c8-e7d6-48c5-84f1-782260850e6b redacted redacted] Calling method > _process_stack /usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py:927 2013-08-19 13:08:14.377 27521 ERROR nova.api.openstack [req-afe0c3c8-e7d6-48c5-84f1-782260850e6b redacted redacted] Caught error: Invalid tag name u'foo:bar' 2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack Traceback (most recent call last): 2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/nova/api/openstack/__init__.py", line 110, in __call__ 2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack return req.get_response(self.application) 2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1296, in send 2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack application, catch_exc_info=False) 2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/webob/request.py", line 1260, in call_application 2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack app_iter = application(self.environ, start_response) 2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__ 2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack return resp(environ, start_response) 2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/hp/middleware/cs_auth_token.py", line 160, in __call__ 2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack return super(CsAuthProtocol, self).__call__(env, start_response) 2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py", line 461, in __call__ 2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack return self.app(env, start_response) 2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__ 2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack return resp(environ, start_response) 2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__ 2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack return resp(environ, start_response) 2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/routes/middleware.py", line 131, in __call__ 2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack response = self.app(environ, start_response) 2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__ 2013-08-19 13:08:14.377 
27521 TRACE nova.api.openstack return resp(environ, start_response) 2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__ 2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack resp = self.call_func(req, *args, **self.kwargs) 2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func 2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack return self.func(req, *args, **kwargs) 2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py", line 903, in __call__ 2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack content_type, body, accept) 2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack
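One way to serialize extra spec keys that contain a colon, without the serializer misreading the colon as an XML namespace prefix, is to emit the key as an attribute value rather than a tag name. This is an illustrative sketch of that technique, not the actual nova serializer:

```python
import xml.etree.ElementTree as ET


def extra_specs_to_xml(extra_specs):
    """Serialize extra_specs with namespace-safe keys.

    A tag literally named 'foo:bar' would be parsed as prefix 'foo'
    against an undeclared namespace; an attribute *value* carries the
    colon without any namespace interpretation.
    """
    root = ET.Element('extra_specs')
    for key in sorted(extra_specs):
        spec = ET.SubElement(root, 'extra_spec', {'key': key})
        spec.text = extra_specs[key]
    return ET.tostring(root, encoding='unicode')
```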
[Yahoo-eng-team] [Bug 1238293] Re: Admin for tenant can view ports belonging to other tenants upon executing quantum port-list
Currently the admin can view all information, which is what we intended. Marking as Invalid. Feel free to file a blueprint as Eugene suggested. ** Changed in: neutron Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1238293 Title: Admin for tenant can view ports belonging to other tenants upon executing quantum port-list Status in OpenStack Neutron (virtual network service): Invalid Bug description: Currently, if we create two networks, say net1 and net2, for two different tenants, tenant1 and tenant2 respectively, and add ports to these networks, quantum port-list run by an admin user of tenant1 is able to view ports belonging to tenant2. This is not expected behavior. An admin user of tenant1 should be able to view all ports within that tenant, but not those belonging to another tenant. It looks like quantum isn't correctly using the scoped and non-scoped tokens that are passed to it when retrieving port/network info from the quantum database. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1238293/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1238293] Re: Admin for tenant can view ports belonging to other tenants upon executing quantum port-list
This does not seem like a new feature but a bug: Neutron behaves differently from other services, such as nova, where token scope is respected. ** Changed in: neutron Status: Invalid => Confirmed -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1238293 Title: Admin for tenant can view ports belonging to other tenants upon executing quantum port-list Status in OpenStack Neutron (virtual network service): Confirmed Bug description: Currently, if we create two networks, say net1 and net2, for two different tenants, tenant1 and tenant2 respectively, and add ports to these networks, quantum port-list run by an admin user of tenant1 is able to view ports belonging to tenant2. This is not expected behavior. An admin user of tenant1 should be able to view all ports within that tenant, but not those belonging to another tenant. It looks like quantum isn't correctly using the scoped and non-scoped tokens that are passed to it when retrieving port/network info from the quantum database. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1238293/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1155458] Re: 500 response when trying to create a server from a deleted image
** Changed in: nova (Ubuntu) Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1155458 Title: 500 response when trying to create a server from a deleted image Status in OpenStack Compute (Nova): Fix Released Status in “nova” package in Ubuntu: Fix Released Bug description: After an image cleanup, I noticed the API will return a 500 if I try to create a server with an id of an image that has been deleted. The expected behavior would be a 400 stating that the user provided a bad image id. This is the current behavior if I pass in a random invalid id. Steps to reproduce: 1. Take a snapshot of an existing server and note the id 2. Delete the snapshot 3. Try to create a new server using the id of the deleted image. Instead of a 400 Bad Request (no image found), a generic 500 is returned To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1155458/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
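The expected behaviour (a 400 for a bad image reference, not a 500) can be sketched as follows; `ImageNotFound`, `HTTPBadRequest`, and `create_server` are simplified stand-ins for the nova and webob classes:

```python
class ImageNotFound(Exception):
    """Stand-in for nova's image-lookup exception."""


class HTTPBadRequest(Exception):
    """Stand-in for webob.exc.HTTPBadRequest."""
    status_int = 400


def create_server(image_id, lookup_image):
    """Look up the boot image, mapping a missing image to a 400.

    A deleted or unknown image id is user error, so it should surface
    as a 400 Bad Request rather than escaping as an unhandled
    exception and producing a generic 500.
    """
    try:
        return lookup_image(image_id)
    except ImageNotFound:
        raise HTTPBadRequest('Invalid imageRef provided: %s' % image_id)
```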
[Yahoo-eng-team] [Bug 1031119] Re: nova: proxy floating ip calls to quantum
** Changed in: nova (Ubuntu) Status: Triaged => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1031119 Title: nova: proxy floating ip calls to quantum Status in OpenStack Compute (Nova): Fix Released Status in OpenStack Compute (nova) folsom series: Fix Released Status in “nova” package in Ubuntu: Fix Released Bug description: nova includes tenant-facing commands to deal with floating ips. We'd like to have these commands still work and proxied to Quantum. this should be doable by changing the quantumv2 implementation of network- api. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1031119/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1091605] Re: Internal interfaces defined via OVS are not brought up properly after a reboot
** Changed in: quantum (Ubuntu) Status: In Progress => Fix Released ** Changed in: quantum (Ubuntu) Status: Fix Released => Triaged ** Changed in: quantum (Ubuntu) Importance: Undecided => Medium -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1091605 Title: Internal interfaces defined via OVS are not brought up properly after a reboot Status in OpenStack Neutron (virtual network service): Fix Released Status in neutron folsom series: Fix Released Status in “quantum” package in Ubuntu: Triaged Bug description: The L3 agents and DHCP agents both define internal (qg-, qr-, tap-) ports via OVS. In both cases, the agents call plug() to configure and bring the device up if it does not exist. If the device does exist, however, the agents neither call plug nor do they ensure the link is up (OVS ensures that the devices survive a reboot but does not ensure that they are brought up on boot). The responsibility for bringing devices up should probably remain in quantum/agent/linux/interface.py, so a suggested implementation would be delegating the device existence check to the driver's plug() method, which could then ensure that the device was brought up if necessary. This bug reveals a hole in our current testing strategy. Most developers presumably work on devstack rather than installed code. Since devstack agents don't survive a reboot, most developers would never have the chance to validate whether a quantum agent node still works after a reboot. Documenting use-cases that need to be tested (e.g. quantum agent nodes need to work properly after a reboot) is a good first step - is this currently captured somewhere or can we find a place to do so? 
To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1091605/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
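The suggested implementation above (always bring the link up, even when the device already exists) can be sketched as follows; `FakeIPWrapper` and its methods are hypothetical stand-ins for neutron's `ip_lib` helpers:

```python
class FakeDevice:
    def __init__(self, name):
        self.name = name
        self.link_up = False


class FakeIPWrapper:
    """Illustrative stand-in for the interface driver's ip_lib calls."""

    def __init__(self):
        self.devices = {}

    def device_exists(self, name):
        return name in self.devices

    def ensure_device(self, name):
        self.devices.setdefault(name, FakeDevice(name))

    def set_link_up(self, name):
        self.devices[name].link_up = True


def plug(ip, device_name):
    # Create the device only when it is missing, but bring the link up
    # in every case: after a reboot OVS recreates the port with the
    # link down, which is exactly the failure mode described above.
    if not ip.device_exists(device_name):
        ip.ensure_device(device_name)
    ip.set_link_up(device_name)
```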
[Yahoo-eng-team] [Bug 1238121] Re: new python-troveclient release breaks Horizon tests/gate
Reviewed: https://review.openstack.org/51186 Committed: http://github.com/openstack/horizon/commit/d55d502c16f9aa5e0ed57371d1e33d6c8ac2d1d2 Submitter: Jenkins Branch:milestone-proposed commit d55d502c16f9aa5e0ed57371d1e33d6c8ac2d1d2 Author: David Lyle Date: Thu Oct 10 10:35:45 2013 -0600 capping python-troveclient version Verson 1.0 of python-troveclient is not backwards compatible with the trove work in Horizon. There are too many places to fix the compatibility issues for Havana, so for now pinning the release version. Closes-bug: #1238121 Change-Id: If0ec929180e083aa47bdb688879ba2f63fdcd2a3 (cherry picked from commit b49b952be096c4d50228b5ca0dc40ad19e0211a6) ** Changed in: horizon Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1238121 Title: new python-troveclient release breaks Horizon tests/gate Status in OpenStack Dashboard (Horizon): Fix Released Bug description: The non-backward-compatible version 1.0 of python-troveclient was released to PyPI this morning, breaking all openstack_dashboard tests and all trove functionality. == ERROR: openstack_dashboard.test.test_data.utils.load_test_data -- Traceback (most recent call last): File "/home/david-lyle/test/horizon/.venv/local/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/home/david-lyle/test/horizon/openstack_dashboard/test/test_data/utils.py", line 46, in load_test_data loaders += (trove_data.data,) AttributeError: 'module' object has no attribute 'data' -- Ran 714 tests in 0.399s FAILED (SKIP=3, errors=711) Destroying test database for alias 'default'... Tests failed. 
To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1238121/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1238281] Re: Migration script deletes wrong compute node stats
Reviewed: https://review.openstack.org/51258 Committed: http://github.com/openstack/nova/commit/49d60d1f1321c8c7860e021c5137551d87fac6cf Submitter: Jenkins Branch:milestone-proposed commit 49d60d1f1321c8c7860e021c5137551d87fac6cf Author: Joe Cropper Date: Thu Oct 10 16:38:33 2013 -0500 Fix nova DB 215 migration script logic error This addresses a minor (albeit important) logic error in the migration script for https://review.openstack.org/#/c/46379/ - that script currently deletes compute node stats for active compute nodes. This patch reverses that logic so it deletes the correct set of compute node stats. Also included is a migration test case to demonstrate the behavior. Change-Id: I77afcf443357d0767ac933d791a565289eabee9a Closes-Bug: #1238281 (cherry picked from commit c3330931113ac2edf5961a653e5c2cfe459c13a0) ** Changed in: nova Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1238281 Title: Migration script deletes wrong compute node stats Status in OpenStack Compute (Nova): Fix Released Bug description: The nova DB migration script for 215 executes: result = conn.execute( 'update compute_node_stats set deleted = id, ' 'deleted_at = current_timestamp where compute_node_id not in ' '(select id from compute_nodes where deleted <> 0)') Need to remove the 'not' part so that we delete the right stats since the nested select finds all DELETED nodes. The current SQL actually deletes all active compute nodes' stats. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1238281/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
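The logic error above is the inverted NOT IN: the nested select finds the soft-deleted nodes, so the buggy statement marked the stats of every node that was NOT deleted. A minimal sketch of the corrected statement against a toy schema (sqlite stand-in for the Nova tables; the deleted_at column is omitted for brevity):

```python
import sqlite3

# Toy schema mirroring the tables named in the bug report; this is a
# sketch, not the actual Nova 215 migration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE compute_nodes (id INTEGER PRIMARY KEY, deleted INTEGER);
    CREATE TABLE compute_node_stats (
        id INTEGER PRIMARY KEY,
        compute_node_id INTEGER,
        deleted INTEGER DEFAULT 0);
    INSERT INTO compute_nodes VALUES (1, 0);  -- active node
    INSERT INTO compute_nodes VALUES (2, 2);  -- soft-deleted node
    INSERT INTO compute_node_stats (id, compute_node_id) VALUES (10, 1), (20, 2);
""")

# Corrected statement: soft-delete stats whose node IS in the deleted set.
# The buggy version used NOT IN, which wiped the active nodes' stats instead.
conn.execute(
    "UPDATE compute_node_stats SET deleted = id "
    "WHERE compute_node_id IN "
    "(SELECT id FROM compute_nodes WHERE deleted <> 0)")

rows = conn.execute(
    "SELECT id, deleted FROM compute_node_stats ORDER BY id").fetchall()
# rows == [(10, 0), (20, 20)]: only the dead node's stats were marked
```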
[Yahoo-eng-team] [Bug 1236326] Re: AttributeError: 'Client' object has no attribute 'ec2'
Reviewed: https://review.openstack.org/51165 Committed: http://github.com/openstack/horizon/commit/904eebee866e43841b5e267db8ada6e80b4bb365 Submitter: Jenkins Branch:milestone-proposed commit 904eebee866e43841b5e267db8ada6e80b4bb365 Author: Kieran Spear Date: Thu Oct 10 13:41:38 2013 +1100 Add keystoneclient CredentialsManager if missing The keystone v3 client doesn't include the 'ec2' manager, even though the EC2 API is included in the default pipeline. This commit creates the ec2 manager manually if it's missing (and necessary). It uses the same call that the client itself should have used. Change-Id: I83e7bfa04cde5093c10bf2bc27af5ec03da4b48e Closes-bug: #1236326 (cherry picked from commit 7187d12a1727b98e91c7d90941de6bd46f098809) ** Changed in: horizon Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1236326 Title: AttributeError: 'Client' object has no attribute 'ec2' Status in OpenStack Dashboard (Horizon): Fix Released Status in Python client library for Keystone: In Progress Bug description: While downloading ec2 credentials file from dashboard, getting 'AttributeError at /project/access_and_security/api_access/ec2/'. Horizon traceback link : http://paste.openstack.org/show/48015/. Actual output: AttributeError: 'Client' object has no attribute 'ec2' Expected output: EC2 Credentials file should be downloaded. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1236326/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
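The workaround pattern in the commit above is a conditional attach: create the missing manager only when the client object lacks it. A generic sketch of that pattern (the Ec2Manager and FakeV3Client classes here are illustrative stand-ins, not the real keystoneclient API):

```python
# Sketch of the "add the manager if missing" workaround. All class names
# are hypothetical; real code would use keystoneclient's credential manager
# and an authenticated session.

class Ec2Manager(object):
    def __init__(self, client):
        self.client = client

    def create(self, user_id, tenant_id):
        # Real code would POST to the EC2 credentials endpoint; here we
        # just return the payload to keep the sketch self-contained.
        return {"user_id": user_id, "tenant_id": tenant_id}


class FakeV3Client(object):
    """Stands in for a keystone v3 client that lacks the 'ec2' manager."""


client = FakeV3Client()
if not hasattr(client, "ec2"):
    # Same idea as the Horizon fix: attach the manager only when absent,
    # so clients that already provide it are untouched.
    client.ec2 = Ec2Manager(client)

creds = client.ec2.create("user-1", "tenant-1")
```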
[Yahoo-eng-team] [Bug 1238984] Re: neutron service-provider-list 404 when service_plugins is empty
Right, that is behavior by design. The Service Provider extension is not supported by any of the loaded plugins, which is why the corresponding command returns 404. Marking as invalid. ** Changed in: neutron Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1238984 Title: neutron service-provider-list 404 when service_plugins is empty Status in OpenStack Neutron (virtual network service): Invalid Bug description: In a clean devstack, if I edit /etc/neutron/neutron.conf to replace: service_plugins = neutron.services.loadbalancer.plugin.LoadBalancerPlugin by: service_plugins = and then restart 'q-svc', I then run into the following issue: $ neutron service-provider-list 404 Not Found The resource could not be found. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1238984/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
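The by-design behavior can be sketched as follows: a resource backed by an extension is only served when some loaded plugin supports it, otherwise the API answers 404. The function and attribute names below are illustrative, not Neutron internals:

```python
# Hypothetical sketch of extension dispatch: with service_plugins empty,
# nothing supports the service-provider resource, so the API returns 404.

loaded_plugins = []  # corresponds to "service_plugins =" in neutron.conf

def get_service_providers(plugins):
    # An extension's resource is routable only if a loaded plugin
    # advertises support for it (attribute name is made up here).
    if not any(getattr(p, "supports_service_providers", False)
               for p in plugins):
        return 404, "The resource could not be found."
    return 200, []

status, body = get_service_providers(loaded_plugins)
# status == 404, matching the CLI output in the bug report
```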
[Yahoo-eng-team] [Bug 1235182] Re: live migration fails with vm booted from vol
Reviewed: https://review.openstack.org/51315 Committed: http://github.com/openstack/nova/commit/27e6f05a42ec7c5e212c6a3ae431537cbc10686e Submitter: Jenkins Branch:milestone-proposed commit 27e6f05a42ec7c5e212c6a3ae431537cbc10686e Author: Edward Hope-Morley Date: Fri Oct 4 20:13:44 2013 +0100 Fixes error on live-migration of volume-backed vm Live-migrating a volume-backed vm (i.e. booted from volume) is currently broken. This patch fixes the case where a volume-backed vm is to be live-migrated without shared storage on compute nodes or ephemeral volumes attached to the instance. Specifically, it stops create_images_and_backing() from blowing up when no disk info is supplied. Change-Id: Icec7a6e7225ebe029e24d3be303c9ab01818f30e Fixes: bug 1235182 (cherry picked from commit 0cbb231cd14c8cb767b67d89b14d0ef46b3e8018) ** Changed in: nova Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1235182 Title: live migration fails with vm booted from vol Status in OpenStack Compute (Nova): Fix Released Bug description: Sequence (Havana): 1. boot a vm from cinder volume 2. nova live-migration I see the following traceback in /var/log/nova/nova-compute.log on the source host: (see attached file 'nova_compute_traceback') To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1235182/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1236356] Re: Havana libvirt live migration should not be allowed without shared storage, even for volume-backed VMs
Reviewed: https://review.openstack.org/51316 Committed: http://github.com/openstack/nova/commit/c63f4af000af191a9a70b6b62dbda40a5c9b59e5 Submitter: Jenkins Branch:milestone-proposed commit c63f4af000af191a9a70b6b62dbda40a5c9b59e5 Author: Nikola Dipanov Date: Thu Oct 10 16:00:26 2013 +0200 Libvirt: disallow live-mig for volume-backed with local disk This patch makes libvirt raise an error if a live migration was requested without shared storage for a volume-backed instance, if that instance has any local disks. The reason is that without shared storage, local disks will be re-created on the destination node which can result in loss of data. Change-Id: Ic96dabf6020e957309280862b325792faf44b1f5 Closes-bug: 1236356 (cherry picked from commit cf89e78a1b921adee5b1943600315b0637fdefdc) ** Changed in: nova Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1236356 Title: Havana libvirt live migration should not be allowed without shared storage, even for volume-backed VMs Status in OpenStack Compute (Nova): Fix Released Bug description: The following patch related to bug #1074054 https://review.openstack.org/#/c/17118/ mistakenly allows live migration without shared storage, if the instance is volume backed - only in the libvirt driver. Bug #1235182 made this issue visible. this is not the correct thing to do because even though the instance root disk is a cinder volume - the instance might have additional disks that need to be migrated. Currently - nova would just create them on the destination node. Two possible solutions are: 1) Prevent non-block live migration without shared storage - effectively reverting https://review.openstack.org/#/c/17118/ 2) Allow live migration ONLY for volume backed instances that have no local disks. 
This might be the preferred way of solving the issue, as there is value in being able to live-migrate volume-only VMs without the overhead of setting up shared storage. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1236356/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
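Option 2 above (allow live migration without shared storage only for volume-backed instances with no local disks) can be sketched as a pre-flight check. This is a simplified illustration of the rule described in the commit message, not Nova's actual driver code; the MigrationError class and instance dict layout are assumptions:

```python
# Simplified sketch of the "volume-backed and no local disks" rule.

class MigrationError(Exception):
    pass


def check_can_live_migrate(instance, shared_storage):
    if shared_storage:
        return  # with shared storage, local disks follow the instance
    if not instance["is_volume_backed"]:
        raise MigrationError("live migration requires shared storage")
    if instance["local_disks"]:
        # Without shared storage, local disks would be re-created empty
        # on the destination node, losing their data.
        raise MigrationError("volume-backed instance has local disks")


# Allowed: root disk on cinder, no ephemeral/local disks.
check_can_live_migrate(
    {"is_volume_backed": True, "local_disks": []}, shared_storage=False)
```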
[Yahoo-eng-team] [Bug 1212082] Re: Improve the sql query performance of tag querying
** Changed in: glance Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1212082 Title: Improve the sql query performance of tag querying Status in OpenStack Image Registry and Delivery Service (Glance): Invalid Bug description: After reviewing the method "image_tag_get_all" of /api.py, we're trying to order the list by created_at. However, that ordering is meaningless from the end-user perspective, since there is no priority for tags and we can't say one tag is more useful or important than another. Even worse, it may introduce a performance issue. = mysql> EXPLAIN SELECT image_tags.created_at AS image_tags_created_at, image_tags.updated_at AS image_tags_updated_at, image_tags.deleted_at AS image_tags_deleted_at, image_tags.deleted AS image_tags_deleted, image_tags.id AS image_tags_id, image_tags.image_id AS image_tags_image_id, image_tags.value AS image_tags_value FROM image_tags WHERE image_tags.image_id = 'c67d1ff2-c5c7-411b-9bf0-2c723e746434' AND image_tags.deleted = 0 ORDER BY image_tags.created_at ASC; The EXPLAIN output (table reformatted): id=1, select_type=SIMPLE, table=image_tags, type=ref, possible_keys=ix_image_tags_image_id,ix_image_tags_image_id_tag_value, key=ix_image_tags_image_id, key_len=110, ref=const, rows=2, Extra=Using where; Using filesort. 1 row in set (0.01 sec) To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1212082/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
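The "Using filesort" in the EXPLAIN output is the cost the report objects to: the index satisfies the image_id lookup, but ordering by created_at forces an extra sort. Since tags have no meaningful order, the proposed change is simply to drop the ORDER BY and treat the result as a set. A sketch against a simplified stand-in for the image_tags schema (sqlite rather than MySQL, so the plan details differ):

```python
import sqlite3

# Simplified stand-in for Glance's image_tags table; columns and index
# name mirror the bug report, but this is not the real schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE image_tags (
        id INTEGER PRIMARY KEY, image_id TEXT, value TEXT,
        deleted INTEGER DEFAULT 0, created_at TEXT);
    CREATE INDEX ix_image_tags_image_id ON image_tags (image_id);
    INSERT INTO image_tags (image_id, value, created_at)
        VALUES ('img-1', 'web', '2013-01-02'),
               ('img-1', 'db',  '2013-01-01');
""")

# No ORDER BY: the engine can read straight off the image_id index with
# no sort step, and the caller treats the result as an unordered set.
tags = {row[0] for row in conn.execute(
    "SELECT value FROM image_tags WHERE image_id = ? AND deleted = 0",
    ("img-1",))}
# tags == {'web', 'db'}
```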
[Yahoo-eng-team] [Bug 1238604] Re: Run into 500 error during delete image
It's most likely a deployment issue. Given that lock_path will be set to a default value in patch https://review.openstack.org/#/c/46479/, I'm going to mark this bug as Invalid. ** Changed in: glance Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1238604 Title: Run into 500 error during delete image Status in OpenStack Image Registry and Delivery Service (Glance): Invalid Bug description: Recreate steps: 1. Enable delayed delete: delayed_delete = True 2. Create a new image by: glance image-create --name flwang_1 --container-format bare --disk-format qcow2 --is-public yes --location https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img 3. Delete the image by: glance image-delete flwang_1 You will see an error like the one below, but the image has been deleted and is in the 'pending-delete' state. Request returned failure status. HTTPInternalServerError (HTTP 500): Unable to delete image 86d0a3df-d140-4d41-aaae-f1c538591d3d To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1238604/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1214900] Re: Missing index for images table
** Changed in: glance Status: In Progress => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1214900 Title: Missing index for images table Status in OpenStack Image Registry and Delivery Service (Glance): Invalid Bug description: Based on the current REST API v2/images implementation, we support multiple query parameters (see http://api.openstack.org/api-ref.html#os-images-2.0), but we're missing indexes for some of them, such as status and size. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1214900/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1199652] Re: Validation of paramater "enabled" during Create Project(tenant)
[Expired for Keystone because there has been no activity for 60 days.] ** Changed in: keystone Status: Incomplete => Expired -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1199652 Title: Validation of paramater "enabled" during Create Project(tenant) Status in OpenStack Identity (Keystone): Expired Bug description: When creating a project (tenant) via the JSON / HTTP API, the parameter "enabled" should be validated. This parameter can currently be assigned an int value (like "5") rather than a bool value. But if we assign "5" to "enabled", the API "List Tenants" (CLI command "keystone tenant-list", REST request "List tenants") becomes unavailable. It fails with a 500 error: { "error": { "message": "An unexpected error prevented the server from fulfilling your request. int_to_boolean only accepts None, 0 or 1", "code": 500, "title": "Internal Server Error" } } To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1199652/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
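The validation the report asks for can be sketched as a strict boolean check at create time, so a bad value is rejected with a 4xx instead of being stored and later blowing up int_to_boolean on list. The ValidationError class below is illustrative, not Keystone's actual exception:

```python
# Minimal sketch: reject non-boolean 'enabled' values at project-create
# time. ValidationError is a hypothetical stand-in for whatever
# bad-request exception the API layer raises.

class ValidationError(Exception):
    pass


def validate_enabled(value):
    # In Python, bool is a subclass of int, so an isinstance(value, int)
    # check would wrongly accept 5; check for bool explicitly.
    if not isinstance(value, bool):
        raise ValidationError(
            "'enabled' must be a boolean, got %r" % (value,))
    return value


validate_enabled(True)   # accepted
try:
    validate_enabled(5)  # the value from the bug report
except ValidationError:
    rejected = True      # rejected up front, before it reaches the DB
```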