[Yahoo-eng-team] [Bug 1251141] Re: nova quota-show didn't report error when provide a non-existing tenant-id

2013-12-03 Thread Joe Gordon
** Changed in: python-novaclient
   Status: Confirmed => Invalid

** Changed in: python-novaclient
   Importance: Medium => Undecided

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1251141

Title:
  nova quota-show didn't report error when provide a non-existing
  tenant-id

Status in OpenStack Compute (Nova):
  New
Status in Python client library for Nova:
  Invalid

Bug description:
  It seems "nova quota-show --tenant" doesn't check whether the tenant
  id exists.

  Normally, this should report an error if you provide an invalid
  tenant id.

  Steps to reproduce:

  Just type "nova quota-show --tenant non-existing-tenant-id", and nova
  will show you the default quota settings for any tenant.
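
  A minimal sketch of the check the reporter expects: look the tenant up
  before returning quotas, and fail loudly if it is unknown. The names
  (KNOWN_TENANTS, quota_show) are hypothetical stand-ins, not
  novaclient's actual API.

```python
# Hypothetical sketch: validate the tenant id before returning quotas,
# instead of silently returning defaults for any id.
KNOWN_TENANTS = {"tenant-a", "tenant-b"}  # stand-in for a keystone lookup
DEFAULT_QUOTAS = {"instances": 10, "cores": 20, "ram": 51200}


def quota_show(tenant_id):
    """Return the quota set for tenant_id, or raise if it does not exist."""
    if tenant_id not in KNOWN_TENANTS:
        raise ValueError("tenant %s not found" % tenant_id)
    return dict(DEFAULT_QUOTAS)
```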

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1251141/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1238604] Re: Run into 500 error during delete image

2013-12-03 Thread Stuart McLaren
If you enable delayed delete you get an E500 and the following stack
trace:

Dec  2 11:29:17 gl-aw1rdc1-registry 4101 DEBUG eventlet.wsgi.server [f46a69d7-c49b-4e41-ad12-447bcd7d2c38 77049353665607 34096082065107] Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 384, in handle_one_response
    result = self.application(self.environ, start_response)
  File "/usr/lib/python2.7/dist-packages/hp_glance_extras/middleware/healthcheck.py", line 38, in __call__
    return self.app(env, start_response)
  File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
    resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
    return self.func(req, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/glance/common/wsgi.py", line 377, in __call__
    response = req.get_response(self.application)
  File "/usr/lib/python2.7/dist-packages/webob/request.py", line 1296, in send
    application, catch_exc_info=False)
  File "/usr/lib/python2.7/dist-packages/webob/request.py", line 1260, in call_application
    app_iter = application(self.environ, start_response)
  File "/usr/lib/python2.7/dist-packages/hp/middleware/cs_auth_token.py", line 160, in __call__
    return super(CsAuthProtocol, self).__call__(env, start_response)
  File "/usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py", line 539, in __call__
    return self.app(env, start_response)
  File "/usr/lib/python2.7/dist-packages/hp/middleware/cs_authz.py", line 30, in __call__
    return self.app(env, start_response)
  File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
    resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
    return self.func(req, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/glance/common/wsgi.py", line 377, in __call__
    response = req.get_response(self.application)
  File "/usr/lib/python2.7/dist-packages/webob/request.py", line 1296, in send
    application, catch_exc_info=False)
  File "/usr/lib/python2.7/dist-packages/webob/request.py", line 1260, in call_application
    app_iter = application(self.environ, start_response)
  File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
    resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
    return self.func(req, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/glance/common/wsgi.py", line 377, in __call__
    response = req.get_response(self.application)
  File "/usr/lib/python2.7/dist-packages/webob/request.py", line 1296, in send
    application, catch_exc_info=False)
  File "/usr/lib/python2.7/dist-packages/webob/request.py", line 1260, in call_application
    app_iter = application(self.environ, start_response)
  File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
    resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
    return self.func(req, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/glance/common/wsgi.py", line 377, in __call__
    response = req.get_response(self.application)
  File "/usr/lib/python2.7/dist-packages/webob/request.py", line 1296, in send
    application, catch_exc_info=False)
  File "/usr/lib/python2.7/dist-packages/webob/request.py", line 1260, in call_application
    app_iter = application(self.environ, start_response)
  File "/usr/lib/python2.7/dist-packages/paste/urlmap.py", line 203, in __call__
    return app(environ, start_response)
  File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
    return resp(environ, start_response)
  File "/usr/lib/python2.7/dist-packages/routes/middleware.py", line 131, in __call__
    response = self.app(environ, start_response)
  File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
    return resp(environ, start_response)
  File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
    resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
    return self.func(req, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/glance/common/wsgi.py", line 609, in __call__
    request, **action_args)
  File "/usr/lib/python2.7/dist-packages/glance/common/wsgi.py", line 628, in dispatch
    return method(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/glance/common/utils.py", line 422, in wrapped
    return func(self, req, *args, **kwargs)
  File "/usr/lib/python2.7

[Yahoo-eng-team] [Bug 1257273] [NEW] Glance download fails when size is 0

2013-12-03 Thread Flavio Percoco
Public bug reported:

Glance images are not being fetched by glance's API v1 when the size is
0. There are 2 things wrong with this behaviour:

1) Active images should always be ready to be downloaded, regardless of
whether they're stored locally or remotely.
2) The size shouldn't be used to determine whether an image has data.

https://git.openstack.org/cgit/openstack/glance/tree/glance/api/v1/images.py#n455

This is happening in the API v1, but it doesn't seem to be true for v2.

** Affects: glance
 Importance: High
 Status: New

** Changed in: glance
   Importance: Undecided => High

** Description changed:

  Glance images are not being fetched by glance's API v1 when the size is
  0. There are 2 things wrong with this behaviour:
  
  1) Active images should always be ready to be downloaded, regardless they're 
locally or remotely stored.
  2) The size shouldn't be the way to verify whether an image has some data or 
not.
  
- 431 if image_meta.get('size') == 0:
- 432  -> image_iterator = iter([])
- 433 else:
- 434 image_iterator, size = self._get_from_store(req.context,
- 435 
image_meta['location'])
- 436 image_iterator = utils.cooperative_iter(image_iterator)
- 437 image_meta['size'] = size or image_meta['size']
+ 
https://git.openstack.org/cgit/openstack/glance/tree/glance/api/v1/images.py#n455
  
  This is happening in the API v1, but it doesn't seem to be true for v2.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1257273

Title:
  Glance download fails when size is 0

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Glance images are not being fetched by glance's API v1 when the size
  is 0. There are 2 things wrong with this behaviour:

  1) Active images should always be ready to be downloaded, regardless of
  whether they're stored locally or remotely.
  2) The size shouldn't be used to determine whether an image has data.

  
https://git.openstack.org/cgit/openstack/glance/tree/glance/api/v1/images.py#n455

  This is happening in the API v1, but it doesn't seem to be true for
  v2.
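
  As a sketch of the direction the report suggests (not the actual
  Glance code), the download path could always consult the store and
  fall back on the stored size only when the store reports none;
  get_from_store and fake_store here are hypothetical callables.

```python
def get_image_data(image_meta, get_from_store):
    # Always fetch from the store for an active image; don't use a
    # size of 0 as a proxy for "the image has no data".
    iterator, size = get_from_store(image_meta["location"])
    image_meta["size"] = size or image_meta["size"]
    return iterator


# A fake store: a zero-"size" image that actually has bytes behind it.
def fake_store(location):
    return iter([b"chunk1", b"chunk2"]), 12
```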

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1257273/+subscriptions



[Yahoo-eng-team] [Bug 1257274] [NEW] Bump hacking to 0.8

2013-12-03 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Because the hacking dependency has not been bumped to 0.8, some Python
3.x compatibility checks are not being run in the gate, and this is
introducing code issues.

** Affects: glance
 Importance: Undecided
 Assignee: Avishay Traeger (avishay-il)
 Status: In Progress

-- 
Bump hacking to 0.8
https://bugs.launchpad.net/bugs/1257274
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to Glance.



[Yahoo-eng-team] [Bug 1257282] [NEW] Bump hacking to 0.8

2013-12-03 Thread Sergio Cazzolato
Public bug reported:

Because the hacking dependency has not been bumped to 0.8, some Python
3.x compatibility checks are not being run in the gate, and this is
introducing code issues.

** Affects: glance
 Importance: Undecided
 Assignee: Sergio Cazzolato (sergio-j-cazzolato)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Sergio Cazzolato (sergio-j-cazzolato)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1257282

Title:
  Bump hacking to 0.8

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Because the hacking dependency has not been bumped to 0.8, some
  Python 3.x compatibility checks are not being run in the gate, and
  this is introducing code issues.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1257282/+subscriptions



[Yahoo-eng-team] [Bug 1257295] [NEW] openstack is full of misspelled words

2013-12-03 Thread Joe Gordon
Public bug reported:

List of known misspellings

http://paste.openstack.org/show/54354

Generated with:
  pip install misspellings
  git ls-files | grep -v locale | misspellings -f -

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: oslo
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit

** Tags added: low-hanging-fruit

** Summary changed:

- nova is full of misspelled words
+ openstack is full of misspelled words

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1257295

Title:
  openstack is full of misspelled words

Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  New

Bug description:
  List of known misspellings

  http://paste.openstack.org/show/54354

  Generated with:
pip install misspellings
git ls-files | grep -v locale | misspellings -f -

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1257295/+subscriptions



[Yahoo-eng-team] [Bug 1257295] Re: openstack is full of misspelled words

2013-12-03 Thread Joe Gordon
** Also affects: oslo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1257295

Title:
  openstack is full of misspelled words

Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  New

Bug description:
  List of known misspellings

  http://paste.openstack.org/show/54354

  Generated with:
pip install misspellings
git ls-files | grep -v locale | misspellings -f -

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1257295/+subscriptions



[Yahoo-eng-team] [Bug 1153926] Re: flavor show shouldn't read deleted flavors.

2013-12-03 Thread Rohan
** Also affects: python-novaclient
   Importance: Undecided
   Status: New

** Changed in: python-novaclient
 Assignee: (unassigned) => Rohan (kanaderohan)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1153926

Title:
  flavor show shouldn't read deleted flavors.

Status in OpenStack Compute (Nova):
  Triaged
Status in Python client library for Nova:
  New

Bug description:
  An instance type is created by:

  return db.instance_type_create(context.get_admin_context(), kwargs)

  which uses the read_deleted="no" from the admin context.

  This means, as seen in nova/tests/test_instance_types.py:

  def test_read_deleted_false_converting_flavorid(self):
  """
  Ensure deleted instance types are not returned when not needed (for
  example when creating a server and attempting to translate from
  flavorid to instance_type_id.
  """
  instance_types.create("instance_type1", 256, 1, 120, 100, "test1")
  instance_types.destroy("instance_type1")
  instance_types.create("instance_type1_redo", 256, 1, 120, 100, "test1")

  instance_type = instance_types.get_instance_type_by_flavor_id(
  "test1", read_deleted="no")
  self.assertEqual("instance_type1_redo", instance_type["name"])

  flavors with colliding ids can exist in the database.

  From the test we see this looks intended, however it results in
  undesirable results if we consider the following scenario.

  For 'show' in the flavors api, it uses read_deleted="yes". The reason
  for this is if a vm was created in the past with a now-deleted flavor,
  'nova show' can still show the flavor name that was specified for that
  vm creation. The flavor name is retrieved using the flavor id stored
  with the instance.

  Well, if there are colliding flavor ids in the database, the first of
  the duplicates will be picked, and it may not be the correct flavor
  for the vm.

  This leads me to believe that maybe at flavor create time, colliding
  ids should not be allowed, i.e. use

  return db.instance_type_create(context.get_admin_context(read_deleted="yes"),
 kwargs)

  to prevent the possibility of colliding flavor ids.
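
  The proposal can be sketched with a toy soft-delete table; the class
  and method names are illustrative, not nova's DB API.

```python
class FlavorTable(object):
    """Toy soft-delete table illustrating the proposed collision check."""

    def __init__(self):
        self.rows = []  # each row: {"flavorid": ..., "deleted": bool}

    def create(self, flavorid):
        # Check against *all* rows, including soft-deleted ones
        # (the effect of read_deleted="yes" at create time).
        if any(r["flavorid"] == flavorid for r in self.rows):
            raise ValueError("flavorid %s already used" % flavorid)
        self.rows.append({"flavorid": flavorid, "deleted": False})

    def destroy(self, flavorid):
        # Soft delete: the row stays in the table, marked deleted.
        for r in self.rows:
            if r["flavorid"] == flavorid and not r["deleted"]:
                r["deleted"] = True
```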

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1153926/+subscriptions



[Yahoo-eng-team] [Bug 1257301] Re: Bump hacking to 0.8

2013-12-03 Thread Sergio Cazzolato
** Also affects: keystone
   Importance: Undecided
   Status: New

** Changed in: keystone
 Assignee: (unassigned) => Sergio Cazzolato (sergio-j-cazzolato)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1257301

Title:
  Bump hacking to 0.8

Status in OpenStack Identity (Keystone):
  In Progress
Status in Python client library for Keystone:
  In Progress

Bug description:
  Because the hacking dependency has not been bumped to 0.8, some
  Python 3.x compatibility checks are not being run in the gate, and
  this is introducing code issues.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1257301/+subscriptions



[Yahoo-eng-team] [Bug 1257323] [NEW] Keystone token_flush execution may fail because of full transaction log in DB2

2013-12-03 Thread John Warren
Public bug reported:

If there is a high number of expired tokens, a user may get the
following error message:

$ keystone-manage token_flush
2013-11-18 02:11:09.491 3806 CRITICAL keystone [-] (InternalError) 
ibm_db_dbi::InternalError: Statement Execute Failed: [IBM][CLI 
Driver][DB2/LINUXX8664] SQL0964C  The transaction log for the database is full. 
 SQLSTATE=57011 SQLCODE=-964 'DELETE FROM token WHERE token.expires < ?' 
(datetime.datetime(2013, 11, 18, 8, 10, 37, 519596),)

** Affects: keystone
 Importance: Undecided
 Assignee: John Warren (jswarren)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1257323

Title:
  Keystone token_flush execution may fail because of full transaction
  log in DB2

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  If there is a high number of expired tokens, a user may get the
  following error message:

  $ keystone-manage token_flush
  2013-11-18 02:11:09.491 3806 CRITICAL keystone [-] (InternalError) 
ibm_db_dbi::InternalError: Statement Execute Failed: [IBM][CLI 
Driver][DB2/LINUXX8664] SQL0964C  The transaction log for the database is full. 
 SQLSTATE=57011 SQLCODE=-964 'DELETE FROM token WHERE token.expires < ?' 
(datetime.datetime(2013, 11, 18, 8, 10, 37, 519596),)
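
  One common mitigation (a sketch, not the actual keystone-manage fix)
  is to delete expired tokens in bounded batches, committing between
  batches so no single transaction has to log the whole delete. The
  function below models the token table as a plain list; real code
  would issue LIMITed DELETE statements instead.

```python
def flush_expired(tokens, now, batch_size=100, commit=lambda: None):
    """Remove expired tokens in batches of batch_size, calling commit()
    after each batch so each transaction's log usage stays bounded."""
    while True:
        batch = [t for t in tokens if t["expires"] < now][:batch_size]
        if not batch:
            break
        for t in batch:
            tokens.remove(t)
        commit()  # end the transaction before starting the next batch
```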

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1257323/+subscriptions



[Yahoo-eng-team] [Bug 1257326] [NEW] nova quotas are used for floating IP even when neutron is used

2013-12-03 Thread Rob Raymond
Public bug reported:

Horizon does not allow a user to allocate a floating IP address even if
the neutron quota allows it. This is because we always use nova quotas.

** Affects: horizon
 Importance: Undecided
 Assignee: Rob Raymond (rob-raymond)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Rob Raymond (rob-raymond)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1257326

Title:
  nova quotas are used for floating IP even when neutron is used

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Horizon does not allow a user to allocate a floating IP address even
  if the neutron quota allows it. This is because we always use nova
  quotas.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1257326/+subscriptions



[Yahoo-eng-team] [Bug 1153926] Re: flavor show shouldn't read deleted flavors.

2013-12-03 Thread Rohan
** Also affects: horizon
   Importance: Undecided
   Status: New

** Changed in: horizon
 Assignee: (unassigned) => Rohan (kanaderohan)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1153926

Title:
  flavor show shouldn't read deleted flavors.

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Compute (Nova):
  Triaged
Status in Python client library for Nova:
  New

Bug description:
  An instance type is created by:

  return db.instance_type_create(context.get_admin_context(), kwargs)

  which uses the read_deleted="no" from the admin context.

  This means, as seen in nova/tests/test_instance_types.py:

  def test_read_deleted_false_converting_flavorid(self):
  """
  Ensure deleted instance types are not returned when not needed (for
  example when creating a server and attempting to translate from
  flavorid to instance_type_id.
  """
  instance_types.create("instance_type1", 256, 1, 120, 100, "test1")
  instance_types.destroy("instance_type1")
  instance_types.create("instance_type1_redo", 256, 1, 120, 100, "test1")

  instance_type = instance_types.get_instance_type_by_flavor_id(
  "test1", read_deleted="no")
  self.assertEqual("instance_type1_redo", instance_type["name"])

  flavors with colliding ids can exist in the database.

  From the test we see this looks intended, however it results in
  undesirable results if we consider the following scenario.

  For 'show' in the flavors api, it uses read_deleted="yes". The reason
  for this is if a vm was created in the past with a now-deleted flavor,
  'nova show' can still show the flavor name that was specified for that
  vm creation. The flavor name is retrieved using the flavor id stored
  with the instance.

  Well, if there are colliding flavor ids in the database, the first of
  the duplicates will be picked, and it may not be the correct flavor
  for the vm.

  This leads me to believe that maybe at flavor create time, colliding
  ids should not be allowed, i.e. use

  return db.instance_type_create(context.get_admin_context(read_deleted="yes"),
 kwargs)

  to prevent the possibility of colliding flavor ids.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1153926/+subscriptions



[Yahoo-eng-team] [Bug 1257301] Re: Bump hacking to 0.8

2013-12-03 Thread Sergio Cazzolato
** Also affects: python-swiftclient
   Importance: Undecided
   Status: New

** Changed in: python-swiftclient
 Assignee: (unassigned) => Sergio Cazzolato (sergio-j-cazzolato)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1257301

Title:
  Bump hacking to 0.8

Status in OpenStack Identity (Keystone):
  In Progress
Status in Python client library for Keystone:
  In Progress
Status in Python client library for Swift:
  New

Bug description:
  Because the hacking dependency has not been bumped to 0.8, some
  Python 3.x compatibility checks are not being run in the gate, and
  this is introducing code issues.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1257301/+subscriptions



[Yahoo-eng-team] [Bug 1254890] Re: "Timed out waiting for thing" causes neutron-large-ops failures

2013-12-03 Thread Davanum Srinivas (DIMS)
Looking at
http://logs.openstack.org/08/59108/1/gate/gate-tempest-dsvm-neutron-large-ops/694acc7/logs/

http://paste.openstack.org/show/54362/ - the "Waiting for" messages stop
at 14:17:38.668 (there should be one every second). It looks like the
code is stuck in the call_until_true loop; it then wakes up five minutes
later, after the sleep, and exits the loop without checking one last
time whether the condition is true. Curiously, the server becomes
"ACTIVE" at "2013-12-03T14:22:07Z".
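
The fix hinted at here can be sketched as a final re-check after the
deadline, so a condition that became true during the last sleep is not
reported as a timeout (illustrative, not tempest's actual helper):

```python
import time


def call_until_true(func, timeout, sleep_for):
    """Poll func until it returns True or timeout elapses; do one last
    check after the deadline before giving up."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if func():
            return True
        time.sleep(sleep_for)
    return func()  # final check instead of assuming failure
```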


** Also affects: tempest
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1254890

Title:
  "Timed out waiting for thing" causes neutron-large-ops failures

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  New

Bug description:
  Separate out bug from:
  https://bugs.launchpad.net/neutron/+bug/1250168/comments/23

  Logstash query:
  message:"Details: Timed out waiting for thing" AND 
build_name:gate-tempest-devstack-vm-neutron-large-ops

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRGV0YWlsczogVGltZWQgb3V0IHdhaXRpbmcgZm9yIHRoaW5nXCIgQU5EIGJ1aWxkX25hbWU6Z2F0ZS10ZW1wZXN0LWRldnN0YWNrLXZtLW5ldXRyb24tbGFyZ2Utb3BzIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI0MzIwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzODU0MDQ5Mzg5MjZ9

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1254890/+subscriptions



[Yahoo-eng-team] [Bug 1257354] [NEW] Metering doesn't anymore respect the l3 agent binding

2013-12-03 Thread Sylvain Afchain
Public bug reported:

Since the old L3 mixin was moved into a service plugin, the metering
service plugin no longer respects the l3 agent binding: instead of
using the cast RPC method, it uses the fanout_cast method.

** Affects: neutron
 Importance: Undecided
 Assignee: Sylvain Afchain (sylvain-afchain)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Sylvain Afchain (sylvain-afchain)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1257354

Title:
  Metering doesn't anymore respect the l3 agent binding

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Since the old L3 mixin was moved into a service plugin, the metering
  service plugin no longer respects the l3 agent binding: instead of
  using the cast RPC method, it uses the fanout_cast method.
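
  The intended behaviour can be sketched as: when a router is bound to a
  specific l3 agent, cast to that agent's host; only fall back to fanout
  when no binding is known. The function and callback names are
  illustrative, not neutron's RPC API.

```python
def notify_metering_agents(bindings, router_id, message, cast, fanout_cast):
    """Route a metering notification, respecting the l3 agent binding.

    bindings maps router_id -> l3 agent host; cast/fanout_cast stand in
    for the topic-based RPC methods."""
    host = bindings.get(router_id)
    if host is not None:
        cast(host, message)    # targeted: only the bound agent
    else:
        fanout_cast(message)   # no binding known: broadcast
```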

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1257354/+subscriptions



[Yahoo-eng-team] [Bug 1257355] [NEW] live migration fails when using non-image backed disk

2013-12-03 Thread Dirk Mueller
Public bug reported:

Running a live migration with --block-migrate fails if the disk was
resized beforehand (i.e. detached from the COW image). This is because
nova/virt/libvirt/driver.py uses disk_size, not virt_disk_size, when
re-creating the qcow2 file on the destination host. For qcow2 files,
qemu-img needs the virtual disk size passed down; otherwise the block
migration step will not be able to convert all blocks.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1257355

Title:
  live migration fails when using non-image backed disk

Status in OpenStack Compute (Nova):
  New

Bug description:
  Running a live migration with --block-migrate fails if the disk was
  resized beforehand (i.e. detached from the COW image). This is
  because nova/virt/libvirt/driver.py uses disk_size, not
  virt_disk_size, when re-creating the qcow2 file on the destination
  host. For qcow2 files, qemu-img needs the virtual disk size passed
  down; otherwise the block migration step will not be able to convert
  all blocks.
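
  The distinction can be sketched by building the qemu-img invocation
  from the virtual size rather than the allocated size. The disk_info
  keys mirror the two fields named above; the command construction is
  illustrative, not the driver's actual code.

```python
def qcow2_create_cmd(path, disk_info):
    # Use the virtual disk size, not the on-disk (allocated) size,
    # when re-creating the qcow2 on the destination host.
    size = disk_info["virt_disk_size"]
    return ["qemu-img", "create", "-f", "qcow2", path, str(size)]
```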

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1257355/+subscriptions



[Yahoo-eng-team] [Bug 1246848] Re: VMWare: AssertionError: Trying to re-send() an already-triggered event.

2013-12-03 Thread Gary Kotton
** Also affects: openstack-vmwareapi-team
   Importance: Undecided
   Status: New

** Changed in: openstack-vmwareapi-team
   Importance: Undecided => High

** Changed in: openstack-vmwareapi-team
 Assignee: (unassigned) => Gary Kotton (garyk)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1246848

Title:
  VMWare: AssertionError: Trying to re-send() an already-triggered
  event.

Status in OpenStack Compute (Nova):
  In Progress
Status in The OpenStack VMwareAPI subTeam:
  New

Bug description:
  When an exception occurs in _wait_for_task and a failure occurs (for
  example, a file is requested that does not exist), another exception
  is also thrown:

  2013-10-31 10:49:52.617 WARNING nova.virt.vmwareapi.driver [-] In vmwareapi:_poll_task, Got this error Trying to re-send() an already-triggered event.
  2013-10-31 10:49:52.618 ERROR nova.openstack.common.loopingcall [-] in fixed duration looping call
  2013-10-31 10:49:52.618 TRACE nova.openstack.common.loopingcall Traceback (most recent call last):
  2013-10-31 10:49:52.618 TRACE nova.openstack.common.loopingcall   File "/opt/stack/nova/nova/openstack/common/loopingcall.py", line 78, in _inner
  2013-10-31 10:49:52.618 TRACE nova.openstack.common.loopingcall     self.f(*self.args, **self.kw)
  2013-10-31 10:49:52.618 TRACE nova.openstack.common.loopingcall   File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 941, in _poll_task
  2013-10-31 10:49:52.618 TRACE nova.openstack.common.loopingcall     done.send_exception(excep)
  2013-10-31 10:49:52.618 TRACE nova.openstack.common.loopingcall   File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 208, in send_exception
  2013-10-31 10:49:52.618 TRACE nova.openstack.common.loopingcall     return self.send(None, args)
  2013-10-31 10:49:52.618 TRACE nova.openstack.common.loopingcall   File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 150, in send
  2013-10-31 10:49:52.618 TRACE nova.openstack.common.loopingcall     assert self._result is NOT_USED, 'Trying to re-send() an already-triggered event.'
  2013-10-31 10:49:52.618 TRACE nova.openstack.common.loopingcall AssertionError: Trying to re-send() an already-triggered event.
  2013-10-31 10:49:52.618 TRACE nova.openstack.common.loopingcall
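
  The assertion comes from eventlet's one-shot Event. A sketch of the
  guard the driver needs is to check ready() before sending a second
  result; Event here is a minimal stand-in for eventlet.event.Event,
  and send_once is a hypothetical helper.

```python
class Event(object):
    """Minimal one-shot event, mimicking eventlet's re-send assertion."""
    _NOT_USED = object()

    def __init__(self):
        self._result = self._NOT_USED

    def ready(self):
        return self._result is not self._NOT_USED

    def send(self, value):
        assert self._result is self._NOT_USED, \
            'Trying to re-send() an already-triggered event.'
        self._result = value


def send_once(event, value):
    # Guard: only deliver a result if nothing was sent yet.
    if not event.ready():
        event.send(value)
```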

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1246848/+subscriptions



[Yahoo-eng-team] [Bug 1257326] Re: nova quotas are used for floating IP even when neutron is used

2013-12-03 Thread Rob Raymond
** Changed in: horizon
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1257326

Title:
  nova quotas are used for floating IP even when neutron is used

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Horizon does not allow the user to allocate a floating IP address even if
the Neutron quota allows it.
  This is because it always uses Nova quotas.

  The project/overview page shows the correct limits in the pie charts, but
  the Access & Security floating IPs page has the "Allocate IP to Project" button
disabled based on a call to quotas.tenant_quota_usages(request)

  This code uses only nova to retrieve quotas/limits in
  tenant_quota_usages_from_limits from usage/quotas.py

  But what we would need to add is the logic from get_neutron_limits in
  usage/base.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1257326/+subscriptions



[Yahoo-eng-team] [Bug 1257038] Re: VMware: instance names can be edited, breaks nova-driver lookup

2013-12-03 Thread Shawn Hartsock
** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova
Milestone: None => icehouse-2

** Also affects: openstack-vmwareapi-team
   Importance: Undecided
   Status: New

** Changed in: openstack-vmwareapi-team
   Status: New => In Progress

** Changed in: openstack-vmwareapi-team
   Importance: Undecided => High

** Changed in: openstack-vmwareapi-team
 Assignee: (unassigned) => Sidharth Surana (ssurana)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1257038

Title:
  VMware: instance names can be edited, breaks nova-driver lookup

Status in OpenStack Compute (Nova):
  In Progress
Status in The OpenStack VMwareAPI subTeam:
  In Progress

Bug description:
  Currently the VMware Nova Driver relies on the VM name in vCenter/ESX
  to match the UUID in Nova. The name can be easily edited by vCenter
  administrators and break Nova administration of VMs. A better solution
  should be found allowing the Nova Compute Driver for vSphere to look
  up VMs by a less volatile and publicly visible mechanism.
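The lookup problem can be illustrated with plain Python stand-ins. This is a sketch only, not the vSphere API; the "nova_uuid" key stands in for whatever write-once property the driver would record at VM creation time.

```python
# Illustrative only: contrast the fragile name-based lookup with a lookup
# keyed on an identifier that administrators cannot edit from the UI.

def lookup_by_name(vms, instance_uuid):
    # Breaks as soon as an administrator renames the VM in vCenter.
    return next((vm for vm in vms if vm["name"] == instance_uuid), None)


def lookup_by_stored_uuid(vms, instance_uuid):
    # Survives renames: "nova_uuid" stands in for a property the driver
    # would write once at creation time and never change afterwards.
    return next((vm for vm in vms if vm["nova_uuid"] == instance_uuid), None)
```

A rename in vCenter defeats the first lookup but not the second, which is the gist of the fix the report asks for.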

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1257038/+subscriptions



[Yahoo-eng-team] [Bug 1257390] [NEW] races in assignment manager can cause spurious 404 when removing user from project

2013-12-03 Thread Peter Feiner
Public bug reported:

Similar kind of bug as described in bug #1246489.

When removing a user from a project, the assignment manager retrieves a
list of all roles the user has on the project, then removes each role.
Each (user, role, project) tuple is removed with a separate call into
the driver. If, before a particular role has been removed, that role is
deleted by another request calling into the manager (i.e., via
delete_role), the call into the driver by the user removal request will
raise a RoleNotFound exception and the request will return an HTTP 404
error. Furthermore, any roles in the list after the exceptional role
will not be deleted. Another call to Manager.remove_user_from_project
will remove the remaining roles.

The 404 can easily be avoided by wrapping the
driver.remove_role_from_user_and_project calls in a try/except block
that catches and ignores RoleNotFound.

Alternatively, a begin/end transaction interface could be added to the
driver. In its simplest form, this interface could be implemented by
serializing all transactions with a mutex. The SQL driver could
implement the interface with database transactions.
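The try/except approach can be sketched as follows. This is a minimal illustration with hypothetical class names mirroring the manager/driver split described in the report, not the actual Keystone code.

```python
class RoleNotFound(Exception):
    """Raised by the driver when a role has already been deleted."""


class Manager(object):
    """Toy stand-in for the assignment manager described above."""

    def __init__(self, driver):
        self.driver = driver

    def remove_user_from_project(self, tenant_id, user_id):
        # Tolerate a concurrent delete_role: skip roles that vanish
        # between listing and removal, and keep removing the rest.
        for role_id in self.driver.list_role_ids(user_id, tenant_id):
            try:
                self.driver.remove_role_from_user_and_project(
                    user_id, tenant_id, role_id)
            except RoleNotFound:
                pass
```

With this shape, a racing delete_role no longer surfaces as an HTTP 404, and the roles after the exceptional one are still removed in the same request.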

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1257390

Title:
  races in assignment manager can cause spurious 404 when removing user
  from project

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Similar kind of bug as described in bug #1246489.

  When removing a user from a project, the assignment manager retrieves
  a list of all roles the user has on the project, then removes each
  role. Each (user, role, project) tuple is removed with a separate call
  into the driver. If, before a particular role has been removed, that
  role is deleted by another request calling into the manager (i.e., via
  delete_role), the call into the driver by the user removal request
  will raise a RoleNotFound exception and the request will return an
  HTTP 404 error. Furthermore, any roles in the list after the
  exceptional role will not be deleted. Another call to
  Manager.remove_user_from_project will remove the remaining roles.

  The 404 can easily be avoided by wrapping the
  driver.remove_role_from_user_and_project calls in a try/except block
  that catches and ignores RoleNotFound.

  Alternatively, a begin/end transaction interface could be added to the
  driver. In its simplest form, this interface could be implemented by
  serializing all transactions with a mutex. The SQL driver could
  implement the interface with database transactions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1257390/+subscriptions



[Yahoo-eng-team] [Bug 1257301] Re: Bump hacking to 0.8

2013-12-03 Thread Sergio Cazzolato
** Also affects: python-openstackclient
   Importance: Undecided
   Status: New

** Changed in: python-openstackclient
 Assignee: (unassigned) => Sergio Cazzolato (sergio-j-cazzolato)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1257301

Title:
  Bump hacking to 0.8

Status in OpenStack Identity (Keystone):
  In Progress
Status in Python client library for Keystone:
  In Progress
Status in OpenStack Command Line Client:
  New
Status in Python client library for Swift:
  In Progress

Bug description:
  Because the hacking dependency has not been bumped to 0.8, some
  Python 3.x compatibility checks are not being run in the gate, which
  is letting code issues through.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1257301/+subscriptions



[Yahoo-eng-team] [Bug 1257396] [NEW] iso8601 DEBUG messages spam log

2013-12-03 Thread Peter Feiner
Public bug reported:

Useless DEBUG messages are printed every time iso8601 parses a date:

(iso8601.iso8601): 2013-12-03 12:47:12,924 DEBUG iso8601 parse_date Parsed 
2013-12-03T17:47:12Z into {'tz_sign': None, 'second_fraction': None, 'hour': 
'17', 'tz_hour': None, 'month': '12', 'timezone': 'Z', 'second': '12', 
'tz_minute': None, 'year': '2013', 'separator': 'T', 'day': '03', 'minute': 
'47'} with default timezone 
(iso8601.iso8601): 2013-12-03 12:47:12,924 DEBUG iso8601 to_int Got '2013' for 
'year' with default None
(iso8601.iso8601): 2013-12-03 12:47:12,925 DEBUG iso8601 to_int Got '12' for 
'month' with default None
(iso8601.iso8601): 2013-12-03 12:47:12,925 DEBUG iso8601 to_int Got '03' for 
'day' with default None
(iso8601.iso8601): 2013-12-03 12:47:12,925 DEBUG iso8601 to_int Got '17' for 
'hour' with default None
(iso8601.iso8601): 2013-12-03 12:47:12,925 DEBUG iso8601 to_int Got '47' for 
'minute' with default None
(iso8601.iso8601): 2013-12-03 12:47:12,925 DEBUG iso8601 to_int Got '12' for 
'second' with default None

The log level for iso8601 has been set to WARN in oslo-incubator:
https://github.com/openstack/oslo-incubator/commit/cbfded9c. This change
should be merged into keystone.
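The oslo-incubator fix referenced above boils down to raising the iso8601 logger's threshold. A minimal standalone sketch (not the actual oslo/keystone code) looks like this:

```python
import logging

# Even with the application root logger at DEBUG, capping the iso8601
# logger at WARN drops the per-parse DEBUG records quoted above.
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("iso8601").setLevel(logging.WARN)
```

Because child loggers such as iso8601.iso8601 inherit the effective level from their parent, this silences the spam without touching the rest of the logging configuration.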

** Affects: keystone
 Importance: Undecided
 Assignee: Peter Feiner (pete5)
 Status: In Progress

** Changed in: keystone
 Assignee: (unassigned) => Peter Feiner (pete5)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1257396

Title:
  iso8601 DEBUG messages spam log

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  Useless DEBUG messages are printed every time iso8601 parses a date:

  (iso8601.iso8601): 2013-12-03 12:47:12,924 DEBUG iso8601 parse_date Parsed 
2013-12-03T17:47:12Z into {'tz_sign': None, 'second_fraction': None, 'hour': 
'17', 'tz_hour': None, 'month': '12', 'timezone': 'Z', 'second': '12', 
'tz_minute': None, 'year': '2013', 'separator': 'T', 'day': '03', 'minute': 
'47'} with default timezone 
  (iso8601.iso8601): 2013-12-03 12:47:12,924 DEBUG iso8601 to_int Got '2013' 
for 'year' with default None
  (iso8601.iso8601): 2013-12-03 12:47:12,925 DEBUG iso8601 to_int Got '12' for 
'month' with default None
  (iso8601.iso8601): 2013-12-03 12:47:12,925 DEBUG iso8601 to_int Got '03' for 
'day' with default None
  (iso8601.iso8601): 2013-12-03 12:47:12,925 DEBUG iso8601 to_int Got '17' for 
'hour' with default None
  (iso8601.iso8601): 2013-12-03 12:47:12,925 DEBUG iso8601 to_int Got '47' for 
'minute' with default None
  (iso8601.iso8601): 2013-12-03 12:47:12,925 DEBUG iso8601 to_int Got '12' for 
'second' with default None

  The log level for iso8601 has been set to WARN in oslo-incubator:
  https://github.com/openstack/oslo-incubator/commit/cbfded9c. This
  change should be merged into keystone.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1257396/+subscriptions



[Yahoo-eng-team] [Bug 1257301] Re: Bump hacking to 0.8

2013-12-03 Thread Sergio Cazzolato
** No longer affects: python-openstackclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1257301

Title:
  Bump hacking to 0.8

Status in OpenStack Identity (Keystone):
  In Progress
Status in Python client library for Keystone:
  In Progress
Status in Python client library for Swift:
  In Progress

Bug description:
  Because the hacking dependency has not been bumped to 0.8, some
  Python 3.x compatibility checks are not being run in the gate, which
  is letting code issues through.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1257301/+subscriptions



[Yahoo-eng-team] [Bug 1257405] [NEW] Not checking image format produces lots of useless messages

2013-12-03 Thread Stanislaw Pitucha
Public bug reported:

The code for resizing partitionless images goes with the "tell, don't
ask" idea and attempts to run extfs / mount utilities on an image even
though they may fail. This produces lots of useless messages during the
instance preparation, like these:

2013-11-21 06:45:07 20902 DEBUG nova.virt.disk.mount.api 
[req-939d5d50-25ea-4f7a-8882-a880b9671e47 10873781609182 10816527907643] Failed 
to mount filesystem: Unexpected error while running command.
Command: sudo nova-rootwrap /etc/nova/rootwrap.conf mount /dev/nbd8 
/tmp/openstack-vfs-localfsSz6ylg
Exit code: 32
Stdout: ''
Stderr: 'mount: you must specify the filesystem type\n' mnt_dev 
/usr/lib/python2.7/dist-packages/nova/virt/disk/mount/api.py:198
2013-11-21 06:45:07 20902 DEBUG nova.virt.disk.mount.api 
[req-939d5d50-25ea-4f7a-8882-a880b9671e47 10873781609182 10816527907643] Fail 
to mount, tearing back down do_mount 
/usr/lib/python2.7/dist-packages/nova/virt/disk/mount/api.py:219
2013-11-21 06:45:07 20902 DEBUG nova.virt.disk.mount.api 
[req-939d5d50-25ea-4f7a-8882-a880b9671e47 10873781609182 10816527907643] Unmap 
dev /dev/nbd8 unmap_dev 
/usr/lib/python2.7/dist-packages/nova/virt/disk/mount/api.py:184
2013-11-21 06:45:07 20902 DEBUG nova.virt.disk.mount.nbd 
[req-939d5d50-25ea-4f7a-8882-a880b9671e47 10873781609182 10816527907643] 
Release nbd device /dev/nbd8 unget_dev 
/usr/lib/python2.7/dist-packages/nova/virt/disk/mount/nbd.py:128
2013-11-21 06:45:07 20902 DEBUG nova.openstack.common.processutils 
[req-939d5d50-25ea-4f7a-8882-a880b9671e47 10873781609182 10816527907643] 
Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf qemu-nbd 
-d /dev/nbd8 execute 
/usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py:147
2013-11-21 06:45:07 20902 DEBUG nova.virt.disk.vfs.localfs 
[req-939d5d50-25ea-4f7a-8882-a880b9671e47 10873781609182 10816527907643] Failed 
to mount image Failed to mount filesystem: Unexpected error while running 
command.
Command: sudo nova-rootwrap /etc/nova/rootwrap.conf mount /dev/nbd8 
/tmp/openstack-vfs-localfsSz6ylg
Exit code: 32
Stdout: ''
Stderr: 'mount: you must specify the filesystem type\n') setup 
/usr/lib/python2.7/dist-packages/nova/virt/disk/vfs/localfs.py:83
2013-11-21 06:45:07 20902 DEBUG nova.virt.disk.api 
[req-939d5d50-25ea-4f7a-8882-a880b9671e47 10873781609182 10816527907643] Unable 
to mount image 
/var/lib/nova/instances/5159cff0-91f6-4521-a0be-d74ce4f81fad/disk with error 
Failed to mount filesystem: Unexpected error while running command.
Command: sudo nova-rootwrap /etc/nova/rootwrap.conf mount /dev/nbd8 
/tmp/openstack-vfs-localfsSz6ylg
Exit code: 32
Stdout: ''
Stderr: 'mount: you must specify the filesystem type\n'. Cannot resize. 
is_image_partitionless 
/usr/lib/python2.7/dist-packages/nova/virt/disk/api.py:191

This could be fixed by doing a simple check on the image itself to pick
up the magic signature. This would make it possible to skip e2resize on
non-extfs files and to skip the plain mount attempt on partitioned
images, without the error messages.
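The magic-signature check suggested here can be sketched as follows, assuming a raw (non-qcow) image file. The ext2/3/4 superblock starts 1024 bytes into the device and carries the magic value 0xEF53 in its s_magic field at offset 56; the function name is mine, not Nova's.

```python
import struct

EXT_SUPERBLOCK_OFFSET = 1024  # the superblock starts 1 KiB into the image
EXT_MAGIC_OFFSET = 56         # offset of the s_magic field in the superblock
EXT_MAGIC = 0xEF53            # ext2/3/4 magic number (little-endian on disk)


def is_ext_filesystem(path):
    """Return True only if the raw image carries an ext2/3/4 superblock."""
    with open(path, "rb") as f:
        f.seek(EXT_SUPERBLOCK_OFFSET + EXT_MAGIC_OFFSET)
        raw = f.read(2)
    if len(raw) < 2:  # image too small to hold a superblock
        return False
    return struct.unpack("<H", raw)[0] == EXT_MAGIC
```

A cheap probe like this would let the resize path skip e2resize for non-ext images, and skip the direct mount attempt for partitioned images (which start with an MBR/GPT rather than a filesystem), instead of logging the failures above.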

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1257405

Title:
  Not checking image format produces lots of useless messages

Status in OpenStack Compute (Nova):
  New

Bug description:
  The code for resizing partitionless images goes with the "tell, don't
  ask" idea and attempts to run extfs / mount utilities on an image even
  though they may fail. This produces lots of useless messages during
  the instance preparation, like these:

  2013-11-21 06:45:07 20902 DEBUG nova.virt.disk.mount.api 
[req-939d5d50-25ea-4f7a-8882-a880b9671e47 10873781609182 10816527907643] Failed 
to mount filesystem: Unexpected error while running command.
  Command: sudo nova-rootwrap /etc/nova/rootwrap.conf mount /dev/nbd8 
/tmp/openstack-vfs-localfsSz6ylg
  Exit code: 32
  Stdout: ''
  Stderr: 'mount: you must specify the filesystem type\n' mnt_dev 
/usr/lib/python2.7/dist-packages/nova/virt/disk/mount/api.py:198
  2013-11-21 06:45:07 20902 DEBUG nova.virt.disk.mount.api 
[req-939d5d50-25ea-4f7a-8882-a880b9671e47 10873781609182 10816527907643] Fail 
to mount, tearing back down do_mount 
/usr/lib/python2.7/dist-packages/nova/virt/disk/mount/api.py:219
  2013-11-21 06:45:07 20902 DEBUG nova.virt.disk.mount.api 
[req-939d5d50-25ea-4f7a-8882-a880b9671e47 10873781609182 10816527907643] Unmap 
dev /dev/nbd8 unmap_dev 
/usr/lib/python2.7/dist-packages/nova/virt/disk/mount/api.py:184
  2013-11-21 06:45:07 20902 DEBUG nova.virt.disk.mount.nbd 
[req-939d5d50-25ea-4f7a-8882-a880b9671e47 10873781609182 10816527907643] 
Release nbd device /dev/nbd8 unget_dev 
/usr/lib/python2.7/dist-packages/nova/virt/disk/mount/nbd.py:128
  2013-11-21 06:45:07 20902 DEBUG nova.openstack.common.processutils 
[req-939d5d50-25ea-4f7a-8882-a880b9671e47 10873781609182 10816527907643] 
Running

[Yahoo-eng-team] [Bug 1257411] [NEW] Intermittent boot instance failure, "libvirt unable to read from monitor"

2013-12-03 Thread John Griffith
Public bug reported:

devstack install using master on precise intermittent failures when
trying to boot instances.  (cirros image, flavor 1).  Typically simply
running again this will work.  n-cpu logs contain the following trace:

2013-12-03 11:11:01.124 DEBUG nova.compute.manager 
[req-a228c55f-5050-4324-9853-6e72036e9449 demo demo] [instance: 
e7b65a0e-d9cf-4d4e-939c-5424e45c3d96] Re-scheduling run_instance: attempt 1 
from (pid=30610) _reschedule /opt/stack/nova/nova/compute/man
ager.py:1167
2013-12-03 11:11:01.124 DEBUG nova.openstack.common.rpc.amqp 
[req-a228c55f-5050-4324-9853-6e72036e9449 demo demo] Making synchronous call on 
conductor ... from (pid=30610) multicall 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:553
2013-12-03 11:11:01.125 DEBUG nova.openstack.common.rpc.amqp 
[req-a228c55f-5050-4324-9853-6e72036e9449 demo demo] MSG_ID is 
7b83e1059204445ba23ed876943eea2d from (pid=30610) multicall 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:556
2013-12-03 11:11:01.125 DEBUG nova.openstack.common.rpc.amqp 
[req-a228c55f-5050-4324-9853-6e72036e9449 demo demo] UNIQUE_ID is 
67de22630ca94eee9f409ee8aeaece1c. from (pid=30610) _add_unique_id 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:341
2013-12-03 11:11:01.237 DEBUG nova.openstack.common.lockutils 
[req-a228c55f-5050-4324-9853-6e72036e9449 demo demo] Got semaphore 
"compute_resources" from (pid=30610) lock 
/opt/stack/nova/nova/openstack/common/lockutils.py:167
2013-12-03 11:11:01.237 DEBUG nova.openstack.common.lockutils 
[req-a228c55f-5050-4324-9853-6e72036e9449 demo demo] Got semaphore / lock 
"update_usage" from (pid=30610) inner 
/opt/stack/nova/nova/openstack/common/lockutils.py:247
2013-12-03 11:11:01.237 DEBUG nova.openstack.common.lockutils 
[req-a228c55f-5050-4324-9853-6e72036e9449 demo demo] Semaphore / lock released 
"update_usage" from (pid=30610) inner 
/opt/stack/nova/nova/openstack/common/lockutils.py:251
2013-12-03 11:11:01.239 DEBUG nova.openstack.common.rpc.amqp 
[req-a228c55f-5050-4324-9853-6e72036e9449 demo demo] Making asynchronous cast 
on scheduler... from (pid=30610) cast 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:582
2013-12-03 11:11:01.239 DEBUG nova.openstack.common.rpc.amqp 
[req-a228c55f-5050-4324-9853-6e72036e9449 demo demo] UNIQUE_ID is 
ce107999c53949fa8aef7d13586a3d5a. from (pid=30610) _add_unique_id 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:341
2013-12-03 11:11:01.247 ERROR nova.compute.manager 
[req-a228c55f-5050-4324-9853-6e72036e9449 demo demo] [instance: 
e7b65a0e-d9cf-4d4e-939c-5424e45c3d96] Error: Unable to read from monitor: 
Connection reset by peer
2013-12-03 11:11:01.247 TRACE nova.compute.manager [instance: 
e7b65a0e-d9cf-4d4e-939c-5424e45c3d96] Traceback (most recent call last):
2013-12-03 11:11:01.247 TRACE nova.compute.manager [instance: 
e7b65a0e-d9cf-4d4e-939c-5424e45c3d96]   File 
"/opt/stack/nova/nova/compute/manager.py", line 1049, in _build_instance
2013-12-03 11:11:01.247 TRACE nova.compute.manager [instance: 
e7b65a0e-d9cf-4d4e-939c-5424e45c3d96] set_access_ip=set_access_ip)
2013-12-03 11:11:01.247 TRACE nova.compute.manager [instance: 
e7b65a0e-d9cf-4d4e-939c-5424e45c3d96]   File 
"/opt/stack/nova/nova/compute/manager.py", line 1453, in _spawn
2013-12-03 11:11:01.247 TRACE nova.compute.manager [instance: 
e7b65a0e-d9cf-4d4e-939c-5424e45c3d96] LOG.exception(_('Instance failed to 
spawn'), instance=instance)
2013-12-03 11:11:01.247 TRACE nova.compute.manager [instance: 
e7b65a0e-d9cf-4d4e-939c-5424e45c3d96]   File 
"/opt/stack/nova/nova/compute/manager.py", line 1450, in _spawn
2013-12-03 11:11:01.247 TRACE nova.compute.manager [instance: 
e7b65a0e-d9cf-4d4e-939c-5424e45c3d96] block_device_info)
2013-12-03 11:11:01.247 TRACE nova.compute.manager [instance: 
e7b65a0e-d9cf-4d4e-939c-5424e45c3d96]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 2161, in spawn
2013-12-03 11:11:01.247 TRACE nova.compute.manager [instance: 
e7b65a0e-d9cf-4d4e-939c-5424e45c3d96] block_device_info)
2013-12-03 11:11:01.247 TRACE nova.compute.manager [instance: 
e7b65a0e-d9cf-4d4e-939c-5424e45c3d96]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 3395, in 
_create_domain_and_network
2013-12-03 11:11:01.247 TRACE nova.compute.manager [instance: 
e7b65a0e-d9cf-4d4e-939c-5424e45c3d96] domain = self._create_domain(xml, 
instance=instance, power_on=power_on)
2013-12-03 11:11:01.247 TRACE nova.compute.manager [instance: 
e7b65a0e-d9cf-4d4e-939c-5424e45c3d96]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 3338, in _create_domain
2013-12-03 11:11:01.247 TRACE nova.compute.manager [instance: 
e7b65a0e-d9cf-4d4e-939c-5424e45c3d96] domain.XMLDesc(0))
2013-12-03 11:11:01.247 TRACE nova.compute.manager [instance: 
e7b65a0e-d9cf-4d4e-939c-5424e45c3d96]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line , in _create_domain
2013-12-03 11:11:01.247 TRACE nova.compute.manager [instance: 
e7b65a0e-d9cf-4d4e-939c-5424e45c3d96] d

[Yahoo-eng-team] [Bug 1257420] [NEW] boot instance fails, libvirt unable to allocate memory

2013-12-03 Thread John Griffith
Public bug reported:

Intermittent failures trying to boot an instance using devstack/master
on precise VM.  In most cases deleting the failed instance and retrying
the boot command seems to work.

2013-12-03 11:28:24.514 DEBUG nova.compute.manager 
[req-2f0c2c13-c726-4738-a541-60dbcf6e5ea4 demo demo] [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236] Re-scheduling run_instance: attempt 1 
from (pid=5873) _reschedule /opt/stack/nova/nova/compute/mana
ger.py:1167
2013-12-03 11:28:24.514 DEBUG nova.openstack.common.rpc.amqp 
[req-2f0c2c13-c726-4738-a541-60dbcf6e5ea4 demo demo] Making synchronous call on 
conductor ... from (pid=5873) multicall 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:553
2013-12-03 11:28:24.514 DEBUG nova.openstack.common.rpc.amqp 
[req-2f0c2c13-c726-4738-a541-60dbcf6e5ea4 demo demo] MSG_ID is 
ea9adfa2f6564cd193d6baec7bf7f8a3 from (pid=5873) multicall 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:556
2013-12-03 11:28:24.515 DEBUG nova.openstack.common.rpc.amqp 
[req-2f0c2c13-c726-4738-a541-60dbcf6e5ea4 demo demo] UNIQUE_ID is 
33300a17273f4529bd36156c4406ada3. from (pid=5873) _add_unique_id 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:341
2013-12-03 11:28:24.627 DEBUG nova.openstack.common.lockutils 
[req-2f0c2c13-c726-4738-a541-60dbcf6e5ea4 demo demo] Got semaphore 
"compute_resources" from (pid=5873) lock 
/opt/stack/nova/nova/openstack/common/lockutils.py:167
2013-12-03 11:28:24.628 DEBUG nova.openstack.common.lockutils 
[req-2f0c2c13-c726-4738-a541-60dbcf6e5ea4 demo demo] Got semaphore / lock 
"update_usage" from (pid=5873) inner 
/opt/stack/nova/nova/openstack/common/lockutils.py:247
2013-12-03 11:28:24.628 DEBUG nova.openstack.common.lockutils 
[req-2f0c2c13-c726-4738-a541-60dbcf6e5ea4 demo demo] Semaphore / lock released 
"update_usage" from (pid=5873) inner 
/opt/stack/nova/nova/openstack/common/lockutils.py:251
2013-12-03 11:28:24.630 DEBUG nova.openstack.common.rpc.amqp 
[req-2f0c2c13-c726-4738-a541-60dbcf6e5ea4 demo demo] Making asynchronous cast 
on scheduler... from (pid=5873) cast 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:582
2013-12-03 11:28:24.630 DEBUG nova.openstack.common.rpc.amqp 
[req-2f0c2c13-c726-4738-a541-60dbcf6e5ea4 demo demo] UNIQUE_ID is 
501ebe16dd814daaa37c648f8f9848df. from (pid=5873) _add_unique_id 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:341
2013-12-03 11:28:24.642 ERROR nova.compute.manager 
[req-2f0c2c13-c726-4738-a541-60dbcf6e5ea4 demo demo] [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236] Error: internal error process exited 
while connecting to monitor: char device redirected to /dev/pt
s/30
Failed to allocate 536870912 B: Cannot allocate memory

2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236] Traceback (most recent call last):
2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236]   File 
"/opt/stack/nova/nova/compute/manager.py", line 1049, in _build_instance
2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236] set_access_ip=set_access_ip)
2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236]   File 
"/opt/stack/nova/nova/compute/manager.py", line 1453, in _spawn
2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236] LOG.exception(_('Instance failed to 
spawn'), instance=instance)
2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236]   File 
"/opt/stack/nova/nova/compute/manager.py", line 1450, in _spawn
2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236] block_device_info)
2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 2161, in spawn
2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236] block_device_info)
2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 3395, in 
_create_domain_and_network
2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236] domain = self._create_domain(xml, 
instance=instance, power_on=power_on)
2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 3338, in _create_domain
2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236] domain.XMLDesc(0))
2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line , in _create_domain
2013-12-03 11:28:24.642 TRACE nova.compute.manager

[Yahoo-eng-team] [Bug 1257424] [NEW] Spelling miss in the code

2013-12-03 Thread Nachi Ueno
Public bug reported:

It looks like there are 53 misspellings in the code, based on the
misspell tool (https://github.com/lyda/misspell-check):

http://paste.openstack.org/show/54380/

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1257424

Title:
  Spelling miss in the code

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  It looks like there are 53 misspellings in the code, based on the
  misspell tool (https://github.com/lyda/misspell-check):

  http://paste.openstack.org/show/54380/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1257424/+subscriptions



[Yahoo-eng-team] [Bug 1257301] Re: Bump hacking to 0.8

2013-12-03 Thread Sergio Cazzolato
** Also affects: oslo.messaging
   Importance: Undecided
   Status: New

** Changed in: oslo.messaging
 Assignee: (unassigned) => Sergio Cazzolato (sergio-j-cazzolato)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1257301

Title:
  Bump hacking to 0.8

Status in OpenStack Identity (Keystone):
  In Progress
Status in Messaging API for OpenStack:
  New
Status in Python client library for Keystone:
  In Progress
Status in Python client library for Swift:
  In Progress

Bug description:
  Because the hacking dependency has not been bumped to 0.8, some
  Python 3.x compatibility checks are not being run in the gate, which
  is letting code issues through.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1257301/+subscriptions



[Yahoo-eng-team] [Bug 1257301] Re: Bump hacking to 0.8

2013-12-03 Thread Sergio Cazzolato
** No longer affects: oslo.messaging

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1257301

Title:
  Bump hacking to 0.8

Status in OpenStack Identity (Keystone):
  In Progress
Status in Python client library for Keystone:
  In Progress
Status in Python client library for Swift:
  In Progress

Bug description:
  Because the hacking dependency has not been bumped to 0.8, some
  Python 3.x compatibility checks are not being run in the gate, which
  is letting code issues through.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1257301/+subscriptions



[Yahoo-eng-team] [Bug 1257446] [NEW] Creating a dual-stacked network causes dhcp for both stacks to fail

2013-12-03 Thread Anthony Veiga
Public bug reported:

Currently, we are running Havana in a lab with L2 provider networks.
Upstream connectivity is via 802.1Q tags, and we are using dnsmasq 2.59
(compile-time options IPv6 GNU-getopt DBus i18n DHCP TFTP conntrack IDN).
Creating an IPv4-only network works fine: we create a shared (provider)
network and an IPv4 subnet, and instances are brought up as expected.
However, upon adding a second subnet with an IPv6 scope to this network,
all new instances fail to receive DHCP for IPv4.  The following lines
appear in the devstack q-dhcp output:
http://paste.openstack.org/show/54386/

** Affects: neutron
 Importance: Undecided
 Assignee: Openstack@Comcast (comcast-openstack)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Openstack@Comcast (comcast-openstack)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1257446

Title:
  Creating a dual-stacked network causes dhcp for both stacks to fail

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Currently, we are running Havana in a lab with L2 provider networks.
  Upstream connectivity is via 802.1Q tags, and we are using dnsmasq 2.59
  (compile-time options IPv6 GNU-getopt DBus i18n DHCP TFTP conntrack
  IDN).  Creating an IPv4-only network works fine: we create a shared
  (provider) network and an IPv4 subnet, and instances are brought up as
  expected.  However, upon adding a second subnet with an IPv6 scope to
  this network, all new instances fail to receive DHCP for IPv4.
  The following lines appear in the devstack q-dhcp output:
  http://paste.openstack.org/show/54386/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1257446/+subscriptions



[Yahoo-eng-team] [Bug 1257467] [NEW] extra_dhcp_opts allows empty strings

2013-12-03 Thread dkehn
Public bug reported:

the extra_dhcp_opts extension allows an empty string '  ' as an
option_value, which has caused dnsmasq to segfault when it encounters a
tag:ece4c8aa-15c9-4f6b-8c42-7d4e285734bf,option:server-ip-address,
line in the opts file.

Checks are needed in the create and update portions of the
extra_dhcp_opts extension to prevent this.

** Affects: neutron
 Importance: Undecided
 Assignee: dkehn (dekehn)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1257467

Title:
  extra_dhcp_opts allows empty strings

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  the extra_dhcp_opts extension allows an empty string '  ' as an
  option_value, which has caused dnsmasq to segfault when it encounters a
  tag:ece4c8aa-15c9-4f6b-8c42-7d4e285734bf,option:server-ip-address,
  line in the opts file.

  Checks are needed in the create and update portions of the
  extra_dhcp_opts extension to prevent this.
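  The kind of check described above can be sketched as follows (the
  function and dict-key names here are illustrative, not Neutron's
  actual extension API):

```python
def validate_extra_dhcp_opts(opts):
    """Reject extra_dhcp_opts entries whose value is empty or whitespace.

    A blank opt_value ends up as a dangling "option:..." line in the
    dnsmasq opts file, which has been seen to crash dnsmasq.
    """
    for opt in opts:
        value = opt.get('opt_value')
        if value is None or not value.strip():
            raise ValueError(
                "extra_dhcp_opt %r has an empty value" % opt.get('opt_name'))


# The second list would have produced an empty opts-file value.
good = [{'opt_name': 'server-ip-address', 'opt_value': '192.0.2.1'}]
bad = [{'opt_name': 'server-ip-address', 'opt_value': '  '}]

validate_extra_dhcp_opts(good)  # passes silently
try:
    validate_extra_dhcp_opts(bad)
except ValueError as exc:
    print(exc)
```

  Running the same check from both the create and update code paths
  closes the hole in one place.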

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1257467/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257498] [NEW] Glance v1: Creating/updating image with a malformed location uri causes 500

2013-12-03 Thread Alex Meade
Public bug reported:


glance image-create --name bad-location --disk-format=vhd --container-
format=ovf --location="swift+http://bah"

Request returned failure status.
HTTPInternalServerError (HTTP 500)


2013-12-03 21:26:16.684 6312 INFO glance.wsgi.server 
[eee7679e-6710-4487-9f55-fae7ba4a7aaa 1c3848b015f94b70866e
a33fa52945f0 54bc4959075343ff80f460b77e783a49] Traceback (most recent call 
last):
  File "/usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py", line 389, in 
handle_one_response
result = self.application(self.environ, start_response)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in 
__call__
resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in 
call_func
return self.func(req, *args, **kwargs)
  File "/opt/stack/glance/glance/common/wsgi.py", line 367, in __call__
response = req.get_response(self.application)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1296, in 
send
application, catch_exc_info=False)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1260, in 
call_application
app_iter = application(self.environ, start_response)
  File 
"/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py", 
line 581, in __call__
return self.app(env, start_response)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in 
__call__
resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in 
call_func
return self.func(req, *args, **kwargs)
  File "/opt/stack/glance/glance/common/wsgi.py", line 367, in __call__
response = req.get_response(self.application)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1296, in 
send
application, catch_exc_info=False)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1260, in 
call_application
app_iter = application(self.environ, start_response)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in 
__call__
resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in 
call_func
return self.func(req, *args, **kwargs)
  File "/opt/stack/glance/glance/common/wsgi.py", line 367, in __call__
response = req.get_response(self.application)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1296, in 
send
application, catch_exc_info=False)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1260, in 
call_application
app_iter = application(self.environ, start_response)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in 
__call__
resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in 
call_func
return self.func(req, *args, **kwargs)
  File "/opt/stack/glance/glance/common/wsgi.py", line 367, in __call__
response = req.get_response(self.application)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1296, in 
send
application, catch_exc_info=False)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1260, in 
call_application
app_iter = application(self.environ, start_response)
  File "/usr/lib/python2.7/dist-packages/paste/urlmap.py", line 203, in __call__
return app(environ, start_response)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in 
__call__
return resp(environ, start_response)
  File "/usr/lib/python2.7/dist-packages/routes/middleware.py", line 131, in 
__call__
response = self.app(environ, start_response)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in 
__call__
return resp(environ, start_response)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in 
__call__
resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in 
call_func
return self.func(req, *args, **kwargs)
  File "/opt/stack/glance/glance/common/wsgi.py", line 599, in __call__
request, **action_args)
  File "/opt/stack/glance/glance/common/wsgi.py", line 618, in dispatch
return method(*args, **kwargs)
  File "/opt/stack/glance/glance/common/utils.py", line 422, in wrapped
return func(self, req, *args, **kwargs)
  File "/opt/stack/glance/glance/api/v1/images.py", line 754, in create
image_meta = self._reserve(req, image_meta)
  File "/opt/stack/glance/glance/api/v1/images.py", line 488, in _reserve
store = get_store_from_location(location)
  File "/opt/stack/glance/glance/store/__init__.py", line 263, in 
get_store_from_location
loc = location.get_location_from_uri(uri)
  File "/opt/stack/glance/glance/store/location.py", line 76, in 
get_location_from_uri
store_location_class=scheme_info['location_c

[Yahoo-eng-team] [Bug 1257496] [NEW] Glance v1: Creating image with bad scheme in location causes 500

2013-12-03 Thread Alex Meade
Public bug reported:

When creating an image in glance v1 and specifying the location with a
bad scheme, you receive an HTTP 500. In this case 'http+swift' is a bad
scheme.

glance image-create --name bad-location --disk-format=vhd --container-
format=ovf --location="http+swift://bah"

Request returned failure status.
HTTPInternalServerError (HTTP 500)

2013-12-03 21:24:32.009 6312 INFO glance.wsgi.server 
[402e831a-935d-4e14-b4c8-64653c14263d 1c3848b015f94b70866e
a33fa52945f0 54bc4959075343ff80f460b77e783a49] Traceback (most recent call 
last):
  File "/usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py", line 389, in 
handle_one_response
result = self.application(self.environ, start_response)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in 
__call__
resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in 
call_func
return self.func(req, *args, **kwargs)
  File "/opt/stack/glance/glance/common/wsgi.py", line 367, in __call__
response = req.get_response(self.application)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1296, in 
send
application, catch_exc_info=False)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1260, in 
call_application
app_iter = application(self.environ, start_response)
  File 
"/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py", 
line 581, in __call__
return self.app(env, start_response)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in 
__call__
resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in 
call_func
return self.func(req, *args, **kwargs)
  File "/opt/stack/glance/glance/common/wsgi.py", line 367, in __call__
response = req.get_response(self.application)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1296, in 
send
application, catch_exc_info=False)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1260, in 
call_application
app_iter = application(self.environ, start_response)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in 
__call__
resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in 
call_func
return self.func(req, *args, **kwargs)
  File "/opt/stack/glance/glance/common/wsgi.py", line 367, in __call__
response = req.get_response(self.application)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1296, in 
send
application, catch_exc_info=False)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1260, in 
call_application
app_iter = application(self.environ, start_response)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in 
__call__
resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in 
call_func
return self.func(req, *args, **kwargs)
  File "/opt/stack/glance/glance/common/wsgi.py", line 367, in __call__
response = req.get_response(self.application)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1296, in 
send
application, catch_exc_info=False)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1260, in 
call_application
app_iter = application(self.environ, start_response)
  File "/usr/lib/python2.7/dist-packages/paste/urlmap.py", line 203, in __call__
return app(environ, start_response)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in 
__call__
return resp(environ, start_response)
  File "/usr/lib/python2.7/dist-packages/routes/middleware.py", line 131, in 
__call__
response = self.app(environ, start_response)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in 
__call__
return resp(environ, start_response)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in 
__call__
resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in 
call_func
return self.func(req, *args, **kwargs)
  File "/opt/stack/glance/glance/common/wsgi.py", line 599, in __call__
request, **action_args)
  File "/opt/stack/glance/glance/common/wsgi.py", line 618, in dispatch
return method(*args, **kwargs)
  File "/opt/stack/glance/glance/common/utils.py", line 422, in wrapped
return func(self, req, *args, **kwargs)
  File "/opt/stack/glance/glance/api/v1/images.py", line 754, in create
image_meta = self._reserve(req, image_meta)
  File "/opt/stack/glance/glance/api/v1/images.py", line 488, in _reserve
store = get_store_from_location(location)
  File "/opt/stack/glance/glance/store/__init__.py", line 263, in 
get_store_from_location
loc = location.get_location_fro

[Yahoo-eng-team] [Bug 1253497] target

2013-12-03 Thread Thierry Carrez
affects savanna
 status fixreleased



** Changed in: savanna
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1253497

Title:
  Replace uuidutils.generate_uuid() with str(uuid.uuid4())

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in Orchestration API (Heat):
  New
Status in Ironic (Bare Metal Provisioning):
  Fix Committed
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  In Progress
Status in OpenStack Data Processing (Savanna):
  Fix Released
Status in Trove - Database as a Service:
  In Progress

Bug description:
  http://lists.openstack.org/pipermail/openstack-
  dev/2013-November/018980.html

  
  > Hi all,
  >
  > We had a discussion of the modules that are incubated in Oslo.
  >
  > https://etherpad.openstack.org/p/icehouse-oslo-status
  >
  > One of the conclusions we came to was to deprecate/remove uuidutils in
  > this cycle.
  >
  > The first step into this change should be to remove generate_uuid() from
  > uuidutils.
  >
  > The reason is that 1) generating the UUID string seems trivial enough to
  > not need a function and 2) string representation of uuid4 is not what we
  > want in all projects.
  >
  > To address this, a patch is now on gerrit.
  > https://review.openstack.org/#/c/56152/
  >
  > Each project should directly use the standard uuid module or implement its
  > own helper function to generate uuids if this patch gets in.
  >
  > Any thoughts on this change? Thanks.
  >

  Unfortunately it looks like that change went through before I caught up on
  email. Shouldn't we have removed its use in the downstream projects (at
  least integrated projects) before removing it from Oslo?

  Doug
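  The replacement really is trivial; the removed helper amounted to a
  one-line wrapper over the stdlib (sketch, not the exact oslo source):

```python
import uuid


# Roughly what uuidutils.generate_uuid() provided:
def generate_uuid():
    return str(uuid.uuid4())


# Projects can call the stdlib directly instead:
new_id = str(uuid.uuid4())

# Both forms yield the canonical 36-character hyphenated representation.
assert len(new_id) == 36 and new_id.count('-') == 4
```

  Point 2 in the thread is that some projects may prefer the raw
  uuid.UUID object or a different string form, which a shared helper
  cannot decide for them.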

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1253497/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257507] [NEW] Glance v2: HTTP500 when updating image with locations

2013-12-03 Thread Alex Meade
Public bug reported:

curl -i -X PATCH -H "X-Auth-Token: $AUTH_TOKEN" -H 'Content-Type:
application/openstack-images-v2.1-json-patch' -H 'User-Agent: python-
glanceclient' -d '[{"path": "/locations", "value":
[{"url":"swift+http://service:glance:password@localhost:5000/v2.0/glance
/d0d90e9b-82f2-43c4-9e12-232de00fa8ea", "metadata": {}}], "op":
"replace"}]'
http://localhost:9292/v2/images/7b724ba6-6451-4280-85e4-1c46b3e6e5b5

HTTP/1.1 500 Internal Server Error


2013-12-03 22:14:39.298 12510 INFO glance.wsgi.server 
[85024f66-9dd2-4289-98a2-f4b9f6426aae 1c3848b015f94b70866ea33fa52945f0 
54bc4959075343ff80f460b77e783a49] Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py", line 389, in 
handle_one_response
result = self.application(self.environ, start_response)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in 
__call__
resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in 
call_func
return self.func(req, *args, **kwargs)
  File "/opt/stack/glance/glance/common/wsgi.py", line 367, in __call__
response = req.get_response(self.application)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1296, in 
send
application, catch_exc_info=False)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1260, in 
call_application
app_iter = application(self.environ, start_response)
  File 
"/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py", 
line 581, in __call__
return self.app(env, start_response)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in 
__call__
resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in 
call_func
return self.func(req, *args, **kwargs)
  File "/opt/stack/glance/glance/common/wsgi.py", line 367, in __call__
response = req.get_response(self.application)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1296, in 
send
application, catch_exc_info=False)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1260, in 
call_application
app_iter = application(self.environ, start_response)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in 
__call__
resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in 
call_func
return self.func(req, *args, **kwargs)
  File "/opt/stack/glance/glance/common/wsgi.py", line 367, in __call__
response = req.get_response(self.application)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1296, in 
send
application, catch_exc_info=False)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1260, in 
call_application
app_iter = application(self.environ, start_response)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in 
__call__
resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in 
call_func
return self.func(req, *args, **kwargs)
  File "/opt/stack/glance/glance/common/wsgi.py", line 367, in __call__
response = req.get_response(self.application)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1296, in 
send
application, catch_exc_info=False)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1260, in 
call_application
app_iter = application(self.environ, start_response)
  File "/usr/lib/python2.7/dist-packages/paste/urlmap.py", line 203, in __call__
return app(environ, start_response)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in 
__call__
return resp(environ, start_response)
  File "/usr/lib/python2.7/dist-packages/routes/middleware.py", line 131, in 
__call__
response = self.app(environ, start_response)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in 
__call__
return resp(environ, start_response)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in 
__call__
resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in 
call_func
return self.func(req, *args, **kwargs)
  File "/opt/stack/glance/glance/common/wsgi.py", line 599, in __call__
request, **action_args)
  File "/opt/stack/glance/glance/common/wsgi.py", line 618, in dispatch
return method(*args, **kwargs)
  File "/opt/stack/glance/glance/common/utils.py", line 422, in wrapped
return func(self, req, *args, **kwargs)
  File "/opt/stack/glance/glance/api/v2/images.py", line 119, in update
change_method(req, image, change)
  File "/opt/stack/glance/glance/api/v2/images.py", line 149, in _do_replace
self._do_replace_locations(image, value)
  File "/opt/stack/gla

[Yahoo-eng-team] [Bug 1257509] [NEW] swiftclient ERROR in g-api after successful tempest run

2013-12-03 Thread David Kranz
Public bug reported:

From the log file from this change
http://logs.openstack.org/33/59533/1/gate/gate-tempest-dsvm-
full/1f2c988/console.html

2013-12-03 21:30:31.851 22827 ERROR swiftclient [-] Object GET failed: 
http://127.0.0.1:8080/v1/AUTH_4d22a858761e4b90b536f489ccff34ca/glance/a6c33fc7-4871-45f7-8b3c-fd0a7452cea0
 404 Not Found  [first 60 chars of response] Not FoundThe 
resource could not be found.<
2013-12-03 21:30:31.851 22827 TRACE swiftclient Traceback (most recent call 
last):
2013-12-03 21:30:31.851 22827 TRACE swiftclient   File 
"/opt/stack/new/python-swiftclient/swiftclient/client.py", line 1122, in _retry
2013-12-03 21:30:31.851 22827 TRACE swiftclient rv = func(self.url, 
self.token, *args, **kwargs)
2013-12-03 21:30:31.851 22827 TRACE swiftclient   File 
"/opt/stack/new/python-swiftclient/swiftclient/client.py", line 760, in 
get_object
2013-12-03 21:30:31.851 22827 TRACE swiftclient http_response_content=body)
2013-12-03 21:30:31.851 22827 TRACE swiftclient ClientException: Object GET 
failed: 
http://127.0.0.1:8080/v1/AUTH_4d22a858761e4b90b536f489ccff34ca/glance/a6c33fc7-4871-45f7-8b3c-fd0a7452cea0
 404 Not Found  [first 60 chars of response] Not FoundThe 
resource could not be found.<
2013-12-03 21:30:31.851 22827 TRACE swiftclient 
2013-12-03 21:30:31.851 22827 WARNING glance.store.swift 
[c841d109-8752-491b-acf0-d3f0da72d69e 724396572e324829aefb01ff24bd746e 
928f762a2fe54ffea18fc270c1292920] Swift could not find object 
a6c33fc7-4871-45f7-8b3c-fd0a7452cea0.
2013-12-03 21:30:31.854 22827 INFO glance.wsgi.server 
[c841d109-8752-491b-acf0-d3f0da72d69e 724396572e324829aefb01ff24bd746e 
928f762a2fe54ffea18fc270c1292920] 127.0.0.1 - - [03/Dec/2013 21:30:31] "GET 
/v1/images/a6c33fc7-4871-45f7-8b3c-fd0a7452cea0 HTTP/1.1" 404 294 1.760652

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1257509

Title:
  swiftclient ERROR in g-api after successful tempest run

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  From the log file from this change
  http://logs.openstack.org/33/59533/1/gate/gate-tempest-dsvm-
  full/1f2c988/console.html

  2013-12-03 21:30:31.851 22827 ERROR swiftclient [-] Object GET failed: 
http://127.0.0.1:8080/v1/AUTH_4d22a858761e4b90b536f489ccff34ca/glance/a6c33fc7-4871-45f7-8b3c-fd0a7452cea0
 404 Not Found  [first 60 chars of response] Not FoundThe 
resource could not be found.<
  2013-12-03 21:30:31.851 22827 TRACE swiftclient Traceback (most recent call 
last):
  2013-12-03 21:30:31.851 22827 TRACE swiftclient   File 
"/opt/stack/new/python-swiftclient/swiftclient/client.py", line 1122, in _retry
  2013-12-03 21:30:31.851 22827 TRACE swiftclient rv = func(self.url, 
self.token, *args, **kwargs)
  2013-12-03 21:30:31.851 22827 TRACE swiftclient   File 
"/opt/stack/new/python-swiftclient/swiftclient/client.py", line 760, in 
get_object
  2013-12-03 21:30:31.851 22827 TRACE swiftclient 
http_response_content=body)
  2013-12-03 21:30:31.851 22827 TRACE swiftclient ClientException: Object GET 
failed: 
http://127.0.0.1:8080/v1/AUTH_4d22a858761e4b90b536f489ccff34ca/glance/a6c33fc7-4871-45f7-8b3c-fd0a7452cea0
 404 Not Found  [first 60 chars of response] Not FoundThe 
resource could not be found.<
  2013-12-03 21:30:31.851 22827 TRACE swiftclient 
  2013-12-03 21:30:31.851 22827 WARNING glance.store.swift 
[c841d109-8752-491b-acf0-d3f0da72d69e 724396572e324829aefb01ff24bd746e 
928f762a2fe54ffea18fc270c1292920] Swift could not find object 
a6c33fc7-4871-45f7-8b3c-fd0a7452cea0.
  2013-12-03 21:30:31.854 22827 INFO glance.wsgi.server 
[c841d109-8752-491b-acf0-d3f0da72d69e 724396572e324829aefb01ff24bd746e 
928f762a2fe54ffea18fc270c1292920] 127.0.0.1 - - [03/Dec/2013 21:30:31] "GET 
/v1/images/a6c33fc7-4871-45f7-8b3c-fd0a7452cea0 HTTP/1.1" 404 294 1.760652

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1257509/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257514] [NEW] pool.waitall needed in dhcp-agent sync_state

2013-12-03 Thread Ed Bak
Public bug reported:

While debugging an issue in the dhcp-agent, I noticed that it was
difficult to determine when sync_state completes.  I would like to add
the following pool.waitall and LOG message to the sync_state method.

for network in active_networks:
    pool.spawn_n(self.safe_configure_dhcp_for_network, network)
pool.waitall()
LOG.info(_('Synchronizing state complete'))

** Affects: neutron
 Importance: Undecided
 Assignee: Ed Bak (ed-bak2)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Ed Bak (ed-bak2)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1257514

Title:
  pool.waitall needed in dhcp-agent sync_state

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  While debugging an issue in the dhcp-agent, I noticed that it was
  difficult to determine when sync_state completes.  I would like to add
  the following pool.waitall and LOG message to the sync_state method.

  for network in active_networks:
      pool.spawn_n(self.safe_configure_dhcp_for_network, network)
  pool.waitall()
  LOG.info(_('Synchronizing state complete'))
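  The effect of the proposed waitall() can be illustrated with the
  stdlib ThreadPoolExecutor standing in for eventlet's GreenPool (an
  assumption for illustration; the agent itself uses eventlet, and the
  worker function here is a stub):

```python
import concurrent.futures
import threading

done = []
lock = threading.Lock()


def safe_configure_dhcp_for_network(network):
    # Stand-in for the per-network configuration work.
    with lock:
        done.append(network)


active_networks = ['net-a', 'net-b', 'net-c']

with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(safe_configure_dhcp_for_network, n)
               for n in active_networks]
    # Analogue of pool.waitall(): block until every spawned task finishes,
    # so the "complete" log line cannot race ahead of the workers.
    concurrent.futures.wait(futures)

print('Synchronizing state complete: %d networks' % len(done))
```

  Without the wait, the completion message can be logged while networks
  are still being configured, which is exactly the debugging ambiguity
  described above.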

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1257514/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257517] [NEW] Incorrect tenant id returned to neutron when deleting a subnet

2013-12-03 Thread Bilal Ahmad
Public bug reported:

Create a new user through Horizon, create a network and subnet in it.
Launch instances on the network created. Now, when you try to delete the
subnet (after deleting VMs), a different tenant-id is returned. You can
see from logs that a subnet with wrong tenant-id gets deleted.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1257517

Title:
  Incorrect tenant id returned to neutron when deleting a subnet

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Create a new user through Horizon, create a network and subnet in it.
  Launch instances on the network created. Now, when you try to delete
  the subnet (after deleting VMs), a different tenant-id is returned.
  You can see from logs that a subnet with wrong tenant-id gets deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1257517/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251920] Re: Tempest failures due to failure to return console logs from an instance

2013-12-03 Thread Russell Bryant
Closing this out for Nova since it appears to have been addressed in
Tempest.

** Changed in: nova
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1251920

Title:
  Tempest failures due to failure to return console logs from an
  instance

Status in OpenStack Compute (Nova):
  Invalid
Status in Tempest:
  Fix Committed

Bug description:
  Logstash search:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJmaWxlbmFtZTpjb25zb2xlLmh0bWwgQU5EIG1lc3NhZ2U6XCJhc3NlcnRpb25lcnJvcjogY29uc29sZSBvdXRwdXQgd2FzIGVtcHR5XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzODQ2NDEwNzIxODl9

  An example failure is http://logs.openstack.org/92/55492/8/check
  /check-tempest-devstack-vm-full/ef3a4a4/console.html

  console.html
  ===

  2013-11-16 21:54:27.998 | 2013-11-16 21:41:20,775 Request: POST 
http://127.0.0.1:8774/v2/3f6934d9aabf467aa8bc51397ccfa782/servers/10aace14-23c1-4cec-9bfd-2c873df1fbee/action
  2013-11-16 21:54:27.998 | 2013-11-16 21:41:20,776 Request Headers: 
{'Content-Type': 'application/json', 'Accept': 'application/json', 
'X-Auth-Token': ''}
  2013-11-16 21:54:27.998 | 2013-11-16 21:41:20,776 Request Body: 
{"os-getConsoleOutput": {"length": 10}}
  2013-11-16 21:54:27.998 | 2013-11-16 21:41:21,000 Response Status: 200
  2013-11-16 21:54:27.999 | 2013-11-16 21:41:21,001 Nova request id: 
req-7a2ee0ab-c977-4957-abb5-1d84191bf30c
  2013-11-16 21:54:27.999 | 2013-11-16 21:41:21,001 Response Headers: 
{'content-length': '14', 'date': 'Sat, 16 Nov 2013 21:41:20 GMT', 
'content-type': 'application/json', 'connection': 'close'}
  2013-11-16 21:54:27.999 | 2013-11-16 21:41:21,001 Response Body: {"output": 
""}
  2013-11-16 21:54:27.999 | }}}
  2013-11-16 21:54:27.999 | 
  2013-11-16 21:54:27.999 | Traceback (most recent call last):
  2013-11-16 21:54:27.999 |   File 
"tempest/api/compute/servers/test_server_actions.py", line 281, in 
test_get_console_output
  2013-11-16 21:54:28.000 | self.wait_for(get_output)
  2013-11-16 21:54:28.000 |   File "tempest/api/compute/base.py", line 133, in 
wait_for
  2013-11-16 21:54:28.000 | condition()
  2013-11-16 21:54:28.000 |   File 
"tempest/api/compute/servers/test_server_actions.py", line 278, in get_output
  2013-11-16 21:54:28.000 | self.assertTrue(output, "Console output was 
empty.")
  2013-11-16 21:54:28.000 |   File "/usr/lib/python2.7/unittest/case.py", line 
420, in assertTrue
  2013-11-16 21:54:28.000 | raise self.failureException(msg)
  2013-11-16 21:54:28.001 | AssertionError: Console output was empty.

  n-api
  

  2013-11-16 21:41:20.782 DEBUG nova.api.openstack.wsgi 
[req-7a2ee0ab-c977-4957-abb5-1d84191bf30c 
ServerActionsTestJSON-tempest-2102529866-user 
ServerActionsTestJSON-tempest-2102529866-tenant] Action: 'action', body: 
{"os-getConsoleOutput": {"length": 10}} _process_stack 
/opt/stack/new/nova/nova/api/openstack/wsgi.py:963
  2013-11-16 21:41:20.782 DEBUG nova.api.openstack.wsgi 
[req-7a2ee0ab-c977-4957-abb5-1d84191bf30c 
ServerActionsTestJSON-tempest-2102529866-user 
ServerActionsTestJSON-tempest-2102529866-tenant] Calling method > _process_stack 
/opt/stack/new/nova/nova/api/openstack/wsgi.py:964
  2013-11-16 21:41:20.865 DEBUG nova.openstack.common.rpc.amqp 
[req-7a2ee0ab-c977-4957-abb5-1d84191bf30c 
ServerActionsTestJSON-tempest-2102529866-user 
ServerActionsTestJSON-tempest-2102529866-tenant] Making synchronous call on 
compute.devstack-precise-hpcloud-az2-663635 ... multicall 
/opt/stack/new/nova/nova/openstack/common/rpc/amqp.py:553
  2013-11-16 21:41:20.866 DEBUG nova.openstack.common.rpc.amqp 
[req-7a2ee0ab-c977-4957-abb5-1d84191bf30c 
ServerActionsTestJSON-tempest-2102529866-user 
ServerActionsTestJSON-tempest-2102529866-tenant] MSG_ID is 
a93dceabf6a441eb850b5fbb012d661f multicall 
/opt/stack/new/nova/nova/openstack/common/rpc/amqp.py:556
  2013-11-16 21:41:20.866 DEBUG nova.openstack.common.rpc.amqp 
[req-7a2ee0ab-c977-4957-abb5-1d84191bf30c 
ServerActionsTestJSON-tempest-2102529866-user 
ServerActionsTestJSON-tempest-2102529866-tenant] UNIQUE_ID is 
706ab69dc066440fbe1bd7766b73d953. _add_unique_id 
/opt/stack/new/nova/nova/openstack/common/rpc/amqp.py:341
  2013-11-16 21:41:20.869 22679 DEBUG amqp [-] Closed channel #1 _do_close 
/usr/local/lib/python2.7/dist-packages/amqp/channel.py:95
  2013-11-16 21:41:20.869 22679 DEBUG amqp [-] using channel_id: 1 __init__ 
/usr/local/lib/python2.7/dist-packages/amqp/channel.py:71
  2013-11-16 21:41:20.870 22679 DEBUG amqp [-] Channel open _open_ok 
/usr/local/lib/python2.7/dist-packages/amqp/channel.py:429
  2013-11-16 21:41:20.999 INFO nova.osapi_compute.wsgi.server 
[req-7a2ee0ab-c977-4957-abb5-1d84191bf30c 
ServerActionsTestJSON-tempest-2102529866-user 
ServerActionsTestJSON-tempest-2102529866-tenant] 127.0.0.1 "POST 
/v2/3f6

[Yahoo-eng-team] [Bug 1243314] Re: AWS VPC support in Openstack

2013-12-03 Thread Russell Bryant
This should be a blueprint instead of a bug:
https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
Milestone: icehouse-1 => icehouse-2

** Changed in: nova
Milestone: icehouse-2 => None

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1243314

Title:
  AWS VPC support in Openstack

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Add support for Amazon VPC APIs in Openstack

  - Created a separate file (vpc.py) to handle VPC APIs
  - Added unit tests for VPC (test_vpc.py)

  Nova EC2 extended to support Amazon VPC APIs.
  Only VPC and Subnet CRUD APIs are added in this patch.
  This patch has no dependency on any other blueprints.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1243314/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257524] [NEW] If neutron spawned dnsmasq dies, neutron-dhcp-agent will be totally unaware

2013-12-03 Thread Clint Byrum
Public bug reported:

I recently had some trouble with dnsmasq causing it to segfault in
certain situations. No doubt, this was a bug in dnsmasq. However, it was
quite troubling that Neutron never noted that dnsmasq had stopped
working. This is because dnsmasq is spawned as a daemon, even though it
is most definitely "owned" by neutron-dhcp-agent. Also if neutron-dhcp-
agent should die, since dnsmasq is a daemon it will continue to run and
be "stale", requiring manual intervention to clean up. However if it is
in the foreground then it will stay in neutron-dhcp-agent's process
group and should also die and if need-be cleaned up by init.

I did some analysis and will not be able to dig into the actual
implementation. However my analysis shows that this would work:

* use utils.create_process instead of execute and remember returned Popen 
object.
* spawn a greenthread to wait() on the process
* if it dies, restart it and log the error code
* pass the -k option so dnsmasq stays in foreground
* kill the process using child signals

Not sure how or if SIGCHLD plays a factor.
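The proposal above can be sketched roughly as follows — a minimal stand-in using plain subprocess/threading in place of Neutron's utils.create_process and an eventlet greenthread; the class name, restart policy, and bookkeeping are illustrative, not the agent's actual code:

```python
import subprocess
import threading

class ProcessSupervisor:
    """Run a child process in the foreground and restart it when it dies."""

    def __init__(self, cmd, max_restarts=3):
        self.cmd = cmd                  # e.g. ["dnsmasq", "-k", ...]
        self.max_restarts = max_restarts
        self.exit_codes = []            # exit codes observed, for logging
        self._thread = None
        self._proc = None

    def start(self):
        self._proc = subprocess.Popen(self.cmd)
        # The "greenthread to wait() on the process", modelled as a thread.
        self._thread = threading.Thread(target=self._watch)
        self._thread.start()

    def _watch(self):
        restarts = 0
        while True:
            code = self._proc.wait()    # block until the child exits
            self.exit_codes.append(code)
            if restarts >= self.max_restarts:
                break                   # give up; a real agent would alert
            restarts += 1
            self._proc = subprocess.Popen(self.cmd)  # restart and record

    def join(self, timeout=None):
        self._thread.join(timeout)
```

With dnsmasq this would be started with the -k flag, so the process stays in the foreground, remains a child of the agent's process group, and dies with it instead of going stale.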

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1257524

Title:
  If neutron spawned dnsmasq dies, neutron-dhcp-agent will be totally
  unaware

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I recently had some trouble with dnsmasq causing it to segfault in
  certain situations. No doubt, this was a bug in dnsmasq. However, it
  was quite troubling that Neutron never noted that dnsmasq had stopped
  working. This is because dnsmasq is spawned as a daemon, even though
  it is most definitely "owned" by neutron-dhcp-agent. Also if neutron-
  dhcp-agent should die, since dnsmasq is a daemon it will continue to
  run and be "stale", requiring manual intervention to clean up. However
  if it is in the foreground then it will stay in neutron-dhcp-agent's
  process group and should also die and if need-be cleaned up by init.

  I did some analysis and will not be able to dig into the actual
  implementation. However my analysis shows that this would work:

  * use utils.create_process instead of execute and remember returned Popen 
object.
  * spawn a greenthread to wait() on the process
  * if it dies, restart it and log the error code
  * pass the -k option so dnsmasq stays in foreground
  * kill the process using child signals

  Not sure how or if SIGCHLD plays a factor.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1257524/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257523] [NEW] Neither vpnaas.filters nor debug.filters are referenced in setup.cfg

2013-12-03 Thread Terry Wilson
Public bug reported:

Both vpnaas.filters and debug.filters are missing from setup.cfg,
breaking rootwrap for the appropriate commands.

** Affects: neutron
 Importance: Undecided
 Assignee: Terry Wilson (otherwiseguy)
 Status: In Progress


** Tags: havana-backport-potential low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1257523

Title:
  Neither vpnaas.filters nor debug.filters are referenced in setup.cfg

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Both vpnaas.filters and debug.filters are missing from setup.cfg,
  breaking rootwrap for the appropriate commands.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1257523/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257532] [NEW] Shelving fails with KeyError: 'metadata'

2013-12-03 Thread Sam Morrison
Public bug reported:

When I try and shelve an instance I get the following error on the
compute node:

2013-12-04 10:39:59.716 18800 ERROR nova.openstack.common.rpc.amqp 
[req-d87825e7-9c2f-4735-94e2-4c470ee0edab d9646718471b46aeb5fd94c702336ca9 
0bdf024c921848c4b74d9e69af9edf08] Exception during message handling
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp Traceback 
(most recent call last):
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line 461, 
in _process_data
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp **args)
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py", 
line 172, in dispatch
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp result = 
getattr(proxyobj, method)(ctxt, **kwargs)
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 90, in wrapped
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp payload)
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 73, in wrapped
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp return 
f(self, context, *args, **kw)
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 243, in 
decorated_function
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp pass
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 229, in 
decorated_function
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 294, in 
decorated_function
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp 
function(self, context, *args, **kwargs)
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 271, in 
decorated_function
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp e, 
sys.exc_info())
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 258, in 
decorated_function
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3336, in 
shelve_instance
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp 
current_period=True)
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/conductor/api.py", line 292, in 
notify_usage_exists
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp 
system_metadata, extra_usage_info)
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/utils.py", line 1094, in wrapper
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp return 
func(*args, **kwargs)
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 486, in 
notify_usage_exists
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp 
system_metadata, extra_usage_info)
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/compute/utils.py", line 295, in 
notify_usage_exists
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp 
system_metadata=system_metadata, extra_usage_info=extra_info)
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/compute/utils.py", line 316, in 
notify_about_instance_usage
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp 
network_info, system_metadata, **extra_usage_info)
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/notifications.py", line 420, in 
info_from_instance
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp 
instance_info['metadata'] = utils.instance_meta(instance_ref)
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/utils.py", line 1044, in instance_meta
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp if

[Yahoo-eng-team] [Bug 1257295] Re: openstack is full of misspelled words

2013-12-03 Thread Abhishek Chanda
** Changed in: oslo
   Status: New => Confirmed

** Also affects: python-novaclient
   Importance: Undecided
   Status: New

** Changed in: python-novaclient
 Assignee: (unassigned) => Abhishek Chanda (abhishek-i)

** Changed in: python-novaclient
   Status: New => Confirmed

** Changed in: python-novaclient
   Importance: Undecided => Low

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
 Assignee: (unassigned) => Abhishek Chanda (abhishek-i)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1257295

Title:
  openstack is full of misspelled words

Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in OpenStack Compute (Nova):
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  Confirmed
Status in Python client library for Nova:
  Confirmed

Bug description:
  List of known misspellings

  http://paste.openstack.org/show/54354

  Generated with:
pip install misspellings
git ls-files | grep -v locale | misspellings -f -

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1257295/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1255658] Re: Error launching an instance: nova scheduler driver: Setting instance to ERROR state.

2013-12-03 Thread suibinz
Confirmed that this was caused by hosts running short of the RAM limit,
but not sure where the limit was set (maybe in rabbitmq-server?)

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1255658

Title:
  Error launching an instance: nova scheduler driver: Setting instance
  to ERROR state.

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  My stack (one all-in-one node + one compute node) had been working fine.
  Then it started to have errors launching instances, with the following
  observations:
  1) Horizon reports success, but instance status is "Error";
  2) no error/warning from nova-compute.log
  3) nova-scheduler.log has this error: 
  WARNING nova.scheduler.driver [req-ff98dcde-c88b-40e1-85c4-6bb89b627c44 
24477163d7ca46a38b9d45360a395d59 8db3509086494bc3a0a5174c26392bb1] [instance: 
380fbb79-dbdb-407d-bd89-78afeba8e83d] Setting instance to ERROR state.
  4) "nova-manage service list" shows everything is working properly on both 
nodes.
  5) there is plenty of disk space (NFS mounted on /var/lib/nova/instances)
  6) restarted all nova/quantum services but did not help.

  Some suggested checking the rabbitmq plugin for the disk limit, but there
  was no warning/error on disk limits.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1255658/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257545] [NEW] Unshelving an offloaded instance doesn't set host, hypervisor_name

2013-12-03 Thread Sam Morrison
Public bug reported:

When you unshelve an instance that has been offloaded it doesn't set:

OS-EXT-SRV-ATTR:host
OS-EXT-SRV-ATTR:hypervisor_hostname

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1257545

Title:
  Unshelving an offloaded instance doesn't set host, hypervisor_name

Status in OpenStack Compute (Nova):
  New

Bug description:
  When you unshelve an instance that has been offloaded it doesn't set:

  OS-EXT-SRV-ATTR:host
  OS-EXT-SRV-ATTR:hypervisor_hostname

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1257545/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257566] [NEW] EC2 and S3 token middleware create insecure connections

2013-12-03 Thread Jamie Lennox
Public bug reported:

EC2 and S3 token middleware are similar to auth_token_middleware,
receiving and authenticating ec2/s3 tokens. They both still use the
httplib method of connecting to keystone and so do not validate any SSL
certificates.

On top of this, they appear to be completely untested.

They are not enabled by keystone's default pipeline and are thus most
likely not used at all and should be either deprecated or moved into
keystoneclient.

** Affects: keystone
 Importance: Undecided
 Status: New

** Summary changed:

- EC2 and S3 token middleware uses httplib and is untested
+ EC2 and S3 token middleware create insecure connections

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1257566

Title:
  EC2 and S3 token middleware create insecure connections

Status in OpenStack Identity (Keystone):
  New

Bug description:
  EC2 and S3 token middleware are similar to auth_token_middleware,
  receiving and authenticating ec2/s3 tokens. They both still use the
  httplib method of connecting to keystone and so do not validate any
  SSL certificates.

  On top of this, they appear to be completely untested.

  They are not enabled by keystone's default pipeline and are thus most
  likely not used at all and should be either deprecated or moved into
  keystoneclient.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1257566/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257594] [NEW] Unshelving an instance uses original image not shelved image

2013-12-03 Thread Sam Morrison
Public bug reported:

When unshelving a shelved instance that has been offloaded to glance it doesn't 
actually use the image stored in glance.
It actually uses the image that the instance was booted up with in the first 
place.

This seems a bit crazy to me so it would be great if someone could
replicate.

Note: this is with stable/havana, but looking at master I don't see
anything to suggest that this actually works in master either.

Please tell me I'm wrong and I have some messed-up setup.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1257594

Title:
  Unshelving an instance uses original image not shelved image

Status in OpenStack Compute (Nova):
  New

Bug description:
  When unshelving a shelved instance that has been offloaded to glance it 
doesn't actually use the image stored in glance.
  It actually uses the image that the instance was booted up with in the first 
place.

  This seems a bit crazy to me so it would be great if someone could
  replicate.

  Note: this is with stable/havana, but looking at master I don't see
  anything to suggest that this actually works in master either.

  Please tell me I'm wrong and I have some messed-up setup.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1257594/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257566] Re: EC2 and S3 token middleware create insecure connections

2013-12-03 Thread Grant Murphy
** Also affects: ossa
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1257566

Title:
  EC2 and S3 token middleware create insecure connections

Status in OpenStack Identity (Keystone):
  New
Status in OpenStack Security Advisories:
  New

Bug description:
  EC2 and S3 token middleware are similar to auth_token_middleware,
  receiving and authenticating ec2/s3 tokens. They both still use the
  httplib method of connecting to keystone and so do not validate any
  SSL certificates.

  On top of this, they appear to be completely untested.

  They are not enabled by keystone's default pipeline and are thus most
  likely not used at all and should be either deprecated or moved into
  keystoneclient.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1257566/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257607] [NEW] Mistake in usage drop_constraint parameters

2013-12-03 Thread Ann Kamyshnikova
Public bug reported:

In migration e197124d4b9_add_unique_constrain there is a mistake in the usage
of drop_constraint: the keyword parameter type_ and the positional arguments
name and table_name are used incorrectly.

File 
"/opt/stack/neutron/neutron/db/migration/alembic_migrations/versions/e197124d4b9_add_unique_constrain.py",
 line 64, in downgrade
type='unique'
TypeError: drop_constraint() takes at least 2 arguments (1 given)

The same mistake was already fixed in migration
63afba73813_ovs_tunnelendpoints_id_unique.
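The failure mode can be reproduced with a stand-in function that mimics a simplified drop_constraint signature — the stand-in and the constraint/table names below are illustrative, not Alembic's actual implementation:

```python
def drop_constraint(constraint_name, table_name, type_=None):
    """Stand-in mimicking a simplified Alembic drop_constraint signature."""
    return (constraint_name, table_name, type_)

# Buggy style from the migration: 'type' instead of 'type_', and the
# positional name and table_name are missing, so the call raises TypeError.
try:
    drop_constraint(type='unique')
    raised = False
except TypeError:
    raised = True
assert raised

# Corrected call: name and table_name positional, constraint kind via type_.
assert drop_constraint('uniq_pools0name', 'pools', type_='unique') == \
    ('uniq_pools0name', 'pools', 'unique')
```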

** Affects: neutron
 Importance: Undecided
 Assignee: Ann Kamyshnikova (akamyshnikova)
 Status: In Progress


** Tags: db

** Changed in: neutron
 Assignee: (unassigned) => Ann Kamyshnikova (akamyshnikova)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1257607

Title:
  Mistake in usage drop_constraint parameters

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  In migration e197124d4b9_add_unique_constrain there is a mistake in the
  usage of drop_constraint: the keyword parameter type_ and the positional
  arguments name and table_name are used incorrectly.

  File 
"/opt/stack/neutron/neutron/db/migration/alembic_migrations/versions/e197124d4b9_add_unique_constrain.py",
 line 64, in downgrade
  type='unique'
  TypeError: drop_constraint() takes at least 2 arguments (1 given)

  The same mistake was already fixed in migration
  63afba73813_ovs_tunnelendpoints_id_unique.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1257607/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1250320] Re: There is not any error information while try to update RAM of quota with 0

2013-12-03 Thread Maithem
This is not a bug; the reason you are able to set ram to zero is that the
tenant's total used ram is 0. Basically, when you update an attribute
(i.e. ram), your lower bound becomes max(0, used so far). So, for
example, if you have one instance running and it is assigned 100 MB, then
when you try to update the ram to 0 it will throw an exception.
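The lower-bound rule described above can be sketched as follows — a hypothetical helper for illustration, not Nova's actual validation code:

```python
def validate_quota_limit(new_limit, in_use):
    """Reject a limit below current usage; -1 means unlimited."""
    lower_bound = max(0, in_use)
    if new_limit != -1 and new_limit < lower_bound:
        raise ValueError(
            "quota limit %d is below current usage %d" % (new_limit, in_use))
    return new_limit

# With no usage, a limit of 0 is accepted -- the behaviour reported here.
assert validate_quota_limit(0, 0) == 0
# Unlimited (-1) is always allowed.
assert validate_quota_limit(-1, 100) == -1
# With 100 MB already in use, lowering the limit to 0 is rejected.
try:
    validate_quota_limit(0, 100)
    raised = False
except ValueError:
    raised = True
assert raised
```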

** Changed in: nova
   Status: Confirmed => Invalid

** Changed in: nova
 Assignee: Maithem (maithem) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1250320

Title:
  There is not any error information while try to update RAM of quota
  with 0

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  [root@ns11 ~]# nova quota-update --ram 0   a1a8dbd9285f4d0ab61d13af3958c8c9
  [root@ns11 ~]# nova quota-show --tenant a1a8dbd9285f4d0ab61d13af3958c8c9
  +-----------------------------+-------+
  | Quota                       | Limit |
  +-----------------------------+-------+
  | instances                   | 10    |
  | cores                       | 20    |
  | ram                         | 0     |
  | floating_ips                | 10    |
  | fixed_ips                   | -1    |
  | metadata_items              | 128   |
  | injected_files              | 1     |
  | injected_file_content_bytes | 10240 |
  | injected_file_path_bytes    | 255   |
  | key_pairs                   | 100   |
  | security_groups             | 10    |
  | security_group_rules        | 20    |
  +-----------------------------+-------+

  # EXPECTED RESULTS
  An error prompt that the RAM quota value should be greater than 0

  # Actual Result
  The update is successful

   It doesn't make sense to set the quota for RAM, disk, or CPU to 0; a
  virtual machine cannot exist without RAM, disk, or CPU. So when the user
  tries to set RAM to 0, it would be better to show an error and suggest a
  configured default value; that would be a better user experience.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1250320/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257617] [NEW] Nova is unable to authenticate via keystone

2013-12-03 Thread Vincent Hou
Public bug reported:

I updated the code yesterday with devstack. The latest nova code gave me
the following error when I tried any nova command.

==n-api==
2013-12-04 13:49:40.151 INFO requests.packages.urllib3.connectionpool [-] 
Starting new HTTPS connection (1): 9.119.148.201
2013-12-04 13:49:40.472 WARNING keystoneclient.middleware.auth_token [-] 
Retrying on HTTP connection exception: [Errno 1] _ssl.c:504: 
error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
2013-12-04 13:49:40.975 INFO requests.packages.urllib3.connectionpool [-] 
Starting new HTTPS connection (1): 9.119.148.201
2013-12-04 13:49:41.008 WARNING keystoneclient.middleware.auth_token [-] 
Retrying on HTTP connection exception: [Errno 1] _ssl.c:504: 
error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
2013-12-04 13:49:42.012 INFO requests.packages.urllib3.connectionpool [-] 
Starting new HTTPS connection (1): 9.119.148.201
2013-12-04 13:49:42.053 WARNING keystoneclient.middleware.auth_token [-] 
Retrying on HTTP connection exception: [Errno 1] _ssl.c:504: 
error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
2013-12-04 13:49:44.058 INFO requests.packages.urllib3.connectionpool [-] 
Starting new HTTPS connection (1): 9.119.148.201
2013-12-04 13:49:44.100 ERROR keystoneclient.middleware.auth_token [-] HTTP 
connection exception: [Errno 1] _ssl.c:504: error:140770FC:SSL 
routines:SSL23_GET_SERVER_HELLO:unknown protocol
2013-12-04 13:49:44.103 DEBUG keystoneclient.middleware.auth_token [-] Token 
validation failure. from (pid=7023) _validate_user_token 
/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py:826
2013-12-04 13:49:44.103 TRACE keystoneclient.middleware.auth_token Traceback 
(most recent call last):
2013-12-04 13:49:44.103 TRACE keystoneclient.middleware.auth_token   File 
"/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py", 
line 821, in _validate_user_token
2013-12-04 13:49:44.103 TRACE keystoneclient.middleware.auth_token data = 
self.verify_uuid_token(user_token, retry)
2013-12-04 13:49:44.103 TRACE keystoneclient.middleware.auth_token   File 
"/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py", 
line 1096, in verify_uuid_token
2013-12-04 13:49:44.103 TRACE keystoneclient.middleware.auth_token 
self.auth_version = self._choose_api_version()
2013-12-04 13:49:44.103 TRACE keystoneclient.middleware.auth_token   File 
"/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py", 
line 519, in _choose_api_version
2013-12-04 13:49:44.103 TRACE keystoneclient.middleware.auth_token 
versions_supported_by_server = self._get_supported_versions()
2013-12-04 13:49:44.103 TRACE keystoneclient.middleware.auth_token   File 
"/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py", 
line 539, in _get_supported_versions
2013-12-04 13:49:44.103 TRACE keystoneclient.middleware.auth_token 
response, data = self._json_request('GET', '/')
2013-12-04 13:49:44.103 TRACE keystoneclient.middleware.auth_token   File 
"/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py", 
line 749, in _json_request
2013-12-04 13:49:44.103 TRACE keystoneclient.middleware.auth_token response 
= self._http_request(method, path, **kwargs)
2013-12-04 13:49:44.103 TRACE keystoneclient.middleware.auth_token   File 
"/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py", 
line 714, in _http_request
2013-12-04 13:49:44.103 TRACE keystoneclient.middleware.auth_token raise 
NetworkError('Unable to communicate with keystone')
2013-12-04 13:49:44.103 TRACE keystoneclient.middleware.auth_token 
NetworkError: Unable to communicate with keystone
2013-12-04 13:49:44.103 TRACE keystoneclient.middleware.auth_token 
2013-12-04 13:49:44.110 WARNING keystoneclient.middleware.auth_token [-] 
Authorization failed for token 4517bf0837d30dcf7b9cd438075c9d92
2013-12-04 13:49:44.111 INFO keystoneclient.middleware.auth_token [-] Invalid 
user token - rejecting request
2013-12-04 13:49:44.114 INFO nova.osapi_compute.wsgi.server [-] 9.119.148.201 
"GET /v2/26b6d3afa22340a4aa5896068ab58f97/extensions HTTP/1.1" status: 401 len: 
197 time: 3.9667039

2013-12-04 13:49:44.117 DEBUG keystoneclient.middleware.auth_token [-] 
Authenticating user token from (pid=7023) __call__ 
/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py:568
2013-12-04 13:49:44.121 DEBUG keystoneclient.middleware.auth_token [-] Removing 
headers from request environment: 
X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Domain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-Tenant-Name,X-Tenant,X-Role
 from (pid=7023) _remove_auth_headers 
/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_t