[Yahoo-eng-team] [Bug 1276778] Re: test_lock_unlock_server: failed to reach ACTIVE status and task state "None" within the required time

2014-09-09 Thread David Kranz
Searching for

failed to reach ACTIVE status and task state "None"

shows a lot of different bug tickets. This does not seem like a tempest
bug.

** Changed in: tempest
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1276778

Title:
  test_lock_unlock_server: failed to reach ACTIVE status and task state
  "None" within the required time

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  Invalid

Bug description:
   Traceback (most recent call last):
 File "tempest/api/compute/servers/test_server_actions.py", line 419, in 
test_lock_unlock_server
   self.servers_client.wait_for_server_status(self.server_id, 'ACTIVE')
 File "tempest/services/compute/xml/servers_client.py", line 371, in 
wait_for_server_status
   raise_on_error=raise_on_error)
 File "tempest/common/waiters.py", line 89, in wait_for_server_status
  raise exceptions.TimeoutException(message)
   TimeoutException: Request timed out
   Details: Server c73d5bba-4f88-4279-8de6-9c66844e72e2 failed to reach ACTIVE 
status and task state "None" within the required time (196 s). Current status: 
SHUTOFF. Current task state: None.

  Source: http://logs.openstack.org/47/70647/3/gate/gate-tempest-dsvm-
  full/b8607e6/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1276778/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1283599] Re: TestNetworkBasicOps occasionally fails to delete resources

2014-09-09 Thread David Kranz
This is still hitting persistently, though not that often. I think this is
more likely a bug in neutron than in tempest, so I am marking it accordingly.
Please reopen the tempest task if more evidence appears.

** Changed in: neutron
   Status: New => Confirmed

** Changed in: tempest
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1283599

Title:
  TestNetworkBasicOps occasionally fails to delete resources

Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in Tempest:
  Invalid

Bug description:
  Network, Subnet and security group appear to be in use when they are deleted.
  Observed in: 
http://logs.openstack.org/84/75284/3/check/check-tempest-dsvm-neutron-full/d792a7a/logs

  Observed so far with neutron full job only.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1283599/+subscriptions



[Yahoo-eng-team] [Bug 1361781] Re: bogus "Cannot 'rescue' while instance is in task_state powering-off"

2014-09-09 Thread David Kranz
This looks real, though there is only one hit in the past 8 days and the
log is not available. The test immediately preceding this failure has an
addCleanup that unrescues:

    def _unrescue(self, server_id):
        resp, body = self.servers_client.unrescue_server(server_id)
        self.assertEqual(202, resp.status)
        self.servers_client.wait_for_server_status(server_id, 'ACTIVE')


The only possibility I can see is that somehow, even after nova reports
ACTIVE, the rescue code thinks the server is still in the powering-off
state. I am going to call this a nova issue unless someone claims the above
code is not sufficient to allow a follow-on call to rescue.
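
If that theory is right, a waiter that also requires the task state to be
cleared before the follow-on rescue would close the window. Below is a
minimal sketch, not tempest's actual waiter; it assumes the client exposes
a get_server() call whose body carries 'status' and the
'OS-EXT-STS:task_state' extension key (the key name may vary):

    import time

    def wait_for_active_and_no_task(client, server_id, timeout=196,
                                    interval=2):
        # Poll until the server is ACTIVE *and* has no task_state, so a
        # follow-on 'rescue' cannot race with a lingering powering-off task.
        status = task_state = None
        deadline = time.time() + timeout
        while time.time() < deadline:
            _resp, server = client.get_server(server_id)
            status = server['status']
            task_state = server.get('OS-EXT-STS:task_state')
            if status == 'ACTIVE' and not task_state:
                return
            time.sleep(interval)
        raise Exception('Server %s stuck at status=%s task_state=%s'
                        % (server_id, status, task_state))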

** Changed in: tempest
   Status: New => Invalid

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361781

Title:
  bogus "Cannot 'rescue' while instance is in task_state powering-off"

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  Invalid

Bug description:
  
tempest.api.compute.servers.test_server_rescue.ServerRescueTestJSON.test_rescued_vm_associate_dissociate_floating_ip
  appears flaky.  For my change that fixes some documentation, sometimes
  this test succeeds and sometimes it fails.  For an example of a
  failure, see http://logs.openstack.org/85/109385/5/check/check-
  tempest-dsvm-full/ab9c111/

  Here is the traceback from the console.html in that case:

  2014-08-26 07:29:18.804 | ==
  2014-08-26 07:29:18.804 | Failed 1 tests - output below:
  2014-08-26 07:29:18.805 | ==
  2014-08-26 07:29:18.805 | 
  2014-08-26 07:29:18.805 | 
tempest.api.compute.servers.test_server_rescue.ServerRescueTestJSON.test_rescued_vm_associate_dissociate_floating_ip[gate]
  2014-08-26 07:29:18.805 | 
--
  2014-08-26 07:29:18.805 | 
  2014-08-26 07:29:18.805 | Captured traceback:
  2014-08-26 07:29:18.805 | ~~~
  2014-08-26 07:29:18.806 | Traceback (most recent call last):
  2014-08-26 07:29:18.806 |   File 
"tempest/api/compute/servers/test_server_rescue.py", line 95, in 
test_rescued_vm_associate_dissociate_floating_ip
  2014-08-26 07:29:18.806 | self.server_id, adminPass=self.password)
  2014-08-26 07:29:18.806 |   File 
"tempest/services/compute/json/servers_client.py", line 463, in rescue_server
  2014-08-26 07:29:18.806 | schema.rescue_server, **kwargs)
  2014-08-26 07:29:18.806 |   File 
"tempest/services/compute/json/servers_client.py", line 218, in action
  2014-08-26 07:29:18.806 | post_body)
  2014-08-26 07:29:18.807 |   File "tempest/common/rest_client.py", line 
219, in post
  2014-08-26 07:29:18.807 | return self.request('POST', url, 
extra_headers, headers, body)
  2014-08-26 07:29:18.807 |   File "tempest/common/rest_client.py", line 
431, in request
  2014-08-26 07:29:18.807 | resp, resp_body)
  2014-08-26 07:29:18.807 |   File "tempest/common/rest_client.py", line 
485, in _error_checker
  2014-08-26 07:29:18.807 | raise exceptions.Conflict(resp_body)
  2014-08-26 07:29:18.807 | Conflict: An object with that identifier 
already exists
  2014-08-26 07:29:18.808 | Details: {u'message': u"Cannot 'rescue' while 
instance is in task_state powering-off", u'code': 409}
  2014-08-26 07:29:18.808 | 
  2014-08-26 07:29:18.808 | 
  2014-08-26 07:29:18.808 | Captured pythonlogging:
  2014-08-26 07:29:18.808 | ~~~
  2014-08-26 07:29:18.808 | 2014-08-26 07:05:12,251 25737 INFO 
[tempest.common.rest_client] Request 
(ServerRescueTestJSON:test_rescued_vm_associate_dissociate_floating_ip): 409 
POST 
http://127.0.0.1:8774/v2/690b69920c1b4a4c8d2b376ba4cb6f80/servers/9a840d84-a381-42e5-81ef-8e7cd95c086e/action
 0.211s

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361781/+subscriptions



[Yahoo-eng-team] [Bug 1360504] Re: tempest.api.identity.admin.v3.test_credentials.CredentialsTestJSON create credential unauthorized

2014-09-09 Thread David Kranz
I don't see any hits for this in logstash. There is nothing unusual
about this test and it is surrounded by similar tests that pass. So
there must be some issue in keystone that is causing the admin
credentials to be rejected here.

** Changed in: tempest
   Status: New => Invalid

** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1360504

Title:
  tempest.api.identity.admin.v3.test_credentials.CredentialsTestJSON
  create credential unauthorized

Status in OpenStack Identity (Keystone):
  New
Status in Tempest:
  Invalid

Bug description:
  The bug appeared in a gate-tempest-dsvm-neutron-full run:
  https://review.openstack.org/#/c/47/5

  Full console.log here: http://logs.openstack.org/47/47/5/gate
  /gate-tempest-dsvm-neutron-full/f21c917/console.html

  Stacktrace:
  2014-08-22 10:49:35.168 | 
tempest.api.identity.admin.v3.test_credentials.CredentialsTestJSON.test_credentials_create_get_update_delete[gate,smoke]
  2014-08-22 10:49:35.168 | 

  2014-08-22 10:49:35.168 | 
  2014-08-22 10:49:35.168 | Captured traceback:
  2014-08-22 10:49:35.168 | ~~~
  2014-08-22 10:49:35.168 | Traceback (most recent call last):
  2014-08-22 10:49:35.168 |   File 
"tempest/api/identity/admin/v3/test_credentials.py", line 62, in 
test_credentials_create_get_update_delete
  2014-08-22 10:49:35.168 | self.projects[0])
  2014-08-22 10:49:35.168 |   File 
"tempest/services/identity/v3/json/credentials_client.py", line 43, in 
create_credential
  2014-08-22 10:49:35.168 | resp, body = self.post('credentials', 
post_body)
  2014-08-22 10:49:35.168 |   File "tempest/common/rest_client.py", line 
219, in post
  2014-08-22 10:49:35.169 | return self.request('POST', url, 
extra_headers, headers, body)
  2014-08-22 10:49:35.169 |   File "tempest/common/rest_client.py", line 
431, in request
  2014-08-22 10:49:35.169 | resp, resp_body)
  2014-08-22 10:49:35.169 |   File "tempest/common/rest_client.py", line 
472, in _error_checker
  2014-08-22 10:49:35.169 | raise exceptions.Unauthorized(resp_body)
  2014-08-22 10:49:35.169 | Unauthorized: Unauthorized
  2014-08-22 10:49:35.169 | Details: {"error": {"message": "The request you 
have made requires authentication. (Disable debug mode to suppress these 
details.)", "code": 401, "title": "Unauthorized"}}
  2014-08-22 10:49:35.169 | 
  2014-08-22 10:49:35.169 | 
  2014-08-22 10:49:35.169 | Captured pythonlogging:
  2014-08-22 10:49:35.170 | ~~~
  2014-08-22 10:49:35.170 | 2014-08-22 10:31:28,001 5831 INFO 
[tempest.common.rest_client] Request 
(CredentialsTestJSON:test_credentials_create_get_update_delete): 401 POST 
http://127.0.0.1:35357/v3/credentials 0.065s

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1360504/+subscriptions



[Yahoo-eng-team] [Bug 1359995] Re: Tempest failed to delete user

2014-09-09 Thread David Kranz
Tempest does check for token expiry and no test should fail due to an
expired token. So this must be a keystone issue. I just looked at
another bug that got an unauthorized for one of the keystone tests with
no explanation which I also added keystone to the ticket
https://bugs.launchpad.net/keystone/+bug/1360504
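
For reference, the handling referred to above is just "re-authenticate and
retry on 401". A standalone, hedged sketch against the keystone v2 API (the
URLs, payload shape, and helper name are illustrative, not tempest's client
code):

    import requests

    def delete_user_with_reauth(keystone_url, auth_payload, user_id, token):
        # Illustrative only: retry the DELETE once after refreshing the token.
        url = '%s/users/%s' % (keystone_url, user_id)
        resp = requests.delete(url, headers={'X-Auth-Token': token})
        if resp.status_code == 401:
            # Token expired or was invalidated: get a new one and retry once.
            r = requests.post('%s/tokens' % keystone_url, json=auth_payload)
            token = r.json()['access']['token']['id']
            resp = requests.delete(url, headers={'X-Auth-Token': token})
        return resp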

** Changed in: tempest
   Status: New => Confirmed

** Changed in: tempest
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1359995

Title:
  Tempest failed to delete user

Status in devstack - openstack dev environments:
  Fix Released
Status in OpenStack Identity (Keystone):
  Incomplete
Status in Tempest:
  Invalid

Bug description:
  
  check-tempest-dsvm-full failed on a keystone change. Here's the main log: 
http://logs.openstack.org/73/111573/4/check/check-tempest-dsvm-full/c5ce3bd/console.html

  The traceback shows:

  File "tempest/api/volume/test_volumes_list.py", line 80, in tearDownClass
  File "tempest/services/identity/json/identity_client.py", line 189, in 
delete_user
  Unauthorized: Unauthorized
  Details: {"error": {"message": "The request you have made requires 
authentication. (Disable debug mode to suppress these details.)", "code": 401, 
"title": "Unauthorized"}}

  So it's trying to delete the user and it gets unauthorized. Maybe the
  token was expired or marked invalid for some reason.

  There's something wrong here, but the keystone logs are useless for
  debugging now that it's running in Apache httpd. The logs don't have
  the request or result line, so you can't find where the request was
  being made.

  Also, Tempest should be able to handle the token being invalidated. It
  should just get a new token and try with that.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1359995/+subscriptions



[Yahoo-eng-team] [Bug 1359805] Re: 'Requested operation is not valid: domain is not running' from check-tempest-dsvm-neutron-full

2014-09-09 Thread David Kranz
*** This bug is a duplicate of bug 1260537 ***
https://bugs.launchpad.net/bugs/1260537

None is available, I'm afraid. This is not a bug in tempest, and this
ticket https://bugs.launchpad.net/tempest/+bug/1260537 is used to track
such things, for whatever good it does.

** This bug has been marked a duplicate of bug 1260537
   Generic catchall bug for non triaged bugs where a server doesn't reach it's 
required state

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1359805

Title:
  'Requested operation is not valid: domain is not running' from check-
  tempest-dsvm-neutron-full

Status in OpenStack Neutron (virtual network service):
  Incomplete
Status in Tempest:
  New

Bug description:
  I received the following error from the check-tempest-dsvm-neutron-
  full test suite after submitting a nova patch:

  2014-08-21 14:11:25.059 | Captured traceback:
  2014-08-21 14:11:25.059 | ~~~
  2014-08-21 14:11:25.059 | Traceback (most recent call last):
  2014-08-21 14:11:25.059 |   File 
"tempest/api/compute/servers/test_server_actions.py", line 407, in 
test_suspend_resume_server
  2014-08-21 14:11:25.059 | 
self.client.wait_for_server_status(self.server_id, 'SUSPENDED')
  2014-08-21 14:11:25.059 |   File 
"tempest/services/compute/xml/servers_client.py", line 390, in 
wait_for_server_status
  2014-08-21 14:11:25.059 | raise_on_error=raise_on_error)
  2014-08-21 14:11:25.059 |   File "tempest/common/waiters.py", line 77, in 
wait_for_server_status
  2014-08-21 14:11:25.059 | server_id=server_id)
  2014-08-21 14:11:25.059 | BuildErrorException: Server 
a29ec7be-be83-4247-b7db-49bd4727d206 failed to build and is in ERROR status
  2014-08-21 14:11:25.059 | Details: {'message': 'Requested operation is 
not valid: domain is not running', 'code': '500', 'details': 'None', 'created': 
'2014-08-21T13:49:49Z'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1359805/+subscriptions



[Yahoo-eng-team] [Bug 1358857] Re: test_load_balancer_basic mismatch error

2014-09-09 Thread David Kranz
This must be a race of some sort in tempest or neutron, but I'm not sure
which.
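
For context, the failing check just hits the VIP repeatedly and compares
the set of backends that answered, so a mismatch like set(['server1']) vs
set(['server1', 'server2']) means one member never responded within the
attempts made. A rough, illustrative sketch of that kind of probe (the
helper name and attempt count are assumptions, not the scenario's code):

    import urllib2

    def backends_seen(vip_ip, attempts=10, timeout=5):
        # Send several HTTP requests to the VIP and record which backend
        # answered each one (the test servers reply with their own name).
        seen = set()
        for _ in range(attempts):
            reply = urllib2.urlopen('http://%s/' % vip_ip, timeout=timeout)
            seen.add(reply.read().strip())
        return seen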

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1358857

Title:
  test_load_balancer_basic mismatch error

Status in OpenStack Neutron (virtual network service):
  New
Status in Tempest:
  New

Bug description:
  Gate failed check-tempest-dsvm-neutron-full on this (unrelated) patch change: 
  https://review.openstack.org/#/c/114693/

  http://logs.openstack.org/93/114693/1/check/check-tempest-dsvm-
  neutron-full/2755713/console.html

  
  2014-08-19 01:11:40.597 | ==
  2014-08-19 01:11:40.597 | Failed 1 tests - output below:
  2014-08-19 01:11:40.597 | ==
  2014-08-19 01:11:40.597 | 
  2014-08-19 01:11:40.597 | 
tempest.scenario.test_load_balancer_basic.TestLoadBalancerBasic.test_load_balancer_basic[compute,gate,network,smoke]
  2014-08-19 01:11:40.597 | 

  2014-08-19 01:11:40.597 | 
  2014-08-19 01:11:40.597 | Captured traceback:
  2014-08-19 01:11:40.597 | ~~~
  2014-08-19 01:11:40.598 | Traceback (most recent call last):
  2014-08-19 01:11:40.598 |   File "tempest/test.py", line 128, in wrapper
  2014-08-19 01:11:40.598 | return f(self, *func_args, **func_kwargs)
  2014-08-19 01:11:40.598 |   File 
"tempest/scenario/test_load_balancer_basic.py", line 297, in 
test_load_balancer_basic
  2014-08-19 01:11:40.598 | self._check_load_balancing()
  2014-08-19 01:11:40.598 |   File 
"tempest/scenario/test_load_balancer_basic.py", line 277, in 
_check_load_balancing
  2014-08-19 01:11:40.598 | self._send_requests(self.vip_ip, 
set(["server1", "server2"]))
  2014-08-19 01:11:40.598 |   File 
"tempest/scenario/test_load_balancer_basic.py", line 289, in _send_requests
  2014-08-19 01:11:40.598 | set(resp))
  2014-08-19 01:11:40.598 |   File 
"/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 321, in 
assertEqual
  2014-08-19 01:11:40.598 | self.assertThat(observed, matcher, message)
  2014-08-19 01:11:40.599 |   File 
"/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 406, in 
assertThat
  2014-08-19 01:11:40.599 | raise mismatch_error
  2014-08-19 01:11:40.599 | MismatchError: set(['server1', 'server2']) != 
set(['server1'])

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1358857/+subscriptions



[Yahoo-eng-team] [Bug 1358814] Re: test_s3_ec2_images fails with 500 error "Unkown error occurred"

2014-09-09 Thread David Kranz
Looks like some kind of nova issue.

** Changed in: tempest
   Status: New => Invalid

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1358814

Title:
  test_s3_ec2_images fails with 500 error "Unkown error occurred"

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  Invalid

Bug description:
  Testing a CI job I'm setting up to validate some Cinder Driver, I
  encountered the following issue while running tempest-dsvm-full:

  Full log at: http://r.ci.devsca.com:8080/job/periodic-scality-tempest-
  dsvm-full/12/console

  The relevant screen logs I could find (which contain either errors or 
tracebacks) are:
   - error.log (contains one single line from rabbitmq not being able to set 
the password)
   - screen-g-api.log
   - screen-g-reg.log
   - screen-tr-api.log

  All the screen logs are attached as a gzip archive file.

  Traceback of the internal server error:
  
tempest.thirdparty.boto.test_s3_ec2_images.S3ImagesTest.test_register_get_deregister_aki_image
  16:03:09 
--
  16:03:09 
  16:03:09 Captured traceback:
  16:03:09 ~~~
  16:03:09 Traceback (most recent call last):
  16:03:09   File "tempest/thirdparty/boto/test_s3_ec2_images.py", line 90, 
in test_register_get_deregister_aki_image
  16:03:09 self.assertImageStateWait(retrieved_image, "available")
  16:03:09   File "tempest/thirdparty/boto/test.py", line 354, in 
assertImageStateWait
  16:03:09 state = self.waitImageState(lfunction, wait_for)
  16:03:09   File "tempest/thirdparty/boto/test.py", line 339, in 
waitImageState
  16:03:09 self.valid_image_state)
  16:03:09   File "tempest/thirdparty/boto/test.py", line 333, in 
state_wait_gone
  16:03:09 state = wait.state_wait(lfunction, final_set, valid_set)
  16:03:09   File "tempest/thirdparty/boto/utils/wait.py", line 54, in 
state_wait
  16:03:09 status = lfunction()
  16:03:09   File "tempest/thirdparty/boto/test.py", line 316, in _status
  16:03:09 obj.update(validate=True)
  16:03:09   File 
"/usr/local/lib/python2.7/dist-packages/boto/ec2/image.py", line 160, in update
  16:03:09 rs = self.connection.get_all_images([self.id], 
dry_run=dry_run)
  16:03:09   File 
"/usr/local/lib/python2.7/dist-packages/boto/ec2/connection.py", line 190, in 
get_all_images
  16:03:09 [('item', Image)], verb='POST')
  16:03:09   File 
"/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 1150, in 
get_list
  16:03:09 response = self.make_request(action, params, path, verb)
  16:03:09   File 
"/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 1096, in 
make_request
  16:03:09 return self._mexe(http_request)
  16:03:09   File 
"/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 1009, in _mexe
  16:03:09 raise BotoServerError(response.status, response.reason, body)
  16:03:09 BotoServerError: BotoServerError: 500 Internal Server Error
  16:03:09 
  16:03:09 
HTTPInternalServerError: Unknown error occurred. (req-f2757f18-e039-49b1-b537-e48d0281abf0)
  16:03:09 
  16:03:09 
  16:03:09 Captured pythonlogging:
  16:03:09 ~~~
  16:03:09 2014-08-19 16:02:33,467 30126 DEBUG
[keystoneclient.auth.identity.v2] Making authentication request to 
http://127.0.0.1:5000/v2.0/tokens
  16:03:09 2014-08-19 16:02:36,730 30126 INFO 
[tempest.thirdparty.boto.utils.wait] State transition "pending" ==> "failed" 1 
second
  16:03:09  


  
  Glance API Screen Log:
  2014-08-19 16:02:50.519 26241 DEBUG keystonemiddleware.auth_token [-] Storing 
token in cache store 
/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py:1425
  2014-08-19 16:02:50.520 26241 DEBUG keystonemiddleware.auth_token [-] 
Received request from user: f28e3251f72347df9791ecd861c5caf4 with project_id : 
526acfaadbc042f8ac7c37d9ef7cffde and roles: _member_,Member  
_build_user_headers 
/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py:738
  2014-08-19 16:02:50.521 26241 DEBUG routes.middleware [-] Matched HEAD 
/images/db22d1d9-420b-41d2-8603-86c6fb9b5962 __call__ 
/usr/lib/python2.7/dist-packages/routes/middleware.py:100
  2014-08-19 16:02:50.522 26241 DEBUG routes.middleware [-] Route path: 
'/images/{id}', defaults: {'action': u'meta', 'controller': 
} __call__ 
/usr/lib/python2.7/dist-packages/routes/middleware.py:102
  2014-08-19 16:02:50.522 26241 DEBUG routes.middleware [-] Match dict: 
{'action': u'meta', 'controller': , 'id': u'db22d1d9-420b-41d2-8603-86c6fb9b5962'} __call__ 
/usr/lib/python2.7/dist-packages/routes/middleware.py:103
  2014-08-19 16:02:50.522 26241 DEBUG glance.common

[Yahoo-eng-team] [Bug 1352092] Re: tempest.scenario.test_large_ops.TestLargeOpsScenario.test_large_ops_scenario_3

2014-09-09 Thread David Kranz
This happened once in the last week. So it is real but not common. I am
assuming this is a nova issue and not tempest. Please reopen if there is
evidence to the contrary.

** Changed in: tempest
   Status: New => Invalid

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1352092

Title:
  tempest.scenario.test_large_ops.TestLargeOpsScenario.test_large_ops_scenario_3

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  Invalid

Bug description:
  2014-08-04 01:41:35.047 | Traceback (most recent call last):
  2014-08-04 01:41:35.047 |   File "tempest/scenario/manager.py", line 175, in 
delete_wrapper
  2014-08-04 01:41:35.047 | thing.delete()
  2014-08-04 01:41:35.048 |   File 
"/opt/stack/new/python-novaclient/novaclient/v1_1/security_groups.py", line 31, 
in delete
  2014-08-04 01:41:35.048 | self.manager.delete(self)
  2014-08-04 01:41:35.048 |   File 
"/opt/stack/new/python-novaclient/novaclient/v1_1/security_groups.py", line 71, 
in delete
  2014-08-04 01:41:35.049 | self._delete('/os-security-groups/%s' % 
base.getid(group))
  2014-08-04 01:41:35.049 |   File 
"/opt/stack/new/python-novaclient/novaclient/base.py", line 109, in _delete
  2014-08-04 01:41:35.049 | _resp, _body = self.api.client.delete(url)
  2014-08-04 01:41:35.050 |   File 
"/opt/stack/new/python-novaclient/novaclient/client.py", line 538, in delete
  2014-08-04 01:41:35.050 | return self._cs_request(url, 'DELETE', **kwargs)
  2014-08-04 01:41:35.050 |   File 
"/opt/stack/new/python-novaclient/novaclient/client.py", line 507, in 
_cs_request
  2014-08-04 01:41:35.050 | resp, body = self._time_request(url, method, 
**kwargs)
  2014-08-04 01:42:12.628 |   File 
"/opt/stack/new/python-novaclient/novaclient/client.py", line 481, in 
_time_request
  2014-08-04 01:42:12.629 | resp, body = self.request(url, method, **kwargs)
  2014-08-04 01:42:12.629 |   File 
"/opt/stack/new/python-novaclient/novaclient/client.py", line 475, in request
  2014-08-04 01:42:12.629 | raise exceptions.from_response(resp, body, url, 
method)
  2014-08-04 01:42:12.630 | BadRequest: Security group is still in use (HTTP 
400) (Request-ID: req-cb8e9344-57e7-4ad8-962c-c527934eae59)
  2014-08-04 01:42:12.630 | 
==
  2014-08-04 01:42:12.630 | FAIL: 
tempest.scenario.test_large_ops.TestLargeOpsScenario.test_large_ops_scenario_3[compute,image]
  2014-08-04 01:42:12.631 | tags: worker-0
  2014-08-04 01:42:12.631 | 
--
  2014-08-04 01:42:12.631 | Empty attachments:
  2014-08-04 01:42:12.631 |   stderr
  2014-08-04 01:42:12.632 |   stdout

  detail: http://logs.openstack.org/38/38/3/check/gate-tempest-dsvm-
  large-ops/c0005d5/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1352092/+subscriptions



[Yahoo-eng-team] [Bug 1339923] Re: failures in tempest.thirdparty.boto.test_ec2_keys

2014-09-16 Thread David Kranz
This is still showing up in logstash

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQm90b1NlcnZlckVycm9yOiBCb3RvU2VydmVyRXJyb3I6IDUwMCBJbnRlcm5hbCBTZXJ2ZXIgRXJyb3JcIiBBTkQgTk9UIGJ1aWxkX2JyYW5jaDpcInN0YWJsZS9oYXZhbmFcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQxMDg5MDk1MTI4M30=

Opening this against nova for now, as tempest should not be getting a 500.

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: tempest
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1339923

Title:
  failures in tempest.thirdparty.boto.test_ec2_keys

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  Invalid

Bug description:
  per http://logs.openstack.org/07/105307/9/check/check-tempest-dsvm-
  postgres-full/190d3ce/console.html, seeing failures unrelated to my
  patch:

  
tempest.thirdparty.boto.test_ec2_security_groups.EC2SecurityGroupTest.test_create_authorize_security_group
  
--
   
   BotoServerError: BotoServerError: 500 Internal Server Error
   
   OperationalError: Unknown error occurred. (req-ab53ffd6-fd2f-4747-b0ed-8240381adc12)
   
   
   tempest.thirdparty.boto.test_ec2_keys.EC2KeysTest.test_create_ec2_keypair
   -
   
   BotoServerError: BotoServerError: 500 Internal Server Error
   
   OperationalError: Unknown error occurred. (req-930ffb3f-ece3-47ee-89e7-9d3afe273c9e)
   
   
   tempest.thirdparty.boto.test_ec2_keys.EC2KeysTest.test_duplicate_ec2_keypair
   
   
   BotoServerError: BotoServerError: 500 Internal Server Error
   
   OperationalError: Unknown error occurred. (req-dda2e5f0-58c9-43ab-81a7-103f12d7915f)
   
   
   tempest.thirdparty.boto.test_ec2_keys.EC2KeysTest.test_get_ec2_keypair
   --
   
   BotoServerError: BotoServerError: 500 Internal Server Error
   
   OperationalError: Unknown error occurred. (req-08ac28ea-5d4e-4f83-a256-53717e6bce9e)
   

  the final cleanup exception is similar to 
  https://bugs.launchpad.net/tempest/+bug/1339910 but the cause is not a 
duplicate 
  keypair:

   tearDownClass (tempest.thirdparty.boto.test_ec2_keys.EC2KeysTest)
   -
   
   Captured traceback:
   ~~~
   Traceback (most recent call last):
 File "tempest/thirdparty/boto/test.py", line 272, in tearDownClass
   raise exceptions.TearDownException(num=fail_count)
   TearDownException: 1 cleanUp operation failed

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1339923/+subscriptions



[Yahoo-eng-team] [Bug 1335889] Re: Race condition in the stress volume_attach_delete test

2014-09-16 Thread David Kranz
I am not seeing how this is a bug in tempest. Tempest is deleting the VM
only after nova reports that the volume is 'in-use', which seems fine.
It would be nice if there were a backtrace, log, or something else
associated with this ticket. This might be a cinder issue, but it is more
likely nova, since the test is making a nova call.
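
If the attachment really is registered late on the nova/cinder side, the
only defensive thing the test could do is wait until the attachment is
actually visible before deleting the VM. A minimal sketch of such a wait
(hypothetical helper; the client method and field names are assumptions):

    import time

    def wait_for_attachment(volumes_client, volume_id, timeout=60, interval=2):
        # Poll the volume until it is 'in-use' AND an attachment record is
        # present, rather than relying on the server-side status alone.
        vol = {}
        deadline = time.time() + timeout
        while time.time() < deadline:
            _resp, vol = volumes_client.get_volume(volume_id)
            if vol['status'] == 'in-use' and vol.get('attachments'):
                return vol
            time.sleep(interval)
        raise Exception('Volume %s never reported an attachment (status=%s)'
                        % (volume_id, vol.get('status')))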

** Changed in: tempest
   Status: New => Invalid

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1335889

Title:
  Race condition in quickly attaching / deleting volumes

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  Confirmed
Status in Tempest:
  Invalid

Bug description:
  It seems that there is a race condition in the stress
  volume_attach_delete test. It creates VMs and volumes, attaches
  volumes and deletes everything.

  The test is waiting for volumes to be in 'in-use' state before
  deleting VMs. It seems that Nova/Cinder don't have time to register
  volumes as attached in their databases before VMs get deleted. Volumes
  are then left attached to deleted VMs and unable to be deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1335889/+subscriptions



[Yahoo-eng-team] [Bug 1331913] Re: tempest.api.volume.test_volumes_actions.VolumesActionsTestXML.test_volume_upload fails

2014-09-16 Thread David Kranz
I really don't know if this is a problem with glance or devstack or
something else.

** Changed in: tempest
   Status: New => Invalid

** Also affects: glance
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1331913

Title:
  
tempest.api.volume.test_volumes_actions.VolumesActionsTestXML.test_volume_upload
  fails

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in Tempest:
  Invalid

Bug description:
  See: http://logs.openstack.org/07/81707/7/check/check-tempest-dsvm-
  full/8b1ee80/console.html

  2014-06-19 03:37:29.394 | 
tempest.api.volume.test_volumes_actions.VolumesActionsTestXML.test_volume_upload[gate,image]
  2014-06-19 03:37:29.394 | 

  2014-06-19 03:37:29.395 | 
  2014-06-19 03:37:29.395 | Captured traceback:
  2014-06-19 03:37:29.395 | ~~~
  2014-06-19 03:37:29.395 | Traceback (most recent call last):
  2014-06-19 03:37:29.395 |   File "tempest/test.py", line 126, in wrapper
  2014-06-19 03:37:29.395 | return f(self, *func_args, **func_kwargs)
  2014-06-19 03:37:29.395 |   File 
"tempest/api/volume/test_volumes_actions.py", line 107, in test_volume_upload
  2014-06-19 03:37:29.395 | 
self.image_client.wait_for_image_status(image_id, 'active')
  2014-06-19 03:37:29.395 |   File 
"tempest/services/image/v1/json/image_client.py", line 289, in 
wait_for_image_status
  2014-06-19 03:37:29.395 | status=status)
  2014-06-19 03:37:29.395 | ImageKilledException: Image 
ecd98deb-ca3d-4207-b6c9-49ae6434e765 'killed' while waiting for 'active'

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1331913/+subscriptions



[Yahoo-eng-team] [Bug 1331215] Re: AttachInterfacesTestXML.test_add_remove_fixed_ip BadRequest

2014-09-17 Thread David Kranz
The proximate cause of these errors is that, at the time this bug was
reported, we tried to grab the console in an erroneous way that caused a
400. That issue has since been fixed, and I would just close this as
another random "server failed to boot" issue, except that right before
the failed server create there are a lot of errors in the nova n-cpu log:

http://logs.openstack.org/93/99393/4/check/check-tempest-dsvm-
neutron/1392e22/logs/screen-n-cpu.txt.gz?level=ERROR

I tried to search for some of these errors, but many normal runs produce
them as well. So I am opening this against nova in case a nova expert can
see meaning in these errors.

** Changed in: tempest
   Status: New => Invalid

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1331215

Title:
  AttachInterfacesTestXML.test_add_remove_fixed_ip  BadRequest

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  Invalid

Bug description:
  This looks to be a new bug, see
  http://logs.openstack.org/93/99393/4/check/check-tempest-dsvm-
  neutron/1392e22/console.html

  2014-06-17 19:47:54.665 | 
  2014-06-17 19:47:54.665 | ==
  2014-06-17 19:47:54.665 | Failed 3 tests - output below:
  2014-06-17 19:47:54.665 | ==
  2014-06-17 19:47:54.665 | 
  2014-06-17 19:47:54.665 | 
tempest.api.compute.servers.test_attach_interfaces.AttachInterfacesTestXML.test_add_remove_fixed_ip[gate,smoke]
  2014-06-17 19:47:54.665 | 
---
  2014-06-17 19:47:54.665 | 
  2014-06-17 19:47:54.665 | Captured traceback:
  2014-06-17 19:47:54.665 | ~~~
  2014-06-17 19:47:54.665 | Traceback (most recent call last):
  2014-06-17 19:47:54.666 |   File 
"tempest/api/compute/servers/test_attach_interfaces.py", line 133, in 
test_add_remove_fixed_ip
  2014-06-17 19:47:54.666 | server, ifs = 
self._create_server_get_interfaces()
  2014-06-17 19:47:54.666 |   File 
"tempest/api/compute/servers/test_attach_interfaces.py", line 48, in 
_create_server_get_interfaces
  2014-06-17 19:47:54.666 | resp, server = 
self.create_test_server(wait_until='ACTIVE')
  2014-06-17 19:47:54.666 |   File "tempest/api/compute/base.py", line 247, 
in create_test_server
  2014-06-17 19:47:54.666 | raise ex
  2014-06-17 19:47:54.666 | BadRequest: Bad request
  2014-06-17 19:47:54.666 | Details: {'message': 'The server could not 
comply with the request since it is either malformed or otherwise incorrect.', 
'code': '400'}
  2014-06-17 19:47:54.666 | 
  2014-06-17 19:47:54.666 | 
  2014-06-17 19:47:54.666 | Captured pythonlogging:
  2014-06-17 19:47:54.666 | ~~~
  2014-06-17 19:47:54.667 | 2014-06-17 19:15:13,993 Request 
(AttachInterfacesTestXML:test_add_remove_fixed_ip): 200 POST 
http://127.0.0.1:5000/v2.0/tokens
  2014-06-17 19:47:54.667 | 2014-06-17 19:15:14,431 Request 
(AttachInterfacesTestXML:test_add_remove_fixed_ip): 202 POST 
http://127.0.0.1:8774/v2/97b84e38f795456d8c63a64526f8e8b5/servers 0.437s
  2014-06-17 19:47:54.667 | 2014-06-17 19:15:14,534 Request 
(AttachInterfacesTestXML:test_add_remove_fixed_ip): 200 GET 
http://127.0.0.1:8774/v2/97b84e38f795456d8c63a64526f8e8b5/servers/94bb0663-1b9e-4098-bedf-9c7d4fc6f8c9
 0.101s
  2014-06-17 19:47:54.667 | 2014-06-17 19:15:15,663 Request 
(AttachInterfacesTestXML:test_add_remove_fixed_ip): 200 GET 
http://127.0.0.1:8774/v2/97b84e38f795456d8c63a64526f8e8b5/servers/94bb0663-1b9e-4098-bedf-9c7d4fc6f8c9
 0.126s
  2014-06-17 19:47:54.667 | 2014-06-17 19:15:15,665 State transition 
"BUILD/scheduling" ==> "ERROR/None" after 1 second wait
  2014-06-17 19:47:54.667 | 2014-06-17 19:15:15,782 Request 
(AttachInterfacesTestXML:test_add_remove_fixed_ip): 400 POST 
http://127.0.0.1:8774/v2/97b84e38f795456d8c63a64526f8e8b5/servers/94bb0663-1b9e-4098-bedf-9c7d4fc6f8c9/action
 0.116s
  2014-06-17 19:47:54.667 | 2014-06-17 19:15:16,103 Request 
(AttachInterfacesTestXML:test_add_remove_fixed_ip): 204 DELETE 
http://127.0.0.1:8774/v2/97b84e38f795456d8c63a64526f8e8b5/servers/94bb0663-1b9e-4098-bedf-9c7d4fc6f8c9
 0.320s
  2014-06-17 19:47:54.667 | 
  2014-06-17 19:47:54.667 | 
  2014-06-17 19:47:54.667 | 
tempest.api.compute.servers.test_attach_interfaces.AttachInterfacesTestXML.test_create_list_show_delete_interfaces[gate,smoke]
  2014-06-17 19:47:54.667 | 
--
  2014-06-17 19:47:54.668 | 
  2014-06-17 19:47:54.668 | Captured traceback:
  2014-06-17 19:47:54.668 | ~~~
  2014-06-17 19:47:54.668 | Traceback (most recent call last):
  2014-06-17 19:47:54.668 |  

[Yahoo-eng-team] [Bug 1084706] Re: ERROR stacktraces in n-cond log after good tempest run

2013-09-26 Thread David Kranz
The same issue is showing up in current jobs:
http://logs.openstack.org/87/44287/9/check/gate-tempest-devstack-vm-
full/c3a07eb/logs/screen-n-cond.txt.gz

** Changed in: nova
   Status: Fix Released => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1084706

Title:
  ERROR stacktraces in n-cond log after good tempest run

Status in OpenStack Compute (Nova):
  New

Bug description:
  From 
http://logs.openstack.org/periodic/periodic-tempest-devstack-vm-check-hourly/404/logs/screen-n-cond.txt
  There are lots of these.

  2012-11-29 19:17:30 29815 DEBUG nova.manager [-] Running periodic task 
ConductorManager.publish_service_capabilities periodic_tasks 
/opt/stack/nova/nova/manager.py:172
  2012-11-29 19:17:34 29815 DEBUG nova.openstack.common.rpc.amqp [-] received 
{u'_context_roles': [], u'_msg_id': u'971480bb48574994aebce24c9d2c4e68', 
u'_context_quota_class': None, u'_context_request_id': 
u'req-5acd8b17-cd50-4e52-9877-be572eda2c3b', u'_context_service_catalog': None, 
u'_context_user_name': None, u'_context_auth_token': '', u'args': 
{u'instance_uuid': u'cec91bce-bfb5-4544-9bd6-1d43aff47b19'}, 
u'_context_instance_lock_checked': False, u'_context_project_name': None, 
u'_context_is_admin': True, u'version': u'1.2', u'_context_project_id': None, 
u'_context_timestamp': u'2012-11-29T19:17:27.770893', u'_context_read_deleted': 
u'no', u'_context_user_id': None, u'method': u'instance_get_by_uuid', 
u'_context_remote_address': None} _safe_log 
/opt/stack/nova/nova/openstack/common/rpc/common.py:195
  2012-11-29 19:17:34 29815 DEBUG nova.openstack.common.rpc.amqp [-] unpacked 
context: {'project_name': None, 'user_id': None, 'roles': [], 'timestamp': 
u'2012-11-29T19:17:27.770893', 'auth_token': '', 'remote_address': 
None, 'quota_class': None, 'is_admin': True, 'service_catalog': None, 
'request_id': u'req-5acd8b17-cd50-4e52-9877-be572eda2c3b', 
'instance_lock_checked': False, 'project_id': None, 'user_name': None, 
'read_deleted': u'no'} _safe_log 
/opt/stack/nova/nova/openstack/common/rpc/common.py:195
  2012-11-29 19:17:34 29815 ERROR nova.openstack.common.rpc.amqp [-] Exception 
during message handling
  2012-11-29 19:17:34 29815 TRACE nova.openstack.common.rpc.amqp Traceback 
(most recent call last):
  2012-11-29 19:17:34 29815 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/openstack/common/rpc/amqp.py", line 276, in _process_data
  2012-11-29 19:17:34 29815 TRACE nova.openstack.common.rpc.amqp rval = 
self.proxy.dispatch(ctxt, version, method, **args)
  2012-11-29 19:17:34 29815 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/openstack/common/rpc/dispatcher.py", line 145, in dispatch
  2012-11-29 19:17:34 29815 TRACE nova.openstack.common.rpc.amqp return 
getattr(proxyobj, method)(ctxt, **kwargs)
  2012-11-29 19:17:34 29815 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/conductor/manager.py", line 66, in instance_get_by_uuid
  2012-11-29 19:17:34 29815 TRACE nova.openstack.common.rpc.amqp 
self.db.instance_get_by_uuid(context, instance_uuid))
  2012-11-29 19:17:34 29815 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/db/api.py", line 570, in instance_get_by_uuid
  2012-11-29 19:17:34 29815 TRACE nova.openstack.common.rpc.amqp return 
IMPL.instance_get_by_uuid(context, uuid)
  2012-11-29 19:17:34 29815 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/db/sqlalchemy/api.py", line 127, in wrapper
  2012-11-29 19:17:34 29815 TRACE nova.openstack.common.rpc.amqp return 
f(*args, **kwargs)
  2012-11-29 19:17:34 29815 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/db/sqlalchemy/api.py", line 1493, in instance_get_by_uuid
  2012-11-29 19:17:34 29815 TRACE nova.openstack.common.rpc.amqp raise 
exception.InstanceNotFound(instance_id=uuid)
  2012-11-29 19:17:34 29815 TRACE nova.openstack.common.rpc.amqp 
InstanceNotFound: Instance cec91bce-bfb5-4544-9bd6-1d43aff47b19 could not be 
found.
  2012-11-29 19:17:34 29815 TRACE nova.openstack.common.rpc.amqp

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1084706/+subscriptions



[Yahoo-eng-team] [Bug 1084706] Re: ERROR stacktraces in n-cond log after good tempest run

2013-10-01 Thread David Kranz
OK, I created https://bugs.launchpad.net/nova/+bug/1233789

** Changed in: nova
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1084706

Title:
  ERROR stacktraces in n-cond log after good tempest run

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  From 
http://logs.openstack.org/periodic/periodic-tempest-devstack-vm-check-hourly/404/logs/screen-n-cond.txt
  There are lots of these.

  2012-11-29 19:17:30 29815 DEBUG nova.manager [-] Running periodic task 
ConductorManager.publish_service_capabilities periodic_tasks 
/opt/stack/nova/nova/manager.py:172
  2012-11-29 19:17:34 29815 DEBUG nova.openstack.common.rpc.amqp [-] received 
{u'_context_roles': [], u'_msg_id': u'971480bb48574994aebce24c9d2c4e68', 
u'_context_quota_class': None, u'_context_request_id': 
u'req-5acd8b17-cd50-4e52-9877-be572eda2c3b', u'_context_service_catalog': None, 
u'_context_user_name': None, u'_context_auth_token': '', u'args': 
{u'instance_uuid': u'cec91bce-bfb5-4544-9bd6-1d43aff47b19'}, 
u'_context_instance_lock_checked': False, u'_context_project_name': None, 
u'_context_is_admin': True, u'version': u'1.2', u'_context_project_id': None, 
u'_context_timestamp': u'2012-11-29T19:17:27.770893', u'_context_read_deleted': 
u'no', u'_context_user_id': None, u'method': u'instance_get_by_uuid', 
u'_context_remote_address': None} _safe_log 
/opt/stack/nova/nova/openstack/common/rpc/common.py:195
  2012-11-29 19:17:34 29815 DEBUG nova.openstack.common.rpc.amqp [-] unpacked 
context: {'project_name': None, 'user_id': None, 'roles': [], 'timestamp': 
u'2012-11-29T19:17:27.770893', 'auth_token': '', 'remote_address': 
None, 'quota_class': None, 'is_admin': True, 'service_catalog': None, 
'request_id': u'req-5acd8b17-cd50-4e52-9877-be572eda2c3b', 
'instance_lock_checked': False, 'project_id': None, 'user_name': None, 
'read_deleted': u'no'} _safe_log 
/opt/stack/nova/nova/openstack/common/rpc/common.py:195
  2012-11-29 19:17:34 29815 ERROR nova.openstack.common.rpc.amqp [-] Exception 
during message handling
  2012-11-29 19:17:34 29815 TRACE nova.openstack.common.rpc.amqp Traceback 
(most recent call last):
  2012-11-29 19:17:34 29815 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/openstack/common/rpc/amqp.py", line 276, in _process_data
  2012-11-29 19:17:34 29815 TRACE nova.openstack.common.rpc.amqp rval = 
self.proxy.dispatch(ctxt, version, method, **args)
  2012-11-29 19:17:34 29815 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/openstack/common/rpc/dispatcher.py", line 145, in dispatch
  2012-11-29 19:17:34 29815 TRACE nova.openstack.common.rpc.amqp return 
getattr(proxyobj, method)(ctxt, **kwargs)
  2012-11-29 19:17:34 29815 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/conductor/manager.py", line 66, in instance_get_by_uuid
  2012-11-29 19:17:34 29815 TRACE nova.openstack.common.rpc.amqp 
self.db.instance_get_by_uuid(context, instance_uuid))
  2012-11-29 19:17:34 29815 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/db/api.py", line 570, in instance_get_by_uuid
  2012-11-29 19:17:34 29815 TRACE nova.openstack.common.rpc.amqp return 
IMPL.instance_get_by_uuid(context, uuid)
  2012-11-29 19:17:34 29815 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/db/sqlalchemy/api.py", line 127, in wrapper
  2012-11-29 19:17:34 29815 TRACE nova.openstack.common.rpc.amqp return 
f(*args, **kwargs)
  2012-11-29 19:17:34 29815 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/db/sqlalchemy/api.py", line 1493, in instance_get_by_uuid
  2012-11-29 19:17:34 29815 TRACE nova.openstack.common.rpc.amqp raise 
exception.InstanceNotFound(instance_id=uuid)
  2012-11-29 19:17:34 29815 TRACE nova.openstack.common.rpc.amqp 
InstanceNotFound: Instance cec91bce-bfb5-4544-9bd6-1d43aff47b19 could not be 
found.
  2012-11-29 19:17:34 29815 TRACE nova.openstack.common.rpc.amqp

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1084706/+subscriptions



[Yahoo-eng-team] [Bug 1256406] Re: libvirt version error in n-cpu after successful tempest run

2013-12-02 Thread David Kranz
One of these came from logs.openstack.org/94/58494/3/check/check-
tempest-devstack-vm-full/2ef9650/logs/screen-n-cpu.txt.gz

But this was probably from a "check" run on a proposed commit. I have
been scanning all builds to finalize the whitelist for log errors.
Closing this ticket for now.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1256406

Title:
  libvirt version error in n-cpu after successful tempest run

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  These started happening on 11/26 but it is flaky:

  2013-11-28 15:47:41.391 21230 ERROR nova.virt.libvirt.driver [-] Nova
  requires libvirt version 0.9.11 or greater.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1256406/+subscriptions



[Yahoo-eng-team] [Bug 1257509] [NEW] swiftclient ERROR in g-api after successful tempest run

2013-12-03 Thread David Kranz
Public bug reported:

From the log file for this change
http://logs.openstack.org/33/59533/1/gate/gate-tempest-dsvm-
full/1f2c988/console.html

2013-12-03 21:30:31.851 22827 ERROR swiftclient [-] Object GET failed: 
http://127.0.0.1:8080/v1/AUTH_4d22a858761e4b90b536f489ccff34ca/glance/a6c33fc7-4871-45f7-8b3c-fd0a7452cea0
 404 Not Found  [first 60 chars of response] Not Found: The
resource could not be found.
2013-12-03 21:30:31.851 22827 TRACE swiftclient Traceback (most recent call 
last):
2013-12-03 21:30:31.851 22827 TRACE swiftclient   File 
"/opt/stack/new/python-swiftclient/swiftclient/client.py", line 1122, in _retry
2013-12-03 21:30:31.851 22827 TRACE swiftclient rv = func(self.url, 
self.token, *args, **kwargs)
2013-12-03 21:30:31.851 22827 TRACE swiftclient   File 
"/opt/stack/new/python-swiftclient/swiftclient/client.py", line 760, in 
get_object
2013-12-03 21:30:31.851 22827 TRACE swiftclient http_response_content=body)
2013-12-03 21:30:31.851 22827 TRACE swiftclient ClientException: Object GET 
failed: 
http://127.0.0.1:8080/v1/AUTH_4d22a858761e4b90b536f489ccff34ca/glance/a6c33fc7-4871-45f7-8b3c-fd0a7452cea0
 404 Not Found  [first 60 chars of response] Not Found: The
resource could not be found.
2013-12-03 21:30:31.851 22827 TRACE swiftclient 
2013-12-03 21:30:31.851 22827 WARNING glance.store.swift 
[c841d109-8752-491b-acf0-d3f0da72d69e 724396572e324829aefb01ff24bd746e 
928f762a2fe54ffea18fc270c1292920] Swift could not find object 
a6c33fc7-4871-45f7-8b3c-fd0a7452cea0.
2013-12-03 21:30:31.854 22827 INFO glance.wsgi.server 
[c841d109-8752-491b-acf0-d3f0da72d69e 724396572e324829aefb01ff24bd746e 
928f762a2fe54ffea18fc270c1292920] 127.0.0.1 - - [03/Dec/2013 21:30:31] "GET 
/v1/images/a6c33fc7-4871-45f7-8b3c-fd0a7452cea0 HTTP/1.1" 404 294 1.760652

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1257509

Title:
  swiftclient ERROR in g-api after successful tempest run

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  From the log file from this change
  http://logs.openstack.org/33/59533/1/gate/gate-tempest-dsvm-
  full/1f2c988/console.html

  2013-12-03 21:30:31.851 22827 ERROR swiftclient [-] Object GET failed: 
http://127.0.0.1:8080/v1/AUTH_4d22a858761e4b90b536f489ccff34ca/glance/a6c33fc7-4871-45f7-8b3c-fd0a7452cea0
 404 Not Found  [first 60 chars of response] Not Found: The
resource could not be found.
  2013-12-03 21:30:31.851 22827 TRACE swiftclient Traceback (most recent call 
last):
  2013-12-03 21:30:31.851 22827 TRACE swiftclient   File 
"/opt/stack/new/python-swiftclient/swiftclient/client.py", line 1122, in _retry
  2013-12-03 21:30:31.851 22827 TRACE swiftclient rv = func(self.url, 
self.token, *args, **kwargs)
  2013-12-03 21:30:31.851 22827 TRACE swiftclient   File 
"/opt/stack/new/python-swiftclient/swiftclient/client.py", line 760, in 
get_object
  2013-12-03 21:30:31.851 22827 TRACE swiftclient 
http_response_content=body)
  2013-12-03 21:30:31.851 22827 TRACE swiftclient ClientException: Object GET 
failed: 
http://127.0.0.1:8080/v1/AUTH_4d22a858761e4b90b536f489ccff34ca/glance/a6c33fc7-4871-45f7-8b3c-fd0a7452cea0
 404 Not Found  [first 60 chars of response] Not Found: The
resource could not be found.
  2013-12-03 21:30:31.851 22827 TRACE swiftclient 
  2013-12-03 21:30:31.851 22827 WARNING glance.store.swift 
[c841d109-8752-491b-acf0-d3f0da72d69e 724396572e324829aefb01ff24bd746e 
928f762a2fe54ffea18fc270c1292920] Swift could not find object 
a6c33fc7-4871-45f7-8b3c-fd0a7452cea0.
  2013-12-03 21:30:31.854 22827 INFO glance.wsgi.server 
[c841d109-8752-491b-acf0-d3f0da72d69e 724396572e324829aefb01ff24bd746e 
928f762a2fe54ffea18fc270c1292920] 127.0.0.1 - - [03/Dec/2013 21:30:31] "GET 
/v1/images/a6c33fc7-4871-45f7-8b3c-fd0a7452cea0 HTTP/1.1" 404 294 1.760652

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1257509/+subscriptions



[Yahoo-eng-team] [Bug 1258601] Re: nova.network.manager: Unable to release because vif doesn't exist.

2013-12-12 Thread David Kranz
*** This bug is a duplicate of bug 1258848 ***
https://bugs.launchpad.net/bugs/1258848

Please include a pointer to the log file for such reports. According to
logstash this has hit 48 times in the last two weeks, which is a very low
failure rate. Ideally flaky bugs like this would be fixed. If the nova
team wants to silence this, a patch can be submitted to the whitelist in
tempest.

** Changed in: tempest
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1258601

Title:
  nova.network.manager: Unable to release  because vif doesn't
  exist.

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  Invalid

Bug description:
  This error shows un in nova-network log.

  Not sure if it needs to be whitelisted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1258601/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1255627] Re: images.test_list_image_filters.ListImageFiltersTest fails with timeout

2013-12-12 Thread David Kranz
This non-white-listed error showed up in n-cpu:

2013-11-27 00:53:57.756 ERROR nova.virt.libvirt.driver [req-
298cf8f1-3907-4494-8b6e-61e9b88dfded ListImageFiltersTestXML-
tempest-656023876-user ListImageFiltersTestXML-tempest-656023876-tenant]
An error occurred while enabling hairpin mode on domain with xml:


According to logstash this happened 9 times in the last two weeks.

** Changed in: nova
   Status: New => Confirmed

** Changed in: tempest
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1255627

Title:
  images.test_list_image_filters.ListImageFiltersTest fails with timeout

Status in OpenStack Compute (Nova):
  Confirmed
Status in Tempest:
  Invalid

Bug description:
  Spurious failure in this test:

  http://logs.openstack.org/49/55749/8/check/check-tempest-devstack-vm-
  full/9bc94d5/console.html

  2013-11-27 01:10:35.802 | 
==
  2013-11-27 01:10:35.802 | FAIL: setUpClass 
(tempest.api.compute.images.test_list_image_filters.ListImageFiltersTestXML)
  2013-11-27 01:10:35.803 | setUpClass 
(tempest.api.compute.images.test_list_image_filters.ListImageFiltersTestXML)
  2013-11-27 01:10:35.803 | 
--
  2013-11-27 01:10:35.803 | _StringException: Traceback (most recent call last):
  2013-11-27 01:10:35.804 |   File 
"tempest/api/compute/images/test_list_image_filters.py", line 50, in setUpClass
  2013-11-27 01:10:35.807 | cls.client.wait_for_image_status(cls.image1_id, 
'ACTIVE')
  2013-11-27 01:10:35.809 |   File 
"tempest/services/compute/xml/images_client.py", line 153, in 
wait_for_image_status
  2013-11-27 01:10:35.809 | raise exceptions.TimeoutException
  2013-11-27 01:10:35.809 | TimeoutException: Request timed out

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1255627/+subscriptions



[Yahoo-eng-team] [Bug 1254772] Re: tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML setUpClass times-out on attaching volume

2013-12-12 Thread David Kranz
This shows up in n-cpu:

The "model server went away" showed up 11 times in the last two weeks
with the last one being on Dec. 3. This sample size is too small for me
to close at this time.

2013-11-25 15:24:22.099 21076 ERROR nova.servicegroup.drivers.db [-] model 
server went away
2013-11-25 15:24:32.814 ERROR nova.compute.manager 
[req-ecacaa21-3f07-4b44-9896-8b5bd2238a19 
ServersTestManualDisk-tempest-1962756300-user 
ServersTestManualDisk-tempest-1962756300-tenant] [instance: 
1f872097-8ad8-44f8-ba03-89a14115efe0] Failed to deallocate network for instance.
2013-11-25 15:25:32.855 21076 ERROR root [-] Original exception being dropped: 
['Traceback (most recent call last):\n', '  File 
"/opt/stack/new/nova/nova/compute/manager.py", line 1809, in 
_try_deallocate_network\nself._deallocate_network(context, instance, 
requested_networks)\n', '  File "/opt/stack/new/nova/nova/compute/manager.py", 
line 1491, in _deallocate_network\ncontext, instance, 
requested_networks=requested_networks)\n', '  File 
"/opt/stack/new/nova/nova/network/api.py", line 93, in wrapped\nreturn 
func(self, context, *args, **kwargs)\n', '  File 
"/opt/stack/new/nova/nova/network/api.py", line 318, in 
deallocate_for_instance\n
self.network_rpcapi.deallocate_for_instance(context, **args)\n', '  File 
"/opt/stack/new/nova/nova/network/rpcapi.py", line 199, in 
deallocate_for_instance\nhost=host, 
requested_networks=requested_networks)\n', '  File 
"/opt/stack/new/nova/nova/rpcclient.py", line 85, in call\nreturn 
self._invoke(self.proxy.call, ctxt, method, **
 kwargs)\n', '  File "/opt/stack/new/nova/nova/rpcclient.py", line 63, in 
_invoke\nreturn cast_or_call(ctxt, msg, **self.kwargs)\n', '  File 
"/opt/stack/new/nova/nova/openstack/common/rpc/proxy.py", line 130, in call\n   
 exc.info, real_topic, msg.get(\'method\'))\n', 'Timeout: Timeout while waiting 
on RPC response - topic: "network", RPC method: "deallocate_for_instance" info: 
""\n']
2013-11-25 15:25:38.371 21076 ERROR nova.openstack.common.periodic_task [-] 
Error during ComputeManager.update_available_resource: Timeout while waiting on 
RPC response - topic: "conductor", RPC method: "compute_node_update" info: 
""
2013-11-25 15:26:32.903 21076 ERROR root [-] Original exception being dropped: 
['Traceback (most recent call last):\n', '  File 
"/opt/stack/new/nova/nova/compute/manager.py", line 1919, in _delete_instance\n 
   self._shutdown_instance(context, db_inst, bdms)\n', '  File 
"/opt/stack/new/nova/nova/compute/manager.py", line 1854, in 
_shutdown_instance\nself._try_deallocate_network(context, instance, 
requested_networks)\n', '  File "/opt/stack/new/nova/nova/compute/manager.py", 
line 1814, in _try_deallocate_network\n
self._set_instance_error_state(context, instance[\'uuid\'])\n', '  File 
"/opt/stack/new/nova/nova/compute/manager.py", line 484, in 
_set_instance_error_state\nvm_state=vm_states.ERROR)\n', '  File 
"/opt/stack/new/nova/nova/compute/manager.py", line 473, in _instance_update\n  
  **kwargs)\n', '  File "/opt/stack/new/nova/nova/conductor/api.py", line 389, 
in instance_update\nupdates, \'conductor\')\n', '  File 
"/opt/stack/new/nova/nova/conductor/rpcapi.py", line
  149, in instance_update\nservice=service)\n', '  File 
"/opt/stack/new/nova/nova/rpcclient.py", line 85, in call\nreturn 
self._invoke(self.proxy.call, ctxt, method, **kwargs)\n', '  File 
"/opt/stack/new/nova/nova/rpcclient.py", line 63, in _invoke\nreturn 
cast_or_call(ctxt, msg, **self.kwargs)\n', '  File 
"/opt/stack/new/nova/nova/openstack/common/rpc/proxy.py", line 130, in call\n   
 exc.info, real_topic, msg.get(\'method\'))\n', 'Timeout: Timeout while waiting 
on RPC response - topic: "conductor", RPC method: "instance_update" info: 
""\n']
2013-11-25 15:26:32.933 21076 ERROR nova.servicegroup.drivers.db [-] Recovered 
model server connection!


** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: tempest
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1254772

Title:
  tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML
  setUpClass times-out on attaching volume

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  Invalid

Bug description:
  2013-11-25 15:42:45.769 | 
==
  2013-11-25 15:42:45.770 | FAIL: setUpClass 
(tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML)
  2013-11-25 15:42:45.770 | setUpClass 
(tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML)
  2013-11-25 15:42:45.770 | 
--
  2013-11-25 15:42:45.770 | _StringException: Traceback (most recent call last):
  2013-11-25 15:42:45.770 |   File 
"tempest/api/compute/servers/test_server

[Yahoo-eng-team] [Bug 1269614] [NEW] virt ERRORs in n-cpu log after successful tempest run

2014-01-15 Thread David Kranz
Public bug reported:

Lots of these slipped in during the current log checking outage:

2014-01-14 00:05:15.220 | 2014-01-14 00:04:13.658 26807 ERROR
nova.virt.driver [-] Exception dispatching event
: Info cache for
instance a0896255-1e5d-477d-9d16-0ab69687ba41 could not be found.


From
http://logs.openstack.org/34/63934/3/gate/gate-tempest-dsvm-full/28c2e9a/console.html

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1269614

Title:
  virt ERRORs in n-cpu log after successful tempest run

Status in OpenStack Compute (Nova):
  New

Bug description:
  Lots of these slipped in during the current log checking outage:

  2014-01-14 00:05:15.220 | 2014-01-14 00:04:13.658 26807 ERROR
  nova.virt.driver [-] Exception dispatching event
  : Info cache for
  instance a0896255-1e5d-477d-9d16-0ab69687ba41 could not be found.

  
  From 
http://logs.openstack.org/34/63934/3/gate/gate-tempest-dsvm-full/28c2e9a/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1269614/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1406438] [NEW] Failures in test_create_list_show_delete_interfaces

2014-12-29 Thread David Kranz
Public bug reported:

logstash showed several dozen examples of this in the last week,
searching for

"u'port_state': u'BUILD'"


tempest.api.compute.servers.test_attach_interfaces.AttachInterfacesTestJSON.test_create_list_show_delete_interfaces[gate,network,smoke]
2014-12-29 23:02:14.022 | 
---
2014-12-29 23:02:14.022 | 
2014-12-29 23:02:14.022 | Captured traceback:
2014-12-29 23:02:14.022 | ~~~
2014-12-29 23:02:14.022 | Traceback (most recent call last):
2014-12-29 23:02:14.022 |   File "tempest/test.py", line 112, in wrapper
2014-12-29 23:02:14.022 | return f(self, *func_args, **func_kwargs)
2014-12-29 23:02:14.022 |   File 
"tempest/api/compute/servers/test_attach_interfaces.py", line 128, in 
test_create_list_show_delete_interfaces
2014-12-29 23:02:14.023 | self._test_show_interface(server, ifs)
2014-12-29 23:02:14.023 |   File 
"tempest/api/compute/servers/test_attach_interfaces.py", line 81, in 
_test_show_interface
2014-12-29 23:02:14.023 | self.assertEqual(iface, _iface)
2014-12-29 23:02:14.023 |   File 
"/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 348, in 
assertEqual
2014-12-29 23:02:14.023 | self.assertThat(observed, matcher, message)
2014-12-29 23:02:14.023 |   File 
"/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 433, in 
assertThat
2014-12-29 23:02:14.023 | raise mismatch_error
2014-12-29 23:02:14.023 | MismatchError: !=:
2014-12-29 23:02:14.023 | reference = {u'fixed_ips': [{u'ip_address': 
u'10.100.0.4',
2014-12-29 23:02:14.023 |  u'subnet_id': 
u'14eede71-3e7c-45d0-ba9a-1e862971e73a'}],
2014-12-29 23:02:14.023 |  u'mac_addr': u'fa:16:3e:b4:a6:5a',
2014-12-29 23:02:14.023 |  u'net_id': 
u'5820024b-ce9d-4175-a922-2fc197f425e9',
2014-12-29 23:02:14.024 |  u'port_id': 
u'49bc5869-1716-49a6-812a-90a603e4f8f3',
2014-12-29 23:02:14.024 |  u'port_state': u'ACTIVE'}
2014-12-29 23:02:14.024 | actual= {u'fixed_ips': [{u'ip_address': 
u'10.100.0.4',
2014-12-29 23:02:14.024 |  u'subnet_id': 
u'14eede71-3e7c-45d0-ba9a-1e862971e73a'}],
2014-12-29 23:02:14.024 |  u'mac_addr': u'fa:16:3e:b4:a6:5a',
2014-12-29 23:02:14.024 |  u'net_id': 
u'5820024b-ce9d-4175-a922-2fc197f425e9',
2014-12-29 23:02:14.024 |  u'port_id': 
u'49bc5869-1716-49a6-812a-90a603e4f8f3',
2014-12-29 23:02:14.024 |  u'port_state': u'BUILD'}
2014-12-29 23:02:14.024
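
This looks like the test reading the attachment back while neutron is still
wiring the port. A minimal sketch of the kind of wait that would avoid the
race, assuming an interfaces client with a show_interface call returning the
body shape shown above (the names are illustrative, not the exact tempest
code):

import time

def wait_for_port_active(client, server_id, port_id, timeout=60, interval=2):
    # Poll the attachment until it leaves BUILD; only then is the
    # reference/actual comparison in the test meaningful.
    deadline = time.time() + timeout
    while time.time() < deadline:
        _, body = client.show_interface(server_id, port_id)
        if body['port_state'] == 'ACTIVE':
            return body
        time.sleep(interval)
    raise Exception('port %s did not reach ACTIVE within %ss'
                    % (port_id, timeout))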

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1406438

Title:
  Failures in test_create_list_show_delete_interfaces

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  logstash showed several dozen examples of this in the last week,
  searching for

  "u'port_state': u'BUILD'"

  
  
tempest.api.compute.servers.test_attach_interfaces.AttachInterfacesTestJSON.test_create_list_show_delete_interfaces[gate,network,smoke]
  2014-12-29 23:02:14.022 | 
---
  2014-12-29 23:02:14.022 | 
  2014-12-29 23:02:14.022 | Captured traceback:
  2014-12-29 23:02:14.022 | ~~~
  2014-12-29 23:02:14.022 | Traceback (most recent call last):
  2014-12-29 23:02:14.022 |   File "tempest/test.py", line 112, in wrapper
  2014-12-29 23:02:14.022 | return f(self, *func_args, **func_kwargs)
  2014-12-29 23:02:14.022 |   File 
"tempest/api/compute/servers/test_attach_interfaces.py", line 128, in 
test_create_list_show_delete_interfaces
  2014-12-29 23:02:14.023 | self._test_show_interface(server, ifs)
  2014-12-29 23:02:14.023 |   File 
"tempest/api/compute/servers/test_attach_interfaces.py", line 81, in 
_test_show_interface
  2014-12-29 23:02:14.023 | self.assertEqual(iface, _iface)
  2014-12-29 23:02:14.023 |   File 
"/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 348, in 
assertEqual
  2014-12-29 23:02:14.023 | self.assertThat(observed, matcher, message)
  2014-12-29 23:02:14.023 |   File 
"/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 433, in 
assertThat
  2014-12-29 23:02:14.023 | raise mismatch_error
  2014-12-29 23:02:14.023 | MismatchError: !=:
  2014-12-29 23:02:14.023 | reference = {u'fixed_ips': [{u'ip_address': 
u'10.100.0.4',
  2014-12-29 23:02:14.023 |  u'subnet_id': 
u'14eede71-3e7c-45d0-ba9a-1e862971e73a'}],
  2014-12-29 23:02:14.023 |  u'mac_addr': u'fa:16:3e:b4:a6:5a',
  2014-12-29 23:02:14.023 |  u'net_id': 
u'5820024b-ce9d-4175-a922-2fc197f425e9',
  2014-12-29

[Yahoo-eng-team] [Bug 1410310] [NEW] 'headroom' KeyError while creating instance

2015-01-13 Thread David Kranz
Public bug reported:

A gate job failed with this error in the n-cpu log. The job is
http://logs.openstack.org/13/145713/7/gate/gate-tempest-dsvm-
full/71a2280/

From http://logs.openstack.org/13/145713/7/gate/gate-tempest-dsvm-
full/71a2280/logs/screen-n-cpu.txt.gz

2015-01-13 12:24:35.148 29194 ERROR nova.compute.manager [-] Instance failed 
network setup after 1 attempt(s)
2015-01-13 12:24:35.148 29194 TRACE nova.compute.manager Traceback (most recent 
call last):
2015-01-13 12:24:35.148 29194 TRACE nova.compute.manager   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 1676, in 
_allocate_network_async
2015-01-13 12:24:35.148 29194 TRACE nova.compute.manager 
dhcp_options=dhcp_options)
2015-01-13 12:24:35.148 29194 TRACE nova.compute.manager   File 
"/opt/stack/new/nova/nova/network/api.py", line 47, in wrapped
2015-01-13 12:24:35.148 29194 TRACE nova.compute.manager return func(self, 
context, *args, **kwargs)
2015-01-13 12:24:35.148 29194 TRACE nova.compute.manager   File 
"/opt/stack/new/nova/nova/network/base_api.py", line 64, in wrapper
2015-01-13 12:24:35.148 29194 TRACE nova.compute.manager res = f(self, 
context, *args, **kwargs)
2015-01-13 12:24:35.148 29194 TRACE nova.compute.manager   File 
"/opt/stack/new/nova/nova/network/api.py", line 276, in allocate_for_instance
2015-01-13 12:24:35.148 29194 TRACE nova.compute.manager nw_info = 
self.network_rpcapi.allocate_for_instance(context, **args)
2015-01-13 12:24:35.148 29194 TRACE nova.compute.manager   File 
"/opt/stack/new/nova/nova/network/rpcapi.py", line 189, in allocate_for_instance
2015-01-13 12:24:35.148 29194 TRACE nova.compute.manager 
macs=jsonutils.to_primitive(macs))
2015-01-13 12:24:35.148 29194 TRACE nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py", line 
152, in call
2015-01-13 12:24:35.148 29194 TRACE nova.compute.manager retry=self.retry)
2015-01-13 12:24:35.148 29194 TRACE nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/transport.py", line 90, 
in _send
2015-01-13 12:24:35.148 29194 TRACE nova.compute.manager timeout=timeout, 
retry=retry)
2015-01-13 12:24:35.148 29194 TRACE nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", 
line 436, in send
2015-01-13 12:24:35.148 29194 TRACE nova.compute.manager retry=retry)
2015-01-13 12:24:35.148 29194 TRACE nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", 
line 427, in _send
2015-01-13 12:24:35.148 29194 TRACE nova.compute.manager raise result
2015-01-13 12:24:35.148 29194 TRACE nova.compute.manager KeyError: 
u'\'headroom\'\nTraceback (most recent call last):\n\n  File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
137, in _dispatch_and_reply\nincoming.message))\n\n  File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
180, in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, 
args)\n\n  File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
126, in _do_dispatch\nresult = getattr(endpoint, method)(ctxt, 
**new_args)\n\n  File "/opt/stack/new/nova/nova/network/floating_ips.py", line 
113, in allocate_for_instance\n**kwargs)\n\n  File 
"/opt/stack/new/nova/nova/network/manager.py", line 501, in 
allocate_for_instance\nrequested_networks=requested_networks)\n\n  File 
"/opt/stack/new/nova/nova/network/manager.py", line 193, in 
_allocate_fixed_ips\nvpn=vpn, address=address)\n\n  File 
"/opt/stack/new/nova/n
 ova/network/manager.py", line 857, in allocate_fixed_ip\nheadroom = 
exc.kwargs[\'headroom\']\n\nKeyError: \'headroom\'\n'
2015-01-13 12:24:35.148 29194 TRACE nova.compute.manager 
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/eventlet/queue.py", line 117, in 
switch
self.greenlet.switch(value)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 
214, in main
result = function(*args, **kwargs)
  File "/opt/stack/new/nova/nova/compute/manager.py", line 1676, in 
_allocate_network_async
dhcp_options=dhcp_options)
  File "/opt/stack/new/nova/nova/network/api.py", line 47, in wrapped
return func(self, context, *args, **kwargs)
  File "/opt/stack/new/nova/nova/network/base_api.py", line 64, in wrapper
res = f(self, context, *args, **kwargs)
  File "/opt/stack/new/nova/nova/network/api.py", line 276, in 
allocate_for_instance
nw_info = self.network_rpcapi.allocate_for_instance(context, **args)
  File "/opt/stack/new/nova/nova/network/rpcapi.py", line 189, in 
allocate_for_instance
macs=jsonutils.to_primitive(macs))
  File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py", 
line 152, in call
retry=self.retry)
  File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/transport.py", 
line 90, in 

[Yahoo-eng-team] [Bug 1411482] Re: save() takes exactly 1 argument (2 given)

2015-01-16 Thread David Kranz
This error is coming from the nova compute log here.

http://logs.openstack.org/10/115110/20/check/check-tempest-dsvm-neutron-
pg-full-2/3c885b8/logs/screen-n-cpu.txt.gz

2015-01-16 01:53:19.798 ERROR oslo.messaging.rpc.dispatcher 
[req-6bd7d570-7e04-4118-9547-6f8b6fdd67fa TestMinimumBasicScenario-1445783167 
TestMinimumBasicScenario-1666592455] Exception during message handling: save() 
takes exactly 1 argument (2 given)
2015-01-16 01:53:19.798 30555 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
2015-01-16 01:53:19.798 30555 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
137, in _dispatch_and_reply
2015-01-16 01:53:19.798 30555 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2015-01-16 01:53:19.798 30555 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
180, in _dispatch
2015-01-16 01:53:19.798 30555 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2015-01-16 01:53:19.798 30555 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
126, in _do_dispatch
2015-01-16 01:53:19.798 30555 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2015-01-16 01:53:19.798 30555 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/exception.py", line 88, in wrapped
2015-01-16 01:53:19.798 30555 TRACE oslo.messaging.rpc.dispatcher payload)
2015-01-16 01:53:19.798 30555 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 82, in 
__exit__
2015-01-16 01:53:19.798 30555 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-01-16 01:53:19.798 30555 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/exception.py", line 71, in wrapped
2015-01-16 01:53:19.798 30555 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
2015-01-16 01:53:19.798 30555 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 296, in decorated_function
2015-01-16 01:53:19.798 30555 TRACE oslo.messaging.rpc.dispatcher pass
2015-01-16 01:53:19.798 30555 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 82, in 
__exit__
2015-01-16 01:53:19.798 30555 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-01-16 01:53:19.798 30555 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 281, in decorated_function
2015-01-16 01:53:19.798 30555 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2015-01-16 01:53:19.798 30555 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 346, in decorated_function
2015-01-16 01:53:19.798 30555 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2015-01-16 01:53:19.798 30555 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 324, in decorated_function
2015-01-16 01:53:19.798 30555 TRACE oslo.messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
2015-01-16 01:53:19.798 30555 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 82, in 
__exit__
2015-01-16 01:53:19.798 30555 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-01-16 01:53:19.798 30555 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 312, in decorated_function
2015-01-16 01:53:19.798 30555 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2015-01-16 01:53:19.798 30555 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2999, in reboot_instance
2015-01-16 01:53:19.798 30555 TRACE oslo.messaging.rpc.dispatcher 
self._set_instance_obj_error_state(context, instance)
2015-01-16 01:53:19.798 30555 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 82, in 
__exit__
2015-01-16 01:53:19.798 30555 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-01-16 01:53:19.798 30555 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2980, in reboot_instance
2015-01-16 01:53:19.798 30555 TRACE oslo.messaging.rpc.dispatcher 
bad_volumes_callback=bad_volumes_callback)
2015-01-16 01:53:19.798 30555 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2028, in reboot
2015-01-16 01:53:19.798 30555 TRACE oslo

[Yahoo-eng-team] [Bug 1441745] [NEW] Lots of gate failures with "not enough hosts available"

2015-04-08 Thread David Kranz
Public bug reported:

Thousands of matches in the last two days:

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiTm8gdmFsaWQgaG9zdCB3YXMgZm91bmQuIFRoZXJlIGFyZSBub3QgZW5vdWdoIGhvc3RzIGF2YWlsYWJsZVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDI4NTA4MTY3MTcwfQ==

The following is from this log file:

http://logs.openstack.org/42/163842/8/check/check-tempest-dsvm-neutron-
full/1f66320/logs/screen-n-cond.txt.gz


For the few I looked at, there is an error in the n-cond log:

2015-04-08 07:20:15.207 WARNING nova.scheduler.utils 
[req-a21c9875-efe1-407d-b08b-2b05b35b4642 AggregatesAdminTestJSON-325246720 
AggregatesAdminTestJSON-279542170] Failed to compute_task_build_instances: No 
valid host was found. There are not enough hosts available.
Traceback (most recent call last):

  File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", 
line 142, in inner
return func(*args, **kwargs)

  File "/opt/stack/new/nova/nova/scheduler/manager.py", line 86, in 
select_destinations
filter_properties)

  File "/opt/stack/new/nova/nova/scheduler/filter_scheduler.py", line 80, in 
select_destinations
raise exception.NoValidHost(reason=reason)

NoValidHost: No valid host was found. There are not enough hosts
available.

--

That makes it sound like the problem is that the deployed devstack does
not have enough capacity. But right before that I see:

2015-04-08 07:20:15.014 ERROR nova.conductor.manager 
[req-a21c9875-efe1-407d-b08b-2b05b35b4642 AggregatesAdminTestJSON-325246720 
AggregatesAdminTestJSON-279542170] Instance update attempted for 
'availability_zone' on 745aafcf-686d-4cf0-91c7-701e282f6d06
2015-04-08 07:20:15.149 ERROR nova.scheduler.utils 
[req-a21c9875-efe1-407d-b08b-2b05b35b4642 AggregatesAdminTestJSON-325246720 
AggregatesAdminTestJSON-279542170] [instance: 
745aafcf-686d-4cf0-91c7-701e282f6d06] Error from last host: 
devstack-trusty-rax-dfw-1769605.slave.openstack.org (node 
devstack-trusty-rax-dfw-1769605.slave.openstack.org): [u'Traceback (most recent 
call last):\n', u'  File "/opt/stack/new/nova/nova/compute/manager.py", line 
2193, in _do_build_and_run_instance\nfilter_properties)\n', u'  File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2336, in 
_build_and_run_instance\ninstance_uuid=instance.uuid, 
reason=six.text_type(e))\n', u'RescheduledException: Build of instance 
745aafcf-686d-4cf0-91c7-701e282f6d06 was re-scheduled: u\'u"unexpected update 
keyword \\\'availability_zone\\\'"\\nTraceback (most recent call last):\\n\\n  
File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", 
line 142, in inner\\nreturn func(*args, **kwargs)\
 \n\\n  File "/opt/stack/new/nova/nova/conductor/manager.py", line 125, in 
instance_update\\nraise KeyError("unexpected update keyword \\\'%s\\\'" % 
key)\\n\\nKeyError: u"unexpected update keyword 
\\\'availability_zone\\\'"\\n\'\n']
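
The inner failure is nova-conductor's instance_update whitelisting the keys
it will accept, so the reschedule itself blows up once 'availability_zone'
ends up in the update dict. Schematically (reconstructed from the traceback
above, not the exact source; the whitelist contents are illustrative):

ALLOWED_UPDATES = set(['vm_state', 'task_state', 'expected_task_state'])

def instance_update(context, instance_uuid, updates):
    for key in updates:
        if key not in ALLOWED_UPDATES:
            # This is the raise at conductor/manager.py line 125 in the
            # traceback: anything outside the whitelist, here
            # 'availability_zone', turns the build into a
            # RescheduledException and eventually into NoValidHost.
            raise KeyError("unexpected update keyword '%s'" % key)
    # ... apply the update ...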

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1441745

Title:
  Lots of gate failures with "not enough hosts available"

Status in OpenStack Compute (Nova):
  New

Bug description:
  Thousands of matches in the last two days:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiTm8gdmFsaWQgaG9zdCB3YXMgZm91bmQuIFRoZXJlIGFyZSBub3QgZW5vdWdoIGhvc3RzIGF2YWlsYWJsZVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDI4NTA4MTY3MTcwfQ==

  The following is from this log file:

  http://logs.openstack.org/42/163842/8/check/check-tempest-dsvm-
  neutron-full/1f66320/logs/screen-n-cond.txt.gz

  
  For the few I looked at, there is an error in the n-cond log:

  2015-04-08 07:20:15.207 WARNING nova.scheduler.utils 
[req-a21c9875-efe1-407d-b08b-2b05b35b4642 AggregatesAdminTestJSON-325246720 
AggregatesAdminTestJSON-279542170] Failed to compute_task_build_instances: No 
valid host was found. There are not enough hosts available.
  Traceback (most recent call last):

File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", 
line 142, in inner
  return func(*args, **kwargs)

File "/opt/stack/new/nova/nova/scheduler/manager.py", line 86, in 
select_destinations
  filter_properties)

File "/opt/stack/new/nova/nova/scheduler/filter_scheduler.py", line 80, in 
select_destinations
  raise exception.NoValidHost(reason=reason)

  NoValidHost: No valid host was found. There are not enough hosts
  available.

  --

  That makes it sound like the problem is that the deployed devstack
  does not have enough capacity. But right before that I see:

  2015-04-08 07:20:15.014 ERROR nova.conductor.manager 
[req-a21c9875-efe

[Yahoo-eng-team] [Bug 1441745] Re: Lots of gate failures with "not enough hosts available"

2015-04-08 Thread David Kranz
While it is true that this particular sub-case of the bug title has only
one patch responsible, logstash shows many other patches that could not
possibly cause this problem but which still experience it. So this seems
to be a problem that can randomly impact any patch. Though it may be
difficult to track down, it seems to me there is a bug here. The other
possibility is that tempest is trying to create too many vms; I'm not
sure how many tiny vms our devstack is expected to support.

** Also affects: tempest
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1441745

Title:
  Lots of gate failures with "not enough hosts available"

Status in OpenStack Compute (Nova):
  Invalid
Status in Tempest:
  New

Bug description:
  Thousands of matches in the last two days:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiTm8gdmFsaWQgaG9zdCB3YXMgZm91bmQuIFRoZXJlIGFyZSBub3QgZW5vdWdoIGhvc3RzIGF2YWlsYWJsZVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDI4NTA4MTY3MTcwfQ==

  The following is from this log file:

  http://logs.openstack.org/42/163842/8/check/check-tempest-dsvm-
  neutron-full/1f66320/logs/screen-n-cond.txt.gz

  
  For the few I looked at, there is an error in the n-cond log:

  2015-04-08 07:20:15.207 WARNING nova.scheduler.utils 
[req-a21c9875-efe1-407d-b08b-2b05b35b4642 AggregatesAdminTestJSON-325246720 
AggregatesAdminTestJSON-279542170] Failed to compute_task_build_instances: No 
valid host was found. There are not enough hosts available.
  Traceback (most recent call last):

File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", 
line 142, in inner
  return func(*args, **kwargs)

File "/opt/stack/new/nova/nova/scheduler/manager.py", line 86, in 
select_destinations
  filter_properties)

File "/opt/stack/new/nova/nova/scheduler/filter_scheduler.py", line 80, in 
select_destinations
  raise exception.NoValidHost(reason=reason)

  NoValidHost: No valid host was found. There are not enough hosts
  available.

  --

  That makes it sound like the problem is that the deployed devstack
  does not have enough capacity. But right before that I see:

  2015-04-08 07:20:15.014 ERROR nova.conductor.manager 
[req-a21c9875-efe1-407d-b08b-2b05b35b4642 AggregatesAdminTestJSON-325246720 
AggregatesAdminTestJSON-279542170] Instance update attempted for 
'availability_zone' on 745aafcf-686d-4cf0-91c7-701e282f6d06
  2015-04-08 07:20:15.149 ERROR nova.scheduler.utils 
[req-a21c9875-efe1-407d-b08b-2b05b35b4642 AggregatesAdminTestJSON-325246720 
AggregatesAdminTestJSON-279542170] [instance: 
745aafcf-686d-4cf0-91c7-701e282f6d06] Error from last host: 
devstack-trusty-rax-dfw-1769605.slave.openstack.org (node 
devstack-trusty-rax-dfw-1769605.slave.openstack.org): [u'Traceback (most recent 
call last):\n', u'  File "/opt/stack/new/nova/nova/compute/manager.py", line 
2193, in _do_build_and_run_instance\nfilter_properties)\n', u'  File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2336, in 
_build_and_run_instance\ninstance_uuid=instance.uuid, 
reason=six.text_type(e))\n', u'RescheduledException: Build of instance 
745aafcf-686d-4cf0-91c7-701e282f6d06 was re-scheduled: u\'u"unexpected update 
keyword \\\'availability_zone\\\'"\\nTraceback (most recent call last):\\n\\n  
File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", 
line 142, in inner\\nreturn func(*args, **kwargs
 )\\n\\n  File "/opt/stack/new/nova/nova/conductor/manager.py", line 125, in 
instance_update\\nraise KeyError("unexpected update keyword \\\'%s\\\'" % 
key)\\n\\nKeyError: u"unexpected update keyword 
\\\'availability_zone\\\'"\\n\'\n']

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1441745/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441745] Re: Lots of gate failures with "not enough hosts available"

2015-04-08 Thread David Kranz
** Changed in: tempest
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1441745

Title:
  Lots of gate failures with "not enough hosts available"

Status in OpenStack Compute (Nova):
  Invalid
Status in Tempest:
  Invalid

Bug description:
  Thousands of matches in the last two days:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiTm8gdmFsaWQgaG9zdCB3YXMgZm91bmQuIFRoZXJlIGFyZSBub3QgZW5vdWdoIGhvc3RzIGF2YWlsYWJsZVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDI4NTA4MTY3MTcwfQ==

  The following is from this log file:

  http://logs.openstack.org/42/163842/8/check/check-tempest-dsvm-
  neutron-full/1f66320/logs/screen-n-cond.txt.gz

  
  For the few I looked at, there is an error in the n-cond log:

  2015-04-08 07:20:15.207 WARNING nova.scheduler.utils 
[req-a21c9875-efe1-407d-b08b-2b05b35b4642 AggregatesAdminTestJSON-325246720 
AggregatesAdminTestJSON-279542170] Failed to compute_task_build_instances: No 
valid host was found. There are not enough hosts available.
  Traceback (most recent call last):

File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", 
line 142, in inner
  return func(*args, **kwargs)

File "/opt/stack/new/nova/nova/scheduler/manager.py", line 86, in 
select_destinations
  filter_properties)

File "/opt/stack/new/nova/nova/scheduler/filter_scheduler.py", line 80, in 
select_destinations
  raise exception.NoValidHost(reason=reason)

  NoValidHost: No valid host was found. There are not enough hosts
  available.

  --

  That makes it sound like the problem is that the deployed devstack
  does not have enough capacity. But right before that I see:

  2015-04-08 07:20:15.014 ERROR nova.conductor.manager 
[req-a21c9875-efe1-407d-b08b-2b05b35b4642 AggregatesAdminTestJSON-325246720 
AggregatesAdminTestJSON-279542170] Instance update attempted for 
'availability_zone' on 745aafcf-686d-4cf0-91c7-701e282f6d06
  2015-04-08 07:20:15.149 ERROR nova.scheduler.utils 
[req-a21c9875-efe1-407d-b08b-2b05b35b4642 AggregatesAdminTestJSON-325246720 
AggregatesAdminTestJSON-279542170] [instance: 
745aafcf-686d-4cf0-91c7-701e282f6d06] Error from last host: 
devstack-trusty-rax-dfw-1769605.slave.openstack.org (node 
devstack-trusty-rax-dfw-1769605.slave.openstack.org): [u'Traceback (most recent 
call last):\n', u'  File "/opt/stack/new/nova/nova/compute/manager.py", line 
2193, in _do_build_and_run_instance\nfilter_properties)\n', u'  File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2336, in 
_build_and_run_instance\ninstance_uuid=instance.uuid, 
reason=six.text_type(e))\n', u'RescheduledException: Build of instance 
745aafcf-686d-4cf0-91c7-701e282f6d06 was re-scheduled: u\'u"unexpected update 
keyword \\\'availability_zone\\\'"\\nTraceback (most recent call last):\\n\\n  
File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", 
line 142, in inner\\nreturn func(*args, **kwargs
 )\\n\\n  File "/opt/stack/new/nova/nova/conductor/manager.py", line 125, in 
instance_update\\nraise KeyError("unexpected update keyword \\\'%s\\\'" % 
key)\\n\\nKeyError: u"unexpected update keyword 
\\\'availability_zone\\\'"\\n\'\n']

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1441745/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450859] [NEW] list servers always returns all with 'ip6' filter

2015-05-01 Thread David Kranz
Public bug reported:

A tempest test was failing because it was trying to filter servers by an
IP address from an IPv6 subnet without using the 'ip6' query param. But
the fix to use 'ip6' also failed, because all servers are returned
instead of just the one with that IPv6 address.

This is most easily seen by just doing:

nova list --ip6 xxx

which returns all servers vs

nova list --ip xxx
 
which returns none.


For reference, the actual failing call from 
http://logs.openstack.org/98/179398/1/experimental/check-tempest-dsvm-neutron-full-non-admin/aa764bb/console.html:


2015-05-01 15:53:56.426 | 2015-05-01 15:22:28,839 30116 INFO 
[tempest_lib.common.rest_client] Request 
(ListServerFiltersTestJSON:test_list_servers_filtered_by_ip): 200 GET 
http://127.0.0.1:8774/v2/40784510a9f046a0a0d70f339f2d71d8/servers?ip6=fd81%3Aaad7%3Afb2%3A0%3Af816%3A3eff%3Afef3%3A8aaa
 0.060s
2015-05-01 15:53:56.426 | 2015-05-01 15:22:28,840 30116 DEBUG
[tempest_lib.common.rest_client] Request - Headers: {'X-Auth-Token': 
'', 'Accept': 'application/json', 'Content-Type': 'application/json'}
2015-05-01 15:53:56.426 | Body: None
2015-05-01 15:53:56.426 | Response - Headers: {'content-type': 
'application/json', 'x-compute-request-id': 
'req-ac54be26-7689-4092-b42a-f3161c80295b', 'date': 'Fri, 01 May 2015 15:22:28 
GMT', 'content-length': '1147', 'status': '200', 'connection': 'close', 
'content-location': 
'http://127.0.0.1:8774/v2/40784510a9f046a0a0d70f339f2d71d8/servers?ip6=fd81%3Aaad7%3Afb2%3A0%3Af816%3A3eff%3Afef3%3A8aaa'}
2015-05-01 15:53:56.426 | Body: {"servers": [{"id": 
"0ab24a98-9725-47cd-86c9-de907da24329", "links": [{"href": 
"http://127.0.0.1:8774/v2/40784510a9f046a0a0d70f339f2d71d8/servers/0ab24a98-9725-47cd-86c9-de907da24329";,
 "rel": "self"}, {"href": 
"http://127.0.0.1:8774/40784510a9f046a0a0d70f339f2d71d8/servers/0ab24a98-9725-47cd-86c9-de907da24329";,
 "rel": "bookmark"}], "name": "ListServerFiltersTestJSON-instance-319015482"}, 
{"id": "3a8bef8f-4f20-4b9f-89b0-905a8f4ba726", "links": [{"href": 
"http://127.0.0.1:8774/v2/40784510a9f046a0a0d70f339f2d71d8/servers/3a8bef8f-4f20-4b9f-89b0-905a8f4ba726";,
 "rel": "self"}, {"href": 
"http://127.0.0.1:8774/40784510a9f046a0a0d70f339f2d71d8/servers/3a8bef8f-4f20-4b9f-89b0-905a8f4ba726";,
 "rel": "bookmark"}], "name": "ListServerFiltersTestJSON-instance-313871351"}, 
{"id": "ce24a53a-f412-4fb5-8da5-c7d0cbdc5fba", "links": [{"href": 
"http://127.0.0.1:8774/v2/40784510a9f046a0a0d70f339f2d71d8/servers/ce24a53a-f412-4fb5-8da5-c7d0cbdc5fba";,
 "rel": "self"
 }, {"href": 
"http://127.0.0.1:8774/40784510a9f046a0a0d70f339f2d71d8/servers/ce24a53a-f412-4fb5-8da5-c7d0cbdc5fba";,
 "rel": "bookmark"}], "name": "ListServerFiltersTestJSON-instance-950662797"}]}

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1450859

Title:
  list servers always returns all with 'ip6' filter

Status in OpenStack Compute (Nova):
  New

Bug description:
  A tempest test was failing because it was trying to filter servers by
  an IP address from an IPv6 subnet without using the 'ip6' query param.
  But the fix to use 'ip6' also failed, because all servers are returned
  instead of just the one with that IPv6 address.

  This is most easily seen by just doing:

  nova list --ip6 xxx

  which returns all servers vs

  nova list --ip xxx
   
  which returns none.

  
  For reference, the actual failing call from 
http://logs.openstack.org/98/179398/1/experimental/check-tempest-dsvm-neutron-full-non-admin/aa764bb/console.html:

  
  2015-05-01 15:53:56.426 | 2015-05-01 15:22:28,839 30116 INFO 
[tempest_lib.common.rest_client] Request 
(ListServerFiltersTestJSON:test_list_servers_filtered_by_ip): 200 GET 
http://127.0.0.1:8774/v2/40784510a9f046a0a0d70f339f2d71d8/servers?ip6=fd81%3Aaad7%3Afb2%3A0%3Af816%3A3eff%3Afef3%3A8aaa
 0.060s
  2015-05-01 15:53:56.426 | 2015-05-01 15:22:28,840 30116 DEBUG
[tempest_lib.common.rest_client] Request - Headers: {'X-Auth-Token': 
'', 'Accept': 'application/json', 'Content-Type': 'application/json'}
  2015-05-01 15:53:56.426 | Body: None
  2015-05-01 15:53:56.426 | Response - Headers: {'content-type': 
'application/json', 'x-compute-request-id': 
'req-ac54be26-7689-4092-b42a-f3161c80295b', 'date': 'Fri, 01 May 2015 15:22:28 
GMT', 'content-length': '1147', 'status': '200', 'connection': 'close', 
'content-location': 
'http://127.0.0.1:8774/v2/40784510a9f046a0a0d70f339f2d71d8/servers?ip6=fd81%3Aaad7%3Afb2%3A0%3Af816%3A3eff%3Afef3%3A8aaa'}
  2015-05-01 15:53:56.426 | Body: {"servers": [{"id": 
"0ab24a98-9725-47cd-86c9-de907da24329", "links": [{"href": 
"http://127.0.0.1:8774/v2/40784510a9f046a0a0d70f339f2d71d8/servers/0ab24a98-9725-47cd-86c9-de907da24329";,
 "rel": "self"}, {"href": 
"http://127.0.0.1:8774/40784510a9

[Yahoo-eng-team] [Bug 1401900] [NEW] Gate jobs failing with "Multiple possible networks found"

2014-12-12 Thread David Kranz
Public bug reported:

These tests are failing many times starting around 10:00 December 11

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwidGVtcGVzdC5zY2VuYXJpby50ZXN0X25ldHdvcmtfdjYuVGVzdEdldHRpbmdBZGRyZXNzLnRlc3RfZGhjcDZfc3RhdGVsZXNzX2Zyb21fb3NcIiBBTkQgbWVzc2FnZTpGQUlMRUQiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MTgzOTI5MTYwMDN9

2014-12-12 07:08:27.244 | {0} 
tempest.scenario.test_network_v6.TestGettingAddress.test_dhcp6_stateless_from_os
 [3.761658s] ... FAILED
2014-12-12 07:08:30.255 | {0} 
tempest.scenario.test_network_v6.TestGettingAddress.test_slaac_from_os 
[3.008262s] ... FAILED

2014-12-12 07:16:56.626 | Traceback (most recent call last):
2014-12-12 07:16:56.626 |   File "tempest/test.py", line 112, in wrapper
2014-12-12 07:16:56.627 | return f(self, *func_args, **func_kwargs)
2014-12-12 07:16:56.627 |   File "tempest/scenario/test_network_v6.py", 
line 142, in test_slaac_from_os
2014-12-12 07:16:56.627 | self._prepare_and_test(address6_mode='slaac')
2014-12-12 07:16:56.628 |   File "tempest/scenario/test_network_v6.py", 
line 113, in _prepare_and_test
2014-12-12 07:16:56.628 | ssh1, srv1 = self.prepare_server()
2014-12-12 07:16:56.628 |   File "tempest/scenario/test_network_v6.py", 
line 102, in prepare_server
2014-12-12 07:16:56.628 | srv = 
self.create_server(create_kwargs=self.srv_kwargs)
2014-12-12 07:16:56.629 |   File "tempest/scenario/manager.py", line 198, 
in create_server
2014-12-12 07:16:56.629 | **create_kwargs)
2014-12-12 07:16:56.629 |   File 
"tempest/services/compute/json/servers_client.py", line 92, in create_server
2014-12-12 07:16:56.630 | resp, body = self.post('servers', post_body)
2014-12-12 07:16:56.630 |   File "tempest/common/rest_client.py", line 253, 
in post
2014-12-12 07:16:56.630 | return self.request('POST', url, 
extra_headers, headers, body)
2014-12-12 07:16:56.630 |   File "tempest/common/rest_client.py", line 467, 
in request
2014-12-12 07:16:56.631 | resp, resp_body)
2014-12-12 07:16:56.631 |   File "tempest/common/rest_client.py", line 516, 
in _error_checker
2014-12-12 07:16:56.631 | raise exceptions.BadRequest(resp_body)
2014-12-12 07:16:56.632 | BadRequest: Bad request
2014-12-12 07:16:56.632 | Details: {u'code': 400, u'message': u'Multiple 
possible networks found, use a Network ID to be more specific.'}

An exception can be seen while calling neutron in n-api.log. The test
itself has not changed.
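
The 400 is nova refusing to guess which network to use when the tenant owns
more than one, so the usual test-side fix is to name the network explicitly
in the boot request. A minimal sketch, assuming the scenario manager's
create_server passes nova-style create kwargs through (names are
illustrative):

def prepare_server_on(self, network_id):
    # Passing an explicit network avoids the "Multiple possible networks
    # found" 400 once the tenant has both the IPv4 and IPv6 test networks.
    create_kwargs = dict(self.srv_kwargs)
    create_kwargs['networks'] = [{'uuid': network_id}]
    return self.create_server(create_kwargs=create_kwargs)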

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1401900

Title:
  Gate jobs failing with "Multiple possible networks found"

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  These tests are failing many times starting around 10:00 December 11

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwidGVtcGVzdC5zY2VuYXJpby50ZXN0X25ldHdvcmtfdjYuVGVzdEdldHRpbmdBZGRyZXNzLnRlc3RfZGhjcDZfc3RhdGVsZXNzX2Zyb21fb3NcIiBBTkQgbWVzc2FnZTpGQUlMRUQiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MTgzOTI5MTYwMDN9

  2014-12-12 07:08:27.244 | {0} 
tempest.scenario.test_network_v6.TestGettingAddress.test_dhcp6_stateless_from_os
 [3.761658s] ... FAILED
  2014-12-12 07:08:30.255 | {0} 
tempest.scenario.test_network_v6.TestGettingAddress.test_slaac_from_os 
[3.008262s] ... FAILED

  2014-12-12 07:16:56.626 | Traceback (most recent call last):
  2014-12-12 07:16:56.626 |   File "tempest/test.py", line 112, in wrapper
  2014-12-12 07:16:56.627 | return f(self, *func_args, **func_kwargs)
  2014-12-12 07:16:56.627 |   File "tempest/scenario/test_network_v6.py", 
line 142, in test_slaac_from_os
  2014-12-12 07:16:56.627 | 
self._prepare_and_test(address6_mode='slaac')
  2014-12-12 07:16:56.628 |   File "tempest/scenario/test_network_v6.py", 
line 113, in _prepare_and_test
  2014-12-12 07:16:56.628 | ssh1, srv1 = self.prepare_server()
  2014-12-12 07:16:56.628 |   File "tempest/scenario/test_network_v6.py", 
line 102, in prepare_server
  2014-12-12 07:16:56.628 | srv = 
self.create_server(create_kwargs=self.srv_kwargs)
  2014-12-12 07:16:56.629 |   File "tempest/scenario/manager.py", line 198, 
in create_server
  2014-12-12 07:16:56.629 | **create_kwargs)
  2014-12-12 07:16:56.629 |   File 
"tempest/services/compute/json/servers_client.py", line 92, in create_server
  2014-12-12 07:16:56.630 | resp, body = self.post('servers', post_body)
  2014-12-12 07:16:56.630 |   File "tempest/common/rest_client.py", line 
253, in post
  2014-12-12 07:16:56.630 | return self.request('POST', url, 

[Yahoo-eng-team] [Bug 1401900] Re: Gate jobs failing with "Multiple possible networks found"

2014-12-12 Thread David Kranz
This was a tempest bug.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1401900

Title:
  Gate jobs failing with "Multiple possible networks found"

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  These tests are failing many times starting around 10:00 December 11

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwidGVtcGVzdC5zY2VuYXJpby50ZXN0X25ldHdvcmtfdjYuVGVzdEdldHRpbmdBZGRyZXNzLnRlc3RfZGhjcDZfc3RhdGVsZXNzX2Zyb21fb3NcIiBBTkQgbWVzc2FnZTpGQUlMRUQiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MTgzOTI5MTYwMDN9

  2014-12-12 07:08:27.244 | {0} 
tempest.scenario.test_network_v6.TestGettingAddress.test_dhcp6_stateless_from_os
 [3.761658s] ... FAILED
  2014-12-12 07:08:30.255 | {0} 
tempest.scenario.test_network_v6.TestGettingAddress.test_slaac_from_os 
[3.008262s] ... FAILED

  2014-12-12 07:16:56.626 | Traceback (most recent call last):
  2014-12-12 07:16:56.626 |   File "tempest/test.py", line 112, in wrapper
  2014-12-12 07:16:56.627 | return f(self, *func_args, **func_kwargs)
  2014-12-12 07:16:56.627 |   File "tempest/scenario/test_network_v6.py", 
line 142, in test_slaac_from_os
  2014-12-12 07:16:56.627 | 
self._prepare_and_test(address6_mode='slaac')
  2014-12-12 07:16:56.628 |   File "tempest/scenario/test_network_v6.py", 
line 113, in _prepare_and_test
  2014-12-12 07:16:56.628 | ssh1, srv1 = self.prepare_server()
  2014-12-12 07:16:56.628 |   File "tempest/scenario/test_network_v6.py", 
line 102, in prepare_server
  2014-12-12 07:16:56.628 | srv = 
self.create_server(create_kwargs=self.srv_kwargs)
  2014-12-12 07:16:56.629 |   File "tempest/scenario/manager.py", line 198, 
in create_server
  2014-12-12 07:16:56.629 | **create_kwargs)
  2014-12-12 07:16:56.629 |   File 
"tempest/services/compute/json/servers_client.py", line 92, in create_server
  2014-12-12 07:16:56.630 | resp, body = self.post('servers', post_body)
  2014-12-12 07:16:56.630 |   File "tempest/common/rest_client.py", line 
253, in post
  2014-12-12 07:16:56.630 | return self.request('POST', url, 
extra_headers, headers, body)
  2014-12-12 07:16:56.630 |   File "tempest/common/rest_client.py", line 
467, in request
  2014-12-12 07:16:56.631 | resp, resp_body)
  2014-12-12 07:16:56.631 |   File "tempest/common/rest_client.py", line 
516, in _error_checker
  2014-12-12 07:16:56.631 | raise exceptions.BadRequest(resp_body)
  2014-12-12 07:16:56.632 | BadRequest: Bad request
  2014-12-12 07:16:56.632 | Details: {u'code': 400, u'message': u'Multiple 
possible networks found, use a Network ID to be more specific.'}

  An exception can be seen while calling neutron in n-api.log. The test
  itself has not changed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1401900/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260015] Re: PKI token contains the string "ERROR"

2014-02-04 Thread David Kranz
This was fixed a few months ago

https://github.com/openstack/tempest/commit/69bcb82a7fdeda2fdaf664a238a4ecbbf7cc58c9

** Changed in: tempest
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1260015

Title:
  PKI token contains the string "ERROR"

Status in OpenStack Identity (Keystone):
  Won't Fix
Status in Tempest:
  Fix Released

Bug description:
  The new gate check against unexpected "ERRORs" appearing in logs
  actually caught the string "ERROR" in a PKI token in the second
  jenkins run of the first patchset in:

https://review.openstack.org/#/c/61419/

  If you search the log output, you'll find the string "ERROR" buried in
  there, completely by coincidence. Here's the log output:

  2013-12-11 15:19:02.867 | Checking logs...
  2013-12-11 15:19:07.023 | Log File: g-api
  2013-12-11 15:19:07.919 | RESP BODY: {"access": {"token": {"issued_at": 
"2013-12-11T15:06:46.751200", "expires": "2013-12-12T15:06:46Z", "id": 
"MIISRwYJKoZIhvcNAQcCoIISODCCEjQCAQExCTAHBgUrDgMCGjCCEJ0GCSqGSIb3DQEHAaCCEI4EghCKeyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxMy0xMi0xMVQxNTowNjo0Ni43NTEyMDAiLCAiZXhwaXJlcyI6ICIyMDEzLTEyLTEyVDE1OjA2OjQ2WiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogbnVsbCwgImVuYWJsZWQiOiB0cnVlLCAiaWQiOiAiMzllMzkyOTlkNDA5NDdlNmFkYmNjYTgwMjZlYjg2ZDEiLCAibmFtZSI6ICJzZXJ2aWNlIn19LCAic2VydmljZUNhdGFsb2ciOiBbeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjg3NzQvdjIvMzllMzkyOTlkNDA5NDdlNmFkYmNjYTgwMjZlYjg2ZDEiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjg3NzQvdjIvMzllMzkyOTlkNDA5NDdlNmFkYmNjYTgwMjZlYjg2ZDEiLCAiaWQiOiAiNTRjMzEzOTdjNzgwNDFjOWIxMTJkNjM3ZmM3MWJmZTQiLCAicHVibGljVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODc3NC92Mi8zOWUzOTI5OWQ0MDk0N2U2YWRiY2NhODAyNmViODZkMSJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtd
 
LCAidHlwZSI6ICJjb21wdXRlIiwgIm5hbWUiOiAibm92YSJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODc3Ni92Mi8zOWUzOTI5OWQ0MDk0N2U2YWRiY2NhODAyNmViODZkMSIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODc3Ni92Mi8zOWUzOTI5OWQ0MDk0N2U2YWRiY2NhODAyNmViODZkMSIsICJpZCI6ICJiZmFhNWRhMTAwYmQ0MGMyODc2YTdmM2E0MDEyZDZmMSIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo4Nzc2L3YyLzM5ZTM5Mjk5ZDQwOTQ3ZTZhZGJjY2E4MDI2ZWI4NmQxIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogInZvbHVtZXYyIiwgIm5hbWUiOiAiY2luZGVyIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo4Nzc0L3YzIiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo4Nzc0L3YzIiwgImlkIjogIjM5YzhhYTI2NGE3MzQyNTE4NjkyYWU1OTM2NDczMGMyIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjg3NzQvdjMifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAiY29tcHV0ZXYzIiwgIm5hbWUiOiAibm92YSJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6MzMzMyIsICJyZWdpb24iOiAiUmVnaW9uT25lI
 
iwgImludGVybmFsVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6MzMzMyIsICJpZCI6ICIwNThhZTdjZGFmNzk0YmVhYmE3MjA0MDQxYjFjODFmMyIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzEyNy4wLjAuMTozMzMzIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogInMzIiwgIm5hbWUiOiAiczMifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjkyOTIiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjkyOTIiLCAiaWQiOiAiNzJiZjExMDE0ZDU3NGE4ZTg1NzEzODJhZTRiZDY5ZTIiLCAicHVibGljVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6OTI5MiJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpbWFnZSIsICJuYW1lIjogImdsYW5jZSJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODc3NyIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODc3NyIsICJpZCI6ICIwNjE2YTIxZmJhZDU0YzVhODEzMTQxMzQ0MDlhYjNkYSIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo4Nzc3In1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogIm1ldGVyaW5nIiwgIm5hbWUiOiAiY2VpbG9tZXRlciJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODAwMC
 
92MSIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODAwMC92MSIsICJpZCI6ICI1ZWRlZGYyZjkwNTI0Yzc0YjIwZTdmMTRlNjYyZjdkZiIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo4MDAwL3YxIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogImNsb3VkZm9ybWF0aW9uIiwgIm5hbWUiOiAiaGVhdC1jZm4ifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjg3NzYvdjEvMzllMzkyOTlkNDA5NDdlNmFkYmNjYTgwMjZlYjg2ZDEiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjg3NzYvdjEvMzllMzkyOTlkNDA5NDdlNmFkYmNjYTgwMjZlYjg2ZDEiLCAiaWQiOiAiMDU5YWUyNjEzNDE5NGZhMjhhMDg3ZjhiYTMwZjIzMmIiLCAicHVibGljVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODc3Ni92MS8zOWUzOTI5OWQ0MDk0N2U2YWRiY2NhODAyNmViODZkMSJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJ2b2x1bWUiLCAibmFtZSI6ICJjaW5kZXIifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjg3NzMvc2VydmljZXMvQWRtaW4iLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjg3NzMvc2VydmljZXMvQ2xvdWQiLCAiaWQiOiAiMGY
 
5ZmFhNmZhYmNhNDQzNDlkZDdhZWVkNmQyZjZlMjMiLCAicHVibGljVVJMI

[Yahoo-eng-team] [Bug 1283803] [NEW] keystone listens locally on admin port

2014-02-23 Thread David Kranz
Public bug reported:

I installed a vanilla devstack except for setting SERVICE_HOST in
localrc so I could run tempest from another machine. Tempest fails
trying to connect to adminURL and it seems to be because port 35357 is
only open locally. The conf file comment says:

# The base admin endpoint URL for keystone that are advertised  
# to clients (NOTE: this does NOT affect how keystone listens   
# for connections) (string value)   
#admin_endpoint=http://localhost:%(admin_port)s/

But this is what netstat shows; I would expect 35357 to be bound the same
way as the others. It is also possible this is a devstack issue, but I'm
not sure, so I'm starting here.

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address   Foreign Address State  
tcp0  0 *:iscsi-target  *:* LISTEN 
tcp0  0 *:40956 *:* LISTEN 
tcp0  0 localhost:35357 *:* LISTEN 
tcp0  0 *:6080  *:* LISTEN 
tcp0  0 *:6081  *:* LISTEN 
tcp0  0 *:  *:* LISTEN 
tcp0  0 *:8773  *:* LISTEN 
tcp0  0 *:8774  *:* LISTEN 
tcp0  0 *:8775  *:* LISTEN 
tcp0  0 *:9191  *:* LISTEN 
tcp0  0 *:8776  *:* LISTEN 
tcp0  0 *:5000  *:* LISTEN 
... elided ...

And catalog:+-+---+
|   Property  |   Value   |
+-+---+
|   adminURL  | http://dkranz-devstack:35357/v2.0 |
|  id |  39932d3dcf4340a98727294ed5ec71b8 |
| internalURL |  http://dkranz-devstack:5000/v2.0 |
|  publicURL  |  http://dkranz-devstack:5000/v2.0 |
|region   | RegionOne |
+-+---+

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1283803

Title:
  keystone listens locally on admin port

Status in OpenStack Identity (Keystone):
  New

Bug description:
  I installed a vanilla devstack except for setting SERVICE_HOST in
  localrc so I could run tempest from another machine. Tempest fails
  trying to connect to adminURL and it seems to be because port 35357 is
  only open locally. The conf file comment says:

  # The base admin endpoint URL for keystone that are advertised
  
  # to clients (NOTE: this does NOT affect how keystone listens 
  
  # for connections) (string value) 
  
  #admin_endpoint=http://localhost:%(admin_port)s/  
  

  But this is what netstat shows; I would expect 35357 to be bound the same
  way as the others. It is also possible this is a devstack issue, but I'm
  not sure, so I'm starting here.

  Active Internet connections (only servers)
  Proto Recv-Q Send-Q Local Address   Foreign Address State 
 
  tcp0  0 *:iscsi-target  *:* LISTEN
 
  tcp0  0 *:40956 *:* LISTEN
 
  tcp0  0 localhost:35357 *:* LISTEN
 
  tcp0  0 *:6080  *:* LISTEN
 
  tcp0  0 *:6081  *:* LISTEN
 
  tcp0  0 *:  *:* LISTEN
 
  tcp0  0 *:8773  *:* LISTEN
 
  tcp0  0 *:8774  *:* LISTEN
 
  tcp0  0 *:8775  *:* LISTEN
 
  tcp0  0 *:9191  *:* LISTEN
 
  tcp0  0 *:8776  *:* LISTEN
 
  tcp0  0 *:5000  *:* LISTEN
 
  ... elided ...

  And catalog:+-+---+
  |   Property  |   Value   |
  +-+---+
  |   adminURL  | http://dkranz-devstack:35357/v2.0 |
  |  id |  39932d3dcf4340a98727294ed5ec71b8 |
  | internalURL |  http://dkranz-devstack:5000/v2.0 |
  |  publicURL  |  http://dkranz-devstack:5000/v2.0 |
  |region   |  

[Yahoo-eng-team] [Bug 1283803] Re: keystone listens locally on admin port

2014-02-24 Thread David Kranz
This issue is caused by keystone listening globally for the public url
(port 5000) but only on localhost for 35357. I poked a little more and
found the cause.

Setting SERVICE_HOST in localrc causes devstack to produce these values
in keystone.conf:

admin_bind_host = dkranz-devstack
admin_endpoint = http://dkranz-devstack:%(admin_port)s/
public_endpoint = http://dkranz-devstack:%(public_port)s/

I thought the purpose of this env variable was to make the catalog
expose endpoints that are accessible from outside the devstack machine,
so it is surprising that it also sets the bind host, which makes the
admin endpoint unreachable from off the local machine. Is this behaviour
intentional?
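
What I expected SERVICE_HOST to produce is something closer to the fragment
below, where only the advertised endpoints carry the hostname and the bind
address stays wildcard (a sketch of the intent, not what devstack currently
writes):

[DEFAULT]
# Listen on all interfaces so the admin API is reachable off-box ...
admin_bind_host = 0.0.0.0
# ... while still advertising the externally resolvable endpoints.
admin_endpoint = http://dkranz-devstack:%(admin_port)s/
public_endpoint = http://dkranz-devstack:%(public_port)s/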

** Also affects: devstack
   Importance: Undecided
   Status: New

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1283803

Title:
  keystone listens locally on admin port

Status in devstack - openstack dev environments:
  New
Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  I installed a vanilla devstack except for setting SERVICE_HOST in
  localrc so I could run tempest from another machine. Tempest fails
  trying to connect to adminURL and it seems to be because port 35357 is
  only open locally. The conf file comment says:

  # The base admin endpoint URL for keystone that are advertised
  
  # to clients (NOTE: this does NOT affect how keystone listens 
  
  # for connections) (string value) 
  
  #admin_endpoint=http://localhost:%(admin_port)s/  
  

  But this is what netstat shows; I would expect 35357 to be bound the same
  way as the others. It is also possible this is a devstack issue, but I'm
  not sure, so I'm starting here.

  Active Internet connections (only servers)
  Proto Recv-Q Send-Q Local Address   Foreign Address State 
 
  tcp0  0 *:iscsi-target  *:* LISTEN
 
  tcp0  0 *:40956 *:* LISTEN
 
  tcp0  0 localhost:35357 *:* LISTEN
 
  tcp0  0 *:6080  *:* LISTEN
 
  tcp0  0 *:6081  *:* LISTEN
 
  tcp0  0 *:  *:* LISTEN
 
  tcp0  0 *:8773  *:* LISTEN
 
  tcp0  0 *:8774  *:* LISTEN
 
  tcp0  0 *:8775  *:* LISTEN
 
  tcp0  0 *:9191  *:* LISTEN
 
  tcp0  0 *:8776  *:* LISTEN
 
  tcp0  0 *:5000  *:* LISTEN
 
  ... elided ...

  And catalog:
  +-------------+-----------------------------------+
  |   Property  |               Value               |
  +-------------+-----------------------------------+
  |   adminURL  | http://dkranz-devstack:35357/v2.0 |
  |      id     |  39932d3dcf4340a98727294ed5ec71b8 |
  | internalURL |  http://dkranz-devstack:5000/v2.0 |
  |  publicURL  |  http://dkranz-devstack:5000/v2.0 |
  |    region   |             RegionOne             |
  +-------------+-----------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1283803/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1282266] Re: enabled attribute missing from GET /v3/endpoints

2014-02-24 Thread David Kranz
** Changed in: tempest
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1282266

Title:
  enabled attribute missing from GET /v3/endpoints

Status in OpenStack Identity (Keystone):
  In Progress
Status in Tempest:
  Invalid

Bug description:
  response from current master:

  RESP BODY: {"endpoints": [{"links": {"self":
  "http://localhost:5000/v3/endpoints/7237fc3ba1ec460595e8de463a5c7132"},
  "url": "http://localhost:35357/v3";, "region": "regionOne",
  "interface": "admin", "service_id":
  "0c8a9efdeada49d689c4d3ef29ecb3d7", "id":
  "7237fc3ba1ec460595e8de463a5c7132"}], "links": {"self":
  "http://localhost:5000/v3/endpoints";, "previous": null, "next": null}}

  response from stable/havana (this is correct):

  RESP BODY: {"endpoints": [{"links": {"self":
  "http://localhost:5000/v3/endpoints/6e1b54c3423347f1bafb20030dabb412"},
  "url": "http://127.0.0.1:35357/";, "region": null, "enabled": true,
  "interface": "admin", "service_id":
  "f43f1d5cb2e04edda9316077421062c8", "id":
  "6e1b54c3423347f1bafb20030dabb412"}], "links": {"self":
  "http://localhost:5000/v3/endpoints";, "previous": null, "next": null}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1282266/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1281969] Re: exceptions.ServerFault is raised when creating a server

2014-02-24 Thread David Kranz
logstash shows this happening a lot, about 7/8 of the time in a cells
(non-voting) run. I searched for

"The server has either erred or is incapable of performing the requested
operation. (HTTP 500)" as the message.

I don't see how this could be related to tempest.

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: tempest
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1281969

Title:
  exceptions.ServerFault is raised when creating a server

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  Invalid

Bug description:
  After submitting a patch, I got a lot of similar errors in tempest.

  http://logs.openstack.org/52/74552/2/check/check-tempest-dsvm-
  full/8841b27/console.html

  2014-02-19 07:46:41.335 | 
==
  2014-02-19 07:46:41.335 | FAIL: setUpClass 
(tempest.api.compute.v3.images.test_images_oneserver_negative.ImagesOneServerNegativeV3Test)
  2014-02-19 07:46:41.335 | setUpClass 
(tempest.api.compute.v3.images.test_images_oneserver_negative.ImagesOneServerNegativeV3Test)
  2014-02-19 07:46:41.335 | 
--
  2014-02-19 07:46:41.335 | _StringException: Traceback (most recent call last):
  2014-02-19 07:46:41.336 |   File 
"tempest/api/compute/v3/images/test_images_oneserver_negative.py", line 67, in 
setUpClass
  2014-02-19 07:46:41.336 | resp, server = 
cls.create_test_server(wait_until='ACTIVE')
  2014-02-19 07:46:41.336 |   File "tempest/api/compute/base.py", line 121, in 
create_test_server
  2014-02-19 07:46:41.336 | name, image_id, flavor, **kwargs)
  2014-02-19 07:46:41.336 |   File 
"tempest/services/compute/v3/json/servers_client.py", line 87, in create_server
  2014-02-19 07:46:41.336 | resp, body = self.post('servers', post_body, 
self.headers)
  2014-02-19 07:46:41.336 |   File "tempest/common/rest_client.py", line 184, 
in post
  2014-02-19 07:46:41.336 | return self.request('POST', url, headers, body)
  2014-02-19 07:46:41.336 |   File "tempest/common/rest_client.py", line 360, 
in request
  2014-02-19 07:46:41.336 | resp, resp_body)
  2014-02-19 07:46:41.337 |   File "tempest/common/rest_client.py", line 453, 
in _error_checker
  2014-02-19 07:46:41.337 | raise exceptions.ServerFault(message)
  2014-02-19 07:46:41.338 | ServerFault: Got server fault
  2014-02-19 07:46:41.338 | Details: The server has either erred or is 
incapable of performing the requested operation.
  2014-02-19 07:46:41.338 | 
  2014-02-19 07:46:41.338 | 
  2014-02-19 07:46:41.338 | 
==
  2014-02-19 07:46:41.339 | FAIL: setUpClass 
(tempest.api.compute.v3.servers.test_server_password.ServerPasswordV3Test)
  2014-02-19 07:46:41.339 | setUpClass 
(tempest.api.compute.v3.servers.test_server_password.ServerPasswordV3Test)
  2014-02-19 07:46:41.339 | 
--
  2014-02-19 07:46:41.339 | _StringException: Traceback (most recent call last):
  2014-02-19 07:46:41.339 |   File 
"tempest/api/compute/v3/servers/test_server_password.py", line 28, in setUpClass
  2014-02-19 07:46:41.340 | resp, cls.server = 
cls.create_test_server(wait_until="ACTIVE")
  2014-02-19 07:46:41.340 |   File "tempest/api/compute/base.py", line 121, in 
create_test_server
  2014-02-19 07:46:41.340 | name, image_id, flavor, **kwargs)
  2014-02-19 07:46:41.340 |   File 
"tempest/services/compute/v3/json/servers_client.py", line 87, in create_server
  2014-02-19 07:46:41.340 | resp, body = self.post('servers', post_body, 
self.headers)
  2014-02-19 07:46:41.342 |   File "tempest/common/rest_client.py", line 184, 
in post
  2014-02-19 07:46:41.342 | return self.request('POST', url, headers, body)
  2014-02-19 07:46:41.342 |   File "tempest/common/rest_client.py", line 360, 
in request
  2014-02-19 07:46:41.342 | resp, resp_body)
  2014-02-19 07:46:41.342 |   File "tempest/common/rest_client.py", line 453, 
in _error_checker
  2014-02-19 07:46:41.342 | raise exceptions.ServerFault(message)
  2014-02-19 07:46:41.343 | ServerFault: Got server fault
  2014-02-19 07:46:41.343 | Details: The server has either erred or is 
incapable of performing the requested operation.
  2014-02-19 07:46:41.343 | 
  2014-02-19 07:46:41.343 | 
  2014-02-19 07:46:41.343 | 
==
  2014-02-19 07:46:41.343 | FAIL: setUpClass 
(tempest.api.compute.v3.servers.test_create_server.ServersV3TestManualDisk)
  2014-02-19 07:46:41.343 | setUpClass 
(tempest.api.compute.v3.servers.test_create_server.ServersV3TestManualDisk)
  2014-02-19 07:46:41.343 | 
--
  2

[Yahoo-eng-team] [Bug 1265498] Re: Router over quota error without tenant isolation

2014-03-14 Thread David Kranz
It seems this is caused by creating 10 routers when the quota is 10. I think
it barely passes in the isolated case and fails in the sequential
(non-isolated) case because the demo tenant already starts with one router.
I verified that this passes without isolation if the quota is increased to
11; that will do for now.
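
The default lives in neutron.conf in the quotas section (quota_router); the
per-tenant value can also be raised through the API. A rough sketch with
python-neutronclient (credentials and tenant id are placeholders):

# Raise the demo tenant's router quota to 11 so the sequential run has
# headroom for the pre-existing router. Credentials/ids below are placeholders.
from neutronclient.v2_0 import client

neutron = client.Client(username="admin",
                        password="<password>",
                        tenant_name="admin",
                        auth_url="http://127.0.0.1:5000/v2.0")
neutron.update_quota("<demo tenant id>", {"quota": {"router": 11}})
print(neutron.show_quota("<demo tenant id>"))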

** Changed in: neutron
   Status: New => Invalid

** Changed in: tempest
   Importance: Undecided => High

** Changed in: tempest
 Assignee: (unassigned) => David Kranz (david-kranz)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1265498

Title:
  Router over quota error without tenant isolation

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in Tempest:
  New

Bug description:
  With parallel testing enabled, an error has been observed [1]. It
  seems the router quota is exceeded, which is compatible with a
  scenario where several tests creating routers are concurrently
  executed, and full tenant isolation is not enabled.

  There does not seem to be any issue on the neutron side; this error is
  probably due to tempest tests which needs to be made more robust - or
  perhaps it will just go away with full isolation

  [1] http://logs.openstack.org/85/64185/1/experimental/check-tempest-
  dsvm-neutron-isolated-parallel/706d454/console.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1265498/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294773] Re: tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern - HTTP 500

2014-03-19 Thread David Kranz
This happens quite a lot and seems to be triggered by this error in the
n-cpu log:

2014-03-19 16:46:40.209 ERROR nova.network.neutronv2.api [req-
6afb4d61-2c01-43d7-9caf-fdda126f7497 TestVolumeBootPatternV2-1956299643
TestVolumeBootPatternV2-224738568] Failed to delete neutron port
914b04aa-7f0e-4551-a1e9-2f9acc890409

I will start with the assumption that this is a neutron issue.

** Changed in: neutron
   Status: New => Confirmed

** Changed in: tempest
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1294773

Title:
  
tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern
  - HTTP 500

Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in Tempest:
  Invalid

Bug description:
  Example: http://logs.openstack.org/16/79816/3/check/check-tempest-
  dsvm-neutron/7a4eef5/console.html

  2014-03-19 16:48:18.200 | 
==
  2014-03-19 16:48:18.200 | FAIL: tearDownClass 
(tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2)
  2014-03-19 16:48:18.201 | tearDownClass 
(tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2)
  2014-03-19 16:48:18.201 | 
--
  2014-03-19 16:48:18.201 | _StringException: Traceback (most recent call last):
  2014-03-19 16:48:18.201 |   File "tempest/scenario/manager.py", line 149, in 
tearDownClass
  2014-03-19 16:48:18.201 | cls.cleanup_resource(thing, cls.__name__)
  2014-03-19 16:48:18.201 |   File "tempest/scenario/manager.py", line 113, in 
cleanup_resource
  2014-03-19 16:48:18.202 | resource.delete()
  2014-03-19 16:48:18.202 |   File 
"/opt/stack/new/python-novaclient/novaclient/v1_1/floating_ips.py", line 25, in 
delete
  2014-03-19 16:48:18.202 | self.manager.delete(self)
  2014-03-19 16:48:18.202 |   File 
"/opt/stack/new/python-novaclient/novaclient/v1_1/floating_ips.py", line 49, in 
delete
  2014-03-19 16:48:18.202 | self._delete("/os-floating-ips/%s" % 
base.getid(floating_ip))
  2014-03-19 16:48:18.202 |   File 
"/opt/stack/new/python-novaclient/novaclient/base.py", line 161, in _delete
  2014-03-19 16:48:18.202 | _resp, _body = self.api.client.delete(url)
  2014-03-19 16:48:18.203 |   File 
"/opt/stack/new/python-novaclient/novaclient/client.py", line 292, in delete
  2014-03-19 16:48:18.203 | return self._cs_request(url, 'DELETE', **kwargs)
  2014-03-19 16:48:18.203 |   File 
"/opt/stack/new/python-novaclient/novaclient/client.py", line 260, in 
_cs_request
  2014-03-19 16:48:18.203 | **kwargs)
  2014-03-19 16:48:18.203 |   File 
"/opt/stack/new/python-novaclient/novaclient/client.py", line 242, in 
_time_request
  2014-03-19 16:48:18.203 | resp, body = self.request(url, method, **kwargs)
  2014-03-19 16:48:18.203 |   File 
"/opt/stack/new/python-novaclient/novaclient/client.py", line 236, in request
  2014-03-19 16:48:18.204 | raise exceptions.from_response(resp, body, url, 
method)
  2014-03-19 16:48:18.204 | ClientException: The server has either erred or is 
incapable of performing the requested operation. (HTTP 500) (Request-ID: 
req-7d345883-b8db-4081-a643-7aa9169a95b6)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1294773/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334297] [NEW] Strange response from os-server-groups with bad parameter

2014-06-25 Thread David Kranz
Public bug reported:

There was a bug in tempest that caused a call to DELETE os-server-groups
with a bad id. Here is the call from the tempest log:

2014-06-25 12:07:03.162 25653 INFO tempest.common.rest_client [-]
Request (ServerGroupTestJSON:tearDownClass): 200 DELETE
http://127.0.0.1:8774/v2/2bc18b72010d455a9db7cbc583e1dcfc/os-server-
groups/{u'policies': [u'affinity'], u'name': u'server-group-1944635656',
u'id': u'7c6a13c1-6d8b-4314-9d16-4eb6418a2170', u'members': [],
u'metadata': {}} 0.001s

Normally DELETE returns 204, and with a malformed id like this I would have
expected 400. But the call returned 200. The nova log seems to indicate that
a 400 was generated, but that is not what actually got sent back:

127.0.0.1 - - [25/Jun/2014 12:07:03] code 400, message Bad request syntax 
("DELETE /v2/2bc18b72010d455a9db7cbc583e1dcfc/os-server-groups/{u'policies': 
[u'affinity'], u'name': u'server-group-1944635656', u'id': 
u'7c6a13c1-6d8b-4314-9d16-4eb6418a2170', u'members': [], u'metadata': {}} 
HTTP/1.1")
127.0.0.1 - - [25/Jun/2014 12:07:03] "DELETE 
/v2/2bc18b72010d455a9db7cbc583e1dcfc/os-server-groups/{u'policies': 
[u'affinity'], u'name': u'server-group-1944635656', u'id': 
u'7c6a13c1-6d8b-4314-9d16-4eb6418a2170', u'members': [], u'metadata': {}} 
HTTP/1.1" 400 -
127.0.0.1 - - [25/Jun/2014 12:07:03] code 400, message Bad request syntax 
("DELETE /v2/2bc18b72010d455a9db7cbc583e1dcfc/os-server-groups/{u'policies': 
[u'affinity'], u'name': u'server-group-1366165944', u'id': 
u'd361d100-fc59-4393-b61b-30a2d4b27b6e', u'members': [], u'metadata': {}} 
HTTP/1.1")
127.0.0.1 - - [25/Jun/2014 12:07:03] "DELETE 
/v2/2bc18b72010d455a9db7cbc583e1dcfc/os-server-groups/{u'policies': 
[u'affinity'], u'name': u'server-group-1366165944', u'id': 
u'd361d100-fc59-4393-b61b-30a2d4b27b6e', u'members': [], u'metadata': {}} 
HTTP/1.1" 400 -
127.0.0.1 - - [25/Jun/2014 12:07:03] code 400, message Bad request syntax 
("DELETE /v2/2bc18b72010d455a9db7cbc583e1dcfc/os-server-groups/{u'policies': 
[u'affinity'], u'name': u'server-group-1366165944', u'id': 
u'9f8574d7-78b9-4926-98ea-61f2da971478', u'members': [], u'metadata': {}} 
HTTP/1.1")
127.0.0.1 - - [25/Jun/2014 12:07:03] "DELETE 
/v2/2bc18b72010d455a9db7cbc583e1dcfc/os-server-groups/{u'policies': 
[u'affinity'], u'name': u'server-group-1366165944', u'id': 
u'9f8574d7-78b9-4926-98ea-61f2da971478', u'members': [], u'metadata': {}} 
HTTP/1.1" 400 -
127.0.0.1 - - [25/Jun/2014 12:07:03] code 400, message Bad request syntax 
("DELETE /v2/2bc18b72010d455a9db7cbc583e1dcfc/os-server-groups/{u'policies': 
[u'affinity'], u'name': u'server-group-2072440191', u'id': 
u'01342594-9661-47fb-8816-e816ad2cae37', u'members': [], u'metadata': {}} 
HTTP/1.1")
127.0.0.1 - - [25/Jun/2014 12:07:03] "DELETE 
/v2/2bc18b72010d455a9db7cbc583e1dcfc/os-server-groups/{u'policies': 
[u'affinity'], u'name': u'server-group-2072440191', u'id': 
u'01342594-9661-47fb-8816-e816ad2cae37', u'members': [], u'metadata': {}} 
HTTP/1.1" 400 -
127.0.0.1 - - [25/Jun/2014 12:07:03] code 400, message Bad request syntax 
("DELETE /v2/2bc18b72010d455a9db7cbc583e1dcfc/os-server-groups/{u'policies': 
[u'anti-affinity'], u'name': u'server-group-684827235', u'id': 
u'53c45946-5a0b-451e-bf24-790b1db89963', u'members': [], u'metadata': {}} 
HTTP/1.1")
127.0.0.1 - - [25/Jun/2014 12:07:03] "DELETE 
/v2/2bc18b72010d455a9db7cbc583e1dcfc/os-server-groups/{u'policies': 
[u'anti-affinity'], u'name': u'server-group-684827235', u'id': 
u'53c45946-5a0b-451e-bf24-790b1db89963', u'members': [], u'metadata': {}} 
HTTP/1.1" 400 -
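
For context, a hypothetical sketch of how the bad URL path above gets built
when the whole server-group dict is passed where only its id belongs (the
actual tempest client code is not shown here):

# The dict is the one from the request log above. Formatting the whole dict
# into the path instead of its 'id' reproduces the malformed URL seen above
# (the u'' prefixes match under Python 2).
group = {u'policies': [u'affinity'],
         u'name': u'server-group-1944635656',
         u'id': u'7c6a13c1-6d8b-4314-9d16-4eb6418a2170',
         u'members': [], u'metadata': {}}

buggy_path = 'os-server-groups/%s' % group          # dict repr in the path
fixed_path = 'os-server-groups/%s' % group['id']    # what should be sent

print(buggy_path)
print(fixed_path)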

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334297

Title:
  Strange response from os-server-groups with bad parameter

Status in OpenStack Compute (Nova):
  New

Bug description:
  There was a bug in tempest that caused a call to DELETE os-server-
  groups with a bad id. Here is the call from the tempest log:

  2014-06-25 12:07:03.162 25653 INFO tempest.common.rest_client [-]
  Request (ServerGroupTestJSON:tearDownClass): 200 DELETE
  http://127.0.0.1:8774/v2/2bc18b72010d455a9db7cbc583e1dcfc/os-server-
  groups/{u'policies': [u'affinity'], u'name': u'server-
  group-1944635656', u'id': u'7c6a13c1-6d8b-4314-9d16-4eb6418a2170',
  u'members': [], u'metadata': {}} 0.001s

  Normally DELETE returns 204, and with a malformed id like this I would
  have expected 400. But the call returned 200. The nova log seems to
  indicate that a 400 was generated, but that is not what actually got
  sent back:

  127.0.0.1 - - [25/Jun/2014 12:07:03] code 400, message Bad request syntax 
("DELETE /v2/2bc18b72010d455a9db7cbc583e1dcfc/os-server-groups/{u'policies': 
[u'affinity'], u'name': u'server-group-1944635656', u'id': 
u'7c6a13c1-6d8b-4314-9d16-4eb6418a2170', u'members': [], u'metadata': {}} 
HTTP/1.1")
  127.0.0.1 - - [25/Jun/2014 12:07:03] "DELETE 
/v2/2bc18b72

[Yahoo-eng-team] [Bug 1343579] [NEW] Versionless GET on keystone gives different answer with port 5000 and 35357

2014-07-17 Thread David Kranz
Public bug reported:

On a system with both v2/v3 (devstack), using 35357 shows only v3.

5000 (versions XML; markup lost in the archive, surviving URLs shown):
  xmlns: http://docs.openstack.org/identity/api/v2.0
  http://devstack-neutron:5000/v3/ (rel="self")
  http://devstack-neutron:5000/v2.0/ (rel="self")
  http://docs.openstack.org/ (rel="describedby", type="text/html")
  http://devstack-neutron:5000/v2.0/ (rel="self")
  http://docs.openstack.org/ (rel="describedby", type="text/html")

35357 (versions XML; markup lost in the archive, surviving URLs shown):
  xmlns: http://docs.openstack.org/identity/api/v2.0
  http://devstack-neutron:35357/v3/ (rel="self")
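
A quick way to reproduce the comparison, as a sketch (it assumes the devstack
host name above and the requests library):

# Fetch the versionless document from both keystone ports and print which
# version ids each one advertises. Host name is the one from this report;
# XML is requested explicitly.
import re
import requests

for port in (5000, 35357):
    resp = requests.get("http://devstack-neutron:%d/" % port,
                        headers={"Accept": "application/xml"})
    versions = sorted(set(re.findall(r'id="(v[^"]+)"', resp.text)))
    print("port %d -> HTTP %d, versions: %s"
          % (port, resp.status_code, versions))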




** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1343579

Title:
  Versionless GET on keystone gives different answer with port 5000 and
  35357

Status in OpenStack Identity (Keystone):
  New

Bug description:
  On a system with both v2/v3 (devstack), using 35357 shows only v3.

  5000 (versions XML; markup lost in the archive, surviving URLs shown):
    xmlns: http://docs.openstack.org/identity/api/v2.0
    http://devstack-neutron:5000/v3/ (rel="self")
    http://devstack-neutron:5000/v2.0/ (rel="self")
    http://docs.openstack.org/ (rel="describedby", type="text/html")
    http://devstack-neutron:5000/v2.0/ (rel="self")
    http://docs.openstack.org/ (rel="describedby", type="text/html")

  35357 (versions XML; markup lost in the archive, surviving URLs shown):
    xmlns: http://docs.openstack.org/identity/api/v2.0
    http://devstack-neutron:35357/v3/ (rel="self")
  
  
  

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1343579/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1347156] [NEW] deleting floating-ip in nova-network does not free quota

2014-07-22 Thread David Kranz
Public bug reported:

It seems that when you allocate a floating-ip in a tenant with nova-
network, its quota is never returned after calling 'nova floating-ip-
delete', even though 'nova floating-ip-list' shows it gone. This behavior
applies to each tenant individually. The gate tests are passing because
they all run with tenant isolation, but the problem shows up in the
nightly run without tenant isolation:

http://logs.openstack.org/periodic-qa/periodic-tempest-dsvm-full-non-
isolated-master/2bc5ead/console.html
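
A rough reproduction sketch with python-novaclient (credentials are
placeholders, and the 'totalFloatingIpsUsed' limit name from the os-limits
API is an assumption here):

# Allocate and delete a floating IP, then read the absolute limits to see
# whether the usage counter went back down as it should. Credentials are
# placeholders; the limit name is assumed, not taken from this report.
from novaclient.v1_1 import client

nova = client.Client("demo", "<password>", "demo",
                     "http://127.0.0.1:5000/v2.0")

fip = nova.floating_ips.create()
nova.floating_ips.delete(fip)

used = [l for l in nova.limits.get().absolute
        if l.name == "totalFloatingIpsUsed"]
print(used[0].value if used else "limit not reported")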

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1347156

Title:
  deleting floating-ip in nova-network does not free quota

Status in OpenStack Compute (Nova):
  New

Bug description:
  It seems that when you allocate a floating-ip in a tenant with nova-
  network, its quota is never returned after calling 'nova floating-ip-
  delete', even though 'nova floating-ip-list' shows it gone. This
  behavior applies to each tenant individually. The gate tests are
  passing because they all run with tenant isolation, but the problem
  shows up in the nightly run without tenant isolation:

  http://logs.openstack.org/periodic-qa/periodic-tempest-dsvm-full-non-
  isolated-master/2bc5ead/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1347156/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp