[Yahoo-eng-team] [Bug 1562863] Re: The command "nova-manage db archive_deleted_rows" doesn't move the entries to the shadow tables.

2016-09-10 Thread Vitaly Sedelnik
Removed the duplicate mark against upstream nova bug
https://bugs.launchpad.net/nova/+bug/1183523. Targeted to 7.0-mu-6 and
8.0-updates. Invalid for 9.1, as the fix landed in stable/mitaka and was
consumed via sync.

** Changed in: mos
   Status: New => Confirmed

** Changed in: mos
Milestone: 7.0-mu-4 => 7.0-mu-6

** Changed in: mos
 Assignee: Nova (nova) => MOS Maintenance (mos-maintenance)

** This bug is no longer a duplicate of bug 1183523
   db-archiving fails to clear some deleted rows from instances table

** Also affects: mos/9.x
   Importance: High
 Assignee: MOS Maintenance (mos-maintenance)
   Status: Confirmed

** Also affects: mos/7.0.x
   Importance: Undecided
   Status: New

** Also affects: mos/8.0.x
   Importance: Undecided
   Status: New

** Changed in: mos/7.0.x
   Status: New => Confirmed

** Changed in: mos/8.0.x
   Status: New => Confirmed

** Changed in: mos/7.0.x
   Importance: Undecided => High

** Changed in: mos/8.0.x
   Importance: Undecided => High

** Changed in: mos/8.0.x
 Assignee: (unassigned) => MOS Maintenance (mos-maintenance)

** Changed in: mos/7.0.x
 Assignee: (unassigned) => MOS Maintenance (mos-maintenance)

** Changed in: mos/7.0.x
Milestone: None => 7.0-mu-6

** Changed in: mos/8.0.x
Milestone: None => 8.0-updates

** Changed in: mos/9.x
Milestone: 7.0-mu-6 => 9.1

** Changed in: mos/9.x
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Nova,
which is a bug assignee.
https://bugs.launchpad.net/bugs/1562863

Title:
  The command "nova-manage db archive_deleted_rows" doesn't move the
  entries to the shadow tables.

Status in Mirantis OpenStack:
  Invalid
Status in Mirantis OpenStack 7.0.x series:
  Confirmed
Status in Mirantis OpenStack 8.0.x series:
  Confirmed
Status in Mirantis OpenStack 9.x series:
  Invalid

Bug description:
  Once an instance is deleted from the cloud, its entry in the database is
still present. When trying to archive some number of deleted rows with
"nova-manage db archive_deleted_rows --max-rows 1", a constraint error is
displayed in the nova logs.

  Expected results:
  The specified number of deleted rows should be moved from the production
tables to the shadow tables.

  Actual result:
  db archiving fails to archive the specified number of deleted rows with
"nova-manage db archive_deleted_rows --max-rows 1"; a constraint error is
displayed in the nova logs:

  2016-03-28 12:31:43.751 14147 ERROR oslo_db.sqlalchemy.exc_filters 
[req-0fd152a6-b298-4c3c-9aa7-ebdadc752b89 - - - - -] DBAPIError exception 
wrapped from (IntegrityError) (1451, 'Cannot delete or update a parent row: a 
foreign key constraint fails (`nova`.`block_device_mapping`, CONSTRAINT 
`block_device_mapping_instance_uuid_fkey` FOREIGN KEY (`instance_uuid`) 
REFERENCES `instances` (`uuid`))') 'DELETE FROM instances WHERE instances.id in 
(SELECT T1.id FROM (SELECT instances.id \nFROM instances \nWHERE 
instances.deleted != %s ORDER BY instances.id \n LIMIT %s) as T1)' (0, 1)
  2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters Traceback 
(most recent call last):
  2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 951, in 
_execute_context
  2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters 
context)
  2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 436, in 
do_execute
  2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters 
cursor.execute(statement, parameters)
  2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/MySQLdb/cursors.py", line 205, in execute
  2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters 
self.errorhandler(self, exc, value)
  2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/MySQLdb/connections.py", line 36, in 
defaulterrorhandler
  2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters raise 
errorclass, errorvalue
  2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters 
IntegrityError: (1451, 'Cannot delete or update a parent row: a foreign key 
constraint fails (`nova`.`block_device_mapping`, CONSTRAINT 
`block_device_mapping_instance_uuid_fkey` FOREIGN KEY (`instance_uuid`) 
REFERENCES `instances` (`uuid`))')
  2016-03-28 12:31:43.751 14147 TRACE oslo_db.sqlalchemy.exc_filters 
  2016-03-28 12:31:43.753 14147 WARNING nova.db.sqlalchemy.api 
[req-0fd152a6-b298-4c3c-9aa7-ebdadc752b89 - - - - -] IntegrityError detected 
when archiving table instances

  
  Steps to reproduce:
  1) create instance
  2) delete instance
  3) run command 'nova-manage db archive_deleted_rows --max_rows 1'

  Description of the environment:
  MOS: 7.0
  OS: Ubuntu 14.04 
  Reference
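The IntegrityError above can be reproduced in miniature: with foreign keys enforced, deleting a parent row from instances while block_device_mapping rows still reference it fails, whereas archiving the child rows first succeeds. A minimal sketch with simplified, hypothetical table shapes (not nova's real schema):

```python
import sqlite3

# Simplified stand-ins for nova's tables; column sets are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce FK constraints
conn.execute("CREATE TABLE instances (uuid TEXT PRIMARY KEY, deleted INT)")
conn.execute("""CREATE TABLE block_device_mapping (
                    id INTEGER PRIMARY KEY,
                    instance_uuid TEXT REFERENCES instances(uuid))""")
conn.execute("INSERT INTO instances VALUES ('abc-123', 1)")
conn.execute("INSERT INTO block_device_mapping VALUES (1, 'abc-123')")

# Deleting the parent row first fails, as in the log above.
try:
    conn.execute("DELETE FROM instances WHERE uuid = 'abc-123'")
    parent_first_failed = False
except sqlite3.IntegrityError:
    parent_first_failed = True

# Archiving (here: deleting) the child rows first avoids the error.
conn.execute("DELETE FROM block_device_mapping WHERE instance_uuid = 'abc-123'")
conn.execute("DELETE FROM instances WHERE uuid = 'abc-123'")
remaining = conn.execute("SELECT COUNT(*) FROM instances").fetchone()[0]
```

The same ordering concern applies to the real archiver: child tables must be drained of references before their parent rows can move to the shadow tables.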

[Yahoo-eng-team] [Bug 1622159] [NEW] the available zone will get back to the default "nova"

2016-09-10 Thread quzhaoyang
Public bug reported:

Create a new availability zone named "newzone", and use that zone to create
a VM.
Then, when you query the VM details through the API, the availability zone
of the VM alternates between the new zone "newzone" and the default value
"nova". It returns to normal after a period of time.
The OpenStack dashboard also shows this problem when you refresh the VM
details page.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1622159

Title:
  the available zone will get back to the default "nova"

Status in OpenStack Compute (nova):
  New

Bug description:
  Create a new availability zone named "newzone", and use that zone to
create a VM.
  Then, when you query the VM details through the API, the availability
zone of the VM alternates between the new zone "newzone" and the default
value "nova". It returns to normal after a period of time.
  The OpenStack dashboard also shows this problem when you refresh the VM
details page.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1622159/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1622219] [NEW] launch instance using openstack server create

2016-09-10 Thread i.m.thabet
Public bug reported:

When trying to launch an instance using:

openstack server create --flavor m1.small --image Feora-24 \
  --nic net-id=6c1548c0-49a2-493a-89f1-5b98bcb1d33c \
  --security-group default --key-name mykey Fedora-24

the following error is received: "Unexpected API Error. Please report this at
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
(HTTP 500) (Request-ID: req-9d307908-aead-4e21-9bc8-389f2b095230)"

nova-api log


2016-09-10 19:26:11.831 19472 ERROR nova.api.openstack.extensions raise 
NoSuchOptError(opt_name, group)
2016-09-10 19:26:11.831 19472 ERROR nova.api.openstack.extensions 
NoSuchOptError: no such option in group neutron: auth_plugin
2016-09-10 19:26:11.831 19472 ERROR nova.api.openstack.extensions
2016-09-10 19:26:11.837 19472 INFO nova.api.openstack.wsgi 
[req-9d307908-aead-4e21-9bc8-389f2b095230 ae1871edbbb540ccad13a3a4
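A commonly reported cause of this NoSuchOptError is a stale option name in nova.conf: with keystoneauth-based configuration the [neutron] section uses auth_type, while older guides used auth_plugin. A hedged sketch of the relevant section (all values are placeholders, not taken from this report):

```ini
[neutron]
# 'auth_type' replaced the older 'auth_plugin' option name; mixing the
# two across nova/keystoneauth versions can yield NoSuchOptError.
auth_type = password
auth_url = http://controller:35357
project_name = service
username = neutron
password = NEUTRON_PASS
```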

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1622219

Title:
  launch instance using openstack server create

Status in OpenStack Compute (nova):
  New

Bug description:
  when trying to launch instance using openstack server create --flavor
  m1.small --image Feora-24 --nic net-id=6c1548c0-49a2-493a-
  89f1-5b98bcb1d33c  --security-group default --key-name mykey Fedora-24

  
  error received "Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-9d307908-aead-4e21-9bc8-389f2b095230)"

  nova-api log


  2016-09-10 19:26:11.831 19472 ERROR nova.api.openstack.extensions raise 
NoSuchOptError(opt_name, group)
  2016-09-10 19:26:11.831 19472 ERROR nova.api.openstack.extensions 
NoSuchOptError: no such option in group neutron: auth_plugin
  2016-09-10 19:26:11.831 19472 ERROR nova.api.openstack.extensions
  2016-09-10 19:26:11.837 19472 INFO nova.api.openstack.wsgi 
[req-9d307908-aead-4e21-9bc8-389f2b095230 ae1871edbbb540ccad13a3a4

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1622219/+subscriptions



[Yahoo-eng-team] [Bug 1622222] [NEW] openstack server create failure

2016-09-10 Thread i.m.thabet
Public bug reported:

openstack server create --flavor m1.small --image Feora-24 --nic net-
id=6c1548c0-49a2-493a-89f1-5b98bcb1d33c  --security-group default --key-
name mykey Fedora-24

fails with error " Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
 (HTTP 500) (Request-ID: 
req-c1a85afb-763d-4a0a-800b-da02d331132e)
"

nova-api.log

2016-09-10 19:26:11.831 19472 ERROR nova.api.openstack.extensions raise 
NoSuchOptError(opt_name, group)
2016-09-10 19:26:11.831 19472 ERROR nova.api.openstack.extensions 
NoSuchOptError: no such option in group neutron: auth_plugin
2016-09-10 19:26:11.831 19472 ERROR nova.api.openstack.extensions

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/162

Title:
  openstack server create failure

Status in OpenStack Compute (nova):
  New

Bug description:
  openstack server create --flavor m1.small --image Feora-24 --nic net-
  id=6c1548c0-49a2-493a-89f1-5b98bcb1d33c  --security-group default
  --key-name mykey Fedora-24

  fails with error " Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-c1a85afb-763d-4a0a-800b-da02d331132e)
  "

  nova-api.log

  2016-09-10 19:26:11.831 19472 ERROR nova.api.openstack.extensions raise 
NoSuchOptError(opt_name, group)
  2016-09-10 19:26:11.831 19472 ERROR nova.api.openstack.extensions 
NoSuchOptError: no such option in group neutron: auth_plugin
  2016-09-10 19:26:11.831 19472 ERROR nova.api.openstack.extensions

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/162/+subscriptions



[Yahoo-eng-team] [Bug 1621810] Re: neutron-lbaas quota unit tests broken

2016-09-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/368217
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lbaas/commit/?id=d27e8425b284042b98c0170119b7cdc9e0570690
Submitter: Jenkins
Branch: master

commit d27e8425b284042b98c0170119b7cdc9e0570690
Author: Stephen Balukoff 
Date:   Fri Sep 9 13:04:34 2016 -0700

Update quota tests to register quota resources

Change I8a40f38d7c0e5aeca257ba62115fa9b02ad5aa93 in the neutron code
tree recently altered the default behavior of quota tests to unregister
quota resources if they weren't explicitly registered in the test. This
broke neutron-lbaas quota tests. This commit updates the quota tests to
explicitly register our quota resources in the setUp() call.

Change-Id: If3c62574d728c1b7517f406890301f74e96566d1
Closes-Bug: #1621810
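The fix described in the commit message can be illustrated with a toy registry; the names below (QUOTA_REGISTRY, register_quota_resource, show_quotas) are hypothetical stand-ins, not neutron's actual API:

```python
import unittest

QUOTA_REGISTRY = {}  # hypothetical stand-in for neutron's resource registry

def register_quota_resource(name, default):
    QUOTA_REGISTRY[name] = default

def show_quotas():
    # Only registered resources are reported, mirroring the neutron
    # change that made unregistered lookups fail with KeyError.
    return {"quota": dict(QUOTA_REGISTRY)}

class LBaaSQuotaDefaultsTest(unittest.TestCase):
    def setUp(self):
        # Explicitly register the quota resources, as the commit does.
        QUOTA_REGISTRY.clear()
        register_quota_resource("pool", 10)

    def test_quotas_default_values(self):
        self.assertEqual(10, show_quotas()["quota"]["pool"])
```

Without the setUp() registration, the lookup of "pool" raises KeyError, which is exactly the first failure in the traceback below.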


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1621810

Title:
  neutron-lbaas quota unit tests broken

Status in neutron:
  Fix Released

Bug description:
  http://logs.openstack.org/10/355510/3/check/gate-neutron-lbaas-
  python27-ubuntu-xenial/007c1f9/testr_results.html.gz

  
neutron_lbaas.tests.unit.services.loadbalancer.test_loadbalancer_quota_ext.LBaaSQuotaExtensionCfgTestCase.test_quotas_default_values

  Traceback (most recent call last):
File 
"neutron_lbaas/tests/unit/services/loadbalancer/test_loadbalancer_quota_ext.py",
 line 146, in test_quotas_default_values
  self.assertEqual(10, quota['quota']['pool'])
  KeyError: 'pool'

  
neutron_lbaas.tests.unit.services.loadbalancer.test_loadbalancer_quota_ext.LBaaSQuotaExtensionCfgTestCase.test_update_quotas_forbidden

  Traceback (most recent call last):
File 
"neutron_lbaas/tests/unit/services/loadbalancer/test_loadbalancer_quota_ext.py",
 line 158, in test_update_quotas_forbidden
  self.assertEqual(403, res.status_int)
File 
"/home/jenkins/workspace/gate-neutron-lbaas-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 411, in assertEqual
  self.assertThat(observed, matcher, message)
File 
"/home/jenkins/workspace/gate-neutron-lbaas-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: 403 != 400

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1621810/+subscriptions



[Yahoo-eng-team] [Bug 1183523] Re: db-archiving fails to clear some deleted rows from instances table

2016-09-10 Thread Matt Riedemann
** Changed in: nova/mitaka
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1183523

Title:
  db-archiving fails to clear some deleted rows from instances table

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  Fix Released

Bug description:
  Downstream bug report from Red Hat Bugzilla against Grizzly:
  https://bugzilla.redhat.com/show_bug.cgi?id=960644

  In unit tests, db-archiving moves all 'deleted' rows to the shadow
  tables.  However, in the real-world test, some deleted rows got stuck
  in the instances table.

  I suspect a bug in the way we deal with foreign key constraints.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1183523/+subscriptions



[Yahoo-eng-team] [Bug 1622270] [NEW] libvirt: excessive warning logs like "couldn't obtain the vcpu count from domain id: 6bcf0dcb-722f-44a3-a0aa-3fa42dd1075a, exception: Requested operation is not va

2016-09-10 Thread Matt Riedemann
Public bug reported:

These warnings show up a ton which means they probably shouldn't be
warnings:

http://logs.openstack.org/79/368079/2/check/gate-tempest-dsvm-neutron-
multinode-full-ubuntu-
xenial/9a1a7f0/logs/screen-n-cpu.txt.gz?level=TRACE

2016-09-10 21:29:16.759 20105 WARNING nova.virt.libvirt.driver [req-
386e6a64-0d9c-48ec-bab9-4acabdf55265 - -] couldn't obtain the vcpu count
from domain id: 6bcf0dcb-722f-44a3-a0aa-3fa42dd1075a, exception:
Requested operation is not valid: cpu affinity is not supported


http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22couldn't%20obtain%20the%20vcpu%20count%20from%20domain%20id%5C%22%20AND%20message%3A%5C%22exception%3A%20Requested%20operation%20is%20not%20valid%3A%20cpu%20affinity%20is%20not%20supported%5C%22%20AND%20tags%3A%5C%22screen-n-cpu.txt%5C%22&from=7d

220K+ times in 7 days in the gate so this is excessive.

The warning is here:

https://github.com/openstack/nova/blob/e92d7537d67c7eb0e49415b429d56be762e02679/nova/virt/libvirt/driver.py#L4922

If cpu affinity is not supported, we shouldn't log that as a warning.

I'm not sure if this is something with the guest configuration or with
the version of libvirt/qemu on the host, we might need to ask danpb in
the #openstack-nova irc channel.
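One way to address this (a sketch only, not the merged fix; FakeLibvirtError stands in for libvirt.libvirtError) is to treat the "operation is not valid/supported" case as an expected condition and log it at DEBUG:

```python
import logging

log = logging.getLogger("nova.virt.libvirt.driver")

class FakeLibvirtError(RuntimeError):
    """Stand-in for libvirt.libvirtError in this sketch."""

def vcpu_count(domain):
    try:
        return domain.vcpus()
    except FakeLibvirtError as exc:
        msg = str(exc)
        if "not valid" in msg or "not supported" in msg:
            # Expected on hosts without cpu affinity support: log at
            # DEBUG, not WARNING, so gate logs aren't flooded.
            log.debug("couldn't obtain the vcpu count: %s", exc)
        else:
            log.warning("couldn't obtain the vcpu count: %s", exc)
        return None
```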

** Affects: nova
 Importance: Medium
 Status: Confirmed


** Tags: libvirt low-hanging-fruit

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1622270

Title:
  libvirt: excessive warning logs like "couldn't obtain the vcpu count
  from domain id: 6bcf0dcb-722f-44a3-a0aa-3fa42dd1075a, exception:
  Requested operation is not valid: cpu affinity is not supported"

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  These warnings show up a ton which means they probably shouldn't be
  warnings:

  http://logs.openstack.org/79/368079/2/check/gate-tempest-dsvm-neutron-
  multinode-full-ubuntu-
  xenial/9a1a7f0/logs/screen-n-cpu.txt.gz?level=TRACE

  2016-09-10 21:29:16.759 20105 WARNING nova.virt.libvirt.driver [req-
  386e6a64-0d9c-48ec-bab9-4acabdf55265 - -] couldn't obtain the vcpu
  count from domain id: 6bcf0dcb-722f-44a3-a0aa-3fa42dd1075a, exception:
  Requested operation is not valid: cpu affinity is not supported

  
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22couldn't%20obtain%20the%20vcpu%20count%20from%20domain%20id%5C%22%20AND%20message%3A%5C%22exception%3A%20Requested%20operation%20is%20not%20valid%3A%20cpu%20affinity%20is%20not%20supported%5C%22%20AND%20tags%3A%5C%22screen-n-cpu.txt%5C%22&from=7d

  220K+ times in 7 days in the gate so this is excessive.

  The warning is here:

  
https://github.com/openstack/nova/blob/e92d7537d67c7eb0e49415b429d56be762e02679/nova/virt/libvirt/driver.py#L4922

  If cpu affinity is not supported, we shouldn't log that as a warning.

  I'm not sure if this is something with the guest configuration or with
  the version of libvirt/qemu on the host, we might need to ask danpb in
  the #openstack-nova irc channel.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1622270/+subscriptions



[Yahoo-eng-team] [Bug 1595805] Re: Instance do not boot.

2016-09-10 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1595805

Title:
  Instance do not boot.

Status in OpenStack Compute (nova):
  Expired

Bug description:
  I am trying this on the Mitaka version.

  When attempting to boot an instance by running the following command:

  # nova boot --flavor 1 --key-name test-key2 --image 0f78c38d-b6bd-
  4b42-9ee5-32552f72fc44 test_instance

  An error such as the following is returned:

  
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-c0928fee-97a6-483d-aecd-f870cf94941b)
  

  Please tell me the solution.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1595805/+subscriptions



[Yahoo-eng-team] [Bug 1463200] Re: check-tempest-dsvm-multinode-full fails due to "Failed to compute_task_migrate_server: No valid host was found"

2016-09-10 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1463200

Title:
  check-tempest-dsvm-multinode-full fails due to "Failed to
  compute_task_migrate_server: No valid host was found"

Status in OpenStack Compute (nova):
  Expired

Bug description:
  http://logs.openstack.org/81/181781/3/check/check-tempest-dsvm-
  multinode-full/e13a3a8/logs/screen-n-cond.txt.gz?level=TRACE

  2015-06-08 20:37:00.205 WARNING nova.scheduler.utils 
[req-73a672be-7fe7-4bb4-a13c-6c1bbe6eb3d6 ServerDiskConfigTestJSON-326631053 
ServerDiskConfigTestJSON-280407802] Failed to compute_task_migrate_server: No 
valid host was found. There are not enough hosts available.
  Traceback (most recent call last):

File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", 
line 142, in inner
  return func(*args, **kwargs)

File "/opt/stack/new/nova/nova/scheduler/manager.py", line 86, in 
select_destinations
  filter_properties)

File "/opt/stack/new/nova/nova/scheduler/filter_scheduler.py", line 89, in 
select_destinations
  raise exception.NoValidHost(reason=reason)

  NoValidHost: No valid host was found. There are not enough hosts
  available.

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRmFpbGVkIHRvIGNvbXB1dGVfdGFza19taWdyYXRlX3NlcnZlcjogTm8gdmFsaWQgaG9zdCB3YXMgZm91bmRcIiBBTkQgdGFnczpcInNjcmVlbi1uLWNvbmQudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MzM4MDMzMjU1ODN9

  There really isn't much in the compute logs for errors except this:

  http://logs.openstack.org/81/181781/3/check/check-tempest-dsvm-
  multinode-
  full/e13a3a8/logs/screen-n-cpu.txt.gz?level=TRACE#_2015-06-08_20_24_45_822

  2015-06-08 20:24:45.822 ERROR nova.compute.manager [req-5e3903be-
  18a7-4aa4-b874-f7116bac6e43 None None] No compute node record for host
  devstack-trusty-2-node-rax-iad-2929470.slave.openstack.org

  But that's not even at the same time.

  I see this in the n-api logs around the same time as the failures:

  http://logs.openstack.org/81/181781/3/check/check-tempest-dsvm-
  multinode-
  full/e13a3a8/logs/screen-n-api.txt.gz?level=TRACE#_2015-06-08_20_35_01_485

  2015-06-08 20:35:01.485 WARNING nova.compute.api [req-eec9e9fb-8794
  -47cc-a67a-f60b3dd85ab4 MigrationsAdminTest-1879224794
  MigrationsAdminTest-93634600] [instance: 24d02424-1af8-43b8-941c-
  507806a22e79] instance's host devstack-trusty-2-node-rax-
  iad-2929470.slave.openstack.org is down, deleting from database

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1463200/+subscriptions



[Yahoo-eng-team] [Bug 1445305] Re: the “admin”tenant can't show the server-group which created by common tenants

2016-09-10 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1445305

Title:
  the “admin”tenant  can't  show the server-group which created by
  common tenants

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Version: Icehouse

  I have two tenants: the admin tenant and a common tenant called test.
  I created a server group called antiaffinitygroup under the test tenant,
  but nothing is listed when I log in as the admin tenant and list the
server groups.

  ex:
  [root@njq002 ~(keystone_test)]# nova server-group-list
  
  +--------------------------------------+-------------------+--------------------+---------+----------+
  | Id                                   | Name              | Policies           | Members | Metadata |
  +--------------------------------------+-------------------+--------------------+---------+----------+
  | 059bf32e-f416-4a27-b653-d78a147add80 | antiaffinitygroup | [u'anti-affinity'] | []      | {}       |
  +--------------------------------------+-------------------+--------------------+---------+----------+

  [root@njq002 ~(keystone_admin)]# nova server-group-list
  +----+------+----------+---------+----------+
  | Id | Name | Policies | Members | Metadata |
  +----+------+----------+---------+----------+
  +----+------+----------+---------+----------+

  This can lead to a problem:
  In the admin tenant, a resize or migration of a VM that belongs to a
common tenant will ignore any policies of its server group.
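The behavior is consistent with a listing query scoped to the caller's project; a toy sketch of that filtering (hypothetical data model, not nova's actual code):

```python
# Each server group is owned by the project that created it.
server_groups = [
    {"id": "059bf32e-f416-4a27-b653-d78a147add80",
     "name": "antiaffinitygroup",
     "project_id": "test"},
]

def list_server_groups(project_id):
    # Project-scoped filter: the admin project sees none of the groups
    # created by other tenants, matching the empty listing above.
    return [g for g in server_groups if g["project_id"] == project_id]
```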

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1445305/+subscriptions



[Yahoo-eng-team] [Bug 1456676] Re: Booting an instance with --block-device fails when it should be successful

2016-09-10 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1456676

Title:
  Booting an instance with --block-device fails when it should be
  successful

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Version:
  Kilo 2015.1.0

  openstack-nova-compute-2015.1.0-3.el7ost.noarch
  openstack-nova-common-2015.1.0-3.el7ost.noarch
  openstack-nova-console-2015.1.0-3.el7ost.noarch
  openstack-nova-scheduler-2015.1.0-3.el7ost.noarch
  python-nova-2015.1.0-3.el7ost.noarch
  openstack-nova-conductor-2015.1.0-3.el7ost.noarch
  openstack-nova-cert-2015.1.0-3.el7ost.noarch
  python-novaclient-2.23.0-1.el7ost.noarch
  openstack-nova-novncproxy-2015.1.0-3.el7ost.noarch
  openstack-nova-api-2015.1.0-3.el7ost.noarch

  Description:
  When trying to boot an instance with a block device it fails:

  nova --debug boot --flavor m1.small --image cirros --nic net-id
  =397cffaf-bf84-4a3b-8af4-95dd772546bd --block-device
  
source=volume,dest=volume,id=64cf7206-e327-44a8-bc93-fc432b4a4522,bus=ide,type=cdrom,bootindex=1
  test

  ERROR (BadRequest): Block Device Mapping is Invalid: Boot sequence for
  the instance and image/block device mapping combination is not valid.
  (HTTP 400) (Request-ID: req-5c85f777-348b-42d9-a618-ff2003968291)

  After a quick check, this bug seems related to
https://bugs.launchpad.net/nova/+bug/1433609, but the fix in the python
client (https://review.openstack.org/#/c/153203/) is not present in
python-novaclient-2.23.0-1.el7ost.noarch, which causes the same problem
described above.

  Steps to reproduce:
  1. Create a bootable volume
  2. Launch an instance with the following command: nova boot --flavor
<flavor> --image <image> --nic net-id=<net-id> --block-device
source=volume,dest=volume,id=<volume-id>,bus=ide,type=cdrom,bootindex=1 <name>
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1456676/+subscriptions



[Yahoo-eng-team] [Bug 1428344] Re: Detaching iscsi lvm block volumes fails to remove luns on nova cpu host

2016-09-10 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1428344

Title:
  Detaching iscsi lvm block volumes fails to remove luns on nova cpu
  host

Status in OpenStack Compute (nova):
  Expired

Bug description:
  On kilo master detaching iscsi lvm block volumes fails to remove luns
  on nova cpu host

  How to reproduce with the LVM backend:
  Ubuntu 14.04
  Clone devstack
  Create volume volume1_lvm of volume type lvmdriver-1
  Attach volume1_lvm to nova instance nst1
  ll /dev/disk/by-path/
  lrwxrwxrwx 1 root root   9 Mar  4 12:59 
ip-10.50.128.22:3260-iscsi-iqn.2010-10.org.openstack:volume-45451822-f305-406c-9eec-088b7e432af5-lun-1
 -> ../../sdg

  Detach volume1_lvm
  ll /dev/disk/by-path/
  lrwxrwxrwx 1 root root   9 Mar  4 12:59 
ip-10.50.128.22:3260-iscsi-iqn.2010-10.org.openstack:volume-45451822-f305-406c-9eec-088b7e432af5-lun-1
 -> ../../sdg

  Seems to be related to the following libvirt code:
  
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/volume.py#L568-L576

  Attached is n-cpu.log

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1428344/+subscriptions



[Yahoo-eng-team] [Bug 1321946] Re: MagicMocks aren't using specs so typos pass tests

2016-09-10 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1321946

Title:
  MagicMocks aren't using specs so typos pass tests

Status in neutron:
  Expired

Bug description:
  Many MagicMocks currently aren't using spec or autospec, so they respond
to any function call.

  The downside is that calls to non-existent assertion methods silently
pass instead of failing. An example of how these cases build up is
available in the patch for the related bug [1].

  1. https://bugs.launchpad.net/neutron/+bug/1320774
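A minimal illustration of the problem and the spec-based fix (Service is a made-up class for this example):

```python
from unittest import mock

class Service:
    def start(self):
        pass

# Without a spec, a typo'd method name is silently accepted.
loose = mock.MagicMock()
loose.strat()            # note the typo: no error is raised

# With spec (or autospec), the same typo fails loudly.
strict = mock.MagicMock(spec=Service)
try:
    strict.strat()
    typo_caught = False
except AttributeError:
    typo_caught = True
```

mock.create_autospec() goes further and also validates call signatures against the real object.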

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1321946/+subscriptions



[Yahoo-eng-team] [Bug 1317868] Re: SYMC: LBAAS :Member with protocol port as HTTPS is not coming in active state for Lbaas site

2016-09-10 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1317868

Title:
  SYMC: LBAAS :Member with protocol port as HTTPS is not coming in
  active state for Lbaas site

Status in neutron:
  Expired

Bug description:
  Steps to Reproduce:
  Setup on Icehouse GA:
  Build: 2014.1-0ubuntu1~cloud0
  1. Create an LBaaS site with one client and two web servers.
  2. Create a pool with protocol HTTPS
  
  +--------------------------------------+-------+-------------+----------+----------------+--------+
  | id                                   | name  | lb_method   | protocol | admin_state_up | status |
  +--------------------------------------+-------+-------------+----------+----------------+--------+
  | aebd47e9-d125-49f6-aef2-2a97954c051a | pool1 | ROUND_ROBIN | HTTPS    | True           | ACTIVE |
  +--------------------------------------+-------+-------------+----------+----------------+--------+

  3. Create a health monitor for type https
  +--+---++
  | id   | type  | admin_state_up |
  +--+---++
  | affb69f6-e6b0-4f34-ab42-a89acecc60b3 | HTTPS | True   |
  +--+---++

  4. Create vip with protocol port as 443 and protocol as HTTPS
  
  +--------------------------------------+------+-----------+----------+----------------+--------+
  | id                                   | name | address   | protocol | admin_state_up | status |
  +--------------------------------------+------+-----------+----------+----------------+--------+
  | 405609a8-0e50-4d2c-a7b5-ab206290303d |      | 10.10.1.6 | HTTPS    | True           | ACTIVE |
  +--------------------------------------+------+-----------+----------+----------------+--------+
  5. list the lbaas member status:
  
  +--------------------------------------+-----------+---------------+----------------+----------+
  | id                                   | address   | protocol_port | admin_state_up | status   |
  +--------------------------------------+-----------+---------------+----------------+----------+
  | dd7a7f63-9218-4d21-b8f8-967018a3ae9f | 10.10.1.4 | 443           | True           | INACTIVE |
  | f844e226-4c3f-4351-9750-7cb14f614c3c | 10.10.1.5 | 443           | True           | INACTIVE |
  +--------------------------------------+-----------+---------------+----------------+----------+

  Actual Results:
  list the lbaas member status:
  
  +--------------------------------------+-----------+---------------+----------------+----------+
  | id                                   | address   | protocol_port | admin_state_up | status   |
  +--------------------------------------+-----------+---------------+----------------+----------+
  | dd7a7f63-9218-4d21-b8f8-967018a3ae9f | 10.10.1.4 | 443           | True           | INACTIVE |
  | f844e226-4c3f-4351-9750-7cb14f614c3c | 10.10.1.5 | 443           | True           | INACTIVE |
  +--------------------------------------+-----------+---------------+----------------+----------+

  Expected Results: Members should come into the active state after the
  successful creation of the LBaaS configuration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1317868/+subscriptions



[Yahoo-eng-team] [Bug 1346778] Re: Neutron does not work by default without a keystone admin user

2016-09-10 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1346778

Title:
  Neutron does not work by default without a keystone admin user

Status in Ceilometer:
  Incomplete
Status in neutron:
  Expired

Bug description:
  The default neutron policy.json 'context_is_admin' only matches on
  'role:admin' and the account that neutron is configured with must
  match 'context_is_admin' for neutron to function correctly. This means
  that without modifying policy.json, a deployer cannot use a non-admin
  account for neutron.

  The policy.json keywords have no way to match the username of the
  neutron keystone credentials. This means that policy.json has to be
  modified for every deployment that doesn't use an admin user to match
  the keystone user neutron is configured with.

  This seems like an unnecessary burden to leave to deployers to achieve
  a secure deployment.
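  For illustration, the default rule the report refers to is
  "context_is_admin": "role:admin", and a deployer who runs neutron
  under a non-admin account must edit it per deployment, e.g. (the
  'neutron_service' role name here is a hypothetical example, not a
  neutron default):

```json
{
    "context_is_admin": "role:admin or role:neutron_service"
}
```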

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1346778/+subscriptions



[Yahoo-eng-team] [Bug 1212338] Re: Auto-created ports count against port quota

2016-09-10 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1212338

Title:
  Auto-created ports count against port quota

Status in neutron:
  Expired

Bug description:
  Ports that are created for a new network -- lots of them in my
  experience (one for the L3 agent the tenant router is assigned to,
  and one for each compute worker in the AZ!) -- count against the
  tenant's port quota. In practical terms, even for a small AZ with 16
  compute workers, a tenant whose default network, subnet, and router
  quota is 10 won't actually be able to create more than a couple of
  networks and floating IP addresses, since the port quota would be
  exhausted almost immediately.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1212338/+subscriptions



[Yahoo-eng-team] [Bug 1312257] Re: ovs agent: provision_local_vlan shouldn't set-up flooding flows when l2pop is activated

2016-09-10 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1312257

Title:
  ovs agent: provision_local_vlan shouldn't set-up flooding flows when
  l2pop is activated

Status in neutron:
  Expired

Bug description:
  When the l2-population option is activated, provision_local_vlan
  shouldn't set up a flow that floods packets out every tunnel port.
  This is not a big issue, as the corresponding flow is overwritten as
  soon as the first fdb_add call is received, but it is unnecessary
  and should be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1312257/+subscriptions



[Yahoo-eng-team] [Bug 1220606] Re: LBaaS: apply default provider for existing pools

2016-09-10 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1220606

Title:
  LBaaS: apply default provider for existing pools

Status in neutron:
  Expired

Bug description:
  For pools that were created before service provider support was
  added, the 'provider' field is empty. This causes a neutron start
  failure with the following error:

  Delete associated loadbalancer pools before removing providers ['']

  I think that for such pools we can safely apply the default provider
  (haproxy) on start, as it is the only provider supported at the moment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1220606/+subscriptions



[Yahoo-eng-team] [Bug 1250105] Re: Cannot assign a floating IP to an instance that is reachable by an extra route

2016-09-10 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1250105

Title:
  Cannot assign a floating IP to an instance that is reachable by an
  extra route

Status in neutron:
  Expired

Bug description:
  Consider the following network topology:

  Public network --- router1 --- [multi-homed-instance] --- [internal-instance]

  In this topology, we have:
  - a public network
  - a router (router1)
  - an intermediate network (and a subnet)
  - an internal network (and a subnet)
  - a multi-homed instance with ports on both the intermediate network and the 
internal network
  - an internal-instance with a port connected to the internal network

  We are trying to create a network topology in which traffic to/from
  the internal network must go through the multi-homed-instance for
  security and other reasons.

  For the sake of this example we will use the following addresses:
  - Public network: 172.24.4.0/24
  - Intermediate subnet: 10.1.0.0/24
  - Internal subnet: 10.2.0.0/24
  - Router: 172.24.4.226 (public), 10.1.0.1 (intermediate)
  - multi-homed-instance: 10.1.0.10 (intermediate), 10.2.0.10 (internal)
  - internal-instance: 10.2.0.81 (internal)

  The default route on the internal subnet is set to 10.2.0.10 (the
  multi-homed-instance)

  Using the extra route extension, the router is configured to route all
  traffic to 10.2.0.0/24 through 10.1.0.10:
  neutron router-update router --routes type=dict list=true nexthop=10.1.0.10,destination=10.2.0.0/24

  When trying to allocate a floating IP address and assign it to the
  internal host, we get an exception:
  404-{u'NeutronError': {u'message': u'External network a470bb7f-e06d-4214-a1bb-a8ec7727db23 is not reachable from subnet 10b47bbd-4ae4-4dbf-98ce-3f5bb9d7a081.  Therefore, cannot associate Port 75cddce2-f865-45e9-99c5-8a0a6902ecfc with a Floating IP.', u'type': u'ExternalGatewayForFloatingIPNotFound', u'detail': u''}}

  
  The attached script can be used to recreate this bug.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1250105/+subscriptions



[Yahoo-eng-team] [Bug 1269135] Re: ML2 unit test coverage - l2pop

2016-09-10 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1269135

Title:
  ML2 unit test coverage - l2pop

Status in neutron:
  Expired

Bug description:
  From tox -e cover; coverage report -m

  neutron/plugins/ml2/drivers/l2pop/db             38      4      2      2    85%   32-34, 41
  neutron/plugins/ml2/drivers/l2pop/rpc            31      2     12      4    86%   73, 82
  neutron/plugins/ml2/drivers/l2pop/mech_driver   124      7     43      6    92%   71, 107, 112, 116-118, 172-175
  neutron/plugins/ml2/drivers/l2pop/config          3      0      0      0   100%
  neutron/plugins/ml2/drivers/l2pop/constants       2      0      0      0   100%
  neutron/plugins/ml2/drivers/l2pop/__init__        0      0      0      0   100%

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1269135/+subscriptions



[Yahoo-eng-team] [Bug 1209301] Re: neutron-manage db-sync with sqlite db fails

2016-09-10 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1209301

Title:
  neutron-manage db-sync with sqlite db  fails

Status in neutron:
  Expired

Bug description:
  We are building packages for Debian and we ran into an issue when
  running the "neutron-manage db-sync" command with the sqlite backend.
  I am using sqlalchemy release 0.7.8-1.

  Here is the complete log :
  http://paste.openstack.org/show/rYbM5FwjzFeJCGAv0b7F/


  Note: It works with MySQL backend.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1209301/+subscriptions



[Yahoo-eng-team] [Bug 1200585] Re: floatingip-create doesn't check if tenant_id exists or enabled

2016-09-10 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1200585

Title:
  floatingip-create doesn't check if tenant_id exists or enabled

Status in neutron:
  Expired

Bug description:
  how to reproduce:
  $ quantum floatingip-create --tenant-id 111 public

  Created a new floatingip:
  +-+--+
  | Field   | Value|
  +-+--+
  | fixed_ip_address|  |
  | floating_ip_address | 172.24.4.231 |
  | floating_network_id | 05fa4ce3-b834-40d1-bf9b-4794f057f40b |
  | id  | 520b88c2-3d70-4698-aef5-620275e50cf8 |
  | port_id |  |
  | router_id   |  |
  | tenant_id   | 111  |
  +-+--+

  Expected result:
  HTTP 404, because the tenant-id doesn't exist.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1200585/+subscriptions



[Yahoo-eng-team] [Bug 1194206] Re: allow_overlapping_ips=False fails for Concurrent requests

2016-09-10 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1194206

Title:
  allow_overlapping_ips=False fails for Concurrent requests

Status in neutron:
  Expired

Bug description:
  In my use case if i set allow_overlapping_ips=False i expect no
  duplicate CIDR's . This works fine. But under concurrent requests this
  fails. It creates duplicate CIDR's

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1194206/+subscriptions



[Yahoo-eng-team] [Bug 1044085] Re: automated test to detect missing rootwrap filters

2016-09-10 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1044085

Title:
  automated test to detect missing rootwrap filters

Status in neutron:
  Expired

Bug description:
  this is not specific to quantum, so we may want to coordinate with
  other projects, but we have a strong need to be able to detect when a
  new command has been added to the quantum codebase without the
  corresponding filters being added.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1044085/+subscriptions



[Yahoo-eng-team] [Bug 1031473] Re: Only admin should update device_id

2016-09-10 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1031473

Title:
  Only admin should update device_id

Status in neutron:
  Expired

Bug description:
  In the current implementation, device_id can be updated by a non-admin
  user. A policy check for device_id is also needed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1031473/+subscriptions



[Yahoo-eng-team] [Bug 1620279] Re: Allow metadata agent to make calls to more than one nova_metadata_ip

2016-09-10 Thread Gary Kotton
I do not think that this is a bug. The nova-api IP that is configured can
be the address of a VIP fronting multiple nova-api services.
We do not need to reinvent the wheel here.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1620279

Title:
  Allow metadata agent to make calls to more than one nova_metadata_ip

Status in neutron:
  Won't Fix

Bug description:
  Currently the metadata agent config has an option to set the IP
  address of the nova metadata service (nova_metadata_ip).
  There can be more than one nova-api service in a cluster, and in that
  case, if the configured nova metadata IP returns e.g. error 500, that
  error is returned to the instance, even though all of the other
  nova-api services may be working fine and a call to another Nova
  service would return proper metadata.

  So the proposal is to change the nova_metadata_ip string option to a
  list of IP addresses, and to change the metadata agent so that it
  tries one of the configured Nova services. If the response from that
  Nova service is not 200, the agent tries the next Nova service. If
  the responses from all Nova services fail, it returns the lowest
  error code received from Nova (for example, if nova-api-1 returned
  500 and nova-api-2 returned 404, the agent returns response 404 to
  the VM).
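  The proposed fallback can be sketched as follows. This is an
  illustrative outline only, not the agent's actual code; `fetch` is a
  hypothetical callable standing in for the HTTP request to a single
  nova-api metadata endpoint:

```python
# Illustrative sketch of the proposed behavior (not neutron code).
# `fetch` is a hypothetical callable: fetch(ip) -> (status, body).

def proxy_metadata(nova_metadata_ips, fetch):
    """Try each nova-api endpoint in order; return the first 200
    response, otherwise the lowest error status seen overall."""
    failures = []
    for ip in nova_metadata_ips:
        status, body = fetch(ip)
        if status == 200:
            return status, body
        failures.append((status, body))
    # All endpoints failed: e.g. 500 from nova-api-1 and 404 from
    # nova-api-2 -> return the 404 to the VM, per the proposal.
    return min(failures, key=lambda result: result[0])

# Example: the first endpoint errors, the second succeeds.
responses = {"10.0.0.1": (500, "error"), "10.0.0.2": (200, "metadata")}
status, body = proxy_metadata(["10.0.0.1", "10.0.0.2"],
                              lambda ip: responses[ip])
```

  A real implementation would also need per-endpoint timeouts and
  would have to decide whether non-2xx-but-non-error responses (e.g.
  redirects) count as failures.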

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1620279/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp