[Yahoo-eng-team] [Bug 2084451] [NEW] unshelve to specific host produce inconsistent instance state

2024-10-14 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

{
  "server": {
"id": "61f14c91-d8da-44da-b7c0-96a076223692",
"name": "tempest-UnshelveToHostMultiNodesTest-server-315369380",
"status": "ACTIVE",## <---
"tenant_id": "593e419dc31b4918ab810919838d0deb",
"user_id": "4928347555cf465a95a11362addae648",
"metadata": {},
"hostId": "",
"image": {
  "id": "76dbcff9-5149-4590-8130-6ac5332731b6",
  "links": [
{
  "rel": "bookmark",
  "href": 
"https://10.0.18.136/compute/images/76dbcff9-5149-4590-8130-6ac5332731b6";
}
  ]
},
"flavor": {
  "vcpus": 1,
  "ram": 192,
  "disk": 1,
  "ephemeral": 0,
  "swap": 0,
  "original_name": "m1.nano",
  "extra_specs": {
"hw_rng:allowed": "True"
  }
},
"created": "2024-10-11T16:39:52Z",
"updated": "2024-10-11T16:40:51Z",
"addresses": {},
"accessIPv4": "",
"accessIPv6": "",
"links": [
  {
"rel": "self",
"href": 
"https://10.0.18.136/compute/v2.1/servers/61f14c91-d8da-44da-b7c0-96a076223692";
  },
  {
"rel": "bookmark",
"href": 
"https://10.0.18.136/compute/servers/61f14c91-d8da-44da-b7c0-96a076223692";
  }
],
"OS-DCF:diskConfig": "MANUAL",
"progress": 0,
"OS-EXT-AZ:availability_zone": "",
"config_drive": "True",
"key_name": null,
"OS-SRV-USG:launched_at": "2024-10-11T16:40:50.00",
"OS-SRV-USG:terminated_at": null,
"OS-EXT-SRV-ATTR:host": null,  ## <-
"OS-EXT-SRV-ATTR:instance_name": "instance-000b",
"OS-EXT-SRV-ATTR:hypervisor_hostname": null,
"OS-EXT-SRV-ATTR:reservation_id": "r-twvaw7yx",
"OS-EXT-SRV-ATTR:launch_index": 0,
"OS-EXT-SRV-ATTR:hostname": 
"tempest-unshelvetohostmultinodestest-server-315369380",
"OS-EXT-SRV-ATTR:kernel_id": "",
"OS-EXT-SRV-ATTR:ramdisk_id": "",
"OS-EXT-SRV-ATTR:root_device_name": "/dev/vda",
"OS-EXT-SRV-ATTR:user_data": null,
"OS-EXT-STS:task_state": null,
"OS-EXT-STS:vm_state": "active",
"OS-EXT-STS:power_state": 1,
"os-extended-volumes:volumes_attached": [],
"host_status": "",
"locked": false,
"locked_reason": null,
"description": null,
"tags": [],
"trusted_image_certificates": null,
"server_groups": []
  }
}

After an unshelve to a specific host, nova reports the VM as ACTIVE but
the host field is empty. That should not happen.
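The inconsistency can be stated as a simple invariant check; a minimal sketch (field names taken from the API response above) of what the report says is violated:

```python
def host_consistent(server: dict) -> bool:
    # Invariant the report describes: an ACTIVE server should always
    # have a compute host assigned.
    if server.get("status") == "ACTIVE":
        return server.get("OS-EXT-SRV-ATTR:host") is not None
    return True

# The response above violates the invariant:
print(host_consistent({"status": "ACTIVE", "OS-EXT-SRV-ATTR:host": None}))  # False
```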

It is visible randomly in tempest executions, e.g.
https://219c21f0d93c3f0999a0-b2aa11a8e0c554ee7a8f8052466e6a93.ssl.cf2.rackcdn.com/928590/6/check/nova-next/0f822c2/testr_results.html

** Affects: nova
 Importance: Undecided
 Status: New

-- 
unshelve to specific host produce inconsistent instance state
https://bugs.launchpad.net/bugs/2084451
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2073872] Re: Neutron L3 agent not create ECMP in HA router but in single mode is ok

2024-10-04 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2073872

Title:
  Neutron L3 agent not create ECMP in HA router but in single mode is ok

Status in neutron:
  Expired

Bug description:
  Openstack version: Yoga
  OS: Ubuntu 20.04
  Deployment with kolla-ansible multinode.

  Following: https://specs.openstack.org/openstack/neutron-specs/specs/wallaby/l3-router-support-ecmp.html

  I used command: `openstack router add route --route
  destination=10.10.10.12/32,gateway=10.10.10.53 --route
  destination=10.10.10.12/32,gateway=10.10.10.42
  fd19eef2-0cc6-4e91-8a30-0a7e4c492c3d`

  In L3 HA mode, the route table in the router is:

  ```
  10.10.10.0/24 dev qr-84f2eaa6-d9 proto kernel scope link src 10.10.10.1
  10.10.10.12 via 10.10.10.53 dev qr-84f2eaa6-d9 proto 112
  10.10.10.12 via 10.10.10.42 dev qr-84f2eaa6-d9 proto 112
  10.211.0.0/16 dev qr-62eaee5e-6a proto kernel scope link src 10.211.3.236
  169.254.0.0/24 dev ha-8810cfd1-6c proto kernel scope link src 169.254.0.221
  169.254.192.0/18 dev ha-8810cfd1-6c proto kernel scope link src 
169.254.192.101
  ```

  Besides, when I delete the first route (of the ECMP route) with the command:

  `openstack router remove route --route
  destination=10.10.10.12/32,gateway=10.10.10.53
  fd19eef2-0cc6-4e91-8a30-0a7e4c492c3d`

  the response is OK, but the route is not deleted in the router
  namespace. Next I delete the last route; the response is OK again and
  that route is deleted. So only the first route still exists.

  
  Following the add-route commands above, in L3 single mode (not HA) the
  route table in the router is:

  ```
  10.0.0.0/24 dev qr-6a3a4bca-cb proto kernel scope link src 10.0.0.1
  10.0.0.108 proto static
  nexthop via 10.0.0.185 dev qr-6a3a4bca-cb weight 1
  nexthop via 10.0.0.168 dev qr-6a3a4bca-cb weight 1
  ```
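For comparison, the multipath route shown above is what a single `ip route replace` with several `nexthop` clauses produces, whereas the HA-mode table has two separate routes for the same destination. A small illustrative sketch (not neutron code) that builds such a command:

```python
def multipath_route_cmd(dest, nexthops, dev):
    # Build one ECMP (multipath) route command, as opposed to adding
    # two separate routes for the same destination (the HA-mode symptom).
    cmd = ["ip", "route", "replace", dest, "proto", "static"]
    for gw in nexthops:
        cmd += ["nexthop", "via", gw, "dev", dev, "weight", "1"]
    return cmd

cmd = multipath_route_cmd("10.0.0.108/32", ["10.0.0.185", "10.0.0.168"],
                          "qr-6a3a4bca-cb")
print(" ".join(cmd))
```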

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2073872/+subscriptions




[Yahoo-eng-team] [Bug 2077533] Re: An error in processing one DVR router can lead to connectivity issues for other routers

2024-10-22 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2077533

Title:
  An error in processing one DVR router can lead to connectivity issues
  for other routers

Status in neutron:
  Expired

Bug description:
  I investigated the customer's issue and concluded that this code:
  https://opendev.org/openstack/neutron/src/commit/0807c94dc9843fff318c21d1f6f7b8838f948f5f/neutron/agent/l3/dvr_fip_ns.py#L155-L160
  which deletes the fip-namespace during router processing, leads to
  connectivity problems for other routers. Deleting the fip-namespace also
  removes the rfp/fpr veth pairs of the other routers, but those other
  routers are not reprocessed. As a result, every router except the one
  that triggered the deletion is left without its rfp/fpr veth pair.

  The issue might be difficult to trigger, so I'll demonstrate it with a
  small hack:

  --- a/neutron/agent/l3/dvr_fip_ns.py
  +++ b/neutron/agent/l3/dvr_fip_ns.py
  @@ -151,6 +151,11 @@ class FipNamespace(namespaces.Namespace):
           try:
               self._update_gateway_port(
                   agent_gateway_port, interface_name)
  +            if getattr(self, 'test_fail', False):
  +                self.test_fail = False
  +                raise Exception('Test Fail')
  +            else:
  +                self.test_fail = True
           except Exception:
               # If an exception occurs at this point, then it is
               # good to clean up the namespace that has been created

  
  1) I create two routers with the same external network:

  [root@devstack0 ~]# openstack router create r1 --external-gateway public -c id
  +-------+--------------------------------------+
  | Field | Value                                |
  +-------+--------------------------------------+
  | id    | 25085e63-45a6-4795-93dc-77cb245664d7 |
  +-------+--------------------------------------+
  [root@devstack0 ~]# openstack router create r2 --external-gateway public -c id
  +-------+--------------------------------------+
  | Field | Value                                |
  +-------+--------------------------------------+
  | id    | 3805cd53-5fed-4fa3-9147-f396761fc9cd |
  +-------+--------------------------------------+
  [root@devstack0 ~]# ip netns exec fip-dad747c6-c234-41e3-ae27-c9602b81fbd2 ip a
  1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
 
  2: fpr-25085e63-4@if2:  mtu 1450 qdisc 
noqueue state UP group default qlen 1000
  link/ether 8e:e9:5e:65:9c:ad brd ff:ff:ff:ff:ff:ff link-netns 
qrouter-25085e63-45a6-4795-93dc-77cb245664d7
  inet 169.254.120.3/31 scope global fpr-25085e63-4
 valid_lft forever preferred_lft forever
  inet6 fe80::8ce9:5eff:fe65:9cad/64 scope link
 valid_lft forever preferred_lft forever
  3: fpr-3805cd53-5@if2:  mtu 1450 qdisc 
noqueue state UP group default qlen 1000
  link/ether 12:e1:bf:02:98:e0 brd ff:ff:ff:ff:ff:ff link-netns 
qrouter-3805cd53-5fed-4fa3-9147-f396761fc9cd
  inet 169.254.77.247/31 scope global fpr-3805cd53-5
 valid_lft forever preferred_lft forever
  inet6 fe80::10e1:bfff:fe02:98e0/64 scope link
 valid_lft forever preferred_lft forever
  68: fg-21441edb-3f:  mtu 1450 qdisc noqueue 
state UNKNOWN group default qlen 1000
  link/ether fa:16:3e:e4:ea:e1 brd ff:ff:ff:ff:ff:ff
  inet 10.20.30.95/24 brd 10.20.30.255 scope global fg-21441edb-3f
 valid_lft forever preferred_lft forever
  inet6 fe80::f816:3eff:fee4:eae1/64 scope link
 valid_lft forever preferred_lft forever
  [root@devstack0 ~]#

  2) I trigger an update of router r1 with a failure (see hack), which
  leads to the deletion of the fip-namespace and reprocessing of this
  router. Updating r1 causes the loss of the veth rfp/fpr pair for
  router r2, thus breaking router r2.

  [root@devstack0 ~]# openstack router set r1 --name r1-updated
  [root@devstack0 ~]# ip netns exec fip-dad747c6-c234-41e3-ae27-c9602b81fbd2 ip a
  1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
  inet6 ::1/128 scope host
 valid_lft forever preferred_lft forever
  2: fpr-25085e63-4@if3:  mtu 1450 qdisc 
noqueue state UP group default qlen 1000
  link/ether fa:68:ef:86:96:a5 brd ff:ff:ff:ff:ff:ff link-netns 
qrouter-25085e63-45a6-4795-93dc-77cb245664d7
  inet 169.254.120.3/31 scope global fpr-25085e63-4
 valid_lft forever preferred_lft forever
  inet6 fe80::f868:efff:fe86:96a5/64 scope link
 

[Yahoo-eng-team] [Bug 2054404] Re: Self Signed Certs Cause Metadata cert errors seemingly

2024-10-13 Thread Launchpad Bug Tracker
[Expired for kolla-ansible because there has been no activity for 60
days.]

** Changed in: kolla-ansible
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2054404

Title:
  Self Signed Certs Cause Metadata cert errors seemingly

Status in kolla-ansible:
  Expired
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  ==> /var/log/kolla/nova/nova-metadata-error.log <==
  2024-02-18 00:58:15.029954 AH01909: 
tunninet-server-noel.ny5.lan.tunninet.com:8775:0 server certificate does NOT 
include an ID which matches the server name
  2024-02-18 00:58:16.360069 AH01909: 
tunninet-server-noel.ny5.lan.tunninet.com:8775:0 server certificate does NOT 
include an ID which matches the server name

  I have no cert issues elsewhere, just this. What could cause this?
  Elsewhere the certificate usually has an IP and the FQDN as SANs.

  How can I troubleshoot the root cause?
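The AH01909 message means mod_ssl could not match the server name against any identifier in the certificate. As a rough, simplified sketch of that check (real RFC 6125 matching has more rules; the SAN lists below are hypothetical):

```python
import fnmatch

def name_matches_cert(hostname, san_dns_names):
    # Simplified: the server name must match one of the certificate's
    # SAN DNS entries; fnmatch approximates wildcard matching.
    return any(fnmatch.fnmatch(hostname, san) for san in san_dns_names)

# Hypothetical SAN list that lacks the metadata endpoint's name,
# which is the situation the AH01909 log lines describe:
sans = ["10.0.0.5", "otherhost.example.com"]
print(name_matches_cert("tunninet-server-noel.ny5.lan.tunninet.com", sans))  # False
```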

To manage notifications about this bug go to:
https://bugs.launchpad.net/kolla-ansible/+bug/2054404/+subscriptions




[Yahoo-eng-team] [Bug 2078856] Re: OVN invalid syntax '' in networks

2024-11-04 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2078856

Title:
  OVN invalid syntax '' in networks

Status in neutron:
  Expired

Bug description:
  On a 2023.2 based deployment with OVN as the ML2 driver, I see the following 
in ovn-northd logs every 30 seconds or so:
  2024-09-03T18:34:42.985Z|03113|ovn_util|INFO|invalid syntax '' in networks

  This setup is 3 controller/network nodes in HA with non-distributed
  routing setup. OVN version is 23.09.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2078856/+subscriptions




[Yahoo-eng-team] [Bug 2075559] Re: Unauthorized: The request you have made requires authentication

2024-11-03 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2075559

Title:
   Unauthorized: The request you have made requires authentication

Status in OpenStack Compute (nova):
  Expired

Bug description:
  
  When creating an instance in OpenStack, an error is raised, both on the
  command line and in the web UI. The keystone log shows that the request
  was not authorized, which is the main error. The OpenStack version is
  Train (2019).

  Below is the main error in the keystone log
  (keystone.server.flask.application):
  [req-d28c65c7-cfd0-4625-b787-e2657d64fe36 - - - - -]
  Authorization failed. The request you have made requires
  authentication. from 192.168.119.128: Unauthorized: The request you
  have made requires authentication.

  Below is the nova-api log:
  1: INFO nova.osapi_compute.wsgi.server [req-61f1270f-389d-4907-add3-b31c2024785e
  525b2bb9a9e649beb15da84bbb38 c3b2d39335a248b7bfa65672a49eb7eb - default default]
  192.168.119.128 "POST /v2.1/servers HTTP/1.1" status: 500 len: 755 time:

  2: INFO nova.osapi_compute.wsgi.server [req-e90fbfd2-4652-47ed-8e48-ff9e452f0d04
  525b2bb9a9e649beb15da84bbb38 c3b2d39335a248b7bfa65672a49eb7eb - default default]
  192.168.119.128 "GET /v2.1/flavors/0590885e-8466-4c18-90e0-120ce38aecff/os-extra_specs
  HTTP/1.1" status: 200 len: 417 time: 0.0464470

  3: ERROR nova.api.openstack.wsgi [req-8f66ef9c-d905-45b0-8509-f17491790ae6
  525b2bb9a9e649beb15da84bbb38 c3b2d39335a248b7bfa65672a49eb7eb - default default]
  Unexpected exception in API method: Unauthorized: The request you have
  made requires authentication. (HTTP 401) (Request ID:
  req-d28c65c7-cfd0-4625-b787-e2657d64fe36)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2075559/+subscriptions




[Yahoo-eng-team] [Bug 2065451] Re: Updated image property did not synced to instance

2024-11-03 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2065451

Title:
  Updated image property did not synced to instance

Status in OpenStack Compute (nova):
  Expired

Bug description:
  We created an instance and then rebooted it, but the instance XML
  changed after the reboot; the following lines were added to the XML:

  [XML snippet stripped by the mailing-list archiver; per the
  description below it adds a USB keyboard input device]

  Reading the nova code, we found that the input device info is
  refreshed into the nova DB by the following line:
  nova.virt.libvirt.driver.LibvirtDriver.spawn#self._register_undefined_instance_details(context,
  instance)

  But while the instance is booting,
  nova.virt.libvirt.driver.LibvirtDriver.spawn#xml =
  self._get_guest_xml(context, instance, network_info, disk_info,
  image_meta, block_device_info=block_device_info, mdevs=mdevs,
  accel_info=accel_info) records the XML without the USB keyboard
  setting. When the instance is hard rebooted, nova regenerates the XML
  from the details registered by the earlier line; this may be the
  cause.

  Thank you for looking into it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2065451/+subscriptions




[Yahoo-eng-team] [Bug 2071329] Re: Secure boot is not supported in Openstack Wallaby (Devstack)

2024-11-03 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2071329

Title:
  Secure boot is not supported in Openstack Wallaby (Devstack)

Status in OpenStack Compute (nova):
  Expired

Bug description:
  We tried UEFI boot, which works fine, but when we try secure boot it
  does not work.

  Devstack - wallaby release

  
  ERROR nova.compute.manager [None req-fef7c46e-e122-47ca-856b-197e06d71b9a 
admin admin] [instance: d8aa207a-db1b-4df9-871f-f6657eeba6bd] Instance failed 
to spawn: nova.exception.SecureBootNotSupported: Secure Boot is not supported 
by host
  Traceback (most recent call last):
File "/opt/stack/nova/nova/compute/manager.py", line 2640, in 
_build_resources
  yield resources
File "/opt/stack/nova/nova/compute/manager.py", line 2409, in 
_build_and_run_instance
  self.driver.spawn(context, instance, image_meta,
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 4199, in spawn
  xml = self._get_guest_xml(context, instance, network_info,
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 7049, in 
_get_guest_xml
  conf = self._get_guest_config(instance, network_info, image_meta,
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6662, in 
_get_guest_config
  self._configure_guest_by_virt_type(guest, instance, image_meta, flavor)
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6253, in 
_configure_guest_by_virt_type
  raise exception.SecureBootNotSupported()
  nova.exception.SecureBootNotSupported: Secure Boot is not supported by host

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2071329/+subscriptions




[Yahoo-eng-team] [Bug 2072433] Re: "keystoneauth1.exceptions.http.Unauthorized"

2024-09-18 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2072433

Title:
  "keystoneauth1.exceptions.http.Unauthorized"

Status in OpenStack Compute (nova):
  Expired

Bug description:
  When creating an instance:
  openstack server create --flavor m1 --image cirros --nic net-id=provider 
--security-group default --key-name mykey provider-instance1
  Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ 
and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-b3dd497b-73b9-4ae3-8c15-e1d074f6de33)

  
  2024-07-07 23:15:06.847 52501 INFO nova.osapi_compute.wsgi.server [-] 
10.10.145.144 "GET /v2.1 HTTP/1.1" status: 200 len: 783 time: 0.0013983
  2024-07-07 23:15:07.063 52501 INFO nova.osapi_compute.wsgi.server 
[req-e5313e04-fd68-41f6-97f6-1a786e90173f e74c48113a304c0c8cc057321530f6aa 
8efcd3d0862d4ae7abc4d0febcb76e9b - default default] 10.10.145.144 "POST 
/v2.1/flavors HTTP/1.1" status: 200 len: 789 time: 0.2144532
  2024-07-07 23:15:21.124 52501 INFO nova.api.openstack.wsgi 
[req-4a088bb9-af24-4665-bbac-477db8b10773 e74c48113a304c0c8cc057321530f6aa 
8efcd3d0862d4ae7abc4d0febcb76e9b - default default] HTTP exception thrown: 
Flavor m1 could not be found.
  2024-07-07 23:15:21.125 52501 INFO nova.osapi_compute.wsgi.server 
[req-4a088bb9-af24-4665-bbac-477db8b10773 e74c48113a304c0c8cc057321530f6aa 
8efcd3d0862d4ae7abc4d0febcb76e9b - default default] 10.10.145.144 "GET 
/v2.1/flavors/m1 HTTP/1.1" status: 404 len: 495 time: 0.0072858
  2024-07-07 23:15:21.143 52501 INFO nova.osapi_compute.wsgi.server 
[req-499fffd2-96c2-4bfa-bef4-8791530213f7 e74c48113a304c0c8cc057321530f6aa 
8efcd3d0862d4ae7abc4d0febcb76e9b - default default] 10.10.145.144 "GET 
/v2.1/flavors HTTP/1.1" status: 200 len: 581 time: 0.0164435
  2024-07-07 23:15:21.149 52501 INFO nova.osapi_compute.wsgi.server 
[req-043cb16e-c3d2-408f-af4e-31f78ae529b4 e74c48113a304c0c8cc057321530f6aa 
8efcd3d0862d4ae7abc4d0febcb76e9b - default default] 10.10.145.144 "GET 
/v2.1/flavors/0 HTTP/1.1" status: 200 len: 747 time: 0.0048816
  2024-07-07 23:15:21.244 52501 WARNING oslo_config.cfg 
[req-b3dd497b-73b9-4ae3-8c15-e1d074f6de33 e74c48113a304c0c8cc057321530f6aa 
8efcd3d0862d4ae7abc4d0febcb76e9b - default default] Deprecated: Option 
"api_servers" from group "glance" is deprecated for removal (
  Support for image service configuration via standard keystoneauth1 Adapter
  options was added in the 17.0.0 Queens release. The api_servers option was
  retained temporarily to allow consumers time to cut over to a real load
  balancing solution.
  ).  Its value may be silently ignored in the future.
  2024-07-07 23:15:22.126 52501 ERROR nova.api.openstack.wsgi 
[req-b3dd497b-73b9-4ae3-8c15-e1d074f6de33 e74c48113a304c0c8cc057321530f6aa 
8efcd3d0862d4ae7abc4d0febcb76e9b - default default] Unexpected exception in API 
method: keystoneauth1.exceptions.http.Unauthorized: The request you have made 
requires authentication. (HTTP 401) (Request-ID: 
req-fdede7d2-631b-449c-9f43-992191586cf1)
  2024-07-07 23:15:22.126 52501 ERROR nova.api.openstack.wsgi Traceback (most 
recent call last):
  2024-07-07 23:15:22.126 52501 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python3/dist-packages/nova/api/openstack/wsgi.py", line 658, in 
wrapped
  2024-07-07 23:15:22.126 52501 ERROR nova.api.openstack.wsgi return 
f(*args, **kwargs)
  2024-07-07 23:15:22.126 52501 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python3/dist-packages/nova/api/validation/__init__.py", line 110, in 
wrapper
  2024-07-07 23:15:22.126 52501 ERROR nova.api.openstack.wsgi return 
func(*args, **kwargs)
  2024-07-07 23:15:22.126 52501 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python3/dist-packages/nova/api/validation/__init__.py", line 110, in 
wrapper
  2024-07-07 23:15:22.126 52501 ERROR nova.api.openstack.wsgi return 
func(*args, **kwargs)
  2024-07-07 23:15:22.126 52501 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python3/dist-packages/nova/api/validation/__init__.py", line 110, in 
wrapper
  2024-07-07 23:15:22.126 52501 ERROR nova.api.openstack.wsgi return 
func(*args, **kwargs)
  2024-07-07 23:15:22.126 52501 ERROR nova.api.openstack.wsgi   [Previous line 
repeated 10 more times]
  2024-07-07 23:15:22.126 52501 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python3/dist-packages/nova/api/openstack/compute/servers.py", line 
784, in create
  2024-07-07 23:15:22.126 52501 ERROR nova.api.openstack.wsgi instances, 
resv_id = self.compute_api.create(
  2024-07-07 23:15:22.126 52501 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python3/dist-packages/nova/compute/api.py", line 2148, in create
  2024-07-07 23:15:22.126 52501 ERROR nova.api.openstack.wsgi return 
self._create_instance(

[Yahoo-eng-team] [Bug 2081252] [NEW] [designate] 2023.2 n-t-p job failing, test "test_port_with_publishing_subnet"

2024-09-20 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

The CI job "neutron-tempest-plugin-designate-scenario-2023-2" is
frequently failing [1].

The failing test is "test_port_with_publishing_subnet".

Logs:
* 
https://108db2f44ab966c3afef-3578f4b3c7df6e8f4dbcf87a4a72da28.ssl.cf2.rackcdn.com/929592/3/check/neutron-tempest-plugin-designate-scenario-2023-2/0ee146d/testr_results.html
* 
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_907/929592/3/check/neutron-tempest-plugin-designate-scenario-2023-2/9077de5/testr_results.html

Snippet: https://paste.opendev.org/show/bAwChjRpjViOkugxgydZ/

[1] https://zuul.opendev.org/t/openstack/builds?job_name=neutron-tempest-plugin-designate-scenario-2023-2&skip=0

** Affects: neutron
 Importance: Critical
 Status: New

-- 
[designate] 2023.2 n-t-p job failing, test "test_port_with_publishing_subnet"
https://bugs.launchpad.net/bugs/2081252
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.



[Yahoo-eng-team] [Bug 1498125] [NEW] Add hypervisor UUIDs to ComputeNode objects and API

2015-09-22 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

At present nova does not expose the UUID from libvirt as
hypervisor-uuid. This becomes an issue when we try to match hypervisors
with other service providers such as Cisco UCS.
A review was put up for this, but it has been abandoned.
I think there is a rising necessity to expose this vital information.

Abandoned review:
https://review.openstack.org/#/c/57295/
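For context, the UUID in question is the host UUID libvirt reports in its capabilities XML (`virsh capabilities`), under `<host><uuid>`. A small sketch of extracting it, using a made-up UUID value in a trimmed-down document:

```python
import xml.etree.ElementTree as ET

# Trimmed-down capabilities document; the UUID value is made up.
caps_xml = """
<capabilities>
  <host>
    <uuid>11111111-2222-3333-4444-555555555555</uuid>
  </host>
</capabilities>
"""

# Extract the host UUID the same way one would from real
# `virsh capabilities` output.
host_uuid = ET.fromstring(caps_xml).findtext("./host/uuid")
print(host_uuid)  # 11111111-2222-3333-4444-555555555555
```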

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: nova
-- 
Add hypervisor UUIDs to ComputeNode objects and API
https://bugs.launchpad.net/bugs/1498125
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).



[Yahoo-eng-team] [Bug 1469260] Re: [SRU] Custom vendor data causes cloud-init failure on 0.7.5

2015-09-22 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.5-0ubuntu1.11

---
cloud-init (0.7.5-0ubuntu1.11) trusty; urgency=medium

  [ Felipe Reyes ]
  * d/patches/fix-consumption-of-vendor-data.patch:
- Fix consumption of vendor-data in OpenStack to allow namespacing
  (LP: #1469260).

  [ Scott Moser ]
  * d/patches/lp-1461242-generate-ed25519-host-keys.patch:
- ssh: generate ed25519 host keys if supported (LP: #1461242)

 -- Scott Moser   Fri, 11 Sep 2015 20:22:00 -0400

** Changed in: cloud-init (Ubuntu Trusty)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1469260

Title:
  [SRU] Custom vendor data causes cloud-init failure on 0.7.5

Status in cloud-init:
  Fix Released
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Trusty:
  Fix Released
Status in cloud-init source package in Utopic:
  Invalid

Bug description:
  [Impact]

  When a vendor data json provides a dictionary without a 'cloud-init'
  key, cloud-init renders a non functional user-data, so any
  configuration (i.e. ssh public keys to use) is missed.

  This prevents cloud providers from publishing a vendor data that is
  not intended to be consumed by cloud-init.

  This patch checks for the existence of 'cloud-init' key and tries to
  get None, a string or a list as value, if this process fails or cloud-
  init key is missing the vendor data is set to None.
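A minimal sketch of the described check (illustrative only, not the actual patch): keep vendor-data only when it namespaces content under a 'cloud-init' key whose value is None, a string, or a list.

```python
def sanitize_vendordata(vdata):
    # If vendor-data is not a dict with a 'cloud-init' key, or the
    # value has an unexpected type, treat vendor-data as absent (None)
    # instead of feeding it to the MIME conversion.
    if not isinstance(vdata, dict):
        return None
    if 'cloud-init' not in vdata:
        return None
    value = vdata['cloud-init']
    if value is None or isinstance(value, (str, list)):
        return value
    return None

print(sanitize_vendordata({"custom": {"a": 1, "b": [2, 3]}}))  # None
print(sanitize_vendordata({"cloud-init": "#cloud-config\n"}))
```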

  [Test Case]

  * deploy an OpenStack cloud (easy right? :) )
- the easiest way is to branch 
https://code.launchpad.net/~ost-maintainers/openstack-charm-testing/trunk and 
run: juju deployer -c default.yaml -d -v -s 10 trusty-kilo
  * configure vendor data
- Edit /etc/nova/nova.conf in neutron-gateway unit(s), include the 
following two lines:
  vendordata_driver=nova.api.metadata.vendordata_json.JsonFileVendorData
  vendordata_jsonfile_path=/etc/nova/vendordata.json
- Create /etc/nova/vendordata.json in neutron-gateway unit(s) with the 
following content:
  {"custom": {"a": 1, "b": [2, 3]}}
- Restart nova-api-metadata (sudo service nova-api-metadata restart)
  * Launch an instance using trusty

  Expected result:
  - the new instance is launched and is accessible according to the configuration used

  Actual result:
  - cloud-init fails to configure the ssh public key

  [Regression Potential]

  * This patch is already part of Vivid and there are no known issues.
  * This proposed fix was tested with a custom image and no issues were 
detected.

  [Other Info]

  I encountered this issue when adding custom vendor data via nova-
  compute. Originally the bug manifested as SSH host key generation
  failing to fire when vendor data was present (example vendor data
  below).

  {"msg": "", "uuid": "4996e2b67d2941818646481453de1efe", "users":
  [{"username": "erhudy", "sshPublicKeys": [], "uuid": "erhudy"}],
  "name": "TestTenant"}

  I launched a volume-backed instance, waited for it to fail, then
  terminated it and mounted its root volume to examine the logs. What I
  found was that cloud-init was failing to process vendor-data into MIME
  multipart (note the absence of the line that indicates that cloud-init
  is writing vendor-data.txt.i):

  2015-06-25 21:41:02,178 - util.py[DEBUG]: Writing to 
/var/lib/cloud/instance/obj.pkl - wb: [256] 9751 bytes
  2015-06-25 21:41:02,178 - util.py[DEBUG]: Writing to 
/var/lib/cloud/instances/65c9fb0c-0700-4f87-a22f-c59534e98dfb/user-data.txt - 
wb: [384] 0 bytes
  2015-06-25 21:41:02,184 - util.py[DEBUG]: Writing to 
/var/lib/cloud/instances/65c9fb0c-0700-4f87-a22f-c59534e98dfb/user-data.txt.i - 
wb: [384] 345 bytes
  2015-06-25 21:41:02,185 - util.py[DEBUG]: Writing to 
/var/lib/cloud/instances/65c9fb0c-0700-4f87-a22f-c59534e98dfb/vendor-data.txt - 
wb: [384] 234 bytes
  2015-06-25 21:41:02,185 - util.py[DEBUG]: Reading from /proc/uptime 
(quiet=False)

  After following the call chain all the way down, I found the
  problematic code in user_data.py:

  # Coverts a raw string into a mime message
  def convert_string(raw_data, headers=None):
      if not raw_data:
          raw_data = ''
      if not headers:
          headers = {}
      data = util.decomp_gzip(raw_data)
      if "mime-version:" in data[0:4096].lower():
          msg = email.message_from_string(data)
          for (key, val) in headers.iteritems():
              _replace_header(msg, key, val)
      else:
          mtype = headers.get(CONTENT_TYPE, NOT_MULTIPART_TYPE)
          maintype, subtype = mtype.split("/", 1)
          msg = MIMEBase(maintype, subtype, *headers)
          msg.set_payload(data)
      return msg

  raw_data in the case that is failing is a dictionary rather than the
  expected string, so slicing into data causes a TypeError: unhashable
  type exception.
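The failure is easy to reproduce in isolation; a minimal sketch of the slicing step with a dict instead of a string:

```python
def looks_like_mime(data):
    # Mirrors the failing check in convert_string(): slice the first
    # 4096 "characters". With a dict, data[0:4096] looks up a slice
    # object as a key, and slices are unhashable, hence the TypeError.
    return "mime-version:" in data[0:4096].lower()

print(looks_like_mime("MIME-Version: 1.0\r\n..."))  # True
try:
    looks_like_mime({"custom": {"a": 1}})  # vendor-data parsed as a dict
except TypeError as exc:
    print("TypeError:", exc)
```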

  I think this bug was fixed after a fashion in 0.7.7, wh

[Yahoo-eng-team] [Bug 1461242] Re: cloud-init does not generate ed25519 keys

2015-09-22 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.5-0ubuntu1.11

---
cloud-init (0.7.5-0ubuntu1.11) trusty; urgency=medium

  [ Felipe Reyes ]
  * d/patches/fix-consumption-of-vendor-data.patch:
- Fix consumption of vendor-data in OpenStack to allow namespacing
  (LP: #1469260).

  [ Scott Moser ]
  * d/patches/lp-1461242-generate-ed25519-host-keys.patch:
- ssh: generate ed25519 host keys if supported (LP: #1461242)

 -- Scott Moser   Fri, 11 Sep 2015 20:22:00 -0400

** Changed in: cloud-init (Ubuntu Trusty)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1461242

Title:
  cloud-init does not generate ed25519 keys

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Trusty:
  Fix Released
Status in cloud-init source package in Utopic:
  Invalid
Status in cloud-init source package in Vivid:
  Confirmed
Status in cloud-init source package in Wily:
  Fix Released

Bug description:
  Cloud-init does not generate ed25519 host keys as expected. Ubuntu
  14.04 and later have SSH configurations expecting ed25519 keys by
  default.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1461242/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384109] Re: Mechanism driver 'l2population' failed in update_port_postcommit

2015-09-22 Thread Launchpad Bug Tracker
[Expired for neutron (Ubuntu) because there has been no activity for 60
days.]

** Changed in: neutron (Ubuntu)
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1384109

Title:
  Mechanism driver 'l2population' failed in update_port_postcommit

Status in neutron:
  Expired
Status in neutron package in Ubuntu:
  Expired

Bug description:
  OpenStack Juno, Ubuntu 14.04, 3 x neutron-servers with 32 API workers
  each, rally boot-and-delete with a concurrency level of 150:

  2014-10-21 16:37:04.615 16312 ERROR neutron.plugins.ml2.managers [req-c4cdefd5-b2d9-46fa-a031-bddd03d981e6 None] Mechanism driver 'l2population' failed in update_port_postcommit
  2014-10-21 16:37:04.615 16312 TRACE neutron.plugins.ml2.managers Traceback (most recent call last):
  2014-10-21 16:37:04.615 16312 TRACE neutron.plugins.ml2.managers   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/managers.py", line 291, in _call_on_drivers
  2014-10-21 16:37:04.615 16312 TRACE neutron.plugins.ml2.managers     getattr(driver.obj, method_name)(context)
  2014-10-21 16:37:04.615 16312 TRACE neutron.plugins.ml2.managers   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/l2pop/mech_driver.py", line 135, in update_port_postcommit
  2014-10-21 16:37:04.615 16312 TRACE neutron.plugins.ml2.managers     self._update_port_up(context)
  2014-10-21 16:37:04.615 16312 TRACE neutron.plugins.ml2.managers   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/l2pop/mech_driver.py", line 228, in _update_port_up
  2014-10-21 16:37:04.615 16312 TRACE neutron.plugins.ml2.managers     agent_ports += self._get_port_fdb_entries(binding.port)
  2014-10-21 16:37:04.615 16312 TRACE neutron.plugins.ml2.managers   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/l2pop/mech_driver.py", line 45, in _get_port_fdb_entries
  2014-10-21 16:37:04.615 16312 TRACE neutron.plugins.ml2.managers     ip['ip_address']] for ip in port['fixed_ips']]
  2014-10-21 16:37:04.615 16312 TRACE neutron.plugins.ml2.managers TypeError: 'NoneType' object has no attribute '__getitem__'
  2014-10-21 16:37:04.615 16312 TRACE neutron.plugins.ml2.managers
  2014-10-21 16:37:04.618 16312 ERROR oslo.messaging.rpc.dispatcher [req-c4cdefd5-b2d9-46fa-a031-bddd03d981e6 ] Exception during message handling: update_port_postcommit failed.
  2014-10-21 16:37:04.618 16312 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
  2014-10-21 16:37:04.618 16312 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
  2014-10-21 16:37:04.618 16312 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
  2014-10-21 16:37:04.618 16312 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
  2014-10-21 16:37:04.618 16312 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
  2014-10-21 16:37:04.618 16312 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
  2014-10-21 16:37:04.618 16312 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
  2014-10-21 16:37:04.618 16312 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/rpc.py", line 161, in update_device_up
  2014-10-21 16:37:04.618 16312 TRACE oslo.messaging.rpc.dispatcher     host)
  2014-10-21 16:37:04.618 16312 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py", line 1136, in update_port_status
  2014-10-21 16:37:04.618 16312 TRACE oslo.messaging.rpc.dispatcher     self.mechanism_manager.update_port_postcommit(mech_context)
  2014-10-21 16:37:04.618 16312 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/managers.py", line 527, in update_port_postcommit
  2014-10-21 16:37:04.618 16312 TRACE oslo.messaging.rpc.dispatcher     continue_on_failure=True)
  2014-10-21 16:37:04.618 16312 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/managers.py", line 302, in _call_on_drivers
  2014-10-21 16:37:04.618 16312 TRACE oslo.messaging.rpc.dispatcher     method=method_name
  2014-10-21 16:37:04.618 16312 TRACE oslo.messaging.rpc.dispatcher MechanismDriverError: update_port_postcommit failed.
  2014-10-21 16:37:04.618 16312 TRACE oslo.messaging.rpc.dispatcher
  2014-10-21 16:37:04.620 16312 ERROR oslo.messaging._drivers.common [req-c4cdefd5-b2d9-46fa-a031-bddd03d981e6 ] Returning exception update_port_postcommit failed. to caller
  2014-10-21 16:37:04.621 16312 ERROR oslo.messaging._drive
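The innermost frame shows the driver reading `port['fixed_ips']` from a port that is `None`, presumably because the port was deleted concurrently under the high rally concurrency. A defensive sketch of that helper (simplified, not the actual l2pop driver code) would simply yield no FDB entries:

```python
def get_port_fdb_entries(port):
    # port can be None when it is deleted between the RPC call and the
    # postcommit hook; return no FDB entries instead of raising TypeError.
    if not port or not port.get('fixed_ips'):
        return []
    return [[port['mac_address'], ip['ip_address']]
            for ip in port['fixed_ips']]

print(get_port_fdb_entries(None))  # → []
```

Whether skipping the port silently is the right fix (versus retrying the lookup) depends on the driver's consistency requirements; the sketch only shows where the None check belongs.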

[Yahoo-eng-team] [Bug 1384109] Re: Mechanism driver 'l2population' failed in update_port_postcommit

2015-09-22 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1384109

Title:
  Mechanism driver 'l2population' failed in update_port_postcommit

Status in neutron:
  Expired
Status in neutron package in Ubuntu:
  Expired


[Yahoo-eng-team] [Bug 1498850] [NEW] glance image-create without arguments is creating blank image

2015-09-23 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

The command glance image-create without any arguments creates a blank
image, which is left in the queued state. The created blank image is
listed in the CLI as well as the dashboard.

 Instead, the command should respond with the correct usage format.

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: glance
-- 
glance image-create without arguments is creating blank image
https://bugs.launchpad.net/bugs/1498850
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to Glance.



[Yahoo-eng-team] [Bug 1349888] Re: [SRU] Attempting to attach the same volume multiple times can cause bdm record for existing attachment to be deleted.

2015-09-23 Thread Launchpad Bug Tracker
This bug was fixed in the package nova - 1:2014.1.5-0ubuntu1.3

---
nova (1:2014.1.5-0ubuntu1.3) trusty; urgency=medium

  * Attempting to attach the same volume multiple times can cause
bdm record for existing attachment to be deleted. (LP: #1349888)
- d/p/fix-creating-bdm-for-failed-volume-attachment.patch

 -- Edward Hope-Morley   Tue, 08 Sep 2015 12:32:45 +0100

** Changed in: nova (Ubuntu Trusty)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1349888

Title:
  [SRU] Attempting to attach the same volume multiple times can cause
  bdm record for existing attachment to be deleted.

Status in OpenStack Compute (nova):
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Trusty:
  Fix Released

Bug description:
  [Impact]

   * Ensure that attaching an already-attached volume to a second instance
     does not interfere with the attached instance's volume record.

  [Test Case]

   * Create cinder volume vol1 and two instances vm1 and vm2

   * Attach vol1 to vm1 and check that attach was successful by doing:

     - cinder list
     - nova show 

     e.g. http://paste.ubuntu.com/12314443/

   * Attach vol1 to vm2 and check that attach fails and, crucially, that the
     first attach is unaffected (as above). You can also check the Nova db as
     follows:

     select * from block_device_mapping where source_type='volume' and \
     (instance_uuid='' or instance_uuid='');

     from which you would expect e.g. http://paste.ubuntu.com/12314416/ which
     shows that vol1 is attached to vm1 and vm2 attach failed.

   * finally detach vol1 from vm1 and ensure that it succeeds.

  [Regression Potential]

   * none

     

  nova assumes there is only ever one bdm per volume. When an attach is
  initiated a new bdm is created, if the attach fails a bdm for the
  volume is deleted however it is not necessarily the one that was just
  created. The following steps show how a volume can get stuck detaching
  because of this.
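The delete-the-wrong-bdm behaviour described above can be illustrated with a toy model (the row shape below is invented for illustration; it is not Nova's schema):

```python
# Toy model of the block_device_mapping table: one row per attach attempt.
bdms = [
    {'id': 1, 'volume_id': 'vol1', 'instance_uuid': 'vm1'},  # healthy attach
    {'id': 2, 'volume_id': 'vol1', 'instance_uuid': 'vm2'},  # failed attempt
]

def buggy_cleanup(volume_id):
    # Deletes the first bdm matching the volume -- it may well be vm1's
    # healthy record rather than the one created for the failed attach.
    for row in bdms:
        if row['volume_id'] == volume_id:
            bdms.remove(row)
            return

def fixed_cleanup(bdm_id):
    # Deletes exactly the record created for the failed attempt.
    bdms[:] = [row for row in bdms if row['id'] != bdm_id]

fixed_cleanup(2)
print(bdms)  # vm1's attachment record survives
```

Cleaning up by the primary key of the record the failed attach created, rather than by volume id, is what keeps the existing attachment intact.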

  $ nova list
  +--------------------------------------+--------+--------+------------+-------------+------------------+
  | ID                                   | Name   | Status | Task State | Power State | Networks         |
  +--------------------------------------+--------+--------+------------+-------------+------------------+
  | cb5188f8-3fe1-4461-8a9d-3902f7cc8296 | test13 | ACTIVE | -          | Running     | private=10.0.0.2 |
  +--------------------------------------+--------+--------+------------+-------------+------------------+

  $ cinder list
  +--------------------------------------+-----------+--------+------+-------------+----------+-------------+
  | ID                                   | Status    | Name   | Size | Volume Type | Bootable | Attached to |
  +--------------------------------------+-----------+--------+------+-------------+----------+-------------+
  | c1e38e93-d566-4c99-bfc3-42e77a428cc4 | available | test10 | 1    | lvm1        | false    |             |
  +--------------------------------------+-----------+--------+------+-------------+----------+-------------+

  $ nova volume-attach test13 c1e38e93-d566-4c99-bfc3-42e77a428cc4
  +----------+--------------------------------------+
  | Property | Value                                |
  +----------+--------------------------------------+
  | device   | /dev/vdb                             |
  | id       | c1e38e93-d566-4c99-bfc3-42e77a428cc4 |
  | serverId | cb5188f8-3fe1-4461-8a9d-3902f7cc8296 |
  | volumeId | c1e38e93-d566-4c99-bfc3-42e77a428cc4 |
  +----------+--------------------------------------+

  $ cinder list
  +--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+
  | ID                                   | Status | Name   | Size | Volume Type | Bootable | Attached to                          |
  +--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+
  | c1e38e93-d566-4c99-bfc3-42e77a428cc4 | in-use | test10 | 1    | lvm1        | false    | cb5188f8-3fe1-4461-8a9d-3902f7cc8296 |
  +--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+

  $ nova volume-attach test13 c1e38e93-d566-4c99-bfc3-42e77a428cc4
  ERROR (BadRequest): Invalid volume: status must be 'available' (HTTP 400) (Request-ID: req-1fa34b54-25b5-4296-9134-b63321b0015d)

  $ nova volume-detach test13 c1e38e93-d566-4c99-bfc3-42e77a428cc4

  $ cinder list
  +--------------------------------------+-----------+--------+------+-------------+--
  | ID                                   | Status    | Name   | Size | Volume

[Yahoo-eng-team] [Bug 1440285] [NEW] When neutron lbaas agent is not running, 'neutron lb*' commands must display an error instead of "404 Not Found"

2015-09-24 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

When neutron lbaas agent is not running, all the ‘neutron lb*’ commands
display "404 Not Found". This makes the user think that something is
wrong with the lbaas agent (when it is not even running!).

Instead, when neutron lbaas agent is not running, an error like “Neutron
Load Balancer Agent not running” must be displayed so the user knows
that the lbaas agent must be started first.

The ‘ps’ command below shows that the neutron lbaas agent is not
running.

$ ps aux | grep lb
$

$ neutron lb-healthmonitor-list
404 Not Found
The resource could not be found.

$ neutron lb-member-list
404 Not Found
The resource could not be found.

$ neutron lb-pool-list
404 Not Found
The resource could not be found.

$ neutron lb-vip-list
404 Not Found
The resource could not be found.

$ neutron lbaas-healthmonitor-list
404 Not Found
The resource could not be found.

$ neutron lbaas-listener-list
404 Not Found
The resource could not be found.

$ neutron lbaas-loadbalancer-list
404 Not Found
The resource could not be found.

$ neutron lbaas-pool-list
404 Not Found
The resource could not be found.

$ neutron --version
2.3.11

=

Below are the neutron verbose messages that show "404 Not Found".

$ neutron -v lb-healthmonitor-list
DEBUG: keystoneclient.session REQ: curl -g -i -X GET http://192.168.122.205:5000/v2.0/ -H "Accept: application/json" -H "User-Agent: python-keystoneclient"
DEBUG: keystoneclient.session RESP: [200] content-length: 341 vary: X-Auth-Token keep-alive: timeout=5, max=100 server: Apache/2.4.7 (Ubuntu) connection: Keep-Alive date: Sat, 04 Apr 2015 04:37:54 GMT content-type: application/json x-openstack-request-id: req-95c6d1e1-02a7-4077-8ed2-0cb4f574a397
RESP BODY: {"version": {"status": "stable", "updated": "2014-04-17T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": [{"href": "http://192.168.122.205:5000/v2.0/", "rel": "self"}, {"href": "http://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}}

DEBUG: stevedore.extension found extension EntryPoint.parse('table = cliff.formatters.table:TableFormatter')
DEBUG: stevedore.extension found extension EntryPoint.parse('csv = cliff.formatters.commaseparated:CSVLister')
DEBUG: stevedore.extension found extension EntryPoint.parse('yaml = clifftablib.formatters:YamlFormatter')
DEBUG: stevedore.extension found extension EntryPoint.parse('json = clifftablib.formatters:JsonFormatter')
DEBUG: stevedore.extension found extension EntryPoint.parse('html = clifftablib.formatters:HtmlFormatter')
DEBUG: neutronclient.neutron.v2_0.lb.healthmonitor.ListHealthMonitor get_data(Namespace(columns=[], fields=[], formatter='table', max_width=0, page_size=None, quote_mode='nonnumeric', request_format='json', show_details=False, sort_dir=[], sort_key=[]))
DEBUG: keystoneclient.auth.identity.v2 Making authentication request to http://192.168.122.205:5000/v2.0/tokens
DEBUG: keystoneclient.session REQ: curl -g -i -X GET http://192.168.122.205:9696/v2.0/lb/health_monitors.json -H "User-Agent: python-neutronclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}23f2a54d0348e6bfc5364565ece4baf2e2148fa8"
DEBUG: keystoneclient.session RESP:
DEBUG: neutronclient.v2_0.client Error message: 404 Not Found

The resource could not be found.

ERROR: neutronclient.shell 404 Not Found

The resource could not be found.

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/shell.py", line 760, in run_subcommand
    return run_command(cmd, cmd_parser, sub_argv)
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/shell.py", line 100, in run_command
    return cmd.run(known_args)
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/common/command.py", line 29, in run
    return super(OpenStackCommand, self).run(parsed_args)
  File "/usr/local/lib/python2.7/dist-packages/cliff/display.py", line 91, in run
    column_names, data = self.take_action(parsed_args)
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/common/command.py", line 35, in take_action
    return self.get_data(parsed_args)
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/neutron/v2_0/__init__.py", line 691, in get_data
    data = self.retrieve_list(parsed_args)
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/neutron/v2_0/__init__.py", line 654, in retrieve_list
    data = self.call_server(neutron_client, search_opts, parsed_args)
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/neutron/v2_0/__init__.py", line 626, in call_server
    data = obj_lister(**search_opts)
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 102, in with_params
    ret = self.function(instance, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 1088, in list_health

[Yahoo-eng-team] [Bug 1477451] Re: Assumption that db drivers can ignore hints is false

2015-09-26 Thread Launchpad Bug Tracker
[Expired for Keystone because there has been no activity for 60 days.]

** Changed in: keystone
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1477451

Title:
  Assumption that db drivers can ignore hints is false

Status in Keystone:
  Expired

Bug description:
  For hints, the documentation says that if the driver implementation
  doesn't filter by hints, they are handled at the controller level. But
  this assumption is false when using 'list users' and 'list groups'
  API.

  This was found while writing a Cassandra backend driver for Keystone.
  Similar could be done by commenting out lines from
  keystone/identity/backends/sql.py driver file which filters by hints.

  I am suspecting that the problem is at these two lines:

  
https://github.com/openstack/keystone/blob/7a28fdb6385ec31e3d46fe63b22028e599ea66b3/keystone/identity/core.py#L429

  I see two alternatives: either fix this, or change the documentation
  saying that hints filtering at higher (controller) level might not be
  enforced.
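The documented contract is that filters a driver does not apply get re-applied at a higher level; the report says this fallback is skipped for list_users and list_groups. A simplified sketch of the intended behaviour (plain tuples stand in for Keystone's hints object, which this is not):

```python
all_users = [{'name': 'alice', 'domain_id': 'd1'},
             {'name': 'bob', 'domain_id': 'd2'}]

def list_users(driver_list, filters):
    users = driver_list(filters)
    # Controller-level fallback: re-apply every filter, so a driver that
    # ignored the hints still yields a correctly filtered result.
    for attr, value in filters:
        users = [u for u in users if u.get(attr) == value]
    return users

# A driver that ignores hints entirely, e.g. a minimal Cassandra backend:
result = list_users(lambda filters: list(all_users), [('domain_id', 'd1')])
print(result)  # only alice
```

In Keystone proper, a driver that applies a filter removes it from the hints so it is not re-applied; the bug is about the path where no such fallback runs at all.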

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1477451/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1477163] Re: When we create instance with Ubuntu 15.0.4 image, default route is missing

2015-09-26 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1477163

Title:
  When we create instance with Ubuntu 15.0.4 image, default route is
  missing

Status in OpenStack Compute (nova):
  Expired

Bug description:
  We have issues with new image 15.0.4 (Ubuntu). When we create new
  instance in Ice House with older version of Ubuntu or other flavor of
  Linux, it is working fine but with 15.0.4 image, default route is not
  created. It is resolving when we add default route manually.

  Thanks,

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1477163/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1493453] Re: [SRU] vendor_data isn't parsed properly when using the nocloud datasource

2015-09-28 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.7~bzr1091-0ubuntu9

---
cloud-init (0.7.7~bzr1091-0ubuntu9) vivid; urgency=medium

  * d/patches/lp-1493453-nocloudds-vendor_data.patch:
- fix vendor_data variable assignment for the NoCloud Datasource
  (LP: #1493453).

  * d/patches/lp-1461242-generate-ed25519-host-keys.patch:
- ssh: generate ed25519 host keys if supported (LP: #1461242).

 -- Ben Howard   Tue, 22 Sep 2015 15:02:06 -0600

** Changed in: cloud-init (Ubuntu Vivid)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1493453

Title:
  [SRU] vendor_data isn't parsed properly when using the nocloud
  datasource

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Trusty:
  Fix Committed
Status in cloud-init source package in Vivid:
  Fix Released
Status in cloud-init source package in Wily:
  Fix Released

Bug description:
  SRU Justification:

  [IMPACT] The NoCloud Datasource assigns vendor_data to the wrong
  cloud-init internal variable. This causes the vendor_data to be
  improperly parsed, and prevents it from being consumed.

  [FIX] See original report below

  [TESTING]
  1. Start in-cloud instance
  2. Update cloud-init to version in proposed
  3. Populate /var/lib/cloud/seed/nocloud/{user,meta,vendor}-data:

     meta-data:
       instance-id: testing

     user-data:
       #cloud-config
       packages:
       - pastebinit

     vendor-data:
       #cloud-config
       runcmd:
       - [ "touch", "/tmp/vd-worked" ]

  4. Configure instance for NoCloud DS:

  $ cat > /etc/cloud/cloud.cfg.d/999-sru.cfg 

[Yahoo-eng-team] [Bug 1461242] Re: cloud-init does not generate ed25519 keys

2015-09-28 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.7~bzr1091-0ubuntu9

---
cloud-init (0.7.7~bzr1091-0ubuntu9) vivid; urgency=medium

  * d/patches/lp-1493453-nocloudds-vendor_data.patch:
- fix vendor_data variable assignment for the NoCloud Datasource
  (LP: #1493453).

  * d/patches/lp-1461242-generate-ed25519-host-keys.patch:
- ssh: generate ed25519 host keys if supported (LP: #1461242).

 -- Ben Howard   Tue, 22 Sep 2015 15:02:06 -0600

** Changed in: cloud-init (Ubuntu Vivid)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1461242

Title:
  cloud-init does not generate ed25519 keys

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Trusty:
  Fix Released
Status in cloud-init source package in Utopic:
  Invalid
Status in cloud-init source package in Vivid:
  Fix Released
Status in cloud-init source package in Wily:
  Fix Released

Bug description:
  Cloud-init does not generate ed25519 host keys as expected. Ubuntu
  14.04 and later have SSH configurations expecting ed25519 keys by
  default.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1461242/+subscriptions



[Yahoo-eng-team] [Bug 1493453] Re: [SRU] vendor_data isn't parsed properly when using the nocloud datasource

2015-09-28 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.5-0ubuntu1.12

---
cloud-init (0.7.5-0ubuntu1.12) trusty; urgency=medium

  * d/patches/lp-1493453-nocloudds-vendor_data.patch:
- fix vendor_data variable assignment for the NoCloud Datasource
  (LP: #1493453).

 -- Ben Howard   Mon, 21 Sep 2015 15:24:17 -0600

** Changed in: cloud-init (Ubuntu Trusty)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1493453

Title:
  [SRU] vendor_data isn't parsed properly when using the nocloud
  datasource

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Trusty:
  Fix Released
Status in cloud-init source package in Vivid:
  Fix Released
Status in cloud-init source package in Wily:
  Fix Released

Bug description:
  SRU Justification:

  [IMPACT] The NoCloud Datasource assigns vendor_data to the wrong
  cloud-init internal variable. This causes the vendor_data to be
  improperly parsed, and prevents it from being consumed.

  [FIX] See original report below

  [TESTING]
  1. Start in-cloud instance
  2. Update cloud-init to version in proposed
  3. Populate /var/lib/cloud/seed/nocloud/{user,meta,vendor}-data:

     meta-data:
       instance-id: testing

     user-data:
       #cloud-config
       packages:
       - pastebinit

     vendor-data:
       #cloud-config
       runcmd:
       - [ "touch", "/tmp/vd-worked" ]

  4. Configure instance for NoCloud DS:

  $ cat > /etc/cloud/cloud.cfg.d/999-sru.cfg 

[Yahoo-eng-team] [Bug 1500688] [NEW] VNC URL of instance unavailable in CLI

2015-09-29 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

I use a heat template to build an autoscaling group with
'OS::Heat::AutoScalingGroup' and 'OS::Nova::Server', and it works fine.
I can see the instance running both in the CLI and the dashboard.
However, I can only get to the console through the dashboard. When
using the command 'nova get-vnc-console instance_ID novnc', I get an
error: 'ERROR (NotFound): The resource could not be found. (HTTP 404)
(Request-ID: req-6f260624-56ad-45fd-aa21-f86fb2c541d1)' instead of its
URL.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
VNC URL of instance unavailable in CLI
https://bugs.launchpad.net/bugs/1500688
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).



[Yahoo-eng-team] [Bug 1499869] Re: maas wily deployment to HP Proliant m400 arm64 server cartridge fails

2015-09-29 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.7~bzr1147-0ubuntu1

---
cloud-init (0.7.7~bzr1147-0ubuntu1) wily; urgency=medium

  * New upstream snapshot.
* MAAS: fix oauth when system clock is bad (LP: #1499869)

 -- Scott Moser   Tue, 29 Sep 2015 20:16:57 -0400

** Changed in: cloud-init (Ubuntu Wily)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1499869

Title:
  maas wily deployment to HP Proliant m400 arm64 server cartridge fails

Status in cloud-init:
  Confirmed
Status in curtin:
  New
Status in cloud-init package in Ubuntu:
  Fix Released
Status in linux package in Ubuntu:
  Fix Committed
Status in linux source package in Vivid:
  In Progress
Status in cloud-init source package in Wily:
  Fix Released
Status in linux source package in Wily:
  Fix Committed

Bug description:
  This is the error seen on the console:

  [   64.149080] cloud-init[834]: 2015-08-27 15:03:29,289 - util.py[WARNING]: Failed fetching metadata from url http://10.229.32.21/MAAS/metadata/curtin
  [  124.513212] cloud-init[834]: 2015-09-24 17:23:10,006 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [2427570/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by ConnectTimeoutError(, 'Connection to 169.254.169.254 timed out. (connect timeout=50.0)'))]
  [  124.515570] cloud-init[834]: 2015-09-24 17:23:10,007 - DataSourceEc2.py[CRITICAL]: Giving up on md from ['http://169.254.169.25/2009-04-04/meta-data/instance-id'] after 2427570 seconds
  [  124.531624] cloud-init[834]: 2015-09-24 17:23:10,024 - url_helper.py[WARNING]: Calling 'http:///latest/meta-data/instance-id' failed [0/120s]: bad status code [404]

  This times out eventually and the node is left at the login prompt. I
  can install wily via netboot without issue and some time back, wily
  was deployable to this node from MAAS.
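  The WARNING lines above come from cloud-init's datasource polling loop,
  which retries the metadata URL until a deadline passes. The shape of that
  pattern is roughly the following (a hedged sketch; wait_for_url and
  fake_fetch are illustrative names here, not cloud-init's actual API):

```python
import time

def wait_for_url(fetch, max_wait=120, sleep_time=1):
    """Sketch of cloud-init-style metadata polling: keep retrying a
    fetch callable until it succeeds or max_wait seconds elapse."""
    start = time.monotonic()
    while True:
        try:
            return fetch()
        except OSError as exc:
            elapsed = time.monotonic() - start
            print('Calling fetch failed [%ds/%ds]: %s'
                  % (elapsed, max_wait, exc))
            if elapsed >= max_wait:
                raise
            time.sleep(sleep_time)

# Stub datasource that fails twice, then answers.
calls = []
def fake_fetch():
    calls.append(1)
    if len(calls) < 3:
        raise OSError('connect timeout')
    return 'i-0123456'

instance_id = wait_for_url(fake_fetch, max_wait=10, sleep_time=0)
print(instance_id)  # i-0123456, after two failed attempts
```

  In the bug above the fetch never succeeds, so the loop exhausts max_wait
  and the node is left at the login prompt.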

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1499869/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501505] [NEW] Allow updating of TLS refs

2015-09-30 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

A bug prevented updating of default_tls_container_ref, failing with a 503.
This bug uncovered a few other issues with null key checks and complaints
when sni_container_refs were not provided.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
Allow updating of TLS refs
https://bugs.launchpad.net/bugs/1501505
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.



[Yahoo-eng-team] [Bug 1480319] Re: Mutable args and wrap_db_retry

2015-10-02 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1480319

Title:
  Mutable args and wrap_db_retry

Status in neutron:
  Expired

Bug description:
  wrap_db_retry may not work as expected if the wrapped function modifies
  its mutable arguments during execution: in that case, on the second
  attempt the function will be called with the modified args. Example:

  def create_router(self, context, router):
  r = router['router']
  gw_info = r.pop(EXTERNAL_GW_INFO, None)
  tenant_id = self._get_tenant_id_for_create(context, r)
  with context.session.begin(subtransactions=True):
  router_db = self._create_router_db(context, r, tenant_id)
  if gw_info:
  self._update_router_gw_info(context, router_db['id'],
  gw_info, router=router_db)
  dict =  self._make_router_dict(router_db)
  return dict

  Because of the pop(), on a second attempt the router dict will no longer
  have the gateway info, so the router will be created without it, silently
  and surprisingly for users.

  Just doing copy.deepcopy() inside wrap_db_retry will not work, as
  arguments might be complex objects (like plugins) which do not support
  deepcopy(). So this needs a more crafty fix. Otherwise wrap_db_retry
  should be used carefully, checking that the wrapped function does not
  modify its mutable args.

  Currently neutron uses wrap_db_retry at the API layer, which is not safe
  given the described issue.
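  The failure mode can be reproduced with a minimal sketch (wrap_db_retry
  here is a simplified stand-in for the real oslo.db decorator, and
  create_router is reduced to just the mutating pop()):

```python
import functools

def wrap_db_retry(max_retries=2):
    """Naive retry decorator, similar in spirit to oslo.db's wrap_db_retry.

    It re-calls the wrapped function with the *same* argument objects,
    so any mutation done on the first attempt leaks into the retry.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return fn(*args, **kwargs)
                except RuntimeError:
                    if attempt == max_retries - 1:
                        raise
        return wrapper
    return decorator

attempts = []

@wrap_db_retry(max_retries=2)
def create_router(router):
    # pop() mutates the caller-visible dict: on a retry the
    # gateway info is silently gone.
    gw_info = router.pop('external_gateway_info', None)
    attempts.append(gw_info)
    if len(attempts) == 1:
        raise RuntimeError('simulated deadlock, will retry')
    return {'router': 'created', 'gw_info': gw_info}

result = create_router({'external_gateway_info': {'network_id': 'net-1'}})
print(result['gw_info'])  # None -- the retry lost the gateway info
```

  On the first attempt the dict still carries the gateway info; the retry
  sees the already-mutated dict and creates the router without it.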

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1480319/+subscriptions



[Yahoo-eng-team] [Bug 1502369] [NEW] Jenkins/tox fails in Glance-specs

2015-10-03 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Hi,

When we run "tox" in a fresh clone of the glance-specs repo, it fails with
the below error (full log can be found here:
http://paste.openstack.org/show/475234/):

running build_ext
  Traceback (most recent call last):
File "", line 1, in 
File "/tmp/pip-build-OfhUFL/Pillow/setup.py", line 767, in 
  zip_safe=not debug_build(),
File "/usr/lib/python2.7/distutils/core.py", line 151, in setup
  dist.run_commands()
File "/usr/lib/python2.7/distutils/dist.py", line 953, in run_commands
  self.run_command(cmd)
File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
  cmd_obj.run()
File 
"/home/dramakri/glance-specs/.tox/py27/local/lib/python2.7/site-packages/wheel/bdist_wheel.py",
 line 175, in run
  self.run_command('build')
File "/usr/lib/python2.7/distutils/cmd.py", line 326, in run_command
  self.distribution.run_command(command)
File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
  cmd_obj.run()
File "/usr/lib/python2.7/distutils/command/build.py", line 128, in run
  self.run_command(cmd_name)
File "/usr/lib/python2.7/distutils/cmd.py", line 326, in run_command
  self.distribution.run_command(command)
File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
  cmd_obj.run()
File "/usr/lib/python2.7/distutils/command/build_ext.py", line 337, in run
  self.build_extensions()
File "/tmp/pip-build-OfhUFL/Pillow/setup.py", line 515, in build_extensions
  % (f, f))
  ValueError: --enable-jpeg requested but jpeg not found, aborting.

  
  Failed building wheel for Pillow
Failed to build Pillow

This also causes Jenkins to fail on any submission. I noticed this issue when I 
tried to upload a new spec (https://review.openstack.org/#/c/230679/) to the 
Glance-specs folder and it failed.
Link to the Jenkins log for the failed run: 
http://logs.openstack.org/79/230679/1/check/gate-glance-specs-docs/e34dc8b/console.html

** Affects: glance
 Importance: Undecided
 Assignee: Kairat Kushaev (kkushaev)
 Status: New


** Tags: jenkins
-- 
Jenkins/tox fails in Glance-specs
https://bugs.launchpad.net/bugs/1502369
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to Glance.



[Yahoo-eng-team] [Bug 1501772] Re: Metadata proxy process errors with binary user_data

2015-10-03 Thread Launchpad Bug Tracker
This bug was fixed in the package neutron - 2:7.0.0~rc1-0ubuntu4

---
neutron (2:7.0.0~rc1-0ubuntu4) wily; urgency=medium

  * Drop hard requirement on python-ryu for this cycle as it supports
a new alternative agent implementation for Open vSwitch and is not
the default, avoiding inclusion of ryu in main for Wily.
- d/control: Drop (Build-)Depends on ryu, add Suggests.
- d/p/drop-ryu-dep.patch: Patch out hard requirement on ryu.

 -- James Page   Fri, 02 Oct 2015 18:10:49 +0100

** Changed in: neutron (Ubuntu)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501772

Title:
  Metadata proxy process errors with binary user_data

Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released

Bug description:
  Booting instances with binary user data content (rather than simple
  text) is broken right now:

  2015-10-01 13:19:39.109 10854 DEBUG neutron.agent.metadata.namespace_proxy 
[-] {'date': 'Thu, 01 Oct 2015 13:19:39 GMT', 'status': '200', 
'content-length': '979', 'content-type': 'text/plain; charset=UTF-8', 
'content-location': u'http://169.254.169.254/openstack/2013-10-17/user_data'} 
_proxy_request 
/usr/lib/python2.7/dist-packages/neutron/agent/metadata/namespace_proxy.py:90
  2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy 
[-] Unexpected error.
  2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy 
Traceback (most recent call last):
  2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy   
File 
"/usr/lib/python2.7/dist-packages/neutron/agent/metadata/namespace_proxy.py", 
line 55, in __call__
  2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy
 req.body)
  2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy   
File 
"/usr/lib/python2.7/dist-packages/neutron/agent/metadata/namespace_proxy.py", 
line 91, in _proxy_request
  2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy
 LOG.debug(content)
  2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy   
File "/usr/lib/python2.7/logging/__init__.py", line 1437, in debug
  2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy
 msg, kwargs = self.process(msg, kwargs)
  2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy   
File "/usr/lib/python2.7/dist-packages/oslo_log/log.py", line 139, in process
  2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy
 msg = _ensure_unicode(msg)
  2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy   
File "/usr/lib/python2.7/dist-packages/oslo_log/log.py", line 113, in 
_ensure_unicode
  2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy
 errors='xmlcharrefreplace',
  2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy   
File "/usr/lib/python2.7/dist-packages/oslo_utils/encodeutils.py", line 43, in 
safe_decode
  2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy
 return text.decode(incoming, errors)
  2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy   
File "/usr/lib/python2.7/encodings/utf_8.py", line 16, in decode
  2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy
 return codecs.utf_8_decode(input, errors, True)
  2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy 
TypeError: don't know how to handle UnicodeDecodeError in error callback
  2015-10-01 13:19:39.109 10854 ERROR neutron.agent.metadata.namespace_proxy
  2015-10-01 13:19:39.112 10854 INFO neutron.wsgi [-] 192.168.21.15 - - 
[01/Oct/2015 13:19:39] "GET /openstack/2013-10-17/user_data HTTP/1.1" 500 343 
0.014536

  This is thrown by the log call just prior to the content being served
  back to the instance.
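  The underlying pattern is easy to demonstrate in isolation: handing
  non-UTF-8 bytes to text-oriented code raises UnicodeDecodeError, while
  decoding defensively first avoids it (a minimal sketch; the byte string
  below is a made-up stand-in for binary user_data, not real payload):

```python
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger('metadata-proxy-demo')

# Made-up stand-in for binary (e.g. gzipped) user_data: not valid UTF-8.
binary_user_data = b'\x1f\x8b\x08\x00payload'

# Naive: strict UTF-8 decoding blows up, as in the traceback above.
try:
    binary_user_data.decode('utf-8')
    decode_error = None
except UnicodeDecodeError as exc:
    decode_error = exc.reason

# Defensive: replace undecodable bytes before passing content to a logger.
safe_text = binary_user_data.decode('utf-8', errors='replace')
log.debug('user_data: %s', safe_text)
```

  The fix direction is the same either way: never assume proxied content
  is text before logging it.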

  ProblemType: Bug
  DistroRelease: Ubuntu 15.10
  Package: neutron-metadata-agent 2:7.0.0~b3-0ubuntu3
  ProcVersionSignature: Ubuntu 4.2.0-11.13-generic 4.2.1
  Uname: Linux 4.2.0-11-generic x86_64
  ApportVersion: 2.19-0ubuntu1
  Architecture: amd64
  Date: Thu Oct  1 13:38:21 2015
  Ec2AMI: ami-05ce
  Ec2AMIManifest: FIXME
  Ec2AvailabilityZone: nova
  Ec2InstanceType: m1.small.osci
  Ec2Kernel: None
  Ec2Ramdisk: None
  JournalErrors: -- No entries --
  PackageArchitecture: all
  SourcePackage: neutron
  UpgradeStatus: No upgrade log present (probably fresh install)
  mtime.conffile..etc.neutron.metadata.agent.ini: 2015-10-01T13:18:25.075633

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501772/+subscriptions


[Yahoo-eng-team] [Bug 1501703] Re: unit test failures on 32 bit architectures

2015-10-03 Thread Launchpad Bug Tracker
This bug was fixed in the package neutron - 2:7.0.0~rc1-0ubuntu4

---
neutron (2:7.0.0~rc1-0ubuntu4) wily; urgency=medium

  * Drop hard requirement on python-ryu for this cycle as it supports
a new alternative agent implementation for Open vSwitch and is not
the default, avoiding inclusion of ryu in main for Wily.
- d/control: Drop (Build-)Depends on ryu, add Suggests.
- d/p/drop-ryu-dep.patch: Patch out hard requirement on ryu.

 -- James Page   Fri, 02 Oct 2015 18:10:49 +0100

** Changed in: neutron (Ubuntu)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501703

Title:
  unit test failures on 32 bit architectures

Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released

Bug description:
  Tests all pass fine in Ubuntu on 64-bit archs; however, on a 32-bit
  architecture (which is how we build packages in 14.04), two unit tests
  fail - this is an int/long type problem.

  ==
  FAIL: 
neutron.tests.unit.agent.linux.test_ip_lib.TestIpRuleCommand.test__make_canonical_fwmark
  
neutron.tests.unit.agent.linux.test_ip_lib.TestIpRuleCommand.test__make_canonical_fwmark
  --
  _StringException: Traceback (most recent call last):
  _StringException: Empty attachments:
pythonlogging:''
pythonlogging:'neutron.api.extensions'
stderr
stdout

  Traceback (most recent call last):
File "/«PKGBUILDDIR»/neutron/tests/unit/agent/linux/test_ip_lib.py", line 
633, in test__make_canonical_fwmark
  'type': 'unicast'}, actual)
File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 348, in 
assertEqual
  self.assertThat(observed, matcher, message)
File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 433, in 
assertThat
  raise mismatch_error
  MismatchError: !=:
  reference = {'fwmark': '0x400/0x', 'type': 'unicast'}
  actual= {'fwmark': '0x400/0xL', 'type': 'unicast'}

  Traceback (most recent call last):
  _StringException: Empty attachments:
pythonlogging:''
pythonlogging:'neutron.api.extensions'
stderr
stdout

  Traceback (most recent call last):
File "/«PKGBUILDDIR»/neutron/tests/unit/agent/linux/test_ip_lib.py", line 
633, in test__make_canonical_fwmark
  'type': 'unicast'}, actual)
File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 348, in 
assertEqual
  self.assertThat(observed, matcher, message)
File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 433, in 
assertThat
  raise mismatch_error
  MismatchError: !=:
  reference = {'fwmark': '0x400/0x', 'type': 'unicast'}
  actual= {'fwmark': '0x400/0xL', 'type': 'unicast'}

  
  ==
  FAIL: 
neutron.tests.unit.agent.linux.test_ip_lib.TestIpRuleCommand.test__make_canonical_fwmark_integer
  
neutron.tests.unit.agent.linux.test_ip_lib.TestIpRuleCommand.test__make_canonical_fwmark_integer
  --
  _StringException: Traceback (most recent call last):
  _StringException: Empty attachments:
pythonlogging:''
pythonlogging:'neutron.api.extensions'
stderr
stdout

  Traceback (most recent call last):
File "/«PKGBUILDDIR»/neutron/tests/unit/agent/linux/test_ip_lib.py", line 
642, in test__make_canonical_fwmark_integer
  'type': 'unicast'}, actual)
File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 348, in 
assertEqual
  self.assertThat(observed, matcher, message)
File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 433, in 
assertThat
  raise mismatch_error
  MismatchError: !=:
  reference = {'fwmark': '0x400/0x', 'type': 'unicast'}
  actual= {'fwmark': '0x400/0xL', 'type': 'unicast'}

  Traceback (most recent call last):
  _StringException: Empty attachments:
pythonlogging:''
pythonlogging:'neutron.api.extensions'
stderr
stdout

  Traceback (most recent call last):
File "/«PKGBUILDDIR»/neutron/tests/unit/agent/linux/test_ip_lib.py", line 
642, in test__make_canonical_fwmark_integer
  'type': 'unicast'}, actual)
File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 348, in 
assertEqual
  self.assertThat(observed, matcher, message)
File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 433, in 
assertThat
  raise mismatch_error
  MismatchError: !=:
  reference = {'fwmark': '0x400/0x', 'type': 'unicast'}
  actual= {'fwmark': '0x400/0xL', 'type': 'unicast'}
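  The trailing 'L' in the actual values comes from Python 2's long type on
  32-bit builds, where hex()/repr() of a long appends the suffix. A
  width-independent formatter avoids it (a sketch, not neutron's actual
  code; make_canonical_fwmark is an illustrative name and 0xffffffff is
  assumed as the default mask):

```python
def make_canonical_fwmark(fwmark, mask=0xffffffff):
    # str.format never appends the 'L' suffix that Python 2's hex()
    # added to longs, so the output is identical on 32- and 64-bit builds.
    return '0x{:x}/0x{:x}'.format(int(fwmark), int(mask))

canonical = make_canonical_fwmark(0x400)
print(canonical)  # 0x400/0xffffffff
```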

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501703/+subscriptions


[Yahoo-eng-team] [Bug 1475985] Re: Ping and ssh to instance fail, although access is enabled by default

2015-10-03 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475985

Title:
   Ping and ssh to instance fail, although access is enabled by default

Status in neutron:
  Expired

Bug description:
  Description of problem:
  I launched an instance based on the image 
rhel-guest-image-7.1-20150224.0.x86_64.qcow2 and could not ping or ssh to 
it, although ssh and ping are enabled by the default security group. I then 
launched another instance and this time I could ping it and access it via 
ssh. Then I added the rule "ALL TCP", which enabled ping and ssh to the 
first instance (which previously failed).
  Finally, I removed the rule, launched another instance, and was able to 
access this new instance via ssh and ping.

  Version-Release number of selected component (if applicable):
  openstack-neutron-openvswitch-2014.2.2-5.el7ost.noarch
  openstack-neutron-ml2-2014.2.2-5.el7ost.noarch
  python-neutronclient-2.3.9-1.el7ost.noarch
  openstack-neutron-2014.2.2-5.el7ost.noarch
  python-neutron-2014.2.2-5.el7ost.noarch

  How reproducible:

  
  Steps to Reproduce:
  1. Launch an instance and check if there are ping and ssh to it (In my case 
it failed)
  2. Add rule "ALL TCP" and check if there are ping and ssh to the instance 
(Now it worked for me)
  3. Remove the rule
  4. Launch another instance and check both instances for access via ping and 
ssh (Both worked for me)

  Actual results:
  Ping and ssh failed with the first instance

  Expected results:
  Firstly, ping and ssh should have worked with the first instance. 
Secondly, assuming ping and ssh were not enabled by the default security 
group, then after I removed the "ALL TCP" rule the instances should not 
have been accessible via ping and ssh, although they were in my case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1475985/+subscriptions



[Yahoo-eng-team] [Bug 1499869] Re: maas wily deployment to HP Proliant m400 arm64 server cartridge fails

2015-10-04 Thread Launchpad Bug Tracker
This bug was fixed in the package linux - 4.2.0-14.16

---
linux (4.2.0-14.16) wily; urgency=low

  [ Tim Gardner ]

  * Release Tracking Bug
- LP: #1501818
  * rebase to v4.2.2
  * [Config] CONFIG_RTC_DRV_XGENE=y
- LP: #1499869

  [ Upstream Kernel Changes ]

  * mei: do not access freed cb in blocking write
- LP: #1494076
  * mei: bus: fix drivers and devices names confusion
- LP: #1494076
  * mei: bus: rename nfc.c to bus-fixup.c
- LP: #1494076
  * mei: bus: move driver api functions at the start of the file
- LP: #1494076
  * mei: bus: rename uevent handler to mei_cl_device_uevent
- LP: #1494076
  * mei: bus: don't enable events implicitly in device enable
- LP: #1494076
  * mei: bus: report if event registration failed
- LP: #1494076
  * mei: bus: revamp device matching
- LP: #1494076
  * mei: bus: revamp probe and remove functions
- LP: #1494076
  * mei: bus: add reference to bus device in struct mei_cl_client
- LP: #1494076
  * mei: bus: add me client device list infrastructure
- LP: #1494076
  * mei: bus: enable running fixup routines before device registration
- LP: #1494076
  * mei: bus: blacklist the nfc info client
- LP: #1494076
  * mei: bus: blacklist clients by number of connections
- LP: #1494076
  * mei: bus: simplify how we build nfc bus name
- LP: #1494076
  * mei: bus: link client devices instead of host clients
- LP: #1494076
  * mei: support for dynamic clients
- LP: #1494076
  * mei: disconnect on connection request timeout
- LP: #1494076
  * mei: define async notification hbm commands
- LP: #1494076
  * mei: implement async notification hbm messages
- LP: #1494076
  * mei: enable async event notifications only from hbm version 2.0
- LP: #1494076
  * mei: add mei_cl_notify_request command
- LP: #1494076
  * mei: add a handler that waits for notification on event
- LP: #1494076
  * mei: add async event notification ioctls
- LP: #1494076
  * mei: support polling for event notification
- LP: #1494076
  * mei: implement fasync for event notification
- LP: #1494076
  * mei: bus: add and call callback on notify event
- LP: #1494076
  * mei: hbm: add new error code MEI_CL_CONN_NOT_ALLOWED
- LP: #1494076
  * mei: me: d0i3: add the control registers
- LP: #1494076
  * mei: me: d0i3: add flag to indicate D0i3 support
- LP: #1494076
  * mei: me: d0i3: enable d0i3 interrupts
- LP: #1494076
  * mei: hbm: reorganize the power gating responses
- LP: #1494076
  * mei: me: d0i3: add d0i3 enter/exit state machine
- LP: #1494076
  * mei: me: d0i3: move mei_me_hw_reset down in the file
- LP: #1494076
  * mei: me: d0i3: exit d0i3 on driver start and enter it on stop
- LP: #1494076
  * mei: me: add sunrise point device ids
- LP: #1494076
  * mei: hbm: bump supported HBM version to 2.0
- LP: #1494076
  * mei: remove check on pm_runtime_active in __mei_cl_disconnect
- LP: #1494076
  * mei: fix debugfs files leak on error path
- LP: #1494076

  [ Upstream Kernel Changes ]

  * rebase to v4.2.2
- LP: #1492132

 -- Tim Gardner   Tue, 29 Sep 2015 09:02:13
-0700

** Changed in: linux (Ubuntu Wily)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1499869

Title:
  maas wily deployment to HP Proliant m400 arm64 server cartridge fails

Status in cloud-init:
  Confirmed
Status in curtin:
  New
Status in cloud-init package in Ubuntu:
  Fix Released
Status in linux package in Ubuntu:
  Fix Released
Status in linux source package in Vivid:
  Fix Committed
Status in cloud-init source package in Wily:
  Fix Released
Status in linux source package in Wily:
  Fix Released

Bug description:
  This is the error seen on the console:

  [   64.149080] cloud-init[834]: 2015-08-27 15:03:29,289 - util.py[WARNING]: 
Failed fetching metadata from url http://10.229.32.21/MAAS/metadata/curtin
  [  124.513212] cloud-init[834]: 2015-09-24 17:23:10,006 - 
url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed 
[2427570/120s]: request error [HTTPConnectionPool(host='169.254.169.254', 
port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id 
(Caused by 
ConnectTimeoutError(, 'Connection to 169.254.169.254 timed out. (connect 
timeout=50.0)'))]
  [  124.515570] cloud-init[834]: 2015-09-24 17:23:10,007 - 
DataSourceEc2.py[CRITICAL]: Giving up on md from 
['http://169.254.169.25/2009-04-04/meta-data/instance-id'] after 2427570 seconds
  [  124.531624] cloud-init[834]: 2015-09-24 17:23:10,024 - 
url_helper.py[WARNING]: Calling 'http:///latest/meta-data/instance-id' failed [0/120s]: bad status code [404]

  This times out eventually and the node is left at the login prompt. I
  can install wily via netboot without issue and some time back, wily
  was deployable to this node from MAAS.

[Yahoo-eng-team] [Bug 1481557] Re: kilo default cinder behavior changed

2015-10-04 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1481557

Title:
  kilo default cinder behavior changed

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Prior to Kilo, not having a:
  [cinder]
  os_region_name=SET

  set meant that nova would look in the region the nova call went to. As
  of Kilo, it appears to do an endpoint list and pull the first cinder
  endpoint it finds (even if it is in a different region.)

  I've not yet reproduced this in devstack but a multi-region devstack
  should see the right behavior prior to kilo and the wrong behavior in
  kilo (and presumably in Liberty.)

  This has a VERY STRONG OPERATIONS IMPACT due to the difficulty in
  debugging a cross region error like this (it's extraordinarily
  opaque).

  The simplest reproducer is to have a multi region openstack (for both
  nova and cinder) and create instances and volumes in both regions. In
  each region, try and attach a volume to a running instance. One side
  (whichever is uuid-numerically-lower) will likely succeed and the one
  that is numerically higher will fail (as it will try to find the
  volume in the wrong region).

  The workaround is straightforward: set 
  [cinder]
  os_region_name=REGION
  in the nova.conf file but as this is a behavior change, it should have been 
called out in the release notes.

  Running:
  openstack kilo 2015.1.0 (nova and cinder) in a multi (well two) region env.

  distro info:

  [STAGING] medberry@chrcnc02-control-003:~$ dpkg -l \*nova\* \*cinder\* 
\*keystone\* \*neutron\* |cat |grep ii
  ii  cinder-api   1:2015.1.0-0ubuntu1~cloud0   
 all  Cinder storage service - API server
  ii  cinder-backup1:2015.1.0-0ubuntu1~cloud0   
 all  Cinder storage service - Scheduler server
  ii  cinder-common1:2015.1.0-0ubuntu1~cloud0   
 all  Cinder storage service - common files
  ii  cinder-scheduler 1:2015.1.0-0ubuntu1~cloud0   
 all  Cinder storage service - Scheduler server
  ii  cinder-volume1:2015.1.0-0ubuntu1~cloud0   
 all  Cinder storage service - Volume server
  ii  neutron-common   1:2015.1.0-0ubuntu1~cloud0   
 all  Neutron is a virtual network service for Openstack - common
  ii  neutron-dhcp-agent   1:2015.1.0-0ubuntu1~cloud0   
 all  Neutron is a virtual network service for Openstack - DHCP agent
  ii  neutron-l3-agent 1:2015.1.0-0ubuntu1~cloud0   
 all  Neutron is a virtual network service for Openstack - l3 agent
  ii  neutron-metadata-agent   1:2015.1.0-0ubuntu1~cloud0   
 all  Neutron is a virtual network service for Openstack - metadata 
agent
  ii  neutron-plugin-ml2   1:2015.1.0-0ubuntu1~cloud0   
 all  Neutron is a virtual network service for Openstack - ML2 plugin
  ii  neutron-plugin-openvswitch-agent 1:2015.1.0-0ubuntu1~cloud0   
 all  Neutron is a virtual network service for Openstack - Open vSwitch 
plugin agent
  ii  neutron-server   1:2015.1.0-0ubuntu1~cloud0   
 all  Neutron is a virtual network service for Openstack - server
  ii  nova-api 1:2015.1.0-0ubuntu1.1~cloud0 
 all  OpenStack Compute - API frontend
  ii  nova-cert1:2015.1.0-0ubuntu1.1~cloud0 
 all  OpenStack Compute - certificate management
  ii  nova-common  1:2015.1.0-0ubuntu1.1~cloud0 
 all  OpenStack Compute - common files
  ii  nova-conductor   1:2015.1.0-0ubuntu1.1~cloud0 
 all  OpenStack Compute - conductor service
  ii  nova-consoleauth 1:2015.1.0-0ubuntu1.1~cloud0 
 all  OpenStack Compute - Console Authenticator
  ii  nova-novncproxy  1:2015.1.0-0ubuntu1.1~cloud0 
 all  OpenStack Compute - NoVNC proxy
  ii  nova-objectstore 1:2015.1.0-0ubuntu1.1~cloud0 
 all  OpenStack Compute - object store
  ii  nova-scheduler   1:2015.1.0-0ubuntu1.1~cloud0 
 all  OpenStack Compute - virtual machine scheduler
  ii  python-cinder1:2015.1.0-0ubuntu1~cloud0   
 all  Cinder Python libraries
  ii  python-cinderclient  1:1.1.1-0ubuntu1~cloud0  
 all  python bindings to the OpenStack Volume API
  ii  python-keystone  1:2015.1.0-0ub

[Yahoo-eng-team] [Bug 1469661] Re: 'volumes_attached' included the volume_id which is actually available

2015-10-04 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1469661

Title:
   'volumes_attached' included the volume_id which is actually available

Status in OpenStack Compute (nova):
  Expired

Bug description:
  [Env]
  Ubuntu 14.04
  OpenStack Icehouse

  
  [Description]
  I am using pdb to debug the nova attach_volume operation. Because the 
command timed out, the volume failed to be attached to the instance; 
however, in the nova db the attachment device is already recorded rather 
than rolled back, which is completely wrong from the user's perspective.

  For example, nova instance '1' shows "os-extended-
  volumes:volumes_attached | [{"id":
  "3c8205b9-5066-42ea-9180-601fac50a08e"}, {"id":
  "3c8205b9-5066-42ea-9180-601fac50a08e"}, {"id":
  "3c8205b9-5066-42ea-9180-601fac50a08e"}, {"id":
  "3c8205b9-5066-42ea-9180-601fac50a08e"}] |"  even if the volume
  3c8205b9-5066-42ea-9180-601fac50a08e is actually available.

  I am concerned there are other situations where nova attach_volume would
  fail mid-procedure and have this issue as well; would it be better to
  delay the db persistence step until after the device is really
  attached?

  ubuntu@xianghui-bastion:~/openstack-charm-testing/test$ nova list
  
+--+--+++-+--+
  | ID   | Name | Status | Task State | Power 
State | Networks |
  
+--+--+++-+--+
  | d58a3b25-0434-4b92-a3a8-8b4188c611c3 | 1| ACTIVE | -  | Running 
| private=192.168.21.4 |
  
+--+--+++-+--+
  ubuntu@xianghui-bastion:~/openstack-charm-testing/test$ nova volume-list
  
+--+---+--+--+-+-+
  | ID   | Status| Display Name | Size | 
Volume Type | Attached to |
  
+--+---+--+--+-+-+
  | 3c8205b9-5066-42ea-9180-601fac50a08e | available | test | 2| 
None| |
  
+--+---+--+--+-+-+
  ubuntu@xianghui-bastion:~/openstack-charm-testing/test$ nova show 1
  
+--+---
  
---+
  | Property | Value
 

 |
  
+--+---
  
---+
  | OS-DCF:diskConfig| MANUAL   
 

 |
  | OS-EXT-AZ:availability_zone  | nova 
 

 |
  | OS-EXT-SRV-ATTR:host | juju-xianghui-machine-12 
 

 |
  | OS-EXT-SRV-ATTR:hypervisor_hostname  | 
juju-xianghui-machine-12.openstacklocal 
  

 |
  | OS-EXT-SRV-ATTR:instance_name| instance-0002
 

 |
  | OS-EXT-STS:power_state   | 1
 

 |
  | OS-EXT-STS:task_state| -
 

[Yahoo-eng-team] [Bug 1389690] Re: Unable to ping router

2015-10-04 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1389690

Title:
  Unable to ping router

Status in neutron:
  Expired

Bug description:
  I am unable to ping my router, and when I go to the dashboard I see my
  external network state is DOWN. I followed all the steps in the
  OpenStack documentation. I am using CentOS 7. What is the issue?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1389690/+subscriptions



[Yahoo-eng-team] [Bug 1481540] Re: no interface created in DHCP namespace for second subnet (dhcp enabled) on a network

2015-10-05 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1481540

Title:
  no interface created in DHCP namespace for second subnet (dhcp
  enabled) on a network

Status in neutron:
  Expired

Bug description:
  Steps to reproduce:

  On devstack:
  1. create a network.
  2. create a subnet (dhcp enabled); we can see DHCP namespace with one 
interface for the subnet.
  3. create another subnet (dhcp enabled).
  We do not see another interface for this subnet in DHCP namespace.

  
  LOGS:
  ==

  stack@ritesh05:/opt/stack/neutron$ neutron net-list
  +--------------------------------------+---------+--------------------------------------------------+
  | id                                   | name    | subnets                                          |
  +--------------------------------------+---------+--------------------------------------------------+
  | bff21881-abcf-4c68-afd7-fae081e87f9c | public  |                                                  |
  | beaabd4a-8211-41f5-906d-d685c1ee6b10 | private | 0d8d834c-5806-453e-852e-4382f53d956c 20.0.0.0/24 |
  |                                      |         | f11c53f8-f254-4c88-b6dd-3ba3fec68329 10.0.0.0/24 |
  +--------------------------------------+---------+--------------------------------------------------+
  stack@ritesh05:/opt/stack/neutron$ neutron subnet-list
  +--------------------------------------+-----------------+-------------+--------------------------------------------+
  | id                                   | name            | cidr        | allocation_pools                           |
  +--------------------------------------+-----------------+-------------+--------------------------------------------+
  | 0d8d834c-5806-453e-852e-4382f53d956c | private-subnet2 | 20.0.0.0/24 | {"start": "20.0.0.2", "end": "20.0.0.254"} |
  | f11c53f8-f254-4c88-b6dd-3ba3fec68329 | private-subnet  | 10.0.0.0/24 | {"start": "10.0.0.2", "end": "10.0.0.254"} |
  +--------------------------------------+-----------------+-------------+--------------------------------------------+
  stack@ritesh05:/opt/stack/neutron$

  stack@ritesh05:/opt/stack/neutron$ sudo ip netns exec qdhcp-beaabd4a-8211-41f5-906d-d685c1ee6b10 ifconfig
  loLink encap:Local Loopback
inet addr:127.0.0.1  Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING  MTU:65536  Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

  tapc97de8da-97 Link encap:Ethernet  HWaddr fa:16:3e:a2:b8:02
inet addr:10.0.0.2  Bcast:10.0.0.255  Mask:255.255.255.0
inet6 addr: fe80::f816:3eff:fea2:b802/64 Scope:Link
UP BROADCAST RUNNING  MTU:1500  Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B)  TX bytes:738 (738.0 B)

  stack@ritesh05:/opt/stack/neutron$

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1481540/+subscriptions



[Yahoo-eng-team] [Bug 1503187] [NEW] Update glance status image with deactivate status

2015-10-06 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Since Kilo, Glance has supported deactivating images, and this image status
needs to be added to the state diagram documented here:
http://docs.openstack.org/developer/glance/statuses.html
The updated image can be found here:
https://github.com/openstack/glance/blob/master/doc/source/images_src/image_status_transition.png

** Affects: glance
 Importance: Undecided
 Status: New

-- 
Update glance status image with deactivate status
https://bugs.launchpad.net/bugs/1503187
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to Glance.



[Yahoo-eng-team] [Bug 1419047] Re: Error on nova compute on Power

2015-10-07 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1419047

Title:
  Error on nova compute on Power

Status in OpenStack Compute (nova):
  Expired

Bug description:
  I'm trying to deploy OpenStack in a Power 8 server on a single node.

  I've created 1 VM for MAAS, 1 VM for juju bootstrap and 4 VMs to use
  them as compute, ceph and to hold all the OpenStack services.

  I've deployed OpenStack and everything seemed to work fine, however, I
  don't see any hypervisor. When I check into the nova logs I find:

  2015-02-06 11:36:25.499 54731 TRACE nova.openstack.common.threadgroup 
libvirtError: XML error: Missing CPU model name
  2015-02-06 13:08:29.419 66757 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2885, in 
get_host_capabilities
  2015-02-06 13:08:29.419 66757 TRACE nova.openstack.common.threadgroup 
libvirt.VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1419047/+subscriptions



[Yahoo-eng-team] [Bug 1504165] [NEW] The Action Detach port/network from Qos-Policy is not working

2015-10-08 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

After creating a QoS policy and a QoS rule and attaching them to a port,
I tried to detach the policy:

# neutron port-update  --no-qos-policy

From sriov-nic-agent.log:

neutron.plugins.ml2.drivers.mech_sriov.agent.eswitch_manager [req-
b526991f-0632-4d9d-9b61-638414d7af1d - - - - -] VF with PCI slot
:04:00.7 is already assigned; skipping reset maximum rate

The bandwidth limit on the VFs did not change.

My Environment:

- 2 Servers (All in one and Compute).
- OS : CentOS Linux release 7.1.1503 (Core)
- Kernel version : 3.10.0-123.el7.x86_64
- Openstack Version : Trunk

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
The Action Detach port/network from Qos-Policy is not working
https://bugs.launchpad.net/bugs/1504165
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.



[Yahoo-eng-team] [Bug 1456871] Re: objects.InstanceList.get_all(context, ['metadata', 'system_metadata']) return error can't locate strategy for %s %s" % (cls, key)

2015-10-09 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1456871

Title:
  objects.InstanceList.get_all(context, ['metadata','system_metadata'])
  return error can't locate strategy for %s %s" % (cls, key)

Status in OpenStack Compute (nova):
  Expired

Bug description:
  When invoking

  objects.InstanceList.get_all(context, ['metadata', 'system_metadata'])

  I found that the nova/objects/instance.py function
  _expected_cols(expected_attrs) will return the list ['metadata',
  'system_metadata', 'extra', 'extra.flavor'], and the DB query then
  throws the error: can't locate strategy for (('lazy', 'joined'),)
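The column expansion described above can be modelled in a few lines (a simplified sketch, not the real nova/objects/instance.py code): requesting metadata or system_metadata also pulls in the joined 'extra' columns, and it is that joined load which trips the SQLAlchemy loader-strategy error.

```python
def expected_cols(expected_attrs):
    # Simplified model of _expected_cols() as described in this report:
    # metadata/system_metadata drag in the joined 'extra' columns.
    cols = list(expected_attrs)
    if "metadata" in cols or "system_metadata" in cols:
        cols += ["extra", "extra.flavor"]
    return cols

assert expected_cols(["metadata", "system_metadata"]) == [
    "metadata", "system_metadata", "extra", "extra.flavor"]
```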

  Could anyone help have a look? Thanks!

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1456871/+subscriptions



[Yahoo-eng-team] [Bug 1439870] Re: Fixed IPs not being recorded in database

2015-10-10 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1439870

Title:
  Fixed IPs not being recorded in database

Status in OpenStack Compute (nova):
  Expired

Bug description:
  When new VMs are spawned after deleting previous VMs, the new VMs
  obtain completely new IPs and the old ones are not recycled for reuse.
  I looked into the MySQL database to see where IPs might be stored and
  accessed by OpenStack to determine what the next in line should be,
  but didn't manage to find any IP information there. Has the location
  of this storage changed out of the fixed_ips table? Currently, this
  table is entirely empty:

  MariaDB [nova]> select * from fixed_ips;
  Empty set (0.00 sec)

  despite having many vms running on two different networks:

   mysql -e "select uuid, deleted, power_state, vm_state, display_name, host from nova.instances;"
  +--------------------------------------+---------+-------------+----------+--------------+--------------+
  | uuid                                 | deleted | power_state | vm_state | display_name | host         |
  +--------------------------------------+---------+-------------+----------+--------------+--------------+
  | 14600536-7ce1-47bf-8f01-1a184edb5c26 |       0 |           4 | error    | Ctest        | r001ds02.pcs |
  | abb38321-5b74-4f36-b413-a057897b8579 |       0 |           4 | stopped  | cent7        | r001ds02.pcs |
  | 31cbb003-42d0-468a-be4d-81f710e29aef |       0 |           1 | active   | centos7T2    | r001ds02.pcs |
  | 4494fd8d-8517-4f14-95e6-fe5a6a64b331 |       0 |           1 | active   | selin_test   | r001ds02.pcs |
  | 25505dc4-2ba9-480d-ba5a-32c2e91fc3c9 |       0 |           1 | active   | 2NIC         | r001ds02.pcs |
  | baff8cef-c925-4dfb-ae90-f5f167f32e83 |       0 |           4 | stopped  | kepairtest   | r001ds02.pcs |
  | 317e1fbf-664d-43a8-938a-063fd53b801d |       0 |           1 | active   | test         | r001ds02.pcs |
  | 3a8c1a2d-1a4b-4771-8e62-ab1982759ecd |       0 |           1 | active   | 3            | r001ds02.pcs |
  | c4b2175a-296c-400c-bd54-16df3b4ca91b |       0 |           1 | active   | 344          | r001ds02.pcs |
  | ac02369e-b426-424d-8762-71ca93eacd0c |       0 |           4 | stopped  | 333          | r001ds02.pcs |
  | 504d9412-e2a3-492a-8bc1-480ce6249f33 |       0 |           1 | active   | libvirt      | r001ds02.pcs |
  | cc9f6f06-2ba6-4ec2-94f7-3a795aa44cc4 |       0 |           1 | active   | arger        | r001ds02.pcs |
  | 0a247dbf-58b4-4244-87da-510184a92491 |       0 |           1 | active   | arger2       | r001ds02.pcs |
  | 4cb85bbb-7248-4d46-a9c2-fee312f67f96 |       0 |           1 | active   | gh           | r001ds02.pcs |
  | adf9de81-3986-4d73-a3f1-a29d289c2fe3 |       0 |           1 | active   | az           | r001ds02.pcs |
  | 8396eabf-d243-4424-8ec8-045c776e7719 |       0 |           1 | active   | sdf          | r001ds02.pcs |
  | 947905b5-7a2c-4afb-9156-74df8ed699c5 |      55 |           1 | deleted  | yh           | r001ds02.pcs |
  | f690d7ed-f8d5-45a1-b679-e79ea4d3366f |      56 |           1 | deleted  | tr           | r001ds02.pcs |
  | dd1aa5b1-c0ac-41f6-a6de-05be8963242f |      57 |           1 | deleted  | ig           | r001ds02.pcs |
  | 42688a7d-2ba2-4d5a-973f-e87f87c32326 |      58 |           1 | deleted  | td           | r001ds02.pcs |
  | 7c1014d8-237d-48f0-aa77-3aa09fff9101 |      59 |           1 | deleted  | td2          | r001ds02.pcs |
  +--------------------------------------+---------+-------------+----------+--------------+--------------+

  I am using neutron networking with OVS.  It is my understanding that
  the MySQL SQLAlchemy layer is set up to leave old information accessible
  in MySQL, but deleting the associated information manually doesn't seem
  to make a difference to the fixed_ips issue I am experiencing. Are
  there solutions for this?

  nova --version : 2.20.0 ( 2014.2.1-1.el7 running on centOS7, epel-juno
  release)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1439870/+subscriptions



[Yahoo-eng-team] [Bug 1505354] [NEW] oslo.db dependency changes breaks testing

2015-10-12 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

A recent oslo.db update removed testresources from its requirements. We
must now require it ourselves.

https://bugs.launchpad.net/nova/+bug/1503501

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
oslo.db dependency changes breaks testing
https://bugs.launchpad.net/bugs/1505354
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.



[Yahoo-eng-team] [Bug 1459828] Re: keystone-all crashes when ca_certs is not defined in conf

2015-10-12 Thread Launchpad Bug Tracker
[Expired for Keystone because there has been no activity for 60 days.]

** Changed in: keystone
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1459828

Title:
  keystone-all crashes when ca_certs is not defined in conf

Status in Keystone:
  Expired
Status in Keystone icehouse series:
  Won't Fix

Bug description:
  When [ssl] ca_certs parameter is commented on keystone.conf, ssl
  module try to load the default ca_cert file
  (/etc/keystone/ssl/certs/ca.pem) and raises an IOError exception
  because it didn't find the file.

  This happens running on Python 2.7.9.

  I have a keystone cluster running on Python 2.7.7, with the very same
  keystone.conf file, and that crash doesn't happen.

  If any further information is required, don't hesitate in contacting
  me.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1459828/+subscriptions



[Yahoo-eng-team] [Bug 1492034] [NEW] nova network FlatDHCP (kilo) on XenServer 6.5 ebtables rules

2015-10-13 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

https://ask.openstack.org/en/question/62349/openstack-xenserver-65-network-legacy-flat-dhcp-not-working/

On every instance creation a new rule is prepended to ebtables that drops
ARP packets from the bridge input. This causes a routing problem. Please
look at the details in the AskOpenStack link above.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
nova network FlatDHCP (kilo) on XenServer 6.5  ebtables rules 
https://bugs.launchpad.net/bugs/1492034
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).



[Yahoo-eng-team] [Bug 1457517] Re: Unable to boot from volume when flavor disk too small

2015-10-14 Thread Launchpad Bug Tracker
This bug was fixed in the package nova - 1:2015.1.1-0ubuntu2

---
nova (1:2015.1.1-0ubuntu2) vivid; urgency=medium

  [ Corey Bryant ]
  * d/rules: Prevent dh_python2 from guessing dependencies.

  [ Liang Chen ]
  * d/p/not-check-disk-size.patch: Fix booting from volume error
when flavor disk too small (LP: #1457517)

 -- Corey Bryant   Thu, 13 Aug 2015 15:13:43
-0400

** Changed in: nova (Ubuntu Vivid)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1457517

Title:
  Unable to boot from volume when flavor disk too small

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Vivid:
  Fix Released

Bug description:
  [Impact]

   * Without the backport, booting from volume requires flavor disk size
  larger than volume size, which is wrong. This patch skips flavor disk
  size checking when booting from volume.

  [Test Case]

   * 1. create a bootable volume
 2. boot from this bootable volume with a flavor that has disk size smaller 
than the volume size
 3. error should be reported complaining disk size too small
 4. apply this patch
 5. boot from that bootable volume with a flavor that has disk size smaller 
than the volume size again
 6. boot should succeed

  [Regression Potential]

   * none

  
  Version: 1:2015.1.0-0ubuntu1~cloud0 on Ubuntu 14.04

  I attempt to boot an instance from a volume:

  nova boot --nic net-id=[NET ID] --flavor v.512mb --block-device source=volume,dest=volume,id=[VOLUME ID],bus=virtio,device=vda,bootindex=0,shutdown=preserve vm

  This results in nova-api raising a FlavorDiskTooSmall exception in the
  "_check_requested_image" function in compute/api.py. However,
  according to [1], the root disk limit should not apply to volumes.

  [1] http://docs.openstack.org/admin-guide-cloud/content/customize-flavors.html
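The fix described under [Impact] can be sketched as follows (a minimal illustration under assumed names, not Nova's actual _check_requested_image code): the flavor root-disk check is simply skipped when the boot is volume-backed.

```python
class FlavorDiskTooSmall(Exception):
    pass

def check_requested_disk(flavor_root_gb, image_size_gb, is_volume_backed):
    if is_volume_backed:
        # Volume size is managed by Cinder; the flavor root disk is irrelevant.
        return
    if flavor_root_gb and image_size_gb > flavor_root_gb:
        raise FlavorDiskTooSmall()

# A volume-backed boot passes even when the image behind the volume
# is far larger than the flavor's 1 GB root disk.
check_requested_disk(flavor_root_gb=1, image_size_gb=20, is_volume_backed=True)
```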

  Log (first line is debug output I added showing that it's looking at
  the image that the volume was created from):

  2015-05-21 10:28:00.586 25835 INFO nova.compute.api 
[req-1fb882c7-07ae-4c2b-86bd-3d174602d0ae f438b80d215c42efb7508c59dc80940c 
8341c85ad9ae49408fa25074adba0480 - - -] image: {'min_disk': 0, 'status': 
'active', 'min_ram': 0, 'properties': {u'container_format': u'bare', 
u'min_ram': u'0', u'disk_format': u'qcow2', u'image_name': u'Ubuntu 14.04 
64-bit', u'image_id': u'cf0dffef-30ef-4032-add0-516e88048d85', 
u'libvirt_cpu_mode': u'host-passthrough', u'checksum': 
u'76a965427d2866f006ddd2aac66ed5b9', u'min_disk': u'0', u'size': u'255524864'}, 
'size': 21474836480}
  2015-05-21 10:28:00.587 25835 INFO nova.api.openstack.wsgi 
[req-1fb882c7-07ae-4c2b-86bd-3d174602d0ae f438b80d215c42efb7508c59dc80940c 
8341c85ad9ae49408fa25074adba0480 - - -] HTTP exception thrown: Flavor's disk is 
too small for requested image.

  Temporary solution: I have a special flavor for volume-backed instances,
  so I just set the root disk on those to 0, but this doesn't work if
  volumes are used with other flavors.
  Reproduce: create a flavor with a 1 GB root disk, then try to boot an
  instance from a volume created from an image that is larger than 1 GB.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1457517/+subscriptions



[Yahoo-eng-team] [Bug 1506234] [NEW] Ironic virt driver in Nova calls destroy unnecessarily if spawn fails

2015-10-14 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

To give some context, calling destroy [5] was added as a bug fix [1]. It
was required back then because Nova compute was not calling destroy on
catching the exception [2]. But now, Nova compute catches all exceptions
that happen during spawn and calls destroy (_shutdown_instance) [3].

Since Nova compute already takes care of destroying the instance before
rescheduling, we shouldn't have to call destroy separately in the driver.
I confirmed in the logs that destroy gets called twice if there is any
failure during _wait_for_active() [4] or a timeout happens [5].


[1] https://review.openstack.org/#/c/99519/
[2] 
https://github.com/openstack/nova/blob/2014.1.5/nova/compute/manager.py#L2116-L2118
[3] 
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2171-L2191
[4] 
https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L431-L462
[5] 
https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L823-L836
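The double cleanup described above can be modelled with a toy driver (a hedged sketch only; the real flow spans nova/compute/manager.py and the Ironic virt driver, and these class and function names are invented):

```python
class ToyDriver:
    """Stand-in for the Ironic virt driver described above."""
    def __init__(self):
        self.destroy_calls = 0

    def spawn(self, instance):
        # Driver-side cleanup (the redundant call): on a spawn failure
        # the driver destroys the node itself before re-raising.
        try:
            raise RuntimeError("timed out in _wait_for_active")
        except Exception:
            self.destroy(instance)
            raise

    def destroy(self, instance):
        self.destroy_calls += 1

def build_instance(driver, instance):
    # Compute-manager-side cleanup: spawn failures are caught and the
    # instance is shut down (_shutdown_instance) before rescheduling.
    try:
        driver.spawn(instance)
    except Exception:
        driver.destroy(instance)

driver = ToyDriver()
build_instance(driver, "uuid-1234")
assert driver.destroy_calls == 2  # destroy ran twice for one failure
```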

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: ironic
-- 
Ironic virt driver in Nova calls destroy unnecessarily if spawn fails
https://bugs.launchpad.net/bugs/1506234
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).



[Yahoo-eng-team] [Bug 1413610] Re: Nova volume-update leaves volumes stuck in attaching/detaching

2015-10-15 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1413610

Title:
  Nova volume-update leaves volumes stuck in attaching/detaching

Status in Cinder:
  Incomplete
Status in OpenStack Compute (nova):
  Expired

Bug description:
  There is a problem with the nova command 'volume-update' that leaves
  cinder volumes stuck in the states 'attaching' and 'detaching'.

  If the nova command 'volume-update' is used by a non-admin user, the
  command fails and the volumes referenced in the command are left in
  the states 'attaching' and 'detaching'.

  
  For example, if a non-admin user runs the command
   $ nova volume-update d39dc7f2-929d-49bb-b22f-56adb3f378c7 f0c3ea8f-c00f-4db8-aa20-e2dc6a1ddc9b 59b0cf66-67c8-4041-a505-78000b9c71f6

   Will result in the two volumes stuck like this:

   $ cinder list
   +--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
   | ID                                   | Status    | Display Name | Size | Volume Type | Bootable | Attached to                          |
   +--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
   | 59b0cf66-67c8-4041-a505-78000b9c71f6 | attaching | vol2         | 1    | None        | false    |                                      |
   | f0c3ea8f-c00f-4db8-aa20-e2dc6a1ddc9b | detaching | vol1         | 1    | None        | false    | d39dc7f2-929d-49bb-b22f-56adb3f378c7 |
   +--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+

  
  And the following in the cinder-api log:

  
  2015-01-21 11:00:03.969 13588 DEBUG keystonemiddleware.auth_token [-] 
Received request from user: user_id None, project_id None, roles None service: 
user_id None, project_id None, roles None __call__ 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/keystonemiddleware/auth_token.py:746
  2015-01-21 11:00:03.970 13588 DEBUG routes.middleware 
[req-d08bc456-14e9-4f97-9f4e-a9f87e7d6add a00b77059c8c400a4ff9a8496b2d 
d40e3207e34a4b558bf2d58bd3fe268a - - -] Matched POST 
/d40e3207e34a4b558bf2d58bd3fe268a/volumes/f0c3ea8f-c00f-4db8-aa20-e2dc6a1ddc9b/action
 __call__ 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/routes/middleware.py:100
  2015-01-21 11:00:03.971 13588 DEBUG routes.middleware 
[req-d08bc456-14e9-4f97-9f4e-a9f87e7d6add a00b77059c8c400a4ff9a8496b2d 
d40e3207e34a4b558bf2d58bd3fe268a - - -] Route path: 
'/{project_id}/volumes/:(id)/action', defaults: {'action': u'action', 
'controller': } 
__call__ 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/routes/middleware.py:102
  2015-01-21 11:00:03.971 13588 DEBUG routes.middleware 
[req-d08bc456-14e9-4f97-9f4e-a9f87e7d6add a00b77059c8c400a4ff9a8496b2d 
d40e3207e34a4b558bf2d58bd3fe268a - - -] Match dict: {'action': u'action', 
'controller': , 
'project_id': u'd40e3207e34a4b558bf2d58bd3fe268a', 'id': 
u'f0c3ea8f-c00f-4db8-aa20-e2dc6a1ddc9b'} __call__ 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/routes/middleware.py:103
  2015-01-21 11:00:03.972 13588 INFO cinder.api.openstack.wsgi 
[req-d08bc456-14e9-4f97-9f4e-a9f87e7d6add a00b77059c8c400a4ff9a8496b2d 
d40e3207e34a4b558bf2d58bd3fe268a - - -] POST 
http://192.0.2.24:8776/v1/d40e3207e34a4b558bf2d58bd3fe268a/volumes/f0c3ea8f-c00f-4db8-aa20-e2dc6a1ddc9b/action
  2015-01-21 11:00:03.972 13588 DEBUG cinder.api.openstack.wsgi 
[req-d08bc456-14e9-4f97-9f4e-a9f87e7d6add a00b77059c8c400a4ff9a8496b2d 
d40e3207e34a4b558bf2d58bd3fe268a - - -] Action body: 
{"os-migrate_volume_completion": {"new_volume": 
"59b0cf66-67c8-4041-a505-78000b9c71f6", "error": false}} get_method 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/cinder/api/openstack/wsgi.py:1010
  2015-01-21 11:00:03.973 13588 INFO cinder.api.openstack.wsgi 
[req-d08bc456-14e9-4f97-9f4e-a9f87e7d6add a00b77059c8c400a4ff9a8496b2d 
d40e3207e34a4b558bf2d58bd3fe268a - - -] 
http://192.0.2.24:8776/v1/d40e3207e34a4b558bf2d58bd3fe268a/volumes/f0c3ea8f-c00f-4db8-aa20-e2dc6a1ddc9b/action
 returned with HTTP 403
  2015-01-21 11:00:03.975 13588 INFO eventlet.wsgi.server 
[req-d08bc456-14e9-4f97-9f4e-a9f87e7d6add a00b77059c8c400a4ff9a8496b2d 
d40e3207e34a4b558bf2d58bd3fe268a - - -] 127.0.0.1 - - [21/Jan/2015 11:00:03] 
"POST 
/v1/d40e3207e34a4b558bf2d58bd3fe268a/volumes/f0c3ea8f-c00f-4db8-aa20-e2dc6a1ddc9b/action
 HTTP/1.1" 403 429 0.123613


  
  The problem is that the nova policy.json file allows a non-admin user to
  run the command 'volume-update', but the cinder policy.json file requires
  the admin role to run the action os-migrate

[Yahoo-eng-team] [Bug 1507050] [NEW] LBaaS 2.0: Operating Status Tempest Test Changes

2015-10-16 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

SUMMARY:
A gate job for Neutron-LBaaS failed today (20141016). It was identified that
the failure occurred due to the introduction of new operating statuses,
namely "DEGRADED".

Per the following document, the valid types for operating_status are
('ONLINE', 'OFFLINE', 'DEGRADED', 'ERROR'):
http://specs.openstack.org/openstack/neutron-specs/specs/kilo/lbaas-api-and-objmodel-improvement.html


LOGS/STACKTRACE:
refer: 
http://logs.openstack.org/75/230875/12/gate/gate-neutron-lbaasv2-dsvm-listener/18155a8/console.html#_2015-10-15_23_12_27_433

Captured traceback:
2015-10-15 23:12:27.507 | 2015-10-15 23:12:27.462 | ~~~
2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.463 | Traceback (most recent 
call last):
2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.464 |   File 
"neutron_lbaas/tests/tempest/v2/api/test_listeners_admin.py", line 113, in 
test_create_listener_missing_tenant_id
2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.465 | 
listener_ids=[self.listener_id, admin_listener_id])
2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.466 |   File 
"neutron_lbaas/tests/tempest/v2/api/base.py", line 288, in _check_status_tree
2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.467 | assert 'ONLINE' == 
load_balancer['operating_status']
2015-10-15 23:12:27.509 | 2015-10-15 23:12:27.469 | AssertionError


RECOMMENDED ACTION:
1.  Modify the method _check_status_tree in
neutron_lbaas/tests/tempest/v2/api/base.py to accept 'DEGRADED' as a valid
type.
2.  Add a wait-for-status poller to check that a 'DEGRADED' operating_status
transitions to 'ONLINE'. A timeout exception should be thrown if we do not
reach that state within some number of seconds.
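Both recommended actions can be sketched together (a hedged sketch; the real tempest helpers in neutron_lbaas differ in structure, and these function names are invented):

```python
import time

VALID_OPERATING_STATUSES = ("ONLINE", "OFFLINE", "DEGRADED", "ERROR")

def check_operating_status(status):
    # Action 1: accept DEGRADED as a valid operating_status instead of
    # asserting status == 'ONLINE' outright.
    assert status in VALID_OPERATING_STATUSES, status

def wait_for_online(get_status, timeout=60, interval=1):
    # Action 2: poll until DEGRADED transitions to ONLINE, raising a
    # timeout error if it never does.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        check_operating_status(status)
        if status == "ONLINE":
            return status
        time.sleep(interval)
    raise TimeoutError("operating_status never reached ONLINE")

statuses = iter(["DEGRADED", "DEGRADED", "ONLINE"])
assert wait_for_online(lambda: next(statuses), timeout=5, interval=0) == "ONLINE"
```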

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas-2.0
-- 
LBaaS 2.0: Operating Status Tempest Test Changes
https://bugs.launchpad.net/bugs/1507050
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.



[Yahoo-eng-team] [Bug 1486354] Re: DHCP namespace per VM

2015-10-18 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1486354

Title:
  DHCP namespace per VM

Status in neutron:
  Expired

Bug description:
  Problem Description
  ===

  How many namespaces can a linux host have without performance penalty?

  With a test, we found the Linux box slows down significantly with
  about 300 namespaces.


  Proposed Change
  ===

  Add a configuration item to allow the DHCP agent to create one
  namespace per VM.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1486354/+subscriptions



[Yahoo-eng-team] [Bug 1450471] Re: live-migration fails on shared storage with "Cannot block migrate"

2015-10-18 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1450471

Title:
  live-migration fails on shared storage with "Cannot block migrate"

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Kilo RC from Ubuntu Cloudarchive for trusty. Patches
  https://review.openstack.org/174307 and
  https://review.openstack.org/174059 are applied manually without which
  live-migration fails due to parameter changes.

  KVM, libvirt, 4 compute hosts, and a 9-host Ceph cluster for shared
  storage. Newly created instances work fine on all computes. When
  initiating a live migration either from Horizon or from the CLI, the
  status switches back from MIGRATING very quickly, but the VM stays on
  the original host. The message
  MigrationError: Migration error: Cannot block migrate instance 53d225ac-1915-4cff-8f15-54b5c66c20a3 with mapped volumes
  is found in the compute host's log.

  The block-migration option is explicitly not set; setting it for a try
  causes a different error message earlier.
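One plausible reading of the failure (hedged: `check_can_live_migrate` and its arguments are invented names for illustration, not the libvirt driver's API) is that the driver refuses block migration for instances with attached volumes, yet the block-migration path is being selected even though shared storage should allow a plain live migration:

```python
def check_can_live_migrate(block_migration, has_mapped_volumes, shared_storage):
    # Toy model of the migration pre-checks described in this report.
    if block_migration and has_mapped_volumes:
        raise RuntimeError("Cannot block migrate instance with mapped volumes")
    if not block_migration and not shared_storage:
        raise RuntimeError("migration without shared storage requires block migration")
    return "block-migration" if block_migration else "live-migration"

# With Ceph-backed shared storage and attached volumes, the plain
# live-migration path should be chosen without error.
assert check_can_live_migrate(False, True, True) == "live-migration"
```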

  Pkg-Versions:
  ii  nova-common           1:2015.1~rc1-0ubuntu1~cloud0  all  OpenStack Compute - common files
  ii  nova-compute          1:2015.1~rc1-0ubuntu1~cloud0  all  OpenStack Compute - compute node base
  ii  nova-compute-kvm      1:2015.1~rc1-0ubuntu1~cloud0  all  OpenStack Compute - compute node (KVM)
  ii  nova-compute-libvirt  1:2015.1~rc1-0ubuntu1~cloud0  all  OpenStack Compute - compute node libvirt support

  
  Full log:

  
  2015-04-30 13:42:46.985 17619 ERROR nova.compute.manager 
[req-ecbf0446-079a-45db-9000-539f06a9e9e4 382e4cb7197e43bf9b11fc1c6fa9d692 
9984ba4fc07c475e84a109967a897e4e - - -] [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3] Pre live migration failed at compute04
  2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3] Traceback (most recent call last):
  2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 5217, in 
live_migration
  2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3] block_migration, disk, dest, 
migrate_data)
  2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/rpcapi.py", line 621, in 
pre_live_migration
  2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3] disk=disk, migrate_data=migrate_data)
  2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3]   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 156, in 
call
  2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3] retry=self.retry)
  2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3]   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, in 
_send
  2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3] timeout=timeout, retry=retry)
  2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3]   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 
350, in send
  2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3] retry=retry)
  2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3]   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 
341, in _send
  2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3] raise result
  2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3] MigrationError_Remote: Migration error: 
Cannot block migrate instance 53d225ac-1915-4cff-8f15-54b5c66c20a3 with mapped 
volumes
  2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3] Traceback (most recent call last):
  2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3] 
  2015-04-30 13:42:46.985 17619 TRACE nova.compute.manager [instance: 
53d225ac-1915-4cff-8f15-54b5c66c20a3]   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_rep

[Yahoo-eng-team] [Bug 1464377] Re: Keystone v2.0 api accepts tokens deleted with v3 api

2015-10-19 Thread Launchpad Bug Tracker
[Expired for Keystone because there has been no activity for 60 days.]

** Changed in: keystone
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1464377

Title:
  Keystone v2.0 api accepts tokens deleted with v3 api

Status in Keystone:
  Expired

Bug description:
  Keystone tokens that are deleted using the v3 api are still accepted by
  the v2 api. Steps to reproduce:

  1. Request a scoped token as a member of a tenant.
  2. Delete it using DELETE /v3/auth/tokens
  3. Request the tenants you can access with GET v2.0/tenants
  4. The token is accepted and keystone returns the list of tenants

  The token was a PKI token. Admin tokens appear to be deleted correctly.
  This could be a problem if a user's access needs to be revoked but they
  are still able to access v2 functions.
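
One plausible mechanism (an assumption, not confirmed by this report): PKI tokens can be validated offline from their signature alone, so a v2 code path that skips the revocation list will keep accepting a token that v3 has deleted. The identifiers below are illustrative, not Keystone's actual API.

```python
# Illustrative model of offline PKI validation that ignores revocations.
revoked = set()

def delete_token_v3(token_id):
    revoked.add(token_id)           # v3 DELETE records the revocation

def validate_v2_buggy(token_id):
    return True                     # offline check only: signature assumed valid

def validate_v2_fixed(token_id):
    return token_id not in revoked  # also consult the revocation list

delete_token_v3("tok-123")
print(validate_v2_buggy("tok-123"))  # True  -> deleted token still accepted
print(validate_v2_fixed("tok-123"))  # False -> revocation honored
```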

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1464377/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499869] Re: maas wily deployment to HP Proliant m400 arm64 server cartridge fails

2015-10-20 Thread Launchpad Bug Tracker
This bug was fixed in the package linux - 3.19.0-31.36

---
linux (3.19.0-31.36) vivid; urgency=low

  [ Luis Henriques ]

  * Release Tracking Bug
- LP: #1503703

  [ Andy Whitcroft ]

  * Revert "SAUCE: aufs3: mmap: Fix races in madvise_remove() and
sys_msync()"
- LP: #1503655

  [ Ben Hutchings ]

  * SAUCE: aufs3: mmap: Fix races in madvise_remove() and sys_msync()
- LP: #1503655
- CVE-2015-7312

linux (3.19.0-31.35) vivid; urgency=low

  [ Brad Figg ]

  * Release Tracking Bug
- LP: #1503005

  [ Ben Hutchings ]

  * SAUCE: aufs3: mmap: Fix races in madvise_remove() and sys_msync()
- CVE-2015-7312

  [ Craig Magina ]

  * [Config] Add XGENE_EDAC, EDAC_SUPPORT and EDAC_ATOMIC_SCRUB
- LP: #1494357

  [ John Johansen ]

  * SAUCE: (no-up) apparmor: fix mount not handling disconnected paths
- LP: #1496430

  [ Laurent Dufour ]

  * SAUCE: powerpc/hvsi: Fix endianness issues in the HVSI driver
- LP: #1499357

  [ Tim Gardner ]

  * [Config] CONFIG_RTC_DRV_XGENE=y for only arm64
- LP: #1499869

  [ Upstream Kernel Changes ]

  * Revert "sit: Add gro callbacks to sit_offload"
- LP: #1500493
  * ipmi/powernv: Fix minor locking bug
- LP: #1493017
  * mmc: sdhci-pci: set the clear transfer mode register quirk for O2Micro
- LP: #1472843
  * perf probe ppc: Fix symbol fixup issues due to ELF type
- LP: #1485528
  * perf probe ppc: Use the right prefix when ignoring SyS symbols on ppc
- LP: #1485528
  * perf probe ppc: Enable matching against dot symbols automatically
- LP: #1485528
  * perf probe ppc64le: Fix ppc64 ABIv2 symbol decoding
- LP: #1485528
  * perf probe ppc64le: Prefer symbol table lookup over DWARF
- LP: #1485528
  * perf probe ppc64le: Fixup function entry if using kallsyms lookup
- LP: #1485528
  * perf probe: Improve detection of file/function name in the probe
pattern
- LP: #1485528
  * perf probe: Ignore tail calls to probed functions
- LP: #1485528
  * seccomp: cap SECCOMP_RET_ERRNO data to MAX_ERRNO
- LP: #1496073
  * EDAC: Cleanup atomic_scrub mess
- LP: #1494357
  * arm64: Enable EDAC on ARM64
- LP: #1494357
  * MAINTAINERS: Add entry for APM X-Gene SoC EDAC driver
- LP: #1494357
  * Documentation: Add documentation for the APM X-Gene SoC EDAC DTS
binding
- LP: #1494357
  * EDAC: Add APM X-Gene SoC EDAC driver
- LP: #1494357
  * arm64: Add APM X-Gene SoC EDAC DTS entries
- LP: #1494357
  * EDAC, edac_stub: Drop arch-specific include
- LP: #1494357
  * NVMe: Fix blk-mq hot cpu notification
- LP: #1498778
  * blk-mq: Shared tag enhancements
- LP: #1498778
  * blk-mq: avoid access hctx->tags->cpumask before allocation
- LP: #1498778
  * x86/ldt: Make modify_ldt synchronous
- LP: #1500493
  * x86/ldt: Correct LDT access in single stepping logic
- LP: #1500493
  * x86/ldt: Correct FPU emulation access to LDT
- LP: #1500493
  * md: flush ->event_work before stopping array.
- LP: #1500493
  * ipv6: addrconf: validate new MTU before applying it
- LP: #1500493
  * virtio-net: drop NETIF_F_FRAGLIST
- LP: #1500493
  * RDS: verify the underlying transport exists before creating a
connection
- LP: #1500493
  * xen/gntdev: convert priv->lock to a mutex
- LP: #1500493
  * xen/gntdevt: Fix race condition in gntdev_release()
- LP: #1500493
  * PCI: Restore PCI_MSIX_FLAGS_BIRMASK definition
- LP: #1500493
  * USB: qcserial/option: make AT URCs work for Sierra Wireless
MC7305/MC7355
- LP: #1500493
  * USB: qcserial: Add support for Dell Wireless 5809e 4G Modem
- LP: #1500493
  * nfsd: Drop BUG_ON and ignore SECLABEL on absent filesystem
- LP: #1500493
  * usb: chipidea: ehci_init_driver is intended to call one time
- LP: #1500493
  * crypto: qat - Fix invalid synchronization between register/unregister
sym algs
- LP: #1500493
  * crypto: ixp4xx - Remove bogus BUG_ON on scattered dst buffer
- LP: #1500493
  * mfd: arizona: Fix initialisation of the PM runtime
- LP: #1500493
  * xen-blkfront: don't add indirect pages to list when !feature_persistent
- LP: #1500493
  * xen-blkback: replace work_pending with work_busy in
purge_persistent_gnt()
- LP: #1500493
  * usb: gadget: f_uac2: fix calculation of uac2->p_interval
- LP: #1500493
  * hwrng: core - correct error check of kthread_run call
- LP: #1500493
  * USB: sierra: add 1199:68AB device ID
- LP: #1500493
  * regmap: regcache-rbtree: Clean new present bits on present bitmap
resize
- LP: #1500493
  * target/iscsi: Fix double free of a TUR followed by a solicited NOPOUT
- LP: #1500493
  * rbd: fix copyup completion race
- LP: #1500493
  * md/raid1: extend spinlock to protect raid1_end_read_request against
inconsistencies
- LP: #1500493
  * target: REPORT LUNS should return LUN 0 even for dynamic ACLs
- LP: #1500493
  * MIPS: Fix sched_getaffinity with MT FPAFF enabled
- LP: #150

[Yahoo-eng-team] [Bug 1508384] [NEW] QoS proxy functions

2015-10-21 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

The current QoS API is structured so that any rule added to the API must
also be added to the neutron client.
I propose using proxy functions in neutron that determine which function
to call based on the rule type, retrieved via the rule_id or specified on
the command line. These proxy functions would take the rule_id or
rule_type, the policy_id, and a list of the remaining command-line
arguments, and forward them to the corresponding function for that rule.

This would allow new rules to be added to the QoS API without needing to
update the neutron client.

e.g.
replace:
qos-bandwidth-limit-rule-create 
with
qos-rule-create  

and

replace:
qos-bandwidth-limit-rule-update  
with
qos-rule-update  

Further discussion and ideas would be appreciated.
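
A minimal sketch of the proposed dispatch, assuming a simple mapping from rule type to handler; the rule-type names and handler functions are hypothetical, not existing neutron client code:

```python
# Hypothetical rule-type -> handler table; a real client would map each
# type to the function that builds and sends the corresponding request.
RULE_HANDLERS = {
    "bandwidth_limit": lambda policy_id, args: ("bandwidth_limit", policy_id, args),
    "dscp_marking": lambda policy_id, args: ("dscp_marking", policy_id, args),
}

def qos_rule_create(rule_type, policy_id, args):
    """Proxy function: look up the handler for rule_type and forward
    the policy id and remaining command-line arguments to it."""
    try:
        handler = RULE_HANDLERS[rule_type]
    except KeyError:
        raise ValueError("unknown QoS rule type: %s" % rule_type)
    return handler(policy_id, args)

print(qos_rule_create("bandwidth_limit", "pol-1", ["max-kbps=1000"]))
```

New rule types would then only need a new entry in the table, without changing the command surface of the client.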

** Affects: neutron
 Importance: Wishlist
 Status: New


** Tags: qos rfe
-- 
QoS proxy functions
https://bugs.launchpad.net/bugs/1508384
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.



[Yahoo-eng-team] [Bug 1465956] Re: nova rest api doesn't support force stop server

2015-10-24 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1465956

Title:
  nova rest api doesn't support force stop server

Status in OpenStack Compute (nova):
  Expired

Bug description:
  For the libvirt driver, Nova already implements both graceful and hard
  server stops; which one is used is controlled by the "clean_shutdown"
  flag of the nova.compute.api.API.stop(self, context, instance,
  do_cast=True, clean_shutdown=True) method.

  This method is called by the nova stop-server REST API, but the REST API
  does not expose the flag (it always sets "clean_shutdown" to True), so
  support should be added to let users choose how to stop a server.

  The OpenStack version I used is master and the hypervisor is KVM.
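
A sketch of how an API-level flag could select between the two stop paths; the function and return values here are hypothetical, not the actual nova REST layer (which routes through nova.compute.api.API.stop()):

```python
def stop_server(instance, clean_shutdown=True):
    """Dispatch to a graceful or forced stop based on the flag.

    Hypothetical sketch: the real API already accepts clean_shutdown
    internally; the bug asks for it to be exposed over REST.
    """
    if clean_shutdown:
        return "graceful shutdown of %s" % instance
    return "forced power-off of %s" % instance

print(stop_server("instance-000b"))                        # graceful by default
print(stop_server("instance-000b", clean_shutdown=False))  # hard stop
```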

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1465956/+subscriptions



[Yahoo-eng-team] [Bug 1479467] Re: Github stable kilo using django 1.7+ doesn't work with keystone v3

2015-10-24 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1479467

Title:
  Github stable kilo using django 1.7+ doesn't work with keystone v3

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  Github stable kilo using django 1.7+ doesn't work with keystone v3

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1479467/+subscriptions



[Yahoo-eng-team] [Bug 1488522] Re: PKIZ provider with 10s token expiration time, horizon session timeout: the user cannot log in unless they clear the browser cache.

2015-10-24 Thread Launchpad Bug Tracker
[Expired for OpenStack Identity (keystone) because there has been no
activity for 60 days.]

** Changed in: keystone
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1488522

Title:
  PKIZ provider with 10s token expiration time, horizon session
  timeout: the user cannot log in unless they clear the browser cache.

Status in OpenStack Identity (keystone):
  Expired

Bug description:
  I used keystone-manage to set up PKI. In keystone.conf, the provider is
  PKIZ and the token expiration is set to 60s.
  When I log in to Horizon, after a while the session times out.

  Then I cannot log in to Horizon unless I clear the browser cache.

  I can reproduce this problem on devstack.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1488522/+subscriptions



[Yahoo-eng-team] [Bug 1467679] Re: Floating ip test cannot be turned off for horizon integration tests

2015-10-25 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1467679

Title:
  Floating ip test cannot be turned off for horizon integration tests

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  The floating ip test in horizon integration tests cannot be turned
  off. It needs to be able to be turned off because not all horizons
  have floating ips so those not having floating ips will fail every
  time for that test.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1467679/+subscriptions



[Yahoo-eng-team] [Bug 1470363] Re: pci_passthrough_whitelist should support single quotes for keys and values

2015-10-26 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1470363

Title:
  pci_passthrough_whitelist should support single quotes for keys and
  values

Status in OpenStack Compute (nova):
  Expired

Bug description:
  With the following in /etc/nova/nova.conf:

  pci_passthrough_whitelist={'devname':'enp5s0f1',
  'physical_network':'physnet2'}

  Nova compute fails to start and I get the error:

  2015-07-01 09:48:03.610 4791 ERROR nova.openstack.common.threadgroup 
[req-b86e5da5-a24e-4eb6-bebd-0ec36fc08021 - - - - -] Invalid PCI devices 
Whitelist config Invalid entry: '{'devname':'enp5s0f1', 
'physical_network':'physnet2'}'
  2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup 
Traceback (most recent call last):
  2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 
145, in wait
  2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup 
x.wait()
  2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 
47, in wait
  2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup 
return self.thread.wait()
  2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 175, in wait
  2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup 
return self._exit_event.wait()
  2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/eventlet/event.py", line 121, in wait
  2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup 
return hubs.get_hub().switch()
  2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 294, in switch
  2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup 
return self.greenlet.switch()
  2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 214, in main
  2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup 
result = function(*args, **kwargs)
  2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/service.py", line 497, 
in run_service
  2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup 
service.start()
  2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/service.py", line 183, in start
  2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup 
self.manager.pre_start_hook()
  2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1291, in 
pre_start_hook
  2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup 
self.update_available_resource(nova.context.get_admin_context())
  2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6246, in 
update_available_resource
  2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup rt = 
self._get_resource_tracker(nodename)
  2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 715, in 
_get_resource_tracker
  2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup 
nodename)
  2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 78, 
in __init__
  2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup 
self.pci_filter = pci_whitelist.get_pci_devices_filter()
  2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/pci/whitelist.py", line 109, in 
get_pci_devices_filter
  2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup 
return PciHostDevicesWhiteList(CONF.pci_passthrough_whitelist)
  2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/pci/whitelist.py", line 89, in __init__
  2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup 
self.specs = self._parse_white_list_from_config(whitelist_spec)
  2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-

[Yahoo-eng-team] [Bug 1471167] Re: A volume attached to an instance does not work properly in the Kilo version

2015-10-26 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1471167

Title:
  A volume attached to an instance does not work properly in the Kilo version

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Steps to reproduce:
  1. Create one instance:

  [root@opencosf0ccfb2525a94ffa814d647f08e4d6a4 ~(keystone_admin)]# nova list
  
+--+-+++-+---+
  | ID   | Name| Status | Task State | 
Power State | Networks  |
  
+--+-+++-+---+
  | dc7c8242-9e02-4acf-9ae4-08030380e629 | test_zy | ACTIVE | -  | 
Running | net=192.168.0.111 |
  
+--+-+++-+---+
  2. Run "nova volume-attach instance_id volume_id".

  3. After step 2, the volume attaches to the instance successfully.

  4. Run "nova volume-attach instance_id volume_id" again; it fails with the
  following exception:
  [root@opencosf0ccfb2525a94ffa814d647f08e4d6a4 ~(keystone_admin)]# nova 
volume-attach  dc7c8242-9e02-4acf-9ae4-08030380e629  
1435df8a-c4d6-4993-a0fd-4f57de66a28e
  ERROR (BadRequest): Invalid volume: volume 
'1435df8a-c4d6-4993-a0fd-4f57de66a28e' status must be 'available'. Currently in 
'in-use' (HTTP 400) (Request-ID: req-45902cbb-1f00-432f-bfbf-b041bdcc2695)

  5. Run "nova reboot --hard", then log in to the instance; the volume
  attached as /dev/vdb does not work correctly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1471167/+subscriptions



[Yahoo-eng-team] [Bug 1368037] Re: tempest-dsvm-postgres-full fail with 'Error. Unable to associate floating ip'

2015-10-26 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368037

Title:
  tempest-dsvm-postgres-full fail with 'Error. Unable to associate
  floating ip'

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Jenkins job failed with 'Error. Unable to associate floating ip'.
  Logs can be found here:
  
http://logs.openstack.org/67/120067/2/check/check-tempest-dsvm-postgres-full/1a45f89/console.html

  Log snippet:
  2014-09-10 18:55:16.527 | 2014-09-10 18:30:55,125 24275 INFO 
[tempest.common.rest_client] Request (TestVolumeBootPattern:_run_cleanups): 404 
GET 
http://127.0.0.1:8776/v1/5e4676bdfb7548b3b4dd4b084cee1752/volumes/651d8b59-df65-442c-9165-1b993374b24a
 0.025s
  2014-09-10 18:55:16.527 | 2014-09-10 18:30:55,157 24275 INFO 
[tempest.common.rest_client] Request (TestVolumeBootPattern:_run_cleanups): 404 
GET 
http://127.0.0.1:8774/v2/5e4676bdfb7548b3b4dd4b084cee1752/servers/25b9a6b8-dc48-47d3-9569-620e47ff0495
 0.032s
  2014-09-10 18:55:16.527 | 2014-09-10 18:30:55,191 24275 INFO 
[tempest.common.rest_client] Request (TestVolumeBootPattern:_run_cleanups): 404 
GET 
http://127.0.0.1:8774/v2/5e4676bdfb7548b3b4dd4b084cee1752/servers/bbf6f5c4-7f2f-48ec-9d86-50c89d636a6d
 0.032s
  2014-09-10 18:55:16.527 | }}}
  2014-09-10 18:55:16.527 | 
  2014-09-10 18:55:16.527 | Traceback (most recent call last):
  2014-09-10 18:55:16.527 |   File "tempest/test.py", line 128, in wrapper
  2014-09-10 18:55:16.527 | return f(self, *func_args, **func_kwargs)
  2014-09-10 18:55:16.528 |   File 
"tempest/scenario/test_volume_boot_pattern.py", line 164, in 
test_volume_boot_pattern
  2014-09-10 18:55:16.528 | keypair)
  2014-09-10 18:55:16.528 |   File 
"tempest/scenario/test_volume_boot_pattern.py", line 108, in _ssh_to_server
  2014-09-10 18:55:16.528 | floating_ip['ip'], server['id'])
  2014-09-10 18:55:16.528 |   File 
"tempest/services/compute/json/floating_ips_client.py", line 80, in 
associate_floating_ip_to_server
  2014-09-10 18:55:16.528 | resp, body = self.post(url, post_body)
  2014-09-10 18:55:16.528 |   File "tempest/common/rest_client.py", line 
219, in post
  2014-09-10 18:55:16.528 | return self.request('POST', url, 
extra_headers, headers, body)
  2014-09-10 18:55:16.528 |   File "tempest/common/rest_client.py", line 
435, in request
  2014-09-10 18:55:16.528 | resp, resp_body)
  2014-09-10 18:55:16.529 |   File "tempest/common/rest_client.py", line 
484, in _error_checker
  2014-09-10 18:55:16.529 | raise exceptions.BadRequest(resp_body)
  2014-09-10 18:55:16.529 | BadRequest: Bad request
  2014-09-10 18:55:16.529 | Details: {u'message': u'Error. Unable to 
associate floating ip', u'code': 400}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1368037/+subscriptions



[Yahoo-eng-team] [Bug 1489300] Re: the "SelectProjectUserAction" step should not be displayed when launching a VM in the Kilo version

2015-10-27 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1489300

Title:
  the "SelectProjectUserAction" step should not be displayed when
  launching a VM in the Kilo version

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  When launching a VM from the dashboard, the first workflow step,
  "SelectProjectUserAction", should be hidden, but it is displayed.
  In file:  
\openstack\horizon\openstack_dashboard\dashboards\project\instances\workflows\create_instance.py

  class SelectProjectUserAction(workflows.Action):
      project_id = forms.ChoiceField(label=_("Project"))
      user_id = forms.ChoiceField(label=_("User"))

      def __init__(self, request, *args, **kwargs):
          super(SelectProjectUserAction, self).__init__(request, *args,
                                                        **kwargs)
          # Set our project choices
          projects = [(tenant.id, tenant.name)
                      for tenant in request.user.authorized_tenants]
          self.fields['project_id'].choices = projects

          # Set our user options
          users = [(request.user.id, request.user.username)]
          self.fields['user_id'].choices = users

      class Meta(object):
          name = _("Project & User")
          # Unusable permission so this is always hidden. However, we
          # keep this step in the workflow for validation/verification
          # purposes.
          permissions = ("!",)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1489300/+subscriptions



[Yahoo-eng-team] [Bug 1482524] Re: volume table filter has an error

2015-10-27 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1482524

Title:
  volume table filter has an error

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  On the admin/volumes page, the table filter may have an error:
  volumes in the 'creating' state cannot be filtered.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1482524/+subscriptions



[Yahoo-eng-team] [Bug 1510814] [NEW] Some URLs in the neutron Gerrit dashboards redirect to a new URL

2015-10-28 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Some URLs in the neutron Gerrit dashboards redirect to a new URL. It is
not a big problem, but it should be corrected.

** Affects: neutron
 Importance: Undecided
 Assignee: IanSun (sun-jun)
 Status: New

-- 
Some URLs in the neutron Gerrit dashboards redirect to a new URL
https://bugs.launchpad.net/bugs/1510814
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.



[Yahoo-eng-team] [Bug 1510411] [NEW] neutron-sriov-nic-agent raises UnsupportedVersion security_groups_provider_updated

2015-10-28 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

neutron-sriov-nic-agent raises the following exception:
2015-10-26 04:46:18.297 116015 ERROR oslo_messaging.rpc.dispatcher [-] 
Exception during message handling: Endpoint does not support RPC version 1.3. 
Attempted method: security_groups_provider_updated
2015-10-26 04:46:18.297 116015 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
2015-10-26 04:46:18.297 116015 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
142, in _dispatch_and_reply
2015-10-26 04:46:18.297 116015 ERROR oslo_messaging.rpc.dispatcher 
executor_callback))
2015-10-26 04:46:18.297 116015 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
195, in _dispatch
2015-10-26 04:46:18.297 116015 ERROR oslo_messaging.rpc.dispatcher raise 
UnsupportedVersion(version, method=method)
2015-10-26 04:46:18.297 116015 ERROR oslo_messaging.rpc.dispatcher 
UnsupportedVersion: Endpoint does not support RPC version 1.3. Attempted 
method: security_groups_provider_updated
2015-10-26 04:46:18.297 116015 ERROR oslo_messaging.rpc.dispatcher
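
The error means the agent's RPC endpoint advertises a version cap below the 1.3 the caller requested. A simplified sketch of that compatibility check follows; the real oslo.messaging dispatch logic is more involved, and these function names are illustrative:

```python
class UnsupportedVersion(Exception):
    pass

def is_compatible(requested, endpoint_cap):
    """An endpoint supports a call iff the major versions match and the
    endpoint's minor version is >= the requested minor version."""
    req_major, req_minor = (int(p) for p in requested.split("."))
    cap_major, cap_minor = (int(p) for p in endpoint_cap.split("."))
    return req_major == cap_major and cap_minor >= req_minor

def dispatch(method, requested, endpoint_cap="1.1"):
    if not is_compatible(requested, endpoint_cap):
        raise UnsupportedVersion(
            "Endpoint does not support RPC version %s. Attempted method: %s"
            % (requested, method))
    return "dispatched %s" % method

try:
    dispatch("security_groups_provider_updated", "1.3")
except UnsupportedVersion as exc:
    print(exc)
```

Under this model, the fix is for the sriov-nic-agent endpoint to raise its advertised minor version to cover security_groups_provider_updated.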

This VM is built with an SR-IOV port (macvtap).
 jenkins@cnt-14:~$ sudo virsh list --all
  Id    Name               State
 ----------------------------------
  10    instance-0003      paused

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
neutron-sriov-nic-agent raises UnsupportedVersion 
security_groups_provider_updated
https://bugs.launchpad.net/bugs/1510411
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.



[Yahoo-eng-team] [Bug 1503088] [NEW] Deprecate max_fixed_ips_per_port

2015-10-28 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

https://review.openstack.org/230696
commit 37277cf4168260d5fa97f20e0b64a2efe2d989ad
Author: Kevin Benton 
Date:   Wed Sep 30 04:20:02 2015 -0700

Deprecate max_fixed_ips_per_port

This option does not have a clear use case since we prevent
users from setting their own IP addresses on shared networks.

DocImpact
Change-Id: I211e87790c955ba5c3904ac27b177acb2847539d
Closes-Bug: #1502356

** Affects: neutron
 Importance: Undecided
 Assignee: Takanori Miyagishi (miyagishi-t)
 Status: In Progress


** Tags: autogenerate-config-docs neutron
-- 
Deprecate max_fixed_ips_per_port
https://bugs.launchpad.net/bugs/1503088
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.



[Yahoo-eng-team] [Bug 1484836] Re: apache failed to restart with keystone wsgi app

2015-10-31 Thread Launchpad Bug Tracker
[Expired for OpenStack Identity (keystone) because there has been no
activity for 60 days.]

** Changed in: keystone
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1484836

Title:
  apache failed to restart with keystone wsgi app

Status in OpenStack Identity (keystone):
  Expired

Bug description:
  From time to time Apache fails to restart via the init script when the
  Keystone WSGI application is running.

  Steps to reproduce:

  Run apache start/stop in a loop:
  while :; do service apache2 stop; service apache2 start; done

  After some time Apache fails to start because it cannot bind to the
  already-open socket:
   * Starting web server apache2
   *
   * Stopping web server apache2
   *
   * Starting web server apache2
   *
   * Stopping web server apache2
   *
   * Starting web server apache2
  (98)Address already in use: AH00072: make_sock: could not bind to address 
[::]:35357
  (98)Address already in use: AH00072: make_sock: could not bind to address 
0.0.0.0:35357
  no listening sockets available, shutting down
  AH00015: Unable to open logs
  Action 'start' failed.
  The Apache error log may have more information.
   *
   * The apache2 instance did not start within 20 seconds. Please read the log 
files to discover problems
   * Stopping web server apache2
   *
   * Starting web server apache2
   *
   * Stopping web server apache2

  Without the keystone wsgi application I could not reproduce the error in
  12 hours; the horizon and radosgw wsgi applications were enabled.

  It looks like the root cause is in the keystone wsgi application itself.
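The bind failure above can be reproduced outside Apache. Here is a minimal Python sketch (not keystone code; addresses and the ephemeral port are arbitrary) of the same errno 98 / EADDRINUSE condition:

```python
import errno
import socket

# Hold a listening socket open, the way a not-yet-dead keystone wsgi
# process would hold port 35357, then try to bind a second socket to the
# same address. The second bind fails with EADDRINUSE (errno 98 on
# Linux), matching the AH00072 make_sock error in the log above.
holder = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
holder.bind(("127.0.0.1", 0))   # let the OS pick a free port
holder.listen(1)
addr = holder.getsockname()

latecomer = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    latecomer.bind(addr)
except OSError as exc:
    print(exc.errno == errno.EADDRINUSE)   # True on Linux
finally:
    latecomer.close()
    holder.close()
```

This is why the race only bites when stop does not fully release the port before start runs again.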

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1484836/+subscriptions



[Yahoo-eng-team] [Bug 1486407] Re: devstack install failed, error in keystone requirement.

2015-10-31 Thread Launchpad Bug Tracker
[Expired for OpenStack Identity (keystone) because there has been no
activity for 60 days.]

** Changed in: keystone
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1486407

Title:
  devstack install failed, error in keystone requirement.

Status in OpenStack Identity (keystone):
  Expired

Bug description:
  Download the latest devstack and run './stack.sh' to install OpenStack.
  The following error occurs:

  
  2015-08-19 06:03:51.039 | + is_service_enabled ldap
  2015-08-19 06:03:51.050 | + return 1
  2015-08-19 06:03:51.050 | + recreate_database keystone
  2015-08-19 06:03:51.050 | + local db=keystone
  2015-08-19 06:03:51.051 | + recreate_database_mysql keystone
  2015-08-19 06:03:51.051 | + local db=keystone
  2015-08-19 06:03:51.051 | + mysql -uroot -pstackdb -h127.0.0.1 -e 'DROP 
DATABASE IF EXISTS keystone;'
  2015-08-19 06:03:51.060 | + mysql -uroot -pstackdb -h127.0.0.1 -e 'CREATE 
DATABASE keystone CHARACTER SET utf8;'
  2015-08-19 06:03:51.069 | + /usr/local/bin/keystone-manage db_sync
  2015-08-19 06:03:51.513 | Traceback (most recent call last):
  2015-08-19 06:03:51.513 |   File "/usr/local/bin/keystone-manage", line 4, in <module>
  2015-08-19 06:03:51.514 | 
__import__('pkg_resources').require('keystone==2015.2.0.dev43')
  2015-08-19 06:03:51.514 |   File 
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3084, 
in <module>
  2015-08-19 06:03:51.535 | @_call_aside
  2015-08-19 06:03:51.536 |   File 
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3070, 
in _call_aside
  2015-08-19 06:03:51.537 | f(*args, **kwargs)
  2015-08-19 06:03:51.537 |   File 
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3097, 
in _initialize_master_working_set
  2015-08-19 06:03:51.537 | working_set = WorkingSet._build_master()
  2015-08-19 06:03:51.538 |   File 
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 653, 
in _build_master
  2015-08-19 06:03:51.538 | return 
cls._build_from_requirements(__requires__)
  2015-08-19 06:03:51.538 |   File 
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 666, 
in _build_from_requirements
  2015-08-19 06:03:51.539 | dists = ws.resolve(reqs, Environment())
  2015-08-19 06:03:51.539 |   File 
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 839, 
in resolve
  2015-08-19 06:03:51.539 | raise DistributionNotFound(req, requirers)
  2015-08-19 06:03:51.540 | pkg_resources.DistributionNotFound: The 
'oslo.utils<1.5.0,>=1.4.0' distribution was not found and is required by 
keystone
  2015-08-19 06:03:51.546 | + exit_trap
  2015-08-19 06:03:51.546 | + local r=1
  2015-08-19 06:03:51.547 | ++ jobs -p
  2015-08-19 06:03:51.548 | + jobs=
  2015-08-19 06:03:51.549 | + [[ -n '' ]]
  2015-08-19 06:03:51.549 | + kill_spinner
  2015-08-19 06:03:51.549 | + '[' '!' -z '' ']'
  2015-08-19 06:03:51.549 | + [[ 1 -ne 0 ]]
  2015-08-19 06:03:51.550 | + echo 'Error on exit'
  2015-08-19 06:03:51.550 | Error on exit
  2015-08-19 06:03:51.550 | + [[ -z /opt/stack/logs ]]
  2015-08-19 06:03:51.550 | + /opt/stack/devstack/tools/worlddump.py -d 
/opt/stack/logs
  2015-08-19 06:03:51.950 | + exit 1
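The failure happens inside pkg_resources' requirement resolution, before any keystone code runs. A minimal sketch of the same mechanism (the package name below is deliberately fictitious):

```python
import pkg_resources

# keystone-manage calls pkg_resources.require("keystone==...") at import
# time; resolving that requirement pulls in pinned dependencies such as
# 'oslo.utils<1.5.0,>=1.4.0'. If a pinned distribution is not installed,
# resolution raises DistributionNotFound, as in the trace above. The
# usual devstack remedy is to reinstall the requirements (re-run
# stack.sh, or pip-install the project's requirements.txt).
try:
    pkg_resources.require("definitely-not-a-real-package>=1.0")
except pkg_resources.DistributionNotFound as exc:
    print("missing distribution:", exc.req)
```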

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1486407/+subscriptions



[Yahoo-eng-team] [Bug 1490842] Re: UnexpectedTaskStateError_Remote: Unexpected task state: expecting (u'resize_migrating', ) but the actual state is None

2015-11-01 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1490842

Title:
  UnexpectedTaskStateError_Remote: Unexpected task state: expecting
  (u'resize_migrating',) but the actual state is None

Status in OpenStack Compute (nova):
  Expired

Bug description:
  [req-7a72cf1e-b163-4863-9330-f2b60bd15a6e None] [instance: 
5dbb0778-e7d2-42bd-8427-b727301972cb] Setting instance vm_state to ERROR
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] Traceback (most recent call 
last):
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6705, in 
_error_out_instance_on_exception
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] yield
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3841, in 
resize_instance
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] 
instance.save(expected_task_state=task_states.RESIZE_MIGRATING)
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb]   File 
"/usr/lib/python2.7/site-packages/nova/objects/base.py", line 189, in wrapper
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] ctxt, self, fn.__name__, 
args, kwargs)
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb]   File 
"/usr/lib/python2.7/site-packages/nova/conductor/rpcapi.py", line 351, in 
object_action
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] objmethod=objmethod, 
args=args, kwargs=kwargs)
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb]   File 
"/usr/lib/python2.7/site-packages/oslo/messaging/rpc/client.py", line 152, in 
call
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] retry=self.retry)
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb]   File 
"/usr/lib/python2.7/site-packages/oslo/messaging/transport.py", line 90, in 
_send
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] timeout=timeout, 
retry=retry)
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb]   File 
"/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 
408, in send
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] retry=retry)
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb]   File 
"/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 
399, in _send
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] raise result
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] 
UnexpectedTaskStateError_Remote: Unexpected task state: expecting 
(u'resize_migrating',) but the actual state is None
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] Traceback (most recent call 
last):
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] 
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb]   File 
"/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 400, in 
_object_dispatch
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] return getattr(target, 
method)(context, *args, **kwargs)
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] 
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb]   File 
"/usr/lib/python2.7/site-packages/nova/objects/base.py", line 204, in wrapper
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] return fn(self, ctxt, 
*args, **kwargs)
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] 
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb]   File 
"/usr/lib/python2.7/site-packages/nova/objects/instance.py", line 500, in save
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] 
columns_to_join=_expected_cols(expected_attrs))
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] 
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb]   File 
"/usr/lib/python2.7/site-packages/nova/db/api.py", line 766, in 
instance_update_and_get_original
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] 
columns_to_join=columns_to_join)
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] 
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb]   File 
"/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 143, in 
wrapper
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] return f(*args, **kwargs)
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] 
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb]   File 
"/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 2318, in 
instance_update_and_get_original
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] 
columns_to_join=columns_to_join)
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] 
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb]   File 
"/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 2369, in 
_instance_update
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] actual=actual_state, 
expected=expected)
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] 
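The error above comes from nova's guarded save: instance.save(expected_task_state=...) is a compare-and-swap against the database row. A simplified sketch (not nova code) of that mechanism:

```python
# The update only goes through if the row still holds the expected
# task_state. If a concurrent request (or a stale record) has already
# reset it to None, the save raises instead of clobbering state, which
# is exactly what the trace above reports.
class UnexpectedTaskStateError(Exception):
    pass

def guarded_save(record, updates, expected_task_state):
    actual = record.get("task_state")
    if actual != expected_task_state:
        raise UnexpectedTaskStateError(
            "Unexpected task state: expecting %r but the actual state is %r"
            % (expected_task_state, actual))
    record.update(updates)
    return record

row = {"task_state": None}  # another worker already reset it
try:
    guarded_save(row, {"task_state": "resize_finish"}, "resize_migrating")
except UnexpectedTaskStateError as exc:
    print(exc)
```

So the bug report is really about how the instance lost its resize_migrating task state before resize_instance could save it.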

[Yahoo-eng-team] [Bug 1491377] Re: traffics of unknown unicast or arp broadcast in the same compute node should not flooded out tunnels

2015-11-02 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1491377

Title:
  traffics of unknown unicast or arp broadcast in the same compute node
  should not flooded out tunnels

Status in neutron:
  Expired

Bug description:
  Traffic of unknown unicast or ARP broadcast between ports on the same
  compute node may be flooded out the tunnels to other compute nodes,
  which is wasteful. It would be better for such traffic to be dropped in
  the br-tun bridge.

  For example, with port1 and port2 both hosted on node A, an ARP request
  from port1 for port2's MAC address is flooded to br-tun, which then
  floods it out the tunnels.
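The requested behaviour can be pictured as a forwarding decision that consults the local MAC table before flooding. A toy sketch (port and MAC names are made up; the real br-tun implements this with OpenFlow learn and flood rules):

```python
# A learning switch that only floods to tunnel ports when the
# destination is genuinely unknown; traffic for a locally hosted MAC is
# delivered on its local port and never leaves the node.
local_ports = {"fa:16:3e:00:00:01": "tap1",
               "fa:16:3e:00:00:02": "tap2"}
tunnel_ports = ["vxlan-node-b", "vxlan-node-c"]

def output_ports(dst_mac):
    if dst_mac in local_ports:
        return [local_ports[dst_mac]]   # deliver locally, no tunnel flood
    return list(tunnel_ports)           # unknown destination: flood tunnels

print(output_ports("fa:16:3e:00:00:02"))
```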

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1491377/+subscriptions



[Yahoo-eng-team] [Bug 1492345] Re: Unable to perform nova-api operations on instance

2015-11-03 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1492345

Title:
  Unable to perform nova-api operations on instance

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Openstack version: Kilo
  Nova version:  1:2015.1.0-0ubuntu1.1

  Every time I try to perform an operation on an instance, the compute
  node reports the following error:

  2015-09-04 15:35:48.529 42883 INFO nova.compute.manager 
[req-b750c91a-ea2a-425c-832d-906e3c452904 c39a72988ef2478a930e627caa7f706a 
2ff06b822bab4d59a6f0bc81be34980f - - -] [instance: 
6a97676c-f735-4a93-b787-6d9b4c367836] Rebooting instance
  2015-09-04 15:35:48.847 42883 INFO nova.scheduler.client.report 
[req-b750c91a-ea2a-425c-832d-906e3c452904 c39a72988ef2478a930e627caa7f706a 
2ff06b822bab4d59a6f0bc81be34980f - - -] Compute_service record updated for 
('ncn11', 'ncn11.hpscto.local')
  2015-09-04 15:35:48.848 42883 ERROR oslo_messaging.rpc.dispatcher 
[req-b750c91a-ea2a-425c-832d-906e3c452904 c39a72988ef2478a930e627caa7f706a 
2ff06b822bab4d59a6f0bc81be34980f - - -] Exception during message handling: 
Cannot call obj_load_attr on orphaned Instance object
  2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
  2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch
  2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 130, 
in _do_dispatch
  2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher result 
= func(ctxt, **new_args)
  2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6695, in 
reboot_instance
  2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher 
reboot_type)
  2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 88, in wrapped
  2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher payload)
  2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in __exit__
  2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 71, in wrapped
  2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 327, in 
decorated_function
  2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher 
LOG.warning(msg, e, instance_uuid=instance_uuid)
  2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in __exit__
  2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 298, in 
decorated_function
  2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 377, in 
decorated_function
  2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 355, in 
decorated_function
  2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
  2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in __exit__
  2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self
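The "orphaned Instance" error (the trace is truncated in the archive) is about lazy loading: nova objects fetch missing fields through the request context they were created with, and an object that has lost its context cannot load anything. A toy sketch of that pattern (not nova's actual object code):

```python
# An object deserialized or copied without its request context is
# "orphaned": any attempt to lazy-load a field it does not already carry
# must fail, which is the error the reboot path hits above.
class OrphanedObjectError(Exception):
    pass

class Instance(object):
    def __init__(self, context=None, **fields):
        self._context = context
        self._fields = fields

    def __getattr__(self, name):
        if name.startswith("_"):
            raise AttributeError(name)
        if name in self._fields:
            return self._fields[name]
        if self._context is None:   # no context: cannot lazy-load
            raise OrphanedObjectError(
                "Cannot call obj_load_attr on orphaned Instance object")
        raise NotImplementedError("would fetch %s from the DB" % name)

inst = Instance(context=None, uuid="6a97676c")
print(inst.uuid)          # present field: fine
try:
    inst.flavor           # missing field, no context: orphaned
except OrphanedObjectError as exc:
    print(exc)
```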

[Yahoo-eng-team] [Bug 1486001] Re: Netapp ephemeral instance snapshot very slow

2015-11-04 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1486001

Title:
  Netapp ephemeral instance snapshot very slow

Status in OpenStack Compute (nova):
  Expired

Bug description:
  When I try to snapshot an instance carved out on netapp ephemeral
  storage mounted on /var/lib/nova/instances, the process takes very
  long. It does a nearly full image download every time, even for the
  same instance, and I don't think it takes advantage of netapp's native
  snapshot / FlexClone feature. This could also be treated as a feature
  enhancement: have nova use the netapp utility for snapshots.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1486001/+subscriptions



[Yahoo-eng-team] [Bug 1513313] [NEW] create vip failed for unbound method get_device_name() must be called with OVSInterfaceDriver instance as first argument

2015-11-04 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

We found our gate failing with the following information:

2015-11-05 03:23:42.778 30474 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager 
[req-ebb92ee8-2998-4a50-baf1-8123ce76b071 admin admin] Create vip 
e3152b05-2c41-40ac-9729-1756664f437e failed on device driver haproxy_ns
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager Traceback (most recent 
call last):
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/agent/agent_manager.py",
 line 221, in create_vip
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
driver.create_vip(vip)
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 348, in create_vip
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
self._refresh_device(vip['pool_id'])
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 344, in _refresh_device
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager if not 
self.deploy_instance(logical_config) and self.exists(pool_id):
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
254, in inner
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager return f(*args, 
**kwargs)
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 337, in deploy_instance
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
self.create(logical_config)
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 92, in create
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
logical_config['vip']['address'])
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 248, in _plug
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager interface_name = 
self.vif_driver.get_device_name(Wrap(port))
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager TypeError: unbound 
method get_device_name() must be called with OVSInterfaceDriver instance as 
first argument (got Wrap instance instead)
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager
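The TypeError is a plain Python 2 mistake: the agent called get_device_name() through the driver class instead of a driver instance. A minimal reproduction with stand-in classes (names are illustrative, not the lbaas code; Python 3 no longer type-checks unbound methods, so the buggy call is shown as a comment):

```python
# Stand-ins for the vif driver and the port wrapper from the trace.
class OVSInterfaceDriver(object):
    def get_device_name(self, port):
        # neutron-style device name: "tap" + first 11 chars of the port id
        return "tap" + port.id[:11]

class Wrap(object):
    def __init__(self, port):
        self.id = port["id"]

port = {"id": "e3152b05-2c41-40ac-9729-1756664f437e"}

# Buggy pattern (what the agent did); under Python 2 this raised:
#   TypeError: unbound method get_device_name() must be called with
#   OVSInterfaceDriver instance as first argument (got Wrap instance instead)
# OVSInterfaceDriver.get_device_name(Wrap(port))

# Correct pattern: instantiate the driver, then call the method on it.
driver = OVSInterfaceDriver()
print(driver.get_device_name(Wrap(port)))   # tape3152b05-2c
```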

** Affects: neutron
 Importance: Undecided
 Assignee: Kai Qiang Wu(Kennan) (wkqwu)
 Status: New

-- 
create vip failed for unbound method get_device_name() must be called with 
OVSInterfaceDriver instance as first argument
https://bugs.launchpad.net/bugs/1513313
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.



[Yahoo-eng-team] [Bug 1513473] [NEW] Introduce Functionality to Replace "location" and "copy-from" Flags for Glance Image Creation

2015-11-05 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Since the "location" and "copy-from" flags are being deprecated /
reserved in the newest version of the Glance CLI for creating images, it
would be useful to at least replace their functionality with something
similar.

Suggest adding a flag called "--image-url" that eliminates the need to
copy an image to an OpenStack account in order to use it, similar to how
"--location" worked.

Suggest adding a flag called "--copy-url" that allows the user to
provide a URL to an existing image (e.g. on S3), where it can be copied
from, similar to how "--copy-from" worked.

** Affects: glance
 Importance: Undecided
 Status: New

-- 
Introduce Functionality to Replace "location" and "copy-from" Flags for Glance 
Image Creation
https://bugs.launchpad.net/bugs/1513473
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to Glance.



[Yahoo-eng-team] [Bug 1177432] Re: [SRU] Enable backports in cloud-init archive template

2015-11-05 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.7~bzr1154-0ubuntu1

---
cloud-init (0.7.7~bzr1154-0ubuntu1) xenial; urgency=medium

  * New upstream snapshot.
* create the same /etc/apt/sources.list that is present in default server
  ISO installs.  This change adds restricted, multiverse, and -backports
  (LP: #1177432).

 -- Scott Moser   Thu, 05 Nov 2015 12:10:00 -0500

** Changed in: cloud-init (Ubuntu Xenial)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1177432

Title:
  [SRU] cloud-init archive template should match Ubuntu Server

Status in cloud-init:
  New
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Precise:
  New
Status in cloud-init source package in Trusty:
  Fix Committed
Status in cloud-init source package in Vivid:
  Fix Committed
Status in cloud-init source package in Wily:
  Fix Committed
Status in cloud-init source package in Xenial:
  Fix Released

Bug description:
  [SRU Justification]
  Ubuntu Cloud Images are inconsistent with desktop and bare-metal server 
installations because backports, restricted and multiverse are not enabled. 
This is controlled via cloud-init, which uses a template to select an 
in-cloud archive.

  [FIX] Make the cloud-init template match that of Ubuntu-server.

  [REGRESSION] The potential for regression is low. However, all users
  will experience slower fetch times on apt-get update, especially on
  slow or high-latency networks.

  [TEST]
  1. Build image from -proposed
  2. Boot up image
  3. Confirm that "sudo apt-get update" pulls in backports, restricted and 
multiverse.

  Backports are currently not enabled in the cloud-init template. This
  is needed in order to get the backport kernels on cloud images.

  Related bugs:
   * bug 997371:  Create command to add "multiverse" and "-backports" to apt 
sources
   * bug 1513529:  cloud image built-in /etc/apt/sources.list needs updating
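With the fix, the template-generated /etc/apt/sources.list carries the same components and pockets as a default server install. An illustrative fragment for a xenial image (the mirror URL here is an example, not the in-cloud mirror cloud-init actually selects):

```text
deb http://archive.ubuntu.com/ubuntu xenial main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu xenial-updates main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu xenial-backports main restricted universe multiverse
```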

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1177432/+subscriptions



[Yahoo-eng-team] [Bug 1513160] [NEW] UnsupportedObjectError on launching instance

2015-11-05 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

I set up OpenStack on a single machine about a month ago, and it worked
fine until I added a new nova-compute node.

After adding the new nova-compute node I am no longer able to launch an
instance: the launch gets stuck in the build state.

The following exception appears in the nova-compute log on the new node:

2015-11-04 23:02:20.460 2164 ERROR object 
[req-1696a514-fc24-49c0-af84-ec35bf67f7b1 af26d0f550b242428e8600f8a90a0d79 
ae1eb9a146ed4c3a9bf030c73567330e] Unable to instantiate unregistered object 
type NetworkRequestList
2015-11-04 23:02:20.461 2164 ERROR oslo.messaging.rpc.dispatcher [-] Exception 
during message handling: Unsupported object type NetworkRequestList
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 133, 
in _dispatch_and_reply
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 176, 
in _dispatch
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 121, 
in _do_dispatch
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher 
new_args[argname] = self.serializer.deserialize_entity(ctxt, arg)
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/rpc.py", line 111, in deserialize_entity
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher return 
self._base.deserialize_entity(context, entity)
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/objects/base.py", line 575, in 
deserialize_entity
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher entity = 
self._process_object(context, entity)
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/objects/base.py", line 542, in 
_process_object
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher objinst = 
NovaObject.obj_from_primitive(objprim, context=context)
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/objects/base.py", line 251, in 
obj_from_primitive
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher objclass = 
cls.obj_class_from_name(objname, objver)
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/objects/base.py", line 201, in 
obj_class_from_name
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher raise 
exception.UnsupportedObjectError(objtype=objname)
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher 
UnsupportedObjectError: Unsupported object type NetworkRequestList
2015-11-04 23:02:20.461 2164 TRACE oslo.messaging.rpc.dispatcher 
2015-11-04 23:02:20.463 2164 ERROR oslo.messaging._drivers.common [-] Returning 
exception Unsupported object type NetworkRequestList to caller
2015-11-04 23:02:20.464 2164 ERROR oslo.messaging._drivers.common [-] 
['Traceback (most recent call last):\n', '  File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 133, 
in _dispatch_and_reply\nincoming.message))\n', '  File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 176, 
in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, args)\n', '  
File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
121, in _do_dispatch\nnew_args[argname] = 
self.serializer.deserialize_entity(ctxt, arg)\n', '  File 
"/usr/lib/python2.7/dist-packages/nova/rpc.py", line 111, in 
deserialize_entity\nreturn self._base.deserialize_entity(context, 
entity)\n', '  File "/usr/lib/python2.7/dist-packages/nova/objects/base.py", 
line 575, in deserialize_entity\nentity = self._process_object(context, 
entity)\n', '  File "/usr/lib/python2.7/dist-packages/nova/objects/base.py", 
line 542, in _process_object\nobji
 nst = NovaObject.obj_from_primitive(objprim, context=context)\n', '  File 
"/usr/lib/python2.7/dist-packages/nova/objects/base.py", line 251, in 
obj_from_primitive\nobjclass = cls.obj_class_from_name(objname, objver)\n', 
'  File "/usr/lib/python2.7/dist-packages/nova/objects/base.py", line 201, in 
obj_class_from_name\nraise 
exception.UnsupportedObjectError(objtype=objname)\n', 'UnsupportedObjectError: 
Unsupported object type NetworkRequestList\n']
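The trace is a version-skew symptom: the controller serializes a NetworkRequestList object, and the older nova-compute on the new node has no such class in its local object registry, so obj_class_from_name must reject it. A simplified sketch of the lookup that fails (not nova's registry code; the fix is to run the same nova version on all nodes):

```python
# The receiving node resolves the wire object's class name against its
# local registry; a node whose nova predates NetworkRequestList has no
# entry and raises, producing the error above.
class UnsupportedObjectError(Exception):
    pass

REGISTRY = {"Instance": object}   # older node: no NetworkRequestList

def obj_class_from_name(objname):
    try:
        return REGISTRY[objname]
    except KeyError:
        raise UnsupportedObjectError(
            "Unsupported object type %s" % objname)

try:
    obj_class_from_name("NetworkRequestList")
except UnsupportedObjectError as exc:
    print(exc)
```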


nova conf of new node:
---


[DEFAULT]
dhcpbridge_flagfile=

[Yahoo-eng-team] [Bug 1491930] Re: DevStack fails to spawn VMs in Fedora22

2015-11-06 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1491930

Title:
  DevStack fails to spawn VMs in Fedora22

Status in devstack:
  Expired
Status in OpenStack Compute (nova):
  Expired

Bug description:
  When trying to spawn an instance on the latest DevStack with F22, it fails
with a nova trace [1].
  Latest commit 1d0b0d363e "Add/Overwrite default images in IMAGE_URLS and 
detect duplicates"

  * Steps to reproduce
  -

  1) Try to deploy any image

  
  [1] http://paste.openstack.org/show/28/

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1491930/+subscriptions



[Yahoo-eng-team] [Bug 1491930] Re: DevStack fails to spawn VMs in Fedora22

2015-11-06 Thread Launchpad Bug Tracker
[Expired for devstack because there has been no activity for 60 days.]

** Changed in: devstack
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1491930

Title:
  DevStack fails to spawn VMs in Fedora22

Status in devstack:
  Expired
Status in OpenStack Compute (nova):
  Expired

Bug description:
  When trying to spawn an instance on the latest DevStack with F22, it fails
with a nova trace [1].
  Latest commit 1d0b0d363e "Add/Overwrite default images in IMAGE_URLS and 
detect duplicates"

  * Steps to reproduce
  -

  1) Try to deploy any image

  
  [1] http://paste.openstack.org/show/28/

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1491930/+subscriptions



[Yahoo-eng-team] [Bug 1493783] Re: nova-compute ceph "too many open files"

2015-11-08 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1493783

Title:
  nova-compute ceph "too many open files"

Status in OpenStack Compute (nova):
  Expired

Bug description:
  We’ve deployed OpenStack with Ceph as the storage backend (rbd). The
  instances are booted from cinder volumes through Ceph as well.

  Sometimes the following ERROR appears in nova-compute.log: [Errno
  24] Too many open files

  If we get the list of open files from any nova-compute process, we’ll
  see the following:

  root@qk-3:~# ps -auxwww | grep nova-compute
  nova 137430  2.4  0.0 2395468 106888 ?  Ssl  Sep07  65:54 
/usr/bin/python /usr/bin/nova-compute --config-file=/etc/nova/nova.conf 
--config-file=/etc/nova/nova-compute.conf

  root@qk-3:~# lsof -p 137430 | wc -l
  5406

  Almost all of the open files look like this:
  nova-comp 137430 nova 5232r  FIFO0,8  0t0 948057084 pipe
  nova-comp 137430 nova 5233u  unix 0x881ad3192300  0t0 948012913 
/var/run/ceph/guests/ceph-client.cinder.137430.35244752.asok
  nova-comp 137430 nova 5234r  FIFO0,8  0t0 948057088 pipe
  nova-comp 137430 nova 5235u  unix 0x881ad3192680  0t0 948057085 
/var/run/ceph/guests/ceph-client.cinder.137430.35260960.asok
  nova-comp 137430 nova 5236r  FIFO0,8  0t0 948073475 pipe
  nova-comp 137430 nova 5237u  unix 0x881ad3192d80  0t0 948073473 
/var/run/ceph/guests/ceph-client.cinder.137430.35244752.asok
  nova-comp 137430 nova 5238r  FIFO0,8  0t0 948073522 pipe
  nova-comp 137430 nova 5239u  unix 0x881ad3195080  0t0 948073476 
/var/run/ceph/guests/ceph-client.cinder.137430.35361600.asok
  nova-comp 137430 nova 5240r  FIFO0,8  0t0 948073526 pipe
  nova-comp 137430 nova 5241u  unix 0x881ad3197700  0t0 948073523 
/var/run/ceph/guests/ceph-client.cinder.137430.35260960.asok
  nova-comp 137430 nova 5242r  FIFO0,8  0t0 948073532 pipe
  nova-comp 137430 nova 5243u  unix 0x881ad3196c80  0t0 948073527 
/var/run/ceph/guests/ceph-client.cinder.137430.35244752.asok
  nova-comp 137430 nova 5244r  FIFO0,8  0t0 948073535 pipe
  nova-comp 137430 nova 5245u  unix 0x881ad3196580  0t0 948073533 
/var/run/ceph/guests/ceph-client.cinder.137430.35244752.asok
  nova-comp 137430 nova 5246r  FIFO0,8  0t0 948163602 pipe
  nova-comp 137430 nova 5247u  unix 0x881ad3195b00  0t0 948073536 
/var/run/ceph/guests/ceph-client.cinder.137430.35244752.asok
  nova-comp 137430 nova 5248r  FIFO0,8  0t0 948160472 pipe
  nova-comp 137430 nova 5249u  unix 0x88301f2ec600  0t0 948163603 
/var/run/ceph/guests/ceph-client.cinder.137430.35260960.asok
  nova-comp 137430 nova 5250r  FIFO0,8  0t0 948160475 pipe
  nova-comp 137430 nova 5251u  unix 0x881ad319  0t0 948160473 
/var/run/ceph/guests/ceph-client.cinder.137430.35361600.asok
  nova-comp 137430 nova 5252r  FIFO0,8  0t0 948160490 pipe
  nova-comp 137430 nova 5253u  unix 0x881ad3190380  0t0 948160476 
/var/run/ceph/guests/ceph-client.cinder.137430.35260960.asok
  nova-comp 137430 nova 5254r  FIFO0,8  0t0 948160493 pipe
  nova-comp 137430 nova 5255u  unix 0x881ad3195780  0t0 948160491 
/var/run/ceph/guests/ceph-client.cinder.137430.35244752.asok
  nova-comp 137430 nova 5256r  FIFO0,8  0t0 948160498 pipe
  nova-comp 137430 nova 5257u  unix 0x881fafe18000  0t0 948160494 
/var/run/ceph/guests/ceph-client.cinder.137430.35260960.asok
  nova-comp 137430 nova 5258r  FIFO0,8  0t0 948261835 pipe
  nova-comp 137430 nova 5259u  unix 0x881fafe18e00  0t0 948160499 
/var/run/ceph/guests/ceph-client.cinder.137430.35666576.asok
  nova-comp 137430 nova 5260r  FIFO0,8  0t0 948265362 pipe
  nova-comp 137430 nova 5261u  unix 0x881fafe19180  0t0 948261836 
/var/run/ceph/guests/ceph-client.cinder.137430.35361600.asok
  nova-comp 137430 nova 5262r  FIFO0,8  0t0 948265500 pipe
  nova-comp 137430 nova 5263u  unix 0x881fafe19500  0t0 948265363 
/var/run/ceph/guests/ceph-client.cinder.137430.35318896.asok

  And:
  root@qk-3:~# ls -la 
/var/run/ceph/guests/ceph-client.cinder.137430.33776880.asok
  ls: cannot access 
/var/run/ceph/guests/ceph-client.cinder.137430.33776880.asok: No such file or 
directory

  As you can see, all of these files have already been closed.

  If we restart the nova-compute process, the number of open files will be
  140-150, but within a few days it grows to 4000-5000-1 files
  and we get the "too many open files" error again.
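
A quick way to watch the leak (a minimal sketch, not part of the original report) is to count the descriptors a process holds via /proc; on an affected host you would pass the nova-compute PID rather than our own:

```python
import os

def count_open_fds(pid: int) -> int:
    """Count file descriptors currently held by a process (Linux /proc)."""
    return len(os.listdir(f"/proc/{pid}/fd"))

if __name__ == "__main__":
    # Demonstrate on our own process; substitute the nova-compute PID
    # (e.g. from `pgrep -f nova-compute`) to track the leak over time.
    print(count_open_fds(os.getpid()))
```

Sampling this periodically makes the steady growth from ~150 descriptors visible long before the EMFILE limit is hit.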

  Below is the ceph.conf from no

[Yahoo-eng-team] [Bug 1177432] Re: [SRU] cloud-init archive template should match Ubuntu Server

2015-11-10 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.7~bzr1149-0ubuntu3

---
cloud-init (0.7.7~bzr1149-0ubuntu3) wily; urgency=medium

  * d/patches/lp-1177432-same-archives-as-ubuntu-server.patch: use the
same archive pockets as Ubuntu Server (LP: #1177432).

 -- Ben Howard   Thu, 05 Nov 2015 12:41:19 -0700

** Changed in: cloud-init (Ubuntu Wily)
   Status: Fix Committed => Fix Released

** Changed in: cloud-init (Ubuntu Vivid)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1177432

Title:
  [SRU] cloud-init archive template should match Ubuntu Server

Status in cloud-init:
  New
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Precise:
  New
Status in cloud-init source package in Trusty:
  Fix Released
Status in cloud-init source package in Vivid:
  Fix Released
Status in cloud-init source package in Wily:
  Fix Released
Status in cloud-init source package in Xenial:
  Fix Released

Bug description:
  [SRU Justification]
  Ubuntu Cloud Images are inconsistent with desktop and bare-metal server 
installations since backports, restricted and multiverse are not enabled. This 
is effected via cloud-init that uses a template to select an in-cloud archive.

  [FIX] Make the cloud-init template match that of Ubuntu-server.

  [REGRESSION] The potential for regression is low. However, all users
  will experience slower fetch times on apt-get updates, especially on
  slower or high-latency networks.

  [TEST]
  1. Build image from -proposed
  2. Boot up image
  3. Confirm that "sudo apt-get update" pulls in backports, restricted and 
multiverse.

  Backports are currently not enabled in the cloud-init template. This
  is needed in order to get the backport kernels on cloud images.

  Related bugs:
   * bug 997371:  Create command to add "multiverse" and "-backports" to apt 
sources
   * bug 1513529:  cloud image built-in /etc/apt/sources.list needs updating
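
For illustration, the Ubuntu Server pocket layout the template is being aligned with looks roughly like this (a sketch using the main archive mirror and trusty as an example release; cloud images substitute a per-cloud mirror and the running release's suite names):

```
deb http://archive.ubuntu.com/ubuntu trusty main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu trusty-updates main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu trusty-backports main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu trusty-security main restricted universe multiverse
```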

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1177432/+subscriptions



[Yahoo-eng-team] [Bug 1177432] Re: [SRU] cloud-init archive template should match Ubuntu Server

2015-11-10 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.5-0ubuntu1.14

---
cloud-init (0.7.5-0ubuntu1.14) trusty; urgency=medium

  * d/patches/lp-1177432-same-archives-as-ubuntu-server.patch: use the
same archive pockets as Ubuntu Server (LP: #1177432).

 -- Ben Howard   Thu, 05 Nov 2015 09:50:57 -0700

** Changed in: cloud-init (Ubuntu Trusty)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1177432




[Yahoo-eng-team] [Bug 1494615] Re: Opening the workflow in new window is not proper

2015-11-10 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1494615

Title:
  Opening the workflow in new window is not proper

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  I am using Kilo. Whenever I try to open a workflow in the Horizon
  dashboard (e.g. Launch Instance) in a separate tab, it does not load
  properly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1494615/+subscriptions



[Yahoo-eng-team] [Bug 1477373] Re: No way to convert V2 tokens to V3 if domain id changes

2015-11-10 Thread Launchpad Bug Tracker
[Expired for OpenStack Identity (keystone) because there has been no
activity for 60 days.]

** Changed in: keystone
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1477373

Title:
  No way to convert V2 tokens to V3 if domain id changes

Status in OpenStack Identity (keystone):
  Expired

Bug description:
  While many people are still stuck on V2 tokens, we need a safe way to
  map them to V3.  If the default domain changes, the tokens will not
  be properly converted.  The best that can be done now is to guess that
  the domain_id is "default" and the name is "Default".

  Both of these values should be included as hints in V2 tokens until
  they are completely removed.
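
  The proposed workaround can be sketched as a small helper that back-fills
  domain hints on a V2 token payload. The dict shape and helper name here are
  illustrative, not Keystone's actual code:

```python
def add_domain_hints(v2_token: dict,
                     domain_id: str = "default",
                     domain_name: str = "Default") -> dict:
    """Back-fill domain hints on a V2 token's user/tenant blocks so it can
    still be mapped to V3 if the default domain was changed.
    Hints that are already present are left untouched."""
    for key in ("user", "tenant"):
        block = v2_token.get("access", {}).get(key)
        if block is not None:
            block.setdefault("domain_id", domain_id)
            block.setdefault("domain_name", domain_name)
    return v2_token
```

  With such hints embedded, a V3 consumer no longer has to guess which domain
  a V2 token's user and project belong to.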

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1477373/+subscriptions



[Yahoo-eng-team] [Bug 1485779] [NEW] [neutron-lbaas]Delete member with non existing member id throws incorrect error message.

2015-11-11 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

neutron lbaas-member-delete with a non-existing member ID throws an
incorrect error message.

For e.g.

neutron lbaas-member-delete 852bfa31-6522-4ccf-b48c-768cd2ab5212
test_pool

Throws the following error message.

Multiple member matches found for name '852bfa31-6522-4ccf-b48c-
768cd2ab5212', use an ID to be more specific.

Example:

$ neutron lbaas-member-list pool1

+--------------------------------------+----------+---------------+--------+--------------------------------------+----------------+
| id                                   | address  | protocol_port | weight | subnet_id                            | admin_state_up |
+--------------------------------------+----------+---------------+--------+--------------------------------------+----------------+
| 64e4d9f4-c2c5-4d58-b696-21cb7cff21ad | 10.3.3.5 |            80 |      1 | e822a77b-5060-4407-a766-930d6fd8b644 | True           |
| a1a9c7a6-f9a5-4c12-9013-f0990a5f2d54 | 10.3.3.3 |            80 |      1 | e822a77b-5060-4407-a766-930d6fd8b644 | True           |
| d9d060ee-8af3-4d98-9bb9-49bb81bc4c37 | 10.2.2.3 |            80 |      1 | f6398da5-9234-4ed9-a0ca-29cbd33d44b9 | True           |
+--------------------------------------+----------+---------------+--------+--------------------------------------+----------------+

$ neutron lbaas-member-delete non-existing-uuid pool1
Multiple member matches found for name 'non-existing-uuid', use an ID to be 
more specific.
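
The misleading message comes from the client falling back to name matching when the ID lookup finds nothing. A resolver that distinguishes the two cases might look like this (an illustrative sketch, not the actual neutronclient code):

```python
import uuid

class MemberNotFound(Exception):
    pass

def resolve_member(members: list, identifier: str) -> dict:
    """Resolve a member by ID first; fall back to name matching only for
    non-UUID identifiers, so a UUID that simply does not exist is reported
    as 'not found' rather than 'multiple matches'."""
    try:
        uuid.UUID(identifier)
        is_uuid = True
    except ValueError:
        is_uuid = False

    if is_uuid:
        for m in members:
            if m["id"] == identifier:
                return m
        raise MemberNotFound(f"No member with ID {identifier}")

    matches = [m for m in members if m.get("name") == identifier]
    if len(matches) == 1:
        return matches[0]
    if not matches:
        raise MemberNotFound(f"No member named {identifier}")
    raise ValueError(
        f"Multiple member matches found for name '{identifier}', use an ID")
```

Under this scheme the example above would yield "No member with ID 852bfa31-..." instead of the confusing "Multiple member matches" message.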

** Affects: neutron
 Importance: Medium
 Assignee: Reedip (reedip-banerjee)
 Status: Confirmed

-- 
[neutron-lbaas]Delete member with non existing member id throws incorrect error 
message.
https://bugs.launchpad.net/bugs/1485779
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.



[Yahoo-eng-team] [Bug 1515506] [NEW] There is no facility to name LBaaS v2 members and Health Monitors

2015-11-11 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

High Level Requirement: 
Currently there is no facility to name LBaaS v2 Members and Health Monitors.
Although optional, having the NAME field lets users remember specific
objects (in this case Health Monitors and Members), so that any task related
to these objects can be done easily instead of retrieving their IDs every time.

The following issue is raised to allow a new parameter 'name' to be
added to the LBaaS tables Health Monitors and Members, just like the other
LBaaS tables (listener, loadbalancer, pool) have.

Pre-Conditions:
LBaaS v2 is enabled in the system.

Version: 
Git ID :321da8f6263d46bf059163bcf7fd005cf68601bd

Environment:
Ubuntu 14.04, with Devstack All In One, FWaaS , LBaaSv2 and Octavia enabled.

Perceived Severity: Medium

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas
-- 
There is no facility to name LBaaS v2 members and Health Monitors
https://bugs.launchpad.net/bugs/1515506
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.



[Yahoo-eng-team] [Bug 1515564] [NEW] Internal server error when running qos-bandwidth-limit-rule-update as a tenant

2015-11-12 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

When running the following command as a tenant:
# neutron qos-bandwidth-limit-rule-update eadd705c-f295-4246-952e-bad2762e6b27 
policy2 --max-kbps 2048

I get the error:
Request Failed: internal server error while processing your request.

There should be a meaningful error instead, such as "Updating policy is
not allowed".
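
The expected behaviour can be sketched as catching the authorization failure at the API layer and returning a 403 with a clear message, instead of letting it surface as a 500 (hypothetical names; not Neutron's actual code):

```python
class PolicyNotAuthorized(Exception):
    """Raised when a tenant attempts an operation reserved for admins."""

def update_bandwidth_rule(is_admin: bool, rule_id: str, max_kbps: int):
    """Return an (http_status, message) pair for the update attempt."""
    try:
        if not is_admin:
            raise PolicyNotAuthorized(
                "Updating QoS policy rules is not allowed")
        # ... perform the actual rule update here ...
        return 200, f"Rule {rule_id} updated to {max_kbps} kbps"
    except PolicyNotAuthorized as exc:
        # Translate to a client error with a meaningful message rather
        # than letting the exception bubble up as a generic 500.
        return 403, str(exc)
```

The key point is that a policy violation is a client error (403), not an internal server error.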

Version
==
CentOS 7.1
python-neutronclient-3.1.1-dev7.el7.centos.noarch

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
Internal server error when running qos-bandwidth-limit-rule-update as a tenant
https://bugs.launchpad.net/bugs/1515564
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.



[Yahoo-eng-team] [Bug 1445631] Re: Cells: tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_rebuild_server_in_stop_state

2015-11-13 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1445631

Title:
  Cells:
  
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_rebuild_server_in_stop_state

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Failed Tempest due to a not yet tracked down race condition.

  2015-04-17 15:52:46.984 | 
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_rebuild_server_in_stop_state[gate,id-30449a88-5aff-4f9b-9866-6ee9b17f906d]
  2015-04-17 15:52:46.984 | 
-
  2015-04-17 15:52:46.984 | 
  2015-04-17 15:52:46.984 | Captured traceback:
  2015-04-17 15:52:46.984 | ~~~
  2015-04-17 15:52:46.984 | Traceback (most recent call last):
  2015-04-17 15:52:46.984 |   File 
"tempest/api/compute/servers/test_server_actions.py", line 162, in 
test_rebuild_server_in_stop_state
  2015-04-17 15:52:46.984 | self.client.stop(self.server_id)
  2015-04-17 15:52:46.984 |   File 
"tempest/services/compute/json/servers_client.py", line 356, in stop
  2015-04-17 15:52:46.985 | return self.action(server_id, 'os-stop', 
None, **kwargs)
  2015-04-17 15:52:46.985 |   File 
"tempest/services/compute/json/servers_client.py", line 223, in action
  2015-04-17 15:52:46.985 | post_body)
  2015-04-17 15:52:46.985 |   File 
"/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 252, in post
  2015-04-17 15:52:46.985 | return self.request('POST', url, 
extra_headers, headers, body)
  2015-04-17 15:52:46.985 |   File 
"/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 629, in request
  2015-04-17 15:52:46.985 | resp, resp_body)
  2015-04-17 15:52:46.985 |   File 
"/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 685, in _error_checker
  2015-04-17 15:52:46.985 | raise exceptions.Conflict(resp_body)
  2015-04-17 15:52:46.985 | tempest_lib.exceptions.Conflict: An object with 
that identifier already exists
  2015-04-17 15:52:46.985 | Details: {u'message': u"Cannot 'stop' instance 
79651f8a-15db-4067-b1e7-184c72341618 while it is in task_state rebuilding", 
u'code': 409}
  2015-04-17 15:52:46.986 | 
  2015-04-17 15:52:46.986 |
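
  A common way tests guard against this race (a sketch only; Tempest ships its
  own waiters) is to poll the server until task_state clears before issuing
  the next action:

```python
import time

def wait_for_task_state_none(get_server, server_id, timeout=60, interval=1):
    """Poll get_server(server_id) until its task_state is None, so that a
    follow-up action (e.g. os-stop) does not race an in-flight rebuild."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        server = get_server(server_id)
        if server.get("OS-EXT-STS:task_state") is None:
            return server
        time.sleep(interval)
    raise TimeoutError(f"server {server_id} still busy after {timeout}s")
```

  Had the test waited for the rebuild's task_state to clear, the 409
  "Cannot 'stop' instance ... while it is in task_state rebuilding" above
  would not occur.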

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1445631/+subscriptions



[Yahoo-eng-team] [Bug 1445629] Re: Cells: tempest.api.compute.servers.test_disk_config.ServerDiskConfigTestJSON.test_rebuild_server_with_manual_disk_config

2015-11-13 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1445629

Title:
  Cells:
  
tempest.api.compute.servers.test_disk_config.ServerDiskConfigTestJSON.test_rebuild_server_with_manual_disk_config

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Tempest failures have been seen in this test for various reasons. It
  has failed on 'MANUAL' != 'AUTO' in an assertion, and occasionally with
  the 500 error seen below.

  2015-04-17 15:52:46.992 | 
tempest.api.compute.servers.test_disk_config.ServerDiskConfigTestJSON.test_rebuild_server_with_manual_disk_config[gate,id-bef56b09-2e8c-4883-a370-4950812f430e]
  2015-04-17 15:52:46.992 | 
---
  2015-04-17 15:52:46.992 | 
  2015-04-17 15:52:46.992 | Captured traceback:
  2015-04-17 15:52:46.993 | ~~~
  2015-04-17 15:52:46.993 | Traceback (most recent call last):
  2015-04-17 15:52:46.993 |   File 
"tempest/api/compute/servers/test_disk_config.py", line 62, in 
test_rebuild_server_with_manual_disk_config
  2015-04-17 15:52:46.993 | disk_config='MANUAL')
  2015-04-17 15:52:46.993 |   File 
"tempest/services/compute/json/servers_client.py", line 286, in rebuild
  2015-04-17 15:52:46.993 | rebuild_schema, **kwargs)
  2015-04-17 15:52:46.993 |   File 
"tempest/services/compute/json/servers_client.py", line 223, in action
  2015-04-17 15:52:46.993 | post_body)
  2015-04-17 15:52:46.993 |   File 
"/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 252, in post
  2015-04-17 15:52:46.993 | return self.request('POST', url, 
extra_headers, headers, body)
  2015-04-17 15:52:46.993 |   File 
"/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 629, in request
  2015-04-17 15:52:46.994 | resp, resp_body)
  2015-04-17 15:52:46.994 |   File 
"/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 734, in _error_checker
  2015-04-17 15:52:46.994 | raise exceptions.ServerFault(message)
  2015-04-17 15:52:46.994 | tempest_lib.exceptions.ServerFault: Got server 
fault
  2015-04-17 15:52:46.994 | Details: The server has either erred or is 
incapable of performing the requested operation.
  2015-04-17 15:52:46.994 | 
  2015-04-17 15:52:46.994 | 
  2015-04-17 15:52:46.994 | Captured pythonlogging:
  2015-04-17 15:52:46.994 | ~~~
  2015-04-17 15:52:46.994 | 2015-04-17 15:41:20,649 22297 INFO 
[tempest_lib.common.rest_client] Request 
(ServerDiskConfigTestJSON:test_rebuild_server_with_manual_disk_config): 200 GET 
http://127.0.0.1:8774/v2/122b3703a169473b95c2df822ab3b445/servers/64384837-aeed-4716-8b70-e352969dc70e
 0.093s
  2015-04-17 15:52:46.994 | 2015-04-17 15:41:20,649 22297 DEBUG
[tempest_lib.common.rest_client] Request - Headers: {'X-Auth-Token': 
'', 'Accept': 'application/json', 'Content-Type': 'application/json'}
  2015-04-17 15:52:46.994 | Body: None
  2015-04-17 15:52:46.995 | Response - Headers: {'date': 'Fri, 17 Apr 
2015 15:41:20 GMT', 'content-location': 
'http://127.0.0.1:8774/v2/122b3703a169473b95c2df822ab3b445/servers/64384837-aeed-4716-8b70-e352969dc70e',
 'content-length': '1561', 'connection': 'close', 'content-type': 
'application/json', 'x-compute-request-id': 
'req-d4ffd48d-d352-43a0-93f2-69aaac4b15cc', 'status': '200'}
  2015-04-17 15:52:46.995 | Body: {"server": {"status": "ACTIVE", 
"updated": "2015-04-17T15:41:19Z", "hostId": 
"76f5dd41c013d68299b15e024f385c17e44224a89fec40092bc453a1", "addresses": 
{"private": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fc:d9:ec", "version": 4, 
"addr": "10.1.0.3", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": 
"http://127.0.0.1:8774/v2/122b3703a169473b95c2df822ab3b445/servers/64384837-aeed-4716-8b70-e352969dc70e";,
 "rel": "self"}, {"href": 
"http://127.0.0.1:8774/122b3703a169473b95c2df822ab3b445/servers/64384837-aeed-4716-8b70-e352969dc70e";,
 "rel": "bookmark"}], "key_name": null, "image": {"id": 
"c119e569-3af7-41c9-a5da-6bab97b7c508", "links": [{"href": 
"http://127.0.0.1:8774/122b3703a169473b95c2df822ab3b445/images/c119e569-3af7-41c9-a5da-6bab97b7c508";,
 "rel": "bookmark"}]}, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": 
"active", "OS-SRV-USG:launched_at": "2015-04-17T15:41:17.00", "flavor": 
{"id": "42", "links": [{"href": "http://127.0.0.1:8774
 /122b3703a169473b95c2df822ab3b445/flavors/42", "rel": "bookmark"}]}, "id": 
"64384837-aeed-4716-8b70-e352969d

[Yahoo-eng-team] [Bug 1445628] Re: Cells: Tempest test_ec2_instance_run.InstanceRunTest.test_run_idempotent_instances

2015-11-13 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1445628

Title:
  Cells: Tempest
  test_ec2_instance_run.InstanceRunTest.test_run_idempotent_instances

Status in OpenStack Compute (nova):
  Expired

Bug description:
  The tempest test is failing, likely due to a race condition with
  setting system_metadata on instances too quickly.

  2015-04-17 15:52:47.001 | 
tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_run_idempotent_instances[id-c881fbb7-d56e-4054-9d76-1c3a60a207b0]
  2015-04-17 15:52:47.001 | 

  2015-04-17 15:52:47.001 | 
  2015-04-17 15:52:47.001 | Captured traceback:
  2015-04-17 15:52:47.001 | ~~~
  2015-04-17 15:52:47.001 | Traceback (most recent call last):
  2015-04-17 15:52:47.001 |   File 
"tempest/thirdparty/boto/test_ec2_instance_run.py", line 120, in 
test_run_idempotent_instances
  2015-04-17 15:52:47.001 | self.assertEqual(reservation_1.id, 
reservation_1a.id)
  2015-04-17 15:52:47.001 |   File 
"/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 350, in assertEqual
  2015-04-17 15:52:47.001 | self.assertThat(observed, matcher, message)
  2015-04-17 15:52:47.002 |   File 
"/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 435, in assertThat
  2015-04-17 15:52:47.002 | raise mismatch_error
  2015-04-17 15:52:47.002 | testtools.matchers._impl.MismatchError: 
u'r-b5r49kpt' != u'r-pkxdvrw5'
  2015-04-17 15:52:47.002 | 
  2015-04-17 15:52:47.002 | 
  2015-04-17 15:52:47.002 | Captured pythonlogging:
  2015-04-17 15:52:47.002 | ~~~
  2015-04-17 15:52:47.002 | 2015-04-17 15:49:33,598 22297 INFO 
[tempest_lib.common.rest_client] Request 
(InstanceRunTest:test_run_idempotent_instances): 200 GET 
http://127.0.0.1:35357/v2.0/users/c70380a98d8a4842a3a40f8470aef63d/credentials/OS-EC2
 0.378s
  2015-04-17 15:52:47.002 | 2015-04-17 15:49:33,598 22297 DEBUG
[tempest_lib.common.rest_client] Request - Headers: {'X-Auth-Token': 
'', 'Accept': 'application/json', 'Content-Type': 'application/json'}
  2015-04-17 15:52:47.002 | Body: None
  2015-04-17 15:52:47.002 | Response - Headers: {'date': 'Fri, 17 Apr 
2015 15:49:33 GMT', 'x-openstack-request-id': 
'req-d4e9b050-8987-4880-a8ad-bf7d0ddeff46', 'vary': 'X-Auth-Token', 'server': 
'Apache/2.4.7 (Ubuntu)', 'content-length': '225', 'connection': 'close', 
'content-type': 'application/json', 'status': '200', 'content-location': 
'http://127.0.0.1:35357/v2.0/users/c70380a98d8a4842a3a40f8470aef63d/credentials/OS-EC2'}
  2015-04-17 15:52:47.003 | Body: {"credentials": [{"access": 
"a1dcddda366840f0b39548da591c3eac", "tenant_id": 
"d3f919661707425eb0cf76113ffd03c4", "secret": 
"c33e9727f7f743eca190009b96edea13", "user_id": 
"c70380a98d8a4842a3a40f8470aef63d", "trust_id": null}]}
  2015-04-17 15:52:47.003 | 2015-04-17 15:49:39,084 22297 INFO 
[tempest_lib.common.rest_client] Request 
(InstanceRunTest:test_run_idempotent_instances): 200 GET 
http://127.0.0.1:35357/v2.0/users/c70380a98d8a4842a3a40f8470aef63d/credentials/OS-EC2
 0.479s
  2015-04-17 15:52:47.003 | 2015-04-17 15:49:39,085 22297 DEBUG
[tempest_lib.common.rest_client] Request - Headers: {'X-Auth-Token': 
'', 'Accept': 'application/json', 'Content-Type': 'application/json'}
  2015-04-17 15:52:47.003 | Body: None
  2015-04-17 15:52:47.003 | Response - Headers: {'date': 'Fri, 17 Apr 
2015 15:49:38 GMT', 'x-openstack-request-id': 
'req-7dec7fd6-19e0-4bd1-9819-374b286e2ba1', 'vary': 'X-Auth-Token', 'server': 
'Apache/2.4.7 (Ubuntu)', 'content-length': '225', 'connection': 'close', 
'content-type': 'application/json', 'status': '200', 'content-location': 
'http://127.0.0.1:35357/v2.0/users/c70380a98d8a4842a3a40f8470aef63d/credentials/OS-EC2'}
  2015-04-17 15:52:47.003 | Body: {"credentials": [{"access": 
"a1dcddda366840f0b39548da591c3eac", "tenant_id": 
"d3f919661707425eb0cf76113ffd03c4", "secret": 
"c33e9727f7f743eca190009b96edea13", "user_id": 
"c70380a98d8a4842a3a40f8470aef63d", "trust_id": null}]}
  2015-04-17 15:52:47.003 | 2015-04-17 15:49:45,016 22297 INFO 
[tempest_lib.common.rest_client] Request 
(InstanceRunTest:test_run_idempotent_instances): 200 GET 
http://127.0.0.1:35357/v2.0/users/c70380a98d8a4842a3a40f8470aef63d/credentials/OS-EC2
 0.430s
  2015-04-17 15:52:47.003 | 2015-04-17 15:49:45,016 22297 DEBUG
[tempest_lib.common.rest_client] Request - Headers: {'X-Auth-Token': 
'', 'Accept': 'application/

[Yahoo-eng-team] [Bug 1385295] [NEW] use_syslog=True does not log to syslog via /dev/log anymore

2015-11-14 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

python-oslo.log SRU:
[Impact]

 * Nova services not able to write log to syslog

[Test Case]

 * 1. Set use_syslog to True in nova.conf/cinder.conf
   2. stop rsyslog service
   3. restart nova/cinder services
   4. restart rsyslog service
   5. Log is not written to syslog after rsyslog is brought up.

[Regression Potential]

 * none


Reproduced on:
https://github.com/openstack-dev/devstack 
514c82030cf04da742d16582a23cc64962fdbda1
/opt/stack/keystone/keystone.egg-info/PKG-INFO:Version: 2015.1.dev95.g20173b1
/opt/stack/heat/heat.egg-info/PKG-INFO:Version: 2015.1.dev213.g8354c98
/opt/stack/glance/glance.egg-info/PKG-INFO:Version: 2015.1.dev88.g6bedcea
/opt/stack/cinder/cinder.egg-info/PKG-INFO:Version: 2015.1.dev110.gc105259

How to reproduce:
Set
 use_syslog=True
 syslog_log_facility=LOG_SYSLOG
for Openstack config files and restart processes inside their screens

Expected:
Openstack logs logged to syslog as well

Actual:
Nothing goes to syslog

** Affects: oslo.log
 Importance: High
 Assignee: John Stanford (jxstanford)
 Status: Fix Released

** Affects: cinder (Ubuntu)
 Importance: Medium
 Status: Invalid

** Affects: nova
 Importance: Medium
 Status: Invalid

** Affects: python-oslo.log (Ubuntu)
 Importance: High
 Assignee: Liang Chen (cbjchen)
 Status: In Progress


** Tags: in-stable-kilo patch
-- 
use_syslog=True does not log to syslog via /dev/log anymore
https://bugs.launchpad.net/bugs/1385295
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).



[Yahoo-eng-team] [Bug 1492759] Re: heat-engine refers to a non-existent novaclient's method

2015-11-14 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1492759

Title:
  heat-engine refers to a non-existent novaclient's method

Status in heat:
  Invalid
Status in OpenStack Compute (nova):
  Expired

Bug description:
  OpenStack Kilo on CentOS 7

  I cannot create a stack. heat-engine fails regardless of which template is
  used.

  Error message: ERROR: Property error: : resources.pgpool.properties.flavor: :
  'OpenStackComputeShell' object has no attribute '_discover_extensions'

  heat-engine log:
  
  2015-09-06 15:34:08.242 19788 DEBUG oslo_messaging._drivers.amqp [-] unpacked 
context: {u'username': None, u'user_id': u'665b2e5b102a413c90433933aade392b', 
u'region_name': None, u'roles': [u'user', u'heat_stack_owner'], 
u'user_identity': u'- daddy', u'tenant_id': 
u'b408e8f5cb56432a96767c83583ea051', u'auth_token': u'***', u'auth_token_info': 
{u'token': {u'methods': [u'password'], u'roles': [{u'id': 
u'0698f895b3544a20ac511c6e287691d4', u'name': u'user'}, {u'id': 
u'2061bd7e4e9d4da4a3dc2afff69a823e', u'name': u'heat_stack_owner'}], 
u'expires_at': u'2015-09-06T14:34:08.136737Z', u'project': {u'domain': {u'id': 
u'default', u'name': u'Default'}, u'id': u'b408e8f5cb56432a96767c83583ea051', 
u'name': u'daddy'}, u'catalog': [{u'endpoints': [{u'url': 
u'http://172.17.1.1:9292', u'interface': u'admin', u'region': u'CEURegion', 
u'region_id': u'CEURegion', u'id': u'5dce804bafb34b159ec1b4385460a481'}, 
{u'url': u'http://172.17.1.1:9292', u'interface': u'public', u'region': 
u'CEURegion', u'region_id
 ': u'CEURegion', u'id': u'a5728528ead84649bd561f9841011ff4'}, {u'url': 
u'http://172.17.1.1:9292', u'interface': u'internal', u'region': u'CEURegion', 
u'region_id': u'CEURegion', u'id': u'e205b5ba78e0479fb391d90f4958a8a0'}], 
u'type': u'image', u'id': u'0a0dd8432bd64f88b2c1ffd3d5d23b78', u'name': 
u'glance'}, {u'endpoints': [{u'url': u'http://172.17.1.1:9696', u'interface': 
u'admin', u'region': u'CEURegion', u'region_id': u'CEURegion', u'id': 
u'15831ae42aa143cb94f0d3adc1b353fb'}, {u'url': u'http://172.17.1.1:9696', 
u'interface': u'public', u'region': u'CEURegion', u'region_id': u'CEURegion', 
u'id': u'74bf11a2b9334256bf9abdc618556e2b'}, {u'url': 
u'http://172.17.1.1:9696', u'interface': u'internal', u'region': u'CEURegion', 
u'region_id': u'CEURegion', u'id': u'd326b2c9fa614cad8586c79ab76a66a0'}], 
u'type': u'network', u'id': u'0e75266a6c284a289edb11b1c627c53f', u'name': 
u'neutron'}, {u'endpoints': [{u'url': 
u'http://172.17.1.1:8774/v2/b408e8f5cb56432a96767c83583ea051', u'interface': 
u'int
 ernal', u'region': u'CEURegion', u'region_id': u'CEURegion', u'id': 
u'083e629299bb429ba6ad1bf03451e8db'}, {u'url': 
u'http://172.17.1.1:8774/v2/b408e8f5cb56432a96767c83583ea051', u'interface': 
u'public', u'region': u'CEURegion', u'region_id': u'CEURegion', u'id': 
u'3942023115194893bb6762d02e47524a'}, {u'url': 
u'http://172.17.1.1:8774/v2/b408e8f5cb56432a96767c83583ea051', u'interface': 
u'admin', u'region': u'CEURegion', u'region_id': u'CEURegion', u'id': 
u'b6f4f8a8bc33444b862cd3d9360c67e2'}], u'type': u'compute', u'id': 
u'2a259406aeef4667873d06ef361a1c44', u'name': u'nova'}, {u'endpoints': 
[{u'url': u'http://172.17.1.1:8776/v2/b408e8f5cb56432a96767c83583ea051', 
u'interface': u'admin', u'region': u'CEURegion', u'region_id': u'CEURegion', 
u'id': u'919bab67f54b4973807dcefb37fc22aa'}, {u'url': 
u'http://172.17.1.1:8776/v2/b408e8f5cb56432a96767c83583ea051', u'interface': 
u'internal', u'region': u'CEURegion', u'region_id': u'CEURegion', u'id': 
u'ce0963a3cfba44deb818f7d0551d8bdf'}, {u'url': u
 'http://172.17.1.1:8776/v2/b408e8f5cb56432a96767c83583ea051', u'interface': 
u'public', u'region': u'CEURegion', u'region_id': u'CEURegion', u'id': 
u'e98842d6a18840f7a1d0595957eaa4d6'}], u'type': u'volume', u'id': 
u'5e3afcf192bb4ad8ad9bfd589b0641b9', u'name': u'cinder'}, {u'endpoints': 
[{u'url': u'http://172.17.1.1:8000/v1', u'interface': u'public', u'region': 
u'CEURegion', u'region_id': u'CEURegion', u'id': 
u'4385c791314e4f8a926411b9f4707513'}, {u'url': u'http://172.17.1.1:8000/v1', 
u'interface': u'admin', u'region': u'CEURegion', u'region_id': u'CEURegion', 
u'id': u'a1ed10e71e3d4c81b4f3e175f4c29e3f'}, {u'url': 
u'http://172.17.1.1:8000/v1', u'interface': u'internal', u'region': 
u'CEURegion', u'region_id': u'CEURegion', u'id': 
u'd6d2e7dc54fc4abbb99d93f95d795340'}], u'type': u'cloudformation', u'id': 
u'7a80a5d594414d6fb07f5332bca1d0e1', u'name': u'heat-cfn'}, {u'endpoints': 
[{u'url': u'http://172.17.1.1:5000/v2.0', u'interface': u'public', u'region': 
u'CEURegion', u'region_id': u'CEUR
 egion', u'id': u'0fef9f451d9b42bcaeea6addda1c3870'}, {u'url': 
u'http://172.17.1.1:35357/v2.0', u'interface': u'admin', u'region': 
u'CEURegion', u'

[Yahoo-eng-team] [Bug 1359651] Re: xenapi: still get MAP_DUPLICATE_KEY in some edge cases

2015-11-14 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1359651

Title:
  xenapi: still get MAP_DUPLICATE_KEY in some edge cases

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Older versions of XenServer require us to keep the live copy of
  xenstore in sync with the copy of xenstore recorded in the xenapi
  metadata for that VM.

  Code inspection has shown that we don't consistently keep those two
  copies up to date.

  While it's hard to reproduce these errors (add_ip_address_to_vm seems
  particularly likely to hit issues), it seems best to tidy up the
  xenstore-writing code so we consistently add/remove keys from the live
  copy and the copy in xenapi.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1359651/+subscriptions



[Yahoo-eng-team] [Bug 1461406] Re: libvirt: missing iotune parse for LibvirtConfigGuestDisk

2015-11-14 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1461406

Title:
  libvirt: missing  iotune parse for  LibvirtConfigGuestDisk

Status in OpenStack Compute (nova):
  Expired

Bug description:
  We support instance disk IO control with iotune like:


  102400


  We set iotune in the LibvirtConfigGuestDisk class in libvirt/config.py. The
  parse_dom method doesn't parse the iotune options now.
  This needs fixing.
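For illustration, parsing the <iotune> children of a libvirt <disk> element looks roughly like this. The element names follow libvirt's domain XML, and this sketch is not nova's actual LibvirtConfigGuestDisk.parse_dom code:

```python
import xml.etree.ElementTree as ET

def parse_iotune(disk_xml):
    """Collect the <iotune> children of a libvirt <disk> element
    as a dict of integer tuning values."""
    root = ET.fromstring(disk_xml)
    iotune = root.find("iotune")
    if iotune is None:
        return {}
    return {child.tag: int(child.text) for child in iotune}

DISK_XML = """<disk type='file' device='disk'>
  <iotune>
    <read_bytes_sec>102400</read_bytes_sec>
    <write_iops_sec>200</write_iops_sec>
  </iotune>
</disk>"""
```

The real fix would fold this walk into parse_dom so the options survive a config round-trip.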

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1461406/+subscriptions



[Yahoo-eng-team] [Bug 1405726] Re: getting scoped federation token fails when using db2

2015-11-15 Thread Launchpad Bug Tracker
[Expired for OpenStack Identity (keystone) because there has been no
activity for 60 days.]

** Changed in: keystone
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1405726

Title:
  getting scoped federation token fails when using db2

Status in OpenStack Identity (keystone):
  Expired

Bug description:
  I am using federation.
  Following are the commands I executed.

  I already have an admin_group created that users get mapped to when they
  come back from doing SAML authentication with the IdP.
  I then do:

  openstack role add --group admin_group --domain default  admin

   curl --insecure -X GET https://172.20.14.16:35357/v3/OS-FEDERATION/domains 
-H "User-Agent: python-keystoneclient" -H "Content-Type: application/json" -H 
"X-Auth-Token:  58e6ceef8dcf4aceb508323e5a2a7c35"
  {"domains": [{"links": {"self": 
"https://172.20.14.16:5000/v3/domains/default"}, "enabled": true, 
"description": "Owns users and tenants (i.e. projects) available on Identity 
API v2.", "name": "Default", "id": "default"}], "links": {"self": 
"https://172.20.14.16:5000/v3/OS-FEDERATION/domains";, "previous": null, "next": 
null}}

  openstack role add --group admin_group --project admin admin
  curl --insecure -X GET https://172.20.14.16:35357/v3/OS-FEDERATION/projects 
-H "User-Agent: python-keystoneclient" -H "Content-Type: application/json" -H 
"X-Auth-Token:  58e6ceef8dcf4aceb508323e5a2a7c35"

  Command to get a scoped token:
  curl --insecure -X POST  POST https://sp.machine:35357/v3/auth/tokens  -H 
"User-Agent: python-keystoneclient" -H "Content-Type: application/json" -H 
"X-Auth-Token:  58e6ceef8dcf4aceb508323e5a2a7c35"  -d 
'{"auth":{"identity":{"methods":["saml2"],"saml2":{"id":"58e6ceef8dcf4aceb508323e5a2a7c35"}},"scope":{"project":{"domain":
 {"id": "default"},"name":"admin"'

  This gives an error as follows
  2014-12-26 05:58:14.622 26820 ERROR keystone.common.wsgi [-] 
(ProgrammingError) ibm_db_dbi::ProgrammingError: SQLNumResultCols failed: 
[IBM][CLI Driver][DB2/LINUXX8664] SQL0134N  Improper use of a string column, 
host variable, constant, or function "ROLE_EXTRA".  SQLSTATE=42907 SQLCODE=-134 
'SELECT DISTINCT role.id AS role_id, role.name AS role_name, role.extra AS 
role_extra \nFROM role, assignment \nWHERE assignment."type" = ? AND 
assignment.target_id = ? AND role.id = assignment.role_id AND 
assignment.actor_id IN (?)' ('GroupProject', 
'c9efdd57ae9d4f5f97d07424c5c4da90', '83ef4a24bf18480f849e903ddfaba7a9')
  2014-12-26 05:58:14.622 26820 TRACE keystone.common.wsgi Traceback (most 
recent call last):
  2014-12-26 05:58:14.622 26820 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.6/site-packages/keystone/common/wsgi.py", line 207, in 
__call__
  2014-12-26 05:58:14.622 26820 TRACE keystone.common.wsgi result = 
method(context, **params)
  2014-12-26 05:58:14.622 26820 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.6/site-packages/keystone/auth/controllers.py", line 343, in 
authenticate_for_token
  2014-12-26 05:58:14.622 26820 TRACE keystone.common.wsgi domain_id, 
auth_context, trust, metadata_ref, include_catalog)
  2014-12-26 05:58:14.622 26820 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.6/site-packages/keystone/common/manager.py", line 78, in 
_wrapper
  2014-12-26 05:58:14.622 26820 TRACE keystone.common.wsgi return f(*args, 
**kw)
  2014-12-26 05:58:14.622 26820 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.6/site-packages/keystone/token/providers/common.py", line 
428, in issue_v3_token
  2014-12-26 05:58:14.622 26820 TRACE keystone.common.wsgi domain_id)
  2014-12-26 05:58:14.622 26820 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.6/site-packages/keystone/token/providers/common.py", line 
503, in _handle_saml2_tokens
  2014-12-26 05:58:14.622 26820 TRACE keystone.common.wsgi group_ids, 
project_id, domain_id, user_id)
  2014-12-26 05:58:14.622 26820 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.6/site-packages/keystone/token/providers/common.py", line 
199, in _populate_roles_for_groups
  2014-12-26 05:58:14.622 26820 TRACE keystone.common.wsgi domain_id)
  2014-12-26 05:58:14.622 26820 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.6/site-packages/keystone/common/manager.py", line 78, in 
_wrapper
  2014-12-26 05:58:14.622 26820 TRACE keystone.common.wsgi return f(*args, 
**kw)
  2014-12-26 05:58:14.622 26820 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.6/site-packages/keystone/assignment/backends/sql.py", line 
320, in get_roles_for_groups
  2014-12-26 05:58:14.622 26820 TRACE keystone.common.wsgi return 
[role.to_dict() for role in query.all()]
  2014-12-26 05:58:14.622 26820 TRACE keystone.common.wsgi   File 
"/usr/lib64/python2.6/site-packages/sqlalchemy/orm/query.py", line 2115, in all
  2014-1

[Yahoo-eng-team] [Bug 1496219] Re: get image error when boot a instance

2015-11-15 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1496219

Title:
  get image error when boot a instance

Status in OpenStack Compute (nova):
  Expired

Bug description:
  2015-09-16 11:26:10.018 DEBUG nova.quota 
[req-4d4a1b1e-3ea3-41bd-b0d4-8b3adf2a1fd0 admin admin] Getting all quota usages 
for project: 457fcc6f0fb049a89bef6271495788c6 from (pid=5839) 
get_project_quotas /opt/stack/nova/nova/quota.py:290
  2015-09-16 11:26:10.030 INFO nova.osapi_compute.wsgi.server 
[req-4d4a1b1e-3ea3-41bd-b0d4-8b3adf2a1fd0 admin admin] 10.0.10.50 "GET 
/v2.1/457fcc6f0fb049a89bef6271495788c6/limits?reserved=1 HTTP/1.1" status: 200 
len: 779 time: 0.0331810
  ^C
  [stack@devstack logs]$ less -R n-api.log.2015-09-15-171330
  2015-09-16 11:26:09.816 ERROR nova.api.openstack.extensions 
[req-27b137d8-77da-4082-8553-05598efff073 admin admin] Unexpected exception in 
API method
  2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions Traceback (most 
recent call last):
  2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/extensions.py", line 478, in wrapped
  2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions return 
f(*args, **kwargs)
  2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/validation/__init__.py", line 73, in wrapper
  2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions return 
func(*args, **kwargs)
  2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/servers.py", line 597, in create
  2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions 
**create_kwargs)
  2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/hooks.py", line 149, in inner
  2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions rv = f(*args, 
**kwargs)
  2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/compute/api.py", line 1557, in create
  2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions 
check_server_group_quota=check_server_group_quota)
  2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/compute/api.py", line 1139, in _create_instance
  2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions image_id, 
boot_meta = self._get_image(context, image_href)
  2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/compute/api.py", line 849, in _get_image
  2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions image = 
self.image_api.get(context, image_href)
  2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/image/api.py", line 93, in get
  2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions 
show_deleted=show_deleted)
  2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/image/glance.py", line 320, in show
  2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions 
include_locations=include_locations)
  2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/image/glance.py", line 497, in _translate_from_glance
  2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions 
include_locations=include_locations)
  2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/image/glance.py", line 559, in _extract_attributes
  2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions queued = 
getattr(image, 'status') == 'queued'
  2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/openstack/common/apiclient/base.py",
 line 490, in __getattr__
  2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions self.get()
  2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/openstack/common/apiclient/base.py",
 line 508, in get
  2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions new = 
self.manager.get(self.id)
  2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/openstack/common/apiclient/base.py",
 line 493, in __getattr__
  2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions raise 
AttributeError(k)
  2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions AttributeError: id
  2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions
  2015-09-16 11:26:09.817 INFO nova.api.openstack.wsgi 
[req-27b137d8-77da-4082-8553-05598efff073 admin admin] HTTP exception thrown: 
Unexpected API Error. Please report this at http:/

[Yahoo-eng-team] [Bug 1466485] Re: keystone fails with: ArgsAlreadyParsedError: arguments already parsed: cannot register CLI option

2015-11-16 Thread Launchpad Bug Tracker
[Expired for OpenStack Identity (keystone) because there has been no
activity for 60 days.]

** Changed in: keystone
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1466485

Title:
  keystone fails with: ArgsAlreadyParsedError: arguments already parsed:
  cannot register CLI option

Status in grenade:
  Incomplete
Status in OpenStack Identity (keystone):
  Expired

Bug description:
  Grenade jobs in master fail with the following scenario:

  - grenade.sh attempts to list glance images [1];
  - glance fails because keystone httpd returns 500 [2];
  - keystone fails because "ArgsAlreadyParsedError: arguments already parsed: 
cannot register CLI option" [3]

  Sean Dague says that it's because grenade does not upgrade the keystone
  script, and the script should not even be installed the way it's now
  installed (copied into /var/www/...).

  Relevant thread: http://lists.openstack.org/pipermail/openstack-
  dev/2015-June/067147.html

  [1]: 
http://logs.openstack.org/66/185066/3/check/check-grenade-dsvm-neutron/45d8663/logs/grenade.sh.txt.gz#_2015-06-18_09_08_32_989
  [2]: 
http://logs.openstack.org/66/185066/3/check/check-grenade-dsvm-neutron/45d8663/logs/new/screen-g-api.txt.gz#_2015-06-18_09_08_42_531
  [3]: 
http://logs.openstack.org/66/185066/3/check/check-grenade-dsvm-neutron/45d8663/logs/apache/keystone.txt.gz#_2015-06-18_09_08_46_675874

To manage notifications about this bug go to:
https://bugs.launchpad.net/grenade/+bug/1466485/+subscriptions



[Yahoo-eng-team] [Bug 1367110] [NEW] novaclient quota-update should handle tenant names

2015-11-17 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

nova quota-update should either reject tenant_ids which don't match a
valid UUID or translate tenant names to tenant IDs
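The UUID check this report asks for can be sketched with the standard library; this is illustrative, not novaclient's actual code:

```python
import uuid

def is_tenant_uuid(value):
    """Return True if value parses as a UUID (with or without dashes),
    i.e. it looks like a tenant ID rather than a tenant name."""
    try:
        uuid.UUID(value)
        return True
    except (ValueError, TypeError, AttributeError):
        return False
```

A name that fails this check would then be looked up via keystone and translated to its tenant ID before the quota call.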

** Affects: nova
 Importance: Medium
 Assignee: Harshada Mangesh Kakad (harshada-kakad)
 Status: Confirmed

-- 
novaclient quota-update should handle tenant names
https://bugs.launchpad.net/bugs/1367110
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).



[Yahoo-eng-team] [Bug 1472899] Re: bug in ipam driver code

2015-11-17 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1472899

Title:
  bug in ipam driver code

Status in neutron:
  Expired

Bug description:
  http://logs.openstack.org/26/195326/21/check/check-tempest-dsvm-
  networking-
  ovn/c071fed/logs/screen-q-svc.txt.gz?level=TRACE#_2015-07-09_04_45_12_248

  Fails on existing tempest test.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1472899/+subscriptions



[Yahoo-eng-team] [Bug 1515670] [NEW] VPNaaS: Modify neutron API users to detect multiple local subnet feature

2015-11-18 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

In review of 231133, Akihiro mentioned follow up work for the neutron
API consumers, so that Horizon can detect whether or not the new
multiple local subnet feature, with endpoint groups, is available.

At the moment, the multiple local subnet feature has been implemented in
the VPNaaS API, but API consumers have to call the VPNaaS API to detect
whether the feature is available. It would be better to detect the
feature without having to call the VPNaaS API.

The suggested approach is to add an extension which represents this
feature.

Placeholder for that work.
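Once such an extension exists, a consumer like Horizon could check for it in the extension list returned by GET /v2.0/extensions. A sketch, where the alias "vpn-endpoint-groups" is only a placeholder for whatever alias the extension ends up using:

```python
def supports_endpoint_groups(extensions):
    """extensions: list of extension dicts as returned by the neutron
    GET /v2.0/extensions call; each dict carries an 'alias' key."""
    return any(ext.get("alias") == "vpn-endpoint-groups"
               for ext in extensions)
```

This is the usual neutron pattern: feature discovery via the extension list instead of probing an API call and interpreting the failure.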

** Affects: neutron
 Importance: High
 Assignee: Akihiro Motoki (amotoki)
 Status: New


** Tags: vpnaas
-- 
VPNaaS: Modify neutron API users to detect multiple local subnet feature
https://bugs.launchpad.net/bugs/1515670
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.



[Yahoo-eng-team] [Bug 1506187] Re: [SRU] Azure: cloud-init should use VM unique ID

2015-11-18 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.7~bzr1156-0ubuntu1

---
cloud-init (0.7.7~bzr1156-0ubuntu1) xenial; urgency=medium

  * New upstream snapshot.
  * d/cloud-init.preinst: migrate Azure instance ID from old ID to stable
ID (LP: #1506187).

 -- Ben Howard   Tue, 17 Nov 2015 11:59:49 -0700

** Changed in: cloud-init (Ubuntu Xenial)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1506187

Title:
  [SRU] Azure: cloud-init should use VM unique ID

Status in cloud-init:
  In Progress
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Precise:
  New
Status in cloud-init source package in Trusty:
  New
Status in cloud-init source package in Vivid:
  New
Status in cloud-init source package in Wily:
  New
Status in cloud-init source package in Xenial:
  Fix Released

Bug description:
  SRU JUSTIFICATION

  [IMPACT] On Azure, the InstanceID is currently detected via a
  fabric-provided XML file. With the new CRP stack, this ID is not
  guaranteed to be stable. As a result, instances may re-provision upon
  reboot.

  [FIX] Use DMI data to detect the instance ID and migrate existing
  instances to the new ID.

  [REGRESSION POTENTIAL] The fix is both in the cloud-init code and in
  the packaging. If the instance ID is not properly migrated, then a
  reboot may trigger re-provisioning.

  [TEST CASES]
  1. Boot instance on Azure.
  2. Apply cloud-init from -proposed. A migration message should apply.
  3. Get the new instance ID:
 $ sudo cat /sys/class/dmi/id/product_uuid
  4. Confirm that /var/lib/cloud/instance is a symlink to 
/var/lib/cloud/instances/
  5. Re-install cloud-init and confirm that migration message is NOT displayed.

  [TEST CASE 2]
  1. Build new cloud-image from -proposed
  2. Boot up instance
  3. Confirm that /sys/class/dmi/id/product_uuid is used to get instance ID 
(see /var/log/cloud-init.log)

  
  [ORIGINAL REPORT]
  The Azure datasource currently uses the InstanceID from the SharedConfig.xml 
file.  On our new CRP stack, this ID is not guaranteed to be stable and could 
change if the VM is deallocated.  If the InstanceID changes then cloud-init 
will attempt to reprovision the VM, which could result in temporary loss of 
access to the VM.

  Instead cloud-init should switch to use the VM Unique ID, which is
  guaranteed to be stable everywhere for the lifetime of the VM.  The VM
  unique ID is explained here: https://azure.microsoft.com/en-us/blog
  /accessing-and-using-azure-vm-unique-id/

  In short, the unique ID is available via DMI, and can be accessed with
  the command 'dmidecode | grep UUID' or even easier via sysfs in the
  file "/sys/devices/virtual/dmi/id/product_uuid".

  Steve
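The DMI lookup described in the report can be sketched as follows (the path is parameterised so it can be exercised off-Azure; reading the real sysfs file typically requires root, and this is not cloud-init's actual datasource code):

```python
def read_instance_id(path="/sys/class/dmi/id/product_uuid"):
    """Return the DMI product UUID as a lowercase string, or None if
    the sysfs file is unavailable (non-DMI platform, no privileges)."""
    try:
        with open(path) as f:
            return f.read().strip().lower()
    except (IOError, OSError):
        return None
```

The migration step then only has to compare this stable ID against the one recorded under /var/lib/cloud/instances/ and re-point the symlink.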

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1506187/+subscriptions



[Yahoo-eng-team] [Bug 1241027] Re: Intermitent Selenium unit test timout error

2015-11-19 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1241027

Title:
  Intermitent Selenium unit test timout error

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  I have the following error *SOMETIMES* (e.g. sometimes it does work,
  sometimes it doesn't):

  This is surprising, because the python-selenium, which is non-free,
  isn't installed in my environment, and we were supposed to have a
  patch to not use it if it was detected it wasn't there.

  Since there's a 2-second timeout, it probably happens when my server
  is busy. I would suggest first trying to increase this timeout to
  something like 5 seconds.

  ERROR: test suite for 
  --
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 227, in run
  self.tearDown()
File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 350, in
  tearDown
  self.teardownContext(ancestor)
File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 366, in
  teardownContext
  try_run(context, names)
File "/usr/lib/python2.7/dist-packages/nose/util.py", line 469, in try_run
  return func()
File
  
"/home/zigo/sources/openstack/havana/horizon/build-area/horizon-2013.2~rc3/horizon/test/helpers.py",
  line 179, in tearDownClass
  super(SeleniumTestCase, cls).tearDownClass()
File "/usr/lib/python2.7/dist-packages/django/test/testcases.py", line
  1170, in tearDownClass
  cls.server_thread.join()
File "/usr/lib/python2.7/dist-packages/django/test/testcases.py", line
  1094, in join
  self.httpd.shutdown()
File "/usr/lib/python2.7/dist-packages/django/test/testcases.py", line
  984, in shutdown
  "Failed to shutdown the live test server in 2 seconds. The "
  RuntimeError: Failed to shutdown the live test server in 2 seconds. The
  server might be stuck or generating a slow response.

  The same way, there's this one, which must be related (or shall I say,
  due to the previous error?):

  ERROR: test suite for 
  --
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 208, in run
  self.setUp()
File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 291, in setUp
  self.setupContext(ancestor)
File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 314, in
  setupContext
  try_run(context, names)
File "/usr/lib/python2.7/dist-packages/nose/util.py", line 469, in try_run
  return func()
File
  
"/home/zigo/sources/openstack/havana/horizon/build-area/horizon-2013.2~rc3/horizon/test/helpers.py",
  line 173, in setUpClass
  super(SeleniumTestCase, cls).setUpClass()
File "/usr/lib/python2.7/dist-packages/django/test/testcases.py", line
  1160, in setUpClass
  raise cls.server_thread.error
  WSGIServerException: [Errno 98] Address already in use

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1241027/+subscriptions


