[Yahoo-eng-team] [Bug 1498475] Re: the default security group is not allowed to be deleted, but all the rules are allowed.
Not being able to delete the default security group is a base feature in Nova.

** Changed in: nova Status: New => Opinion
** Changed in: nova Status: Opinion => Won't Fix

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1498475

Title: the default security group is not allowed to be deleted, but all the rules are allowed.

Status in OpenStack Compute (nova): Won't Fix

Bug description: The default security group is not allowed to be deleted, but all of its rules are. The default security group is of no use when it has no rules in it.

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1498475/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1489921] Re: Nova connects to rabbitmq successfully but has invalid credentials
This really isn't a nova bug, this section of the conf routes directly to oslo.messaging and nova never even knows it's a thing ** Changed in: nova Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1489921 Title: Nova connects to rabbitmq successfully but has invalid credentials Status in OpenStack Compute (nova): Invalid Status in oslo.messaging: New Bug description: From rabbitmq log: =INFO REPORT 28-Aug-2015::10:54:20 === accepting AMQP connection <0.15664.0> (10.0.2.26:55772 -> 10.0.2.8:5672) =INFO REPORT 28-Aug-2015::10:54:20 === Mirrored queue 'q-agent-notifier-security_group-update_fanout_c8d714e02b944c7f91dad2530a34ff01' in vhost '/': Adding mirror on node 'rabbit@os-controller-1003': <7448.19519.0> =INFO REPORT 28-Aug-2015::10:54:20 === Mirrored queue 'q-agent-notifier-dvr-update_fanout_87fb0fc8e8224ffb88ea91ee20ad8e29' in vhost '/': Adding mirror on node 'rabbit@os-controller-1002': <7447.3416.0> =INFO REPORT 28-Aug-2015::10:54:20 === Mirrored queue 'q-agent-notifier-dvr-update_fanout_87fb0fc8e8224ffb88ea91ee20ad8e29' in vhost '/': Adding mirror on node 'rabbit@os-controller-1003': <7448.19521.0> =ERROR REPORT 28-Aug-2015::10:54:20 === closing AMQP connection <0.15305.0> (10.0.2.26:55758 -> 10.0.2.8:5672): {handshake_error,starting,0, {amqp_error,access_refused, "AMQPLAIN login refused: user 'openstack' - invalid credentials", 'connection.start_ok'}} =INFO REPORT 28-Aug-2015::10:54:21 === accepting AMQP connection <0.15747.0> (10.0.2.26:55773 -> 10.0.2.8:5672) From Nova Log: 2015-08-28 10:54:19.524 14743 DEBUG oslo_concurrency.lockutils [req-91919deb-c6be-42a8-91f6-557f078f7d19 - - - - -] Lock "compute_resources" acquired by "_update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:444 2015-08-28 10:54:19.827 14743 INFO nova.compute.resource_tracker [req-91919deb-c6be-42a8-91f6-557f078f7d19 - - - - -] Total usable vcpus: 48, total allocated vcpus: 1 2015-08-28 10:54:19.827 14743 INFO nova.compute.resource_tracker [req-91919deb-c6be-42a8-91f6-557f078f7d19 - - - - -] Final resource view: name=osc-1001.prd.cin1.corp.hosting.net phys_ram=257524MB used_ram=2560MB phys_disk=5GB used_disk=20GB total_vcpus=48 used_vcpus=1 pci_stats= 2015-08-28 10:54:19.886 14743 INFO nova.scheduler.client.report [req-91919deb-c6be-42a8-91f6-557f078f7d19 - - - - -] Compute_service record updated for ('osc-1001.prd.cin1.corp.hosting.net', 'osc-1001.prd.cin1.corp.hosting.net') 2015-08-28 10:54:19.886 14743 INFO nova.compute.resource_tracker [req-91919deb-c6be-42a8-91f6-557f078f7d19 - - - - -] Compute_service record updated for osc-1001.prd.cin1.corp.hosting.net:osc-1001.prd.cin1.corp.hosting.net 2015-08-28 10:54:19.887 14743 DEBUG oslo_concurrency.lockutils [req-91919deb-c6be-42a8-91f6-557f078f7d19 - - - - -] Lock "compute_resources" released by "_update_available_resource" :: held 0.363s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:456 2015-08-28 10:54:19.922 14743 DEBUG nova.service [req-91919deb-c6be-42a8-91f6-557f078f7d19 - - - - -] Creating RPC server for service compute start /usr/lib/python2.7/site-packages/nova/service.py:188 2015-08-28 10:54:19.925 14743 INFO oslo_messaging._drivers.impl_rabbit [req-91919deb-c6be-42a8-91f6-557f078f7d19 - - - - -] Connecting to AMQP server on 10.0.2.8:5672 2015-08-28 10:54:19.943 14743 INFO oslo_messaging._drivers.impl_rabbit 
[req-91919deb-c6be-42a8-91f6-557f078f7d19 - - - - -] Connected to AMQP server on 10.0.2.8:5672
2015-08-28 10:54:19.969 14743 DEBUG nova.service [req-91919deb-c6be-42a8-91f6-557f078f7d19 - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python2.7/site-packages/nova/service.py:206
2015-08-28 10:54:19.969 14743 DEBUG nova.servicegroup.drivers.db [req-91919deb-c6be-42a8-91f6-557f078f7d19 - - - - -] DB_Driver: join new ServiceGroup member osc-1001.prd.cin1.corp.hosting.net to the compute group, service = join /usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py:59

Rabbit configuration in nova.conf:

[oslo_messaging_rabbit]
rabbit_hosts=10.0.2.8:5672,10.0.2.7:5672,10.0.2.6:5672
rabbit_userid=openstack
rabbit_password=placeholderpassword

Functionality seems fine and nothing shows up in the nova log, but rabbitmq references this hypervisor, and I get the message even with only the nova OpenStack service running.

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1489921/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
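As a rough illustration of why this lands in oslo.messaging rather than nova: the service only hands its parsed configuration to the messaging library, which registers and reads the [oslo_messaging_rabbit] options itself. A minimal sketch follows; it is not nova's actual wiring, and the config-file path and invocation are assumptions.

    # Minimal sketch (not nova's actual code): the [oslo_messaging_rabbit]
    # options are registered and consumed inside oslo.messaging, so credential
    # handling and reconnect retries never surface in nova's own code.
    from oslo_config import cfg
    import oslo_messaging

    CONF = cfg.CONF
    # Hypothetical invocation; a real service parses its own argv/config files.
    CONF(['--config-file', '/etc/nova/nova.conf'], project='nova')

    # oslo.messaging pulls rabbit_hosts/rabbit_userid/rabbit_password itself.
    transport = oslo_messaging.get_transport(CONF)
    target = oslo_messaging.Target(topic='compute')
    client = oslo_messaging.RPCClient(transport, target)

Which is why the invalid-credential retries reported by rabbitmq are something only oslo.messaging can log or surface.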
[Yahoo-eng-team] [Bug 1489226] Re: Nova should support specifying the block devices(/dev/sd*) name to attach to the instance
** Changed in: nova Status: New => Opinion

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1489226

Title: Nova should support specifying the block device (/dev/sd*) name to attach to the instance

Status in OpenStack Compute (nova): Opinion

Bug description: Nova and the Horizon dashboard should support specifying the block device (e.g. /dev/sd*) to attach to an instance. Users could type the block device name (e.g. /dev/sd*) to attach to the instance, the instance could map that block device with a symlink (e.g. /dev/sd* -> ../../xvd*), and the Horizon dashboard could display this symlink relation to users. e.g.:

Instance: /dev/sd* -> ../../xvd*
Dashboard: /dev/sd* vol-xx /dev/xvd*

Amazon EC2 supports this function very well. In the EC2 Attach Volume dialog box, start typing the name or ID of the instance to attach the volume to in the Instance box, and select it from the list of suggested options (only instances in the same Availability Zone as the volume are displayed). Device names like /dev/sdh and xvdh are used by Amazon EC2 to describe block devices, and the block device mapping is used by Amazon EC2 to specify the block devices to attach to an EC2 instance. Please refer to the following pictures for detail.

http://www.bogotobogo.com/DevOps/AWS/images/AttachingVolume/AttachVolumeMenu.png
http://www.bogotobogo.com/DevOps/AWS/images/AttachingVolume/AttachVolumeDialog.png
http://www.bogotobogo.com/DevOps/AWS/images/AttachingVolume/bogo-ami_Instance_with_new_volume.png

The same hits Horizon: https://bugs.launchpad.net/horizon/+bug/1489227

Best Regards,
Sibiao Luo

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1489226/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
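For reference, the existing attach path already accepts a requested device name, though the guest kernel may still expose the disk under a different name, which is exactly the mapping the report above wants surfaced. A hedged sketch with python-novaclient; the credentials and IDs are placeholders, and the exact constructor arguments vary by novaclient version.

    # Attach a volume and request a device name (sketch; IDs are placeholders).
    # The hypervisor/guest may still present the disk as e.g. /dev/xvdh or
    # /dev/vdb, which is why the report asks for the mapping to be shown.
    from novaclient import client as nova_client

    nova = nova_client.Client('2', 'user', 'password', 'project',
                              'http://keystone:5000/v2.0')  # placeholder creds
    nova.volumes.create_server_volume(
        server_id='11f8d434-cee0-45e2-8f24-d9b13403af16',   # placeholder
        volume_id='c57544a1-443c-4ee0-a009-19b5e1c3598a',   # placeholder
        device='/dev/sdh')                                  # requested name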
[Yahoo-eng-team] [Bug 1480698] Re: MySQL error - too many connections
This is not really a nova job; it's unsurprising that we see this in neutron jobs, since there are more services and therefore more connections. Though now that we're using the pure-Python MySQL driver we might not need such a high API worker count.

** Changed in: nova Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1480698

Title: MySQL error - too many connections

Status in OpenStack Compute (nova): Invalid
Status in oslo.db: New

Bug description:

http://logs.openstack.org/59/138659/33/check/gate-tempest-dsvm-neutron-linuxbridge/29e7adc/logs/screen-n-api.txt.gz?level=ERROR

2015-07-21 11:29:53.660 ERROR nova.api.ec2 [req-522a314d-e88e-4982-b014-64141aeef73a tempest-EC2KeysTest-362858920 tempest-EC2KeysTest-451984995] Unexpected OperationalError raised: (_mysql_exceptions.OperationalError) (1040, 'Too many connections')

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1480698/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
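A back-of-the-envelope sketch of why the extra neutron services push past the default limit; the worker counts and pool sizes below are assumptions for illustration, not values measured from the gate.

    # Rough connection budget (assumed values, purely illustrative): each API
    # worker keeps its own SQLAlchemy pool, so worker counts multiply quickly
    # once neutron's extra services are added on the same database.
    services = {            # per-service API worker counts (assumed)
        'nova-api': 8,
        'nova-conductor': 8,
        'neutron-server': 8,
        'cinder-api': 4,
        'glance-api': 4,
    }
    pool_size = 5           # roughly the SQLAlchemy/oslo.db default (assumed)
    max_overflow = 10       # roughly the SQLAlchemy/oslo.db default (assumed)

    worst_case = sum(workers * (pool_size + max_overflow)
                     for workers in services.values())
    print(worst_case)       # easily past MySQL 5.5's default max_connections (151)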
[Yahoo-eng-team] [Bug 1459958] Re: Error messages returned to the user are not consistent across all apis
I feel like this is one of those things where we need the API working group error specification and can move on from there. Pattern matching on exception names (which might change), isn't really a good long term plan. ** Changed in: nova Status: New => Opinion ** Changed in: nova Importance: Undecided => Wishlist -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1459958 Title: Error messages returned to the user are not consistent across all apis Status in Cinder: Fix Released Status in OpenStack Compute (nova): Opinion Bug description: Error messages returned to the user are not consistent across all apis in case of all exceptions derived from NotFound exception, e.g., VolumeNotFound, SnapshotNotFound etc. To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1459958/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1482234] Re: nova CLI not able to show direction of security rule
Nova doesn't support this function natively ** Changed in: nova Status: New => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1482234 Title: nova CLI not able to show direction of security rule Status in OpenStack Compute (nova): Won't Fix Bug description: As Horizon has the ability to differ between ingress and egress secgroup rules, the nova CLI should be able (preferably by default) to show this information. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1482234/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
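Since nova does not expose the direction attribute, the usual workaround is to ask neutron directly, which does return it. A hedged sketch with python-neutronclient; the endpoint and credentials are placeholders and the constructor arguments vary by client version.

    # List security group rules including their direction via neutron directly,
    # since the nova CLI/API does not expose it. (Sketch; placeholder creds.)
    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(username='user', password='secret',
                                    tenant_name='demo',
                                    auth_url='http://keystone:5000/v2.0')
    for rule in neutron.list_security_group_rules()['security_group_rules']:
        print(rule['direction'], rule.get('protocol'),
              rule.get('port_range_min'), rule.get('port_range_max'))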
[Yahoo-eng-team] [Bug 1446583] Re: services no longer reliably stop in stable/kilo
This has returned in liberty and is currently blocking grenade in much the same way as last time. https://review.openstack.org/#/c/190175/ is most likely part of the problem. It looks like after we ensured that SIGTERM actually terminated the program at the kilo release, folks from hds.com complained that some work units weren't processed after that patch, so it was flipped back to a graceful version which is possible will never exit. Current unit tests in oslo.service have gotten much flakier since then as well, which may be related. ** Also affects: oslo.service Importance: Undecided Status: New ** Changed in: oslo.service Importance: Undecided => Critical ** Summary changed: - services no longer reliably stop in stable/kilo + services no longer reliably stop in stable/liberty / master ** Description changed: In attempting to upgrade the upgrade branch structure to support stable/kilo -> master in devstack gate, we found the project could no longer pass Grenade testing. The reason is because pkill -g is no longer reliably killing off the services: http://logs.openstack.org/91/175391/5/gate/gate-grenade- dsvm/0ad4a94/logs/grenade.sh.txt.gz#_2015-04-21_03_15_31_436 It has been seen with keystone-all and cinder-api on this patch series: http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVGhlIGZvbGxvd2luZyBzZXJ2aWNlcyBhcmUgc3RpbGwgcnVubmluZ1wiIEFORCBtZXNzYWdlOlwiZGllXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjE3MjgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0Mjk2MTU0NTQ2MzB9 There were a number of changes to the oslo-incubator service.py code during kilo, it's unclear at this point which is the issue. + + Note: this has returned in stable/liberty / master and oslo.service, see + comment #50 for where this reemerges. -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1446583 Title: services no longer reliably stop in stable/liberty / master Status in Cinder: Fix Released Status in Cinder kilo series: Fix Released Status in Keystone: Fix Released Status in Keystone kilo series: Fix Released Status in OpenStack Compute (nova): Fix Released Status in OpenStack Compute (nova) kilo series: Fix Released Status in oslo-incubator: Fix Released Status in oslo.service: New Bug description: In attempting to upgrade the upgrade branch structure to support stable/kilo -> master in devstack gate, we found the project could no longer pass Grenade testing. The reason is because pkill -g is no longer reliably killing off the services: http://logs.openstack.org/91/175391/5/gate/gate-grenade- dsvm/0ad4a94/logs/grenade.sh.txt.gz#_2015-04-21_03_15_31_436 It has been seen with keystone-all and cinder-api on this patch series: http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVGhlIGZvbGxvd2luZyBzZXJ2aWNlcyBhcmUgc3RpbGwgcnVubmluZ1wiIEFORCBtZXNzYWdlOlwiZGllXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjE3MjgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0Mjk2MTU0NTQ2MzB9 There were a number of changes to the oslo-incubator service.py code during kilo, it's unclear at this point which is the issue. Note: this has returned in stable/liberty / master and oslo.service, see comment #50 for where this reemerges. 
To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1446583/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
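A minimal sketch (not oslo.service code) of the difference described above between the forced and graceful SIGTERM handling: a graceful handler that waits on in-flight work can hang indefinitely if that work never completes, which is exactly the "services are still running" symptom grenade's pkill -g sees.

    # Illustration only (Unix signals): a "graceful" handler that waits for
    # in-flight work may never exit if the work stalls, so the process
    # survives SIGTERM; the forced variant exits immediately.
    import signal
    import sys
    import threading

    work_done = threading.Event()

    def forced_stop(signum, frame):
        sys.exit(0)                  # terminate immediately on SIGTERM

    def graceful_stop(signum, frame):
        work_done.wait()             # may block forever if work never finishes
        sys.exit(0)

    signal.signal(signal.SIGTERM, graceful_stop)   # swap in forced_stop to compare
    signal.pause()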
[Yahoo-eng-team] [Bug 1513879] Re: NeutronClientException: 404 Not Found
The existing understanding is that this is how python-neutronclient functions if there is no L3 agent. However, that seems really wrong. Because it means you have to know the topology of the services on the neutron side in order to use python-neutronclient correctly. That someone defeats the purpose of having a library to access your service. This really should be fixed in python-neutronclient. ** Also affects: python-neutronclient Importance: Undecided Status: New ** Changed in: python-neutronclient Importance: Undecided => High -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1513879 Title: NeutronClientException: 404 Not Found Status in OpenStack Compute (nova): In Progress Status in python-neutronclient: New Status in tripleo: Triaged Bug description: Tripleo isn't currently working with trunk nova, the undercloud is failing to build overcloud instances, nova compute is showing this exception Nov 05 13:10:45 instack.localdomain nova-compute[21338]: 2015-11-05 13:10:45.163 21338 ERROR nova.virt.ironic.driver [req-7df4cae6-f00a- 41a2-91e0-db1e6f130059 a800cb834fbd4a70915e2272dce924ac 102a2b78e079410f9afd8b8b46278c19 - - -] Error preparing deploy for instance 9ae5b605-58e3-40ee-b944-56cbf5806e51 on baremetal node f5c30846-4ada-444e-85d9-6e3be2a74782. Nov 05 13:10:45 instack.localdomain nova-compute[21338]: 2015-11-05 13:10:45.434 21338 DEBUG nova.virt.ironic.driver [req-7df4cae6-f00a-41a2-91e0-db1e6f130059 a800cb834fbd4a70915e2272dce924ac 102a2b78e079410f9afd8b8b46278c19 - - -] unplug: instance_uuid=9ae5b605-58e3-40ee-b944-56cbf5806e51 vif=[] _unplug_vifs /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:1093 Instance failed to spawn Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2165, in _build_resources yield resources File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2012, in _build_and_run_instance block_device_info=block_device_info) File "/usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py", line 791, in spawn flavor=flavor) File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 197, in __exit__ six.reraise(self.type_, self.value, self.tb) File "/usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py", line 782, in spawn self._plug_vifs(node, instance, network_info) File "/usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py", line 1058, in _plug_vifs network_info_str = str(network_info) File "/usr/lib/python2.7/site-packages/nova/network/model.py", line 515, in __str__ return self._sync_wrapper(fn, *args, **kwargs) File "/usr/lib/python2.7/site-packages/nova/network/model.py", line 498, in _sync_wrapper self.wait() File "/usr/lib/python2.7/site-packages/nova/network/model.py", line 530, in wait self[:] = self._gt.wait() File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 175, in wait return self._exit_event.wait() File "/usr/lib/python2.7/site-packages/eventlet/event.py", line 125, in wait current.throw(*self._exc) File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 214, in main result = function(*args, **kwargs) File "/usr/lib/python2.7/site-packages/nova/utils.py", line 1178, in context_wrapper return func(*args, **kwargs) File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1574, in _allocate_network_async six.reraise(*exc_info) File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1557, in _allocate_network_async dhcp_options=dhcp_options) File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 733, in allocate_for_instance update_cells=True) File "/usr/lib/python2.7/site-packages/nova/network/base_api.py", line 244, in get_instance_nw_info result = self._get_instance_nw_info(context, instance, **kwargs) File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 930, in _get_instance_nw_info preexisting_port_ids) File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1708, in _build_network_info_model current_neutron_port) File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1560, in _nw_info_get_ips client, fixed_ip['ip_address'], port['id']) File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1491, in _get_floating_ips_by_fixed_and_port port_id=port) File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1475, in _safe_get_floating_ips for k, v in six.iteritems(kwargs)])) File "/usr/lib/python2.7/site-packages/os
[Yahoo-eng-team] [Bug 1516158] Re: os-instance_usage_audit_log is used instead of os-instance-usage-audit-log
This is actually a documentation bug, and should be fixed on the API site.

** Changed in: nova Importance: Undecided => Medium
** Also affects: openstack-api-site Importance: Undecided Status: New
** Changed in: openstack-api-site Importance: Undecided => Medium
** Changed in: openstack-api-site Status: New => Confirmed
** Changed in: nova Status: New => Won't Fix

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1516158

Title: os-instance_usage_audit_log is used instead of os-instance-usage-audit-log

Status in OpenStack Compute (nova): Won't Fix
Status in openstack-api-site: Confirmed

Bug description: os-instance-usage-audit-log is not being used, os-instance_usage_audit_log is, which is really weird.

curl -g -i -X GET http://192.168.122.239:8774/v2.1/d1c5aa58af6c426492c642eb649017be/os-instance-usage-audit-log -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-OpenStack-Nova-API-Version: 2.6" -H "X-Auth-Token: 2832457c94bd485094ffba70206208dc"
404 Not Found
The resource could not be found.

curl -g -i -X GET http://192.168.122.239:8774/v2.1/d1c5aa58af6c426492c642eb649017be/os-instance_usage_audit_log -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-OpenStack-Nova-API-Version: 2.6" -H "X-Auth-Token: 2832457c94bd485094ffba70206208dc"
{"instance_usage_audit_logs": {"total_errors": 0, "total_instances": 0, "log": {}, "num_hosts_running": 0, "num_hosts_done": 0, "num_hosts_not_run": 1, "hosts_not_run": ["devstack1"], "overall_status": "0 of 1 hosts done. 0 errors.", "period_ending": "2015-11-01 00:00:00", "period_beginning": "2015-10-01 00:00:00", "num_hosts": 1}}

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1516158/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1503522] Re: multiple create server response only has a single server's info
This is potentially an API enhancement, which would need a nova-spec. However, multiple create is an API that's somewhat out of favor at this point. I'd personally be opposed to extending multiple create like this. If the application needs or wants more granular info, just issue the creates yourself rather than pushing through the multiple create interface, which is going to have less info.

** Changed in: nova Status: New => Opinion
** Changed in: nova Importance: Undecided => Wishlist

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1503522

Title: multiple create server response only has a single server's info

Status in OpenStack Compute (nova): Opinion

Bug description: When creating multiple servers with min_count/max_count, the response only has one server's info, as follows:

{
  "server": {
    "id": "11f8d434-cee0-45e2-8f24-d9b13403af16",
    "links": [
      {
        "href": "http://localhost:8774/v2/ff2f74b28d274a3489a5cbb196cdd36c/servers/11f8d434-cee0-45e2-8f24-d9b13403af16",
        "rel": "self"
      },
      {
        "href": "http://localhost:8774/ff2f74b28d274a3489a5cbb196cdd36c/servers/11f8d434-cee0-45e2-8f24-d9b13403af16",
        "rel": "bookmark"
      }
    ],
    "adminPass": "jFYiGXPSU483"
  }
}

This may be a headache if someone wants to do something for every newly created server, and I think it is also good design to get what you request; a better response may be like:

{ "servers": [] }

or

[ {"server": {}}, {"server": {}} ]

Any ideas?

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1503522/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
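A hedged sketch of the suggested alternative: issue individual create calls so each server's full response (including adminPass) is available. The credentials, names, image and flavor IDs below are placeholders, and the novaclient constructor arguments vary by version.

    # Instead of a single boot with min_count/max_count (which only returns one
    # server body), issue individual create calls and keep every response.
    from novaclient import client as nova_client

    nova = nova_client.Client('2', 'user', 'password', 'project',
                              'http://keystone:5000/v2.0')   # placeholder creds

    servers = [nova.servers.create(name='web-%02d' % i,
                                   image='24ceff93-1af3-41ab-802f-9fc4d8b90b69',  # placeholder image id
                                   flavor='1')                                    # placeholder flavor id
               for i in range(3)]
    for server in servers:
        print(server.id, server.adminPass)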
[Yahoo-eng-team] [Bug 1461653] Re: Attaching volume fails if keystone has multiple endpoints of Cinder (juno)
Which is a server side configuration, not something that comes from the client. ** Changed in: nova Status: New => Invalid ** Changed in: nova Importance: Undecided => Medium -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1461653 Title: Attaching volume fails if keystone has multiple endpoints of Cinder (juno) Status in OpenStack Compute (nova): Invalid Bug description: I recently deployed my 2nd OpenStack Juno region and as soon as the cinder node came up in the 2nd region attaching of volumes stopped working in both regions. I have tested through horizon & directly on the command line using nova & cinder. I have OS_REGION_NAME specified as an environment variable & have the region name properly selected when using horizon. OS_AUTH_URL=https://cloudapi-Region1.mydomain.net:5000/v2.0 OS_PASSWORD=mypassword OS_REGION_NAME=Region1 OS_TENANT_NAME='Test Tenant' OS_USERNAME=myuser When I try to attach I get the following error: nova volume-attach c38ed460-4547-4a7d-b917-7d6c7aafa38e c57544a1-443c-4ee0-a009-19b5e1c3598a ERROR: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-caa01caa-cbae-4164-88ce-fa1c2c9834a9) And when I check the nova api log I see the following error: 2015-06-03 18:20:56.634 11151 ERROR nova.api.openstack [req-caa01caa-cbae-4164-88ce-fa1c2c9834a9 None] Caught error: AmbiguousEndpoints: [{u'adminURL': u'https://cloudapi-Region2.mydomain.net:8776/v1/f6121817631f4c35bf40c9db1d973e63', u'region': u'Region2', u'id': u'5447a4d47f1e439899b99d01ada74426', 'serviceName': u'cinder', u'internalURL': u'https://cloudapi-Region2.mydomain.net:8776/v1/f6121817631f4c35bf40c9db1d973e63', u'publicURL': u'https://cloudapi-Region2.mydomain.net:8776/v1/f6121817631f4c35bf40c9db1d973e63'}, {u'adminURL': u'https://cloudapi-Region1.mydomain.net:8776/v1/f6121817631f4c35bf40c9db1d973e63', u'region': u'Region1', u'id': u'5544fc1b1b7449af83161c56d4d3dfe9', 'serviceName': u'cinder', u'internalURL': u'https://cloudapi-Region1.mydomain.net:8776/v1/f6121817631f4c35bf40c9db1d973e63', u'publicURL': u'https://cloudapi-Region1.mydomain.net:8776/v1/f6121817631f4c35bf40c9db1d973e63'}] 2015-06-03 18:20:56.634 11151 TRACE nova.api.openstack Traceback (most recent call last): 2015-06-03 18:20:56.634 11151 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/nova/api/openstack/__init__.py", line 124, in __call__ 2015-06-03 18:20:56.634 11151 TRACE nova.api.openstack return req.get_response(self.application) 2015-06-03 18:20:56.634 11151 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/webob/request.py", line 1320, in send 2015-06-03 18:20:56.634 11151 TRACE nova.api.openstack application, catch_exc_info=False) 2015-06-03 18:20:56.634 11151 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/webob/request.py", line 1284, in call_application 2015-06-03 18:20:56.634 11151 TRACE nova.api.openstack app_iter = application(self.environ, start_response) 2015-06-03 18:20:56.634 11151 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__ 2015-06-03 18:20:56.634 11151 TRACE nova.api.openstack return resp(environ, start_response) 2015-06-03 18:20:56.634 11151 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py", line 661, in __call__ 2015-06-03 18:20:56.634 11151 TRACE nova.api.openstack return self._app(env, 
start_response) 2015-06-03 18:20:56.634 11151 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__ 2015-06-03 18:20:56.634 11151 TRACE nova.api.openstack return resp(environ, start_response) 2015-06-03 18:20:56.634 11151 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__ 2015-06-03 18:20:56.634 11151 TRACE nova.api.openstack return resp(environ, start_response) 2015-06-03 18:20:56.634 11151 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/routes/middleware.py", line 131, in __call__ 2015-06-03 18:20:56.634 11151 TRACE nova.api.openstack response = self.app(environ, start_response) 2015-06-03 18:20:56.634 11151 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__ 2015-06-03 18:20:56.634 11151 TRACE nova.api.openstack return resp(environ, start_response) 2015-06-03 18:20:56.634 11151 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__ 2015-06-03 18:20:56.634 11151 TRACE nova.api.openstack resp = self.call_func(req, *args, **self.kwargs) 2015-06-03 18:20:56.634 11151 TRACE nova.api.openstack File "/usr/lib/python2.7/dist-p
[Yahoo-eng-team] [Bug 1351315] Re: default security group for a tenant can't be deleted
This would be an API change and require a nova-spec. If you want to do that, please go down that path.

** Changed in: nova Status: Confirmed => Opinion
** Changed in: nova Importance: Undecided => Wishlist

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1351315

Title: default security group for a tenant can't be deleted

Status in OpenStack Compute (nova): Opinion

Bug description: When you create a new project, add a user to it, and boot a vm for that tenant, by default, nova adds the tenant's vm to the default security group.

# keystone tenant-create --name foo
# keystone user-role-add --user admin --role Member --tenant foo
# OS_TENANT_NAME=foo nova boot --image cirros-0.3.1-x86_64-uec --flavor m1.tiny foo_vm
# nova secgroup-list --all-tenants
+----+---------+-------------+----------------------------------+
| Id | Name    | Description | Tenant_ID                        |
+----+---------+-------------+----------------------------------+
| 2  | default | default     | 1a1878b5d05648a3970c6c0c2a648a0b |
| 1  | default | default     | 9b84a2926f5b4df091774afc1ad7e1f3 |
+----+---------+-------------+----------------------------------+

The issue is that if I want to delete the security group related to my tenant I can't.

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1351315/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1325472] Re: API not idempotent
Features aren't bugs. Marking as Opinion to close.

** Changed in: nova Status: Confirmed => Opinion

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1325472

Title: API not idempotent

Status in OpenStack Compute (nova): Opinion

Bug description: The API for creating an instance does not support idempotent usage.

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1325472/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1522454] [NEW] Nova is leaking libvirt internal ids on some Instance Not Found errors
Public bug reported: Nova from master in the gate. libvirt is incorrectly using InstanceNotFound as an internal exception, but handing it libvirt internal ids instead of an openstack uuid or ec2id. This means it jumps up through the stack and back to the user, giving errors over http like Instance instance-00a not found. This is both an information leak, and useless bit of information for the user to figure out what's going on. libvirt should use an internal exception instead. ** Affects: nova Importance: Medium Status: Triaged ** Tags: libvirt low-hanging-fruit -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1522454 Title: Nova is leaking libvirt internal ids on some Instance Not Found errors Status in OpenStack Compute (nova): Triaged Bug description: Nova from master in the gate. libvirt is incorrectly using InstanceNotFound as an internal exception, but handing it libvirt internal ids instead of an openstack uuid or ec2id. This means it jumps up through the stack and back to the user, giving errors over http like Instance instance-00a not found. This is both an information leak, and useless bit of information for the user to figure out what's going on. libvirt should use an internal exception instead. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1522454/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
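A hedged sketch of the shape of the fix: translate libvirt's lookup failure into an exception keyed by the instance's OpenStack UUID before it leaves the driver, instead of letting the "instance-00a"-style libvirt name escape to the API user. This is illustrative only, not the actual nova patch, and the helper name is made up.

    # Sketch of the intended behaviour (not the actual patch): when the libvirt
    # domain lookup fails, re-raise using the OpenStack UUID rather than the
    # libvirt-internal domain name.
    import libvirt

    from nova import exception


    def get_domain(conn, instance):
        try:
            return conn.lookupByName(instance.name)   # libvirt-internal name
        except libvirt.libvirtError as ex:
            if ex.get_error_code() == libvirt.VIR_ERR_NO_DOMAIN:
                raise exception.InstanceNotFound(instance_id=instance.uuid)
            raise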
[Yahoo-eng-team] [Bug 1024586] Re: avoid the use of kpartx in file injection
For other reasons we stopped using this path for file injection by default. This bug is sufficiently old that I assume it is no longer in progress.

** Changed in: nova Status: In Progress => Invalid

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1024586

Title: avoid the use of kpartx in file injection

Status in OpenStack Compute (Nova): Invalid

Bug description: kpartx has various problems...

1. The git repo on kernel.org is no longer available.

2. kpartx -l had side effects:
   $ kpartx -l /bin/ls
   $ ls
   text file busy
   To fix you need to run losetup -a to find the assigned loopback device and then losetup -d /dev/loop...

3. On an unconnected loop device we get warnings, but an EXIT_SUCCESS ?
   # kpartx -a /dev/loop1 && echo EXIT_SUCCESS
   read error, sector 0
   llseek error
   llseek error
   llseek error
   EXIT_SUCCESS

4. Also for a loop device that is connected, I get a "failed" warning, but the EXIT_SUCCESS is appropriate in that case as the mapped device is present and usable
   # kpartx -a /dev/loop0
   /dev/mapper/loop0p1: mknod for loop0p1 failed: File exists

5. There are issues with qcow2 encoded cirros images
   # qemu-img info cirros-0.3.0-x86_64-disk.img
   image: cirros-0.3.0-x86_64-disk.img
   file format: qcow2
   virtual size: 39M (41126400 bytes)
   disk size: 9.3M
   cluster_size: 65536
   # qemu-nbd -c /dev/nbd15 $PWD/cirros-0.3.0-x86_64-disk.img
   # ls -la /sys/block/nbd15/pid
   -r--r--r--. 1 root root 4096 Jun 8 10:19 /sys/block/nbd15/pid
   # kpartx -a /dev/nbd15
   device-mapper: resume ioctl on nbd15p1 failed: Invalid argument
   create/reload failed on nbd15p1

6. There was a report that `kpartx -[ad]` were not synchronous with the creation/deletion of /dev/mapper/nbdXXpX requiring sleep calls to avoid failures.

The best way to avoid the need for kpartx is to use the newer kernel auto partition mapping feature available since kernel 3.2 and only fallback to kpartx if not exists ... '%sp%s' % (self.device, self.partition)

Note the nbd module must be mounted with param max_part=16 etc. so that would need documentation. Also we would need to test the same applies for raw loopback images as well as nbd

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1024586/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1006725] Re: Incorrect error returned during Create Image and multi byte characters used for Image name
We are running this test in the gate, and not seeing it. Can you provide links to complete logs somewhere so we can figure out what's going on here? ** Changed in: nova Status: Confirmed => Invalid ** Changed in: nova Status: Invalid => Incomplete -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1006725 Title: Incorrect error returned during Create Image and multi byte characters used for Image name Status in OpenStack Compute (Nova): Incomplete Status in Tempest: Fix Released Bug description: Our tempest tests that checks for 400 Bad Request return code fails with a ComputeFault instead. Pass multi-byte character image name during Create Image Actual Response Code: ComputeFault, 500 Expected Response Code: 400 Bad Request Return an error if the server name has a multi-byte character ... FAIL == FAIL: Return an error if the server name has a multi-byte character -- Traceback (most recent call last): File "/opt/stack/tempest/tests/test_images.py", line 251, in test_create_image_specify_multibyte_character_server_name self.fail("Should return 400 Bad Request if multi byte characters" AssertionError: Should return 400 Bad Request if multi byte characters are used for image name >> begin captured logging << tempest.config: INFO: Using tempest config file /opt/stack/tempest/etc/tempest.conf tempest.common.rest_client: ERROR: Request URL: http://10.2.3.164:8774/v2/1aeac1cfbfdd43c2845b2cb3a4f15790/images/24ceff93-1af3-41ab-802f-9fc4d8b90b69 tempest.common.rest_client: ERROR: Request Body: None tempest.common.rest_client: ERROR: Response Headers: {'date': 'Thu, 31 May 2012 06:02:33 GMT', 'status': '404', 'content-length': '62', 'content-type': 'application/json; charset=UTF-8', 'x-compute-request-id': 'req-7a15d284-e934-47a1-87f4-7746e949c7a2'} tempest.common.rest_client: ERROR: Response Body: {"itemNotFound": {"message": "Image not found.", "code": 404}} tempest.common.rest_client: ERROR: Request URL: http://10.2.3.164:8774/v2/1aeac1cfbfdd43c2845b2cb3a4f15790/servers/ecb51dfb-493d-4ef8-9178-1adc3d96a04d/action tempest.common.rest_client: ERROR: Request Body: {"createImage": {"name": "\ufeff43802479847"}} tempest.common.rest_client: ERROR: Response Headers: {'date': 'Thu, 31 May 2012 06:02:44 GMT', 'status': '500', 'content-length': '128', 'content-type': 'application/json; charset=UTF-8', 'x-compute-request-id': 'req-1a9505f5-4dfc-44e7-b04a-f8daec0f956e'} tempest.common.rest_client: ERROR: Response Body: {u'computeFault': {u'message': u'The server has either erred or is incapable of performing the requested operation.', u'code': 500}} - >> end captured logging << - To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1006725/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1092605] Re: Inconsistency between nova-manage help message and actual usage.
nova-manage is largely not used beyond db-sync at this point; marking as invalid because I think this is probably quite out of date.

** Changed in: nova Assignee: Mark McLoughlin (markmc) => (unassigned)
** Changed in: nova Status: In Progress => Invalid

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1092605

Title: Inconsistency between nova-manage help message and actual usage.

Status in OpenStack Compute (Nova): Invalid

Bug description: In the current implementation, a lot of optional arguments for nova-manage sub actions are required (aka non-optional). For example:

$ nova-manage shell script -h
usage: nova-manage shell script [-h] [--path ] [action_args [action_args ...]]

positional arguments:
  action_args

optional arguments:
  -h, --help  show this help message and exit
  --path      Script path

What the help message says is, --path is optional, which means the user can safely ignore this argument, but in fact they can't.

$ nova-manage shell script
Runs the script from the specifed path with flags set properly.
arguments: path
An argument is missing: path

It seems 'nova-manage' detects a missing argument, but that's confusing and inconsistent. Why doesn't the help message say so?

Looking into the implementation, nova-manage relies on a cliutils module from oslo to do argument inspection. This is kind of an indirect and inefficient way to do it. The argparse module (used by the oslo cfg module) is able to do argument checking when parsing arguments. nova-manage failed to do this due to incorrect usage of the @args decorator:

@@ -202,7 +202,7 @@ class ShellCommands(object):
         readline.parse_and_bind("tab:complete")
         code.interact()

-    @args('--path', dest='path', metavar='', help='Script path')
+    @args('--path', required=True, dest='path', metavar='', help='Script path')
     def script(self, path):

Simply adding a 'required=True' to @args allows the argparse module to detect incorrect input as well as generate a consistent help message. And the cliutils module is _not_ needed any more.

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1092605/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1262424] Re: Files without code should not contain copyright notices
Deleting all completed projects to deal with launchpad timeout errors.

** No longer affects: pbr
** No longer affects: oslo-incubator
** No longer affects: heat
** No longer affects: tempest
** No longer affects: zaqar
** No longer affects: python-troveclient
** No longer affects: ceilometer
** No longer affects: cinder
** No longer affects: ironic
** No longer affects: keystone
** No longer affects: horizon
** No longer affects: trove
** No longer affects: python-neutronclient
** No longer affects: python-openstackclient

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1262424

Title: Files without code should not contain copyright notices

Status in OpenStack Neutron (virtual network service): In Progress
Status in OpenStack Compute (Nova): In Progress
Status in Python client library for Cinder: Fix Committed
Status in Taskflow for task-oriented systems.: Fix Committed

Bug description: Due to a recent policy change in HACKING (http://docs.openstack.org/developer/hacking/#openstack-licensing), empty files should no longer contain copyright notices.

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1262424/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1190533] Re: Foreign keys are not enabled in sqlite
** Changed in: nova Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1190533 Title: Foreign keys are not enabled in sqlite Status in OpenStack Compute (Nova): Invalid Bug description: Foreign key constraints are not enabled in sqlite. It is impossible to write tests that involve foreign key constraints. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1190533/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
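For reference, when foreign key enforcement is wanted in sqlite-backed tests, the usual SQLAlchemy workaround is to enable the pragma on every new connection; a sketch of the general recipe, not something nova's test fixtures necessarily do:

    # Enable foreign key enforcement on every new sqlite connection; sqlite
    # leaves it off by default, which is why FK-dependent tests pass silently.
    from sqlalchemy import create_engine, event

    engine = create_engine("sqlite://")   # in-memory engine for tests

    @event.listens_for(engine, "connect")
    def _set_sqlite_fk_pragma(dbapi_connection, connection_record):
        cursor = dbapi_connection.cursor()
        cursor.execute("PRAGMA foreign_keys=ON")
        cursor.close()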
[Yahoo-eng-team] [Bug 1315095] Re: grenade nova network (n-net) fails to start
I think this is a screen issue.

** No longer affects: nova

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1315095

Title: grenade nova network (n-net) fails to start

Status in Grenade - OpenStack upgrade testing: Confirmed

Bug description: Here we see that n-net never started logging to its screen: http://logs.openstack.org/02/91502/1/check/check-grenade-dsvm/912e89e/logs/new/

The errors in n-cpu seem to support that the n-net service never started. According to http://logs.openstack.org/02/91502/1/check/check-grenade-dsvm/912e89e/logs/grenade.sh.log.2014-05-01-042623, circa "2014-05-01 04:31:15.580" the interesting bits should be in: /opt/stack/status/stack/n-net.failure

But I don't see that captured. I'm not sure why n-net did not start.

To manage notifications about this bug go to: https://bugs.launchpad.net/grenade/+bug/1315095/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1358362] Re: TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
*** This bug is a duplicate of bug 1241275 *** https://bugs.launchpad.net/bugs/1241275 So I think this bug is actually in python-neutronclient, however in looking at the code I can't see any way that such a thing could happen as that code path should have been protected from this since - 2013-10-23 commit commit e49819caf95fc6985036231b1e5717f0ff7b6c61 Author: Drew Thorstensen Date: Wed Oct 23 16:41:45 2013 -0500 New exception when auth_url is not specified Certain scenarios into the neutron client will not specify the auth_url. This is typically when a token is specified. However, when the token is expired the neutron client will attempt to refresh the token. Users of this may not have passed in all of the required information for this reauthentication to properly occur. This code fixes an error that occurs in this flow where the auth_url (which is None) is appended to another string. This results in a core Python error. The update will provide a more targetted error message specifying to the user that the auth_url needs to be specified. An associated unit test is also included to validate this behavior. Change-Id: I577ce0c009a9a281acdc238d290a22c5e561ff82 Closes-Bug: #1241275 ** Changed in: nova Status: New => Incomplete ** Also affects: python-neutronclient Importance: Undecided Status: New ** Changed in: nova Status: Incomplete => Invalid ** This bug has been marked a duplicate of bug 1241275 Nova / Neutron Client failing upon re-authentication after token expiration -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1358362 Title: TypeError: unsupported operand type(s) for +: 'NoneType' and 'str' Status in OpenStack Compute (Nova): Invalid Status in Python client library for Neutron: New Bug description: We had several instances go into error state on bootstack with the following traceback: 2014-08-17 22:12:37.022 1232 ERROR nova.api.openstack.wsgi [req-068c2700-29a4-46ec-a9f7-9e956c06f3c6 4e68a0dd10e04db5b57c917ca8c521b1 d97d645e7867484b81311b7f9ee2ab15] Exception handling resource: unsupported operand type(s) for +: 'NoneType' and 'str' 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi Traceback (most recent call last): 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi File "/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py", line 887, in post_process_extensions 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi **action_args) 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi File "/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/contrib/security_groups.py", line 590, in show 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi return self._show(req, resp_obj) 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi File "/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/contrib/security_groups.py", line 586, in _show 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi self._extend_servers(req, [resp_obj.obj['server']]) 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi File "/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/contrib/security_groups.py", line 550, in _extend_servers 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi servers)) 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi File "/usr/lib/python2.7/dist-packages/nova/network/security_group/neutron_driver.py", line 345, in get_instances_security_groups_bindings 2014-08-17 22:12:37.022 
1232 TRACE nova.api.openstack.wsgi ports = self._get_ports_from_server_list(servers, neutron) 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi File "/usr/lib/python2.7/dist-packages/nova/network/security_group/neutron_driver.py", line 304, in _get_ports_from_server_list 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi ports.extend(neutron.list_ports(**search_opts).get('ports')) 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 111, in with_params 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi ret = self.function(instance, *args, **kwargs) 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 306, in list_ports 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi **_params) 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 1250, in list 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi for r in self._pagination(collection, path, **params): 2014-08-17 22:12:37.022 1232 TRA
[Yahoo-eng-team] [Bug 1239484] Re: failed nova db migration upgrading from grizzly to havana
Honestly, upgrade up from Folsom is pretty out of scope now, as the folsom and grizzly branches have been eoled, and havana is eol in a couple of weeks. ** Changed in: nova Status: In Progress => Won't Fix ** Changed in: nova Importance: High => Wishlist -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1239484 Title: failed nova db migration upgrading from grizzly to havana Status in Ubuntu Cloud Archive: New Status in OpenStack Compute (Nova): Won't Fix Status in OpenStack Compute (nova) icehouse series: New Bug description: I recently upgraded a Nova cluster from grizzly to havana. We're using the Ubuntu Cloud Archive and so in terms of package versions the upgrade was from 1:2013.1.3-0ubuntu1~cloud0 to 1:2013.2~rc2-0ubuntu1~cloud0. We're using mysql-server-5.5 5.5.32-0ubuntu0.12.04.1 from Ubuntu 12.04 LTS. After the upgrade, "nova-manage db sync" failed as follows: # nova-manage db sync 2013-10-13 21:08:54.132 26592 INFO migrate.versioning.api [-] 161 -> 162... 2013-10-13 21:08:54.138 26592 INFO migrate.versioning.api [-] done 2013-10-13 21:08:54.140 26592 INFO migrate.versioning.api [-] 162 -> 163... 2013-10-13 21:08:54.145 26592 INFO migrate.versioning.api [-] done 2013-10-13 21:08:54.146 26592 INFO migrate.versioning.api [-] 163 -> 164... 2013-10-13 21:08:54.154 26592 INFO migrate.versioning.api [-] done 2013-10-13 21:08:54.154 26592 INFO migrate.versioning.api [-] 164 -> 165... 2013-10-13 21:08:54.162 26592 INFO migrate.versioning.api [-] done 2013-10-13 21:08:54.162 26592 INFO migrate.versioning.api [-] 165 -> 166... 2013-10-13 21:08:54.167 26592 INFO migrate.versioning.api [-] done 2013-10-13 21:08:54.170 26592 INFO migrate.versioning.api [-] 166 -> 167... 2013-10-13 21:08:54.175 26592 INFO migrate.versioning.api [-] done 2013-10-13 21:08:54.176 26592 INFO migrate.versioning.api [-] 167 -> 168... 2013-10-13 21:08:54.184 26592 INFO migrate.versioning.api [-] done 2013-10-13 21:08:54.184 26592 INFO migrate.versioning.api [-] 168 -> 169... 2013-10-13 21:08:54.189 26592 INFO migrate.versioning.api [-] done 2013-10-13 21:08:54.189 26592 INFO migrate.versioning.api [-] 169 -> 170... 2013-10-13 21:08:54.199 26592 INFO migrate.versioning.api [-] done 2013-10-13 21:08:54.199 26592 INFO migrate.versioning.api [-] 170 -> 171... 2013-10-13 21:08:54.204 26592 INFO migrate.versioning.api [-] done 2013-10-13 21:08:54.205 26592 INFO migrate.versioning.api [-] 171 -> 172... 2013-10-13 21:08:54.841 26592 INFO migrate.versioning.api [-] done 2013-10-13 21:08:54.842 26592 INFO migrate.versioning.api [-] 172 -> 173... 
2013-10-13 21:08:54.883 26592 INFO nova.db.sqlalchemy.utils [-] Deleted duplicated row with id: 409 from table: key_pairs 2013-10-13 21:08:54.888 26592 INFO nova.db.sqlalchemy.utils [-] Deleted duplicated row with id: 257 from table: key_pairs 2013-10-13 21:08:54.889 26592 INFO nova.db.sqlalchemy.utils [-] Deleted duplicated row with id: 383 from table: key_pairs 2013-10-13 21:08:54.897 26592 INFO nova.db.sqlalchemy.utils [-] Deleted duplicated row with id: 22 from table: key_pairs 2013-10-13 21:08:54.905 26592 INFO nova.db.sqlalchemy.utils [-] Deleted duplicated row with id: 65 from table: key_pairs 2013-10-13 21:08:54.911 26592 INFO nova.db.sqlalchemy.utils [-] Deleted duplicated row with id: 106 from table: key_pairs 2013-10-13 21:08:54.911 26592 INFO nova.db.sqlalchemy.utils [-] Deleted duplicated row with id: 389 from table: key_pairs 2013-10-13 21:08:54.923 26592 INFO nova.db.sqlalchemy.utils [-] Deleted duplicated row with id: 205 from table: key_pairs 2013-10-13 21:08:54.928 26592 INFO nova.db.sqlalchemy.utils [-] Deleted duplicated row with id: 259 from table: key_pairs 2013-10-13 21:08:54.934 26592 INFO nova.db.sqlalchemy.utils [-] Deleted duplicated row with id: 127 from table: key_pairs 2013-10-13 21:08:54.946 26592 INFO nova.db.sqlalchemy.utils [-] Deleted duplicated row with id: 337 from table: key_pairs 2013-10-13 21:08:54.951 26592 INFO nova.db.sqlalchemy.utils [-] Deleted duplicated row with id: 251 from table: key_pairs 2013-10-13 21:08:54.991 26592 INFO migrate.versioning.api [-] done 2013-10-13 21:08:54.991 26592 INFO migrate.versioning.api [-] 173 -> 174... 2013-10-13 21:08:55.052 26592 INFO migrate.versioning.api [-] done 2013-10-13 21:08:55.053 26592 INFO migrate.versioning.api [-] 174 -> 175... 2013-10-13 21:08:55.146 26592 INFO migrate.versioning.api [-] done 2013-10-13 21:08:55.147 26592 INFO migrate.versioning.api [-] 175 -> 176... 2013-10-13 21:08:55.171 26592 INFO migrate.versioning.api [-] done 2013-10-13 21:08:55.172 26592 INFO migrate.versioning.api [-] 176 -> 177... 2013-10-13 21:08:55.236 26592 INFO migrate.versioning.api [-] done 2013-10-13 21:08:55.237 26592 INFO migrate.versioning.api [-] 177 -> 178... 2
[Yahoo-eng-team] [Bug 1172774] Re: NameError: name '_' is not defined while running unit tests
** Changed in: nova Status: In Progress => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1172774 Title: NameError: name '_' is not defined while running unit tests Status in OpenStack Compute (Nova): Invalid Bug description: when running nosetests -v against master of nova I get this error: 15:19:49 Traceback (most recent call last): 15:19:49 File "/usr/lib64/python2.6/site-packages/nose/loader.py", line 413, in loadTestsFromName 15:19:49 addr.filename, addr.module) 15:19:49 File "/usr/lib64/python2.6/site-packages/nose/importer.py", line 47, in importFromPath 15:19:49 return self.importFromDir(dir_path, fqname) 15:19:49 File "/usr/lib64/python2.6/site-packages/nose/importer.py", line 94, in importFromDir 15:19:49 mod = load_module(part_fqname, fh, filename, desc) 15:19:49 File "/var/lib/openstack-nova-test/nova/conductor/__init__.py", line 17, in 15:19:49 from nova.conductor import api as conductor_api 15:19:49 File "/var/lib/openstack-nova-test/nova/conductor/api.py", line 19, in 15:19:49 from nova.conductor import manager 15:19:49 File "/var/lib/openstack-nova-test/nova/conductor/manager.py", line 17, in 15:19:49 from nova.api.ec2 import ec2utils 15:19:49 File "/var/lib/openstack-nova-test/nova/api/ec2/__init__.py", line 31, in 15:19:49 from nova.api.ec2 import apirequest 15:19:49 File "/var/lib/openstack-nova-test/nova/api/ec2/apirequest.py", line 27, in 15:19:49 from nova.api.ec2 import ec2utils 15:19:49 File "/var/lib/openstack-nova-test/nova/api/ec2/ec2utils.py", line 22, in 15:19:49 from nova import availability_zones 15:19:49 File "/var/lib/openstack-nova-test/nova/availability_zones.py", line 20, in 15:19:49 from nova import db 15:19:49 File "/var/lib/openstack-nova-test/nova/db/__init__.py", line 23, in 15:19:49 from nova.db.api import * 15:19:49 File "/var/lib/openstack-nova-test/nova/db/api.py", line 48, in 15:19:49 from nova.cells import rpcapi as cells_rpcapi 15:19:49 File "/var/lib/openstack-nova-test/nova/cells/rpcapi.py", line 27, in 15:19:49 from nova import exception 15:19:49 File "/var/lib/openstack-nova-test/nova/exception.py", line 123, in 15:19:49 class NovaException(Exception): 15:19:49 File "/var/lib/openstack-nova-test/nova/exception.py", line 131, in NovaException 15:19:49 message = _("An unknown exception occurred.") 15:19:49 NameError: name '_' is not defined To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1172774/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
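The usual cause of this NameError is importing nova modules without first going through an entry point that installs the gettext '_' builtin. A hedged sketch of the kind of shim a test runner could apply before importing nova (era-appropriate Python 2 gettext usage; nova's own entry points handle this themselves):

    # Install the '_' builtin before importing any nova module; a bare
    # nosetests run can otherwise hit "NameError: name '_' is not defined".
    import gettext

    gettext.install('nova', unicode=1)   # Python 2: makes _() a builtin

    from nova import exception           # now imports cleanly  # noqa: E402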
[Yahoo-eng-team] [Bug 1251266] Re: allow_resize_to_same_host=true should be the default
This flag exists solely for all-in-one testing, and is not intended to be used in a real deployment. ** Changed in: nova Status: In Progress => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1251266 Title: allow_resize_to_same_host=true should be the default Status in OpenStack Compute (Nova): Won't Fix Bug description: The flag allow_resize_to_same_host in the nova.conf file is set to 'false' by default. Thus, the command 'nova resize ' will fail. The function this flag offers doesn't introduce any vulnerability into the system, but gives the user the freedom to change the flavor of an instance. There is no logic in creating a functionality only to disable it in the default configuration. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1251266/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
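For anyone who does want this behaviour on an all-in-one test box, the setting itself is a one-liner in nova.conf (sketch; the flag name is the real option discussed above, and nothing else in an existing file needs to change):

    [DEFAULT]
    # Let the scheduler pick the instance's current host as the resize target.
    # Intended for single-host test deployments only, per the comment above.
    allow_resize_to_same_host = True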
[Yahoo-eng-team] [Bug 1318973] Re: Inconsistent summaries of nova v3 api
I feel like with the new v2.1 plan this is currently invalid. We should reopen later when it's something that we might actively have on our horizon. ** Changed in: nova Status: In Progress => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1318973 Title: Inconsistent summaries of nova v3 api Status in OpenStack Compute (Nova): Invalid Bug description: We can get summaries of each api extension through "show extensions" api, and they seem inconsistent. Many summaries include "support.", but the other ones include "Extension.". $ nova --os-compute-api-version 3 extension-list +--+---+---+-+ | Name | Summary | Alias | Version | +--+---+---+-+ | Consoles | Consoles. | consoles | 1 | | Extensions | Extension information. | extensions| 1 | | FlavorAccess | Flavor access support. | flavor-access | 1 | | FlavorsExtraSpecs| Flavors Extension. | flavor-extra-specs| 1 | | FlavorManage | Flavor create/delete API support. | flavor-manage | 1 | | Flavors | Flavors Extension. | flavors | 1 | | Ips | Server addresses. | ips | 1 | | Keypairs | Keypair Support. | keypairs | 1 | | AccessIPs| Access IPs support. | os-access-ips | 1 | | AdminActions | Enable admin-only server actions... | os-admin-actions | 1 | | AdminPassword| Admin password management support. | os-admin-password | 1 | | Agents | Agents support. | os-agents | 1 | | Aggregates | Admin-only aggregate administration. | os-aggregates | 1 | | AttachInterfaces | Attach interface support. | os-attach-interfaces | 1 | | AvailabilityZone | 1. Add availability_zone to the Create Server API | os-availability-zone | 1 | | BlockDeviceMapping | Block device mapping boot support. | os-block-device-mapping | 1 | | Cells| Enables cells-related functionality such as adding neighbor cells,... | os-cells | 1 | | Certificates | Certificates support. | os-certificates | 1 | | ConfigDrive | Config Drive Extension. | os-config-drive | 1 | | ConsoleAuthTokens| Console token authentication support. | os-console-auth-tokens| 1 | | ConsoleOutput| Console log output support, with tailing ability. | os-console-output | 1 | | CreateBackup | Create a backup of a server. | os-create-backup | 1 | | DeferredDelete | Instance deferred delete. | os-deferred-delete| 1 | | Evacuate | Enables server evacuation. | os-evacuate | 1 | | ExtendedAvailabilityZone | Extended Server Attributes support. | os-extended-availability-zone | 1 | | ExtendedServerAttributes | Extended Server Attributes support. | os-extended-server-attributes | 1 | | ExtendedStatus
[Yahoo-eng-team] [Bug 1265416] Re: Use 'project' instead of 'tenant' in v3 api
** Changed in: nova Status: In Progress => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1265416 Title: Use 'project' instead of 'tenant' in v3 api Status in OpenStack Compute (Nova): Invalid Bug description: For v3 api consistent, we prefer use 'project' instead of 'tenant'. Discussion at: http://lists.openstack.org/pipermail/openstack-dev/2013-November/020222.html To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1265416/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1297605] Re: VMware: Error when snapshotting ISO instance with 0GB root disk
** Changed in: nova Status: In Progress => Fix Committed ** Changed in: nova Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1297605 Title: VMware: Error when snapshotting ISO instance with 0GB root disk Status in OpenStack Compute (Nova): Fix Released Status in OpenStack Compute (nova) icehouse series: Fix Released Bug description: When using the VC Driver, snapshotting an instance that was boot with an ISO and has no root disk will cause the following error (full trace below): AttributeError: 'NoneType' object has no attribute 'split' Scenario is as follows: 1. Boot an instance using an ISO. Make sure the flavor specifies a 0GB root disk size 2. Snapshot the instance Full traceback: Traceback (most recent call last): File "/opt/stack/oslo.messaging/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply incoming.message)) File "/opt/stack/oslo.messaging/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch return self._do_dispatch(endpoint, method, ctxt, args) File "/opt/stack/oslo.messaging/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch result = getattr(endpoint, method)(ctxt, **new_args) File "/opt/stack/nova/nova/exception.py", line 88, in wrapped payload) File "/opt/stack/nova/nova/openstack/common/excutils.py", line 68, in __exit__ six.reraise(self.type_, self.value, self.tb) File "/opt/stack/nova/nova/exception.py", line 71, in wrapped return f(self, context, *args, **kw) File "/opt/stack/nova/nova/compute/manager.py", line 280, in decorated_function pass File "/opt/stack/nova/nova/openstack/common/excutils.py", line 68, in __exit__ six.reraise(self.type_, self.value, self.tb) File "/opt/stack/nova/nova/compute/manager.py", line 266, in decorated_function return function(self, context, *args, **kwargs) File "/opt/stack/nova/nova/compute/manager.py", line 309, in decorated_function e, sys.exc_info()) File "/opt/stack/nova/nova/openstack/common/excutils.py", line 68, in __exit__ six.reraise(self.type_, self.value, self.tb) File "/opt/stack/nova/nova/compute/manager.py", line 296, in decorated_function return function(self, context, *args, **kwargs) File "/opt/stack/nova/nova/compute/manager.py", line 359, in decorated_function % image_id, instance=instance) File "/opt/stack/nova/nova/openstack/common/excutils.py", line 68, in __exit__ six.reraise(self.type_, self.value, self.tb) File "/opt/stack/nova/nova/compute/manager.py", line 349, in decorated_functionot)[0] File "/opt/stack/nova/nova/virt/vmwareapi/ds_util.py", line 38, in split_datastore_path spl = datastore_path.split('[', 1)[1].split(']', 1) AttributeError: 'NoneType' object has no attribute 'split' To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1297605/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
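The failing helper assumes every instance has a root vmdk whose datastore path looks like "[datastore1] dir/disk.vmdk"; for an ISO-booted instance with a 0 GB root disk that path is None, hence the AttributeError. A minimal defensive sketch of the parsing step (illustrative only, not necessarily how the released fix works, which may instead skip the root-disk handling earlier in the snapshot path):

    def split_datastore_path(datastore_path):
        # Hypothetical guard: an ISO-booted instance with a 0 GB root disk
        # has no vmdk, so the path can legitimately be missing here.
        if not datastore_path:
            raise ValueError("instance has no root disk datastore path")
        datastore, _sep, path = datastore_path.partition(']')
        return datastore.lstrip('[ '), path.strip()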
[Yahoo-eng-team] [Bug 1178156] Re: resource tracker for bare metal nodes tries to subdivide resource
Extremely old bm bug, marking as won't fix ** Changed in: nova Status: Triaged => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1178156 Title: resource tracker for bare metal nodes tries to subdivide resource Status in OpenStack Compute (Nova): Won't Fix Bug description: after deploying a small instance on big hardware: 2013-05-09 08:48:30,085.085 19736 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources 2013-05-09 08:48:30,208.208 19736 AUDIT nova.compute.resource_tracker [-] Free ram (MB): 97792 2013-05-09 08:48:30,208.208 19736 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 2038 2013-05-09 08:48:30,209.209 19736 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 23 2013-05-09 08:48:30,308.308 19736 INFO nova.compute.resource_tracker [-] Compute_service record updated for ubuntu:96deccd5-0ad9-4bb5-979b-009bebac52fc This should show 0, 0 and 0 : the size of the instance is not the amount to subtract :). I don't know if this is just cosmetic. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1178156/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
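For illustration of the complaint, using the numbers above and assuming the node really has 98304 MB of RAM: the tracker reports 98304 - 512 = 97792 MB free, i.e. it subtracted the flavor's 512 MB the way it would for a hypervisor guest, whereas a bare metal node is consumed whole, so free RAM, free disk and free VCPUs should all drop to 0 as soon as any instance is deployed on it.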
[Yahoo-eng-team] [Bug 1174518] Re: rescue extension not supported by bare metal
** Changed in: nova Status: Triaged => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1174518 Title: rescue extension not supported by bare metal Status in OpenStack Compute (Nova): Won't Fix Status in tripleo - openstack on openstack: Triaged Bug description: And it would be super-useful there. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1174518/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1180664] Re: How to update flavor parameters
** Changed in: nova Status: Confirmed => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1180664 Title: How to update flavor parameters Status in OpenStack Compute (Nova): Invalid Bug description: I have created new flavors using the REST API, but I have not found any API for updating the parameters of a flavor, such as vCPUs, memory, disk, etc. Let me know how I can proceed with this. I have Ubuntu 12.10 and Grizzly installed. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1180664/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
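As context for the answer: flavors are immutable through the API, so the usual Grizzly-era workflow is to delete the flavor and recreate it with the new values (extra specs aside). A sketch using python-novaclient; the flavor name, credentials and sizes below are illustrative placeholders:

    from novaclient.v1_1 import client

    nova = client.Client('admin', 'ADMIN_PASS', 'admin',
                         'http://controller:5000/v2.0')

    # Flavors cannot be edited in place: drop the old definition and recreate
    # it (reusing the name) with the desired RAM/vCPU/disk values.
    old = nova.flavors.find(name='m1.custom')
    nova.flavors.delete(old)
    nova.flavors.create(name='m1.custom', ram=4096, vcpus=2, disk=40)

Note that already-running instances keep the old flavor's values; to change an existing instance's resources you resize it to a flavor with the desired values rather than editing the flavor itself.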
[Yahoo-eng-team] [Bug 1158328] Re: passwords in config files stored in plaintext
I feel like this is pretty strongly out of scope. Applications that need to talk to databases that require passwords need access to those passwords in plain text. While we could do obfuscation, it doesn't really address the issue; it just makes you think you addressed it. Honestly, it is better to leave things clear so people rightly understand that a compromise of that file means all bets are off. ** Changed in: nova Status: Confirmed => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1158328 Title: passwords in config files stored in plaintext Status in OpenStack Compute (Nova): Won't Fix Bug description: The credentials for database connections and the keystone authtoken are stored in plaintext within the nova.conf and api-paste config files. These values should be encrypted. A scheme similar to /etc/shadow would be great. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1158328/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1089128] Re: Tests require MySQL to develop locally
** Changed in: nova Status: Confirmed => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1089128 Title: Tests require MySQL to develop locally Status in OpenStack Compute (Nova): Won't Fix Bug description: Currently developers need MySQL installed in order to develop locally (due to MySQL-Python being in the test-requires file). This can result in the following my_config missing error: http://paste.openstack.org/show/27856/ The workaround for Mac is to install MySQL. If you use brew, `brew install mysql` will work. Long term, it would be nice to not require MySQL and instead make it optional. It looks like Monty started on this but later reverted this work with commit 6e9f3bb10a105411b0eb3e8f22a252af0784cb0b. This bug is to track somehow fixing this, either by completing what Monty started, or finding some other approach. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1089128/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1180540] Re: Conductor manager imports compute api
** Changed in: nova Status: Confirmed => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1180540 Title: Conductor manager imports compute api Status in OpenStack Compute (Nova): Invalid Bug description: There is work to move calls to conductor into the compute api. This creates a circular dependency which must be worked around. Ideally the conductor would not need to call into the compute api, so we should fix the three calls that occur. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1180540/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
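For background on why the cycle is awkward: Python resolves module-level imports eagerly, so two modules importing each other at the top level can each see a half-initialised partner. The usual workaround, shown here as a generic two-file sketch with hypothetical module names (not nova's actual code), is to defer one side's import to call time:

    # a.py (hypothetical)
    import b

    def ping():
        return "ping"

    # b.py (hypothetical)
    def pong():
        # Deferred import: by the time pong() is called, a.py is fully
        # initialised, so the top-level "import b" in a.py no longer bites.
        import a
        return a.ping() + "/pong"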
[Yahoo-eng-team] [Bug 1160026] Re: If nova-dhcpbridge.conf is missing, no message will warn about it.
No longer valid ** Changed in: nova Status: Triaged => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1160026 Title: If nova-dhcpbridge.conf is missing, no message will warn about it. Status in OpenStack Compute (Nova): Won't Fix Bug description: Hello, I'm not quite sure whether this is a bug or more of a nice-to-have for everybody, but over the last few days we have been investigating some weird behaviour which turned out to be due to the DHCP bridge not working properly because /etc/nova/nova-dhcpbridge.conf was missing. Would it be possible to have a DEBUG/VERBOSE/WARNING message of some kind when it's missing? Or even better, set the default options in nova-dhcpbridge and use them if /etc/nova/nova-dhcpbridge.conf is missing! Thank you very much, Dave To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1160026/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
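A generic sketch of the kind of check the reporter is asking for (illustrative only, not nova's code; the path is the one from the report):

    import logging
    import os

    LOG = logging.getLogger(__name__)

    def warn_if_flagfile_missing(path='/etc/nova/nova-dhcpbridge.conf'):
        # Warn loudly instead of letting dnsmasq's dhcp-script hook fail
        # silently when its flagfile is absent.
        if not os.path.exists(path):
            LOG.warning("dhcpbridge flagfile %s is missing; DHCP updates "
                        "from dnsmasq will not be processed correctly", path)
            return False
        return True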
[Yahoo-eng-team] [Bug 991531] Re: Use users credentials in s3 connection if using keystone
No longer seems valid, the code in this area has radically changed. ** Changed in: nova Status: Confirmed => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/991531 Title: Use users credentials in s3 connection if using keystone Status in OpenStack Compute (Nova): Invalid Bug description: When nova talks to an s3 image service it currently uses hard coded credentials FLAGS.s3_access_key and FLAGS.s3_secret_key. If using keystone auth it should/can use the users keystone credentials. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/991531/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1275906] Re: Remove str() from message formatting block
Honestly, I think this is too low a priority to even keep in the tracker. ** Changed in: nova Status: Triaged => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1275906 Title: Remove str() from message formatting block Status in OpenStack Compute (Nova): Invalid Bug description: Remove str() from message formatting code, for example: "Error %s" % str(x), because the %s conversion already converts any Python object using str(). To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1275906/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
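The point being made, illustrated (any exception or object behaves the same way):

    x = ValueError("boom")
    msg_a = "Error %s" % str(x)   # explicit str() call
    msg_b = "Error %s" % x        # %s already applies str() for you
    assert msg_a == msg_b == "Error boom"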
[Yahoo-eng-team] [Bug 1028688] Re: flavor details should include arch
This is so old now. While it's probably worth thinking about this in the current nova arch, we should do so with fresh eyes if it's important. ** Changed in: nova Status: Confirmed => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1028688 Title: flavor details should include arch Status in OpenStack Compute (Nova): Invalid Bug description: for supporting arm/arm64 in addition to x86 instances within the same region/zone, flavor details should also list arch, so a user/program can select the flavor appropriately. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1028688/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1333746] Re: novncproxy crash at start
*** This bug is a duplicate of bug 1334327 *** https://bugs.launchpad.net/bugs/1334327 ** This bug has been marked a duplicate of bug 1334327 spice not working on debian 7 -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1333746 Title: novncproxy crash at start Status in OpenStack Compute (Nova): New Bug description: Hi everyone, since the last upgrade you have done on icehouse, novncproxy won't start. This is the Trace I get : Traceback (most recent call last): File "/usr/bin/nova-novncproxy", line 10, in sys.exit(main()) File "/usr/lib/python2.7/dist-packages/nova/cmd/novncproxy.py", line 87, in main wrap_cmd=None) File "/usr/lib/python2.7/dist-packages/nova/console/websocketproxy.py", line 38, in __init__ ssl_target=None, *args, **kwargs) File "/usr/lib/python2.7/dist-packages/websockify/websocketproxy.py", line 231, in __init__ websocket.WebSocketServer.__init__(self, RequestHandlerClass, *args, **kwargs) TypeError: __init__() got an unexpected keyword argument 'no_parent' it seems there is a conflict with websockify. I'm running on debian wheezy amd64. If you need more information, please ask. regards, Axel Vanzaghi To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1333746/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 810493] Re: No support for sparse images
Until the glance issue is addressed, it's not possible to do anything in nova for this. Removing nova. ** No longer affects: nova -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/810493 Title: No support for sparse images Status in OpenStack Image Registry and Delivery Service (Glance): Confirmed Bug description: I could have sworn I filed this bug already, but I don't see it now. Oh, well. Glance does not seem to support any sort of sparse images. For example, Ubuntu's cloud images are a 1½ GB filesystem, but if it were sparsely allocated it would only take up a couple of hundred MB. Amazon handles this by using tarballs as their image transport format. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/810493/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1329995] Re: Sporadic tempest failures: "The server could not comply with the request since it is either malformed or otherwise incorrect"
The logs aren't available any more, without more info this isn't possible to address. ** Changed in: nova Status: New => Incomplete ** No longer affects: openstack-ci ** Changed in: nova Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1329995 Title: Sporadic tempest failures: "The server could not comply with the request since it is either malformed or otherwise incorrect" Status in OpenStack Compute (Nova): Invalid Bug description: In one of my Tempest review runs, I'm seeing the following error fail some tests: Traceback (most recent call last): File "tempest/services/compute/xml/servers_client.py", line 388, in wait_for_server_status raise_on_error=raise_on_error) File "tempest/common/waiters.py", line 86, in wait_for_server_status _console_dump(client, server_id) File "tempest/common/waiters.py", line 27, in _console_dump resp, output = client.get_console_output(server_id, None) File "tempest/services/compute/xml/servers_client.py", line 596, in get_console_output length=length) File "tempest/services/compute/xml/servers_client.py", line 439, in action resp, body = self.post("servers/%s/action" % server_id, str(doc)) File "tempest/common/rest_client.py", line 209, in post return self.request('POST', url, extra_headers, headers, body) File "tempest/common/rest_client.py", line 419, in request resp, resp_body) File "tempest/common/rest_client.py", line 468, in _error_checker raise exceptions.BadRequest(resp_body) BadRequest: Bad request Details: {'message': 'The server could not comply with the request since it is either malformed or otherwise incorrect.', 'code': '400'} Full log for the run here: http://logs.openstack.org/93/98693/5/check /check-tempest-dsvm-full-icehouse/71d6c8c/console.html To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1329995/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1331213] Re: Compute node configuration issue
Looks like a support request. ** Changed in: nova Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1331213 Title: Compute node configuration issue Status in OpenStack Compute (Nova): Invalid Bug description: Hello. I'm having a problem with my nova compute node configuration. I have a controller node and a compute node. When I start the service on the compute node I get "2014-06-17 18:08:27 15268 DEBUG nova.virt.libvirt.driver [-] Connecting to libvirt: qemu:///system _get_connection /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py:344 2014-06-17 18:08:28 15268 CRITICAL nova [-] (OperationalError) no such table: instances u'SELECT instances.created_at ..." Environment: Ubuntu 12.x, Icehouse. Any help is appreciated. Thank you. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1331213/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
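For what it is worth, and strictly as a guess given that this was closed as a support request: an "(OperationalError) no such table: instances" on a compute node usually means its nova.conf has no working database connection configured, so SQLAlchemy falls back to a default local sqlite file that has never had the schema created. Pointing the node at the controller's database, for example connection = mysql://nova:NOVA_DBPASS@controller/nova in the [database] section (placeholder credentials and hostname), is the usual resolution for an install of this era.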
[Yahoo-eng-team] [Bug 1334151] Re: tempest.api.compute.servers.test_server_actions.ServerActionsTestXML.test_create_backup
*** This bug is a duplicate of bug 1329995 *** https://bugs.launchpad.net/bugs/1329995 ** This bug has been marked a duplicate of bug 1329995 Sporadic tempest failures: "The server could not comply with the request since it is either malformed or otherwise incorrect" -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1334151 Title: tempest.api.compute.servers.test_server_actions.ServerActionsTestXML.test_create_backup Status in OpenStack Compute (Nova): New Bug description: http://logs.openstack.org/76/101876/1/gate/gate-tempest-dsvm- full/1543d84/ https://review.openstack.org/#/c/101876/ 2014-06-24 22:40:12.195 | == 2014-06-24 22:40:12.196 | Failed 1 tests - output below: 2014-06-24 22:40:12.196 | == 2014-06-24 22:40:12.197 | 2014-06-24 22:40:12.197 | tempest.api.compute.servers.test_server_actions.ServerActionsTestXML.test_create_backup[gate] 2014-06-24 22:40:12.197 | - 2014-06-24 22:40:12.198 | 2014-06-24 22:40:12.198 | Captured traceback: 2014-06-24 22:40:12.199 | ~~~ 2014-06-24 22:40:12.199 | Traceback (most recent call last): 2014-06-24 22:40:12.200 | File "tempest/api/compute/servers/test_server_actions.py", line 316, in test_create_backup 2014-06-24 22:40:12.200 | self.servers_client.wait_for_server_status(self.server_id, 'ACTIVE') 2014-06-24 22:40:12.201 | File "tempest/services/compute/xml/servers_client.py", line 390, in wait_for_server_status 2014-06-24 22:40:12.201 | raise_on_error=raise_on_error) 2014-06-24 22:40:12.201 | File "tempest/common/waiters.py", line 106, in wait_for_server_status 2014-06-24 22:40:12.202 | _console_dump(client, server_id) 2014-06-24 22:40:12.202 | File "tempest/common/waiters.py", line 27, in _console_dump 2014-06-24 22:40:12.203 | resp, output = client.get_console_output(server_id, None) 2014-06-24 22:40:12.203 | File "tempest/services/compute/xml/servers_client.py", line 598, in get_console_output 2014-06-24 22:40:12.204 | length=length) 2014-06-24 22:40:12.204 | File "tempest/services/compute/xml/servers_client.py", line 441, in action 2014-06-24 22:40:12.205 | resp, body = self.post("servers/%s/action" % server_id, str(doc)) 2014-06-24 22:40:12.205 | File "tempest/common/rest_client.py", line 218, in post 2014-06-24 22:40:12.206 | return self.request('POST', url, extra_headers, headers, body) 2014-06-24 22:40:12.206 | File "tempest/common/rest_client.py", line 430, in request 2014-06-24 22:40:12.206 | resp, resp_body) 2014-06-24 22:40:12.207 | File "tempest/common/rest_client.py", line 479, in _error_checker 2014-06-24 22:40:12.207 | raise exceptions.BadRequest(resp_body) 2014-06-24 22:40:12.208 | BadRequest: Bad request 2014-06-24 22:40:12.208 | Details: {'message': 'The server could not comply with the request since it is either malformed or otherwise incorrect.', 'code': '400'} To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1334151/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1297261] Re: messages when the module import fails are very misleading and not descriptive
this is an oslo issue; it's in the openstack/ namespace ** Also affects: oslo-incubator Importance: Undecided Status: New ** Changed in: oslo-incubator Status: New => Confirmed ** No longer affects: nova -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1297261 Title: messages when the module import fails are very misleading and not descriptive Status in The Oslo library incubator: Confirmed Bug description: I had a problem importing the vmware module due to the lack of one of its dependencies. The message in the log, though, said that there is no vmware module, which is far from true. The problematic code is at least /usr/lib/python2.6/site-packages/nova/openstack/common/importutils.py:

def import_class(import_str):
    ...
    try:
        __import__(mod_str)
        return getattr(sys.modules[mod_str], class_str)
    except (ValueError, AttributeError):
        raise ImportError('Class %s cannot be found (%s)' %
                          (class_str, traceback.format_exception(*sys.exc_info(

which would obfuscate the error message when some module import dies on ValueError, and:

def import_object_ns(name_space, import_str, *args, **kwargs):
    """Tries to import object from default namespace.

    Imports a class and return an instance of it, first by trying
    to find the class in a default namespace, then failing back to
    a full path if not found in the default namespace.
    """
    import_value = "%s.%s" % (name_space, import_str)
    try:
        return import_class(import_value)(*args, **kwargs)
    except ImportError:
        return import_class(import_str)(*args, **kwargs)

which will say "ImportError: Missing module import_str", but only as a result of the failure of the first import in the try-except block, effectively hiding the true reason for the failure. In other words, if the first import fails for some interesting reason, some other, possibly meaningless import is tried and only its error is allowed to propagate.
+++ This bug was initially created as a clone of Bug #1080424 +++ Description of problem: When having nova-compute alone on a node, I cannot start it (using PYTHONVERBOSE=1 and slightly modified init script for debuging purposes): 2014-03-25 11:10:42.153 10009 INFO nova.virt.driver [-] Loading compute driver 'vmwareapi.VMwareVCDriver' import nova.virt.vmwareapi # directory /usr/lib/python2.6/site-packages/nova/virt/vmwareapi # /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/__init__.pyc matches /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/__init__.py import nova.virt.vmwareapi # precompiled from /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/__init__.pyc # /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.pyc matches /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py import nova.virt.vmwareapi.driver # precompiled from /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.pyc # /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/error_util.pyc matches /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/error_util.py import nova.virt.vmwareapi.error_util # precompiled from /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/error_util.pyc # /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/host.pyc matches /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/host.py import nova.virt.vmwareapi.host # precompiled from /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/host.pyc # /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vim_util.pyc matches /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vim_util.py import nova.virt.vmwareapi.vim_util # precompiled from /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vim_util.pyc # /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vm_util.pyc matches /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vm_util.py import nova.virt.vmwareapi.vm_util # precompiled from /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vm_util.pyc # /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vim.pyc matches /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vim.py import nova.virt.vmwareapi.vim # precompiled from /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vim.pyc 2014-03-25 11:10:42.155 10009 ERROR nova.virt.driver [-] Unable to load the virtualization driver 2014-03-25 11:10:42.155 10009 TRACE nova.virt.driver Traceback (most recent call last): 2014-03-25 11:10:42.155 10009 TRACE nova.virt.driver File "/usr/lib/python2.6/site-packages/nova/virt/driver.py", line 1115, in load_compute_driver 2014-03-25 11:10:42.155 10009 TRACE nova.virt.driver virtapi) 2014-03-25 11:10:42.155 10009 TRACE nova.virt.driver File "/usr/lib/python2.6/site-packages/nova/openstack/common/importut
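A sketch of one way to keep the first failure visible, reusing the import_class helper quoted above (illustrative; not what oslo-incubator actually shipped):

    import traceback

    def import_object_ns(name_space, import_str, *args, **kwargs):
        import_value = "%s.%s" % (name_space, import_str)
        try:
            return import_class(import_value)(*args, **kwargs)
        except ImportError:
            # Remember why the namespaced import failed before trying the
            # bare path, so the real root cause is not masked.
            first_failure = traceback.format_exc()
            try:
                return import_class(import_str)(*args, **kwargs)
            except ImportError:
                raise ImportError("could not import %s (first attempt as %s "
                                  "failed with:\n%s)"
                                  % (import_str, import_value, first_failure))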
[Yahoo-eng-team] [Bug 1290679] Re: Nova Cells cannot work with zmq
Honestly, zmq is basically unsupported at this point; I think that if this comes back as a feature request it needs to go through the oslo.messaging program. ** Changed in: nova Status: New => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1290679 Title: Nova Cells cannot work with zmq Status in OpenStack Compute (Nova): Won't Fix Bug description: I use OpenStack Nova Havana with ZeroMQ to build my environment. In my environment, there is a controller node, two child cell nodes and four compute nodes, as follows: http://pastebin.com/WtG0GVDv controller is the parent cell; cell1 and cell2 are child cells. However, when I start the nova services, there are many errors on controller, cell1 and cell2. Controller cells.log: http://pastebin.com/VBVMdDym cell1 cells.log: http://pastebin.com/LsbpGvYc cell2 cells.log: http://pastebin.com/Q91STJWn And the following are my matchmaker_ring.json files: controller: http://pastebin.com/BduvHw3H cell1: http://pastebin.com/Lv4F9MHw cell2: http://pastebin.com/1aKMiZJx I think the file impl_zmq must implement the functions cast_to_server and fanout_cast_to_server. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1290679/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1281748] Re: nova-compute crash due to missing DB column 'compute_nodes_1.metrics'
This is a support request, I see a solution attached ** Changed in: nova Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1281748 Title: nova-compute crash due to missing DB column 'compute_nodes_1.metrics' Status in OpenStack Compute (Nova): Invalid Bug description: Doing a fresh install from http://docs.openstack.org/trunk/install- guide/install/apt/content/index.html Running Ubuntu 12.04.4 LTS + cloud-archive:havana PPA as per guide. The only variation from the guide is that I have setup a RBD store for Glance. Here are the package versions: # dpkg -l | grep nova ii nova-ajax-console-proxy 1:2013.2.1-0ubuntu1~cloud0 OpenStack Compute - AJAX console proxy - transitional package ii nova-api 1:2013.2.1-0ubuntu1~cloud0 OpenStack Compute - API frontend ii nova-cert1:2013.2.1-0ubuntu1~cloud0 OpenStack Compute - certificate management ii nova-common 1:2013.2.1-0ubuntu1~cloud0 OpenStack Compute - common files ii nova-compute 1:2013.2.1-0ubuntu1~cloud0 OpenStack Compute - compute node ii nova-compute-kvm 1:2013.2.1-0ubuntu1~cloud0 OpenStack Compute - compute node (KVM) ii nova-conductor 1:2013.2.1-0ubuntu1~cloud0 OpenStack Compute - conductor service ii nova-consoleauth 1:2013.2.1-0ubuntu1~cloud0 OpenStack Compute - Console Authenticator ii nova-doc 1:2013.2.1-0ubuntu1~cloud0 OpenStack Compute - documentation ii nova-network 1:2013.2.1-0ubuntu1~cloud0 OpenStack Compute - Network manager ii nova-novncproxy 1:2013.2.1-0ubuntu1~cloud0 OpenStack Compute - NoVNC proxy ii nova-scheduler 1:2013.2.1-0ubuntu1~cloud0 OpenStack Compute - virtual machine scheduler ii python-nova 1:2013.2.1-0ubuntu1~cloud0 OpenStack Compute Python libraries ii python-novaclient1:2.15.0-0ubuntu1~cloud0 client library for OpenStack Compute API Everything is smooth up to setting up nova-compute. Other nova services seem to be working correctly, but when I got to deploying the first image the instance didn't start. I am attaching the nova- compute.log file. The nova-compute process crashes immediately when you attempt to start it. According to the errors it appears that there is a missing field (metrics) in the compute_nodes table. The error doesn't mention it, but the table also appears to be missing the extra_resources column as well. 
Here is what my MySQL says the schema is for compute_nodes CREATE TABLE `compute_nodes` ( `created_at` datetime DEFAULT NULL, `updated_at` datetime DEFAULT NULL, `deleted_at` datetime DEFAULT NULL, `id` int(11) NOT NULL AUTO_INCREMENT, `service_id` int(11) NOT NULL, `vcpus` int(11) NOT NULL, `memory_mb` int(11) NOT NULL, `local_gb` int(11) NOT NULL, `vcpus_used` int(11) NOT NULL, `memory_mb_used` int(11) NOT NULL, `local_gb_used` int(11) NOT NULL, `hypervisor_type` mediumtext NOT NULL, `hypervisor_version` int(11) NOT NULL, `cpu_info` mediumtext NOT NULL, `disk_available_least` int(11) DEFAULT NULL, `free_ram_mb` int(11) DEFAULT NULL, `free_disk_gb` int(11) DEFAULT NULL, `current_workload` int(11) DEFAULT NULL, `running_vms` int(11) DEFAULT NULL, `hypervisor_hostname` varchar(255) DEFAULT NULL, `deleted` int(11) DEFAULT NULL, `host_ip` varchar(39) DEFAULT NULL, `supported_instances` text, `pci_stats` text, PRIMARY KEY (`id`), KEY `fk_compute_nodes_service_id` (`service_id`), CONSTRAINT `fk_compute_nodes_service_id` FOREIGN KEY (`service_id`) REFERENCES `services` (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8$$ So it would appear that there is no metrics column to select as the error indicates. I thought this might have been an issue with `nova- manage db sync`, so I dropped, recreated, ran `nova-manage db sync` again on the DB, and restarted the services but I am getting the same issue. The nova-manage log for the db schema updates doesn't have anything that would indicate any issues (schema appears to be at version 216). To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1281748/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.
[Yahoo-eng-team] [Bug 1274767] Re: bad DHCP host name at line 1 of /opt/stack/data/nova/networks/nova-br100.conf
** Changed in: nova Status: Confirmed => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1274767 Title: bad DHCP host name at line 1 of /opt/stack/data/nova/networks/nova- br100.conf Status in OpenStack Compute (Nova): Won't Fix Bug description: bad DHCP host name at line 1 of /opt/stack/data/nova/networks/nova- br100.conf http://logs.openstack.org/51/63551/6/gate/gate-tempest-dsvm-postgres- full/4860441/logs/syslog.txt.gz logstash query: message:"bad DHCP host name at line 1 of /opt/stack/data/nova/networks/nova-br100.conf" AND filename:"logs/syslog.txt" Seen in the gate Jan 30 22:38:43 localhost dnsmasq[3604]: read /etc/hosts - 8 addresses Jan 30 22:38:43 localhost dnsmasq[3604]: bad DHCP host name at line 1 of /opt/stack/data/nova/networks/nova-br100.conf Jan 30 22:38:43 localhost dnsmasq[3604]: bad DHCP host name at line 2 of /opt/stack/data/nova/networks/nova-br100.conf Jan 30 22:38:43 localhost dnsmasq-dhcp[3604]: read /opt/stack/data/nova/networks/nova-br100.conf Jan 30 22:38:44 localhost dnsmasq[3604]: read /etc/hosts - 8 addresses Jan 30 22:38:44 localhost dnsmasq[3604]: bad DHCP host name at line 1 of /opt/stack/data/nova/networks/nova-br100.conf Jan 30 22:38:44 localhost dnsmasq[3604]: bad DHCP host name at line 2 of /opt/stack/data/nova/networks/nova-br100.conf Jan 30 22:38:44 localhost dnsmasq[3604]: bad DHCP host name at line 3 of /opt/stack/data/nova/networks/nova-br100.conf Jan 30 22:38:44 localhost dnsmasq[3604]: bad DHCP host name at line 4 of /opt/stack/data/nova/networks/nova-br100.conf Jan 30 22:38:44 localhost dnsmasq-dhcp[3604]: read /opt/stack/data/nova/networks/nova-br100.conf Jan 30 22:38:44 localhost dnsmasq[3604]: read /etc/hosts - 8 addresses To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1274767/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1259323] Re: Libvirt console parameter incorrect for ARM KVM
Calxeda is no longer in business, so marking as won't fix. If other ARM folks come forward, please feel free to reopen. ** Changed in: nova Importance: Undecided => Low ** Changed in: nova Status: Triaged => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1259323 Title: Libvirt console parameter incorrect for ARM KVM Status in OpenStack Compute (Nova): Won't Fix Bug description: If you configure nova to run on Calxeda ARM with libvirt/KVM, the generated libvirt configuration passes the console as: … root=/dev/vda console=tty0 console=ttyS0 For ARM guests the libvirt configuration should use 'console=ttyAMA0', hence as it stands you lose serial output for the guest. Currently the console settings are hard-coded in nova/virt/libvirt/driver.py. I think we should modify that to be operator-configurable via an option in nova.conf. I can submit a change accordingly, but would like feedback on whether this sounds reasonable. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1259323/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
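A sketch of the operator-configurable option the reporter proposes, using the oslo config pattern of that era; the option name and default here are hypothetical, not an actual nova.conf setting:

    from oslo.config import cfg

    CONF = cfg.CONF
    CONF.register_opt(cfg.StrOpt('guest_serial_console_device',
                                 default='ttyS0',
                                 help='Serial device name appended to the '
                                      'guest kernel command line '
                                      '(ttyAMA0 on ARM)'))

    def guest_console_cmdline():
        # x86 guests keep the old behaviour; ARM operators set ttyAMA0.
        return 'console=tty0 console=%s' % CONF.guest_serial_console_device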
[Yahoo-eng-team] [Bug 1306559] Re: Fix python26 compatibility for RFCSysLogHandler
** No longer affects: sahara -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1306559 Title: Fix python26 compatibility for RFCSysLogHandler Status in Cinder: Confirmed Status in OpenStack Image Registry and Delivery Service (Glance): Confirmed Status in OpenStack Identity (Keystone): Confirmed Status in Murano: Fix Committed Status in OpenStack Neutron (virtual network service): Confirmed Bug description: The currently used pattern https://review.openstack.org/#/c/63094/15/openstack/common/log.py (lines 471-479) will fail for Python 2.6.x. In order to fix the broken Python 2.6.x compatibility, old-style explicit superclass method calls should be used instead. Here is an example of how to check this for Python v2.7 and v2.6:

import logging.handlers
print type(logging.handlers.SysLogHandler)
print type(logging.Handler)

Results would be: Python 2.7: <type 'type'>, so super() may be used for RFCSysLogHandler(logging.handlers.SysLogHandler); Python 2.6: <type 'classobj'>, so super() may *NOT* be used for RFCSysLogHandler(logging.handlers.SysLogHandler). To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1306559/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1306559] Re: Fix python26 compatibility for RFCSysLogHandler
Deleting projects that have fixed this to get around Launchpad timeout limitations. ** No longer affects: oslo-incubator ** No longer affects: ceilometer ** No longer affects: heat -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1306559 Title: Fix python26 compatibility for RFCSysLogHandler Status in Cinder: Confirmed Status in OpenStack Image Registry and Delivery Service (Glance): Confirmed Status in OpenStack Identity (Keystone): Confirmed Status in Murano: Fix Committed Status in OpenStack Neutron (virtual network service): Confirmed Bug description: The currently used pattern https://review.openstack.org/#/c/63094/15/openstack/common/log.py (lines 471-479) will fail for Python 2.6.x. In order to fix the broken Python 2.6.x compatibility, old-style explicit superclass method calls should be used instead. Here is an example of how to check this for Python v2.7 and v2.6:

import logging.handlers
print type(logging.handlers.SysLogHandler)
print type(logging.Handler)

Results would be: Python 2.7: <type 'type'>, so super() may be used for RFCSysLogHandler(logging.handlers.SysLogHandler); Python 2.6: <type 'classobj'>, so super() may *NOT* be used for RFCSysLogHandler(logging.handlers.SysLogHandler). To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1306559/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
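The compatibility point, illustrated with a sketch of the subclass (the method body is illustrative, assuming the handler overrides format() as in the linked review):

    import logging.handlers

    class RFCSysLogHandler(logging.handlers.SysLogHandler):
        def format(self, record):
            # On Python 2.7 SysLogHandler is a new-style class, so this works:
            #     msg = super(RFCSysLogHandler, self).format(record)
            # On Python 2.6 it is old-style and super() raises TypeError, so
            # the old-style explicit call is the portable spelling:
            msg = logging.handlers.SysLogHandler.format(self, record)
            return msg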
[Yahoo-eng-team] [Bug 1245746] Re: Grizzly to Havana Upgrade wipes out Nova quota_usages table
** Changed in: nova Status: Incomplete => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1245746 Title: Grizzly to Havana Upgrade wipes out Nova quota_usages table Status in OpenStack Compute (Nova): Fix Released Status in OpenStack Compute (nova) havana series: Fix Released Bug description: In Grizzly, there is no user_id column in the quota_usages table, and the table looks like this:

mysql> select * from quota_usages;
+---------------------+---------------------+------------+----+----------------------------------+-----------+--------+----------+---------------+---------+
| created_at          | updated_at          | deleted_at | id | project_id                       | resource  | in_use | reserved | until_refresh | deleted |
+---------------------+---------------------+------------+----+----------------------------------+-----------+--------+----------+---------------+---------+
| 2013-10-29 03:03:05 | 2013-10-29 03:19:30 | NULL       |  1 | 9cb04bffbe784771bd28fa093d749804 | instances |      1 |        0 | NULL          |       0 |
| 2013-10-29 03:03:05 | 2013-10-29 03:19:30 | NULL       |  2 | 9cb04bffbe784771bd28fa093d749804 | ram       |    512 |        0 | NULL          |       0 |
| 2013-10-29 03:03:05 | 2013-10-29 03:19:30 | NULL       |  3 | 9cb04bffbe784771bd28fa093d749804 | cores     |      1 |        0 | NULL          |       0 |
+---------------------+---------------------+------------+----+----------------------------------+-----------+--------+----------+---------------+---------+

The problem can be recreated through the following steps: 1. In the upgrade from Grizzly to Havana, migration script 203_make_user_quotas_key_and_value.py adds a 'user_id' column to the quota_usages table and its shadow table. 2. Migration script 216_sync_quota_usages.py will delete any instances/cores/ram/etc quota_usages rows without a user_id via delete_null_rows. Since this is a Grizzly to Havana upgrade, and there is no user_id column in Grizzly (user_id is added by 203_make_user_quotas_key_and_value.py in Havana), all the instances/cores/ram/etc resources in quota_usages will be deleted. 3. Then script 216_sync_quota_usages.py will try to add new quota_usages entries based on a query of the resources left in the table. Remember from step 2 that they are already deleted, so no quota entries will be inserted or updated. The result is that the quota_usages entries from Grizzly are wiped out during the upgrade to Havana. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1245746/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1240728] Re: tempest.api.compute.servers.test_server_rescue.ServerRescueTestJSON.test_rescued_vm_attach_volume is nondeterministic
** Changed in: nova Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1240728 Title: tempest.api.compute.servers.test_server_rescue.ServerRescueTestJSON.test_rescued_vm_attach_volume is nondeterministic Status in Cinder: Confirmed Status in OpenStack Compute (Nova): Invalid Bug description: Traceback (most recent call last): File "tempest/api/compute/servers/test_server_rescue.py", line 111, in _unrescue self.servers_client.wait_for_server_status(server_id, 'ACTIVE') File "tempest/services/compute/json/servers_client.py", line 156, in wait_for_server_status return waiters.wait_for_server_status(self, server_id, status) File "tempest/common/waiters.py", line 80, in wait_for_server_status raise exceptions.TimeoutException(message) TimeoutException: Request timed out Details: Server 802897a6-6793-4af2-9d84-8750be518380 failed to reach ACTIVE status within the required time (400 s). Current status: SHUTOFF. Sample failure: http://logs.openstack.org/51/52151/1/gate/gate- tempest-devstack-vm-full/6b393f5/ Basic query for the failure string: http://logstash.openstack.org/#eyJzZWFyY2giOiJAbWVzc2FnZTpcIkZBSUw6IHRlbXBlc3QuYXBpLmNvbXB1dGUuc2VydmVycy50ZXN0X3NlcnZlcl9yZXNjdWUuU2VydmVyUmVzY3VlVGVzdEpTT04udGVzdF9yZXNjdWVkX3ZtX2F0dGFjaF92b2x1bWVcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiYWxsIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM4MTk2MTIyMjkwMSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ== To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1240728/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1211194] Re: nova-compute(folsom) segmentation fault libvirt_type=lxc
** Changed in: nova Status: Incomplete => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1211194 Title: nova-compute(folsom) segmentation fault libvirt_type=lxc Status in OpenStack Compute (Nova): Won't Fix Bug description: nova-compute(folsom) segmentation fault libvirt_type=lxc grizzly seems to be the same . when I was creating a new instance, the nova-compute segmentation fault. but the LXC container seems running ok ! ===LOG=== 2013-08-12 07:34:38 DEBUG nova.utils [req-fe05d497-a694-4208-80aa-cb20a9ec8315 651f24b16d33415cb0aa684ee4d0a7d5 b9aa4945e61e47e39f0687c740d1017e] Result was 0 execute /usr/local/lib/python2.7/dist-packages/nova-2012.2.5-py2.7.egg/nova/utils.py:203 2013-08-12 07:34:38 DEBUG nova.utils [req-fe05d497-a694-4208-80aa-cb20a9ec8315 651f24b16d33415cb0aa684ee4d0a7d5 b9aa4945e61e47e39f0687c740d1017e] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf iptables-restore -c execute /usr/local/lib/python2.7/dist-packages/nova-2012.2.5-py2.7.egg/nova/utils.py:187 2013-08-12 07:34:39 DEBUG nova.utils [req-fe05d497-a694-4208-80aa-cb20a9ec8315 651f24b16d33415cb0aa684ee4d0a7d5 b9aa4945e61e47e39f0687c740d1017e] Result was 0 execute /usr/local/lib/python2.7/dist-packages/nova-2012.2.5-py2.7.egg/nova/utils.py:203 2013-08-12 07:34:39 DEBUG nova.network.linux_net [req-fe05d497-a694-4208-80aa-cb20a9ec8315 651f24b16d33415cb0aa684ee4d0a7d5 b9aa4945e61e47e39f0687c740d1017e] IPTablesManager.apply completed with success _apply /usr/local/lib/python2.7/dist-packages/nova-2012.2.5-py2.7.egg/nova/network/linux_net.py:375 2013-08-12 07:34:39 DEBUG nova.virt.firewall [req-fe05d497-a694-4208-80aa-cb20a9ec8315 651f24b16d33415cb0aa684ee4d0a7d5 b9aa4945e61e47e39f0687c740d1017e] [instance: 608dae79-e40b-4139-a925-80b77783eaec] Provider Firewall Rules refreshed prepare_instance_filter /usr/local/lib/python2.7/dist-packages/nova-2012.2.5-py2.7.egg/nova/virt/firewall.py:193 2013-08-12 07:34:39 DEBUG nova.utils [req-fe05d497-a694-4208-80aa-cb20a9ec8315 651f24b16d33415cb0aa684ee4d0a7d5 b9aa4945e61e47e39f0687c740d1017e] Got semaphore "iptables" for method "_apply"... inner /usr/local/lib/python2.7/dist-packages/nova-2012.2.5-py2.7.egg/nova/utils.py:764 2013-08-12 07:34:39 DEBUG nova.utils [req-fe05d497-a694-4208-80aa-cb20a9ec8315 651f24b16d33415cb0aa684ee4d0a7d5 b9aa4945e61e47e39f0687c740d1017e] Attempting to grab file lock "iptables" for method "_apply"... inner /usr/local/lib/python2.7/dist-packages/nova-2012.2.5-py2.7.egg/nova/utils.py:768 2013-08-12 07:34:39 DEBUG nova.utils [req-fe05d497-a694-4208-80aa-cb20a9ec8315 651f24b16d33415cb0aa684ee4d0a7d5 b9aa4945e61e47e39f0687c740d1017e] Got file lock "iptables" for method "_apply"... 
inner /usr/local/lib/python2.7/dist-packages/nova-2012.2.5-py2.7.egg/nova/utils.py:794 2013-08-12 07:34:39 DEBUG nova.utils [req-fe05d497-a694-4208-80aa-cb20a9ec8315 651f24b16d33415cb0aa684ee4d0a7d5 b9aa4945e61e47e39f0687c740d1017e] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf iptables-save -c -t filter execute /usr/local/lib/python2.7/dist-packages/nova-2012.2.5-py2.7.egg/nova/utils.py:187 2013-08-12 07:34:42 DEBUG nova.utils [req-fe05d497-a694-4208-80aa-cb20a9ec8315 651f24b16d33415cb0aa684ee4d0a7d5 b9aa4945e61e47e39f0687c740d1017e] Result was 0 execute /usr/local/lib/python2.7/dist-packages/nova-2012.2.5-py2.7.egg/nova/utils.py:203 2013-08-12 07:34:42 DEBUG nova.utils [req-fe05d497-a694-4208-80aa-cb20a9ec8315 651f24b16d33415cb0aa684ee4d0a7d5 b9aa4945e61e47e39f0687c740d1017e] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf iptables-restore -c execute /usr/local/lib/python2.7/dist-packages/nova-2012.2.5-py2.7.egg/nova/utils.py:187 2013-08-12 07:34:43 DEBUG nova.utils [req-fe05d497-a694-4208-80aa-cb20a9ec8315 651f24b16d33415cb0aa684ee4d0a7d5 b9aa4945e61e47e39f0687c740d1017e] Result was 0 execute /usr/local/lib/python2.7/dist-packages/nova-2012.2.5-py2.7.egg/nova/utils.py:203 2013-08-12 07:34:43 DEBUG nova.utils [req-fe05d497-a694-4208-80aa-cb20a9ec8315 651f24b16d33415cb0aa684ee4d0a7d5 b9aa4945e61e47e39f0687c740d1017e] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf iptables-save -c -t mangle execute /usr/local/lib/python2.7/dist-packages/nova-2012.2.5-py2.7.egg/nova/utils.py:187 2013-08-12 07:34:44 DEBUG nova.utils [req-fe05d497-a694-4208-80aa-cb20a9ec8315 651f24b16d33415cb0aa684ee4d0a7d5 b9aa4945e61e47e39f0687c740d1017e] Result was 0 execute /usr/local/lib/python2.7/dist-packages/nova-2012.2.5-py2.7.egg/nova/utils.py:203 2013-08-12 07:34:44 DEBUG nova.utils [req-fe05d497-a694-4208-80aa-cb20a9ec8315 651f24b16d33415cb0aa684ee4d0a7d5 b9aa4945e61e47e39f0687c740d1017e] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/
[Yahoo-eng-team] [Bug 1226344] Re: instance-gc doesn't work on baremetal
baremetal deprecated ** Changed in: nova Status: Incomplete => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1226344 Title: instance-gc doesn't work on baremetal Status in OpenStack Compute (Nova): Won't Fix Status in tripleo - openstack on openstack: Triaged Bug description: Per bug 1226342 it's possible to get into a situation where a nova bm node has an instance uuid associated with it that doesn't exist anymore (it may be presented 'DELETED' or just completely gone). This should be detected and result in the node being forced off and deassociated from the instance uuid, in the same way a VM hypervisor kills rogue VM's, but for some reason it's not working. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1226344/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1052557] Re: Cleanup instance metadata access in nova/compute
Old incomplete bug. Seems like it should actually be a blueprint if it's going to exist at all. ** Changed in: nova Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1052557 Title: Cleanup instance metadata access in nova/compute Status in OpenStack Compute (Nova): Invalid Bug description: We currently access attributes in a way that obscures the actual system_metadata key, automatically prefixing "image_", for instance. This makes it difficult for people reading the code to figure out which image properties need to be set. The goal here is to make the code more transparent. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1052557/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
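To illustrate the pattern being complained about (a sketch with made-up keys and a hypothetical helper, not Nova's real code): the system_metadata row stores keys such as 'image_min_ram', but callers go through a helper that silently adds the 'image_' prefix, so the stored key name never appears where it is used.

    # Hypothetical sketch of the prefixed-access pattern described in this bug.
    system_metadata = {
        'image_min_ram': '512',
        'image_os_distro': 'ubuntu',
    }

    def image_meta(metadata, key):
        # The caller asks for 'min_ram'; the 'image_' prefix is added behind
        # the scenes, so grepping the caller never reveals the stored key.
        return metadata['image_' + key]

    print(image_meta(system_metadata, 'min_ram'))  # '512'
    print(system_metadata['image_min_ram'])        # same value, explicit key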
[Yahoo-eng-team] [Bug 1202449] Re: Race condition with spawn/delete of instance sharing same source group security rule
*** This bug is a duplicate of bug 1182131 *** https://bugs.launchpad.net/bugs/1182131 I believe this was probably addressed with: commit 6aa368b99249d01f8fd7183c15d11986ad6a6fb7 Author: Dan Smith Date: Thu Jul 3 08:09:39 2014 -0700 Avoid re-adding iptables rules for instances that have disappeared The remove_filters_for_instance() method fails silently if the instance's chain is gone (i.e. it's been deleted). If this happens while we're refreshing security group rules, we will not notice this case and re-add stale rules for an old instance, breaking our firewall for new instances. This adds a quick check after we've captured the lock to see if the associated chain exists, and bails if it doesn't. Change-Id: Ic75988939f82de49735d85fe99a9eecd4baf45c9 Related-bug: #1182131 ** Changed in: nova Status: Incomplete => Fix Committed ** This bug has been marked a duplicate of bug 1182131 nova-compute: instance created in self-referencing secgroup produces KeyError -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1202449 Title: Race condition with spawn/delete of instance sharing same source group security rule Status in OpenStack Compute (Nova): Fix Committed Bug description: Getting the below error when launching an instance at the same time as deleting another instance. These instances share a security group rule that has the same source group (hope that makes sense). Instance e5fc8a20-384a-4976-890d-54631962c9e2 was deleted at about the same time as 98fd38d1-82cb-449e-92f9-1936d1e2628c was created. 98fd38d1-82cb-449e-92f9-1936d1e2628c failed when trying to build its iptables because e5fc8a20-384a-4976-890d-54631962c9e2 no longer existed. 
2013-07-17 15:30:35.754 ERROR nova.compute.manager [req-6f82cdb7-db20-4f17-a344-edf6da70a871 X] [instance: 98fd38d1-82cb-449e-92f9-1936d1e2628c] Instance failed to spawn 2013-07-17 15:30:35.754 26853 TRACE nova.compute.manager [instance: 98fd38d1-82cb-449e-92f9-1936d1e2628c] Traceback (most recent call last): 2013-07-17 15:30:35.754 26853 TRACE nova.compute.manager [instance: 98fd38d1-82cb-449e-92f9-1936d1e2628c] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1119, in spawn 2013-07-17 15:30:35.754 26853 TRACE nova.compute.manager [instance: 98fd38d1-82cb-449e-92f9-1936d1e2628c] block_device_info) 2013-07-17 15:30:35.754 26853 TRACE nova.compute.manager [instance: 98fd38d1-82cb-449e-92f9-1936d1e2628c] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1528, in spawn 2013-07-17 15:30:35.754 26853 TRACE nova.compute.manager [instance: 98fd38d1-82cb-449e-92f9-1936d1e2628c] block_device_info) 2013-07-17 15:30:35.754 26853 TRACE nova.compute.manager [instance: 98fd38d1-82cb-449e-92f9-1936d1e2628c] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2443, in _create_domain_and_network 2013-07-17 15:30:35.754 26853 TRACE nova.compute.manager [instance: 98fd38d1-82cb-449e-92f9-1936d1e2628c] self.firewall_driver.prepare_instance_filter(instance, network_info) 2013-07-17 15:30:35.754 26853 TRACE nova.compute.manager [instance: 98fd38d1-82cb-449e-92f9-1936d1e2628c] File "/usr/lib/python2.7/dist-packages/nova/virt/firewall.py", line 193, in prepare_instance_filter 2013-07-17 15:30:35.754 26853 TRACE nova.compute.manager [instance: 98fd38d1-82cb-449e-92f9-1936d1e2628c] ipv4_rules, ipv6_rules = self.instance_rules(instance, network_info) 2013-07-17 15:30:35.754 26853 TRACE nova.compute.manager [instance: 98fd38d1-82cb-449e-92f9-1936d1e2628c] File "/usr/lib/python2.7/dist-packages/nova/virt/firewall.py", line 424, in instance_rules 2013-07-17 15:30:35.754 26853 TRACE nova.compute.manager [instance: 98fd38d1-82cb-449e-92f9-1936d1e2628c] conductor_api=capi) 2013-07-17 15:30:35.754 26853 TRACE nova.compute.manager [instance: 98fd38d1-82cb-449e-92f9-1936d1e2628c] File "/usr/lib/python2.7/dist-packages/nova/network/api.py", line 102, in wrapped 2013-07-17 15:30:35.754 26853 TRACE nova.compute.manager [instance: 98fd38d1-82cb-449e-92f9-1936d1e2628c] return func(self, context, *args, **kwargs) 2013-07-17 15:30:35.754 26853 TRACE nova.compute.manager [instance: 98fd38d1-82cb-449e-92f9-1936d1e2628c] File "/usr/lib/python2.7/dist-packages/nova/network/api.py", line 375, in get_instance_nw_info 2013-07-17 15:30:35.754 26853 TRACE nova.compute.manager [instance: 98fd38d1-82cb-449e-92f9-1936d1e2628c] result = self._get_instance_nw_info(context, instance) 2013-07-17 15:30:35.754 26853 TRACE nova.compute.manager [instance: 98fd38d1-82cb-449e-92f9-1936d1e2628c] File "/usr/lib/python2.7/dist-packages/nova/network/api.py", line 392, in _get_instance_nw_info 2013-07-17 15:30:35.754 26853 TRACE nova.compute.manager [instance: 9
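The fix quoted in the commit message above is essentially a guard taken after acquiring the iptables lock; a minimal sketch, assuming hypothetical names (refresh_instance_rules, chains and build_rules stand in for the real firewall-driver plumbing):

    # Hedged sketch of "check the chain still exists after grabbing the lock".
    import threading

    _iptables_lock = threading.Lock()

    def refresh_instance_rules(instance_id, chains, rules):
        with _iptables_lock:
            if instance_id not in chains:
                # The instance was deleted while we waited for the lock;
                # re-adding its rules would leave stale entries behind, so bail.
                return
            rules[instance_id] = build_rules(instance_id)

    def build_rules(instance_id):
        return ['-A inst-%s -j ACCEPT' % instance_id]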
[Yahoo-eng-team] [Bug 1193113] Re: DevicePathInUse exception in devstack-vm-quantum
Old incomplete bug, moving to invalid ** Changed in: nova Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1193113 Title: DevicePathInUse exception in devstack-vm-quantum Status in OpenStack Compute (Nova): Invalid Bug description: I just got this during verification of one of my changes. I don't think it's related to the change (https://review.openstack.org/#/c/33478/) so I'm reporting it here before I reverify. Full log: http://logs.openstack.org/33478/1/gate/gate-tempest- devstack-vm-quantum/32609/logs/screen-n-cpu.txt.gz Also, this was for stable/grizzly. I'm not sure how to specify that in LP. 2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp File "/opt/stack/new/nova/nova/openstack/common/rpc/amqp.py", line 430, in _process_data 2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp rval = self.proxy.dispatch(ctxt, version, method, **args) 2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp File "/opt/stack/new/nova/nova/openstack/common/rpc/dispatcher.py", line 133, in dispatch 2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp return getattr(proxyobj, method)(ctxt, **kwargs) 2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp File "/opt/stack/new/nova/nova/exception.py", line 117, in wrapped 2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp temp_level, payload) 2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__ 2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp self.gen.next() 2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp File "/opt/stack/new/nova/nova/exception.py", line 94, in wrapped 2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp return f(self, context, *args, **kw) 2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp File "/opt/stack/new/nova/nova/compute/manager.py", line 209, in decorated_function 2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp pass 2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__ 2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp self.gen.next() 2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp File "/opt/stack/new/nova/nova/compute/manager.py", line 195, in decorated_function 2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp return function(self, context, *args, **kwargs) 2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp File "/opt/stack/new/nova/nova/compute/manager.py", line 237, in decorated_function 2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp e, sys.exc_info()) 2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__ 2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp self.gen.next() 2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp File "/opt/stack/new/nova/nova/compute/manager.py", line 224, in decorated_function 2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp return function(self, context, *args, **kwargs) 2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp File "/opt/stack/new/nova/nova/compute/manager.py", line 2854, in 
reserve_block_device_name 2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp return do_reserve() 2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp File "/opt/stack/new/nova/nova/openstack/common/lockutils.py", line 242, in inner 2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp retval = f(*args, **kwargs) 2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp File "/opt/stack/new/nova/nova/compute/manager.py", line 2843, in do_reserve 2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp context, instance, bdms, device) 2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp File "/opt/stack/new/nova/nova/compute/utils.py", line 165, in get_device_name_for_instance 2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp raise exception.DevicePathInUse(path=device) 2013-06-20 19:26:25.981 23879 TRACE nova.openstack.common.rpc.amqp DevicePathInUse: The supplied device path (/dev/vdb) is in use. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1193113/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://laun
[Yahoo-eng-team] [Bug 1176446] Re: nova list as admin is slow (no vms)
Believe this is fixed old incomplete bug should be invalid ** Changed in: nova Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1176446 Title: nova list as admin is slow (no vms) Status in OpenStack Compute (Nova): Invalid Bug description: Running nova 2013.1 from Ubuntu packages. I saw bug 1160487 and thought this might be a dup but I dont believe so because running nova list as admin who owns no VMs at all takes about 25-30 seconds to return . We also applied commit e653938ff7bc6b9b3e97e784bb07516576305b3e to nova which significantly improved nova list for none admin tenants only. nova --debug list EQ: curl -i http://10.34.104.187:35357/v2.0/tokens -X POST -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-novaclient" -d '{"auth": {"tenantName": "nicira", "passwordCredentials": {"username": "admin", "password": "x!"}}}' INFO (connectionpool:191) Starting new HTTP connection (1): 10.34.104.187 DEBUG (connectionpool:283) "POST /v2.0/tokens HTTP/1.1" 200 2416 RESP: [200] {'date': 'Sat, 04 May 2013 23:32:29 GMT', 'content-type': 'application/json', 'content-length': '2416', 'vary': 'X-Auth-Token'} RESP BODY: {"access": {"token": {"issued_at": "2013-05-04T23:32:29.848568", "expires": "2013-05-05T23:32:29Z", "id": "166650472b6e4bc0bd0ec3c1ab82a2e2", "tenant": {"description": "Default Tenant - Admin", "enabled": true, "id": "fc9ba4c1d32d48679b5c3e9b2c004b9b", "name": "nicira"}}, "serviceCatalog": [{"endpoints": [{"adminURL": "http://10.34.104.185:8774/v2/fc9ba4c1d32d48679b5c3e9b2c004b9b";, "region": "PA", "internalURL": "http://10.34.104.185:8774/v2/fc9ba4c1d32d48679b5c3e9b2c004b9b";, "id": "280c800402da47d393e4e0890a5a830e", "publicURL": "http://10.34.104.185:8774/v2/fc9ba4c1d32d48679b5c3e9b2c004b9b"}], "endpoints_links": [], "type": "compute", "name": "nova"}, {"endpoints": [{"adminURL": "http://10.34.104.188:9696";, "region": "PA", "internalURL": "http://10.34.104.188:9696";, "id": "2b188ab59755429c94324088bb2fa9a2", "publicURL": "http://10.34.104.188:9696"}], "endpoints_links": [], "type": "network", "name": "quantum"}, {"endpoints": [{"adminURL": "http://10.34.104.185:9292";, " region": "PA", "internalURL": "http://10.34.104.185:9292";, "id": "be1d2f2449ac448299c1258913b16474", "publicURL": "http://10.34.104.185:9292"}], "endpoints_links": [], "type": "image", "name": "glance"}, {"endpoints": [{"adminURL": "http://10.34.104.190:8776/v1/fc9ba4c1d32d48679b5c3e9b2c004b9b";, "region": "PA", "internalURL": "http://10.34.104.190:8776/v1/fc9ba4c1d32d48679b5c3e9b2c004b9b";, "id": "9ae35a87f24040038851ce9c9e20147d", "publicURL": "http://10.34.104.190:8776/v1/fc9ba4c1d32d48679b5c3e9b2c004b9b"}], "endpoints_links": [], "type": "volume", "name": "cinder"}, {"endpoints": [{"adminURL": "http://10.34.104.185:8773/service/Cloud";, "region": "PA", "internalURL": "http://10.34.104.185:8773/service/Cloud";, "id": "0ae37a0217d6445e8adbb5ce08146c0b", "publicURL": "http://10.34.104.185:8773/service/Cloud"}], "endpoints_links": [], "type": "ec2", "name": "ec2"}, {"endpoints": [{"adminURL": "http://10.34.104.187:35357/v2.0";, "region": "PA", "internalURL": "http://10.34.104.187:5000/v2 .0", "id": "37b3aa6fade24ced8d6dae8fdaac8449", "publicURL": "http://10.34.104.187:5000/v2.0"}], "endpoints_links": [], "type": "identity", "name": "keystone"}], "user": {"username": "admin", "roles_links": [], "id": 
"5e363b8f0665443d89ca9d9787a19a81", "roles": [{"name": "admin"}, {"name": "_member_"}], "name": "admin"}, "metadata": {"is_admin": 0, "roles": ["b04ac30a90f64c3692d54c73e924e2ae", "9fe2ff9ee4384b1894a90878d3e92bab"]}}} REQ: curl -i http://10.34.104.185:8774/v2/fc9ba4c1d32d48679b5c3e9b2c004b9b/servers/detail -X GET -H "X-Auth-Project-Id: nicira" -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: 166650472b6e4bc0bd0ec3c1ab82a2e2" INFO (connectionpool:191) Starting new HTTP connection (1): 10.34.104.185 DEBUG (connectionpool:283) "GET /v2/fc9ba4c1d32d48679b5c3e9b2c004b9b/servers/detail HTTP/1.1" 200 15 RESP: [200] {'date': 'Sat, 04 May 2013 23:33:06 GMT', 'x-compute-request-id': 'req-32739176-1998-4b1e-8fa6-c2f7b029b6a7', 'content-type': 'application/json', 'content-length': '15'} RESP BODY: {"servers": []} nova-api logs in debug mode: 2013-05-04 16:32:40.958 8633 INFO nova.osapi_compute.wsgi.server [-] (8633) accepted ('10.34.104.185', 58359) 2013-05-04 16:32:41.080 DEBUG nova.api.openstack.wsgi [req-32739176-1998-4b1e-8fa6-c2f7b029b6a7 5e363b8f0665443d89ca9d9787a19a81 fc9ba4c1d32d48679b5c3e9b2c004b9b] No Content-Type provided in request get_body /usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py:791 2013-05-04 16:32:41.080 DEBUG nova.api.openstack.wsgi [req-32739176-1998-4b1e
[Yahoo-eng-team] [Bug 1174153] Re: data from previous tenants accessible with nova baremetal
** Changed in: nova Status: Incomplete => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1174153 Title: data from previous tenants accessible with nova baremetal Status in OpenStack Bare Metal Provisioning Service (Ironic): Triaged Status in OpenStack Compute (Nova): Won't Fix Status in OpenStack Security Notes: Fix Released Bug description: At the moment the baremetal driver resets the partition table on the first hard disk, but doesn't wipe the data. This has two holes: other disks have their partition tables preserved; tenant data is able to be read by the new instance. Wiping disks can be slow (particularly in cases where TRIM cannot be relied on), so we probably want to only do it when the new instance is for a new tenant. To manage notifications about this bug go to: https://bugs.launchpad.net/ironic/+bug/1174153/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
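The mitigation suggested in the report (wipe everything, but only when the node changes hands between tenants) can be expressed as a small conditional; a sketch with assumed field names and an assumed wipe_disk callable, not the real baremetal driver API:

    # Hypothetical sketch of "wipe all disks only on tenant change".
    def prepare_node_disks(node, new_instance, wipe_disk):
        previous_tenant = node.get('last_tenant_id')
        if previous_tenant and previous_tenant != new_instance['tenant_id']:
            for disk in node['disks']:      # every disk, not just the first one
                wipe_disk(disk)             # slow, hence only on tenant change
        node['last_tenant_id'] = new_instance['tenant_id']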
[Yahoo-eng-team] [Bug 1260310] Re: gate-tempest-dsvm-full failure with "An error occurred while enabling hairpin mode on domain with xml"
No real reproduce here, moving on... ** Changed in: nova Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1260310 Title: gate-tempest-dsvm-full failure with "An error occurred while enabling hairpin mode on domain with xml" Status in OpenStack Compute (Nova): Invalid Bug description: Kibana search http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIkFuIGVycm9yIG9jY3VycmVkIHdoaWxlIGVuYWJsaW5nIGhhaXJwaW4gbW9kZSBvbiBkb21haW4gd2l0aCB4bWxcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM4Njg1MjgzMTk3M30= This is pretty infrequent To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1260310/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1313707] Re: instance status turn to ERROR when running instance suspend
4 months in incomplete status, closing ** Changed in: nova Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1313707 Title: instance status turn to ERROR when running instance suspend Status in OpenStack Compute (Nova): Invalid Bug description: Description of problem: When trying to suspend an instance, the instance's status turns to Error. The instance's flavor details are:
+----------------------------+--------------------------------------+
| Property                   | Value                                |
+----------------------------+--------------------------------------+
| name                       | m1.small                             |
| ram                        | 2048                                 |
| OS-FLV-DISABLED:disabled   | False                                |
| vcpus                      | 1                                    |
| extra_specs                | {}                                   |
| swap                       |                                      |
| os-flavor-access:is_public | True                                 |
| rxtx_factor                | 1.0                                  |
| OS-FLV-EXT-DATA:ephemeral  | 40                                   |
| disk                       | 20                                   |
| id                         | 7427e83a-5f96-43af-936b-a054191482ab |
+----------------------------+--------------------------------------+
Version-Release number of selected component (if applicable): openstack-nova-common-2013.2.3-6.el6ost.noarch openstack-nova-console-2013.2.3-6.el6ost.noarch openstack-nova-network-2013.2.3-6.el6ost.noarch python-novaclient-2.15.0-4.el6ost.noarch python-nova-2013.2.3-6.el6ost.noarch openstack-nova-compute-2013.2.3-6.el6ost.noarch openstack-nova-conductor-2013.2.3-6.el6ost.noarch openstack-nova-novncproxy-2013.2.3-6.el6ost.noarch openstack-nova-scheduler-2013.2.3-6.el6ost.noarch openstack-nova-api-2013.2.3-6.el6ost.noarch openstack-nova-cert-2013.2.3-6.el6ost.noarch How reproducible: 100% Steps to Reproduce: 1. launch an instance from an iso image with the flavor as it is detailed above. 2. suspend the instance. Actual results: The instance status turns to ERROR. Expected results: The instance should be suspended. Additional info: The error from the compute log is attached. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1313707/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1301519] Re: nova.conf.sample missing from the 2014.1.rc1 tarball
This is a policy decision. We should bring it back up on the mailing list if we think the policy should change. ** Changed in: nova Status: Incomplete => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1301519 Title: nova.conf.sample missing from the 2014.1.rc1 tarball Status in OpenStack Compute (Nova): Won't Fix Bug description: This patch [1] removed the nova.conf.sample because it's not gated but now we are left without the sample config file in the tarball. We could generate the nova.conf.sample in setup.py (based on this comment [2]) and include it in the tarball for rc2. [1] https://review.openstack.org/#/c/81588/ [2] https://bugs.launchpad.net/nova/+bug/1294774/comments/4 To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1301519/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1291471] Re: can't boot a volume from a volume that has been created from a snapshot
3 months in Incomplete status waiting for info. ** Changed in: nova Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1291471 Title: can't boot a volume from a volume that has been created from a snapshot Status in OpenStack Compute (Nova): Invalid Bug description: Description of problem: A volume that has been created from a snapshot of a volume failed to boot an instance with the following error: 2014-03-12 18:03:39.790 9573 ERROR nova.compute.manager [req-f67dabd7-f013-483a-a386-d5a511b86be7 1654b1a85ba647df87fc9258962949fb 87761b8cc7d34be29063ad24073b2172] [instance: 9f1a00b6-4b88-431e-a163-feaf06e0bfe3] Instance failed block d evice setup 2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 9f1a00b6-4b88-431e-a163-feaf06e0bfe3] Traceback (most recent call last): 2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 9f1a00b6-4b88-431e-a163-feaf06e0bfe3] File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1387, in _prep_block_device 2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 9f1a00b6-4b88-431e-a163-feaf06e0bfe3] self._await_block_device_map_created) + 2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 9f1a00b6-4b88-431e-a163-feaf06e0bfe3] File "/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 283, in attach_block_devices 2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 9f1a00b6-4b88-431e-a163-feaf06e0bfe3] block_device_mapping) 2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 9f1a00b6-4b88-431e-a163-feaf06e0bfe3] File "/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 170, in attach 2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 9f1a00b6-4b88-431e-a163-feaf06e0bfe3] connector) 2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 9f1a00b6-4b88-431e-a163-feaf06e0bfe3] File "/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 176, in wrapper 2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 9f1a00b6-4b88-431e-a163-feaf06e0bfe3] res = method(self, ctx, volume_id, *args, **kwargs) 2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 9f1a00b6-4b88-431e-a163-feaf06e0bfe3] File "/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 274, in initialize_connection 2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 9f1a00b6-4b88-431e-a163-feaf06e0bfe3] connector) 2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 9f1a00b6-4b88-431e-a163-feaf06e0bfe3] File "/usr/lib/python2.6/site-packages/cinderclient/v1/volumes.py", line 321, in initialize_connection 2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 9f1a00b6-4b88-431e-a163-feaf06e0bfe3] {'connector': connector})[1]['connection_info'] 2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 9f1a00b6-4b88-431e-a163-feaf06e0bfe3] File "/usr/lib/python2.6/site-packages/cinderclient/v1/volumes.py", line 250, in _action 2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 9f1a00b6-4b88-431e-a163-feaf06e0bfe3] return self.api.client.post(url, body=body) 2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 9f1a00b6-4b88-431e-a163-feaf06e0bfe3] File "/usr/lib/python2.6/site-packages/cinderclient/client.py", line 210, in post 2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager 
[instance: 9f1a00b6-4b88-431e-a163-feaf06e0bfe3] return self._cs_request(url, 'POST', **kwargs) 2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 9f1a00b6-4b88-431e-a163-feaf06e0bfe3] File "/usr/lib/python2.6/site-packages/cinderclient/client.py", line 174, in _cs_request 2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 9f1a00b6-4b88-431e-a163-feaf06e0bfe3] **kwargs) 2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 9f1a00b6-4b88-431e-a163-feaf06e0bfe3] File "/usr/lib/python2.6/site-packages/cinderclient/client.py", line 157, in request 2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 9f1a00b6-4b88-431e-a163-feaf06e0bfe3] raise exceptions.from_response(resp, body) 2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 9f1a00b6-4b88-431e-a163-feaf06e0bfe3] ClientException: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-e990ac94-97d9-41f3-b1e1-ca63e7d1d2bc) 2014-03-12 18:03:40.289 9573 ERROR nova.openstack.common.rpc.amqp [req-f67dabd7-f013-483a-a386-d5a511b86be7 1654b1a85ba647df87fc9258962949fb 87761b8cc7d34be29063ad24073b2172] Exception during message handling 2014-03-12 18:03:40.289 9573 TRACE nova.
[Yahoo-eng-team] [Bug 1288700] Re: Unable to list hypervisors information
6 months in incomplete state ** Changed in: nova Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1288700 Title: Unable to list hypervisors information Status in OpenStack Dashboard (Horizon): Invalid Status in OpenStack Compute (Nova): Invalid Bug description: While accessing the horizon dashboard, I'm unable to list the Hypervisors. The python-six version has been updated from 1.3.0 fc19 package to 1.5.2 using pip. However, the error is the following: 2014-03-06 12:46:20.399 31393 TRACE nova.api.openstack.wsgi 2014-03-06 12:46:20.724 31393 ERROR nova.api.openstack.wsgi [req-3875bd6c-b142-448c-91a0-8a9f21e7acc6 myUser 8e067beb179c49fdaa553e80024fa3ba] Exception handling resource: 'NoneType' object has no attribute '__getitem__' 2014-03-06 12:46:20.724 31393 TRACE nova.api.openstack.wsgi Traceback (most recent call last): 2014-03-06 12:46:20.724 31393 TRACE nova.api.openstack.wsgi File "/usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py", line 997, in _process_stack 2014-03-06 12:46:20.724 31393 TRACE nova.api.openstack.wsgi action_result = self.dispatch(meth, request, action_args) 2014-03-06 12:46:20.724 31393 TRACE nova.api.openstack.wsgi File "/usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py", line 1078, in dispatch 2014-03-06 12:46:20.724 31393 TRACE nova.api.openstack.wsgi return method(req=request, **action_args) 2014-03-06 12:46:20.724 31393 TRACE nova.api.openstack.wsgi File "/usr/lib/python2.7/site-packages/nova/api/openstack/compute/contrib/hypervisors.py", line 175, in detail 2014-03-06 12:46:20.724 31393 TRACE nova.api.openstack.wsgi for hyp in compute_nodes]) 2014-03-06 12:46:20.724 31393 TRACE nova.api.openstack.wsgi File "/usr/lib/python2.7/site-packages/nova/api/openstack/compute/contrib/hypervisors.py", line 148, in _view_hypervisor 2014-03-06 12:46:20.724 31393 TRACE nova.api.openstack.wsgi 'host': hypervisor['service']['host'], 2014-03-06 12:46:20.724 31393 TRACE nova.api.openstack.wsgi TypeError: 'NoneType' object has no attribute '__getitem__' 2014-03-06 12:46:20.724 31393 TRACE nova.api.openstack.wsgi To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1288700/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1036672] Re: Unable to spawn instance after I delete and create same network
really old invalid bug ** Changed in: nova Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1036672 Title: Unable to spawn instance after I delete and create same network Status in OpenStack Compute (Nova): Invalid Bug description: Steps to reproduce: 1. Create project and network 2. Spawn instance 3. Delete instance 4. Delete project and network 5. Create same network 6. Create project 7. Spawn instance Affected releases: OpenStack Essex Description: You are unable to spawn instance with network that was previosly deleted and recreated. Instance get ERROR state and in nova- compute.log you will see this error: 2012-08-13 19:47:38 nova.compute.manager: ERROR [req-51fc575d-6e0f-482e-8087-7fcacfd72af4 0d2d8dab9f9d415986d28505ed4861bb b833a8f1094b4815b50ddcc21b2b6423] Instance failed to spawn 2012-08-13 19:47:38 TRACE nova.compute.manager [instance: f6531b3d-9764-48b8-b36f-695c8f06480a] Traceback (most recent call last): 2012-08-13 19:47:38 TRACE nova.compute.manager [instance: f6531b3d-9764-48b8-b36f-695c8f06480a] File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 598, in _spawn 2012-08-13 19:47:38 TRACE nova.compute.manager [instance: f6531b3d-9764-48b8-b36f-695c8f06480a] self._legacy_nw_info(network_info), block_device_info) 2012-08-13 19:47:38 TRACE nova.compute.manager [instance: f6531b3d-9764-48b8-b36f-695c8f06480a] File "/usr/lib/python2.6/site-packages/nova/exception.py", line 114, in wrapped 2012-08-13 19:47:38 TRACE nova.compute.manager [instance: f6531b3d-9764-48b8-b36f-695c8f06480a] return f(*args, **kw) 2012-08-13 19:47:38 TRACE nova.compute.manager [instance: f6531b3d-9764-48b8-b36f-695c8f06480a] File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/connection.py", line 919, in spawn 2012-08-13 19:47:38 TRACE nova.compute.manager [instance: f6531b3d-9764-48b8-b36f-695c8f06480a] block_device_info=block_device_info) 2012-08-13 19:47:38 TRACE nova.compute.manager [instance: f6531b3d-9764-48b8-b36f-695c8f06480a] File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/connection.py", line 1539, in to_xml 2012-08-13 19:47:38 TRACE nova.compute.manager [instance: f6531b3d-9764-48b8-b36f-695c8f06480a] rescue, block_device_info) 2012-08-13 19:47:38 TRACE nova.compute.manager [instance: f6531b3d-9764-48b8-b36f-695c8f06480a] File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/connection.py", line 1422, in _prepare_xml_info 2012-08-13 19:47:38 TRACE nova.compute.manager [instance: f6531b3d-9764-48b8-b36f-695c8f06480a] nics.append(self.vif_driver.plug(instance, network, mapping)) 2012-08-13 19:47:38 TRACE nova.compute.manager [instance: f6531b3d-9764-48b8-b36f-695c8f06480a] File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/vif.py", line 99, in plug 2012-08-13 19:47:38 TRACE nova.compute.manager [instance: f6531b3d-9764-48b8-b36f-695c8f06480a] return self._get_configurations(network, mapping) 2012-08-13 19:47:38 TRACE nova.compute.manager [instance: f6531b3d-9764-48b8-b36f-695c8f06480a] File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/vif.py", line 69, in _get_configurations 2012-08-13 19:47:38 TRACE nova.compute.manager [instance: f6531b3d-9764-48b8-b36f-695c8f06480a] 'ip_address': mapping['ips'][0]['ip'], 2012-08-13 19:47:38 TRACE nova.compute.manager [instance: f6531b3d-9764-48b8-b36f-695c8f06480a] IndexError: list index out of range 2012-08-13 19:47:38 TRACE nova.compute.manager [instance: 
f6531b3d-9764-48b8-b36f-695c8f06480a] To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1036672/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 883322] Re: nova.network.manager should handle exceptions and rollback.
Really old incomplete bug. ** Changed in: nova Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/883322 Title: nova.network.manager should handle exceptions and rollback. Status in OpenStack Compute (Nova): Invalid Bug description: nova.network.manager should handle exceptions and roll back. For example, associate_floating_ip didn't handle a failure of driver.bind_floating_ip(). The method should handle exceptions and clean up. https://github.com/openstack/nova/blob/master/nova/network/manager.py#L341 This also affects:
- FloatingIP.init_host_floating_ips()
- FloatingIP.disassociate_floating_ip()
- NetworkManager.deallocate_fixed_ip()
- FlatDHCPManager.init_host()
- FlatDHCPManager._setup_network()
- VlanManager.init_host()
- VlanManager._setup_network()
To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/883322/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
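The rollback the reporter asks for is essentially a try/except around the driver call; a minimal sketch, with db and driver objects standing in for the real manager plumbing (these are assumed interfaces, not nova.network.manager's actual signatures):

    # Hedged sketch: undo the DB association if the driver-level bind fails.
    def associate_floating_ip(context, db, driver, floating_ip, fixed_ip):
        db.associate(context, floating_ip, fixed_ip)
        try:
            driver.bind_floating_ip(floating_ip)
        except Exception:
            # Roll back so a failed bind does not strand the address.
            db.disassociate(context, floating_ip)
            raise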
[Yahoo-eng-team] [Bug 998145] Re: compute api delete race condition part 2
Really old incomplete bug. ** Changed in: nova Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/998145 Title: compute api delete race condition part 2 Status in OpenStack Compute (Nova): Invalid Bug description: If a compute_api.delete() command comes in at about the same time that a compute_manager.run_instance() is happening for that instance, it is possible that the compute manager will not notice that the instance has been deleted from the database when it goes to assign the host. Thus, resources might be allocated unnecessarily (and in rarer cases the deleted status might actually be overwritten as if it had never happened.) This is a corollary to bug 998117 To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/998145/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
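One way to narrow the window described here is to re-check the instance after claiming resources and abort the build if it disappeared; a sketch under assumed interfaces (db, resource_tracker and the claim object are stand-ins, not the real compute manager code):

    # Hedged illustration of "re-check for deletion after claiming resources".
    def run_instance(context, db, resource_tracker, instance_uuid):
        claim = resource_tracker.claim(instance_uuid)
        instance = db.instance_get(context, instance_uuid)
        if instance is None or instance.get('deleted'):
            # A delete raced with this build; release the resources and stop
            # rather than overwriting the deleted state.
            claim.abort()
            return None
        return claim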
[Yahoo-eng-team] [Bug 825241] Re: SQLAlchemy + Postgres + Eventlet
Really old incomplete bug. ** Changed in: nova Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/825241 Title: SQLAlchemy + Postgres + Eventlet Status in OpenStack Compute (Nova): Invalid Bug description: Using SQLAlchemy + Postgres will result in your APIs only handling 1 SQL request at a time. Since most requests are SQL-based, this could have a negative impact on customer experience. There is a branch which fixes the issue for MySQL, but I was unable to do the same for Postgres due to my limited knowledge of Postgres and the fact that there is an issue with the current Ubuntu psycopg2 package which is causing errors. More likely than not this will be resolved when psycopg2 is upgraded on Ubuntu and we can solve this concurrency problem for Postgres. Related MySQL branch: https://code.launchpad.net/~rackspace-titan/nova/sqlalchemy-eventlet/+merge/71087 To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/825241/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
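The underlying symptom is that the Postgres driver's blocking calls are not cooperative under eventlet, so one in-flight query stalls every green thread. A common generic workaround is to push the blocking call into eventlet's OS-thread pool; a rough sketch (this is not the branch referenced above, and the session/query names are assumptions):

    # Generic pattern: run a blocking DB call via eventlet.tpool so other
    # green threads keep running while it executes in a real OS thread.
    from eventlet import tpool

    def list_instance_ids_blocking(session):
        # A C-extension DB driver (e.g. psycopg2) blocks the whole process
        # under eventlet, because its sockets are not green-thread aware.
        return session.execute("SELECT id FROM instances").fetchall()

    def list_instance_ids(session):
        return tpool.execute(list_instance_ids_blocking, session)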
[Yahoo-eng-team] [Bug 1173408] Re: macs_for_instance needs to be rolled back
** Changed in: nova Status: Incomplete => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1173408 Title: macs_for_instance needs to be rolled back Status in OpenStack Compute (Nova): Won't Fix Bug description: Since the build_instance method calls out to a driver to get the macs for an instance, there likely needs to be a way to let said driver know that we don't need said macs on failure. This could be handled in various ways; making the contract with the driver layer concrete could also help solve this. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1173408/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
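The missing "give them back on failure" step could look roughly like the following; release_macs_for_instance is an assumed counterpart to macs_for_instance, not an existing driver method:

    # Hedged sketch of rolling back MAC reservations when the build fails.
    def build_instance(driver, instance, spawn):
        macs = driver.macs_for_instance(instance)
        try:
            spawn(instance, macs)
        except Exception:
            # Let the driver reclaim whatever it reserved for this instance.
            if hasattr(driver, 'release_macs_for_instance'):
                driver.release_macs_for_instance(instance, macs)
            raise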
[Yahoo-eng-team] [Bug 1074726] Re: nova.compute.utils.get_device_name_for_instance breaks volume attachment on Hyper-V
** Changed in: nova Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1074726 Title: nova.compute.utils.get_device_name_for_instance breaks volume attachment on Hyper-V Status in OpenStack Compute (Nova): Invalid Bug description: The following exception is raised in nova.compute.utils.get_device_name_for_instance when trying to attach a Cinder volume on Hyper-V in Grizzly: DevicePathInUse: The supplied device path (sda3) is in use. Exception trace details: http://paste.openstack.org/show/24033/ To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1074726/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1090268] Re: "Too many open files" are opened by nova-compute
powervm has been removed from tree, marking won't fix. ** Changed in: nova Status: Incomplete => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1090268 Title: "Too many open files" are opened by nova-compute Status in OpenStack Compute (Nova): Won't Fix Bug description: 1. nova-comp open too many files about 1112. (but the output of "ulimit -n" just is 1024) 2. nova-comp open too many files inode 3641 /lib/modules/2.6.32-279.el6.x86_64/kernel/net/sunrpc/auth_gss/rpcsec_gss_krb5.ko nova-comp 2091628u REG 0,9 0 3641 anon_inode nova-comp 2091629u REG 0,9 0 3641 anon_inode ... nova-comp 2097522u REG 0,9 0 3641 anon_inode nova-comp 2097523u REG 0,9 0 3641 anon_inode ... --- [test@Openstack_Grizzly_ControlNode ~]$ ps -ef | grep nova test 10374 10357 0 04:54 pts/100:00:00 grep nova test 11292 11272 0 Dec04 pts/700:00:01 python /home/.../nova/bin/nova-api test 11300 11292 0 Dec04 pts/700:00:01 python /home/.../nova/bin/nova-api test 11303 11292 0 Dec04 pts/700:00:50 python /home/.../nova/bin/nova-api test 11304 11292 0 Dec04 pts/700:00:01 python /home/.../nova/bin/nova-api test 11479 11373 0 Dec04 pts/901:21:03 python /home/.../nova/bin/nova-network test 11650 11485 0 Dec04 pts/10 00:39:28 python /home/.../nova/bin/nova-scheduler nobody 13152 1 0 Dec04 ?00:02:29 /usr/sbin/dnsmasq --strict-order --bind-interfaces --conf-file= --domain=novalocal --pid-file=/home/.../data/nova/networks/nova-br100.pid --listen-address=10.0.1.1 --except-interface=lo --dhcp-range=set:'private',10.0.1.2,static,120s --dhcp-lease-max=256 --dhcp-hostsfile=/home/.../data/nova/networks/nova-br100.conf --dhcp-script=/home/.../nova/bin/nova-dhcpbridge --leasefile-ro root 13153 13152 0 Dec04 ?00:00:00 /usr/sbin/dnsmasq --strict-order --bind-interfaces --conf-file= --domain=novalocal --pid-file=/home/.../data/nova/networks/nova-br100.pid --listen-address=10.0.1.1 --except-interface=lo --dhcp-range=set:'private',10.0.1.2,static,120s --dhcp-lease-max=256 --dhcp-hostsfile=/home/.../data/nova/networks/nova-br100.conf --dhcp-script=/home/.../nova/bin/nova-dhcpbridge --leasefile-ro test 20916 1 0 Dec05 ?01:25:35 /usr/bin/python /usr/bin/nova-compute test 20975 1 0 Dec05 ?01:24:55 /usr/bin/python /usr/bin/nova-compute --config-file=/etc/nova/nova2.conf [test@Openstack_Grizzly_ControlNode ~]$ [test@Openstack_Grizzly_ControlNode ~]$ ulimit -n 1024 [test@Openstack_Grizzly_ControlNode ~]$ lsof -p 20916 | wc -l 1112 [test@Openstack_Grizzly_ControlNode ~]$ lsof -p 20975 | wc -l 1112 [test@Openstack_Grizzly_ControlNode ~]$ lsof -p 20975 | grep 3641 | wc -l 1009 [test@Openstack_Grizzly_ControlNode ~]$ lsof -p 20916 | grep 3641 | wc -l 1009 [test@Openstack_Grizzly_ControlNode ~]$ [test@Openstack_Grizzly_ControlNode ~]$ ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 14874 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size(512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 10240 cpu time (seconds, -t) unlimited max user processes (-u) 1024 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited [test@Openstack_Grizzly_ControlNode ~]$ [test@Openstack_Grizzly_ControlNode ~]$ ls -li /lib/modules/2.6.32-279.el6.x86_64/kernel/net/sunrpc/auth_gss/rpcsec_gss_krb5.ko 3641 -rwxr--r--. 
1 root root 49576 Jun 14 2012 /lib/modules/2.6.32-279.el6.x86_64/kernel/net/sunrpc/auth_gss/rpcsec_gss_krb5.ko [test@Openstack_Grizzly_ControlNode ~]$ nova-compute.log: 2012-12-13 22:31:21 20916 ERROR nova.virt.powervm.common [-] Error while trying to connect: Error reading SSH protocol banner[Errno 24] Too many open files 2012-12-13 22:31:21 20916 TRACE nova.virt.powervm.common Traceback (most recent call last): 2012-12-13 22:31:21 20916 TRACE nova.virt.powervmt.common File "/home/.../nova/nova/virt/powervm/common.py", line 60, in ssh_connect 2012-12-13 22:31:21 20916 TRACE nova.virt.powervm.common port=port) 2012-12-13 22:31:21 20916 TRACE nova.virt.powervm.common File "/usr/lib/python2.6/site-packages/paramiko/client.py", line 295, in connect 2012-12-13 22:31:21 20916 TRACE nova.virt.powervm.common t.start_client() 2012-12-13 22:31:21 20916 TRACE nova.virt.powervm.common
[Yahoo-eng-team] [Bug 955366] Re: Random KVM boot failure: "This kernel requires the following features not present on the CPU"
IIRC Suse actually does something really clever with their graphics boot, which was actually really hard to handle in Xen guests. I kind of wonder if that's an issue with the old kvm that was used here. Anyway, I don't think this is an OpenStack bug per se. ** Changed in: nova Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/955366 Title: Random KVM boot failure: "This kernel requires the following features not present on the CPU" Status in OpenStack Compute (Nova): Invalid Bug description: Ubuntu Oneiric, KVM, devstack. While booting identical images, I got the below error on boot from one of the instances. Booting again from the same image does work. I looked for any error messages in /var/log or anything different, but I couldn't find anything. I'm guessing this is a KVM bug, though my google-fu is not up to finding any bug reports for KVM because I think this happens all the time whenever the image is mismatched. That isn't the case here though, because booting another instance from the same image works. Error printed during boot: This kernel requires the following features not present on the CPU: fpu msr pae cx8 cmov fxsr sse sse2 Unable to boot - please use a kernel appropriate for your CPU To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/955366/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1063889] Re: MemoryError while uploading Image from Glance to ESX
** Changed in: nova Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1063889 Title: MemoryError while uploading Image from Glance to ESX Status in OpenStack Compute (Nova): Invalid Bug description: We tried to installed the Folsom release (python- nova_2012.2-0ubuntu3~cloud0) on a precise ubuntu-server installation and use an ESX Host as compute node. After fixing the problem described in #1063885 we are running in MemoryErrors: Creating a new instance with: nova boot --flavor 2 --image 6e4deabd-ccd6-412e-9bed-e3293f45a2db test1 we get the following error in nova-computelog: 2012-10-08 17:40:47 DEBUG nova.virt.vmwareapi.vmware_images [req-b1d31b85-f637-4a2c-9aa0-82365775c18c 4d3ed50af55d43e1ac3c2279e7fcded8 1ff95471d60b431b84a161d2e70bfa03] [instance: a65b235a-27b0-45b8-9c2f-6c93f5ee3f1c] Downloading image 6e4deabd-ccd6-412e-9bed-e3293f45a2db from glance image server from (pid=4140) fetch_image /usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmware_images.py:92 2012-10-08 17:41:55 ERROR nova.virt.vmwareapi.io_util [-] 2012-10-08 17:41:55 TRACE nova.virt.vmwareapi.io_util Traceback (most recent call last): 2012-10-08 17:41:55 TRACE nova.virt.vmwareapi.io_util File "/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/io_util.py", line 154, in _inner 2012-10-08 17:41:55 TRACE nova.virt.vmwareapi.io_util data = self.input.read(None) 2012-10-08 17:41:55 TRACE nova.virt.vmwareapi.io_util File "/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/read_write_util.py", line 53, in read 2012-10-08 17:41:55 TRACE nova.virt.vmwareapi.io_util return self.iter.next() 2012-10-08 17:41:55 TRACE nova.virt.vmwareapi.io_util File "/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/read_write_util.py", line 59, in get_next 2012-10-08 17:41:55 TRACE nova.virt.vmwareapi.io_util for data in self.glance_read_iter: 2012-10-08 17:41:55 TRACE nova.virt.vmwareapi.io_util File "/usr/lib/python2.7/StringIO.py", line 76, in next 2012-10-08 17:41:55 TRACE nova.virt.vmwareapi.io_util r = self.readline() 2012-10-08 17:41:55 TRACE nova.virt.vmwareapi.io_util File "/usr/lib/python2.7/StringIO.py", line 154, in readline 2012-10-08 17:41:55 TRACE nova.virt.vmwareapi.io_util self.buf += ''.join(self.buflist) 2012-10-08 17:41:55 TRACE nova.virt.vmwareapi.io_util MemoryError 2012-10-08 17:41:55 TRACE nova.virt.vmwareapi.io_util 2012-10-08 17:41:55 ERROR nova.virt.vmwareapi.vmware_images [req-b1d31b85-f637-4a2c-9aa0-82365775c18c 4d3ed50af55d43e1ac3c2279e7fcded8 1ff95471d60b431b84a161d2e70bfa03] 2012-10-08 17:41:55 TRACE nova.virt.vmwareapi.vmware_images Traceback (most recent call last): 2012-10-08 17:41:55 TRACE nova.virt.vmwareapi.vmware_images File "/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmware_images.py", line 69, in start_transfer 2012-10-08 17:41:55 TRACE nova.virt.vmwareapi.vmware_images read_event.wait() 2012-10-08 17:41:55 TRACE nova.virt.vmwareapi.vmware_images File "/usr/lib/python2.7/dist-packages/eventlet/event.py", line 116, in wait 2012-10-08 17:41:55 TRACE nova.virt.vmwareapi.vmware_images return hubs.get_hub().switch() 2012-10-08 17:41:55 TRACE nova.virt.vmwareapi.vmware_images File "/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 177, in switch 2012-10-08 17:41:55 TRACE nova.virt.vmwareapi.vmware_images return self.greenlet.switch() 2012-10-08 17:41:55 TRACE nova.virt.vmwareapi.vmware_images MemoryError 2012-10-08 17:41:55 TRACE 
nova.virt.vmwareapi.vmware_images To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1063889/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1245553] Re: test_xenapi fails when nova and tests are in different directory trees
** Changed in: nova Assignee: Dirk Mueller (dmllr) => (unassigned) ** Changed in: nova Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1245553 Title: test_xenapi fails when nova and tests are in different directory trees Status in OpenStack Compute (Nova): Invalid Bug description: Nova's xenapi tests use realpath joined together with "../../../../" to determine the parent directory, which does not work if one of the parent directories is a symlink, as it then traverses into a different subtree than the original directory. It should use "normpath", which simply strips off path components, as other unit tests for determining the root dir already do. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1245553/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
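The difference is easy to demonstrate: normpath collapses the ".." components purely textually, so the computed root stays inside the same tree as the test file, whereas realpath first resolves any symlinked parent and can land in a different subtree. A short illustration with a made-up symlinked checkout path:

    import os.path

    test_file = '/srv/symlinked-nova/nova/tests/virt/xenapi/test_xenapi.py'

    # normpath: strip four path components textually, stay in the same tree.
    root = os.path.normpath(os.path.join(os.path.dirname(test_file), '../../../..'))
    print(root)  # /srv/symlinked-nova

    # os.path.realpath(test_file) would instead resolve /srv/symlinked-nova to
    # its symlink target first, so '../../../..' can escape the checkout tree.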
[Yahoo-eng-team] [Bug 1064427] Re: folsom nova-cert ver 2012.2 locale support
** Changed in: nova Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1064427 Title: folsom nova-cert ver 2012.2 locale support Status in OpenStack Compute (Nova): Invalid Bug description: OS: Ubuntu 12.04.1 LTS Precise default locale: ru_RU #cat /var/lib/locales/supported.d/local ru_RU.UTF-8 UTF-8 en_US.UTF-8 UTF-8 #cat /etc/default/locale LANG="ru_RU.UTF-8" If I try service nova-cert start I get an error in /var/log/upstart/nova-cert.log: File "/usr/lib/python2.7/dist-packages/nova/openstack/common/log.py", line 264, in logging_excepthook getLogger(product_name).critical(str(value), **extra) UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-10: ordinal not in range(128) if I change locale to LANG="en_US.UTF-8" and reboot, nova-cert service starts OK. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1064427/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
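The traceback is the usual Python 2 str()-on-unicode trap: under a ru_RU locale the error text contains non-ASCII characters, and str(value) in the logging excepthook implicitly encodes it with the default ASCII codec. A two-line Python 2 illustration (the message text is invented; this does not reproduce on Python 3, where str is already unicode):

    # Python 2 only: str() implicitly encodes unicode using the ASCII codec.
    value = Exception(u'не удалось запустить сервис')  # localized error text
    str(value)  # UnicodeEncodeError: 'ascii' codec can't encode characters...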
[Yahoo-eng-team] [Bug 1163112] Re: nova calls libvirt but failed:Operation not supported
configuration error ** Changed in: nova Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1163112 Title: nova calls libvirt but failed:Operation not supported Status in OpenStack Compute (Nova): Invalid Bug description: Hi all: I use github to install nova and quantum, but when I launch an instance, nova-compute fails: 2013-04-02 11:00:15DEBUG [nova.openstack.common.lockutils] Released file lock "iptables" at /var/lock/nova/nova-iptables for method "_apply"... 2013-04-02 11:00:17ERROR [nova.compute.manager] Instance failed to spawn Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/nova-2013.2.a89.ge9912c6-py2.7.egg/nova/compute/manager.py", line 1069, in _spawn block_device_info) File "/usr/local/lib/python2.7/dist-packages/nova-2013.2.a89.ge9912c6-py2.7.egg/nova/virt/libvirt/driver.py", line 1520, in spawn block_device_info) File "/usr/local/lib/python2.7/dist-packages/nova-2013.2.a89.ge9912c6-py2.7.egg/nova/virt/libvirt/driver.py", line 2435, in _create_domain_and_network domain = self._create_domain(xml, instance=instance) File "/usr/local/lib/python2.7/dist-packages/nova-2013.2.a89.ge9912c6-py2.7.egg/nova/virt/libvirt/driver.py", line 2396, in _create_domain domain.createWithFlags(launch_flags) File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 187, in doit result = proxy_call(self._autowrap, f, *args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 147, in proxy_call rv = execute(f,*args,**kwargs) File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 76, in tworker rv = meth(*args,**kwargs) File "/usr/lib/python2.7/dist-packages/libvirt.py", line 581, in createWithFlags if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self) libvirtError: Unable to add bridge br-int port tap89ed2dc0-2e: Operation not supported 2013-04-02 11:00:17DEBUG [nova.openstack.common.lockutils] Got semaphore "compute_resources" for method "abort"... 2013-04-02 11:00:17DEBUG [nova.compute.claims] Aborting claim: [Claim: 512 MB memory, 0 GB disk, 1 VCPUS] Is it because user nova call libvirt to create a port so it has not enough permission? note1:I set up sudoer: root@node1:~# cat /etc/sudoers.d/nova_sudoers Defaults:nova !requiretty nova ALL = (root) NOPASSWD: /usr/local/bin/nova-rootwrap note2:I login as root, execute " ovs-vsctl add-port", and succeed. 
root@node1:~# ovs-vsctl add-port br-int tap89ed2dc0-2e root@node1:~# ovs-vsctl show f3f4cdc0-1391-45fd-a535-1947d5aea488 Bridge "br0" Port "eth0" Interface "eth0" Port "br0" Interface "br0" type: internal Bridge br-int Port br-int Interface br-int type: internal Port patch-tun Interface patch-tun type: patch options: {peer=patch-int} Port "tap89ed2dc0-2e" Interface "tap89ed2dc0-2e" Bridge br-tun Port patch-int Interface patch-int type: patch options: {peer=patch-tun} Port "gre-1" Interface "gre-1" type: gre options: {in_key=flow, out_key=flow, remote_ip="192.168.19.1"} Port br-tun Interface br-tun type: internal ovs_version: "1.4.0+build0" I add the following lines to qemu.conf cgroup_device_acl = [ "/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet","/dev/net/tun", ] root@node1:~# service qemu-kvm restart qemu-kvm stop/waiting qemu-kvm start/running root@node1:~# service libvirt-bin restart libvirt-bin stop/waiting libvirt-bin start/running, process 30917 To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1163112/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1175973] Re: LXC Folsom Linuxbridge
** Changed in: nova/folsom Status: New => Won't Fix ** Changed in: nova Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1175973 Title: LXC Folsom Linuxbridge Status in OpenStack Neutron (virtual network service): Invalid Status in OpenStack Compute (Nova): Invalid Status in OpenStack Compute (nova) folsom series: Won't Fix Bug description: I'm trying to setup a compute node using LXC, but when I try to launch an instance I got this error in compute.log file : 2013-05-03 10:19:22 ERROR nova.compute.manager [req- 0192bab0-df91-45c7-8f95-a038589e8e6d 220efc2031ed4ee5a5f46c5f710f2020 e7a14c7861ff42eea61fcf0edefb8d7e] [instance: 0c41a5b7-5a2b-48c4-952b- 7296157eca5a] Build error: ['Traceback (most recent call last):\n', ' File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 501, in _run_instance\ninjected_files, admin_password)\n', ' File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 754, in _spawn\nblock_device_info)\n', ' File "/usr/lib/python2.6 /site-packages/nova/exception.py", line 117, in wrapped\n temp_level, payload)\n', ' File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__\nself.gen.next()\n', ' File "/usr/lib/python2.6/site-packages/nova/exception.py", line 92, in wrapped\nreturn f(*args, **kw)\n', ' File "/usr/lib/python2.6 /site-packages/nova/virt/libvirt/driver.py", line 1093, in spawn\n block_device_info)\n', ' File "/usr/lib/python2.6/site- packages/nova/virt/libvirt/driver.py", line 1933, in _create_domain_and_network\ndomain = self._create_domain(xml)\n', ' File "/usr/lib/python2.6/site- packages/nova/virt/libvirt/driver.py", line 1912, in _create_domain\n domain.createWithFlags(launch_flags)\n', ' File "/usr/lib/python2.6 /site-packages/eventlet/tpool.py", line 187, in doit\nresult = proxy_call(self._autowrap, f, *args, **kwargs)\n', ' File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 147, in proxy_call\nrv = execute(f,*args,**kwargs)\n', ' File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 76, in tworker\nrv = meth(*args,**kwargs)\n', ' File "/usr/lib64/python2.6/site-packages/libvirt.py", line 708, in createWithFlags\nif ret == -1: raise libvirtError (\'virDomainCreateWithFlags() failed\', dom=self)\n', 'libvirtError: Unable to add bridge port veth0: No such device\n'] Well, I guess the problem is related to the bridge creation. 
The last error above shows that libvirt was not able to add the port veth0 to the bridge "" (no name is mentioned ) Investigating the log file of nova compute shows the following weird line : 2013-05-03 10:19:13 DEBUG nova.compute.manager [req- 0192bab0-df91-45c7-8f95-a038589e8e6d 220efc2031ed4ee5a5f46c5f710f2020 e7a14c7861ff42eea61fcf0edefb8d7e] [instance: 0c41a5b7-5a2b-48c4-952b- 7296157eca5a] Instance network_info: |[VIF({'network': Network({'bridge': '', 'subnets': [Subnet({'ips': [FixedIP({'meta': {}, 'version': 4, 'type': 'fixed', 'floating_ips': [], 'address': u'172.18.0.3'})], 'version': 4, 'meta': {'dhcp_server': u'172.18.0.1'}, 'dns': [IP({'meta': {}, 'version': 4, 'type': 'dns', 'address': u'172.16.1.5'}), IP({'meta': {}, 'version': 4, 'type': 'dns', 'address': u'8.8.8.8'})], 'routes': [], 'cidr': u'172.18.0.0/16', 'gateway': IP({'meta': {}, 'version': 4, 'type': 'gateway', 'address': u'172.18.255.254'})})], 'meta': {'injected': False, 'tenant_id': u'e7a14c7861ff42eea61fcf0edefb8d7e'}, 'id': u 'f8bb9e5b-8a51-40f9-8ce7-f3300600f9f1', 'label': u'LXC'}), 'meta': {}, 'id': u'55748af2-457f-47fa-a57d-9ae498c0a04b', 'address': u'fa:16:3e:00:c3:9f'})]| _allocate_network /usr/lib/python2.6/site- packages/nova/compute/manager.py:726 the start of the line shows this : "VIF({'network': Network({'bridge': '', 'subnets':" As you can see there is no name affected to the bridge attribute. So, my guess is that this problem is related to Quantum. Because I'm using the same configuration of Quantum on an other node running KVM, and it's working fine. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1175973/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
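A minimal illustration of the symptom described above: the VIF's network carries an empty bridge name, which is what libvirt later trips over. Real network_info entries are VIF/Network model objects; plain dicts are used here only to mirror the dump quoted in the log:

    def vifs_missing_bridge(network_info):
        """Return the ids of VIFs whose network has no bridge name set."""
        bad = []
        for vif in network_info:
            bridge = (vif.get("network", {}).get("bridge") or "").strip()
            if not bridge:
                bad.append(vif.get("id"))
        return bad

    example = [{"id": "55748af2-457f-47fa-a57d-9ae498c0a04b",
                "network": {"bridge": "", "label": "LXC"}}]
    print(vifs_missing_bridge(example))   # -> ['55748af2-457f-47fa-a57d-9ae498c0a04b']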
[Yahoo-eng-team] [Bug 1159828] Re: Remove image-id when booting from a volume
*** This bug is a duplicate of bug 1155512 *** https://bugs.launchpad.net/bugs/1155512 ** This bug has been marked a duplicate of bug 1155512 Issues with booting from the volume -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1159828 Title: Remove image-id when booting from a volume Status in OpenStack Compute (Nova): Incomplete Bug description: The image and its metadata can be copied to the volume and volume_glance_metadata tables. Even though image-id is not specified, we can still find the link to the original image, kernel and ramdisk. I suggest we remove the dependency on image-id when booting a VM from the volume. I also raised a bug for cinder: https://bugs.launchpad.net/cinder/+bug/1159824, which I think should be fixed before this one. For an image that has a ramdisk and kernel, we can also copy them into volumes, so that the dependency on specifying the image id can be removed. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1159828/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1059899] Re: nova fails to configure dnsmasq, resulting in DNS timeouts in instances
In looking at the code I feel like this is effectively addressed in the current linux_net.py, please reopen if it's still an issue. ** Changed in: nova Status: Incomplete => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1059899 Title: nova fails to configure dnsmasq, resulting in DNS timeouts in instances Status in OpenStack Compute (Nova): Fix Released Status in “nova” package in Ubuntu: Confirmed Bug description: Nova uses dnsmasq to answer questions about name <-> IP resolution for instances. By default, it does nothing about things where there is no answer. This causes dnsmasq to forward the query (for which it should be authoritative) off to the nameserver found in resolv.conf. If the zone is properly delegated to nova via a forward only zone declaration in the resolver, then we run into the situation where the instance asks dnsmasq which asks the resolver which asks dnsmasq which then times out. Combine this with linux' love for IPv6, and a single domain search list in resolv.conf, and anything that looks up a host name (e.g., sudo) will take 10 seconds (5 seconds each for the lookup of $(hostname).$domain and $(hostname) RRs), before it fails back to looking up $(hostname).$domain A RR and gets an answer. The fix that worked for us was to add --dnsmasq_config_file=/etc/nova/dnsmasq.conf (not --dns_server, because we DO NOT WANT -h and -R passed to dnsmasq, and we need to specify multiple --server directives) and then dnsmasq.conf gets "--server=/xxx.yyy.10.in-addr.arpa/ --server=/openstack.example.com/" which tells it to not forward queries for those zones off-machine. (The lack of -h and -R means that we do not break our ability to resolute the rest of the DNS world.) To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1059899/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
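A small sketch of generating the dnsmasq.conf described in the fix: one server=/zone/ line per zone the deployment is authoritative for (dnsmasq accepts the long option names without the leading dashes in its config file). The zone names are the ones quoted in the report and would need adjusting; writing the file and pointing nova at it via --dnsmasq_config_file both require root:

    zones = ["xxx.yyy.10.in-addr.arpa", "openstack.example.com"]  # from the report

    with open("/etc/nova/dnsmasq.conf", "w") as conf:
        for zone in zones:
            conf.write("server=/%s/\n" % zone)   # never forward these zones off-machine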
[Yahoo-eng-team] [Bug 1200146] Re: _parse_datetime() of simple_tenant_usage.py should support an additional datetime format
** Changed in: nova Status: Incomplete => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1200146 Title: _parse_datetime() of simple_tenant_usage.py should support an additional datetime format Status in OpenStack Compute (Nova): Won't Fix Bug description: _parse_datetime() of simple_tenant_usage.py currently supports following datetime formats: 1) "%Y-%m-%dT%H:%M:%S" 2) "%Y-%m-%dT%H:%M:%S.%f" 3) "%Y-%m-%d %H:%M:%S.%f" Support for the "%Y-%m-%d %H:%M:%S" datetime format should be added (ISO8601 without microseconds part, that uses the space character as a separator between date and time). To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1200146/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
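A minimal sketch of the behaviour requested: try the three formats simple_tenant_usage already accepts, then fall through to the space-separated variant the report asks for:

    from datetime import datetime

    FORMATS = ("%Y-%m-%dT%H:%M:%S",
               "%Y-%m-%dT%H:%M:%S.%f",
               "%Y-%m-%d %H:%M:%S.%f",
               "%Y-%m-%d %H:%M:%S")      # the format the report wants added

    def parse_datetime(value):
        for fmt in FORMATS:
            try:
                return datetime.strptime(value, fmt)
            except ValueError:
                continue
        raise ValueError("unrecognised datetime: %r" % value)

    print(parse_datetime("2013-07-11 10:30:00"))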
[Yahoo-eng-team] [Bug 1211848] Re: Nova V3 APi should return 501 (Not implemented) for missing extensions
** Changed in: nova Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1211848 Title: Nova V3 APi should return 501 (Not implemented) for missing extensions Status in OpenStack Compute (Nova): Invalid Bug description: The V2 API returns 404 if an attempt is made to call an extension which is not enabled. 501 would be a more appropriate error code. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1211848/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
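For illustration only, a toy dispatcher contrasting the two status codes being argued about; it uses webob (which the Nova API layer is built on), and the extension registry below is made up:

    from webob import exc

    LOADED_EXTENSIONS = {"os-keypairs"}           # hypothetical registry

    def dispatch(extension_name):
        if extension_name not in LOADED_EXTENSIONS:
            # raise exc.HTTPNotFound()            # current V2 behaviour (404)
            raise exc.HTTPNotImplemented()        # behaviour proposed in the report (501)
        return "handled by %s" % extension_name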
[Yahoo-eng-team] [Bug 1195494] Re: Nova Compute Error in nova-manage db sync
** Changed in: nova Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1195494 Title: Nova Compute Error in nova-manage db sync Status in OpenStack Compute (Nova): Invalid Bug description: Hi All, Grizzly - I was successfully able to implement one controller node and a compute node, BUT when I tried to add another compute node with the controller I got below error. Per document I followed till stopping of the required services and then "nova-manage db sync". Not sure if it is because of nova database is already present in Controller and I had done the db sync for my first controller-compute configuration. If we need to drop the old nova database then its not the solution as every time it would be required when you will be adding the additional compute node. Please advise. Nova.conf is exactly the copy of working compute node and with my_ip, vnc details changed. Error on new compute node: 2013-06-27 22:48:50 DEBUG nova.utils [-] backend from (pid=2029) __get_backend /usr/lib/python2.7/dist-packages/nova/utils.py:663 Command failed, please check log for more info 2013-06-27 22:48:50 CRITICAL nova [-] 2013-06-27 22:48:50 TRACE nova Traceback (most recent call last): 2013-06-27 22:48:50 TRACE nova File "/usr/bin/nova-manage", line 1746, in 2013-06-27 22:48:50 TRACE nova main() 2013-06-27 22:48:50 TRACE nova File "/usr/bin/nova-manage", line 1733, in main 2013-06-27 22:48:50 TRACE nova fn(*fn_args, **fn_kwargs) 2013-06-27 22:48:50 TRACE nova File "/usr/bin/nova-manage", line 1102, in sync 2013-06-27 22:48:50 TRACE nova return migration.db_sync(version) 2013-06-27 22:48:50 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/db/migration.py", line 30, in db_sync 2013-06-27 22:48:50 TRACE nova return IMPL.db_sync(version=version) 2013-06-27 22:48:50 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/migration.py", line 53, in db_sync 2013-06-27 22:48:50 TRACE nova versioning_api.upgrade(FLAGS.sql_connection, repo_path, version) To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1195494/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1194792] Re: OperationalError: (OperationalError) (1054, "Unknown column 'services.disabled_reason' in 'field list'")
** Changed in: nova Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1194792 Title: OperationalError: (OperationalError) (1054, "Unknown column 'services.disabled_reason' in 'field list'") Status in OpenStack Compute (Nova): Invalid Bug description: in nova-conductor. 2013-06-26 08:02:44,656.656 1487 TRACE nova.openstack.common.rpc.amqp OperationalError: (OperationalError) (1054, "Unknown column 'services.disabled_reason' in 'field list'") 'SELECT services.created_at AS services_created_at, services.updated_at AS services_updated_at, services.deleted_at AS services_deleted_at, services.deleted AS services_deleted, services.id AS services_id, services.host AS services_host, services.`binary` AS services_binary, services.topic AS services_topic, services.report_count AS services_report_count, services.disabled AS services_disabled, services.disabled_reason AS services_disabled_reason \nFROM services \nWHERE services.deleted = %s AND services.host = %s AND services.`binary` = %s \n LIMIT %s' (0, 'ubuntu.localdomain', 'nova- compute', 1) To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1194792/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
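The trace points at a schema that predates the migration adding services.disabled_reason. One way to confirm that, assuming access to the database (the connection URL below is a placeholder):

    from sqlalchemy import create_engine, inspect

    engine = create_engine("mysql://nova:secret@localhost/nova")   # placeholder URL
    columns = {col["name"] for col in inspect(engine).get_columns("services")}
    print("disabled_reason present:", "disabled_reason" in columns)

If the column is missing, running the pending migrations (nova-manage db sync) is the usual remedy.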
[Yahoo-eng-team] [Bug 1198813] Re: Duplicated glance image service
old incomplete bug ** Changed in: nova Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1198813 Title: Duplicated glance image service Status in Cinder: Incomplete Status in OpenStack Bare Metal Provisioning Service (Ironic): Incomplete Status in OpenStack Compute (Nova): Invalid Status in Python client library for Glance: New Bug description: This code is duplicated in nova, cinder and ironic. Should be removed and use the common version on python-glanceclient once the code lands. https://review.openstack.org/#/c/33327/ To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1198813/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1252828] Re: gate-grenade-devstack-vm fails on test_server_addresses
** Changed in: nova Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1252828 Title: gate-grenade-devstack-vm fails on test_server_addresses Status in OpenStack Compute (Nova): Invalid Bug description: This is the test that fails: 2013-11-19 07:30:45.592 | == 2013-11-19 07:30:45.633 | FAIL: setUpClass (tempest.api.compute.servers.test_server_addresses.ServerAddressesTestXML) 2013-11-19 07:30:45.668 | setUpClass (tempest.api.compute.servers.test_server_addresses.ServerAddressesTestXML) 2013-11-19 07:30:45.846 | -- 2013-11-19 07:30:46.074 | _StringException: Traceback (most recent call last): 2013-11-19 07:30:46.075 | File "tempest/api/compute/servers/test_server_addresses.py", line 31, in setUpClass 2013-11-19 07:30:46.075 | resp, cls.server = cls.create_test_server(wait_until='ACTIVE') 2013-11-19 07:30:46.075 | File "tempest/api/compute/base.py", line 118, in create_test_server 2013-11-19 07:30:46.075 | server['id'], kwargs['wait_until']) 2013-11-19 07:30:46.075 | File "tempest/services/compute/xml/servers_client.py", line 365, in wait_for_server_status 2013-11-19 07:30:46.075 | extra_timeout=extra_timeout) 2013-11-19 07:30:46.075 | File "tempest/common/waiters.py", line 73, in wait_for_server_status 2013-11-19 07:30:46.076 | raise exceptions.BuildErrorException(server_id=server_id) 2013-11-19 07:30:46.076 | BuildErrorException: Server 7ffe527d-e91e-45e3-91c0-b4bafb6fb657 failed to build and is in ERROR status Elastic-recheck thought it failed on bug 1251784 but I don't see that in the scheduler logs. I actually don't see anything in compute logs for a build failure. The only thing I can find is in the scheduler log but it's a debug level error and it's logged several times: http://paste.openstack.org/show/53610/ To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1252828/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1163312] Re: nova-network fails using FlatDHCPManager in Grizzly
really old incomplete bug ** Changed in: nova Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1163312 Title: nova-network fails using FlatDHCPManager in Grizzly Status in OpenStack Compute (Nova): Invalid Status in Fedora: New Bug description: Running nova-network on Fedora 17, Grizzly (openstack- nova-2013.1-0.10.g3.fc19.noarch) setting network_manager=nova.network.manager.FlatManager, nova-network starts and runs properly. However when changing to network_manager=nova.network.manager.FlatDHCPManager and restarting nova-network, it fails with the following in the file /var/log/nova/network.log: 2013-04-02 07:29:12.139 1571 AUDIT nova.service [-] Starting network node (version 2013.1-0.10.g3.fc19) 2013-04-02 07:29:12.140 1571 DEBUG nova.network.l3 [-] Initializing linux_net L3 driver initialize /usr/lib/python2.7/site-packages/nova/network/l3.py:81 2013-04-02 07:29:12.140 1571 DEBUG nova.openstack.common.lockutils [-] Got semaphore "iptables" for method "_apply"... inner /usr/lib/python2.7/site-packages/nova/openstack/co mmon/lockutils.py:185 2013-04-02 07:29:12.140 1571 DEBUG nova.openstack.common.lockutils [-] Attempting to grab file lock "iptables" for method "_apply"... inner /usr/lib/python2.7/site-packages/no va/openstack/common/lockutils.py:196 2013-04-02 07:29:12.214 1571 CRITICAL nova [-] [Errno 13] Permission denied: '/var/lock/nova' 2013-04-02 07:29:12.214 1571 TRACE nova Traceback (most recent call last): 2013-04-02 07:29:12.214 1571 TRACE nova File "/usr/bin/nova-network", line 54, in 2013-04-02 07:29:12.214 1571 TRACE nova service.wait() 2013-04-02 07:29:12.214 1571 TRACE nova File "/usr/lib/python2.7/site-packages/nova/service.py", line 689, in wait 2013-04-02 07:29:12.214 1571 TRACE nova _launcher.wait() 2013-04-02 07:29:12.214 1571 TRACE nova File "/usr/lib/python2.7/site-packages/nova/service.py", line 209, in wait 2013-04-02 07:29:12.214 1571 TRACE nova super(ServiceLauncher, self).wait() 2013-04-02 07:29:12.214 1571 TRACE nova File "/usr/lib/python2.7/site-packages/nova/service.py", line 179, in wait 2013-04-02 07:29:12.214 1571 TRACE nova service.wait() 2013-04-02 07:29:12.214 1571 TRACE nova File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 166, in wait 2013-04-02 07:29:12.214 1571 TRACE nova return self._exit_event.wait() 2013-04-02 07:29:12.214 1571 TRACE nova File "/usr/lib/python2.7/site-packages/eventlet/event.py", line 116, in wait 2013-04-02 07:29:12.214 1571 TRACE nova return hubs.get_hub().switch() 2013-04-02 07:29:12.214 1571 TRACE nova File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 177, in switch 2013-04-02 07:29:12.214 1571 TRACE nova return self.greenlet.switch() 2013-04-02 07:29:12.214 1571 TRACE nova File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 192, in main 2013-04-02 07:29:12.214 1571 TRACE nova result = function(*args, **kwargs) 2013-04-02 07:29:12.214 1571 TRACE nova File "/usr/lib/python2.7/site-packages/nova/service.py", line 147, in run_server 2013-04-02 07:29:12.214 1571 TRACE nova server.start() 2013-04-02 07:29:12.214 1571 TRACE nova File "/usr/lib/python2.7/site-packages/nova/service.py", line 429, in start 2013-04-02 07:29:12.214 1571 TRACE nova self.manager.init_host() 2013-04-02 07:29:12.214 1571 TRACE nova File "/usr/lib/python2.7/site-packages/nova/network/manager.py", line 1562, in init_host 2013-04-02 07:29:12.214 1571 
TRACE nova self.l3driver.initialize() 2013-04-02 07:29:12.214 1571 TRACE nova File "/usr/lib/python2.7/site-packages/nova/network/l3.py", line 82, in initialize 2013-04-02 07:29:12.214 1571 TRACE nova linux_net.init_host() 2013-04-02 07:29:12.214 1571 TRACE nova File "/usr/lib/python2.7/site-packages/nova/network/linux_net.py", line 633, in init_host 2013-04-02 07:29:12.214 1571 TRACE nova add_snat_rule(ip_range) 2013-04-02 07:29:12.214 1571 TRACE nova File "/usr/lib/python2.7/site-packages/nova/network/linux_net.py", line 623, in add_snat_rule 2013-04-02 07:29:12.214 1571 TRACE nova iptables_manager.apply() 2013-04-02 07:29:12.214 1571 TRACE nova File "/usr/lib/python2.7/site-packages/nova/network/linux_net.py", line 383, in apply 2013-04-02 07:29:12.214 1571 TRACE nova self._apply() 2013-04-02 07:29:12.214 1571 TRACE nova File "/usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py", line 210, in inner 2013-04-02 07:29:12.214 1571 TRACE nova fileutils.ensure_tree(local_lock_path) 2013-04-02 07:29:12.214 1571 TRACE nova File "/usr/lib/python2.7/site-packages/nova/openstack/common/fileutils.py", line 29, in ensure_tree 2013-04-02 07:29:12.214 1571 TRACE nova os.ma
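The traceback above ends inside ensure_tree() failing with EACCES on /var/lock/nova. A rough stand-alone reproduction of just that step, run as the nova service user; pointing the lock_path option at a directory that user can write to is the usual workaround:

    import errno
    import os

    def ensure_tree(path):
        """Create a directory tree, tolerating it already existing."""
        try:
            os.makedirs(path)
        except OSError as e:
            if e.errno != errno.EEXIST:
                raise            # EACCES surfaces here, as in the trace above

    ensure_tree("/var/lock/nova")   # raises a permission error for a non-root user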
[Yahoo-eng-team] [Bug 1240849] Re: Cannot ssh into an instance after reboot
** No longer affects: nova -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1240849 Title: Cannot ssh into an instance after reboot Status in OpenStack Neutron (virtual network service): Fix Released Status in neutron havana series: In Progress Bug description: I was able to ssh the instance before reboot on the floating IP, but it failed after hard reboot. According to this log it was working 2013-10-13_17_48_45_702. So something added after this date, probably related to the issue. http://logs.openstack.org/37/50337/4/check/check-tempest-devstack-vm-neutron/9aeca12/logs/tempest.txt.gz#_2013-10-13_17_48_45_702 To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1240849/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 836973] Re: nova should keep instance data after termination
This wishlist bug has been open a year without any activity. I'm going to move it to "Opinion / Wishlist", which is an easily-obtainable queue of older requests that have come on. This bug can be reopened (set back to "New") if someone decides to work on this. ** Changed in: nova Status: Confirmed => Opinion -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/836973 Title: nova should keep instance data after termination Status in OpenStack Compute (Nova): Opinion Bug description: On EC2, instances are available in DescribeInstances output afer they're terminated. In nova, at the moment, they disappear immediately. This makes getting "shutdown" console output just about impossible. Relevant link to amazon ec2 documentation: http://docs.amazonwebservices.com/AWSEC2/latest/DeveloperGuide/index.html?instance-console.html "Only the most recent 64 KB of posted output is stored, which is available for at least 1 hour after the last posting." $ euca-run-instances --key mykey ami-0056 RESERVATION r-227nsegw smoser_project default INSTANCE i-0200 ami-0056scheduling mykey 0 m1.small2011-08-29T20:11:09Zunknown zone aki-0052ari-0053 $ euca-get-console-output i-0201 | tail -n 5 ec2: 1024 83:c3:7a:9b:42:fc:0d:c5:48:96:bd:46:62:25:bf:34 /etc/ssh/ssh_host_dsa_key.pub (DSA) ec2: -END SSH HOST KEY FINGERPRINTS- ec2: # landscape-client is not configured, please run landscape-config. $ euca-terminate-instances i-0201 #no output # wait 10 seconds or so $ euca-describe-instances i-0201 #no output $ echo $? 0 $ euca-get-console-output i-0201 InstanceNotFound: Instance %(instance_id)s could not be found. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/836973/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1198177] Re: Configuring SSL for Nova
very old wishlist item ** Changed in: nova Status: Confirmed => Invalid ** Changed in: nova Assignee: Ilya Alekseyev (ilyaalekseyev) => (unassigned) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1198177 Title: Configuring SSL for Nova Status in OpenStack Compute (Nova): Invalid Bug description: Hi, Installed nova 2013.1.2 through Github. I've configured nova with keystone by following below steps. 1)Configured keystone with SSL by the steps followed in the below link https://bugs.launchpad.net/keystone/+bug/1194001. 2)Created HTTPS endpoints for NOVA with service_type NOVAHTTPS. 3)Added below configurations in /etc/nova/nova.conf in [default]. enabled_ssl_apis=['ec2', 'osapi_compute', 'metadata', 'quantum'] ssl_ca_file=/root/certs/ca.crt (Certificate Authority) ssl_cert_file=/root/certs/server_cert_key.pem (server cert + server key) ssl_key_file="/root/certs/server.key (server key) 4) In /etc/nova/nova.conf, if “auth_protocol” is mapped with http in [keystone_authtoken] section then comment it, by default it allows “https”. #auth_protocol = http 5) Edited /etc/nova/api-paste.ini and added below lines in [filter:authtoken] section and comment “auth_protocol”. #auth_protocol = http certfile = /root/original/server_cert_key.pem (server cert + server key) keyfile = /root/original/server.key (server key) My Observations: 1) There is no option of passing OS_CERT through novaclient. 2) No Configuration flag for making use_ssl=True in the server side to allow ssl connections. Ref Link: https://answers.launchpad.net/nova/+question/231263 Thanks, Sasikiran. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1198177/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
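Not a Nova feature, just a quick stdlib check that the TLS endpoint and the CA file from the report actually complete a handshake; the hostname is an assumption and 8774 is the usual nova-api port:

    import http.client
    import ssl

    ctx = ssl.create_default_context(cafile="/root/certs/ca.crt")   # CA file from the report
    conn = http.client.HTTPSConnection("controller.example.com", 8774, context=ctx)  # host assumed
    conn.request("GET", "/")
    print(conn.getresponse().status)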
[Yahoo-eng-team] [Bug 919051] Re: EC2 param validation should not be in middleware
This wishlist bug has been open a year without any activity. I'm going to move it to "Opinion / Wishlist", which is an easily-obtainable queue of older requests that have come on. This bug can be reopened (set back to "New") if someone decides to work on this. ** Changed in: nova Status: Confirmed => Opinion -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/919051 Title: EC2 param validation should not be in middleware Status in OpenStack Compute (Nova): Opinion Bug description: The implementation of data validation in the EC2 API is in a middleware. This seems odd as a middleware is intended to be an optional piece of code. Param validation should not be optional. We should look into pulling the validation out of a middleware and into nova.api.ec2.cloud To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/919051/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 827569] Re: ec2metadata service does not include 2011-01-01
This wishlist bug has been open a year without any activity. I'm going to move it to "Opinion / Wishlist", which is an easily-obtainable queue of older requests that have come on. This bug can be reopened (set back to "New") if someone decides to work on this. ** Changed in: nova Status: Confirmed => Opinion -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/827569 Title: ec2metadata service does not include 2011-01-01 Status in OpenStack Compute (Nova): Opinion Bug description: On EC2: $ wget -q -O - http://169.254.169.254/; echo 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 2009-04-04 2011-01-01 latest On OpenStack: wget -q -O - http://169.254.169.254/; echo 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 2009-04-04 I noticed this when using 'ec2metadata'. I should probably back off the api version for that, or use 'latest', but it would be nice if 2011-01-01 was present. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/827569/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
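Run from inside an instance, the same check as the wget above in a couple of lines of Python:

    from urllib.request import urlopen

    versions = urlopen("http://169.254.169.254/").read().decode().split()
    print("2011-01-01 supported:", "2011-01-01" in versions)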
[Yahoo-eng-team] [Bug 1052776] Re: Missing prmisc mode on VLAN bridge
Long old incomplete bug, probably should be set invalid. We possibly need local switch config information for why this option seems needed in jaroslav's environment. ** Changed in: nova Status: Confirmed => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1052776 Title: Missing prmisc mode on VLAN bridge Status in OpenStack Compute (Nova): Invalid Bug description: Hello, I experienced communication issue on VLAN network mode with multi_host = True. The instances cannot communicate through the network because VLAN bridge is not in PROMISC mode and it has associated IP address for nova-nework gateway. ip a show br_vlan100 16: br_vlan100: mtu 1500 qdisc noqueue state UNKNOWN link/ether . brd ff:ff:ff:ff:ff:ff inet aa.bb.xx.yy/zz brd . scope global br_vlan100 valid_lft forever preferred_lft forever Setting PROMISC mode on this interface fix the network traffic (fast fix at https://github.com/pulchart/nova/commit/0c86671d3f0e2ac4cae06a5e713584ee26724cec). Regards, Jaroslav To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1052776/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
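Outside of Nova, the reporter's workaround boils down to forcing the VLAN bridge into promiscuous mode; the bridge name below comes from the report, and the command needs root:

    import subprocess

    subprocess.check_call(["ip", "link", "set", "dev", "br_vlan100", "promisc", "on"])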
[Yahoo-eng-team] [Bug 1100799] Re: os-services API extension does not follow REST's CRUD principles
I think all these API design points shouldn't be bugs any more ** Changed in: nova Importance: Undecided => Wishlist ** Changed in: nova Assignee: Tiago Rodrigues de Mello (timello) => (unassigned) ** Changed in: nova Status: In Progress => Opinion -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1100799 Title: os-services API extension does not follow REST's CRUD principles Status in OpenStack Compute (Nova): Opinion Bug description: os-services extension builds a non standard URL format for update action. The current URL is os-services/[enable|disable] and it should be os-services/ and pass the action via body instead. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1100799/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1175667] Re: nova flavor-show does not return the 'latest' version of a flavor
This looks fixed upstream ** Changed in: nova Status: In Progress => Fix Released ** Changed in: nova Importance: Undecided => Medium ** Changed in: nova Assignee: Darren Birkett (darren-birkett) => (unassigned) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1175667 Title: nova flavor-show does not return the 'latest' version of a flavor Status in OpenStack Compute (Nova): Fix Released Bug description: Create a new flavor: root@devstack1:/opt/stack/nova/nova# nova flavor-create nynewflavor 100 128 20 1 +-+-+---+--+---+--+---+-+---+ | ID | Name| Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | +-+-+---+--+---+--+---+-+---+ | 100 | nynewflavor | 128 | 20 | 0 | | 1 | 1.0 | True | +-+-+---+--+---+--+---+-+---+ root@devstack1:/opt/stack/nova/nova# nova flavor-show 100 ++-+ | Property | Value | ++-+ | name | nynewflavor | | ram| 128 | | OS-FLV-DISABLED:disabled | False | | vcpus | 1 | | extra_specs| {} | | swap | | | os-flavor-access:is_public | True| | rxtx_factor| 1.0 | | OS-FLV-EXT-DATA:ephemeral | 0 | | disk | 20 | | id | 100 | ++-+ Delete the flavor and create a new flavor with the same flavorID: root@devstack1:/opt/stack/nova/nova# nova flavor-delete 100 root@devstack1:/opt/stack/nova/nova# nova flavor-create nynewnewnewflavor 100 128 20 1 +-+---+---+--+---+--+---+-+---+ | ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | +-+---+---+--+---+--+---+-+---+ | 100 | nynewnewnewflavor | 128 | 20 | 0 | | 1 | 1.0 | True | +-+---+---+--+---+--+---+-+---+ root@devstack1:/opt/stack/nova/nova# nova flavor-show 100 ++---+ | Property | Value | ++---+ | name | nynewnewnewflavor | | ram| 128 | | OS-FLV-DISABLED:disabled | False | | vcpus | 1 | | extra_specs| {}| | swap | | | os-flavor-access:is_public | True | | rxtx_factor| 1.0 | | OS-FLV-EXT-DATA:ephemeral | 0 | | disk | 20| | id | 100 | ++---+ Delete this flavor and then flavor-show the ID root@devstack1:/opt/stack/nova/nova# nova flavor-delete 100 root@devstack1:/opt/stack/nova/nova# nova flavor-show 100 ++-+ | Property | Value | ++-+ | name | nynewflavor | | ram| 128 | | OS-FLV-DISABLED:disabled | False | | vcpus | 1 | | extra_specs| {} | | swap | | | os-flavor-access:is_public | True| | rxtx_factor| 1.0 | | OS-FLV-EXT-DATA:ephemeral | 0 | | disk | 20 | | id | 100 | ++-+ I see the FIRST instance of the flavor. I think I always want to see the latest version of a flavor, deleted or active. Rinse and repeat with the create/deletes, I will only ever see the first version of that flavor. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1175667/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
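A toy illustration of the behaviour being asked for, with soft-deleted rows modelled as plain dicts: when several deleted flavors share flavorid 100, a lookup should prefer the most recently created one rather than the first match:

    rows = [
        {"flavorid": "100", "name": "nynewflavor",       "created_at": 1, "deleted": True},
        {"flavorid": "100", "name": "nynewnewnewflavor", "created_at": 2, "deleted": True},
    ]

    def flavor_show(flavorid):
        matches = [r for r in rows if r["flavorid"] == flavorid]
        return max(matches, key=lambda r: r["created_at"])   # latest version wins

    print(flavor_show("100")["name"])   # -> nynewnewnewflavor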
[Yahoo-eng-team] [Bug 1343858] Re: build/resize retry behavior not consistent
** Changed in: nova Status: In Progress => Opinion ** Changed in: nova Importance: Undecided => Wishlist -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1343858 Title: build/resize retry behavior not consistent Status in OpenStack Compute (Nova): Opinion Bug description: nova/schedule/utils.py: Case1: when CONF.scheduler_max_attempts >1, if the request contained an exception from a previous compute build/resize operation, the exception message would be logged in conductor.log Case2:when CONF.scheduler_max_attempts ==1, if the request contained an exception from a previous compute build/resize operation, the exception message wouldnot be logged in conductor.log I think this two case should keep consistent behavior even this may not cause something wrong, just for Strict code def populate_retry(filter_properties, instance_uuid): max_attempts = _max_attempts() force_hosts = filter_properties.get('force_hosts', []) force_nodes = filter_properties.get('force_nodes', []) if max_attempts == 1 or force_hosts or force_nodes: # re-scheduling is disabled. return # retry is enabled, update attempt count: retry = filter_properties.setdefault( 'retry', { 'num_attempts': 0, 'hosts': [] # list of compute hosts tried }) retry['num_attempts'] += 1 _log_compute_error(instance_uuid, retry) <<< would not run here when max_attempts == 1 if retry['num_attempts'] > max_attempts: exc = retry.pop('exc', None) msg = (_('Exceeded max scheduling attempts %(max_attempts)d ' 'for instance %(instance_uuid)s. ' 'Last exception: %(exc)s.') % {'max_attempts': max_attempts, 'instance_uuid': instance_uuid, 'exc': exc}) raise exception.NoValidHost(reason=msg) To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1343858/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
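One possible rearrangement, sketched with stand-in stubs rather than Nova's real helpers, in which the exception carried over from the previous attempt is logged whether or not re-scheduling is enabled; this only illustrates the consistency the report asks for and is not the upstream fix:

    import logging

    logging.basicConfig(level=logging.ERROR)
    LOG = logging.getLogger(__name__)

    def _max_attempts():                       # stub for CONF.scheduler_max_attempts
        return 1

    def _log_compute_error(instance_uuid, retry):
        exc = retry.get("exc")
        if exc:
            LOG.error("Error from last host for %s: %s", instance_uuid, exc)

    def populate_retry(filter_properties, instance_uuid):
        retry = filter_properties.get("retry", {})
        _log_compute_error(instance_uuid, retry)      # now runs in both cases
        if _max_attempts() == 1:
            return                                    # re-scheduling disabled
        # ... original retry bookkeeping would continue here ...

    populate_retry({"retry": {"exc": "build failed on host A"}}, "fake-uuid")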
[Yahoo-eng-team] [Bug 1311500] Re: Nova 'os-security-group-default-rules' API does not work with neutron
** No longer affects: nova -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1311500 Title: Nova 'os-security-group-default-rules' API does not work with neutron Status in OpenStack Neutron (virtual network service): Confirmed Bug description: Nova APIs 'os-security-group-default-rules' does not work if 'conf->security_group_api' is 'neutron'. I wrote the test cases for above Nova APIs (https://review.openstack.org/#/c/87924) and it fails in gate neutron tests. I further investigated this issue and found that in 'nova/api/openstack/compute/contrib/security_group_default_rules.py', 'security_group_api' is set according to 'conf->security_group_api' (https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/contrib/security_group_default_rules.py#L107). If 'conf->security_group_api' is 'nova' then, 'NativeNovaSecurityGroupAPI(NativeSecurityGroupExceptions, compute_api.SecurityGroupAPI)' is being used in this API and no issue here. It works fine. If 'conf->security_group_api' is 'neutron' then, 'NativeNeutronSecurityGroupAPI(NativeSecurityGroupExceptions, neutron_driver.SecurityGroupAPI)' is being used in this API and 'neutron_driver.SecurityGroupAPI' (https://github.com/openstack/nova/blob/master/nova/network/security_group/neutron_driver.py#L48) does not have any of the function which are being called from this API class. So gives AttributeError (http://logs.openstack.org/24/87924/2/check/check-tempest-dsvm- neutron-full/7951abf/logs/screen-n-api.txt.gz). Traceback - . . 2014-04-21 00:44:22.430 10186 TRACE nova.api.openstack File "/opt/stack/new/nova/nova/api/openstack/compute/contrib/security_group_default_rules.py", line 130, in create 2014-04-21 00:44:22.430 10186 TRACE nova.api.openstack if self.security_group_api.default_rule_exists(context, values): 2014-04-21 00:44:22.430 10186 TRACE nova.api.openstack AttributeError: 'NativeNeutronSecurityGroupAPI' object has no attribute 'default_rule_exists' I think this API is only for Nova-network as currently there is no such feature exist in neutron. So this API should always use the nova network security group driver (https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/contrib/security_groups.py#L669). To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1311500/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
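The failure mode is easy to reproduce in miniature: the neutron driver simply never grew the default-rule methods, so calling them raises AttributeError exactly as in the trace. The class names below are stand-ins, not the real drivers:

    class NovaNetSecurityGroupAPI(object):
        def default_rule_exists(self, context, values):
            return False

    class NeutronSecurityGroupAPI(object):
        pass    # no default_rule_exists - neutron has no such feature

    for driver in (NovaNetSecurityGroupAPI(), NeutronSecurityGroupAPI()):
        print(type(driver).__name__, hasattr(driver, "default_rule_exists"))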
[Yahoo-eng-team] [Bug 1294853] Re: service_get_all in nova.compute.api should return a List object and should not do a filtering
** Changed in: nova Status: In Progress => Opinion ** Changed in: nova Importance: Undecided => Wishlist ** Changed in: nova Assignee: Pawel Koniszewski (pawel-koniszewski) => (unassigned) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1294853 Title: service_get_all in nova.compute.api should return a List object and should not do a filtering Status in OpenStack Compute (Nova): Opinion Bug description: service_get_all is filtering the results returned by the service object and returning an array. This api should return a List object instead and the filtering should be done in the sqlalchemy api To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1294853/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1239606] Re: Use provider:physical_network to propagate it in NetworkInfo
https://review.openstack.org/#/c/90666/ seems to have been merged and the author of the patch above said that was the new fix ** Changed in: nova Status: In Progress => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1239606 Title: Use provider:physical_network to propagate it in NetworkInfo Status in OpenStack Compute (Nova): Invalid Bug description: provider:physical_network is available in network/neutronv2/api as one of attributes of network objects in method _nw_info_build_network. It should be used and added to the network returned by this method. Retrieving it from port.binding:profile dictionary should be removed, since maintaining this data on port level is supported by specific plugin only (Mellanox). To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1239606/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1234925] Re: nova-novncproxy logs 'No handlers could be found for logger "nova.openstack.common.rpc.common"' when rabbitmq is unavailable
super old bug, olso.messaging is now where this should be if it still exists ** Changed in: nova Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1234925 Title: nova-novncproxy logs 'No handlers could be found for logger "nova.openstack.common.rpc.common"' when rabbitmq is unavailable Status in OpenStack Compute (Nova): Invalid Bug description: I get 'No handlers could be found for logger "nova.openstack.common.rpc.common"' when rabbitmq host is unavailable. If i add 'logging.basicConfig()' to the top of nova/openstack/common/log.py. I get 'ERROR:nova.openstack.common.rpc.common:AMQP server on x.x.x.x:5671 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 11 seconds." It would seem that the python logging is getting masked or is uninitialize. I believe this should be reproducable with the following: https://github.com/openstack/nova/tree/c64aeee362026c5e83f4c34e6469d59c529eeda7 nova-novncproxy --config-file nova.conf --config-file rabbit.conf nova.conf: [DEFAULT] debug=True verbose=True rabbit.conf: [DEFAULT] rabbit_host = localhost rabbit_port = rabbit_userid = guest rabbit_password = guest WARNING: no 'numpy' module, HyBi protocol will be slower WebSocket server settings: - Listen on 0.0.0.0:6080 - Flash security policy server - Web server. Web root: /usr/share/novnc - No SSL/TLS support (no cert file) - proxying from 0.0.0.0:6080 to ignore:ignore 1: x.x.xx: new handler Process 2: x.x.xx: new handler Process 3: x.x.xx: new handler Process 1: x.x.xx: "GET /vnc_auto.html?token=-5fed-4b05--0f1b4795cdaa HTTP/1.1" 200 - 4: x.x.xx: new handler Process 5: x.x.xx: new handler Process 6: x.x.xx: new handler Process 7: x.x.xx: new handler Process 7: x.x.xx: Plain non-SSL (ws://) WebSocket connection 7: x.x.xx: Version hybi-13, base64: 'True' 7: x.x.xx: Path: '/websockify' No handlers could be found for logger "nova.openstack.common.rpc.common" To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1234925/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
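The symptom is plain Python logging behaviour and can be shown without Nova at all; under Python 2 an unconfigured logger prints the "No handlers could be found" warning instead of the message, and logging.basicConfig() (the reporter's experiment) is the minimal thing that makes the real error visible:

    import logging

    log = logging.getLogger("nova.openstack.common.rpc.common")
    log.error("AMQP server on x.x.x.x:5671 is unreachable")   # swallowed on Python 2

    logging.basicConfig()                                      # reporter's workaround
    log.error("AMQP server on x.x.x.x:5671 is unreachable")   # now reaches stderr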
[Yahoo-eng-team] [Bug 1261551] Re: LXC volume attach does not work
Apparmor issue ** Changed in: nova Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1261551 Title: LXC volume attach does not work Status in OpenStack Compute (Nova): Invalid Bug description: According to the older bug 1009701 (https://bugs.launchpad.net/nova/+bug/1009701), LXC volume attach should begin working with newer versions of libvirt (1.0.1 or 1.0.2). Based on testing with libvirt version 1.1.x, however, I get the following error: libvirtError: Unable to create device /proc/4895/root/dev/sdb: Permission denied To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1261551/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1271706] Re: Misleading warning about MySQL TRADITIONAL mode not being set
seems fixed in nova ** No longer affects: nova -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1271706 Title: Misleading warning about MySQL TRADITIONAL mode not being set Status in OpenStack Telemetry (Ceilometer): Fix Released Status in Orchestration API (Heat): Fix Released Status in OpenStack Identity (Keystone): Fix Released Status in The Oslo library incubator: Fix Released Bug description: common.db.sqlalchemy.session logs a scary warning if create_engine is not being called with mysql_traditional_mode set to True: WARNING keystone.openstack.common.db.sqlalchemy.session [-] This application has not enabled MySQL traditional mode, which means silent data corruption may occur. Please encourage the application developers to enable this mode. That warning is problematic for several reasons: (1) It suggests the wrong mode. Arguably TRADITIONAL is better than the default, but STRICT_ALL_TABLES would actually be more useful. (2) The user has no way to fix the warning. (3) The warning does not take into account that a global sql-mode may in fact have been set via the server-side MySQL configuration, in which case the session *may* in fact be using TRADITIONAL mode all along, despite the warning saying otherwise. This makes (2) even worse. My suggested approach would be: - Remove the warning. - Make the SQL mode a config option. Patches forthcoming. To manage notifications about this bug go to: https://bugs.launchpad.net/ceilometer/+bug/1271706/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
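A sketch of one way an application can pin the session sql_mode itself instead of relying on the server-wide default, using a SQLAlchemy connect event; the connection URL is a placeholder and STRICT_ALL_TABLES is the value the report argues for:

    from sqlalchemy import create_engine, event

    engine = create_engine("mysql://nova:secret@localhost/nova")    # placeholder URL

    @event.listens_for(engine, "connect")
    def _set_sql_mode(dbapi_conn, connection_record):
        cursor = dbapi_conn.cursor()
        cursor.execute("SET SESSION sql_mode = 'STRICT_ALL_TABLES'")
        cursor.close()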