Public bug reported:
Kilo code
Reproduce steps:
1. Assume that we have one nova-compute node named 'icm', which is added
to an aggregate named 'zhaoqin':
[root@icm ~]# nova aggregate-details zhaoqin
++-+---+---++
| Id | Name
Public bug reported:
Latest Kilo code.
In inspect_capabilities() of nova/virt/disk/vfs/guestfs.py, the guestfs
API, which is a C extension, hangs the nova-compute process when it is
invoked. This problem results in message queue timeout errors and
instance boot failures.
An example of this prob
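A minimal sketch of one way to avoid this kind of hang (an illustration, not the actual upstream patch), assuming eventlet's thread pool is available in nova-compute: run the blocking libguestfs calls in a native thread so the rest of the green-threaded process keeps servicing RPC.

import guestfs
from eventlet import tpool


def probe_guestfs():
    # tpool.Proxy runs each method of the proxied object in a native
    # OS thread, so launch() can block without freezing the whole
    # green-threaded nova-compute process.
    handle = tpool.Proxy(guestfs.GuestFS())
    try:
        handle.add_drive("/dev/null")
        handle.launch()
    finally:
        handle.close()

With the proxy in place the libguestfs appliance start-up takes just as long, but RPC heartbeats and other green threads keep running, so the message queue timeout described above should not trigger.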
Public bug reported:
Latest Kilo code
Reproduce steps:
1. Do not define any host aggregate; the AZ of the host is 'nova'. Boot one
instance named 'zhaoqin-nova' whose AZ is 'nova'.
2. Create host aggregate 'zhaoqin' whose AZ is 'zhaoqin-az'. Add the host to
the 'zhaoqin' aggregate. Now the AZ of instance 'zhaoqin
It seems that this is already fixed by
https://review.openstack.org/#/c/153092/
** Changed in: nova
Status: New => Invalid
Public bug reported:
oslo.messaging 1.6.0 has moved the notification implementation to the
oslo_messaging directory. The class path in entry_points.txt needs to be
changed accordingly.
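For illustration only (the entry-point group name 'oslo.messaging.notify.drivers' and the check below are assumptions based on oslo.messaging's packaging, not part of this report), a quick way to see which driver class paths still resolve after the move:

import pkg_resources

for ep in pkg_resources.iter_entry_points('oslo.messaging.notify.drivers'):
    try:
        ep.load()
        print('%s -> %s (imports fine)' % (ep.name, ep.module_name))
    except ImportError as err:
        # A stale 'oslo.messaging....' class path fails here once the
        # code has moved to the 'oslo_messaging' package.
        print('%s -> %s (broken: %s)' % (ep.name, ep.module_name, err))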
** Affects: nova
Importance: Undecided
Status: New
Public bug reported:
Latest Kilo code
Reproduce steps:
1. In nova.conf, set compute_driver=libvirt.LibvirtDriver. Start nova-compute.
2. Modify nova.conf to compute_driver=vmwareapi.VMwareVCDriver. Restart nova-compute.
3. Modify nova.conf back to compute_driver=libvirt.LibvirtDriver. Restart nova-co
Public bug reported:
The current workflow of rebooting an instance is:
1. Reboot an active instance. If it succeeds, the instance remains in the
active state.
2. Reboot an active instance. If it fails and power_state is running, the
instance remains in the active state.
3. Reboot an active instance. If it fails and pow
Public bug reported:
In resource_tracker.py, the exception path of _get_host_metrics()
contains a wrong variable name.
for monitor in self.monitors:
    try:
        metrics += monitor.get_metrics(nodename=nodename)
    except Exception:
        LOG.warn(_(
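A hedged reconstruction of how the exception path should look; since the snippet above is cut off, the warning text and the offending name are assumptions based on the description, not the real Nova source:

for monitor in self.monitors:
    try:
        metrics += monitor.get_metrics(nodename=nodename)
    except Exception:
        # The warning should reference the loop variable 'monitor';
        # the message text here is illustrative only.
        LOG.warn(_("Cannot get metrics from monitor %s."), monitor)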
** Also affects: keystone
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/1389985
Title:
CLI will fail one time after restarti
** Also affects: nova
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/1389985
Title:
CLI will fail one time after restarting D
Public bug reported:
1. Change the OpenStack server to the Russian locale, LANG=ru_RU.utf8.
2. Set the Firefox client browser locale to Russian (ru).
3. Trigger an operational failure whose message tries to get written to a
Nova instance fault.
Stacktrace:
2014-10-30 05:55:34.933 18371 TRACE oslo.me
Public bug reported:
When I run a longevity test against Juno code, I notice that the delete
VM operation occasionally fails. The stack trace is:
File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 2507, in _delete_instance
    self._shutdown_instance(context, instance, bdms)
Eventually, I noticed that the x-auth-token parameter in the glance request
HTTP header is still a unicode string. No error occurs after changing it to
a plain string.
>>> "\r\n".join(['PUT /v1/images/ffec0090-a529-4e5a-8a8d-603a1828105a HTTP/1.1', 'Host: 10.104.0.154:9292', 'Accept-Encoding: gzip,
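A minimal sketch of the workaround described above, assuming Python 2 string semantics; the helper name and the token value below are hypothetical, not taken from the glance client code:

def to_native_str(value):
    # Coerce a unicode header value to a UTF-8 byte string so that
    # joining the header lines with a binary body does not trigger an
    # implicit ASCII decode.
    if isinstance(value, unicode):   # Python 2 only
        return value.encode('utf-8')
    return value

headers = {'x-auth-token': to_native_str(u'0123456789abcdef')}  # hypothetical token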
Public bug reported:
When I attempt to create a VM snapshot via the Horizon UI and give the new
snapshot a unicode name (snapshot-ABC一丁七ÇàâアイウДфэبتثअइउ€¥噂ソ十豹竹敷), the
snapshot operation fails in the nova-compute process. The stack trace
is:
2014-10-29 16:11:26.077 3551 ERROR oslo.messaging.rpc.dispa
Public bug reported:
When I post an 'attach interface' request to Nova with an invalid port
id, Nova returns an HTTP 500 error and a confusing error message.
REQ: curl -i
'http://10.104.0.214:8774/v2/b6a08719633d416da2d12265debac838/servers/fd21141e-4fb6-4e2a-9638-f37efe854003/os-interface'
-X
Public bug reported:
Related bug --> https://bugs.launchpad.net/nova/+bug/1363901
When attaching an interface to an instance, Neutron may return several
types of errors to Nova (e.g. PortNotFound, PortInUse, etc.). Nova needs
to translate those errors into the correct HTTP error code, so that the end
user
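A rough sketch of the kind of mapping this asks for; the wrapper and the do_attach callable below are assumptions, not Nova's actual os-interface code, while the exception names come from this report:

from webob import exc

from nova import exception


def attach_interface_guarded(do_attach, *args, **kwargs):
    # do_attach is a hypothetical callable that performs the real attach.
    try:
        return do_attach(*args, **kwargs)
    except exception.PortNotFound as e:
        # Invalid port id -> 404 instead of 500.
        raise exc.HTTPNotFound(explanation=e.format_message())
    except exception.PortInUse as e:
        # Port already in use -> 409 instead of 500.
        raise exc.HTTPConflict(explanation=e.format_message())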
Public bug reported:
There are still many 'require_admin_context' checks defined for DB
operations, which prevent RBAC definitions in policy.json from taking
effect. For example, a user-defined non-admin role for managing quotas will
not be able to modify quota sizes. The plan is to remove
'require_admin_context' from the sqlalchemy module
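A simplified sketch of why such a check defeats policy.json (modelled on, but not copied from, the decorator in the sqlalchemy module): any non-admin context is rejected at the DB layer before policy rules are even consulted.

from nova import exception


def require_admin_context(f):
    # Simplified illustration: the wrapped DB call never runs for a
    # non-admin context, no matter what role policy.json grants.
    def wrapper(context, *args, **kwargs):
        if not context.is_admin:
            raise exception.AdminRequired()
        return f(context, *args, **kwargs)
    return wrapper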
Public bug reported:
When I post an 'attach interface' request to Nova with an invalid network
id, Nova returns an HTTP 500 error which only tells me that the attach
interface operation failed.
REQ: curl -i
'http://10.90.10.24:8774/v2/19abae5746b242d489d1c2862b228d8b/servers/4e863fad-e868-48a1-8735-2da9a3
** Project changed: nova => neutron
** Also affects: nova
Importance: Undecided
Status: New
** Also affects: python-neutronclient
Importance: Undecided
Status: New
Public bug reported:
When I post an 'attach interface' request to Nova with an invalid IP for a
defined network, Nova returns an HTTP 500 error and a confusing error
message.
REQ: curl -i
'http://10.90.10.24:8774/v2/19abae5746b242d489d1c2862b228d8b/servers/4e863fad-e868-48a1-8735-2da9a38561e8/os-i
Public bug reported:
When I post an 'attach interface' request to Nova with an invalid fixed
ip, Nova returns an HTTP 500 error and a confusing error message.
REQ: curl -i
'http://10.90.10.24:8774/v2/19abae5746b242d489d1c2862b228d8b/servers/1b1618fa-ddbd-4fce-aa04-720a72ec7dfe/os-interface'
-X
Public bug reported:
When I post an 'attach interface' request to Nova with an in-use fixed
IP, Nova returns an HTTP 500 error and a confusing error message.
REQ: curl -i
'http://10.90.10.24:8774/v2/19abae5746b242d489d1c2862b228d8b/servers/b5cdb8f7-2350-4e28-bf75-7a696dfba73a/os-interface'
-X
Public bug reported:
OS version: RHEL 6.5
libvirt version: libvirt-0.10.2-29.el6_5.9.x86_64
When I attempt to live migrate my KVM instance using the latest Juno code
on RHEL 6.5, I notice a nova-compute error on the source compute node:
2014-08-27 09:24:41.836 26638 ERROR nova.virt.libvirt.driver [-]
[in
Public bug reported:
The logic of _modify_rules() does not seem correct. For instance, assume
that we have an in-memory table like this:
:bn-chain001 - [0:0]
:chain002 - [0:0]
[0:0] -A bn-chain001 rule001
[0:0] -A chain002 rule002
and iptables-save output like this:
# Generated by zhaoqin on mars
Public bug reported:
An ENOENT error breaks update_status() during PowerKVM testing. It appears
to be a bug in the libvirt driver.
Exception log:
2014-08-15 16:03:59.038 42817 ERROR nova.openstack.common.periodic_task [-]
Error during PowerVCComputeManager.update_available_resource: [Errno 2] No such
fil
Public bug reported:
When 10 users start to provision VMs to a vCenter, OpenStack chooses the
same datastore for every one of them.
After the first clone task is complete, OpenStack recognizes that the
datastore's space usage has increased and will choose another datastore.
However, all of the next 9 provision t
Public bug reported:
When a VM with an attached iSCSI disk fails to migrate, the rollback
method does not detach the disk from the target host. What happens is that
_lookup_by_name() fails, since the VM does not exist on the target host.
In detach_volume(), it is supposed to print a warning based on the
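A hedged sketch of the rollback behaviour this report argues for: a failed domain lookup on the target host should not stop the iSCSI cleanup. The wrapper below is illustrative; _lookup_by_name and _disconnect_volume mirror the libvirt driver named above, the rest is an assumption.

import logging

from nova import exception

LOG = logging.getLogger(__name__)


def rollback_detach_sketch(driver, connection_info, instance_name, disk_dev):
    try:
        driver._lookup_by_name(instance_name)
    except exception.InstanceNotFound:
        # The VM never landed on this host; warn and keep going so the
        # volume connection is still torn down.
        LOG.warning("Instance %s not found on target host; "
                    "disconnecting volume anyway.", instance_name)
    driver._disconnect_volume(connection_info, disk_dev)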
Public bug reported:
Python stack
(gdb) py-bt
#4 file '/usr/lib64/python2.6/subprocess.py', in '_eintr_retry_call'
#8 file '/usr/lib64/python2.6/subprocess.py', in '_execute_child'
#11 file '/usr/lib64/python2.6/subprocess.py', in '__init__'
#18 file '/usr/lib/python2.6/site-packages/eventlet/gre
*** This bug is a duplicate of bug 1270304 ***
https://bugs.launchpad.net/bugs/1270304
It seems that the process that was operating on the VM disk file is
nova 14765 27141 0 Apr18 ? 00:01:18 /usr/libexec/qemu-kvm
-global virtio-blk-pci.scsi=off -nodefconfig -nodefaults -nographic
-machine
Public bug reported:
2014-03-20 16:48:43.013 4979 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1019, in _cleanup_resize
2014-03-20 16:48:43.013 4979 TRACE nova.openstack.common.rpc.amqp     utils.execute('rm', '-rf', target, de
Hi Chen Zheng, I was not able to reproduce your problem today. Here is
what I did:
1. Create one controller and two compute nodes (zhaoqin-RHEL-GPFS-tmp and
zhaoqin-RHEL-GPFS-tmp1).
2. Create two host groups:
[root@zhaoqin-RHEL-GPFS-tmp chaochin]# nova aggregate-details gpfs
[root@zhaoqin-RHEL-GPFS-tmp chaochin]# nova aggregate-details gpfs
++--+---