Public bug reported:
While trying to boot and delete instances (via Rally), I saw this stack trace:
2016-08-12 15:21:58.859 38365 ERROR nova.compute.manager [instance:
47fbd50b-94ec-4395-884c-9131e0e3f335] File
"/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/nova/compute/manager
l mapping because of this.
See 'nova/objects/instance_mapping.py'
** Affects: nova
Importance: Undecided
Assignee: Mark Doffman (mjdoffma)
Status: In Progress
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
/openstack/nova/blob/master/nova/scheduler/weights/affinity.py#L49
** Affects: nova
Importance: Low
Assignee: Mark Doffman (mjdoffma)
Status: New
** Tags: scheduler
** Changed in: nova
Importance: Undecided => Low
** Changed in: nova
Assignee: (unassigned) => Mark Doffman (mjdoffma)
'notifications.info' isn't something specific to nova. The notifications
queue is shared between projects.
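As a rough illustration (section and option names from oslo.messaging; exact names vary by release, and the values here are just examples), every project that configures notifications like this publishes onto the same shared 'notifications.*' queues:

```ini
# Hypothetical oslo.messaging notification settings; any project
# (nova, cinder, neutron, ...) using the same topic publishes to the
# same 'notifications.info' queue on the message bus.
[oslo_messaging_notifications]
driver = messagingv2
topics = notifications
```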
** Project changed: nova => oslo.messaging
--
I believe that for now this is invalid. There is code that is superficially
similar between the 'detach' code in manager.py and the block_device
attach function, but there are subtle differences. The code in
manager.py calls roll_detach on failure, which I believe is
inappropriate for the block_device
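A sketch of the asymmetry described above (illustrative only, not nova's actual code; the class and function names here are hypothetical stand-ins): the manager-side detach path rolls the volume state back on driver failure, which the attach path has no reason to do.

```python
# Hypothetical stand-in for the Cinder volume API state machine.
class FakeVolumeAPI:
    def __init__(self):
        self.state = 'attached'

    def begin_detaching(self, volume_id):
        self.state = 'detaching'

    def roll_detaching(self, volume_id):
        # Roll the volume back to 'attached'; appropriate for the
        # manager's detach flow, but arguably not for the attach
        # flow in block_device, which never entered 'detaching'.
        self.state = 'attached'


def detach_volume(volume_api, volume_id, driver_detach):
    """Detach with rollback on driver failure."""
    volume_api.begin_detaching(volume_id)
    try:
        driver_detach(volume_id)
        volume_api.state = 'detached'
    except Exception:
        volume_api.roll_detaching(volume_id)
        raise
```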
If you look at
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L4628
you can see the three different functions used for getting available disk space.
'get_volume_group_info'
'get_pool_info' and
'get_fs_info'
All of these methods are going to return the ACTUAL disk space
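A rough sketch of how the driver dispatches between the three stat functions named above, depending on the configured images_type (the capacities below are made-up placeholders; only the three method names come from the driver):

```python
# Hypothetical dispatch between the three stat functions named above.
# The returned numbers are placeholders; the point is that each
# backend reports the *actual* capacity of its storage, in bytes.

GiB = 1024 ** 3

def get_volume_group_info():
    # LVM backend: capacity of the volume group.
    return {'total': 100 * GiB, 'free': 40 * GiB}

def get_pool_info():
    # RBD backend: capacity of the Ceph pool.
    return {'total': 500 * GiB, 'free': 200 * GiB}

def get_fs_info(path):
    # File-based backends (qcow2/raw): capacity of the filesystem
    # holding the instances directory.
    return {'total': 250 * GiB, 'free': 100 * GiB}

def get_local_gb_info(images_type, instances_path='/var/lib/nova/instances'):
    """Pick the stat function that matches the storage backend."""
    if images_type == 'lvm':
        return get_volume_group_info()
    if images_type == 'rbd':
        return get_pool_info()
    return get_fs_info(instances_path)
```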
*** This bug is a duplicate of bug 1405367 ***
https://bugs.launchpad.net/bugs/1405367
This has been fixed and will be available in the liberty release.
** This bug has been marked a duplicate of bug 1405367
Rbd backend doesn't support disk IO qos
--
Assuming you haven't seen any more of these since changing your Ceph
config, let's close this one as invalid.
** Changed in: nova
Status: Incomplete => Invalid
--
Clearly an issue with novaclient help rather than nova.
** Project changed: nova => python-novaclient
--
https://bugs.launchpad.net/bugs/1497151
https://blueprints.launchpad.net/neutron/+spec/mtu-selection-and-
advertisement
This Neutron blueprint goes into detail about how instances should get
the correct MTU value (via DHCP or otherwise). If the MTU value is not
supplied by Neutron then the instance has no way of determining what
the co
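For the DHCP path, the advertisement mechanism would look roughly like the following dnsmasq option (the MTU value 1450 is a made-up example); DHCP option 26 is the standard interface-MTU option:

```
# Hypothetical dnsmasq option file entry; DHCP option 26
# (interface-mtu) tells the instance what MTU to use.
dhcp-option-force=26,1450
```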
Likely a support request rather than a bug with nova; possibly a
support question or problem with openstack-chef. Could you try asking
for support from openstack-chef?
** Changed in: nova
Status: New => Invalid
--
You probably shouldn't be stopping a VM outside of OpenStack; doing so
is not supported.
** Changed in: nova
Status: New => Invalid
--
Sorry to try and bring this one back to life, but I'm just not sure that
it's really invalid. Marked https://bugs.launchpad.net/nova/+bug/1494617
as a duplicate.
It seems that for the images API this now implements the empty list.
However I think that for the flavors and servers APIs the behavior is
s
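The ambiguity behind the duplicate bug below is easy to show with a toy pagination helper (illustrative only; not nova's code):

```python
# Toy pagination helper showing why an unspecified limit=0 is
# ambiguous: it could plausibly mean "return nothing" or "no limit".

def paginate(items, limit=None):
    if limit is None:
        return list(items)  # no limit requested
    if limit == 0:
        # The images API apparently treats limit=0 as an empty list;
        # whether flavors/servers should do the same is the question.
        return []
    return list(items)[:limit]
```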
*** This bug is a duplicate of bug 794730 ***
https://bugs.launchpad.net/bugs/794730
** This bug has been marked a duplicate of bug 794730
API doesn't specify what limit=0 means
--