** Changed in: tempest
Status: Fix Committed => Fix Released
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1646002
Title:
periodic-tempest-dsvm-neutron-full-ssh-
** Also affects: openstack-gate
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1656386
Title:
Reduce neutron services' memory footprint
Statu
I marked this as incomplete from a Tempest POV - I couldn't find
anything wrong with the Tempest tests that seem to trigger this, apart
from the fact that they sometimes trigger what looks like a libvirt issue.
** Also affects: libvirt
Importance: Undecided
Status: New
** Changed in: tempest
** Also affects: nova
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1646779
Title:
Cannot connect to libvirt
Status in Open
** Changed in: devstack
Status: In Progress => Fix Released
** Changed in: tempest
Status: Fix Released => Fix Committed
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpa
The root disk for the cirros image is blank before boot.
The boot process starts from initrd. The file system in initrd is then
copied to /dev/vda and boot continues from there. Injection happens
before boot, thus there's no /etc folder found.
The test should inject to / instead.
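For context, a minimal sketch of the suggested change, assuming
python-novaclient's personality-file support; the server name, the image
and flavor IDs and the 'sess' session object are placeholders, not taken
from the actual test:

    # Inject to a path directly under /, not under /etc, since /etc does
    # not exist on the blank cirros root disk before boot.
    from novaclient import client

    nova = client.Client('2.1', session=sess)  # 'sess': authenticated keystoneauth1 session (assumed)
    server = nova.servers.create(
        name='file-injection-demo',            # placeholder name
        image=cirros_image_id,                 # assumed cirros image UUID
        flavor=flavor_id,                      # assumed flavor
        files={'/injected.txt': 'test data'},  # inject to /, not /etc/...
    )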
** Also affect
This is a very old bug. Anna set it back to new in Aug 2016 as it may be
related to https://bugs.launchpad.net/mos/+bug/1606218, which has been fixed
since.
Hence I will set this to invalid. If someone hits an ssh bug in the gate again,
please file a new bug.
** Changed in: tempest
Status:
Public bug reported:
I hit the following deadlock in a dsvm job:
http://paste.openstack.org/show/476503/
The full log is here:
http://logs.openstack.org/00/234200/5/experimental/gate-tempest-dsvm-neutron-full-test-accounts/4dccd24/logs/screen-n-api.txt.gz#_2015-10-16_13_23_36_379
The exception
The cause of the degradation could be anywhere - keystone or any other
co-located service hitting your devstack node's resources. The
only way Tempest could make things worse is by not cleaning up test
resources properly, which still would hardly justify the slowdown.
Such a slowdown is worry
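On the cleanup point: a minimal sketch of the usual pattern for not
leaking test resources, using unittest's addCleanup with python-novaclient;
the 'nova' client and the image/flavor IDs below are placeholders:

    import unittest

    class ServerCleanupExample(unittest.TestCase):
        def test_server_is_cleaned_up(self):
            # 'nova', 'image_id' and 'flavor_id' are assumed to exist.
            server = nova.servers.create(name='cleanup-demo',
                                         image=image_id,
                                         flavor=flavor_id)
            # Register the delete right after creation so the server is
            # released even if a later assertion fails or times out.
            self.addCleanup(nova.servers.delete, server.id)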
Public bug reported:
Since May 15th 2015 I sometimes see failures in both check and gate
pipelines with "Failure prepping block device in the gate" and the
following common signature:
http://logstash.openstack.org/#eyJzZWFyY2giOiJcImJsb2NrX2RldmljZV9vYmpba2V5XSA9IGRiX2Jsb2NrX2RldmljZVtrZXldXCIiLC
My two cents: all Python bindings should rely on python-keystoneclient
for obtaining the token and endpoint URL from the catalogue, rather than
duplicating the logic to get a token and parse the catalogue, as the
cinder and nova clients do. This will make it easier in the future to
support new versions of the identity API.
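Roughly, the pattern being argued for looks like the sketch below; it uses
keystoneauth1 sessions (the modern route through python-keystoneclient),
and all endpoints and credentials are placeholders:

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from novaclient import client as nova_client
    from cinderclient import client as cinder_client

    # One place authenticates and parses the service catalogue ...
    auth = v3.Password(auth_url='http://keystone.example.com:5000/v3',
                       username='demo', password='secret',
                       project_name='demo',
                       user_domain_id='default', project_domain_id='default')
    sess = session.Session(auth=auth)

    # ... and the other clients just reuse that session instead of each
    # re-implementing token handling and catalogue parsing.
    nova = nova_client.Client('2.1', session=sess)
    cinder = cinder_client.Client('3', session=sess)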
Importance: Undecided
Status: New
** Bug watch added: code.google.com/p/httplib2/issues #143
http://code.google.com/p/httplib2/issues/detail?id=143
** Changed in: tempest
Assignee: (unassigned) => Andrea Frittoli (andrea-frittoli)
--
You received this bug notification because you are a member
Public bug reported:
The response to the token API in the v2 API is not consistent between
JSON and XML.
In JSON the format is as follows:
"serviceCatalog": [
{
"endpoints": [
{
"adminURL":
"http://127.0.0.1:8774/v2/
Public bug reported:
In the grenade test [0] for a bp I'm working on, the ServerRescueTestXML
rescue_unrescue test failed because the VM did not get into the RESCUE state
in time. It seems that the test is flaky.
From the tempest log [1] I see the sequence VM ACTIVE, RESCUE issued,
WAIT, timeout, DELET
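For reference, a rough sketch of the wait involved, using python-novaclient;
the helper name, the timeout value and the 'nova'/'server_id' objects are
made up for illustration:

    import time

    def wait_for_status(nova, server_id, status, timeout=300, interval=5):
        # Poll the server until it reaches 'status' or the timeout expires.
        deadline = time.time() + timeout
        while time.time() < deadline:
            server = nova.servers.get(server_id)
            if server.status == status:
                return server
            time.sleep(interval)
        raise RuntimeError('server %s did not reach %s within %ss'
                           % (server_id, status, timeout))

    server = nova.servers.get(server_id)        # assumed ACTIVE at this point
    server.rescue()                             # issue the rescue
    wait_for_status(nova, server.id, 'RESCUE')  # this is what times out here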