Searching for
failed to reach ACTIVE status and task state "None"
shows a lot of different bug tickets. This does not seem like a tempest
bug.
** Changed in: tempest
Status: New => Invalid
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which i
This is still hitting persistently but not that often. I think this is
more likely a bug in neutron than in tempest so marking accordingly.
Please reopen in tempest if more evidence appears.
** Changed in: neutron
Status: New => Confirmed
** Changed in: tempest
Status: New => Invalid
This looks real though there is only one hit in the past 8 days and the
log is not available. The test immediately preceding this failure has an
addCleanup that unrescues:
    def _unrescue(self, server_id):
        resp, body = self.servers_client.unrescue_server(server_id)
        self.assertEqual(202, resp.status)
I don't see any hits for this in logstash. There is nothing unusual
about this test and it is surrounded by similar tests that pass. So
there must be some issue in keystone that is causing the admin
credentials to be rejected here.
** Changed in: tempest
Status: New => Invalid
** Also affe
Tempest does check for token expiry and no test should fail due to an
expired token. So this must be a keystone issue. I just looked at
another bug that got an unauthorized error for one of the keystone tests
with no explanation, and I added keystone to that ticket as well:
https://bugs.launchpad.net/keyston
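The kind of expiry check described above can be sketched as follows. This is an illustration only, not tempest's actual code: the helper name and the 60-second skew are assumptions. The idea is to treat a token as expired slightly before its stated expiry so an in-flight request never carries a stale token.

```python
from datetime import datetime, timedelta, timezone

def token_expired(expires_iso, skew_seconds=60):
    """Return True if a token's ISO-8601 expiry time is within
    skew_seconds of now (i.e. the token should be refreshed)."""
    expires = datetime.fromisoformat(expires_iso.replace("Z", "+00:00"))
    return expires - timedelta(seconds=skew_seconds) <= datetime.now(timezone.utc)
```

If a client performs a check like this before every request, an unauthorized response mid-run points at the server side rather than at token reuse.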
*** This bug is a duplicate of bug 1260537 ***
https://bugs.launchpad.net/bugs/1260537
None is available I'm afraid. This is not a bug in tempest and this
ticket https://bugs.launchpad.net/tempest/+bug/1260537 is used to track
such things for whatever good it does.
** This bug has been marked a duplicate of bug 1260537
This must be a race of some sort in tempest or neutron but I'm not sure
which.
** Also affects: neutron
Importance: Undecided
Status: New
Looks like some kind of nova issue.
** Changed in: tempest
Status: New => Invalid
** Also affects: nova
Importance: Undecided
Status: New
This happened once in the last week. So it is real but not common. I am
assuming this is a nova issue and not tempest. Please reopen if there is
evidence to the contrary.
** Changed in: tempest
Status: New => Invalid
** Also affects: nova
Importance: Undecided
Status: New
This is still showing up in logstash
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQm90b1NlcnZlckVycm9yOiBCb3RvU2VydmVyRXJyb3I6IDUwMCBJbnRlcm5hbCBTZXJ2ZXIgRXJyb3JcIiBBTkQgTk9UIGJ1aWxkX2JyYW5jaDpcInN0YWJsZS9oYXZhbmFcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcG
I am not seeing how this is a bug in tempest. Tempest is deleting the vm
only after nova reports that the volume is 'in-use' which seems fine.
It would be nice if there were a backtrace, log, or something associated
with this ticket. This might be a cinder issue, but more likely nova, and
the test is
I really don't know if this is a problem with glance or devstack or
something else.
** Changed in: tempest
Status: New => Invalid
** Also affects: glance
Importance: Undecided
Status: New
The proximate cause of these errors is that, at the time this bug was
reported, we tried to grab the console in an erroneous way that caused a
400. That issue was fixed, and I would just close this as another random
"server failed to boot" issue except that right before the failed server
create, ther
The same issue is showing up in current jobs
http://logs.openstack.org/87/44287/9/check/gate-tempest-devstack-vm-
full/c3a07eb/logs/screen-n-cond.txt.gz
** Changed in: nova
Status: Fix Released => New
OK, I created https://bugs.launchpad.net/nova/+bug/1233789
** Changed in: nova
Status: New => Fix Released
https://bugs.launchpad.net/bugs/1084706
One of these came from logs.openstack.org/94/58494/3/check/check-
tempest-devstack-vm-full/2ef9650/logs/screen-n-cpu.txt.gz
But this was probably from a "check" run on a proposed commit. I have
been scanning all builds to finalize the whitelist for log errors.
Closing this ticket for now.
** Chan
Public bug reported:
From the log file for this change
http://logs.openstack.org/33/59533/1/gate/gate-tempest-dsvm-
full/1f2c988/console.html
2013-12-03 21:30:31.851 22827 ERROR swiftclient [-] Object GET failed:
http://127.0.0.1:8080/v1/AUTH_4d22a858761e4b90b536f489ccff34ca/glance/a6c33fc7-48
*** This bug is a duplicate of bug 1258848 ***
https://bugs.launchpad.net/bugs/1258848
Please include a pointer to the log file for such reports. According to
logstash this has hit 48 times in the last two weeks which is a very low
failure rate. Ideally flaky bugs like this would be fixed. If
This non-white-listed error showed up in n-cpu:
2013-11-27 00:53:57.756 ERROR nova.virt.libvirt.driver [req-
298cf8f1-3907-4494-8b6e-61e9b88dfded ListImageFiltersTestXML-
tempest-656023876-user ListImageFiltersTestXML-tempest-656023876-tenant]
An error occurred while enabling hairpin mode on domai
This shows up in n-cpu:
The "model server went away" showed up 11 times in the last two weeks
with the last one being on Dec. 3. This sample size is too small for me
to close at this time.
2013-11-25 15:24:22.099 21076 ERROR nova.servicegroup.drivers.db [-] model
server went away
2013-11-25 15:2
Public bug reported:
Lots of these slipped in during the current log checking outage:
2014-01-14 00:05:15.220 | 2014-01-14 00:04:13.658 26807 ERROR
nova.virt.driver [-] Exception dispatching event
: Info cache for
instance a0896255-1e5d-477d-9d16-0ab69687ba41 could not be found.
From
http://
Public bug reported:
logstash showed several dozen examples of this in the last week,
searching for
"u'port_state': u'BUILD'"
tempest.api.compute.servers.test_attach_interfaces.AttachInterfacesTestJSON.test_create_list_show_delete_interfaces[gate,network,smoke]
2014-12-29 23:02:14.022 |
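The failure mode here is a port that stays in BUILD instead of reaching ACTIVE before the test gives up. A generic polling loop of the kind such tests rely on looks roughly like this (the function name, timeout, and interval are illustrative, not tempest's actual code):

```python
import time

def wait_for_port_active(get_state, timeout=60, interval=2):
    """Poll a port's state until it reaches ACTIVE, or raise on timeout."""
    deadline = time.time() + timeout
    state = get_state()
    while state != "ACTIVE":
        if time.time() >= deadline:
            # Matches the observed failure: port stuck in BUILD past the timeout
            raise TimeoutError("port stuck in state %r" % state)
        time.sleep(interval)
        state = get_state()
    return state
```

When neutron is slow to wire the interface, the loop exhausts its deadline while the state is still BUILD, which is what the logstash query above is matching.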
--
Public bug reported:
A gate job failed with this error in the n-cpu log. The job is
http://logs.openstack.org/13/145713/7/gate/gate-tempest-dsvm-
full/71a2280/
>From http://logs.openstack.org/13/145713/7/gate/gate-tempest-dsvm-
full/71a2280/logs/screen-n-cpu.txt.gz
2015-01-13 12:24:35.148 29194
This error is coming from the nova compute log here.
http://logs.openstack.org/10/115110/20/check/check-tempest-dsvm-neutron-
pg-full-2/3c885b8/logs/screen-n-cpu.txt.gz
2015-01-16 01:53:19.798 ERROR oslo.messaging.rpc.dispatcher
[req-6bd7d570-7e04-4118-9547-6f8b6fdd67fa TestMinimumBasicScenario-
Public bug reported:
Thousands of matches in the last two days:
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiTm8gdmFsaWQgaG9zdCB3YXMgZm91bmQuIFRoZXJlIGFyZSBub3QgZW5vdWdoIGhvc3RzIGF2YWlsYWJsZVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsIn
While it is true that this particular sub-case of the bug title has only
one patch responsible, there are many other patches shown in logstash
that could not possibly have caused this problem but which experience it.
So this seems to be a problem that can randomly impact any patch, though
it may be difficult
** Changed in: tempest
Status: New => Invalid
https://bugs.launchpad.net/bugs/1441745
Title:
Lots of gate failures with "not enough hosts available"
Public bug reported:
A tempest test was failing because it was trying to filter servers by an
address from an IPv6 subnet without using the 'ip6' query param. But the
fix to use 'ip6' failed because all servers are returned instead of just
the one with that IPv6 address.
This is most easily seen b
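The parameter distinction behind this failure can be sketched as follows. The helper name is illustrative; the underlying fact (nova's list-servers call takes 'ip' for IPv4 filters and 'ip6' for IPv6 ones) comes from the report above.

```python
import ipaddress

def server_filter_params(addr):
    """Build the list-servers query dict for an address filter,
    choosing 'ip6' for IPv6 addresses and 'ip' for IPv4."""
    key = "ip6" if ipaddress.ip_address(addr).version == 6 else "ip"
    return {key: addr}
```

The bug reported here is that even with the correct 'ip6' key, the server side ignores the filter and returns every server.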
Public bug reported:
These tests are failing many times starting around 10:00 December 11
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwidGVtcGVzdC5zY2VuYXJpby50ZXN0X25ldHdvcmtfdjYuVGVzdEdldHRpbmdBZGRyZXNzLnRlc3RfZGhjcDZfc3RhdGVsZXNzX2Zyb21fb3NcIiBBTkQgbWVzc2FnZTpGQUlMRUQiLCJmaWVsZHMi
This was a tempest bug.
** Changed in: neutron
Status: New => Invalid
https://bugs.launchpad.net/bugs/1401900
Title:
Gate jobs failing with "Multiple possible netwo
This was fixed a few months ago
https://github.com/openstack/tempest/commit/69bcb82a7fdeda2fdaf664a238a4ecbbf7cc58c9
** Changed in: tempest
Status: New => Fix Released
Public bug reported:
I installed a vanilla devstack except for setting SERVICE_HOST in
localrc so I could run tempest from another machine. Tempest fails
trying to connect to adminURL and it seems to be because port 35357 is
only open locally. The conf file comment says:
# The base admin endpoint
This issue is caused by keystone listening globally for the public url
(port 5000) but only on localhost for 35357. I poked a little more and
found the cause.
Setting SERVICE_HOST in localrc causes devstack to produce these values
in keystone.conf:
admin_bind_host = dkranz-devstack
admin_endpoint
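For illustration only (the hostname and option values are taken from the comment above; the exact file devstack generates may differ), the resulting bind configuration looks something like:

```ini
# keystone.conf fragment with SERVICE_HOST set in localrc (illustrative)
[DEFAULT]
# public API: bound on all interfaces, so port 5000 is reachable remotely
public_bind_host = 0.0.0.0
# admin API: bound only to the SERVICE_HOST address, so port 35357 is
# not reachable from other machines
admin_bind_host = dkranz-devstack
```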
** Changed in: tempest
Status: New => Invalid
https://bugs.launchpad.net/bugs/1282266
Title:
enabled attribute missing from GET /v3/endpoints
logstash shows this happening a lot, about 7/8 of the time in a cells
(non-voting) run. I searched for
"The server has either erred or is incapable of performing the requested
operation. (HTTP 500)" as the message.
Don't see how this could be related to tempest.
** Also affects: nova
Importa
** Changed in: neutron
Status: New => Invalid
** Changed in: tempest
Importance: Undecided => High
** Changed in: tempest
Assignee: (unassigned) => David Kranz (david-kranz)
This happens quite a lot and seems to be triggered by this error in the
n-cpu log:
2014-03-19 16:46:40.209 ERROR nova.network.neutronv2.api [req-
6afb4d61-2c01-43d7-9caf-fdda126f7497 TestVolumeBootPatternV2-1956299643
TestVolumeBootPatternV2-224738568] Failed to delete neutron port
914b04aa-7f0e-4
Public bug reported:
There was a bug in tempest that caused a call to DELETE os-server-groups
with a bad id. Here is the call from the tempest log:
2014-06-25 12:07:03.162 25653 INFO tempest.common.rest_client [-]
Request (ServerGroupTestJSON:tearDownClass): 200 DELETE
http://127.0.0.1:8774/v2/2b
Public bug reported:
On a system with both v2/v3 (devstack), using 35357 shows only v3.
5000:
xmlns="http://docs.openstack.org/identity/api/v2.0">
<link href="http://devstack-neutron:5000/v3/" rel="self"/>
<link href="http://devstack-neutron:5000/v2.0/" rel="self"/>
<link href="http://docs.openstack.org/" type="text/html
Public bug reported:
It seems that when you allocate a floating-ip in a tenant with
nova-network, its quota is never returned after calling 'nova
floating-ip-delete', even though 'nova floating-ip-list' shows it gone.
This behavior applies to each tenant individually. The gate tests are
passing b