[Expired for OpenStack Identity (keystone) because there has been no
activity for 60 days.]
** Changed in: keystone
Status: Incomplete => Expired
Public bug reported:
tempest-horizon job in stable/rocky is failing.
https://review.opendev.org/#/c/716541/
tempest-horizon commit 48bde4d38f0b76c5694c3c6e10e1b96056353ec2 dropped Python
2.7 support, but the tempest job in stable/rocky runs on Python 2.7, so it
looks like we should not drop Python 2.7 support there.
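For reference (this is how the community-wide Python 2 drop is usually
expressed, not necessarily the exact content of that commit), the support
drop typically lands as a python-requires bump in setup.cfg, which makes
pip refuse to install the plugin on a Python 2.7 node:

    # setup.cfg -- illustrative snippet; the metadata in commit 48bde4d3
    # may differ in detail
    [metadata]
    name = tempest-horizon
    python-requires = >=3.6
    classifier =
        Programming Language :: Python :: 3
        Programming Language :: Python :: 3.6
        Programming Language :: Python :: 3.7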
I've just closed out the openstack/nova change as this isn't fixable on
the n-cpu side at the moment without an idempotent connection_info
refresh API.
We can, however, fix this in openstack/cinder by forcing the NFS c-vol
driver to update the saved connection_info during the snapshot, allowing
n-cpu to pick up the updated connection_info.
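Rough sketch of that cinder-side idea only; the helper name and the
connection_info layout below are assumptions for illustration, not the
actual NFS driver API:

    # Hypothetical sketch: after an online snapshot the active file of the
    # volume changes, so the saved connection_info has to point at the new
    # file for nova-compute to use it on the next attach/refresh.
    def _refresh_attachment_connection_info(attachment, new_active_file):
        connection_info = dict(attachment.connection_info or {})
        data = dict(connection_info.get('data', {}))
        data['name'] = new_active_file   # e.g. volume-<uuid>.<snapshot-uuid>
        connection_info['data'] = data
        attachment.connection_info = connection_info
        attachment.save()                # persist the updated connection_info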
Public bug reported:
The current implementation of external ports in OVN does not account
for the VNIC type VNIC_DIRECT_PHYSICAL. In OVN terms, both VNIC_DIRECT
and VNIC_DIRECT_PHYSICAL should be treated exactly the same.
** Affects: neutron
Importance: High
Assignee: Lucas Alvares Gomes (
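A minimal sketch of the direction of the fix; the helper below is
hypothetical, but the VNIC type constants are the ones neutron-lib
provides:

    from neutron_lib.api.definitions import portbindings

    # VNIC types whose traffic bypasses the local ovs bridge, so OVN should
    # treat the port as "external" and schedule it on a gateway chassis.
    EXTERNAL_PORT_VNIC_TYPES = (
        portbindings.VNIC_DIRECT,
        portbindings.VNIC_DIRECT_PHYSICAL,
    )

    def is_external_port_candidate(port):
        vnic_type = port.get(portbindings.VNIC_TYPE, portbindings.VNIC_NORMAL)
        return vnic_type in EXTERNAL_PORT_VNIC_TYPES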
** Also affects: nova/stein
Importance: Undecided
Status: New
** Also affects: nova/ocata
Importance: Undecided
Status: New
** Also affects: nova/pike
Importance: Undecided
Status: New
** Also affects: nova/ussuri
Importance: High
Assignee: Mark Goddard (mgo
Public bug reported:
Description
===========
It seems that when the nova-compute process runs an I/O intensive task on a
busy file system, it can become stuck and get disconnected from the RabbitMQ
cluster.
From my understanding, nova-compute does not use true OS multithreading,
but internal Python multi-tasking.
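To make that concrete: nova-compute runs on eventlet green threads, so a
single blocking filesystem call can starve every other greenthread in the
process, including the one servicing AMQP heartbeats. A generic (not
nova-specific) illustration of the usual mitigation, pushing blocking I/O
into eventlet's native thread pool:

    import eventlet
    eventlet.monkey_patch()

    from eventlet import tpool

    def _read_file(path):
        # Plain blocking read; run directly in a greenthread this would
        # stall the whole process (heartbeats included) until it returns.
        with open(path, 'rb') as f:
            return f.read()

    def read_file_without_blocking_the_hub(path):
        # tpool.execute() runs the call in a real OS thread, so other
        # greenthreads (periodic tasks, RabbitMQ heartbeats) keep running.
        return tpool.execute(_read_file, path)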
As per Dan's comments on https://review.opendev.org/171984, this is
not a bug, as the behavior could be useful in some cases.
** Changed in: nova
Status: New => Invalid
Reviewed: https://review.opendev.org/718292
Committed:
https://git.openstack.org/cgit/openstack/glance/commit/?id=7c4eda8f62b74fec039214c150709db488242a79
Submitter: Zuul
Branch: master
commit 7c4eda8f62b74fec039214c150709db488242a79
Author: khashf
Date: Tue Apr 7 18:12:40 2020 -0700
Since the reporter cannot reproduce the problem, I'm closing the bug
as Invalid. Please reset the bug state to New if you can reproduce it
again.
** Changed in: nova
Status: New => Invalid
This has not happened in the last 30 days, so I'm closing this bug as
Invalid.
** Changed in: nova
Status: New => Invalid
Based on the last comment from Allison, the solution is to configure
oslo_middleware properly to allow bigger HTTP request bodies:
[oslo_middleware]/max_request_body_size
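For example (the value here is only an illustration; size it to what the
deployment actually needs), in the affected service's configuration file:

    [oslo_middleware]
    # Maximum HTTP request body size in bytes; larger requests are rejected
    # by the sizelimit middleware with a 413 error.
    max_request_body_size = 262144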
So I'm closing this bug as Invalid.
** Changed in: nova
Status: New => Invalid
If you are using pci_alias in the flavor, that means you are passing
through PCI devices. In this case nova does not know (and does not care)
whether the PCI device happens to be an SR-IOV VF, an FPGA, or a GPU.
Therefore nova will not configure anything SR-IOV specific on the
passed-through PCI device.
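To illustrate (the vendor/product IDs and names below are made-up
examples, not taken from this report): with a plain alias the device is
requested generically, and nova does no port- or SR-IOV-specific wiring:

    # nova.conf (example values)
    [pci]
    passthrough_whitelist = {"vendor_id": "8086", "product_id": "154d"}
    alias = {"vendor_id": "8086", "product_id": "154d", "device_type": "type-VF", "name": "my-vf"}

    # Flavor requesting one such device as generic PCI passthrough:
    #   openstack flavor set my-flavor --property "pci_passthrough:alias"="my-vf:1"

If SR-IOV specific handling is needed, that is requested through a neutron
port with vnic_type=direct rather than through a PCI alias.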
So I'm settin
Reviewed: https://review.opendev.org/718307
Committed:
https://git.openstack.org/cgit/openstack/glance/commit/?id=b6d61446321926b01b55688653329842a059e05b
Submitter: Zuul
Branch: master
commit b6d61446321926b01b55688653329842a059e05b
Author: khashf
Date: Tue Apr 7 21:51:00 2020 -0700