did you mean to file this against cinder, or did you file it for the disk quota in the flavor?
https://github.com/openstack/nova/blob/master/nova/api/validation/extra_specs/quota.py#L149-L167
i assume for the first link
https://docs.openstack.org/cinder/latest/admin/basic-volume-qos.html
that this sh
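For reference, both mechanisms the links above point at can be set from the CLI; the QoS spec, volume type, and flavor names below are placeholders and the values are purely illustrative:

```shell
# cinder: front-end QoS applied via a volume type (per the linked basic-volume-qos doc)
openstack volume qos create --consumer front-end \
    --property read_iops_sec=2000 my-qos
openstack volume qos associate my-qos my-volume-type

# nova: per-flavor disk I/O quota extra specs (validated by the linked quota.py)
openstack flavor set --property quota:disk_read_bytes_sec=10240000 my-flavor
```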
i'm torn between considering this a wishlist bug or a feature request.
i think this is perhaps related to the resource provider mappings
with this configuration
[devices]
enabled_vgpu_types = nvidia-474,nvidia-475,nvidia-476
[vgpu_nvidia-474]
device_addresses = :61:00.4,:61:01.0
[vgpu_nvid
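For reference, a complete version of that configuration pattern might look like the following; the last two sections and their PCI addresses are assumptions added to illustrate the one-section-per-type layout, with 0000 as the default PCI domain:

```ini
[devices]
enabled_vgpu_types = nvidia-474,nvidia-475,nvidia-476

[vgpu_nvidia-474]
device_addresses = 0000:61:00.4,0000:61:01.0

[vgpu_nvidia-475]
device_addresses = 0000:61:00.5

[vgpu_nvidia-476]
device_addresses = 0000:61:00.6
```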
Public bug reported:
When creating ~250 ports on a single subnet we currently take around 140
seconds.
This duration increased significantly with the introduction of a fix for
https://bugs.launchpad.net/neutron/+bug/1865891, which exclusively locks a
subnet when a port is created or updated on it.
Reviewed: https://review.opendev.org/c/openstack/ovn-octavia-provider/+/875400
Committed:
https://opendev.org/openstack/ovn-octavia-provider/commit/cc30eae60c57a5b46e3e564875b67d7bb5edfff5
Submitter: "Zuul (22348)"
Branch: master
commit cc30eae60c57a5b46e3e564875b67d7bb5edfff5
Author: Fernand
Public bug reported:
Bug originally found by Alex Katz and reported in the bugzilla:
https://bugzilla.redhat.com/show_bug.cgi?id=2149713
Description of problem:
When a stateless security group is attached to the instance it fails to fetch
metadata info. An explicit rule is required to allow meta
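A sketch of the kind of explicit rule that works around this, assuming a stateless group named my-stateless-sg (a placeholder); stateless groups do not track connections, so return traffic from the metadata address has to be allowed explicitly:

```shell
openstack security group rule create --ingress --protocol tcp \
    --remote-ip 169.254.169.254/32 my-stateless-sg
```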
This is expected behavior. On Ubuntu 18.04, on OpenStack, the network
was configured using the fallback network interface only. In Ubuntu
20.04, it was updated to use network_data.json. Since changing this
behavior would have resulted in backward incompatibility, it is not
the default behavior on 18.04.
Public bug reported:
For a couple of weeks we have had a problem in our production environment
when restarting our l3-agent. (Our assumption is that this might have
something to do with our upgrade to Wallaby, as we never saw this
problem on prior releases.)
The l3 agent is hosting around 300
nova does not impose any limits on the MIG profiles or mdev types that
can be used.
this is a hardware limitation of the nvidia gpus, not a nova limitation.
you are using an A100, which does support using more than one type, but only
for specific types, which they have documented in their product docs.
if
** Also affects: nova (Ubuntu)
Importance: Undecided
Status: New
** No longer affects: nova (Ubuntu)
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2008883
Titl
this sounds like an issue with your networking configuration.
nova is not involved in any data copy for live migration (RAM or disk);
it's entirely handled by libvirt and qemu.
the libvirt live_migration_bandwidth option is a maximum limit on how much bandwidth
can be used by libvirt/qemu to do the data t
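For reference, the option lives in nova.conf under [libvirt] and is expressed in MiB/s; 0 (the default) means the hypervisor picks its own value:

```ini
[libvirt]
# maximum bandwidth libvirt/qemu may use for migration data; 0 = hypervisor default
live_migration_bandwidth = 0
```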