This is not a bug; this is intentional behavior added by https://github.com/openstack/nova/commit/26c41eccade6412f61f9a8721d853b545061adcc to address https://bugs.launchpad.net/nova/+bug/1633120.
** Changed in: nova
   Status: New => Won't Fix

https://bugs.launchpad.net/bugs/2026831

Title:
  Table nova/pci_devices is not updated after removing attached SRIOV port

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:

  Description
  ===========
  When I create an SR-IOV port and attach it to an instance, the nova.pci_devices DB table is updated correctly and the status of the VF changes from "available" to "allocated". If I detach the port from the instance, the VF's status is also correctly reverted to "available". However, if the port is deleted before it is detached from the instance, the VF's status stays "allocated" in nova.pci_devices, which makes the VF unusable.

  Steps to reproduce
  ==================
  1) Create an SR-IOV port in OpenStack (VNIC type = direct) and attach it to a VM
  2) Delete the SR-IOV port from OpenStack without detaching it from the VM first

  Expected result
  ===============
  1) VF detached from the VM
  2) VF's status in the database (nova.pci_devices) changed to "available"

  Actual result
  =============
  1) VF detached from the VM
  2) VF's status in the database (nova.pci_devices) is NOT changed to "available"; it stays "allocated"

  Environment
  ===========
  1. OpenStack version: Yoga
     rpm -qa | grep nova
     python3-novaclient-17.7.0-1.el8.noarch
     openstack-nova-conductor-25.2.0-1.el8.noarch
     python3-nova-25.2.0-1.el8.noarch
     openstack-nova-common-25.2.0-1.el8.noarch
     openstack-nova-scheduler-25.2.0-1.el8.noarch
     openstack-nova-api-25.2.0-1.el8.noarch
     openstack-nova-novncproxy-25.2.0-1.el8.noarch
  2. Which hypervisor did you use? Libvirt + KVM
     Versions: libvirt-7.6.0-6.el8.x86_64, qemu-kvm-6.0.0-33.el8.x86_64
  3. Which storage type did you use? This issue is storage independent.
  4. Which networking type did you use?
  Neutron + openvswitch + sriovnicswitch

  Logs & Configs
  ==============
  (hypervisor) nova-compute.log:

  Before:
  PciDevicePool(count=16,numa_node=0,product_id='XXX',tags={dev_type='type-VF',parent_ifname='XXX',physical_network='XXX',remote_managed='false'},vendor_id='XXX')

  After:
  PciDevicePool(count=15,numa_node=0,product_id='XXX',tags={dev_type='type-VF',parent_ifname='XXX',physical_network='XXX',remote_managed='false'},vendor_id='XXX')
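For reference, the reproduction steps described in the report can be sketched with the openstack CLI. This is a minimal sketch against a live cloud; the network, server, and port names are placeholders (assumptions), not taken from the original report.

```shell
# Assumed placeholder names: sriov-net, my-vm, sriov-port.

# 1) Create an SR-IOV port (VNIC type = direct) and attach it to a VM.
openstack port create --network sriov-net --vnic-type direct sriov-port
openstack server add port my-vm sriov-port

# 2) Delete the port WITHOUT detaching it from the VM first.
openstack port delete sriov-port

# Per the report, the VF is detached from the guest, but its row in the
# nova.pci_devices table remains "allocated" instead of reverting to
# "available", leaving that VF unschedulable.
```

Detaching the port first (`openstack server remove port my-vm sriov-port`) before deleting it avoids the stuck "allocated" state, which is why the deletion ordering is the key part of the reproduction.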