** Changed in: nova
       Status: In Progress => Fix Released

https://bugs.launchpad.net/bugs/1901739

Title:
  libvirt.libvirtError: internal error: missing block job data for disk 'vda'

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) victoria series:
  Fix Released

Bug description:
  Description
  ===========

  nova-grenade-multinode has failed a number of times due to
  tempest.api.compute.admin.test_live_migration.LiveAutoBlockMigrationV225Test.test_live_block_migration_paused
  test failures, with the following logs:

  https://api.us-east.open-edge.io:8080/swift/v1/AUTH_e02c11e4e2c24efc98022353c88ab506/zuul_opendev_logs_30d/759831/4/gate/nova-grenade-multinode/30d8eb1/job-output.txt

  
  2020-10-27 16:59:26.667763 | primary | 2020-10-27 16:59:26.667 | tempest.api.compute.admin.test_live_migration.LiveAutoBlockMigrationV225Test.test_live_block_migration_paused[id-1e107f21-61b2-4988-8f22-b196e938ab88]
  2020-10-27 16:59:26.669590 | primary | 2020-10-27 16:59:26.669 | ---------------------------------------------------------------------------------------------------------------------------------[..]
  testtools.matchers._impl.MismatchError: 'ubuntu-bionic-rax-iad-0021082674' != 'ubuntu-bionic-rax-iad-0021082634': Live Migration failed. Migrations list for Instance

  https://api.us-east.open-edge.io:8080/swift/v1/AUTH_e02c11e4e2c24efc98022353c88ab506/zuul_opendev_logs_30d/759831/4/gate/nova-grenade-multinode/30d8eb1/logs/screen-n-cpu.txt

  Oct 27 16:59:14.123461 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]: ERROR nova.virt.libvirt.driver [-] [instance: 4d564e22-8ba4-48fb-ac93-27bea660fd77] Live Migration failure: internal error: missing block job data for disk 'vda': libvirt.libvirtError: internal error: missing block job data for disk 'vda'
  Oct 27 16:59:14.123702 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]: DEBUG nova.virt.libvirt.driver [-] [instance: 4d564e22-8ba4-48fb-ac93-27bea660fd77] Migration operation thread notification {{(pid=9691) thread_finished /opt/stack/new/nova/nova/virt/libvirt/driver.py:9416}}
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]: Traceback (most recent call last):
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]:   File "/usr/local/lib/python3.6/dist-packages/eventlet/hubs/hub.py", line 476, in fire_timers
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]:     timer()
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]:   File "/usr/local/lib/python3.6/dist-packages/eventlet/hubs/timer.py", line 59, in __call__
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]:     cb(*args, **kw)
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]:   File "/usr/local/lib/python3.6/dist-packages/eventlet/event.py", line 175, in _do_send
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]:     waiter.switch(result)
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]:   File "/usr/local/lib/python3.6/dist-packages/eventlet/greenthread.py", line 221, in main
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]:     result = function(*args, **kwargs)
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]:   File "/opt/stack/new/nova/nova/utils.py", line 661, in context_wrapper
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]:     return func(*args, **kwargs)
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]:   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 9070, in _live_migration_operation
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]:     LOG.error("Live Migration failure: %s", e, instance=instance)
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]:   File "/usr/local/lib/python3.6/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]:     self.force_reraise()
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]:   File "/usr/local/lib/python3.6/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]:     six.reraise(self.type_, self.value, self.tb)
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]:   File "/usr/local/lib/python3.6/dist-packages/six.py", line 703, in reraise
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]:     raise value
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]:   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 9063, in _live_migration_operation
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]:     bandwidth=CONF.libvirt.live_migration_bandwidth)
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]:   File "/opt/stack/new/nova/nova/virt/libvirt/guest.py", line 681, in migrate
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]:     destination, params=params, flags=flags)
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]:   File "/usr/local/lib/python3.6/dist-packages/eventlet/tpool.py", line 190, in doit
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]:     result = proxy_call(self._autowrap, f, *args, **kwargs)
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]:   File "/usr/local/lib/python3.6/dist-packages/eventlet/tpool.py", line 148, in proxy_call
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]:     rv = execute(f, *args, **kwargs)
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]:   File "/usr/local/lib/python3.6/dist-packages/eventlet/tpool.py", line 129, in execute
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]:     six.reraise(c, e, tb)
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]:   File "/usr/local/lib/python3.6/dist-packages/six.py", line 703, in reraise
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]:     raise value
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]:   File "/usr/local/lib/python3.6/dist-packages/eventlet/tpool.py", line 83, in tworker
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]:     rv = meth(*args, **kwargs)
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]:   File "/usr/local/lib/python3.6/dist-packages/libvirt.py", line 1941, in migrateToURI3
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]:     raise libvirtError('virDomainMigrateToURI3() failed')
  Oct 27 16:59:14.135971 ubuntu-bionic-rax-iad-0021082634 nova-compute[9691]: libvirt.libvirtError: internal error: missing block job data for disk 'vda'
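
  For context, the bottom of the traceback is nova's guest.migrate() handing the
  operation to the libvirt python bindings. A minimal sketch of that call using
  only the public libvirt-python API follows; the connection URIs, domain name
  and flag set are illustrative assumptions for the example, not values taken
  from this job:

    import libvirt

    # Assumption: example source connection and domain name.
    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-0000001e')

    # Roughly the shape of a block (non-shared storage) live migration;
    # nova computes the actual flag set from its configuration.
    flags = (libvirt.VIR_MIGRATE_LIVE
             | libvirt.VIR_MIGRATE_PEER2PEER
             | libvirt.VIR_MIGRATE_NON_SHARED_INC)

    params = {
        # Assumption: example destination; nova derives this from its config.
        libvirt.VIR_MIGRATE_PARAM_URI: 'tcp://203.0.113.2',
        # 0 means unlimited, the CONF.libvirt.live_migration_bandwidth default.
        libvirt.VIR_MIGRATE_PARAM_BANDWIDTH: 0,
    }

    try:
        # This is the call that raises in the traceback above.
        dom.migrateToURI3('qemu+tcp://203.0.113.2/system', params, flags)
    except libvirt.libvirtError as e:
        # e.g. "internal error: missing block job data for disk 'vda'"
        print(e.get_error_message())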

  https://api.us-east.open-edge.io:8080/swift/v1/AUTH_e02c11e4e2c24efc98022353c88ab506/zuul_opendev_logs_30d/759831/4/gate/nova-grenade-multinode/30d8eb1/logs/libvirt/libvirtd.txt

  2020-10-27 16:59:13.867+0000: 31617: debug : qemuMonitorJSONIOProcessEvent:183 : handle BLOCK_JOB_READY handler=0x7f9e065ffe70 data=0x55b1b50ec200
  2020-10-27 16:59:13.867+0000: 31617: debug : qemuMonitorEmitBlockJob:1523 : mon=0x7f9e1804d570
  2020-10-27 16:59:13.867+0000: 31617: debug : qemuProcessHandleBlockJob:944 : Block job for device drive-virtio-disk0 (domain: 0x7f9e18007920,instance-0000001e) type 2 status 3
  2020-10-27 16:59:13.867+0000: 31617: debug : virObjectEventDispose:124 : obj=0x55b1b511a380
  2020-10-27 16:59:13.867+0000: 31617: debug : virObjectEventDispose:124 : obj=0x55b1b5113060
  2020-10-27 16:59:13.867+0000: 31620: debug : qemuDomainObjExitMonitorInternal:7655 : Exited monitor (mon=0x7f9e1804d570 vm=0x7f9e18007920 name=instance-0000001e)
  2020-10-27 16:59:13.867+0000: 31620: debug : qemuDomainObjEndJob:7516 : Stopping job: async nested (async=migration out vm=0x7f9e18007920 name=instance-0000001e)
  2020-10-27 16:59:13.867+0000: 31620: error : qemuMigrationSrcNBDStorageCopyReady:512 : internal error: missing block job data for disk 'vda'
  2020-10-27 16:59:13.867+0000: 31620: debug : qemuMigrationSrcNBDCopyCancel:704 : Cancelling drive mirrors for domain instance-0000001e
  2020-10-27 16:59:13.867+0000: 31620: debug : qemuMigrationSrcNBDCopyCancelled:619 : All disk mirrors are gone
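
  The error is raised by qemuMigrationSrcNBDStorageCopyReady on the source host
  immediately after the monitor thread handles the BLOCK_JOB_READY event for
  drive-virtio-disk0, which suggests the job record for 'vda' was already gone
  when the readiness check ran. Through the bindings, the observable state is
  simply an empty block job query; a hedged sketch (the domain name is an
  illustrative assumption):

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-0000001e')  # assumption: example domain

    # blockJobInfo() returns an empty dict when no block job exists for the
    # disk; during a block migration this is the state the source libvirtd
    # reports as "missing block job data for disk 'vda'".
    info = dom.blockJobInfo('vda', 0)
    if not info:
        print("no active block job for disk 'vda'")
    else:
        print('type=%(type)s cur=%(cur)s end=%(end)s' % info)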

  
  Steps to reproduce
  ==================

  Run nova-grenade-multinode.
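
  The failing test pauses a server and then live migrates it with
  block_migration='auto' at compute API microversion 2.25. To drive the same
  operation outside the grenade job, a rough python-novaclient equivalent is
  sketched below; the credentials, endpoint and server ID are illustrative
  assumptions:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client

    # Assumption: example devstack-style credentials.
    auth = v3.Password(auth_url='http://controller/identity/v3',
                       username='admin', password='secret',
                       project_name='admin',
                       user_domain_id='default', project_domain_id='default')
    nova = client.Client('2.25', session=session.Session(auth=auth))

    # Assumption: example server ID (the instance UUID from the logs above).
    server = nova.servers.get('4d564e22-8ba4-48fb-ac93-27bea660fd77')
    nova.servers.pause(server)

    # At microversion >= 2.25 block_migration accepts 'auto' and the scheduler
    # picks the destination when host is None, matching the tempest test.
    nova.servers.live_migrate(server, host=None, block_migration='auto')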

  Expected result
  ===============

  
  tempest.api.compute.admin.test_live_migration.LiveAutoBlockMigrationV225Test.test_live_block_migration_paused
  succeeds.

  Actual result
  =============

  
  tempest.api.compute.admin.test_live_migration.LiveAutoBlockMigrationV225Test.test_live_block_migration_paused
  fails.

  Environment
  ===========
  1. Exact version of OpenStack you are running. See the following
    list for all releases: http://docs.openstack.org/releases/

     https://review.opendev.org/#/c/759831/

  2. Which hypervisor did you use?
     (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
     What's the version of that?

     libvirt + QEMU

  3. Which storage type did you use?
     (For example: Ceph, LVM, GPFS, ...)
     What's the version of that?

     N/A

  4. Which networking type did you use?
     (For example: nova-network, Neutron with OpenVSwitch, ...)

     N/A

  Logs & Configs
  ==============
