> -----Original Message-----
> From: Dr. David Alan Gilbert [mailto:dgilb...@redhat.com]
> Sent: Tuesday, June 27, 2017 6:28 PM
> To: Zhang, Yulei <yulei.zh...@intel.com>
> Cc: qemu-devel@nongnu.org; Tian, Kevin <kevin.t...@intel.com>;
> joonas.lahti...@linux.intel.com; zhen...@linux.intel.com; Zheng, Xiao
> <xiao.zh...@intel.com>; Wang, Zhi A <zhi.a.w...@intel.com>
> Subject: Re: [Qemu-devel] [RFC 0/5] vfio: Introduce Live migration capability
> to vfio_mdev device
>
> * Yulei Zhang (yulei.zh...@intel.com) wrote:
> > Summary
> >
> > This RFC series would like to introduce the live migration capability
> > to the vfio_mdev device.
> >
> > As vfio_mdev devices currently don't support migration, we introduce a
> > device flag VFIO_DEVICE_FLAGS_MIGRATABLE to indicate whether the
> > mdev device can be migrated or not. The flag is checked during device
> > initialization to decide whether to initialize the new vfio region
> > VFIO_PCI_DEVICE_STATE_REGION_INDEX.
> >
> > The intention of adding the new region is to use it for vfio_mdev device
> > status save and restore during migration. Accesses to this region will
> > be trapped and forwarded to the vfio_mdev device driver. An alternative
> > way to achieve this would be to add a new vfio ioctl to fetch and save
> > the device status.
> >
> > This series also includes two new vfio ioctls:
> > #define VFIO_DEVICE_PCI_STATUS_SET       _IO(VFIO_TYPE, VFIO_BASE + 14)
> > #define VFIO_DEVICE_PCI_GET_DIRTY_BITMAP _IO(VFIO_TYPE, VFIO_BASE + 15)
> >
> > The first one is used to control the device running status; we want to
> > stop the mdev device before querying the status from its device driver,
> > and restart the device after migration.
> > The second one is used to do the mdev device dirty page synchronization.
> >
> > So the vfio_mdev device migration sequence would be:
> >
> > Source VM side:
> >         start migration
> >             |
> >             V
> >     get the cpu state change callback,
> >     use the status set ioctl to stop the mdev device
> >             |
> >             V
> >     save the device status into the QEMUFile, which is
> >     read from the new vfio device status region
> >             |
> >             V
> >     query the dirty page bitmap from the device
> >     and add it to the qemu dirty list for sync
>
> That ordering is interesting; I think the main migration flow is normally to
> complete migration of RAM and then migrate the devices; so I worry about
> that order.
>
> Dave

Dave, thanks for helping review the patch. You are right about the sequence;
I will modify the description in the next version.
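
To make the source-side sequence above concrete, here is a rough userspace
sketch of how the proposed status-set ioctl and the new device-state region
could be driven. Only the two ioctl numbers come from the cover letter; the
region index, the "0 = stop" encoding and the helper name are assumptions for
illustration, and the dirty-bitmap ioctl is left out because its argument
layout is not described here.

/*
 * Sketch only: stop the mdev via the proposed status-set ioctl, then read
 * its saved state out of the assumed device-state region.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

#ifndef VFIO_DEVICE_PCI_STATUS_SET
#define VFIO_DEVICE_PCI_STATUS_SET _IO(VFIO_TYPE, VFIO_BASE + 14)
#endif
/* Assumed: the new region is appended after the standard vfio-pci regions. */
#define VFIO_PCI_DEVICE_STATE_REGION_INDEX VFIO_PCI_NUM_REGIONS

static void *save_device_state(int device_fd, size_t *state_size)
{
    struct vfio_region_info reg = {
        .argsz = sizeof(reg),
        .index = VFIO_PCI_DEVICE_STATE_REGION_INDEX,
    };
    void *buf;

    /* Stop the mdev before querying its status (assumed: arg 0 = stop). */
    if (ioctl(device_fd, VFIO_DEVICE_PCI_STATUS_SET, 0) < 0) {
        perror("VFIO_DEVICE_PCI_STATUS_SET");
        return NULL;
    }

    /* Ask vfio where the device-state region lives and how big it is. */
    if (ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, &reg) < 0) {
        perror("VFIO_DEVICE_GET_REGION_INFO");
        return NULL;
    }

    buf = malloc(reg.size);
    if (!buf) {
        return NULL;
    }

    /* Reads of this region are trapped and forwarded to the mdev driver. */
    if (pread(device_fd, buf, reg.size, reg.offset) != (ssize_t)reg.size) {
        perror("pread device-state region");
        free(buf);
        return NULL;
    }

    *state_size = reg.size;
    return buf;    /* the caller streams this blob into the QEMUFile */
}
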
> > Target VM side:
> >     restore the mdev device after getting the
> >     saved status context from the QEMUFile
> >             |
> >             V
> >     get the cpu state change callback,
> >     use the status set ioctl to start the mdev
> >     device and put it in running status
> >             |
> >             V
> >         finish migration
> >
> > Yulei Zhang (5):
> >   vfio: introduce a new VFIO region for migration support
> >   vfio: Add struct vfio_vmstate_info to introduce vfio device put/get
> >     function
> >   vfio: introduce new VFIO ioctl VFIO_DEVICE_PCI_STATUS_SET
> >   vfio: use vfio_device_put/vfio_device_get for device status
> >     save/restore
> >   vfio: introduce new VFIO ioctl VFIO_DEVICE_PCI_GET_DIRTY_BITMAP
> >
> >  hw/vfio/pci.c              | 204 +++++++++++++++++++++++++++++++++++++++-
> >  hw/vfio/pci.h              |   3 +
> >  linux-headers/linux/vfio.h |  34 +++++++-
> >  3 files changed, 239 insertions(+), 2 deletions(-)
> >
> > --
> > 2.7.4
> >
>
> --
> Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK
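
To match the target-side sequence quoted above, the counterpart sketch would
write the received blob back into the device-state region and then set the
mdev running again. The same caveats apply: the region index and the
"1 = run" encoding are assumptions, not taken from the patches.

#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

#ifndef VFIO_DEVICE_PCI_STATUS_SET
#define VFIO_DEVICE_PCI_STATUS_SET _IO(VFIO_TYPE, VFIO_BASE + 14)
#endif
/* Assumed region index, as in the source-side sketch above. */
#define VFIO_PCI_DEVICE_STATE_REGION_INDEX VFIO_PCI_NUM_REGIONS

static int restore_device_state(int device_fd, const void *buf, size_t size)
{
    struct vfio_region_info reg = {
        .argsz = sizeof(reg),
        .index = VFIO_PCI_DEVICE_STATE_REGION_INDEX,
    };

    if (ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, &reg) < 0) {
        perror("VFIO_DEVICE_GET_REGION_INFO");
        return -1;
    }

    /* Writes are trapped and handed to the mdev driver, which rebuilds its
     * internal state from the blob saved on the source side. */
    if (pwrite(device_fd, buf, size, reg.offset) != (ssize_t)size) {
        perror("pwrite device-state region");
        return -1;
    }

    /* Put the device back into running status (assumed: arg 1 = run). */
    return ioctl(device_fd, VFIO_DEVICE_PCI_STATUS_SET, 1);
}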