On Thu, 19 Nov 2020 14:13:24 +0530 Kirti Wankhede <kwankh...@nvidia.com> wrote:
> On 11/14/2020 2:47 PM, Shenming Lu wrote:
> > When running VFIO migration, I found that the restoring of VFIO PCI
> > device’s config space is before VGIC on ARM64 target. But generally,
> > interrupt controllers need to be restored before PCI devices.
>
> Is there any other way by which VGIC can be restored before PCI device?
>
> > Besides, if a VFIO PCI device is configured to have directly-injected
> > MSIs (VLPIs), the restoring of its config space will trigger the
> > configuring of these VLPIs (in kernel), where it would return an error
> > as I saw due to the dependency on kvm’s vgic.
>
> Can this be fixed in kernel to re-initialize the kernel state?
>
> > To avoid this, we can move the saving of the config space from the
> > iterable process to the non-iterable process, so that it will be
> > called after VGIC according to their priorities.
>
> With this change, at resume side, pre-copy phase data would reach
> destination without restored config space. VFIO device on destination
> might need its config space setup and validated before it can accept
> further VFIO device specific migration state.
>
> This also changes the bit-stream, so it would break migration with the
> original migration patch-set.

Config space can continue to change while in pre-copy. If we're only
sending config space at the initiation of pre-copy, how are any changes
that might occur before the VM is stopped conveyed to the target? For
example, the guest might reboot and a device could return to INTx mode
from MSI during pre-copy.
Thanks,
Alex

> > Signed-off-by: Shenming Lu <lushenm...@huawei.com>
> > ---
> >  hw/vfio/migration.c | 22 ++++++----------------
> >  1 file changed, 6 insertions(+), 16 deletions(-)
> >
> > diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
> > index 3ce285ea39..028da35a25 100644
> > --- a/hw/vfio/migration.c
> > +++ b/hw/vfio/migration.c
> > @@ -351,7 +351,7 @@ static int vfio_update_pending(VFIODevice *vbasedev)
> >      return 0;
> >  }
> >
> > -static int vfio_save_device_config_state(QEMUFile *f, void *opaque)
> > +static void vfio_save_device_config_state(QEMUFile *f, void *opaque)
> >  {
> >      VFIODevice *vbasedev = opaque;
> >
> > @@ -365,13 +365,14 @@ static int vfio_save_device_config_state(QEMUFile *f, void *opaque)
> >
> >      trace_vfio_save_device_config_state(vbasedev->name);
> >
> > -    return qemu_file_get_error(f);
> > +    if (qemu_file_get_error(f))
> > +        error_report("%s: Failed to save device config space",
> > +                     vbasedev->name);
> >  }
> >
> >  static int vfio_load_device_config_state(QEMUFile *f, void *opaque)
> >  {
> >      VFIODevice *vbasedev = opaque;
> > -    uint64_t data;
> >
> >      if (vbasedev->ops && vbasedev->ops->vfio_load_config) {
> >          int ret;
> > @@ -384,15 +385,8 @@ static int vfio_load_device_config_state(QEMUFile *f, void *opaque)
> >          }
> >      }
> >
> > -    data = qemu_get_be64(f);
> > -    if (data != VFIO_MIG_FLAG_END_OF_STATE) {
> > -        error_report("%s: Failed loading device config space, "
> > -                     "end flag incorrect 0x%"PRIx64, vbasedev->name, data);
> > -        return -EINVAL;
> > -    }
> > -
> >      trace_vfio_load_device_config_state(vbasedev->name);
> > -    return qemu_file_get_error(f);
> > +    return 0;
> >  }
> >
> >  static int vfio_set_dirty_page_tracking(VFIODevice *vbasedev, bool start)
> > @@ -575,11 +569,6 @@ static int vfio_save_complete_precopy(QEMUFile *f, void *opaque)
> >          return ret;
> >      }
> >
> > -    ret = vfio_save_device_config_state(f, opaque);
> > -    if (ret) {
> > -        return ret;
> > -    }
> > -
> >      ret = vfio_update_pending(vbasedev);
> >      if (ret) {
> >          return ret;
> > @@ -720,6 +709,7 @@ static SaveVMHandlers savevm_vfio_handlers = {
> >      .save_live_pending = vfio_save_pending,
> >      .save_live_iterate = vfio_save_iterate,
> >      .save_live_complete_precopy = vfio_save_complete_precopy,
> > +    .save_state = vfio_save_device_config_state,
> >      .load_setup = vfio_load_setup,
> >      .load_cleanup = vfio_load_cleanup,
> >      .load_state = vfio_load_state,
> >