On Tue, 2 Apr 2019 at 15:47, Catherine Ho <catherine.h...@gmail.com> wrote:
> Hi Peter Maydell
>
> On Tue, 2 Apr 2019 at 11:05, Peter Maydell <peter.mayd...@linaro.org> wrote:
>
>> On Tue, 2 Apr 2019 at 09:57, Catherine Ho <catherine.h...@gmail.com> wrote:
>> > The root cause is that the used idx is moved forward after the 1st
>> > incoming, and on the 2nd incoming the last_avail_idx is incorrectly
>> > restored from the saved device state file (not from the RAM).
>> >
>> > I observed this even on x86 with a virtio-scsi disk.
>> >
>> > Any ideas for supporting a 2nd, 3rd, ... incoming restore?
>>
>> Does the destination end go through reset between the 1st and 2nd
>
> Seems not; please see my steps below.
>
>> incoming attempts? I'm not a migration expert, but I thought that
>> devices were allowed to assume that their state is "state of the
>> device following QEMU reset" before the start of an incoming
>> migration attempt.
>
> Here are my steps:
> 1. Start the guest normally with QEMU using a shared memory-backend file.
> 2. Stop the VM and save the device state to another file via the monitor:
>    migrate "exec: cat>..."
> 3. Quit the VM via the monitor "quit" command.
> 4. Restore the VM with qemu -incoming "exec:cat ..."
> 5. Continue the VM via the monitor; the 1st incoming works fine.
> 6. Quit the VM.
> 7. Restore the VM with qemu -incoming "exec:cat ..." for a 2nd time.
> 8. Continue -> the error happens.
>
> Actually, this can be fixed by forcibly restoring the index via
> virtio_queue_restore_last_avail_idx(), but I am not sure whether that
> is reasonable.
>
> B.R.
>
>> thanks
>> -- PMM
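
For illustration only, here is a minimal sketch of the workaround described
above. It assumes virtio_queue_restore_last_avail_idx() takes the device and
a queue index like the other per-queue helpers in hw/virtio/virtio.c, and
that such a loop would run after virtio_load() completes; the helper name,
placement and signature are assumptions, not a proposed patch:

    /* Hypothetical helper: re-derive each queue's last_avail_idx from the
     * used index that lives in (shared) guest RAM, instead of keeping the
     * stale value read back from the saved device state file. */
    static void force_restore_last_avail_idx(VirtIODevice *vdev)
    {
        int i;

        for (i = 0; i < VIRTIO_QUEUE_MAX; i++) {
            if (virtio_queue_get_num(vdev, i) == 0) {
                continue;
            }
            virtio_queue_restore_last_avail_idx(vdev, i);
        }
    }

Whether forcing the index this way is correct for all virtio devices (rather
than requiring a reset between incoming attempts) is exactly the open
question in this thread.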