On 06/02/2014 04:10, Alexey Kardashevskiy wrote:
>> > Ok, I thought Alexey was saying we are not redirtying that handful of
>> > pages.
> 
> Every iteration we read the dirty map from KVM and send all dirty pages
> across the stream.
But we never finish because qemu_savevm_state_pending is only called
_after_ the g_usleep, and thus there's time for the guest to redirty
those pages.  Does something like this fix it (of course for a proper
patch the goto should be eliminated)?

diff --git a/migration.c b/migration.c
index 7235c23..804c3bd 100644
--- a/migration.c
+++ b/migration.c
@@ -589,6 +589,7 @@ static void *migration_thread(void *opaque)
             } else {
                 int ret;
 
+final_phase:
                 DPRINTF("done iterating\n");
                 qemu_mutex_lock_iothread();
                 start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
@@ -640,10 +641,16 @@ static void *migration_thread(void *opaque)
             qemu_file_reset_rate_limit(s->file);
             initial_time = current_time;
             initial_bytes = qemu_ftell(s->file);
-        }
-        if (qemu_file_rate_limit(s->file)) {
-            /* usleep expects microseconds */
-            g_usleep((initial_time + BUFFER_DELAY - current_time)*1000);
+        } else if (qemu_file_rate_limit(s->file)) {
+            pending_size = qemu_savevm_state_pending(s->file, max_size);
+            DPRINTF("pending size %" PRIu64 " max %" PRIu64 "\n",
+                    pending_size, max_size);
+            if (pending_size >= max_size) {
+                /* usleep expects microseconds */
+                g_usleep((initial_time + BUFFER_DELAY - current_time)*1000);
+            } else {
+                goto final_phase;
+            }
         }
     }