This is still an RFC because the kernel counterpart is still under
review.  However, please feel free to read through the code if you
want; it is richly commented, so it is not really in WIP status
itself.  Any comments are greatly welcome.
For anyone who wants to try (the kernel needs to be upgraded too):

KVM branch:
  https://github.com/xzpeter/linux/tree/kvm-dirty-ring

QEMU branch for testing:
  https://github.com/xzpeter/qemu/tree/kvm-dirty-ring

Overview
========

KVM dirty ring is a new interface to pass dirty bits from the kernel
to userspace.  Instead of using a bitmap for each memory region, the
dirty ring contains an array of dirtied GPAs to fetch, one ring per
vcpu.

There are a few major changes compared to how the old dirty logging
interface works:

- Granularity of dirty bits

  The KVM dirty ring interface does not offer memory-region-level
  granularity for collecting dirty bits (i.e., per KVM memory slot).
  Instead, dirty bits are collected globally for all vcpus at once.
  The major effect is on the VGA part, because VGA dirty tracking is
  enabled as long as the device is created, and it used memory region
  granularity; now that operation is amplified into a whole-VM sync.
  Maybe there is a smarter way to do the same thing for VGA with the
  new interface, but so far I don't see it affecting much, at least on
  regular VMs.

- Collection of dirty bits

  The old dirty logging interface collects KVM dirty bits at
  synchronization time.  The KVM dirty ring interface instead uses a
  standalone thread to do that.  So when another thread (e.g., the
  migration thread) wants to synchronize the dirty bits, it simply
  kicks that thread and waits until it has flushed all the dirty bits
  into the ramblock dirty bitmap.

A new parameter "dirty-ring-size" is added to "-accel kvm".  By
default, the dirty ring is still disabled (size==0).  To enable it,
use:

  -accel kvm,dirty-ring-size=65536

This establishes a 64K dirty ring buffer per vcpu.  Then if we
migrate, it'll switch to using the dirty ring.

I gave it a shot with a 24G guest, 8 vcpus, using a 10G NIC as the
migration channel.  When the guest is idle or the dirty workload is
small, I don't observe a major difference in total migration time.
With a higher random dirty workload (800MB/s dirty rate over 20G of
memory), the KVM dirty ring does worse.  Total migration time
(ping-pong migrating 6 times, in seconds):

|-------------------------+---------------|
| dirty ring (4k entries) | dirty logging |
|-------------------------+---------------|
|                      70 |            58 |
|                      78 |            70 |
|                      72 |            48 |
|                      74 |            52 |
|                      83 |            49 |
|                      65 |            54 |
|-------------------------+---------------|

Summary:

  dirty ring average:    73s
  dirty logging average: 55s

The KVM dirty ring is slower in the above case.  The numbers suggest
that dirty logging is still preferred as the default, because
small/medium VMs are still the major use case, and high dirty
workloads happen frequently too.  And that's what this series does.

Please refer to the code and comments for more information.

Thanks,

Peter Xu (9):
  KVM: Fixup kvm_log_clear_one_slot() ioctl return check
  linux-headers: Update
  memory: Introduce log_sync_global() to memory listener
  KVM: Create the KVMSlot dirty bitmap on flag changes
  KVM: Provide helper to get kvm dirty log
  KVM: Provide helper to sync dirty bitmap from slot to ramblock
  KVM: Cache kvm slot dirty bitmap size
  KVM: Add dirty-ring-size property
  KVM: Dirty ring support

 accel/kvm/kvm-all.c         | 591 ++++++++++++++++++++++++++++++++----
 accel/kvm/trace-events      |   7 +
 include/exec/memory.h       |  12 +
 include/hw/core/cpu.h       |  10 +
 include/sysemu/kvm_int.h    |   5 +
 linux-headers/asm-x86/kvm.h |   1 +
 linux-headers/linux/kvm.h   |  44 +++
 memory.c                    |  33 +-
 qemu-options.hx             |   3 +
 9 files changed, 638 insertions(+), 68 deletions(-)

-- 
2.24.1
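
P.S. For anyone who wants a quick mental model of the harvesting side
before diving into the series: below is a minimal, illustrative sketch
of how a userspace reaper could walk one vcpu's ring and feed the
ramblock dirty bitmap.  It is not the code in this series.  It assumes
the kvm_dirty_gfn layout, the KVM_DIRTY_GFN_F_DIRTY /
KVM_DIRTY_GFN_F_RESET flags and the KVM_RESET_DIRTY_RINGS ioctl from
the proposed kernel UAPI (so it needs the updated linux-headers from
the kernel branch above), and mark_page_dirty() is a placeholder name,
not a real QEMU helper.

  /* Illustrative sketch only -- see the note above. */
  #include <linux/kvm.h>
  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <sys/ioctl.h>

  /* Placeholder: in QEMU this would set bits in the ramblock bitmap. */
  static void mark_page_dirty(uint32_t slot, uint64_t offset)
  {
      printf("dirty: slot %u offset 0x%" PRIx64 "\n", slot, offset);
  }

  /*
   * Walk one vcpu's ring (mmap()ed from the vcpu fd), collect every
   * entry the kernel has published, then ask KVM to reset them.  Real
   * code also needs acquire/release barriers around the flags accesses.
   * Returns the number of entries collected.
   */
  static int reap_one_ring(int vm_fd, struct kvm_dirty_gfn *ring,
                           uint32_t ring_size, uint32_t *reaper_idx)
  {
      int count = 0;

      for (;;) {
          struct kvm_dirty_gfn *e = &ring[*reaper_idx % ring_size];

          /* Stop at the first entry the kernel hasn't published yet. */
          if (!(e->flags & KVM_DIRTY_GFN_F_DIRTY)) {
              break;
          }
          mark_page_dirty(e->slot, e->offset);
          /* Mark the entry as collected so the kernel can recycle it. */
          e->flags = KVM_DIRTY_GFN_F_RESET;
          (*reaper_idx)++;
          count++;
      }

      if (count) {
          /* VM-wide ioctl: reset the collected entries on all rings. */
          ioctl(vm_fd, KVM_RESET_DIRTY_RINGS);
      }
      return count;
  }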