On 24.09.19 16:47, Igor Mammedov wrote:
> Changelog:
> since v6:
>   - include and rebase on top of
>     [PATCH 0/2] kvm: clear dirty bitmaps from all overlapping memslots
>     https://www.mail-archive.com/qemu-devel@nongnu.org/msg646200.html
>   - minor fixups suggested during v6 review
>   - more testing, incl. hacked x86
> since v5:
>   - [1/2] fix migration that wasn't starting, and make sure that the KVM part
>     is able to handle a 1:n MemorySection:memslot arrangement
> since v3:
>   - fix a compilation issue
>   - advance HVA along with GPA in kvm_set_phys_mem()
> since v2:
>   - break migration from old QEMU (2.12-4.1) for guests with >8TB RAM,
>     and drop the migratable-aliases patch, as agreed during v2 review
>   - drop the 4.2 machines patch, as it is no longer a prerequisite
> since v1:
>   - include the 4.2 machines patch for adding a compat RAM layout on top
>   - 2/4: add the patch, missing in v1, for splitting a too-big MemorySection
>     into several memslots
>   - 3/4: amend the code path on alias destruction to ensure that the RAMBlock
>     is cleaned up properly
>   - 4/4: add compat machine code to keep the old layout (migration-wise) for
>     4.1 and older machines
>
> While looking into unifying guest RAM allocation to use hostmem backends
> for initial RAM (especially when -mem-path is used) and retiring the
> memory_region_allocate_system_memory() API, leaving only a single hostmem
> backend, I inspected how boards currently use it. It turns out several
> boards abuse it by calling the function several times, despite the
> documented contract forbidding it.
>
> s390 is one such board: a KVM limitation on memslot size got propagated
> into the board design, and memory_region_allocate_system_memory() was
> abused to satisfy the KVM requirement for a maximum RAM chunk, where a
> memory region alias would have sufficed.
>
> Unfortunately, this memory_region_allocate_system_memory() usage created a
> migration dependency where guest RAM is transferred in the migration stream
> as several RAMBlocks if it is larger than KVM_SLOT_MAX_BYTES.
> During v2 review it was agreed to ignore the migration breakage
> (documenting it in the release notes) and keep only the KVM fix.
>
> In order to replace these several RAM chunks with a single memdev and keep
> it working with the KVM memslot size limit, the latter was modified to deal
> with a memory section split across several KVMSlots, and the manual RAM
> splitting in s390 was replaced by a single
> memory_region_allocate_system_memory() call.
>
> Tested:
> * s390 with hacked KVM_SLOT_MAX_BYTES = 128Mb
>   - guest reboot cycle in ping-pong migration
> * x86 with hacked max memslot = 128 and manual_dirty_log_protect enabled
>   - ping-pong migration with a workload dirtying RAM around a split area
Thanks, v7 applied to s390-next.