> Please find the full CLI args and two guest logs for DIMM
> initialization attached. As you can see, the freshly populated DIMMs
> are probably misplaced in SRAT ('already populated' messages), even
> though the initialized ranges look correct at a glance. When the VM
> is migrated to the destination (with an equal RAM device
> configuration), which is similar to a VM with 16G of RAM, this
> misplacement causes the mentioned panic in the guest. This should be
> very easy to reproduce, and the issue is critical as well; I don't
> even understand why I missed this issue earlier.
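For anyone not staring at SRAT dumps every day, here is a minimal sketch of what a memory-affinity entry carries (struct layout taken from the ACPI spec; this is not QEMU's actual code, and srat_ranges_overlap() is a hypothetical helper). A base_addr that lands inside a range the guest has already onlined is exactly the kind of misplacement that produces the 'already populated' messages:

    #include <stdint.h>
    #include <stdbool.h>

    /* ACPI SRAT Memory Affinity structure (type 1), 40 bytes */
    struct srat_mem_affinity {
        uint8_t  type;              /* 1 = Memory Affinity */
        uint8_t  length;            /* 40 */
        uint32_t proximity_domain;  /* NUMA node the range belongs to */
        uint16_t reserved1;
        uint64_t base_addr;         /* guest-physical start of the range */
        uint64_t range_length;      /* size of the range in bytes */
        uint32_t reserved2;
        uint32_t flags;             /* bit 0: enabled, bit 1: hot-pluggable */
        uint64_t reserved3;
    } __attribute__((packed));

    /* Hypothetical sanity check: do two SRAT entries collide? */
    static bool srat_ranges_overlap(const struct srat_mem_affinity *a,
                                    const struct srat_mem_affinity *b)
    {
        return a->base_addr < b->base_addr + b->range_length &&
               b->base_addr < a->base_addr + a->range_length;
    }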
Answering back to myself - I made a wrong statement before: the physical mappings *are* different in the two cases, of course! Therefore, the issue looks much simpler, and I'd have a patch within a couple of days if nobody fixes it earlier.

[  102.757100] init_memory_mapping: [mem 0x240000000-0x25fffffff]
[  102.794016]  [mem 0x240000000-0x25fffffff] page 2M
[  102.798456] [ffffea0007c00000-ffffea0007ffffff] PMD -> [ffff88015e400000-ffff88015e7fffff] on node 0
[  102.801853] [ffffea0008000000-ffffea00081fffff] PMD -> [ffff880019600000-ffff8800197fffff] on node 0

vs

[    0.411285] init_memory_mapping: [mem 0x240000000-0x25fffffff]
[    0.411288]  [mem 0x240000000-0x25fffffff] page 2M
[    0.416637] [ffffea0007c00000-ffffea0007ffffff] PMD -> [ffff880019000000-ffff8800193fffff] on node 0
[    0.422727] [ffffea0008000000-ffffea00083fffff] PMD -> [ffff880018c00000-ffff880018ffffff] on node 0
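To make the comparison concrete, a quick standalone check of the arithmetic in the two dumps (plain C, values copied from the logs above): both cases initialize the same 512 MiB of guest-physical memory, but the vmemmap PMDs behind it end up on different direct-map pages, and the second PMD span even differs in size (2 MiB in the hotplug case vs 4 MiB at cold boot):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* init_memory_mapping range, identical in both dumps */
        uint64_t init_start = 0x240000000ULL, init_end = 0x25fffffffULL;
        printf("init range: %llu MiB\n",
               (unsigned long long)((init_end - init_start + 1) >> 20));  /* 512 */

        /* second vmemmap PMD span: hotplug case vs cold-boot case */
        uint64_t hot  = 0xffffea00081fffffULL - 0xffffea0008000000ULL + 1;
        uint64_t cold = 0xffffea00083fffffULL - 0xffffea0008000000ULL + 1;
        printf("second PMD span: hotplug %llu MiB vs boot %llu MiB\n",
               (unsigned long long)(hot >> 20),    /* 2 */
               (unsigned long long)(cold >> 20));  /* 4 */
        return 0;
    }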