The mem field itself will switch from the outside view to the "inside" view if the VM reports detailed memory usage information via the ballooning device.
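(As an illustration of that switch, not part of this patch: assuming the balloon statistics expose total_mem/free_mem fields in a $balloon_info hash -- hypothetical names here, the real field names depend on what the balloon device reports -- the selection would look roughly like:

    # prefer the guest-reported "inside" view when the ballooning device
    # provides detailed stats; otherwise keep the host-side "outside" view
    if (defined($balloon_info->{total_mem}) && defined($balloon_info->{free_mem})) {
        $d->{mem} = $balloon_info->{total_mem} - $balloon_info->{free_mem};
    } else {
        $d->{mem} = $d->{memhost};    # fall back to the outside view
    }
)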
Since sometimes other processes belong to a VM too, for example swtpm, we collect all PIDs belonging to the VM cgroup and fetch their PSS data, so that shared libraries used are accounted for correctly.

Signed-off-by: Aaron Lauterer <a.laute...@proxmox.com>
---

Notes:
    changes since:
    RFC:
    * collect memory info for all processes in cgroup directly without a
      too generic helper function

 src/PVE/QemuServer.pm | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 01f7a4f..4dd30c4 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -2586,6 +2586,7 @@ sub vmstatus {
         $d->{uptime} = 0;
         $d->{cpu} = 0;
         $d->{mem} = 0;
+        $d->{memhost} = 0;
 
         $d->{netout} = 0;
         $d->{netin} = 0;
@@ -2670,6 +2671,24 @@ sub vmstatus {
         $d->{pressureiofull} = $pressures->{io}->{full}->{avg10};
         $d->{pressurememorysome} = $pressures->{memory}->{some}->{avg10};
         $d->{pressurememoryfull} = $pressures->{memory}->{full}->{avg10};
+
+        my $fh = IO::File->new("/sys/fs/cgroup/qemu.slice/${vmid}.scope/cgroup.procs", "r");
+        if ($fh) {
+            while (my $childPid = <$fh>) {
+                chomp($childPid);
+                # the process might have exited since cgroup.procs was read
+                open(my $SMAPS_FH, '<', "/proc/$childPid/smaps_rollup") or next;
+
+                while (my $line = <$SMAPS_FH>) {
+                    if ($line =~ m/^Pss:\s+([0-9]+) kB$/) {
+                        $d->{memhost} += int($1) * 1024;
+                        last;
+                    }
+                }
+                close($SMAPS_FH);
+            }
+            close($fh);
+        }
     }
 
     return $res if !$full;
-- 
2.39.5
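For manual testing, the same PSS accounting can be reproduced outside of vmstatus with a small standalone script. This is only a sketch; it assumes the qemu.slice/<vmid>.scope cgroup layout used in the patch:

    #!/usr/bin/perl
    # Sum PSS over every process in a VM's cgroup, mirroring the patch's
    # accounting. Pass the VMID as the only argument.
    use strict;
    use warnings;

    my $vmid = shift // die "usage: $0 <vmid>\n";

    open(my $procs_fh, '<', "/sys/fs/cgroup/qemu.slice/${vmid}.scope/cgroup.procs")
        or die "unable to read cgroup.procs - $!\n";

    my $pss_bytes = 0;
    while (my $pid = <$procs_fh>) {
        chomp($pid);
        # the process may have exited since cgroup.procs was read
        open(my $smaps_fh, '<', "/proc/$pid/smaps_rollup") or next;
        while (my $line = <$smaps_fh>) {
            if ($line =~ m/^Pss:\s+([0-9]+) kB$/) {
                $pss_bytes += int($1) * 1024;
                last;
            }
        }
        close($smaps_fh);
    }
    close($procs_fh);

    print "memhost: $pss_bytes bytes\n";

Summing Pss instead of Rss means that pages shared between the QEMU main process and helpers like swtpm are attributed proportionally, so shared libraries are not counted multiple times.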