The mem field itself will switch from the outside view to the "inside"
view if the VM is reporting detailed memory usage information via the
ballooning device.

Since sometimes other processes belong to a VM too, for example swtpm,
we collect all PIDs belonging to the VM cgroup and fetch their PSS data
to account for shared libraries used.
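A rough shell sketch of the same accounting, run on synthetic data rather
than a live VM cgroup (the file names, PIDs and Pss values below are made
up for illustration; the patch itself uses
PVE::ProcFSTools::read_smaps_rollup): the idea is to read every PID from
the cgroup's cgroup.procs file and sum the Pss line of each PID's
/proc/<pid>/smaps_rollup.

```shell
set -eu
tmp=$(mktemp -d)

# Fake two "processes" in the VM cgroup (e.g. QEMU itself and swtpm),
# each with a pre-made smaps_rollup containing only the Pss line we need.
printf 'Pss:    4096 kB\n' > "$tmp/101.smaps_rollup"
printf 'Pss:     512 kB\n' > "$tmp/102.smaps_rollup"
printf '101\n102\n' > "$tmp/cgroup.procs"

# Sum Pss over all PIDs listed in cgroup.procs, as the patch does for
# /sys/fs/cgroup/qemu.slice/<vmid>.scope/cgroup.procs.
total=0
while read -r pid; do
    pss=$(awk '/^Pss:/ {print $2}' "$tmp/$pid.smaps_rollup")
    total=$((total + pss))
done < "$tmp/cgroup.procs"

echo "memhost: ${total} kB"   # 4096 + 512 = 4608 kB
rm -r "$tmp"
```

Because smaps_rollup reports PSS (proportional set size), memory for
shared libraries is split across the processes that map them, so the sum
does not double-count shared pages.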

Signed-off-by: Aaron Lauterer <a.laute...@proxmox.com>
---
 PVE/QemuServer.pm | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 5f36772..c5eb5c1 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2867,6 +2867,7 @@ sub vmstatus {
        $d->{uptime} = 0;
        $d->{cpu} = 0;
        $d->{mem} = 0;
+       $d->{memhost} = 0;
 
        $d->{netout} = 0;
        $d->{netin} = 0;
@@ -2951,6 +2952,14 @@ sub vmstatus {
        $d->{pressureiofull} = $pressures->{io}{full}{avg10};
        $d->{pressurememorysome} = $pressures->{memory}{some}{avg10};
        $d->{pressurememoryfull} = $pressures->{memory}{full}{avg10};
+
+       my $fh = IO::File->new("/sys/fs/cgroup/qemu.slice/${vmid}.scope/cgroup.procs", "r");
+       if ($fh) {
+           while (my $childPid = <$fh>) {
+               $d->{memhost} += PVE::ProcFSTools::read_smaps_rollup($childPid, "Pss");
+           }
+           close($fh);
+       }
     }
 
     return $res if !$full;
-- 
2.39.5

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel