>>Why do you need 10s resolution? Isn't 1min good enough?
Well, if the 1min value is an average of the 10s metric, it's ok.
I'm currently using the 1min average and the 5min average, so it's not a problem
with the current rrds.
Thanks for the information!
(I'll resend a patch to add pressure to rrd, and also add v
> BTW, I'm currently playing with reading the rrd files, and I have noticed that
> the lowest precision is 1 minute.
> As pvestatd sends values around every 10s, is this 1-minute precision an average
> of 6x 10s values sent by pvestatd?
Yes (we also store the MAX)
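For illustration only (this is an assumed layout, not the actual pve2 rrd
schema): with a 10s step, RRAs that consolidate 6 primary data points give the
1min AVERAGE and MAX rows, e.g. with the RRDs perl bindings:

use RRDs;

# hypothetical example rrd: 10s step, one gauge DS,
# 1min AVERAGE and MAX rows (6 x 10s samples each), kept for 7 days
RRDs::create(
    "example.rrd",
    "--step", "10",
    "DS:cpu_pressure:GAUGE:120:0:U",
    "RRA:AVERAGE:0.5:6:10080",
    "RRA:MAX:0.5:6:10080",
);
my $err = RRDs::error;
die "rrd create failed: $err\n" if $err;

Keeping the MAX consolidation alongside the AVERAGE is what preserves short
spikes that the 1min average would otherwise flatten.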
> I'm currently working on a poc of vm b
>>I have no idea how reliable this is, because we do not use cgroups v2. But yes,
>>I think this would be useful.
I have tested it on a host with a lot of small vms (something like 400 vms
on 48 cores). With that number of vms, there were a lot of context
switches, and the vms were laggy.
cpu usage was
> I have noticed that it's possible to get pressure info for each vm/ct
> through cgroups
>
> /sys/fs/cgroup/unified/qemu.slice/.scope/cpu.pressure
> /sys/fs/cgroup/unified/lxc//cpu.pressure
>
>
> Maybe it would be great to have some new rrd graphs for each vm/ct?
> They are very useful counters
Hi,
I have noticed that it's possible to get pressure info for each vm/ct
through cgroups
/sys/fs/cgroup/unified/qemu.slice/.scope/cpu.pressure
/sys/fs/cgroup/unified/lxc//cpu.pressure
Maybe it would be great to have some new rrd graphs for each vm/ct?
They are very useful counters to know a sp
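A rough sketch of how reading such a per-vm counter could look (the helper
name read_vm_cpu_pressure and the exact cgroup v2 paths are assumptions here,
not pve code):

use strict;
use warnings;

# rough sketch: read cpu.pressure of a single qemu vm under the unified hierarchy;
# a container would use /sys/fs/cgroup/unified/lxc/$vmid/cpu.pressure instead
sub read_vm_cpu_pressure {
    my ($vmid) = @_;
    my $path = "/sys/fs/cgroup/unified/qemu.slice/$vmid.scope/cpu.pressure";
    open(my $fh, '<', $path) or return undef;
    my $res = {};
    while (defined(my $line = <$fh>)) {
        # lines look like: some avg10=0.00 avg60=0.00 avg300=0.00 total=0
        if ($line =~ /^(some|full)\s+avg10=(\S+)\s+avg60=(\S+)\s+avg300=(\S+)\s+total=(\d+)/) {
            $res->{$1} = { avg10 => $2, avg60 => $3, avg300 => $4, total => $5 };
        }
    }
    close($fh);
    return $res;
}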
read the new /proc/pressure/(cpu,memory,io) introduced in kernel 4.20.
This gives more granular information than loadaverage.
Signed-off-by: Alexandre Derumier
---
src/PVE/ProcFSTools.pm | 18 ++
1 file changed, 18 insertions(+)
diff --git a/src/PVE/ProcFSTools.pm b/src/PVE/ProcFSToo
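(the diff is cut off above; as a rough idea only, a helper reading
/proc/pressure could look like the following - the function name read_pressure
and the returned hash layout are assumptions, not necessarily what the actual
patch does)

# assumed helper, for illustration only
sub read_pressure {
    my $res = {};
    foreach my $type (qw(cpu memory io)) {
        my $path = "/proc/pressure/$type";
        open(my $fh, '<', $path) or next;   # PSI needs kernel >= 4.20
        while (defined(my $line = <$fh>)) {
            if ($line =~ /^(some|full)\s+avg10=(\S+)\s+avg60=(\S+)\s+avg300=(\S+)/) {
                $res->{$type}->{$1} = { avg10 => $2, avg60 => $3, avg300 => $4 };
            }
        }
        close($fh);
    }
    return $res;
}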