Signed-off-by: Alexandre Derumier
---
PVE/Service/pvestatd.pm | 14 ++
1 file changed, 14 insertions(+)
diff --git a/PVE/Service/pvestatd.pm b/PVE/Service/pvestatd.pm
index 832d9dc5..7ed12504 100755
--- a/PVE/Service/pvestatd.pm
+++ b/PVE/Service/pvestatd.pm
@@ -236,6 +236,9 @@ sub u
Signed-off-by: Alexandre Derumier
---
PVE/Service/pvestatd.pm | 4
1 file changed, 4 insertions(+)
diff --git a/PVE/Service/pvestatd.pm b/PVE/Service/pvestatd.pm
index 7ed12504..1e7400e0 100755
--- a/PVE/Service/pvestatd.pm
+++ b/PVE/Service/pvestatd.pm
@@ -170,12 +170,16 @@ sub update_node
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 2 ++
1 file changed, 2 insertions(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 4fc183e..09f3a0c 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2971,6 +2971,8 @@ sub vmstatus {
$d->{cpu} = $old->{cpu
Signed-off-by: Alexandre Derumier
---
src/PVE/LXC.pm | 2 ++
1 file changed, 2 insertions(+)
diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index fe63087..af47ff9 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -299,6 +299,8 @@ sub vmstatus {
} else {
$d->{cpu} = 0;
This creates a single RRD file for each metric.
Allowed paths:
pve2-metrics/vms/<vmid>/<metric>
pve2-metrics/nodes/<node>/<metric>
pve2-metrics/storages/<node>/<storage>/<metric>
Signed-off-by: Alexandre Derumier
---
data/src/status.c | 51 +++
1 file changed, 51 insertions(+)
diff --git a/data/src/status.
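As a rough sketch of the per-metric layout described above (not the actual
patch): pvestatd could push one value per RRD key, reusing
PVE::Cluster::broadcast_rrd(), which it already uses for the existing
pve2-node/pve2-vm RRDs. The key layout and the "ctime:value" payload format
below are assumptions, not what status.c actually expects.

# Hedged sketch only: one RRD per metric, keyed as pve2-metrics/vms/<vmid>/<metric>.
use strict;
use warnings;
use PVE::Cluster;

sub broadcast_vm_metrics {
    my ($vmid, $metrics) = @_;    # e.g. { cpu_pressure_some_avg10 => 0.05, ... }

    my $ctime = time();
    for my $name (sort keys %$metrics) {
        # one RRD per metric; exact key naming and data format are assumptions
        my $key = "pve2-metrics/vms/$vmid/$name";
        PVE::Cluster::broadcast_rrd($key, "$ctime:$metrics->{$name}");
    }
}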
only "some" values for now, not sure we need full values
Signed-off-by: Alexandre Derumier
---
PVE/Service/pvestatd.pm | 35 +++
1 file changed, 35 insertions(+)
diff --git a/PVE/Service/pvestatd.pm b/PVE/Service/pvestatd.pm
index b1e71ec8..832d9dc5 100755
--- a/
Signed-off-by: Alexandre Derumier
---
src/PVE/CGroup.pm | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/src/PVE/CGroup.pm b/src/PVE/CGroup.pm
index 44b3297..d3873fd 100644
--- a/src/PVE/CGroup.pm
+++ b/src/PVE/CGroup.pm
@@ -380,7 +380,8 @@ sub get_pressure_stat {
},
available since kernel 5.13
https://lore.kernel.org/all/20210303034659.91735-2-zhouchengm...@bytedance.com/T/#u
Signed-off-by: Alexandre Derumier
---
src/PVE/CGroup.pm | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/src/PVE/CGroup.pm b/src/PVE/CGroup.pm
index d3873fd..bc5b8
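For context, get_pressure_stat() has to parse cgroup v2 pressure files
(cpu.pressure, memory.pressure, io.pressure), which contain a "some" line and
a "full" line (for cpu, the "full" line is only available since kernel 5.13,
as the linked change notes), each with avg10/avg60/avg300/total fields. A
minimal parsing sketch, not the code from this patch:

# Sketch: parse a PSI file such as
#   some avg10=0.00 avg60=0.12 avg300=0.05 total=123456
#   full avg10=0.00 avg60=0.00 avg300=0.00 total=654321
use strict;
use warnings;

sub parse_pressure_file {
    my ($path) = @_;

    my $res = {};
    open(my $fh, '<', $path) or return undef;
    while (my $line = <$fh>) {
        if ($line =~ m/^(some|full)\s+avg10=([\d.]+)\s+avg60=([\d.]+)\s+avg300=([\d.]+)\s+total=(\d+)/) {
            $res->{$1} = { avg10 => $2, avg60 => $3, avg300 => $4, total => $5 };
        }
    }
    close($fh);
    return $res;
}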
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 5 +
1 file changed, 5 insertions(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 9441cf2..4fc183e 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2933,6 +2933,11 @@ sub vmstatus {
if ($pstat->{vsize}) {
Hi,
I'm still working on VM balancing/scheduling, and need some new metrics.
This patch series adds new metric stats to the RRDs:
- cpu, mem, io pressure for qemu/lxc/host
- hostcpu/hostmem cgroup stats for qemu
- ksm
As we discussed last year, these new metrics are pushed into single RRD files,
like proxmox
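As an illustration of where the ksm value can come from (not necessarily how
this series gathers it): the kernel exposes KSM counters under
/sys/kernel/mm/ksm/, e.g. pages_sharing, which can be converted to shared
bytes by multiplying with the page size. A hedged sketch:

# Sketch only: read the KSM pages_sharing counter and convert it to bytes.
# The sysfs path is standard kernel ABI; this is not the code from the series.
use strict;
use warnings;
use POSIX ();
use PVE::Tools;

sub read_ksm_shared_bytes {
    my $pagesize = POSIX::sysconf(POSIX::_SC_PAGESIZE()) || 4096;
    my $pages = PVE::Tools::file_read_firstline('/sys/kernel/mm/ksm/pages_sharing');
    return 0 if !defined($pages);
    return int($pages) * $pagesize;
}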
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 11 +++
1 file changed, 11 insertions(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index e9aa248..9441cf2 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2922,8 +2922,11 @@ sub vmstatus {
my $pstat =
Please ignore this, there was an old file in my folder.
On 5/24/22 16:45, Stefan Hrdlicka wrote:
This adds a dropdown box for the LVM, LVMThin & ZFS storage options where a
cluster node needs to be chosen. By default, the first node in the list is
selected. It restricts the storage to be only avai
This adds a dropdown box for the LVM, LVMThin & ZFS storage options where a
cluster node needs to be chosen. By default, the first node in the list is
selected. It restricts the storage to be available only on the
selected node.
Signed-off-by: Stefan Hrdlicka
---
www/manager6/controller/StorageEdi
This patch doesn't follow the solution as suggested in #2822. It adds a node
combobox at the top of the Add Storage dialog for ZFS and LVM(Thin).
The user has to select the node where the storage should be added. The
restriction to the selected node is set automatically as well. The
default value is
This adds a dropdown box for the LVM, LVMThin & ZFS storage options where a
cluster node needs to be chosen. By default, the current node is
selected. It restricts the storage to be available only on the
selected node.
Signed-off-by: Stefan Hrdlicka
---
Depends on the change in pve-storage
www/m
This enables forwarding of the request to the correct node if a node is set.
Signed-off-by: Stefan Hrdlicka
---
PVE/API2/Storage/Config.pm | 7 +++
1 file changed, 7 insertions(+)
diff --git a/PVE/API2/Storage/Config.pm b/PVE/API2/Storage/Config.pm
index 6bd770e..82b73ca 100755
--- a/PVE/API2/St
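For reference, the standard way a PVE API call gets forwarded is the proxyto
attribute in register_method(); a hedged sketch of what that can look like is
below. The method name, schema, and the handling of a missing optional node
are illustrative assumptions, not the actual change to Config.pm:

# Sketch: declare proxyto => 'node' so the request is forwarded to the node
# given in the 'node' parameter. Schema and fallback handling are illustrative.
__PACKAGE__->register_method({
    name => 'create',
    path => '',
    method => 'POST',
    proxyto => 'node',
    parameters => {
        additionalProperties => 0,
        properties => {
            node => { type => 'string', format => 'pve-node', optional => 1 },
            storage => { type => 'string', format => 'pve-storage-id' },
        },
    },
    returns => { type => 'null' },
    code => sub {
        my ($param) = @_;
        # ... create the storage on the target node ...
        return undef;
    },
});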
Forgot to mention in the commit message: I believe this is the issue the user
runs into here:
https://forum.proxmox.com/threads/zfs-replication-sometimes-fails.104134/
When running replication, we don't want to keep replication states for
non-local VMs. Normally this would not be a problem, since on migration
we transfer the states anyway, but when the ha-manager steals a VM, it
cannot do that. In that case, having an old state lying around is
harmful, since the
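A hedged sketch of the kind of cleanup described here: drop state entries of
guests that are no longer local, using the cluster-wide vmlist to decide
locality. The state layout and helper name are assumptions, not the actual
pve-guest-common code.

# Sketch only: prune replication states of guests that are not local anymore.
# Assumes $states is a hashref keyed by vmid; the real state handling differs.
use strict;
use warnings;
use PVE::Cluster;
use PVE::INotify;

sub prune_nonlocal_states {
    my ($states) = @_;

    my $nodename = PVE::INotify::nodename();
    my $vmlist = PVE::Cluster::get_vmlist();

    for my $vmid (keys %$states) {
        my $entry = $vmlist->{ids}->{$vmid};
        # keep states only for guests that exist and currently run on this node
        delete $states->{$vmid} if !$entry || $entry->{node} ne $nodename;
    }
    return $states;
}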
If we have multiple jobs for the same vmid with the same schedule, the
last_sync, next_sync and vmid will always be the same, so the order
depends on the order of the $jobs hash (which is random; thanks, Perl).
To have a fixed order, take the jobid into consideration as well.
Signed-off-by: Dominik Csa
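A minimal sketch of the described ordering fix, with field names assumed from
the message rather than copied from pve-guest-common: add the jobid as the
final tie-breaker so jobs that tie on everything else still sort
deterministically.

# Sketch: sort replication job ids; jobid is the last tie-breaker so the order
# no longer depends on Perl's hash ordering. Field names are assumptions.
use strict;
use warnings;

sub sort_job_ids {
    my ($jobs) = @_;    # hashref: jobid => { next_sync => ..., vmid => ..., ... }

    return sort {
        $jobs->{$a}->{next_sync} <=> $jobs->{$b}->{next_sync}
            || $jobs->{$a}->{vmid} <=> $jobs->{$b}->{vmid}
            || $a cmp $b
    } keys %$jobs;
}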
When canceling a backup in PVE via a signal it's easy to run into a
situation where the job is already failing when the backup_cancel QMP
command comes in. With a bit of unlucky timing on top, it can happen
that job_exit() runs between scheduling of job_cancel_bh() and
execution of job_cancel_bh().