On 2025-07-08 20:07, Michael Köppl wrote:
> Just added a suggestion inline
>
> On 6/24/25 13:28, Lukas Wagner wrote:
>> The backup job details view was never updated after the overhaul of the
>> notification system. In this commit we remove the left-over
>> notification-policy/target handling and
--- Begin Message ---
> + my $backing_path = $class->path($scfg, $name, $storeid, $backing_snap) if $backing_snap;
>>also, this should probably encode a relative path so that renaming
>>the VG and
>>adapting the storage.cfg entry works without breaking the back
>>reference?
About relative
--- Begin Message ---
> +
> + # we can simply reformat the current lvm volume to avoid
> + # a long safe remove. (not needed here, as the allocated space
> + # still has the same owner)
> + eval { lvm_qcow2_format($class, $storeid, $scfg, $volname, $format, $snap) };
>>what if the volu
Those were only used in the first iteration of the new notification
stack, which unfortunately hit pvetest too soon. These two keys have no
effect and were proactively removed by the GUI when changing
backup job settings.
With the major update to PVE 9 these can finally be dropped. The pve8to9
scr
These were only used in the 'old' revamped notification stack which was
briefly available on pvetest. With PVE 9 we can finally get completely
rid of these.
Signed-off-by: Lukas Wagner
Reviewed-by: Michael Köppl
Tested-by: Michael Köppl
---
Notes:
Changes since v1:
- Rebased onto lat
The backup job details view was never updated after the overhaul of the
notification system. In this commit we remove the left-over
notification-policy/target handling and change the view so that we
display the current configuration based on notification-mode, mailto and
mailnotification.
Signed-o
Add test cases to verify that the node affinity rules, which will be
added in a following patch, are functionally equivalent to the
existing HA groups.
These test cases verify the following scenarios for (a) unrestricted and
(b) restricted groups (i.e. non-strict and strict node affinity rules):
Signed-off-by: Shan Shaji
---
Release notes:
- build with Flutter 3.29
- allow filtering for paused guests, e.g., suspended VMs
- avoid displaying an unknown status for VMs in the prelaunch
state (e.g., backing up stopped VMs)
- pre-select the configured default authentication-realm in
If we see that the migration to the new pve-{type}-9.0 rrd format has been done
or is ongoing (new dir exists), we collect and send out the new format with
additional
columns for nodes and VMs (guests).
Those are:
Nodes:
* memfree
* arcsize
* pressures:
* cpu some
* io some
* io full
* me
We add several new columns to nodes and VMs (guest) RRDs. See further
down for details. Additionally we change the RRA definitions on how we
aggregate the data to match how we do it for the Proxmox Backup Server
[0].
The migration of an existing installation is handled by a dedicated
tool. Only onc
Superseded by
https://lore.proxmox.com/pve-devel/20250709141034.169726-1-f.we...@proxmox.com/
___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
When discovering a new volume group (VG), for example on boot, LVM
triggers autoactivation. With the default settings, this activates all
logical volumes (LVs) in the VG. Activating an LV creates a
device-mapper device and a block device under /dev/mapper.
Autoactivation is problematic for shared
When discovering a new volume group (VG), for example on boot, LVM
triggers autoactivation. With the default settings, this activates all
logical volumes (LVs) in the VG. Activating an LV creates a
device-mapper device and a block device under /dev/mapper.
This is not necessarily problematic for l
Starting with PVE 9, the LVM and LVM-thin plugins create new LVs with
the `--setautoactivation n` flag to fix #4997 [1]. However, this does
not affect already existing LVs of setups upgrading from PVE 8.
Hence, add a new script under /usr/share/pve-manager/migrations that
finds guest volume LVs wi
# Summary
With default settings, LVM autoactivates LVs when it sees a new VG, e.g. after
boot or iSCSI login. In a cluster with guest disks on a shared LVM VG (e.g. on
top of iSCSI/Fibre Channel (FC)/direct-attached SAS), this can indirectly cause
guest creation or migration to fail. See bug #4997
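The flag mentioned above can be cleared on existing LVs one by one. A migration script along those lines might first list the affected LVs and then emit the `lvchange` calls. A minimal sketch, assuming the lvm2 JSON report fields (`vg_name`, `lv_name`, `autoactivation`, lvm2 >= 2.03.12) — this is not the actual script shipped by pve-manager:

```python
import json

def plan_autoactivation_fixes(lvs_json: str) -> list:
    """Return lvchange commands for LVs that still have autoactivation
    enabled. Input is assumed to be the output of
    `lvs -o vg_name,lv_name,autoactivation --reportformat json`
    (verify the field names on the target system)."""
    report = json.loads(lvs_json)
    cmds = []
    for lv in report["report"][0]["lv"]:
        if lv.get("autoactivation") == "enabled":
            cmds.append(
                f"lvchange --setautoactivation n {lv['vg_name']}/{lv['lv_name']}"
            )
    return cmds

# example report as lvm2 would emit it (values are made up)
sample = json.dumps({"report": [{"lv": [
    {"vg_name": "shared_vg", "lv_name": "vm-100-disk-0", "autoactivation": "enabled"},
    {"vg_name": "shared_vg", "lv_name": "vm-101-disk-0", "autoactivation": ""},
]}]})
```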
On 2025-07-08 20:20, Michael Köppl wrote:
> Had a closer look at the implementation, which apart from 2 suggestions
> on pve-manager 2/2 looks good.
>
> Quickly had a look at the Backup job details dialog as well and tested
> through the various combinations for notification settings. Information
On Wed, Jul 09, 2025 at 02:43:06PM +0200, Filip Schauer wrote:
> On 25/06/2025 10:50, Wolfgang Bumiller wrote:
> > > +my $pid = PVE::LXC::find_lxc_pid($vmid);
> > > +my $rootdir = "/proc/$pid/root";
> > ^ When using this path over a potentially longer period of time it's
> > better to use
>
--- Begin Message ---
On Tue, 2025-07-08 at 12:58 +0200, Dominik Csapak wrote:
> On 7/8/25 12:04, Adam Kalisz wrote:
> > Hi Dominik,
> >
>
> Hi,
>
> > this is a big improvement, I have done some performance
> > measurements
> > again:
> >
> > Ryzen:
> > 4 worker threads:
> > restore image compl
Hi
We’re increasingly approached by partners who are looking to improve
performance of Proxmox virtualization environments for use cases such as
databases, AI workloads, and other resource-intensive tasks.
In order to maximize storage subsystem performance within virtual machines,
some of them
From: Folke Gleumes
Originally-by: Folke Gleumes
[AL: rebased]
Signed-off-by: Aaron Lauterer
---
src/PVE/ProcFSTools.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/PVE/ProcFSTools.pm b/src/PVE/ProcFSTools.pm
index b67211e..f342890 100644
--- a/src/PVE/ProcFSTools.pm
From: Folke Gleumes
Originally-by: Folke Gleumes
[AL:
rebased on current master
merged into single function for generic cgroups
]
Signed-off-by: Aaron Lauterer
---
Notes:
while the read_cgroup_pressure function would fit better into
SysFSTools.pm I have kept it in ProcFSTools.p
pve2.3-vm was introduced with commit 3b6ad3ac back in 2013. By now
there should not be any combination of clustered nodes that still send
the old pve2-vm variant.
Signed-off-by: Aaron Lauterer
---
PVE/API2Tools.pm | 18 +-
1 file changed, 1 insertion(+), 17 deletions(-)
dif
the newer pve2.3-vm schema was introduced with commit ba9dcfc1 back
in 2013. By now there should be no cluster where an older node might
still send the old pve2-vm schema.
Signed-off-by: Aaron Lauterer
---
src/pmxcfs/status.c | 13 +++--
1 file changed, 3 insertions(+), 10 deletions
For PVE9 we plan to add additional fields in the metrics that are
collected and distributed in the cluster. The new fields/columns are
added at the end of the current ones. This makes it possible for PVE8
installations to still use them by cutting the new additional data.
To make it more future pr
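The "cutting" described here can be sketched as follows; the column names and the colon separator are assumptions for illustration, not the real pve2 field list:

```python
# Illustrative subset of node columns known to an older (PVE8-era)
# consumer; the real column list lives in pve-manager/pmxcfs, not here.
PVE2_NODE_COLUMNS = ["uptime", "loadavg", "maxcpu", "cpu",
                     "iowait", "memtotal", "memused"]

def parse_node_metrics(line: str, columns=PVE2_NODE_COLUMNS) -> dict:
    """Parse a metrics line, silently dropping any trailing columns a
    newer sender appended: zip() stops at the shorter sequence."""
    return dict(zip(columns, line.split(":")))
```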
This patch series expands the RRD format for nodes and VMs. For all types
(nodes, VMs, storage) we adjust the aggregation to align them with the way they
are done on the Backup Server. Therefore, we have new RRD definitions for all
3 types.
New values are added for nodes and VMs. In particular:
Nod
if the new rrd pve-node-9.0 files are present, they contain the current
data and should be used.
'decade' is now possible as timeframe with the new RRD format.
Signed-off-by: Aaron Lauterer
---
Notes:
changes since:
RFC:
* switch from pve9- to pve-{type}-9.0 schema
PVE/API2/Nodes.
as this will also be displayed in the status of VMs
Signed-off-by: Aaron Lauterer
---
Notes:
this is a dedicated patch that should be applied only for PVE9 as it
adds new data in the result
PVE/API2/Cluster.pm | 7 +++
PVE/API2Tools.pm| 3 +++
2 files changed, 10 insertions(+)
From: Folke Gleumes
Originally-by: Folke Gleumes
[AL:
* rebased on current master
* switch to new, more generic read_cgroup_pressure function
]
Signed-off-by: Aaron Lauterer
---
src/PVE/LXC.pm | 8
1 file changed, 8 insertions(+)
diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
i
The mem field itself will switch from the outside view to the "inside"
view if the VM is reporting detailed memory usage information via the
ballooning device.
Since sometimes other processes belong to a VM too, for example swtpm,
we collect all PIDs belonging to the VM cgroup and fetch their PSS da
Signed-off-by: Aaron Lauterer
---
Notes:
changes since RFC:
* switch from pve9-vm to pve-vm-90 schema
src/PVE/API2/LXC.pm | 11 ++-
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 28f7fdd..fc59ec9 100644
--- a/src/PVE
From: Folke Gleumes
Originally-by: Folke Gleumes
[AL:
* rebased on current master
* switch to new, more generic read_cgroup_pressure function
]
Signed-off-by: Aaron Lauterer
---
src/PVE/QemuServer.pm | 8
1 file changed, 8 insertions(+)
diff --git a/src/PVE/QemuServer.pm b/sr
For PVE9 there will be additional fields in the metrics that are
collected. The new columns/fields are added at the end of the current
ones. Therefore, if we get the new format, we need to cut it.
Paths to rrd filenames needed to be set manually to 'pve2-...' and will
use the 'node' part instead o
Signed-off-by: Aaron Lauterer
---
Notes:
changes since:
RFC:
* switch from pve9-storage to pve-storage-90 schema
src/PVE/API2/Storage/Status.pm | 9 -
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/src/PVE/API2/Storage/Status.pm b/src/PVE/API2/Storage/Status.p
Signed-off-by: Aaron Lauterer
---
.cargo/config.toml | 5 +
.gitignore | 5 +
Cargo.toml | 20 ++
build.rs| 29 +++
src/lib.rs | 5 +
src/main.rs | 502
src/parallel_handler.rs
We add a new function to handle different key names, as it would
otherwise become quite unreadable.
It checks which key format exists for the type and resource:
* the old pve2-{type} / pve2.3-vm
* the new pve-{type}-{version}
and will return the one that was found. Since we will only have one key
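The lookup this describes might look roughly like the following; the `{schema}/{id}` key layout is an assumption for the sketch, not the real pmxcfs status key format:

```python
def resolve_rrd_key(existing_keys, resource_type, resource_id, version="9.0"):
    """Prefer the new pve-{type}-{version} key and fall back to the old
    pve2-{type} (or pve2.3-vm for guests) one; return whichever exists."""
    old = "pve2.3-vm" if resource_type == "vm" else f"pve2-{resource_type}"
    for schema in (f"pve-{resource_type}-{version}", old):
        key = f"{schema}/{resource_id}"
        if key in existing_keys:
            return key
    return None
```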
Signed-off-by: Aaron Lauterer
---
Notes:
changes since:
RFC:
* switch from pve9-vm to pve-vm-90 schema
src/PVE/API2/Qemu.pm | 11 ++-
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/src/PVE/API2/Qemu.pm b/src/PVE/API2/Qemu.pm
index 2e6358e..2867a53 100644
--- a
Instead of RSS, let's use the same PSS values as for the specific host
view as default, in case this value is not overwritten by the balloon
info.
Signed-off-by: Aaron Lauterer
---
src/PVE/QemuServer.pm | 6 ++
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/src/PVE/QemuServer.
I sent a v1:
https://lore.proxmox.com/pve-devel/20250709112309.2299797-4-a.laute...@proxmox.com/T/#ma39a76bc318b3733527639e39e4670a3232f6d87
It incorporates the suggested changes and has some other changes as
well, more in the cover letter of it and the individual patches
We've been thinking more about the ipv6 forwarding issue and still
aren't sure about the best approach, so we'd like to hear other
opinions.
Problem
===
As explained in the commit "frr: add global ipv6 forwarding" we enabled
*global* ipv6 forwarding for two reasons:
1) So that non-full
On 09/07/2025 14:34, Filip Schauer wrote:
@@ -2827,6 +2882,30 @@ sub vm_start {
     PVE::GuestHelpers::exec_hookscript($conf, $vmid, 'post-start');
+    if ($conf->{ipmanagehost}) {
+        my @dhcpv4_interfaces = ();
+        my @dhcpv6_interfaces = ();
+        foreach my $k (sort keys %$conf) {
+            next if $k !~ m/^n
Am 09.07.25 um 08:40 schrieb Friedrich Weber:
> On 08/07/2025 19:01, Thomas Lamprecht wrote:
>> Am 08.07.25 um 18:43 schrieb Friedrich Weber:
>>> When upgrading to Debian Trixie, iproute2 is also upgraded. Its
>>> postinst deletes /etc/iproute2/rt_tables.d if it is currently empty
>>> (or only cont
Superseded by:
https://lore.proxmox.com/pve-devel/20250709123435.64796-1-f.scha...@proxmox.com
On 11/06/2025 16:48, Filip Schauer wrote:
Add basic support for OCI (Open Container Initiative) images [0] as
container templates.
On 25/06/2025 10:50, Wolfgang Bumiller wrote:
+my $pid = PVE::LXC::find_lxc_pid($vmid);
+my $rootdir = "/proc/$pid/root";
^ When using this path over a potentially longer period of time it's
better to use
my ($pid, $pidfd) = PVE::LXC::open_lxc_pid($vmid);
The open pidfd should gua
On 25/06/2025 10:26, Wolfgang Bumiller wrote:
I think this should be handled with a separate key in the containers
network configuration. Maybe a "setup" property which defaults to
"container" and can be set to "host" (not sure if we ever need more,
if we know we don't, it could be a boolean...)
On 17/06/2025 10:01, Christoph Heiss wrote:
I also test with `ghcr.io/nixos/nix:latest`, which interestingly fails
to start with
DEBUG utils - ../src/lxc/utils.c:run_buffer:560 - Script exec
/usr/share/lxcfs/lxc.mount.hook 107 lxc mount produced output:
/usr/share/lxcfs/lxc.mount.hook: 15:
--- Begin Message ---
> +sub get_snap_name {
>>should this be public?
I'll make it private
> +sub get_snap_volname {
>>should this be public?
> +
> +sub parse_snapname {
>>should this be public?
These two methods are used in volume_snapshot_info(), defined in Plugin.pm,
and used by the LVM plugin too
On 2025-07-08 20:17, Michael Köppl wrote:
> On 6/24/25 13:28, Lukas Wagner wrote:
>> The backup job details view was never updated after the overhaul of the
>> notification system. In this commit we remove the left-over
>> notification-policy/target handling and change the view so that we
>> displ
On Wed, 09 Jul 2025 12:22:42 +0200, Shan Shaji wrote:
>
Applied, thanks!
[1/1] chore: bump version to 1.8.1(46)
commit: ef8735a8e61dcbbe372c189c84af53950425dd23
Add basic support for OCI (Open Container Initiative) images [0] as
container templates.
An OCI image can be obtained from a registry like Docker Hub. This patch
series does not implement the OCI Distribution Spec, so this requires
external tools.
Either using Docker:
```
$ docker pull httpd
$ d
Signed-off-by: Filip Schauer
---
Introduced in v3
proxmox-io/src/lib.rs | 3 ++
proxmox-io/src/range_reader.rs | 94 ++
2 files changed, 97 insertions(+)
create mode 100644 proxmox-io/src/range_reader.rs
diff --git a/proxmox-io/src/lib.rs b/proxmox-io/
This aims to add basic support for the Open Container Initiative image
format according to the specification. [0]
[0] https://github.com/opencontainers/image-spec/blob/main/spec.md
Signed-off-by: Filip Schauer
---
This patch depends on changes made to proxmox-perl-rs in patch 03/13.
Meaning that
Signed-off-by: Filip Schauer
---
Introduced in v3
pct.adoc | 72 +---
1 file changed, 64 insertions(+), 8 deletions(-)
diff --git a/pct.adoc b/pct.adoc
index 529b72f..b538f56 100644
--- a/pct.adoc
+++ b/pct.adoc
@@ -54,15 +54,22 @@ the cluster
This prevents an error during Debian container setup when the
/etc/network directory is missing. This fixes container creation from
Debian based OCI images.
Signed-off-by: Filip Schauer
---
src/PVE/LXC/Setup/Debian.pm | 1 +
1 file changed, 1 insertion(+)
diff --git a/src/PVE/LXC/Setup/Debian.p
This is needed for OCI container images bundled as tar files, as
generated by `docker save`. OCI images do not need additional
compression, since the content is usually compressed already.
Signed-off-by: Filip Schauer
---
Changed since v2:
* Modify VZTMPL_EXT_RE_1 regex to put "tar" into capture
When a container uses the default `/sbin/init` entrypoint, network
interface configuration is usually managed by processes within the
container. However, containers with a different entrypoint might not
have any internal network management process. Consequently, IP addresses
might not be assigned.
This can still break `/bin/sh` if an OCI image injects a different
`libc.so.6` with $LD_LIBRARY_PATH.
Signed-off-by: Filip Schauer
---
Arbitrary code execution is theoretically still possible with a
specially crafted OCI image that provides a shared library and points
$LD_LIBRARY_PATH to its pare
Signed-off-by: Filip Schauer
---
This patch depends on the proxmox-oci crate added in patch 02/13.
Changed since v2:
* rebase onto newest master (6132d4d36cbd)
* forward all errors to Perl
* remove oci-spec dependency
Changed since v1:
* rebase on latest master (3d9806cb3c7f)
* add new dependenc
Ensure that both /etc/systemd/network and /etc/systemd/system-preset
exist before writing files into them. This fixes container creation from
the docker.io/fedora & docker.io/ubuntu OCI images.
Signed-off-by: Filip Schauer
---
Changed since v2:
* rebase onto newest master (5a8b3f962f16) and re-fo
Signed-off-by: Filip Schauer
---
Changed since v2:
* rebase onto newest master (5a8b3f962f16) and re-format with
proxmox-perltidy
src/PVE/API2/LXC.pm | 2 +-
src/PVE/LXC.pm| 2 ++
src/PVE/LXC/Config.pm | 12
3 files changed, 15 insertions(+), 1 deletion(-)
diff --git
Signed-off-by: Filip Schauer
---
This depends on the change made to pve-storage in patch 11/13.
It might make sense to bump pve-storage and with it bump the dependency
to libpve-storage-perl in debian/control.
Changed since v2:
* rebase onto newest master (84b22751f211) and re-format
Introduced
This crate can parse an OCI image tarball and extract its rootfs. Layers
are applied in sequence, but an overlay filesystem is currently not
used.
Signed-off-by: Filip Schauer
---
Changed since v2:
* remove reachable unwraps & refactor code
* increase hasher buffer size from 4096 to 32768 (matchi
Containers that do not use the default `/sbin/init` entrypoint may lack
in-container network management. A previous commit already handles
static IP addresses. Now this commit also handles DHCP. This is done
using a `dhclient` process for each network interface.
Signed-off-by: Filip Schauer
---
C
This patch series contains the following features:
* transparent altname support for {pve, proxmox}-firewall and pve-network
* pveeth tool for pinning NIC names
Both are features aimed at mitigating the fallout caused from changing network
interface names. Sending it as an RFC, since I will be gon
pveeth is a tool for pinning / unpinning network interface names. It
works by generating a link file in /usr/local/lib/systemd/network and
then updating the following files by replacing the old name with the
pinned name (this also works for altnames!):
* /etc/network/interfaces
* /etc/pve/nodes/no
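A pin generated this way could look like the following. The MAC-based match, the file contents, and the helper name are assumptions for illustration; the message does not show what pveeth actually writes:

```python
def render_link_file(mac: str, pinned_name: str) -> str:
    """Render a systemd-networkd .link file pinning an interface name by
    its MAC address, destined for /usr/local/lib/systemd/network/."""
    return (
        "[Match]\n"
        f"MACAddress={mac}\n"
        "\n"
        "[Link]\n"
        f"Name={pinned_name}\n"
    )
```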
Those helpers will be used by several other packages to implement the
altname support. Those helpers will also be used by the new pveeth
tool which can be used for pinning interface names.
Signed-off-by: Stefan Hanreich
---
src/PVE/Network.pm | 45 +
1
This works by reading all the currently configured altnames and then
replacing any occurrences of altnames when creating the firewall rules.
We handle it this way because nftables has no support for matching on
the altnames of interfaces.
Signed-off-by: Stefan Hanreich
---
proxmox-firewall/src/co
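The replacement step could be sketched like this; `ifname` and `altnames` are the actual fields in `ip -j -details link show` output, while the helper names are made up:

```python
def map_altnames(links) -> dict:
    """Build an altname -> primary-name map from parsed
    `ip -details -json link show` output."""
    return {alt: link["ifname"]
            for link in links
            for alt in link.get("altnames", [])}

def resolve_iface(name: str, mapping: dict) -> str:
    """Replace an altname with the physical interface name; pass
    unknown names through unchanged."""
    return mapping.get(name, name)
```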
Add a bare-bones struct for parsing the output of `ip -details -json
link show`. Currently we only require the name of the interfaces as
well as its altnames for transparently supporting altnames in the
firewall.
Signed-off-by: Stefan Hanreich
---
Notes:
There is probably a better place than
With the introduction of pveeth, users can now pin their NICs with
prefix nicX. In order for our stack to correctly pick up the pinned
interfaces, we need to add this prefix to the regex used for detecting
physical interfaces.
In the future we should abandon this method of detecting physical
inter
Add support for altnames by transparently mapping them with the
information from 'ip link' when generating the ruleset. The firewall
will now replace any altname in the ruleset with the actual, physical,
name from the interface. We handle it this way, because iptables
cannot match on the altnames o
Since this only has an effect on applying the configuration, users
will still need to reapply the configuration when an interface changes
names / altnames. In order to add full altname support for IS-IS,
altname support would need to be implemented in FRR.
Signed-off-by: Stefan Hanreich
---
Note
--- Begin Message ---
For external snapshots, we simply use the snap volname as src and
don't use the internal snapshot option on the command line.
Signed-off-by: Alexandre Derumier
---
src/PVE/QemuServer/QemuImage.pm| 6 ++-
src/test/run_qemu_img_convert_tests.pl | 59 ++
2
--- Begin Message ---
Signed-off-by: Alexandre Derumier
---
src/PVE/Storage/Common.pm | 33
src/PVE/Storage/Plugin.pm | 40 +++
2 files changed, 53 insertions(+), 20 deletions(-)
diff --git a/src/PVE/Storage/Common.pm b/src/PVE
--- Begin Message ---
Template guests are never running and never write
to their disks/mountpoints, so the $running parameters there can be
dropped.
Signed-off-by: Alexandre Derumier
---
src/PVE/Storage/RBDPlugin.pm | 4 +---
src/PVE/Storage/ZFSPlugin.pm | 4 +---
2 files changed, 2 insertions(+)
--- Begin Message ---
Signed-off-by: Alexandre Derumier
---
src/PVE/Storage/Plugin.pm | 32
1 file changed, 24 insertions(+), 8 deletions(-)
diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
index c2f376b..65a34b1 100644
--- a/src/PVE/Storage/Plu
--- Begin Message ---
This adds a $running param to volume_snapshot.
It can be used if some extra actions need to be done at the storage
layer when the snapshot has already been done at the QEMU level.
Signed-off-by: Alexandre Derumier
---
ApiChangeLog | 4
src/PVE/Storage.
--- Begin Message ---
We format the LVM logical volume with qcow2 to handle snapshot chains.
Like for qcow2 files, when a snapshot is taken, the current LVM volume
is renamed to the snap volname, and a new current LVM volume is created
with the snap volname as backing file.
The snapshot volname is similar to lv
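The rename-and-recreate scheme described here, reduced to the commands involved. The snap volname layout and VG handling below are hypothetical; the patch defines the real naming:

```python
def plan_external_snapshot(vg: str, vol: str, snap: str):
    """Commands to take an external qcow2 snapshot on LVM: rename the
    current volume to the snap volname, then create a new current volume
    backed by it. Purely a planning sketch; nothing is executed."""
    snap_vol = f"snap_{vol}_{snap}"  # hypothetical naming scheme
    return [
        ["lvrename", vg, vol, snap_vol],
        ["qemu-img", "create", "-f", "qcow2",
         "-b", f"/dev/{vg}/{snap_vol}", "-F", "qcow2", f"/dev/{vg}/{vol}"],
    ]
```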
--- Begin Message ---
fixme:
- add test for internal (was missing) && external qemu snapshots
- is it possible to use blockjob transactions for commit && stream
for atomic disk commit ?
Signed-off-by: Alexandre Derumier
---
src/PVE/QemuConfig.pm | 4 +-
src/PVE/QemuServer.pm
--- Begin Message ---
add a snapext option to enable the feature
When a snapshot is taken, the current volume is renamed to the snap volname
and a new current image is created with the snap volume as backing file
Signed-off-by: Alexandre Derumier
---
src/PVE/Storage.pm| 1 -
src/PVE/Stora
--- Begin Message ---
We need to define node-names for all backing chain images,
to be able to live-rename them with blockdev-reopen.
For linked clones, we don't need to define the base image(s) chain.
They are auto-added with the #block nodename.
Signed-off-by: Alexandre Derumier
---
src/PVE/QemuServer/B
--- Begin Message ---
and use it for plugin linked clones
This also enables extended_l2=on, as it's mandatory for backing file
preallocation.
Preallocation was missing previously, so it should increase performance
for linked clones now (around x5 in randwrite 4k).
cluster_size is set to 128k, as it
--- Begin Message ---
This patch series implements qcow2 external snapshot support for files &&
lvm volumes.
The current internal qcow2 snapshots have bad write performance because no
metadata can be preallocated.
This is particularly visible on a shared filesystem like ocfs2 or gfs2.
Also other
--- Begin Message ---
Returns whether the volume supports qemu snapshots:
'internal' : do the snapshot with qemu internal snapshot
'external' : do the snapshot with qemu external snapshot
undef : does not support qemu snapshot
Signed-off-by: Alexandre Derumier
---
ApiChangeLog
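An illustrative version of such a tri-state query; the mapping below is a made-up example of the three possible answers, as the actual per-plugin decisions live in the storage plugins and are not reproduced here:

```python
def qemu_snapshot_mode(storage_type: str, fmt: str):
    """Return 'internal', 'external', or None for a volume, mirroring
    the three answers described in the commit message (example logic
    only, not the real plugin code)."""
    if fmt != "qcow2":
        return None
    return "external" if storage_type == "lvm" else "internal"
```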
--- Begin Message ---
not sure if it was an error, but it's failing now with the new, more
restricted volname parsing
Signed-off-by: Alexandre Derumier
---
src/test/cfg2cmd/efi-raw-old.conf | 2 +-
src/test/cfg2cmd/efi-raw-old.conf.cmd | 2 +-
src/test/cfg2cmd/efi-raw-temp
--- Begin Message ---
and add missing preallocation
https://github.com/qemu/qemu/commit/dc5f690b97ccdffa79fe7169bb26b0ebf06688bf
Signed-off-by: Alexandre Derumier
---
src/PVE/Storage/Plugin.pm | 28 +---
1 file changed, 25 insertions(+), 3 deletions(-)
diff --git a/src/P
--- Begin Message ---
Signed-off-by: Alexandre Derumier
---
ApiChangeLog | 3 +++
src/PVE/Storage.pm | 25 +
src/PVE/Storage/BTRFSPlugin.pm | 6 ++
src/PVE/Storage/ESXiPlugin.pm| 6 ++
src/PVE/Storage/LVMPlugin.pm | 6
--- Begin Message ---
use the same template as the zfspoolplugin tests
Signed-off-by: Alexandre Derumier
---
src/test/Makefile | 5 +-
src/test/run_test_lvmplugin.pl | 577 +
2 files changed, 581 insertions(+), 1 deletion(-)
create mode 100755 src/test/r
--- Begin Message ---
Signed-off-by: Alexandre Derumier
---
src/PVE/Storage/Plugin.pm | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
index aab2024..b65d296 100644
--- a/src/PVE/Storage/Plugin.pm
+++ b/src/PVE/Storag
--- Begin Message ---
This computes the whole size of a qcow2 volume including data + metadata.
Needed for qcow2 over LVM volumes.
Signed-off-by: Alexandre Derumier
---
src/PVE/Storage/Plugin.pm | 23 +++
1 file changed, 23 insertions(+)
diff --git a/src/PVE/Storage/Plugin.pm b/s
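The quantity being computed is essentially the virtual size plus the qcow2 metadata. A simplified estimate, assuming 8-byte L2 entries and treating refcount blocks coarsely (the patch's exact formula is not quoted here):

```python
def qcow2_full_size(virtual_size: int, cluster_size: int = 64 * 1024) -> int:
    """Rough upper bound for a fully allocated qcow2 image:
    data + header + L2 tables + a coarse refcount estimate."""
    entries_per_l2 = cluster_size // 8           # 8-byte L2 entries
    data_per_l2 = cluster_size * entries_per_l2  # data covered per L2 table
    l2_tables = -(-virtual_size // data_per_l2)  # ceiling division
    l2_bytes = l2_tables * cluster_size
    header = cluster_size
    refcount = max(cluster_size, l2_bytes // 4)  # coarse estimate
    return virtual_size + header + l2_bytes + refcount
```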
On Wed, 09 Jul 2025 16:09:58 +0200, Friedrich Weber wrote:
> When discovering a new volume group (VG), for example on boot, LVM
> triggers autoactivation. With the default settings, this activates all
> logical volumes (LVs) in the VG. Activating an LV creates a
> device-mapper device and a block d
On Wed, 09 Jul 2025 16:09:59 +0200, Friedrich Weber wrote:
> When discovering a new volume group (VG), for example on boot, LVM
> triggers autoactivation. With the default settings, this activates all
> logical volumes (LVs) in the VG. Activating an LV creates a
> device-mapper device and a block d
sorry, there was a bit of a mess in this series, especially patches to
the pve-cluster repo.
sent a v2:
https://lore.proxmox.com/pve-devel/20250709163703.2540012-1-a.laute...@proxmox.com/T/#t
it should now be good and track the changes for the pve8 and pve9 branches
cleanly for the affected rep
With PVE9 now we have additional fields in the metrics that are
collected and distributed in the cluster. The new fields/columns are
added at the end of the existing ones. This makes it possible for PVE8
installations to still use them by cutting the new additional data.
To make it more future pro
The mem field itself will switch from the outside view to the "inside"
view if the VM is reporting detailed memory usage information via the
ballooning device.
Since sometimes other processes belong to a VM too, for example swtpm,
we collect all PIDs belonging to the VM cgroup and fetch their PSS da
From: Folke Gleumes
Originally-by: Folke Gleumes
[AL:
* rebased on current master
* switch to new, more generic read_cgroup_pressure function
]
Signed-off-by: Aaron Lauterer
---
src/PVE/LXC.pm | 8
1 file changed, 8 insertions(+)
diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
i
There's zero actual commit message and basically no rustdoc comment for
the public interface; that needs to improve, especially library crates
should be held to higher standards in this regard.
non-exhaustive list of things that I'd find relevant for such crates:
- describing the background in the
On Wed, 09 Jul 2025 16:10:00 +0200, Friedrich Weber wrote:
> Starting with PVE 9, the LVM and LVM-thin plugins create new LVs with
> the `--setautoactivation n` flag to fix #4997 [1]. However, this does
> not affect already existing LVs of setups upgrading from PVE 8.
>
> Hence, add a new script u
We add a new function to handle different key names, as it would
otherwise become quite unreadable.
It checks which key format exists for the type and resource:
* the old pve2-{type} / pve2.3-vm
* the new pve-{type}-{version}
and will return the one that was found. Since we will only have one key
From: Folke Gleumes
Originally-by: Folke Gleumes
[AL: rebased]
Signed-off-by: Aaron Lauterer
---
src/PVE/ProcFSTools.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/PVE/ProcFSTools.pm b/src/PVE/ProcFSTools.pm
index b67211e..f342890 100644
--- a/src/PVE/ProcFSTools.pm
pve2.3-vm was introduced with commit 3b6ad3ac back in 2013. By now
there should not be any combination of clustered nodes that still send
the old pve2-vm variant.
Signed-off-by: Aaron Lauterer
---
PVE/API2Tools.pm | 18 +-
1 file changed, 1 insertion(+), 17 deletions(-)
dif
Signed-off-by: Aaron Lauterer
---
.cargo/config.toml | 5 +
.gitignore | 5 +
Cargo.toml | 20 ++
build.rs| 29 +++
src/lib.rs | 5 +
src/main.rs | 502
src/parallel_handler.rs