"-n" would be treated like a command here otherwise.
Fixes: dc36013 ("unconfigured: rework stopping systemd-udevd slightly")
Signed-off-by: Christoph Heiss
---
unconfigured.sh | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/unconfigured.sh b/unconfigured.sh
index ddab415..caa
On 6/26/25 09:04, Gabriel Goller wrote:
> * When booting up there is a race between openfabric initiating the
> interface (circuit) and the underlying interface coming up. This will
> result in fabricd not configuring the circuit. That's also why a FRR
> restart after the initial boot fixes t
Co-developed-by: Alexandre Derumier
Signed-off-by: Fiona Ebner
---
Changes since the previous series:
* adapt to earlier OVMF changes
* add switch to blockdev mirror (after putting the preparation in earlier
patches)
src/PVE/QemuServer.pm | 165 +-
src/PVE/
Replying to this, just so that we keep a record on the mailing list.
On 12.06.2025 17:01, Hannes Duerr wrote:
Tested as follows:
Created 5 Proxmox VE nodes
joined them as a cluster
added two interfaces per node, all interfaces are on the same host bridge.
Assigned the interfaces VLAN tags so that
> On 26.06.2025 13:36 CEST, Wolfgang Bumiller wrote:
>
>
> On Wed, Jun 25, 2025 at 11:56:31AM +0200, Daniel Kral wrote:
> > OpenSSH 10.0 removes support for the DSA signature algorithm [0], which
> > is the base version that will be shipped for Debian 13 trixie [1]. Since
> > it h
Preparation to make the zeroinit driver and backup work with blockdev.
Fiona Ebner (3):
PVE backup: prepare for the switch to using blockdev rather than drive
block/zeroinit: support using as blockdev driver
blockdev: query file child QMP command
block/zeroinit.c | 12 ---
bloc
Instead of continuing without informing the user, a warning will now be
displayed if the owner of a volume could not be determined due to a
storage error. In addition, an explicit check for the existence of the
underlying storage is added before the ownership check. If the storage
no longer exists,
On June 25, 2025 5:56 pm, Fiona Ebner wrote:
> Changes to OVMF patches (left-over from part two):
> * 01/31 is new
> * keep get_efivars_size() as a wrapper in QemuServer module
> * keep early check for CPU bitness in QemuServer module
> * use read-only flag for OVMF code
> * collect some parameters
Bump to 10.3.1-1+pve2. Add patch that adds networking.service to
frr.service dependencies.
Signed-off-by: Gabriel Goller
---
debian/changelog | 7 +++
1 file changed, 7 insertions(+)
diff --git a/debian/changelog b/debian/changelog
index 2f19f309424a..ec46e0a7c221 100644
--- a/debian/change
Add networking.service to the 'After' dependency directive. This guarantees
that frr.service will start only after networking.service has finished.
We had some issues with data races between FRR and ifupdown [0], mostly
around the dummy interface. At startup, FRR, and by extension fabricd, is
up faster tha
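For illustration only, the ordering described above corresponds to a unit directive along these lines, shown here as a drop-in sketch (the actual patch adds the directive to the packaged frr.service instead):

# Sketch of the ordering dependency, e.g. as /etc/systemd/system/frr.service.d/ordering.conf
[Unit]
After=networking.service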
On Thu, 26 Jun 2025 09:50:46 +0200, Christoph Heiss wrote:
> "-n" would be treated like a command here otherwise.
>
>
Applied, thanks!
[1/1] unconfigured: add missing [ ] around if clause
commit: 190a64e5dd300a9192dfe32d2f28b3589961652c
ZFS does not have a filesystem_path() method, so the default
implementation for qemu_blockdev_options() cannot be re-used. This is
most likely because snapshots are currently not directly accessible
via a filesystem path in the Proxmox VE storage layer.
Signed-off-by: Fiona Ebner
---
No changes
Introduce qemu_blockdev_options() plugin method.
In terms of the plugin API only, adding the qemu_blockdev_options()
method is a fully backwards-compatible change. Once qemu-server
switches to '-blockdev', however, plugins for which the default
implementation is not sufficient will not be usable for
Changes in v3:
* Make tidy.
* After the upstream discussion [0], do not patch QEMU. Instead, make
sure that the 'keyring' is set in the storage's configuration and
set the rbd_cache_policy on the EFI image itself. For the 'keyring'
option, we also need something in pve8to9 so that users that
For QEMU, when using '-blockdev', there is no way to specify the
keyring file as was possible with '-drive', so it has to be set in
the corresponding Ceph configuration file. As it applies to all images
on the storage, it also is the most natural place for the setting.
Signed-off-by: Fiona Ebner
Signed-off-by: Fiona Ebner
---
No changes in v3.
src/PVE/Storage/ISCSIDirectPlugin.pm | 14 ++
1 file changed, 14 insertions(+)
diff --git a/src/PVE/Storage/ISCSIDirectPlugin.pm
b/src/PVE/Storage/ISCSIDirectPlugin.pm
index 9b7f77c..8c6b4ab 100644
--- a/src/PVE/Storage/ISCSIDirectP
This is in preparation to switch qemu-server from using '-drive' to
the modern '-blockdev' in the QEMU commandline options as well as for
the qemu-storage-daemon, which only supports '-blockdev'. The plugins
know best what driver and options are needed to access an image, so
a dedicated plugin meth
This is mostly in preparation for external qcow2 snapshot support.
For internal qcow2 snapshots, which currently are the only supported
variant, it is not possible to attach the snapshot only. If access to
that is required it will need to be handled differently, e.g. via a
FUSE/NBD export.
Signed
For '-drive', qemu-server sets special cache options for EFI disks
using RBD. In preparation for seamlessly switching to the new '-blockdev'
interface, do the same here. Note that the issue from bug #3329, which
is solved by these cache options, still affects current versions.
With -blockdev, the cache
The mon host parsing is adapted from other places. While there
currently is no support for vector notation in the storage config
(it's a pve-storage-portal-dns-list option), it doesn't hurt to
anticipate it, should the list of mon hosts come from a ceph.conf
instead anytime in the future.
Co-devel
Reported-by: Alexandre Derumier
Signed-off-by: Fiona Ebner
---
No changes in v3.
src/PVE/Storage/ZFSPlugin.pm | 16
1 file changed, 16 insertions(+)
diff --git a/src/PVE/Storage/ZFSPlugin.pm b/src/PVE/Storage/ZFSPlugin.pm
index f0fa522..c03fcca 100644
--- a/src/PVE/Storage/ZF
On Wed, Jun 25, 2025 at 11:56:31AM +0200, Daniel Kral wrote:
> OpenSSH 10.0 removes support for the DSA signature algorithm [0], which
> is the base version that will be shipped for Debian 13 trixie [1]. Since
> it has been marked deprecated for some time and generating DSA
> signatures with OpenSS
Also allow finding block nodes by their node name rather than just via
an associated block backend, which might not exist for block nodes.
For regular drives, it is essential to not use the throttle group,
because otherwise the limits intended only for the guest would also
apply to the backup job.
Signed-off-by: Fiona Ebner
---
block/zeroinit.c | 12 +---
qapi/block-core.json | 5 +++--
2 files changed, 12 insertions(+), 5 deletions(-)
diff --git a/block/zeroinit.c b/block/zeroinit.c
index f9d513db15..036edb17f5 100644
--- a/block/zeroinit.c
+++ b/block/zeroinit.c
@@ -66,6 +6
There currently does not seem to be a good way to obtain information
about the file child of a node, so add a custom command. The
query-block and query-named-block-nodes commands lack the necessary
info and while x-debug-query-block-graph exists, that is explicitly
only for debugging and experiment
To make warnings visually consistent with the handling of other storage
errors in destroy_vm(), replace the use of warn with log_warn.
Signed-off-by: Michael Köppl
---
src/PVE/QemuServer.pm | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/PVE/QemuServer.pm b/src/PVE/Qe
Even though the value in $conf->{$opt} contains a volume ID for unused
mount points at the moment, this is not guaranteed to be true in the
future. To ensure that a valid volume ID is used here, call
parse_volume() first.
No functional change is intended here as the values of $conf->{$opt}
and
This series aims to fix #3711 [0] and streamline the detach/remove
behavior around volumes that are either mounted into a container or
attached to a VM as a hard disk. It also adds warnings in case
a volume's underlying storage does not exist anymore. It is a
continuation of a series from
Align error handling behavior when checking for linked clones with the
rest of destroy_vm()'s error handling approach. In case an error occurs,
a warning is printed and execution continues, since:
1. The same validation occurs later in the process
2. The VM removal will still be blocked if the
Similar to the handling of storage errors in other parts of
destroy_vm(), an error during the call to PVE::Storage::path() should
not stop the VM from being destroyed. Instead, the user should be warned
and the function should continue.
Originally-by: Stefan Hrdlicka
[ MK: log_warn if check fail
$volid states more clearly that it's a volume ID, avoiding confusion
about the values these variables hold.
No functional change intended.
Signed-off-by: Michael Köppl
---
src/PVE/LXC.pm | 20 ++--
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/src/PVE/LXC.pm b/
To keep behavior around non-existent storages consistent with
vdisk_free(), also print a warning if the storage does not exist.
Signed-off-by: Michael Köppl
---
src/PVE/Storage.pm | 5 +
1 file changed, 5 insertions(+)
diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index ec875fd0..ddb8822e
Instead of having the function fail later on, users are now warned if the
underlying storage no longer exists.
Signed-off-by: Michael Köppl
---
src/PVE/Storage.pm | 6 ++
1 file changed, 6 insertions(+)
diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index 69eb435f..ec875fd0 100755
--- a/src/P
This check matches the behavior already implemented for VMs and prevents
partial storage deletion if a container template has a linked clone. In
such cases, the destruction of the container template will now fail,
informing the user that the base volume is still in use by the linked
clone. In case
Errors during deletion of a mountpoint volume should not stop users from
destroying a container. Instead of failing, a warning is printed and the
destruction of the container continues.
Originally-by: Stefan Hrdlicka
[ MK: remove ignore-storage-errors param ]
Signed-off-by: Michael Köppl
---
s
The GUI and TUI installers already implement checks to ensure systems
have the minimum required number of disks available for the various RAID
configurations (min 2 disks for RAID1, min 4 disks for RAID10, etc).
This change adds an early check of the answer file to the
auto-installer, improving the
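As a rough, self-contained sketch of such an early answer-file check (the Answer struct and function are illustrative only, not the actual proxmox-auto-installer types; the RAID1/RAID10 minimums are the ones named above):

// Illustrative only: validate the disk selection of a parsed answer file
// before any installation step runs.
struct Answer {
    raid_level: String,          // e.g. "raid1", "raid10"
    selected_disks: Vec<String>, // disks picked in the answer file
}

fn check_answer_raid(answer: &Answer) -> Result<(), String> {
    // Minimum member counts as described above; other levels left at 1 here.
    let min = match answer.raid_level.as_str() {
        "raid1" => 2,
        "raid10" => 4,
        _ => 1,
    };
    if answer.selected_disks.len() < min {
        return Err(format!(
            "RAID level '{}' requires at least {} disks, but the answer file selects {}",
            answer.raid_level,
            min,
            answer.selected_disks.len()
        ));
    }
    Ok(())
}

fn main() {
    let answer = Answer {
        raid_level: "raid10".to_string(),
        selected_disks: vec!["sda".into(), "sdb".into(), "sdc".into()],
    };
    // Fails early: RAID10 needs at least 4 disks, only 3 were selected.
    assert!(check_answer_raid(&answer).is_err());
}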
Check that the configured swapsize is not greater than hdsize / 8 as
stated in the admin guide [0]. Define the behavior for the auto-installer as
well as the TUI and GUI installers.
[0] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#advanced_lvm_options
Signed-off-by: Michael Köppl
---
P
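A minimal sketch of the swapsize constraint itself (sizes as GiB floats purely for the example; the installer uses its own option types):

// Illustrative only: enforce swapsize <= hdsize / 8 as recommended in the
// admin guide.
fn check_swapsize(swapsize_gib: f64, hdsize_gib: f64) -> Result<(), String> {
    let max_swap = hdsize_gib / 8.0;
    if swapsize_gib > max_swap {
        return Err(format!(
            "swap size {swapsize_gib} GiB exceeds the maximum of {max_swap} GiB (hdsize / 8)"
        ));
    }
    Ok(())
}

fn main() {
    // 16 GiB swap on a 100 GiB disk is rejected (the limit is 12.5 GiB)...
    assert!(check_swapsize(16.0, 100.0).is_err());
    // ...while 8 GiB is accepted.
    assert!(check_swapsize(8.0, 100.0).is_ok());
}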
Signed-off-by: Michael Köppl
---
proxmox-auto-installer/src/utils.rs | 12 +++-
proxmox-auto-installer/tests/parse-answer.rs | 1 +
.../parse_answer_fail/duplicate_disk.json | 3 +++
.../parse_answer_fail/duplicate_disk.toml | 15 +++
4 fil
Signed-off-by: Michael Köppl
---
proxmox-installer-common/src/utils.rs | 17 +++--
1 file changed, 15 insertions(+), 2 deletions(-)
diff --git a/proxmox-installer-common/src/utils.rs
b/proxmox-installer-common/src/utils.rs
index 8adcec0..1fe6a74 100644
--- a/proxmox-installer-common
Add checks for valid subnet mask (greater than /0 and at most /32 for
IPv4). In addition, check if the address entered by the user is valid
within the given subnet, i.e. not a network address or broadcast
address. /31 is considered an exception in accordance with RFC3021 [0],
considering any of the
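A standalone sketch of those checks for IPv4 (not the installer's actual code, which uses its own CIDR types):

// Illustrative only: prefix must be within (0, 32]; for prefixes shorter
// than /31 the host address may be neither the network nor the broadcast
// address. /31 (RFC 3021) and /32 are exempt from that check.
use std::net::Ipv4Addr;

fn check_host_address(addr: Ipv4Addr, prefix: u8) -> Result<(), String> {
    if prefix == 0 || prefix > 32 {
        return Err(format!("invalid IPv4 prefix length /{prefix}"));
    }
    if prefix >= 31 {
        return Ok(()); // every address is a valid host address here
    }
    let mask = u32::MAX << (32 - prefix);
    let ip = u32::from(addr);
    let network = ip & mask;
    let broadcast = network | !mask;
    if ip == network {
        return Err(format!("{addr}/{prefix} is the network address"));
    }
    if ip == broadcast {
        return Err(format!("{addr}/{prefix} is the broadcast address"));
    }
    Ok(())
}

fn main() {
    assert!(check_host_address(Ipv4Addr::new(192, 168, 1, 10), 24).is_ok());
    assert!(check_host_address(Ipv4Addr::new(192, 168, 1, 0), 24).is_err());   // network
    assert!(check_host_address(Ipv4Addr::new(192, 168, 1, 255), 24).is_err()); // broadcast
    assert!(check_host_address(Ipv4Addr::new(10, 0, 0, 1), 31).is_ok());       // RFC 3021
}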
The requirement of hdsize/4 was not checked anywhere and adding sanity
checks for maxroot<=hdsize/4 would stop users from installing PVE on
smaller disks (see [0]), whereas the installation actually tries its
best to successfully install even on disks below 12GB. So instead of
adding sanity checks,
Adapt the return type of CidrAddressEditView's get_value implementation
for the FormViewGetValue trait to handle errors in case of invalid CIDR
similarly to how other (parsing) errors are handled in the TUI's network dialog.
Signed-off-by: Michael Köppl
---
proxmox-tui-installer/src/main.rs | 8 +--
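As a rough sketch of surfacing invalid CIDR input as a proper error (standalone, using plain std types rather than the installer's own CIDR types):

// Illustrative only: parse "a.b.c.d/p" and return an error for malformed
// input instead of continuing with a bogus value.
use std::net::Ipv4Addr;

fn parse_cidr(input: &str) -> Result<(Ipv4Addr, u8), String> {
    let (addr, prefix) = input
        .split_once('/')
        .ok_or_else(|| format!("'{input}' is not in CIDR notation"))?;
    let addr: Ipv4Addr = addr
        .parse()
        .map_err(|err| format!("invalid address '{addr}': {err}"))?;
    let prefix: u8 = prefix
        .parse()
        .map_err(|err| format!("invalid prefix '{prefix}': {err}"))?;
    if prefix > 32 {
        return Err(format!("prefix /{prefix} is out of range for IPv4"));
    }
    Ok((addr, prefix))
}

fn main() {
    assert!(parse_cidr("192.168.1.10/24").is_ok());
    assert!(parse_cidr("192.168.1.10/64").is_err());
    assert!(parse_cidr("not-an-address").is_err());
}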
Instead of having parts of the RAID setup checks scattered in multiple
places, move the core of the checks to implementations of the
ZfsRaidLevel and BtrfsRaidLevel enums.
Signed-off-by: Michael Köppl
---
No functional change intended.
proxmox-installer-common/src/disk_checks.rs | 156 -
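The shape of that refactoring, sketched with a single illustrative enum (the names follow the ones mentioned above; the exact minimums besides RAID1/RAID10 are not taken from the patch):

// Illustrative only: the disk-count check lives on the RAID level enum
// itself, so the TUI/GUI installers and the auto-installer can share it.
#[derive(Debug, Clone, Copy)]
enum BtrfsRaidLevel {
    Raid0,
    Raid1,
    Raid10,
}

impl BtrfsRaidLevel {
    fn min_disks(self) -> usize {
        match self {
            BtrfsRaidLevel::Raid0 => 1,
            BtrfsRaidLevel::Raid1 => 2,
            BtrfsRaidLevel::Raid10 => 4,
        }
    }

    fn check_disks(self, disk_count: usize) -> Result<(), String> {
        if disk_count < self.min_disks() {
            Err(format!(
                "{:?} needs at least {} disks, got {}",
                self,
                self.min_disks(),
                disk_count
            ))
        } else {
            Ok(())
        }
    }
}

fn main() {
    assert!(BtrfsRaidLevel::Raid10.check_disks(2).is_err());
    assert!(BtrfsRaidLevel::Raid1.check_disks(2).is_ok());
}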
The goal of this series is to add additional sanity checks to the
auto-installer and the TUI and GUI installers. The following checks were
added:
* Btrfs / ZFS RAID: check if the required number of disks is available
* LVM: check if swapsize < hdsize
* Duplicate disks in answer file disk selection (see the sketch after this list)
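A minimal sketch of that duplicate-disk check (standalone, not the actual answer parsing code):

// Illustrative only: reject an answer file that lists the same disk twice.
use std::collections::HashSet;

fn check_duplicate_disks(disks: &[String]) -> Result<(), String> {
    let mut seen = HashSet::new();
    for disk in disks {
        if !seen.insert(disk.as_str()) {
            return Err(format!("disk '{disk}' is listed more than once"));
        }
    }
    Ok(())
}

fn main() {
    let disks = vec!["sda".to_string(), "sdb".to_string(), "sda".to_string()];
    assert!(check_duplicate_disks(&disks).is_err());
}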