On 13/02/2025 13:57, Fiona Ebner wrote:
> The description 'VZDump backup file' for content type 'backup' is
> wrong for PBS and other future backup providers. Just use 'Backup' to
> describe the content type everywhere and avoid confusion.
>
> Signed-off-by: Fiona Ebner
> ---
> www/manager6/Util
D-Bus has a default limit of 512 connections, and signals should be
disconnected as soon as they are no longer needed.
This should alleviate https://bugzilla.proxmox.com/show_bug.cgi?id=5876.
Signed-off-by: Maximiliano Sandoval
---
Differences from v1:
- remove two guards on finish_callback
Am 06.03.25 um 11:44 schrieb Dominik Csapak:
> so new guests (or guests with the 'latest' machine type) have
> that setting automatically disabled.
>
> The previous default (enabling S3/S4) does not make too much sense in a
> virtual environment, and sometimes causes problems, e.g. Windows default
The helpers had lots of unnecessary intermediate assignments, which we
can just simplify.
Signed-off-by: Stefan Hanreich
---
src/PVE/Network/SDN/Ipams/NetboxPlugin.pm | 13 -
1 file changed, 4 insertions(+), 9 deletions(-)
diff --git a/src/PVE/Network/SDN/Ipams/NetboxPlugin.pm
b/sr
We use the IP ranges of Netbox to represent the DHCP ranges. We were
already querying the IP ranges for an IP when starting a guest, but we
never created the IP ranges in the first place. Additionally, implement
deleting the IP ranges when the subnet gets deleted.
These methods try to check for any
The Netbox integration did not properly return the IP when creating
the entries in Netbox. This led to errors when starting the guest,
stating that an IP could not be allocated.
Originally-by: lou lecrivain
Signed-off-by: Stefan Hanreich
---
src/PVE/Network/SDN/Ipams/NetboxPlugin.pm | 8 ++--
While it should make practically no difference, it opens up potential
errors in the future, so just remove the conditional assignments and
explicitly define the variable as undef, so the intention is clearer.
Signed-off-by: Stefan Hanreich
---
src/PVE/Network/SDN/Ipams/NetboxPlugin.pm | 6 +++
Create a helper method that abstracts the common code used in making
Netbox requests. Move all api_request invocations over to using the
helper method. This saves us from writing lots of repeated code.
This also updates the helpers and introduces error checking there.
Helpers didn't catch any erro
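The idea of the patch above — one shared request helper that also checks for errors instead of repeating the boilerplate at every call site — can be sketched in Python. This is an illustration only, not the Perl helper from the patch; the names (`NetboxClient`, `ApiError`, `fake_transport`) and the transport interface are hypothetical:

```python
class ApiError(Exception):
    """Raised when the IPAM API returns a non-2xx status."""


class NetboxClient:
    def __init__(self, base_url, token, transport):
        # transport: callable(method, url, headers) -> (status, json_body);
        # a stub stands in here for a real HTTP library.
        self.base_url = base_url.rstrip('/')
        self.headers = {
            'Authorization': f'Token {token}',
            'Content-Type': 'application/json',
        }
        self.transport = transport

    def request(self, method, path):
        # Single place that performs the call and checks the result,
        # so callers no longer repeat the error handling.
        status, body = self.transport(method, self.base_url + path, self.headers)
        if not 200 <= status < 300:
            raise ApiError(f'{method} {path} failed with status {status}')
        return body


# Stub transport so the sketch runs without a Netbox instance.
def fake_transport(method, url, headers):
    if url.endswith('/api/ipam/prefixes/'):
        return 200, {'results': [{'prefix': '10.0.0.0/24'}]}
    return 404, {}


client = NetboxClient('https://netbox.example', 'secret', fake_transport)
prefixes = client.request('GET', '/api/ipam/prefixes/')
```

With this shape, every call site becomes a one-liner and a failed request surfaces as a single exception type instead of being silently ignored.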
Because of how the Netbox IPAM plugin works (utilizing IP ranges to
represent DHCP ranges), we need a hook in the IPAM plugin that runs on
updates to the subnet because DHCP ranges can be edited. The update
hook in Netbox checks which DHCP ranges got added and which got
deleted and then performs th
This function did not catch any possible errors, nor respect the
$noerr parameter.
Signed-off-by: Stefan Hanreich
---
src/PVE/Network/SDN/Ipams/NetboxPlugin.pm | 4
1 file changed, 4 insertions(+)
diff --git a/src/PVE/Network/SDN/Ipams/NetboxPlugin.pm
b/src/PVE/Network/SDN/Ipams/NetboxPlu
Deleting a subnet did not delete any created entities in Netbox.
Implement deletion of a subnet by deleting all entities that are
created in Netbox upon creation of a subnet.
We are checking for any leftover IP assignments before deleting the
prefix, so we do not accidentally delete any manually c
On 07/03/2025 12:49, Fabian Grünbichler wrote:
> Reviewed-by: Fabian Grünbichler
Thanks, added the trailer on the upstream PR:
https://github.com/openzfs/zfs/pull/17125/commits/693ab2e972f64d9418109d273f5294f3a401dae6
___
pve-devel mailing list
pve
Am 06.03.25 um 11:44 schrieb Dominik Csapak:
> If we have multiple 'globalFlags', we have to encode each one separately
> on the commandline with '-global OPTION', since QEMU does not allow to
> have multiple options here.
>
> We currently only have one such flag that used the 'globalFlags' list,
diff --git a/www/manager6/sdn/fabrics/Common.js
b/www/manager6/sdn/fabrics/Common.js
new file mode 100644
index ..72ec093fc928
--- /dev/null
+++ b/www/manager6/sdn/fabrics/Common.js
@@ -0,0 +1,222 @@
+Ext.define('PVE.sdn.Fabric.InterfacePanel', {
+extend: 'Ext.grid.Panel',
+mi
+ Proxmox.Utils.API2Request({
+ url: `/cluster/sdn/fabrics/`,
+ method: 'GET',
+ success: function(response, opts) {
+ let ospf = Object.entries(response.result.data.ospf);
+ let openfabric =
Object.entries(re
Am 13.02.25 um 14:17 schrieb Dominik Csapak:
> this will fail with the new checks for mdev when we don't have the
> correct config.
Nit: it would only fail after the next commit ;)
>
> namely a device that has mediated devices, should have 'mdev' set in the
> mapping config
>
> Signed-off-by: D
so we test that logic at least once.
Signed-off-by: Dominik Csapak
---
drop the pve version test, and move it, since we might want to have it
regardless if the rest is applied or not.
test/cfg2cmd/q35-windows-pinning.conf | 5 +
test/cfg2cmd/q35-windows-pinning.conf.cmd | 24 ++
Add the pveX variants (where X > 0) to the list too, so one knows they
exist. This also allows them to be shown and chosen in the UI.
Signed-off-by: Dominik Csapak
---
new in v2
PVE/API2/Qemu/Machine.pm | 24 +++-
1 file changed, 23 insertions(+), 1 deletion(-)
diff --git a/
superseded by v2:
https://lore.proxmox.com/pve-devel/20250307144436.122621-1-d.csa...@proxmox.com/
We only have one place where we use it, so add the global flag inline,
instead of collecting and doing it at the end. This makes it consistent
with our other places where we need '-global' flags.
Adapt the tests, since that global flag changes place, the resulting
qemu hardware should be identical
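The constraint described above can be illustrated on the QEMU command line. This is a hedged fragment, not the exact invocation pve generates; the `ICH9-LPC.disable_s3`/`disable_s4` properties apply to q35 machines:

```sh
# Each '-global' option must be passed separately; QEMU does not accept
# multiple key=value pairs in a single '-global' argument.
qemu-system-x86_64 \
    -machine pc-q35-9.2 \
    -global ICH9-LPC.disable_s3=1 \
    -global ICH9-LPC.disable_s4=1
```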
When creating or updating guests with ostype windows, we want to pin the
machine version to a specific one. Since the introduction of that feature,
we never bumped the pve machine version, so this was missing.
Append the pve machine version only if it's not 0 so we don't add that
unnecessarily.
Signe
So users can disable them (they're enabled by default in QEMU)
Signed-off-by: Dominik Csapak
---
changes from v1:
* rework the method with suggestions from fiona
* change way we add flags because we don't have globalflags anymore
PVE/QemuServer.pm | 4
PVE/QemuServer/Machine.pm |
and retroactively add descriptions for previous bumps.
Signed-off-by: Dominik Csapak
---
new in v2
PVE/API2/Qemu/Machine.pm | 9 +
PVE/QemuServer/Machine.pm | 15 +++
2 files changed, 24 insertions(+)
diff --git a/PVE/API2/Qemu/Machine.pm b/PVE/API2/Qemu/Machine.pm
index 1
so new guests (or guests with the 'latest' machine type) have that
setting automatically disabled.
The previous default (enabling S3/S4) does not make too much sense in a
virtual environment, and sometimes causes problems, e.g. Windows defaults
to using 'hybrid shutdown' and 'fast startup' when S4
and clarify what windows guests will be pinned to.
Signed-off-by: Dominik Csapak
---
new in v2
qm.adoc | 15 +++
1 file changed, 15 insertions(+)
diff --git a/qm.adoc b/qm.adoc
index 4bb8f2c..16ed870 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -173,6 +173,21 @@ This means that after a fre
when we don't have a specific machine version on a windows guest, we use
the creation meta info to pin the machine version. Currently we always
append the pve machine version from the current installed kvm version,
which is not necessarily the version we pinned the guest to.
Instead, use the same
since they cause some problems (e.g. Windows hybrid shutdown is then
enabled by default, which causes vGPU problems). Libvirt/virsh also
disables that by default (and tries to prevent enabling it.)
This series introduces a new pve1 version for 9.2 machine versions, and
pins new windows guests to th
Am 13.02.25 um 14:16 schrieb Dominik Csapak:
> pve-docs:
>
> Dominik Csapak (2):
> qm: resource mapping: add description for `mdev` option
> qm: resource mapping: document `live-migration-capable` setting
Those two as well:
Reviewed-by: Fiona Ebner
Good work on this series!
Am 13.02.25 um 14:17 schrieb Dominik Csapak:
> if the hardware/driver is capable, the admin can now mark a pci device
> as 'live-migration-capable', which then tries enabling live migration
> for such devices.
>
> mark it as experimental when configuring and in the migrate window
>
> Signed-off-b
Two bugs were noticed by Fiona, namely missing use statements – which was
not relevant in practice, but still good to have them correct, as it
can easily cause "spooky action at a distance" when changing things
elsewhere. The other one was a conditional use-statement, which is
always a rather nasty b
On 07/03/2025 13:29, Gabriel Goller wrote:
> This includes a new frr-test-tools package that we are not interested in
> (it's a testing package), so we ignore it with a BuildProfile.
>
> Signed-off-by: Gabriel Goller
> ---
> Makefile | 2 +-
> debian/control | 9 +
> frr
Am 13.02.25 um 14:17 schrieb Dominik Csapak:
> we now return the 'allowed_nodes'/'not_allowed_nodes' also if the vm is
> running, when it has mapped resources. So do those checks independently
> so that the user has instant feedback where those resources exist.
>
> Signed-off-by: Dominik Csapak
R
Am 27.01.25 um 12:29 schrieb Fiona Ebner:
> Changes in v5:
> * everything new in v5 except the last 3 patches
> * new approach, use special config section instead of config key
> * add tests and some fixes for configuration handling
> * make special section handling more generic
> * also check for
Am 07.03.25 um 14:19 schrieb Fiona Ebner:
> Am 13.02.25 um 14:17 schrieb Dominik Csapak:
>> this now takes into account the 'not_allowed_nodes' hash we get from the
>> api call. With that, we can now limit the 'local_resources' check for
>> online vms only, as for offline guests, the 'unavailable-r
Am 13.02.25 um 14:17 schrieb Dominik Csapak:
> showing a final transfer log line helps with identifying what was
> actually transferred. E.g. it could happen that the VFIO state was only
> transferred in the last iteration. In such a case we would not see that
> information at all.
>
> Signed-off-
Am 07.03.25 um 14:30 schrieb Fiona Ebner:
> Am 13.02.25 um 14:17 schrieb Dominik Csapak:
>> those should be able to migrate even for online vms. If the mapping does
>> not exist on the target node, that will be caught further down anyway.
>>
>> Signed-off-by: Dominik Csapak
>> ---
>> no changes in
On 3/6/25 17:42, Fiona Ebner wrote:
Am 13.02.25 um 14:17 schrieb Dominik Csapak:
we currently only call deactivate_volumes, but we actually want to call
the whole vm_stop_cleanup, since that is not invoked by the vm_stop
above (we cannot parse the config anymore) and might do other cleanups
we a
Am 07.03.25 um 13:20 schrieb Fiona Ebner:
> Am 13.02.25 um 14:17 schrieb Dominik Csapak:
>> so that we can show a proper warning in the migrate dialog and check it
>> in the bulk migrate precondition check
>>
>> the unavailable_storages and should be the same as before, but
>> we now always return
This simplifies the comparison of IPs by using the object-oriented
interface over the procedural one. Also instantiate the IPs via the
new() method call rather than the indirect 'new' syntax, which isn't a
keyword in Perl. This fixes the respective perlcritic warning.
Signed-off-by: Stefan Hanreich
---
src/PVE/Netwo
Am 13.02.25 um 14:17 schrieb Dominik Csapak:
> those should be able to migrate even for online vms. If the mapping does
> not exist on the target node, that will be caught further down anyway.
>
> Signed-off-by: Dominik Csapak
> ---
> no changes in v6
> PVE/API2/Nodes.pm | 13 +++--
> 1
Am 13.02.25 um 14:17 schrieb Dominik Csapak:
> this now takes into account the 'not_allowed_nodes' hash we get from the
> api call. With that, we can now limit the 'local_resources' check for
> online vms only, as for offline guests, the 'unavailable-resources' hash
> already includes mapped device
Am 13.02.25 um 14:17 schrieb Dominik Csapak:
> by also providing the global config in assert_valid, and by also
> adding the mdev config in the 'toCheck' object in the gui
>
> For the gui, we extract the mdev property from the global entry, and add
> it to the individual mapping entries, that way
Check for overlapping DHCP ranges and reject them if there are any
overlaps. If we can be certain that there are no overlapping DHCP
ranges, this saves us from running into errors later in IPAM modules
where overlapping DHCP ranges are not allowed.
Signed-off-by: Stefan Hanreich
---
src/PVE/Netwo
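The overlap check described in the commit message above can be sketched as follows. This is an illustrative Python version (hypothetical helper name), not the Perl code from the patch: after sorting the ranges by start address, any overlap must occur between neighbours, so one linear pass suffices.

```python
import ipaddress


def check_dhcp_ranges(ranges):
    """ranges: list of (start, end) IP-string pairs. Raises ValueError
    if any two ranges overlap; returns None otherwise."""
    parsed = sorted(
        (ipaddress.ip_address(s), ipaddress.ip_address(e)) for s, e in ranges
    )
    for (s1, e1), (s2, e2) in zip(parsed, parsed[1:]):
        # After sorting, an overlap means the next range starts
        # before the previous one ends.
        if s2 <= e1:
            raise ValueError(f'DHCP ranges overlap: {s1}-{e1} and {s2}-{e2}')
```

Rejecting overlaps up front, as the patch does, turns a late IPAM-module failure into an immediate, clear validation error.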
Signed-off-by: Stefan Hanreich
---
src/PVE/Network/SDN/SubnetPlugin.pm | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/PVE/Network/SDN/SubnetPlugin.pm
b/src/PVE/Network/SDN/SubnetPlugin.pm
index 4bff2dd..8733018 100644
--- a/src/PVE/Network/SDN/SubnetPlugin.pm
+++ b/s
Net::IP accepts a myriad of different IP objects from ranges to
prefixes to singular IPs. We check if the object consists only of a
singular IP and normalize the IP if it has size 1 (since then it
could still be a /32 prefix or a range consisting of one IP).
Otherwise we would theoretically accept
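The normalization logic described above — accept a prefix, a range, or a plain IP, but only treat it as a singular IP when it covers exactly one address — can be sketched in Python. The helper name is hypothetical and this is not the Net::IP-based Perl code from the patch:

```python
import ipaddress


def normalize_single_ip(text):
    """Return the normalized single IP if `text` (a plain IP, a /32-style
    prefix, or a start-end range) covers exactly one address, else None."""
    if '-' in text:
        start, end = (ipaddress.ip_address(p.strip()) for p in text.split('-', 1))
        # A range of size 1 has identical endpoints.
        return str(start) if start == end else None
    net = ipaddress.ip_network(text, strict=False)
    # A prefix of size 1 (e.g. a /32) also counts as a single IP.
    return str(net.network_address) if net.num_addresses == 1 else None
```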
This includes a new frr-test-tools package that we are not interested in
(it's a testing package), so we ignore it with a BuildProfile.
Signed-off-by: Gabriel Goller
---
Makefile | 2 +-
debian/control | 9 +
frr| 2 +-
3 files changed, 11 insertions(+), 2 deletions(-)
Am 13.02.25 um 14:17 schrieb Dominik Csapak:
> Show the transferred VFIO state (when there is one), but since there is
> no total here, we can't show that, just what was transferred up until
> now.
>
> Signed-off-by: Dominik Csapak
Without the unrelated hunk below:
Reviewed-by: Fiona Ebner
Oops, forgot to add the repo to the header. This obviously applies to
the `frr` repo.
Add the patches for the dummy_as_loopback series, which are already
merged upstream, but not yet released. Also enable it by default.
Link: https://github.com/FRRouting/frr/pull/18242
Signed-off-by: Gabriel Goller
---
...A_IF_DUMMY-flag-for-dummy-interfaces.patch | 125 +++
...on-to-tre
Update changelog with latest version
Signed-off-by: Gabriel Goller
---
debian/changelog | 9 +
1 file changed, 9 insertions(+)
diff --git a/debian/changelog b/debian/changelog
index e630dba40305..a1ec8e899f11 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,12 @@
+frr (10
These patches enable the bgp daemon per default and implement the
bgp-evpn autort feature. Also add the topotest for the autort feature.
Signed-off-by: Gabriel Goller
---
...atch => 0001-enable-bgp-bfd-daemons.patch} | 17 +-
...on-for-RT-auto-derivation-to-force-A.patch | 77 ++--
.../0003-te
Am 13.02.25 um 14:17 schrieb Dominik Csapak:
> so that we can show a proper warning in the migrate dialog and check it
> in the bulk migrate precondition check
>
> the unavailable_storages and should be the same as before, but
> we now always return (not_)allowed_nodes too.
What do you mean by "u
> Friedrich Weber hat am 07.03.2025 10:52 CET geschrieben:
> # Summary
>
> With default settings, LVM autoactivates LVs when it sees a new VG, e.g. after
> boot or iSCSI login. In a cluster with guest disks on a shared LVM VG (e.g. on
> top of iSCSI/Fibre Channel (FC)/direct-attached SAS), this c
On 3/7/25 12:59, Mira Limbeck wrote:
> Thank you for the patch!
>
> some comments inline
>
>> +sub iscsi_test_session {
>> +my ($portal, $sid) = @_;
>> +my $cmd = [$ISCSIADM, '--mode', 'session', '--sid', $sid, '-P1'];
>> +
>> +my $res = 0;
>> +eval {
>> +run_command($cmd,
--- Begin Message ---
On Fri, Mar 07, 2025 at 09:24:25AM +0100, Roland Kammerer wrote:
> On Tue, Feb 25, 2025 at 11:50:31AM +0100, Max Carrara wrote:
> > 6. Is there any other things you'd like to mention? Feedback, critique
> >and such are all welcome!
something I forgot: the last time I chec
Thank you for the patch!
some comments inline
> +sub iscsi_test_session {
> +my ($portal, $sid) = @_;
> +my $cmd = [$ISCSIADM, '--mode', 'session', '--sid', $sid, '-P1'];
> +
> +my $res = 0;
> +eval {
> +run_command($cmd, errmsg => 'iscsi session test failed', outfunc =>
Oh sorry, messed up the Subject, I briefly thought about applying it
already, but it should come together with the dependency bump for
guest-common, so I didn't yet.
Am 07.03.25 um 12:20 schrieb Fiona Ebner:
> Am 13.02.25 um 14:17 schrieb Dominik Csapak:
>> this will fail with the new checks for m
> Friedrich Weber hat am 07.03.2025 10:52 CET geschrieben:
>
>
> zfs-initramfs ships an initramfs-tools boot script that
> unconditionally activates all LVs on boot. This can cause issues if
> the LV resides on a shared LVM VG on top of a shared LUN, in
> particular Fibre Channel / directed-at
Am 13.02.25 um 14:17 schrieb Dominik Csapak:
> such as the mapping name and if it's marked for live-migration (pci only)
>
> Signed-off-by: Dominik Csapak
Reviewed-by: Fiona Ebner
Am 13.02.25 um 14:17 schrieb Dominik Csapak:
> They have to be marked as 'live-migration-capable' in the mapping
> config, and the driver and qemu must support it.
>
> For the gui checks, we now return the whole object of the mapped
> resources, which includes info like the name and if it's marked
Am 13.02.25 um 14:17 schrieb Dominik Csapak:
> by giving the mapping config to assert_valid, not only the specific mapping
>
> Signed-off-by: Dominik Csapak
Reviewed-by: Fiona Ebner
Am 13.02.25 um 14:17 schrieb Dominik Csapak:
> the default is 'auto', but for those which are marked as capable for
> live migration, we want to explicitly enable that, so we get an early
> error on start if the driver does not support that.
>
> Signed-off-by: Dominik Csapak
Reviewed-by: Fiona E
Am 13.02.25 um 14:16 schrieb Dominik Csapak:
> so that we can decide in qemu-server to allow live-migration.
> The driver and QEMU must be capable of that, and it's the
> admin's responsibility to know and configure that
>
> Mark the option as experimental in the description.
>
> Signed-off-by: D
On 07/03/2025 10:52, Friedrich Weber wrote:
> [...]
> Notes:
> new in v2
>
> I'm also planning on sending the inner patch upstream, well post here
> once I do.
Opened a PR upstream: https://github.com/openzfs/zfs/pull/17125
Am 13.02.25 um 14:16 schrieb Dominik Csapak:
> but that lives in the 'global' part of the mapping config, not in a
> specific mapping. To check that, add it to the $configured_props from
> there.
>
> this requires all call sites to be adapted otherwise the check will
> always fail for devices that
On 3/6/25 15:52, Fiona Ebner wrote:
Am 06.03.25 um 11:44 schrieb Dominik Csapak:
So users can disable them (they're enabled by default in QEMU)
Signed-off-by: Dominik Csapak
---
This patch may make sense, regardless if we'll apply the reversal of the
default...
PVE/QemuServer.pm |
Am 07.03.25 um 11:05 schrieb Dominik Csapak:
> On 3/7/25 11:00, Fiona Ebner wrote:
>> Am 07.03.25 um 10:54 schrieb Dominik Csapak:
>>> On 3/6/25 13:55, Fiona Ebner wrote:
Am 06.03.25 um 13:15 schrieb Dominik Csapak:
> On 3/6/25 13:13, Fiona Ebner wrote:
>> Am 06.03.25 um 11:44 schrieb
Am 07.03.25 um 11:02 schrieb Dominik Csapak:
> On 3/6/25 15:52, Fiona Ebner wrote:
>> Am 06.03.25 um 11:44 schrieb Dominik Csapak:
>>> diff --git a/PVE/QemuServer/Machine.pm b/PVE/QemuServer/Machine.pm
>>> index ebaf2dcc..377abc8a 100644
>>> --- a/PVE/QemuServer/Machine.pm
>>> +++ b/PVE/QemuServer/
sent a v2:
https://lore.proxmox.com/pve-devel/20250307095245.65698-1-f.we...@proxmox.com/T/
On 10/02/2025 11:47, Fabian Grünbichler wrote:
> [...]
>
>>
>> So I'm wondering:
>>
>> (a) could the ZFS initramfs script use `-aay` instead of `-ay`, so the
>> `--setautoactivation` flag has an effect aga
Am 07.03.25 um 10:58 schrieb Dominik Csapak:
> On 3/6/25 15:32, Fiona Ebner wrote:
>> Am 06.03.25 um 11:44 schrieb Dominik Csapak:
>>> When creating or updating guests with ostype windows, we want to pin the
>>> machine version to a specific one. Since introduction of that feature,
>>> we never bum
On 3/6/25 15:32, Fiona Ebner wrote:
Am 06.03.25 um 11:44 schrieb Dominik Csapak:
When creating or updating guests with ostype windows, we want to pin the
machine version to a specific one. Since introduction of that feature,
we never bumped the pve machine version, so this was missing.
Append t
On 3/7/25 11:00, Fiona Ebner wrote:
Am 07.03.25 um 10:54 schrieb Dominik Csapak:
On 3/6/25 13:55, Fiona Ebner wrote:
Am 06.03.25 um 13:15 schrieb Dominik Csapak:
On 3/6/25 13:13, Fiona Ebner wrote:
Am 06.03.25 um 11:44 schrieb Dominik Csapak:
If we have multiple 'globalFlags', we have to enc
Am 07.03.25 um 10:54 schrieb Dominik Csapak:
> On 3/6/25 13:55, Fiona Ebner wrote:
>> Am 06.03.25 um 13:15 schrieb Dominik Csapak:
>>> On 3/6/25 13:13, Fiona Ebner wrote:
Am 06.03.25 um 11:44 schrieb Dominik Csapak:
> If we have multiple 'globalFlags', we have to encode each one
> sepa
Am 29.11.24 um 15:29 schrieb Fiona Ebner:
> Am 09.09.24 um 12:20 schrieb Fiona Ebner:
>> Many people will use 'upgrade' instead of 'full-upgrade' or
>> 'dist-upgrade' (e.g. [0][1]) despite the documentation explicitly
>> mentioning 'dist-upgrade' [3]. Proxmox projects use different
>> packaging gua
On 3/6/25 15:20, Fiona Ebner wrote:
Am 06.03.25 um 15:15 schrieb Dominik Csapak:
On 3/6/25 15:10, Fiona Ebner wrote:
Am 06.03.25 um 14:36 schrieb Dominik Csapak:
On 3/6/25 14:10, Fiona Ebner wrote:
Am 06.03.25 um 11:44 schrieb Dominik Csapak:
diff --git a/PVE/QemuServer/Machine.pm b/PVE/Qemu
On 3/6/25 13:55, Fiona Ebner wrote:
Am 06.03.25 um 13:15 schrieb Dominik Csapak:
On 3/6/25 13:13, Fiona Ebner wrote:
Am 06.03.25 um 11:44 schrieb Dominik Csapak:
If we have multiple 'globalFlags', we have to encode each one separately
on the commandline with '-global OPTION', since QEMU does n
Makes the definition more amenable to future additions.
No functional change intended.
Signed-off-by: Friedrich Weber
---
Notes:
new in v2
src/PVE/Storage/LVMPlugin.pm | 9 -
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Stora
# Summary
With default settings, LVM autoactivates LVs when it sees a new VG, e.g. after
boot or iSCSI login. In a cluster with guest disks on a shared LVM VG (e.g. on
top of iSCSI/Fibre Channel (FC)/direct-attached SAS), this can indirectly cause
guest creation or migration to fail. See bug #4997
zfs-initramfs ships an initramfs-tools boot script that
unconditionally activates all LVs on boot. This can cause issues if
the LV resides on a shared LVM VG on top of a shared LUN, in
particular Fibre Channel / directed-attached SAS LUNs, as these are
usually visible at boot time. See bug #4997 [1
When discovering a new volume group (VG), for example on boot, LVM
triggers autoactivation. With the default settings, this activates all
logical volumes (LVs) in the VG. Activating an LV creates a
device-mapper device and a block device under /dev/mapper.
This is not necessarily problematic for l
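The mitigation this series discusses can be illustrated with stock LVM tooling. Treat this as a hedged sketch under the assumption of LVM >= 2.03.12 (where `--setautoactivation` exists), not the series' exact mechanism; the VG/LV names are placeholders:

```sh
# Mark an LV (or a whole VG) so LVM skips it during autoactivation:
lvchange --setautoactivation n sharedvg/vm-100-disk-0
vgchange --setautoactivation n sharedvg

# Alternatively, restrict autoactivation globally in /etc/lvm/lvm.conf:
#   activation {
#       # only autoactivate the local VG "pve"
#       auto_activation_volume_list = [ "pve" ]
#   }
```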
Add an option to choose a file format (qcow2, raw, vmdk) when restoring
a VM backup to file-based storage. This option allows all disks to be
recreated with the specified file format if supported by the target
storage.
Signed-off-by: Markus Frank
---
v3:
* added requires => 'archive' to disk-for
This patch series allows to restore all VM disks with a specified format
if supported by the target storage. The existing storage and the new
disk-format options can act as a default/fallback for per disk
storage/format customisation in the future (#4275).
v3:
* see individual patches
v2:
* renam
Prerequisite for "ui: restore window: add diskformat option"
The hide condition is copied from the format selector item in the same
file.
Signed-off-by: Markus Frank
---
v3:
* added (me.hideFormatWhenStorageEmpty && !me.autoSelect) to the hide
condition in initComponent instead of manually hidin
This is done by changing the StorageSelector to a DiskStorageSelector.
Using the hideFormatWhenStorageEmpty option of the DiskStorageSelector
to hide the DiskFormatSelector when no storage is selected, as the
DiskFormatSelector would show the default value qcow2 in a disabled
state, which could co
applied, with a minor cleanup
On Fri, Mar 07, 2025 at 09:14:14AM +0100, Maximiliano Sandoval wrote:
> Dbus has a limit of 512 connections by default and signals should be
> disconnected as soon as they are not needed anymore.
>
> This should alleviate https://bugzilla.proxmox.com/show_bug.cgi?id=
--- Begin Message ---
Hi Max,
took me a bit longer than expected, but here we go...
On Tue, Feb 25, 2025 at 11:50:31AM +0100, Max Carrara wrote:
> Thanks a lot for the offer! I do actually have a couple questions. It
> would be nice if you could answer them, as it would aid in cleaning all
> this
Wolfgang Bumiller writes:
> On Mon, Mar 03, 2025 at 03:42:53PM +0100, Maximiliano Sandoval wrote:
>> Dbus has a limit of 512 connections by default and signals should be
>> disconnected as soon as they are not needed anymore.
>>
>> This should alleviate https://bugzilla.proxmox.com/show_bug.cgi