Since there are certain checks that depend on the QEMU binary version,
tests with a fixed QEMU binary version make it less likely to catch
issues on current setups, because for those, the QEMU binary version
will always be higher than in the tests.
Set the machine version, because these tests depe
Since there are certain checks that depend on the QEMU binary version,
tests with a fixed QEMU binary version make it less likely to catch
issues on current setups, because for those, the QEMU binary version
will always be higher than in the tests.
Some of the affected tests explicitly mention the
Since there are certain checks that depend on the QEMU binary version,
tests with a fixed QEMU binary version make it less likely to catch
issues on current setups, because current setups will always have a
newer QEMU binary version than the test.
There are only three tests that explicitly want to
Since there are certain checks that depend on the QEMU binary version,
tests with a fixed QEMU binary version make it less likely to catch
issues on current setups, because for those, the QEMU binary version
will always be higher than in the tests.
For all but one of the affected tests, there's no
Since there are certain checks that depend on the QEMU binary version,
tests with a fixed QEMU binary version make it less likely to catch
issues on current setups, because for those, the QEMU binary version
will always be higher than in the tests.
Two of the affected tests explicitly mention the
Since there are certain checks that depend on the QEMU binary version,
tests with a fixed QEMU binary version make it less likely to catch
issues on current setups, because for those, the QEMU binary version
will always be higher than in the tests.
For the affected tests, there's no real requireme
The parameter was added by ac0077cc ("Use 'QEMU version' ->
'+pve-version' mapping for machine types") but it doesn't seem like
there ever was a caller. In particular, none of the current callers
pass in a value and it's not clear when one would require passing a
different version than the KVM bina
The minimum supported version for Proxmox VE 8 nodes is QEMU 8.0 and
the beginning of the config_to_command() function already has a check
for at least version 5.0. No other caller of get_vm_machine() passes
in the parameter, so it can be removed from there as well.
Signed-off-by: Fiona Ebner
---
This is necessary to bump the minimum required version in
config_to_command() beyond 4.1.1, because otherwise there will be an
error message mismatch making the test fail.
Bump the tests all the way to 9.0.0, because that is the current version
and because then the test doesn't have to be touched
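A rough sketch of the kind of version gate in config_to_command() mentioned above; the error text and variable names here are assumptions, only the 5.0 minimum comes from the commit message:

    use PVE::QemuServer::Helpers;

    # Sketch only: refuse to build the command line for QEMU binaries
    # older than 5.0 (names and message text assumed).
    die "Installed QEMU version '$kvmver' is too old to run machine type"
        ." '$machine_type', please upgrade node '$nodename'\n"
        if !PVE::QemuServer::Helpers::min_version($kvmver, 5, 0);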
The minimum supported version for a Proxmox VE 8 node is QEMU 8.0.
Signed-off-by: Fiona Ebner
---
PVE/QemuServer.pm | 4 ++--
test/cfg2cmd/old-qemu.conf | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 75995366..39b3343
The minimum supported version for a Proxmox VE 8 node is QEMU 8.0.
Signed-off-by: Fiona Ebner
---
PVE/QemuServer.pm | 4 ++--
test/cfg2cmd/old-qemu.conf | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 65784187..7599536
There are quite a few preparation changes in other sub-crates
(auto-installer, installer-common).
I've only gotten through them for now and haven't looked at the actual
post-hook crate stuff.
Wouldn't it be nicer to split the preparation patches into their own
commit? It would make the patch s
In my tests, with secure boot disabled, it failed to parse the
run-env-info.json because the Perl code wrote it this way:
"secure_boot":""
And it currently cannot parse a string. Setting it manually to:
"secure_boot":0
helped. The question is whether we want the parser to be more flexible or
fix
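A minimal Perl sketch of the writer-side fix, assuming the value arrives as a (possibly empty) string; the field name is from the report above, everything else is illustrative:

    use JSON;

    # Sketch only: serialize secure_boot as a number, so the output is
    # "secure_boot":0 instead of "secure_boot":"".
    my $secure_boot_raw = '';    # e.g. result of the efivars check
    my $info = { secure_boot => $secure_boot_raw ? 1 : 0 };
    print encode_json($info);    # {"secure_boot":0}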
sent a v2:
https://lists.proxmox.com/pipermail/pve-devel/2024-July/064855.html
On 7/23/24 15:14, Stefan Hanreich wrote:
> When detaching and attaching the network device on update, the
> link_down setting is not considered and the network device always gets
> attached to the guest - even if link_d
When detaching and attaching the network device on update, the
link_down setting is not considered and the network device always gets
attached to the guest - even if link_down is set.
Fixes: 3f14f206 ("nic online bridge/vlan change: link disconnect/reconnect")
Signed-off-by: Stefan Hanreich
Revie
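A minimal sketch of the idea behind the fix (not the actual patch): after re-attaching the device, force the link state back down via QMP when link_down is configured. The surrounding variables are assumed to be in scope:

    use JSON;
    use PVE::QemuServer::Monitor qw(mon_cmd);

    # Sketch only: honor a configured link_down flag after the device
    # was re-attached; $vmid, $net and $netid (e.g. 'net0') assumed.
    if ($net->{link_down}) {
        mon_cmd($vmid, 'set_link', name => $netid, up => JSON::false);
    }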
On 7/23/24 16:00, Fiona Ebner wrote:
>
> On 23.07.24 at 15:14, Stefan Hanreich wrote:
>> When detaching and attaching the network device on update, the
>> link_down setting is not considered and the network device always gets
>> attached to the guest - even if link_down is set.
>>
>> Fixes: 3f
On 23.07.24 at 15:14, Stefan Hanreich wrote:
> When detaching and attaching the network device on update, the
> link_down setting is not considered and the network device always gets
> attached to the guest - even if link_down is set.
>
> Fixes: 3f14f206 ("nic online bridge/vlan change: link di
As reported in the community forum [0], after migration, the VM might
not immediately be able to respond to QMP commands, which means the VM
could fail to resume and stay in a paused state on the target.
The reason seems to be that activating the block drives in QEMU can
take a bit of time. For exam
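One way to express the idea (illustrative only; the actual fix may look different): give the resume command more time instead of failing while QEMU is still activating the block drives:

    use PVE::QemuServer::Monitor qw(mon_cmd);

    # Sketch only: allow extra time for 'cont' right after migration;
    # the timeout value and error handling are assumptions.
    eval { mon_cmd($vmid, 'cont', timeout => 60) };
    warn "resuming VM $vmid failed - $@" if $@;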
Make clear that it affects only out-/inbound traffic and can be used if
the underlying physical NICs support only a limited number of VLANs when
offloading is possible.
Signed-off-by: Aaron Lauterer
---
After some off-list discussion with @Stefan Hanreich after his review,
we came to the conclusi
When detaching and attaching the network device on update, the
link_down setting is not considered and the network device always gets
attached to the guest - even if link_down is set.
Fixes: 3f14f206 ("nic online bridge/vlan change: link disconnect/reconnect")
Signed-off-by: Stefan Hanreich
---
Remove ureq, because it does not support unix sockets.
Signed-off-by: Dietmar Maurer
---
termproxy/Cargo.toml | 2 +-
termproxy/src/cli.rs | 29 +
termproxy/src/main.rs | 59 +--
3 files changed, 71 insertions(+), 19 deletions(-)
di
Currently, when completing a drive mirror job, only errors matching
"cannot be completed" will be handled. Other errors are ignored and
a wrong message that the job was completed successfully will be
printed to the log. An instance of this popped up in the community
forum [0].
The QMP command used
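Sketched in Perl (illustrative only; the real monitoring loop is more involved): unexpected errors from the completion command should be fatal instead of being logged as success:

    use PVE::QemuServer::Monitor qw(mon_cmd);

    # Sketch only: treat errors other than "cannot be completed" as fatal.
    eval { mon_cmd($vmid, 'block-job-complete', device => $job_id) };
    if (my $err = $@) {
        if ($err =~ m/cannot be completed/) {
            # job not ready yet - retry on the next iteration
        } else {
            die "completing block job '$job_id' failed - $err";
        }
    }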
On Tue, Jul 23, 2024 at 01:04:06PM GMT, Aaron Lauterer wrote:
>
> I quickly compared both variants and realized again, that with the panic, we
> can pretty debug print the structs, making it quite a bit easier to compare
> the expected result to the actual one.
Yep, that's right. Seems I've missed
Tested the patches on my machine and everything worked as advertised.
It might make sense to note that this setting currently only applies to
the bridge_ports specified in the configuration, not the bridge
interface itself. Not sure if this is an ifupdown2 bug or intended. I
think it is actually a
On 2024-07-23 12:46, Christoph Heiss wrote:
On Tue, Jul 23, 2024 at 12:39:20PM GMT, Aaron Lauterer wrote:
Do we still see which test case actually failed? IIRC I used the panic so I
can print the needed info, mainly the name of the current test scenario so
it is easier to find out which fai
On 18.07.24 at 09:55, Dominik Csapak wrote:
> s/untis/units/
>
> Signed-off-by: Dominik Csapak
For reference, this was already applied (with an improved commit title):
https://git.proxmox.com/?p=pve-docs.git;a=commitdiff;h=a0d52904cd807e3a4bd327926793d056fe0d8cba
On Tue, Jul 23, 2024 at 12:39:20PM GMT, Aaron Lauterer wrote:
> Do we still see which test case actually failed? IIRC I used the panic so I
> can print the needed info, mainly the name of the current test scenario so
> it is easier to find out which failed.
Yes, since it printed earlier in the cod
switching these tests over to something like https://insta.rs could be
something we might want to do mid-/long term.
On 2024-07-23 12:39, Aaron Lauterer wrote:
Do we still see which test case actually failed? IIRC I used the panic
so I can print the needed info, mainly the name of the current
Do we still see which test case actually failed? IIRC I used the panic
so I can print the needed info, mainly the name of the current test
scenario so it is easier to find out which failed.
On 2024-07-18 15:48, Christoph Heiss wrote:
Signed-off-by: Christoph Heiss
---
Changes v1 -> v2:
*
Ping, still applies cleanly to current master as of today (23-07-2024).
Did a quick test round of Auto/GUI/TUI too, just to confirm everything.
On Thu, May 16, 2024 at 03:39:30PM GMT, Christoph Heiss wrote:
> This series tries to improve upon some small things around the
> installation progress r
There can be one dirty bitmap for each backup target ID (which are
tracked in the backup_access_bitmaps hash table). The QMP user can
specify the ID of the bitmap it wants to use. This ID is then compared
to the current one for the given target. If they match, the bitmap is
re-used (should it still
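The decision logic as a conceptual Perl sketch (the actual implementation is C in pve-backup.c; only the names come from the description above):

    # Sketch only: one dirty bitmap per backup target ID, re-used only
    # when the QMP user passes the same bitmap ID again.
    my $known = $backup_access_bitmaps->{$target_id};
    if (defined($known) && defined($bitmap_id) && $known eq $bitmap_id) {
        # same ID -> re-use the bitmap for an incremental backup
    } else {
        # new or changed ID -> (re)create the bitmap, full backup
        $backup_access_bitmaps->{$target_id} = $bitmap_id;
    }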
For the external backup API, it will be necessary to add a fleecing
image even for small disks like EFI and TPM, because there is no other
place the old data could be copied to when a new guest write comes in.
Signed-off-by: Fiona Ebner
---
PVE/VZDump/QemuServer.pm | 14 --
1 file ch
The state of the VM's disk images at the time the backup is started
is preserved via a snapshot-access block node. Old data is moved to
the fleecing image when new guest writes come in. The snapshot-access
block node, as well as the associated bitmap in case of incremental
backup, will be exported
In anticipation of future storage plugins that might not have
PBS-specific formats or adhere to the vzdump naming scheme for
backups.
Signed-off-by: Fiona Ebner
---
www/manager6/Utils.js | 10 ++
www/manager6/grid/BackupView.js| 4 ++--
www/manager6/storage/BackupView.j
TPM drives are already detached there and it's better to group
these things together.
Signed-off-by: Fiona Ebner
---
PVE/VZDump/QemuServer.pm | 25 +
1 file changed, 9 insertions(+), 16 deletions(-)
diff --git a/PVE/VZDump/QemuServer.pm b/PVE/VZDump/QemuServer.pm
index 0
This way, nbd_stop() can be called from a module that cannot include
QemuServer.pm.
Signed-off-by: Fiona Ebner
---
PVE/API2/Qemu.pm | 3 ++-
PVE/CLI/qm.pm| 3 ++-
PVE/QemuServer.pm| 6 --
PVE/QemuServer/QMPHelpers.pm | 6 ++
4 files changed, 10 ins
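After the move, a caller that cannot include QemuServer.pm can use the helper module directly (module path from the diffstat above; the call style is an assumption):

    use PVE::QemuServer::QMPHelpers;

    # Stop the VM's built-in NBD server without loading QemuServer.pm.
    PVE::QemuServer::QMPHelpers::nbd_stop($vmid);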
When the VM is only started for backup, it will be stopped again at that
point. While the detach helpers do not warn about errors
currently, that might change in the future. This is also in
preparation for other cleanup QMP helpers that are more verbose about
failure.
Signed-off-by: Fiona Ebne
For providing snapshot-access to external backup providers, EFI and
TPM also need an associated fleecing image. The new caller will thus
need a different filter.
Signed-off-by: Fiona Ebner
---
pve-backup.c | 9 +
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/pve-backup.c
Signed-off-by: Fiona Ebner
---
src/PVE/Storage.pm | 10 ++
1 file changed, 10 insertions(+)
diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index aea57ab..b9913a4 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -1726,6 +1726,16 @@ sub extract_vzdump_config {
Avoids some line bloat in the create_backup_jobs_bh() function and is
in preparation for setting up the snapshot access independently of
fleecing, in particular that will be useful for providing access to
the snapshot via NBD.
Signed-off-by: Fiona Ebner
---
pve-backup.c | 95
Signed-off-by: Fiona Ebner
---
pve-backup.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/pve-backup.c b/pve-backup.c
index 33c23e53c2..d931746453 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -626,7 +626,8 @@ static void create_backup_jobs_bh(void *opaque) {
bo
The new_backup_provider() method can be used by storage plugins for
external backup providers. If the method returns a provider, Proxmox
VE will use callbacks to that provider for backups and restore instead
of using its usual backup/restore mechanisms.
API age and version are both bumped.
The ba
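A hypothetical sketch of the storage-plugin side; the provider class and the method signature are made up for illustration, only the method name comes from the description above:

    package PVE::Storage::Custom::ExamplePlugin;

    use strict;
    use warnings;

    use base qw(PVE::Storage::Plugin);

    # Sketch only: opt into the external backup provider API by
    # returning a provider object (class name hypothetical).
    sub new_backup_provider {
        my ($class, $scfg, $storeid, $log_function) = @_;
        return PVE::BackupProvider::Plugin::Example->new($scfg, $storeid, $log_function);
    }

    1;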
The example uses a simple directory structure to save the backups,
grouped by guest ID. VM backups are saved as configuration files and
qcow2 images, with backing files when doing incremental backups.
Container backups are saved as configuration files and a tar file or
squashfs image (added to test
First, the provider is asked about what restore mechanism to use.
Currently, 'directory' and 'tar' are possible, for restoring either
from a directory containing the full filesystem structure (for which
rsync is used) or a potentially compressed tar file containing the
same.
The new functions are
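For the 'directory' mechanism, the restore boils down to something like the following (illustrative only; flags and variable names assumed):

    use PVE::Tools qw(run_command);

    # Sketch only: sync the provided filesystem structure into the
    # container's root, preserving ownership and hard links.
    run_command(['rsync', '-aH', '--numeric-ids', "$backup_dir/", $rootfs_dir]);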
For external backup providers, the state of the VM's disk images at
the time the backup is started is preserved via a snapshot-access
block node. Old data is moved to the fleecing image when new guest
writes come in. The snapshot-access block node, as well as the
associated bitmap in case of increm
First, the provider is asked about what restore mechanism to use.
Currently, only 'qemu-img' is possible. Then the configuration files
are restored, the provider gives information about volumes contained
in the backup and finally the volumes are restored via
'qemu-img convert'.
The code for the re
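Per volume, the 'qemu-img' mechanism amounts to roughly the following (illustrative only; variable names assumed):

    use PVE::Tools qw(run_command);

    # Sketch only: restore one volume from the image handed back by the
    # provider onto the pre-allocated target volume.
    run_command([
        'qemu-img', 'convert',
        '-f', $source_format,    # e.g. 'qcow2'
        '-O', $target_format,    # e.g. 'raw'
        $source_image, $target_path,
    ]);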
Hooks from the backup provider are called during start/end/abort for
both job and backup. And it is necessary to adapt some log messages
and special-case some things, as is already done for PBS, e.g. log
file handling.
Signed-off-by: Fiona Ebner
---
PVE/VZDump.pm | 43
The filesystem structure is made available as a directory in a
consistent manner (with details depending on the vzdump backup mode)
just like for regular backup via tar.
The backup provider needs to back up the guest and firewall
configuration and then the filesystem structure, honoring the ID map
Allow overlapping requests by removing the assert that made it
impossible. There are only two callers:
1. block_copy_task_create()
It already asserts the very same condition before calling
reqlist_init_req().
2. cbw_snapshot_read_lock()
There is no need to have read requests be non-overlapping i
Makes it a clean error for buggy (external) backup providers where the
size might not be set at all.
Signed-off-by: Fiona Ebner
---
PVE/QemuServer.pm | 1 +
1 file changed, 1 insertion(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index d05705b1..0db9f667 100644
--- a/PVE/QemuServer.pm
A backup provider needs to implement a storage plugin as well as a
backup provider plugin. The storage plugin is for integration in
Proxmox VE's front-end, so users can manage the backups via
UI/API/CLI. The backup provider plugin is for interfacing with the
backup provider's backend to int
The device name needs to be queried while holding the graph read lock
and since it doesn't change during the whole operation, just get it
once during setup and avoid the need to query it again in different
places.
Also in preparation to use it more often in error messages and for the
upcoming exte
The drained section needs to be terminated before breaking out of the
loop in the error scenarios. Otherwise, guest IO on the drive would
become stuck.
If the job is created successfully, then the job completion callback
will clean up the snapshot access block nodes. In case failure
happened befor
In preparation for allowing multiple backup providers. Each backup
target can then have its own dirty bitmap and there can be additional
checks that the current backup state is actually associated to the
expected target.
Signed-off-by: Fiona Ebner
---
pve-backup.c | 8 +++-
1 file changed, 7
The name of this configuration option has been changed with commit
0e1d973 [0]. The patch of the commit introducing this test [1] was
posted earlier and wasn't properly rebased before being applied.
[0] 0e1d973 ("install: config: rename option lvm_auto_rename ->
existing_storage_auto_rename")
[1] 893
Since `print` does buffered IO, we don't always get an error there,
even if the underlying write does not work.
To properly catch that, do an unbuffered `syswrite` which circumvents
all buffers and writes directly to the file handle.
We aren't actually interested in the specific error here, b
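The difference in a nutshell (self-contained sketch; the file name is made up):

    # Sketch only: syswrite() bypasses PerlIO buffering, so a failing
    # write is reported immediately; with print(), the error may only
    # surface at flush/close time, if it is checked at all.
    my $data = "some payload\n";
    open(my $fh, '>', '/tmp/example.txt') or die "open failed: $!\n";
    defined(syswrite($fh, $data)) or die "write failed: $!\n";
    close($fh) or die "close failed: $!\n";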
Note: this is not intended to be applied, but more of a POC.
Since kernel 6.8, NVIDIA's vGPU driver no longer uses the generic mdev
interface, because it relied on a feature there that is no longer
available. IIUC, the kernel [0] recommends that drivers implement
their own device specific fea
Ping. Patches still apply.
On 2024-05-29 14:23, Markus Frank wrote:
Patch series to enable AMD Secure Encrypted Virtualization (SEV)
https://www.amd.com/en/developer/sev.html
changes v11:
* removed systemd service and added run_command in qemu-server instead
* moved SEV related code to CPUConf
Am 23/07/2024 um 09:50 schrieb Aaron Lauterer:
>
>
> On 2024-07-22 19:02, Thomas Lamprecht wrote:
>>
>> applied, thanks, one question still inline though.
>>
>>
>>> + if (defined($current_properties->{$setting}) && $value eq
>>> $current_properties->{$setting}) {
>> hmm, might this cause tro
On 2024-07-22 19:02, Thomas Lamprecht wrote:
applied, thanks, one question still inline though.
+ if (defined($current_properties->{$setting}) && $value eq
$current_properties->{$setting}) {
hmm, might this cause trouble (or at least noisy warnings) with properties
that are defin