According to RFC 8555:
> The MAC key SHOULD be provided in base64url-encoded form...
However, we currently decode the MAC key only as plain base64.
This patch chooses the correct function to decode the user-provided
MAC key. This fixes an authentication error when a user uses the command
`pvenode acme
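For illustration, the difference boils down to something like this (a minimal sketch using MIME::Base64; the helper actually used in the ACME code may be a different function):

use MIME::Base64 qw(decode_base64 decode_base64url);

# A made-up EAB MAC key: base64url uses '-' and '_' instead of '+' and '/'
# and usually omits the trailing '=' padding.
my $mac_key = 'zWND_amLLTB-ZcFunc8uEw';

# decode_base64() silently ignores the url-safe characters it does not know,
# so the decoded HMAC key is wrong and the CA rejects the signature.
my $broken = decode_base64($mac_key);

# decode_base64url() matches what RFC 8555 expects for the MAC key.
my $key = decode_base64url($mac_key);

printf "base64: %d bytes, base64url: %d bytes\n", length($broken), length($key);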
There are patches from upstream now too[0]. They should be functionally
equivalent to this one, but it's nicer to avoid a needless diff compared
to upstream, so I'll send a mail with those instead (after
checking/testing).
[0]:
https://lore.kernel.org/qemu-devel/20240124173834.66320-1-hre...@redhat.
Hi,
I know it's an old request, but is there any roadmap for a central GUI for
multiple clusters / multiple single hosts?
We have a lot of VMware customers looking to migrate to Proxmox,
but it's really a blocker currently. (These are mostly on-prem enterprise
customers, with multiple geo remote locati
This essentially repeats commit 6b7c181 ("add patch to work around
stuck guest IO with iothread and VirtIO block/SCSI") with an added
fix for the SCSI event virtqueue, which requires special handling.
This is to avoid the issue [3] that made the revert 2a49e66 ("Revert
"add patch to work around stu
awk internally uses floats for every calculation; printing a large float
with awk results in the 1.233e+09 format, which causes the script to fail afterwards.
Instead, I am printing the float without decimals.
Signed-off-by: Stefan Lendl
---
debian/patches/awk-printf.diff | 16
debian/
Signed-off-by: Christoph Heiss
---
Changes v1 -> v2:
* no changes
pve-installation.adoc | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/pve-installation.adoc b/pve-installation.adoc
index ccb32be..6e011fa 100644
--- a/pve-installation.adoc
+++ b/pve-installation.adoc
@@ -
As the GRUB entry specific to this was removed with the 8.1 release, add
a separate section for this to link users to.
Unfortunately it is relatively often needed, due to very old or very new
hardware, or when Nvidia cards are installed.
Signed-off-by: Christoph Heiss
---
Changes v1 -> v2:
* no
Some things changed with the 8.1 release, so update all the relevant
things here too.
Signed-off-by: Christoph Heiss
---
Changes v1 -> v2:
* no changes
pve-installation.adoc | 17 -
1 file changed, 8 insertions(+), 9 deletions(-)
diff --git a/pve-installation.adoc b/pve-insta
The 8.1 release changed some things, so update to keep in sync with the
latest ISO.
Signed-off-by: Christoph Heiss
---
Changes v1 -> v2:
* no changes
Not resending the full patch due to its size and nothing having changed
since v1. For the original patch, see [0].
[0] https://lists.proxmox.c
This mainly updates the installation section, to bring it up-to-date
with the latest available installer ISO.
v1: https://lists.proxmox.com/pipermail/pve-devel/2023-November/060814.html
Changes v1 -> v2:
* rebased on latest master
* dropped two obsolete patches
Christoph Heiss (5):
screen
Especially for GRUB there were a myriad of different casing variants
(e.g. grub, Grub, GRUB), so unify them, with GRUB being the official
casing.
For systemd-boot, fix an instance where it was not typeset as
monospace, like everywhere else.
Signed-off-by: Christoph Heiss
---
Changes v1 -> v2:
v2 out: https://lists.proxmox.com/pipermail/pve-devel/2024-January/061465.html
On Fri, Nov 24, 2023 at 11:45:55AM +0100, Christoph Heiss wrote:
>
> This mainly updates the installation section, to bring it up-to-date
> with the latest available installer ISO.
>
> The last two patches are simply
PVE::Storage::path() neither activates the storage of the passed-in volume, nor
does it ensure that the returned value is actually a file or block device, so
this fixes two issues at once. PVE::Storage::abs_filesystem_path()
takes care of both, while still calling path() under the hood (s
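Roughly the difference described above, as a sketch (the actual call sites and error handling in the patch may look different):

use PVE::Storage;

my $storecfg = PVE::Storage::config();
my $volid = 'local-lvm:vm-100-disk-0';   # example volume ID

# path() only translates the volume ID into a path: the storage may not be
# active and the result may not (yet) exist as a file or block device.
my $path = PVE::Storage::path($storecfg, $volid);

# abs_filesystem_path() activates the volume first and verifies that the
# returned path really is a file or block device, dying otherwise.
my $abs_path = PVE::Storage::abs_filesystem_path($storecfg, $volid);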
From: Vladimir Sementsov-Ogievskiy
Currently, block_copy creates copy_bitmap in the source node. But that
interacts badly with .independent_close=true of the copy-before-write filter:
the source node may be detached and removed before the .bdrv_close() handler
is called, which should call block_copy_state_free(),
When a backup for a VM is started, QEMU will install a
"copy-before-write" filter in its block layer. This filter ensures
that upon new guest writes, old data still needed for the backup is
sent to the backup target first. The guest write blocks until this
operation is finished, so guest IO to not-y
which will be needed to allocate fleecing images.
Signed-off-by: Fiona Ebner
---
PVE/VZDump/QemuServer.pm | 1 +
1 file changed, 1 insertion(+)
diff --git a/PVE/VZDump/QemuServer.pm b/PVE/VZDump/QemuServer.pm
index be7d8e1e..51498dbc 100644
--- a/PVE/VZDump/QemuServer.pm
+++ b/PVE/VZDump/QemuSe
Using fleecing backup like in [0] on a qcow2 image (with metadata
preallocation) can lead to the following assertion failure:
> bdrv_co_do_block_status: Assertion `!(ret & BDRV_BLOCK_ZERO)' failed.
In the reproducer [0], it happens because the BDRV_BLOCK_RECURSE flag
will be set by the qcow2 driv
Signed-off-by: Fiona Ebner
---
PVE/VZDump.pm | 12
1 file changed, 12 insertions(+)
diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index 4185ed62..bdf48fb2 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
@@ -130,6 +130,15 @@ my $generate_notes = sub {
return $notes_template;
};
+
Signed-off-by: Fiona Ebner
---
vzdump.adoc | 28
1 file changed, 28 insertions(+)
diff --git a/vzdump.adoc b/vzdump.adoc
index 24a3e80..eb67141 100644
--- a/vzdump.adoc
+++ b/vzdump.adoc
@@ -136,6 +136,34 @@ not included in backups. For volume mount points you can
s
With backup fleecing, it might be necessary to discard the source.
There will be an assertion failure if bitmaps on the source side have
a bigger granularity than the block copy's cluster size, so just
consider the source side too.
This also supersedes the hunk in block/backup.c added by
"PVE-Back
From: Vladimir Sementsov-Ogievskiy
Add a parameter that enables discard-after-copy. That is mostly useful in
the "push backup with fleecing" scheme, when the source is a snapshot-access
format driver node based on the copy-before-write filter's snapshot-access
API:
[guest] [snapshot-access] ~~ blockdev-ba
When a fleecing option is given, it is expected that each device has
a corresponding "-fleecing" block device already attached, except for
EFI disk and TPM state, where fleecing is never used.
The following graph was adapted from [0] which also contains more
details about fleecing.
[guest]
|
Make variables more local. Put failure case for !blk first to avoid
an additional else block with indentation.
Signed-off-by: Fiona Ebner
---
Can be squashed into "PVE-Backup: Proxmox backup patches for QEMU".
pve-backup.c | 27 +++
1 file changed, 11 insertions(+), 16
no functional change intended. Should make it easier to read and add
more logic on top (for backup fleecing).
Signed-off-by: Fiona Ebner
---
git diff --patience to produce a better diff
Can be squashed into "PVE-Backup: Proxmox backup patches for QEMU".
pve-backup.c | 112
It's a property string, because that avoids having an implicit
"enabled" as part of a 'fleecing-storage' property. And there likely
will be more options in the future, e.g. threshold/limit for the
fleecing image size.
Signed-off-by: Fiona Ebner
---
src/PVE/VZDump/Common.pm | 25 +
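As a rough sketch only (key names and defaults here are assumptions, not necessarily what the patch registers), such a property string could look like this in PVE::VZDump::Common:

# Hypothetical 'fleecing' property-string format.
my $fleecing_fmt = {
    enabled => {
	description => "Enable backup fleecing.",
	type => 'boolean',
	default => 0,
	default_key => 1,
    },
    storage => {
	description => "Use this storage to store fleecing images.",
	type => 'string',
	format => 'pve-storage-id',
	optional => 1,
    },
};

# On the command line this would then read e.g.:
#   vzdump 100 --fleecing enabled=1,storage=local-lvm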
In preparation to fix an issue for backup fleecing where discarding
the source would lead to an assertion failure when the fleecing image
has larger granularity than the backup target.
Signed-off-by: Fiona Ebner
---
Still need to wait on a response from upstream. For now this hack, so
that the R
Management for fleecing images is implemented here. If the fleecing
option is set, for each disk (except EFI disk and TPM state) a new raw
fleecing image is allocated on the configured fleecing storage (same
storage as original disk by default). The disk is attached to QEMU
with the 'size' paramete
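A hedged sketch of the allocation step (the helper name and exact calls here are assumptions for illustration, not the patch itself):

use PVE::Storage;

# Allocate a raw fleecing image for one drive, sized like the original disk,
# on the configured fleecing storage. EFI disk and TPM state are skipped by
# the caller, since fleecing is never used for them.
sub allocate_fleecing_image {
    my ($storecfg, $fleecing_storeid, $vmid, $source_volid) = @_;

    # Size of the original disk in bytes.
    my ($size) = PVE::Storage::volume_size_info($storecfg, $source_volid);

    # vdisk_alloc() takes the size in KiB.
    return PVE::Storage::vdisk_alloc(
	$storecfg, $fleecing_storeid, $vmid, 'raw', undef, int($size / 1024));
}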
oh!!! Thank you very much Fiona !!!
This is really the blocking feature for me; I'm still not using PBS because
of this.
I'll try to build a lab for testing as soon as possible
(I'm a bit busy with FOSDEM preparation).
I'll also test VM crash/host crash while a backup is running, to see
how it's ha
Stupid question: Wouldn't it be much easier to add a simple IO buffer
with limited capacity, implemented inside the Rust backup code?
> +WARNING: Theoretically, the fleecing image can grow to the same size as the
> +original image, e.g. if the guest re-writes a whole disk while the backup is
> +b
Hi Dietmar!
>>Stupid question: Wouldn't it be much easier to add a simple IO buffer
>>with limited capacity, implemented inside the Rust backup code?
At work, we are running a backup cluster at a remote location with HDDs,
and a production cluster with super-fast NVMe,
and sometimes I have really
yes, fantastic news, I did not reckon that we would have progress here this
soon! :)
thanks very much for this effort, this is a great step forward!
roland
On 25.01.24 at 17:02, DERUMIER, Alexandre wrote:
> oh!!! Thank you very much Fiona !!!
> This is really the blocking featur
> >>Stupid question: Wouldn't it be much easier to add a simple IO buffer
> >>with limited capacity, implemented inside the Rust backup code?
>
> At work, we are running a backup cluster at a remote location with HDDs,
> and a production cluster with super-fast NVMe,
> and sometimes I have really b