The state of the VM's disk images at the time the backup is started is
preserved via a snapshot-access block node. Old data is moved to the
fleecing image when new guest writes come in. The snapshot-access
block node, as well as the associated bitmap in case of incremental
backup, will be made available …
Archive names start with the guest type and ID and then the same
timestamp format as PBS.
Container archives have the following structure:
guest.config
firewall.config
filesystem/ # containing the whole filesystem structure
VM archives have the following structure:
guest.config
firewall.config
volumes/ # containing the backed-up volumes
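A rough Perl sketch of that naming scheme (the separator and the helper are illustrative; only the guest type/ID/timestamp composition is taken from the description above):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use POSIX qw(strftime);

    # Illustrative only: derive an archive name from guest type, guest ID and
    # a PBS-style UTC timestamp, e.g. "vm-101-2024-11-04T10:42:00Z".
    sub archive_name {
        my ($guest_type, $vmid, $epoch) = @_;
        my $ts = strftime('%Y-%m-%dT%H:%M:%SZ', gmtime($epoch));
        return "$guest_type-$vmid-$ts";
    }

    print archive_name('vm', 101, time()), "\n";
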
There can be one dirty bitmap for each backup target ID (which are
tracked in the backup_access_bitmaps hash table). The QMP user can
specify the ID of the bitmap it would like to use. This ID is then compared
to the current one for the given target. If they match, the bitmap is
re-used (should it still exist) …
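The actual logic lives in C in pve-backup.c; this Perl sketch only illustrates the per-target re-use decision (names are illustrative):

    use strict;
    use warnings;

    my %backup_access_bitmaps; # backup target ID => bitmap ID used last time

    # Decide whether the caller-supplied bitmap ID allows re-using the
    # existing bitmap for this target, or whether a new one is needed.
    sub bitmap_action {
        my ($target_id, $bitmap_id) = @_;
        my $current = $backup_access_bitmaps{$target_id};
        if (defined($current) && defined($bitmap_id) && $current eq $bitmap_id) {
            return 'reuse'; # incremental backup based on the existing bitmap
        }
        if (defined($bitmap_id)) {
            $backup_access_bitmaps{$target_id} = $bitmap_id;
            return 'new'; # (re-)create the bitmap, full backup this time
        }
        delete $backup_access_bitmaps{$target_id};
        return 'none'; # no bitmap requested, plain full backup
    }
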
The example uses a simple directory structure to save the backups,
grouped by guest ID. VM backups are saved as configuration files and
qcow2 images, with backing files when doing incremental backups.
Container backups are saved as configuration files and a tar file or
squashfs image (added to test …).
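A sketch of how the example provider could pick the backing file for such an incremental qcow2 backup (directory layout and names are illustrative):

    use strict;
    use warnings;

    # <base>/<vmid>/<archive>/<volume>.qcow2 -- purely illustrative layout.
    # Returns the newest existing image of the volume, to be used as qcow2
    # backing file for the next (incremental) backup, or undef for a full one.
    sub previous_backup_image {
        my ($base, $vmid, $volname) = @_;
        for my $archive (reverse sort glob("$base/$vmid/*")) {
            my $candidate = "$archive/$volname.qcow2";
            return $candidate if -f $candidate;
        }
        return undef;
    }
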
Changes the behavior of the "Regenerate Image" button in the VM's
CloudInit tab from using the more expensive VM update API endpoint to
using the CloudInit update API endpoint.
Originally-by: Alexandre Derumier
Signed-off-by: Daniel Kral
---
Changes since v1 (as suggested by @Fiona):
- added
In preparation to re-use it for checking potentially untrusted
archives.
Signed-off-by: Fiona Ebner
---
New in v3.
src/PVE/LXC/Create.pm | 51 +--
1 file changed, 30 insertions(+), 21 deletions(-)
diff --git a/src/PVE/LXC/Create.pm b/src/PVE/LXC/Create.pm
Changes in v3:
* Add storage_has_feature() helper and use it to decide on whether the
storage uses a backup provider, instead of having this be implicit
with whether a backup provider is returned by new_backup_provider().
* Fix querying block-node size for fleecing in stop mode, by issuing the …
While restore_external_archive() already has a check, it only happens
after the existing container has been destroyed.
Signed-off-by: Fiona Ebner
---
New in v3.
src/PVE/API2/LXC.pm | 14 ++
1 file changed, 14 insertions(+)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 213e518
The drained section needs to be terminated before breaking out of the
loop in the error scenarios. Otherwise, guest IO on the drive would
become stuck.
If the job is created successfully, then the job completion callback
will clean up the snapshot-access block nodes. In case a failure
happened before …
The device name needs to be queried while holding the graph read lock
and since it doesn't change during the whole operation, just get it
once during setup and avoid the need to query it again in different
places.
Also in preparation to use it more often in error messages and for the
upcoming external …
For providing snapshot-access to external backup providers, EFI and
TPM also need an associated fleecing image. The new caller will thus
need a different filter.
Signed-off-by: Fiona Ebner
---
No changes in v3.
pve-backup.c | 9 +
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/pve-backup.c b/pve-backup.c
For fleecing, the size needs to match exactly what QEMU sees. In
particular, EFI disks might be attached with a 'size=' option, meaning
that size can be different from the volume's size. Commit 36377acf
("backup: disk info: also keep track of size") introduced size
tracking and it was used for fleecing …
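In a Perl-like sketch, the fix amounts to preferring the tracked size over the volume size when allocating the fleecing image (field name is illustrative):

    # Use the size QEMU actually sees for the drive (e.g. an explicit 'size='
    # option on an EFI disk) instead of the size of the underlying volume.
    sub fleecing_image_size {
        my ($disk_info, $volume_size) = @_;
        return $disk_info->{size} // $volume_size;
    }
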
In preparation for allowing multiple backup providers. Each backup
target can then have its own dirty bitmap and there can be additional
checks that the current backup state is actually associated to the
expected target.
Signed-off-by: Fiona Ebner
---
No changes in v3.
pve-backup.c | 8 +++
Allow overlapping requests by removing the assert that made it
impossible. There are only two callers:
1. block_copy_task_create()
It already asserts the very same condition before calling
reqlist_init_req().
2. cbw_snapshot_read_lock()
There is no need to have read requests be non-overlapping in …
Avoids some line bloat in the create_backup_jobs_bh() function and is
in preparation for setting up the snapshot access independently of
fleecing, in particular that will be useful for providing access to
the snapshot via NBD.
Signed-off-by: Fiona Ebner
---
No changes in v3.
pve-backup.c | 95
For the external backup API, it will be necessary to add a fleecing
image even for small disks like EFI and TPM, because there is no other
place the old data could be copied to when a new guest write comes in.
Signed-off-by: Fiona Ebner
---
Changes in v3:
* adapt to context changes from previous
Signed-off-by: Fiona Ebner
---
No changes in v3.
pve-backup.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/pve-backup.c b/pve-backup.c
index 33c23e53c2..d931746453 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -626,7 +626,8 @@ static void create_backup_jobs_bh(void *opaque)
Makes it a clean error for buggy (external) backup providers where the
size might not be set at all.
Signed-off-by: Fiona Ebner
---
No changes in v3.
PVE/QemuServer.pm | 1 +
1 file changed, 1 insertion(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 49b6ca17..30e51a8c 100644
--- a/PVE/QemuServer.pm
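In essence, the added check is just a guard of this form (message and variable name are illustrative):

    die "size of drive not set by backup provider\n" if !defined($size);
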
The new_backup_provider() method can be used by storage plugins for
external backup providers. If the method returns a provider, Proxmox
VE will use callbacks to that provider for backups and restore instead
of using its usual backup/restore mechanisms.
API age and version are both bumped.
The backup …
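A sketch of what a storage plugin exposing such a provider might look like; only the method name new_backup_provider() is taken from the series, the argument list and provider class are assumptions:

    package PVE::Storage::Custom::ExampleBackupPlugin;
    # a real plugin would inherit from PVE::Storage::Plugin

    use strict;
    use warnings;

    # Returning a provider object tells Proxmox VE to use the provider's
    # callbacks for backup and restore instead of the built-in mechanisms.
    sub new_backup_provider {
        my ($class, $scfg, $storeid, $log_function) = @_;
        # placeholder class; the real provider implements the callback API
        return My::Backup::Provider->new($scfg, $storeid, $log_function);
    }

    1;
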
The first use case is running the container backup subroutine for
external providers inside a user namespace. That allows them to see
the filesystem to back up from the container's perspective and also
improves security because of isolation.
Copied and adapted the relevant parts from the pve-buildp
Signed-off-by: Fiona Ebner
---
Changes in v3:
* use new storage_has_feature() helper
src/PVE/Storage.pm | 11 +++
1 file changed, 11 insertions(+)
diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index 69500bf..9f9a86b 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -17
Hooks from the backup provider are called during start/end/abort for
both the job and each backup. It is also necessary to adapt some log
messages and to special-case some things, as is already done for PBS,
e.g. log file handling.
Signed-off-by: Fiona Ebner
---
Changes in v3:
* use new storage_has_feature() helper
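Roughly, the start/end/abort phases described above could be driven like this sketch (method names and arguments are assumptions):

    # Run the backup job between the provider's job hooks (illustrative only).
    sub run_with_provider_hooks {
        my ($provider, $do_backup) = @_;
        $provider->job_hook('start');
        eval {
            $do_backup->();
            $provider->job_hook('end');
        };
        if (my $err = $@) {
            $provider->job_hook('abort');
            die $err;
        }
    }
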
Like this, nbd_stop() can be called from a module that cannot include
QemuServer.pm.
Signed-off-by: Fiona Ebner
---
No changes in v3.
PVE/API2/Qemu.pm             | 3 ++-
PVE/CLI/qm.pm                | 3 ++-
PVE/QemuServer.pm            | 6 --
PVE/QemuServer/QMPHelpers.pm | 6 ++
4 files changed, …
This gives backup providers more freedom, e.g. to mount backed-up mount
point volumes individually.
Suggested-by: Fabian Grünbichler
Signed-off-by: Fiona Ebner
---
New in v3.
src/PVE/LXC/Create.pm | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/src/PVE/LXC/Create.pm b/src/PVE/LXC/Create.pm
TPM drives are already detached there and it's better to group
these things together.
Signed-off-by: Fiona Ebner
---
No changes in v3.
PVE/VZDump/QemuServer.pm | 25 +++++++++----------------
1 file changed, 9 insertions(+), 16 deletions(-)
diff --git a/PVE/VZDump/QemuServer.pm b/PVE/VZDump/QemuServer.pm
used for the shared 'COMMON_TAR_FLAGS' variable.
Signed-off-by: Fiona Ebner
---
New in v3.
src/PVE/LXC/Create.pm | 1 +
1 file changed, 1 insertion(+)
diff --git a/src/PVE/LXC/Create.pm b/src/PVE/LXC/Create.pm
index 117103c..7c5bf0a 100644
--- a/src/PVE/LXC/Create.pm
+++ b/src/PVE/LXC/Create.pm
Suggested-by: Fabian Grünbichler
Signed-off-by: Fiona Ebner
---
New in v3.
Actual checking being done depends on Fabian's hardening patches:
https://lore.proxmox.com/pve-devel/20241104104221.228730-1-f.gruenbich...@proxmox.com/
PVE/QemuServer.pm | 6 ++
1 file changed, 6 insertions(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
First, the provider is asked about what restore mechanism to use.
Currently, only 'qemu-img' is possible. Then the configuration files
are restored, the provider gives information about volumes contained
in the backup and finally the volumes are restored via
'qemu-img convert'.
The code for the restore …
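The per-volume restore step then boils down to something like this sketch (paths and error handling are illustrative):

    use strict;
    use warnings;

    # Convert the image handed out by the backup provider into the freshly
    # allocated target volume.
    sub restore_volume {
        my ($source_image, $target_path, $target_format) = @_;
        my @cmd = (
            'qemu-img', 'convert', '-O', $target_format, # e.g. 'raw' or 'qcow2'
            $source_image, $target_path,
        );
        system(@cmd) == 0
            or die "restore failed: qemu-img convert exited with $?\n";
    }
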
Hi!
sorry that it took so long to get back to you!
with the nits below addressed, consider this
Reviewed-by: Fabian Grünbichler
it would be nice if the systemd unit change (and potentially the default
file?) could also be submitted for upstream inclusion, so that we can
reduce this delta …
First, the provider is asked about what restore mechanism to use.
Currently, 'directory' and 'tar' are possible, for restoring either
from a directory containing the full filesystem structure (for which
rsync is used) or a potentially compressed tar file containing the
same.
The new functions are …
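A sketch of the dispatch between the two mechanisms (flags and helper name are illustrative):

    use strict;
    use warnings;

    # Restore the container filesystem either from a directory (via rsync) or
    # from a possibly compressed tar file, depending on the provider's answer.
    sub restore_container_fs {
        my ($mechanism, $source, $rootdir) = @_;
        my @cmd;
        if ($mechanism eq 'directory') {
            @cmd = ('rsync', '-a', '--numeric-ids', "$source/", "$rootdir/");
        } elsif ($mechanism eq 'tar') {
            # GNU tar auto-detects the compression when extracting from a file
            @cmd = ('tar', '-xpf', $source, '-C', $rootdir);
        } else {
            die "unknown restore mechanism '$mechanism'\n";
        }
        system(@cmd) == 0 or die "restore failed: $?\n";
    }
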
When the VM is only started for the backup, it will be stopped again at
that point. While the detach helpers do not warn about errors
currently, that might change in the future. This is also in
preparation for other cleanup QMP helpers that are more verbose about
failure.
Signed-off-by: Fiona Ebner
On 11/4/24 13:02, Fiona Ebner wrote:
Since those users can already use the cloudinit_update endpoint (just
not via UI) and thus effectively don't need "VM.Config.CDROM", I think
it's better to change the UI to use that endpoint as well.
The endpoint was added here [0] and the switch in the UI was …
'tar' itself already protects against '..' in component names and
strips absolute member names when extracting (if not used with the
--absolute-names option) and in general seems sane for extracting.
Additionally, the extraction already happens in the user namespace
associated with the container. So …
In preparation to re-use it for restore from backup providers.
Signed-off-by: Fiona Ebner
---
New in v3.
src/PVE/LXC/Create.pm | 42 +-
1 file changed, 25 insertions(+), 17 deletions(-)
diff --git a/src/PVE/LXC/Create.pm b/src/PVE/LXC/Create.pm
index 7c
In anticipation of future storage plugins that might not have
PBS-specific formats or adhere to the vzdump naming scheme for
backups.
Signed-off-by: Fiona Ebner
---
No changes in v3.
www/manager6/Utils.js | 10 ++
www/manager6/grid/BackupView.js| 4 ++--
www/manager6/
Which looks up whether a storage supports a given feature in its
'plugindata'. This is intentionally kept simple and not implemented
as a plugin method for now. Should it ever become more complex
requiring plugins to override the default implementation, it can
later be changed to a method.
Suggested-by: …
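A minimal sketch of such a lookup (the 'features' key inside 'plugindata' is an assumption):

    # Check a static feature flag declared by the storage plugin; the real
    # helper first resolves the plugin for the given storage from the
    # storage configuration.
    sub storage_has_feature {
        my ($plugindata, $feature) = @_;
        my $features = $plugindata->{features} // {};
        return $features->{$feature} ? 1 : 0;
    }
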
The filesystem structure is made available as a directory in a
consistent manner (with details depending on the vzdump backup mode)
just like for regular backup via tar.
The backup_container() method of the backup provider is executed in
a user namespace with the container's ID mapping applied. Th
For external backup providers, the state of the VM's disk images at
the time the backup is started is preserved via a snapshot-access
block node. Old data is moved to the fleecing image when new guest
writes come in. The snapshot-access block node, as well as the
associated bitmap in case of incremental backup, will be made available …
On 10/10/24 17:56, Stefan Hanreich wrote:
Additionally add information about the SDN VNet firewall, which has
been introduced with these changes.
Signed-off-by: Stefan Hanreich
---
Makefile | 1 +
gen-pve-firewall-vnet-opts.pl | 12
pve-firewall-vnet-opts.adoc
only small nits inline
Tested-By: Aaron Lauterer
Reviewed-By: Aaron Lauterer
On 2024-10-18 13:59, Christoph Heiss wrote:
This has been requested by at least one user [0] and definitely
makes sense, esp. for BMCs/IPMIs where one might not be able to control
the partition label.
[0]
except for two small style nits in comments in patch 4/4, I don't have
anything to complain about.
Consider this series
Tested-By: Aaron Lauterer
Reviewed-By: Aaron Lauterer
On 2024-10-18 13:59, Christoph Heiss wrote:
This series allows specifying the partition label the
`proxmox-fetch-answer` …
this adds a 'tagview' to the web ui, organizing guests by their tags
(for details see the pve-manager patch)
I'm not super happy all in all with how much special casing must be
done, but I could not (yet?) figure out a better way.
changes from v2:
* rebased on master (tooltip generation changed s
in the tag view, we have a custom 'full' style in a place where we
can have another tagstyle class above. to compensate for that, we have
to add another condition to those styles, namely that there is no
'proxmox-tags-full' class in between.
Signed-off-by: Dominik Csapak
---
src/css/ext6-pmx.css
and keep the functionality in ResourceTree as generic as possible.
We achieve this by having an 'itemMap' function that can split one item
from the store into multiple to add to the tree.
for the updates, we have to have an 'idMapFn' (to get the original id
back).
we also have to modify how the m…
Signed-off-by: Dominik Csapak
---
pve-gui.adoc | 1 +
1 file changed, 1 insertion(+)
diff --git a/pve-gui.adoc b/pve-gui.adoc
index bda370f..9e4650d 100644
--- a/pve-gui.adoc
+++ b/pve-gui.adoc
@@ -383,6 +383,7 @@ and the corresponding interfaces for each menu item on the right.
* *Permissions
On 04.11.24 at 11:42, Fabian Grünbichler wrote:
> this allows checking some extra attributes for images which come from a
> potentially malicious source.
>
> since file_size_info is not part of the plugin API, no API bump is needed. if
> desired, a similar check could also be implemented in volum
Disables the "Regenerate image" button in the VM CloudInit tab for
users who lack the necessary "VM.Config.CloudInit" permission for the
CloudInit update API endpoint.
This is a cosmetic change as the CloudInit update API endpoint would
fail because of insufficient permissions anyway.
Signed-off-by: …
On 2024-11-07 15:16, Dominik Csapak wrote:
On 11/7/24 14:52, Aaron Lauterer wrote:
gave this a quick test.
two things I noticed:
* root element in tree per tag: wouldn't it be better to override the
display style to 'full'? Otherwise I might have a lot of colorful
dots, but don't know what the tags are called.
gave this a quick test.
two things I noticed:
* root element in tree per tag: wouldn't it be better to override the
display style to 'full'? Otherwise I might have a lot of colorful dots,
but don't know what the tags are called.
* I am not 100% sure, but would it be possible to distinguish g
Does what it says. Tested in combination with the adjacent UI patch [0].
[0]
https://lore.proxmox.com/pve-devel/20240802143736.172810-1-m.sando...@proxmox.com/
Tested-By: Aaron Lauterer
On 2024-09-13 10:13, Maximiliano Sandoval wrote:
A popular ISO compressed exclusively with bz2 is OPNsense.
Tested in combination with the adjacent backend patch [0] by Downloading
a bz2 compressed ISO and booting a VM from it.
One small style nit inline, but I don't think that warrants a new
version, as we can fix it in a follow-up or just directly when applying
the patch.
[0]
https://lore.proxmo
On Thu, Oct 10, 2024 at 05:56:33PM GMT, Stefan Hanreich wrote:
> This also includes support for parsing rules referencing IPSets in the
> new SDN scope and generating those IPSets in the firewall.
>
> Loading SDN configuration is optional, since loading it requires root
privileges which we do not …
On 11/7/24 14:52, Aaron Lauterer wrote:
gave this a quick test.
two things I noticed:
* root element in tree per tag: wouldn't it be better to override the display style to 'full'?
Otherwise I might have a lot of colorful dots, but don't know what the tags are called.
that should be the case
Hi Aaron,
Thanks for the review and testing! I'll post v4 today to
address your comments and add those commit trailers.
Thanks,
Severen
Hi everyone,
This is another small update to my previous patch series [1]
adding optional support for preventing PVE from suggesting
previously used VM/CT IDs. Aaron had some small style nits, so
I've addressed those and also added the Reviewed-by and
Tested-by commit trailers.
Add a 'suggest unique VMIDs' row to the datacenter options page that
allows choosing whether the `/cluster/nextid` API endpoint (and thereby
any UI elements that suggest IDs) should avoid suggesting previously
used IDs. This option defaults to off to ensure that this change in
After a container is destroyed, record that its ID has been used via the
`PVE::UsedVmidList` module so that the `/cluster/nextid` endpoint can
later optionally avoid suggesting previously used IDs.
Co-authored-by: Daniel Krambrock
Signed-off-by: Severen Redwood
Tested-by:
After a virtual machine is destroyed, record that its ID has been used
via the `PVE::UsedVmidList` module so that the `/cluster/nextid`
endpoint can later optionally avoid suggesting previously used IDs.
Co-authored-by: Daniel Krambrock
Signed-off-by: Severen Redwood
Tested-by: …
Add `/etc/pve/used_vmids.list` to the list of cluster files, which will
be used for recording previously used VM/CT IDs. This is required so
that we can optionally ensure that such IDs are not suggested by the
`/cluster/nextid` API endpoint.
Co-authored-by: Daniel Krambrock
At the moment, the `/cluster/nextid` API endpoint will return the lowest
available VM/CT ID, which means that it will suggest re-using VM IDs.
This can be undesirable, so add an optional check to ensure that it
chooses an ID which is not and has never been in use.
This option …
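A simplified sketch of that selection (data sources are illustrative; the real endpoint reads the recorded IDs from the cluster filesystem):

    use strict;
    use warnings;

    # Lowest ID that is neither currently in use nor recorded as previously used.
    sub next_unique_id {
        my ($in_use, $previously_used, $min) = @_;
        $min //= 100;
        my %blocked = map { $_ => 1 } (@$in_use, @$previously_used);
        my $id = $min;
        $id++ while $blocked{$id};
        return $id;
    }

    print next_unique_id([100, 101], [102]), "\n"; # prints 103
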
Add the `unique-next-id` property to the datacentre config schema to
track whether only unique (i.e. neither currently nor previously in use)
VM/CT IDs should be suggested by the `/cluster/nextid` API endpoint.
Co-authored-by: Daniel Krambrock
Signed-off-by: Severen Redwood