On 20.01.25 at 12:28, Filip Schauer wrote:
> Extend volume import functionality to support 'iso', 'snippets',
> 'vztmpl', and 'import' types, in addition to the existing support for
> 'images' and 'rootdir'. This is a prerequisite for the ability to move
> ISOs, snippets and container templates be
I'd call it copy-volume, since that better fits with the default delete
behavior.
On 20.01.25 at 12:28, Filip Schauer wrote:
> The method can be called from the PVE shell with `pvesm move-volume`:
>
> ```
> pvesm move-volume [--target-node ]
> [--delete]
> ```
>
> For example to move a VMA b
On 20.01.25 at 12:28, Filip Schauer wrote:
> Add the ability to move a backup, ISO, container template, snippet, or
> OVA/OVF between storages and nodes via an API method. Moving a VMA
> backup to a Proxmox Backup Server requires the proxmox-vma-to-pbs
> package to be installed. Currently only VMA
On 20.01.25 at 12:28, Filip Schauer wrote:
> This commit adds the "backup+size" export format. When this format is
> used, the data stream starts with metadata of the backup (protected flag
> & notes) followed by the contents of the backup archive.
>
> Signed-off-by: Filip Schauer
> ---
> src/P
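The diff is cut off here, so the actual layout of the 'backup+size' stream isn't visible in this excerpt. Purely as an illustration of the idea described above (metadata first, then the archive bytes), here is a minimal Perl sketch; the helper name and the length-prefixed JSON framing are assumptions, not the format from the patch:

```
# Hypothetical sketch only: emit backup metadata (protected flag, notes)
# ahead of the archive contents. Names and framing are assumptions.
use strict;
use warnings;
use JSON::PP qw(encode_json);

sub write_backup_plus_size_stream {
    my ($out_fh, $archive_path, $protected, $notes) = @_;

    # Length-prefixed metadata header so the receiver knows where the
    # archive data starts.
    my $meta = encode_json({ protected => $protected ? 1 : 0, notes => $notes // '' });
    print {$out_fh} pack('N', length($meta)), $meta;

    # Followed by the raw contents of the backup archive.
    open(my $in_fh, '<', $archive_path) or die "open $archive_path: $!\n";
    binmode($in_fh);
    while (read($in_fh, my $buf, 64 * 1024)) {
        print {$out_fh} $buf;
    }
    close($in_fh);
}
```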
On 20.01.25 at 12:28, Filip Schauer wrote:
> Extract the file decompression code into its own reusable subroutine.
>
> Signed-off-by: Filip Schauer
Reviewed-by: Fiona Ebner
On 20.01.25 at 12:28, Filip Schauer wrote:
> Add the ability to move an iso, snippet or vztmpl between storages and
> nodes.
>
> Use either curl to call the API method:
>
> ```
> curl
> https://$APINODE:8006/api2/json/nodes/$SOURCENODE/storage/$SOURCESTORAGE/content/$SOURCEVOLUME
> \
> --i
On 20.01.25 at 12:28, Filip Schauer wrote:
> Avoid the overhead of SSH when moving a volume between storages on the
> same node.
>
> Signed-off-by: Filip Schauer
> ---
> src/PVE/Storage.pm | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/src/PVE/Storage.pm b/src/P
On 20.01.25 at 12:28, Filip Schauer wrote:
> Extend the move API to support moving VMA backups to a Proxmox Backup
> Server.
>
> Signed-off-by: Filip Schauer
> ---
> debian/control | 1 +
> src/PVE/API2/Storage/Content.pm | 53 +++
> src/PVE/Storage/PBS
On 04/02/2025 16:46, Fabian Grünbichler wrote:
Ivaylo Markov via pve-devel wrote on 04.02.2025 13:44 CET:
Greetings,
I was pointed here to discuss the StorPool storage plugin[0] with the
dev team.
If I understand correctly, there is a concern with our HA wa
Thanks.
I have tested AMD SEV SNP and it works fine with your patch series.
Tested-by: Markus Frank
One thing I noticed:
I would add a note in the WebUI that you need kernel >= 6.11 installed on the pve
host to enable SEV-SNP as long as >=6.11 is not the default kernel.
On 2025-02-07 09:51,
Mira Limbeck writes:
> On 2/13/25 12:01, Fiona Ebner wrote:
>> On 10.02.25 at 13:07, Daniel Herzig wrote:
>>> From: Leo Nunner
>>>
>>
>> @Mira do you know more by chance?
> I don't think vendor-data should be part of the instance-id. It's used
> to create a first configuration that a user can
Fiona Ebner writes:
> On 10.02.25 at 13:07, Daniel Herzig wrote:
>> From: Leo Nunner
>>
>> Introduce configuration parameters for cloud-init. Like with VMs, it's
>> possible to specify:
>> - user
>> - password
>> - ssh keys
>> - enable/disable updates on first boot
>>
>> It's
Reads pretty well all in all. Maybe I was a bit nitpicky in the initial
section, but better to suggest too much than too little.
On 10.01.25 at 17:58, Alexander Zeidler wrote:
> diff --git a/pvecm.adoc b/pvecm.adoc
> index 15dda4e..4028e92 100644
> --- a/pvecm.adoc
> +++ b/pvecm.adoc
> @@ -486,6
meh, somehow the commit subject line got cut off (maybe forgot to save?)
should have been:
"implement experimental vgpu live migration"
sorry for the inconvenience
we now return the 'allowed_nodes'/'not_allowed_nodes' also if the vm is
running, when it has mapped resources. So do those checks independently
so that the user has instant feedback where those resources exist.
Signed-off-by: Dominik Csapak
---
no changes in v6
www/manager6/window/Migrate.js | 26
this now takes into account the 'not_allowed_nodes' hash we get from the
api call. With that, we can now limit the 'local_resources' check to
online vms only, as for offline guests, the 'unavailable-resources' hash
already includes mapped devices that don't exist on the target node.
This now also
those should be able to migrate even for online vms. If the mapping does
not exist on the target node, that will be caught further down anyway.
Signed-off-by: Dominik Csapak
---
no changes in v6
PVE/API2/Nodes.pm | 13 +++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/
in a new section about additional options
Signed-off-by: Dominik Csapak
---
no changes in v6
qm.adoc | 12
1 file changed, 12 insertions(+)
diff --git a/qm.adoc b/qm.adoc
index 4bb8f2c..6f337fe 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -1944,6 +1944,18 @@ To create mappings `Mapping.Mo
Show the transferred VFIO state (when there is one). Since there is no
total available here, we can't show that; only what was transferred up
until now.
Signed-off-by: Dominik Csapak
---
no changes in v6
PVE/API2/Qemu.pm | 2 +-
PVE/QemuMigrate.pm | 12 +++-
2 files changed, 12 insertion
by also providing the global config in assert_valid, and by also
adding the mdev config in the 'toCheck' object in the gui
For the gui, we extract the mdev property from the global entry, and add
it to the individual mapping entries; that way we can reuse the checking
logic of the other properties
showing a final transfer log line helps with identifying what was
actually transferred. E.g. it could happen that the VFIO state was only
transferred in the last iteration. In such a case we would not see that
information at all.
Signed-off-by: Dominik Csapak
---
new in v6
PVE/QemuMigrate.pm | 1
== Summary ==
Includes some useful cleanups/features
This is implemented for mapped resources. This requires driver and
hardware support, but aside from nvidia vgpus there don't seem to be
many drivers (if any) that do support that.
qemu already supports that for vfio-pci devices, so nothing to
by giving the mapping config to assert_valid, not only the specific mapping
Signed-off-by: Dominik Csapak
---
changes in v6:
* add `my $config ...` line since that was introduced in a different
patch in v5 that was dropped with v6
depends on changes from pve-guest-common
PVE/QemuServer/PCI.p
but that lives in the 'global' part of the mapping config, not in a
specific mapping. To check that, add it to the $configured_props from
there.
this requires all call sites to be adapted otherwise the check will
always fail for devices that are capable of mediated devices
Signed-off-by: Dominik
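As a rough illustration of the point above (variable names and config layout are assumptions, not the actual PVE::Mapping::PCI code), folding a global-level property like 'mdev' into the per-mapping properties before validation could look like this:

```
# Hypothetical sketch only -- names and config layout are assumptions.
use strict;
use warnings;

sub build_configured_props {
    my ($global_entry, $specific_mapping) = @_;

    # Start from the per-mapping properties ...
    my $configured_props = { %$specific_mapping };

    # ... and pull 'mdev' in from the global part of the mapping config,
    # since it is not stored on the individual mapping entries.
    $configured_props->{mdev} = $global_entry->{mdev}
        if defined($global_entry->{mdev});

    return $configured_props;
}
```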
we added it to the lspci one, but we'll also need it when querying
a single device
code is the same as in the lspci sub
Signed-off-by: Dominik Csapak
---
new in v6
src/PVE/SysFSTools.pm | 4
1 file changed, 4 insertions(+)
diff --git a/src/PVE/SysFSTools.pm b/src/PVE/SysFSTools.pm
index
so that we can show a proper warning in the migrate dialog and check it
in the bulk migrate precondition check
the unavailable_storages should be the same as before, but
we now always return (not_)allowed_nodes too.
to make the code a bit easier to read, reorganize how we construct
the (not_)
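The message is truncated here; as a generic sketch only (the helper and the shape of the returned values are assumptions, not the patch code), assembling allowed/not-allowed node lists from a per-node check might look roughly like this:

```
# Hypothetical sketch: classify target nodes into allowed / not allowed
# based on what a per-node check reports as missing.
use strict;
use warnings;

sub classify_target_nodes {
    my ($nodes, $check_node) = @_;    # $check_node returns a hash of problems

    my $allowed_nodes = [];
    my $not_allowed_nodes = {};

    for my $node (@$nodes) {
        my $problems = $check_node->($node);   # e.g. unavailable storages/mappings
        if (scalar(keys %$problems)) {
            $not_allowed_nodes->{$node} = $problems;
        } else {
            push @$allowed_nodes, $node;
        }
    }

    return ($allowed_nodes, $not_allowed_nodes);
}
```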
if the hardware/driver is capable, the admin can now mark a pci device
as 'live-migration-capable', which then tries enabling live migration
for such devices.
mark it as experimental when configuring and in the migrate window
Signed-off-by: Dominik Csapak
---
no changes in v6
www/manager6/windo
Signed-off-by: Dominik Csapak
---
no changes in v6
qm.adoc | 6 ++
1 file changed, 6 insertions(+)
diff --git a/qm.adoc b/qm.adoc
index 6f337fe..0d18d7e 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -1955,6 +1955,12 @@ Currently there are the following options:
mapping, the mediated device will b
They have to be marked as 'live-migration-capable' in the mapping
config, and the driver and qemu must support it.
For the gui checks, we now return the whole object of the mapped
resources, which includes info like the name and if it's marked as
live-migration capable. (while deprecating the old
we currently only call deactivate_volumes, but we actually want to call
the whole vm_stop_cleanup, since that is not invoked by the vm_stop
above (we cannot parse the config anymore) and might do other cleanups
we also want to do (like mdev cleanup).
For this to work properly we have to clone the
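The preview is cut off at "clone the"; assuming it continues with cloning the config, the idea in generic Perl (the stop/cleanup callables are placeholders, not the actual qemu-server signatures) is:

```
# Hypothetical sketch: keep a copy of the still-parseable config around
# so the full cleanup can run even after the stop, when the config can
# no longer be parsed.
use strict;
use warnings;
use Storable qw(dclone);

sub stop_and_cleanup {
    my ($vmid, $conf, $stop, $cleanup) = @_;

    my $oldconf = dclone($conf);    # clone before the config becomes unusable

    eval { $stop->($vmid) };
    warn $@ if $@;

    # run the full cleanup (volume deactivation, mdev cleanup, ...)
    # with the cloned config
    eval { $cleanup->($vmid, $oldconf) };
    warn $@ if $@;
}
```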
such as the mapping name and if it's marked for live-migration (pci only)
Signed-off-by: Dominik Csapak
---
no changes in v6
PVE/API2/Qemu.pm | 2 +-
PVE/QemuMigrate.pm | 7 ---
PVE/QemuServer.pm | 17 ++---
3 files changed, 15 insertions(+), 11 deletions(-)
diff --git a/PV
and keep it the same for all current callers as before by setting the
additional 'noerr' parameter to '1'.
Signed-off-by: Dominik Csapak
---
reordered in v6, was 4/11 in v5, no changes otherwise
PVE/CLI/qm.pm | 2 +-
PVE/QemuServer.pm | 13 -
2 files changed, 9 insertions(+), 6
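The 'noerr' convention is widespread in the PVE codebase; a minimal generic sketch of the pattern (the sub name and parsing logic are made up for illustration):

```
# Hypothetical sketch of the usual 'noerr' pattern: with $noerr set, the
# helper returns undef instead of dying, which keeps the old behavior
# for call sites that pass '1'.
use strict;
use warnings;

sub parse_value {
    my ($raw, $noerr) = @_;

    my $res = eval {
        die "unable to parse '$raw'\n" if $raw !~ /^\d+$/;
        int($raw);
    };
    if (my $err = $@) {
        die $err if !$noerr;
        return undef;
    }
    return $res;
}

# existing call sites keep the old behavior:
my $val = parse_value('123', 1);
```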
the default is 'auto', but for those which are marked as capable for
live migration, we want to explicitly enable that, so we get an early
error on start if the driver does not support that.
Signed-off-by: Dominik Csapak
---
no changes in v6
PVE/QemuServer/PCI.pm | 8 +++-
1 file changed, 7
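QEMU's vfio-pci device has an 'enable-migration' property with on/off/auto values; a rough sketch of setting it explicitly for mappings marked as capable (variable and key names are assumptions, not the actual PVE/QemuServer/PCI.pm change):

```
# Hypothetical sketch: only set enable-migration=on when the mapping is
# marked as live-migration capable, so QEMU errors out early at start
# if the driver cannot migrate the device.
use strict;
use warnings;

sub print_vfio_pci_device {
    my ($pciaddr, $id, $mapping) = @_;

    my $devicestr = "vfio-pci,host=$pciaddr,id=$id";
    $devicestr .= ",enable-migration=on"
        if $mapping->{'live-migration-capable'};

    return $devicestr;
}

print print_vfio_pci_device('0000:01:00.0', 'hostpci0', { 'live-migration-capable' => 1 }), "\n";
```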
this will fail with the new checks for mdev when we don't have the
correct config.
namely, a device that has mediated devices should have 'mdev' set in the
mapping config
Signed-off-by: Dominik Csapak
---
reordered in v6, was 10/11 in v5, no changes otherwise
test/run_config2command_tests.pl |
so that we can decide in qemu-server to allow live-migration.
The driver and QEMU must be capable of that, and it's the
admin's responsibility to know and configure that
Mark the option as experimental in the description.
Signed-off-by: Dominik Csapak
---
no changes in v6
src/PVE/Mapping/PCI.pm
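Purely illustrative, such an option could be registered in a property format along these lines (key name, wording and default are assumptions, not the actual src/PVE/Mapping/PCI.pm change):

```
# Hypothetical sketch of an additional property in the PCI mapping format.
use strict;
use warnings;

my $extra_map_properties = {
    'live-migration-capable' => {
        description => "Mark this mapped device as capable of live migration"
            . " (experimental). Requires driver and QEMU support.",
        type => 'boolean',
        optional => 1,
        default => 0,
    },
};
```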
The description 'VZDump backup file' for content type 'backup' is
wrong for PBS and other future backup providers. Just use 'Backup' to
describe the content type everywhere and avoid confusion.
Signed-off-by: Fiona Ebner
---
www/manager6/Utils.js | 2 +-
1 file changed, 1 insertion(+), 1 deletio
On 10.02.25 at 13:07, Daniel Herzig wrote:
> diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
> index 5cc37f7..e3ed93b 100644
> --- a/src/PVE/LXC/Config.pm
> +++ b/src/PVE/LXC/Config.pm
> @@ -450,6 +450,63 @@ my $features_desc = {
> },
> };
>
> +my $cicustom_fmt = {
> +user =
On 13.02.25 at 12:01, Fiona Ebner wrote:
> On 10.02.25 at 13:07, Daniel Herzig wrote:
>> diff --git a/src/PVE/LXC/Cloudinit.pm b/src/PVE/LXC/Cloudinit.pm
>> new file mode 100644
>> index 000..3e8617b
>> --- /dev/null
>> +++ b/src/PVE/LXC/Cloudinit.pm
>> @@ -0,0 +1,114 @@
>> +package PVE::LXC:
On 13.02.25 at 12:29, Mira Limbeck wrote:
> On 2/13/25 12:01, Fiona Ebner wrote:
>> On 10.02.25 at 13:07, Daniel Herzig wrote:
>>> +sub gen_cloudinit_metadata {
>>> +my ($user) = @_;
>>> +
>>> +my $uuid_str = Digest::SHA::sha1_hex($user);
>>
>> Hmm, shouldn't this also depend on the vendo
On 10.02.25 at 13:07, Daniel Herzig wrote:
> +sub dump_cloudinit_config {
> +my ($conf, $type) = @_;
> +
> +if ($type eq 'user') {
> + return cloudinit_userdata($conf);
> +} else { # metadata config
I'd also guard this with a "$type eq 'meta'" and die for unknown types.
> + m
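To make that suggestion concrete, a sketch of the guarded variant (the 'meta' helper name is assumed from context, not taken from the patch):

```
# Sketch of the suggested guard: handle 'meta' explicitly and die on
# anything unknown instead of treating every non-'user' type as metadata.
sub dump_cloudinit_config {
    my ($conf, $type) = @_;

    if ($type eq 'user') {
        return cloudinit_userdata($conf);    # helper from the quoted patch
    } elsif ($type eq 'meta') {
        return cloudinit_metadata($conf);    # assumed helper name
    } else {
        die "unknown cloudinit config type '$type'\n";
    }
}
```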
On 2/13/25 12:01, Fiona Ebner wrote:
> On 10.02.25 at 13:07, Daniel Herzig wrote:
>> From: Leo Nunner
>>
>> The code to generate the actual configuration works pretty much the same
>> as with the VM system. We generate an instance ID by hashing the user
>> configuration, causing cloud-init to run
> Mira Limbeck wrote on 12.02.2025 15:51 CET:
>
>
> On 2/11/25 06:40, Thomas Skinner wrote:
> > Signed-off-by: Thomas Skinner
> > ---
> > src/PVE/API2/OpenId.pm | 79
> > src/PVE/AccessControl.pm | 2 +-
> > src/PVE/Auth/OpenId.pm | 33
On 10.02.25 at 13:07, Daniel Herzig wrote:
> From: Leo Nunner
>
> The code to generate the actual configuration works pretty much the same
> as with the VM system. We generate an instance ID by hashing the user
> configuration, causing cloud-init to run every time said configuration
> changes.
>
On 10.02.25 at 13:07, Daniel Herzig wrote:
> From: Leo Nunner
>
> Introduce configuration parameters for cloud-init. Like with VMs, it's
> possible to specify:
> - user
> - password
> - ssh keys
> - enable/disable updates on first boot
>
> It's also possible to pass through cust
On 13.02.25 at 11:18, Mira Limbeck wrote:
> On 2/13/25 11:10, Fiona Ebner wrote:
>> On 10.02.25 at 13:07, Daniel Herzig wrote:
>>> From: Leo Nunner
>>>
>>> Introduce configuration parameters for cloud-init. Like with VMs, it's
>>> possible to specify:
>>> - user
>>> - password
>>> -
On 2/13/25 11:10, Fiona Ebner wrote:
> On 10.02.25 at 13:07, Daniel Herzig wrote:
>> From: Leo Nunner
>>
>> Introduce configuration parameters for cloud-init. Like with VMs, it's
>> possible to specify:
>> - user
>> - password
>> - ssh keys
>> - enable/disable updates on first boo
> Orwa Diraneyya via pve-devel wrote on 04.01.2025
> 19:47 CET:
> From: Orwa Diraneyya
>
> After this fix, users of Proxmox will be able to
> use the root filesystem tarballs found publicly
> (e.g. at https://cloud-images.ubuntu.com/) as LXC
> container templates.
>
> Currently, th
On 10.02.25 at 13:07, Daniel Herzig wrote:
> From: Leo Nunner
>
> Introduce configuration parameters for cloud-init. Like with VMs, it's
> possible to specify:
> - user
> - password
> - ssh keys
> - enable/disable updates on first boot
>
> It's also possible to pass through cust
On 12.02.2025 12:17, Stefan Hanreich wrote:
This still has some issues (see below), maybe we can look at it together
next week (will be gone after today) and see if we can make some
additional structural improvements to the whole controller / frr logic?
There were also some regressions/bugs wi
On 10.02.25 at 13:07, Daniel Herzig wrote:
> From: Leo Nunner
>
> …the same way as it's already being done for VMs.
Nit: can we avoid the unicode ellipsis here ;)
>
> Signed-off-by: Leo Nunner
> ---
> gen-pct-cloud-init-opts.pl | 16
> 1 file changed, 16 insertions(+)
> cr
On 2/11/25 13:45, Christoph Heiss wrote:
On Mon Jan 20, 2025 at 3:51 PM CET, Dominik Csapak wrote:
[..]
+my sub get_current_node_mapping {
+my ($mapping_config, $mapping_name) = @_;
+
+my $node = PVE::INotify::nodename();
+my $devices = PVE::Mapping::PCI::get_node_mapping($mapping_co
Signed-off-by: Hannes Duerr
---
debian/changelog | 5 +
debian/control | 15 +++
debian/copyright | 14 ++
debian/rules | 8
debian/source/format | 1 +
5 files changed, 43 insertions(+)
create mode 100644 debian/changelog
create mod
the package ships a script that helps to set up Nvidia vgpu drivers.
Signed-off-by: Hannes Duerr
---
debian/control | 1 +
1 file changed, 1 insertion(+)
diff --git a/debian/control b/debian/control
index 6c94df09..ab02fd76 100644
--- a/debian/control
+++ b/debian/control
@@ -89,6 +89,7 @@ Depe
Changes in v5:
in commits
Changes in v4:
in commits
Changes in v3:
* install headers for every installed kernel version by default
* additionally add patch to only install headers for running kernel
version and newer ones, this requires the new dependency
"libdpkg-perl"
* remove unnecessary i
We add the pve-nvidia-vgpu-helper script to simplify the installation of
the required Nvidia Vgpu driver dependencies.
The script performs the following tasks:
- install consistent dependencies
- check the currently running kernel and install the necessary kernel
headers for the running kernel and
Signed-off-by: Hannes Duerr
---
debian/control | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/debian/control b/debian/control
index 334bf25..352e63a 100644
--- a/debian/control
+++ b/debian/control
@@ -8,7 +8,9 @@ Homepage: https://www.proxmox.com
Package: pve-nvidia-vg
SR-IOV must be enabled each time the system is restarted.
This systemd service should take over this task and enable SR-IOV per
pci-id/gpu after a system restart.
Signed-off-by: Hannes Duerr
---
Notes:
Changes in v4:
* Change nvidia-vgpud.service nvidia-vgpu-mgr.service to `Before=`
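For context, enabling SR-IOV after boot essentially means writing the desired VF count to sysfs; a sketch of that core step (not the actual service or helper shipped by this series; paths and counts are examples):

```
# Hypothetical sketch: SR-IOV virtual functions are enabled by writing
# the desired VF count to the sriov_numvfs attribute of the PCI device;
# a boot-time service re-applies this after every restart.
use strict;
use warnings;

sub enable_sriov_vfs {
    my ($pci_id, $num_vfs) = @_;

    my $path = "/sys/bus/pci/devices/$pci_id/sriov_numvfs";
    open(my $fh, '>', $path) or die "cannot open $path: $!\n";
    print {$fh} "$num_vfs\n";
    close($fh) or die "writing $path failed: $!\n";
}

enable_sriov_vfs('0000:01:00.0', 4);
```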