[pve-devel] [PATCH pve-manager 7/7] utils: avoid line-break in pending changes message
Remove line-break on sdn "pending configuration" message on removed sdn objects.

Signed-off-by: Stefan Hanreich
Co-authored-by: Gabriel Goller
Signed-off-by: Gabriel Goller
---
 www/manager6/Utils.js | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index aa415759ca27..52197e538213 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -192,7 +192,7 @@ Ext.define('PVE.Utils', {
 	if (value === undefined) {
 	    return ' ';
 	} else {
-	    return `${Ext.htmlEncode(value)}`;
+	    return `${Ext.htmlEncode(value)}`;
 	}
     } else if (rec.data.pending[key] !== undefined && rec.data.pending[key] !== null) {
 	if (rec.data.pending[key] === 'deleted') {
--
2.39.5

___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Re: [pve-devel] [PATCH corosync] corosync.service: add patch to reduce log spam in broken network setups
On 04.04.25 at 10:14, Maximiliano Sandoval wrote:
> Friedrich Weber writes:
...
> An option that might require lower maintenance would be to ship a
> service file override, e.g. at
> /lib/systemd/system/corosync.service.d/set-log-rate-limit.conf with
> contents:
>
> ```
> [Service]
> LogRateLimitIntervalSec=1s
> LogRateLimitBurst=200
> ```
>
> No strong feelings, it is just a matter of taste.

That would be more fitting if we did not package corosync ourselves; as is,
this integrated way would be fine to me. That said, yours could be too.
But ...

>
>> + 1 file changed, 2 insertions(+)
>> +
>> +diff --git a/init/corosync.service.in b/init/corosync.service.in
>> +index bd2a48a9..3d7ea2db 100644
>> +--- a/init/corosync.service.in
>> ++++ b/init/corosync.service.in
>> +@@ -10,6 +10,8 @@ EnvironmentFile=-@INITCONFIGDIR@/corosync
>> + ExecStart=@SBINDIR@/corosync -f $COROSYNC_OPTIONS
>> + ExecStop=@SBINDIR@/corosync-cfgtool -H --force
>> + Type=notify
>> ++LogRateLimitIntervalSec=1s
>> ++LogRateLimitBurst=200
>
> 200 messages per second might be a bit too many. Since we are
> not sure how many messages an unlucky user might see, I would suggest
> lowering it a bit for the time being; 100 is a good round number.
>

... well, this is a core cluster service, so having more available from a log
burst is IMO really justified here. That's also why I won't apply this patch
for now: systemd already has default rate limiting for _very_ noisy stuff, it
can also handle high log rates just fine, and this only affects broken setups
until they get fixed.

___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Re: [pve-devel] [PATCH pve-docs 1/1] fabrics: add initial documentation for sdn fabrics
On 31.03.2025 10:44, Shannon Sterz wrote:
On Fri Mar 28, 2025 at 6:13 PM CET, Gabriel Goller wrote:
[snip]

I agree with everything above.

+
+Router-ID Selection
+^^^
+
+Each node in a fabric needs a unique router ID, which is an IPv4 (or IPv6 in
+OpenFabric) address in dotted-decimal notation (e.g., 192.168.1.1).

does this apply to v6 too? i've never seen IPv6 represented in dotted decimal
notation. imo if you mention IPv6 here, you should specify its representation.
maybe:

Each node in a fabric needs a unique router ID, which is an IPv4 address in
dotted decimal notation (e.g. 192.168.1.1). In OpenFabric this can also be an
IPv6 address in the typical hexadecimal representation separated by colons
(e.g., 2001:db8::1428:57ab).

Oops, yeah I forgot this.

[snip]

+Name:: This is the name of the OpenFabric fabric and can be at most 8 characters long.
+
+Loopback Prefix:: CIDR (IPv4 or IPv6) network range (e.g., 10.0.0.0/24) used to verify that
+all router-IDs in the fabric are contained within this prefix.
+
+Hello Interval:: Controls how frequently (in seconds) hello packets are sent to
+discover and maintain connections with neighboring nodes. Lower values detect
+failures faster but increase network traffic. If empty, the default value will

what is the default value? would it make sense to mention that here?

I decided against using default values as I wanted FRR to handle them. We also
don't set default options in the FRR config, but just delete the property and
let FRR handle it. Maybe I could write: "the default FRR value will be used"?

+be used. This option is global on the fabric, meaning every interface on every
+node in this fabric will inherit this hello-interval property.
+
+[[pvesdn_openfabric_node]]
+On the Node
+^^^
+
+Node:: Select the node which will be added to the fabric. Only nodes that
+currently are in the cluster will be shown.
+
+Router-ID:: A unique IPv4 or IPv6 address used to generate the OpenFabric
+Network Entity Title (NET).
Each node in the same fabric must have a different
+Router-ID, while a single node must use the same NET address across all fabrics
+(this consistency is automatically managed by {pve}).
+
+NOTE: When using IPv6 addresses, we use the last 3 segments to generate the
+NET. Ensure these segments differ between nodes.

would it make sense to make this a `WARNING` instead of a `NOTE`? sounds like
this is a bit more important to get right.

yep, I agree.

[snip]

+[[pvesdn_ospf_fabric]]
+On the Fabric
+^
+
+Area:: This specifies the OSPF area identifier, which can be either an integer
+(i32) or an IP address. Areas are a way to organize and structure OSPF networks

i32 is super intuitive for Rust programmers but "32-bit signed integer" would
be clearer to everyone else ;)

Will fix this as well. Thanks for looking over this! The docs are still quite
sparse, so I hope I'll get around to adding more stuff for the next version.

___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
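The review above touches on how a Router-ID is turned into the OpenFabric Network Entity Title (NET). As a rough illustration, here is the common IS-IS convention of zero-padding the IPv4 octets and regrouping the digits into the 6-byte system ID; whether {pve}/FRR uses exactly this padding scheme is an assumption of this sketch, and the area prefix is just a typical example value.

```python
def ipv4_to_net(router_id, area="49.0001"):
    """Derive an IS-IS/OpenFabric-style NET from an IPv4 router ID by
    zero-padding each octet to three digits and regrouping the twelve
    digits into three 4-digit words (illustrative convention only)."""
    octets = router_id.split(".")
    if len(octets) != 4:
        raise ValueError("expected an IPv4 address in dotted-decimal notation")
    digits = "".join(f"{int(o):03d}" for o in octets)  # "192.168.1.1" -> "192168001001"
    system_id = ".".join(digits[i:i + 4] for i in range(0, 12, 4))
    return f"{area}.{system_id}.00"  # trailing "00" is the NSEL byte
```

For example, router ID 192.168.1.1 maps to the system ID 1921.6800.1001, which makes it easy to see why two nodes must not share a router ID: identical router IDs would collide on the system ID.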
[pve-devel] [PATCH v8 container 6/7] restore tar archive: check potentially untrusted archive
From: Fiona Ebner

'tar' itself already protects against '..' in component names and strips
absolute member names when extracting (if not used with the --absolute-names
option) and in general seems sane for extracting. Additionally, the extraction
already happens in the user namespace associated to the container. So for now,
start out with some basic sanity checks. The helper can still be extended with
more checks.

Checks:
* list files in archive - will already catch many corrupted/bogus archives.
* check that there are at least 10 members - should also catch archives not
  actually containing a container root filesystem or structural issues early.
* check that /sbin directory or link exists in archive - ideally the check
  would be for /sbin/init, but this cannot be done efficiently before
  extraction, because it would require keeping track of the whole archive to
  be able to follow symlinks.
* abort if there is a multi-volume member in the archive - cheap and is never
  expected.

Checks that were considered, but not (yet) added:
* abort when a file has an unrealistically large size - while this could help
  to detect certain kinds of bogus archives, there can be valid use cases for
  extremely large sparse files, so it's not clear what a good limit would be
  (1 EiB maybe?). Also, an attacker could just adapt to such a limit by
  creating multiple files, and the actual extraction is already limited by
  the size of the allocated container volume.
* check that /sbin/init exists after extracting - cannot be done efficiently
  before extraction, because it would require keeping track of the whole
  archive to be able to follow symlinks. During setup there already is
  detection of /etc/os-release, so issues with the structure will already be
  noticed. Adding a hard fail for untrusted archives would require either
  passing that information to the setup phase or extracting the
  protected_call method from there into a helper.
* adding 'restrict' to the (common) tar flags - the tar manual (not the man
  page) documents: "Disable use of some potentially harmful 'tar' options.
  Currently this option disables shell invocation from multi-volume menu.".
  The flag was introduced in 2005 and this is still the only thing it is used
  for. Trying to restore a multi-volume archive already fails without giving
  multiple '--file' arguments and '--multi-volume', so don't bother adding
  the flag.
* check format of tar file - would require yet another invocation of the
  decompressor and there seems to be no built-in way to just display the
  format with 'tar'. The 'file' program could be used, but it seems to not
  make a distinction between old GNU and GNU and old POSIX and POSIX formats,
  with the old ones being candidates to prohibit. So that would leave just
  detecting the old 'v7' format.

Suggested-by: Fabian Grünbichler
Signed-off-by: Fiona Ebner
---
No changes to v7.

 src/PVE/LXC/Create.pm | 75 ---
 1 file changed, 71 insertions(+), 4 deletions(-)

diff --git a/src/PVE/LXC/Create.pm b/src/PVE/LXC/Create.pm
index 43fc5fe..53c584b 100644
--- a/src/PVE/LXC/Create.pm
+++ b/src/PVE/LXC/Create.pm
@@ -99,12 +99,73 @@ my sub tar_compression_option {
 }
 }
+# Basic checks trying to detect issues with a potentially untrusted or bogus tar archive.
+# Just listing the files is already a good check against corruption.
+# 'tar' itself already protects against '..' in component names and strips absolute member names
+# when extracting, so no need to check for those here.
+my sub check_tar_archive {
+my ($archive) = @_;
+
+print "checking archive..\n";
+
+# To resolve links to get to 'sbin/init' would mean keeping track of everything in the archive,
+# because the target might be ordered first. Check only that 'sbin' exists here.
+my $found_sbin;
+
+# Just to detect bogus archives, any valid container filesystem should have more than this.
+my $required_members = 10;
+my $member_count = 0;
+
+my $check_file_list = sub {
+ my ($line) = @_;
+
+ $member_count++;
+
+ # Not always just a single number, e.g. for character devices.
+ my $size_re = qr/\d+(?:,\d+)?/;
+
+ # The date is in ISO 8601 format. The last part contains the potentially quoted file name,
+ # potentially followed by some additional info (e.g. where a link points to).
+ my ($type, $perms, $uid, $gid, $size, $date, $time, $file_info) =
+ $line =~ m!^([a-zA-Z\-])(\S+)\s+(\d+)/(\d+)\s+($size_re)\s+(\S+)\s+(\S+)\s+(.*)$!;
+ if (!defined($type)) {
+ print "check tar: unable to parse line: $line\n";
+ return;
+ }
+
+ die "found multi-volume member in archive\n" if $type eq 'M';
+
+ if (!$found_sbin && (
+ ($file_info =~ m!^(?:\./)?sbin/$! && $type eq 'd')
+ || ($file_info =~ m!^(?:\./)?sbin ->! && $type eq 'l')
+ || ($file_info =~ m!^(?:
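The real implementation is the Perl helper above, which parses `tar -tv` listing output. As a loose, self-contained analog of the same sanity checks (minimum member count, presence of a 'sbin' directory or link, rejection of GNU multi-volume members), here is a Python sketch using the stdlib `tarfile` module; it is illustrative only, not the PVE code:

```python
import io
import tarfile

def check_tar_archive(fileobj, required_members=10):
    """Basic sanity checks against a potentially untrusted/bogus tar archive:
    list members (catches corruption), require a minimum member count,
    require a 'sbin' directory or link, abort on multi-volume members."""
    found_sbin = False
    member_count = 0
    with tarfile.open(fileobj=fileobj, mode="r:*") as tar:
        for member in tar:
            member_count += 1
            if member.type == b"M":  # GNU multi-volume continuation member
                raise ValueError("found multi-volume member in archive")
            name = member.name
            if name.startswith("./"):
                name = name[2:]
            if name.rstrip("/") == "sbin" and (
                member.isdir() or member.issym() or member.islnk()
            ):
                found_sbin = True
    if member_count < required_members:
        raise ValueError(f"archive has only {member_count} members")
    if not found_sbin:
        raise ValueError("no 'sbin' directory or link found in archive")
```

Note that, unlike the listing-based Perl helper, `tarfile` decodes headers directly, so there is no need to parse the `tar -tv` text format.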
[pve-devel] [PATCH storage/manager v2] allow upload & import of qcow2 in the web UI
most of the building blocks are already there:
* we can have qcow2 files in an import storage
* we can import qcow2 files via the api from such a storage

this series fills in the missing bits & pieces:
* allow uploading qcow2 files into an import storage via the webgui
* adding the possibility to select such a file when creating a vm/disk

We could maybe also allow this for raw/vmdk if we want to, but IMHO we can
start out with qcow2 and add the others as necessary. (if wanted, I can of
course also add the others in a next version or as a follow up)

changes from v1:
* fixed an issue where the file selector would be hidden but invalid

pve-storage:

Dominik Csapak (1):
  import: allow upload of qcow2 files into import storage

 src/PVE/API2/Storage/Status.pm | 17 -
 src/PVE/Storage.pm | 2 +-
 2 files changed, 17 insertions(+), 2 deletions(-)

pve-manager:

Dominik Csapak (3):
  ui: storage content: allow upload of qcow2 for import type
  ui: form: file selector: allow optional filter
  ui: qemu hd edit: allow importing a disk from the import storage

 www/manager6/form/FileSelector.js | 10 
 www/manager6/qemu/HDEdit.js | 72 +-
 www/manager6/window/UploadToStorage.js | 2 +-
 3 files changed, 82 insertions(+), 2 deletions(-)

--
2.39.5

___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH storage v9 07/29] extract backup config: delegate to backup provider for storages that support it
Signed-off-by: Fiona Ebner --- src/PVE/Storage.pm | 11 +++ 1 file changed, 11 insertions(+) diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm index 10a4abc..7174f0f 100755 --- a/src/PVE/Storage.pm +++ b/src/PVE/Storage.pm @@ -1759,6 +1759,17 @@ sub extract_vzdump_config { storage_check_enabled($cfg, $storeid); return PVE::Storage::PBSPlugin->extract_vzdump_config($scfg, $volname, $storeid); } + + if (storage_has_feature($cfg, $storeid, 'backup-provider')) { + my $plugin = PVE::Storage::Plugin->lookup($scfg->{type}); + my $log_function = sub { + my ($log_level, $message) = @_; + my $prefix = $log_level eq 'err' ? 'ERROR' : uc($log_level); + print "$prefix: $message\n"; + }; + my $backup_provider = $plugin->new_backup_provider($scfg, $storeid, $log_function); + return $backup_provider->archive_get_guest_config($volname, $storeid); + } } my $archive = abs_filesystem_path($cfg, $volid); -- 2.39.5 ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH qemu-server 2/2] vnc: connect to audio device if there is one
this is in preparation of noVNC audio support. For that to work we have to connect vnc to the audiodev, else qemu does not know which device to encode audio from. Since we only can have one audio device, simply use that if it exists. This works simultaneously for SPICE and VNC. Live migration is not impacted, since it's only a logical change and not a hardware change. (Tested live-migration multiple times in both directions, worked without any issues). Signed-off-by: Dominik Csapak --- PVE/QemuServer.pm | 4 +++- test/cfg2cmd/audio.conf.cmd | 2 +- 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm index 96786a96..dfdbee99 100644 --- a/PVE/QemuServer.pm +++ b/PVE/QemuServer.pm @@ -3706,7 +3706,9 @@ sub config_to_command { push @$cmd, '-display', 'egl-headless,gl=core' if $vga->{type} eq 'virtio-gl'; # VIRGL my $socket = PVE::QemuServer::Helpers::vnc_socket($vmid); - push @$cmd, '-vnc', "unix:$socket,password=on"; + my $audioconf = conf_has_audio($conf); + my $audiodev = defined($audioconf) ? ",audiodev=$audioconf->{backend_id}" : ""; + push @$cmd, '-vnc', "unix:$socket,password=on$audiodev"; } else { push @$cmd, '-vga', 'none' if $vga->{type} eq 'none'; push @$cmd, '-nographic'; diff --git a/test/cfg2cmd/audio.conf.cmd b/test/cfg2cmd/audio.conf.cmd index add6f55d..7014bf51 100644 --- a/test/cfg2cmd/audio.conf.cmd +++ b/test/cfg2cmd/audio.conf.cmd @@ -12,7 +12,7 @@ -smp '3,sockets=1,cores=3,maxcpus=3' \ -nodefaults \ -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \ - -vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \ + -vnc 'unix:/var/run/qemu-server/8006.vnc,password=on,audiodev=none-backend0' \ -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \ -m 768 \ -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \ -- 2.39.5 ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Re: [pve-devel] About PVE Backup Integration Guide
Note there is a v6 of the patch series now [0].

On 01.04.25 at 09:02, Prashant Patil wrote:
> We have gone through plugins POD material; we have few questions from
> very little understanding about plugins:
>
> 1. Storage plugin
>    1. What is the main purpose of storage plugin?

The storage plugin is needed, so that users can configure your backup server
as a backup storage in Proxmox VE, i.e. select it when configuring a backup
job or starting a backup, list the backups and issue restore operations.

>    2. Do we need to implement our own storage plugin? Which all
>       functions we need to implement?

Yes. Look at the example plugins [1][2] to see which methods are required.

> 2. Backup Provider plugin
>    1. What is the main purpose of backup provider plugin?

This is used by the Proxmox VE backend to interface with your backup
server/solution during backup and restore tasks.

>    2. Do we need to implement our own backup provider plugin? Which
>       all functions we need to implement?

Yes. You need to implement all methods, but you can decide which backup and
restore mechanisms you want to implement. One for VMs and one for containers.

>    3. Which model this provider refers to - push model or pull model?

Not sure what you mean here. The Proxmox VE backend will call your plugin's
methods during backup. It's up to you how exactly you handle the data.

>    4. It looks like we have two main callback functions – job_hook()
>       and backup_hook(). Who calls these functions and when do they
>       get called?

They got replaced by job_init(), job_cleanup(), backup_init(),
backup_cleanup() etc. in v6 of the series, but the purpose is still the very
same. I'll just quote from [3]:

> In Proxmox VE, a backup job consists of backup tasks for individual
> guests. There are methods for initialization and cleanup of the job,
> i.e. job_init() and job_cleanup() and for each guest backup, i.e.
> backup_init() and backup_cleanup().
>
> The backup_get_mechanism() method is used to decide on the backup
> mechanism. Currently, 'file-handle' or 'nbd' for VMs, and 'directory'
> for containers is possible. The method also lets the plugin indicate
> whether to use a bitmap for incremental VM backup or not. It is enough
> to implement one mechanism for VMs and one mechanism for containers.
>
> Next, there are methods for backing up the guest's configuration and
> data, backup_vm() for VM backup and backup_container() for container
> backup, with the latter running
>
> Finally, some helpers like getting the provider name or volume ID for
> the backup target, as well as for handling the backup log.
>
> The backup transaction looks as follows:
>
> First, job_init() is called that can be used to check backup server
> availability and prepare the connection. Then for each guest
> backup_init() followed by backup_vm() or backup_container() and finally
> backup_cleanup(). Afterwards job_cleanup() is called. For containers,
> there is an additional backup_container_prepare() call while still
> privileged. The actual backup_container() call happens as the
> (unprivileged) container root user, so that the file owner and group IDs
> match the container's perspective.

See also the documentation for each method for more details.

> 5. backup_get_mechanism() – This function needs to be called by
>    external backup product?

No, all the functions are called by the Proxmox VE backup stack during backup.

> 6. backup_vm() – This function needs to be called by external
>    backup product? Does it support all disk formats and backed
>    storages?

No. The Proxmox VE backend will call your implementation of this method
during backup of a VM guest.
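The transaction order quoted above can be sketched with a small mock. The method names (job_init, backup_init, backup_vm, backup_cleanup, job_cleanup) are taken from the v6 series as quoted; the driver function and the mock class are illustrative assumptions, not PVE code:

```python
class RecordingProvider:
    """Mock provider that only records which hook runs when."""

    def __init__(self):
        self.calls = []

    def job_init(self):
        self.calls.append("job_init")

    def backup_init(self, vmid):
        self.calls.append(f"backup_init {vmid}")

    def backup_vm(self, vmid):
        self.calls.append(f"backup_vm {vmid}")

    def backup_cleanup(self, vmid):
        self.calls.append(f"backup_cleanup {vmid}")

    def job_cleanup(self):
        self.calls.append("job_cleanup")


def run_backup_job(provider, vmids):
    """Drive the transaction in the documented order: job_init(), then per
    guest backup_init() -> backup_vm() -> backup_cleanup(), and finally
    job_cleanup()."""
    provider.job_init()
    try:
        for vmid in vmids:
            provider.backup_init(vmid)
            provider.backup_vm(vmid)
            provider.backup_cleanup(vmid)
    finally:
        provider.job_cleanup()
```

For container guests the per-guest step would call backup_container() instead of backup_vm(), with the additional backup_container_prepare() call mentioned above.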
[0]: https://lore.proxmox.com/pve-devel/20250331132020.105324-1-f.eb...@proxmox.com/
[1]: BackupProviderDirExamplePlugin.pm in
     https://lore.proxmox.com/pve-devel/20250331132020.105324-17-f.eb...@proxmox.com/
[2]: BorgBackupPlugin.pm in
     https://lore.proxmox.com/pve-devel/20250331132020.105324-18-f.eb...@proxmox.com/
[3]: https://lore.proxmox.com/pve-devel/20250331132020.105324-13-f.eb...@proxmox.com/

Best Regards,
Fiona

___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Re: [pve-devel] [PATCH ifupdown2 1/1] Correctly handle IPv6 addresses in vxlan
--- Begin Message ---
Just noticed I didn’t reply to this before - the patch set in ifupdown2 does
not actually address IPv6 local tunnel IPs at all.

> On Jan 23, 2025, at 04:26, Stefan Hanreich wrote:
>
> Hi, it seems like ifupdown2 will be merging a PR for VXLAN IPv6 support
> in the near future [1]. Hence why I think we should be holding off on
> this, until we know which solution (there are several PRs for this
> actually) they merge. Otherwise we run into the problem of having
> incompatible / custom settings for this.
>
> [1]
> https://github.com/CumulusNetworks/ifupdown2/pull/318#issuecomment-2541947074
>
--- End Message ---

___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Re: [pve-devel] About PVE Backup Integration Guide
On 01.04.25 at 10:52, Prashant Patil wrote:
> When does the Proxmox VE backend call functions from the plugin? The
> backup/restore tasks/jobs are configured in our Backup Solution, hence
> this question.

During running a Proxmox VE backup job or a backup/restore invocation via the
API. If you manage the jobs on the server side, you will need to start the
backup via the corresponding API call.

For backup:
https://pve.proxmox.com/pve-docs/api-viewer/#/nodes/{node}/vzdump

For restore of a VM, POST call with the 'archive' argument being the volume
ID used by your storage plugin implementation:
https://pve.proxmox.com/pve-docs/api-viewer/#/nodes/{node}/qemu

For restore of a container, POST call with the 'restore' argument set to true
and the 'ostemplate' argument being the volume ID used by your storage plugin
implementation:
https://pve.proxmox.com/pve-docs/api-viewer/#/nodes/{node}/lxc

See also:
https://pve.proxmox.com/pve-docs/chapter-pveum.html#pveum_tokens

However, note that users will still be able to configure jobs/issue
backups/restores for your backup storage on the Proxmox VE-side. That is
where the backups are taken/needed for restore, so the provider API is
designed to be fully integrated on the Proxmox VE-side.

Best Regards,
Fiona

___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
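The pieces of such an API call can be sketched as follows. The endpoint path and the PVEAPIToken authorization scheme come from the PVE docs linked above; the helper name and the chosen request parameters ('storage', 'mode') and their values are illustrative assumptions, not a complete client:

```python
def build_vzdump_request(host, node, vmid, token_id, token_secret):
    """Assemble URL, auth header and parameters for a vzdump API call
    authenticated with an API token (sketch; no network I/O here)."""
    url = f"https://{host}:8006/api2/json/nodes/{node}/vzdump"
    headers = {"Authorization": f"PVEAPIToken={token_id}={token_secret}"}
    # Example parameters; see the API viewer for the full set.
    params = {"vmid": str(vmid), "storage": "my-backup-store", "mode": "snapshot"}
    return url, headers, params
```

The returned triple could then be fed to any HTTP client as a POST request; separating "build" from "send" keeps the request construction testable without a live cluster.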
[pve-devel] [PATCH zfsonlinux 6/8] d/control: libzfslinux-dev Depends: libtirpc-dev, zlib1g-dev (Closes: #1095855)
follows debian-upstream commit e6dc49f60a43045ef87cf683305e03c864274aac Originally-by: наб Signed-off-by: Stoiko Ivanov --- debian/control | 5 - 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/debian/control b/debian/control index 6ee22da9e..5a3ac6867 100644 --- a/debian/control +++ b/debian/control @@ -68,7 +68,10 @@ Description: Solaris userland utility library for Linux Package: libzfslinux-dev Section: contrib/libdevel Architecture: linux-any -Depends: libssl-dev | libssl1.0-dev, +Depends: libblkid-dev, + libssl-dev | libssl1.0-dev, + libtirpc-dev, + zlib1g-dev, libnvpair3linux (= ${binary:Version}), libuutil3linux (= ${binary:Version}), libzfs6linux (= ${binary:Version}), -- 2.39.5 ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH v8 qemu-server 06/11] image convert: collect options in hash argument
From: Fiona Ebner In preparation to add another option and to improve style for the callers. One of the test cases that specified $is_zero_initialized is for a non-existent storage, so the option was not added there. Signed-off-by: Fiona Ebner --- No changes to v7. PVE/QemuServer.pm | 19 +++--- PVE/QemuServer/ImportDisk.pm | 3 +- test/run_qemu_img_convert_tests.pl | 58 -- 3 files changed, 56 insertions(+), 24 deletions(-) diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm index ffd5d56..5c6cb94 100644 --- a/PVE/QemuServer.pm +++ b/PVE/QemuServer.pm @@ -7787,8 +7787,14 @@ sub convert_iscsi_path { die "cannot convert iscsi path '$path', unknown format\n"; } +# The possible options are: +# bwlimit - The bandwidth limit in KiB/s. +# is-zero-initialized - If the destination image is zero-initialized. +# snapname - Use this snapshot of the source image. sub qemu_img_convert { -my ($src_volid, $dst_volid, $size, $snapname, $is_zero_initialized, $bwlimit) = @_; +my ($src_volid, $dst_volid, $size, $opts) = @_; + +my ($bwlimit, $snapname) = $opts->@{qw(bwlimit snapname)}; my $storecfg = PVE::Storage::config(); my ($src_storeid) = PVE::Storage::parse_volume_id($src_volid, 1); @@ -7846,7 +7852,7 @@ sub qemu_img_convert { push @$cmd, $src_path; -if (!$dst_is_iscsi && $is_zero_initialized) { +if (!$dst_is_iscsi && $opts->{'is-zero-initialized'}) { push @$cmd, "zeroinit:$dst_path"; } else { push @$cmd, $dst_path; @@ -8307,7 +8313,12 @@ sub clone_disk { push $cmd->@*, "bs=$bs", "osize=$size", "if=$src_path", "of=$dst_path"; run_command($cmd); } else { - qemu_img_convert($drive->{file}, $newvolid, $size, $snapname, $sparseinit, $bwlimit); + my $opts = { + bwlimit => $bwlimit, + 'is-zero-initialized' => $sparseinit, + snapname => $snapname, + }; + qemu_img_convert($drive->{file}, $newvolid, $size, $opts); } } } @@ -8391,7 +8402,7 @@ sub create_efidisk($$$) { my $volid = PVE::Storage::vdisk_alloc($storecfg, $storeid, $vmid, $fmt, undef, $vars_size); 
PVE::Storage::activate_volumes($storecfg, [$volid]); -qemu_img_convert($ovmf_vars, $volid, $vars_size_b, undef, 0); +qemu_img_convert($ovmf_vars, $volid, $vars_size_b); my $size = PVE::Storage::volume_size_info($storecfg, $volid, 3); return ($volid, $size/1024); diff --git a/PVE/QemuServer/ImportDisk.pm b/PVE/QemuServer/ImportDisk.pm index 30d56ba..75b5e0d 100755 --- a/PVE/QemuServer/ImportDisk.pm +++ b/PVE/QemuServer/ImportDisk.pm @@ -70,7 +70,8 @@ sub do_import { local $SIG{PIPE} = sub { die "interrupted by signal $!\n"; }; PVE::Storage::activate_volumes($storecfg, [$dst_volid]); - PVE::QemuServer::qemu_img_convert($src_path, $dst_volid, $src_size, undef, $zeroinit); + PVE::QemuServer::qemu_img_convert( + $src_path, $dst_volid, $src_size, { 'is-zero-initialized' => $zeroinit }); PVE::Storage::deactivate_volumes($storecfg, [$dst_volid]); PVE::QemuConfig->lock_config($vmid, $create_drive) if !$params->{'skip-config-update'}; }; diff --git a/test/run_qemu_img_convert_tests.pl b/test/run_qemu_img_convert_tests.pl index 29c188d..652e61f 100755 --- a/test/run_qemu_img_convert_tests.pl +++ b/test/run_qemu_img_convert_tests.pl @@ -55,7 +55,7 @@ my $storage_config = { my $tests = [ { name => 'qcow2raw', - parameters => [ "local:$vmid/vm-$vmid-disk-0.qcow2", "local:$vmid/vm-$vmid-disk-0.raw", 1024*10, undef, 0, undef ], + parameters => [ "local:$vmid/vm-$vmid-disk-0.qcow2", "local:$vmid/vm-$vmid-disk-0.raw", 1024*10 ], expected => [ "/usr/bin/qemu-img", "convert", "-p", "-n", "-f", "qcow2", "-O", "raw", "/var/lib/vz/images/$vmid/vm-$vmid-disk-0.qcow2", "/var/lib/vz/images/$vmid/vm-$vmid-disk-0.raw" @@ -63,7 +63,7 @@ my $tests = [ }, { name => "raw2qcow2", - parameters => [ "local:$vmid/vm-$vmid-disk-0.raw", "local:$vmid/vm-$vmid-disk-0.qcow2", 1024*10, undef, 0, undef ], + parameters => [ "local:$vmid/vm-$vmid-disk-0.raw", "local:$vmid/vm-$vmid-disk-0.qcow2", 1024*10 ], expected => [ "/usr/bin/qemu-img", "convert", "-p", "-n", "-f", "raw", "-O", "qcow2", 
"/var/lib/vz/images/$vmid/vm-$vmid-disk-0.raw", "/var/lib/vz/images/$vmid/vm-$vmid-disk-0.qcow2" @@ -71,7 +71,7 @@ my $tests = [ }, { name => "local2rbd", - parameters => [ "local:$vmid/vm-$vmid-disk-0.raw", "rbd-store:vm-$vmid-disk-0", 1024*10, undef, 0, undef ], + parameters => [ "local:$vmid/vm-$vmid-disk-0.raw", "rbd-store:vm-$vmid-disk-0", 1024*10 ], expected => [ "/usr/bin/qemu-img", "convert", "-p", "-n", "-f", "raw", "-O", "raw",
[pve-devel] [PATCH qemu-server v9 13/29] backup: implement backup for external providers
The state of the VM's disk images at the time the backup is started is preserved via a snapshot-access block node. Old data is moved to the fleecing image when new guest writes come in. The snapshot-access block node, as well as the associated bitmap in case of incremental backup, will be made available to the external provider. They are exported via NBD and for 'nbd' mechanism, the NBD socket path is passed to the provider, while for 'file-handle' mechanism, the NBD export is made accessible via a file handle and the bitmap information is made available via a $next_dirty_region->() function. For 'file-handle', the 'nbdinfo' and 'nbdfuse' binaries are required. The provider can indicate that it wants to do an incremental backup by returning the bitmap ID that was used for a previous backup and it will then be told if the bitmap was newly created (either first backup or old bitmap was invalid) or if the bitmap can be reused. The provider then reads the parts of the NBD or virtual file it needs, either the full disk for full backup, or the dirty parts according to the bitmap for incremental backup. The bitmap has to be respected, reads to other parts of the image will return an error. After backing up each part of the disk, it should be discarded in the export to avoid unnecessary space usage in the fleecing image (requires the storage underlying the fleecing image to support discard too). Signed-off-by: Fiona Ebner [WB: - instead of backup_vm_available_bitmaps call backup_vm_query_incremental, which provides a bitmap-mode instead, pass this along and use just the storage id as bitmap name] Signed-off-by: Wolfgang Bumiller [FE: Move construction of $devices parameter to after sizes are available. They were undef before. Adapt to changed backup-access QMP interface not requiring an explicit bitmap name anymore.] Signed-off-by: Fiona Ebner --- Changes in v9: * Move construction of $devices parameter to after sizes are available. They were undef before. 
* Adapt to changed backup-access QMP interface not requiring an explicit bitmap name anymore. PVE/VZDump/QemuServer.pm | 413 ++- 1 file changed, 411 insertions(+), 2 deletions(-) diff --git a/PVE/VZDump/QemuServer.pm b/PVE/VZDump/QemuServer.pm index 65f01791..887f53ba 100644 --- a/PVE/VZDump/QemuServer.pm +++ b/PVE/VZDump/QemuServer.pm @@ -3,12 +3,15 @@ package PVE::VZDump::QemuServer; use strict; use warnings; +use Fcntl qw(:mode); use File::Basename; -use File::Path; +use File::Path qw(make_path remove_tree); +use File::stat qw(); use IO::File; use IPC::Open3; use JSON; use POSIX qw(EINTR EAGAIN); +use Time::HiRes qw(usleep); use PVE::Cluster qw(cfs_read_file); use PVE::INotify; @@ -20,7 +23,7 @@ use PVE::QMPClient; use PVE::Storage::Plugin; use PVE::Storage::PBSPlugin; use PVE::Storage; -use PVE::Tools; +use PVE::Tools qw(run_command); use PVE::VZDump; use PVE::Format qw(render_duration render_bytes); @@ -30,6 +33,7 @@ use PVE::QemuServer::Drive qw(checked_volume_format); use PVE::QemuServer::Helpers; use PVE::QemuServer::Machine; use PVE::QemuServer::Monitor qw(mon_cmd); +use PVE::QemuServer::QMPHelpers; use base qw (PVE::VZDump::Plugin); @@ -284,6 +288,8 @@ sub archive { if ($self->{vzdump}->{opts}->{pbs}) { $self->archive_pbs($task, $vmid); +} elsif ($self->{vzdump}->{'backup-provider'}) { + $self->archive_external($task, $vmid); } else { $self->archive_vma($task, $vmid, $filename, $comp); } @@ -1148,11 +1154,90 @@ sub snapshot { # nothing to do } +my sub cleanup_file_handles { +my ($self, $file_handles) = @_; + +for my $file_handle ($file_handles->@*) { + close($file_handle) or $self->log('warn', "unable to close file handle - $!"); +} +} + +my sub cleanup_nbd_mounts { +my ($self, $info) = @_; + +for my $mount_point (keys $info->%*) { + my $pid_file = delete($info->{$mount_point}->{'pid-file'}); + unlink($pid_file) or $self->log('warn', "unable to unlink '$pid_file' - $!"); + # Do a lazy unmount, because the target might still be busy even if the file handle 
was + # already closed. + eval { run_command(['fusermount', '-z', '-u', $mount_point ]); }; + if (my $err = $@) { + delete $info->{$mount_point}; + $self->log('warn', "unable to unmount NBD backup source '$mount_point' - $err"); + } +} + +# Wait for the unmount before cleaning up child PIDs to avoid 'nbdfuse' processes being +# interrupted by the signals issued there. +my $waited; +my $wait_limit = 50; # 5 seconds +for ($waited = 0; $waited < $wait_limit && scalar(keys $info->%*); $waited++) { + for my $mount_point (keys $info->%*) { + delete($info->{$mount_point}) if !-e $info->{$mount_point}->{'virtual-file'}; + eval { remove_tree($mount_point); }; + } + usleep(100_000); +} +
Re: [pve-devel] [PATCH v4 pve-storage 1/5] qcow2: add external snapshot support
--- Begin Message ---
> > @@ -716,7 +721,11 @@ sub filesystem_path {
> > my $dir = $class->get_subdir($scfg, $vtype);
> > - $dir .= "/$vmid" if $vtype eq 'images';
> + if ($scfg->{snapext} && $snapname) {
> + $name = $class->get_snap_volname($volname, $snapname);
> + } else {
> + $dir .= "/$vmid" if $vtype eq 'images';
> + }

>> this is a bit weird, as it mixes volnames (with the `$vmid/` prefix)
>> and names (without), it's only called twice in this patch, and this
>> here already has $volname parsed, so could we maybe let
>> get_snap_volname take and return the $name part without the dir?

ok!

> > my $path = "$dir/$name";
> > @@ -873,7 +882,7 @@ sub clone_image { } }
> > sub alloc_image {
> > - my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
> > + my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size, $backing) = @_;

>> this extends the storage API, so it should actually do that.. and
>> probably $backing should not be an arbitrary path, but something that
>> is resolved locally?

I'll send the $snapname as param instead.

> +
> + if($backing) {
> + push @$cmd, '-b', $backing, '-F', 'qcow2';
> + push @$options, 'extended_l2=on','cluster_size=128k';
> + };
> + push @$options, preallocation_cmd_option($scfg, $fmt);
> + push @$cmd, '-o', join(',', @$options) if @$options > 0;
> + push @$cmd, '-f', $fmt, $path;
> + push @$cmd, "${size}K" if !$backing;

>> is this because it will automatically take the size from the backing
>> image?

Yes, it takes the size from the backing file (and refuses if you specify the
size param at the same time as a backing file).

> + my $path = $class->path($scfg, $volname, $storeid);
> + my $snappath = $class->path($scfg, $volname, $storeid, $snap);
> + #rename current volume to snap volume
> + die "snapshot volume $snappath already exist\n" if -e $snappath;
> + rename($path, $snappath) if -e $path;

>> this is still looking weird.. I don't think it makes sense interface
>> wise to allow snapshotting a volume that doesn't even exist..
This is more by security, I'm still unsure of the behaviour if you have multiple disks, and that snapshot is dying in the middle. (1 disk rename, the other not renamed). > + > + my ($vtype, $name, $vmid, undef, undef, $isBase, $format) = > + $class->parse_volname($volname); > + > + $class->alloc_image($storeid, $scfg, $vmid, 'qcow2', $name, undef, > $snappath); > + if ($@) { > + eval { $class->free_image($storeid, $scfg, $volname, 0) }; > + warn $@ if $@; >>missing cleanup - this should undo the rename from above Do you have an idea how to do it with mutiple disk ? (revert renaming of other disks elsewhere in the code? just keep them like this)? > > @@ -1187,9 +1251,15 @@ sub volume_snapshot_rollback { > > my $path = $class->filesystem_path($scfg, $volname); > > - my $cmd = ['/usr/bin/qemu-img', 'snapshot','-a', $snap, $path]; > - > - run_command($cmd); > + if ($scfg->{snapext}) { > + #simply delete the current snapshot and recreate it > + my $path = $class->filesystem_path($scfg, $volname); > + unlink($path); > + $class->volume_snapshot($scfg, $storeid, $volname, $snap); >>instead of volume_snapshot, this could simply call alloc_image with >>the backing file? then volume_snapshot could always rename and always >>cleanup properly.. Yes , better like this indeed > > + } else { > + #we rebase the child image on the parent as new backing image >>should we extend this to make it clear what this means? it means >>copying any parts of $snap that are not in $parent and not yet >>overwritten by $child into $child, right? 
>> yes, intermediate snapshot: (rebase) --- snap1 (10G)-->snap2 (1G)current(1G) ---> delete snap2 rebase current on snap1 snap1(10G)->current(2G) or snap1 (10G)-->snap2 (1G)> snap3 (1G)--->current(1G) ---> delete snap2 rebase snap3 on snap1 snap1 (10G)---> snap3 (2G)--->current(1G) >>so how expensive this is depends on: >>- how many changes are between $parent and $snap (increases cost) >>- how many of those are overwritten by changes between $snap and >>$child (decreases cost) but yes, the final size of the child is not 100% the additional content of the deleted snapshot, if some blocks has already been overwriten in the child so, we could call it: "merge diff content of the delete snap to the childsnap" >>+sub get_snap_volname { >>+my ($class, $volname, $snapname) = @_; >>+ >>+my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, >>$format) = $class->parse_volname($volname); +$name = !$snapname || $snapname eq 'current' ? $volname : "$vmid/snap-$snapname-$name"; >>other way round would be better to group by volume first IMHO ($vmid/snap-$name-$snapname), as this is similar to how we encode >>snapshots often on the storage level (volume@snap). we also need to >>have some delimiter between snapshot and volume name that is not >>allowed in either (hard for volname si
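[Editor's aside] For readers following along, the option handling quoted in this thread can be sketched outside Perl. Below is a rough Python model of the alloc_image command construction (names are hypothetical, preallocation handling is omitted); it is an illustration of the logic, not the actual implementation:

```python
def qemu_img_create_cmd(path, size_kib=None, backing=None, fmt="qcow2"):
    """Sketch of the option handling discussed above (simplified;
    preallocation options omitted)."""
    cmd = ["qemu-img", "create"]
    options = []
    if backing:
        # overlay image: size is inherited from the backing file,
        # so no explicit size argument is appended below
        cmd += ["-b", backing, "-F", "qcow2"]
        options += ["extended_l2=on", "cluster_size=128k"]
    if options:
        cmd += ["-o", ",".join(options)]
    cmd += ["-f", fmt, path]
    if not backing:
        cmd.append(f"{size_kib}K")
    return cmd

# an overlay backed by a snapshot file gets no explicit size:
print(qemu_img_create_cmd("vm-100-disk-0.qcow2",
                          backing="snap-snap1-vm-100-disk-0.qcow2"))
```

This also makes the review question above concrete: the size argument and the backing argument are mutually exclusive in this construction.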
Re: [pve-devel] [PATCH v4 qemu-server 11/11] qcow2: add external snapshot support
commit description missing here as well..

I haven't tested this (or the first patches doing the blockdev
conversion) yet, but I see a few bigger design/architecture issues left
(besides FIXMEs for missing pieces that previously worked ;)):

- we should probably move the decision whether a snapshot is done on the
  storage layer or by qemu into the control of the storage plugin,
  especially since we are currently cleaning that API up to allow easier
  implementation of external plugins
- if we do that, we can also make "uses external qcow2 snapshots" a
  property of the storage plugin+config to replace hard-coded checks for
  the snapext property or lvm+qcow2
- there are a few operations here that should not call directly into the
  storage plugin code or do equivalent actions, but should rather get a
  proper interface in that storage plugin API

the first one is the renaming of a blockdev while it is used, which is
currently done like this:
-- "link" snapshot path to make it available under old and new name
-- handle blockdev additions/reopening/backing-file updates/deletions on
   the qemu layer
-- remove old snapshot path link
-- if LVM, rename actual volume (for non-LVM, linking followed by
   unlinking the source is effectively a rename already)

I wonder whether that couldn't be made more straight-forward by doing
-- rename snapshot volume/image (qemu must already have the old name
   open anyway and should be able to continue using it)
-- do blockdev additions/reopening/backing-file updates/deletions on the
   qemu layer

or is there an issue/check in qemu somewhere that prevents this
approach? if not, we could just introduce a "volume_snapshot_rename" or
extend rename_volume with a snapshot parameter..

the second thing that happens is deleting a snapshot volume/path,
without deleting the whole snapshot..
that one we could easily support by extending volume_snapshot_delete by extending the $running parameter (e.g., passing "2") or adding a new one to signify that all the housekeeping was already done, and just the actual snapshot volume should be deleted. this shouldn't be an issue provided all such calls are guarded by first checking that we are using external snapshots.. > Alexandre Derumier via pve-devel hat am > 11.03.2025 11:29 CET geschrieben: > Signed-off-by: Alexandre Derumier > --- > PVE/QemuConfig.pm | 4 +- > PVE/QemuServer.pm | 226 +--- > PVE/QemuServer/Drive.pm | 4 + > 3 files changed, 220 insertions(+), 14 deletions(-) > > diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm > index b60cc398..2b3acb15 100644 > --- a/PVE/QemuConfig.pm > +++ b/PVE/QemuConfig.pm > @@ -377,7 +377,7 @@ sub __snapshot_create_vol_snapshot { > > print "snapshotting '$device' ($drive->{file})\n"; > > -PVE::QemuServer::qemu_volume_snapshot($vmid, $device, $storecfg, $volid, > $snapname); > +PVE::QemuServer::qemu_volume_snapshot($vmid, $device, $storecfg, $drive, > $snapname); > } > > sub __snapshot_delete_remove_drive { > @@ -414,7 +414,7 @@ sub __snapshot_delete_vol_snapshot { > my $storecfg = PVE::Storage::config(); > my $volid = $drive->{file}; > > -PVE::QemuServer::qemu_volume_snapshot_delete($vmid, $storecfg, $volid, > $snapname); > +PVE::QemuServer::qemu_volume_snapshot_delete($vmid, $storecfg, $drive, > $snapname); > > push @$unused, $volid; > } > diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm > index 60481acc..6ce3e9c6 100644 > --- a/PVE/QemuServer.pm > +++ b/PVE/QemuServer.pm > @@ -4449,20 +4449,200 @@ sub qemu_block_resize { > } > > sub qemu_volume_snapshot { > -my ($vmid, $deviceid, $storecfg, $volid, $snap) = @_; > +my ($vmid, $deviceid, $storecfg, $drive, $snap) = @_; > > +my $volid = $drive->{file}; > my $running = check_running($vmid); > - > -if ($running && do_snapshots_with_qemu($storecfg, $volid, $deviceid)) { > - mon_cmd($vmid, 
'blockdev-snapshot-internal-sync', device => $deviceid, > name => $snap); > +my $do_snapshots_with_qemu = do_snapshots_with_qemu($storecfg, $volid, > $deviceid) if $running; forbidden syntax > +if ($do_snapshots_with_qemu) { > + if($do_snapshots_with_qemu == 2) { > + my $snapshots = PVE::Storage::volume_snapshot_info($storecfg, > $volid); > + my $parent_snap = $snapshots->{'current'}->{parent}; > + my $size = PVE::Storage::volume_size_info($storecfg, $volid, 5); > + blockdev_rename($storecfg, $vmid, $deviceid, $drive, 'current', > $snap, $parent_snap); > + blockdev_external_snapshot($storecfg, $vmid, $deviceid, $drive, > $snap, $size); > + } else { > + mon_cmd($vmid, 'blockdev-snapshot-internal-sync', device => > $deviceid, name => $snap); > + } > } else { > PVE::Storage::volume_snapshot($storecfg, $volid, $snap); > } > } > > +sub blockdev_external_snapshot { > +my ($storecfg, $vmid, $deviceid, $drive, $snap, $size) = @_; > + > +
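[Editor's aside] To summarize the branch structure in the qemu_volume_snapshot() hunk quoted above, here is a rough Python model of the three snapshot paths (illustrative names only; the real code is the Perl shown in the patch):

```python
EXTERNAL, INTERNAL = 2, 1  # values do_snapshots_with_qemu() returns in the patch

def volume_snapshot(running, qemu_mode, snap):
    """Rough model of the branch structure; returns the actions taken."""
    if running and qemu_mode == EXTERNAL:
        # rename the 'current' overlay to the snapshot name, then put a
        # fresh overlay on top that is backed by it
        return [f"blockdev_rename current -> {snap}",
                f"blockdev_external_snapshot backed by {snap}"]
    if running and qemu_mode == INTERNAL:
        return [f"qmp blockdev-snapshot-internal-sync name={snap}"]
    # VM not running, or snapshots handled by the storage layer
    return [f"storage volume_snapshot {snap}"]

print(volume_snapshot(True, EXTERNAL, "snap1"))
```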
[pve-devel] [PATCH 4/4] run cargo fmt with edition 2024
Signed-off-by: Maximiliano Sandoval --- proxmox-ve-config/src/firewall/cluster.rs | 6 -- proxmox-ve-config/src/firewall/common.rs | 4 ++-- proxmox-ve-config/src/firewall/ct_helper.rs| 2 +- proxmox-ve-config/src/firewall/guest.rs| 4 ++-- proxmox-ve-config/src/firewall/host.rs | 2 +- proxmox-ve-config/src/firewall/parse.rs| 2 +- proxmox-ve-config/src/firewall/ports.rs| 2 +- proxmox-ve-config/src/firewall/types/address.rs| 6 -- proxmox-ve-config/src/firewall/types/alias.rs | 2 +- proxmox-ve-config/src/firewall/types/ipset.rs | 2 +- proxmox-ve-config/src/firewall/types/log.rs| 2 +- proxmox-ve-config/src/firewall/types/port.rs | 2 +- proxmox-ve-config/src/firewall/types/rule.rs | 4 ++-- proxmox-ve-config/src/firewall/types/rule_match.rs | 6 +++--- proxmox-ve-config/src/guest/types.rs | 2 +- proxmox-ve-config/src/guest/vm.rs | 2 +- proxmox-ve-config/src/sdn/config.rs| 4 ++-- proxmox-ve-config/src/sdn/ipam.rs | 2 +- proxmox-ve-config/tests/sdn/main.rs| 4 ++-- 19 files changed, 32 insertions(+), 28 deletions(-) diff --git a/proxmox-ve-config/src/firewall/cluster.rs b/proxmox-ve-config/src/firewall/cluster.rs index ce3dd53..97dcc15 100644 --- a/proxmox-ve-config/src/firewall/cluster.rs +++ b/proxmox-ve-config/src/firewall/cluster.rs @@ -135,6 +135,7 @@ pub struct Options { #[cfg(test)] mod tests { use crate::firewall::types::{ +Cidr, address::IpList, alias::{AliasName, AliasScope}, ipset::{IpsetAddress, IpsetEntry}, @@ -143,7 +144,6 @@ mod tests { rule_match::{ Icmpv6, Icmpv6Code, IpAddrMatch, IpMatch, Ports, Protocol, RuleMatch, Tcp, Udp, }, -Cidr, }; use super::*; @@ -226,7 +226,9 @@ IN BGP(REJECT) -log crit -source 1.2.3.4 Alias::new( "wide", Cidr::new_v6( -[0x, 0x, 0x, 0x, 0x, 0x, 0x, 0x000], +[ +0x, 0x, 0x, 0x, 0x, 0x, 0x, 0x000 +], 64 ) .unwrap(), diff --git a/proxmox-ve-config/src/firewall/common.rs b/proxmox-ve-config/src/firewall/common.rs index 3999168..09feb83 100644 --- a/proxmox-ve-config/src/firewall/common.rs +++ b/proxmox-ve-config/src/firewall/common.rs @@ 
-1,10 +1,10 @@ use std::collections::{BTreeMap, HashMap}; use std::io; -use anyhow::{bail, format_err, Error}; +use anyhow::{Error, bail, format_err}; use serde::de::IntoDeserializer; -use crate::firewall::parse::{parse_named_section_tail, split_key_value, SomeString}; +use crate::firewall::parse::{SomeString, parse_named_section_tail, split_key_value}; use crate::firewall::types::ipset::{IpsetName, IpsetScope}; use crate::firewall::types::rule::{Direction, Kind}; use crate::firewall::types::{Alias, Group, Ipset, Rule}; diff --git a/proxmox-ve-config/src/firewall/ct_helper.rs b/proxmox-ve-config/src/firewall/ct_helper.rs index 40e4fee..03e2637 100644 --- a/proxmox-ve-config/src/firewall/ct_helper.rs +++ b/proxmox-ve-config/src/firewall/ct_helper.rs @@ -1,4 +1,4 @@ -use anyhow::{bail, Error}; +use anyhow::{Error, bail}; use serde::Deserialize; use std::collections::HashMap; use std::sync::OnceLock; diff --git a/proxmox-ve-config/src/firewall/guest.rs b/proxmox-ve-config/src/firewall/guest.rs index 23eaa4e..1114452 100644 --- a/proxmox-ve-config/src/firewall/guest.rs +++ b/proxmox-ve-config/src/firewall/guest.rs @@ -4,13 +4,13 @@ use std::io; use crate::guest::types::Vmid; use crate::guest::vm::NetworkConfig; +use crate::firewall::types::Ipset; use crate::firewall::types::alias::{Alias, AliasName}; use crate::firewall::types::ipset::IpsetScope; use crate::firewall::types::log::LogLevel; use crate::firewall::types::rule::{Direction, Rule, Verdict}; -use crate::firewall::types::Ipset; -use anyhow::{bail, Error}; +use anyhow::{Error, bail}; use serde::Deserialize; use crate::firewall::parse::serde_option_bool; diff --git a/proxmox-ve-config/src/firewall/host.rs b/proxmox-ve-config/src/firewall/host.rs index 394896c..39ac67c 100644 --- a/proxmox-ve-config/src/firewall/host.rs +++ b/proxmox-ve-config/src/firewall/host.rs @@ -1,7 +1,7 @@ use std::io; use std::net::IpAddr; -use anyhow::{bail, Error}; +use anyhow::{Error, bail}; use serde::Deserialize; use 
crate::host::utils::{host_ips, network_interface_cidrs}; diff --git a/proxmox-ve-config/src/firewall/parse.rs b/proxmox-ve-config/src/firewall/parse.rs index 8cf4757..4d02352 100644 --- a/proxmox-ve-config/src/firewall/parse.rs +++ b/proxmox-ve-config/src/firewall/parse.rs @@ -1,6 +1,6 @@ use std::fmt; -use anyhow::{bail, format_err, Error}; +use anyhow::{Error, bail, format_err}; const NAME_SPECIAL_CHARACTERS
Re: [pve-devel] [RFC PATCH http-server] fix #6230: increase allowed post size
Am 12.03.25 um 14:27 schrieb Dominik Csapak:
> In some situations, e.g. having a large resource mapping, the UI can
> generate a request that is bigger than the current limit of 64KiB.
>
> Our files in pmxcfs can grow up to 1 MiB, so theoretically, a single
> mapping can grow to that size. In practice, a single entry will have
> much less. In #6230, a user has a mapping with about ~130KiB.
>
> Increase the limit to 512KiB so we have a bit of buffer left.

s/buffer/headroom/ ?

> We have to also increase the 'rbuf_max' size here, otherwise the request
> will fail (since the buffer is too small for the request).
> Since the post limit and the rbuf_max are tightly coupled, let it
> reflect that in the code. To do that, sum the post size + max header
> size there.
>
> Signed-off-by: Dominik Csapak
> ---
> sending as RFC because:
> * not sure about the rbuf_max calculation, but we have to increase it
>   when we increase $limit_max_post. (not sure how much is needed exactly)
> * there are alternative ways to deal with that, but some of those are vastly
>   more work:
>   - optimize the pci mapping to reduce the number of bytes we have to
>     send (e.g. by reducing the property names, or somehow magically
>     detect devices that belong together)
>   - add a new api for the mappings that can update the entries without
>     sending the whole mapping again (not sure if we can make this
>     backwards compatible)
>   - ignore the problem and simply tell the users to edit the file
>     manually (I don't like this one...)
>
> also, I tried to benchmark this, but did not find a tool that does this
> in a good way (e.g. apachebench complained about ssl, and I couldn't get
> it to work right). @Thomas you did such benchmarks last according to git
> log, do you remember what you used then?

argh, my commit message back then looks like I tried to write what I
used but then fumbled (or got knocked on the head) and sent it out
unfinished. To my defence, Wolfgang applied it ;P

I'm not totally sure what I used back then, might have been something
custom-made too. FWIW, recently I used oha [0] and found it quite OK,
albeit I did not try it with POST data, but one can define the method
and pass a request body from a CLI argument directly or a file, and it
has a flag to allow "insecure" TLS certs.

[0]: https://github.com/hatoo/oha

> @@ -1891,7 +1891,7 @@ sub accept_connections {
>     $self->{conn_count}++;
>     $reqstate->{hdl} = AnyEvent::Handle->new(
>         fh => $clientfh,
> -       rbuf_max => 64*1024,
> +       rbuf_max => $limit_max_post + ($limit_max_headers * $limit_max_header_size),

The header part is wrong, as the header limits are independent, i.e.,
the request must have less than $limit_max_headers separate headers and
all those together must be smaller than $limit_max_header_size. So just
adding $limit_max_header_size is enough, no multiplication required.

>         timeout => $self->{timeout},
>         linger => 0, # avoid problems with ssh - really needed ?
>         on_eof => sub {

___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
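[Editor's aside] The sizing argument in this review can be spelled out with concrete numbers; the header limit values below are made up purely for illustration (the real constants live in pve-http-server):

```python
limit_max_post = 512 * 1024        # new POST body limit from the patch
limit_max_headers = 64             # max number of header lines (illustrative)
limit_max_header_size = 8 * 1024   # cap on ALL headers together (illustrative)

# wrong: multiplies the per-request header count by the TOTAL header size cap,
# as if each header could be limit_max_header_size bytes on its own
rbuf_wrong = limit_max_post + limit_max_headers * limit_max_header_size

# right: all headers together already have to fit in limit_max_header_size,
# so the read buffer only needs the sum
rbuf_max = limit_max_post + limit_max_header_size

print(rbuf_wrong, rbuf_max)
```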
[pve-devel] [PATCH storage v5 12/32] extract backup config: delegate to backup provider for storages that support it
Signed-off-by: Fiona Ebner --- src/PVE/Storage.pm | 11 +++ 1 file changed, 11 insertions(+) diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm index 41d91a1..6041ccb 100755 --- a/src/PVE/Storage.pm +++ b/src/PVE/Storage.pm @@ -1759,6 +1759,17 @@ sub extract_vzdump_config { storage_check_enabled($cfg, $storeid); return PVE::Storage::PBSPlugin->extract_vzdump_config($scfg, $volname, $storeid); } + + if (storage_has_feature($cfg, $storeid, 'backup-provider')) { + my $plugin = PVE::Storage::Plugin->lookup($scfg->{type}); + my $log_function = sub { + my ($log_level, $message) = @_; + my $prefix = $log_level eq 'err' ? 'ERROR' : uc($log_level); + print "$prefix: $message\n"; + }; + my $backup_provider = $plugin->new_backup_provider($scfg, $storeid, $log_function); + return $backup_provider->restore_get_guest_config($volname, $storeid); + } } my $archive = abs_filesystem_path($cfg, $volid); -- 2.39.5 ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Re: [pve-devel] [PATCH v1 pve-storage 5/8] pluginbase: document hooks
On March 26, 2025 3:20 pm, Max Carrara wrote:
> Add docstrings for the following methods:
> - on_add_hook
> - on_update_hook
> - on_delete_hook
>
> Signed-off-by: Max Carrara
> ---
>  src/PVE/Storage/PluginBase.pm | 85 ++-
>  1 file changed, 74 insertions(+), 11 deletions(-)
>
> diff --git a/src/PVE/Storage/PluginBase.pm b/src/PVE/Storage/PluginBase.pm
> index 8a61dc3..b3ce684 100644
> --- a/src/PVE/Storage/PluginBase.pm
> +++ b/src/PVE/Storage/PluginBase.pm
> @@ -626,29 +626,92 @@ sub find_free_diskname {
>
>  =head2 HOOKS
>
> +The following methods are called whenever a storage associated with the given
> +plugin is added, updated, or deleted. These methods are useful for:
> +
> +=over
> +
> +=item * Setting up certain prerequisites when adding the storage (and then
> +tearing them down again when the storage is deleted)
> +
> +=item * Handling sensitive parameters that shouldn't be written directly
> +to C<storage.cfg> and ought to be stored elsewhere
> +
> +=item * Ensuring that certain conditions in the configuration are being upheld
> +that cannot be done via the remaining API otherwise
> +
> +=back
> +
> +and more.
> +
> +=cut
> +
> +=head3 $plugin->on_add_hook($storeid, \%scfg, %param)
> +
> +=head3 $plugin->on_add_hook(...)
> +
> +B<Optional.> May be implemented in a storage plugin.
> +
> +Called during the addition of a storage, before the new storage configuration
> +gets written.

see comments on first patch ;)

> +
> +C<%param> contains additional key-value arguments, usually sensitive keys that
> +have been extracted from C<\%scfg> in order not to write them to the storage
> +configuration.
> +
> +C<die>s in order to abort the addition if there are (grave) problems.
> +
> +C<storage.cfg> is B<locked> when this method is called.

This method is called while C<storage.cfg> is locked. Although I am not
sure what extra information this provides? same applies to the rest
below..

> +
>  =cut
>
> -# called during addition of storage (before the new storage config got written)
> -# die to abort addition if there are (grave) problems
> -# NOTE: runs in a storage config *locked* context
>  sub on_add_hook {
>      my ($class, $storeid, $scfg, %param) = @_;
>      return undef;
>  }
>
> -# called during storage configuration update (before the updated storage config got written)
> -# die to abort the update if there are (grave) problems
> -# NOTE: runs in a storage config *locked* context
> +=head3 $plugin->on_update_hook($storeid, \%scfg, %param)
> +
> +=head3 $plugin->on_update_hook(...)
> +
> +B<Optional.> May be implemented in a storage plugin.
> +
> +Called during the update of a storage configuration, before the new
> +configuration gets written.
> +
> +C<%param> contains additional key-value arguments, usually sensitive keys that
> +have been extracted from C<\%scfg> in order not to write them to the storage
> +configuration.
> +
> +C<die>s in order to abort the config update if there are (grave) problems.
> +
> +C<storage.cfg> is B<locked> when this method is called.
> +
> +=cut
> +
>  sub on_update_hook {
>      my ($class, $storeid, $scfg, %param) = @_;
>      return undef;
>  }
>
> -# called during deletion of storage (before the new storage config got written)
> -# and if the activate check on addition fails, to cleanup all storage traces
> -# which on_add_hook may have created.
> -# die to abort deletion if there are (very grave) problems
> -# NOTE: runs in a storage config *locked* context
> +=head3 $plugin->on_delete_hook($storeid, \%scfg)
> +
> +=head3 $plugin->on_delete_hook(...)
> +
> +B<Optional.> May be implemented in a storage plugin.
> +
> +Called during the deletion of a storage, before the new storage configuration
> +gets written. Also gets called if the activation check during storage
> +addition fails in order to clean up all traces which
> +C<< on_add_hook($storeid, \%scfg, %param) >>
> +may have created.
> +
> +C<die>s in order to abort the deletion if there are (very grave) problems.
> +
> +C<storage.cfg> is B<locked> when this method is called.
> +
> +=cut
> +
>  sub on_delete_hook {
>      my ($class, $storeid, $scfg) = @_;
>      return undef;
>  }
> --
> 2.39.5
>
>
> ___
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>

___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
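[Editor's aside] The hook contract described in the docstrings above can be illustrated with a toy model; this is Python purely for illustration (the real API is Perl — the method names match the docstrings, everything else is invented):

```python
class ToyPlugin:
    """Toy model of the add/delete hook contract (illustration only)."""

    def __init__(self):
        self.secrets = {}  # stand-in for storage outside storage.cfg

    def on_add_hook(self, storeid, scfg, **param):
        # sensitive keys arrive via %param instead of being written to the
        # config; stash them elsewhere. Raising (Perl: die) would abort
        # the addition before the config is written.
        if "password" in param:
            self.secrets[storeid] = param["password"]

    def on_delete_hook(self, storeid, scfg):
        # tear down whatever on_add_hook set up
        self.secrets.pop(storeid, None)
```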
[pve-devel] [PATCH-SERIES qemu 0/6] async snapshot improvements
Most importantly, start using a dedicated IO thread for the state file when doing a live snapshot. Having the state file be in the iohandler context means that a blk_drain_all() call in the main thread or vCPU thread that happens while the snapshot is running will result in a deadlock. This change should also help in general to reduce load on the main thread and for it to get stuck on IO, i.e. same benefits as using a dedicated IO thread for regular drives. This is particularly interesting when the VM state storage is a network storage like NFS. With some luck, it could also help with bug #6262 [0]. The failure there happens while issuing/right after the savevm-start QMP command, so the most likely coroutine is the process_savevm_co() that was previously scheduled to the iohandler context. Likely someone polls the iohandler context and wants to enter the already scheduled coroutine leading to the abort(): > qemu_aio_coroutine_enter: Co-routine was already scheduled in > 'aio_co_schedule' With a dedicated iothread, there hopefully is no such race. Additionally, fix up some edge cases in error handling and setting the state of the snapshot operation. [0]: https://bugzilla.proxmox.com/show_bug.cgi?id=6262 Fiona Ebner (6): savevm-async: improve setting state of snapshot operation in savevm-end handler savevm-async: rename saved_vm_running to vm_needs_start savevm-async: improve runstate preservation savevm-async: cleanup error handling in savevm_start savevm-async: use dedicated iothread for state file savevm-async: treat failure to set iothread context as a hard failure migration/savevm-async.c | 119 +++ 1 file changed, 69 insertions(+), 50 deletions(-) -- 2.39.5 ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Re: [pve-devel] [PATCH ha-manager 09/15] manager: apply colocation rules when selecting service nodes
On March 25, 2025 4:12 pm, Daniel Kral wrote:
> Add a mechanism to the node selection subroutine, which enforces the
> colocation rules defined in the rules config.
>
> The algorithm manipulates the set of nodes directly, which the service
> is allowed to run on, depending on the type and strictness of the
> colocation rules, if there are any.

shouldn't this first attempt to satisfy all rules, and if that fails,
retry with just the strict ones, or something similar? see comments
below (maybe I am missing/misunderstanding something)

> This makes it depend on the prior removal of any nodes, which are
> unavailable (i.e. offline, unreachable, or weren't able to start the
> service in previous tries) or are not allowed to be run on otherwise
> (i.e. HA group node restrictions) to function correctly.
>
> Signed-off-by: Daniel Kral
> ---
>  src/PVE/HA/Manager.pm      | 203 -
>  src/test/test_failover1.pl |   4 +-
>  2 files changed, 205 insertions(+), 2 deletions(-)
>
> diff --git a/src/PVE/HA/Manager.pm b/src/PVE/HA/Manager.pm
> index 8f2ab3d..79b6555 100644
> --- a/src/PVE/HA/Manager.pm
> +++ b/src/PVE/HA/Manager.pm
> @@ -157,8 +157,201 @@ sub get_node_priority_groups {
>      return ($pri_groups, $group_members);
>  }
>
> +=head3 get_colocated_services($rules, $sid, $online_node_usage)
> +
> +Returns a hash map of all services, which are specified as being in a positive
> +or negative colocation in C<$rules> with the given service with id C<$sid>.
> +
> +Each service entry consists of the type of colocation, strictness of colocation
> +and the node the service is currently assigned to, if any, according to
> +C<$online_node_usage>.
> +
> +For example, a service C<'vm:101'> being strictly colocated together (positive)
> +with two other services C<'vm:102'> and C<'vm:103'> and loosely colocated
> +separate with another service C<'vm:104'> results in the hash map:
> +
> +    {
> +        'vm:102' => {
> +            affinity => 'together',
> +            strict => 1,
> +            node => 'node2'
> +        },
> +        'vm:103' => {
> +            affinity => 'together',
> +            strict => 1,
> +            node => 'node2'
> +        },
> +        'vm:104' => {
> +            affinity => 'separate',
> +            strict => 0,
> +            node => undef
> +        }
> +    }
> +
> +=cut
> +
> +sub get_colocated_services {
> +    my ($rules, $sid, $online_node_usage) = @_;
> +
> +    my $services = {};
> +
> +    PVE::HA::Rules::Colocation::foreach_colocation_rule($rules, sub {
> +        my ($rule) = @_;
> +
> +        for my $csid (sort keys %{$rule->{services}}) {
> +            next if $csid eq $sid;
> +
> +            $services->{$csid} = {
> +                node => $online_node_usage->get_service_node($csid),
> +                affinity => $rule->{affinity},
> +                strict => $rule->{strict},
> +            };
> +        }
> +    }, {
> +        sid => $sid,
> +    });
> +
> +    return $services;
> +}
> +
> +=head3 get_colocation_preference($rules, $sid, $online_node_usage)
> +
> +Returns a list of two hashes, where each is a hash map of the colocation
> +preference of C<$sid>, according to the colocation rules in C<$rules> and the
> +service locations in C<$online_node_usage>.
> +
> +The first hash is the positive colocation preference, where each element
> +represents properties for how much C<$sid> prefers to be on the node.
> +Currently, this is a binary C<$strict> field, which means either it should be
> +there (C<0>) or must be there (C<1>).
> +
> +The second hash is the negative colocation preference, where each element
> +represents properties for how much C<$sid> prefers not to be on the node.
> +Currently, this is a binary C<$strict> field, which means either it should not
> +be there (C<0>) or must not be there (C<1>).
> +
> +=cut
> +
> +sub get_colocation_preference {
> +    my ($rules, $sid, $online_node_usage) = @_;
> +
> +    my $services = get_colocated_services($rules, $sid, $online_node_usage);
> +
> +    my $together = {};
> +    my $separate = {};
> +
> +    for my $service (values %$services) {
> +        my $node = $service->{node};
> +
> +        next if !$node;
> +
> +        my $node_set = $service->{affinity} eq 'together' ? $together : $separate;
> +        $node_set->{$node}->{strict} = $node_set->{$node}->{strict} || $service->{strict};
> +    }
> +
> +    return ($together, $separate);
> +}
> +
> +=head3 apply_positive_colocation_rules($together, $allowed_nodes)
> +
> +Applies the positive colocation preference C<$together> on the allowed node
> +hash set C<$allowed_nodes> directly.
> +
> +Positive colocation means keeping services together on a single node, and
> +therefore minimizing the separation of services.
> +
> +The allowed node hash set C<$allowed_nodes> is expected to contain any node,
> +which is available to the service, i.e. each node is currently online, is
> +available according to other location constraints, and the service has not
> +failed running there y
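[Editor's aside] The strict part of the node-set manipulation described in the POD above condenses to a few set operations; a Python sketch for illustration (not the Perl implementation, which also tracks non-strict preferences):

```python
def apply_colocation(allowed_nodes, together, separate):
    """allowed_nodes: set of node names still available to the service.
    together/separate: node -> {'strict': bool} preference maps, as
    produced by something like get_colocation_preference() above."""
    allowed = set(allowed_nodes)
    strict_together = {n for n, p in together.items() if p["strict"]}
    if strict_together:
        # must run where a strictly-colocated partner already runs
        allowed &= strict_together
    strict_separate = {n for n, p in separate.items() if p["strict"]}
    # must not run where a strictly-anti-colocated partner runs
    return allowed - strict_separate

print(apply_colocation({"n1", "n2", "n3"},
                       {"n2": {"strict": True}},
                       {"n3": {"strict": True}}))
```

This also makes the reviewer's question above tangible: strict rules can shrink the allowed set to empty, which is where a "retry with only strict rules" fallback would come in.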
[pve-devel] applied-series: [PATCH installer 0/4] tui, auto: re-use default zfs arc calculation from run env
Am 28.02.25 um 10:43 schrieb Christoph Heiss: > As discovered during the PMG 8.2 release cycle and suggested by Thomas, unify > the ZFS ARC maximum calculation between GUI and TUI. > > In short; this series exports the calculated default value for the ZFS ARC > maximum size in the `run-env.json` file. In turn, this is read by common Rust > and can be used from there in the TUI and auto-installer. > > Diffstat > > > Christoph Heiss (4): > run env: provide default ZFS ARC maximum size value > tui: use default ZFS ARC maximum size from runtime enviroment > auto: use default ZFS ARC maximum size from runtime enviroment > gtk, tui: leave 1 GiB headroom for OS in ZFS ARC max size edit view > > Proxmox/Install/RunEnv.pm | 10 +++- > proxinstall | 3 +- > proxmox-auto-installer/src/utils.rs | 2 +- > .../resources/parse_answer/disk_match.toml| 1 + > .../parse_answer/disk_match_all.toml | 1 + > .../parse_answer/disk_match_any.toml | 1 + > .../tests/resources/parse_answer/zfs.toml | 1 + > .../zfs_raid_level_uppercase.toml | 1 + > .../tests/resources/run-env-info.json | 2 +- > proxmox-installer-common/src/options.rs | 58 +-- > proxmox-installer-common/src/setup.rs | 3 + > proxmox-tui-installer/src/views/bootdisk.rs | 48 ++- > test/zfs-arc-max.pl | 12 +--- > 13 files changed, 43 insertions(+), 100 deletions(-) > applied series, had to workaround the garbled diff from patch 3/4 due to mail length limits, thanks!, luckily one can just edit the patch in .git/rebase-apply/0001 and use git apply and git am --continue to fix this locally. ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Re: [pve-devel] [PATCH v4 pve-common 00/12] Introduce and Package PVE::Path & PVE::Filesystem
On Fri Feb 7, 2025 at 3:03 PM CET, Max Carrara wrote: > Introduce and Package PVE::Path & PVE::Filesystem - v4 > == > Bump -- would be nice to get this merged if there are no other things left to address, in order to have this available for some cleanups in the future. If there are any outstanding issues however, please lmk! :) Also, @Fiona -- do you want to add your R-b tags, since you reviewed the previous versions of this series? Thought I'd ask :P ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] superseded: [PATCH-SERIES qemu/storage/qemu-server/container/manager v7 00/37] backup provider API
Superseded-by: https://lore.proxmox.com/pve-devel/20250403123118.264974-1-w.bumil...@proxmox.com/ On Tue, Apr 01, 2025 at 07:33:58PM +0200, Fiona Ebner wrote: > v6: > https://lore.proxmox.com/pve-devel/20250331132020.105324-1-f.eb...@proxmox.com/ > v5: > https://lore.proxmox.com/pve-devel/20250321134852.103871-1-f.eb...@proxmox.com/ > v4: > https://lore.proxmox.com/pve-devel/20241114150754.374376-1-f.eb...@proxmox.com/ > v3: > https://lore.proxmox.com/pve-devel/20241107165146.125935-1-f.eb...@proxmox.com/ > > Changes in v7: > * Verify result from backup_init(). Document allowed characters, i.e. > $PVE::Storage::SAFE_CHAR_CLASS_RE + slash. > * Move backup_container_perpare() to directly above backup_container() > in the base backup provider plugin. > * Document limitation of backup_container() method not being able to > persistently modify $self, because it runs in a fork(). > * Support per-device bitmap names. Changed the QMP interface for > setup-backup-access to take an actual list of device information > rather than just a single string (device list). This also makes it > more extensible. Added a new backup_vm_available_bitmaps() method > to the backup provider API. The backup provider can check on the > server which bitmaps can be used in principle and tell us to try. > Acutal bitmap mode still will be what's passed to backup_vm(), > because the requested bitmap might not exist (anymore). This also > means the get_backup_mechanism() method will return only the > mechanism and not the bitmap name. > * Improve documentation relating to snapshot-access in context of > dirty bitmap and reference NBD dirty-bitmap metacontext > documentation. > * Dropped already applied pve-common patch for fallocate syscall. > > Changes in v6: > * Factor out some helpers in QEMU backup code. > * Import constants in LXC::Namespace module from PVE::Tools instead of > re-declaring. > * Require that source for 'qemu-img' restore mechanisms is in raw > format. 
> * Document that methods for extracting configurations are called > without restore_*_init() first. > * Rename restore_get_*_config() methods to archive_get_*_config(). > * Replace job/backup hooks with dedicated methods, > {job,backup}_{init,cleanup} and backup_container_prepare() for the > privileged part right before backup_container() is called. > * Remove backup_get_archive_name() method. The archive name is now > returned as part of the backup_init() method. That was also a bit of > a misnomer, since it was necessary for the backup provider to > remember that archive name for later. > * Supersede backup_get_task_size() method and return stats as part of > the backup_cleanup() method (in the success case). Currently, this > only includes the archive size as well, but could be extended later > to allow custom stats to include in the task notification summary. > * Drop reference to vm_backup_get_device_info() method in docs/cover > letter. This was dropped in an earlier version. > * Squash implementation of external backup in qemu-server together > with patch with untrusted check. > > Changes in v5: > * Drop already applied patches. > * Rebase on latest master. > * Set finishing state and end time in backup state in QEMU to avoid > wrongly detecting previous backup as still active leading to a > warning. > * Unbreak container restore with a bandwidth limit. > * Switch from 'block-device' mechanism to 'file-handle', providing > access to the image contents as a (virtual) file created with > 'nbdfuse'. The reason is that block devices created via qemu-nbd > will lead to a lot of ugly error messages if the device is > disconnected before partition probing is finished (which can be > delayed even until the end of the backup, apparently there is a > lock). This also avoids the need to load the 'nbd' kernel module > and not exposing the devices as host block devices is just nicer, > potentially also security-wise. 
This requires using the fallocate > syscall instead of the previous BLKDISCARD ioctl. A helper for that > is provided via the Storage/Common.pm module. > * Indicate support for QEMU feature via 'backup-access-api' flag > returned by 'query-proxmox-support' QMP command rather than > requiring a version guard, so it can land whenever. > * Update storage's ApiChangelog, split out API age+version bump into > its own patch. > * Let plugins define their own list of sensitive properties. > * Add postinst snippet to load NBD module during qemu-server upgrade. > * Check nbd/parameters directory instead of nbd/coresize to determine > whether the module is loaded. coresize was just picked on a whim, > parameters seems cleaner. > * Handle edge case with size not being a single number for character > devices when parsing/checking tar contents. > * Borg plugin: > * improve SSH options/handling > * improve handling and permissions of run and ssh directories > * mount borg arch
Re: [pve-devel] [PATCH v4 pve-storage 5/5] lvm: add lvremove helper
> Alexandre Derumier via pve-devel hat am > 11.03.2025 11:28 CET geschrieben: > > > ___ > pve-devel mailing list > pve-devel@lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel > Signed-off-by: Alexandre Derumier > --- > src/PVE/Storage/LVMPlugin.pm | 15 ++- > 1 file changed, 10 insertions(+), 5 deletions(-) > > diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm > index 2431fcd..ab3563b 100644 > --- a/src/PVE/Storage/LVMPlugin.pm > +++ b/src/PVE/Storage/LVMPlugin.pm > @@ -373,6 +373,14 @@ sub lvrename { > ); > } > > +sub lvremove { > + my ($name, $vg) = @_; > + > + my $path = $vg ? "$vg/$name" : $name; why? it's only called twice below and both have a volume group set? the call in qemu-server is forbidden - you must never call directly into plugin code like that.. > + my $cmd = ['/sbin/lvremove', '-f', $path]; > + run_command($cmd, errmsg => "lvremove '$path' error"); > +} > + > sub alloc_image { > my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size, $backing) = @_; > > @@ -453,8 +461,7 @@ sub free_image { > warn $@ if $@; > > $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub { > - my $cmd = ['/sbin/lvremove', '-f', "$vg/del-$volname"]; > - run_command($cmd, errmsg => "lvremove '$vg/del-$volname' error"); > + lvremove("del-$volname", $vg); > }); > print "successfully removed volume $volname ($vg/del-$volname)\n"; > }; > @@ -470,9 +477,7 @@ sub free_image { > run_command($cmd, errmsg => "lvrename '$vg/$volname' error"); > return $zero_out_worker; > } else { > - my $tmpvg = $scfg->{vgname}; > - $cmd = ['/sbin/lvremove', '-f', "$tmpvg/$volname"]; > - run_command($cmd, errmsg => "lvremove '$tmpvg/$volname' error"); > + lvremove($volname, $scfg->{vgname}); > } > > return undef; > -- > 2.39.5 ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Re: [pve-devel] [PATCH storage v4] fix #957 iscsi: improve iscsi_test_portal logic
On 20/03/2025 11:15, Mira Limbeck wrote: > [...] >> >>> + # check session state instead if available >>> + my $sessions = iscsi_session($cache, $target); >>> + for my $session ($sessions->@*) { >>> + next if $session->{portal} ne $portal; >>> + return iscsi_test_session($session->{session_id}); >> >> So if we have a session but it is not LOGGED_IN, we return 0. >> I know this is what I suggested in my v3 comment, but now I'm not so >> sure anymore. Couldn't it be the case that the session is broken for >> some reason, but discovery would still work? In such a case, we would >> now consider the portal offline. We could instead fall back to a TCP ping: >> >> my $state = iscsi_test_session($session->{session_id}); >> return $state if $state; >> >> Any opinions (from others)? > After talking off-list about this, we do want to fall back to the > tcp_ping if the session is not logged in. For discovery no session is > needed. So even without a login, it might still be reachable via ping > and a discovery might still be possible. Yeah, let's fall back to a tcp_ping if there is a session that is not LOGGED_IN. Sorry for the confusion with my earlier suggestion on v3. One minor thing I forgot -- the commit message might benefit from a few more details: - mention that when we test connectivity of a portal, we now first check whether a logged-in session to that portal is already present and fall back to the TCP ping only if there is no such session - acknowledge that this is not going to remove TCP pings (and thus the log messages on the target side) completely -- TCP pings are still done e.g. if there is no active session yet. And finally, the first line is missing a colon after "fix #957", but this is a really minor thing now -- we can also fix when applying, just mentioning it here for completeness. @Victor, if you send a v5 I'll do some final testing on that version and, if everything looks good, add my Tested-by/Reviewed-by trailers there. 
[1] https://pve.proxmox.com/wiki/Developer_Documentation#Commits_and_Commit_Messages ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
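The flow agreed on above -- consider the portal online if a session to it is LOGGED_IN, otherwise fall back to a plain TCP ping -- can be sketched as a small shell function. This is only an illustration: the session-state argument, host, and port below are assumptions, and the actual implementation lives in pve-storage's iSCSI plugin (iscsi_test_session() and tcp_ping()), not in shell.

```shell
# Hypothetical sketch of the portal check discussed above: a LOGGED_IN
# session means the portal is online; for any other (or no) session state,
# fall back to a plain TCP connect, the equivalent of tcp_ping(). No iSCSI
# login is attempted by the fallback.
portal_online() {
    local session_state=$1 host=$2 port=$3
    if [ "$session_state" = "LOGGED_IN" ]; then
        return 0
    fi
    # TCP ping fallback: short connect attempt via bash's /dev/tcp.
    timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null
}

# Port 1 on localhost is almost certainly closed, so the fallback fails here.
if portal_online "FREE" 127.0.0.1 1; then echo online; else echo offline; fi
```

Note that, as discussed, a broken-but-present session no longer marks the portal offline outright; the TCP fallback still gets a chance to report it reachable for discovery.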
[pve-devel] [PATCH proxmox-i18n] updated Japanese language ja.po
--- Begin Message --- --- ja.po.org 2025-03-20 22:22:38.644618693 +0900 +++ ja.po 2025-03-20 23:21:15.603578886 +0900 @@ -8,7 +8,7 @@ "Project-Id-Version: proxmox translations\n" "Report-Msgid-Bugs-To: \n" "POT-Creation-Date: Wed Feb 26 10:16:22 2025\n" -"PO-Revision-Date: 2024-11-19 15:18+0100\n" +"PO-Revision-Date: 2025-03-20 23:21+0900\n" "Last-Translator: ribbon \n" "Language-Team: Japanese \n" "Language: ja\n" @@ -63,9 +63,8 @@ msgstr "新バージョンがインストール済みだが旧バージョンが動作中;再起動が必要" #: proxmox-backup/www/window/DataStoreEdit.js:117 -#, fuzzy msgid "A relative path" -msgstr "絶対パス" +msgstr "相対パス" #: pve-manager/www/manager6/window/PCIMapEdit.js:191 msgid "" @@ -270,9 +269,8 @@ msgstr "Active Directory サーバー" #: proxmox-backup/www/tape/ChangerStatus.js:856 -#, fuzzy msgid "Activity" -msgstr "稼働中" +msgstr "アクティビティ" #: proxmox-widget-toolkit/src/Utils.js:643 #: proxmox-widget-toolkit/src/node/APTRepositories.js:162 @@ -352,33 +350,28 @@ msgstr "EFIディスク追加" #: proxmox-widget-toolkit/src/panel/WebhookEditPanel.js:73 -#, fuzzy msgid "Add Header" -msgstr "ヘッダ" +msgstr "ヘッダ追加" #: proxmox-backup/www/datastore/Content.js:1260 -#, fuzzy msgid "Add Namespace" -msgstr "名前空間" +msgstr "名前空間追加" #: proxmox-backup/www/config/SyncView.js:195 -#, fuzzy msgid "Add Pull Sync Job" -msgstr "レルム同期ジョブ" +msgstr "Pull同期ジョブ追加" #: proxmox-backup/www/config/SyncView.js:201 -#, fuzzy msgid "Add Push Sync Job" -msgstr "同期ジョブ" +msgstr "Push同期ジョブ追加" #: pmg-gui/js/PBSConfig.js:112 msgid "Add Remote" msgstr "リモートを追加" #: proxmox-widget-toolkit/src/panel/WebhookEditPanel.js:95 -#, fuzzy msgid "Add Secret" -msgstr "シークレット" +msgstr "シークレット追加" #: pmg-gui/js/Utils.js:720 msgid "Add Separator" @@ -666,9 +659,8 @@ msgstr "任意のルールに一致" #: pve-manager/www/manager6/ceph/Pool.js:313 -#, fuzzy msgid "Application" -msgstr "レプリケーション" +msgstr "アプリケーション" #: pve-manager/www/manager6/dc/OptionView.js:468 msgid "Applies to new edits" @@ -1469,7 +1461,7 @@ #: pve-manager/www/manager6/qemu/AgentIPView.js:165 msgid "Cannot get info from 
Guest AgentError: {0}" -msgstr "" +msgstr "ゲストエージェントから情報を得られませんError: {0}" #: pve-manager/www/manager6/storage/ImageView.js:52 msgid "Cannot remove disk image." @@ -2127,14 +2119,12 @@ "接続に失敗しました。ネットワークエラーまたはProxmox VEサービスが未実行?" #: proxmox-widget-toolkit/src/window/ConsentModal.js:15 -#, fuzzy msgid "Consent" -msgstr "コンソール" +msgstr "コンセント" #: proxmox-backup/www/config/NodeOptionView.js:60 -#, fuzzy msgid "Consent Text" -msgstr "Content Type" +msgstr "コンセントタイプ" #: proxmox-widget-toolkit/src/Utils.js:708 pmg-gui/js/ServerStatus.js:59 #: pve-manager/www/manager6/Utils.js:2045 @@ -2604,9 +2594,8 @@ #: proxmox-backup/www/datastore/DataStoreListSummary.js:39 #: proxmox-backup/www/datastore/Summary.js:74 -#, fuzzy msgid "Datastore is not mounted" -msgstr "データストアが無効" +msgstr "データストアが非マウント" #: proxmox-backup/www/datastore/DataStoreList.js:196 msgid "Datastores" @@ -2973,7 +2962,6 @@ msgstr "デバイスノード" #: proxmox-backup/www/window/DataStoreEdit.js:75 -#, fuzzy msgid "Device path" msgstr "デバイスパス" @@ -4377,15 +4365,16 @@ #: pve-manager/www/manager6/grid/FirewallOptions.js:155 #: pve-manager/www/manager6/grid/FirewallOptions.js:160 #: pve-manager/www/manager6/grid/FirewallOptions.js:165 -#, fuzzy msgid "Forward Policy" -msgstr "ハッシュポリシー" +msgstr "フォワードポリシー" #: pve-manager/www/manager6/grid/FirewallRules.js:243 msgid "" "Forward rules only take effect when the nftables firewall is activated in " "the host options" msgstr "" +"nftables ファイアウォールが乾すとオプションで有効な場合にのみフォワードルール\n" +"は効果があります。" #: proxmox-widget-toolkit/src/Utils.js:686 msgid "Forwarded mails to the local root user" @@ -4445,9 +4434,8 @@ #: pve-manager/www/manager6/window/GuestImport.js:692 #: pve-manager/www/manager6/window/GuestImport.js:878 -#, fuzzy msgid "From Default" -msgstr "既定" +msgstr "既定から" #: pve-manager/www/manager6/qemu/PCIEdit.js:292 #: pve-manager/www/manager6/qemu/PCIEdit.js:301 @@ -5087,13 +5075,12 @@ "If the display type uses SPICE you are able to use the default SPICE " "clipboard." 
msgstr "" -"もしディスプレイタイプがSPICEを使用している場合、デフォルトのSPICEクリップ" -"ボードを使用できます" +"もしディスプレイタイプがSPICEを使用している場合、既定のSPICEクリップボードを使\n" +"用可能" #: pmg-gui/js/Utils.js:428 -#, fuzzy msgid "Ignore header information" -msgstr "ネットワーク情報なし" +msgstr "ヘッダ情報を無視" #: pve-manager/www/manager6/Utils.js:694 #: pve-manager/www/manager6/storage/Browser.js:146 @@ -5110,7 +5097,7 @@ #: pve-manager/www/manager6/window/GuestImport.js:949 msgid "Import Guest - {0}" -msgstr "" +msgstr "ゲストのインポート - {0}" #: pve-manager/www/manager6/window/GuestImport.js:568 msgid "Import Working Storage" @@ -5268,7 +5255,7 @@ #: pve-manager/www/manager6/qemu/MachineEdit.js:153 msgid "Intel (AMD Compatible)" -msgstr "" +msgstr "Intel (AMD互換)" #: proxmox-widget-toolkit/src/form/NetworkSelector.js:102 #: pve-manager/www/manager6/grid/FirewallRules.js:279 @@ -5335,9 +5322,8 @@ msgstr "このトークンはすでに登録済み?" #
[pve-devel] applied-series: [PATCH SERIES access-control/docs/manager/perl-rs/proxmox-openid v4] Make OIDC userinfo endpoint optional
with Mira's follow-up folded in. Quoting Thomas Skinner (2025-03-24 04:37:32) > Changes since v3: > - adjust option to "query userinfo endpoint" with default enabled > > access-control: > > Thomas Skinner (1): > fix #4234: add library functions for openid optional userinfo request > > src/PVE/API2/OpenId.pm | 6 +- > src/PVE/Auth/OpenId.pm | 7 +++ > 2 files changed, 12 insertions(+), 1 deletion(-) > > > docs: > > Thomas Skinner (1): > fix #4234: add docs for openid optional userinfo request > > pveum.adoc | 8 > 1 file changed, 8 insertions(+) > > > manager: > > Thomas Skinner (1): > fix #4234: add GUI option for openid optional userinfo request > > www/manager6/dc/AuthEditOpenId.js | 10 ++ > 1 file changed, 10 insertions(+) > > > perl-rs: > > Thomas Skinner (1): > fix #4234: openid: adjust openid verification function for userinfo > option > > pve-rs/src/openid/mod.rs | 7 ++- > 1 file changed, 6 insertions(+), 1 deletion(-) > > > proxmox-openid: > > Thomas Skinner (1): > fix #4234: openid: add library functions for optional userinfo > endpoint > > proxmox-openid/src/lib.rs | 30 +- > 1 file changed, 29 insertions(+), 1 deletion(-) > > > -- > 2.39.5 > > > ___ > pve-devel mailing list > pve-devel@lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel > > ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Re: [pve-devel] [PATCH proxmox-i18n v2 2/2] make: add proxmox-datacenter-manager translations
Am 24.01.25 um 15:37 schrieb Maximiliano Sandoval: > The catalog-{lang}.mo files are generated only with strings that are > relevant to the proxmox-datacenter-manager instead of the whole > {lang}.po file. The msgmerge command will produce all strings containing > in the {lang}.po file but the ones that do not concern > proxmox-datacenter-manager.pot will be written without a source file, > then msgattrib will discard all those strings with --no-obsolete. We > throw a --no-fuzzy to further decrease the resulting catalog-{lang}.mo's > file size. > > One thing to note is that xtr, unlike our script to extract translations > will add comments, context, and plural forms. > > Signed-off-by: Maximiliano Sandoval > --- > .gitmodules| 9 + > Makefile | 33 ++--- > debian/control | 8 +++- > debian/pdm-i18n.install| 1 + > proxmox-datacenter-manager | 1 + > proxmox-yew-comp | 1 + > proxmox-yew-widget-toolkit | 1 + > 7 files changed, 50 insertions(+), 4 deletions(-) > create mode 100644 debian/pdm-i18n.install > create mode 16 proxmox-datacenter-manager > create mode 16 proxmox-yew-comp > create mode 16 proxmox-yew-widget-toolkit > > diff --git a/.gitmodules b/.gitmodules > index a81a7e3..885b6e1 100644 > --- a/.gitmodules > +++ b/.gitmodules > @@ -12,3 +12,12 @@ > [submodule "proxmox-backup"] > path = proxmox-backup > url = ../proxmox-backup > +[submodule "proxmox-datacenter-manager"] > + path = proxmox-datacenter-manager > + url = ../proxmox-datacenter-manager > +[submodule "proxmox-yew-widget-toolkit"] > + path = proxmox-yew-widget-toolkit > + url = ../proxmox-yew-widget-toolkit These are in the ui/ subdir on git.proxmox.com so above won't work; I'd just use the full git URLs here, people trying local changes can add the local path based origin manually in the submodule after all. > +[submodule "proxmox-yew-comp"] > + path = proxmox-yew-comp > + url = ../proxmox-yew-comp Same here. Your patch seems also to miss the diffs for the new, actual git submodule directories, e.g. 
something like: 8< diff --git a/proxmox-datacenter-manager b/proxmox-datacenter-manager new file mode 16 index 000..512bf55 --- /dev/null +++ b/proxmox-datacenter-manager @@ -0,0 +1 @@ +Subproject commit 512bf55a8542af7d49f3e52f70054557a9d73054 diff --git a/proxmox-yew-comp b/proxmox-yew-comp new file mode 16 index 000..9d4e9ff --- /dev/null +++ b/proxmox-yew-comp @@ -0,0 +1 @@ +Subproject commit 9d4e9ff5e942f5c3feabe5e5910f1a7dcd00ae5a diff --git a/proxmox-yew-widget-toolkit b/proxmox-yew-widget-toolkit new file mode 16 index 000..79fc490 --- /dev/null +++ b/proxmox-yew-widget-toolkit @@ -0,0 +1 @@ +Subproject commit 79fc49095cd98c51c9798b0d05de26f40567394f >8 As just the .gitmodules entries on their own are not complete, due to missing the revision the module should point too, they are basically just a config for where that revision can be fetched from. btw. instead of pinging a series unconditionally a better approach might be to use that as an opportunity to self-test the whole series, i.e. apply them locally and see if all works out; that would have made most issues of this series visible. If that succeeded, then asking someone directly for an end-to-end test and/or review before sending the second ping or so out would also be more productive IMO, as a tested-by gives maintainers at least some basic assurance that the whole thing works in principle... 
> diff --git a/Makefile b/Makefile > index 90a7453..ca98ef9 100644 > --- a/Makefile > +++ b/Makefile > @@ -78,13 +81,16 @@ submodule: > || git submodule update --init This target should be modified so that the tests include the newly added submodule paths, i.e.: 8< diff --git a/Makefile b/Makefile index ca98ef9..066b57a 100644 --- a/Makefile +++ b/Makefile @@ -77,8 +77,14 @@ $(DSC): $(BUILDDIR) lintian $(DSC) submodule: - test -f pmg-gui/Makefile -a -f proxmox-backup/Makefile -a -f pve-manager/Makefile \ - || git submodule update --init + test \ + -f pmg-gui/Makefile \ + -a -f proxmox-backup/Makefile \ + -a -f pve-manager/Makefile \ + -a -f proxmox-datacenter-manager/Makefile \ + -a -f proxmox-yew-widget-toolkit/Makefile \ + -a -f proxmox-yew-comp/Makefile \ + || git submodule update --init --recursive .PHONY: install install: $(PMG_LANG_FILES) $(PVE_LANG_FILES) $(PBS_LANG_FILES) $(PDM_LANG_FILES) >8 > do_update: > $(MAKE) update_pot > @@ -128,7 +151,11 @@ init-%.po: messages.pot > msginit -i $^ -l $^ -o $*.po --no-translator > > .INTERMEDIATE: messages.pot > -messages.pot: proxmox-widget-toolkit.pot proxmox-mailgateway.pot > pve-manager.pot proxmox-backup.pot > +messages.pot: proxmox-widget-toolkit.pot proxmox-mailgatewa
Re: [pve-devel] [PATCH storage v5 1/1] import: allow upload of guest images files into import storage
Am 01.04.25 um 10:23 schrieb Dominik Csapak: > so users can upload qcow2/raw/vmdk files directly in the ui > Pre-existing, but we put all uploads to /var/tmp/pveupload-XYZ first, right? This already makes some users unhappy with ISOs IIRC and for images we can expect it to get worse as those are usually even larger. Should we at least show a warning/hint about this in the UI? > Signed-off-by: Dominik Csapak > --- > no changes in v5 > > src/PVE/API2/Storage/Status.pm | 17 - > src/PVE/Storage.pm | 3 ++- > 2 files changed, 18 insertions(+), 2 deletions(-) > > diff --git a/src/PVE/API2/Storage/Status.pm b/src/PVE/API2/Storage/Status.pm > index c854b53..b23d283 100644 > --- a/src/PVE/API2/Storage/Status.pm > +++ b/src/PVE/API2/Storage/Status.pm The API method descriptions don't mention support for uploading/downloading images yet. > @@ -456,6 +456,7 @@ __PACKAGE__->register_method ({ > > my $path; > my $isOva = 0; > + my $imageFormat; Style nit: This is not how we usually name multi-word Perl variables (also pre-existing for isOva). > > if ($content eq 'iso') { > if ($filename !~ m![^/]+$PVE::Storage::ISO_EXT_RE_0$!) { > @@ -472,7 +473,12 @@ __PACKAGE__->register_method ({ > raise_param_exc({ filename => "invalid filename or wrong > extension" }); > } Nit: if you already extract the extension from matching above here, you don't need to match again below. 
> > - $isOva = 1; > + if ($filename =~ m/\.ova$/) { > + $isOva = 1; > + } elsif ($filename =~ > m/${PVE::Storage::UPLOAD_IMPORT_IMAGE_EXT_RE_1}$/) { > + $imageFormat = $1; > + } > + > $path = PVE::Storage::get_import_dir($cfg, $storage); > } else { > raise_param_exc({ content => "upload content type '$content' not > allowed" }); > @@ -543,6 +549,9 @@ __PACKAGE__->register_method ({ > > if ($isOva) { > assert_ova_contents($tmpfilename); > + } elsif (defined($imageFormat)) { > + # checks untrusted image > + PVE::Storage::file_size_info($tmpfilename, 10, > $imageFormat, 1); > } > }; > if (my $err = $@) { > @@ -667,6 +676,7 @@ __PACKAGE__->register_method({ > > my $path; > my $isOva = 0; > + my $imageFormat; > > if ($content eq 'iso') { > if ($filename !~ m![^/]+$PVE::Storage::ISO_EXT_RE_0$!) { > @@ -685,6 +695,8 @@ __PACKAGE__->register_method({ > Similar here regarding extension matching, then you don't even need to define a second regex. > if ($filename =~ m/\.ova$/) { > $isOva = 1; > + } elsif ($filename =~ > m/${PVE::Storage::UPLOAD_IMPORT_IMAGE_EXT_RE_1}$/) { > + $imageFormat = $1; > } > > $path = PVE::Storage::get_import_dir($cfg, $storage); > @@ -717,6 +729,9 @@ __PACKAGE__->register_method({ > > if ($isOva) { > assert_ova_contents($tmp_path); > + } elsif (defined($imageFormat)) { > + # checks untrusted image > + PVE::Storage::file_size_info($tmp_path, 10, $imageFormat, 1); > } > }; > > diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm > index c5d4ff8..09d9883 100755 > --- a/src/PVE/Storage.pm > +++ b/src/PVE/Storage.pm > @@ -116,7 +116,8 @@ our $BACKUP_EXT_RE_2 = > qr/\.(tgz|(?:tar|vma)(?:\.(${\PVE::Storage::Plugin::COMPR > > our $IMPORT_EXT_RE_1 = qr/\.(ova|ovf|qcow2|raw|vmdk)/; > > -our $UPLOAD_IMPORT_EXT_RE_1 = qr/\.(ova)/; > +our $UPLOAD_IMPORT_EXT_RE_1 = qr/\.(ova|qcow2|raw|vmdk)/; > +our $UPLOAD_IMPORT_IMAGE_EXT_RE_1 = qr/\.(qcow2|raw|vmdk)/; > > our $SAFE_CHAR_CLASS_RE = qr/[a-zA-Z0-9\-\.\+\=\_]/; > our $SAFE_CHAR_WITH_WHITESPACE_CLASS_RE = qr/[ 
a-zA-Z0-9\-\.\+\=\_]/; ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
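For context on what the "checks untrusted image" call guards against: the real validation goes through PVE::Storage::file_size_info() (which inspects the file with qemu-img), but the general idea of rejecting a mislabeled upload early can be illustrated with a cheap magic-byte probe. This is a hedged stand-in only, not the code under review, and the file names are made up.

```shell
# Toy illustration of an untrusted-upload sanity check: a file uploaded with
# a qcow2 extension should at least start with the qcow2 magic bytes
# 'Q' 'F' 'I' 0xfb. The actual check in pve-storage is stricter and uses
# qemu-img via file_size_info().
has_qcow2_magic() {
    [ "$(head -c 4 "$1" | od -An -tx1 | tr -d ' \n')" = "514649fb" ]
}

tmpimg=$(mktemp)
printf 'QFI\373' > "$tmpimg"   # fake header only, not a valid image
if has_qcow2_magic "$tmpimg"; then echo "magic ok"; else echo "magic missing"; fi
```

A probe like this catches the crudest mismatches (e.g. a renamed ISO), while the qemu-img based check additionally validates the header structure and reported size of the untrusted file.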
Re: [pve-devel] [PATCH pve-manager 6/7] fabrics: Add main FabricView
On 02.04.2025 11:50, Christoph Heiss wrote: Some comments inline - did the review mostly in tandem with testing the UI, to get a better context. On Fri Mar 28, 2025 at 6:13 PM CET, Gabriel Goller wrote: [..] diff --git a/www/manager6/dc/Config.js b/www/manager6/dc/Config.js index 74728c8320e9..68f7be8d6042 100644 --- a/www/manager6/dc/Config.js +++ b/www/manager6/dc/Config.js @@ -229,6 +229,14 @@ Ext.define('PVE.dc.Config', { hidden: true, iconCls: 'fa fa-shield', itemId: 'sdnfirewall', + }, + { + xtype: 'pveSDNFabricView', + groups: ['sdn'], + title: gettext('Fabrics'), Do we really want to translate 'Fabrics'? Or rather just keep the name always at 'Fabrics' as the name of the concept/feature itself? Since it is a established term, after all. I'd keep it, IMHO. (But we seem to translate everything else here in the file, so really not sure.) Hmm I mean it's probably the same in every language, but we also seem to translate everything, even "VNET Firewall" and "IPAM", so IMO I'd leave it just for the sake of consistency. + hidden: true, + iconCls: 'fa fa-road', + itemId: 'sdnfabrics', }); } diff --git a/www/manager6/sdn/FabricsView.js b/www/manager6/sdn/FabricsView.js new file mode 100644 index ..0ef12defb1a8 --- /dev/null +++ b/www/manager6/sdn/FabricsView.js [..] +Ext.define('PVE.sdn.Fabric.View', { [..] + { + text: gettext('Protocol'), + dataIndex: 'protocol', + width: 100, + renderer: function(value, metaData, rec) { + if (rec.data.type !== 'fabric') { + return ""; + } + + return PVE.Utils.render_sdn_pending(rec, value, 'protocol'); + }, A mapping for the internal protocol name to a user-visible would be nice here, i.e. 'ospf' -> 'OSPF' and 'openfabric' -> 'OpenFabric'. Not much difference currently, but would look a bit better in the UI :) Oh, that's a nice one, added this. [snip] + handler: 'addActionTreeColumn', + getTip: (_v, _m, _rec) => gettext('Add'), nit: "Add Node" would be better here for node rows, if that's possible. I agree. 
+ getClass: (_v, _m, { data }) => { + if (data.type === 'fabric') { + return 'fa fa-plus-circle'; + } + + return 'pmx-hidden'; + }, + isActionDisabled: (_v, _r, _c, _i, { data }) => data.type !== 'fabric', + }, + { + tooltip: gettext('Edit'), ^ same as above Here it's a bit different, because the Edit button is on Fabrics and Nodes, so I'd have to add the getTip callback and format the type in it. IMO this is not worth it. + handler: 'editAction', + getClass: (_v, _m, { data }) => { + // the fabric type (openfabric, ospf, etc.) cannot be edited + if (data.type) { + return 'fa fa-pencil fa-fw'; + } + + return 'pmx-hidden'; + }, + isActionDisabled: (_v, _r, _c, _i, { data }) => !data.type, + }, + { + tooltip: gettext('Delete'), ^ and same here as well Same as my answer above. + handler: 'deleteAction', + getClass: (_v, _m, { data }) => { + // the fabric type (openfabric, ospf, etc.) cannot be deleted + if (data.type) { + return 'fa critical fa-trash-o'; + } + + return 'pmx-hidden'; + }, + isActionDisabled: (_v, _r, _c, _i, { data }) => !data.type, + }, + ], + }, + { + header: gettext('Interfaces'), + width: 100, + dataIndex: 'interface', + renderer: function(value, metaData, rec) { + let interfaces = rec.data.pending?.interface || rec.data.interface || []; nit: could be const (see [0] tho regarding rules for const/let) [0] https://pve.proxmox.com/wiki/Javascript_Style_Guide#Variables Yep. [snip] +initComponent: function() { + let me = this; + + + let add_button = new Proxmox.button.Button({ New variables should generally be camelCase [0]. [1] https://pve.proxmox.com/wiki/Javascript_Style_Guide#Casing Fixed as well. + text: gettext('Add Node'), + handler: 'addActionTbar', + disabled: true, + }); Since there is also an "add node" button in the "Action" column for each fabric, do we need a separat
[pve-devel] [PATCH zfsonlinux 3/8] d/copyright: remove files deleted by upstream
Signed-off-by: Shengqi Chen (cherry picked from commit d9b0863953ff843f103bb45e33c410d4e0db5c24) Signed-off-by: Stoiko Ivanov --- debian/copyright | 7 --- 1 file changed, 7 deletions(-) diff --git a/debian/copyright b/debian/copyright index 4ac98c266..6beb70757 100644 --- a/debian/copyright +++ b/debian/copyright @@ -709,13 +709,6 @@ Copyright: 2015, Chunwei Chen. 2005, 2010, Oracle and/or its affiliates. License: CDDL-1.0 -Files: module/os/freebsd/zfs/zfs_znode.c - module/os/linux/zfs/zfs_znode.c -Copyright: 2013, 2015 Delphix. - 2007, Jeremy Teo - 2005, 2010, Oracle and/or its affiliates. -License: CDDL-1.0 - Files: module/zfs/zio_compress.c Copyright: 2013, Saso Kiselkov. 2013, Delphix. -- 2.39.5 ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH container v5 29/32] restore tar archive: check potentially untrusted archive
'tar' itself already protects against '..' in component names and strips absolute member names when extracting (if not used with the --absolute-names option) and in general seems sane for extracting. Additionally, the extraction already happens in the user namespace associated to the container. So for now, start out with some basic sanity checks. The helper can still be extended with more checks. Checks: * list files in archive - will already catch many corrupted/bogus archives. * check that there are at least 10 members - should also catch archives not actually containing a container root filesystem or structural issues early. * check that /sbin directory or link exists in archive - ideally the check would be for /sbin/init, but this cannot be done efficiently before extraction, because it would require to keep track of the whole archive to be able to follow symlinks. * abort if there is a multi-volume member in the archive - cheap and is never expected. Checks that were considered, but not (yet) added: * abort when a file has unrealistically large size - while this could help to detect certain kinds of bogus archives, there can be valid use cases for extremely large sparse files, so it's not clear what a good limit would be (1 EiB maybe?). Also, an attacker could just adapt to such a limit creating multiple files and the actual extraction is already limited by the size of the allocated container volume. * check that /sbin/init exists after extracting - cannot be done efficiently before extraction, because it would require to keep track of the whole archive to be able to follow symlinks. During setup there already is detection of /etc/os-release, so issues with the structure will already be noticed. Adding a hard fail for untrusted archives would require either passing that information to the setup phase or extracting the protected_call method from there into a helper. 
* adding 'restrict' to the (common) tar flags - the tar manual (not the man page) documents: "Disable use of some potentially harmful 'tar' options. Currently this option disables shell invocation from multi-volume menu.". The flag was introduced in 2005 and this is still the only thing it is used for. Trying to restore a multi-volume archive already fails without giving multiple '--file' arguments and '--multi-volume', so don't bother adding the flag. * check format of tar file - would require yet another invocation of the decompressor and there seems to be no built-in way to just display the format with 'tar'. The 'file' program could be used, but it seems to not make a distinction between old GNU and GNU and old POSIX and POSIX formats, with the old ones being candidates to prohibit. So that would leave just detecting the old 'v7' format. Suggested-by: Fabian Grünbichler Signed-off-by: Fiona Ebner --- Changes in v5: * Print lines that cannot be parsed. * Handle edge case where size is not a single number (e.g. for character devices). src/PVE/LXC/Create.pm | 75 --- 1 file changed, 71 insertions(+), 4 deletions(-) diff --git a/src/PVE/LXC/Create.pm b/src/PVE/LXC/Create.pm index 1d0474d..c762d54 100644 --- a/src/PVE/LXC/Create.pm +++ b/src/PVE/LXC/Create.pm @@ -99,12 +99,73 @@ my sub tar_compression_option { } } +# Basic checks trying to detect issues with a potentially untrusted or bogus tar archive. +# Just listing the files is already a good check against corruption. +# 'tar' itself already protects against '..' in component names and strips absolute member names +# when extracting, so no need to check for those here. +my sub check_tar_archive { +my ($archive) = @_; + +print "checking archive..\n"; + +# To resolve links to get to 'sbin/init' would mean keeping track of everything in the archive, +# because the target might be ordered first. Check only that 'sbin' exists here. 
+my $found_sbin; + +# Just to detect bogus archives, any valid container filesystem should have more than this. +my $required_members = 10; +my $member_count = 0; + +my $check_file_list = sub { + my ($line) = @_; + + $member_count++; + + # Not always just a single number, e.g. for character devices. + my $size_re = qr/\d+(?:,\d+)?/; + + # The date is in ISO 8601 format. The last part contains the potentially quoted file name, + # potentially followed by some additional info (e.g. where a link points to). + my ($type, $perms, $uid, $gid, $size, $date, $time, $file_info) = + $line =~ m!^([a-zA-Z\-])(\S+)\s+(\d+)/(\d+)\s+($size_re)\s+(\S+)\s+(\S+)\s+(.*)$!; + if (!defined($type)) { + print "check tar: unable to parse line: $line\n"; + return; + } + + die "found multi-volume member in archive\n" if $type eq 'M'; + + if (!$found_sbin && ( + ($file_info =~ m!^(?:\./)?sbin/$! && $type eq 'd') +
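The Perl helper above parses tar's verbose listing; the same class of pre-extraction sanity checks can be sketched with plain tar commands. The member-count threshold and the sbin check mirror the patch, but this is an illustration against a synthetic archive, not the code Proxmox runs, and all paths here are throwaway.

```shell
# Hedged sketch of the pre-extraction checks: listing the archive already
# catches corruption, then require a minimum member count and an sbin entry,
# as check_tar_archive() does above.
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/root/sbin" "$tmp/root/etc" "$tmp/root/usr/bin"
for f in init hostname sh true false ls cat echo env mount; do
    : > "$tmp/root/sbin/$f"    # empty placeholder files for the demo
done
tar -C "$tmp/root" -cf "$tmp/rootfs.tar" .

listing=$(tar -tf "$tmp/rootfs.tar")   # fails loudly on a corrupt archive
count=$(printf '%s\n' "$listing" | wc -l)
[ "$count" -ge 10 ] || { echo "suspiciously small archive"; exit 1; }
printf '%s\n' "$listing" | grep -Eq '^(\./)?sbin/$' \
    && echo "archive looks like a container rootfs ($count members)"
```

As the commit message notes, resolving whether sbin/init exists behind symlinks would require tracking the whole archive, which is why both the patch and this sketch settle for checking that an sbin directory entry is present.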
[pve-devel] [PATCH qemu-server v5 21/32] backup: future-proof checks for QEMU feature support
The features returned by the 'query-proxmox-support' QMP command are booleans, so just checking for definedness is not enough in principle. In practice, a feature is currently always true if defined. Still, fix the checks, should the need to disable support for a feature ever arise in the future and to avoid propagating the pattern further. Signed-off-by: Fiona Ebner --- New in v5. PVE/VZDump/QemuServer.pm | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/PVE/VZDump/QemuServer.pm b/PVE/VZDump/QemuServer.pm index 0b22b34d..c2d64192 100644 --- a/PVE/VZDump/QemuServer.pm +++ b/PVE/VZDump/QemuServer.pm @@ -639,7 +639,7 @@ my sub check_and_prepare_fleecing { my $use_fleecing = $fleecing_opts && $fleecing_opts->{enabled} && !$is_template; -if ($use_fleecing && !defined($qemu_support->{'backup-fleecing'})) { +if ($use_fleecing && !$qemu_support->{'backup-fleecing'}) { $self->log( 'warn', "running QEMU version does not support backup fleecing - continuing without", @@ -739,7 +739,7 @@ sub archive_pbs { # pve-qemu supports it since 5.2.0-1 (PVE 6.4), so safe to die since PVE 8 die "master key configured but running QEMU version does not support master keys\n" - if !defined($qemu_support->{'pbs-masterkey'}) && defined($master_keyfile); + if !$qemu_support->{'pbs-masterkey'} && defined($master_keyfile); $attach_tpmstate_drive->($self, $task, $vmid); -- 2.39.5 ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH v8 qemu-server 10/11] backup: support 'missing-recreated' bitmap action
From: Fiona Ebner A new 'missing-recreated' action was added on the QEMU side. Signed-off-by: Fiona Ebner --- No changes to v7. PVE/VZDump/QemuServer.pm | 3 +++ 1 file changed, 3 insertions(+) diff --git a/PVE/VZDump/QemuServer.pm b/PVE/VZDump/QemuServer.pm index 676dad2..894e337 100644 --- a/PVE/VZDump/QemuServer.pm +++ b/PVE/VZDump/QemuServer.pm @@ -316,6 +316,8 @@ my $bitmap_action_to_human = sub { } } elsif ($action eq "invalid") { return "existing bitmap was invalid and has been cleared"; +} elsif ($action eq "missing-recreated") { + return "expected bitmap was missing and has been recreated"; } else { return "unknown"; } @@ -1372,6 +1374,7 @@ my sub backup_access_to_volume_info { 'new' => 'new', 'used' => 'reuse', 'invalid' => 'new', + 'missing-recreated' => 'new', }; my $volumes = {}; -- 2.39.5 ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH v8 qemu-server 08/11] backup: implement restore for external providers
From: Fiona Ebner First, the provider is asked about what restore mechanism to use. Currently, only 'qemu-img' is possible. Then the configuration files are restored, the provider gives information about volumes contained in the backup and finally the volumes are restored via 'qemu-img convert'. The code for the restore_external_archive() function was copied and adapted from the restore_proxmox_backup_archive() function. Together with restore_vma_archive() it seems sensible to extract the common parts and use a dedicated module for restore code. The parse_restore_archive() helper was renamed, because it's not just parsing. While currently, the format for the source can only be raw, do an untrusted check for the source for future-proofing. Still serves as a basic sanity check currently. Signed-off-by: Fiona Ebner [WB: fix 'bwlimit' typo] Signed-off-by: Wolfgang Bumiller --- Changes in v8: described in the trailers above ^ PVE/API2/Qemu.pm | 30 +- PVE/QemuServer.pm | 149 ++ 2 files changed, 176 insertions(+), 3 deletions(-) diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm index 156b1c7..6c7c1d0 100644 --- a/PVE/API2/Qemu.pm +++ b/PVE/API2/Qemu.pm @@ -927,7 +927,7 @@ __PACKAGE__->register_method({ return $res; }}); -my $parse_restore_archive = sub { +my $classify_restore_archive = sub { my ($storecfg, $archive) = @_; my ($archive_storeid, $archive_volname) = PVE::Storage::parse_volume_id($archive, 1); @@ -941,6 +941,22 @@ my $parse_restore_archive = sub { $res->{type} = 'pbs'; return $res; } + if (PVE::Storage::storage_has_feature($storecfg, $archive_storeid, 'backup-provider')) { + my $log_function = sub { + my ($log_level, $message) = @_; + my $prefix = $log_level eq 'err' ? 
'ERROR' : uc($log_level); + print "$prefix: $message\n"; + }; + my $backup_provider = PVE::Storage::new_backup_provider( + $storecfg, + $archive_storeid, + $log_function, + ); + + $res->{type} = 'external'; + $res->{'backup-provider'} = $backup_provider; + return $res; + } } my $path = PVE::Storage::abs_filesystem_path($storecfg, $archive); $res->{type} = 'file'; @@ -1101,7 +1117,7 @@ __PACKAGE__->register_method({ 'backup', ); - $archive = $parse_restore_archive->($storecfg, $archive); + $archive = $classify_restore_archive->($storecfg, $archive); } } @@ -1160,7 +1176,15 @@ __PACKAGE__->register_method({ PVE::QemuServer::check_restore_permissions($rpcenv, $authuser, $merged); } } - if ($archive->{type} eq 'file' || $archive->{type} eq 'pipe') { + if (my $backup_provider = $archive->{'backup-provider'}) { + PVE::QemuServer::restore_external_archive( + $backup_provider, + $archive->{volid}, + $vmid, + $authuser, + $restore_options, + ); + } elsif ($archive->{type} eq 'file' || $archive->{type} eq 'pipe') { die "live-restore is only compatible with backup images from a Proxmox Backup Server\n" if $live_restore; PVE::QemuServer::restore_file_archive($archive->{path} // '-', $vmid, $authuser, $restore_options); diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm index 93f985b..491da44 100644 --- a/PVE/QemuServer.pm +++ b/PVE/QemuServer.pm @@ -7175,6 +7175,155 @@ sub restore_proxmox_backup_archive { } } +sub restore_external_archive { +my ($backup_provider, $archive, $vmid, $user, $options) = @_; + +die "live restore from backup provider is not implemented\n" if $options->{live}; + +my $storecfg = PVE::Storage::config(); + +my ($storeid, $volname) = PVE::Storage::parse_volume_id($archive); +my $scfg = PVE::Storage::storage_config($storecfg, $storeid); + +my $tmpdir = "/run/qemu-server/vzdumptmp$$"; +rmtree($tmpdir); +mkpath($tmpdir) or die "unable to create $tmpdir\n"; + +my $conffile = PVE::QemuConfig->config_file($vmid); +# disable interrupts (always do cleanups) 
+local $SIG{INT} = + local $SIG{TERM} = + local $SIG{QUIT} = + local $SIG{HUP} = sub { print STDERR "got interrupt - ignored\n"; }; + +# Note: $oldconf is undef if VM does not exists +my $cfs_path = PVE::QemuConfig->cfs_config_path($vmid); +my $oldconf = PVE::Cluster::cfs_read_file($cfs_path); +my $new_conf_raw = ''; + +my $rpcenv = PVE::RPCEnvironment::get(); +my $devinfo = {}; # info about drives included in backup +my $virtdev_hash = {}; # info about allocated drives + +eval {
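The renamed `classify_restore_archive()` helper above decides which restore path an archive takes: PBS archives, archives on a storage with the new 'backup-provider' feature, and plain files each go through a different code path. Reduced to its dispatch logic (a Python sketch with simplified inputs, not the Perl helper):

```python
def classify_restore_archive(storage_type, has_backup_provider):
    """Pick the restore path for an archive, in the order the Perl code
    checks: PBS first, then external backup providers, then plain files."""
    if storage_type == "pbs":
        return "pbs"
    if has_backup_provider:
        return "external"  # handled by restore_external_archive()
    return "file"

assert classify_restore_archive("pbs", False) == "pbs"
assert classify_restore_archive("dir", True) == "external"
assert classify_restore_archive("dir", False) == "file"
```

For the "external" case, the cover text above describes the flow: ask the provider for its restore mechanism (currently only 'qemu-img'), restore the configuration files, then restore each volume via 'qemu-img convert'.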
[pve-devel] applied: [PATCH novnc] backport correct fix for extra keys
Am 25.03.25 um 08:53 schrieb Dominik Csapak: > Instead of our own fix for the extra keys, backport the proper one from > upstream. As mentioned in the commit message, this also fixes > the clipboard textarea margin. > > We can drop this patch then when we update to a new upstream release > with this included. > > Signed-off-by: Dominik Csapak > --- > ...-Fix-appearance-of-extra-key-buttons.patch | 60 +++ > .../patches/0020-fix-broken-extra-keys.patch | 32 -- > debian/patches/series | 2 +- > 3 files changed, 61 insertions(+), 33 deletions(-) > create mode 100644 > debian/patches/0020-Fix-appearance-of-extra-key-buttons.patch > delete mode 100644 debian/patches/0020-fix-broken-extra-keys.patch > > applied, thanks! ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH storage/manager v5] allow down/upload & import of images in the web UI
most of the building blocks are already there: * we can have image files in an import storage * we can import image files via the api from such a storage this series fills in the missing bits & pieces: * allow uploading/downloading image files into an import storage via the webgui * adding the possibility to select such a file when creating a vm/disk One minor "ugliness": when switching between import on/off, the target storage selector resets. This is normally intended by the component, since it's most often only disabled when it's still visible, except here in this case. If this is a blocker, I can of course add an option to the selector to not do this here, but IMHO this is a rather odd use case anyway, so I opted for not handling that explicitly. changes from v4: * add an 'extension alias' mechanism for the upload window, to have separate extensions to validate for the file selector and the file name field (thanks for noticing @Filip) changes from v3: * allow raw (.raw, .img) and vmdk files too * automatically append '.raw' for '.img' files changes from v2: * fix correctly unset 'import-from' in wizard when going to summary, back to disk, unselecting import, then going forward to the summary again * fixed an issue with the file selector being mistakenly disabled changes from v1: * fixed an issue where the file selector would be hidden but invalid pve-storage: Dominik Csapak (1): import: allow upload of guest images files into import storage src/PVE/API2/Storage/Status.pm | 17 - src/PVE/Storage.pm | 3 ++- 2 files changed, 18 insertions(+), 2 deletions(-) pve-manager: Dominik Csapak (3): ui: storage content: allow upload of guest images for import type ui: form: file selector: allow optional filter ui: qemu hd edit: allow importing a disk from the import storage www/manager6/form/FileSelector.js | 10 +++ www/manager6/qemu/HDEdit.js | 71 - www/manager6/window/DownloadUrlToStorage.js | 4 ++ www/manager6/window/UploadToStorage.js | 24 +-- 4 files changed, 104 insertions(+), 5 deletions(-) -- 2.39.5 ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
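The v4→v5 change above introduces an 'extension alias' so the file selector and the file name field can validate different extensions, with '.raw' appended for '.img' uploads. A hedged Python sketch of that normalization (the exact extension list and rename rule are assumptions based on the changelog, not the shipped ExtJS code):

```python
def normalize_import_filename(filename):
    """Validate an uploaded image name and normalize '.img' uploads.
    Assumed extension list; '.qcow2' is presumed already supported
    before v3 added raw/vmdk."""
    allowed = (".qcow2", ".raw", ".img", ".vmdk")
    if not filename.endswith(allowed):
        raise ValueError(f"unsupported image extension: {filename}")
    if filename.endswith(".img"):
        # the changelog says '.raw' is *appended* for '.img' files
        return filename + ".raw"
    return filename

assert normalize_import_filename("disk.img") == "disk.img.raw"
assert normalize_import_filename("disk.qcow2") == "disk.qcow2"
```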
Re: [pve-devel] [PATCH proxmox-ve-rs 16/17] ve-config: add section-config to frr types conversion
On Fri Mar 28, 2025 at 6:13 PM CET, Gabriel Goller wrote: > [..] > Signed-off-by: Gabriel Goller > --- > proxmox-ve-config/Cargo.toml| 7 + > proxmox-ve-config/debian/control| 37 ++- > proxmox-ve-config/src/sdn/fabric/mod.rs | 416 > 3 files changed, 454 insertions(+), 6 deletions(-) > > diff --git a/proxmox-ve-config/Cargo.toml b/proxmox-ve-config/Cargo.toml > index 3f7639efa153..231e237fb82f 100644 > --- a/proxmox-ve-config/Cargo.toml > +++ b/proxmox-ve-config/Cargo.toml > @@ -24,3 +24,10 @@ proxmox-serde = { version = "0.1.2" } > proxmox-sys = "0.6.4" > proxmox-sortable-macro = "0.1.3" > proxmox-network-types = { version = "0.1", path = > "../proxmox-network-types/" } > +proxmox-frr = { version = "0.1", path = "../proxmox-frr/", optional = true } > + > +[features] > +frr = ["dep:proxmox-frr" ] > + > +[dev-dependencies] > +similar-asserts = "1" The (new) dependency on librust-similar-asserts-dev is missing in debian/control, just noticed while trying to build the package :^) Didn't review further (yet). ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Re: [pve-devel] [PATCH v1 pve-storage 8/8] pluginbase: document import and export methods
Am 01.04.25 um 10:40 schrieb Fabian Grünbichler: > On March 26, 2025 3:20 pm, Max Carrara wrote: >> +=head3 $plugin->volume_export(\%scfg, $storeid, $fh, $volname, $format [, >> $snapshot, $base_snapshot, $with_snapshots]) >> + >> +=head3 $plugin->volume_export(...) >> + >> +Exports a volume or a volume's C<$snapshot> into a file handle C<$fh> as a >> +stream with a desired export C<$format>. See L for all >> import/export >> +formats. >> + >> +Optionally, C<$snapshot> (if provided) may have a C<$base_snapshot>, and >> +C<$with_snapshots> states whether the volume has snapshots overall. > > this is incomplete/wrong > > $with_snapshots means the export should include snapshots, not whether > the volume has snapshots.. > $snapshot means "this is the snapshot to export" if only exporting the > snapshot ($with_snapshots == 0), or "this is the *last* snapshot export" > if exporting a stream of snapshots ($with_snapshots == 1) > $base_snapshot means "this is the start of the snapshot range to export" > (i.e., do an incremental export on top of this base) > > this is mostly only relevant for zfs at the moment (other storages can > either export a volume including its snapshots, or just the volume, but > no complicated incremental streams of snapshots), but will change once > we implement replication export/import for other storages.. There are already ideas floating around to change this and add proper format negotiation. We'll also need to ask the target what it supports like for remote migration, the sending side cannot really know that. And as part of that change from the confusing set of snapshot-related parameters to having an actual "what kind of transport" enum: 1. current data (of an image or a snapshot) 2. full sync with all snapshots 3. incremental stream No details worked out yet though and not really relevant for documenting the status quo. 
>> +Optionally, C<$snapshot> (if provided) may have a C<$base_snapshot>, and >> +C<$with_snapshots> states whether the volume has snapshots overall. >> Renaming an >> +existing volume may also optionally be allowed via C<$allow_rename> > > see above, but here $snapshot is mainly there to have the same > arguments for volume_import_formats so a plugin can have a single > implementation, not because it is used anywhere IIRC.. The LVMPlugin.pm and Plugin.pm do have different implementations of the volume_{export,import}_formats() methods, precisely because they need to ignore the $snapshot parameter for the import case. Yes, we do pass the same arguments, but it can only matter for the "incremental stream" scenario. Otherwise, the parameter has nothing to do with the import side. Here it would also be much nicer to have an actual "what kind of transport" parameter. ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
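The parameter semantics Fabian spells out — and the "what kind of transport" enum Fiona proposes — can be made concrete in a small classifier. A Python sketch (illustration of the described semantics, not plugin code; treating a base snapshot without a snapshot stream as an error is an assumption):

```python
def classify_export(snapshot=None, base_snapshot=None, with_snapshots=False):
    """Interpret volume_export()'s snapshot-related parameters:
      - base_snapshot + with_snapshots -> incremental stream on top of base
      - with_snapshots                 -> full stream up to $snapshot / current
      - snapshot only                  -> export just that snapshot's state
      - none                           -> export the current volume data
    """
    if base_snapshot is not None:
        if not with_snapshots:
            raise ValueError("incremental export requires a snapshot stream")
        return "incremental"
    if with_snapshots:
        return "full-with-snapshots"
    if snapshot is not None:
        return "snapshot-only"
    return "current"

assert classify_export() == "current"
assert classify_export(snapshot="s1") == "snapshot-only"
assert classify_export(snapshot="s2", with_snapshots=True) == "full-with-snapshots"
assert classify_export(snapshot="s2", base_snapshot="s1", with_snapshots=True) == "incremental"
```

As the thread notes, incremental streams are currently mostly a ZFS concern; the planned transport enum would replace this implicit combination of flags with an explicit mode.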
[pve-devel] applied-series: [PATCH manager 1/4] node: options: add config option for ballooning target
Am 12.03.25 um 16:15 schrieb Friedrich Weber: > The option is intended for the percentage of host memory that pvestatd > uses as the target for automatic memory allocation (ballooning). > > Signed-off-by: Friedrich Weber > --- > PVE/NodeConfig.pm | 8 > 1 file changed, 8 insertions(+) > > applied series, thanks! Simplified the renderes of node config options in a follow-up. As we have that `value === undefined ? fallback : value` pattern quite often it might be good to add a common helper for that, and there I'd be fine with unconditionally using htmlEncode. ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
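The helper Thomas suggests — collapsing the recurring `value === undefined ? fallback : value` pattern with unconditional `htmlEncode` — is ExtJS JavaScript in the actual code base. As a language-neutral illustration only, a Python analog of such a renderer (using `None` to stand in for JavaScript's `undefined`):

```python
import html

def render_with_fallback(value, fallback):
    """Sketch of the suggested common renderer helper: substitute the
    fallback when the value is missing and always HTML-encode the result."""
    if value is None:  # stands in for JavaScript's `undefined`
        value = fallback
    return html.escape(str(value))

assert render_with_fallback(None, "80%") == "80%"
assert render_with_fallback("<script>", "x") == "&lt;script&gt;"
```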
[pve-devel] applied-series: [PATCH manager/storage 0/2] fix #3716: allow downloading iso/vztmpl/ova via https in proxied environments
Am 26.03.25 um 11:51 schrieb Friedrich Weber: > A user in enterprise support reported (and users also reported elsewhere [1] > [2]) that ISO downloads via https currently do not work in environments using > a > proxy configured via the datacenter option `http_proxy`, if the connection to > the ISO repository needs to go via the proxy. Proxied ISO downloads via http > work. OVA and VZ template downloads are also affected. > > The reason is that > > - when querying the metadata via LWP, the proxy is only set for the `http` > scheme, not `https` > > - when spawning wget to download the ISO, only the `http_proxy` environment > variable is set, not `https_proxy`, so wget will not use a proxy for the > https > connection > > Hence, neither operation uses the proxy for https. If the node cannot reach > the > destination without the proxy, both operations time out. > > Fix this by > > - patch 1: setting the `http_proxy` from the datacenter config also for the > https scheme when querying the metadata via LWP > > - patch 2: passing the `https_proxy` environment variable to the wget command, > setting it to the `http_proxy` from the datacenter config > > Tested by running squid in a container, and setting up the firewall to drop > outgoing traffic from the PVE to everything but the proxy. Running > tcpconnect-bpfcc from bpfcc-tools helps for tracing the destination of the > http/https connections. > > Maximiliano and I discussed this and looked into earlier iterations on this. > > When a similar series was initially sent in 2021 [3], Thomas raised the > concern > [4] that our proxy settings should be more fine-grained and allow the user to > differentiate between resources that should be proxied and that should not be > proxied. For example, ISO repositories may be external (and thus may only be > reachable via proxying) or internal (and thus may not be reachable via the > proxy). Same for ACME endpoints. 
One idea was to group http requests issued by > our stack into categories (such as `base`, `acme`, `template`) and allow the > proxy setting to only apply to certain categories. I agree that something like > this sounds like a useful feature, and one user also requested [5] something > along these lines. > > However, Maximiliano and I would argue this concern is orthogonal to the issue > fixed by this patch. If a user has configured `http_proxy`, having the ISO > download work via http and fail via https is inconsistent and thus confusing > (see also Dominik's post [6] from back then). It might even nudge users into > using http instead (which can still give the same integrity guarantees if they > retrieve the checksum via https and compare them, but this is easy to get > wrong). We'd propose we use the `http_proxy` for both https and https for now, > and can still look into the categorization feature #5420 later. I agree with that it being orthogonal, so while it's not ideal it still improves the status quo. > > Other places in our stack also use the `http_proxy` datacenter option for > https > connections, e.g. the ones that use proxmox_http::HttpClient with ProxyConfig > such as with the notification system's webhook endpoint. > > One argument against this patch is that it breaks backwards compatibility: > Existing setups with `http_proxy` and an *internal* ISO repository from which > they download via https will break. If this is a concern, I'd suggest we wait > for PVE 9 to apply this. It is a concern, but not sure if delaying this to a major release is of any help if there is then still no way to workaround that without disabling and then re-setting the proxy setting before/after such an operation. > What do you think? 
I've seen published papers that were less elaborate and less convincing ;-) > > FTR, there is some overlap with a patch series by Hannes Laimer [7] but that > one only concerned the `query-url-metadata` and the `apl_download` endpoints > (for downloading appliance templates), not the ISO download. > > [1] https://bugzilla.proxmox.com/show_bug.cgi?id=3716 > [2] https://bugzilla.proxmox.com/show_bug.cgi?id=5420#c2 > [3] > https://lore.proxmox.com/pve-devel/20211109141359.990235-1-o.bek...@proxmox.com/ > [4] > https://lore.proxmox.com/pve-devel/42391428-bd80-2d55-5cb6-7c8ecd97a...@proxmox.com/ > [5] https://bugzilla.proxmox.com/show_bug.cgi?id=5420#c0 > [6] > https://lore.proxmox.com/pve-devel/a03631a3-fe78-7f6f-137d-7ee6fdf8f...@proxmox.com/ > [7] > https://git.proxmox.com/?p=proxmox.git;a=blob;f=proxmox-notify/src/endpoints/webhook.rs;h=34dbac5488;hb=7abd2da759d#l266 > [8] > https://lore.proxmox.com/pve-devel/20240308123535.1500-1-h.lai...@proxmox.com/ > > Co-authored-by: Maximiliano Sandoval > > storage: > > Friedrich Weber (1): > fix #3716: api: download from url: use proxy option for https > > src/PVE/API2/Storage/Status.pm | 1 + > 1 file changed, 1 insertion(+) > > > manager: > > Friedrich Weber (1): > fix #3716: api: nodes: query metad
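The wget-side half of the fix described above boils down to environment handling: the datacenter `http_proxy` option must be exported as *both* `http_proxy` and `https_proxy`, since wget only proxies https connections via the latter. A hedged Python sketch of that logic (illustration of the described behaviour, not the Perl code in pve-manager):

```python
def wget_proxy_env(datacenter_http_proxy):
    """Build the proxy environment for a spawned wget, passing the single
    configured proxy for both schemes so https downloads are proxied too."""
    env = {}
    if datacenter_http_proxy:
        env["http_proxy"] = datacenter_http_proxy
        env["https_proxy"] = datacenter_http_proxy
    return env

env = wget_proxy_env("http://proxy.example:3128")
assert env["https_proxy"] == "http://proxy.example:3128"
assert wget_proxy_env(None) == {}
```

The LWP-side half of the fix is analogous: register the same proxy for the `https` scheme, not only `http`, when querying URL metadata.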
[pve-devel] [POC storage v5 13/32] add backup provider example
The example uses a simple directory structure to save the backups, grouped by guest ID. VM backups are saved as configuration files and qcow2 images, with backing files when doing incremental backups. Container backups are saved as configuration files and a tar file or squashfs image (added to test the 'directory' restore mechanism). Whether to use incremental VM backups and which backup mechanisms to use can be configured in the storage configuration. The 'nbdinfo' binary from the 'libnbd-bin' package is required for backup mechanism 'nbd' for VM backups, the 'mksquashfs' binary from the 'squashfs-tools' package is required for backup mechanism 'squashfs' for containers. Signed-off-by: Fiona Ebner --- Changes in v5: * Reserve NBD nodes. * Adapt to changed file_size_info() signature. * Die if NBD module not loaded (new qemu-server already loads it). * Unbreak logging warnings by using the correct function. * Ensure to sync() writes to the backup target (i.e. a qcow2 file exposed as a block device via qemu-nbd) before disconnecting its qemu-nbd block device node. * Switch from 'block-device' to 'virtual-file' mechanism (just a rename here). 
.../BackupProvider/Plugin/DirectoryExample.pm | 802 ++ src/PVE/BackupProvider/Plugin/Makefile| 2 +- .../Custom/BackupProviderDirExamplePlugin.pm | 308 +++ src/PVE/Storage/Custom/Makefile | 5 + src/PVE/Storage/Makefile | 1 + 5 files changed, 1117 insertions(+), 1 deletion(-) create mode 100644 src/PVE/BackupProvider/Plugin/DirectoryExample.pm create mode 100644 src/PVE/Storage/Custom/BackupProviderDirExamplePlugin.pm create mode 100644 src/PVE/Storage/Custom/Makefile diff --git a/src/PVE/BackupProvider/Plugin/DirectoryExample.pm b/src/PVE/BackupProvider/Plugin/DirectoryExample.pm new file mode 100644 index 000..e595407 --- /dev/null +++ b/src/PVE/BackupProvider/Plugin/DirectoryExample.pm @@ -0,0 +1,802 @@ +package PVE::BackupProvider::Plugin::DirectoryExample; + +use strict; +use warnings; + +use Fcntl qw(SEEK_SET); +use File::Path qw(make_path remove_tree); +use IO::File; +use IPC::Open3; + +use PVE::Storage::Common; +use PVE::Storage::Plugin; +use PVE::Tools qw(file_get_contents file_read_firstline file_set_contents run_command); + +use base qw(PVE::BackupProvider::Plugin::Base); + +# Private helpers + +my sub log_info { +my ($self, $message) = @_; + +$self->{'log-function'}->('info', $message); +} + +my sub log_warning { +my ($self, $message) = @_; + +$self->{'log-function'}->('warn', $message); +} + +my sub log_error { +my ($self, $message) = @_; + +$self->{'log-function'}->('err', $message); +} + +# Try to use the same bitmap ID as last time for incremental backup if the storage is configured for +# incremental VM backup. Need to start fresh if there is no previous ID or the associated backup +# doesn't exist. 
+my sub get_bitmap_id { +my ($self, $vmid, $vmtype) = @_; + +return if $self->{'storage-plugin'}->get_vm_backup_mode($self->{scfg}) ne 'incremental'; + +my $previous_info_dir = "$self->{scfg}->{path}/$vmid/"; + +my $previous_info_file = "$previous_info_dir/previous-info"; +my $info = file_read_firstline($previous_info_file) // ''; +$self->{$vmid}->{'old-previous-info'} = $info; +my ($bitmap_id, $previous_backup_id) = $info =~ m/^(\d+)\s+(\d+)$/; +my $previous_backup_dir = + $previous_backup_id ? "$self->{scfg}->{path}/$vmid/$vmtype-$previous_backup_id" : undef; + +if ($bitmap_id && -d $previous_backup_dir) { + $self->{$vmid}->{'previous-backup-dir'} = $previous_backup_dir; +} else { + # need to start fresh if there is no previous ID or the associated backup doesn't exist + $bitmap_id = $self->{$vmid}->{'backup-time'}; +} + +$self->{$vmid}->{'bitmap-id'} = $bitmap_id; +make_path($previous_info_dir); +die "unable to create directory $previous_info_dir\n" if !-d $previous_info_dir; +file_set_contents($previous_info_file, "$bitmap_id $self->{$vmid}->{'backup-time'}"); + +return $bitmap_id; +} + +# NOTE: This is just for proof-of-concept testing! A backup provider plugin should either use the +# 'nbd' backup mechansim and use the NBD protocol or use the 'file-handle' mechanism. There should +# be no need to use /dev/nbdX nodes for proper plugins. +my sub bind_next_free_dev_nbd_node { +my ($options) = @_; + +# /dev/nbdX devices are reserved in a file. Those reservations expires after $expiretime. +# This avoids race conditions between allocation and use. + +die "file '/sys/module/nbd' does not exist - 'nbd' kernel module not loaded?" + if !-e "/sys/module/nbd"; + +my $line = PVE::Tools::file_read_firstline("/sys/module/nbd/parameters/nbds_max") + or die "could not read 'nbds_max' parameter file for 'nbd' kernel module\n"; +my ($nbds_max) = ($line =~ m/(\d+)/) + or die "could not determine 'nbds_max' paramet
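The `get_bitmap_id` helper in the example plugin above reuses the previously stored bitmap ID only when the storage is configured for incremental backups *and* the previous backup it refers to still exists; otherwise it starts fresh with the current backup time. The decision, stripped of the file I/O, as a Python sketch:

```python
def pick_bitmap_id(mode, previous_info, prev_backup_exists, backup_time):
    """Choose a bitmap ID for an incremental VM backup.
    previous_info is (bitmap_id, previous_backup_id) or None, mirroring
    the 'previous-info' file the example plugin keeps per guest."""
    if mode != "incremental":
        return None  # no bitmap tracking for full backups
    bitmap_id, _prev_backup_id = previous_info if previous_info else (None, None)
    if bitmap_id is not None and prev_backup_exists:
        return bitmap_id  # reuse -> QEMU can send only changed blocks
    return backup_time    # start fresh with a new bitmap ID

assert pick_bitmap_id("snapshot", (7, 3), True, 100) is None
assert pick_bitmap_id("incremental", (7, 3), True, 100) == 7
assert pick_bitmap_id("incremental", (7, 3), False, 100) == 100
assert pick_bitmap_id("incremental", None, False, 100) == 100
```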
Re: [pve-devel] [PATCH pve-network 09/17] sdn: running: apply fabrics config
On 4/2/25 12:41, Fabian Grünbichler wrote: > On March 28, 2025 6:13 pm, Gabriel Goller wrote: >> From: Stefan Hanreich >> >> Save the fabrics configuration in the running configuration, when >> applying the SDN configuration. This causes the FRR configuration to >> be actually generated for the openfabric and ospf plugins, since the >> FRR configuration is generated from the running configuration. >> >> Signed-off-by: Stefan Hanreich >> Co-authored-by: Gabriel Goller >> Signed-off-by: Gabriel Goller >> --- >> src/PVE/Network/SDN.pm | 8 +++- >> 1 file changed, 7 insertions(+), 1 deletion(-) >> >> diff --git a/src/PVE/Network/SDN.pm b/src/PVE/Network/SDN.pm >> index a0b61275e10b..12f0f9361389 100644 >> --- a/src/PVE/Network/SDN.pm >> +++ b/src/PVE/Network/SDN.pm >> @@ -155,13 +155,19 @@ sub commit_config { >> my $zones_cfg = PVE::Network::SDN::Zones::config(); >> my $controllers_cfg = PVE::Network::SDN::Controllers::config(); >> my $subnets_cfg = PVE::Network::SDN::Subnets::config(); >> +my $openfabric_cfg = >> PVE::Network::SDN::Fabrics::config_for_protocol("openfabric") >> +->get_inner(); >> +my $ospf_cfg = PVE::Network::SDN::Fabrics::config_for_protocol("ospf") >> +->get_inner(); >> >> my $vnets = { ids => $vnets_cfg->{ids} }; >> my $zones = { ids => $zones_cfg->{ids} }; >> my $controllers = { ids => $controllers_cfg->{ids} }; >> my $subnets = { ids => $subnets_cfg->{ids} }; >> +my $openfabric = { ids => $openfabric_cfg }; >> +my $ospf = { ids => $ospf_cfg }; >> >> -$cfg = { version => $version, vnets => $vnets, zones => $zones, >> controllers => $controllers, subnets => $subnets }; >> +$cfg = { version => $version, vnets => $vnets, zones => $zones, >> controllers => $controllers, subnets => $subnets, openfabric => $openfabric, >> ospf => $ospf }; > > wouldn't it be more in line to have fabrics => fabrics_config here? Yes, probably makes more sense. 
The initial idea was to stick to the one key / file pattern, but for the fabrics it probably makes more sense to have them contained in a top-level key. Will change. ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH storage v6 10/37] add storage_has_feature() helper function
Which looks up whether a storage supports a given feature in its 'plugindata'. This is intentionally kept simple and not implemented as a plugin method for now. Should it ever become more complex requiring plugins to override the default implementation, it can later be changed to a method. Suggested-by: Fabian Grünbichler Signed-off-by: Fiona Ebner --- src/PVE/Storage.pm| 8 src/PVE/Storage/Plugin.pm | 10 ++ 2 files changed, 18 insertions(+) diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm index c5d4ff8..8cbfb4f 100755 --- a/src/PVE/Storage.pm +++ b/src/PVE/Storage.pm @@ -213,6 +213,14 @@ sub storage_check_enabled { return storage_check_node($cfg, $storeid, $node, $noerr); } +sub storage_has_feature { +my ($cfg, $storeid, $feature) = @_; + +my $scfg = storage_config($cfg, $storeid); + +return PVE::Storage::Plugin::storage_has_feature($scfg->{type}, $feature); +} + # storage_can_replicate: # return true if storage supports replication # (volumes allocated with vdisk_alloc() has replication feature) diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm index 65cf43f..80daeea 100644 --- a/src/PVE/Storage/Plugin.pm +++ b/src/PVE/Storage/Plugin.pm @@ -246,6 +246,16 @@ sub dirs_hash_to_string { return join(',', map { "$_=$hash->{$_}" } sort keys %$hash); } +sub storage_has_feature { +my ($type, $feature) = @_; + +my $data = $defaultData->{plugindata}->{$type}; +if (my $features = $data->{features}) { + return $features->{$feature}; +} +return; +} + sub default_format { my ($scfg) = @_; -- 2.39.5 ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
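The helper is intentionally a plain table lookup rather than a plugin method. Rendered in Python (the registry entries are hypothetical; real 'plugindata' lives in each storage plugin):

```python
PLUGINDATA = {
    "dir-example": {"features": {"backup-provider": 1}},
    "dir": {},  # a plugin that declares no features at all
}

def storage_has_feature(plugindata, storage_type, feature):
    """Look a feature up in the plugin's static 'plugindata', defaulting
    to false when the plugin declares no features."""
    features = plugindata.get(storage_type, {}).get("features")
    if features:
        return bool(features.get(feature))
    return False

assert storage_has_feature(PLUGINDATA, "dir-example", "backup-provider") is True
assert storage_has_feature(PLUGINDATA, "dir", "backup-provider") is False
```

Should the lookup ever need per-plugin overrides, the commit message notes it can be turned into a proper plugin method later.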
[pve-devel] [PATCH qemu-server v15 6/12] migration: check_local_resources for virtiofs
add dir mapping checks to check_local_resources Since the VM needs to be powered off for migration, migration should work with a directory on shared storage with all caching settings. Signed-off-by: Markus Frank --- v15: * removed unnecessary "if ($entry->{dirid})" check PVE/QemuServer.pm| 10 +- test/MigrationTest/Shared.pm | 7 +++ 2 files changed, 16 insertions(+), 1 deletion(-) diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm index 558b924d..5e9c3981 100644 --- a/PVE/QemuServer.pm +++ b/PVE/QemuServer.pm @@ -2486,6 +2486,7 @@ sub check_local_resources { my $nodelist = PVE::Cluster::get_nodelist(); my $pci_map = PVE::Mapping::PCI::config(); my $usb_map = PVE::Mapping::USB::config(); +my $dir_map = PVE::Mapping::Dir::config(); my $missing_mappings_by_node = { map { $_ => [] } @$nodelist }; @@ -2497,6 +2498,8 @@ sub check_local_resources { $entry = PVE::Mapping::PCI::get_node_mapping($pci_map, $id, $node); } elsif ($type eq 'usb') { $entry = PVE::Mapping::USB::get_node_mapping($usb_map, $id, $node); + } elsif ($type eq 'dir') { + $entry = PVE::Mapping::Dir::get_node_mapping($dir_map, $id, $node); } if (!scalar($entry->@*)) { push @{$missing_mappings_by_node->{$node}}, $key; @@ -2525,9 +2528,14 @@ sub check_local_resources { push @$mapped_res, $k; } } + if ($k =~ m/^virtiofs/) { + my $entry = parse_property_string('pve-qm-virtiofs', $conf->{$k}); + $add_missing_mapping->('dir', $k, $entry->{dirid}); + push @$mapped_res, $k; + } # sockets are safe: they will recreated be on the target side post-migrate next if $k =~ m/^serial/ && ($conf->{$k} eq 'socket'); - push @loc_res, $k if $k =~ m/^(usb|hostpci|serial|parallel)\d+$/; + push @loc_res, $k if $k =~ m/^(usb|hostpci|serial|parallel|virtiofs)\d+$/; } die "VM uses local resources\n" if scalar @loc_res && !$noerr; diff --git a/test/MigrationTest/Shared.pm b/test/MigrationTest/Shared.pm index e69bf84f..f5fb70ff 100644 --- a/test/MigrationTest/Shared.pm +++ b/test/MigrationTest/Shared.pm @@ -90,6 +90,13 @@ 
$mapping_pci_module->mock( }, ); +our $mapping_dir_module = Test::MockModule->new("PVE::Mapping::Dir"); +$mapping_dir_module->mock( +config => sub { + return {}; +}, +); + our $ha_config_module = Test::MockModule->new("PVE::HA::Config"); $ha_config_module->mock( vm_is_ha_managed => sub { -- 2.39.5 ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
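The bookkeeping the patch extends — collecting, per cluster node, which mapped resources (PCI, USB, and now directory IDs for virtiofs) have no mapping entry there — can be sketched with simplified data shapes (a Python illustration, not the Perl in `check_local_resources`):

```python
def missing_mappings_by_node(nodes, node_mappings, required):
    """For each node, list the config keys (e.g. 'virtiofs0') whose mapping
    ID has no entry on that node. node_mappings maps node -> set of
    available mapping IDs; required maps config key -> mapping ID."""
    missing = {node: [] for node in nodes}
    for key, mapping_id in required.items():
        for node in nodes:
            if mapping_id not in node_mappings.get(node, set()):
                missing[node].append(key)
    return missing

nodes = ["node2", "node3"]
node_mappings = {"node2": {"shared-dir"}, "node3": set()}
missing = missing_mappings_by_node(nodes, node_mappings, {"virtiofs0": "shared-dir"})
assert missing == {"node2": [], "node3": ["virtiofs0"]}
```

A guest can then only migrate to nodes whose missing list is empty; since virtiofs migration requires the VM to be powered off anyway, a shared-storage directory mapping on the target suffices.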
Re: [pve-devel] [PATCH docs v1] package repos: revise Ceph section, introduce 2 attributes
On 2025-03-25 12:19, Alexander Zeidler wrote: On Mon Mar 24, 2025 at 5:50 PM CET, Aaron Lauterer wrote: On 2025-02-10 11:36, Alexander Zeidler wrote: * Remove duplicated text to maintain clarity * Mention available installation methods (web-based wizard, CLI tool) * Create a table to see the support state of Ceph releases I am not sure if having this in the rather static documentation is a good idea. Having it in the Proxmox VE wiki where changes can be made quickly is probably better. Similar how we handle it for PVE releases with https://pve.proxmox.com/wiki/FAQ https://pve.proxmox.com/wiki/FAQ is not meant for manual modification. It is auto-generated from pve-faq.adoc and also included in the doc: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#faq-support-table I am not sure if quick edits are needed here. While the estimated EOL should not change, the other table content and admin guide version may/should be updated anyway with new PVE/Ceph releases. Good point, I stand corrected ;) * List and link to the EOL dates of Ceph releases * Reword the descriptions of available PVE repositories * Add two new attributes to avoid manual editing multiple lines ** debian-codename=bookworm It seems that the codename is not available in the build process, so updating the new attribute on Debian major release upgrades may be fine. ** pve-version=8 While `revnumber` ("8.3.1") is made available by the current build process, it is not as suitable as only mentioning the major version number. 
Signed-off-by: Alexander Zeidler --- asciidoc/asciidoc-pve.conf | 2 + pve-package-repos.adoc | 165 ++--- 2 files changed, 63 insertions(+), 104 deletions(-) diff --git a/asciidoc/asciidoc-pve.conf b/asciidoc/asciidoc-pve.conf index 0c28298..8ad635f 100644 --- a/asciidoc/asciidoc-pve.conf +++ b/asciidoc/asciidoc-pve.conf @@ -3,6 +3,8 @@ proxmoxGmbh=Proxmox Server Solutions GmbH copyright=Proxmox Server Solutions GmbH pve=Proxmox VE +pve-version=8 +debian-codename=bookworm pricing-url=https://proxmox.com/en/proxmox-virtual-environment/pricing website=https://www.proxmox.com/ forum-url=https://forum.proxmox.com/ diff --git a/pve-package-repos.adoc b/pve-package-repos.adoc index c831cd9..e0a9fb5 100644 --- a/pve-package-repos.adoc +++ b/pve-package-repos.adoc @@ -24,6 +24,7 @@ security updates, bug fixes and new features. APT Repositories are defined in the file `/etc/apt/sources.list` and in `.list` files placed in `/etc/apt/sources.list.d/`. +[[repository_management]] Repository Management ^ Since we already have the auto-generated "_repository_management" anchor, I would keep it, but manually define it here. This way we do not break any existing references to this part of the admin guide. In this case adding "_repository_management" seems fine to me, although our forum shows only 2 results for it. I would not take the forum as a benchmark. Once we have an anchor for a deep link, we should preserve it if possible to not break any deep links that we are not aware of. Nothing worse than links not working anymore as expected. ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Re: [pve-devel] [PATCH v1 pve-storage 7/8] pluginbase: document volume operations
On Mon Mar 31, 2025 at 5:12 PM CEST, Fabian Grünbichler wrote: > On March 26, 2025 3:20 pm, Max Carrara wrote: > > Add docstrings for the following methods: > > - list_volumes > > - get_volume_attribute > > - update_volume_attribute > > - volume_size_info > > - volume_resize > > - volume_snapshot > > - volume_snapshot_info > > - volume_rollback_is_possible > > - volume_snapshot_rollback > > - volume_snapshot_delete > > - volume_snapshot_needs_fsfreeze > > - storage_can_replicate > > - volume_has_feature > > - map_volume > > - unmap_volume > > - activate_volume > > - deactivate_volume > > - rename_volume > > - prune_backups > > > > Signed-off-by: Max Carrara > > Co-authored-by: Maximiliano Sandoval > > --- > > src/PVE/Storage/PluginBase.pm | 401 -- > > 1 file changed, 388 insertions(+), 13 deletions(-) > > > > diff --git a/src/PVE/Storage/PluginBase.pm b/src/PVE/Storage/PluginBase.pm > > index 37b1471..a1bfc8d 100644 > > --- a/src/PVE/Storage/PluginBase.pm > > +++ b/src/PVE/Storage/PluginBase.pm > > @@ -861,108 +861,483 @@ sub free_image { > > > > =cut > > > > +=head3 $plugin->list_volumes($storeid, \%scfg, $vmid, \@content_types) > > + > > +B Must be implemented in every storage plugin. > > + > > +Returns a list of volumes for the given guest whose content type is within > > the Upfront note: Unless I otherwise comment something, I agree with you. Just sparing myself from writing and you from reading too many ACKs :P > > this is wrong - what if $vmid is not set? or if content type is snippets > or one of other ones that don't have an associated guest/owner? or what > if there is no $vmid given, as in the example below? Right, thanks for spotting this too. > > > +given C<\@content_types>. If C<\@content_types> is empty, no volumes will > > be > > +returned. See Cplugindata()" >>> for all > > content types. > > here we are actually talking about content types for once (and this also > translates from content type "images" and "rootdir" to volume type > "images"! 
in PVE::Plugin!). > > > + > > +# List all backups for a guest > > +my $backups = $plugin->list_volumes($storeid, $scfg, $vmid, > > ['backup']); > > + > > +# List all containers and virtual machines on the storage > > +my $guests = $plugin->list_volumes($storeid, $scfg, undef, ['rootdir', > > 'images']) > > eh, this is a bad example? it doesn't list the guests, it lists their > volumes.. > > > + > > +The returned listref of hashrefs has the following structure: > > + > > +[ > > + { > > + content => "images", > > + ctime => "1702298038", # creation time as unix timestamp > > + format => "raw", > > + size => 9663676416, # in bytes! > > + vmid => 102, > > + volid => "local-lvm:vm-102-disk-0", > > + }, > > + # [...] > > +] > > + > > +Backups will also contain additional keys: > > + > > +[ > > + { > > + content => "backup", > > + ctime => 1742405070, # creation time as unix timestamp > > + format => "tar.zst", > > + notes => "...", # comment that was entered when backup was created > > + size => 328906840, # in bytes! > > + subtype => "lxc", # "lxc" for containers, "qemu" for VMs > > + vmid => 106, > > + volid => "local:backup/vzdump-lxc-106-2025_03_19-18_24_30.tar.zst", > > + }, > > + # [...] > > +] > > s.. what is the interface here again? -> needs a (complete) schema > for the returned type, else how am I supposed to implement this? I mean, we could define one. I didn't want to clobber the docstring with too many examples / too much text here, but I agree. > > > + > > +=cut > > + > > sub list_volumes { > > my ($class, $storeid, $scfg, $vmid, $content_types) = @_; > > croak "implement me in sub-class\n"; > > } > > > > -# Returns undef if the attribute is not supported for the volume. > > -# Should die if there is an error fetching the attribute. > > -# Possible attributes: > > -# notes - user-provided comments/notes. > > -# protected - not to be removed by free_image, and for backups, ignored > > when pruning. 
> > +=head3 $plugin->get_volume_attribute(\%scfg, $storeid, $volname, > > $attribute) > > + > > +=head3 $plugin->get_volume_attribute(...) > > + > > +B Must be implemented in every storage plugin. > > + > > +Returns the value of the given C<$attribute> for a volume. If the > > attribute isn't > > +supported for the volume, returns C. > > + > > +Cs if there is an error fetching the attribute. > > + > > +B > > + > > +=over > > + > > +=item * C (returns scalar) > > scalar *string* ? > > + > > +User-provided comments / notes. > > + > > +=item * C (returns scalar) > > scalar *boolean* ? > > > + > > +When set to C<1>, the volume must not be removed by C > free_image()|/"$plugin->free_image(...)" >>>. > > +Backups with C set to C<1> are ignored when pruning. > > Backup volumes .. when calculating which backup volumes to prune. > > > + > > +=back > > +
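To make the review's request for a complete return schema concrete: one entry of the `list_volumes` result could be captured by a typed record roughly like the following. This is a hypothetical sketch inferred from the hashref examples in the docstring, not part of the actual patch; field names match the Perl keys.

```rust
// Hypothetical typed schema for one list_volumes() result entry, inferred
// from the hashref examples quoted above; optional fields are absent for
// non-backup content (and vmid may be absent, e.g. for unowned snippets).
#[derive(Debug, Clone, PartialEq)]
struct VolumeInfo {
    content: String,         // content type, e.g. "images" or "backup"
    ctime: i64,              // creation time as unix timestamp
    format: String,          // e.g. "raw" or "tar.zst"
    size: u64,               // in bytes!
    vmid: Option<u32>,       // owning guest, if any
    volid: String,           // storage-qualified volume ID
    // backup-only keys:
    notes: Option<String>,   // comment entered when the backup was created
    subtype: Option<String>, // "lxc" for containers, "qemu" for VMs
}
```

A schema like this would settle, in one place, which keys are mandatory and which are content-type specific.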
Re: [pve-devel] [PATCH proxmox-ve-rs v2 1/1] ve-config: move types to proxmox-network-types
Just a quick note:

On Tue Apr 1, 2025 at 3:36 PM CEST, Stefan Hanreich wrote:
[..]
> diff --git a/proxmox-ve-config/src/firewall/types/address.rs
> b/proxmox-ve-config/src/firewall/types/address.rs
> index 9b73d3d..548b813 100644
> --- a/proxmox-ve-config/src/firewall/types/address.rs
> +++ b/proxmox-ve-config/src/firewall/types/address.rs
[..]
> -assert_eq!(
> -[Ipv6Cidr::new(
> -[0x, 0x, 0x, 0x, 0x, 0x, 0x,
> 0x],
> -128
> -)
> -.unwrap(),],
> -range.to_cidrs().as_slice()
> -);
> -}
> }
> +

nit: new blank line at eof, guess just an accidental typo? :^)

Came across it while applying the patch for a quick compile-test.

Applying: ve-config: move types to proxmox-network-types
.git/rebase-apply/patch:1548: new blank line at EOF.
+
warning: 1 line adds whitespace errors.

Just wanted to mention, can easily be fixed up during applying as well. Definitely not worth a resend in any case, IMO.
[pve-devel] [PATCH container v5 28/32] create: factor out compression option helper
In preparation to re-use it for checking potentially untrusted archives. Signed-off-by: Fiona Ebner --- src/PVE/LXC/Create.pm | 51 +-- 1 file changed, 30 insertions(+), 21 deletions(-) diff --git a/src/PVE/LXC/Create.pm b/src/PVE/LXC/Create.pm index 8c54d6b..1d0474d 100644 --- a/src/PVE/LXC/Create.pm +++ b/src/PVE/LXC/Create.pm @@ -78,15 +78,38 @@ sub restore_proxmox_backup_archive { $scfg, $storeid, $cmd, $param, userns_cmd => $userns_cmd); } +my sub tar_compression_option { +my ($archive) = @_; + +my %compression_map = ( + '.gz' => '-z', + '.bz2' => '-j', + '.xz' => '-J', + '.lzo' => '--lzop', + '.zst' => '--zstd', +); +if ($archive =~ /\.tar(\.[^.]+)?$/) { + if (defined($1)) { + die "unrecognized compression format: $1\n" if !defined($compression_map{$1}); + return $compression_map{$1}; + } + return; +} else { + die "file does not look like a template archive: $archive\n"; +} +} + my sub restore_tar_archive_command { -my ($conf, $opts, $rootdir, $bwlimit) = @_; +my ($conf, $compression_opt, $rootdir, $bwlimit) = @_; my ($id_map, $root_uid, $root_gid) = PVE::LXC::parse_id_maps($conf); my $userns_cmd = PVE::LXC::userns_command($id_map); -my $cmd = [@$userns_cmd, 'tar', 'xpf', '-', $opts->@*, '--totals', - @PVE::Storage::Plugin::COMMON_TAR_FLAGS, - '-C', $rootdir]; +my $cmd = [@$userns_cmd, 'tar', 'xpf', '-']; +push $cmd->@*, $compression_opt if $compression_opt; +push $cmd->@*, '--totals'; +push $cmd->@*, @PVE::Storage::Plugin::COMMON_TAR_FLAGS; +push $cmd->@*, '-C', $rootdir; # skip-old-files doesn't have anything to do with time (old/new), but is # simply -k (annoyingly also called --keep-old-files) without the 'treat @@ -108,24 +131,10 @@ sub restore_tar_archive { my $archive_fh; my $tar_input = '<&STDIN'; -my @compression_opt; +my $compression_opt; if ($archive ne '-') { # GNU tar refuses to autodetect this... 
*sigh* - my %compression_map = ( - '.gz' => '-z', - '.bz2' => '-j', - '.xz' => '-J', - '.lzo' => '--lzop', - '.zst' => '--zstd', - ); - if ($archive =~ /\.tar(\.[^.]+)?$/) { - if (defined($1)) { - die "unrecognized compression format: $1\n" if !defined($compression_map{$1}); - @compression_opt = $compression_map{$1}; - } - } else { - die "file does not look like a template archive: $archive\n"; - } + $compression_opt = tar_compression_option($archive); sysopen($archive_fh, $archive, O_RDONLY) or die "failed to open '$archive': $!\n"; my $flags = $archive_fh->fcntl(Fcntl::F_GETFD(), 0); @@ -133,7 +142,7 @@ sub restore_tar_archive { $tar_input = '<&'.fileno($archive_fh); } -my $cmd = restore_tar_archive_command($conf, [@compression_opt], $rootdir, $bwlimit); +my $cmd = restore_tar_archive_command($conf, $compression_opt, $rootdir, $bwlimit); if ($archive eq '-') { print "extracting archive from STDIN\n"; -- 2.39.5
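For readers less familiar with the Perl, the mapping the factored-out `tar_compression_option()` helper performs can be sketched in Rust (illustrative only; the real helper lives in src/PVE/LXC/Create.pm and dies with the same messages):

```rust
// Sketch of the archive-suffix -> tar-flag mapping used by the new helper:
// Ok(Some(flag)) for a recognized compressed tarball, Ok(None) for a plain
// .tar (tar needs no extra flag), Err(..) for anything else.
fn tar_compression_option(archive: &str) -> Result<Option<&'static str>, String> {
    const MAP: [(&str, &str); 5] = [
        (".gz", "-z"),
        (".bz2", "-j"),
        (".xz", "-J"),
        (".lzo", "--lzop"),
        (".zst", "--zstd"),
    ];
    if archive.ends_with(".tar") {
        return Ok(None); // uncompressed tarball
    }
    if let Some(pos) = archive.rfind(".tar.") {
        let suffix = &archive[pos + 4..]; // e.g. ".zst"
        if let Some((_, flag)) = MAP.iter().copied().find(|(ext, _)| *ext == suffix) {
            return Ok(Some(flag));
        }
        return Err(format!("unrecognized compression format: {suffix}"));
    }
    Err(format!("file does not look like a template archive: {archive}"))
}
```

Note the sketch handles suffixes slightly differently from the Perl regex `\.tar(\.[^.]+)?$` in exotic multi-dot edge cases; for the supported formats the behavior matches.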
[pve-devel] [PATCH pve-firmware] iwlwifi-extra: update list of forced inclusions
Recent Intel bluetooth chipsets are not working, due to missing firmware (e.g. ibt-0180-0041 for a Meteor Lake based Intel NUC 14). Updating the list based on the file list from the firmware-iwlwifi package in debian sid (and bookworm-backports)[0] should fix this and a few related issues. Follows commit bebdf38 ("iwl 5.19 extra: force some ibt (intel bluetooth) fw inclusion") [0] https://packages.debian.org/sid/all/firmware-iwlwifi/filelist Reported-by: Moayad Almalat Signed-off-by: Stoiko Ivanov -- hope I'm not overlooking a more sophisticated process for compiling this list (I mixed and matched the output of dpkg -c from debian with ours) fwlist-iwlwifi-extra | 55 1 file changed, 55 insertions(+) diff --git a/fwlist-iwlwifi-extra b/fwlist-iwlwifi-extra index ee8a7c5..6b6e21f 100644 --- a/fwlist-iwlwifi-extra +++ b/fwlist-iwlwifi-extra @@ -1,5 +1,49 @@ ibt-0041-0041.ddc kernel/drivers/bluetooth/btintel.ko ibt-0041-0041.sfi kernel/drivers/bluetooth/btintel.ko +intel/ibt-0040-0041.ddc kernel/drivers/bluetooth/btintel.ko +intel/ibt-0040-0041.sfi kernel/drivers/bluetooth/btintel.ko +intel/ibt-0040-1020.ddc kernel/drivers/bluetooth/btintel.ko +intel/ibt-0040-1020.sfi kernel/drivers/bluetooth/btintel.ko +intel/ibt-0040-1050.ddc kernel/drivers/bluetooth/btintel.ko +intel/ibt-0040-1050.sfi kernel/drivers/bluetooth/btintel.ko +intel/ibt-0040-2120.ddc kernel/drivers/bluetooth/btintel.ko +intel/ibt-0040-2120.sfi kernel/drivers/bluetooth/btintel.ko +intel/ibt-0040-4150.ddc kernel/drivers/bluetooth/btintel.ko +intel/ibt-0040-4150.sfi kernel/drivers/bluetooth/btintel.ko +intel/ibt-0041-0041.ddc kernel/drivers/bluetooth/btintel.ko +intel/ibt-0041-0041.sfi kernel/drivers/bluetooth/btintel.ko +intel/ibt-0093-0041.ddc kernel/drivers/bluetooth/btintel.ko +intel/ibt-0093-0041.sfi kernel/drivers/bluetooth/btintel.ko +intel/ibt-0093-0291.ddc kernel/drivers/bluetooth/btintel.ko +intel/ibt-0093-0291.sfi kernel/drivers/bluetooth/btintel.ko +intel/ibt-0093-1050.ddc 
kernel/drivers/bluetooth/btintel.ko +intel/ibt-0093-1050.sfi kernel/drivers/bluetooth/btintel.ko +intel/ibt-0093-4150.ddc kernel/drivers/bluetooth/btintel.ko +intel/ibt-0093-4150.sfi kernel/drivers/bluetooth/btintel.ko +intel/ibt-0180-0041.ddc kernel/drivers/bluetooth/btintel.ko +intel/ibt-0180-0041.sfi kernel/drivers/bluetooth/btintel.ko +intel/ibt-0180-1050.ddc kernel/drivers/bluetooth/btintel.ko +intel/ibt-0180-1050.sfi kernel/drivers/bluetooth/btintel.ko +intel/ibt-0180-4150.ddc kernel/drivers/bluetooth/btintel.ko +intel/ibt-0180-4150.sfi kernel/drivers/bluetooth/btintel.ko +intel/ibt-0190-0041.ddc kernel/drivers/bluetooth/btintel.ko +intel/ibt-0190-0041-iml.sfi kernel/drivers/bluetooth/btintel.ko +intel/ibt-0190-0041.sfi kernel/drivers/bluetooth/btintel.ko +intel/ibt-0190-0291.ddc kernel/drivers/bluetooth/btintel.ko +intel/ibt-0190-0291-iml.sfi kernel/drivers/bluetooth/btintel.ko +intel/ibt-0190-0291.sfi kernel/drivers/bluetooth/btintel.ko +intel/ibt-0291-0291.ddc kernel/drivers/bluetooth/btintel.ko +intel/ibt-0291-0291.sfi kernel/drivers/bluetooth/btintel.ko +intel/ibt-1040-0041.ddc kernel/drivers/bluetooth/btintel.ko +intel/ibt-1040-0041.sfi kernel/drivers/bluetooth/btintel.ko +intel/ibt-1040-1020.ddc kernel/drivers/bluetooth/btintel.ko +intel/ibt-1040-1020.sfi kernel/drivers/bluetooth/btintel.ko +intel/ibt-1040-1050.ddc kernel/drivers/bluetooth/btintel.ko +intel/ibt-1040-1050.sfi kernel/drivers/bluetooth/btintel.ko +intel/ibt-1040-2120.ddc kernel/drivers/bluetooth/btintel.ko +intel/ibt-1040-2120.sfi kernel/drivers/bluetooth/btintel.ko +intel/ibt-1040-4150.ddc kernel/drivers/bluetooth/btintel.ko +intel/ibt-1040-4150.sfi kernel/drivers/bluetooth/btintel.ko intel/ibt-11-5.ddc kernel/drivers/bluetooth/btintel.ko intel/ibt-11-5.sfi kernel/drivers/bluetooth/btintel.ko intel/ibt-12-16.ddc kernel/drivers/bluetooth/btintel.ko @@ -16,6 +60,8 @@ intel/ibt-19-0-0.ddc kernel/drivers/bluetooth/btintel.ko intel/ibt-19-0-0.sfi kernel/drivers/bluetooth/btintel.ko 
intel/ibt-19-0-1.ddc kernel/drivers/bluetooth/btintel.ko intel/ibt-19-0-1.sfi kernel/drivers/bluetooth/btintel.ko +intel/ibt-19-0-3.ddc kernel/drivers/bluetooth/btintel.ko +intel/ibt-19-0-3.sfi kernel/drivers/bluetooth/btintel.ko intel/ibt-19-0-4.ddc kernel/drivers/bluetooth/btintel.ko intel/ibt-19-0-4.sfi kernel/drivers/bluetooth/btintel.ko intel/ibt-19-16-4.ddc kernel/drivers/bluetooth/btintel.ko @@ -36,3 +82,12 @@ intel/ibt-20-1-3.ddc kernel/drivers/bluetooth/btintel.ko intel/ibt-20-1-3.sfi kernel/drivers/bluetooth/btintel.ko intel/ibt-20-1-4.ddc kernel/drivers/bluetooth/btintel.ko intel/ibt-20-1-4.sfi kernel/drivers/bluetooth/btintel.ko +intel/ibt-hw-37.7.10-fw-1.0.1.2d.d.bseq kernel/drivers/bluetooth/btintel.ko +intel/ibt-hw-37.7.10-fw-1.0.2.3.d.bseq kernel/drivers/bluetooth/btintel.ko +intel/ibt-hw-37.7.10-fw-1.80.1.2d.d.bseq kernel/drivers/bluetooth/btintel.ko +intel/ibt-hw-37.7.10-fw-1.80.2.3.d.bseq kernel/drivers/bluetooth/btintel.ko +intel/
Re: [pve-devel] [PATCH proxmox v3 2/2] network-types: add hostname type
Thanks for your review - comments inline On 4/4/25 09:31, Wolfgang Bumiller wrote: > On Tue, Apr 01, 2025 at 04:52:44PM +0200, Stefan Hanreich wrote: >> Add a type for representing Linux hostnames. These are the same >> constraints as the installer enforces [1]. Lowercasing is fine as >> well, since practically everything treats hostnames case-insensitively >> as RFC 952 stipulates: >> >>> No distinction is made between upper and lower case. >> >> [1] >> https://git.proxmox.com/?p=pve-installer.git;a=blob;f=Proxmox/Sys/Net.pm;h=81cb15f0042b195461324fffeca53d732133629e;hb=HEAD#l11 >> [2] https://www.rfc-editor.org/rfc/rfc952.txt >> >> Signed-off-by: Stefan Hanreich >> --- >> >> Notes: >> sending this separately because this contains the new types, that >> haven't been a part of proxmox-ve-rs before. >> >> Changes from v2: >> * improved hostname validation (thanks @Maximiliano @Christoph) >> * added additional unit tests >> >> Changes from v1: >> * added unit tests >> >> proxmox-network-types/src/hostname.rs | 103 ++ >> proxmox-network-types/src/lib.rs | 1 + >> 2 files changed, 104 insertions(+) >> create mode 100644 proxmox-network-types/src/hostname.rs >> >> diff --git a/proxmox-network-types/src/hostname.rs >> b/proxmox-network-types/src/hostname.rs >> new file mode 100644 >> index ..4b2f7ede >> --- /dev/null >> +++ b/proxmox-network-types/src/hostname.rs >> @@ -0,0 +1,103 @@ >> +use std::fmt::Display; >> + >> +use serde::{Deserialize, Serialize}; >> +use thiserror::Error; >> + >> +#[derive(Error, Debug)] >> +pub enum HostnameError { >> +#[error("the hostname must be from 1 to 63 characters long")] >> +InvalidLength, >> +#[error("the hostname has an invalid format")] >> +InvalidFormat, >> +} >> + >> +/// Hostname of a Debian system > > ^ Why debian specific? Should this then not be in a different namespace > or have a different name? As Christoph mentioned, Debian explicitly forbids numeric-only hostnames. 
They are technically valid, just *strongly* discouraged since depending on the implementation they are interpreted as IPs rather than hostnames. 0 or 1 or 234728934 for instance is a valid IPv4 after all (ping accepts them as input for instance). Since we're debian-based I figured I'd add it to the validation since it's very much relevant for our use-case here. I have nothing against changing the name though to DebianHostname. >> +/// >> +/// It checks for the following conditions: >> +/// * At most 63 characters long. >> +/// * It must not start or end with a hyphen. >> +/// * Must only contain ASCII alphanumeric characters as well as hyphens. >> +/// * It must not be purely numerical. >> +#[derive(Debug, Deserialize, Serialize, Clone, Eq, Hash, PartialOrd, Ord, >> PartialEq)] >> +pub struct Hostname(String); >> + >> +impl std::str::FromStr for Hostname { >> +type Err = HostnameError; >> + >> +fn from_str(hostname: &str) -> Result { >> +Self::new(hostname) >> +} >> +} >> + >> +impl AsRef for Hostname { >> +fn as_ref(&self) -> &str { >> +&self.0 >> +} >> +} >> + >> +impl Display for Hostname { >> +fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { >> +self.0.fmt(f) >> +} >> +} >> + >> +impl Hostname { >> +/// Constructs a new hostname from a string >> +/// >> +/// This function accepts characters in any case, but the resulting >> hostname will be >> +/// lowercased. >> +pub fn new(name_ref: impl AsRef) -> Result { > > Nit: I'd recommend using a `check()` function which does not create the > `Hostname` itself, because then: > > - in `FromStr` we know we have a reference (&str) and need to clone. > - We could add a `TryFrom<&str>` wich just uses `.parse()` > - We could add a `TryFrom` which avoids the clone. > > But... 
> >> +let name: &str = name_ref.as_ref(); >> + >> +if name.is_empty() || name.len() > 63 { >> +return Err(HostnameError::InvalidLength); >> +} >> + >> +if !(name.starts_with(|c: char| c.is_ascii_alphanumeric()) >> +&& name.ends_with(|c: char| c.is_ascii_alphanumeric())) { >> +return Err(HostnameError::InvalidFormat); >> +} >> + >> +if !name.chars().all(|c| c.is_ascii_alphanumeric() || c == '-') { >> +return Err(HostnameError::InvalidFormat); >> +} >> + >> +if name.chars().all(|c| c.is_ascii_digit()) { >> +return Err(HostnameError::InvalidFormat); >> +} >> + >> +Ok(Self(name.to_lowercase())) > > ...do we really want/need to do this? (Note that if we really do this, > it should IMO be documented on the *type*, too, not just this method.) The idea was that, since hostnames are treated case-insensitively by basically every application, it would make no sense to create two *different* hostnames, that would be the same in practice. It's something we could mayb
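Combining Wolfgang's `check()` suggestion with the rules from the patch, the validation under discussion condenses to a free function roughly like this (a sketch, not the patch itself; error strings shortened):

```rust
// Sketch of the hostname validation discussed above: 1-63 chars, ASCII
// alphanumerics plus inner hyphens, no leading/trailing hyphen, not purely
// numeric (the Debian constraint), and lowercased on success.
fn check_hostname(name: &str) -> Result<String, &'static str> {
    if name.is_empty() || name.len() > 63 {
        return Err("the hostname must be from 1 to 63 characters long");
    }
    let edges_ok = name.starts_with(|c: char| c.is_ascii_alphanumeric())
        && name.ends_with(|c: char| c.is_ascii_alphanumeric());
    if !edges_ok
        || !name.chars().all(|c| c.is_ascii_alphanumeric() || c == '-')
        || name.chars().all(|c| c.is_ascii_digit())
    {
        return Err("the hostname has an invalid format");
    }
    Ok(name.to_lowercase())
}
```

Keeping the checks in one function like this would let both `FromStr` and a by-value `TryFrom<String>` share them without an extra clone.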
[pve-devel] [PATCH v8 qemu 10/10] PVE backup: backup-access-api: explicit bitmap-mode parameter
This allows to explicitly request to re-create a bitmap under the same name. Signed-off-by: Wolfgang Bumiller --- New in v8 pve-backup.c | 17 - qapi/block-core.json | 20 +++- 2 files changed, 35 insertions(+), 2 deletions(-) diff --git a/pve-backup.c b/pve-backup.c index 8909842292..18bcf29533 100644 --- a/pve-backup.c +++ b/pve-backup.c @@ -1043,7 +1043,16 @@ BackupAccessInfoList *coroutine_fn qmp_backup_access_setup( error_propagate(errp, local_err); goto err; } -di->requested_bitmap_name = g_strdup(it->value->bitmap_name); +if (it->value->bitmap_mode == BACKUP_ACCESS_SETUP_BITMAP_MODE_NONE) { +di->bitmap_action = PBS_BITMAP_ACTION_NOT_USED; +} else { +di->requested_bitmap_name = g_strdup(it->value->bitmap_name); +if (it->value->bitmap_mode == BACKUP_ACCESS_SETUP_BITMAP_MODE_NEW) { +di->bitmap_action = PBS_BITMAP_ACTION_NEW; +} else if (it->value->bitmap_mode == BACKUP_ACCESS_SETUP_BITMAP_MODE_USE) { +di->bitmap_action = PBS_BITMAP_ACTION_USED; +} +} di_list = g_list_append(di_list, di); } bdrv_graph_co_rdunlock(); @@ -1096,6 +1105,12 @@ BackupAccessInfoList *coroutine_fn qmp_backup_access_setup( && di->requested_bitmap_name && strcmp(di->requested_bitmap_name, old_bitmap_name) == 0; +/* special case: if we explicitly requested a *new* bitmap, treat an + * existing bitmap as having a different name */ +if (di->bitmap_action == PBS_BITMAP_ACTION_NEW) { +same_bitmap_name = false; +} + if (old_bitmap_name && !same_bitmap_name) { BdrvDirtyBitmap *old_bitmap = bdrv_find_dirty_bitmap(di->bs, old_bitmap_name); if (!old_bitmap) { diff --git a/qapi/block-core.json b/qapi/block-core.json index 58586170d9..e1c79649fb 100644 --- a/qapi/block-core.json +++ b/qapi/block-core.json @@ -1145,9 +1145,27 @@ # in the result to see if you can actually re-use the bitmap or if it had to # be newly created. # +# @bitmap-mode: used to control whether the bitmap should be reused or +# recreated. 
+# ## { 'struct': 'BackupAccessSourceDevice', - 'data': { 'device': 'str', '*bitmap-name': 'str' } } + 'data': { 'device': 'str', '*bitmap-name': 'str', +'*bitmap-mode': 'BackupAccessSetupBitmapMode' } } + +## +# @BackupAccessSetupBitmapMode: +# +# How to setup a bitmap for a device for @backup-access-setup. +# +# @none: do not use a bitmap. Removes an existing bitmap if present. +# +# @new: create and use a new bitmap. +# +# @use: try to re-use an existing bitmap. Create a new one if it doesn't exist. +## +{ 'enum': 'BackupAccessSetupBitmapMode', + 'data': ['none', 'new', 'use' ] } ## # @backup-access-setup: -- 2.39.5
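The decision the C changes implement can be paraphrased in a few lines of Rust (names are mine, not from the patch): an existing bitmap is only reused when the mode is `use` and the requested name matches the old one; an explicit `new` always treats the names as different, and `none` never uses a bitmap.

```rust
// Paraphrase of the bitmap-reuse decision in qmp_backup_access_setup().
#[derive(Debug, PartialEq, Clone, Copy)]
enum BitmapMode {
    None, // do not use a bitmap; remove an existing one
    New,  // create and use a new bitmap
    Use,  // try to re-use an existing bitmap
}

fn reuse_existing_bitmap(
    mode: BitmapMode,
    old_name: Option<&str>,
    requested_name: Option<&str>,
) -> bool {
    if mode != BitmapMode::Use {
        // "new" explicitly discards; "none" uses no bitmap at all
        return false;
    }
    match (old_name, requested_name) {
        (Some(old), Some(requested)) => old == requested,
        _ => false,
    }
}
```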
Re: [pve-devel] [PATCH proxmox-ve-rs 1/1] clippy: elided lifetime has a name
Gabriel Goller writes:

> bump, still applies

If I am not mistaken, this could use `+ use<'a>` instead of `+ 'a`.
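For context, a minimal sketch of the suggested form (assuming a toolchain with precise capturing, stable since Rust 1.82; the function and names here are illustrative, not from the patch): `+ 'a` both names the lifetime and adds an outlives bound on the opaque type, while `+ use<'a>` only captures the lifetime.

```rust
// Illustration of `impl Trait + use<'a>`: the returned iterator captures the
// borrow of `data` without an additional `+ 'a` outlives bound.
fn evens<'a>(data: &'a [u32]) -> impl Iterator<Item = u32> + use<'a> {
    data.iter().copied().filter(|n| n % 2 == 0)
}
```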
[pve-devel] [PATCH manager 1/1] close #3181: ui: display guest name in confirm dialogs
The confirmation dialogs of the following actions are affected by this change: * Remove * Clone * Migrate * Snapshot * Snapshot rollback * Backup VM/CT from config view * Restore VM/CT from config view The VM/CT name is considered optional in all handled cases. If it is undefined, the parentheses and the guest name simply will not be displayed in the dialog window. No exceptions are thrown in case of an undefined guest name because it only extends the information displayed to the user and is not essential for performing any of the actions above. Signed-off-by: Michael Köppl --- www/manager6/grid/BackupView.js | 4 www/manager6/lxc/CmdMenu.js | 9 - www/manager6/lxc/Config.js| 11 +-- www/manager6/qemu/CmdMenu.js | 9 - www/manager6/qemu/Config.js | 11 +-- www/manager6/tree/SnapshotTree.js | 3 +++ www/manager6/window/Backup.js | 3 +++ www/manager6/window/Clone.js | 7 ++- www/manager6/window/Migrate.js| 10 +++--- www/manager6/window/Restore.js| 3 +++ www/manager6/window/Snapshot.js | 6 +- 11 files changed, 65 insertions(+), 11 deletions(-) diff --git a/www/manager6/grid/BackupView.js b/www/manager6/grid/BackupView.js index e71d1c88..99f2a49b 100644 --- a/www/manager6/grid/BackupView.js +++ b/www/manager6/grid/BackupView.js @@ -39,6 +39,8 @@ Ext.define('PVE.grid.BackupView', { throw "unsupported VM type '" + vmtype + "'"; } + let vmname = me.pveSelNode.data.name; + var searchFilter = { property: 'volid', value: '', @@ -167,6 +169,7 @@ Ext.define('PVE.grid.BackupView', { nodename: nodename, vmid: vmid, vmtype: vmtype, + vmname: vmname, storage: storagesel.getValue(), listeners: { close: function() { @@ -189,6 +192,7 @@ Ext.define('PVE.grid.BackupView', { let win = Ext.create('PVE.window.Restore', { nodename: nodename, vmid: vmid, + vmname: vmname, volid: rec.data.volid, volidText: PVE.Utils.render_storage_content(rec.data.volid, {}, rec), vmtype: vmtype, diff --git a/www/manager6/lxc/CmdMenu.js b/www/manager6/lxc/CmdMenu.js index e30989a6..e4dadd2f 100644 --- 
a/www/manager6/lxc/CmdMenu.js +++ b/www/manager6/lxc/CmdMenu.js @@ -89,7 +89,13 @@ Ext.define('PVE.lxc.CmdMenu', { text: gettext('Clone'), iconCls: 'fa fa-fw fa-clone', hidden: !caps.vms['VM.Clone'], - handler: () => PVE.window.Clone.wrap(info.node, info.vmid, me.isTemplate, 'lxc'), + handler: () => PVE.window.Clone.wrap( + info.node, + info.vmid, + info.name, + me.isTemplate, + 'lxc', + ), }, { text: gettext('Migrate'), @@ -100,6 +106,7 @@ Ext.define('PVE.lxc.CmdMenu', { vmtype: 'lxc', nodename: info.node, vmid: info.vmid, + vmname: info.name, autoShow: true, }); }, diff --git a/www/manager6/lxc/Config.js b/www/manager6/lxc/Config.js index a7191fa2..9e33ca56 100644 --- a/www/manager6/lxc/Config.js +++ b/www/manager6/lxc/Config.js @@ -100,6 +100,7 @@ Ext.define('PVE.lxc.Config', { vmtype: 'lxc', nodename: nodename, vmid: vmid, + vmname: vm.name, }); win.show(); }, @@ -115,7 +116,13 @@ Ext.define('PVE.lxc.Config', { iconCls: 'fa fa-fw fa-clone', hidden: !caps.vms['VM.Clone'], handler: function() { - PVE.window.Clone.wrap(nodename, vmid, template, 'lxc'); + PVE.window.Clone.wrap( + nodename, + vmid, + vm.name, + template, + 'lxc', + ); }, }, { @@ -156,7 +163,7 @@ Ext.define('PVE.lxc.Config', { handler: function() { Ext.create('PVE.window.SafeDestroyGuest', { url: base_url, - item: { type: 'CT', id: vmid }, + item: { type: 'CT', id: vmid, name: vm.name }, taskName: 'vzdestroy', }).show(); }, diff --git a/www/manager6/qemu/CmdMenu.js b/www/manager6/qemu/CmdMenu.js index 7a4e0a0e..e7dd6782 100644 --- a/www/manager6/qemu/CmdMenu.js +++
[pve-devel] [PATCH manager v5 3/3] ui: qemu hd edit: allow importing a disk from the import storage
adds a checkbox 'import image' above the storage selector which: * hides the original storage selector * shows a 'source storage' selector * shows a 'import file' selector * shows a 'target storage' selector Since the wizard and the hd edit share this panel, this also works in the wizard. Signed-off-by: Dominik Csapak --- no changes in v5 www/manager6/qemu/HDEdit.js | 71 - 1 file changed, 70 insertions(+), 1 deletion(-) diff --git a/www/manager6/qemu/HDEdit.js b/www/manager6/qemu/HDEdit.js index b78647ec..d6357a91 100644 --- a/www/manager6/qemu/HDEdit.js +++ b/www/manager6/qemu/HDEdit.js @@ -78,11 +78,14 @@ Ext.define('PVE.qemu.HDInputPanel', { if (values.hdimage) { me.drive.file = values.hdimage; } else { - me.drive.file = values.hdstorage + ":" + values.disksize; + let disksize = values['import-from'] ? 0 : values.disksize; + me.drive.file = `${values.hdstorage}:${disksize}`; + PVE.Utils.propertyStringSet(me.drive, values['import-from'], 'import-from'); } me.drive.format = values.diskformat; } + PVE.Utils.propertyStringSet(me.drive, !values.backup, 'backup', '0'); PVE.Utils.propertyStringSet(me.drive, values.noreplicate, 'replicate', 'no'); PVE.Utils.propertyStringSet(me.drive, values.discard, 'discard', 'on'); @@ -168,6 +171,11 @@ Ext.define('PVE.qemu.HDInputPanel', { var me = this; me.down('#hdstorage').setNodename(nodename); me.down('#hdimage').setStorage(undefined, nodename); + + me.lookup('new-disk').setNodename(nodename); + me.lookup('import-source').setNodename(nodename); + me.lookup('import-source-file').setNodename(nodename); + me.lookup('import-target').setNodename(nodename); }, hasAdvanced: true, @@ -221,12 +229,73 @@ Ext.define('PVE.qemu.HDInputPanel', { column1.push(me.unusedDisks); } else if (me.isCreate) { column1.push({ + xtype: 'proxmoxcheckbox', + isFormField: false, + fieldLabel: gettext("Import Image"), + listeners: { + change: function(_cb, value) { + me.lookup('new-disk').setVisible(!value); + me.lookup('new-disk').setDisabled(!!value); + + 
let importSource = me.lookup('import-source'); + importSource.setVisible(!!value); + importSource.setDisabled(!value); + me.lookup('import-source-file').setVisible(!!value); + me.lookup('import-source-file').setDisabled(!value || !importSource.getValue()); + + me.lookup('import-target').setVisible(!!value); + me.lookup('import-target').setDisabled(!value); + }, + }, + }); + column1.push({ + reference: 'new-disk', xtype: 'pveDiskStorageSelector', storageContent: 'images', name: 'disk', nodename: me.nodename, autoSelect: me.insideWizard, }); + column1.push({ + xtype: 'pveStorageSelector', + reference: 'import-source', + fieldLabel: gettext('Import Storage'), + name: 'import-source-storage', + hidden: true, + disabled: true, + storageContent: 'import', + nodename: me.nodename, + autoSelect: me.insideWizard, + listeners: { + change: function(selector, storage) { + me.lookup('import-source-file').setStorage(storage); + me.lookup('import-source-file').setDisabled(!storage || selector.isDisabled()); + }, + }, + }); + column1.push({ + xtype: 'pveFileSelector', + reference: 'import-source-file', + fieldLabel: gettext("Select Image"), + hidden: true, + disabled: true, + storageContent: 'import', + name: 'import-from', + filter: (rec) => ['qcow2', 'vmdk', 'raw'].indexOf(rec?.data?.format) !== -1, + nodename: me.nodename, + }); + column1.push({ + xtype: 'pveDiskStorageSelector', + reference: 'import-target', + storageLabel: gettext('Target Storage'), + hidden: true, + disabled: true, + hideSize: true, + storageContent: 'images', + name: 'target', + nodename: me.nodename, + autoSelect: me.insideWizard, + }); } else { column1.push({ xtype: 'textfield', -- 2.39.5
Re: [pve-devel] [PATCH pve-manager 2/2] move /run/vzdump.lock to /run/lock/vzdump.lock
Am 24.03.25 um 14:04 schrieb Jing Luo:
> I just found that this is technically a systemd thing: at boot time,
> systemd creates the symlinks /var/run -> /run and /var/lock ->
> /run/lock, this
> is written in /usr/lib/tmpfiles.d/var.conf and
> /usr/lib/tmpfiles.d/legacy.conf,
> which has been true at least since Debian switched to systemd in 2015.
>
> So these two have always been symlinks (unless the machine has not been
> restarted
> in the past 10+ years).
>
> [1] https://wiki.debian.org/ReleaseGoals/RunDirectory

Thanks for checking, then it indeed seems fine to just do those two specific path moves without any extra coordination or backward compat code required.

So for moving file paths from /var/run/ to /run/ and /var/lock/ to /run/lock, i.e. where the actual location of the file does not change at all due the respective pairs pointing to the exact same directory, this series is:

Acked-by: Thomas Lamprecht

Anything where the underlying directory changes is still NAK'd if it does not come with some backward compat handling code, FWICT this is really just the single vzdump.lock patch though.
[pve-devel] [PATCH manager v4 3/3] ui: qemu hd edit: allow importing a disk from the import storage
adds a checkbox 'import image' above the storage selector which: * hides the original storage selector * shows a 'source storage' selector * shows a 'import file' selector * shows a 'target storage' selector Since the wizard and the hd edit share this panel, this also works in the wizard. Signed-off-by: Dominik Csapak --- changes from v3: * also allow importing raw and vmdk (by changing the filter) www/manager6/qemu/HDEdit.js | 71 - 1 file changed, 70 insertions(+), 1 deletion(-) diff --git a/www/manager6/qemu/HDEdit.js b/www/manager6/qemu/HDEdit.js index b78647ec..d6357a91 100644 --- a/www/manager6/qemu/HDEdit.js +++ b/www/manager6/qemu/HDEdit.js @@ -78,11 +78,14 @@ Ext.define('PVE.qemu.HDInputPanel', { if (values.hdimage) { me.drive.file = values.hdimage; } else { - me.drive.file = values.hdstorage + ":" + values.disksize; + let disksize = values['import-from'] ? 0 : values.disksize; + me.drive.file = `${values.hdstorage}:${disksize}`; + PVE.Utils.propertyStringSet(me.drive, values['import-from'], 'import-from'); } me.drive.format = values.diskformat; } + PVE.Utils.propertyStringSet(me.drive, !values.backup, 'backup', '0'); PVE.Utils.propertyStringSet(me.drive, values.noreplicate, 'replicate', 'no'); PVE.Utils.propertyStringSet(me.drive, values.discard, 'discard', 'on'); @@ -168,6 +171,11 @@ Ext.define('PVE.qemu.HDInputPanel', { var me = this; me.down('#hdstorage').setNodename(nodename); me.down('#hdimage').setStorage(undefined, nodename); + + me.lookup('new-disk').setNodename(nodename); + me.lookup('import-source').setNodename(nodename); + me.lookup('import-source-file').setNodename(nodename); + me.lookup('import-target').setNodename(nodename); }, hasAdvanced: true, @@ -221,12 +229,73 @@ Ext.define('PVE.qemu.HDInputPanel', { column1.push(me.unusedDisks); } else if (me.isCreate) { column1.push({ + xtype: 'proxmoxcheckbox', + isFormField: false, + fieldLabel: gettext("Import Image"), + listeners: { + change: function(_cb, value) { + 
me.lookup('new-disk').setVisible(!value); + me.lookup('new-disk').setDisabled(!!value); + + let importSource = me.lookup('import-source'); + importSource.setVisible(!!value); + importSource.setDisabled(!value); + me.lookup('import-source-file').setVisible(!!value); + me.lookup('import-source-file').setDisabled(!value || !importSource.getValue()); + + me.lookup('import-target').setVisible(!!value); + me.lookup('import-target').setDisabled(!value); + }, + }, + }); + column1.push({ + reference: 'new-disk', xtype: 'pveDiskStorageSelector', storageContent: 'images', name: 'disk', nodename: me.nodename, autoSelect: me.insideWizard, }); + column1.push({ + xtype: 'pveStorageSelector', + reference: 'import-source', + fieldLabel: gettext('Import Storage'), + name: 'import-source-storage', + hidden: true, + disabled: true, + storageContent: 'import', + nodename: me.nodename, + autoSelect: me.insideWizard, + listeners: { + change: function(selector, storage) { + me.lookup('import-source-file').setStorage(storage); + me.lookup('import-source-file').setDisabled(!storage || selector.isDisabled()); + }, + }, + }); + column1.push({ + xtype: 'pveFileSelector', + reference: 'import-source-file', + fieldLabel: gettext("Select Image"), + hidden: true, + disabled: true, + storageContent: 'import', + name: 'import-from', + filter: (rec) => ['qcow2', 'vmdk', 'raw'].indexOf(rec?.data?.format) !== -1, + nodename: me.nodename, + }); + column1.push({ + xtype: 'pveDiskStorageSelector', + reference: 'import-target', + storageLabel: gettext('Target Storage'), + hidden: true, + disabled: true, + hideSize: true, + storageContent: 'images', + name: 'target', + nodename: me.nodename, + autoSelect: me.insideWizard, + }); } else { col
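The change to `onGetValues` above swaps the requested disk size for `0` whenever an import source is selected, and appends an `import-from` entry to the drive property string. A rough Python sketch of that logic (function name and sample values are illustrative, not from the patch):

```python
def build_drive_string(hdstorage, disksize, import_from=None):
    """Sketch of how HDEdit assembles the drive value: when importing,
    the size is forced to 0 and an 'import-from' property is appended."""
    size = 0 if import_from else disksize
    drive = f"{hdstorage}:{size}"
    if import_from:
        drive += f",import-from={import_from}"
    return drive
```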
[pve-devel] [PATCH storage v5 07/32] add storage_has_feature() helper function
Which looks up whether a storage supports a given feature in its 'plugindata'. This is intentionally kept simple and not implemented as a plugin method for now. Should it ever become more complex requiring plugins to override the default implementation, it can later be changed to a method. Suggested-by: Fabian Grünbichler Signed-off-by: Fiona Ebner --- src/PVE/Storage.pm| 8 src/PVE/Storage/Plugin.pm | 10 ++ 2 files changed, 18 insertions(+) diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm index 3b4f041..d582af4 100755 --- a/src/PVE/Storage.pm +++ b/src/PVE/Storage.pm @@ -213,6 +213,14 @@ sub storage_check_enabled { return storage_check_node($cfg, $storeid, $node, $noerr); } +sub storage_has_feature { +my ($cfg, $storeid, $feature) = @_; + +my $scfg = storage_config($cfg, $storeid); + +return PVE::Storage::Plugin::storage_has_feature($scfg->{type}, $feature); +} + # storage_can_replicate: # return true if storage supports replication # (volumes allocated with vdisk_alloc() has replication feature) diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm index 65cf43f..80daeea 100644 --- a/src/PVE/Storage/Plugin.pm +++ b/src/PVE/Storage/Plugin.pm @@ -246,6 +246,16 @@ sub dirs_hash_to_string { return join(',', map { "$_=$hash->{$_}" } sort keys %$hash); } +sub storage_has_feature { +my ($type, $feature) = @_; + +my $data = $defaultData->{plugindata}->{$type}; +if (my $features = $data->{features}) { + return $features->{$feature}; +} +return; +} + sub default_format { my ($scfg) = @_; -- 2.39.5 ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
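The helper is a plain nested lookup in `plugindata`, as the patch shows. The same shape in Python, for illustration (the feature name used in the test is hypothetical):

```python
def storage_has_feature(plugindata, storage_type, feature):
    """Sketch of PVE::Storage::Plugin::storage_has_feature: look up the
    feature flag in the plugin's static 'plugindata' registration."""
    features = plugindata.get(storage_type, {}).get("features")
    if features:
        return features.get(feature)
    return None
```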
Re: [pve-devel] [PATCH qemu] PVE backup: backup access api: simplify bitmap logic
Superseded by: https://lore.proxmox.com/pve-devel/20250404094041.153518-1-f.eb...@proxmox.com/ improving the QAPI doc a bit ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Re: [pve-devel] [RFC PATCH] ui: lxc: set nesting to false for privileged container during creation
Thanks for this patch, some comments inline. Am 18.03.25 um 17:14 schrieb Michael Köppl: > The current implementation is slightly misleading. When creating a > privileged container, the nesting checkbox is disabled but keeps its > current state. However, nesting is not enabled for privileged containers > even if the checkbox was set to true. Enabling nesting is still possible > through the Options menu. > > Signed-off-by: Michael Köppl > --- > As an alternative to this, since we already discourage > the use of privileged containers [0], removing the checkbox for creating > privileged > containers in the web UI might make sense. For the rare cases where they > are required, they can still be created using pct (although the question > remains whether privileged should be the default for pct create). > > [0] > https://forum.proxmox.com/threads/debian-12-lxc-template-systemd-failures.151630/post-686850 > > www/manager6/lxc/CreateWizard.js | 11 ++- > 1 file changed, 10 insertions(+), 1 deletion(-) > > diff --git a/www/manager6/lxc/CreateWizard.js > b/www/manager6/lxc/CreateWizard.js > index 62cda27a..c7ee56f7 100644 > --- a/www/manager6/lxc/CreateWizard.js > +++ b/www/manager6/lxc/CreateWizard.js > @@ -7,6 +7,7 @@ Ext.define('PVE.lxc.CreateWizard', { > nodename: '', > storage: '', > unprivileged: true, > + nestingEnabled: true, This works fine as is, but I think there is a better solution available here (see below) and such use of viewModel data binding can come back and bite one when extending/refactoring such code, as the code using it needs to be careful to not interfere with the automatic two-way binding to the field's value. I do not want to discourage using the viewModel, most of the time it can be fine and result in even nicer code overall, but it has its oddities. 
> }, > formulas: { > cgroupMode: function(get) { > @@ -69,14 +70,22 @@ Ext.define('PVE.lxc.CreateWizard', { > value: '{unprivileged}', > }, > fieldLabel: gettext('Unprivileged container'), > + listeners: { > + change: function(checkbox, value) { > + var viewModel = checkbox.lookupViewModel(); style nit: we mainly use `let` variables for new code as that has less confusing scope rules, i.e. variables declared through var are also available in the parent block scope. > + if (!value && viewModel) { > + viewModel.set('nestingEnabled', false); You could also avoid the intermediate viewModel variable that is used just once and use an optional-chaining operator, like: checkbox.lookupViewModel()?.set('nestingEnabled', false); > + } > + }, > + }, > }, > { > xtype: 'proxmoxcheckbox', FYI that component comes from our proxmox-widget-toolkit and is a small override of the original ExtJS checkbox. One thing it provides is a `clearOnDisable` config flag that if set to true should exactly provide the behavior you want here without needing to hook into the unprivileged change listener. See src/form/Checkbox.js in the proxmox-widget-toolkit for the rather trivial implementation of that flag. > name: 'features', > inputValue: 'nesting=1', > - value: true, > bind: { > disabled: '{!unprivileged}', > + value: '{nestingEnabled}', > }, > fieldLabel: gettext('Nesting'), > }, ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH proxmox-perl-rs 4/7] perl-rs: sdn: implement OpenFabric interface file configuration generation
Add function to generate /etc/network/interfaces configuration for OpenFabric nodes: - Auto-create dummy interfaces with proper router-id - Configure interface addresses and IP forwarding - Support for both IPv4 and IPv6 addressing on both dummy and other interfaces Signed-off-by: Gabriel Goller --- pve-rs/src/sdn/openfabric.rs | 72 1 file changed, 72 insertions(+) diff --git a/pve-rs/src/sdn/openfabric.rs b/pve-rs/src/sdn/openfabric.rs index 680b3d081e47..2d21e6dae5e1 100644 --- a/pve-rs/src/sdn/openfabric.rs +++ b/pve-rs/src/sdn/openfabric.rs @@ -353,6 +353,78 @@ mod export { .ok_or(anyhow::anyhow!("node not found")) } +#[export] +fn get_interfaces_etc_network_config( +#[try_from_ref] this: &PerlSectionConfig, +node: Hostname, +) -> Result { +let guard = this.section_config.lock().unwrap(); +let mut interfaces = String::new(); + +guard.iter().try_for_each(|section| { +if let OpenFabricSectionConfig::Node(node_section) = section.1 { +if node_section.node_id.node == node { +// create dummy interface for this fabric +writeln!(interfaces)?; +writeln!(interfaces, "auto dummy_{}", node_section.node_id.fabric_id)?; +match node_section.router_id { +IpAddr::V4(_) => writeln!( +interfaces, +"iface dummy_{} inet static", +node_section.node_id.fabric_id +)?, +IpAddr::V6(_) => writeln!( +interfaces, +"iface dummy_{} inet6 static", +node_section.node_id.fabric_id +)?, +} +writeln!(interfaces, "\tlink-type dummy")?; +writeln!(interfaces, "\tip-forward 1")?; +// add dummy interface address as /32 +match node_section.router_id { +IpAddr::V4(ipv4_addr) => { +writeln!(interfaces, "\taddress {}/32", ipv4_addr)? +} +IpAddr::V6(ipv6_addr) => { +writeln!(interfaces, "\taddress {}/128", ipv6_addr)? 
+} +} + +// add ip-addrs to all other interfaces and ensure they exist +// also enable ip-forwarding on all interfaces as this is needed for unnumbered +// peering +node_section +.clone() +.interface +.into_iter() +.try_for_each(|i| { +let interface_name = i.name.clone(); +writeln!(interfaces)?; +writeln!(interfaces, "auto {interface_name}")?; +if let Some(ip) = i.ip.map(|i| i.to_string()) { +writeln!(interfaces, "iface {interface_name} inet static")?; +writeln!(interfaces, "\taddress {}", ip)?; +writeln!(interfaces, "\tip-forward 1")?; +} +if let Some(ipv6) = i.ipv6.map(|i| i.to_string()) { +writeln!(interfaces, "iface {interface_name} inet6 static")?; +writeln!(interfaces, "\taddress {ipv6}")?; +writeln!(interfaces, "\tip6-forward 1")?; +} +if i.ip.is_none() && i.ipv6.is_none() { +writeln!(interfaces, "iface {interface_name} inet manual")?; +writeln!(interfaces, "\tip-forward 1")?; +} +Ok::<(), std::fmt::Error>(()) +})?; +} +} +Ok::<(), std::fmt::Error>(()) +})?; +Ok(interfaces) +} + #[export] pub fn enabled_daemons( #[try_from_ref] this: &PerlSectionConfig, -- 2.39.5 ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
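The dummy-interface stanza the Rust code writes per fabric node can be sketched as follows (interface naming mirrors the patch, addresses are sample values):

```python
import ipaddress

def dummy_interface_stanza(fabric_id, router_id):
    """Sketch of the /etc/network/interfaces fragment emitted for a
    fabric's dummy interface: inet vs. inet6 and /32 vs. /128 are chosen
    from the router-id's address family, with ip-forward enabled."""
    ip = ipaddress.ip_address(router_id)
    family = "inet" if ip.version == 4 else "inet6"
    prefix = 32 if ip.version == 4 else 128
    lines = [
        "",
        f"auto dummy_{fabric_id}",
        f"iface dummy_{fabric_id} {family} static",
        "\tlink-type dummy",
        "\tip-forward 1",
        f"\taddress {ip}/{prefix}",
    ]
    return "\n".join(lines) + "\n"
```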
[pve-devel] [PATCH v1 pve-storage 7/8] pluginbase: document volume operations
Add docstrings for the following methods: - list_volumes - get_volume_attribute - update_volume_attribute - volume_size_info - volume_resize - volume_snapshot - volume_snapshot_info - volume_rollback_is_possible - volume_snapshot_rollback - volume_snapshot_delete - volume_snapshot_needs_fsfreeze - storage_can_replicate - volume_has_feature - map_volume - unmap_volume - activate_volume - deactivate_volume - rename_volume - prune_backups Signed-off-by: Max Carrara Co-authored-by: Maximiliano Sandoval --- src/PVE/Storage/PluginBase.pm | 401 -- 1 file changed, 388 insertions(+), 13 deletions(-) diff --git a/src/PVE/Storage/PluginBase.pm b/src/PVE/Storage/PluginBase.pm index 37b1471..a1bfc8d 100644 --- a/src/PVE/Storage/PluginBase.pm +++ b/src/PVE/Storage/PluginBase.pm @@ -861,108 +861,483 @@ sub free_image { =cut +=head3 $plugin->list_volumes($storeid, \%scfg, $vmid, \@content_types) + +B Must be implemented in every storage plugin. + +Returns a list of volumes for the given guest whose content type is within the +given C<\@content_types>. If C<\@content_types> is empty, no volumes will be +returned. See Cplugindata()" >>> for all content types. + +# List all backups for a guest +my $backups = $plugin->list_volumes($storeid, $scfg, $vmid, ['backup']); + +# List all containers and virtual machines on the storage +my $guests = $plugin->list_volumes($storeid, $scfg, undef, ['rootdir', 'images']) + +The returned listref of hashrefs has the following structure: + +[ + { + content => "images", + ctime => "1702298038", # creation time as unix timestamp + format => "raw", + size => 9663676416, # in bytes! + vmid => 102, + volid => "local-lvm:vm-102-disk-0", + }, + # [...] +] + +Backups will also contain additional keys: + +[ + { + content => "backup", + ctime => 1742405070, # creation time as unix timestamp + format => "tar.zst", + notes => "...", # comment that was entered when backup was created + size => 328906840, # in bytes! 
+ subtype => "lxc", # "lxc" for containers, "qemu" for VMs + vmid => 106, + volid => "local:backup/vzdump-lxc-106-2025_03_19-18_24_30.tar.zst", + }, + # [...] +] + +=cut + sub list_volumes { my ($class, $storeid, $scfg, $vmid, $content_types) = @_; croak "implement me in sub-class\n"; } -# Returns undef if the attribute is not supported for the volume. -# Should die if there is an error fetching the attribute. -# Possible attributes: -# notes - user-provided comments/notes. -# protected - not to be removed by free_image, and for backups, ignored when pruning. +=head3 $plugin->get_volume_attribute(\%scfg, $storeid, $volname, $attribute) + +=head3 $plugin->get_volume_attribute(...) + +B Must be implemented in every storage plugin. + +Returns the value of the given C<$attribute> for a volume. If the attribute isn't +supported for the volume, returns C. + +Cs if there is an error fetching the attribute. + +B + +=over + +=item * C (returns scalar) + +User-provided comments / notes. + +=item * C (returns scalar) + +When set to C<1>, the volume must not be removed by Cfree_image(...)" >>>. +Backups with C set to C<1> are ignored when pruning. + +=back + +=cut + sub get_volume_attribute { my ($class, $scfg, $storeid, $volname, $attribute) = @_; croak "implement me in sub-class\n"; } -# Dies if the attribute is not supported for the volume. +=head3 $plugin->update_volume_attribute(\%scfg, $storeid, $volname, $attribute, $value) + +=head3 $plugin->update_volume_attribute(...) + +B Must be implemented in every storage plugin. + +Sets the value of the given C<$attribute> for a volume to C<$value>. + +Cs if the attribute isn't supported for the volume (or storage). + +For a list of supported attributes, see Cget_volume_attribute(...)" >>>. 
+ +=cut + sub update_volume_attribute { my ($class, $scfg, $storeid, $volname, $attribute, $value) = @_; croak "implement me in sub-class\n"; } +=head3 $plugin->volume_size_info(\%scfg, $storeid, $volname [, $timeout]) + +B Must be implemented in every storage plugin. + +Returns information about the given volume's size. In scalar context, this returns +just the volume's size in bytes: + +my $size = $plugin->volume_size_info($scfg, $storeid, $volname, $timeout) + +In list context, returns an array with the following structure: + +my ($size, $format, $used, $parent, $ctime) = $plugin->volume_size_info( + $scfg, $storeid, $volname, $timeout +) + +where C<$size> is the size of the volume in bytes, C<$format> one of the possible +L<< formats listed in C|/"$plugin->plugindata()" >>, C<$used> the +amount of space used in bytes, C<$parent> the (optional) name of the base volume +and C<$ctime> the creation time as unix timestamp. + +Optionally, a C<$timeout> ma
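The documented `list_volumes` contract (empty `\@content_types` returns nothing; an optional guest ID narrows the result) can be sketched over the hashref structure shown in the docstrings:

```python
def list_volumes(all_volumes, vmid=None, content_types=()):
    """Sketch of the list_volumes contract: keep only volumes whose
    content type was requested (empty request -> empty result) and, if
    a vmid is given, only that guest's volumes."""
    return [
        v for v in all_volumes
        if v["content"] in content_types and (vmid is None or v["vmid"] == vmid)
    ]
```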
[pve-devel] [PATCH proxmox-openid v4 1/1] fix #4411: openid: add library code for generic id token claim support
Signed-off-by: Thomas Skinner --- proxmox-openid/src/lib.rs | 55 +-- 1 file changed, 47 insertions(+), 8 deletions(-) diff --git a/proxmox-openid/src/lib.rs b/proxmox-openid/src/lib.rs index fe65fded..bf8c650b 100644 --- a/proxmox-openid/src/lib.rs +++ b/proxmox-openid/src/lib.rs @@ -15,8 +15,11 @@ pub use auth_state::*; use openidconnect::{ //curl::http_client, core::{ -CoreAuthDisplay, CoreAuthPrompt, CoreAuthenticationFlow, CoreClient, CoreGenderClaim, -CoreIdTokenClaims, CoreIdTokenVerifier, CoreProviderMetadata, +CoreAuthDisplay, CoreAuthPrompt, CoreAuthenticationFlow, CoreErrorResponseType, +CoreGenderClaim, CoreIdTokenVerifier, CoreJsonWebKey, CoreJsonWebKeyType, +CoreJsonWebKeyUse, CoreJweContentEncryptionAlgorithm, CoreJwsSigningAlgorithm, +CoreProviderMetadata, CoreRevocableToken, CoreRevocationErrorResponse, +CoreTokenIntrospectionResponse, CoreTokenType, }, AdditionalClaims, AuthenticationContextClass, @@ -24,6 +27,9 @@ use openidconnect::{ ClientId, ClientSecret, CsrfToken, +EmptyExtraTokenFields, +IdTokenClaims, +IdTokenFields, IssuerUrl, Nonce, OAuth2TokenResponse, @@ -31,15 +37,47 @@ use openidconnect::{ PkceCodeVerifier, RedirectUrl, Scope, +StandardErrorResponse, +StandardTokenResponse, UserInfoClaims, }; /// Stores Additional Claims into a serde_json::Value; -#[derive(Debug, Deserialize, Serialize)] +#[derive(Clone, Debug, Default, Deserialize, PartialEq, Serialize)] pub struct GenericClaims(Value); impl AdditionalClaims for GenericClaims {} pub type GenericUserInfoClaims = UserInfoClaims; +pub type GenericIdTokenClaims = IdTokenClaims; + +pub type GenericIdTokenFields = IdTokenFields< +GenericClaims, +EmptyExtraTokenFields, +CoreGenderClaim, +CoreJweContentEncryptionAlgorithm, +CoreJwsSigningAlgorithm, +CoreJsonWebKeyType, +>; + +pub type GenericTokenResponse = StandardTokenResponse; + +pub type GenericClient = openidconnect::Client< +GenericClaims, +CoreAuthDisplay, +CoreGenderClaim, +CoreJweContentEncryptionAlgorithm, +CoreJwsSigningAlgorithm, 
+CoreJsonWebKeyType, +CoreJsonWebKeyUse, +CoreJsonWebKey, +CoreAuthPrompt, +StandardErrorResponse, +GenericTokenResponse, +CoreTokenType, +CoreTokenIntrospectionResponse, +CoreRevocableToken, +CoreRevocationErrorResponse, +>; #[derive(Debug, Deserialize, Serialize, Clone)] pub struct OpenIdConfig { @@ -56,7 +94,7 @@ pub struct OpenIdConfig { } pub struct OpenIdAuthenticator { -client: CoreClient, +client: GenericClient, config: OpenIdConfig, } @@ -120,8 +158,9 @@ impl OpenIdAuthenticator { let provider_metadata = CoreProviderMetadata::discover(&issuer_url, http_client)?; -let client = CoreClient::from_provider_metadata(provider_metadata, client_id, client_key) -.set_redirect_uri(RedirectUrl::new(String::from(redirect_url))?); +let client = +GenericClient::from_provider_metadata(provider_metadata, client_id, client_key) + .set_redirect_uri(RedirectUrl::new(String::from(redirect_url))?); Ok(Self { client, @@ -195,7 +234,7 @@ impl OpenIdAuthenticator { &self, code: &str, private_auth_state: &PrivateAuthState, -) -> Result<(CoreIdTokenClaims, GenericUserInfoClaims), Error> { +) -> Result<(GenericIdTokenClaims, GenericUserInfoClaims), Error> { let code = AuthorizationCode::new(code.to_string()); // Exchange the code with a token. let token_response = self @@ -206,7 +245,7 @@ impl OpenIdAuthenticator { .map_err(|err| format_err!("Failed to contact token endpoint: {}", err))?; let id_token_verifier: CoreIdTokenVerifier = self.client.id_token_verifier(); -let id_token_claims: &CoreIdTokenClaims = token_response +let id_token_claims: &GenericIdTokenClaims = token_response .extra_fields() .id_token() .expect("Server did not return an ID token") -- 2.39.5 ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Re: [pve-devel] [PATCH qemu-server v7 6/9] api: enable live migration for marked mapped pci devices
On 4/3/25 15:00, Thomas Lamprecht wrote: Am 11.03.25 um 14:20 schrieb Dominik Csapak: They have to be marked as 'live-migration-capable' in the mapping config, and the driver and qemu must support it. For the gui checks, we now return the whole object of the mapped resources, which includes info like the name and if it's marked as live-migration capable. (while deprecating the old 'mapped-resource' return value, since that returns strictly less information) Reviewed-by: Christoph Heiss Reviewed-by: Fiona Ebner Signed-off-by: Dominik Csapak --- PVE/API2/Qemu.pm | 8 +++- PVE/QemuMigrate.pm | 17 - 2 files changed, 19 insertions(+), 6 deletions(-) diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm index 060bca8b..15bf7548 100644 --- a/PVE/API2/Qemu.pm +++ b/PVE/API2/Qemu.pm @@ -4665,13 +4665,18 @@ __PACKAGE__->register_method({ }, description => "List local resources (e.g. pci, usb) that block migration." }, + # FIXME: remove with 9.0 'mapped-resources' => { type => 'array', items => { type => 'string', description => "A mapped resource", }, - description => "List of mapped resources e.g. pci, usb" + description => "List of mapped resources e.g. pci, usb. Deprecated, use 'mapped-resource-info' instead." + }, + 'mapped-resource-info' => { + type => 'object', + description => "Object of mapped resources with additional information such if they're live migratable.", Where is the schema/format for this? New stuff really should be documented here, I mean, have you not sent enough schema updates to unlock basic features in the rust based PVE api crate used in PDM? ;-) Yes, you're right. I'll send a follow up for that if that's fine with you. 
}, }, }, @@ -4737,6 +4742,7 @@ __PACKAGE__->register_method({ $res->{local_resources} = $local_resources; $res->{'mapped-resources'} = [ sort keys $mapped_resources->%* ]; + $res->{'mapped-resource-info'} = $mapped_resources; would be IMO nicer to add returning that info upfront and then combine this with patch with 4/9 to have two patches that each do one thing, and not a split/mix. But I probably will just apply this as is with trying to bend the commit message from implementation details to the effect of the patch, will not be ideal, but I want to finish this finally.. If you want I can try to rework this part of the series, since it does make sense like you described, but I'm not sure I could finish that today. No offense to anybody, but IMO does not pain the best picture considering this has two R-b tags.. return $res; diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm index 2153ac42..6909fc82 100644 --- a/PVE/QemuMigrate.pm +++ b/PVE/QemuMigrate.pm @@ -243,11 +243,18 @@ sub prepare { if (scalar(keys $mapped_res->%*)) { my $missing_mappings = $missing_mappings_by_node->{$self->{node}}; - if ($running) { - my $mapped_text = join(", ", sort keys $mapped_res->%*); - die "can't migrate running VM which uses mapped devices: $mapped_text\n"; - } elsif (scalar($missing_mappings->@*)) { - die "can't migrate to '$self->{node}': missing mapped devices " . join(", ", $missing_mappings->@*) . 
"\n"; + my $missing_live_mappings = []; + for my $key (sort keys $mapped_res->%*) { + my $res = $mapped_res->{$key}; + my $name = "$key:$res->{name}"; + push $missing_live_mappings->@*, $name if !$res->{'live-migration'}; + } + if (scalar($missing_mappings->@*)) { + my $missing = join(", ", $missing_mappings->@*); + die "can't migrate to '$self->{node}': missing mapped devices $missing\n"; + } elsif ($running && scalar($missing_live_mappings->@*)) { + my $missing = join(", ", $missing_live_mappings->@*); + die "can't live migrate running VM which uses following mapped devices: $missing\n"; } else { $self->log('info', "migrating VM which uses mapped local devices"); } ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH proxmox-perl-rs 1/7] perl-rs: sdn: initial fabric infrastructure
Add SDN fabric support with OpenFabric and OSPF configuration parsing. Implements PerlSectionConfig wrapper and Perl module exports for fabric configuration management. Signed-off-by: Gabriel Goller --- pve-rs/Cargo.toml | 6 - pve-rs/Makefile | 1 + pve-rs/src/lib.rs | 1 + pve-rs/src/sdn/fabrics.rs | 50 +++ pve-rs/src/sdn/mod.rs | 1 + 5 files changed, 58 insertions(+), 1 deletion(-) create mode 100644 pve-rs/src/sdn/fabrics.rs create mode 100644 pve-rs/src/sdn/mod.rs diff --git a/pve-rs/Cargo.toml b/pve-rs/Cargo.toml index f2e495d33743..441b417a961a 100644 --- a/pve-rs/Cargo.toml +++ b/pve-rs/Cargo.toml @@ -39,9 +39,13 @@ proxmox-log = "0.2" proxmox-notify = { version = "0.5", features = ["pve-context"] } proxmox-openid = "0.10" proxmox-resource-scheduling = "0.3.0" +proxmox-schema = "4.0.0" +proxmox-section-config = "2.1.1" proxmox-shared-cache = "0.1.0" proxmox-subscription = "0.5" proxmox-sys = "0.6" proxmox-tfa = { version = "5", features = ["api"] } proxmox-time = "2" -proxmox-ve-config = { version = "0.2.1" } +proxmox-ve-config = { version = "0.2.1", features=["frr"] } +proxmox-frr = { version = "0.1" } +proxmox-network-types = { version = "0.1" } diff --git a/pve-rs/Makefile b/pve-rs/Makefile index d01da692d8c9..86af16eb5e04 100644 --- a/pve-rs/Makefile +++ b/pve-rs/Makefile @@ -31,6 +31,7 @@ PERLMOD_PACKAGES := \ PVE::RS::Firewall::SDN \ PVE::RS::OpenId \ PVE::RS::ResourceScheduling::Static \ + PVE::RS::SDN::Fabrics \ PVE::RS::TFA PERLMOD_PACKAGE_FILES := $(addsuffix .pm,$(subst ::,/,$(PERLMOD_PACKAGES))) diff --git a/pve-rs/src/lib.rs b/pve-rs/src/lib.rs index 3de37d17fab6..12ee87a91cc6 100644 --- a/pve-rs/src/lib.rs +++ b/pve-rs/src/lib.rs @@ -15,6 +15,7 @@ pub mod apt; pub mod firewall; pub mod openid; pub mod resource_scheduling; +pub mod sdn; pub mod tfa; #[perlmod::package(name = "Proxmox::Lib::PVE", lib = "pve_rs")] diff --git a/pve-rs/src/sdn/fabrics.rs b/pve-rs/src/sdn/fabrics.rs new file mode 100644 index ..a761cea36ec0 --- /dev/null +++ 
b/pve-rs/src/sdn/fabrics.rs @@ -0,0 +1,50 @@ +#[perlmod::package(name = "PVE::RS::SDN::Fabrics", lib = "pve_rs")] +pub mod export { +use std::sync::Mutex; + +use anyhow::Error; +use proxmox_section_config::{ +typed::ApiSectionDataEntry, typed::SectionConfigData as TypedSectionConfigData, +}; +use proxmox_ve_config::sdn::fabric::{ +openfabric::OpenFabricSectionConfig, ospf::OspfSectionConfig, +}; +use serde::{Deserialize, Serialize}; + +pub struct PerlSectionConfig { +pub section_config: Mutex>, +} + +impl PerlSectionConfig +where +T: Send + Sync + Clone, +{ +pub fn into_inner(self) -> Result, anyhow::Error> { +let value = self.section_config.into_inner().unwrap(); +Ok(value) +} +} + +#[derive(Serialize, Deserialize)] +struct AllConfigs { +openfabric: Vec, +ospf: Vec, +} + +/// Get all the config. This takes the raw openfabric and ospf config, parses, and returns +/// both. +#[export] +fn config(raw_openfabric: &[u8], raw_ospf: &[u8]) -> Result { +let raw_openfabric = std::str::from_utf8(raw_openfabric)?; +let raw_ospf = std::str::from_utf8(raw_ospf)?; + +let openfabric = +OpenFabricSectionConfig::parse_section_config("openfabric.cfg", raw_openfabric)?; +let ospf = OspfSectionConfig::parse_section_config("ospf.cfg", raw_ospf)?; + +Ok(AllConfigs { +openfabric: openfabric.into_iter().map(|e| e.1).collect(), +ospf: ospf.into_iter().map(|e| e.1).collect(), +}) +} +} diff --git a/pve-rs/src/sdn/mod.rs b/pve-rs/src/sdn/mod.rs new file mode 100644 index ..3e3b1376f8d6 --- /dev/null +++ b/pve-rs/src/sdn/mod.rs @@ -0,0 +1 @@ +pub mod fabrics; -- 2.39.5 ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [POC storage v7 16/37] add backup provider example
The example uses a simple directory structure to save the backups, grouped by guest ID. VM backups are saved as configuration files and qcow2 images, with backing files when doing incremental backups. Container backups are saved as configuration files and a tar file or squashfs image (added to test the 'directory' restore mechanism). Whether to use incremental VM backups and which backup mechanisms to use can be configured in the storage configuration. The 'nbdinfo' binary from the 'libnbd-bin' package is required for backup mechanism 'nbd' for VM backups, the 'mksquashfs' binary from the 'squashfs-tools' package is required for backup mechanism 'squashfs' for containers. Signed-off-by: Fiona Ebner --- Changes in v7: * Support per-device bitmap names. Implement new backup_vm_available_bitmaps() method. * Adapt to changed backup_get_mechanism() signature. .../BackupProvider/Plugin/DirectoryExample.pm | 784 ++ src/PVE/BackupProvider/Plugin/Makefile| 2 +- .../Custom/BackupProviderDirExamplePlugin.pm | 308 +++ src/PVE/Storage/Custom/Makefile | 5 + src/PVE/Storage/Makefile | 1 + 5 files changed, 1099 insertions(+), 1 deletion(-) create mode 100644 src/PVE/BackupProvider/Plugin/DirectoryExample.pm create mode 100644 src/PVE/Storage/Custom/BackupProviderDirExamplePlugin.pm create mode 100644 src/PVE/Storage/Custom/Makefile diff --git a/src/PVE/BackupProvider/Plugin/DirectoryExample.pm b/src/PVE/BackupProvider/Plugin/DirectoryExample.pm new file mode 100644 index 000..af49552 --- /dev/null +++ b/src/PVE/BackupProvider/Plugin/DirectoryExample.pm @@ -0,0 +1,784 @@ +package PVE::BackupProvider::Plugin::DirectoryExample; + +use strict; +use warnings; + +use Fcntl qw(SEEK_SET); +use File::Path qw(make_path remove_tree); +use IO::File; +use IPC::Open3; + +use PVE::Storage::Common; +use PVE::Storage::Plugin; +use PVE::Tools qw(file_get_contents file_read_firstline file_set_contents run_command); + +use base qw(PVE::BackupProvider::Plugin::Base); + +# Private helpers + +my sub 
log_info { +my ($self, $message) = @_; + +$self->{'log-function'}->('info', $message); +} + +my sub log_warning { +my ($self, $message) = @_; + +$self->{'log-function'}->('warn', $message); +} + +my sub log_error { +my ($self, $message) = @_; + +$self->{'log-function'}->('err', $message); +} + +# NOTE: This is just for proof-of-concept testing! A backup provider plugin should either use the +# 'nbd' backup mechansim and use the NBD protocol or use the 'file-handle' mechanism. There should +# be no need to use /dev/nbdX nodes for proper plugins. +my sub bind_next_free_dev_nbd_node { +my ($options) = @_; + +# /dev/nbdX devices are reserved in a file. Those reservations expires after $expiretime. +# This avoids race conditions between allocation and use. + +die "file '/sys/module/nbd' does not exist - 'nbd' kernel module not loaded?" + if !-e "/sys/module/nbd"; + +my $line = PVE::Tools::file_read_firstline("/sys/module/nbd/parameters/nbds_max") + or die "could not read 'nbds_max' parameter file for 'nbd' kernel module\n"; +my ($nbds_max) = ($line =~ m/(\d+)/) + or die "could not determine 'nbds_max' parameter for 'nbd' kernel module\n"; + +my $filename = "/run/qemu-server/reserved-dev-nbd-nodes"; + +my $code = sub { + my $expiretime = 60; + my $ctime = time(); + + my $used = {}; + my $latest = [0, 0]; + + if (my $fh = IO::File->new ($filename, "r")) { + while (my $line = <$fh>) { + if ($line =~ m/^(\d+)\s(\d+)$/) { + my ($n, $timestamp) = ($1, $2); + + $latest = [$n, $timestamp] if $latest->[1] <= $timestamp; + + if (($timestamp + $expiretime) > $ctime) { + $used->{$n} = $timestamp; # not expired + } + } + } + } + + my $new_n; + for (my $count = 0; $count < $nbds_max; $count++) { + my $n = ($latest->[0] + $count) % $nbds_max; + my $block_device = "/dev/nbd${n}"; + next if $used->{$n}; # reserved + next if !-e $block_device; + + my $st = File::stat::stat("/run/lock/qemu-nbd-nbd${n}"); + next if defined($st) && S_ISSOCK($st->mode) && $st->uid == 0; # in use + + # Used 
to avoid looping if there are other issues then the NBD node being in use + my $socket_error = 0; + eval { + my $errfunc = sub { + my ($line) = @_; + $socket_error = 1 if $line =~ m/^qemu-nbd: Failed to set NBD socket$/; + log_warn($line); + }; + run_command(["qemu-nbd", "-c", $block_device, $options->@*], errfunc => $errfunc); + }; + if (my $err = $@) { + die $err if !$socket_error; + log_warn("una
[pve-devel] [PATCH manager v5 31/32] ui: backup: also check for backup subtype to classify archive
In anticipation of future storage plugins that might not have PBS-specific formats or adhere to the vzdump naming scheme for backups. Signed-off-by: Fiona Ebner --- www/manager6/Utils.js | 10 ++ www/manager6/grid/BackupView.js| 4 ++-- www/manager6/storage/BackupView.js | 4 ++-- 3 files changed, 10 insertions(+), 8 deletions(-) diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js index aa415759..1ed3de5d 100644 --- a/www/manager6/Utils.js +++ b/www/manager6/Utils.js @@ -694,12 +694,14 @@ Ext.define('PVE.Utils', { 'import': gettext('Import'), }, -volume_is_qemu_backup: function(volid, format) { - return format === 'pbs-vm' || volid.match(':backup/vzdump-qemu-'); +volume_is_qemu_backup: function(volume) { + return volume.format === 'pbs-vm' || volume.volid.match(':backup/vzdump-qemu-') || + volume.subtype === 'qemu'; }, -volume_is_lxc_backup: function(volid, format) { - return format === 'pbs-ct' || volid.match(':backup/vzdump-(lxc|openvz)-'); +volume_is_lxc_backup: function(volume) { + return volume.format === 'pbs-ct' || volume.volid.match(':backup/vzdump-(lxc|openvz)-') || + volume.subtype === 'lxc'; }, authSchema: { diff --git a/www/manager6/grid/BackupView.js b/www/manager6/grid/BackupView.js index e71d1c88..ef3649c6 100644 --- a/www/manager6/grid/BackupView.js +++ b/www/manager6/grid/BackupView.js @@ -29,11 +29,11 @@ Ext.define('PVE.grid.BackupView', { var vmtypeFilter; if (vmtype === 'lxc' || vmtype === 'openvz') { vmtypeFilter = function(item) { - return PVE.Utils.volume_is_lxc_backup(item.data.volid, item.data.format); + return PVE.Utils.volume_is_lxc_backup(item.data); }; } else if (vmtype === 'qemu') { vmtypeFilter = function(item) { - return PVE.Utils.volume_is_qemu_backup(item.data.volid, item.data.format); + return PVE.Utils.volume_is_qemu_backup(item.data); }; } else { throw "unsupported VM type '" + vmtype + "'"; diff --git a/www/manager6/storage/BackupView.js b/www/manager6/storage/BackupView.js index db184def..749c2136 100644 --- 
a/www/manager6/storage/BackupView.js +++ b/www/manager6/storage/BackupView.js @@ -86,9 +86,9 @@ Ext.define('PVE.storage.BackupView', { disabled: true, handler: function(b, e, rec) { let vmtype; - if (PVE.Utils.volume_is_qemu_backup(rec.data.volid, rec.data.format)) { + if (PVE.Utils.volume_is_qemu_backup(rec.data)) { vmtype = 'qemu'; - } else if (PVE.Utils.volume_is_lxc_backup(rec.data.volid, rec.data.format)) { + } else if (PVE.Utils.volume_is_lxc_backup(rec.data)) { vmtype = 'lxc'; } else { return; -- 2.39.5 ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
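The classification change above can be sketched outside ExtJS as two plain functions (a simplified illustration — the real helpers live on `PVE.Utils` and operate on record data):

```javascript
// Classify a backup volume by PBS format, by the vzdump naming scheme,
// or -- new with this patch -- by an explicit backup 'subtype' reported
// by the storage plugin.
function volumeIsQemuBackup(volume) {
    return volume.format === 'pbs-vm'
        || /:backup\/vzdump-qemu-/.test(volume.volid || '')
        || volume.subtype === 'qemu';
}

function volumeIsLxcBackup(volume) {
    return volume.format === 'pbs-ct'
        || /:backup\/vzdump-(lxc|openvz)-/.test(volume.volid || '')
        || volume.subtype === 'lxc';
}
```

The subtype check is what lets a future storage plugin mark an archive as a VM or container backup without following the vzdump file-naming convention.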
Re: [pve-devel] [PATCH qemu-server v14 5/12] fix #1027: virtio-fs support
On March 4, 2025 12:57 pm, Markus Frank wrote: > add support for sharing directories with a guest vm. > > virtio-fs needs virtiofsd to be started. > In order to start virtiofsd as a process (despite being a daemon it is > does not run in the background), a double-fork is used. > > virtiofsd should close itself together with QEMU. > > There are the parameters dirid and the optional parameters direct-io, > cache and writeback. Additionally the expose-xattr & expose-acl > parameter can be set to expose xattr & acl settings from the shared > filesystem to the guest system. > > The dirid gets mapped to the path on the current node and is also used > as a mount tag (name used to mount the device on the guest). > > example config: > ``` > virtiofs0: foo,direct-io=1,cache=always,expose-acl=1 > virtiofs1: dirid=bar,cache=never,expose-xattr=1,writeback=1 > ``` > > For information on the optional parameters see the coherent doc patch > and the official gitlab README: > https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/README.md > > Also add a permission check for virtiofs directory access. 
> > Signed-off-by: Markus Frank > --- > v14: > * use max_virtiofs() in check_vm_create_dir_perm > * addressed style nits and improved formatting > * added missing imports/uses > * assert_virtiofs_config now only gets the ostype and the virtiofs cfg > * removed unnecessary checks after parse_property_string > > PVE/API2/Qemu.pm | 41 ++- > PVE/QemuServer.pm | 29 - > PVE/QemuServer/Makefile| 3 +- > PVE/QemuServer/Memory.pm | 22 ++-- > PVE/QemuServer/Virtiofs.pm | 211 + > 5 files changed, 295 insertions(+), 11 deletions(-) > create mode 100644 PVE/QemuServer/Virtiofs.pm > > diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm > index 5ac61aa5..7cefffdf 100644 > --- a/PVE/API2/Qemu.pm > +++ b/PVE/API2/Qemu.pm > @@ -39,6 +39,7 @@ use PVE::QemuServer::MetaInfo; > use PVE::QemuServer::PCI; > use PVE::QemuServer::QMPHelpers; > use PVE::QemuServer::USB; > +use PVE::QemuServer::Virtiofs qw(max_virtiofs); > use PVE::QemuMigrate; > use PVE::RPCEnvironment; > use PVE::AccessControl; > @@ -801,6 +802,33 @@ my sub check_vm_create_hostpci_perm { > return 1; > }; > > +my sub check_dir_perm { > +my ($rpcenv, $authuser, $vmid, $pool, $opt, $value) = @_; > + > +return 1 if $authuser eq 'root@pam'; > + > +$rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Config.Disk']); > + > +my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', > $value); > +$rpcenv->check_full($authuser, "/mapping/dir/$virtiofs->{dirid}", > ['Mapping.Use']); > + > +return 1; > +}; > + > +my sub check_vm_create_dir_perm { > +my ($rpcenv, $authuser, $vmid, $pool, $param) = @_; > + > +return 1 if $authuser eq 'root@pam'; > + > +for (my $i = 0; $i < max_virtiofs(); $i++) { > + my $opt = "virtiofs$i"; > + next if !$param->{$opt}; > + check_dir_perm($rpcenv, $authuser, $vmid, $pool, $opt, $param->{$opt}); > +} > + > +return 1; > +}; > + > my $check_vm_modify_config_perm = sub { > my ($rpcenv, $authuser, $vmid, $pool, $key_list) = @_; > > @@ -811,7 +839,7 @@ my $check_vm_modify_config_perm = sub { > # 
else, as there the permission can be value dependent > next if PVE::QemuServer::is_valid_drivename($opt); > next if $opt eq 'cdrom'; > - next if $opt =~ m/^(?:unused|serial|usb|hostpci)\d+$/; > + next if $opt =~ m/^(?:unused|serial|usb|hostpci|virtiofs)\d+$/; > next if $opt eq 'tags'; > > > @@ -1114,6 +1142,7 @@ __PACKAGE__->register_method({ > &$check_vm_create_serial_perm($rpcenv, $authuser, $vmid, $pool, > $param); > check_vm_create_usb_perm($rpcenv, $authuser, $vmid, $pool, $param); > check_vm_create_hostpci_perm($rpcenv, $authuser, $vmid, $pool, > $param); > + check_vm_create_dir_perm($rpcenv, $authuser, $vmid, $pool, $param); > > PVE::QemuServer::check_bridge_access($rpcenv, $authuser, $param); > &$check_cpu_model_access($rpcenv, $authuser, $param); > @@ -2005,6 +2034,10 @@ my $update_vm_api = sub { > check_hostpci_perm($rpcenv, $authuser, $vmid, undef, $opt, > $val); > PVE::QemuConfig->add_to_pending_delete($conf, $opt, $force); > PVE::QemuConfig->write_config($vmid, $conf); > + } elsif ($opt =~ m/^virtiofs\d$/) { > + check_dir_perm($rpcenv, $authuser, $vmid, undef, $opt, > $val); > + PVE::QemuConfig->add_to_pending_delete($conf, $opt, $force); > + PVE::QemuConfig->write_config($vmid, $conf); > } elsif ($opt eq 'tags') { > assert_tag_permissions($vmid, $val, '', $rpcenv, $authuser); > delete $conf->{$opt}; > @@ -2095,6 +2128,12 @@ my $update_vm_api = sub { > } > check_hostpci_perm($rpcenv, $authuser, $v
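The example configuration lines in the commit message (`virtiofs0: foo,direct-io=1,...` vs. `virtiofs1: dirid=bar,...`) rely on property-string parsing where a bare first value falls back to the default key `dirid`. A rough standalone sketch of that shape (the real parsing is done by `PVE::JSONSchema::parse_property_string` in Perl; this only illustrates the resulting options object):

```javascript
// Parse a virtiofs property string such as
//   "foo,direct-io=1,cache=always,expose-acl=1"
// into an options object. A bare (key-less) value is treated as the
// default key 'dirid', mirroring the two example config lines above.
function parseVirtiofs(value) {
    const opts = {};
    for (const part of value.split(',')) {
        const idx = part.indexOf('=');
        if (idx === -1) {
            opts.dirid = part; // bare value -> default key
        } else {
            opts[part.slice(0, idx)] = part.slice(idx + 1);
        }
    }
    return opts;
}
```

Both spellings from the commit message thus yield a `dirid`, which is then used as the mount tag inside the guest and mapped to a host path via the directory mapping.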
[pve-devel] [PATCH qemu-server] config: add system and service credentials support
Allows to pass system and service credentials to a VM. See [1] for a description of credentials. This can be potentially used to provision a VM as per [2]. Values can be passed either as plain text or as a base64 encoded string when the base64 flag is set. A VM configuration file which, for example, contains: systemd-cred0: name=foo,value=bar systemd-cred1: name=encoded-foo,value=YmFy,base64=1 will have the following arguments added to its kvm command: -smbios 'type=11,value=io.systemd.credential:foo=bar' \ -smbios 'type=11,value=io.systemd.credential.binary:encoded-foo=YmFy' On the guest these credentials can be read via: dmidecode -t 11 via: systemd-creds --system list systemd-creds --system cat $CREDENTIAL_NAME or at: /run/credentials/@system/$CREDENTIAL_NAME [1] https://systemd.io/CREDENTIALS/ [2] https://www.freedesktop.org/software/systemd/man/latest/systemd.system-credentials.html Suggested-by: Wolfgang Bumiller Signed-off-by: Maximiliano Sandoval --- Differences from RFC: - Fixed multiple points from the feedback in the RFC - Uses "systemd-cred$i" instead of "systemd-credential.$name" as a name - The snipped support was dropped - More strict format required on credential names and values PVE/QemuServer.pm | 72 +++ 1 file changed, 72 insertions(+) diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm index ffd5d56b..1f902b8b 100644 --- a/PVE/QemuServer.pm +++ b/PVE/QemuServer.pm @@ -150,6 +150,31 @@ my $watchdog_fmt = { }; PVE::JSONSchema::register_format('pve-qm-watchdog', $watchdog_fmt); +my $MAX_CREDENTIALS = 16; + +my $systemd_cred_fmt = { +value => { + description => 'The credential\'s value. This is a Base64 encoded string.' + .' If the value should contain arbitrary binary data,' + .' then the value can be a Base64 encoded string and the base64=1 flag should be set.', + type => 'string', + pattern => '[A-Za-z0-9+\/]+={0,2}', + typetext => '', +}, +name => { + description => 'The credential\'s name. 
This is a short ASCII string suitable as filename in the filesystem', + type => 'string', + pattern => '[A-Za-z0-9\-\._]+', + typetext => '', +}, +base64 => { + description => 'Whether the credential\'s value is base64 encoded.', + type => 'boolean', + optional => 1, + default => 0, +}, +}; + my $agent_fmt = { enabled => { description => "Enable/disable communication with a QEMU Guest Agent (QGA) running in the VM.", @@ -946,6 +971,13 @@ my $netdesc = { description => "Specify network devices.", }; +my $systemd_cred_desc = { +optional => 1, +type => 'string', +format => $systemd_cred_fmt, +description => 'Specify system and service credentials.', +}; + PVE::JSONSchema::register_standard_option("pve-qm-net", $netdesc); my $ipconfig_fmt = { @@ -1007,6 +1039,10 @@ for (my $i = 0; $i < $MAX_NETS; $i++) { $confdesc_cloudinit->{"ipconfig$i"} = $ipconfigdesc; } +for (my $i = 0; $i < $MAX_CREDENTIALS; $i++) { +$confdesc->{"systemd-cred$i"} = $systemd_cred_desc; +} + foreach my $key (keys %$confdesc_cloudinit) { $confdesc->{$key} = $confdesc_cloudinit->{$key}; } @@ -1955,6 +1991,16 @@ sub parse_guest_agent { return $res; } +sub parse_systemd_credential { +my ($value) = @_; + +return {} if !$value; + +my $res = eval { parse_property_string($systemd_cred_fmt, $value) }; +warn $@ if $@; +return $res; +} + sub get_qga_key { my ($conf, $key) = @_; return undef if !defined($conf->{agent}); @@ -3383,6 +3429,17 @@ my sub get_vga_properties { return ($vga, $qxlnum); } +sub smbios_11_cred_arg { +my ($name, $value, $is_encoded) = @_; + +if ($is_encoded) { + die "value $value is not base64 encoded\n" if $value !~ /^[A-Za-z0-9+\/]+={0,2}$/; + return ('-smbios', "type=11,value=io.systemd.credential.binary:$name=$value"); +} else { + return ('-smbios', "type=11,value=io.systemd.credential:$name=$value"); +} +} + sub config_to_command { my ($storecfg, $vmid, $conf, $defaults, $forcemachine, $forcecpu, $live_restore_backing) = @_; @@ -4015,6 +4072,21 @@ sub config_to_command { push @$cmd, 
'-snapshot'; } +for (my $i = 0; $i < $MAX_CREDENTIALS; $i++) { + my $cred_name = "systemd-cred$i"; + + next if !$conf->{$cred_name}; + + my $opts = parse_systemd_credential($conf->{$cred_name}); + my $name = $opts->{'name'}; + my $is_encoded = $opts->{'base64'}; + my $value = $opts->{'value'}; + + if ($value && $name) { + push @$cmd, smbios_11_cred_arg($name, $value, $is_encoded); + } +} + # add custom args if ($conf->{args}) { my $aa = PVE::Tools::split_args($conf->{args}); -- 2.39.5 ___ pve-devel mailing list pve-deve
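The mapping from a credential entry to the QEMU command line shown in the commit message can be illustrated with a small port of the patch's `smbios_11_cred_arg` helper (hypothetical JavaScript for illustration only; the actual implementation is the Perl shown above):

```javascript
// Build the '-smbios' argument pair for a systemd credential, matching
// the patch: base64-flagged values go through the '.binary' namespace
// and must match the same base64 pattern the schema enforces.
const BASE64_RE = /^[A-Za-z0-9+\/]+={0,2}$/;

function smbios11CredArg(name, value, isEncoded) {
    if (isEncoded) {
        if (!BASE64_RE.test(value)) {
            throw new Error(`value ${value} is not base64 encoded`);
        }
        return ['-smbios', `type=11,value=io.systemd.credential.binary:${name}=${value}`];
    }
    return ['-smbios', `type=11,value=io.systemd.credential:${name}=${value}`];
}
```

This reproduces exactly the two example arguments from the commit message: plain values use `io.systemd.credential:`, base64 values use `io.systemd.credential.binary:`.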
[pve-devel] [PATCH pve-manager 3/4] notification: replication: add common properties to template data
The new PVE::Notify::common_template_data helper gives us a hash of properties which should be available in all notifications (hostname, fqdn, cluster-name at this moment). This commit makes sure that replication notifications have these available. Signed-off-by: Lukas Wagner --- PVE/API2/Replication.pm| 16 +++- templates/default/replication-body.txt.hbs | 2 +- 2 files changed, 8 insertions(+), 10 deletions(-) diff --git a/PVE/API2/Replication.pm b/PVE/API2/Replication.pm index e4a7180f..3a2be73a 100644 --- a/PVE/API2/Replication.pm +++ b/PVE/API2/Replication.pm @@ -112,15 +112,13 @@ my sub _handle_job_err { # The replication job is run every 15 mins if no schedule is set. my $schedule = $job->{schedule} // '*/15'; -my $template_data = { - "failure-count" => $fail_count, - "last-sync" => $jobstate->{last_sync}, - "next-sync" => $next_sync, - "job-id"=> $job->{id}, - "job-target"=> $job->{target}, - "job-schedule" => $schedule, - "error" => $err, -}; +my $template_data = PVE::Notify::common_template_data(); +$template_data->{"failure-count"} = $fail_count; +$template_data->{"last-sync"} = $jobstate->{last_sync}; +$template_data->{"job-id"} = $job->{id}; +$template_data->{"job-target"} = $job->{target}; +$template_data->{"job-schedule"} = $schedule; +$template_data->{"error"} = $err; my $metadata_fields = { type => "replication", diff --git a/templates/default/replication-body.txt.hbs b/templates/default/replication-body.txt.hbs index a9273fef..b1894ce9 100644 --- a/templates/default/replication-body.txt.hbs +++ b/templates/default/replication-body.txt.hbs @@ -9,4 +9,4 @@ Note: The system will now reduce the frequency of error reports, as the job appears to be stuck. {{/if}} Error: -{{ error }} +{{error}} -- 2.39.5 ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
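The restructuring in this patch — start from the cluster-wide common properties, then layer the replication-specific fields on top — can be sketched as follows (field names follow the patch; `common` stands in for the result of `PVE::Notify::common_template_data()`):

```javascript
// Merge common notification properties (hostname, fqdn, cluster-name)
// with the replication-job-specific template fields, as the patch does.
function buildTemplateData(common, job, jobstate, failCount, err) {
    const data = { ...common };
    data['failure-count'] = failCount;
    data['last-sync'] = jobstate.last_sync;
    data['job-id'] = job.id;
    data['job-target'] = job.target;
    data['job-schedule'] = job.schedule ?? '*/15'; // jobs without a schedule run every 15 min
    data['error'] = err;
    return data;
}
```

Because the common properties are written first, a job-specific key of the same name would win — which is the intended precedence here.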
Re: [pve-devel] [PATCH storage v5 09/32] plugin: introduce new_backup_provider() method
Just a short high level nit today, will have to look more closely at this and the series the next days: There's a `new()` which takes an $scfg + $storeid. But later there are some methods taking `$self` (which usually means the thing returned from `new()`), which also get a `$storeid` as additional parameter (but without any `$scfg`). IMO the `$storeid` should be dropped there. On Fri, Mar 21, 2025 at 02:48:29PM +0100, Fiona Ebner wrote: > The new_backup_provider() method can be used by storage plugins for > external backup providers. If the method returns a provider, Proxmox > VE will use callbacks to that provider for backups and restore instead > of using its usual backup/restore mechanisms. > > The backup provider API is split into two parts, both of which again > need different implementations for VM and LXC guests: > > 1. Backup API > > There are two hook callback functions, namely: > 1. job_hook() is called during the start/end/abort phases of the whole >backup job. > 2. backup_hook() is called during the start/end/abort phases of the >backup of an individual guest. There also is a 'prepare' phase >useful for container backups, because the backup method for >containers itself is executed in the user namespace context >associated to the container. > > The backup_get_mechanism() method is used to decide on the backup > mechanism. Currently, 'file-handle' or 'nbd' for VMs, and 'directory' > for containers is possible. The method also let's the plugin indicate > whether to use a bitmap for incremental VM backup or not. It is enough > to implement one mechanism for VMs and one mechanism for containers. > > Next, there are methods for backing up the guest's configuration and > data, backup_vm() for VM backup and backup_container() for container > backup, with the latter running > > Finally, some helpers like getting the provider name or volume ID for > the backup target, as well as for handling the backup log. 
> > 1.1 Backup Mechanisms > > VM: > > Access to the data on the VM's disk from the time the backup started > is made available via a so-called "snapshot access". This is either > the full image, or in case a bitmap is used, the dirty parts of the > image since the last time the bitmap was used for a successful backup. > Reading outside of the dirty parts will result in an error. After > backing up each part of the disk, it should be discarded in the export > to avoid unnecessary space usage on the Proxmox VE side (there is an > associated fleecing image). > > VM mechanism 'file-handle': > > The snapshot access is exposed via a file descriptor. A subroutine to > read the dirty regions for incremental backup is provided as well. > > VM mechanism 'nbd': > > The snapshot access and, if used, bitmap are exported via NBD. > > Container mechanism 'directory': > > A copy or snapshot of the container's filesystem state is made > available as a directory. The method is executed inside the user > namespace associated to the container. > > 2. Restore API > > The restore_get_mechanism() method is used to decide on the restore > mechanism. Currently, 'qemu-img' for VMs, and 'directory' or 'tar' for > containers are possible. It is enough to implement one mechanism for > VMs and one mechanism for containers. > > Next, methods for extracting the guest and firewall configuration and > the implementations of the restore mechanism via a pair of methods: an > init method, for making the data available to Proxmox VE and a cleanup > method that is called after restore. > > For VMs, there also is a restore_vm_get_device_info() helper required, > to get the disks included in the backup and their sizes. > > 2.1. Restore Mechanisms > > VM mechanism 'qemu-img': > > The backup provider gives a path to the disk image that will be > restored. The path needs to be something 'qemu-img' can deal with, > e.g. can also be an NBD URI or similar. 
> > Container mechanism 'directory': > > The backup provider gives the path to a directory with the full > filesystem structure of the container. > > Container mechanism 'tar': > > The backup provider gives the path to a (potentially compressed) tar > archive with the full filesystem structure of the container. > > See the PVE::BackupProvider::Plugin module for the full API > documentation. > > Signed-off-by: Fiona Ebner > --- > > Changes in v5: > * Split API version+age bump into own commit. > * Replace 'block-device' mechanism with 'file-handle'. > > src/PVE/BackupProvider/Makefile|3 + > src/PVE/BackupProvider/Plugin/Base.pm | 1161 > src/PVE/BackupProvider/Plugin/Makefile |5 + > src/PVE/Makefile |1 + > src/PVE/Storage.pm |8 + > src/PVE/Storage/Plugin.pm | 15 + > 6 files changed, 1193 insertions(+) > create mode 100644 src/PVE/BackupProvider/Makefile > create mode 100644 src/PVE/BackupProvider/Plugin/Base.pm > create mode 10
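To make the hook-based backup API described above more concrete, here is a hypothetical sketch of the provider surface — the real API is a Perl module (`PVE::BackupProvider::Plugin::Base`); the method names here only mirror the ones mentioned in the commit message, and the signatures are illustrative:

```javascript
// Illustrative-only shape of an external backup provider.
class ExampleBackupProvider {
    // called at start/end/abort of the whole backup job
    jobHook(phase, info) { this.log(`job phase: ${phase}`); }

    // called at start/end/abort of each guest's backup; containers
    // additionally get a 'prepare' phase run in the user namespace
    backupHook(phase, vmid, vmtype, info) { this.log(`guest ${vmid}: ${phase}`); }

    // pick the data hand-over mechanism:
    // VMs: 'file-handle' or 'nbd'; containers: 'directory'
    backupGetMechanism(vmid, vmtype) {
        return vmtype === 'qemu' ? 'file-handle' : 'directory';
    }

    log(msg) { (this.lines ??= []).push(msg); }
}
```

The key point of the design is that it is enough for a provider to implement one mechanism per guest type; Proxmox VE drives the rest through the hooks.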
Re: [pve-devel] [PATCH v4 pve-storage 3/5] storage: vdisk_free: remove external snapshots
> Alexandre Derumier via pve-devel hat am > 11.03.2025 11:28 CET geschrieben: > > > ___ > pve-devel mailing list > pve-devel@lists.proxmox.com > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel > Signed-off-by: Alexandre Derumier > --- > src/PVE/Storage.pm | 18 +- > src/test/run_test_zfspoolplugin.pl | 18 ++ > 2 files changed, 35 insertions(+), 1 deletion(-) > > diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm > index 79e5c3a..4012905 100755 > --- a/src/PVE/Storage.pm > +++ b/src/PVE/Storage.pm > @@ -1052,7 +1052,23 @@ sub vdisk_free { > > my (undef, undef, undef, undef, undef, $isBase, $format) = > $plugin->parse_volname($volname); > - $cleanup_worker = $plugin->free_image($storeid, $scfg, $volname, > $isBase, $format); > + > +$cleanup_worker = sub { > + #remove external snapshots > + activate_volumes($cfg, [ $volid ]); > + my $snapshots = PVE::Storage::volume_snapshot_info($cfg, $volid); > + for my $snapid (sort { $snapshots->{$b}->{order} <=> > $snapshots->{$a}->{order} } keys %$snapshots) { > + my $snap = $snapshots->{$snapid}; > + next if $snapid eq 'current'; > + next if !$snap->{volid}; > + next if !$snap->{ext}; > + my ($snap_storeid, $snap_volname) = > parse_volume_id($snap->{volid}); > + my (undef, undef, undef, undef, undef, $snap_isBase, > $snap_format) = > + $plugin->parse_volname($volname); > + $plugin->free_image($snap_storeid, $scfg, $snap_volname, > $snap_isBase, $snap_format); > + } > + $plugin->free_image($storeid, $scfg, $volname, $isBase, $format); this is the wrong place to do this, you need to handle this in the cleanup worker returned by the plugin and still execute it here.. 
also you need to honor saferemove when cleaning up the snapshots > + }; > }); > > return if !$cleanup_worker; > diff --git a/src/test/run_test_zfspoolplugin.pl > b/src/test/run_test_zfspoolplugin.pl > index 095ccb3..4ff9f22 100755 > --- a/src/test/run_test_zfspoolplugin.pl > +++ b/src/test/run_test_zfspoolplugin.pl > @@ -6,12 +6,30 @@ use strict; > use warnings; > > use Data::Dumper qw(Dumper); > +use Test::MockModule; > + > use PVE::Storage; > use PVE::Cluster; > use PVE::Tools qw(run_command); > +use PVE::RPCEnvironment; > use Cwd; > $Data::Dumper::Sortkeys = 1; > > +my $rpcenv_module; > +$rpcenv_module = Test::MockModule->new('PVE::RPCEnvironment'); > +$rpcenv_module->mock( > +get_user => sub { > +return 'root@pam'; > +}, > +fork_worker => sub { > + my ($self, $dtype, $id, $user, $function, $background) = @_; > + $function->(123456); > + return '123456'; > +} > +); > + > +my $rpcenv = PVE::RPCEnvironment->init('pub'); > + what? why? no explanation? > my $verbose = undef; > > my $storagename = "zfstank99"; > -- > 2.39.5 ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
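Independent of where the cleanup should live, the selection and ordering logic in the patch under review — remove external snapshots newest-first, skipping the `current` entry and anything that is not an external snapshot — can be sketched like this (data shape illustrative, after `volume_snapshot_info`):

```javascript
// Pick the snapshot IDs to delete, in descending 'order' (newest
// first), skipping 'current' and entries without a volid or without
// the external-snapshot flag, as the patch's loop does.
function externalSnapshotsToRemove(snapshots) {
    return Object.keys(snapshots)
        .filter(id => id !== 'current')
        .filter(id => snapshots[id].volid && snapshots[id].ext)
        .sort((a, b) => snapshots[b].order - snapshots[a].order);
}
```

Deleting newest-first matters for external (backing-file chained) snapshots, since each snapshot may reference the one below it.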
[pve-devel] [PATCH container v6 31/37] backup: implement restore for external providers
First, the provider is asked about what restore mechanism to use. Currently, 'directory' and 'tar' are possible. The 'directory' mechanism is for restoring from a directory containing the container's full filesystem structure, which is restored by piping from a privileged tar cf - to tar xf - in the associated user namespace. The 'tar' mechanism is for restoring a (potentially compressed) tar file containing the container's full filesystem structure. The new functions are copied and adapted from the existing ones for PBS or tar and it might be worth to factor out the common parts. Restore of containers as privileged are prohibited, because the archives from an external provider are considered less trusted than from Proxmox VE storages. If ever allowing that in the future, at least it would be worth extracting the tar archive in a restricted context (e.g. user namespace with ID mapped mount or seccomp). Signed-off-by: Fiona Ebner --- Changes in v6: * Adapt to renamed archive_get_*_config() methods. src/PVE/LXC/Create.pm | 149 ++ 1 file changed, 149 insertions(+) diff --git a/src/PVE/LXC/Create.pm b/src/PVE/LXC/Create.pm index 8c8cb9a..c3c7640 100644 --- a/src/PVE/LXC/Create.pm +++ b/src/PVE/LXC/Create.pm @@ -7,6 +7,7 @@ use File::Path; use Fcntl; use PVE::RPCEnvironment; +use PVE::RESTEnvironment qw(log_warn); use PVE::Storage::PBSPlugin; use PVE::Storage::Plugin; use PVE::Storage; @@ -26,6 +27,24 @@ sub restore_archive { if ($scfg->{type} eq 'pbs') { return restore_proxmox_backup_archive($storage_cfg, $archive, $rootdir, $conf, $no_unpack_error, $bwlimit); } + if (PVE::Storage::storage_has_feature($storage_cfg, $storeid, 'backup-provider')) { + my $log_function = sub { + my ($log_level, $message) = @_; + my $prefix = $log_level eq 'err' ? 
'ERROR' : uc($log_level); + print "$prefix: $message\n"; + }; + my $backup_provider = + PVE::Storage::new_backup_provider($storage_cfg, $storeid, $log_function); + return restore_external_archive( + $backup_provider, + $storeid, + $volname, + $rootdir, + $conf, + $no_unpack_error, + $bwlimit, + ); + } } $archive = PVE::Storage::abs_filesystem_path($storage_cfg, $archive) if $archive ne '-'; @@ -127,6 +146,62 @@ sub restore_tar_archive { die $err if $err && !$no_unpack_error; } +sub restore_external_archive { +my ($backup_provider, $storeid, $volname, $rootdir, $conf, $no_unpack_error, $bwlimit) = @_; + +die "refusing to restore privileged container backup from external source\n" + if !$conf->{unprivileged}; + +my ($mechanism, $vmtype) = $backup_provider->restore_get_mechanism($volname, $storeid); +die "cannot restore non-LXC guest of type '$vmtype'\n" if $vmtype ne 'lxc'; + +my $info = $backup_provider->restore_container_init($volname, $storeid, {}); +eval { + if ($mechanism eq 'tar') { + my $tar_path = $info->{'tar-path'} + or die "did not get path to tar file from backup provider\n"; + die "not a regular file '$tar_path'" if !-f $tar_path; + restore_tar_archive($tar_path, $rootdir, $conf, $no_unpack_error, $bwlimit); + } elsif ($mechanism eq 'directory') { + my $directory = $info->{'archive-directory'} + or die "did not get path to archive directory from backup provider\n"; + die "not a directory '$directory'" if !-d $directory; + + my $create_cmd = [ + 'tar', + 'cpf', + '-', + @PVE::Storage::Plugin::COMMON_TAR_FLAGS, + "--directory=$directory", + '.', + ]; + + my $extract_cmd = restore_tar_archive_command($conf, undef, $rootdir, $bwlimit); + + my $cmd; + # if there is a bandwidth limit, the command is already a nested array reference + if (ref($extract_cmd) eq 'ARRAY' && ref($extract_cmd->[0]) eq 'ARRAY') { + $cmd = [$create_cmd, $extract_cmd->@*]; + } else { + $cmd = [$create_cmd, $extract_cmd]; + } + + eval { PVE::Tools::run_command($cmd); }; + die $@ if $@ && 
!$no_unpack_error; + } else { + die "mechanism '$mechanism' requested by backup provider is not supported for LXCs\n"; + } +}; +my $err = $@; +eval { $backup_provider->restore_container_cleanup($volname, $storeid, {}); }; +if (my $cleanup_err = $@) { + die $cleanup_err if !$err; + warn $cleanup_err; +} +die $err if $err; + +} + sub recover_config { my ($storage_cfg, $volid, $vmid) = @_; @@ -135,6 +210,8 @@ sub recover_config { my $scfg = PVE::Storage::storage_check_enabled($storage_cf
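The mechanism dispatch at the heart of `restore_external_archive` above can be summarized in a small sketch (command arrays are illustrative; the real code builds full `tar` invocations with `COMMON_TAR_FLAGS` and optional bandwidth limiting):

```javascript
// Dispatch on the restore mechanism reported by the backup provider:
// 'tar' restores from a (possibly compressed) tar file; 'directory'
// pipes a privileged 'tar cf -' of the archive directory into an
// unprivileged 'tar xf -' inside the container's user namespace.
function buildRestorePlan(mechanism, info) {
    if (mechanism === 'tar') {
        if (!info['tar-path']) {
            throw new Error('did not get path to tar file from backup provider');
        }
        return { type: 'tar-archive', path: info['tar-path'] };
    }
    if (mechanism === 'directory') {
        if (!info['archive-directory']) {
            throw new Error('did not get path to archive directory from backup provider');
        }
        return {
            type: 'pipe',
            create: ['tar', 'cpf', '-', `--directory=${info['archive-directory']}`, '.'],
            extract: ['tar', 'xpf', '-'],
        };
    }
    throw new Error(`mechanism '${mechanism}' is not supported for LXCs`);
}
```

Either way the extraction runs unprivileged, which is why the patch refuses to restore privileged containers from external providers.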
Re: [pve-devel] [PATCH cluster/docs/manager/network/proxmox{, -ve-rs, -firewall, -perl-rs} 00/52] Add SDN Fabrics
On 03/04/2025 16:03, Stefan Hanreich wrote: > > > On 4/3/25 15:44, Friedrich Weber wrote: - when removing a fabric, the IP addresses defined on the interfaces remain until the next reboot. I guess the reason is that ifupdown2 doesn't remove IP addresses when the corresponding stanza vanishes. Not sure if this can be easily fixed -- if not, maybe this would be worth a note in the docs? >>> >>> Umm, I think `ifreload -a` should remove all the addresses? At least it >>> works on my machine :) >>> >>> But I'll check again. >> >> I took a closer look -- seems I can only reproduce this if >> /etc/network/interfaces contains an empty `iface INTERFACE inet manual` >> stanza for the interface. Without such a stanza, the IP address is >> removed correctly. > > `manual` means, that IP addresses are configured manually by the user, > so if ifupdown2 encounters an address configured on that interface it > won't remove it, since you're telling it with manual that it isn't > responsible for managing addresses on that interface. So I'd say that's > expected with that line? Hmm, the explanation makes sense, but seems like our installer automatically adds [1] an `iface INTERFACE inet manual` stanza for all "unused" interfaces? So users may run into this (admittedly minor) issue if they used interfaces that were already present at installation time for a fabric, and then remove that fabric. [1] https://git.proxmox.com/?p=pve-installer.git;a=blob;f=Proxmox/Install.pm;h=57fd899;hb=95f2bc3ee#l1097 ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [PATCH zfsonlinux 5/8] d/control: remove transitional dummy package spl
debian upstream did this in 2021, but I overlooked that back then[0]. [0] https://salsa.debian.org/zfsonlinux-team/zfs/-/commit/50841f137b225746a549d3dac98cc6b05a39e4dd Signed-off-by: Stoiko Ivanov --- Makefile | 1 - debian/control | 19 --- 2 files changed, 20 deletions(-) diff --git a/Makefile b/Makefile index 72accfdb3..4f27fca07 100644 --- a/Makefile +++ b/Makefile @@ -25,7 +25,6 @@ ZFS_DEB2= $(ZFS_DEB_BINARY) \ libzfslinux-dev_$(DEB_VERSION)_amd64.deb \ python3-pyzfs_$(DEB_VERSION)_amd64.deb \ pyzfs-doc_$(DEB_VERSION)_all.deb \ -spl_$(DEB_VERSION)_all.deb \ zfs-initramfs_$(DEB_VERSION)_all.deb DEBS= $(ZFS_DEB1) $(ZFS_DEB2) $(ZFS_DBG_DEBS) diff --git a/debian/control b/debian/control index b0640b360..6ee22da9e 100644 --- a/debian/control +++ b/debian/control @@ -267,22 +267,3 @@ Description: OpenZFS test infrastructure and support scripts This package provides the OpenZFS test infrastructure for destructively testing and validating a system using OpenZFS. It is entirely optional and should only be installed and used in test environments. - -Package: spl -Section: contrib/metapackages -Architecture: all -Depends: ${misc:Depends}, -Suggests: zfs-test -Description: Solaris Porting Layer user-space utilities for Linux (dummy) - The Solaris Porting Layer (SPL) is a Linux kernel module which provides - many of the Solaris kernel APIs. This shim layer makes it possible to - run Solaris kernel code in the Linux kernel with relatively minimal - modification. The Solaris Porting LAyer Tests (SPLAT) is a Linux kernel - module which provides a testing harness for the SPL module. - . - SPL can be particularly useful when you want to track upstream Illumos - (or any other OpenSolaris fork) development closely and don't want the - overhead of maintaining a large patch which converts Solaris primitives - to Linux primitives. - . - This is a transitional dummy package. It can safely be removed. 
-- 2.39.5

Re: [pve-devel] [PATCH manager v14 10/12] ui: add resource mapping view for directories
Some comments inline Otherwise, please consider: Reviewed-by: Laurențiu Leahu-Vlăducu Tested-by: Laurențiu Leahu-Vlăducu On 04.03.25 12:58, Markus Frank wrote: Signed-off-by: Markus Frank --- v14: * return HTML encoded comment www/manager6/Makefile | 1 + www/manager6/dc/Config.js | 10 + www/manager6/dc/DirMapView.js | 42 +++ 3 files changed, 53 insertions(+) create mode 100644 www/manager6/dc/DirMapView.js diff --git a/www/manager6/Makefile b/www/manager6/Makefile index 4b8677e3..57c4d377 100644 --- a/www/manager6/Makefile +++ b/www/manager6/Makefile @@ -189,6 +189,7 @@ JSSRC= \ dc/RealmSyncJob.js \ dc/PCIMapView.js\ dc/USBMapView.js\ + dc/DirMapView.js\ lxc/CmdMenu.js \ lxc/Config.js \ lxc/CreateWizard.js \ diff --git a/www/manager6/dc/Config.js b/www/manager6/dc/Config.js index 74728c83..2958fb88 100644 --- a/www/manager6/dc/Config.js +++ b/www/manager6/dc/Config.js @@ -329,6 +329,16 @@ Ext.define('PVE.dc.Config', { title: gettext('USB Devices'), flex: 1, }, + { + xtype: 'splitter', + collapsible: false, + performCollapse: false, + }, + { + xtype: 'pveDcDirMapView', + title: gettext('Directories'), + flex: 1, + }, This is only indirectly related to your changes, but: I noticed that the "Resource Mappings" tab does not have that much vertical space anymore. While this is fine on my (large) screen, it might be an issue with smaller (e.g. laptop) screens. In case we want to add more resource mappings in the future, it will get even worse. It might be worth thinking whether it makes sense to split it into tabs, similar to how we're doing it in the PBS GUI, or the Tasks / Cluster log in PVE. I know that this adds a third layer of GUI (Datacenter -> Resource Mappings -> click correct tab), but I think it might be a bit more organized, and the space would be used more efficiently. 
], }, ); diff --git a/www/manager6/dc/DirMapView.js b/www/manager6/dc/DirMapView.js new file mode 100644 index ..ff0ce633 --- /dev/null +++ b/www/manager6/dc/DirMapView.js @@ -0,0 +1,42 @@ +Ext.define('pve-resource-dir-tree', { +extend: 'Ext.data.Model', +idProperty: 'internalId', +fields: ['type', 'text', 'path', 'id', 'description', 'digest'], +}); + +Ext.define('PVE.dc.DirMapView', { +extend: 'PVE.tree.ResourceMapTree', +alias: 'widget.pveDcDirMapView', + +editWindowClass: 'PVE.window.DirMapEditWindow', +baseUrl: '/cluster/mapping/dir', +mapIconCls: 'fa fa-folder', +entryIdProperty: 'path', + +store: { + sorters: 'text', + model: 'pve-resource-dir-tree', + data: {}, +}, + +columns: [ + { + xtype: 'treecolumn', + text: gettext('ID/Node'), + dataIndex: 'text', + width: 200, + }, + { + text: gettext('announce-submounts'), Minor remark: while I understand why naming it 'announce-submounts' makes it clearer which underlying option it sets, I would usually not display it like this in the GUI, but by using a nice label. Although I also think it's hard to find a short label that is easy to understand, so maybe - just maybe - it would make sense to add a short tooltip which explains what the option does. On the other hand, I know that the user can always press "Help" to get to the docs that explain everything in detail, so I would only add a tooltip if it's possible to easily explain what the feature does. + dataIndex: 'announce-submounts', + }, + { + header: gettext('Comment'), + dataIndex: 'description', + renderer: function(value, _meta, record) { + return Ext.String.htmlEncode(value ?? record.data.comment); + }, + flex: 1, + }, +], +}); ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Re: [pve-devel] [PATCH storage/manager v3] allow upload & import of qcow2 in the web UI
On 3/26/25 12:41, Fiona Ebner wrote: Am 26.03.25 um 11:47 schrieb Dominik Csapak: On 3/26/25 11:37, Fiona Ebner wrote: Am 25.03.25 um 16:14 schrieb Dominik Csapak: most of the building blocks are already there: * we can have qcow2 files in an import storage * we can import qcow2 files via the api from such a storage this series fills in the missing bits & pieces: * allow uploading qcow2 files into an import storage via the webgui * adding the possibility to select such a file when creating a vm/disk We could maybe also allow this for raw/vmdk if we want to, but IMHO we can start out with qcow2 and add the others as necssary. (if wanted, I can of course also add the others in a next version or as a follow up) Yes, please! It would be nice to have all three at the same time. Or is there any specific reason why you limit it to qcow2? Otherwise, users will just ask why support for these is missing right away. No specific reason, it was just easier/quicker to implement one first. When we also allow raw files, should we also allow other extensions than '.raw'? not sure if there is one that is often used (since I think '.raw' is more a PVE thing) Right, raw is actually a bit of a headache because of that :P We could either: 1) have a list of common extensions for raw: .raw/.img/etc 1b) also treat files without extension as raw? 2) have a list of known extensions that are not raw and treat everything else as raw, while logging an informational message I'd prefer 1), because we already require specific extensions for other uploads. And likely we want to rename after/during upload, so images that are raw for us always have a ".raw" extension? Otherwise, we need to be careful enough to enforce the very same rules when parsing the import volume name and thus mostly also have them set in stone for the future. The advantage of the latter would be for the use case where one wants to manually make accessible their already existing image folders without using the API. 
Actually, thinking of renaming: I don't think that's necessary to do in
the backend at all, since the client will provide a target filename, so
we can just rename it in the UI to '.raw' for the user? Then we'd also
not have to have a list of 'raw' formats in the backend at all.

___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
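For illustration, option 1) combined with a rename to the canonical '.raw' name could look roughly like this. This is a hypothetical sketch, not code from the series; the extension lists (and whether extension-less files count as raw) are exactly what the thread is still debating.

```python
import os.path

# Assumed, illustrative extension lists -- not taken from the patches.
RAW_EXTENSIONS = {".raw", ".img"}
OTHER_FORMATS = {".qcow2": "qcow2", ".vmdk": "vmdk"}

def normalize_upload_name(filename):
    """Return (target filename, format), renaming raw-ish uploads to '.raw'."""
    base, ext = os.path.splitext(filename)
    ext = ext.lower()
    if ext in OTHER_FORMATS:
        return filename, OTHER_FORMATS[ext]
    if ext in RAW_EXTENSIONS:
        # canonicalize so parsing the import volume name later only ever
        # has to recognize '.raw'
        return base + ".raw", "raw"
    raise ValueError(f"unsupported image file extension: {ext or '(none)'}")
```

With this approach an uploaded `disk.img` would land as `disk.raw`, which sidesteps having to enforce a list of raw-ish extensions when parsing import volume names later.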
[pve-devel] [PATCH pve-network 11/17] api: add fabrics subfolder
From: Stefan Hanreich

Add a new subfolder that contains the API methods for the sdn fabrics.
We also add a method for listing all fabrics of all types as a GET
endpoint, with the respective schemas. It supports the same filtering
options as the other SDN GET endpoints (pending / running).

We also need to add a special case in encode_value for the interface
key of nodes, since they require special handling when encoding because
they are arrays.

Signed-off-by: Stefan Hanreich
Co-authored-by: Gabriel Goller
Signed-off-by: Gabriel Goller
---
 src/PVE/API2/Network/SDN.pm         |   7 +
 src/PVE/API2/Network/SDN/Fabrics.pm | 294
 src/PVE/API2/Network/SDN/Makefile   |   2 +-
 src/PVE/Network/SDN.pm              |   2 +-
 4 files changed, 303 insertions(+), 2 deletions(-)
 create mode 100644 src/PVE/API2/Network/SDN/Fabrics.pm

diff --git a/src/PVE/API2/Network/SDN.pm b/src/PVE/API2/Network/SDN.pm
index d216e4878b61..ccbf0777e3d4 100644
--- a/src/PVE/API2/Network/SDN.pm
+++ b/src/PVE/API2/Network/SDN.pm
@@ -17,6 +17,7 @@ use PVE::API2::Network::SDN::Vnets;
 use PVE::API2::Network::SDN::Zones;
 use PVE::API2::Network::SDN::Ipams;
 use PVE::API2::Network::SDN::Dns;
+use PVE::API2::Network::SDN::Fabrics;
 
 use base qw(PVE::RESTHandler);
 
@@ -45,6 +46,11 @@ __PACKAGE__->register_method ({
     path => 'dns',
 });
 
+__PACKAGE__->register_method ({
+    subclass => "PVE::API2::Network::SDN::Fabrics",
+    path => 'fabrics',
+});
+
 __PACKAGE__->register_method({
     name => 'index',
     path => '',
@@ -76,6 +82,7 @@ __PACKAGE__->register_method({
         { id => 'controllers' },
         { id => 'ipams' },
         { id => 'dns' },
+        { id => 'fabrics' },
     ];
 
     return $res;
diff --git a/src/PVE/API2/Network/SDN/Fabrics.pm b/src/PVE/API2/Network/SDN/Fabrics.pm
new file mode 100644
index 000000000000..c9064b0ea05b
--- /dev/null
+++ b/src/PVE/API2/Network/SDN/Fabrics.pm
@@ -0,0 +1,294 @@
+package PVE::API2::Network::SDN::Fabrics;
+
+use strict;
+use warnings;
+
+use Storable qw(dclone);
+
+use PVE::RPCEnvironment;
+use PVE::Tools qw(extract_param);
+
+use PVE::API2::Network::SDN::Fabrics::OpenFabric;
+use PVE::API2::Network::SDN::Fabrics::Ospf;
+
+use PVE::Network::SDN::Fabrics;
+
+use PVE::RESTHandler;
+use base qw(PVE::RESTHandler);
+
+__PACKAGE__->register_method ({
+    subclass => "PVE::API2::Network::SDN::Fabrics::OpenFabric",
+    path => 'openfabric',
+});
+__PACKAGE__->register_method ({
+    subclass => "PVE::API2::Network::SDN::Fabrics::Ospf",
+    path => 'ospf',
+});
+
+my $openfabric_interface_fmt = {
+    name => {
+        type => 'string',
+        description => 'Name of the interface',
+    },
+    ip => {
+        type => 'string',
+        description => 'The IPv4 address of the interface',
+        optional => 1,
+    },
+    ipv6 => {
+        type => 'string',
+        description => 'The IPv6 address of the interface',
+        optional => 1,
+    },
+    passive => {
+        type => 'boolean',
+        description => 'The passive property of the interface',
+        optional => 1,
+    },
+    hello_interval => {
+        type => 'number',
+        description => 'The hello_interval property of the interface',
+        optional => 1,
+    },
+    csnp_interval => {
+        type => 'number',
+        description => 'The csnp_interval property of the interface',
+        optional => 1,
+    },
+    hello_multiplier => {
+        type => 'number',
+        description => 'The hello_multiplier property of the interface',
+        optional => 1,
+    },
+};
+
+PVE::JSONSchema::register_format('pve-sdn-openfabric-interface', $openfabric_interface_fmt);
+
+my $ospf_interface_fmt = {
+    name => {
+        type => 'string',
+        description => 'Name of the interface',
+    },
+    passive => {
+        type => 'boolean',
+        description => 'The passive property of the interface',
+        optional => 1,
+    },
+    ip => {
+        type => 'string',
+        description => 'The IPv4 address of the interface',
+        optional => 1,
+    },
+    unnumbered => {
+        type => 'boolean',
+        description => 'If the interface is unnumbered',
+        optional => 1,
+    },
+};
+
+PVE::JSONSchema::register_format('pve-sdn-ospf-interface', $ospf_interface_fmt);
+
+__PACKAGE__->register_method({
+    name => 'index',
+    path => '',
+    method => 'GET',
+    description => 'Index of SDN Fabrics',
+    permissions => {
+        description => "Only list entries where you have 'SDN.Audit' or 'SDN.Allocate' permissions on '/sdn/fabrics//'",
+        user => 'all',
+    },
+    parameters => {
+        additionalProperties => 0,
+        properties => {
+            running => {
+                type => 'boolean',
+                optional => 1,
+                description => "Display running config.",
+            },
+            pending => {
+                type => 'boolean',
+                optional => 1,
+                description => "Display pendin
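The shape of the `pve-sdn-openfabric-interface` format registered above can be mirrored as a small validator to illustrate the check semantics. This is a hypothetical Python sketch for illustration only; the real validation is performed by PVE::JSONSchema on the Perl side.

```python
# Python mirror of the pve-sdn-openfabric-interface format from the patch.
# 'number' properties accept int or float here.
OPENFABRIC_INTERFACE_PROPS = {
    "name": {"optional": False, "type": str},
    "ip": {"optional": True, "type": str},
    "ipv6": {"optional": True, "type": str},
    "passive": {"optional": True, "type": bool},
    "hello_interval": {"optional": True, "type": (int, float)},
    "csnp_interval": {"optional": True, "type": (int, float)},
    "hello_multiplier": {"optional": True, "type": (int, float)},
}

def validate_openfabric_interface(entry):
    """Check required/optional properties and reject unknown keys."""
    for key, spec in OPENFABRIC_INTERFACE_PROPS.items():
        if key not in entry:
            if not spec["optional"]:
                raise ValueError(f"missing required property '{key}'")
        elif not isinstance(entry[key], spec["type"]):
            raise ValueError(f"property '{key}' has the wrong type")
    for key in entry:
        if key not in OPENFABRIC_INTERFACE_PROPS:
            raise ValueError(f"unknown property '{key}'")
```

Only `name` is required; everything else is optional, matching the `optional => 1` markers in the Perl format definition.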
[pve-devel] [PATCH proxmox-firewall 1/1] firewall: nftables: migrate to proxmox-network-types
From: Stefan Hanreich

The fabrics patch series moved some generic network types into its own
crate, so they can be reused across crates. Migrate proxmox-firewall to
use the new proxmox-network-types crate instead of proxmox_ve_config.

Signed-off-by: Stefan Hanreich
Signed-off-by: Gabriel Goller
---
 Cargo.toml                         | 1 +
 proxmox-firewall/Cargo.toml        | 1 +
 proxmox-firewall/src/firewall.rs   | 2 +-
 proxmox-firewall/src/object.rs     | 4 +++-
 proxmox-firewall/src/rule.rs       | 3 ++-
 proxmox-nftables/Cargo.toml        | 3 ++-
 proxmox-nftables/src/expression.rs | 5 +----
 proxmox-nftables/src/types.rs      | 2 +-
 8 files changed, 12 insertions(+), 9 deletions(-)

diff --git a/Cargo.toml b/Cargo.toml
index 079fb79ee45b..7e1ebb60e536 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -7,3 +7,4 @@ resolver = "2"
 
 [workspace.dependencies]
 proxmox-ve-config = { version = "0.2.2" }
+proxmox-network-types = { version = "0.1" }
diff --git a/proxmox-firewall/Cargo.toml b/proxmox-firewall/Cargo.toml
index 09ea3fe3826a..67622d227d1c 100644
--- a/proxmox-firewall/Cargo.toml
+++ b/proxmox-firewall/Cargo.toml
@@ -22,6 +22,7 @@ signal-hook = "0.3"
 
 proxmox-nftables = { path = "../proxmox-nftables", features = ["config-ext"] }
 proxmox-ve-config = { workspace = true }
+proxmox-network-types = { workspace = true }
 
 [dev-dependencies]
 insta = { version = "1.21", features = ["json"] }
diff --git a/proxmox-firewall/src/firewall.rs b/proxmox-firewall/src/firewall.rs
index 607fc753b4ac..bec1fada7746 100644
--- a/proxmox-firewall/src/firewall.rs
+++ b/proxmox-firewall/src/firewall.rs
@@ -20,7 +20,7 @@
 use proxmox_ve_config::firewall::ct_helper::get_cthelper;
 use proxmox_ve_config::firewall::guest::Config as GuestConfig;
 use proxmox_ve_config::firewall::host::Config as HostConfig;
-use proxmox_ve_config::firewall::types::address::Ipv6Cidr;
+use proxmox_network_types::address::Ipv6Cidr;
 use proxmox_ve_config::firewall::types::ipset::{
     Ipfilter, Ipset, IpsetEntry, IpsetName, IpsetScope,
 };
diff --git a/proxmox-firewall/src/object.rs b/proxmox-firewall/src/object.rs
index cf7e773b76a8..db7b1bb7a6e0 100644
--- a/proxmox-firewall/src/object.rs
+++ b/proxmox-firewall/src/object.rs
@@ -11,11 +11,13 @@ use proxmox_nftables::{
 use proxmox_ve_config::{
     firewall::{
         ct_helper::CtHelperMacro,
-        types::{address::Family, alias::AliasName, ipset::IpsetAddress, Alias, Ipset},
+        types::{alias::AliasName, ipset::IpsetAddress, Alias, Ipset},
     },
     guest::types::Vmid,
 };
 
+use proxmox_network_types::address::Family;
+
 use crate::config::FirewallConfig;
 
 pub(crate) struct NftObjectEnv<'a, 'b> {
diff --git a/proxmox-firewall/src/rule.rs b/proxmox-firewall/src/rule.rs
index 14ee54471ee4..a0597c0c2aa3 100644
--- a/proxmox-firewall/src/rule.rs
+++ b/proxmox-firewall/src/rule.rs
@@ -12,7 +12,6 @@ use proxmox_ve_config::{
         ct_helper::CtHelperMacro,
         fw_macros::{get_macro, FwMacro},
         types::{
-            address::Family,
             alias::AliasName,
             ipset::{Ipfilter, IpsetName},
             log::LogRateLimit,
@@ -26,6 +25,8 @@
     guest::types::Vmid,
 };
 
+use proxmox_network_types::address::Family;
+
 use crate::config::FirewallConfig;
 
 #[derive(Debug, Clone)]
diff --git a/proxmox-nftables/Cargo.toml b/proxmox-nftables/Cargo.toml
index 4ff6f41a97da..85f07f064011 100644
--- a/proxmox-nftables/Cargo.toml
+++ b/proxmox-nftables/Cargo.toml
@@ -11,7 +11,7 @@ description = "Proxmox VE nftables"
 license = "AGPL-3"
 
 [features]
-config-ext = ["dep:proxmox-ve-config"]
+config-ext = ["dep:proxmox-ve-config", "dep:proxmox-network-types"]
 
 [dependencies]
 log = "0.4"
@@ -23,3 +23,4 @@ serde_json = "1"
 serde_plain = "1"
 
 proxmox-ve-config = { workspace = true, optional = true }
+proxmox-network-types = { workspace = true, optional = true }
diff --git a/proxmox-nftables/src/expression.rs b/proxmox-nftables/src/expression.rs
index e9ef94f65947..e81076cb76e4 100644
--- a/proxmox-nftables/src/expression.rs
+++ b/proxmox-nftables/src/expression.rs
@@ -1,17 +1,14 @@
 use crate::types::{ElemConfig, Verdict};
-use proxmox_ve_config::firewall::types::address::IpRange;
 use proxmox_ve_config::host::types::BridgeName;
 use serde::{Deserialize, Serialize};
 use std::net::{IpAddr, Ipv4Addr, Ipv6Addr};
 
-#[cfg(feature = "config-ext")]
-use proxmox_ve_config::firewall::types::address::{Family, IpEntry, IpList};
 #[cfg(feature = "config-ext")]
 use proxmox_ve_config::firewall::types::port::{PortEntry, PortList};
 #[cfg(feature = "config-ext")]
 use proxmox_ve_config::firewall::types::rule_match::{IcmpCode, IcmpType, Icmpv6Code, Icmpv6Type};
 #[cfg(feature = "config-ext")]
-use proxmox_ve_config::firewall::types::Cidr;
+use proxmox_network_types::address::{Cidr, IpRange, Family, IpEntry, IpList};
 
 #[derive(Clone, Debug, Deserialize, Serialize)]
 #[serde(rename_all = "lowercase")]
diff --git a/proxmox-nftables/src/types.
Re: [pve-devel] [PATCH corosync] corosync.service: add patch to reduce log spam in broken network setups
Friedrich Weber writes:

> If I read the journald.conf docs [1] right, the default interval is 30s
> and the burst value is 10000 multiplied by a factor depending on the
> free disk space, I guess 4-6 on reasonable setups -- this is a lot of
> messages, but as you mention probably fine for limiting really noisy
> services. I was more thinking about this from a technical support
> point-of-view, where I'd fear that having extreme corosync logspam over
> days or weeks would cause the actually interesting stuff to be rotated
> away more quickly than I'd like. :)
>
> But as we have no idea how many broken setups are out there, this is
> all somewhat hypothetical, so I'm also fine with not applying this --
> if we get many user reports seeing logspam I guess we can still do
> this.
>
> [1] https://www.freedesktop.org/software/systemd/man/latest/journald.conf.html#RateLimitIntervalSec=

I am starting to lean towards not limiting this here. However, I have
seen multiple instances at our support portal where logs are rotated
rather quickly and useful messages are lost.
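To make the interval/burst semantics being discussed concrete, they can be modeled roughly as follows. This is a simplified sketch only: journald's actual implementation differs in details and, as noted above, also scales the burst with available disk space.

```python
def rate_limit(timestamps, interval, burst):
    """Return the message timestamps that survive an interval/burst limiter.

    Simplified model of systemd's LogRateLimitIntervalSec/LogRateLimitBurst:
    a window opens at the first message seen, up to `burst` messages pass
    per window, and the rest within that window are dropped.
    """
    kept = []
    window_start = None
    count = 0
    for t in timestamps:
        if window_start is None or t - window_start >= interval:
            window_start = t  # open a new window
            count = 0
        if count < burst:
            kept.append(t)
        count += 1
    return kept
```

Under this model, a service logging continuously only ever gets `burst` messages per `interval` into the journal, which is exactly the trade-off debated here: less spam from broken setups versus potentially losing useful messages from a core cluster service.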
[pve-devel] [PATCH v8 storage 2/8] common: add deallocate helper function
From: Fiona Ebner

For punching holes via fallocate. This will be useful for the external
backup provider API to discard parts of the source. The 'file-handle'
mechanism there uses a fuse mount, which does not implement the
BLKDISCARD ioctl, but does implement fallocate.

Signed-off-by: Fiona Ebner
---

No changes to v7.

 src/PVE/Storage/Common.pm | 30 ++
 1 file changed, 30 insertions(+)

diff --git a/src/PVE/Storage/Common.pm b/src/PVE/Storage/Common.pm
index 3ae20dd..32ed26a 100644
--- a/src/PVE/Storage/Common.pm
+++ b/src/PVE/Storage/Common.pm
@@ -3,6 +3,13 @@ package PVE::Storage::Common;
 use strict;
 use warnings;
 
+use PVE::Syscall;
+
+use constant {
+    FALLOC_FL_KEEP_SIZE => 0x01, # see linux/falloc.h
+    FALLOC_FL_PUNCH_HOLE => 0x02, # see linux/falloc.h
+};
+
 =pod
 
 =head1 NAME
@@ -51,4 +58,27 @@ sub align_size_up : prototype($$) {
     return $aligned_size;
 }
 
+=pod
+
+=head3 deallocate
+
+    deallocate($file_handle, $offset, $length)
+
+Deallocates the range with C<$length> many bytes starting from offset C<$offset>
+for the file associated to the file handle C<$file_handle>. Dies on failure.
+
+=cut
+
+sub deallocate : prototype($$$) {
+    my ($file_handle, $offset, $length) = @_;
+
+    my $mode = FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE;
+    $offset = int($offset);
+    $length = int($length);
+
+    if (syscall(PVE::Syscall::fallocate, fileno($file_handle), $mode, $offset, $length) != 0) {
+        die "fallocate: punch hole failed (offset: $offset, length: $length) - $!\n";
+    }
+}
+
 1;
-- 
2.39.5
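The same punch-hole operation can be sketched outside Perl. Below is a hedged Python equivalent using ctypes, mirroring the constants from linux/falloc.h; it is Linux-only and assumes a 64-bit platform where `c_long` matches `off_t`.

```python
import ctypes
import ctypes.util

# Values from linux/falloc.h, same as in the Perl helper above.
FALLOC_FL_KEEP_SIZE = 0x01
FALLOC_FL_PUNCH_HOLE = 0x02

_libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
_libc.fallocate.argtypes = (ctypes.c_int, ctypes.c_int,
                            ctypes.c_long, ctypes.c_long)
_libc.fallocate.restype = ctypes.c_int

def deallocate(fd, offset, length):
    """Punch a hole of `length` bytes at `offset`; the file size is kept."""
    mode = FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE
    if _libc.fallocate(fd, mode, int(offset), int(length)) != 0:
        err = ctypes.get_errno()
        raise OSError(
            err,
            f"fallocate: punch hole failed (offset: {offset}, length: {length})",
        )
```

After punching, reads within the deallocated range return zeroes while the reported file size stays unchanged, which is exactly what FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE guarantees.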
Re: [pve-devel] applied: [PATCH installer] proxmox-chroot: replace nix::(u)mount calls with external (u)mount
On Fri Apr 4, 2025 at 10:28 AM CEST, Thomas Lamprecht wrote:
> Am 11.03.25 um 14:27 schrieb Christoph Heiss:
>> Comes with a reduction of 52 -> 40 in terms of crate dependencies for
>> proxmox-chroot, 198 -> 192 for a full workspace build.
>>
>> Currently, this is done inconsistently anyway, i.e. there are calls to
>> the external mount(8) as well as mount(2) and umount(2) via `nix`.
>> Just switch over to calling the external programs completely, which in
>> turn allows to drop the `nix` crate dependency from the tree.
>>
>> No functional changes.
>>
>> Signed-off-by: Christoph Heiss
>> ---
>>  debian/control             |  1 -
>>  proxmox-chroot/Cargo.toml  |  1 -
>>  proxmox-chroot/src/main.rs | 81 +-
>>  3 files changed, 53 insertions(+), 30 deletions(-)
>
> applied, thanks!
>
> btw. to throw in a crate I'd not be sad to see get removed: clap, at
> least if upstream still is full in the refactoring for the sake of it
> and adding yet another way to do things...

Yeah, definitely, agreed! I have already been working on removing it and
got a ~half done local branch, but it's unfortunately a bit of a bigger
refactor/rewrite. I'll try to get to it again though and finish it.

(fwiw, looking at my notes, dropping clap would also be quite a sizeable
win w.r.t. binary sizes, with ~20% reduction for one of the binaries)
Re: [pve-devel] [PATCH guest-common v7 1/2] mapping: pci: check the mdev configuration on the device too
On 4/3/25 11:40, Thomas Lamprecht wrote:
> Am 11.03.25 um 14:20 schrieb Dominik Csapak:
>> but that lives in the 'global' part of the mapping config, not in a
>> specific mapping. To check that, add it to the $configured_props from
>> there.
>>
>> this requires all call sites to be adapted otherwise the check will
>> always fail for devices that are capable of mediated devices
>
> But that's not true, or? As the check only happens if the
> $cluster_mapping_cfg param is passed, which call-sites need to do
> first?

True, I think the commit message is outdated. I faintly remember
changing the semantic at some point but probably forgot to update the
commit message.

>>  # checks if the given config is valid for the current node
>>  sub assert_valid {
>> -    my ($name, $mapping) = @_;
>> +    my ($name, $mapping, $cluster_mapping_cfg) = @_;
>
> ^- new param here
>
>>      my @paths = split(';', $mapping->{path} // '');
>> @@ -161,6 +161,12 @@ sub assert_valid {
>>      my $configured_props = { $mapping->%{qw(id iommugroup subsystem-id)} };
>>
>> +    # check mdev from global mapping config, if that is given
>> +    if (defined($cluster_mapping_cfg)) {
>
> guarded with check for definedness here
>
>> +        $expected_props->{mdev} = $info->{mdev} ? 1 : 0;
>> +        $configured_props->{mdev} = $cluster_mapping_cfg->{mdev} ? 1 : 0;
>> +    }
>> +
>>      for my $prop (sort keys $expected_props->%*) {
>>          next if $prop eq 'iommugroup' && $idx > 0; # check iommu only on the first device
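The semantics of the guarded check quoted above can be illustrated with a small sketch. This is a hypothetical Python mirror of the comparison logic, not the actual Perl code; function and parameter names are made up for illustration.

```python
def mapping_mismatches(expected_props, configured_props,
                       device_mdev=False, cluster_cfg=None):
    """Return the property names where configuration and device disagree.

    Only when a cluster-wide mapping config is given is the mdev
    capability added to both sides of the comparison, since it lives in
    the global section of the mapping config rather than in a specific
    mapping -- mirroring the defined($cluster_mapping_cfg) guard above.
    """
    expected = dict(expected_props)
    configured = dict(configured_props)
    if cluster_cfg is not None:
        expected["mdev"] = 1 if device_mdev else 0
        configured["mdev"] = 1 if cluster_cfg.get("mdev") else 0
    return sorted(k for k in expected if expected[k] != configured.get(k))
```

Without the cluster config, mdev is never compared, which is why only call sites that pass the new parameter opt into the stricter check.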