changes v1->v2:
* incorporated the feedback on v1 (by Aaron and Fabian - huge thx!):
** a next-boot pin is now handled independently from a regular pin - i.e. if you
both pin a kernel and set one for the next boot, the system afterwards
keeps the pinned version (instead of the latest) - see the usage sketch below
** change
Signed-off-by: Stoiko Ivanov
---
debian/apthook/pve-apt-hook | 28
1 file changed, 28 insertions(+)
diff --git a/debian/apthook/pve-apt-hook b/debian/apthook/pve-apt-hook
index 50e50d1..6de56c4 100755
--- a/debian/apthook/pve-apt-hook
+++ b/debian/apthook/pve-apt-hook
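For clarity, a short usage sketch of the pin / next-boot interaction described
in the changelog above (command syntax as proposed by this series; the kernel
versions are placeholders):

  # pin a kernel permanently, then additionally select one for the next boot only
  proxmox-boot-tool kernel pin 5.13.19-4-pve
  proxmox-boot-tool kernel pin 5.15.19-1-pve --next-boot
  # the next reboot uses 5.15.19-1-pve once; afterwards the system keeps booting
  # the pinned 5.13.19-4-pve instead of falling back to the latest kernel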
makes using this helper shorter in most cases
Signed-off-by: Stoiko Ivanov
---
proxmox-boot/functions | 5 +
1 file changed, 5 insertions(+)
diff --git a/proxmox-boot/functions b/proxmox-boot/functions
index 4515a2d..27da363 100755
--- a/proxmox-boot/functions
+++ b/proxmox-boot/functions
@
The two commands follow the mechanics of p-b-t kernel add/remove: the
desired abi-version is written to a config file in /etc/kernel, and the
boot-loader configuration is actually modified upon p-b-t refresh.
A dedicated new file is used instead of writing the version (with some
kind of annotation) to t
by setting the desired version in a dedicated file, which is used
by the systemd service as a condition for removing it and refreshing
upon reboot
Signed-off-by: Stoiko Ivanov
---
bin/proxmox-boot-tool | 34 +--
debian/pve-kernel-helper.install |
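To make the mechanics above a bit more concrete, a rough sketch of what the pin
command does internally (the file names under /etc/kernel are assumptions for
illustration, not taken verbatim from the patches):

  # hypothetical config file written by `proxmox-boot-tool kernel pin`
  echo "5.15.19-1-pve" > /etc/kernel/proxmox-boot-pin
  # the boot-loader configuration only changes on the next refresh
  proxmox-boot-tool refresh
  # a next-boot version goes into its own (hypothetical) file, which the systemd
  # service uses as a condition (e.g. ConditionPathExists=) to remove it and
  # refresh again during boot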
While running `update-grub` directly in this case is a divergence from
the semantics of the command when p-b-t handles booting, it makes the
cleanup in the `next-boot` case a bit tidier.
Signed-off-by: Stoiko Ivanov
---
bin/proxmox-boot-tool | 22 +++---
1 file changed, 19 inserti
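A hedged sketch of the tidier cleanup path mentioned above (the marker file
name is hypothetical and the sequence is illustrative, not the patch's literal
code):

  # drop the (hypothetical) consumed next-boot marker and regenerate the
  # grub config directly instead of going through a full p-b-t refresh
  rm /etc/kernel/next-boot-pin
  update-grub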
Signed-off-by: Stoiko Ivanov
---
debian/apthook/pve-apt-hook | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/debian/apthook/pve-apt-hook b/debian/apthook/pve-apt-hook
index 1f77a1a..50e50d1 100755
--- a/debian/apthook/pve-apt-hook
+++ b/debian/apthook/pve-apt-hook
@@ -20
On 28.01.22 12:22, Aaron Lauterer wrote:
> The first step is to allocate rbd images correctly.
>
> The metadata objects still need to be stored in a replicated pool, but
> by providing the --data-pool parameter on image creation, we can place
> the data objects on the erasure coded (EC) pool.
>
>
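For context, this is roughly what such an allocation looks like on the rbd CLI
(pool and image names are just examples, not from the patch):

  # metadata objects stay in the replicated pool given via --pool,
  # data objects are placed on the erasure coded pool given via --data-pool
  rbd create --size 32G --pool rbd_replicated --data-pool rbd_ec vm-100-disk-0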
On 01.02.22 15:02, Oguz Bektas wrote:
> to avoid being blacklisted because of the default libwww-perl user-agent
>
> issue was reported in community forum [0]
>
> [0]: https://forum.proxmox.com/threads/104081/
>
> Signed-off-by: Oguz Bektas
> ---
> PVE/API2/Nodes.pm | 1 +
> 1 file changed, 1
On 31.01.22 18:59, Stoiko Ivanov wrote:
> Signed-off-by: Stoiko Ivanov
> ---
> proxmox-boot/functions | 8
> proxmox-boot/zz-proxmox-boot | 4 +---
> 2 files changed, 9 insertions(+), 3 deletions(-)
>
>
applied, thanks!
On 31.01.22 18:59, Stoiko Ivanov wrote:
> Signed-off-by: Stoiko Ivanov
> ---
> proxmox-boot/zz-proxmox-boot | 5 -
> 1 file changed, 5 deletions(-)
>
>
applied, thanks!
On 03.02.22 12:32, Fabian Ebner wrote:
> A virtual package does not have SelectedState Install, but the
> dependency will still be satisfied if a package providing it has.
>
> Fixes a bug, wrongly showing that postfix will be installed, when a
> different mail-transport-agent is installed and a pv
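Purely to illustrate the virtual-package situation (not part of the patch): a
virtual package such as mail-transport-agent never carries SelectedState
Install itself, only the real packages providing it do, and those can be listed
with apt-cache:

  # lists the real packages (postfix, exim4-daemon-light, ...) under
  # "Reverse Provides:" that satisfy the virtual package
  apt-cache showpkg mail-transport-agent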
On 04.02.22 10:50, Aaron Lauterer wrote:
> If an OSD is removed during the wrong conditions, it could lead to
> blocked IO or worst case data loss.
>
> Check against global flags that limit the capabilities of Ceph to heal
> itself (norebalance, norecover, noout) and if there are degraded
> object
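Those conditions can also be checked by hand on the CLI (illustrative commands,
not the code added by the patch):

  # global flags such as norebalance/norecover/noout show up in the osdmap
  ceph osd dump --format json-pretty | grep '"flags"'
  # degraded objects, if any, are reported in the PG summary
  ceph pg stat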
On 13.01.22 12:04, Fabian Ebner wrote:
> For snapshot creation, the storage for the vmstate file is activated
> via vdisk_alloc when the state file is created.
>
> Do not activate the volumes themselves, as that has unnecessary side
> effects (e.g. waiting for zvol device link for ZFS, mapping the
On 13.01.22 12:04, Fabian Ebner wrote:
> Signed-off-by: Fabian Ebner
> ---
> src/PVE/AbstractConfig.pm | 2 --
> 1 file changed, 2 deletions(-)
>
>
applied, thanks!
On 03.02.22 13:41, Fabian Grünbichler wrote:
> remote migration always has an explicit endpoint from the start which
> gets used for everything.
>
> Signed-off-by: Fabian Grünbichler
> ---
> src/PVE/AbstractMigrate.pm | 37 +
> 1 file changed, 21 insertions(+)
On 03.02.22 13:41, Fabian Grünbichler wrote:
> into new top-level helper for re-use with remote migration.
>
> Signed-off-by: Fabian Grünbichler
> ---
>
> Notes:
> v4:
> - correctly use source storage for decision
> - fold fixup into correct patch
>
> PVE/Storage.pm | 14 ++
On 03.02.22 13:41, Fabian Grünbichler wrote:
> to allow reusing this with remote migration, where parsing of the source
> volid has to happen on the source node, but this call has to happen on
> the target node.
>
> Signed-off-by: Fabian Grünbichler
> ---
> PVE/Storage.pm | 16 +---
>
this deprecates the 'full' sync option and replaces it with
a 'mode' option, where we add a third mode that updates
the current users (while retaining their custom-set attributes not
existing in the source) and removes users that don't exist anymore
in the source
sorry for the long time between v
this mode behaves like the 'update' mode (so it updates users with
new data from the server, and adds new users), but also deletes
users and groups that do not exist anymore on the sync source.
this way, an admin can add custom data (e.g. keys) to the users in pve while
keeping only the users avai
in default sync options and the sync window. since we get the
mapped mode from the backend on read, a 'full=1' there will map
to 'mode=full' and we can simply use that
using this on a node with an old version will not work though
Signed-off-by: Dominik Csapak
---
www/manager6/dc/AuthEditLDAP.js
to be able to add more modes in the future.
full=0 maps to mode=update and full=1 to mode=full,
but keep 'full' for backwards-compatibility. On create/update, replace
full=1 with mode=full, on read, return both.
add a deprecation notice to the description of full, and a todo to
remove 'full' with
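A conceptual sketch of the described mapping (the invocation and option
spelling are assumptions based on this series, not verified against the merged
code):

  # new style: pass the mode directly
  pveum realm sync myrealm --mode full
  # legacy style: the boolean flag is still accepted and mapped to mode=full
  pveum realm sync myrealm --full 1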
Am 03.02.22 um 13:41 schrieb Fabian Grünbichler:
> this series adds remote migration for VMs.
>
> both live and offline migration including NBD and storage-migrated disks
> should work. groundwork for extending to pve-container and pvesr already
> laid.
>
Everything besides storage 4/4 and guest
Am 03.02.22 um 13:41 schrieb Fabian Grünbichler:
> @@ -900,6 +1017,7 @@ our $cmddef = {
> clone => [ "PVE::API2::Qemu", 'clone_vm', ['vmid', 'newid'], { node =>
> $nodename }, $upid_exit ],
>
> migrate => [ "PVE::API2::Qemu", 'migrate_vm', ['vmid', 'target'], { node
> => $nodename },
Am 03.02.22 um 13:41 schrieb Fabian Grünbichler:
> @@ -251,22 +311,30 @@ sub scan_local_volumes {
> next if @{$dl->{$storeid}} == 0;
>
> my $targetsid = PVE::QemuServer::map_id($self->{opts}->{storagemap}, $storeid);
> - # check if storage is available on target node
Am 03.02.22 um 13:41 schrieb Fabian Grünbichler:
> +sub storage_migrate {
> +my ($tunnel, $storecfg, $volid, $local_vmid, $remote_vmid, $opts, $log) = @_;
> +
> +my $targetsid = $opts->{targetsid};
> +my $bwlimit = $opts->{bwlimit};
> +
> +# JSONSchema and get_bandwidth_limit use
Am 03.02.22 um 13:41 schrieb Fabian Grünbichler:
> +if ($cpid) {
> + $writer->writer();
> + $reader->reader();
> + my $tunnel = {
> + writer => $writer,
> + reader => $reader,
> + pid => $cpid,
> + log => $log,
> + };
> +
> + eval {
> +
Am 03.02.22 um 13:41 schrieb Fabian Grünbichler:
> diff --git a/PVE/Storage.pm b/PVE/Storage.pm
> index 837df1b..682dd38 100755
> --- a/PVE/Storage.pm
> +++ b/PVE/Storage.pm
> @@ -1833,6 +1833,72 @@ sub volume_imported_message {
> }
> }
>
> +# $format and $volname are requests and might be
--- Begin Message ---
February 4, 2022 10:50 AM, "Aaron Lauterer" wrote:
> If an OSD is removed during the wrong conditions, it could lead to
> blocked IO or worst case data loss.
>
> Check against global flags that limit the capabilities of Ceph to heal
> itself (norebalance, norecover, noout)
If an OSD is removed during the wrong conditions, it could lead to
blocked IO or worst case data loss.
Check against global flags that limit the capabilities of Ceph to heal
itself (norebalance, norecover, noout) and if there are degraded
objects.
Signed-off-by: Aaron Lauterer
---
Those are the
On 03.02.22 13:41, Fabian Grünbichler wrote:
> Signed-off-by: Fabian Grünbichler
> ---
> .gitignore    |  1 +
> .cargo/config |  5 +
> Cargo.toml    | 11 +++
> 3 files changed, 17 insertions(+)
> create mode 100644 .gitignore
> create mode 100644 .cargo/config
> create mode 1006