Hi Thomas!
On 18/04/2021 19:01, Thomas Lamprecht wrote:
On 12.01.21 10:19, aderum...@odiso.com wrote:
Hi,
I'm looking into unifying the SDN .cfg files into a single file,
using something different from the section config format.
We have relationships like zones->vnets->subnets,
so I was thinking about someth
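Purely as an illustration of the idea (the key names below are hypothetical, not the actual SDN schema), a single nested structure could express that relationship directly:

    use strict;
    use warnings;

    # Purely illustrative nested layout for the zones->vnets->subnets
    # relationship; the key names are hypothetical, not the real SDN schema.
    my $sdn = {
        zone1 => {
            type  => 'vlan',
            vnets => {
                vnet1 => {
                    tag     => 100,
                    subnets => {
                        '10.0.0.0/24' => { gateway => '10.0.0.1' },
                    },
                },
            },
        },
    };

    for my $zone (sort keys %$sdn) {
        for my $vnet (sort keys %{ $sdn->{$zone}->{vnets} }) {
            print "$zone/$vnet\n";
        }
    }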
Signed-off-by: Fabian Ebner
---
PVE/Storage/RBDPlugin.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 42641e2..a8d1243 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -503,7 +503,7 @@ sub allo
From: Alwin Antreich
In Ceph Octopus the device_health_metrics pool is auto-created with 1
PG. Since Ceph has the ability to split/merge PGs, hitting the wrong PG
count is now less of an issue anyhow.
Signed-off-by: Alwin Antreich
Signed-off-by: Dominik Csapak
---
PVE/API2/Ceph/Pools.pm | 2 +
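The kind of change involved can be sketched as a simplified, hypothetical JSON-schema style parameter definition (not the actual Pools.pm code):

    use strict;
    use warnings;

    # Hypothetical, simplified schema fragment: allow pg_num values down to 1,
    # since the autoscaler can split/merge PGs later anyway.
    my $pg_num_property = {
        description => 'Number of placement groups.',
        type        => 'integer',
        optional    => 1,
        default     => 128,
        minimum     => 1,       # relaxed lower bound
        maximum     => 32768,
    };

    print "pg_num may go as low as $pg_num_property->{minimum}\n";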
we do nothing with that field, so leave it as it is
Signed-off-by: Dominik Csapak
---
PVE/API2/Ceph/Pools.pm | 1 -
1 file changed, 1 deletion(-)
diff --git a/PVE/API2/Ceph/Pools.pm b/PVE/API2/Ceph/Pools.pm
index 939a1f8a..45f0c47c 100644
--- a/PVE/API2/Ceph/Pools.pm
+++ b/PVE/API2/Ceph/Pools
From: Alwin Antreich
this is used to fine-tune the ceph autoscaler
Signed-off-by: Alwin Antreich
Signed-off-by: Dominik Csapak
---
www/manager6/ceph/Pool.js | 18 ++
1 file changed, 18 insertions(+)
diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
index e19f
From: Alwin Antreich
the properties target_size_ratio, target_size_bytes and pg_num_min are
used to fine-tune the pg_autoscaler and are set on a pool. The updated
pool list now shows the autoscale settings & status, including the new
(optimal) target PGs. To make it easier for new users to get/set th
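As a rough sketch of how such per-pool settings could be applied one by one (the pool name and apply_pool_setting() helper below are hypothetical; it only prints the equivalent ceph CLI calls):

    use strict;
    use warnings;

    # Illustrative only: the three autoscaler-related pool properties and how
    # they could be pushed to Ceph one by one. apply_pool_setting() is a
    # hypothetical helper that just prints the equivalent CLI call.
    my %autoscale_settings = (
        target_size_ratio => 0.2,          # relative share of the cluster
        target_size_bytes => 500 * 2**30,  # expected pool size, 500 GiB
        pg_num_min        => 16,           # lower bound for the autoscaler
    );

    sub apply_pool_setting {
        my ($pool, $prop, $value) = @_;
        print "ceph osd pool set $pool $prop $value\n";
    }

    apply_pool_setting('mypool', $_, $autoscale_settings{$_})
        for sort keys %autoscale_settings;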
From: Alwin Antreich
Since Ceph Nautilus 14.2.10 and Octopus 15.2.2, the min_size of a pool is
calculated from the size (round(size / 2)). When size is applied to the
pool after min_size, the manually specified min_size will be overwritten.
Signed-off-by: Alwin Antreich
Signed-off-by: Dominik Csapak
---
www/manager6/ceph/Pool.js | 2 ++
1 file changed, 2 insertions(+)
diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
index 45333f4d..430decbb 100644
--- a/www/manager6/ceph/Pool.js
+++ b/www/manager6/ceph/Pool.js
@@ -201,6 +201,8 @@ Ext.define('P
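The ordering problem itself can be shown with a tiny sketch (pool name and the printed commands are hypothetical; only the size-before-min_size ordering matters):

    use strict;
    use warnings;

    # Hypothetical sketch: apply 'size' before 'min_size', because setting
    # 'size' makes Ceph recalculate min_size as round(size / 2) and would
    # overwrite a manually chosen value that was applied earlier.
    my %settings = (min_size => 2, size => 3);
    my %prio     = (size => 0, min_size => 1);

    my @ordered = sort { ($prio{$a} // 2) <=> ($prio{$b} // 2) } keys %settings;

    print "ceph osd pool set mypool $_ $settings{$_}\n" for @ordered;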
From: Alwin Antreich
* add the ability to edit an existing pool
* allow adjustment of autoscale settings
* warn if user specifies min_size 1
* disallow min_size 1 on pool create
* calculate min_size replica by size
Signed-off-by: Alwin Antreich
Signed-off-by: Dominik Csapak
---
www/manager6/c
originally from Alwin Antreich
mostly a rebase on master, a few eslint fixes (squashed into Alwin's
commits) and 3 small fixups
Alwin Antreich (6):
ceph: add autoscale_status to api calls
ceph: gui: add autoscale & flatten pool view
ceph: set allowed minimal pg_num down to 1
ceph: gui: rewor
From: Alwin Antreich
Letting the columns flex requires a flat column header structure.
Signed-off-by: Alwin Antreich
Signed-off-by: Dominik Csapak
---
www/manager6/ceph/Pool.js | 138 ++
1 file changed, 82 insertions(+), 56 deletions(-)
diff --git a/www/manager
the field gives us a string, so the second condition could never
be true; parse it to a float instead
Signed-off-by: Dominik Csapak
---
www/manager6/ceph/Pool.js | 13 +
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/P
Not replacing it with return, because the current behavior is dying:
Can't "next" outside a loop block
and the single existing caller in pve-manager's API2/Ceph/OSD.pm does not check
the return value.
Also check for $st, which can be undefined in case a non-existent path was
provided. This als
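For context, a minimal sketch of the two points being made (check_path() is a hypothetical helper, not the code from the patch):

    use strict;
    use warnings;
    use File::stat;

    # stat() returns undef for a non-existent path, so $st has to be checked
    # before it is dereferenced; dying (or warning) is clearer than a bare
    # 'next', which outside of a loop block is itself a runtime error
    # ("Can't "next" outside a loop block").
    sub check_path {
        my ($path) = @_;
        my $st = stat($path);
        die "unable to stat '$path'\n" if !defined $st;
        return $st->mode;
    }

    eval { check_path('/does/not/exist') };
    warn "caught: $@" if $@;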
Signed-off-by: Lorenz Stechauner
---
oathkeygen | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/oathkeygen b/oathkeygen
index 89e385a..82e4eec 100755
--- a/oathkeygen
+++ b/oathkeygen
@@ -6,6 +6,6 @@ use MIME::Base32; #libmime-base32-perl
my $test;
open(RND, "/dev/urandom"
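For reference, the general shape of such a key generator looks roughly like this (a sketch of the approach, not the patched script itself):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use MIME::Base32;  # libmime-base32-perl

    # Read a few random bytes and print them Base32-encoded, which is the
    # usual textual form of an OATH/TOTP secret key.
    open(my $rnd, '<', '/dev/urandom') or die "unable to open urandom: $!\n";
    read($rnd, my $bytes, 10) == 10 or die "short read from urandom\n";
    close($rnd);

    print MIME::Base32::encode_base32($bytes), "\n";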
Fixes an issue in which a VM/CT fails to automatically restart after a
failed stop-mode backup.
Also fixes a minor typo in a comment
Signed-off-by: Dylan Whyte
---
Note:
v1->v2:
- Fix the issue from within PVE::VZDump::QemuServer, rather than adding a
tedious sleep call and state checking in P
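The underlying pattern can be sketched like this (hypothetical helper names, not the actual PVE::VZDump::QemuServer code): remember that the guest was running before the backup stopped it, and restart it even when the backup itself failed:

    use strict;
    use warnings;

    # Hypothetical sketch of the restart-on-cleanup pattern; stop_guest() and
    # start_guest() are stand-ins for the real stop/start logic.
    sub stop_guest  { print "stopping guest $_[0]\n" }
    sub start_guest { print "restarting guest $_[0]\n" }

    sub backup_with_stop_mode {
        my ($vmid, $was_running, $do_backup) = @_;

        stop_guest($vmid) if $was_running;

        eval { $do_backup->($vmid) };
        my $err = $@;

        # restart the guest in any case, not only when the backup succeeded
        start_guest($vmid) if $was_running;

        die $err if $err;
    }

    eval { backup_with_stop_mode(100, 1, sub { die "backup failed\n" }) };
    warn "backup error: $@" if $@;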
We enable/disable spice/xtermjs for the console button in the 'load'
callback of the statusstore, depending on the VM's capabilities,
but until the first load happens there, the only safe option is novnc.
So we have to disable xtermjs and spice on start, else a click on
the button might open a window that
On 16.04.21 15:15, Mira Limbeck wrote:
> These 2 files can be helpful for issues with multipath. The multipath -v3
> output is too large most of the time and not required for analyzing and
> solving the issues.
>
> Signed-off-by: Mira Limbeck
> ---
> PVE/Report.pm | 8 ++--
> 1 file changed,
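Just to illustrate the shape of such a change (a hypothetical report definition; the two files are not named in this excerpt, so the entries below are examples only):

    use strict;
    use warnings;

    # Illustrative only: a report section as a list of small commands whose
    # output gets collected; the entries are examples, not necessarily the
    # two files from the patch (they are not named in this excerpt).
    my $report = {
        disks => [
            'lsblk --ascii',
            'cat /etc/multipath.conf',   # small, in contrast to 'multipath -v3'
            'multipath -ll',
        ],
    };

    print "$_\n" for @{ $report->{disks} };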
On 11.01.21 12:42, Dominic Jäger wrote:
> Allow destroying only OSDs that belong to the node that has been specified in
> the API path.
>
> So if
> - OSD 1 belongs to node A and
> - OSD 2 belongs to node B
> then
> - pvesh delete nodes/A/ceph/osd/1 is allowed but
> - pvesh delete nodes/A/ceph/
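The check being asked for can be sketched as follows (hypothetical data layout, not the actual API code): look up which node an OSD belongs to and refuse the destroy when it does not match the node from the API path:

    use strict;
    use warnings;

    # Hypothetical mapping of OSD id to owning node, mirroring the example
    # from the quoted mail (OSD 1 on node A, OSD 2 on node B).
    my %osd_node = (1 => 'A', 2 => 'B');

    sub destroy_osd {
        my ($node, $osdid) = @_;
        my $owner = $osd_node{$osdid}
            // die "OSD $osdid does not exist\n";
        die "OSD $osdid does not belong to node $node, but to $owner\n"
            if $owner ne $node;
        print "destroying OSD $osdid on node $node\n";
    }

    destroy_osd('A', 1);            # allowed
    eval { destroy_osd('A', 2) };   # rejected
    warn $@ if $@;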
On 20.04.21 14:07, Fabian Ebner wrote:
> Not replacing it with return, because the current behavior is dying:
> Can't "next" outside a loop block
> and the single existing caller in pve-manager's API2/Ceph/OSD.pm does not
> check the return value.
>
> Also check for $st, which can be undefi
On 20.04.21 14:11, Lorenz Stechauner wrote:
> Signed-off-by: Lorenz Stechauner
> ---
> oathkeygen | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
>
applied, thanks!
On 20.04.21 16:35, Dominik Csapak wrote:
> We enable/disable spice/xtermjs for the console button in the 'load'
> callback of the statusstore, depending on the VM's capabilities,
> but until the first load happens there, the only safe option is novnc.
>
> So we have to disable xtermjs and spice on start, e
On 20.04.21 10:14, Fabian Ebner wrote:
> Signed-off-by: Fabian Ebner
> ---
> PVE/Storage/RBDPlugin.pm | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
>
applied, thanks!
The goal of this new API endpoint is to provide an easy way to move a
disk between VMs, as this was only possible with manual intervention
until now: either by renaming the VM disk or by manually adding the
disk's volid to the config of the other VM.
The latter can easily cause unexpected behavior s
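The manual step being replaced can be sketched roughly like this (hypothetical helper and config hashes, not the new endpoint's implementation): detach the volume from one config and attach it to the other, making sure nothing is referenced twice:

    use strict;
    use warnings;

    # Hypothetical sketch of moving a disk entry from one VM config hash to
    # another; the real endpoint additionally renames the volume so its name
    # matches the target VMID.
    my %source_conf = (scsi0 => 'local-lvm:vm-100-disk-0,size=32G');
    my %target_conf = ();

    sub reassign_disk {
        my ($src, $dst, $src_key, $dst_key) = @_;
        my $volume = delete $src->{$src_key}
            or die "no drive '$src_key' in source config\n";
        die "drive '$dst_key' already exists in target config\n"
            if exists $dst->{$dst_key};
        $dst->{$dst_key} = $volume;
    }

    reassign_disk(\%source_conf, \%target_conf, 'scsi0', 'scsi1');
    print "target scsi1: $target_conf{scsi1}\n";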
also add alias to keep move_disk working.
Signed-off-by: Aaron Lauterer
---
this one is optional, but would align the use of - instead of _ in the
command names
PVE/CLI/qm.pm | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index 6d78600..b629e
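The alias idea can be sketched with a simplified, hypothetical command table (this is not the actual qm.pm $cmddef entry):

    use strict;
    use warnings;

    # Hypothetical, simplified command table: keep the old underscore name
    # working by pointing it at the same definition as the new dashed name.
    my $cmddef = {
        'move-disk' => [ 'disk', 'move', 'moves a disk to another VM' ],
    };
    $cmddef->{move_disk} = $cmddef->{'move-disk'};   # backwards-compatible alias

    print "$_\n" for sort keys %$cmddef;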
Signed-off-by: Aaron Lauterer
---
v4->v7: rebased
www/manager6/Utils.js | 1 +
1 file changed, 1 insertion(+)
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index f502950f..51942938 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -1801,6 +1801,7 @@ Ext.define('PVE.
Signed-off-by: Aaron Lauterer
---
v6 -> v7:
* added target drive parameter
* renamed parameters to include source/target
* use - instead of _ in command name
v5 -> v6: nothing
v4 -> v5: renamed `drive_key` to `drive_name`
v3 -> v4: nothing
v2 -> v3: renamed parameter `disk` to `drive_key`
rfc -> v
This series implements a new feature which allows users to easily
reassign disks between VMs. Currently this is only possible with one of
the following manual steps:
* rename the disk image/file and do a `qm rescan`
* configure the disk manually and use the old image name, having an
image for
Functionality has been added for the following storage types:
* dir based ones
* ZFS
* (thin) LVM
* Ceph
A new feature `reassign` has been introduced to mark which storage
plugins support it.
The API version and age have been bumped.
Signed-off-by: Aaron Lauterer
---
v6 -> v7:
We now plac
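How a plugin can advertise such a feature can be sketched like this (a simplified, hypothetical feature table; the real PVE::Storage plugin interface takes more parameters):

    use strict;
    use warnings;

    # Hypothetical, simplified feature table as a storage plugin might keep
    # it: each feature lists the volume states it is supported for.
    my $features = {
        snapshot => { current => 1, snap => 1 },
        clone    => { snap => 1 },
        reassign => { current => 1 },   # newly introduced feature
    };

    sub volume_has_feature {
        my ($feature, $state) = @_;
        return $features->{$feature} && $features->{$feature}->{$state} ? 1 : 0;
    }

    print 'reassign supported: ', volume_has_feature('reassign', 'current'), "\n";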
We will also be using the mechanics for ZFS systems booting with legacy
BIOS boot, and the tool is also used in PMG and PBS.
A symlink is kept in place for compatibility reasons
Signed-off-by: Stoiko Ivanov
---
Makefile | 2 +-
bin/Makefile
This patch adds support for booting non-UEFI/legacy/BIOS-boot ZFS
installs by using proxmox-boot-tool to copy the kernels to the ESP
and then generate a fitting grub config for booting from the vfat ESP:
* grub is installed onto the ESP and the MBR points to the ESP
* after copying/deleting the k
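Independent of how the real tool is implemented, the workflow can be sketched roughly like this (all paths below are placeholders; this only illustrates the copy-and-generate-config loop):

    use strict;
    use warnings;

    # Hypothetical sketch of the loop: for every configured ESP, copy the
    # selected kernels/initrds over and (re)generate a boot config there.
    # All paths below are placeholders; nothing is actually written.
    my @esp_mountpoints = ('/media/esp0', '/media/esp1');
    my @boot_files      = ('vmlinuz-example', 'initrd.img-example');

    for my $esp (@esp_mountpoints) {
        print "cp /boot/$_ $esp/\n" for @boot_files;
        print "generate grub config below $esp/\n";
    }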
This patchset has been long overdue and complements the solution for booting
ZFS on UEFI systems using systemd-boot.
With the upgrade to ZFS 2.0.0 (and its support for ZSTD compression), quite
a few users found out that their systems were still booted with legacy BIOS
boot and were consequently r
Dominik,
Thank you for the insight. There is certainly complexity I did not
consider, even if I were to look only at the narrow case of local ZFS
storage. Regardless, this would be helpful to me, and if I make anything,
I will submit it. I have already signed the CLA and have code accepted in
pve-zsy
On 4/20/21 18:20, Thomas Lamprecht wrote:
On 20.04.21 16:35, Dominik Csapak wrote:
We enable/disable spice/xtermjs for the console button in the 'load'
callback of the statusstore, depending on the VM's capabilities,
but until the first load happens there, the only safe option is novnc.
So we have to di