On 01.03.21 at 11:18, Dietmar Maurer wrote:
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index f401baf..e579cdf 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -6991,7 +6991,15 @@ sub clone_disk {
# that is given by the OVMF_VARS.fd
my $src_path = PVE::Sto
Signed-off-by: Alexandre Derumier
---
pve-network.adoc | 3 ---
1 file changed, 3 deletions(-)
diff --git a/pve-network.adoc b/pve-network.adoc
index 34cc6c8..add220e 100644
--- a/pve-network.adoc
+++ b/pve-network.adoc
@@ -38,9 +38,6 @@ Reload Network with ifupdown2
With the optional `ifupdown
This code is quite strange. Can you please use a
normal if .. then .. else .. instead?
> +push @$cmd, '-H' if $healthonly;
> +push @$cmd, '-a', '-A', '-f', 'brief' if !$healthonly;
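The restructuring being asked for could look like the following standalone sketch; `$healthonly` and the `smartctl` base command are assumptions based on the quoted lines, not the actual patch context:

```perl
use strict;
use warnings;

# Sketch of the suggested if/else form for building the argument list.
# $healthonly and the 'smartctl' base command are assumed for illustration.
my $healthonly = 0;
my $cmd = ['smartctl'];
if ($healthonly) {
    push @$cmd, '-H';
} else {
    push @$cmd, '-a', '-A', '-f', 'brief';
}
print join(' ', @$cmd), "\n";
```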
___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.pr
to be re-used in the vmstatus() call.
Signed-off-by: Fabian Ebner
---
PVE/QemuServer/Machine.pm | 16 +++-
1 file changed, 11 insertions(+), 5 deletions(-)
diff --git a/PVE/QemuServer/Machine.pm b/PVE/QemuServer/Machine.pm
index c168ade..2474951 100644
--- a/PVE/QemuServer/Machine.p
Signed-off-by: Fabian Ebner
---
PVE/QemuServer.pm | 32
1 file changed, 32 insertions(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index a498444..e866faa 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2555,6 +2555,16 @@ our $vmstatus_return_p
in the VM summary page.
Signed-off-by: Fabian Ebner
---
I felt like the running machine is less interesting to users, so I only
added the QEMU version. And I added it directly to the status, where it
seems to fit well enough.
Of course I can change it to be its own line and also add a line for the
This RFC introduces support for Ceph's RBD namespaces.
A new storage config parameter 'namespace' defines the namespace to be
used for the RBD storage.
The namespace must already exist in the Ceph cluster as it is not
automatically created.
The main intention is to use this for external Ceph clus
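For illustration, a storage definition using the new parameter might look like this hypothetical `/etc/pve/storage.cfg` fragment; the storage ID, pool, monitor address, and namespace name are all made up:

```
rbd: ext-ceph-ns
        pool rbd
        namespace testing
        content images
        monhost 192.168.1.1
        username admin
```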
if the -a option isn't passed, -H might report a failing disk as
'PASSED' even when it is actually in a corrupted state.
Signed-off-by: Oguz Bektas
---
PVE/Diskmanage.pm | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/PVE/Diskmanage.pm b/PVE/Diskmanage.pm
index 64bb81
Moving to Ceph is very slow when bs=1. Instead, use a larger block size in
combination with the (currently) PVE-specific osize option to specify the
desired output size.
Suggested-by: Dietmar Maurer
Signed-off-by: Fabian Ebner
---
Thanks to Dietmar for pointing me in the right direction.
We ac
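Assembling such a copy command might look like the following sketch; the paths, the block size of 64k, and the exact option spelling are assumptions for illustration, only the `osize` option itself is taken from the message above:

```perl
use strict;
use warnings;

# Sketch: assemble a 'qemu-img dd' invocation that uses a larger block
# size plus the (PVE-specific) osize option to pin the output size,
# instead of copying byte-by-byte with bs=1. Paths are hypothetical.
my $size = 540672;                  # example output size in bytes
my $src  = '/tmp/efivars-src.raw';  # hypothetical source
my $dst  = '/tmp/efivars-dst.raw';  # hypothetical destination
my @cmd  = ('qemu-img', 'dd', '-f', 'raw', '-O', 'raw',
            'bs=64k', "osize=$size", "if=$src", "of=$dst");
print join(' ', @cmd), "\n";
```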
After a short off-list discussion with Thomas, we decided to first
assert that the size is a multiple of 1024 and then simply use bs=1024.
If a new architecture with a strange-sized VARS file comes along we have
to adapt it though.
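The agreed approach could be sketched as follows; the concrete size value is made up:

```perl
use strict;
use warnings;

# Sketch of the agreed approach: assert that the image size is a
# multiple of 1024, then copy with a fixed bs=1024.
my $size = 540672;  # example image size in bytes
die "image size $size is not a multiple of 1024\n" if $size % 1024;
my $bs    = 1024;
my $count = $size / $bs;
print "bs=$bs count=$count\n";
```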
On 01.03.21 at 10:42, Fabian Ebner wrote:
Moving to Ceph is
On 01.03.21 at 11:13, Stefan Reiter wrote:
On 3/1/21 11:06 AM, Fabian Ebner wrote:
On 01.03.21 at 10:54, Stefan Reiter wrote:
On 3/1/21 10:42 AM, Fabian Ebner wrote:
Moving to Ceph is very slow when bs=1. Instead, use the biggest
possible power
of two <= 1024. At the moment our EFI image siz
Moving to Ceph is very slow when bs=1. Instead, use the biggest possible power
of two <= 1024. At the moment our EFI image sizes are multiples of 1024, so
just using 1024 wouldn't be a problem, but this feels more future-proof.
Signed-off-by: Fabian Ebner
---
I did not see a way for 'qemu-img d
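The "biggest possible power of two <= 1024" selection discussed in this thread could be sketched like this; the helper name and the example sizes are illustrative, not from the actual patch:

```perl
use strict;
use warnings;

# Illustrative helper: largest power of two that divides $size,
# capped at 1024 (the selection rule discussed in this thread).
sub pick_block_size {
    my ($size) = @_;
    my $bs = 1;
    $bs *= 2 while $bs < 1024 && $size % ($bs * 2) == 0;
    return $bs;
}

print pick_block_size(540672), "\n"; # multiple of 1024 -> 1024
print pick_block_size(4100), "\n";   # 4100 = 4 * 1025  -> 4
```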