With the change to the rust backend for the subscription check, the
return value changed as well.
Signed-off-by: Aaron Lauterer
---
This is the simple fix. We could also change it to lower case first, if
we expect that this might change again.
I don't think that PMG is affected by this, am not 1
On 10/19/22 12:00, Aaron Lauterer wrote:
With the change to the rust backend for the subscription check, the
return value changed as well.
Signed-off-by: Aaron Lauterer
---
This is the simple fix. We could also change it to lower case first, if
we expect that this might change again.
I don't t
With the change to the rust backend for the subscription check, the
return value changed as well.
Signed-off-by: Aaron Lauterer
---
changes: use toLowerCase() and safeguard with ?
www/manager6/dc/Support.js | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/www/manager6/dc/Supp
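The change described above ("use toLowerCase() and safeguard with ?") can be sketched as a small Node.js snippet. This is a hedged illustration, not the exact Support.js code: the function name `isActive` and the object shape are assumptions; the point is that lower-casing the status and using optional chaining makes the check robust against both the new Rust backend's casing and a missing status field.

```javascript
// Illustrative sketch (not the actual Support.js code): compare the
// subscription status case-insensitively, and tolerate a missing
// status field or a missing subscription object entirely.
function isActive(subscription) {
    // ?. short-circuits to undefined if subscription or status is absent,
    // so no TypeError is thrown and the comparison simply yields false.
    return subscription?.status?.toLowerCase() === 'active';
}

console.log(isActive({ status: 'Active' })); // true (old capitalized value)
console.log(isActive({ status: 'active' })); // true (new lower-case value)
console.log(isActive({}));                   // false (no status field)
console.log(isActive(undefined));            // false (no subscription at all)
```

This matches the suggestion in the cover note: lower-casing first means the check keeps working even if the backend's casing changes again.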
It has been possible for quite a while to live migrate replicated
guests.
Signed-off-by: Aaron Lauterer
---
pvesr.adoc | 1 -
1 file changed, 1 deletion(-)
diff --git a/pvesr.adoc b/pvesr.adoc
index e508eee..1981b9c 100644
--- a/pvesr.adoc
+++ b/pvesr.adoc
@@ -46,7 +46,6 @@ but not twice to t
Am 07/10/2022 um 14:41 schrieb Fiona Ebner:
> As mentioned in man 5 systemd.resource-control, the CPUShares setting
> was replaced by CPUWeight and is deprecated.
>
> Signed-off-by: Fiona Ebner
> ---
> qm.adoc | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
>
applied both docs patches
Am 07/10/2022 um 14:41 schrieb Fiona Ebner:
> while making it take the value directly instead of the config.
>
> Signed-off-by: Fiona Ebner
> ---
> src/PVE/GuestHelpers.pm | 16
> 1 file changed, 16 insertions(+)
>
> diff --git a/src/PVE/GuestHelpers.pm b/src/PVE/GuestHelpers.p
Am 19/10/2022 um 12:34 schrieb Aaron Lauterer:
> With the change to the rust backend for the subscription check, the
> return value changed as well.
>
> Signed-off-by: Aaron Lauterer
> ---
> changes: use toLowerCase() and safeguard with ?
>
> www/manager6/dc/Support.js | 2 +-
> 1 file changed,
when using a hyper-converged cluster it was previously possible to add
the pool used by the ceph-mgr modules (".mgr" since quincy or
"device_health_metrics" previously) as an RBD storage. this would lead
to all kinds of errors when that storage was used (e.g.: VMs missing
their disks after a migrat
since ceph luminous (ceph 12) pools need to be associated with at
least one application. expose this information here too so that clients
of this endpoint can use that information
Signed-off-by: Stefan Sterz
---
even though an application needs to be defined for a pool since
luminous, i tried to ma
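A client of the pool-listing endpoint could use the exposed application info to offer only suitable pools, which is the motivation stated above. The following is a minimal sketch under assumptions: the field name `applications` and the pool object shape are illustrative, not the endpoint's actual schema.

```javascript
// Hypothetical pool list as a client might receive it; the ".mgr" pool
// belongs to the ceph-mgr modules and must not be offered as RBD storage.
const pools = [
    { pool_name: 'vm-pool', applications: { rbd: {} } },
    { pool_name: '.mgr',    applications: { mgr: {} } },
];

// Keep only pools tagged with the "rbd" application; the ?? {} guard
// tolerates pools that report no application metadata at all.
const rbdPools = pools.filter(p => 'rbd' in (p.applications ?? {}));

console.log(rbdPools.map(p => p.pool_name)); // [ 'vm-pool' ]
```

Filtering on the application tag is exactly what prevents the ".mgr" / "device_health_metrics" misconfiguration described in the earlier message.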
On 10/19/22 14:16, Stefan Sterz wrote:
> when using a hyper-converged cluster it was previously possible to add
> the pool used by the ceph-mgr modules (".mgr" since quincy or
> "device_health_metrics" previously) as an RBD storage. this would lead
> to all kinds of errors when that storage was use
It's possible to have a
/proc/sys/net/ipv6/ directory
but no
/proc/sys/net/ipv6/conf/$iface/disable_ipv6
Signed-off-by: Alexandre Derumier
---
src/PVE/Network.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/PVE/Network.pm b/src/PVE/Network.pm
index c468e40..9d726cd 10
--- Begin Message ---
On October 19, 2022 2:16:44 PM GMT+02:00, Stefan Sterz wrote:
>when using a hyper-converged cluster it was previously possible to add
>the pool used by the ceph-mgr modules (".mgr" since quincy or
>"device_health_metrics" previously) as an RBD storage. this would lead
>to al