On 9/27/19 8:53 AM, Alexandre DERUMIER wrote:
> Hi,
>
> I have noticed that when you upgrade libknet1 (which fixes the corosync crash),
>
> corosync is not auto-restarted.
>
>
> Maybe we should bump the corosync package too, to force a restart?
>
Hmm, not sure about that, we always tell people
As the current versions are Debian Buster and PVE 6, mention them
instead of the old ones in the installation documentation.
Signed-off-by: Dominic Jäger
---
pve-installation.adoc | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/pve-installation.adoc b/pve-installation.adoc
ind
On 9/27/19 9:20 AM, Dominic Jäger wrote:
> As the current versions are Debian Buster and PVE 6, mention them
> instead of the old ones in the installation documentation.
>
> Signed-off-by: Dominic Jäger
> ---
> pve-installation.adoc | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> dif
On September 27, 2019 8:53 am, Alexandre DERUMIER wrote:
> Hi,
>
> I have noticed that when you upgrade libknet1 (which fixes the corosync crash),
>
> corosync is not auto-restarted.
yes, just like any other library except libc6 in Debian. there is
tooling to handle this in general ('needrestart
one comment inline
On September 26, 2019 1:38 pm, Fabian Ebner wrote:
> Not every command parameter is 'target' anymore, so
> it was necessary to modify the parsing of $sd->{cmd}.
>
> Just changing the state to request_stop is not enough,
> we need to actually update the service configuration as
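As a rough sketch of what parsing a command whose parameter is not a node could look like (the whitespace split and the stop/timeout handling below are assumptions for illustration, not taken from the actual patch):

use strict;
use warnings;

# Hypothetical sketch, not the actual pve-ha-manager code: the stored command
# string is no longer always "<cmd> <target>", so split it into the command
# name and an optional parameter and interpret that parameter per command.
sub parse_sd_cmd {
    my ($sd) = @_;

    my ($cmd, $param) = split(/\s+/, $sd->{cmd} // '', 2);

    if ($cmd eq 'migrate' || $cmd eq 'relocate') {
        return ($cmd, { target => $param });   # parameter is a target node
    } elsif ($cmd eq 'stop') {
        return ($cmd, { timeout => $param });  # parameter is not a node here
    }
    return ($cmd, {});
}

my ($cmd, $opts) = parse_sd_cmd({ cmd => 'stop 60' });
print "$cmd timeout=$opts->{timeout}\n";   # prints "stop timeout=60"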
On 9/26/19 4:53 PM, Thomas Lamprecht wrote:
Hmm, how about renaming the button to "Clear" (or "Reset") only when we move it
up? The association with what gets cleared should then be, well, clear.
Looks good to me. Then I'd do the same for the "Reset Layout" button.
I'd like the button alignmen
On 9/27/19 10:22 AM, Dominic Jäger wrote:
> Just to be sure: This was not something I changed. This happens to me on
> Chromium and Firefox with many (random?) zoom levels, but not with all.
Yes, that was always the case, just noticed it again when looking
at your screenshot.
On 9/27/19 10:07 AM, Fabian Grünbichler wrote:
>> @@ -651,9 +662,17 @@ sub next_state_started {
>>     $haenv->log('info', "$cmd service '$sid' to node '$target'");
>>     &$change_service_state($self, $sid, $cmd, node => $sd->{node}, target => $target);
>>
Do a very minimal fix, effectively just making the code path for 5.0
to 5.1 and 5.2 to 5.3 the same one, which can be done by backporting
the new copy_kernel helpers into the kernel, as the existing ones are
indirectly exported as GPL symbol
Signed-off-by: Thomas Lamprecht
---
...r-save-restore-
This allows us to fix the ZFS SIMD patch for the 5.0 kernel much more easily,
as we can use the same code path as used for 5.2 and newer kernels there.
The helpers themselves do not do anything new, they just expose the existing
functionality for modules to use.
Signed-off-by: Thomas Lamprecht
---
...kport-copy_kernel_to_XYZ_err-helpers.p
see GitHub issue comments for some details:
https://github.com/zfsonlinux/zfs/issues/9346#issuecomment-534984486
https://github.com/zfsonlinux/zfs/issues/9346#issuecomment-535133283
and the two patches for the remaining ones.
If this looks OK I can apply and push this out; the patch which updates
the
small nit inline, otherwise
Acked-by: Fabian Grünbichler
On September 27, 2019 12:49 pm, Thomas Lamprecht wrote:
> This allows us to fix the ZFS SIMD patch for 5.0 kernel way easier,
> as we can use the same code path as used for 5.2 and newer kernels
> there.
>
> The helper itself do not do an
Acked-by: Fabian Grünbichler
On September 27, 2019 12:49 pm, Thomas Lamprecht wrote:
> Do a very minimal fix, effectively just making the code path for 5.0
> to 5.1 and 5.2 to 5.3 the same one, which can be done by backporting
> the new copy_kernel helpers into the kernel, as the existing ones ar
With the changes to pve-storage in commit 56362cf, the startup hangs for
5 minutes on ZFS if the cloudinit disk does not exist. Instead of
calling activate_volume followed by file_size_info, we now call
volume_size_info. This should work reliably on all storages that support
cloudinit disks.
Signed-
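For context, a minimal sketch of the change described above, assuming the usual PVE::Storage helper signatures; the storage name, volid and variable names are made up and not taken from the actual commit:

use strict;
use warnings;
use PVE::Storage;

# Hypothetical illustration of the described change.
my $storecfg = PVE::Storage::config();
my $volid    = 'local-zfs:vm-100-cloudinit';

# Old approach (sketch): activate the volume, then stat the file. On ZFS this
# could block for minutes if the cloudinit volume does not actually exist.
#   PVE::Storage::activate_volumes($storecfg, [$volid]);
#   my $size = PVE::Storage::Plugin::file_size_info($path);

# New approach: ask the storage layer for the size directly.
my $size = PVE::Storage::volume_size_info($storecfg, $volid);
print "cloudinit volume size: " . ($size // 0) . " bytes\n";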
When adding a cloudinit disk, it does not contain media=cdrom until it is
actually created. This means the check in check_replication fails to
detect cloudinit and it is treated as a normal disk. Then parse_volname
fails because the volume name does not match the vm-$vmid-XYZ format. To fix
this we now check ex
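The message is cut off here; as a rough, hypothetical sketch of what an explicit cloudinit check could look like (the helper name and regex are assumptions, not taken from the actual patch):

use strict;
use warnings;

# Hypothetical helper: detect a cloudinit volume by its name instead of
# relying on media=cdrom, which is only set once the disk has been created.
sub volname_is_cloudinit {
    my ($volid) = @_;
    return $volid =~ m/vm-\d+-cloudinit/ ? 1 : 0;
}

# Skip cloudinit volumes before they reach parse_volname, which would fail
# on the not-yet-created volume.
for my $volid ('local-zfs:vm-100-disk-0', 'local-zfs:vm-100-cloudinit') {
    next if volname_is_cloudinit($volid);
    print "would consider $volid for replication\n";
}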
Hi
After some investigation, I found a way to mitigate two issues I have when
working with ZFS over iSCSI (in my case ZoL / LIO):
* The first, and more important, issue is that drive-mirror (with a running
guest) will make the guest FS panic if the source storage is ZFS over iSCSI.
See [
Fixes the issue of regeneration of the cloud-init instance-id if
there are multiple network interfaces defined (caused by unsorted hash keys).
Signed-off-by: Beat Jörg
Beat Jörg (1):
Fix #2390: Sort @ifaces array to avoid regeneration of instance-id
PVE/QemuServer/Cloudinit.pm | 6 +++---
1 file
Signed-off-by: Beat Jörg
---
PVE/QemuServer/Cloudinit.pm | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/PVE/QemuServer/Cloudinit.pm b/PVE/QemuServer/Cloudinit.pm
index ab001f9..c368dd9 100644
--- a/PVE/QemuServer/Cloudinit.pm
+++ b/PVE/QemuServer/Cloudinit.pm
@@ -173,7
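The hunk above is cut off in the archive. As a minimal standalone sketch of the idea in the cover letter, sorting the interface names so the generated network config (and anything derived from it, such as an instance-id) stays stable across regenerations; the data and the derivation below are made up for illustration and are not the actual Cloudinit.pm code:

use strict;
use warnings;
use Digest::SHA qw(sha1_hex);

# Made-up example data; the real code reads the NICs from the VM config.
my %nics = (
    net1 => 'ip=10.0.1.10/24',
    net0 => 'ip=10.0.0.10/24',
);

# Without sorting, Perl's hash key order varies between processes, so the
# concatenated network config (and any value derived from it) can change on
# every regeneration even though nothing was edited. Sorting the interface
# names makes the output deterministic.
my @ifaces = sort keys %nics;
my $netconfig = join("\n", map { "$_: $nics{$_}" } @ifaces);

print sha1_hex($netconfig), "\n";   # stable across runs thanks to the sort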