--- Begin Message ---
Hi,
Not trying to hijack this thread - feel free to poke me off-list if this
message is not supposed to go here.
Concerning cloud-init: there is an issue with Proxmox cloud-init where, if you
keep the cloud-init drive attached, the sshd host keys change after every
boot.
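If this is the host-key regeneration behaviour, one possible workaround (an assumption on my side, not a confirmed fix, and the file path is hypothetical) would be to tell cloud-init inside the guest to keep its existing host keys:

```yaml
# /etc/cloud/cloud.cfg.d/99-keep-hostkeys.cfg (hypothetical path)
# Do not delete and regenerate the SSH host keys on each boot.
ssh_deletekeys: false
```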
--- Begin Message ---
Hi Fiona,
I just confirmed that in addition to the issue reported in
https://bugzilla.proxmox.com/show_bug.cgi?id=4073 (live-migrated VM hung
using 100% CPU), we also reproduce the issue reported in
https://forum.proxmox.com/threads/zeitspr%C3%BCnge-in-vms-seit-pve-7-2.112756/
Code LGTM and tested fine.
For the series, consider:
Reviewed-by: Dominik Csapak
Tested-by: Dominik Csapak
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
MTU is used in several locations in the code, but there is no field in the
interface to edit the value. There should be a form field for editing the MTU.
Thomas Crummett (1):
Adds MTU to NetworkEdit
www/manager6/qemu/NetworkEdit.js | 13 +
1 file changed, 13 insertions(+)
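For illustration, the validation such a field would need to enforce can be sketched in plain JavaScript (a sketch only - the function name and the accepted range, with 1 meaning "inherit the bridge MTU", are assumptions on my side and not taken from the patch itself):

```javascript
// Hypothetical helper mirroring what an MTU form field might validate.
// Assumed range: 1 (inherit from bridge) or 576..65520.
function isValidMtu(mtu) {
    if (mtu === undefined || mtu === null || mtu === '') {
        return true; // empty means "use the bridge default"
    }
    const n = Number(mtu);
    return Number.isInteger(n) && (n === 1 || (n >= 576 && n <= 65520));
}
```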
--
2.32.0 (Apple Git-
---
www/manager6/qemu/NetworkEdit.js | 13 +
1 file changed, 13 insertions(+)
diff --git a/www/manager6/qemu/NetworkEdit.js b/www/manager6/qemu/NetworkEdit.js
index b39cffdc..8e70e386 100644
--- a/www/manager6/qemu/NetworkEdit.js
+++ b/www/manager6/qemu/NetworkEdit.js
@@ -15,6 +15,7 @
currently pmxcfs and the running MTA (postfix in most cases, I assume)
have no ordering between them - resulting in the MTA starting before
pmxcfs.
This can be problematic in case a mail for 'root' is in the
mailq: postfix tries to deliver the mail - pvemailforward tries to
look up the destination address, which is not available before pmxcfs
has started.
If the systemd ordering is okay, what about doing it like we do with Ceph,
where we place a "ceph-after-pve-cluster.conf" drop-in for each service instead
of changing pve-cluster.service?
See my recent patch regarding this [0].
[0] https://lists.proxmox.com/pipermail/pve-devel/2022-July/053546.html
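For illustration, such a drop-in could look like the following (a sketch only: the unit name postfix@-.service and the drop-in path are assumptions based on Debian's postfix packaging, not part of any patch in this thread):

```
# /etc/systemd/system/postfix@-.service.d/after-pve-cluster.conf (hypothetical)
[Unit]
After=pve-cluster.service
Wants=pve-cluster.service
```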
On Mon, 5 Sep 2022 14:06:02 +0200
Aaron Lauterer wrote:
> If the systemd ordering is okay, what about how we have with Ceph, where we
> place the "ceph-after-pve-cluster.conf" for each service instead of changing
> the pve-cluster.service?
Thanks for the hint - can be done - the gain would be