Re: [pve-devel] [PATCH qemu-server] api: cloud-init support for mtu and userdata

2020-09-09 Thread proxmox
But this does not change the MTU inside the VM, right? -----Original Message----- From: Alexandre DERUMIER Sent: Monday, 7 September 2020 09:34 To: Proxmox VE development discussion Subject: Re: [pve-devel] [PATCH qemu-server] api: cloud-init support for mtu and userdata Hi, not re…

Re: [pve-devel] [PATCH qemu-server] api: cloud-init support for mtu and userdata

2020-09-09 Thread Alexandre DERUMIER
>> But this does not change the MTU inside the VM, right? Yes, it changes the MTU inside the VM! (at least on recent kernels; I don't remember when this was added) ----- Original Mail ----- From: "proxmox" To: "Proxmox VE development discussion" Sent: Wednesday, 9 September 2020 11:06:13 Subject: Re…

Re: [pve-devel] [PATCH qemu-server] api: cloud-init support for mtu and userdata

2020-09-09 Thread Alexandre DERUMIER
It was added to the kernel in 2016: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?h=v5.8.7&id=14de9d114a82a564b94388c95af79a701dc93134 ----- Original Mail ----- From: "aderumier" To: "Proxmox VE development discussion" Sent: Wednesday, 9 September 2020 15:02:33 Subject: Re…
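
For context, a minimal sketch of how an mtu option on a netX device could be rendered into the cloud-init network-config (v1 format). This is illustrative Perl in the spirit of qemu-server's generator, not the actual patch; the helper name and the $net->{mtu} field are assumptions:

    use strict;
    use warnings;

    # Hypothetical helper: render one interface stanza of the cloud-init
    # network-config; cloud-init applies the 'mtu:' key inside the guest.
    sub generate_net_entry {
        my ($iface, $net) = @_;
        my $entry = "  - type: physical\n    name: $iface\n";
        $entry .= "    mtu: $net->{mtu}\n" if defined($net->{mtu});
        $entry .= "    subnets:\n      - type: dhcp\n";
        return $entry;
    }

    print "version: 1\nconfig:\n" . generate_net_entry('eth0', { mtu => 1400 });

Independently of cloud-init, guests whose virtio-net driver has MTU feature support (the commit linked above) also pick up the MTU advertised by the device itself.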

Re: [pve-devel] [PATCH qemu-server] api: cloud-init support for mtu and userdata

2020-09-09 Thread Mira Limbeck
Hi, On 9/4/20 5:21 PM, proxmox wrote: Hello, I didn't know this patch mail got approved, so sorry for the (very) late response. My reason for not going with snippets was that they cannot be created via the API, so one would have to manually create a file on the target mach…

[pve-devel] applied: [PATCH v2 firewall] introduce new icmp-type parameter

2020-09-09 Thread Thomas Lamprecht
On 29.05.20 14:22, Mira Limbeck wrote: > Currently ICMP types are handled via 'dport'. This is not documented > anywhere except for a single comment line in the code. To untangle > the icmp-type handling from the dport handling, a new 'icmp-type' > parameter is introduced. > > The valid 'icmp-ty…
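
A sketch of what a rule using the new parameter could look like in a guest firewall config (e.g. /etc/pve/firewall/<vmid>.fw); the option spelling follows the patch discussion and should be checked against the applied version:

    [RULES]
    # allow ping without abusing 'dport' for the ICMP type
    IN ACCEPT -p icmp -icmp-type echo-request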

Re: [pve-devel] [PATCH v3 access-control] add ui capabilities endpoint

2020-09-09 Thread Thomas Lamprecht
On 06.07.20 14:45, Tim Marx wrote: > Signed-off-by: Tim Marx > --- > * no changes Maybe we could merge this into the "/access/permissions" endpoint, perhaps with a "heuristic" parameter? > > PVE/API2/AccessControl.pm | 29 + > 1 file changed, 29 insertions(+) > > di…

Re: [pve-devel] [PATCH lxc 0/2] fix apparmor rules and improve cgroupv2 experience

2020-09-09 Thread Thomas Lamprecht
On 22.07.20 13:05, Stoiko Ivanov wrote: > This patchset addresses two minor inconveniences I ran into while running my > host with 'systemd.unified_cgroup_hierarchy=1': > > * apparmor mount denies for '/proc/sys/kernel/random/boot_id' (this happens > irrespective of the cgroup layout) > * having t…

[pve-devel] applied: [PATCH container] setup: add kali-rolling in supported releases

2020-09-09 Thread Thomas Lamprecht
On 01.09.20 12:44, Oguz Bektas wrote: > for our setup purposes, it's the same as bullseye, since it follows a > rolling-release model. > > Signed-off-by: Oguz Bektas > --- > src/PVE/LXC/Setup/Debian.pm | 1 + > 1 file changed, 1 insertion(+) > > applied, thanks! Albeit it could make sense t…

[pve-devel] applied-series: [PATCH v2 container 1/2] Add module for reading state changes from monitor socket

2020-09-09 Thread Thomas Lamprecht
On 08.09.20 13:58, Fabian Ebner wrote: > Will be used to monitor state changes on container startup. > > Co-developed-by: Wolfgang Bumiller > Signed-off-by: Fabian Ebner > --- > > New in v2. > > I hard-coded the name of the abstract UNIX socket instead of > trying to re-implement lxc/monitor.c
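
The idea as a hedged Perl sketch; the abstract socket name below is illustrative (lxc derives the real one from the lxcpath in lxc/monitor.c, which is exactly what the patch hard-codes rather than re-implementing):

    use strict;
    use warnings;
    use IO::Socket::UNIX;

    # A leading NUL byte addresses the abstract UNIX socket namespace,
    # so no file system path is involved.
    my $sockname = "\0lxc/monitord";  # placeholder name, see lxc/monitor.c
    my $sock = IO::Socket::UNIX->new(
        Type => SOCK_STREAM(),
        Peer => $sockname,
    ) or die "cannot connect to lxc monitor socket: $!\n";

    # lxc emits fixed-size binary messages (container name plus state/event);
    # a real reader would unpack them into (name, type, value) records.
    while (sysread($sock, my $msg, 4096)) {
        print "raw monitor message: " . unpack('H*', $msg) . "\n";
    }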

[pve-devel] applied: [PATCH container 5/5] setup: heuristically warn if the FS hosting /etc is not mounted

2020-09-09 Thread Thomas Lamprecht
Check for the existence of /etc; use -e, as it could also be a symlink (and it's just a heuristic). But only do so if the expected ostype from the config does not match the detected one, as this normally indicates that we had a "real" distro running but detected the fallback "unmanaged". Only warn, tho…
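
In Perl, the described heuristic boils down to something like this (an illustrative sketch, not the applied code; variable and sub names are assumptions):

    use strict;
    use warnings;

    # Warn only when the config expects a managed distro, detection fell
    # back to 'unmanaged', and /etc is absent from the container root.
    # -e is used on purpose: /etc could be a symlink, and it's a heuristic.
    sub warn_if_etc_missing {
        my ($conf, $rootdir, $detected_ostype) = @_;
        if (defined($conf->{ostype}) && $conf->{ostype} ne 'unmanaged'
            && $detected_ostype eq 'unmanaged'
            && !-e "$rootdir/etc"
        ) {
            warn "/etc not found in container root - is the root file system mounted?\n";
        }
    }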

[pve-devel] applied: [PATCH container 2/5] implement debug start

2020-09-09 Thread Thomas Lamprecht
Signed-off-by: Thomas Lamprecht
---
 src/Makefile                     |  6 --
 src/PVE/API2/LXC/Status.pm       |  8 +++-
 src/PVE/LXC.pm                   | 10 +++---
 src/PVE/LXC/Config.pm            |  6 ++
 src/pve-container-debug@.service | 22 ++
 5 files c…
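
Usage-wise, assuming the new debug mode ends up exposed through pct (a sketch, not verified against the final CLI):

    # start the container with debug logging from lxc-start
    pct start 100 --debug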

[pve-devel] applied: [PATCH container 3/5] protected_call: remove left-over rootdir/dev mkdir

2020-09-09 Thread Thomas Lamprecht
commit 797e12e8a5df246d8afc53b045e632977cdf0088 got rid of our "just bind-mount the root /dev into the CT temporarily for some stuff" approach for good a while ago (2015), but creating the /dev directory in the CT root was kept, from what I can tell, by mistake. This can be a problem if, for whatever reason, the CT root…

[pve-devel] applied: [PATCH container 4/5] alpine: setup net: pass whole config to parent method

2020-09-09 Thread Thomas Lamprecht
We expect the whole $conf to be passed in a call to setup_network; a while ago it worked if only the netX keys were present, and for some plugins that is still the case. But in the Debian version, reused by Alpine, we now check whether the CT distro version is recent enough to support (or need) the a…

[pve-devel] applied: [PATCH container 1/5] ct start: track lxc-start stderr and print in error case

2020-09-09 Thread Thomas Lamprecht
Signed-off-by: Thomas Lamprecht
---
 src/PVE/LXC.pm             | 15 +++
 src/pve-container@.service |  2 +-
 2 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index e13f7e6..f99 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -2167,…
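
The idea in a nutshell, as a hedged Perl sketch (the applied patch may differ; PVE::Tools::run_command with an errfunc callback is existing pve-common plumbing):

    use strict;
    use warnings;
    use PVE::Tools;

    # Collect lxc-start's stderr so it can be surfaced in the error message
    # when startup fails, instead of disappearing into the journal only.
    sub start_container_foreground {
        my ($vmid) = @_;
        my $err = '';
        eval {
            PVE::Tools::run_command(
                ['lxc-start', '-F', '-n', $vmid],
                errfunc => sub { $err .= "$_[0]\n" },
            );
        };
        die "container $vmid failed to start: $err" if $@;
    }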

Re: [pve-devel] corosync bug: cluster break after 1 node clean shutdown

2020-09-09 Thread Thomas Lamprecht
On 08.09.20 09:11, Alexandre DERUMIER wrote: >>> It would really help if we could reproduce the bug somehow. Do you have an idea how >>> to trigger the bug? > > I really don't know. I'm currently trying to reproduce on the same cluster, > with softdog && noboot=1, and rebooting a node. > >

Re: [pve-devel] corosync bug: cluster break after 1 node clean shutdown

2020-09-09 Thread Alexandre DERUMIER
Thanks Thomas for the investigations. I'm still trying to reproduce... I think I have some special case here, because the forum user with 30 nodes had a corosync cluster split. (Note that I hit this bug 6 months ago, when shutting down a node too, and the only way out was a full stop of corosync…