[pve-devel] [PATCH v2] Add DNS challenge schema for knot.

2021-11-18 Thread Jens Meißner
Signed-off-by: Jens Meißner --- src/dns-challenge-schema.json | 14 +- 1 file changed, 13 insertions(+), 1 deletion(-) diff --git a/src/dns-challenge-schema.json b/src/dns-challenge-schema.json index a3a3ebc..6defbdd 100644 --- a/src/dns-challenge-schema.json +++ b/src/dns-challenge-

Re: [pve-devel] [PATCH] Add DNS challenge schema for knot.

2021-11-18 Thread Jens Meißner
On 17.11.21 at 17:27, Thomas Lamprecht wrote: > looks OK in general, one question inline... > > On 17.11.21 09:03, Jens Meißner wrote: >> Signed-off-by: Jens Meißner >> --- >> src/dns-challenge-schema.json | 19 ++- >> 1 file changed, 18 insertions(+), 1 deletion(-) >> >> diff -

[pve-devel] [PATCH widget-toolkit] data: diffstore: fix autoDestroyRstore option

2021-11-18 Thread Dominik Csapak
the change from extjs 6.0.1 to 7.0.0 removed 'onDestroy' but brought us 'doDestroy' for stores. We did not notice, since 'onDestroy' was a private method and thus the changelog did not mention this ('doDestroy' is a public method meant exactly for our use case). Signed-off-by: Dominik Csapak --- src

[pve-devel] [PATCH storage] lvm thin: add missing newline to error message

2021-11-18 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- PVE/Storage/LvmThinPlugin.pm | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/PVE/Storage/LvmThinPlugin.pm b/PVE/Storage/LvmThinPlugin.pm index f699acb..1d2e37c 100644 --- a/PVE/Storage/LvmThinPlugin.pm +++ b/PVE/Storage/LvmThinPlugin.pm @@ -216,

[pve-devel] applied: [PATCH widget-toolkit] data: diffstore: fix autoDestroyRstore option

2021-11-18 Thread Thomas Lamprecht
On 18.11.21 10:50, Dominik Csapak wrote: > the change from extjs 6.0.1 to 7.0.0 removed 'onDestroy' but brought > us 'doDestroy' for stores > > we did not notice since 'onDestroy' was a private method and thus > the changelog did not mention this (doDestroy is a public method meant > exactly for o

[pve-devel] applied: [PATCH storage] lvm thin: add missing newline to error message

2021-11-18 Thread Thomas Lamprecht
On 18.11.21 11:17, Fabian Ebner wrote: > Signed-off-by: Fabian Ebner > --- > PVE/Storage/LvmThinPlugin.pm | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > > applied, thanks!

[pve-devel] applied: [PATCH v2] Add DNS challenge schema for knot.

2021-11-18 Thread Thomas Lamprecht
On 18.11.21 09:49, Jens Meißner wrote: > Signed-off-by: Jens Meißner > --- > src/dns-challenge-schema.json | 14 +- > 1 file changed, 13 insertions(+), 1 deletion(-) > > applied, thanks! Maybe one could open an issue on acme.sh asking about the discrepancy between the documented

Re: [pve-devel] [PATCH v4 manager] api: apt: repos: fix interfacing with perlmod

2021-11-18 Thread Fabian Ebner
Seems like I forgot about this one. Still applies, and I quickly tested to make sure it still fixes the issue. On 16.07.21 at 15:27, Fabian Ebner wrote: Using pvesh create /nodes/pve701/apt/repositories --path "/etc/apt/sources.list" --index 0 --enabled 1 reliably leads to error:

[pve-devel] [PATCH manager 1/3] pvescheduler: catch errors in forked children

2021-11-18 Thread Dominik Csapak
if '$sub' dies, the error handler of PVE::Daemon triggers, which initiates a shutdown of the child, resulting in confusing error logs (e.g. 'got shutdown request, signal running jobs to stop'). Instead, run it under 'eval' and print the error to the syslog. Signed-off-by: Dominik Csapak ---
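A minimal sketch of the approach described above, not the actual patch: the helper name run_in_child is hypothetical, and it assumes syslog() is provided by PVE::SafeSyslog as in other PVE daemons.

    # sketch only: run the child's work under eval and log failures,
    # instead of letting die() trigger PVE::Daemon's shutdown handler
    use strict;
    use warnings;
    use PVE::SafeSyslog;   # provides syslog()

    sub run_in_child {     # hypothetical helper, name not from the patch
        my ($sub, @args) = @_;
        eval { $sub->(@args) };
        if (my $err = $@) {
            syslog('err', "ERROR: $err");   # log instead of shutting down
        }
    }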

[pve-devel] [PATCH manager 3/3] pvescheduler: implement graceful reloading

2021-11-18 Thread Dominik Csapak
utilize PVE::Daemon's 'hup' functionality to reload gracefully. This leaves the children running (if any) and hands them over to the new instance via ENV variables. After loading, check whether they are still around. Signed-off-by: Dominik Csapak --- the only weird behaviour is that the re-exec can be up to one m
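A minimal sketch of the hand-over idea only; the environment variable name PVE_SCHEDULER_CHILDREN and the comma-separated format are assumptions, and the re-exec itself is left to PVE::Daemon's hup machinery as the patch describes.

    # sketch only: names and ENV format are assumptions, not from the patch
    use strict;
    use warnings;

    # before re-exec: pass the pids of still-running children along
    sub hand_over_children {
        my ($children) = @_;   # hashref: pid => job type
        $ENV{PVE_SCHEDULER_CHILDREN} = join(',', keys %$children);
    }

    # after re-exec: re-adopt handed-over pids, dropping any that are gone
    sub adopt_children {
        my $handed_over = $ENV{PVE_SCHEDULER_CHILDREN} // '';
        my $children = {};
        for my $pid (split(/,/, $handed_over)) {
            $children->{$pid} = 1 if kill(0, $pid);   # signal 0 = existence check
        }
        return $children;
    }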

[pve-devel] [PATCH manager 2/3] pvescheduler: reworking child pid tracking

2021-11-18 Thread Dominik Csapak
previously, systemd timers were responsible for running replication jobs. Those timers would not restart if the previous one was still running. Though trying again while it is running does no real harm, it spams the log with errors about not being able to acquire the correct lock. To fix this, we
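A minimal sketch of per-job child tracking along those lines, with hypothetical names; the parent simply skips a run while the previous child for that job type is still alive, instead of failing on the lock.

    # sketch only: track one child pid per job type and skip re-running
    # a job whose previous child has not finished yet
    use strict;
    use warnings;
    use POSIX qw(WNOHANG);

    my $jobs = {};   # job type => pid of the currently running child

    sub run_job {
        my ($type, $sub) = @_;

        if (my $pid = $jobs->{$type}) {
            # waitpid returns 0 while the child is still running (and reaps it otherwise)
            return if waitpid($pid, WNOHANG) == 0;
            delete $jobs->{$type};
        }

        my $child = fork();
        die "fork failed: $!\n" if !defined($child);
        if ($child == 0) {         # child: do the work, then exit
            eval { $sub->() };
            exit(0);
        }
        $jobs->{$type} = $child;   # parent: remember the new child's pid
    }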