Signed-off-by: Dominic Jäger
---
PVE/VZDump.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index 6e0d3dbf..aea7389b 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
@@ -522,7 +522,7 @@ sub getlock {
my $maxwait = $self->{opts}->{lockwait}
and make the two options mutually exclusive as long
as they are specified on the same level (e.g. both
from the storage configuration). Otherwise prefer
option > storage config > default (only maxfiles has a default currently).
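As a rough sketch of these semantics (illustrative only, not the actual vzdump code; $opts, $scfg and $defaults stand in for the passed options, the storage section and the built-in defaults):

use strict;
use warnings;

# example values only
my $opts     = { 'prune-backups' => { 'keep-last' => 3 } };  # option level
my $scfg     = { maxfiles => 5 };                            # storage config level
my $defaults = { maxfiles => 1 };                            # built-in default

# mutually exclusive when given on the same level
die "only one of 'maxfiles' and 'prune-backups' may be specified\n"
    if defined($opts->{maxfiles}) && defined($opts->{'prune-backups'});

# otherwise the more specific level wins: option > storage config > default
my $maxfiles      = $opts->{maxfiles} // $scfg->{maxfiles} // $defaults->{maxfiles};
my $prune_backups = $opts->{'prune-backups'} // $scfg->{'prune-backups'};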
Defines the backup limit for prune-backups as the sum of all
keep-values.
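A hedged illustration of that rule (not the actual implementation; it assumes the parsed prune-backups options use the usual keep-* names, including keep-all):

use strict;
use warnings;

# the backup limit is the sum of all keep-* values;
# keep-all (if set) means there is no limit at all
sub backup_limit {
    my ($prune) = @_;
    return undef if $prune->{'keep-all'};
    my $limit = 0;
    $limit += $prune->{$_} // 0 for qw(
        keep-last keep-hourly keep-daily keep-weekly keep-monthly keep-yearly
    );
    return $limit;
}

print backup_limit({ 'keep-last' => 3, 'keep-daily' => 7 }), "\n";  # prints 10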
Make use of the new 'prune-backups' storage property with vzdump.
Changes from v4:
* drop already applied patches
* rebase on current master
* fix typo
* add newline to error message
Fabian Ebner (2):
Allow prune-backups as an alternative to maxfiles
Always use prune-backups i
For the use case with '--dumpdir', it's not possible to call prune_backups
directly, so a little bit of special handling is required there (see the sketch below the diffstat).
Signed-off-by: Fabian Ebner
---
PVE/VZDump.pm | 42 --
1 file changed, 16 insertions(+), 26 deletions(-)
diff --git
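The special handling mentioned above could look roughly like this (a hedged sketch only, not the code from the patch; the file-name pattern is simplified and prune_dumpdir plus the plain keep count are made up for the example):

use strict;
use warnings;

# collect the existing backups of a guest in a plain dump directory
sub list_backups_in_dir {
    my ($dumpdir, $vmid) = @_;
    my @files = glob("$dumpdir/vzdump-*-$vmid-*");   # simplified pattern
    return map { { path => $_, ctime => (stat($_))[10] } } @files;
}

# keep the newest $keep backups, delete the rest
sub prune_dumpdir {
    my ($dumpdir, $vmid, $keep) = @_;
    my @backups = sort { $a->{ctime} <=> $b->{ctime} } list_backups_in_dir($dumpdir, $vmid);
    my $remove = scalar(@backups) - $keep;
    return if $remove <= 0;
    for my $backup (@backups[0 .. $remove - 1]) {
        unlink $backup->{path}
            or warn "could not remove $backup->{path} - $!\n";
    }
}

prune_dumpdir('/var/lib/vz/dump', 100, 3);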
On September 28, 2020 5:59 pm, Alexandre DERUMIER wrote:
> Here a new test http://odisoweb1.odiso.net/test5
>
> This has occurred at corosync start
>
>
> node1:
> -
> start corosync : 17:30:19
>
>
> node2: /etc/pve locked
> --
> Current time : 17:30:24
>
>
> I have done backtr
On 29.09.20 10:07, Dominic Jäger wrote:
> Signed-off-by: Dominic Jäger
> ---
> PVE/VZDump.pm | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
>
applied, thanks!
>>with a change of how the logging is set up (I now suspect that some
>>messages might get dropped if the logging throughput is high enough),
>>let's hope this gets us the information we need. please repeat test5
>>again with these packages.
I'll test this afternoon
>>is there anything sp
Here is a new test:
http://odisoweb1.odiso.net/test6/
node1
-
start corosync : 12:08:33
node2 (/etc/pve lock)
-
Current time : 12:08:39
node1 (stop corosync : unlock /etc/pve)
-
12:28:11 : systemctl stop corosync
backtraces: 12:26:30
coredump : 12:27:21
- Original Message -
>>
>>node1 (stop corosync : unlock /etc/pve)
>>-
>>12:28:11 : systemctl stop corosync
Sorry, this was wrong: I need to start corosync after the stop to get it working
again.
I'll re-upload these logs.
- Original Message -
From: "aderumier"
To: "Proxmox VE development discussion"
Sent: Ma
I have reuploaded the logs
node1
-
start corosync : 12:08:33 (corosync.log)
node2 (/etc/pve lock)
-
Current time : 12:08:39
node1 (stop corosync : ---> not unlocked) (corosync-stop.log)
-
12:28:11 : systemctl stop corosync
node2 (start corosync: ---> /etc/pve unlocked) (cor
On 16.09.20 14:14, Stoiko Ivanov wrote:
> This patch addresses the problems some users experience when some zpools are
> created/imported with cachefile (which then causes other pools not to get
> imported during boot) - when our tooling creates a pool we explicitly
> instantiate the service with th
huge thanks for all the work on this btw!
I think I've found a likely culprit (a missing lock around a
non-thread-safe corosync library call) based on the last logs (which
were now finally complete!).
rebuilt packages with a proof-of-concept fix:
23b03a48d3aa9c14e86fe8cf9bbb7b00bd8fe9483084b9e
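The fix itself is in pmxcfs (C code), but the pattern behind it is generic: every caller of a non-thread-safe library has to serialize through one shared lock. A purely illustrative Perl-level sketch of that pattern (nothing here corresponds to the real corosync API):

use strict;
use warnings;
use threads;
use threads::shared;

my $lib_lock :shared;    # the one lock all threads have to take

sub not_thread_safe {    # stand-in for a library call that must not run concurrently
    my ($msg) = @_;
    return length($msg);
}

sub safe_call {
    my ($msg) = @_;
    lock($lib_lock);     # without this, concurrent callers could corrupt library state
    return not_thread_safe($msg);
}

my @workers = map {
    threads->create(sub { safe_call("msg $_") for 1 .. 1000 });
} 1 .. 4;
$_->join for @workers;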
>>huge thanks for all the work on this btw!
huge thanks to you! ;)
>>I think I've found a likely culprit (a missing lock around a
>>non-thread-safe corosync library call) based on the last logs (which
>>were now finally complete!).
YES :)
>>if feedback from your end is positive, I'll w
On 28.09.20 17:48, Stefan Reiter wrote:
> With the transaction patches, patch 0026-PVE-Backup-modify-job-api.patch
> is no longer necessary, so drop it and rebase all following patches on
> top.
>
> Signed-off-by: Stefan Reiter
> ---
>
applied, thanks!
On 28.09.20 17:48, Stefan Reiter wrote:
> ...and avoid printing 100% status twice
>
> Signed-off-by: Stefan Reiter
> ---
> PVE/VZDump/QemuServer.pm | 10 +-
> 1 file changed, 9 insertions(+), 1 deletion(-)
>
>
applied, thanks! But I did s/verification/backup validation/ to avoid some
commit 815df2dd08ac4c7295135262e60d64fbb57b8f5c introduced a small issue
when activating linked clone volumes - the volname passed contains
basevol/subvol, which needs to be translated to subvol.
Using the path method should be a robust way to get the actual path for
activation.
Found and tested
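For illustration, the difference between the two approaches could be sketched like this (example names only; the path() call in the comment refers to the storage plugin method and its exact signature may differ):

use strict;
use warnings;

# For a linked clone the passed volname carries the base volume as a
# prefix, e.g. 'basevol-100-disk-0/subvol-101-disk-0'; activation only
# needs the last component.
my $volname = 'basevol-100-disk-0/subvol-101-disk-0';
my ($basename, $name) = split m{/}, $volname, 2;
$name //= $basename;    # plain (non-clone) volumes have no prefix
print "dataset to activate: $name\n";

# The more robust variant is to let the plugin resolve the path itself,
# e.g. something like: my ($path) = $plugin->path($scfg, $volname, $storeid);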
On 16.09.20 14:14, Stoiko Ivanov wrote:
> When creating a new ZFS storage, also instantiate an import-unit for the pool.
> This should help mitigate the case where some pools don't get imported during
> boot, because they are not listed in an existing zpool.cache file.
>
> This patch needs the cor
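For illustration only (this assumes the zfs-import@.service template unit and is not necessarily what the patch does verbatim), instantiating the import unit for a newly created pool could look like:

use strict;
use warnings;

my $pool = 'tank';    # example pool name

# enable the per-pool import unit so the pool is imported at boot even
# when it is not listed in an existing zpool.cache file
system('systemctl', 'enable', "zfs-import\@$pool.service") == 0
    or die "enabling zfs-import\@$pool.service failed\n";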
On 29.09.20 18:49, Stoiko Ivanov wrote:
> commit 815df2dd08ac4c7295135262e60d64fbb57b8f5c introduced a small issue
> when activating linked clone volumes - the volname passed contains
> basevol/subvol, which needs to be translated to subvol.
>
> using the path method should be a robust way to get
Hi,
some news: my last test has now been running for 14h, and I haven't had any
problem :)
So it seems this is indeed fixed! Congratulations!
I wonder if it could be related to this forum user's report:
https://forum.proxmox.com/threads/proxmox-6-2-corosync-3-rare-and-spontaneous-disruptive-udp-5405-sto
Hi,
On 30.09.20 08:09, Alexandre DERUMIER wrote:
> some news: my last test has now been running for 14h, and I haven't had any
> problem :)
>
great! Thanks for all your testing time, this would have been much harder,
if even possible at all, without you providing so much testing effort on a
produc