On Thu, 31 Oct 2024 14:43:44 +0100,
Aki Tuomi wrote:
> > On 31/10/2024 15:29 EET Kirill A. Korinsky via dovecot wrote:
> >
> > Good day,
> >
> > I just discovered a core file from dovecot's imap.
Good day,
I just discovered a core file from dovecot's imap.
It is running on OpenBSD 7.6 and dovecot is built with
https://github.com/dovecot/core/pull/223
Stacktrace:
#0  fts_search_merge_scores_or (dest=0x73b552104f38, src=<optimized out>)
    at fts-search.c:278
        src2 = <optimized out>
        src_map = <optimized out>
Folks,
finally, I was able to track down the issue and fix it.
Here is the PR: https://github.com/dovecot/core/pull/223
With this, replication_sync_timeout = 10 works without any crash for more
than 7 hours on my cluster. Before, it crashed on each processed email.
Thus, this fix allows switching to TCP+SSL replication.
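For reference, the replication settings this refers to look roughly like the
following; a sketch reconstructed from the doveconf output quoted later in the
thread (the file path and grouping are illustrative, not my actual config):

```
# conf.d fragment (path illustrative)
mail_replica = tcps:mx2.catap.net
replication_sync_timeout = 10
```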
On Tue, 24 Sep 2024 16:05:14 +0200,
Kirill A. Korinsky wrote:
>
> #9  0x0c7bec3e1054 in pool_data_stack_realloc (pool=<optimized out>,
>     mem=0xc7d0cfeb028, old_size=4294967296, new_size=8589934592)
>     at mempool-datastack.c:173
Here:
- 4294967296 is 0x100000000
- 8589934592 is 0x200000000
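Those two values can be sanity-checked with printf(1); note that new_size is
exactly double old_size, i.e. a 4 GiB allocation being doubled to 8 GiB:

```shell
# Print the two sizes from the backtrace in hex
printf '%#x\n' 4294967296   # -> 0x100000000 (4 GiB, 2^32)
printf '%#x\n' 8589934592   # -> 0x200000000 (8 GiB, 2^33)
```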
Well,
Using replication_sync_timeout = 60 or 15 leads to a crash; 10 doesn't.
It is Dovecot v2.3.21.1 (d492236fa0) on OpenBSD.
Inside mail logs:
Sep 24 15:58:20 mx2 dovecot: lmtp(70545): Connect from local
Sep 24 15:58:25 mx2 dovecot: replicator: Panic: data stack: Out of memory when
allocating
On Tue, 24 Sep 2024 00:20:58 +0200,
Kirill A. Korinsky wrote:
>
> I also discover this commit
> https://github.com/dovecot/core/commit/3001c1410e72440e48c285890dd31625b7e12555
>
> As a naive approach I have backported it to 2.3.21.1; let's see how it goes.
>
It doesn't.
Greetings,
On Sun, 21 Apr 2024 22:52:41 +0200,
Kirill A. Korinsky wrote:
>
> Excluding INBOX from the virtual folder seems to avoid the issue.
>
And today it exploded inside the virtual.Archive folder.
In the logs, regarding the email which triggered the issue, I have:
Sep 23
Greetings,
On Sun, 21 Apr 2024 21:52:41 +0100,
Kirill A. Korinsky wrote:
>
> Excluding INBOX from the virtual folder seems to avoid the issue.
>
I'd like to confirm that excluding INBOX from the virtual folder indeed
avoids that issue.
Any suggestion how can I
Greetings,
Since the last status update I've switched my workflow from actively using
virtual.All to the virtual.Archive folder.
After the switch I've recreated virtual.All on both servers.
Folder definition:
mx1$ cat /etc/dovecot/virtual/Archive/dovecot-virtual
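The file contents are cut off above. Purely as an illustration (my guess,
modeled on the All definition shown elsewhere in the thread, not the actual
file), an Archive-only virtual folder definition could look like:

```
Archive
Archive/*
all
```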
Greetings,
On Thu, 18 Apr 2024 17:51:06 +0200,
Kirill A. Korinsky wrote:
>
> All of this allows reconstructing the workflow that leads to the issue:
> 1. mx2 received a new mail.
> 2. mx2 registered that mail as 147699 inside the virtual folder.
> 3. for some reason this email wasn'
Greetings,
It seems that I have caught proof that it is definitely a bug in dsync.
On Thu, 18 Apr 2024 16:59:23 +0200,
Kirill A. Korinsky wrote:
>
> Anyway, I'll be back.
>
As usual, after I sent an update here, the issue came back.
mx1# doveadm fetch "uid guid" -u
Greetings,
On Wed, 28 Feb 2024 22:15:55 +0100,
Kirill A. Korinsky wrote:
>
> As the next step I've reduced logs to verbose, let's see how it goes.
>
Reducing verbosity re-triggered the issue.
Meanwhile, as a lucky guess, I've increased the number of open files on al
Greetings,
Right now both servers run with debug logging of dsync and produce ~1G of
logs per day.
It seems that such verbose output decreases the probability of the issue.
For a week, no more UID desync in the virtual mailboxes.
mx1# doveadm fetch -u kir...@korins.ky 'uid' mailbox virtual.All | grep
'^uid:' |
Greetings,
As promised: I'm back.
After processing about 740 mails, it happened again.
With SSH-based replication it isn't so dramatic, but it exists.
See:
mx1# doveadm fetch -u kir...@korins.ky 'uid' mailbox virtual.All | grep
'^uid:' | tail -n 5
uid: 145485
uid: 145486
uid: 145487
uid: 1454
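To see which side is behind, the two UID lists can be compared with comm(1).
In this sketch two sample files stand in for the real
`doveadm fetch ... | grep '^uid:'` output from mx1 and mx2 (file paths and
UID values are illustrative):

```shell
# Stand-ins for the real doveadm output from each replica (UIDs illustrative)
printf 'uid: 145485\nuid: 145486\nuid: 145487\n' > /tmp/uids.mx1
printf 'uid: 145485\nuid: 145486\n' > /tmp/uids.mx2
# comm -23 prints lines present only in the first (sorted) file,
# i.e. messages the second replica does not have yet.
comm -23 /tmp/uids.mx1 /tmp/uids.mx2   # -> uid: 145487
```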
Greetings,
I'd like to report that switching from TCP-based replication to SSH-based
replication seems to solve this issue.
SSH-based replication has lasted for almost 24h. After I switched to it, I
removed all virtual folders for all users, which triggers re-creating them.
Since then, I can't reproduce it.
mx1
Greetings,
I have a cron script which runs doveadm with the search query NOT mailbox
Junk SEEN SINCE 30d.
Everything works well, with one exception: if a user removes an email while
it's running, it may lead to an email from cron like:
doveadm(...): Error: fetch(hdr) failed for box=virtual.All uid=145266:
Message wa
On Fri, 26 Jan 2024 01:44:06 +0100,
Kirill A. Korinsky wrote:
>
> So far so good.
>
And here we go again:
mx1# doveadm fetch -u kir...@korins.ky 'uid' mailbox virtual.All | grep
'^uid:' | tail -n 20
uid: 144044
uid: 144045
uid: 144046
uid: 144047
ui
On Sun, 21 Jan 2024 17:31:37 +0100,
Kirill A. Korinsky wrote:
>
> mail_replica = tcps:mx2.catap.net
>
I've discovered that using TCP+SSL replication leads to errors in the logs:
Error: Couldn't lock ../.dovecot-sync.lock: fcntl(../.dovecot-sync.lock,
write-lock, F_SET
On Sun, 21 Jan 2024 16:34:44 +0100,
Aki Tuomi wrote:
>
> Can you send output of doveconf -n?
>
Sure, here it is:
# 2.3.20 (80a5ac675d): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.20 (149edcf2)
# OS: OpenBSD 7.4 amd64
# Hostname: mx1.catap.net
default_vsz_limit = 10 G
dovead
I'd like to add that allowing messages to be saved via the virtual folder to
some folder by changing its definition to
*
!Archive
-Trash
-Trash/*
-Junk
-Junk/*
all
doesn't help, and synchronization fails as usual:
Jan 20 15:34:06 mx1 dovecot:
doveadm(kir...@korins.ky)<75563>: Error: Can't c
Greetings,
I have a setup with two Dovecot servers and dsync replication between them.
On both of them I have virtual folders, for example All with this config:
mx1# cat /etc/dovecot/virtual/All/dovecot-virtual
*
-Trash
-Trash/*
-Junk
-Junk/*
all
All virtual folders are excluded fro