Re: system wakeup caused by write operations to /var/lib/dovecot/instances

2019-02-07 Thread Timo Sirainen via dovecot
On 2 Feb 2019, at 6.44, Tijl  wrote:
> How can dovecot be run without writing to /var/lib/dovecot/instances 
> every day? Is there a configuration setting for this?

You'd need to patch src/master/main.c instance_update_now() to remove:

to_instance = timeout_add((3600 * 12 + i_rand_limit(60 * 30)) * 1000,
  instance_update_now, list);

I'm not quite sure why I wrote such code to update it continuously.
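
A minimal sketch of that removal as a patch (context approximate; check your 
version's src/master/main.c before applying):

--- a/src/master/main.c
+++ b/src/master/main.c
@@ static void instance_update_now(struct master_instance_list *list)
-	to_instance = timeout_add((3600 * 12 + i_rand_limit(60 * 30)) * 1000,
-				  instance_update_now, list);

The instances file should then still be written once at startup, just not 
refreshed every ~12 hours afterwards.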



Re: [grosjo/fts-xapian] `doveadm fts rescan` removes all indices (#15)

2019-02-14 Thread Timo Sirainen via dovecot
Hi,

The rescan() function is a bit badly designed. Currently you could do what 
fts-lucene does (see the sketch after this list):
 - Get the list of UIDs for all mails in each folder
 - If Xapian has a UID that doesn't exist -> delete it from Xapian
 - If a UID is missing from Xapian -> expunge the rest of the UIDs in that 
folder, so the next indexing will cause them to be indexed
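
Sketched as a self-contained C toy of that per-folder comparison (the printf()s 
stand in for the real "delete from index" and "expunge from index" operations, 
which are hypothetical here):

#include <stdio.h>

/* Both UID lists are assumed sorted ascending, as IMAP UIDs are. */
static void rescan_folder(const unsigned int *mbox_uids, unsigned int mbox_count,
			  const unsigned int *fts_uids, unsigned int fts_count)
{
	unsigned int mi = 0, fi = 0;

	while (fi < fts_count) {
		if (mi == mbox_count || fts_uids[fi] < mbox_uids[mi]) {
			/* UID is in the FTS index but no longer in the
			   mailbox -> delete it from the index */
			printf("delete from index: uid %u\n", fts_uids[fi]);
			fi++;
		} else if (fts_uids[fi] > mbox_uids[mi]) {
			/* mailbox UID missing from the FTS index -> drop this
			   and all later UIDs from the index, so the next
			   indexing run re-indexes them */
			printf("expunge from index: uids >= %u\n", mbox_uids[mi]);
			return;
		} else {
			mi++;
			fi++;
		}
	}
	if (mi < mbox_count)
		printf("expunge from index: uids >= %u\n", mbox_uids[mi]);
}

int main(void)
{
	static const unsigned int mbox[] = { 1, 3, 4, 7 };
	static const unsigned int fts[] = { 1, 2, 3, 5 };

	rescan_folder(mbox, 4, fts, 4);
	return 0;
}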

The expunging of the rest of the mails is rather ugly, yes.. A better API would 
be if the backend simply had a way to iterate all mails in the index, preferably 
sorted by folder. Then more generic code could go through them and expunge 
the necessary mails and index the missing mails. Although not all FTS backends 
support indexing in the middle. Anyway, we don't really have time to implement 
this new API soon.

I'm not sure if this is a big problem though. I don't think most people running 
FTS have ever run rescan.

> On 8 Feb 2019, at 9.54, Joan Moreau via dovecot  wrote:
> 
> 
> 
>  
> Hi,
> 
> This is a core problem in Dovecot, in my understanding.
> 
> In my opinion, the rescan in dovecot should send to the FTS plugin the list 
> of "supposedly" indexed emails (UIDs), and the plugin shall purge the 
> redundant UIDs (i.e. UIDs present in the index but not in the list sent by 
> dovecot) and send back to dovecot the list of UIDs not in its indexes, so 
> Dovecot can send the missing emails one by one
> 
> 
> 
> What do you think?
> 
> 
> 
>  Original Message 
> 
> Subject:  [grosjo/fts-xapian] `doveadm fts rescan` removes all indices 
> (#15)
> Date: 2019-02-08 08:28
> From: Leonard Lausen 
> To:   grosjo/fts-xapian 
> Cc:   Subscribed 
> Reply-To: grosjo/fts-xapian 
> 
> 
> 
> doveadm fts rescan -A deletes all indices, i.e. all folders and files in the 
> xapian-indexes are deleted. However, according to man doveadm fts, the rescan 
> command should only
> 
>> Scan what mails exist in the full text search index and compare those to what
>> actually exist in mailboxes. This removes mails from the index that have 
>> already
>> been expunged and makes sure that the next doveadm index will index all the
>> missing mails (if any).
>> 
> Deleting all indices does not seem to be the intended action, especially as 
> constructing the index anew may take very long on large mailboxes.
> 
> 
> 
> 



Re: submission-login: Fatal: master: service(submission-login):

2019-03-11 Thread Timo Sirainen via dovecot
On 11 Mar 2019, at 13.53, Marcelo Coelho via dovecot  
wrote:
> 
> Hi everyone!
> 
> I’m using dovecot 2.3.5. submission-login is crashing many times a day:
> 
> Here is a sample error message:
> 
> dovecot: submission-login: Fatal: master: service(submission-login): child 
> 34247 killed with signal 11 (core not dumped - 
> https://dovecot.org/bugreport.html#coredumps - set service submission-login { 
> drop_priv_before_exec=yes })
> 
> After I added drop_priv_before_exec, I got these error messages:

If you're using Linux, you could alternatively: sysctl -w fs.suid_dumpable=2
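
To make that persist across reboots, you could also set it via sysctl 
configuration (the file name here is just an example):

# /etc/sysctl.d/60-coredumps.conf
fs.suid_dumpable = 2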

> submission-login: Error: master-service: cannot get filters: 
> net_connect_unix(/var/run/dovecot/config) failed: Permission denied
> dovecot: master: Error: service(submission-login): command startup failed, 
> throttling for 2 secs


This could be avoided by adding to dovecot.conf:

service config {
  unix_listener config {
mode = 0666
  }
}



Re: imap-hibernate not working

2019-03-11 Thread Timo Sirainen via dovecot
On 8 Mar 2019, at 20.44, Marcelo Coelho via dovecot  wrote:
> 
> Hi,
> 
> I've followed different setup instructions and I can't make imap-hibernate 
> work. I've tried vmail and dovecot as users, tried to set mode to 0666, 
> without success. I'm using FreeBSD 11.2.
> 
> Is imap-hibernate compatible with FreeBSD 11.2?
> 
> 
> 
> My operational system:
> 
> # uname -v
> FreeBSD 11.2-RELEASE-p9 #0: Tue Feb  5 15:30:36 UTC 2019 
> r...@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC 
> 
> Here are my logs:
> 
> Mar  8 15:30:57 servername dovecot: imap(u...@domain.com:52.125.128.90): 
> Error: kevent(-1) for notify remove failed: Bad file descriptor
> Mar  8 15:30:57 servername dovecot: imap(u...@domain.com:52.125.128.90): 
> Error: close(-1) for notify remove failed: Bad file descriptor
> Mar  8 15:30:57 servername dovecot: imap-hibernate: Error: Failed to parse 
> client input: Invalid peer_dev_minor value: 18446744073709486335
> Mar  8 15:30:57 servername dovecot: imap(u...@domain.com:52.125.128.90): 
> Error: /opt/dovecot/2.3.5/var/run/dovecot/imap-hibernate returned failure: 
> Failed to parse client input: Invalid peer_dev_minor value: 
> 18446744073709486335

Looks bad. I suppose it's broken on FreeBSD.



Re: Delayed flags changes over IDLE

2019-03-11 Thread Timo Sirainen via dovecot
On 10 Mar 2019, at 10.14, Kostya Vasilyev via dovecot  
wrote:
> 
> My mail is stored under ~/mail/.imap (not sure what this format is called), I 
> mean not "single file mbox".
> 
> I have not changed any IDLE related config settings:
> 
> doveconf  | grep -i idle
> default_idle_kill = 1 mins
> director_ping_idle_timeout = 30 secs
> imap_idle_notify_interval = 2 mins
> imapc_max_idle_time = 29 mins
> mailbox_idle_check_interval = 30 secs
> 
> What can I do to make Dovecot notify IDLE clients about flags changes - more 
> quickly? Preferably near-instant?

It should just work, assuming there aren't any weird inotify limits being hit; 
you should get errors logged about reaching those. You could see if it makes 
any difference to set mailbox_idle_check_interval = 1s



Re: Regression ACL & namespace prefix

2019-03-12 Thread Timo Sirainen via dovecot
On 18 Sep 2018, at 17.10, Michal Hlavinka  wrote:
> 
> Seems that for Global ACL directory, namespace prefix is not part of the 
> path, when looking for acl file.

Is there a reason you're using ACL directory instead of ACL file? I've rather 
been thinking about removing code for ACL directories entirely at some point.



Re: “doveadm mailbox” command fails with UTF-8 mailboxes

2019-03-12 Thread Timo Sirainen via dovecot
On 12 Mar 2019, at 21.20, Felipe Gasper via dovecot  wrote:
> 
> Hello,
> 
>   I’ve got a strange misconfiguration where the following command:
> 
> doveadm -f pager mailbox status -u spamutf8 'messages vsize guid' INBOX 
> 'INBOX.*'
> 
> … fails with error code 68, saying that it can’t find one of the mailboxes. 
> (It lists the user’s other mailboxes.) The name of the mailbox in question is 
> saved to disk in UTF-8 rather than mUTF-7, but strace shows that doveadm is 
> stat()ing the mUTF-7 path; the failure of that stat() is, assumedly, what 
> causes doveadm to report the error status.
> 
>   I’ve tried to paw through the source code to see what might be causing 
> this but haven’t made much headway. Can someone here point out where the 
> misconfiguration might be that is causing doveadm to stat() the mUTF-7 path 
> rather than UTF-8? Or perhaps offer any tips as to how I might diagnose 
> what’s going on? What causes doveadm to stat() one path or the other?

What's your doveconf -n? Using UTF-8 on the filesystem requires the "UTF-8" 
option in mail_location. Do you have it set? 
https://wiki2.dovecot.org/MailLocation
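
For example (a sketch, assuming a plain Maildir layout):

mail_location = maildir:~/Maildir:UTF-8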



Re: dovecot-keywords are not preserved any more when moving mails between folders

2019-03-12 Thread Timo Sirainen via dovecot
On 12 Mar 2019, at 17.55, Dan Christensen via dovecot  
wrote:
> 
> On Mar 12, 2019, Aki Tuomi via dovecot  wrote:
> 
>> On 12.3.2019 13.46, Piper Andreas via dovecot wrote:
>> 
>>> after an upgrade of dovecot-2.2.5 to dovecot-2.3.4 the dovecot-keywords,
>>> which in my case are set by thunderbird, are not preserved any more when
>>> moving a mail between folders.
>> 
>> We are aware of this bug, and it's being tracked as DOP-842.
> 
> Could this bug also be causing flags to be lost when using dsync
> (as I described in some messages to this list Feb 16 to 23)?
> 
> It seems like it might be a different bug, since in my experience
> the flags are sometimes synced and then removed later.

That bug is fixed with the attached patch.



2656.patch
Description: Binary data




Re: Delayed flags changes over IDLE

2019-03-12 Thread Timo Sirainen via dovecot
On 12 Mar 2019, at 10.21, Kostya Vasilyev via dovecot  
wrote:
> 
> It makes no difference if the IDLE connection does SELECT or SELECT 
> (CONDSTORE) prior to going IDLE.
> 
> But then as far as I know (?) - in Dovecot, once any connection uses 
> CONDSTORE ever, even once, Dovecot creates data structures to track MODSEQ 
> values, and those data structures are forever.

So are you saying that you can reproduce this if you do the following for a 
completely new user:

doveadm exec imap -u testuser1
a select inbox
b idle

And then run:
echo foo | doveadm save -u testuser1
doveadm flags add -u testuser1 '\Seen' mailbox inbox 1

And the EXISTS shows up immediately after saving, but the flag change won't 
show up? It works fine for me.

Do you see any errors in "doveadm log errors"? Can you reproduce this if you 
try with some other mailbox format than mbox?



Re: question about %u and %h used in mail_location

2019-03-12 Thread Timo Sirainen via dovecot
On 12 Mar 2019, at 12.04, Joe Wong via dovecot  wrote:
> 
> Hello,
> 
>  I have defined the following: 
> 
> mail_location = maildir:~:INBOX=~/Maildir:LAYOUT=fs:INDEX=%u
> 
> %u is retrieved via the database, and my username contains ":", which 
> creates some confusion for dovecot:
> 
> doveadm index -u user1:site@domain INBOX
> doveadm(user1:site@domain): Error: remote(192.168.10.22:24245 
> ): Namespace '': Unknown setting: site@domain
> 
> I cannot change the value in DB so is there a workaround to this problem?

Convert the username to not have ':' when it comes out of auth. This could be 
possible with an SQL userdb. With others .. I'm not sure, you might have to use 
Lua to convert it.
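
A sketch of the SQL userdb idea (the table and column names here are made up; 
the key point is that a userdb can return a rewritten "user" field):

# dovecot-sql.conf.ext (hypothetical schema)
user_query = SELECT home, uid, gid, REPLACE(userid, ':', '_') AS user FROM users WHERE userid = '%u'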



Re: “doveadm mailbox” command fails with UTF-8 mailboxes

2019-03-12 Thread Timo Sirainen via dovecot
On 13 Mar 2019, at 1.07, Helmut K. C. Tessarek  wrote:
> 
> On 2019-03-12 17:23, Timo Sirainen via dovecot wrote
>> https://wiki2.dovecot.org/MailLocation
> 
> Sorry, this might be off-topic, but while reading up on the link you sent,
> I've noticed the following sentence:
> 
> Use only absolute paths. Even if relative paths would appear to work, they
> might just as well break some day.
> 
> Yet, all examples in the documentation use ~ which is a relative path.

Well, I suppose it depends on definitions.. But I'm not calling ~/ a relative 
path, because it expands to an absolute path. The problem is using paths like 
"mail/", which depend on the current chdir.



Re: flags not synced correctly with dovecot sync (dsync)

2019-03-14 Thread Timo Sirainen via dovecot
On 13 Mar 2019, at 22.43, Dan Christensen via dovecot  
wrote:
> 
> On Mar 12, 2019, Dan Christensen via dovecot  wrote:
> 
>> In another thread, Timo wrote:
>> 
>> On Mar 12, 2019, Timo Sirainen via dovecot  wrote:
>> 
>>> That bug is fixed with attached patch.
>> 
>> I'll report back once I've tested it.
> 
> I applied 2656.patch to version 2.3.5 as found at
> 
>  https://repo.dovecot.org/ce-2.3-latest/ubuntu/bionic/pool/main/2.3.5-1_ce/
> 
> and rebuilt the ubuntu package.  I installed the resulting package on
> all three machines.  And still I can reproduce the bug involving unread
> messages getting marked as read.

Looks like you're also using Maildir, which has a separate bug where keywords 
are not copied correctly.



Re: imap ---- LIST "" * The returned mailbox does not display quotes

2019-03-21 Thread Timo Sirainen via dovecot
On 21 Mar 2019, at 6.22, lty via dovecot  wrote:
> 
> dovecot  version
> 
> v2.2.29
> 
> v2.2.36
> 
> v2.3.5
> 
> LIST "" * The returned mailbox does not display quotes
> 
> v2.1.17
> LIST "" * The returned mailbox shows quotation marks
> 
> Why is the quotation mark removed in the new version?
> 

Because they were unnecessary. The code was changed to use a more generic IMAP 
token writer, which adds quotes only when necessary.
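
For example, both of these are valid LIST responses; quoting is only required 
when the name contains characters such as spaces:

* LIST (\HasNoChildren) "." INBOX.Drafts
* LIST (\HasNoChildren) "." "INBOX.My Folder"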

> Is there any configuration option in the new version to add quotes?
> 

No. Does it break some client?



Re: Panic: file mail-transaction-log-file.c: line 105 (mail_transaction_log_file_free): assertion failed: (!file->locked)

2019-03-28 Thread Timo Sirainen via dovecot
On 27 Mar 2019, at 12.42, Arkadiusz Miśkiewicz via dovecot 
 wrote:
> 
> 
> Hello.
> 
> I have one account with heavy traffic (big mails) and quite often
> indexes get corrupted.
> 
> This is dovecot 2.3.5 on local fs (xfs), Linux kernel 4.19.20, glibc 2.28.
> 
> When corruption happens lmtp and pop3 segfault on accessing it like:
> 
>> Mar 27 11:13:50 mbox dovecot[22370]: lmtp(24428): Connect from local 
>>  
>>  
>> [0/0]
>> Mar 27 11:13:50 mbox dovecot[22370]: lmtp(piast_efaktury): pid=<24428> 
>> session=, Error: Index 
>> /var/mail/piast_efaktury/dovecot.index: Lost log for seq=13 offset=25648: 
>> Missing middle file seq=13 (between 13..4294967295, we have seqs 14,15): 
>> .log.2 contains file_seq=14 (initial_mapped=0, reason=Index mapped)
>> Mar 27 11:13:50 mbox dovecot[22370]: lmtp(piast_efaktury): pid=<24428> 
>> session=, Warning: fscking index file 
>> /var/mail/piast_efaktury/dovecot.index
>> Mar 27 11:13:50 mbox dovecot[22370]: lmtp(piast_efaktury): pid=<24428> 
>> session=, Error: Fixed index file 
>> /var/mail/piast_efaktury/dovecot.index: log_file_seq 13 -> 15

dovecot.index says that it was generated against dovecot.index.log sequence 13, 
but the .log file already has sequence 15. I could maybe believe such a bug 
with long-running IMAP connections, but this is LMTP. And it's supposed to be 
fixing the problem here..

>> Mar 27 11:13:50 mbox dovecot[22370]: lmtp(piast_efaktury): pid=<24428> 
>> session=, Panic: file mail-transaction-log-file.c: 
>> line 105 (mail_transaction_log_file_free): assertion failed: (!file->locked)

Even though it crashes here, it's already supposed to have fixed the problem.

> dovecot isn't able to auto fix the indexes and manual deletion is
> required in all such cases

So if it keeps repeating, it's very strange. Could you send me such broken 
dovecot.index and dovecot.index.log files (without dovecot.index.cache)? They 
shouldn't contain anything sensitive (only message flags).

Also what's your doveconf -n?



Re: Panic: file mail-transaction-log-file.c: line 105 (mail_transaction_log_file_free): assertion failed: (!file->locked)

2019-03-28 Thread Timo Sirainen via dovecot
On 27 Mar 2019, at 14.58, Timo Sirainen via dovecot  wrote:
> 
>> dovecot isn't able to auto fix the indexes and manual deletion is
>> required in all such cases
> 
> So if it keeps repeating, it's very strange. Could you send me such broken 
> dovecot.index and dovecot.index.log files (without dovecot.index.cache)? They 
> shouldn't contain anything sensitive (only message flags).

Tested with the index files you sent. It gets fixed automatically in my tests.

The backtrace shows that after fsck it fails to write the fixed index to the 
disk, because mail_index_write() fails for some reason. Except there's no error 
logged about it, which is rather weird. Do you still have the lmtp core? Could 
you do:

fr 9
p *log.index





Re: v2.2.27 Panic: file rfc822-parser.h: line 23 (rfc822_parser_deinit): assertion failed: (ctx->data <= ctx->end)

2019-03-28 Thread Timo Sirainen via dovecot
On 27 Mar 2019, at 1.25, Jason Lewis via dovecot  wrote:
> 
> Hi Aki,
> 
> debian jessie backports has been moved to archive.debian.org and
> initially I was unable to install dovecot-dbg because of that. But I've
> managed to resolve that issue now.
> 
> This was the command I ran:
> doveadm -D -f flow fetch imap.envelope mailbox crm-spam.2008.g
> 
> Backtrace follows.

I've a feeling Debian's security fix backports didn't work properly:

> #5  0x7f3a7c34a97d in rfc822_parser_deinit (ctx=0x7ffdc7615e38,
> ctx=0x7ffdc7615e38) at rfc822-parser.h:23

rfc822_parser_deinit() wasn't added until v2.2.31. I think it was added as part 
of a security fix.

>data=data@entry=0x5563c13f3910 "To: bluef...@dickson.st,
> ja...@dickson.st, lewisja...@dickson.st, 05 Jul 2008 16:39:47 -0500
> PDT6Q--q=dns; c=nofws;d sender)
> smtp.mail=matt_coo...@postnewsweektech.com; domainkeys=pass (test mode)
> hea"..., size=size@entry=64,

I tried fetching a mail with these contents in v2.2.27, v2.2.33 and master. 
They all worked fine.



Re: Panic: file mail-transaction-log-file.c: line 105 (mail_transaction_log_file_free): assertion failed: (!file->locked)

2019-03-28 Thread Timo Sirainen via dovecot
On 28 Mar 2019, at 10.15, Arkadiusz Miśkiewicz  wrote:
> 
>  error = 0x55e3e2b40ac0 "Fixed index file
> /var/mail/piast_efaktury/dovecot.index: log_file_seq 13 -> 15",
>  nodiskspace = true,

This was one of the things I was first wondering, but I'm not sure why it's not 
logging an error. Anyway, you're using filesystem quota? And this index is 
large enough that trying to rewrite it brings the user over quota?



Re: FTS delays

2019-04-02 Thread Timo Sirainen via dovecot
On 2 Apr 2019, at 6.38, Joan Moreau via dovecot  wrote:
> 
> Further on this topic:
> 
> 
> 
> When choosing any headers in the search box, dovecot core calls the plugin 
> TWICE (and returns the results quickly, but not immediately after getting the 
> IDs from the plugins)
> 
> When choosing the BODY search, dovecot core calls the plugin ONCE (and never 
> returns) (whereas the plugin properly returns the IDs)
> 

If we simplify this, do you mean this calls it once and is fast:

doveadm search -u user@domain mailbox inbox body helloworld

But this calls twice and is slow:

doveadm search -u user@domain mailbox inbox text helloworld

And what about searching e.g. subject? :

doveadm search -u user@domain mailbox inbox subject helloworld

And does the slowness depend on whether there were any matches or not?

> This is based on GIT version. (previous versions were working properly)

Previous versions were fast? Do you mean v2.3.5?



Re: Trying to track down source of duplicate messages

2019-04-02 Thread Timo Sirainen via dovecot
On 1 Apr 2019, at 19.40, Alex via dovecot  wrote:
> 
> Hi,
> 
> I haven't received any responses to my duplicate messages problem. It
> occurred to me that I posted my full dovecot config instead of just
> the changes we've made locally. I thought it might help to follow up
> with just the specific config to make it easier to identify a
> potential problem.

How are you delivering the mails? With dovecot-lda or something else? Do you 
see any errors/warnings in your MTA log? Similar problems can happen at least 
if the delivery takes a very long time, the MTA times out and retries the 
delivery later on, but the original delivery actually succeeded eventually. You 
might see these differences in the Received headers.



Re: sieve scripts not synching for 2.3.5.1 pre-built

2019-04-02 Thread Timo Sirainen via dovecot
On 2 Apr 2019, at 17.03, Jan-Pieter Cornet via dovecot  
wrote:
> 
> Hi,
> 
> We're synching mailboxes, changing format from maildir to mdbox, using 
> doveadm backup/doveadm sync.
> 
> When still running 2.2.36, 'doveadm backup' also synched the sieve scripts, 
> without issues.
> 
> After the upgrade to 2.3.5.1, the sieve sync stopped working. We're using the 
> pre-built 2.3 packages from 
> https://repo.dovecot.org/ce-2.3-latest/debian/stretch 
> 

Looks like this is trivial to reproduce. It used to work still in v2.3.1, but 
then something broke it. Tracking internally in DOP-1062.



Re: sieve scripts not synching for 2.3.5.1 pre-built

2019-04-02 Thread Timo Sirainen via dovecot
On 2 Apr 2019, at 22.37, Timo Sirainen via dovecot  wrote:
> 
> On 2 Apr 2019, at 17.03, Jan-Pieter Cornet via dovecot  wrote:
>> 
>> Hi,
>> 
>> We're synching mailboxes, changing format from maildir to mdbox, using 
>> doveadm backup/doveadm sync.
>> 
>> When still running 2.2.36, 'doveadm backup' also synched the sieve scripts, 
>> without issues.
>> 
>> After the upgrade to 2.3.5.1, the sieve sync stopped working. We're using 
>> the pre-built 2.3 packages from 
>> https://repo.dovecot.org/ce-2.3-latest/debian/stretch
> 
> Looks like this is trivial to reproduce. It used to work still in v2.3.1, but 
> then something broke it. Tracking internally in DOP-1062.

Reverting 
https://github.com/dovecot/pigeonhole/commit/479c5e57046dec76078597df844daccbfc0eb75f 
fixes this.



Re: FTS delays

2019-04-21 Thread Timo Sirainen via dovecot
On 3 Apr 2019, at 20.30, Joan Moreau via dovecot  wrote:
> doveadm search -u j...@grosjo.net mailbox inbox text milan
> output
> 
> doveadm(j...@grosjo.net): Info: Query : ( bcc:inbox OR body:inbox OR cc:inbox 
> OR from:inbox OR message-id:inbox OR subject:inbox OR to:inbox OR uid:inbox ) 
> AND ( bcc:milan OR body:milan OR cc:milan OR from:milan OR message-id:milan 
> OR subject:milan OR to:milan OR uid:milan )
> 
> 1 - The query is wrong

That's because fts_backend_xapian_lookup() isn't anywhere close to being 
correct. Try to copy the logic based on solr_add_definite_query_args().



Re: FTS delays

2019-04-21 Thread Timo Sirainen via dovecot
Inbox appears in the list of arguments, because fts_backend_xapian_lookup() is 
parsing the search args wrong. Not sure about the other issue.

> On 21 Apr 2019, at 19.31, Joan Moreau  wrote:
> 
> For this first point, the problem is that dovecot core sends the request 
> TWICE, and "Inbox" appears in the list of arguments! (inbox shall serve to 
> select the right mailbox, and never be sent to the backend)
> 
> And even if this were solved, the dovecot core loops *after* the backend 
> has returned the results
> 
> 
> 
> # doveadm search -u j...@grosjo.net mailbox inbox text milan
> doveadm(j...@grosjo.net): Info: Get last UID of INBOX = 315526
> doveadm(j...@grosjo.net): Info: Get last UID of INBOX = 315526
> doveadm(j...@grosjo.net): Info: Query: FLAG=AND
> doveadm(j...@grosjo.net): Info: Query(1): add term(wilcard) : inbox
> doveadm(j...@grosjo.net): Info: Query(2): add term(wilcard) : milan
> doveadm(j...@grosjo.net): Info: Testing if wildcard
> doveadm(j...@grosjo.net): Info: Query: set GLOBAL (no specified header)
> doveadm(j...@grosjo.net): Info: Query : ( bcc:inbox OR body:inbox OR cc:inbox 
> OR from:inbox OR message-id:inbox OR subject:inbox OR to:inbox ) AND ( 
> bcc:milan OR body:milan OR cc:milan OR from:milan OR message-id:milan OR 
> subject:milan OR to:milan )
> doveadm(j...@grosjo.net): Info: Query: 2 results in 1 ms // THIS IS WHEN 
> BACKEND HAS FOUND RESULTS AND STOPPED
> d82b4b0f550d3859364495331209 847
> d82b4b0f550d3859364495331209 1569
> d82b4b0f550d3859364495331209 2260
> d82b4b0f550d3859364495331209 2575
> d82b4b0f550d3859364495331209 2811
> d82b4b0f550d3859364495331209 2885
> d82b4b0f550d3859364495331209 3038
> d82b4b0f550d3859364495331209 3121 -> LOOPING FOREVER
> 
> 
> 
>  
> 
> 
> On 2019-04-21 09:57, Timo Sirainen via dovecot wrote:
> 
>> On 3 Apr 2019, at 20.30, Joan Moreau via dovecot  wrote:
>>> 
>>> doveadm search -u j...@grosjo.net mailbox inbox text milan
>>> output
>>> 
>>> doveadm(j...@grosjo.net): Info: Query : ( bcc:inbox OR body:inbox OR 
>>> cc:inbox OR from:inbox OR message-id:inbox OR subject:inbox OR to:inbox 
>>> OR uid:inbox ) AND ( bcc:milan OR body:milan OR cc:milan OR from:milan 
>>> OR message-id:milan OR subject:milan OR to:milan OR uid:milan )
>>> 
>>> 1 - The query is wrong
>> 
>> That's because fts_backend_xapian_lookup() isn't anywhere close to being 
>> correct. Try to copy the logic based on solr_add_definite_query_args().
>> 
>> 



Re: FTS delays

2019-04-21 Thread Timo Sirainen via dovecot
It's because you're misunderstanding how the lookup() function works. It gets 
ALL the search parameters, including the "mailbox inbox". This is intentional, 
not a bug, for two reasons:

1) The FTS plugin in theory could support indexing/searching any kinds of 
searches, not just regular word searches. So I didn't want to limit it 
unnecessarily.

2) Especially with "mailbox inbox" this is important when searching from 
virtual mailboxes. If you configure "All mails in all folders" virtual mailbox, 
you can do a search in there that restricts which physical mailboxes are 
matched. In this case the FTS backend can optimize this lookup so it can filter 
only the physical mailboxes that have matches, leaving the others out. And it 
can do this in a single query if all the mailboxes are in the same FTS index.

So again: Your lookup() function needs to be changed to only use those search 
args that it really wants to search, and ignore the others. Use 
solr_add_definite_query_args() as the template.

Also I see now the reason for the timeout problem. It's because you're not 
setting search_arg->match_always=TRUE. These need to be set for the search args 
that you're actually using to generate the Xapian query. If it's not set, then 
Dovecot core doesn't think that the arg was part of the FTS search and it 
processes it itself. Meaning that it opens all the emails and does the search 
the slow way, practically making the FTS lookup ignored.
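
A rough sketch of the intended shape, loosely modeled on how fts-solr's 
solr_add_definite_query_args() works (the arg types and the match_always field 
are from Dovecot's mail-search.h; the actual Xapian query building is elided):

/* Use only the search args this backend can really handle, and mark
   exactly those as handled so core won't re-verify them the slow way. */
static void xapian_add_query_args(string_t *query,
				  struct mail_search_arg *args)
{
	for (; args != NULL; args = args->next) {
		switch (args->type) {
		case SEARCH_TEXT:
		case SEARCH_BODY:
		case SEARCH_HEADER:
		case SEARCH_HEADER_ADDRESS:
			/* ... append args->value.str to the Xapian query ... */
			/* mark this arg as handled by FTS; without this, core
			   opens all the mails and redoes the search itself */
			args->match_always = TRUE;
			break;
		default:
			/* e.g. SEARCH_MAILBOX: core uses it for folder
			   selection; it must not become a query term */
			break;
		}
	}
}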

> On 21 Apr 2019, at 19.50, Joan Moreau  wrote:
> 
> No, the parsing is made by dovecot core; there is nothing the backend can do 
> about it. The backend shall *never* receive this. (whether it is buggy or not)
> 
> 
> 
> Please, have a look deeper
> 
> And the loop is a very big problem as it times out all the time (and once 
> again, this is not in any of the backend functions)
> 
>  
> 
> 
> On 2019-04-21 10:42, Timo Sirainen via dovecot wrote:
> 
>> Inbox appears in the list of arguments, because fts_backend_xapian_lookup() 
>> is parsing the search args wrong. Not sure about the other issue.
>> 
>>> On 21 Apr 2019, at 19.31, Joan Moreau  wrote:
>>> For this first point, the problem is that dovecot core sends TWICE the 
>>> request and "Inbox" appears in the list of arguments ! (inbox shall serve 
>>> to select teh right mailbox, never sent to the backend)
>>> 
>>> And even if this would be solved, the dovecot core loops *after* the 
>>> backend hs returneds the results
>>> 
>>> 
>>> 
>>> # doveadm search -u j...@grosjo.net mailbox inbox text milan
>>> doveadm(j...@grosjo.net): Info: Get last UID of INBOX = 315526
>>> doveadm(j...@grosjo.net): Info: Get last UID of INBOX = 315526
>>> doveadm(j...@grosjo.net): Info: Query: FLAG=AND
>>> doveadm(j...@grosjo.net): Info: Query(1): add term(wilcard) : inbox
>>> doveadm(j...@grosjo.net): Info: Query(2): add term(wilcard) : milan
>>> doveadm(j...@grosjo.net): Info: Testing if wildcard
>>> doveadm(j...@grosjo.net): Info: Query: set GLOBAL (no specified header)
>>> doveadm(j...@grosjo.net): Info: Query : ( bcc:inbox OR body:inbox OR 
>>> cc:inbox OR from:inbox OR message-id:inbox OR subject:inbox OR to:inbox ) 
>>> AND ( bcc:milan OR body:milan OR cc:milan OR from:milan OR 
>>> message-id:milan OR subject:milan OR to:milan )
>>> doveadm(j...@grosjo.net): Info: Query: 2 results in 1 ms // THIS IS WHEN 
>>> BACKEND HAS FOUND RESULTS AND STOPPED
>>> d82b4b0f550d3859364495331209 847
>>> d82b4b0f550d3859364495331209 1569
>>> d82b4b0f550d3859364495331209 2260
>>> d82b4b0f550d3859364495331209 2575
>>> d82b4b0f550d3859364495331209 2811
>>> d82b4b0f550d3859364495331209 2885
>>> d82b4b0f550d3859364495331209 3038
>>> d82b4b0f550d3859364495331209 3121 -> LOOPING FOREVER
>>> 
>>> 
>>> 
>>>  
>>> 
>>> 
>>> On 2019-04-21 09:57, Timo Sirainen via dovecot wrote:
>>> 
>>> On 3 Apr 2019, at 20.30, Joan Moreau via dovecot  wrote:
>>> doveadm search -u j...@grosjo.net mailbox inbox text milan
>>> output
>>> 
>>> doveadm(j...@grosjo.net): Info: Query : ( bcc:inbox OR body:inbox OR 
>>> cc:inbox OR from:inbox OR message-id:inbox OR subject:inbox OR to:inbox 
>>> OR uid:inbox ) AND ( bcc:milan OR body:milan OR cc:milan OR from:milan 
>>> OR message-id:milan OR subject:milan OR to:milan OR uid:milan )
>>> 
>>> 1 - The query is wrong
>>> 
>>> That's because fts_backend_xapian_lookup() isn't anywhere close to being 
>>> correct. Try to copy the logic based on solr_add_definite_query_args().
>>> 
>>> 



Re: Dovecot LMTP mixing up users on multi-recipient mail

2019-07-03 Thread Timo Sirainen via dovecot
On 27 Jun 2019, at 14.21, Bernhard Schmidt via dovecot  
wrote:
> 
> Hi,
> 
> I've upgraded a mailstore from Debian Jessie (aka oldstable) with
> Dovecot 2.2.13 to Debian Buster (next stable) with Dovecot 2.3.4.1
> today. It worked pretty well, except that we're seeing error messages
> very similar to this old thread
> 
> https://dovecot.org/pipermail/dovecot/2015-July/101396.html
> 
> It appears to be happening when a mail with multiple recipients on this
> message store is getting delivered through lmtp.
> 
> Jun 27 11:47:36 lxmhs74 dovecot: 
> lmtp(user1)<47683>: Error: 
> open(/var/cache/dovecot/index/n/user2n/.INBOX/dovecot.index.cache) failed: 
> Permission denied (euid=3814520() egid=12(man) missing +x perm: 
> /var/cache/dovecot/index/n/user2, dir owned by 3391995:12 mode=0700)
> 
> user1 uid is 3814520, user2n uid is 3391995. Dovecot appears to be trying
> to deliver the message to user1 while using the index directory of user2n.

When delivering a mail to multiple recipients with LMTP, Dovecot first writes 
the mail to the first recipient. It then leaves this mail open and uses it to 
copy the mail to the next recipient. This allows the possibility of e.g. using 
hard links if the filesystem permissions are the same for both recipients, 
although that won't happen in your case. Anyway, apparently this copying 
attempts to update the first recipient's dovecot.index.cache for some reason. 
I'm not sure why exactly this is different in v2.2 and v2.3.

I wasn't able to reproduce this easily though, except that it did happen with 
some special plugin. This change helped with it:

diff --git a/src/lmtp/lmtp-local.c b/src/lmtp/lmtp-local.c
index e43f156d3..93848ef27 100644
--- a/src/lmtp/lmtp-local.c
+++ b/src/lmtp/lmtp-local.c
@@ -669,6 +669,9 @@ lmtp_local_deliver_to_rcpts(struct lmtp_local *local,
 			   will be unreferenced later on */
 			local->rcpt_user = NULL;
 			src_mail = local->first_saved_mail;
+			struct mail_private *pmail =
+				(struct mail_private *)src_mail;
+			pmail->v.set_uid_cache_updates(src_mail, TRUE);
 			first_uid = geteuid();
 			i_assert(first_uid != 0);
 		}

Re: Dovecot 2.3.6 on Solaris10: build issues, segfaults

2019-07-10 Thread Timo Sirainen via dovecot
On 9 Jul 2019, at 3.02, Joseph Tam via dovecot  wrote:
> 
> Issue 3) dovecot/doveconf segfaults on startup
> 
>   It crashes here while processing dovecot.conf, as does "doveconf"
> 
>   (settings-parser.c:1519 in setting_copy())
>   *dest_size = *src_size;

This is correct code.

>   It appears *src_size is not an 8-byte address aligned (0x5597c).
>   It inherits this value from the calling routine as the sum of
>   "set" (8-byte aligned) + "def->offset"=20 => misaligned address.
> 
>   (settings-parser.c:1597 in settings_dup_full())
>   src = CONST_PTR_OFFSET(set, def->offset);
> 
>   (gdb) p set
>   $2 = (const void *) 0x55968
>   (gdb) p *def
>   $3 = {type = SET_SIZE, key = 0x2d548 
> "submission_max_mail_size", offset = 20, list_info = 0x0}

This is unexpected, but I don't see how it's a Dovecot bug. It seems as if your 
compiler doesn't do padding correctly, and the crash follows from that. I guess 
you're compiling this as 32bit? Is size_t 32bit or 64bit?

Can you try with the below small test program if it prints the same 20?

#include <stdio.h>
#include <stddef.h>
#include <stdbool.h>

#define in_port_t unsigned short

struct submission_settings {
	bool verbose_proctitle;
	const char *rawlog_dir;

	const char *hostname;

	const char *login_greeting;
	const char *login_trusted_networks;

	/* submission: */
	size_t submission_max_mail_size;
	unsigned int submission_max_recipients;
	const char *submission_client_workarounds;
	const char *submission_logout_format;

	/* submission backend: */
	const char *submission_backend_capabilities;

	/* submission relay: */
	const char *submission_relay_host;
	in_port_t submission_relay_port;
	bool submission_relay_trusted;

	const char *submission_relay_user;
	const char *submission_relay_master_user;
	const char *submission_relay_password;

	const char *submission_relay_ssl;
	bool submission_relay_ssl_verify;

	const char *submission_relay_rawlog_dir;
	unsigned int submission_relay_max_idle_time;

	unsigned int submission_relay_connect_timeout;
	unsigned int submission_relay_command_timeout;

	/* imap urlauth: */
	const char *imap_urlauth_host;
	in_port_t imap_urlauth_port;

	int parsed_workarounds;
};

int main(void)
{
	struct submission_settings set;

	printf("offset = %zu\n",
	       offsetof(struct submission_settings, submission_max_mail_size));
	printf("size = %zu\n", sizeof(set.submission_max_mail_size));
	return 0;
}



Re: Applying Dovecot for a large / deep folder-hierarchy archive - BUG REPORTS!

2019-07-10 Thread Timo Sirainen via dovecot
On 7 Jul 2019, at 18.12, Arnold Opio Oree via dovecot  
wrote:
> 
> Dovecot Team,
> 
> I'd like to report a number of bugs, that are to my view all critical.

It would help to get your doveconf -n, example command lines causing the 
problems and the error messages it outputs or what the wrong behavior looks 
like in filesystem. It's now rather difficult to guess what exactly you tried 
and what happened.

Also what kind of output does readpst make? I'm not sure why you're using 
DIRNAMEs here.

> doveadm-sync -1/general
> 
> 1) If DIRNAMEs are not different between command line and mail_location 
> doveadm sync will fail, saying that the source and destination directories 
> are the same

This sounds very strange. I'm not sure what exactly you did, and I couldn't 
reproduce with a small test.

> 2) The -n / -N flags do not work, and a sync will fail strangely if location 
> is specified in the namespace definition

Again, sounds strange.

> 3) Adds mbox to path name under mailbox directory (where syncing from an mbox 
> source)

Probably with different parameters you could avoid it.

> 4) Not having the mailboxes at source named the same as those at destination 
> causes errors and partial sync 
> 
> 5) Not having the target mailboxes formatted to receive the sync 
> (//DIRNAME/) will cause sync errors.

I don't understand these. Target mailboxes are supposed to be empty initially, 
and after the initial sync they should be in the expected format. Why would 
they be different?

> doveadm-sync
> 
> 1) With large synchronizations UIDs are corrupted where multiple syncs are 
> executed and the program can no longer synchronize

What exactly is the error message?

> dovecot
> 
> 1) Panics and fails to expand ~ to user home: observed cases are where 
> multiple namespaces are being used

Panic message and more details would also help.

> With regards to the last error that I requested help on i.e. \Noselect. This 
> has been resolved more-or-less by the workarounds that I have implemented for 
> the bugs reported above.
> 
> I have seen a number of threads whilst researching the \Noselect issue where 
> people have been very confused. My finding was that \Noselect is a function 
> of the IMAP specification server-side implementation, RFC3501 
> (https://tools.ietf.org/html/rfc3501#section-6.3.6). And for me the server 
> was returning directories with \Noselect because the mailboxes were 
> malformed on account of doveadm-sync errors. In order to fix this I formed a 
> bash command to traverse the mailbox hierarchy and create the missing 
> folders critical to the sdbox format, namely DIRNAME.

Nowadays we also have an option to disable the creation of \Noselect folders, 
because they confuse people: mail_location = ...:NO-NOSELECT. It won't 
affect existing folders immediately though.
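
For example, with an sdbox location like the one discussed above (a sketch):

mail_location = sdbox:~/sdbox:NO-NOSELECT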



Re: Error: o_stream_send_istream and Disconnected in APPEND

2019-07-11 Thread Timo Sirainen via dovecot
On 11 Jul 2019, at 10.13, Alessio Cecchi via dovecot  
wrote:
> 
> Hi,
> 
> I'm running some Dovecot servers configured with LVS + Director + Backend + 
> NFS and version 2.2.36.3 (a7d78f5a2).
> 
> In the last days I see an increased number of these error:
> 
> Error: 
> o_stream_send_istream(/nfs/mail/company.com/info/Maildir/.Sent/tmp/1562771349.M255624P9151.pop01)
>  failed: Broken pipe
> 
> always with action "Disconnected in APPEND", when users try to upload a 
> message in Sent or Drafts.
> 

I think it simply means that Dovecot sees that the client disconnected while it 
was APPENDing the mail. Although I don't know why these would suddenly start 
now. And I especially don't understand why the error is "Broken pipe". Dovecot 
uses it internally when it closes input streams, so it's possibly that, but then 
why isn't that happening elsewhere.. Did you upgrade your kernel recently? I 
guess it's also possible that there is some bug in Dovecot, but I don't 
remember any changes related to this for a long time.

I guess it could actually be writing as well, because "Broken pipe" is set also 
for closed output streams, so maybe some failed NFS write could cause it 
(although it really should have logged a different error in that case, so if 
that was the reason this is a bug).

Dovecot v2.3.x would log a different error depending on if the problem was 
reading or writing, which would make this clearer.



Re: Frequent Out of Memory for service(config)

2019-07-11 Thread Timo Sirainen via dovecot
On 13 May 2019, at 22.56, Root Kev via dovecot  wrote:
> 
> May 13 11:35:43 config: Fatal: master: service(config): child 26900 returned 
> error 83 (Out of memory (service config { vsz_limit=1024 MB }, you may need 
> to increase it) - set CORE_OUTOFMEM=1 environment to get core dump)


If you set the ssl_dh setting, this most likely gets fixed.
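
For example (a sketch; the path is arbitrary):

# in dovecot.conf
ssl_dh = </etc/dovecot/dh.pem

# generate the file once beforehand, e.g.:
#   openssl dhparam -out /etc/dovecot/dh.pem 4096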



Re: Dovecot release v2.3.7

2019-07-12 Thread Timo Sirainen via dovecot
On 12 Jul 2019, at 21.05, Michael Grimm via dovecot  wrote:
> 
> Aki Tuomi via dovecot  wrote:
> 
>> We are pleased to release Dovecot release v2.3.7.
> 
> My upgrade from 2.3.6 to 2.3.7 broke replication (FreeBSD 12.0-STABLE 
> r349799):
> 
> | dovecot[76032]: master: Dovecot v2.3.7 (494d20bdc) starting up for imap, 
> lmtp, sieve
> | dovecot[76035]: replicator: Error: 
> file_ostream.net_set_tcp_nodelay((conn:unix:/var/run/dovecot/stats-writer), 
> TRUE) failed: Invalid argument
> | dovecot[76035]: replicator: Error: 
> file_ostream.net_set_tcp_nodelay((conn:unix:auth-userdb), TRUE) failed: 
> Invalid argument
> | dovecot[76035]: auth: Error: 
> file_ostream.net_set_tcp_nodelay((conn:unix:/var/run/dovecot/stats-writer), 
> TRUE) failed: Invalid argument

I don't think these cause any actual breakage? Of course, there are tons of 
errors logged..

This likely fixes it anyway:

diff --git a/src/lib/ostream-file.c b/src/lib/ostream-file.c
index e7e6f62d1..766841f2f 100644
--- a/src/lib/ostream-file.c
+++ b/src/lib/ostream-file.c
@@ -334,7 +334,7 @@ static void o_stream_tcp_flush_via_nodelay(struct file_ostream *fstream)
 {
 	if (net_set_tcp_nodelay(fstream->fd, TRUE) < 0) {
 		if (errno != ENOTSUP && errno != ENOTSOCK &&
-		    errno != ENOPROTOOPT) {
+		    errno != ENOPROTOOPT && errno != EINVAL) {
 			i_error("file_ostream.net_set_tcp_nodelay(%s, TRUE) failed: %m",
 				o_stream_get_name(&fstream->ostream.ostream));
 		}

Re: Pigeonhole release v0.5.7

2019-07-12 Thread Timo Sirainen via dovecot
On 12 Jul 2019, at 21.09, Reio Remma via dovecot  wrote:
> 
>> - dsync: dsync-replication does not synchronize Sieve scripts.
> 
> Sieve replication still doesn't work for me. dsync now replicated sieve and 
> sieve/tmp directories, but neither actual sieve files nor @.dovecot.sieve 
> link.

What if you change the Sieve script? It probably doesn't immediately replicate 
old scripts.



Re: 2.3.7/FreeBSD: Seeing lots of stats-writer errors.

2019-07-12 Thread Timo Sirainen via dovecot
On 12 Jul 2019, at 22.10, Larry Rosenman via dovecot  
wrote:
> 
> A user reported this to me, and I'm seeing it too:
> Hi all :)
> 
> After mail/dovecot upgrade to v2.3.7 it seems something went wrong with 
> stat-writer...
> 
> IMAP and POP3 logs:
> 
> Jul 12 19:45:34 mail dovecot: imap-login: Error: 
> file_ostream.net_set_tcp_nodelay((conn:unix:/var/run/dovecot/stats-writer), 
> TRUE) failed: Invalid argument

See the other thread: 
https://dovecot.org/pipermail/dovecot/2019-July/116479.html 



Re: Purpose of stats-writer and why doveadm try to open it to dump stats ?

2019-07-14 Thread Timo Sirainen via dovecot
On 14 Jul 2019, at 10.10, Jean-Daniel via dovecot  wrote:
> 
> Hello,
> 
> I want to monitor dovecot stats, and so I have an exporter process that run 
> with limited rights. 
> The monitoring user has only access to /var/run/dovecot/stats-reader and it 
> works fine.
> Doveadm stats dump returns the list of all stats as expected.
> 
> But each time I run doveadm stats dump, it logs the following error:
> 
> Error: net_connect_unix(/var/run/dovecot/stats-writer) failed: Permission 
> denied
> 
> So what is the purpose of the stats-writer socket, and why doveadm try to 
> open it to simply dump stats ? 
> Is it really something it needs and I should update my user permissions or is 
> it a doveadm bug ?

All Dovecot processes nowadays connect to the stats-writer process early on 
before they drop privileges, unless it's explicitly disabled in the code. In 
doveadm case I suppose most commands would want to connect to stats-writer, but 
we could have a per-command flag to specify that the command doesn't want 
stats. I'll add this to our internal Jira.



Re: Error since Dovecot v2.3.7

2019-07-15 Thread Timo Sirainen via dovecot
On 15 Jul 2019, at 10.58, Paul Hecker via dovecot  wrote:
> 
> Hi,
> 
> since upgrading to Dovecot 2.3.7 I get the following new errors:

What was the old version? Did you upgrade your kernel as well?

> 2019-07-15 09:10:52 mail dovecot:  
> imap(p...@iwascoding.com)<32484>: Error: file_lock_free(): 
> Unexpectedly failed to retry locking 
> /var/spool/mail/iwascoding/paul/mdbox/mailboxes/INBOX/dbox-Mails/dovecot-vsize.lock:
>  
> fcntl(/var/spool/mail/iwascoding/paul/mdbox/mailboxes/INBOX/dbox-Mails/dovecot-vsize.lock,
>  write-lock, F_SETLK) locking failed: No such file or directory

What filesystem are you using for mdboxes?

The lock fd is already open. Locking is not supposed to fail this way. Also I 
can't think of any recent changes that could be related to this. v2.2.36 
already had the same locking code.



Re: [ext] 2.3.7 slower than 2.3.6?

2019-07-15 Thread Timo Sirainen via dovecot
On 15 Jul 2019, at 23.17, Ralf Hildebrandt via dovecot  
wrote:
> 
> * Ralf Hildebrandt via dovecot :
>> We're using dovecot (via LMTP) as a backup for all incoming mail.
>> 
>> I upgraded from 2.3.6 to 2.3.7 on the 12th:
>> 2019-07-12 14:35:44 upgrade dovecot-imapd:amd64 2:2.3.6-2~bionic 
>> 2:2.3.7-8~bionic
>> 
>> and it seems that 2.3.7 is slower than 2.3.6 -- mail to the backup
>> IMAP box is suddenly taking quite some time to get delivered and is piling up
>> in the queue.
> 
> And alas, I had a look at the monitoring and found that disk IO has
> increase A LOT after the update: 
> https://www.arschkrebs.de/images/dovecot-disk-io-increase.png
> 
> I zoomed in and found that the sudden increace in IO coincides with
> the update to 2.3.7 (14:35): 
> https://www.arschkrebs.de/images/dovecot-io-2.png 
> 

Do the different disks have different kinds of data? Like you seem to be using 
external attachments and fts? Also your doveconf -n doesn't show what fts 
backend you're using. Or is it just half-configured and not really used?



Re: Dovecot release v2.3.7

2019-07-16 Thread Timo Sirainen via dovecot
On 13 Jul 2019, at 18.39, Michael Grimm via dovecot  wrote:
> 
> Now replication is working from my point of view, besides occasional error 
> messages like:
> 
> | imap-login: Error: file_ostream.net_set_tcp_nodelay(, TRUE) failed: 
> Connection reset by peer

Here's the final fix for it:

https://github.com/dovecot/core/commit/25028730cd1b76e373ff989625132d526eea2504 




Re: Dovecot release v2.3.7

2019-07-16 Thread Timo Sirainen via dovecot
On 13 Jul 2019, at 14.44, Tom Sommer via dovecot  wrote:
> 
> LMTP is broken on director:
> 
> Jul 13 13:42:41 lmtp(34824): Panic: file smtp-params.c: line 685 
> (smtp_params_mail_add_body_to_event): assertion failed: ((caps & 
> SMTP_CAPABILITY_8BITMIME) != 0)

Thanks, fixed: 
https://github.com/dovecot/core/commit/c4de81077c11d09eddf6a5c93676ee82350343a6 



Re: Unexpected result from LIST EXTENDED command

2019-07-16 Thread Timo Sirainen via dovecot
On 16 Jul 2019, at 9.51, Emil Kalchev via dovecot  wrote:
> 
> I am executing this command below to dovecot-2.3.5-6.cp1178.x86_64 server
>  
> Notice that some status responses are missing (For folders INBOX.Archive, 
> INBOX.spam.&-BD0EOQQ9BDkEPQ-). I wonder If this is a bug or working as 
> expected
>  
> In rfc5819 there is this:
>  
> If the server runs into unexpected problems while trying to look up
> the STATUS information, it MAY drop the corresponding STATUS reply.
> In such a situation, the LIST command would still return a tagged OK
> reply.
>  
> May be that is the reason for this response? Is it possible to find more 
> details in server logs why STATUS is missing?

Do you see any errors logged? Does it work if you ask with STATUS command 
directly those folders? What's your doveconf -n?



Re: Unexpected result from LIST EXTENDED command

2019-07-16 Thread Timo Sirainen via dovecot
On 16 Jul 2019, at 11.41, Emil Kalchev  wrote:
> 
> There is no error in the server logs. I checked those particular folders on 
> the server and they don’t seems to have anything special about them, like 
> permission or etc.
>  
> Yes, calling STATUS on those particular folders returns the status. The 
> folders can be opened and they have emails in them so nothing special about 
> those folders.

https://github.com/dovecot/core/blob/master/src/imap/cmd-list.c#L195 seems to 
have a bug. If LIST is requesting SUBSCRIBED results, and it finds there is a 
folder that is not subscribed but has a child that is subscribed, then the 
parent isn't requested for STATUS. That matches:

S: * LIST (\HasChildren \UnMarked) "." INBOX.spam.&-BD0EOQQ9BDkEPQ-
S: * LIST (\Subscribed \HasChildren \UnMarked) "." 
INBOX.spam.&-BD0EOQQ9BDkEPQ-.jhfhg

But in your LIST output INBOX.Archive didn't have any children, so I'm not sure 
if that's the same issue or not.



Re: [ext] 2.3.7 slower than 2.3.6?

2019-07-17 Thread Timo Sirainen via dovecot
On 16 Jul 2019, at 11.15, Ralf Hildebrandt via dovecot  
wrote:
> 
> * Timo Sirainen via dovecot :
> 
>>> And alas, I had a look at the monitoring and found that disk IO has
>>> increase A LOT after the update: 
>>> https://www.arschkrebs.de/images/dovecot-disk-io-increase.png
>>> 
>>> I zoomed in and found that the sudden increace in IO coincides with
>>> the update to 2.3.7 (14:35): 
>>> https://www.arschkrebs.de/images/dovecot-io-2.png
>> 
> 
>> Do the different disks have different kinds of data? 
> 
> All of the data is an /dev/sda1

What filesystem is this?

I did a bunch of testing, and after initially thinking I saw an increase, it 
turned out to be bad testing: the new TCP_NODELAY changes allowed imaptest to 
do more work. So I can't see that there is any disk IO difference.

There are a few intentional changes to lib-index and also to the mdbox code. 
One of those is that dovecot.index.cache files are now recreated more often. In 
some cases they were wasting a lot of disk space because expunged mails hadn't 
been removed from them. It's possible that all the disk IO came from Dovecot 
starting to recreate all those files at the same time on your system. Maybe 
you could try upgrading again during slower hours and see if the disk IO 
normalizes after a while?



Re: Dovecot v2.3.7 - TLS/SSL issue

2019-07-17 Thread Timo Sirainen via dovecot
On 17 Jul 2019, at 13.59, Christos Chatzaras via dovecot  
wrote:
> 
> I use a helpdesk system that connects to dovecot using POP3 with SSL enabled 
> to fetch the e-mails.
> 
> After I upgrade to v2.3.7 the helpdesk randomly (some times it works, some 
> times not) doesn't fetch the e-mails. If I configure the e-mail accounts with 
> SSL/TLS disabled then it works.
> 
> Any idea about this?

What was the previous version you were using? Can you send log files about the 
time when it was broken? (You can send them to me privately.)



Re: auth: Warning: Event 0x1280fe20 leaked

2019-07-17 Thread Timo Sirainen via dovecot
On 17 Jul 2019, at 22.13, Jos Chrispijn via dovecot  wrote:
> 
> On 16-7-19 9:30, John Doe Vecot via dovecot wrote:
>> Lately I get some errors in my logfile, saying:
>> 
>> dovecot[72337]: auth: Warning: Event 0x1280fe20 leaked (parent=0x0): 
>> auth-client-connection.c:338: 1 Time(s)
> Same overhere; can someone pls explain?
> 

What's your doveconf -n? I suppose this is coming either when 
stopping/reloading, or maybe auth process stops when it's been idling for 1 
minute. Do you use Postfix/Exim (or something else) that authenticates via 
Dovecot?



Re: Dovecot Director upgrade from 2.2 to 2.3

2019-07-18 Thread Timo Sirainen via dovecot
On 18 Jul 2019, at 11.44, Alessio Cecchi via dovecot  
wrote:
> 
> Hi,
> 
> I have a setup with 3 Dovecot Director v2.2.36 and 
> director_consistent_hashing = yes ;-)
> 
> Now I would like to upgrade to 2.3.7, first only Director and after also 
> Backend.
> 
> Can works fine a ring of director with mixed 2.2 and 2.3 version?
> 
> Mi idea is to setup a new Director server with 2.3, stop one server with 2.2 
> and insert the new 2.3 in the current ring to check if works fine.
> 
> If all works fine I will replace all Director 2.2 with 2.3 version.
> 

There's no known reason why it wouldn't work. But be prepared in case there is 
an unknown reason.



Re: index worker 2.3.7 undefined symbol errors (more info)

2019-07-22 Thread Timo Sirainen via dovecot
On 21 Jul 2019, at 23.14, Dirk Koopman via dovecot  wrote:
> 
> Some supplemental information:
> 
> This is happening on every email delivered into Dovecot via LMTP. The curious 
> things are that the message is a) successfully delivered and b) sieved into 
> the correct directory. 
> 
> Another observation is that:
> 
> mail_deliver_ctx_get_log_var_expand_table
> 
> is defined globally in core/src/lib-lda/mail-deliver.c (and used there) but 
> the ONLY external call in the entire dovecot tree is from 
> pigeonhole/src/plugins/lda-sieve/lda-sieve-log.c. 
> 
> I am not using lda but it seems to be part of core. So, as I am only using 
> lmtp, why is pigeonhole using lda-sieve at all? 

It's part of lib-lda, which is also used by lmtp.

> Can I work around the error message by some config magic (as I did by calling 
> the correct plugin for imap_sieve) or is this an actual bug? Could this be 
> fixed simply by including mail-deliver.h in lda-sieve-log.c? 

I think you're not linking lmtp binary correctly somehow. That symbol should be 
part of it:

% nm /usr/libexec/dovecot/lmtp | grep mail_deliver_ctx_get_log_var_expand_table
00061960 T mail_deliver_ctx_get_log_var_expand_table




Re: Dovecot with MySQL over SSL.

2019-07-22 Thread Timo Sirainen via dovecot
On 20 Jul 2019, at 23.02, Reio Remma via dovecot  wrote:
> 
> On 20.07.2019 22:37, Aki Tuomi via dovecot wrote:
>> 
>>> On 20/07/2019 21:07 Reio Remma via dovecot  
>>>  wrote:
>>> 
>>> 
>>> On 20.07.2019 18:03, Aki Tuomi via dovecot wrote: 
 
> On 20/07/2019 13:12 Reio Remma via dovecot < dovecot@dovecot.org 
> > wrote:
> 
> 
> On 19.07.2019 0:24, Reio Remma via dovecot wrote:
>> I'm attempting to get Dovecot working with MySQL user database on
>> another machine. I can connect to the MySQL (5.7.26) instance with SSL
>> enabled:
>> mysql -h db.mrst.ee --ssl-ca=/etc/dovecot/ca.pem
>> --ssl-cert=/etc/dovecot/client-cert.pem
>> --ssl-key=/etc/dovecot/client-key.pem --ssl-cipher=DHE-RSA-AES256-SHA
>> -u vmail -p
>> However if I use the same values in dovecot-sql.conf.ext, I get the
>> following error:
>> Jul 19 00:20:18 turin dovecot: auth-worker(82996): Error:
>> mysql(db.mrst.ee): Connect failed to database (vmail): SSL connection
>> error: protocol version mismatch - waiting for 1 seconds before retry
>> Jul 19 00:20:19 turin dovecot: auth-worker(82996): Error:
>> mysql(db.mrst.ee): Connect failed to database (vmail): Connections
>> using insecure transport are prohibited while
>> --require_secure_transport=ON. - waiting for 5 seconds before retry
>> Database connection string:
>> connect = host=db.mrst.ee dbname=vmail user=vmail password=stuff \
>> ssl_ca=/etc/dovecot/ca.pem \
>> ssl_cert=/etc/dovecot/client-cert.pem \
>> ssl_key=/etc/dovecot/client-key.pem \
>> ssl_cipher=DHE-RSA-AES256-SHA
> Update: I got it to connect successfully now after downgrading the MySQL
> server tls-version from TLSv1.1 to TLSv1.
> 
> Is there a reason why Dovecot MySQL doesn't support TLSv1.1?
> 
> Thanks!
> Reio
 
 Dovecot mysql uses libmysqlclient. We do not enforce any particular tls 
 protocol version. If it requires you to downgrade I suggest you review 
 your client my.cnf for any restrictions.
 ---
 Aki Tuomi
>>> 
>>> Thanks Aki! I'm looking at it now and despite identical MySQL 5.7.26 
>>> versions on both systems, it seems Dovecot is using libmysqlclient 5.6.37. 
>>> 
>>> Dovecot seems to be using the older libmysqlclient.so.18.1.0 (5.6.37) from 
>>> mysql-community-libs-compat 5.7.26 instead of the newer 
>>> libmysqlclient.so.20.3.13 (5.7.26) from mysql-community-libs 5.7.26. 
>>> 
>>> If I try to remove the libs-compat, yum also insists on removing 
>>> dovecot-mysql, so it depends on the older libmysqlclient and ignores the 
>>> newer one. 
>>> 
>>> I don't suspect I can do anything on my end to force the Dovecot CentOS 
>>> package to use the non-compat libmysqlclient? 
>>> 
>>> Thanks, 
>>> Reio
>> 
>> What repo are you using?
>> ---
>> Aki Tuomi
> 
> Installed Packages
> dovecot-mysql.x86_64          2:2.3.7-8       @dovecot-2.3-latest
> mysql-community-libs.x86_64   5.7.26-1.el7    @mysql57-community
> 
> Both are from official repos.

dovecot-mysql package is built against the mariadb library that comes with 
CentOS 7. If you want it to work against other libmysqlclient versions you'd 
need to compile it yourself: 
https://repo.dovecot.org/ce-2.3.7/centos/7/SRPMS/2.3.7-8_ce/ 



Re: Dovecot 2.3.6 on Solaris10: build issues, segfaults

2019-07-22 Thread Timo Sirainen via dovecot
Ah, okay, I see. submission_max_mail_size should be defined as uoff_t instead 
of size_t in struct submission_settings.
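
I.e. roughly this change (a sketch; I believe the struct lives in 
src/submission/submission-settings.h):

-	size_t submission_max_mail_size;
+	uoff_t submission_max_mail_size;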

> On 20 Jul 2019, at 1.47, Joseph Tam via dovecot  wrote:
> 
> 
> Looking further into this segfault at
> 
>   settings-parser.c:setting_copy():1519
>   *dest_size = *src_size;
> 
> *src_size points to type size_t (typedef unsigned long), a 4-byte
> aligned value consistent with a 32-bit build.  This is mismatched with
> declared type
> 
>   (gdb) whatis src_size
>   type = const uoff_t *
>   (gdb) whatis uoff_t
>   type = unsigned long long
>   (gdb) p sizeof(uoff_t)
>   $1 = 8
> 
> resulting in the segfault when *src_size is dereferenced.  The implied 
> condition of this code segment is typeof(uoff_t)==typeof(size_t) which 
> is clearly not the case.
> 
> I'm not sure how/if uoff_t is defined, but configure reports
> 
>   checking for uoff_t... no
>   checking type of off_t... long long
> 
> The latter is weird, because if I compile and run using the same compiler 
> flags
> 
>   #include <stdio.h>
>   int main(void) { printf("%d %d\n",sizeof(long long),sizeof(off_t)); }
> 
> the output is "8 4".
> 
> Joseph Tam 



Re: index worker 2.3.7 undefined symbol errors

2019-07-22 Thread Timo Sirainen via dovecot
On 19 Jul 2019, at 13.20, Dirk Koopman via dovecot  wrote:
> But I am left with this:
> 
> Jul 19 14:09:52 localhost dovecot: indexer-worker: Error: User  
> lookup failed: Couldn't load required plugin 
> /usr/lib/dovecot/modules/lib90_sieve_plugin.so: dlopen() failed: 
> /usr/lib/dovecot/modules/lib90_sieve_plugin.so: undefined symbol: 
> mail_deliver_ctx_get_log_var_expand_table

Oh.. it's logged by indexer-worker.

> mail_plugins = mail_log notify replication fts fts_lucene sieve

You can't load sieve globally. It needs to be inside protocol lmtp {}
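
For example, a minimal sketch (adjust to your own plugin list):

mail_plugins = mail_log notify replication fts fts_lucene

protocol lmtp {
  mail_plugins = $mail_plugins sieve
}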



Re: Character not allowed in mailbox name

2019-07-22 Thread Timo Sirainen via dovecot
On 22 Jul 2019, at 10.34, Lothar Schilling via dovecot  
wrote:
> 
> Hi Arnold,
> 
> thanks for your assistance. I solved the issue right now by changing
> 
> prefix = Shared/%%u/
> to
> prefix = Shared/%%n/
> 
> This omits the @mydomain.de part which holds the dot causing the
> trouble. Why this became a problem due to the update I wouldn't know.

Accidental bug. Fixed by 
https://github.com/dovecot/core/commit/62f3b738efd4a6444a4bde4a80bea208c5b39ccd 

> 
> Regards
> 
> Lothar
> 
> Am 22.07.2019 um 10:49 schrieb Arnold Opio Oree:
>> Hi Lothar,
>> 
>> I haven't yet had the opportunity to go deep into Dovecot engineering /
>> development, (so please correct me if necessary anyone).
>> 
>> I have experienced this exception where attempting to copy mailbox data
>> within a client from IMAP account to IMAP account, but I have not had
>> it cause a mailbox to become inaccessible.
>> 
>> The reason was because Maildir++ format uses the "." separator to
>> delineate child directories, and therefore the character is illegal in
>> mailbox names. The issue was addressed by either changing the
>> filesystem layout to LAYOUT=fs or by changing the mailbox format
>> altogether to e.g. sdbox.
>> 
>>>  location = maildir:%%h:INDEX=%h/shared/%%u
>> I can see from your configuration that the LAYOUT is not set as fs, so
>> I would assume is still using Maildir++.
>> 
>> Perhaps this setting was lost during the upgrade?
>> 
>> Regards,
>> 
>> Arnold Opio Oree
>> Chief Executive Officer
>> Parallax Digital Technologies
>> 
>> arnoldo...@parallaxdt.com
>> 
>> http://www.parallaxdt.com
>> 
>> tel : +44 (0) 333 577 8587
>> fax : +44 (0) 20 8711 2477
>> 
>> Parallax Digital Technologies is a trading name of Parallax Global
>> Limited. U.K. Co. No. 08836288
>> 
>> The contents of this e-mail are confidential. If you are not the
>> intended recipient you are to delete this e-mail immediately, disregard
>> its contents and disclose them to no other persons.
>> 
>> -Original Message-
>> From: Lothar Schilling via dovecot 
>> Reply-To: Lothar Schilling 
>> To: dovecot@dovecot.org
>> Subject: Character not allowed in mailbox name
>> Date: Mon, 22 Jul 2019 08:33:45 +0200
>> 
>> Hi everybody,
>> 
>> after an update this morning to dovecot 2.3.7 we cannot connect to our
>> shared mailboxes anymore. The error message as issued in Thunderbird:
>> 
>> "Character not allowed in mailbox name '.'"
>> 
>> I didn't change anything about the configuration. Output of dovecot -n:
>> 
>> [...]
>> namespace {
>>  list = children
>>  location = maildir:%%h:INDEX=%h/shared/%%u
>>  prefix = Shared/%%u/
>>  separator = /
>>  subscriptions = yes
>>  type = shared
>> }
>> [...]
>> 
>> Any help would be appreciated, thank you very much.
>> 
>> Lothar Schilling
>> 
>> 
>> 
>> 
>> 
> 



Re: Dovecot Director upgrade from 2.2 to 2.3

2019-07-22 Thread Timo Sirainen via dovecot
On 22 Jul 2019, at 17.45, Alessio Cecchi  wrote:
> 
> one server of the ring is now running Dovecot 2.3.7 and has been working fine 
> with the other Dovecot 2.2 servers for 3 days.
> I notice only that the load average of this CentOS 7 server is higher 
> compared with CentOS 6 and Dovecot 2.2, but I don't know if it's related to the 
> new operating system or Dovecot (hardware is the same).
> 
How much higher? Can you check the individual dovecot processes' CPU usage? I 
guess mainly director, imap-login and pop3-login. The director code should be 
pretty much the same though.

The SSL code in login processes changed in v2.3, so I wonder if the new code 
has some performance issues.



Dovecot v2.3.7.1 & Pigeonhole v0.5.7.1 released

2019-07-23 Thread Timo Sirainen via dovecot
https://dovecot.org/releases/2.3/dovecot-2.3.7.1.tar.gz
https://dovecot.org/releases/2.3/dovecot-2.3.7.1.tar.gz.sig

https://pigeonhole.dovecot.org/releases/2.3/dovecot-2.3-pigeonhole-0.5.7.1.tar.gz
https://pigeonhole.dovecot.org/releases/2.3/dovecot-2.3-pigeonhole-0.5.7.1.tar.gz.sig

Binary packages in https://repo.dovecot.org/

These releases fix the reported regressions in v2.3.7 & v0.5.7.

Dovecot core:
- Fix TCP_NODELAY errors being logged on non-Linux OSes
- lmtp proxy: Fix assert-crash when client uses BODY=8BITMIME
- Remove wrongly added checks in namespace prefix checking

Pigeonhole:
- dsync: Sieve script syncing failed if mailbox attributes weren't
  enabled.



Re: corrupt mdbox index / zero mails showing in imap

2019-07-27 Thread Timo Sirainen via dovecot
On 25 Jul 2019, at 20.55, Mike via dovecot  wrote:
> 
> Hi,
> 
> 
> I have recently migrated (under emergency conditions) a dovecot imap/pop
> based server to a new instance. The mailboxes used mdbox format and due
> to various screwups I had corrupt indexes. I thought I'd cleaned this up
> but then I found that this new instance hadn't been set up correctly for
> nfs. Long story short, I still get users with new cases of corrupt
> indexes. The symptom is imap either showing NO mail in their inbox, or,
> not showing any recently delivered mail in the box, until I rm -f
> dovecot.map.index / doveadm force-resync -u user.
> 
> It would be a huge help if there could be some method to detect if this
> is the case for any given user and to proactively do the force-resync
> process for them instead of waiting for that support call (or service
> cancellation...). I have looked around and have not found any tool
> capable of 'linting' an mdbox format inbox, and it seems like something
> like this should have been or would be an extremely useful debugging
> tool both during development as well as to troubleshoot stuff in the
> field. I would love to know if anyone either has such a tool, or any
> general suggestion how I could go about finding these cases and
> addressing them. I believe I have the nfs issue resolved and will not be
> creating new cases, so I just want to fix the ~3000 boxes I have now and
> move forward.


I think you could do something using the "doveadm dump" command. I'm not sure 
in what ways your mdboxes are corrupted, so there might be an easier way to do 
it, but for a generic checker I think this would work:

 * doveadm dump dovecot.map.index and remember all the "uid" numbers
 * doveadm dump dovecot.index for each folder and remember all the "map_uid" 
numbers.
 * See if any map_uid is missing in dovecot.map.index's uid numbers. yes -> 
force-resync

You can also use doveadm dump for the storage/m.* files to see what they 
contain, but this likely won't be useful for this case.
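
A rough sketch of such a checker for a single user (the paths and the grep
patterns are assumptions about "doveadm dump" output - verify them against
what your dumps actually print before relying on this):

#!/bin/sh
# Compare map_uids referenced by the folder indexes against the uids known
# to the map index; any missing reference means force-resync is needed.
MDBOX=/var/vmail/example.com/user/mdbox   # assumed mdbox root
doveadm dump "$MDBOX/storage/dovecot.map.index" |
    grep -o 'uid=[0-9]*' | cut -d= -f2 | sort -un > /tmp/map.uids
for idx in "$MDBOX"/mailboxes/*/dbox-Mails/dovecot.index; do
    doveadm dump "$idx" | grep -o 'map_uid=[0-9]*' | cut -d= -f2
done | sort -un > /tmp/folder.uids
if [ -n "$(comm -13 /tmp/map.uids /tmp/folder.uids)" ]; then
    echo "missing map_uids -> run: doveadm force-resync -u USER '*'"
fi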

Re: corrupt mdbox index / zero mails showing in imap

2019-07-27 Thread Timo Sirainen via dovecot
On 27 Jul 2019, at 14.13, Timo Sirainen  wrote:
> 
> On 25 Jul 2019, at 20.55, Mike via dovecot  wrote:
>> 
>> Hi,
>> 
>> 
>> I have recently migrated (under emergency conditions) a dovecot imap/pop
>> based server to a new instance. The mailboxes used mdbox format and due
>> to various screwups I had corrupt indexes. I thought I'd cleaned this up
>> but then I found that this new instance hadn't been set up correctly for
>> nfs. Long story short, I still get users with new cases of corrupt
>> indexes. The symptom is imap either showing NO mail in their inbox, or,
>> not showing any recently delivered mail in the box, until I rm -f
>> dovecot.map.index / doveadm force-resync -u user.
>> 
>> It would be a huge help if there could be some method to detect if this
>> is the case for any given user and to proactively do the force-resync
>> process for them instead of waiting for that support call (or service
>> cancellation...). I have looked around and have not found any tool
>> capable of 'linting' an mdbox format inbox, and it seems like something
>> like this should have been or would be an extremely useful debugging
>> tool both during development as well as to troubleshoot stuff in the
>> field. I would love to know if anyone either has such a tool, or any
>> general suggestion how I could go about finding these cases and
>> addressing them. I believe I have the nfs issue resolved and will not be
>> creating new cases, so I just want to fix the ~3000 boxes I have now and
>> move forward.
> 
> 
> I think you could do something with using "doveadm dump" command. I'm not 
> sure in what ways your mdboxes are corrupted, so there might be an easier way 
> to do it, but for a generic checker I think this would work:
> 
> * doveadm dump dovecot.map.index and remember all the "uid" numbers
> * doveadm dump dovecot.index for each folder and remember all the "map_uid" 
> numbers.
> * See if any map_uid is missing in dovecot.map.index's uid numbers. yes -> 
> force-resync
> 
> You can also use doveadm dump for the storage/m.* files to see what they 
> contain, but this likely won't be useful for this case.

Or actually, reading further: it looks like all your indexes are gone, even the 
folders' dovecot.index* files? Wouldn't a simple solution then be to check with 
"doveadm mailbox status" whether the user looks empty (or mostly empty), and if 
so run the force-resync? Or another thought: if INBOX's dovecot.index was 
completely lost and recreated, you can see the index's creation timestamp in the 
"indexid" field of "doveadm dump dovecot.index" - if it's new enough, do the 
force-resync.

BTW. If all the indexes are gone, force-resync adds all storage/m.* mails back 
to their original folder. So if a user had moved mails to another folder, they 
come back to e.g. INBOX. It also loses message flags, and brings back mails 
that were already expunged but not yet doveadm purged.



Re: Autoexpunge not working for Junk?

2019-07-27 Thread Timo Sirainen via dovecot
On 25 Jul 2019, at 7.18, Amir Caspi via dovecot  wrote:
> 
> Hi all,
> 
>   I set up dovecot a couple of months ago and am having trouble getting 
> autoexpunge=30d to work on my Trash and Junk mailboxes.  Not sure why not 
> because I'm not getting error messages in my log.
>   Running "doveadm search -u  mailbox Junk savedbefore 30d" shows 
> me many messages (I've got messages back to mid-May, and a couple of other 
> users have them back to early April, although if this setting were working, 
> there should be nothing earlier than June 24).  Running a manual doveadm 
> expunge works fine... it's just autoexpunge that seems to not be running at 
> all.

Autoexpunging tries to be efficient, so it looks only at the first email's 
saved-timestamp. It's also cached in dovecot.list.index. So you should check:

1. What's the first mail's saved-timestamp?
doveadm fetch -u user date.saved mailbox Junk 1

2. That timestamp should also be the same in dovecot.list.index:
doveadm mailbox status -u user firstsaved Junk
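
In shell form the comparison could look like this (a sketch - the sed
expressions are assumptions about the exact output format, and BSD date
needs -r instead of -d @):

#!/bin/sh
u=user
mail_ts=$(doveadm fetch -u "$u" date.saved mailbox Junk 1 |
          sed -n 's/^date.saved: //p')
cache_ts=$(doveadm mailbox status -u "$u" firstsaved Junk |
           sed -n 's/.*firstsaved=//p')
echo "first mail saved:  $mail_ts"
echo "list index cached: $(date -d @"$cache_ts")"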



Re: doveadm: Error: open(/proc/self/io) failed

2019-08-01 Thread Timo Sirainen via dovecot
On 31 Jul 2019, at 20.45, A. Schulze via dovecot  wrote:
> 
> 
> 
> Am 31.07.19 um 08:27 schrieb Sami Ketola via dovecot:
>> service lmtp {
>> user = vmail
>> }
>> 
>> please remove user = vmail from here or change it to root.
>> 
>> for security reasons lmtp service must be started as root since version 
>> 2.2.36. lmtp will drop root privileges after initialisation but it needs to 
>> open /proc/self/io as root before that.
> 
> Hello Sami,
> 
> I don't read "root is required for lmtp" in 
> https://wiki.dovecot.org/LMTP#Security and neither does 
> https://dovecot.org/doc/NEWS-2.2 say so.
> Could you prove that statement somehow?


Alternative is:

service lmtp {
  user = vmail
  drop_priv_before_exec = yes
}

I'm not sure if you run into other problems with that.



Re: IMAP frontend authenticating proxy with GSSAPI/Kerberos SSO

2019-08-01 Thread Timo Sirainen via dovecot
On 1 Aug 2019, at 12.26, Gert van Dijk via dovecot  wrote:
> 
> passdb {
>  args = proxy=y host=127.0.0.1 port=1143 pass=#hidden_use-P_to_show#
..
> auth: Info: static(username,1.2.3.4,<9WOjSwWP8toKAAYE>): No password
> returned (and no nopassword)

I think this is why it's not using the passdb at all. Try adding 
password=something to the args.



Re: Dovecot Director upgrade from 2.2 to 2.3

2019-08-01 Thread Timo Sirainen via dovecot
On 31 Jul 2019, at 15.41, Alessio Cecchi via dovecot  
wrote:
> 
> Hi Timo,
> 
> here you can see two images with the load average and CPU usage with Dovecot 
> 2.2 (Centos 6) and 2.3 (Centos 7) on the same hardware and same configuration:
> 
> https://imgur.com/a/1hsItlc 
> Load average increment is relevant but CPU usage is similar.
> 

Load average changes can be rather difficult to debug if they're not caused by 
CPU usage or disk IO, and in your graphs those hadn't really changed. One 
more possibility could be to look at context switches, but do you have those 
from before the upgrade? (sar database?) Also, since you changed CentOS 6 -> 7, 
it's possible that it has nothing to do with Dovecot at all.



Re: [BUG?] Double quota calulation when special folder is present

2019-08-07 Thread Timo Sirainen via dovecot

> On 6 Aug 2019, at 21.08, Mark Moseley via dovecot  wrote:
> 
>> 
>> I've bisected this down to this commit: 
>> 
>> git diff 
>> 7620195ceeea805137cbd1bae104e385eee474a9..97473a513feb2bbd763051869c8b7b83e24b37fa
>> 
>> Prior to this commit, anything updating the quota would do the right thing 
>> for any .INBOX. folders (i.e. not double count the contents of 
>> "INBOX" against the quota). After this commit, anything updating quota (new 
>> mail, quota recalc, etc) does the double counting of INBOX.
> 
> Thank you for the bisect! We'll look into this.
> 
> Hi. I was curious if there were any fixes for this? We're still affected by 
> this (and I imagine others are too but don't realize it). Thanks!

Looks like this happens only with Maildir++ quota. As a workaround you could 
switch to dict-file or "count" quota. Anyway added to internal tracking as 
DOP-1336.
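
For example, switching to count quota would look roughly like this (a
sketch - keep your existing quota rules; quota_vsizes makes it count
virtual sizes):

plugin {
  quota = count:User quota
  quota_vsizes = yes
}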



Re: Autoexpunge not working for Junk?

2019-08-14 Thread Timo Sirainen via dovecot
On 13 Aug 2019, at 5.57, Amir Caspi via dovecot  wrote:
> 
> On Aug 12, 2019, at 8:54 PM, Thomas Zajic via dovecot  
> wrote:
>> 
>> * Amir Caspi via dovecot, 12.08.19 22:01
>> 
>>> [~]# doveadm mailbox status -u cepheid firstsaved Junk
>>> Junk firstsaved=1563154976
>>> 
>>> I can't tell how that timestamp corresponds to a human-readable date, 
>>> however.
>> 
>> [zlatko@disclosure:~]$ date -d @1563154976
>> Mon Jul 15 03:42:56 CEST 2019
> 
> So this is the same timestamp as date.saved on message 1... as it should be, 
> I guess.  Except that, as I showed, the timestamps are definitely messed up 
> somehow.  The timestamps in my MUA (whether webmail or local mail app) show 
> just fine... so something seems to be corrupted with the timestamps in the 
> dovecot index file, I think.  But the weird thing is that this is affecting 
> all users, not just me.

It probably has something to do with using mbox format. Are the IMAP UIDs 
changing unexpectedly? Errors/warnings logged related to it? Unfortunately it's 
a rather troublesome mailbox format. There are likely some bugs in Dovecot mbox 
code, but it's difficult and time consuming to try to reproduce any of the bugs 
so I've mostly given up trying.



Re: Should dovecot not be using different logging facility and severity levels?

2019-08-14 Thread Timo Sirainen via dovecot
On 9 Aug 2019, at 17.39, Marc Roos via dovecot  wrote:
> 
> Should dovecot not be using different severity levels like auth.warn? On 
> my system everything goes to loglevel info:

My thinking has been:

 * Panic: There's a bug that needs fixing
 * Fatal: Somewhat stronger error
 * Error: Something's broken or misconfigured - admin should fix something
 * Warning: Something seems to be at least temporarily broken, like maybe some 
limit was reached because the system was overloaded. Admin may need to do 
something or possibly just wait. Either way, these should be looked into.
 * Info: Events that admin doesn't necessarily need to look at, except while 
debugging or for gathering stats or something
 * Debug: Only when really debugging

> lev_info:Aug  9 16:18:24 mail03 dovecot: imap-login: Aborted login (auth 
> failed, 1 attempts in 2 secs): user=, method=PLAIN, rip=x.x.x.x, 
> lip=x.x.x.x, TLS, session=
> lev_info:Aug  9 16:18:29 mail03 dovecot: auth-worker(28656): 
> pam(krinfo,188.206.104.240,): unknown user

These are regular events that happen all the time due to brute force attacks 
and such. I don't know why you'd want to see them as warnings?



Re: Auth driver

2019-08-14 Thread Timo Sirainen via dovecot
On 9 Aug 2019, at 15.08, Riccardo Paolo Bestetti via dovecot 
 wrote:
> 
> 
> Could you point me to any documentation or examples? While I can find many 
> plugins in the repo and around the Internet, I could find none which add 
> authdb drivers.

https://dovecot.org/patches/2.2/passdb-openam.c although it's not using 
autotools to do it in a nice way. For the autotools stuff you can use some 
other plugins as an example. Pigeonhole for example, although it's much more 
complicated than most. https://dovecot.org/patches/2.2/mail-filter.tar.gz 
probably has it also (I didn't look into it).



Re: Autoexpunge not working for Junk?

2019-08-14 Thread Timo Sirainen via dovecot
On 14 Aug 2019, at 22.35, Amir Caspi via dovecot  wrote:
> 
> On Aug 14, 2019, at 1:26 PM, Timo Sirainen via dovecot  wrote:
>> 
>> It probably has something to do with using mbox format. Are the IMAP UIDs 
>> changing unexpectedly? Errors/warnings logged related to it? Unfortunately 
>> it's a rather troublesome mailbox format. There are likely some bugs in 
>> Dovecot mbox code, but it's difficult and time consuming to try to reproduce 
>> any of the bugs so I've mostly given up trying.
> 
> I'm not getting any errors or warnings as far as I can tell, and I don't 
> think the UIDs are changing unexpectedly -- messages are not getting 
> re-downloaded randomly.  That is, everything SEEMS to be working fine, as far 
> as I can tell.
> 
> So many people still use mbox that I hope we can fix this issue.
> 
> I'm happy to help test or provide further debug output... this problem is 
> certainly reproducible here, and it seems like lbutlr has a similar problem, 
> so hopefully we can address at least this one...
> 
> (I'm also happy to give you the Junk mailbox and index files... there's 
> nothing sensitive in my spam!)

It's not very helpful to look at the indexes after the problem already 
happened. But if you can find a reliably reproducible way to make this happen 
starting from an empty mailbox, I could look into it further. Ideally it would 
be a standalone script that reproduces the problem every time. Possibly 
something like:

 * Deliver mails with procmail
 * Read the mails with doveadm fetch
 * Maybe expunge the mails with doveadm expunge
 * Keep checking the uid and date.saved with doveadm fetch to see if they 
unexpectedly change at some point
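
As a starting point, a loop along these lines (a sketch - the user, the
delivery command and the mail counts are placeholders):

#!/bin/sh
u=cepheid
i=0
while [ $i -lt 50 ]; do
    i=$((i + 1))
    printf 'From: a@example.com\nSubject: test %d\n\nbody\n' "$i" |
        procmail -d "$u"        # deliver into the user's mbox
    # watch whether uid or date.saved changes for already-seen messages
    doveadm fetch -u "$u" 'uid date.saved' mailbox INBOX all
    sleep 1
done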



Re: Emails not visible after renaming folders

2019-08-14 Thread Timo Sirainen via dovecot
Looks like this happens when you use a combination of FULLDIRNAME and INDEX in 
mail_location. Without one of these, or using DIRNAME instead of FULLDIRNAME it 
works correctly. Tracking internally in DOP-1348.

> On 6 Aug 2019, at 14.22, Aleksandr via dovecot  wrote:
> 
> Hi guys.
> 
> Does anyone have problems with a similar configuration (mdbox)?
> 
> Just tested with latest version (stage servers installation: dovecot 2.3.7), 
> also affected.
> 
> Not critical, but have complaints from users, 1-2 per month.
> 
> 
> 26.06.2019 12:05, Aleksandr пишет:
>> Copying or moving with email client: thunderbird, roundcube (webmail), mutt 
>> or any other email client via imap protocol.
>> 
>> 25.06.2019 22:10, Germán Herrera пишет:
>>> Are you copying/moving the emails with {cp|mv} or with "doveadm 
>>> {copy|move}"?
>>> 
>>> On 2019-06-25 12:00, Aleksandr via dovecot wrote:
 Hello,
 
 I have a strange problem with "losing" emails after renaming mail
 folder(s) (via imap client: thunderbird, roundcube, etc..)
 
 How to reproduce:
 
 1. Create some folder name, like TEST
 2. Create sub-folder under TEST (like SUBTEST)
 
 Structure:
 
 TEST
   |--SUBTEST
 
 
 # doveadm  mailbox list  -u postmaster@testmailbox
 Spam
 Trash
 Sent
 Drafts
 INBOX
 TEST
 TEST/SUBTEST
 
 3. Move (or copy) mails from INBOX to SUBTEST (all looks fine, and
 mails visible under SUBTEST)
 4. Rename TEST folder to any new name, NEWTEST
 
 Let's try to view mails in the mail client in NEWTEST/SUBTEST - the folder has
 no emails :(
 
 
 mailsrv# doveadm -f table mailbox status -u postmaster@testmailbox
 "messages vsize" NEWTEST*
 mailbox  messages vsize
 NEWTEST 00
 NEWTEST/SUBTEST 00
 
 If I run doveadm force-resync postmaster@testmailbox, the mails become visible in
 INBOX
 
 mailsrv# doveadm -f table mailbox status -u postmaster@testmailbox
 "messages vsize" INBOX*
 mailbox messages vsize
 INBOX   228
 
 Dovecot installation: CentOS x86_64 Linux 7.5.1804
 
 Storage: HDD local partition - XFS filesystem / multi-dbox (mdbox) as
 mail_storage (this problem is not reproduced with Maildir storage!)
 Something seems wrong with the mapping indices.
 
 
  [start] 
 
 # dovecot -n
 
 # 2.2.36 (1f10bfa63): /etc/dovecot/dovecot.conf
 # Pigeonhole version 0.4.21 (92477967)
 # OS: Linux 3.10.0-862.2.3.el7.x86_64 x86_64 CentOS Linux release
 7.5.1804 (Core)
 # Hostname: 
 auth_mechanisms = plain login digest-md5 cram-md5
 base_dir = /var/run/dovecot/
 default_client_limit = 2
 default_login_user = dovecot
 default_process_limit = 1
 dict {
   quota = redis:host=127.0.0.1:prefix=user/:timeout_msecs=1000
 }
 disable_plaintext_auth = no
 first_valid_gid = 90
 first_valid_uid = 90
 imapc_features = rfc822.size fetch-headers
 imapc_host = 
 imapc_user = %u
 lda_mailbox_autocreate = yes
 lda_mailbox_autosubscribe = yes
 login_greeting = .
 login_log_format_elements = user=<%u> method=%m rip=%r lip=%l %c
 login_trusted_networks = 10.0.1.0/24
 mail_access_groups = mail
 mail_debug = yes
 mail_fsync = never
 mail_gid = 97
 mail_location =
 mdbox:~/mail/mailboxes:FULLDIRNAME=mBoX-MeSsAgEs:INDEX=~/mail/index:CONTROL=~/mail/control:INBOX=~/mail/mailboxes/inbox
 mail_log_prefix = "%{session} %Us(%u): "
 mail_max_lock_timeout = 30 secs
 mail_plugins = quota  zlib
 mail_prefetch_count = 20
 mail_privileged_group = mail
 mail_uid = 97
 managesieve_notify_capability = mailto
 managesieve_sieve_capability = fileinto reject envelope
 encoded-character vacation subaddress comparator-i;ascii-numeric
 relational regex imap4flags copy include variables enotify environment
 mailbox date index ihave duplicate mime foreverypart extracttext
 vacation-seconds editheader
 mbox_lock_timeout = 30 secs
 mbox_very_dirty_syncs = yes
 mbox_write_locks = fcntl
 namespace inbox {
   inbox = yes
   list = yes
   location =
   mailbox Drafts {
 auto = subscribe
 special_use = \Drafts
   }
   mailbox Sent {
 auto = subscribe
 special_use = \Sent
   }
   mailbox Spam {
 auto = subscribe
   }
   mailbox Trash {
 auto = subscribe
 special_use = \Trash
   }
   prefix =
   separator = /
   type = private
 }
 passdb {
   args = /etc/dovecot/dovecot-ldap.conf
   driver = ldap
 }
 plugin {
   cgroup_basedir = /usr/sys/cgroup
   hostingAccount = default
   quota = dict:User quota::proxy::quota
   quota_grace = 0%%
   quota_over_flag_value = TRUE
   quota_over_script = account-quota m

Re: dovecot-uidlist invalid data

2019-08-14 Thread Timo Sirainen via dovecot
On 4 Aug 2019, at 22.57, Király Balázs via dovecot  wrote:
> 
> Hi!
>  
> I’m struggling with the following error: 
>  
> Aug  4 21:32:00 mx02 dovecot: imap(x...@xxx.tld)<17693>: 
> Error: Mailbox INBOX: Broken file /home/vmail/xxx.tld/xxx/dovecot-uidlist 
> line 6246: Invalid data:
> Aug  4 21:49:22 mx02 dovecot: imap(x...@xxx.tld)<21879>: 
> Error: Mailbox INBOX: Broken file /home/vmail/xxx.tld/xxx/dovecot-uidlist 
> line 6249: Invalid data:
>  
> It seems the first part is not incremented properly and sometimes it has a 
> jump in it, like the line 6246:
>  
> 18810 :1564935891.M816284P8904.mx01.m.ininet.hu,S=12145,W=12409
> 18812 :1564947092.M542714P2651.mx01.m.ininet.hu,S=12275,W=12517

Is there ever anything after the "Invalid data:" text? It seems anyway that 
concurrent reading/writing isn't working as expected in dovecot-uidlist. Most 
likely something to do with NFS.

Can you reproduce this easily by just running "imaptest" with some test account 
(it'll delete mails)? See https://imapwiki.org/ImapTest - it's also available 
in dovecot-imaptest package in repo.dovecot.org.
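
For example (host, credentials and duration are placeholders - and again,
it will expunge mails in that account):

imaptest host=127.0.0.1 port=143 user=testuser pass=testpass secs=60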



Re: auth module logging

2019-08-14 Thread Timo Sirainen via dovecot
On 4 Aug 2019, at 6.23, AP via dovecot  wrote:
> 
> On Sat, Aug 03, 2019 at 11:27:24AM -0600, Michael Slusarz wrote:
>>> Errors hit the logs but I would appreciate seeing successful auths
>>> happen for the additional piece of mind. Cmouse and I couldn't
>>> find a way to do it on irc and it appears that the capability is
>>> missing. Successul /logins/ can be logged but auths, by themselves,
>>> cannot.
>>> 
>>> I would appreciate if the ability was added.
>>> 
>>> Dovecot 2.3.7.1 is in use.
>> 
>> Events (using event exporter) is probably what you want, new in 2.3.7.
>> 
>> https://doc.dovecot.org/admin_manual/list_of_events/
> 
> Hi,
> 
> I've tried using this in various ways but I could never get any real success.
> 
> I came close but the logging was always far too verbose. The info I wanted
> WAS there but so was a ton of other data I didn't want. I'd share the configs
> I tried but they came and went as I was experimenting.
> 
> Would love to know how to configure the events logging such that I only get
> a successful auth line logged as that would, indeed, solve my issue. It's
> quite likely I didn't hit the right config as the docs are somewhat sparse.

There probably isn't yet a name for the event that you want. A kludgy approach 
would be to filter the event based on the source code filename and line number. 
But that likely needs to be modified every time you upgrade Dovecot..
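
Purely as an illustration of the kludge - both the event name and the
file:line here are placeholders, and the location has to be rechecked
after every upgrade:

metric auth_success_log {
  # placeholder event name:
  event_name = auth_request_finished
  # placeholder file:line - recheck after each upgrade:
  source_location = auth-request.c:1234
}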



Re: segmentation fault in fs_list_get_path

2019-08-14 Thread Timo Sirainen via dovecot
On 3 Aug 2019, at 21.22, David M. Johnson via dovecot  
wrote:
> 
> There seems to be a straightforward bug in 
> src/lib-storage/list/mailbox-list-fs.c:79.  set->index_dir is unchecked prior 
> to dereferencing (unlike on line 126 in the same file, where it is properly 
> checked).  This manifested on a FreeBSD server running dovecot 2.3.6 when 
> clients tried to retrieve mail with subscriptions like `~/bar/baz`.  This 
> caused the `imap` child to crash, e.g. (slightly anonymized)

Could you also send your doveconf -n output? Would likely help creating a 
reproducible test.



Re: 2.3.7 + stats

2019-08-17 Thread Timo Sirainen via dovecot
On 16 Aug 2019, at 14.35, Jean-Daniel via dovecot  wrote:
> 
> Some of the behaviours you observe may be due to the same bug I encountered:
> 
> https://dovecot.org/pipermail/dovecot/2019-July/116475.html
> 
> Especially regarding the ‘successful' field for auth, which does not exist 
> and is really named ‘success', and which is never set anyway.

As a workaround you could use auth_passdb_request_finished. And since it shows 
differences between "password mismatch" vs "user unknown", that's likely a good 
permanent solution as well.

metric auth_ok {
  event_name = auth_passdb_request_finished
  filter {
result = ok
  }
}
metric auth_user_unknown {
  event_name = auth_passdb_request_finished
  filter {
result = user_unknown
  }
}
metric auth_password_mismatch {
  event_name = auth_passdb_request_finished
  filter {
result = password_mismatch
  }
}



Re: Dovecot and hard links?

2019-08-17 Thread Timo Sirainen via dovecot
On 17 Aug 2019, at 1.57, @lbutlr via dovecot  wrote:
> 
> On 16 Aug 19, at 07:33 , @lbutlr  wrote:
>> I was looking at a mail folder and I noted that a file in the inbox had a 
>> total of 11 hard links to it:
> 
> Ack. I checked the junk folder and there are 379 files in there with 379 
> links!
> 
> Since they were all in junk I just deleted them all, but that cannot possibly 
> be desired behavior.
> 
> What do I check here?

Hard links are created when a mail is copied with the IMAP COPY command. So 
Dovecot just does what the client asks it to do. Maybe you have some 
misbehaving IMAP client?
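
To find such messages, something like this works (the Maildir path is an
assumption; the second column of ls -li output is the link count):

find ~/Maildir -type f -links +1 -exec ls -li {} +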



Re: Feature wishlist: Allow to hide client IP/host in submission service

2019-08-28 Thread Timo Sirainen via dovecot
On 25 Aug 2019, at 21.51, Sebastian Krause via dovecot  
wrote:
> 
> Hi,
> 
> In many mail setups a required feature (for privacy reasons) is to
> hide the host and IP of clients (in the "Received" header) that use
> the authenticated submission over port 587. In Postfix that's
> possible (https://serverfault.com/q/413533/86332), but not very nice
> to configure, especially if you only want to strip the Received 
> header for port 587 submissions, but not on port 25.
> 
> As far as I can see this configuration is not possible at all in the
> Dovecot submission server because the function which adds the
> Received header with the client's IP address
> (smtp_server_transaction_write_trace_record) is always called in
> submission-commands.c.
> 
> It would be very useful if the submission server could anonymize the
> client with a single configuration option, then all the Postfix
> configuration mess (and using SASL) could be skipped by simply using
> the Dovecot submission server instead.

Yeah, it would be useful to hide the client's IP and do it by default. Actually 
I think there shouldn't even be an option to not hide it. Or would it be better 
or worse to just not have the Received header added at all?



Re: Dovecot 2.3.7 - char "-" missing

2019-09-03 Thread Timo Sirainen via dovecot
On 30 Aug 2019, at 13.44, Domenico Pastore via dovecot  
wrote:
> 
> Hello,
> 
> i have updated dovecot from version 2.2.15 to 2.3.7.2.
> I have a problem with my java software because there is a different 
> response when opening a connection to doveadm.
> 
> I need to open a socket to doveadm to get the imap quota of a mailbox.
> 
> With version 2.2.15:
> # telnet 192.160.10.4 924
> Trying 192.160.10.4...
> Connected to 192.160.10.4.
> Escape character is '^]'.
> -
> 
> 
> With version 2.3.7.2:
> # telnet 192.160.10.3 924
> Trying 192.160.10.3...
> Connected to 192.160.10.3.
> Escape character is '^]'.
> 
> 
> The difference is the "-" character. Version 2.3.7 does not respond with the "-" 
> character after opening the connection.
> 
> Is it possible to add the character again with a parameter?
> 
> Why did doveadm's answer change?

It got changed as part of some other doveadm protocol changes. The change was 
somewhat accidental though and we didn't notice the difference. Anyway, 
practically this shouldn't have made any difference if the code was implemented 
as was described in https://wiki.dovecot.org/Design/DoveadmProtocol. It says that the client needs 
to send VERSION first, and as a reply it receives the "+" or "-" line. So it 
was more of a bug that previous Dovecot versions sent the +/- line too early. I 
added a note about this to the wiki page though.
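
So a correct client exchange looks roughly like this (illustrative; check
the wiki page for the exact VERSION tokens, which are tab-separated):

C: VERSION<TAB>doveadm-server<TAB>1<TAB>0
S: +      ("-" instead means the client must authenticate first)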

Re: WARNING: using attachment_dir with plugin zlib can corrupt mails

2019-09-04 Thread Timo Sirainen via dovecot
On 19 Jul 2019, at 17.52, Patrick Cernko via dovecot  
wrote:
> 
> Hello list, hello Dovecot developers,
> 
> this week, I discovered a serious bug in Dovecot that led to several broken 
> mails on our servers. The bug corrupts the first few characters of the mail 
> header during saving. On our setup, it was almost always only the very first 
> line of text that was corrupted.
..
> The bug occurs on very specific mails. Due to privacy reasons I could not 
> provide sample mails here. Storing such mails seems to trigger the bug 
> reproducibly.
> 
> 
> I attached a very minimal doveconf -n config, that can be used to trigger the 
> bug. If one of the developers is interested, I can try to generate an 
> "anonymized" version of such a specific mail that still causes the issue. I 
> discovered the bug on our productive systems, running latest Dovecot 2.2 
> release, but the latest 2.3 I used during debugging is affected, too.

Getting such a mail that would allow reproducing would be helpful. I can't seem 
to be able to reproduce this with stress testing.

https://dovecot.org/tools/ has a couple of scripts that can obfuscate emails 
in a bit different ways. For example 
https://dovecot.org/tools/maildir-obfuscate.pl might work.

I'm also wondering if Stephan's recent base64 code changes will fix this 
(everything is not merged yet).



Re: WARNING: using attachment_dir with plugin zlib can corrupt mails

2019-09-07 Thread Timo Sirainen via dovecot
On 19 Jul 2019, at 17.52, Patrick Cernko via dovecot  
wrote:
> 
> Hello list, hello Dovecot developers,
> 
> this week, I discovered a serious bug in Dovecot that led to several broken 
> mails on our servers. The bug corrupts the first few characters of the mail 
> header during saving. On our setup, it was almost always only the very first 
> line of text that was corrupted.
> 
> The bug seems to be triggered by a bad "interaction" of the attachment_dir option 
> and the zlib plugin. If you use both, you are most likely affected too, unless 
> you only use the zlib plugin for reading previously compressed stored mails. 
> That's also the workaround we use now: zlib plugin only enabled in 
> mail_plugins but no plugin/zlib_save set.

Actually the mail isn't saved corrupted. The bug is when reading the mail. So 
any existing corrupted mails become fixed after upgrading.

Fix here: 
https://github.com/dovecot/core/commit/5068b11e594ad7cc1f7cedf2bd9280520e0e534d 




Re: dovecot 2.3.7.2-1~bionic: Performance issues caused by excessive IO to ~/mdbox/mailboxes/INBOX/dbox-Mails/dovecot.index.tmp

2019-10-01 Thread Timo Sirainen via dovecot
On 1 Oct 2019, at 16.31, Ralf Hildebrandt via dovecot  
wrote:
> 
> I set up system copying all mails to a backup system.
> 
> This used to work without a hitch - now in the last few days mails
> would pile up in the Postfix Queue, waiting to be delivered using the
> lmtp transport into dovecot.
> 
> So dovecot was being slow, but why? After all, nothing changed.
> 
> After reading some articles on stackoverflow I found a way of finding
> out which file gets the most IO:
> 
> % sysdig -c topfiles_bytes;
> 
> This command quickly pointed to 
> ~/mdbox/mailboxes/INBOX/dbox-Mails/dovecot.index.tmp
> That file was written excessively. 

Was it one user's dovecot.index.tmp or for a lot of users? This means that 
dovecot.index is being rewritten, which should happen only once in a while, but 
now it sounds like it's happening maybe for every mail delivery. If it's still 
happening, could you send me one folder's dovecot.index and dovecot.index.log 
files? (They don't contain anything sensitive other than maybe message flags.)

> I then put ~/mdbox/mailboxes/INBOX/dbox-Mails/ into tmpfs and alas, the queue 
> would drain quickly.
> 
> But why is that? Why would the index file be updated so often?
> 
> This is dovecot 2.3.7.2-1~bionic

So you had been running this version already for a while, and then it just 
suddenly started getting slow?

I tried to reproduce this with imaptest and Dovecot that is patched to log when 
dovecot.index is being rewritten, but there doesn't seem to be any difference 
with v2.2.36, v2.3.7 or git master.



Re: [ext] dovecot 2.3.7.2-1~bionic: Performance issues caused by excessive IO to ~/mdbox/mailboxes/INBOX/dbox-Mails/dovecot.index.tmp

2019-10-07 Thread Timo Sirainen via dovecot
On 1 Oct 2019, at 16.45, Ralf Hildebrandt via dovecot  
wrote:
> 
> * Ralf Hildebrandt via dovecot :
> 
>> But why is that? Why would the index file be updated so often?
> 
> BTW: This post is a followup to my "2.3.7 slower than 2.3.6?" post from back 
> in July.

Fixed by 
https://github.com/dovecot/core/commit/5e9e09a041b318025fd52db2df25052b60d0fc98 
and will be in the soon-to-be-released v2.3.8.



Re: Using attachment_dir with plugin zlib corrupt mails

2019-10-10 Thread Timo Sirainen via dovecot
Can you test if 
https://github.com/dovecot/core/commit/5068b11e594ad7cc1f7cedf2bd9280520e0e534d.patch
fixes it for you?
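
If it helps, applying a GitHub commit as a patch to a source tree goes
roughly like this (the version directory is an assumption):

curl -LO https://github.com/dovecot/core/commit/5068b11e594ad7cc1f7cedf2bd9280520e0e534d.patch
cd dovecot-2.3.8
patch -p1 < ../5068b11e594ad7cc1f7cedf2bd9280520e0e534d.patch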

> On 10 Oct 2019, at 11.34, MAREN ZUBIZARRETA via dovecot  
> wrote:
> 
> Hello:
>  
>   I have found the same problem reported above by Patrick Cernko affecting 
> our system and corrupting our messages. Even worse, Outlook 2016 will not 
> synchronize and the clients cannot see any message, even if there is only one 
> corrupted mail per mailbox.
>  
>   I cannot figure out a feasible workaround for our system, and I can see 
> that in the new version 2.3.8 the bug is not fixed.
>  
>  Will this issue be treated soon?
>  
>  Thanks a lot
>  
>  Maren Zubizarreta
>  
> 
> WARNING: using attachment_dir with plugin zlib can corrupt mails
> 
> Patrick Cernko pcernko at mpi-klsb.mpg.de 
> 
> Fri Jul 19 17:52:37 EEST 2019
> 
> Hello list, hello Dovecot developers,
>  
> this week, I discovered a serious bug in Dovecot that led to several 
> broken mails on our servers. The bug corrupts the first few characters 
> of the mail header during saving. On our setup, it was almost always 
> only the very first line of text that was corrupted.
>  
> Depending on the IMAP client (they seem to request different header 
> fields, ... during mail access), the bug causes the imap process to hang 
> up the TCP connection and log errors like this:
>  
> > imap(USERNAME)<4767>: Error: Corrupted record in index 
> > cache file 
> > /IMAP/mail/mailboxes/USERNAME/mdbox/mailboxes/Trash/dbox-Mails/dovecot.index.cache:
> >  UID 489113: Broken fields in mailbox Trash: 
> > read(attachments-connector(zlib(/IMAP/mail/mailboxes/USERNAME/mdbox/storage/m.813))):
> >  FETCH BODY[HEADER.FIELDS (RETURN-PATH SUBJECT)] got too little data: 2 vs 
> > 122
>  
> In the case that finally grabbed my attention, the client was the user's 
> iphone, which did not display any new messages while his Thunderbird did.
>  
> The bug seems to be triggered by a bad "interaction" of the attachment_dir 
> option and the zlib plugin. If you use both, you are most likely affected 
> too, unless you only use the zlib plugin for reading previously compressed 
> stored mails. That's also the workaround we use now: zlib plugin only 
> enabled in mail_plugins but no plugin/zlib_save set.
>  
> The bug occurs on very specific mails. Due to privacy reasons I could 
> not provide sample mails here. Storing such mails seems to trigger the 
> bug reproducibly.
>  
>  
> I attached a very minimal doveconf -n config, that can be used to 
> trigger the bug. If one of the developers is interested, I can try to 
> generate an "anonymized" version of such a specific mail that still 
> causes the issue. I discovered the bug on our productive systems, 
> running latest Dovecot 2.2 release, but the latest 2.3 I used during 
> debugging is affected, too.
>  
> During debugging, I also found one hint that might help find the bug: 
> If you store a problematic mail with zlib_save=gz (or zlib_save=bz2) and 
> then disable the zlib plugin in mail_plugins, you can call
>  
> doveadm fetch -u test hdr all | grep -v ^hdr: | gzip --decompress
>  
> on test's mailbox with only that one broken mail.
> This will display the beginning of the rfc822 mail text until gzip 
> terminates with "gzip: stdin: unexpected end of file", approximately 
> after twice the length of the mail HEADER. This might indicate that 
> dovecot stores the uncompressed size of the header in its data 
> structures although the mail is stored compressed.
>  
>  
> I also found a very efficient way to find all affected mails in our setup:
>  
> doveadm -f flow fetch -A 'user guid mailbox uid seq flags hdr' all | \
>grep -a "^[^ ]+ user=" | \
>grep -avF ' hdr=Return-path: ' | \
>grep -av '.* hdr=[[:print:][:space:]]*$'
> (runtime for ~6M mails on our servers was 20-30min)
>  
> This can be even more optimized if you have a powerful storage system 
> with GNU parallel:
> > doveadm user '*' | parallel "doveadm -f flow fetch -u '{}' 'user guid 
> > mailbox uid seq flags hdr' all | grep -a '^user=' | grep -avF ' 
> > hdr=Return-path: ' | grep -av '.* hdr=

Re: BUG: Mailbox renaming algorithm got into a potentially infinite loop, aborting

2019-10-16 Thread Timo Sirainen via dovecot
On 25 Sep 2019, at 17.03, Alex Ha via dovecot  wrote:
> 
> Hi all!
> 
> I have two dovecot servers with dsync replication over tcp.
> Replication works fine except for one user.
> 
> # doveadm replicator status
> username  
>priority fast sync full sync success sync failed
> custo...@example.com 
> none 00:00:33  07:03:23  
> 03:22:31 y 
> 
> If i run dsync manually, i get the following error message:
> 
> dsync-local(custo...@example.com ): Debug: brain 
> M: -- Mailbox renamed, restart sync --
> dsync-local(custo...@example.com ): Error: BUG: 
> Mailbox renaming algorithm got into a potentially infinite loop, aborting
> dsync-local(custo...@example.com ): Error: 
> Mailbox INBOX.Foldername sync: mailbox_rename failed: Invalid mailbox name 
> 'Foldername-temp-1': Missing namespace prefix 'INBOX.'
> 
I've never fixed this because I haven't figured out how to reproduce it. If it 
happens with you all the time, could you try:

 - Get a copy of both replica sides, e.g. under /tmp/replica1 and /tmp/replica2
 - Make sure dsync still crashes with them, e.g. doveadm -o 
mail=maildir:/tmp/replica1 sync maildir:/tmp/replica2
 - Delete all mails and dovecot.index* files (but not dovecot.mailbox.log)
 - Make sure dsync still crashes
 - Send me the replicas - they should no longer contain anything sensitive

As for fixing, you could see if deleting dovecot.mailbox.log from both replicas 
happens to fix this.



Re: Coredump v2.3.8 specific msg fetch, corrupted record in index cache, Broken physical size

2019-10-20 Thread Timo Sirainen via dovecot
On 18 Oct 2019, at 15.36, Erik de Waard via dovecot  wrote:
> 
> Hi, I'm getting a coredump on a specific msg, I've attached the gdb output.
> 
> In the file on disk I noticed W= is missing:
> 1571209735.M744550P1608.rwvirtual65,S=15886:2,S
..
> mail.log
> Oct 18 14:41:39 rwvirtual10 dovecot: imap(john...@company.nl)<15868>: 
> Error: Mailbox INBOX.Debug: UID=1041: 
> read(/data/mail/company.nl/users/johndoe/Maildir/.Debug/cur/1571209735.M744550P1608.rwvirtual65,S=15886:2,S) 
> failed: Cached message size smaller than expected (15886 < 16367, 
> box=INBOX.Debug, UID=1041) (read reason=mail stream)
> Oct 18 14:41:39 rwvirtual10 dovecot: imap(john...@company.nl)<15868>: 
> Error: Corrupted record in index cache file 
> /data/indexes/john...@company.nl/.Debug/dovecot.index.cache: UID 1041: Broken 
> physical size in mailbox INBOX.Debug: 
> read(/data/mail/company.nl/users/johndoe/Maildir/.Debug/cur/1571209735.M744550P1608.rwvirtual65,S=15886:2,S) 
> failed: Cached message size smaller than expected (15886 < 16367, 
> box=INBOX.Debug, UID=1041)
> Oct 18 14:41:39 rwvirtual10 dovecot: imap(john...@company.nl)<15868>: 
> Panic: file istream.c: line 315 (i_stream_read_memarea): assertion failed: 
> (old_size <= _stream->pos - _stream->skip)

The missing W shouldn't matter, but the S size is wrong. So the error is 
expected, but the panic isn't. I tried to reproduce this, but I couldn't get 
the panic to happen. Do you still have the file? Could you send it to me? You 
can also put it through https://dovecot.org/tools/maildir-obfuscate.pl which 
should remove all sensitive content but hopefully still contain enough to 
reproduce the bug.



Re: Dovecot v2.3.8 released

2019-10-20 Thread Timo Sirainen via dovecot


> On 18 Oct 2019, at 13.43, Tom Sommer via dovecot  wrote:
> 
>> I am seeing a lot of errors since the upgrade, on multiple client accounts:
>> Info: Connection closed: read(size=7902) failed: Connection reset by
>> peer (UID FETCH running for 0.242 + waiting input/output for 108.816
>> secs, 60 B in + 24780576+8192 B out, state=wait-output)
>> Using NFS storage (not running with the mail_nfs_index or mail_nfs_storage)
>> Was something changed in terms of IO/Idle timeouts?
> 
> We are also seeing different I/O patterns since the upgrade, more I/O is 
> being used than normal.

What was the previous version you were running?

> This is mail_debug from one of the accounts in question:
> 
> Oct 18 13:39:37 imap()<7552>: Debug: Mailbox INBOX: 
> Mailbox opened because: SELECT
> Oct 18 13:39:37 imap()<7552>: Debug: Mailbox INBOX: UID 
> 17854: Opened mail because: prefetch
> Oct 18 13:39:37 imap()<7552>: Debug: Mailbox INBOX: UID 
> 17854: Opened mail because: full mail
..
> Oct 18 13:39:48 imap()<7552>: Debug: Mailbox INBOX: UID 
> 17947: Opened mail because: full mail

Quite a lot of mail downloads for a single session. I wonder if the user really 
had that many new mails or if they were being redownloaded for some reason?

> Oct 18 13:40:56 imap()<7552>: Debug: Mailbox Junk: 
> Mailbox opened because: autoexpunge
> Oct 18 13:40:56 imap()<7552>: Debug: Mailbox Junk 
> E-mail: Mailbox opened because: autoexpunge
> Oct 18 13:40:56 imap()<7552>: Info: Connection closed: 
> read(size=7902) failed: Connection reset by peer (UID FETCH running for 0.542 
> + waiting input/output for 78.357 secs, 60 B in + 39221480+8192 B out, 
> state=wait-output) in=290 out=39401283 deleted=0 expunged=0 trashed=0 
> hdr_count=0 hdr_bytes=0 body_count=94 body_bytes=39210315

state=wait-output means Dovecot was waiting for the client to read the data it is 
sending. In v2.3.7 there were some changes related to this, but were you 
previously successfully running v2.3.7? In v2.3.8 I can't really think of such 
changes.



Re: Dovecot v2.3.8 released

2019-10-20 Thread Timo Sirainen via dovecot
On 20 Oct 2019, at 11.37, Tom Sommer via dovecot  wrote:
> 
>>> This is mail_debug from one of the accounts in question:
>>> Oct 18 13:39:37 imap()<7552>: Debug: Mailbox INBOX: 
>>> Mailbox opened because: SELECT
>>> Oct 18 13:39:37 imap()<7552>: Debug: Mailbox INBOX: 
>>> UID 17854: Opened mail because: prefetch
>>> Oct 18 13:39:37 imap()<7552>: Debug: Mailbox INBOX: 
>>> UID 17854: Opened mail because: full mail
>> ..
>>> Oct 18 13:39:48 imap()<7552>: Debug: Mailbox INBOX: 
>>> UID 17947: Opened mail because: full mail
>> Quite a lot of mail downloads for a single session. I wonder if the
>> user really had that many new mails or if they were being redownloaded
>> for some reason?
> 
> They might redownload because of UID FETCH failing?

The client successfully downloaded all but the last mail. So it should be 
redownloading only the latest one, not all of them. (I don't think there are 
any clients stupid enough to redownload everything..)

>>> Oct 18 13:40:56 imap()<7552>: Debug: Mailbox Junk: 
>>> Mailbox opened because: autoexpunge
>>> Oct 18 13:40:56 imap()<7552>: Debug: Mailbox Junk 
>>> E-mail: Mailbox opened because: autoexpunge
>>> Oct 18 13:40:56 imap()<7552>: Info: Connection 
>>> closed: read(size=7902) failed: Connection reset by peer (UID FETCH running 
>>> for 0.542 + waiting input/output for 78.357 secs, 60 B in + 39221480+8192 B 
>>> out, state=wait-output) in=290 out=39401283 deleted=0 expunged=0 trashed=0 
>>> hdr_count=0 hdr_bytes=0 body_count=94 body_bytes=39210315
>> state=wait-output means Dovecot was waiting for client to read the
>> data it is sending. In v2.3.7 there was some changes related to this,
>> but were you previously successfully running v2.3.7? In v2.3.8 I can't
>> really think of such changes.
> 
> Yes, we were successfully running 2.3.7.2 before, the issue started just 
> after the upgrade
> 
> It can't be related to changes in the indexes? Increasing I/O
> 
> There were no input/output errors in the logs prior to 2.3.8

How large are the IO latencies now and before? The IO wait% in e.g. iostat? And 
load average in general?

I can't see any reason for IO to be different in v2.3.8 than v2.3.7. The only 
thing even close to it is the one index file bugfix. I did some further testing 
with it and I can't see it doing any more work now than it used to.



Re: BUG: Mailbox renaming algorithm got into a potentially infinite loop, aborting

2019-10-20 Thread Timo Sirainen via dovecot
On 17 Oct 2019, at 13.01, Alex Ha via dovecot  wrote:
> 
>> dsync-local(custo...@example.com ): Error: BUG: 
>> Mailbox renaming algorithm got into a potentially infinite loop, aborting
>> dsync-local(custo...@example.com ): Error: 
>> Mailbox INBOX.Foldername sync: mailbox_rename failed: Invalid mailbox name 
>> 'Foldername-temp-1': Missing namespace prefix 'INBOX.'
>> 
> 
> As for fixing, you could see if deleting dovecot.mailbox.log from both 
> replicas happens to fix this.
> 
> 
> Unfortunatley deleting dovecot.mailbox.log on both replicas did not fix the 
> problem.

I could reproduce the issue. Looks like deleting both dovecot.mailbox.log* and 
dovecot.list.index* fixes it. Tracking this internally in DOP-1501.
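
In practice something like this, run for the affected user on both replicas
while they're not being accessed (the paths are assumptions for a typical
layout - adjust to your mail_location):

find /var/vmail/example.com/user -name 'dovecot.mailbox.log*' -delete
find /var/vmail/example.com/user -name 'dovecot.list.index*' -delete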



Re: [2.3.8] possible replication issue

2019-12-05 Thread Timo Sirainen via dovecot
I think there's a good chance that upgrading both will fix it. The bug already 
existed in old versions, it just wasn't normally triggered. Since v2.3.8 this 
situation is triggered on one dsync side, so the v2.3.9 fix needs to be on the 
other side.

> On 5. Dec 2019, at 8.34, Piper Andreas via dovecot  
> wrote:
> 
> Hello,
> 
> upgrading to 2.3.9 unfortunately does *not* solve this issue:
> 
> I upgraded one of my replicators from 2.3.7.2 to 2.3.9 and after some seconds 
> replication stopped. The other replicator remained with 2.3.7.2. After 
> downgrading to 2.3.7.2 replication is again working fine.
> 
> I did not try to upgrade both replicators up to now, as this is a live 
> production system. Is there a chance, that upgrading both replicators will 
> solve the problem?
> 
> The machines are running Ubuntu 18.04
> 
> Any help is appreciated.
> 
> Thanks,
> Andreas
> 
> Am 18.10.19 um 13:52 schrieb Carsten Rosenberg via dovecot:
>> Hi,
>> some of our customers have discovered a replication issue after
>> upgrading from 2.3.7.2 to 2.3.8.
>> Running 2.3.8, several replication connections are hanging until the defined
>> timeout. So after some seconds there are $replication_max_conns hanging
>> connections.
>> Other replications are running fast and successful.
>> Also running a doveadm sync tcp:... is working fine for all users.
>> I can't see exactly, but I haven't seen the same mailboxes timing out again and
>> again. So I would assume it's not related to the mailbox.
>> From the logs:
>> server1:
>> Oct 16 08:29:25 server1 dovecot[5715]:
>> dsync-local(userna...@domain.com): Error:
>> dsync(172.16.0.1): I/O has stalled, no activity for 600 seconds (version
>> not received)
>> Oct 16 08:29:25 server1 dovecot[5715]:
>> dsync-local(userna...@domain.com): Error:
>> Timeout during state=master_recv_handshake
>> server2:
>> Oct 16 08:29:25 server2 dovecot[8113]: doveadm: Error: read(server1)
>> failed: EOF (last sent=handshake, last recv=handshake)
>> There aren't any additional logs regarding the replication.
>> I have tried increasing vsz_limit or reducing replication_max_conns.
>> Nothing changed.
>> --
>> Both customers have 10k+ users. Currently I couldn't reproduce this on
>> smaller test systems.
>> Both installation were downgraded to 2.3.7.2 to fix the issue for now
>> --
>> I've attached a tcpdump showing the client stops
>> sending any data after the mailbox_guid table headers.
>> Any idea what could be wrong here or how to debug this issue?
>> Thanks.
>> Carsten Rosenberg
> 
> 
> -- 
> 
> Dr. Andreas Piper, Hochschulrechenzentrum der Philipps-Univ. Marburg
>  Hans-Meerwein-Straße 6, 35032 Marburg, Germany
> Phone: +49 6421 28-23521  Fax: -26994  E-Mail: pi...@hrz.uni-marburg.de 
> 


Re: doveadm sync - I/O has stalled

2024-04-09 Thread Timo Sirainen via dovecot
We haven't found any specific bugs with lib-ssl-iostream so far, but we did 
find an istream-multiplex bug that could cause hangs with doveadm-doveadm 
connections. Could be worth testing if it helps: 
https://github.com/dovecot/core/commit/bbe546bc637a6ac5c9e91fc8abefce62e4950d07

> On 30. Dec 2022, at 14.37, songliny  wrote:
> 
> Hi All,
>   We are using dovecot 2.3.19.1
> We created a account with more than 1000 mail folders in Maildir format to 
> reproduce the issue.
> After weeks of testing, we have found a logic that may cause dsync to 
> encounter the error - no activity for 900 seconds
>   The function, dsync_ibc_stream_input, is the callback function after some 
> data are ready for be read.
> This is part of what it does.
>   o_stream_cork(ibc->output);
> ibc->ibc.io_callback(ibc->ibc.io_context);
> o_stream_uncork(ibc->output);
> Normally, ibc->ibc.io_callback(ibc->ibc.io_context) reads some data and 
> then processes it.
>   But when dsync connects over tcps,
> it uses function implementations in lib-ssl-iostream to send and receive data.
> And this simplified call stack shows that some data may be read when 
> calling o_stream_uncork:
> 
> o_stream_uncork => o_stream_flush => o_stream_ssl_flush_buffer => 
> openssl_iostream_bio_sync => openssl_iostream_bio_input
> 
> If some data arrive after ibc->ibc.io_callback(ibc->ibc.io_context) and 
> before o_stream_uncork,
> o_stream_uncork would read the data and then return.
> After o_stream_uncork returns, dsync then waits for new data to be read or 
> written.
> But because the data had already been read in o_stream_uncork, and there may be no 
> new data to read,
> dsync may then wait until the timeout is reached.
>   It may happen, but it is hard to reproduce.
> If you also encounter this situation, you may try to use dsync over a plain tcp 
> connection.
> It does not read data when calling o_stream_uncork.
> As a result, it may not get stuck.
>   Back to the error itself,
> Maybe openssl-stream should not read data when doing uncork(flush)?
>   Song-Lin Yang



Re: maintainer-feedback requested: [Bug 280929] mail/dovecot move bogus warning "Time moved forwards" to debug

2024-08-21 Thread Timo Sirainen via dovecot
The way Dovecot works is:
 - It finds the next timeout, sees that it happens in e.g. 5 milliseconds.
 - Then it calls kqueue() to wait for I/O for max 5 milliseconds
 - Then it notices that it actually returned more than 105 milliseconds later, 
and then logs a warning about it.

So kqueue() apparently isn't very accurate in its timeout handling.

With some googling I found 
https://lists.freebsd.org/pipermail/freebsd-arch/2012-March/012416.html which 
suggests this could happen at least if kern.hz is set to 20 or less. Could that 
be the case?
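
One way to check is a tiny standalone test, compiled with cc on the
affected host (not Dovecot code, just an illustration):

#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>
#include <stdio.h>

int main(void)
{
        /* Ask kevent() for a pure 5 ms timeout (no filters registered)
           and report how long it actually slept. */
        int kq = kqueue();
        struct kevent ev;
        struct timespec ts = { 0, 5 * 1000 * 1000 };
        struct timeval t0, t1;

        gettimeofday(&t0, NULL);
        (void)kevent(kq, NULL, 0, &ev, 1, &ts);
        gettimeofday(&t1, NULL);

        long usec = (t1.tv_sec - t0.tv_sec) * 1000000L +
                (t1.tv_usec - t0.tv_usec);
        printf("asked for 5000 usec, slept %ld usec\n", usec);
        return 0;
}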

I guess we could increase IOLOOP_TIME_MOVED_FORWARDS_MIN_USECS to more than 100 
ms, but that might start hiding problems. Nowadays some people use rather short 
timeouts in e.g. some HTTP requests (auth, push-notifications). It could be 
difficult to understand why a 100ms timeout fires only at 200ms without this 
warning message. Although if it happens only rarely, I guess it's not much of a 
problem.

Anyway, would be good to understand first why this happens in FreeBSD before 
growing the warning time.

Also, this is kind of a problem when it does happen. Since Dovecot thinks the 
time moved e.g. 100ms forward, it adjusts all timeouts to happen 100ms 
backwards. If this wasn't a true time jump, then these timeouts now happen 
100ms earlier. So e.g. a HTTP request with <100ms timeout can actually trigger 
an immediate timeout. Hiding the log message makes debugging this also more 
difficult. So I don't think it's a good solution to simply hide it or change it 
to debug level, as it may mask real problems.

> On 19. Aug 2024, at 19.11, Larry Rosenman via dovecot  
> wrote:
> 
> Comments from the dovecot community?
> 
> Aug 19, 2024 11:07:30 AM bugzilla-nore...@freebsd.org:
> 
> 
> Bugzilla Automation  has asked Larry Rosenman
>  for maintainer-feedback:
> Bug 280929: mail/dovecot move bogus warning "Time moved forwards" to debug
> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=280929
> 
> 
> 
> --- Description ---
> Dovecot complains about time moving forward, which seems to be due to a broken
> mechanism (on FreeBSD) used to measure timeouts. This warning spams the 
> maillog
> up to several hundred times per day.
> 
> There's an ongoing thread about this issue in the freebsd forums:
> https://forums.freebsd.org/threads/dovecot-time-moved-forwards.82886
> 
> In post #33 RypPn points out the offending line in main.c and in post #34
> msplsh suggests instead of completely removing/commenting out the line, it
> would be more sensible to move it from 'warning' to 'debug'.
> This is what this patch does: change the log facility to 'debug' to mute that
> bogus message for standard configurations, but keep it in 'debug' for anyone
> who might want to debug that issue in the future.
> 
> I tested the patch as a local patch in poudriere and it builds fine on
> 13.3-RELEASE with the quarterly and latest branch.


Re: maintainer-feedback requested: [Bug 280929] mail/dovecot move bogus warning "Time moved forwards" to debug

2024-08-21 Thread Timo Sirainen via dovecot
On 21. Aug 2024, at 12.35, Timo Sirainen  wrote:
> 
> The way Dovecot works is:
> - It finds the next timeout, sees that it happens in e.g. 5 milliseconds.
> - Then it calls kqueue() to wait for I/O for max 5 milliseconds
> - Then it notices that it actually returned more than 105 milliseconds later, 
> and then logs a warning about it.
> 
> So kqueue() apparently isn't very accurate in its timeout handling.

Actually another guess: Some people were saying it happens mainly on idle 
hours. Maybe kqueue() is accurate with low timeout values, but not accurate on 
high timeout values? So if Dovecot asked kqueue() to wait for <100ms, it would 
be very accurate. But if it asks to wait for 10000ms, kqueue() would think it's 
okay to return after 10100ms. If that's the case, this check could be changed 
to allow higher time jumps only on higher timeout waits.



Re: maintainer-feedback requested: [Bug 280929] mail/dovecot move bogus warning "Time moved forwards" to debug

2024-08-26 Thread Timo Sirainen via dovecot
On 24. Aug 2024, at 5.06, Jochen Bern via dovecot  wrote:
> 
> On 21.08.24 11:35, Timo Sirainen wrote:
>>> [Lots and lots of "but my NTP sync is much more precise than that" in
>>> the FreeBSD thread]
>> The way Dovecot works is:
>>  - It finds the next timeout, sees that it happens in e.g. 5 milliseconds.
>>  - Then it calls kqueue() to wait for I/O for max 5 milliseconds
>>  - Then it notices that it actually returned more than 105 milliseconds
>>later, and then logs a warning about it.
> 
> I think that more information is needed to pinpoint possible causes, and one 
> of the open questions is: What clock does dovecot look at to determine how 
> long it *actually* stayed dormant? On Linux, software that has need of a 
> monotonously increasing "time" to derive guaranteed unique IDs from often 
> looks at the kernel uptime - which is essentially a count of ticks since 
> bootup, and *not* being corrected by NTP.

Dovecot is doing simply gettimeofday() calls before and after epoll/kqueue/etc. 
It would be possible to use e.g. clock_gettime(CLOCK_MONOTONIC) to see whether 
there really was a time change, but this seems a bit excessive since Dovecot 
needs the real time in any case, so the current checks are "free", while doing 
calls to get monotonic time would only be useful to handle issues with time 
changes.
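
If such a cross-check were added anyway, it could look something like this
(sketch only; as said above, it would mean extra syscalls that are
otherwise not needed):

#include <time.h>
#include <sys/time.h>

/* Compare wall-clock and monotonic elapsed time around the poll. If
   both show the same excess, the kernel really slept too long; if only
   the wall clock jumped, the system time was actually changed. */
static int time_really_jumped(const struct timeval *wall_before,
			      const struct timespec *mono_before)
{
	struct timeval wall_now;
	struct timespec mono_now;
	long long wall_diff, mono_diff;

	gettimeofday(&wall_now, NULL);
	clock_gettime(CLOCK_MONOTONIC, &mono_now);

	wall_diff = (wall_now.tv_sec - wall_before->tv_sec) * 1000000LL +
		(wall_now.tv_usec - wall_before->tv_usec);
	mono_diff = (mono_now.tv_sec - mono_before->tv_sec) * 1000000LL +
		(mono_now.tv_nsec - mono_before->tv_nsec) / 1000;
	return wall_diff - mono_diff > 10000; /* allow scheduling slop */
}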

Another possibility would be to start using timerfd API when it's supported. 
Looks like it exists also in FreeBSD. This might be a good idea, although some 
parts of Dovecot can create/update a lot of timeouts, so I wonder how efficient 
it is to have syscalls updating the timers all the time. But I guess it would 
be fine.

> Similarly, it should be determined whether the timeouts of I/O function 
> called (i.e., kqueue()) are or aren't influenced by NTP's corrections to 
> system time.

I doubt clock changes affect those calls, since they ask to wait for N 
microseconds, not until a specific time.

>> Also, this is kind of a problem when it does happen. Since Dovecot
>> thinks the time moved e.g. 100ms forward, it adjusts all timeouts to
>> happen 100ms backwards. If this wasn't a true time jump, then these
>> timeouts now happen 100ms earlier.
> 
> That is, of course, a dangerous approach if you do *not* have a guarantee 
> that the timeouts of the I/O function called are *otherwise* true to the 
> requested duration. But shouldn't those other concurrently-running timeouts 
> notice an actual discontinuity of the timescale just the same as the first 
> one did? Maybe some sort of "N 'nay's needed for a vote of nonconfidence" 
> mechanism would be safer ...

There's only one timeout concurrently running per process. In theory the 
processes could talk to each other to find out whether there is such a time 
jump in more processes, but that would be very complicated.


Re: maintainer-feedback requested: [Bug 280929] mail/dovecot move bogus warning "Time moved forwards" to debug

2024-08-30 Thread Timo Sirainen via dovecot
On 30. Aug 2024, at 19.00, dco2024--- via dovecot  wrote:
> 
> This is not limited to FreeBSD. I'm seeing it on Gentoo Linux. Kernel is 
> 6.6.47-gentoo-x86_64, dovecot 2.3.21.1 (d492236fa0). The warning is logged 
> once every 12-15 hours. 
> 
> Syslog:
> 2024-08-24 18:03:49 UTC myhost dovecot: master: Warning: Time moved forwards by 0.100068 seconds - adjusting timeouts.
> 2024-08-25 06:18:49 UTC myhost dovecot: master: Warning: Time moved forwards by 0.100063 seconds - adjusting timeouts.
> 2024-08-26 06:52:16 UTC myhost dovecot: master: Warning: Time moved forwards by 0.100062 seconds - adjusting timeouts.
> 2024-08-26 18:57:54 UTC myhost dovecot: master: Warning: Time moved forwards by 0.100068 seconds - adjusting timeouts.
> 2024-08-27 07:24:34 UTC myhost dovecot: master: Warning: Time moved forwards by 0.100061 seconds - adjusting timeouts.
> 2024-08-27 19:38:48 UTC myhost dovecot: master: Warning: Time moved forwards by 0.100060 seconds - adjusting timeouts.
> 2024-08-28 20:21:44 UTC myhost dovecot: master: Warning: Time moved forwards by 0.100071 seconds - adjusting timeouts.
> 2024-08-29 08:41:44 UTC myhost dovecot: master: Warning: Time moved forwards by 0.100070 seconds - adjusting timeouts.
> 2024-08-29 21:04:37 UTC myhost dovecot: master: Warning: Time moved forwards by 0.100071 seconds - adjusting timeouts.
> 2024-08-30 09:30:36 UTC myhost dovecot: master: Warning: Time moved forwards by 0.100066 seconds - adjusting timeouts.
> 
> Chrony ntp keeps the time in sync and the time has been in sync to within 
> 30us of UTC for many days. I noticed that it reports that the unadjusted 
> system clock is about 2.31 ppm fast of UTC. Doing the math for dovecot's 12 
> hour warning interval:
>   12 hours * 3600 secs/hour * 2.31/1000000 = 0.0998 seconds.
> Could it be that dovecot is effectively measuring intervals of the 
> uncorrected system clock time instead of the longer-term adjusted time, and 
> it complains when the accumulated NTP adjustments sum to 0.1 seconds?

I don't see how that would be possible. The check is using only just generated 
timestamps, not anything from a long time ago.

I wonder if this kind of a simple patch would be good enough of a fix:

diff --git a/src/lib/ioloop.c b/src/lib/ioloop.c
index 98c2dc2bf4..a63f861330 100644
--- a/src/lib/ioloop.c
+++ b/src/lib/ioloop.c
@@ -18,6 +18,7 @@
    Dovecot generally doesn't have very important short timeouts, so to avoid
    logging many warnings about this, use a rather high value. */
 #define IOLOOP_TIME_MOVED_FORWARDS_MIN_USECS (100000)
+#define IOLOOP_TIME_MOVED_FORWARDS_MIN_USECS_LARGE (1000000)
 
 time_t ioloop_time = 0;
 struct timeval ioloop_timeval;
@@ -654,9 +655,13 @@ static void io_loop_handle_timeouts_real(struct ioloop *ioloop)
 		/* the callback may have slept, so check the time again. */
 		i_gettimeofday(&ioloop_timeval);
 	} else {
+		int max_diff = diff_usecs < IOLOOP_TIME_MOVED_FORWARDS_MIN_USECS_LARGE ?
+			IOLOOP_TIME_MOVED_FORWARDS_MIN_USECS :
+			IOLOOP_TIME_MOVED_FORWARDS_MIN_USECS_LARGE;
+
 		diff_usecs = timeval_diff_usecs(&ioloop->next_max_time,
 						&ioloop_timeval);
-		if (unlikely(-diff_usecs >= IOLOOP_TIME_MOVED_FORWARDS_MIN_USECS)) {
+		if (unlikely(-diff_usecs >= max_diff)) {
 			io_loops_timeouts_update(-diff_usecs);
 			/* time moved forward */
 			ioloop->time_moved_callback(&ioloop->next_max_time,




Re: Inconsistency in map index with dovecot v2.3.21

2024-08-30 Thread Timo Sirainen via dovecot
On 30. Aug 2024, at 16.39, Nikolaos Pyrgiotis via dovecot  
wrote:
> 
> But is there a possible bug in dovecot 2.3.21 linked with the mdbox format 
> that is causing the `inconsistency in map index` in the first place, or is 
> it just a configuration error? Other users have also reported these error 
> messages in this older thread: 
> https://dovecot.org/mailman3/archives/list/dovecot@dovecot.org/thread/73CEPDRB7TWP6BJABZL6VBZZH66HQ6S6/#73CEPDRB7TWP6BJABZL6VBZZH66HQ6S6

What was the previous version you were running? 2.3.20? I don't think there 
were many changes related to dbox or index file handling between those. We did 
do pretty large fixes to mdbox corruption handling, but it looks like they're 
still waiting for the v2.3.22 release. Maybe those would also help with these 
inconsistency issues, or at least with fixing them.



Re: Dovecot v2.3.21.1 released

2024-09-06 Thread Timo Sirainen via dovecot
On 2. Sep 2024, at 15.44, Guilhem Moulin via dovecot  
wrote:
> 
> Hi Aki,
> 
>> we are releasing a CVE patch release 2.3.21.1.
> 
> Your message to the oss-security list [0] says both 2.2 and 2.3 versions
> are vulnerable to CVE-2024-23184.  Using the following test message as
> reproducer
> 
>From: f...@example.net
>To: b...@example.net
>  , b...@example.net
>  […]
>  , bar$n...@example.net
>Bcc: b...@example.net
>[…]
>Bcc: baz$n...@example.net
>Date: $(LC_TIME=C.UTF-8 date -R)
>Subject: boom
>Message-Id: $(cat /proc/sys/kernel/random/uuid)@example.net
> 
>boom
> 
> I could reproduce the issue back to 2.3.10 but not with earlier
> versions.  I used `doveadm fetch imap.envelope all` to measure the
> (non-cached) IMAP ENVELOPE command.
> 
> For n=100k, it takes ~20s with 2.3.19 vs. ~0.5s with early 2.3.x and
> 2.2.x.  For n=500k, I measured ~2s with early 2.3.x and 2.2.x, so for
> these versions it doesn't look like parsing is O(n²) in the number of
> addresses.
> 
> I didn't try to bisect to pinpoint the exact commit, but AFAICT the main
> problem you described
> 
> | each header line's address is added to the end of a linked list. This
> | is done by walking the whole linked list, which becomes more inefficient
> | the more addresses there are.
> 
> was introduced in 2.3.10 by
> https://github.com/dovecot/core/commit/469fcd3bdd7df40bb8f4d131121f3bfbceade02a
> 
> Is my reproducer/analysis incorrect, or are versions before 2.3.10
> immune to CVE-2024-23184?  (AFAICT they are affected by CVE-2024-23185;
> only talking about -23184 here.)

Yes, looks like this is all correct. I guess we didn't really verify the oldest 
version this affects.



Re: Bug Report: Missing parent folders will cause some clients to not retrieve messages

2024-10-09 Thread Timo Sirainen via dovecot
On 9. Oct 2024, at 2.18, Brendon Meyer via dovecot  wrote:
> 
> 
> When creating subfolders with tools such as imapfilter, if the parent of the 
> subfolder does not exist, Dovecot will allow that folder to be created, and 
> the tool will allow you to populate that folder with messages.

What about trying to reproduce the same with doveadm commands? For example:

doveadm mailbox create -u $user -s testparent4/testchild4/subchild4
echo 'Subject: testmail' | doveadm save -u $user -m testparent4/testchild4/subchild4

> The issue then rears its ugly head when you, say, use Thunderbird and the 
> messages in this subfolder are not visible.  This behaviour is not limited to 
> Thunderbird (e.g. Outlook) but I am using Thunderbird as an example here.  
> Oddly enough, the Apple mail client is *not* impacted in quite the same way 
> (though it is impacted but the behaviour is very subtly different).

I tested with the latest Thunderbird in OSX, and I already have similar folders 
in my Apple Mail. No issues with them other than having to restart Thunderbird 
for it to see the newly created folders.

What's your doveconf -n output?



Re: Missing IMAP folders after upgrading from 2.3.17 to 2.3.21.1

2024-10-22 Thread Timo Sirainen via dovecot
On 21. Oct 2024, at 15.50, Frank Kirschner via dovecot  
wrote:
> 
> Hi,
> 
> I have upgraded from 2.3.17 to 2.3.21.1.
> mail_location = maildir
> 
> Now in some mailboxes, IMAP folders in subdirs of INBOX are missing.
> When I do an 'ls -la' in the filesystem, I see the subfolders like the other 
> folders, with identical owner and permissions.
> But when I do doveadm mailbox list -u [user], the IMAP folder is not 
> displayed.
> 
> So I have deleted all files in /opt/dovecot/[hash]/[user], because I use 
> INDEX=/opt/dovecot/%2.256Nu/%u:ITERINDEX
> Then I restarted dovecot and re-logged-in with the IMAP client. All IMAP 
> folders were rebuilt, but the same folders as before are still missing.

ITERINDEX lists the IMAP folders using the directories that exist in INDEX 
path. If you delete the index directories, it shouldn't be showing you any IMAP 
folders. Does the problem go away simply removing the ITERINDEX, or why are you 
using it?



Re: bug? dsync extremely much slower on forward than reverse direction

2024-11-27 Thread Timo Sirainen via dovecot
It's probably fixed by 
https://github.com/dovecot/core/commit/3001c1410e72440e48c285890dd31625b7e12555

> On 25. Nov 2024, at 14.51, Bjørnar Ness via dovecot  
> wrote:
> 
> Here are some numbers from a test run:
> 
> dst: full pull sync from src
> 6m19s
> dst: delta pull sync from src
> 2s
> 
> 
> src: full push sync to destination
> 4h14m34s
> src: delta push sync to destination
> 6s
> 
> 
> So the initial sync is 4h14m (for push) vs 6m20s (for pull)
> 
> Can someone answer what is causing this very odd behavior?
> 
> 
> On Mon, 18 Nov 2024 at 13:18, Bjørnar Ness  wrote:
>> 
>> I have been experimenting with dsync lately, and to my surprise I found that 
>> dsync in forward direction is extremely slow compared to reverse (-R) 
>> direction for the exact same data, disk caches flushed both times. I have 
>> created a docker-compose setup to prove my findings. Am I missing something 
>> here, or is this a bug?
>> 
>> link to tests: https://github.com/bjne/dovecot-dsync-benchmark
>> 
>> --
>> Bj(/)rnar
> 
> 
> 
> -- 
> Bj(/)rnar


Re: dovecot, virtiofs, fchown, invalid argument

2024-11-26 Thread Timo Sirainen via dovecot
On 26. Nov 2024, at 15.34, Кирилл Шигапов via dovecot  
wrote:
> 
> We are using a virtiofs share mounted to /var/vmail.
> When dovecot tries to make a directory structure, it fails with the error
> fchown: Invalid argument
> 
> strace doveadm mailbox create -u testu...@testdomain.com Trash
> ...
> mkdir("/var/vmail/vmail1/testdomain.com/t/e/s/testuser-2024.11.25.12.58.50//Maildir/.Trash", 0700) = 0
> umask(077)  = 000
> openat(AT_FDCWD, "/var/vmail/vmail1/testdomain.com/t/e/s/testuser-2024.11.25.12.58.50//Maildir/.Trash", O_RDONLY) = 11
> fchown(11, -1, -1)  = -1 EINVAL (Invalid argument)
> close(11)   = 0
> rmdir("/var/vmail/vmail1/testdomain.com/t/e/s/testuser-2024.11.25.12.58.50//Maildir/.Trash") = 0
> ...
> So there is no permission issue; the directory is successfully created. It 
> seems that virtiofs is not allowing uid=-1 and gid=-1.
> 
> I wrote a small C program for testing, and when
> using fchown($FileDescriptor, 2000, 2000) there is no error.

Did you try to fchown() a directory file descriptor or a regular file? I've a 
feeling it doesn't work for directory fds.
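
A quick standalone test for that (hypothetical path, not Dovecot code)
could be:

#include <stdio.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

/* Does fchown(fd, -1, -1) fail on a directory fd on the virtiofs
   mount? Run once against a directory and once against a regular
   file to compare. */
int main(void)
{
	int fd = open("/var/vmail/testdir", O_RDONLY);

	if (fd == -1) {
		perror("open");
		return 1;
	}
	if (fchown(fd, (uid_t)-1, (gid_t)-1) == -1)
		perror("fchown on directory fd");
	close(fd);
	return 0;
}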

> I can't find a way to tell dovecot to use uid & gid 2000 when it tries to
> do fchown.
> 
> If I create the directory structure manually, everything works well.
> 
> Maybe there could be some config parameter telling dovecot that we are using
> virtiofs, so it skips EINVAL when doing fchown...

It's completely unnecessary to do fchown(fd, -1, -1); it doesn't do anything. 
Perhaps this patch helps:

diff --git a/src/lib/mkdir-parents.c b/src/lib/mkdir-parents.c
index 64f660df3e..f2de0ccd09 100644
--- a/src/lib/mkdir-parents.c
+++ b/src/lib/mkdir-parents.c
@@ -34,6 +34,11 @@ mkdir_chown_full(const char *path, mode_t mode, uid_t uid,
 	umask(old_mask);
 	if (ret < 0)
 		break;
+	if (uid == (uid_t)-1 && gid == (gid_t)-1) {
+		/* no changes to owner/group */
+		return 0;
+	}
+
 	fd = open(path, O_RDONLY);
 	if (fd != -1)
 		break;



Re: Problem email client iPhone ios18.2

2025-02-03 Thread Timo Sirainen via dovecot
On 3. Feb 2025, at 16.17, stephen--- via dovecot  wrote:
> 
> funny - i also talked to apple business support and have a case since my 
> FB15701211 never went anywhere. they took a day or so and talked to engineers 
> and stated that there were no other official apple support requests for this 
> issue - which i have to believe means no large org they care about has this 
> issue - or uses dovecot. My Apple Support Case was/is: 102513275820 which can 
> be used as reference if you want to try - 
> 
> otherwise i am exploring alternatives to dovecot now.
> 
> removing IDLE capability is getting us by for now.

I haven't noticed any issues on my iPhone so far, and I haven't heard of our 
customers complaining about this to us either. So it's difficult to say 
when/why it happens.

I think to get anywhere forward with this, we'd need to see IMAP rawlogs from a 
session (or perhaps multiple sessions for the whole user) where this problem 
happens. Especially if the problem is IDLE-related, I'd want to see things 
going wrong after IDLE. Of course, this likely means seeing some email contents 
in there, but those could be replaced with some placeholder text.

As for how to do rawlogs: 
https://doc.dovecot.org/2.3/settings/core/#core_setting-rawlog_dir

And better not to have imap-hibernation enabled, since that could confuse 
debugging.
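
For reference, enabling the rawlog would look something like this (the
path is only an example; the directory must exist and be writable by the
mail user):

protocol imap {
  rawlog_dir = /tmp/rawlog/%u
}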


Re: IMAP dovecot\postgres low authentication performance

2025-02-03 Thread Timo Sirainen via dovecot
On 3. Feb 2025, at 7.05, Anatoliy Zhestov via dovecot  
wrote:
> 
> Hi. We have a performance problem with imap authentication through
> postgresql.
> Our servers (modoboa-based) have a large number of permanent imap
> connections (5000-50000).
> Current performance is about 3000 successful authentications per hour. No
> visible reasons for such low speed. Accordingly, after a network failure or
> server restart, all clients try to reconnect, but restoring the connection
> pool takes hours and even tens of hours. Judging by the logs after the
> restart, a huge number of auth requests are closed by timeout after 70-90
> seconds. The postgresql database is not overloaded during the restore
> connections process, and the postgresql connection pool (100) does not
> overflow. Manually started sql auth queries work fast, and the tables have
> indexes. So I guess there is a bottleneck somewhere in the dovecot auth
> service or the postgresql driver.

Are you sure the problem is authentication / pgsql? You could test with looping 
"doveadm auth lookup $user" rapidly. Of course for different users to avoid 
them coming from cache. Or if you can reproduce it that way, try if the same 
happens for repeating the same user so it does come from cache.
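
A quick way to hammer that in a loop (the user naming scheme is just an
example):

time for i in $(seq 1 1000); do
  doveadm auth lookup "user$i@example.com" >/dev/null
done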



Re: Problem email client iPhone ios18.2

2025-02-04 Thread Timo Sirainen via dovecot
On 4. Feb 2025, at 3.53, stephen--- via dovecot  wrote:
> 
> in my experience the only users who complained were mail abusers - with tens 
> of thousands of messages in each folder - tons of folders etc etc. smaller 
> mailboxes seemed fine. i think apple uses some threshold where this new 
> pipelining takes place on larger mailboxes maybe.
> 
> there is a lot of info including raw logs here where they ended up patching 
> it in another mail server:
> https://github.com/stalwartlabs/mail-server/issues/765

With Stalwart apparently they had a clear issue with IDLE pipelining. Dovecot 
is able to handle DONE + commands pipelined just fine, at least in all my 
tests. So I don't think Stalwart rawlogs are helpful for Dovecot.



Re: IMAP dovecot\postgres low authentication performance

2025-02-04 Thread Timo Sirainen via dovecot
On 3. Feb 2025, at 20.18, Anatoliy Zhestov via dovecot  
wrote:
> 
>> 
>> Are you sure the problem is authentication / pgsql? You could test with
>> looping "doveadm auth lookup $user" rapidly. Of course for different users
>> to avoid them coming from cache. Or if you can reproduce it that way, try
>> if the same happens for repeating the same user so it does come from cache.
> 
> 
> I tested in a condition where 90% of the imap connections were already
> established. The auth cache is enabled, so I guess tests with the same user
> are not relevant.
> 
>  less loaded server
> ps waux|grep imap-login|wc -l
> 24977

Oh, somehow I missed that you have this many concurrent connections.

> echo "13285 / 343" |bc
> 38 (per second)

So with this speed 24977 users would take 11 minutes to login back, which is a 
bit slow.

Some ideas:

1) If pgsql is the bottleneck, try multiple pgsql connections: Add maxconns=4 
(or whatever) to the dovecot-sql.conf.ext's connect line (see the sketch after 
this list).

2) In your Dovecot proxy (assuming you have one?) you can configure it to 
spread out the disconnections over a long time, if the issue is that the 
backend disconnects everyone at once. The login_proxy_max_disconnect_delay 
setting does this.

3) With that many connections and to make logins faster, you'd be better off 
using 
https://doc.dovecot.org/2.3/admin_manual/login_processes/#high-performance-mode

4) To optimize login performance, it would be best to get rid of the post-login 
script. Also:

service imap {
  service_count = 1000
  process_min_avail = 10
}

5) auth_cache_size is rather small. Likely could be increased much larger.
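
For ideas 1 and 5, the relevant lines would look roughly like this (all
values are only examples to tune from, not recommendations):

# dovecot-sql.conf.ext (idea 1: multiple pgsql connections)
connect = host=localhost dbname=mails user=dovecot maxconns=4

# dovecot.conf (idea 5: larger auth cache)
auth_cache_size = 100M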


Re: Problem email client iPhone ios18.2

2025-02-10 Thread Timo Sirainen via dovecot
On 10. Feb 2025, at 15.24, Florian Effenberger via dovecot 
 wrote:
> 
> Hello,
> 
> frido--- via dovecot wrote on 10.02.25 at 14:07:
>> The problem as described at the start of this thread is not solved in iOS 
>> 18.3. A bit more testing shows that it only appears to happen with 
>> attachments around 1MB and bigger.
> 
> there seem to be two kind of problems, at least on my devices.

I'm also wondering if there are multiple issues. I have one way of easily 
reproducing, but it has nothing to do with IMAP IDLE. Here's a test email which 
causes the hang - snipped it a bit in the middle which you can fill out 
yourself with:

dd if=/dev/urandom bs=1024 count=3432 | base64 -w76

If you want to test multiple times, change Message-Id for every delivery. This 
mail can be simply saved to INBOX, e.g. with doveadm save:

To: 
Subject: testing
Content-Type: multipart/mixed; boundary="foo"
Date: Tue, 17 Dec 2024 06:46:17 +
From: 
Message-Id: 



--foo
Content-Type: multipart/alternative; boundary="bar"

--bar
Content-Transfer-Encoding: quoted-printable
Content-Type: text/html; charset=UTF-8

asdf
--bar--

--foo
Content-Description: test.pdf
Content-Disposition: attachment; filename="test.pdf"
Content-Transfer-Encoding: base64
Content-Type: application/octet-stream; name="test.pdf"


lBjZO0luFuyUiOqQ8KPTTpMyPoNhlSTPPJauDuCVSwbZgaNyhnObul5qrdyPKo0NGiQKA3Kha34A
<61642 lines of random>
4cNyk39T2wMsnQavfl8LXwG1vNQylpgmSN4p6fOpmGqQ

--foo--
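
For example, assuming the message above is saved to a file (hypothetical
user name):

doveadm save -u testuser@example.com < testmail.eml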



Re: Bad Signature - Mustiple domains

2025-02-11 Thread Timo Sirainen via dovecot
On 11. Feb 2025, at 2.09, Ken Wright via dovecot  wrote:
> 
> Sorry, I sent that this morning just before running off to my job. 
> Here's the section of dovecot.conf that I'm working on:

Nothing wrong with that part.

> When I was running Dovecot 2.3, I was able to send and receive email to
> and from example1.com and example2.net.  I want to be able to do the
> same with 2.4, but right now when I try to start Dovecot I get an error
> message like this:
> 
> doveconf: Fatal: Error in configuration file /etc/dovecot/dovecot.conf
> line 42: Unknown section name: local

This most likely means you forgot to close with } an earlier section. v2.4.0 
unfortunately gives a rather bad error message in that case. It'll be better in 
v2.4.1.
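
For example, something like this (purely illustrative, not the poster's
actual config) triggers exactly that kind of message:

namespace inbox {
  separator = /
# the } closing "namespace inbox" was forgotten here

local example1.com {
  ...
}

doveconf then blames the later section instead of the missing brace:
"line 42: Unknown section name: local".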



Re: Error in tests for Dovecot 2.4.0 aarch64

2025-02-11 Thread Timo Sirainen via dovecot


> On 7. Feb 2025, at 9.48, Peter via dovecot  wrote:
> 
> When attempting to build Dovecot 2.4.0 on the aarch64 platform, when running 
> tests...
> 
> test-file-cache.c:268: Assert failed: file_cache_set_size(cache, 1024) == -1

This test attempts to shrink allowed address space with setrlimit(RLIMIT_AS) 
and then use mmap() to allocate memory. For some reason that still succeeds. 
The test works in my Ubuntu 24.04 / aarch64 though. So not really sure how to 
debug it further myself. In any case it's most likely not a real problem.
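
The failing assumption can be checked outside the test suite with
something like this (standalone sketch, not the actual test code):

#include <stdio.h>
#include <sys/resource.h>
#include <sys/mman.h>

/* Shrink the address-space limit, then check whether a larger mmap()
   really fails, which is what test-file-cache expects. */
int main(void)
{
	struct rlimit rl = { .rlim_cur = 1024 * 1024, .rlim_max = 1024 * 1024 };
	void *p;

	if (setrlimit(RLIMIT_AS, &rl) != 0) {
		perror("setrlimit");
		return 1;
	}
	p = mmap(NULL, 16 * 1024 * 1024, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		printf("mmap failed as expected\n");
	else
		printf("mmap unexpectedly succeeded\n");
	return 0;
}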



Exim / Dovecot v2.4.0 authentication patch

2025-02-02 Thread Timo Sirainen via dovecot
Hi,

Dovecot v2.4.0 changed the authentication protocol slightly to allow new 
functionality (SCRAM TLS channel binding). It attempted to preserve backwards 
compatibility by checking the client-provided VERSION first, before sending 
data that the client wouldn't handle correctly. However, Exim's Dovecot 
authenticator doesn't send VERSION until Dovecot has sent the whole 
authentication handshake. This causes Exim to get stuck when trying to 
authenticate.

I guess we'll provide some kind of a workaround for v2.4.1, but this should get 
fixed on Exim side as well. Attached a patch that I tested works (against 
4.97-4ubuntu4).
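
Conceptually the fix is only about ordering (pseudocode with a made-up
pid and helper names; the attached patch has the real Exim change):

/* Dovecot auth client handshake order that works with v2.4.0: */
fd = connect_to_auth_socket();
/* send our protocol version and client pid first ... */
write(fd, "VERSION\t1\t1\n", 12);
write(fd, "CPID\t12345\n", 11);
/* ... and only then read Dovecot's handshake (VERSION, MECH, ...,
   DONE), so the server knows what the client can handle before it
   sends anything version-dependent. */
read_handshake(fd);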



exim4-dovecot24-auth.patch
Description: Binary data




Re: 2.4.0 on MacOS

2025-02-01 Thread Timo Sirainen via dovecot
On 1. Feb 2025, at 10.27, John Muccigrosso via dovecot  
wrote:
> 
> Fatal: setrlimit(RLIMIT_DATA, 2147483648): Invalid argument
> 
> 
> The old(?) solution doesn't work for me:
> 
> default_vsz_limit = 0
> 
> It reports: doveconf: Fatal: Error in configuration file 
> /opt/homebrew/etc/dovecot/dovecot.conf: service(log): vsz_limit is too low

Use unlimited, not 0. I guess the error message should also mention that.
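
i.e.:

default_vsz_limit = unlimited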

