Re: Second rule isn't applied when first rule matches

2018-05-30 Thread Stephan Bosch




On 28-5-2018 at 14:18, daniel_1...@protonmail.com wrote:

Dear list,
I want to define two concurrent rules:
1. Flag any e-mail containing the word "cloud" in the body.
2. Move mail sent explicitly to me (as opposed to mail sent to an alias I am
part of) into "INBOX.I must answere this".

If I put rule (1) first, everything works as expected. If I put rule (2) first,
only that rule applies.

Here's a small test case you should be able to reproduce on your setup.

Let's work on this fake test e-mail:

$ cat testmail.eml

Date: Mon, 28 May 2018 12:41:53 +0100
From: fri...@otherdomain.tld
To: m...@mydomain.tld
Subject: dur dur la vie

mon frère, le cloud c'est le top du top


Here's the first, broken version of the script:

$ cat test.sieve

require ["body","fileinto","imap4flags","vacation"];

# rule:[Mail about me]
if anyof (header :contains "to" "m...@mydomain.tld", body :text :contains "ahmed")
{
   fileinto "INBOX.I must answere this";
}

# rule:[Mail about the Cloud]
if body :text :contains "cloud"
{
 addflag "\\Flagged";
}


Let's test it out; both rules should be applied:


$ sieve-test test.sieve testmail.eml

Performed actions:

  * store message in folder: INBOX.I must answere

Implicit keep:

   (none)

sieve-test(root): Info: final result: success
$



Notice that the last rule isn't applied although it matches. Now we invert the 
order of the rules and apply the script on the same test e-mail :


$ cat test.sieve

require ["body","fileinto","imap4flags","vacation"];
# rule:[Mail about the Cloud]
if body :text :contains "cloud"
{
 addflag "\\Flagged";
}

# rule:[Mail about me]
if anyof (header :contains "to" "m...@mydomain.tld", body :text :contains "ahmed")
{
   fileinto "INBOX.I must answere this";
}



Running the test again:

$ sieve-test test.sieve testmail.eml

Performed actions:

  * store message in folder: INBOX.I must answere
 + add IMAP flags: \flagged

Implicit keep:

   (none)

sieve-test(root): Info: final result: success
$

The IMAP flag was set!


Any ideas?


You should read the specification: https://tools.ietf.org/html/rfc5232

Summary: the addflag command doesn't actually add the flag to the 
message immediately; instead, it adds it to an internal list of assigned 
flags. Only when a fileinto is executed is the current list of flags 
applied to the message stored by that particular fileinto action. 
Performing an addflag will not affect fileinto actions that were 
executed before; it only affects fileinto (and keep) actions executed 
thereafter. That is why the order is so important in your script.
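
In other words, any version of your script in which the addflag is evaluated 
before the fileinto behaves as you expected; a sketch (essentially your second 
version, untested here):

require ["body","fileinto","imap4flags"];

# assign the flag before any fileinto is executed, so every
# fileinto/keep that runs afterwards stores the message with it
if body :text :contains "cloud"
{
  addflag "\\Flagged";
}

# rule:[Mail about me]
if anyof (header :contains "to" "m...@mydomain.tld", body :text :contains "ahmed")
{
  fileinto "INBOX.I must answere this";
}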


Regards,

Stephan.


Re: Problem in Pigeonhole sievec

2018-05-30 Thread Stephan Bosch

Hi,

Can you send me that script?

Regards,

Stephan.
On 28-5-2018 at 13:07, Thorsten Hater wrote:

Dear all,

I stumbled upon the following behaviour of Pigeonhole, which I consider
to be problematic. A user deployed a Sieve script similar to the 
following snippet


if not anyof (address :is ["from","cc"] ["...", ..., "...@...
GARBAGE", ...] {
  fileinto "inbox.Trash";
  stop;
}

Note the extra line break before GARBAGE. This script is obviously broken,
but it is accepted by sievec and only fails later, at runtime, with
line X: error: found stray carriage-return (CR) character in quoted
 string started at line X.
So the question is whether line breaks in strings are allowed in general
and the runtime error is unavoidable, or whether sievec should return an error.

Best regards,
 Thorsten




Re: SSL error after upgrading to 2.31

2018-05-30 Thread A. Schulze



Aki Tuomi:


There is already ssl_client_ca, for verifying clients. ssl_ca verifies
certs when dovecot is connecting somewhere.



For clarification:

there is a third use case in which an admin needs intermediate certificates:
when dovecot acts as a server providing imap/pop3/lmtp/sieve
via TLS or STARTTLS.


That has different semantics:
ssl_client_ca and ssl_ca provide lists of CAs that dovecot should trust,
while in the third case an administrator has to define exactly one list
of intermediate CAs used as the chain to a root. Mixing them is wrong.

In the third case an administrator has to provide files with
certificates, and best practice requires these files
to include all chain certificates, excluding the self-signed root.


It is enough to provide the certificate (with its chain) via ssl_cert = </path/to/file;
the chain belongs in that same file. No other options are needed.
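
For example (a sketch - the file names are placeholders):

ssl_cert = </etc/dovecot/certs/mail.example.com.chain.pem
ssl_key = </etc/dovecot/certs/mail.example.com.key

# mail.example.com.chain.pem contains, in this order:
#   1. the server (leaf) certificate
#   2. any intermediate CA certificates
# and not the self-signed root.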

Andreas




Re: Second rule isn't applied when first rule matches

2018-05-30 Thread daniel_1983
‐‐‐ Original Message ‐‐‐

On May 30, 2018 9:08 AM, Stephan Bosch  wrote:
> Performing an addflag will not affect fileinto actions that were
> 
> executed before; it will only affect fileinto (and keep) executed
> 
> thereafter. That is why the order is so important in your script.
> 
> Regards,
> 
> Stephan.

Thank you Stephan for the detailed answer.

Cheers!



Struggling with sieve, fileinto and variables.

2018-05-30 Thread Barry Pearce
Hi,

I'm on Manjaro Linux, fully up to date, running

dovecot 2.3.1-2
pigeonhole 0.5.1-2

All is running well except that I am having problems with fileinto - it refuses 
to use variables, and mailboxexists is also failing for me.

I'm just dealing with plus addressing - it should be simple, but the behaviour 
I'm experiencing isn't right.

require ["variables", "envelope", "fileinto", "subaddress", "mailbox"];
if envelope :matches :detail "to" "*" {
set :lower "name" "${1}";
}
fileinto :create "folder3";

This works, but changing the last line to:

fileinto :create "${name}";

fails and files into inbox. As does:

fileinto :create "${1}";

It makes no difference whether the fileinto is placed inside the if statement or 
not. Using sieve-test suggests the basic script should work, and traces 
show the right action.

On a related matter, the script I'm actually trying to get to work is:

require ["variables", "envelope", "fileinto", "subaddress", "mailbox"];

if envelope :matches :detail "to" "*" {
set :lower "name" "${1}";
}
if mailboxexists "${name}" {
fileinto :create "${name}";
}

But it fails and files into the inbox.

Modifying the script to:

require ["variables", "envelope", "fileinto", "subaddress", "mailbox"];

if envelope :matches :detail "to" "*" {
set :lower "name" "${1}";
}

if mailboxexists "${name}" {
fileinto :create "folder7";
} else {
  fileinto :create "folder8";
}

It files into folder8, so the mailboxexists test is failing.

 ## Started executing script 'plus-addressing'
  3: envelope test
  3:   starting `:matches' match with `i;ascii-casemap' comparator:
  3:   getting `to' part from message envelope
  3:   extracting `detail' part from address 
  3:   matching value `folder4'
  3: with key `*' => 1
  3:   finishing match with result: matched
  3: jump if result is false
  3:   not jumping
  4: set command
  7:   modify :lower "folder4" => "folder4"
  7:   assign `name' [0] = "folder4"
  7: mailboxexists test
  7:   mailbox `folder4' cannot be opened
  7:   some mailboxes are unavailable
  7: jump if result is false
  7:   jumping to line 9
 10: fileinto action
 10:   store message in mailbox `folder8'
 ## Finished executing script 'plus-addressing'

Here folder4 actually does exist - so sieve-test confirms that issue. I'm 
at a loss as to what's going on. I'm particularly perplexed by the difference in 
behaviour between sieve-test and the result under the live server.


Any ideas?



Re: 2.3.1 Replication is throwing scary errors

2018-05-30 Thread Reuben Farrelly

Hi,

Checking in - this is still an issue with 2.3-master as of today 
(2.3.devel (3a6537d59)).


I haven't been able to narrow the problem down to a specific commit. 
The closest I have been able to get is that this commit is relatively 
good (not perfect, but good enough):


d9a1a7cbec19f4c6a47add47688351f8c3a0e372 (from Feb 19, 2018)

whereas this commit:

6418419ec282c887b67469dbe3f541fc4873f7f0 (From Mar 12, 2018)

is pretty bad. Somewhere in between, some commit caused the problem 
(which may have been introduced earlier) to get much worse.


There seem to be a handful of us with broken systems who are prepared to 
assist in debugging this and put in our own time to patch, test and get 
to the bottom of it, but it is starting to look like we're basically on 
our own.


What sort of debugging, short of bisecting 100+ patches between the 
commits above, can we do to progress this?


Reuben



On 7/05/2018 5:54 am, Thore Bödecker wrote:

Hey all,

I've been affected by these replication issues too and finally downgraded
back to 2.2.35 since some newly created virtual domains/mailboxes
weren't replicated *at all* due to the bug(s).

My setup is more like a master-slave, where I only have a rather small
virtual machine as the slave host, which is also only MX 20.
The idea was to replicate all mails through dovecot and perform
individual (independent) backups on each host.

The clients use a CNAME with a low TTL of 60s so in case my "master"
(physical dedicated machine) goes down for a longer period I can simply
switch to the slave.

In order for this concept to work, the replication has to work without
any issue. Otherwise clients might notice missing mails or it might
even result in conflicts when the master comes back online if the
slave was out of sync beforehand.


On 06.05.18 - 21:34, Michael Grimm wrote:

And please have a look for processes like:
doveadm-server: [IP4  INBOX import:1/3] (doveadm-server)

These processes will "survive" a dovecot reboot ...


This is indeed the case. Once the replication processes
(doveadm-server) get stuck, I have to resort to `kill -9` to get rid of
them. Something is really wrong there.

As stated multiple times in the #dovecot irc channel I'm happy to test
any patches for the 2.3 series in my setup and provide further details
if required.

Thanks to all who are participating in this thread - finally these
issues are getting some attention :)


Cheers,
Thore





Re: Struggling with sieve, fileinto and variables.

2018-05-30 Thread Stephan Bosch




On 30-5-2018 at 14:01, Barry Pearce wrote:

Hi,

Im on Manjaro linux fully up to date running

dovecot 2.3.1-2
pigeonhole 0.5.1-2


All is running well except I am having problems with fileinto - it 
refuses to use variables, and mailboxexists is also failing for me.


Im just dealing with plus addressing - should be simple but the 
behaviour Im experiencing isnt right.


require ["variables", "envelope", "fileinto", "subaddress",
"mailbox"];
if envelope :matches :detail "to" "*" {
    set :lower "name" "${1}";
}
fileinto :create "folder3";


This works, but changing the last line to:

fileinto :create "${name}";


fails and files into inbox. As does:

fileinto :create "${1}";


It makes no difference if the fileinto is placed inside the if 
statement or otherwise. Using sieve-test suggests the basic script 
should work and traces show the right action.


On a related matter the script Im actually trying to get to work is:

require ["variables", "envelope", "fileinto", "subaddress",
"mailbox"];

if envelope :matches :detail "to" "*" {
    set :lower "name" "${1}";
}
if mailboxexists "${name}" {
        fileinto :create "${name}";
}


But fails and files in the inbox.


This happens for a reason, which you can either find in your syslog or 
in the user log file (e.g. ~/.dovecot.sieve -> ~/.dovecot.sieve.log).
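
For instance, if the delivering MTA strips the "+detail" part before handing 
the message to Dovecot, the envelope test never matches, ${name} stays empty, 
and fileinto "" fails back to the implicit keep in INBOX. A defensive sketch 
(untested - the string test comes with the variables extension you already 
require):

require ["variables", "envelope", "fileinto", "subaddress", "mailbox"];

if envelope :matches :detail "to" "*" {
    set :lower "name" "${1}";
}

# only file the message away when the detail part actually produced a value
if not string :is "${name}" "" {
    fileinto :create "${name}";
}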




Modifying the script to:

require ["variables", "envelope", "fileinto", "subaddress",
"mailbox"];

if envelope :matches :detail "to" "*" {
    set :lower "name" "${1}";
}

if mailboxexists "${name}" {
        fileinto :create "folder7";
} else {
  fileinto :create "folder8";
}


Files into folder8. So the mailboxexists is failing.

     ## Started executing script 'plus-addressing'
  3: envelope test
  3:   starting `:matches' match with `i;ascii-casemap' comparator:
  3:   getting `to' part from message envelope
  3:   extracting `detail' part from address 
  3:   matching value `folder4'
  3: with key `*' => 1
  3:   finishing match with result: matched
  3: jump if result is false
  3:   not jumping
  4: set command
  7:   modify :lower "folder4" => "folder4"
  7:   assign `name' [0] = "folder4"
  7: mailboxexists test
  7:   mailbox `folder4' cannot be opened
  7:   some mailboxes are unavailable
  7: jump if result is false
  7:   jumping to line 9
 10: fileinto action
 10:   store message in mailbox `folder8'
 ## Finished executing script 'plus-addressing'

Here folder4 actually does exist - so the sieve-test confirms that 
issue.  Im at a loss as to whats going on. Im particularly perplexed 
by the difference in behaviour between sieve-test and the result under 
the live server.


You can also get a trace for the live server: 
https://wiki2.dovecot.org/Pigeonhole/Sieve/Configuration#Trace_Debugging
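
For example (a sketch using the sieve_trace_* settings documented there - the 
directory must exist and be writable for the mail user):

plugin {
  sieve_trace_dir = ~/sieve-trace
  sieve_trace_level = matching
  sieve_trace_debug = yes
}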


Regards,

Stephan.



Re: auth: Error - Request timed out

2018-05-30 Thread Hajo Locke

Hello,


On 29.05.2018 at 11:30, Gerald Galster wrote:



On 29.05.2018 at 11:00, Aki Tuomi wrote:



On 29.05.2018 11:35, Hajo Locke wrote:

Hello,


On 29.05.2018 at 09:22, Aki Tuomi wrote:

On 29.05.2018 09:54, Hajo Locke wrote:

Hello List,

I use dovecot 2.2.22 and have the same problem described here:
https://dovecot.org/pipermail/dovecot/2017-November/110020.html

I can confirm that sometimes there is a problem with the connection to
the mysql db, but sometimes not.
My colleagues are still investigating the reasons for the failures.

My main problem is that this failure seems to be a one-way
ticket for dovecot. Even when mysql is verifiably working again and
waiting for connections, dovecot stays stuck with errors like
this:

May 29 07:00:49 hostname dovecot: auth: Error:
plain(m...@example.com,xxx.xxx.xx.xxx,): Request
999.7 timed out after 150 secs, state=1

When restarting dovecot, everything works again immediately.
Is there a way to tell dovecot to restart the auth services or to
reinitialize the mysql connection after these hard failures? I could insert
"idle_kill = 1 mins" into service auth and service auth-worker, but I
don't know whether that would work. Unfortunately I am not able to reproduce
this error, and there are always a couple of days between occurrences.
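
Something like this is what I have in mind (a sketch - I don't know whether
idle_kill actually makes a stuck auth worker reconnect):

service auth {
  idle_kill = 1 min
}
service auth-worker {
  idle_kill = 1 min
}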

Thanks,
Hajo



Hi!

I was not able to repeat this problem using 2.2.36. Can you provide
steps to reproduce?

May 29 10:20:24 auth: Debug: client in: AUTH 1 PLAIN service=imap secured
session=XtpgEVNtQeUB lip=::1 rip=::1 lport=143 rport=58689 resp=
May 29 10:20:24 auth-worker(31098): Debug:
sql(t...@domain.org,::1,): query:
SELECT userid AS username, domain, password FROM users WHERE userid =
'test' AND domain = 'domain.org'
May 29 10:20:54 auth-worker(31098): Warning: mysql: Query failed,
retrying: Lost connection to MySQL server during query (idled for 28
secs)
May 29 10:20:59 auth-worker(31098): Error: mysql(127.0.0.1): Connect
failed to database (dovecot): Can't connect to MySQL server on
'127.0.0.1' (4) - waiting for 5 seconds before retry
May 29 10:21:04 auth-worker(31098): Error: mysql(127.0.0.1): Connect
failed to database (dovecot): Can't connect to MySQL server on
'127.0.0.1' (4) - waiting for 5 seconds before retry
May 29 10:21:14 auth: Debug: auth client connected (pid=31134)
May 29 10:21:14 imap-login: Warning: Growing pool 'imap login commands'
with: 1024
May 29 10:21:14 auth-worker(31098): Error: mysql(127.0.0.1): Connect
failed to database (dovecot): Can't connect to MySQL server on
'127.0.0.1' (4) - waiting for 25 seconds before retry

This is what it looks like for me and after restoring connectivity, it
started working normally.

Unfortunately I cannot reproduce it. The servers run well for days or
sometimes weeks, and then it happens once. I can provide some more
logs.

This is an error with mysql involvement:

May 29 06:56:59 hostname dovecot: auth-worker(1099): Error:
mysql(xx.xx.xx.xx): Connect failed to database (mysql): Can't connect
to MySQL server on 'xx.xx.xx.xx' (111) - waiting for 1 seconds before
retry
.
. some more of above line
.
May 29 06:56:59 hostname dovecot: auth-worker(1110): Error:
sql(m...@example.com,xx.xx.xx.xx): Password query failed: Not
connected to database
May 29 06:56:59 hostname dovecot: auth: Error: auth worker: Aborted
PASSV request for m...@example.com: Internal auth worker failure
May 29 06:57:59 hostname dovecot: auth-worker(1099): Error: mysql:
Query timed out (no free connections for 60 secs): SELECT `inbox` as
`user`, `password` FROM `mail_users` WHERE `login` = 'username' AND
`active`='Y'
May 29 06:59:30 hostname dovecot: auth: Error:
plain(username,xx.xx.xx.xx,): Request 999.2 timed
out after 151 secs, state=1
.
. much more of these lines with Request timed out
.

At this point my colleagues restarted dovecot and everything worked
immediately. Mysql performed a short restart at 6:56 and dovecot was
not able to reconnect for about 10 minutes until my colleagues did the
restart. I could not reproduce the problem by manually restarting
mysql; that worked fine.

This is an error without visible mysql involvement:
.
. lines of normal imap/pop activity
.
May 29 05:43:03 hostname dovecot: imap-login: Error: master(imap):
Auth request timed out (received 0/12 bytes)
May 29 05:43:03 hostname dovecot: imap-login: Internal login failure
(pid=1014 id=16814) (internal failure, 1 successful auths):
user=, method=PLAIN, rip=xx.xx.xx.xx, lip=xx.xx.xx.xx, TLS
May 29 05:43:03 hostname dovecot: imap: Error: Login client
disconnected too early
May 29 05:44:03 hostname dovecot: auth: Error:
plain(username,xx.xx.xx.xx,): Request 1014.16815
timed out after 150 secs, state=1
May 29 05:44:08 hostname dovecot: imap: Error: Auth server request
timed out after 155 secs (client-pid=1014 client-id=16814)
May 29 05:44:33 hostname dovecot: imap-login: Disconnected: Inactivity
during authentication (disconnected while authenticating, waited 180
secs): user=<>, method=PLAIN, rip=xx.xx.xx.xx, lip=xx.

expunge not removing attachments?

2018-05-30 Thread Ralf Hildebrandt
I have a large mail backup folder backup@backup.invalid; I'm cleaning
up daily like this:

infimum=`date -d "-4 day" +"%Y-%m-%d"`
doveadm expunge -u backup@backup.invalid mailbox INBOX SAVEDBEFORE $infimum 
doveadm purge   -u backup@backup.invalid

yet I see this:

# find attachments/ -type f -ctime +5 | wc -l
7522
# find attachments/ -type f | wc -l
127579

# find attachments/ -type f -mtime +5 | wc -l
14361
# find attachments/ -type f | wc -l
127793

About 5.9% of the files in attachments and below are older than 5 days.
Why? Is that normal?

using dovecot 2:2.3.1-1 from the official repos.
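
As a cross-check (a sketch - assuming doveadm search accepts the same
SAVEDBEFORE query that expunge does), this should print 0 right after the
daily cleanup:

doveadm search -u backup@backup.invalid mailbox INBOX SAVEDBEFORE "$infimum" | wc -l
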
-- 
Ralf Hildebrandt
  Geschäftsbereich IT | Abteilung Netzwerk
  Charité - Universitätsmedizin Berlin
  Campus Benjamin Franklin
  Hindenburgdamm 30 | D-12203 Berlin
  Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
  ralf.hildebra...@charite.de | https://www.charite.de



imapsieve: mailbox Trash: FLAG event

2018-05-30 Thread Peter Schiffer
Hi all,

after upgrading from Dovecot 2.2 to 2.3 I started to see that emails from the
Trash folder are automatically deleted after ~7 days. The email clients are
not configured to automatically expunge or to empty the trash on logout (I
see this behavior for multiple mailboxes, with different clients).

With mail_debug enabled, I see this log message for every email that
gets deleted from Trash:

May 29 08:02:17 hostname dovecot: imap(some@mail)<7725>: Debug:
imapsieve: mailbox Trash: FLAG event (changed flags: \Deleted)

I have Dovecot 2.3.1 (8e2f634) and Pigeonhole 0.5.1 for sieve. My
configuration can be found here:
https://github.com/pschiffe/mailcow-dockerized/tree/master/data/conf/dovecot

Support for sieve config in SQL is enabled, but all the sieve tables
are empty, and there are no sieve filters for Trash flagging.

Do you know how to find the source of that Trash flag event? I need to
disable it, or configure it so that it only deletes emails that are at
least a couple of months old.

Thanks all,

peter


use instance-name for syslog?

2018-05-30 Thread A. Schulze
Hello,

When running multiple instances of dovecot on the same host (or running 
multiple docker containers),
it is hard to distinguish the logs of the different processes: the syslog 
entries are all prefixed with the same identifier "dovecot".
It is hardcoded here:
https://github.com/dovecot/core/blob/master/src/lib-master/master-service.c#L420

Would it make sense to use the already implemented instance-name as syslog 
ident?
How do others solve that problem?

Andreas




Re: use instance-name for syslog?

2018-05-30 Thread SATOH Fumiyasu
Hi!

On Thu, 31 May 2018 00:44:58 +0900,
A. Schulze wrote:
> When running multiple instances of dovecot on the same host (or running 
> multiple docker container),
> it is hard to distinguish logs from different processes: the syslog entries 
> are all prefixed with the same identifier "dovecot"
> It is hardcoded here:
> https://github.com/dovecot/core/blob/master/src/lib-master/master-service.c#L420
> 
> Would it make sense to use the already implemented instance-name as syslog 
> ident?
> How do others solve that problem?

I have a patchset to implement that. Please see the attachment.

-- 
-- Name: SATOH Fumiyasu @ OSS Technology Corp. (fumiyas @ osstech co jp)
-- Business Home: https://www.OSSTech.co.jp/
-- GitHub Home: https://GitHub.com/fumiyas/
-- PGP Fingerprint: BBE1 A1C9 525A 292E 6729  CDEC ADC2 9DCA 5E1C CBCA
From 958933cd0e98f1fda68a2d4d4fc51fb8058a7914 Mon Sep 17 00:00:00 2001
From: SATOH Fumiyasu 
Date: Tue, 1 Jul 2014 19:20:30 +0900
Subject: [PATCH 1/2] master: Do not prepend "dovecot-" to a process name

---
 src/master/main.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/src/master/main.c b/src/master/main.c
index 752c253..733fc04 100644
--- a/src/master/main.c
+++ b/src/master/main.c
@@ -86,8 +86,6 @@ void process_exec(const char *cmd, const char *extra_args[])
/* prefix with dovecot/ */
argv[0] = t_strdup_printf("%s/%s", services->set->instance_name,
  argv[0]);
-   if (strncmp(argv[0], PACKAGE, strlen(PACKAGE)) != 0)
-   argv[0] = t_strconcat(PACKAGE"-", argv[0], NULL);
(void)execv_const(executable, argv);
 }
 
-- 
2.0.1

From 36d052adbd04f8c8a89ba66e086e22c152a5a93c Mon Sep 17 00:00:00 2001
From: SATOH Fumiyasu 
Date: Tue, 1 Jul 2014 19:22:56 +0900
Subject: [PATCH 2/2] lib-master: Set instance_name to the syslog name

---
 src/lib-master/master-service-settings.c | 2 ++
 src/lib-master/master-service-settings.h | 1 +
 src/lib-master/master-service.c  | 3 ++-
 src/lib/failures.c   | 7 ++-
 src/master/service-process.c | 1 +
 5 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/src/lib-master/master-service-settings.c 
b/src/lib-master/master-service-settings.c
index 30ad936..67fca19 100644
--- a/src/lib-master/master-service-settings.c
+++ b/src/lib-master/master-service-settings.c
@@ -36,6 +36,7 @@ master_service_settings_check(void *_set, pool_t pool, const 
char **error_r);
 static const struct setting_define master_service_setting_defines[] = {
DEF(SET_STR, base_dir),
DEF(SET_STR, state_dir),
+   DEF(SET_STR, instance_name),
DEF(SET_STR, log_path),
DEF(SET_STR, info_log_path),
DEF(SET_STR, debug_log_path),
@@ -52,6 +53,7 @@ static const struct setting_define 
master_service_setting_defines[] = {
 static const struct master_service_settings master_service_default_settings = {
.base_dir = PKG_RUNDIR,
.state_dir = PKG_STATEDIR,
+   .instance_name = PACKAGE,
.log_path = "syslog",
.info_log_path = "",
.debug_log_path = "",
diff --git a/src/lib-master/master-service-settings.h 
b/src/lib-master/master-service-settings.h
index e5b5ace..d7f1a80 100644
--- a/src/lib-master/master-service-settings.h
+++ b/src/lib-master/master-service-settings.h
@@ -10,6 +10,7 @@ struct master_service;
 struct master_service_settings {
const char *base_dir;
const char *state_dir;
+   const char *instance_name;
const char *log_path;
const char *info_log_path;
const char *debug_log_path;
diff --git a/src/lib-master/master-service.c b/src/lib-master/master-service.c
index 65b2753..d3e5281 100644
--- a/src/lib-master/master-service.c
+++ b/src/lib-master/master-service.c
@@ -297,7 +297,8 @@ void master_service_init_log(struct master_service *service,
if (!syslog_facility_find(service->set->syslog_facility,
  &facility))
facility = LOG_MAIL;
-   i_set_failure_syslog("dovecot", LOG_NDELAY, facility);
+   i_set_failure_syslog(service->set->instance_name, LOG_NDELAY,
+facility);
i_set_failure_prefix("%s", prefix);
 
if (strcmp(service->set->log_path, "syslog") != 0) {
diff --git a/src/lib/failures.c b/src/lib/failures.c
index 3023ba8..0ebba3d 100644
--- a/src/lib/failures.c
+++ b/src/lib/failures.c
@@ -443,7 +443,12 @@ void i_syslog_error_handler(const struct failure_context 
*ctx,
 
 void i_set_failure_syslog(const char *ident, int options, int facility)
 {
-   openlog(ident, options, facility);
+   static char *syslog_ident = NULL;
+
+   i_free(syslog_ident);
+   syslog_ident = i_strdup(ident);
+
+   openlog(syslog_ident, options, facility);
 
i_set_fatal_handler(i_syslog_fatal_handler);
i_set_error_handler(i_syslog_error_handler);
diff --git a/sr

Re: Fatal: nfs flush requires mail_fsync=always

2018-05-30 Thread Juan C. Blanco

Hello, any news about the error quoted below?

I'm preparing the 2.2 to 2.3 upgrade and having the same error.

We have the mail stores in an NFS filer.

Regards


On 19.01.2018 11:55, Søren Skou wrote:

Hiya all,

I'm seeing this "Fatal: nfs flush requires mail_fsync=always" error on
my testbed. The issue is that from what I can see, mail_fsync is set
to always :

# doveconf -n | grep mail_fs
mail_fsync = always

The result is that the client does not connect at all, which is not
really what I wanted to happen :)

Any idea what is going wrong here?

Best regards
Søren P. Skou

doveconf -n

# 2.3.1.alpha0 (bdfa22623) [XI:2:2.3.1~alpha0-1~auto+14]:
/etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.1.alpha0 (d5f710e0)
# OS: Linux 4.9.0-4-amd64 x86_64 Debian 9.3 nfs
auth_worker_max_count = 200
dict {
  expire = mysql:/etc/dovecot/dovecot-dict-sql.conf.ext
  quota = mysql:/etc/dovecot/dovecot-dict-sql.conf.ext
  sqlquota = mysql:/etc/dovecot/dovecot-dict-sql.conf.ext
}
disable_plaintext_auth = no
lock_method = dotlock
mail_fsync = always
mail_location = maildir:/mnt/virtual_mail/%d/%n
mail_nfs_index = yes
mail_nfs_storage = yes
mail_plugins = quota
mailbox_list_index = no
metric imap_select_no {
  event_name = imap_command_finished
  filter {
name = SELECT
tagged_reply_state = NO
  }
}
mmap_disable = yes
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
special_use = \Drafts
  }
  mailbox Junk {
auto = subscribe
special_use = \Junk
  }
  mailbox Sent {
special_use = \Sent
  }
  mailbox "Sent Messages" {
special_use = \Sent
  }
  mailbox Trash {
special_use = \Trash
  }
  prefix =
}
passdb {
  args = /etc/dovecot/dovecot-sql.conf.ext
  driver = sql
}
plugin {
  quota = dict:User quota::proxy::sqlquota
  quota_grace = 10%%
  quota_rule = *:storage=1000m:messages=30
  quota_rule2 = Trash:storage=+10%%:messages=+10%%
  quota_rule3 = Junk:storage=+20%%:messages=+20%%
  quota_status_nouser = DUNNO
  quota_status_overquota = 552 5.2.2 Mailbox is full
  quota_status_success = DUNNO
  quota_warning = storage=75%%:messages=75%% quota-warning 75 %u
  quota_warning2 = storage=95%%:messages=95%% quota-warning 95 %u
  quota_warning3 = -storage=100%%:messages=100%% quota-warning below %u
  sieve = /etc/dovecot/sieve/default.sieve
  sieve_global_dir = /etc/dovecot/sieve
}
protocols = " imap pop3"
service dict {
  unix_listener dict {
mode = 0600
user = vmail
  }
}
service imap {
  executable = imap
}
service quota-status {
  client_limit = 1000
  executable = quota-status -p postfix
  inet_listener {
address = 127.0.0.1
port = 12340
  }
}
service quota-warning {
  executable = script /usr/local/bin/quota-warning.sh
  user = vmail
}
ssl_ca = /etc/ssl/certs/ca-root.crt
ssl_cert = 

Hi!

Thanks, we'll look into it.

Aki




--
+---+
| Juan C. Blanco|
|   |
|  Centro de Calculo |  |
|  E.T.S. Ingenieros Informáticos|  E-mail: jcbla...@fi.upm.es  |
|  Universidad Politécnica de Madrid |  |
|  Campus de Montegancedo|  |
|  Boadilla del Monte|  Tel.:(+34) 91 067 2771  |
|  28660 MADRID (Spain)  |  Fax :(+34) 91 336 7412  |
+---+


Re: Fatal: nfs flush requires mail_fsync=always

2018-05-30 Thread Aki Tuomi
This fix is part of the next release.

---
Aki Tuomi
Dovecot Oy

 Original message 
From: "Juan C. Blanco" 
Date: 30/05/2018 19:31 (GMT+02:00)
To: Dovecot Mailing List 
Subject: Re: Fatal: nfs flush requires mail_fsync=always
Hello, any news about the attached error?

I'm preparing the 2.2 to 2.3 upgrade and having the same error.

We have the mail stores in an NFS filer.

Regards

> On 19.01.2018 11:55, Søren Skou wrote:
>> Hiya all,
>>
>> I'm seeing this "Fatal: nfs flush requires mail_fsync=always" error on
>> my testbed. The issue is that from what I can see, mail_fsync is set
>> to always :
>>
>> # doveconf -n | grep mail_fs
>> mail_fsync = always
>>
>> The result is that the client does not connect at all, which is not
>> really what I wanted to happen :)
>>
>> Any idea what is going wrong here?
>>
>> Best regards
>> Søren P. Skou
>>
>> doveconf -n
>>
>> # 2.3.1.alpha0 (bdfa22623) [XI:2:2.3.1~alpha0-1~auto+14]:
>> /etc/dovecot/dovecot.conf
>> # Pigeonhole version 0.5.1.alpha0 (d5f710e0)
>> # OS: Linux 4.9.0-4-amd64 x86_64 Debian 9.3 nfs
>> auth_worker_max_count = 200
>> dict {
>>   expire = mysql:/etc/dovecot/dovecot-dict-sql.conf.ext
>>   quota = mysql:/etc/dovecot/dovecot-dict-sql.conf.ext
>>   sqlquota = mysql:/etc/dovecot/dovecot-dict-sql.conf.ext
>> }
>> disable_plaintext_auth = no
>> lock_method = dotlock
>> mail_fsync = always
>> mail_location = maildir:/mnt/virtual_mail/%d/%n
>> mail_nfs_index = yes
>> mail_nfs_storage = yes
>> mail_plugins = quota
>> mailbox_list_index = no
>> metric imap_select_no {
>>   event_name = imap_command_finished
>>   filter {
>> name = SELECT
>> tagged_reply_state = NO
>>   }
>> }
>> mmap_disable = yes
>> namespace inbox {
>>   inbox = yes
>>   location =
>>   mailbox Drafts {
>> special_use = \Drafts
>>   }
>>   mailbox Junk {
>> auto = subscribe
>> special_use = \Junk
>>   }
>>   mailbox Sent {
>> special_use = \Sent
>>   }
>>   mailbox "Sent Messages" {
>> special_use = \Sent
>>   }
>>   mailbox Trash {
>> special_use = \Trash
>>   }
>>   prefix =
>> }
>> passdb {
>>   args = /etc/dovecot/dovecot-sql.conf.ext
>>   driver = sql
>> }
>> plugin {
>>   quota = dict:User quota::proxy::sqlquota
>>   quota_grace = 10%%
>>   quota_rule = *:storage=1000m:messages=30
>>   quota_rule2 = Trash:storage=+10%%:messages=+10%%
>>   quota_rule3 = Junk:storage=+20%%:messages=+20%%
>>   quota_status_nouser = DUNNO
>>   quota_status_overquota = 552 5.2.2 Mailbox is full
>>   quota_status_success = DUNNO
>>   quota_warning = storage=75%%:messages=75%% quota-warning 75 %u
>>   quota_warning2 = storage=95%%:messages=95%% quota-warning 95 %u
>>   quota_warning3 = -storage=100%%:messages=100%% quota-warning below %u
>>   sieve = /etc/dovecot/sieve/default.sieve
>>   sieve_global_dir = /etc/dovecot/sieve
>> }
>> protocols = " imap pop3"
>> service dict {
>>   unix_listener dict {
>> mode = 0600
>> user = vmail
>>   }
>> }
>> service imap {
>>   executable = imap
>> }
>> service quota-status {
>>   client_limit = 1000
>>   executable = quota-status -p postfix
>>   inet_listener {
>> address = 127.0.0.1
>> port = 12340
>>   }
>> }
>> service quota-warning {
>>   executable = script /usr/local/bin/quota-warning.sh
>>   user = vmail
>> }
>> ssl_ca = /etc/ssl/certs/ca-root.crt
>> ssl_cert = 
>> ssl_cipher_list = TLSv1+HIGH !SSLv2 !RC4 !aNULL !eNULL !3DES-CBC !3DES 
>> @STRENGTH
>> ssl_dh =  # hidden, use -P to show it
>> ssl_key =  # hidden, use -P to show it
>> userdb {
>>   args = uid=2000 gid=2000 home=/mnt/virtual_mail/%d/%n
>>   driver = static
>> }
>> protocol lmtp {
>>   mail_plugins = quota
>> }
>> protocol lda {
>>   mail_plugins = quota
>> }
>> protocol imap {
>>   mail_plugins = quota imap_quota
>>   rawlog_dir = /tmp/rawlog/%u
>> }
> 
> Hi!
> 
> Thanks, we'll look into it.
> 
> Aki



-- 
+---+
| Juan C. Blanco    |
|   |
|  Centro de Calculo |  |
|  E.T.S. Ingenieros Informáticos    |  E-mail: jcbla...@fi.upm.es  |
|  Universidad Politécnica de Madrid |  |
|  Campus de Montegancedo    |  |
|  Boadilla del Monte    |  Tel.:    (+34) 91 067 2771  |
|  28660 MADRID (Spain)  |  Fax :    (+34) 91 336 7412  |
+---+


Re: Struggling with sieve, fileinto and variables.

2018-05-30 Thread Barry Pearce

Thanks for that. It turns out it was an issue with exim stripping the local part 
suffix from the envelope before passing the message over lmtp to dovecot. Fixed 
in the exim router config!

However, if you turn on trace with spamtest, it does result in a crash during 
mail receipt:

lmtp(t...@test.net)<7988>: Panic: Unsupported 0x30 
specifier starting at #38 in 'extracted score=%.3f, max=%.3f, ratio=%.0f %%'

lmtp(t...@test.net)<7988>: Error: Raw backtrace: 
/usr/lib/dovecot/libdovecot.so.0(+0xc7e87) [0x7fa1c8cf1e87] -> 
/usr/lib/dovecot/libdovecot.so.0(+0xc7f4a) [0x7fa1c8cf1f4a] -> 
/usr/lib/dovecot/libdovecot.so.0(i_fatal+0) [0x7fa1c8c5ebc9] -> 
/usr/lib/dovecot/libdovecot.so.0(+ [snip...]

lmtp(t...@test.net)<7988>: Fatal: master: 
service(lmtp): child 7988 killed with signal 6 (core not dumped - 
https://dovecot.org/bugreport.html#coredumps - set /proc/sys/fs/suid_dumpable 
to 2)

At least my problem is fixed - so thank you!


From: Stephan Bosch 
Sent: 30 May 2018 13:41
To: Barry Pearce; dovecot@dovecot.org
Subject: Re: Struggling with sieve, fileinto and variables.



On 30-5-2018 at 14:01, Barry Pearce wrote:
> Hi,
>
> Im on Manjaro linux fully up to date running
>
> dovecot 2.3.1-2
> pigeonhole 0.5.1-2
>
>
> All is running well except I am having problems with fileinto - it
> refuses to use variables, and mailboxexists is also failing for me.
>
> Im just dealing with plus addressing - should be simple but the
> behaviour Im experiencing isnt right.
>
> require ["variables", "envelope", "fileinto", "subaddress",
> "mailbox"];
> if envelope :matches :detail "to" "*" {
> set :lower "name" "${1}";
> }
> fileinto :create "folder3";
>
>
> This works, but changing the last line to:
>
> fileinto :create "${name}";
>
>
> fails and files into inbox. As does:
>
> fileinto :create "${1}";
>
>
> It makes no difference if the fileinto is placed inside the if
> statement or otherwise. Using sieve-test suggests the basic scriptit
> should work and traces show the right action.
>
> On a related matter the script Im actually trying to get to work is:
>
> require ["variables", "envelope", "fileinto", "subaddress",
> "mailbox"];
>
> if envelope :matches :detail "to" "*" {
> set :lower "name" "${1}";
> }
> if mailboxexists "${name}" {
> fileinto :create "${name}";
> }
>
>
> But fails and files in the inbox.

This happens for a reason, which you can either find in your syslog or
in the user log file (e.g. ~/.dovecot.sieve -> ~/.dovecot.sieve.log).


> Modifying the script to:
>
> require ["variables", "envelope", "fileinto", "subaddress",
> "mailbox"];
>
> if envelope :matches :detail "to" "*" {
> set :lower "name" "${1}";
> }
>
> if mailboxexists "${name}" {
> fileinto :create "folder7";
> } else {
>   fileinto :create "folder8";
> }
>
>
> Files into folder8. So the mailboxexists is failing.
>
>  ## Started executing script 'plus-addressing'
>   3: envelope test
>   3:   starting `:matches' match with `i;ascii-casemap' comparator:
>   3:   getting `to' part from message envelope
>   3:   extracting `detail' part from address 
>   3:   matching value `folder4'
>   3: with key `*' => 1
>   3:   finishing match with result: matched
>   3: jump if result is false
>   3:   not jumping
>   4: set command
>   7:   modify :lower "folder4" => "folder4"
>   7:   assign `name' [0] = "folder4"
>   7: mailboxexists test
>   7:   mailbox `folder4' cannot be opened
>   7:   some mailboxes are unavailable
>   7: jump if result is false
>   7:   jumping to line 9
>  10: fileinto action
>  10:   store message in mailbox `folder8'
>  ## Finished executing script 'plus-addressing'
>
> Here folder4 actually does exist - so the sieve-test confirms that
> issue.  Im at a loss as to whats going on. Im particularly perplexed
> by the difference in behaviour between sieve-test and the result under
> the live server.

You can also get a trace for the live server:
https://wiki2.dovecot.org/Pigeonhole/Sieve/Configuration#Trace_Debugging

Regards,

Stephan.



Re: use instance-name for syslog?

2018-05-30 Thread A. Schulze



On 30.05.2018 at 18:08, SATOH Fumiyasu wrote:
> Hi!
> 
> On Thu, 31 May 2018 00:44:58 +0900,
> A. Schulze wrote:
>> When running multiple instances of dovecot on the same host (or running 
>> multiple docker container),
>> it is hard to distinguish logs from different processes: the syslog entries 
>> are all prefixed with the same identifier "dovecot"
>> It is hardcoded here:
>> https://github.com/dovecot/core/blob/master/src/lib-master/master-service.c#L420
>>
>> Would it make sense to use the already implemented instance-name as syslog 
>> ident?
>> How do others solve that problem?
> 
> I have a patchset to implement that. Please see the attachment.

Thanks! I'll try to apply the patch to 2.2.36 and report my results...

Andreas


Re: Fatal: nfs flush requires mail_fsync=always

2018-05-30 Thread Juan C. Blanco




On 30/05/2018 18:50, Aki Tuomi wrote:

This fix is part of next release.


OK, thanks!





---
Aki Tuomi
Dovecot oy

 Original message 
From: "Juan C. Blanco" 
Date: 30/05/2018 19:31 (GMT+02:00)
To: Dovecot Mailing List 
Subject: Re: Fatal: nfs flush requires mail_fsync=always

Hello, any news about the attached error?

I'm preparing the 2.2 to 2.3 upgrade and having the same error.

We have the mail stores in an NFS filer.

Regards

 > On 19.01.2018 11:55, Søren Skou wrote:
 >> Hiya all,
 >>
 >> I'm seeing this "Fatal: nfs flush requires mail_fsync=always" error on
 >> my testbed. The issue is that from what I can see, mail_fsync is set
 >> to always :
 >>
 >> # doveconf -n | grep mail_fs
 >> mail_fsync = always
 >>
 >> The result is that the client does not connect at all, which is not
 >> really what I wanted to happen :)
 >>
 >> Any idea what is going wrong here?
 >>
 >> Best regards
 >> Søren P. Skou
 >>
 >> doveconf -n
 >>
 >> # 2.3.1.alpha0 (bdfa22623) [XI:2:2.3.1~alpha0-1~auto+14]:
 >> /etc/dovecot/dovecot.conf
 >> # Pigeonhole version 0.5.1.alpha0 (d5f710e0)
 >> # OS: Linux 4.9.0-4-amd64 x86_64 Debian 9.3 nfs
 >> auth_worker_max_count = 200
 >> dict {
 >>   expire = mysql:/etc/dovecot/dovecot-dict-sql.conf.ext
 >>   quota = mysql:/etc/dovecot/dovecot-dict-sql.conf.ext
 >>   sqlquota = mysql:/etc/dovecot/dovecot-dict-sql.conf.ext
 >> }
 >> disable_plaintext_auth = no
 >> lock_method = dotlock
 >> mail_fsync = always
 >> mail_location = maildir:/mnt/virtual_mail/%d/%n
 >> mail_nfs_index = yes
 >> mail_nfs_storage = yes
 >> mail_plugins = quota
 >> mailbox_list_index = no
 >> metric imap_select_no {
 >>   event_name = imap_command_finished
 >>   filter {
 >> name = SELECT
 >> tagged_reply_state = NO
 >>   }
 >> }
 >> mmap_disable = yes
 >> namespace inbox {
 >>   inbox = yes
 >>   location =
 >>   mailbox Drafts {
 >> special_use = \Drafts
 >>   }
 >>   mailbox Junk {
 >> auto = subscribe
 >> special_use = \Junk
 >>   }
 >>   mailbox Sent {
 >> special_use = \Sent
 >>   }
 >>   mailbox "Sent Messages" {
 >> special_use = \Sent
 >>   }
 >>   mailbox Trash {
 >> special_use = \Trash
 >>   }
 >>   prefix =
 >> }
 >> passdb {
 >>   args = /etc/dovecot/dovecot-sql.conf.ext
 >>   driver = sql
 >> }
 >> plugin {
 >>   quota = dict:User quota::proxy::sqlquota
 >>   quota_grace = 10%%
 >>   quota_rule = *:storage=1000m:messages=30
 >>   quota_rule2 = Trash:storage=+10%%:messages=+10%%
 >>   quota_rule3 = Junk:storage=+20%%:messages=+20%%
 >>   quota_status_nouser = DUNNO
 >>   quota_status_overquota = 552 5.2.2 Mailbox is full
 >>   quota_status_success = DUNNO
 >>   quota_warning = storage=75%%:messages=75%% quota-warning 75 %u
 >>   quota_warning2 = storage=95%%:messages=95%% quota-warning 95 %u
 >>   quota_warning3 = -storage=100%%:messages=100%% quota-warning below %u
 >>   sieve = /etc/dovecot/sieve/default.sieve
 >>   sieve_global_dir = /etc/dovecot/sieve
 >> }
 >> protocols = " imap pop3"
 >> service dict {
 >>   unix_listener dict {
 >> mode = 0600
 >> user = vmail
 >>   }
 >> }
 >> service imap {
 >>   executable = imap
 >> }
 >> service quota-status {
 >>   client_limit = 1000
 >>   executable = quota-status -p postfix
 >>   inet_listener {
 >> address = 127.0.0.1
 >> port = 12340
 >>   }
 >> }
 >> service quota-warning {
 >>   executable = script /usr/local/bin/quota-warning.sh
 >>   user = vmail
 >> }
 >> ssl_ca = /etc/ssl/certs/ca-root.crt
 >> ssl_cert = 
 >> ssl_cipher_list = TLSv1+HIGH !SSLv2 !RC4 !aNULL !eNULL !3DES-CBC !3DES @STRENGTH

 >> ssl_dh =  # hidden, use -P to show it
 >> ssl_key =  # hidden, use -P to show it
 >> userdb {
 >>   args = uid=2000 gid=2000 home=/mnt/virtual_mail/%d/%n
 >>   driver = static
 >> }
 >> protocol lmtp {
 >>   mail_plugins = quota
 >> }
 >> protocol lda {
 >>   mail_plugins = quota
 >> }
 >> protocol imap {
 >>   mail_plugins = quota imap_quota
 >>   rawlog_dir = /tmp/rawlog/%u
 >> }
 >
 > Hi!
 >
 > Thanks, we'll look into it.
 >
 > Aki



--
+---+
| Juan C. Blanco    |
|   |
|  Centro de Calculo |  |
|  E.T.S. Ingenieros Informáticos    |  E-mail: jcbla...@fi.upm.es  |
|  Universidad Politécnica de Madrid |  |
|  Campus de Montegancedo    |  |
|  Boadilla del Monte    |  Tel.:    (+34) 91 067 2771  |
|  28660 MADRID (Spain)  |  Fax :    (+34) 91 336 7412  |
+---+


--
+---+
| Juan C. Blanco|
|   |
|  Centro de Calculo |  

Re: imapsieve: mailbox Trash: FLAG event

2018-05-30 Thread Stephan Bosch




On 30/05/2018 at 17:08, Peter Schiffer wrote:

Hi all,

after upgrade from Dovecot 2.2 to 2.3 I started to see that emails from
Trash folder are automatically deleted after ~7 days. Email clients are
not configured to automatically expunge or empty trash on log out (I
see this behavior for multiple mailboxes, with different clients).

With mail_debug enabled, I see this log message for every email that
gets deleted from Trash:

May 29 08:02:17 hostname dovecot: imap(some@mail)<7725>: Debug:
imapsieve: mailbox Trash: FLAG event (changed flags: \Deleted)

I have Dovecot 2.3.1 (8e2f634) and Pigeonhole 0.5.1 for sieve. My
configuration can be found here:
https://github.com/pschiffe/mailcow-dockerized/tree/master/data/conf/dovecot

There is enabled support for sieve config in sql, but all sieve tables
are empty, and there are no sieve filters for Trash flagging.

Do you know how to find the source of that Trash flag event? I need to
disable it or to configure it so it deletes emails at least couple of
months old.


That debug message just indicates that IMAPSIEVE noticed a flag change 
event, not that Sieve is the entity setting the flag.


Likely, the client (or one of several clients you're using) is doing 
this explicitly: IMAPSIEVE would only log that debug message for an IMAP 
STORE command, which is issued by the client.
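
If it is unclear which client issues that STORE, a rawlog can capture the 
IMAP traffic, e.g. (a sketch - the directory must exist and be writable by 
the mail user):

protocol imap {
  rawlog_dir = /tmp/rawlog/%u
}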


Regards,

Stephan.


strange misflagging of incoming mail as trash

2018-05-30 Thread Robb Aley Allan
I am setting up postfix/dovecot on a clean FreeBSD cloud server and have discovered 
a strange problem resulting from the interaction of Mac OS X Mail and dovecot 
(using the Maildir file structure). Essentially, under very specific 
circumstances, Mail somehow tells dovecot that a new incoming message is trash, 
dovecot marks it as such, and the message effectively vanishes.

The specific circumstance is as follows:

1. A message is sent to a server running postfix. It is placed in the 
Maildir mail structure in the “new” directory.

2. OS X Mail is used to retrieve mail via IMAP, and dovecot transfers 
the message from “new” to “cur”.

3. OS X Mail has a rule that seeks to transfer an incoming message to a 
subfolder if the message sender belongs to a group created in the Contacts app 
(IOW the sender is matched against an OS-internal contacts database).

4. IF the sender is the user, AND the user is in the group against 
which the rule is compared, the message is marked as trash (suffix “T”) and is 
never loaded into the mail client, either in the inbox or in the trash folder. 
For all intents and purposes, it disappears.

The connection log shows the following line:

"WROTE May 30 23:38:11.960 [kCFStreamSocketSecurityLevelTLSv1_2] -- 
host:cloud.helical.com -- port:993 -- socket:0x6080004b6c80 -- 
thread:0x604000678e40
12.3 UID STORE 125 +FLAGS.SILENT (\Deleted)”

which appears to be the client instructing dovecot to mark the message as 
deleted.

I have the log files from the connection on both sides, and the console log 
from the Mac, but can’t understand:

1. how Mail decides that the message should be deleted, and 
2. why Mail “loses” the message (it doesn’t appear in the Trash mailbox or 
anywhere else).
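
For the record, the server-side state can be confirmed with doveadm (a sketch - 
the username is a placeholder):

# list messages currently carrying \Deleted
doveadm search -u user@example.com mailbox INBOX DELETED

# show their UIDs together with the full flag set
doveadm fetch -u user@example.com 'uid flags' mailbox INBOX DELETED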

_

Robb Aley Allan