Re: IMAP directory structure.

2017-02-21 Thread Steffen Kaiser


On Tue, 21 Feb 2017, C5ace wrote:

I use Dovecot 2.1.7 and would like to know how to force the IMAP directory 
structure to be:


The IMAP server's desired directory structure is:
root@server-2:/var/vmail/c5ace.com/test/Maildir#
/.INBOX
/.INBOX.Archives
/.INBOX.Drafts
/.INBOX.Junk
/.INBOX.Sent
/.INBOX.Templates
/.INBOX.Trash

and to prevent mail clients like Thunderbird, Claws Mail, etc. from adding 
additional out-of-tree directories.


Define the prefix of the default namespace as "INBOX." and deploy ACLs 
that deny creating new sub-mailboxes. (However, if I remember correctly, 
the owner implicitly has the permission to change the permissions.)
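
A minimal sketch of that approach (the ACL path and the exact set of rights to 
grant are illustrative assumptions, not taken from your setup):

namespace inbox {
  inbox = yes
  prefix = INBOX.
  separator = .
}

mail_plugins = $mail_plugins acl
plugin {
  # global ACL directory; files in it are named after the mailboxes they cover
  acl = vfile:/etc/dovecot/acls
}

# /etc/dovecot/acls/INBOX (and similar files for the other mailboxes):
# grant the usual rights but leave out "k" (create child mailboxes)
# and "x" (delete mailbox)
owner lrwstipe

As noted above, the owner may implicitly keep the right to change these ACLs, 
so treat this as a deterrent rather than a hard guarantee.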


-- 
Steffen Kaiser



How to dsync mdbox compressed to maildir uncompressed

2017-02-21 Thread Daniel Betz
Hello,

we are using doveadm sync to export mdbox to maildir format, so we can use an 
external tool to convert it into a PST file.
Since we have enabled zlib compression, doveadm sync always exports the maildir 
gzip-compressed.

Is there any way to prevent doveadm sync from exporting the maildir compressed?

I have tried this: doveadm -o "maildir_copy_with_hardlinks=no" sync -u 
i...@test.de maildir:~/Maildir

From the wiki: If you want to use dsync to convert to a compressed Maildir you 
may need -o maildir_copy_with_hardlinks=no (this is set to yes by default and 
will prevent compression).

Regards,
Daniel


# 2.2.27 (c0f36b0): /usr/local/dovecot2/etc/dovecot/dovecot.conf
doveconf: Warning: service auth { client_limit=6 } is lower than required 
under max. load (500500)
# OS: Linux 3.10.0-327.36.3.el7.x86_64 x86_64 CentOS Linux release 7.2.1511 
(Core)
auth_cache_negative_ttl = 1 mins
auth_cache_size = 64 M
auth_cache_ttl = 2 hours
auth_mechanisms = plain login
auth_username_chars =
base_dir = /var/run/dovecot/
debug_log_path = /dev/null
default_login_user = dovecot
default_vsz_limit = 750 M
disable_plaintext_auth = no
doveadm_password =  # hidden, use -P to show it
doveadm_port = 12345
first_valid_gid = 1001
first_valid_uid = 1001
info_log_path = /var/log/dovecot/messages
lda_mailbox_autocreate = yes
lda_original_recipient_header = X-Envelope-To
log_path = /dev/stderr
login_log_format_elements = user=[%u] method=%m rip=%r lip=%l %c
mail_gid = 1001
mail_location = mdbox:~:INDEX=%h/INDEX
mail_plugins = quota notify mail_log zlib
mail_uid = 1001
mbox_write_locks = fcntl
namespace {
  inbox = yes
  location =
  mailbox Drafts {
auto = no
special_use = \Drafts
  }
  mailbox "Gesendete Elemente" {
auto = no
special_use = \Sent
  }
  mailbox "Infizierte Objekte" {
auto = no
special_use = \Junk
  }
  mailbox Sent {
auto = no
special_use = \Sent
  }
  mailbox "Sent Messages" {
auto = no
special_use = \Sent
  }
  mailbox Spam {
auto = no
special_use = \Junk
  }
  mailbox Trash {
auto = no
special_use = \Trash
  }
  prefix =
  separator = .
  type = private
}
namespace inbox {
  hidden = yes
  inbox = no
  list = no
  location =
  prefix = INBOX.
  separator = .
}
passdb {
  args = /usr/local/dovecot2/etc/dovecot/dovecot-ldap.conf
  driver = ldap
}
passdb {
  args = /usr/local/dovecot2/etc/dovecot/dovecot-ldap2.conf
  driver = ldap
}
plugin {
  quota = dict:User quota::file:%h/mdbox/dovecot-quota
  quota_rule1 = Trash:storage=+100M
  quota_rule2 = INBOX.Trash:storage=+100M
  quota_warning = storage=85%% quota-warning 85 %u
  quota_warning1 = storage=95%% quota-warning 95 %u
  quota_warning2 = storage=99%% quota-warning 99 %u
  zlib_save = gz
  zlib_save_level = 6
}
replication_max_conns = 30
sendmail_path = /usr/local/exim/bin/exim
service aggregator {
  fifo_listener replication-notify-fifo {
mode = 0666
user = popuser
  }
  unix_listener replication-notify {
mode = 0666
user = popuser
  }
}
service anvil {
  client_limit = 6
}
service auth {
  client_limit = 6
  unix_listener auth-userdb {
mode = 0666
user = popuser
  }
  unix_listener auth {
mode = 0666
user = popuser
  }
}
service config {
  unix_listener config {
user = popuser
  }
}
service dict {
  unix_listener dict {
mode = 0666
user = popuser
  }
}
service dns_client {
  process_limit = 6000
  process_min_avail = 12
  unix_listener dns-client {
mode = 0666
user = popuser
  }
}
service doveadm {
  inet_listener {
port = 12345
  }
  user = popuser
}
service imap-login {
  chroot = login
  client_limit = 6000
  process_limit = 100
  process_min_avail = 16
  service_count = 0
}
service imap {
  executable = /usr/local/dovecot2/libexec/dovecot/imap
  process_limit = 25
  process_min_avail = 50
  service_count = 250
}
service ipc {
  client_limit = 6
  unix_listener ipc {
mode = 0650
user = dovecot
  }
  unix_listener login/ipc-proxy {
mode = 0650
user = dovecot
  }
}
service lmtp {
  unix_listener lmtp {
mode = 0666
user = popuser
  }
}
service pop3-login {
  chroot = login
  client_limit = 6000
  process_limit = 100
  process_min_avail = 16
  service_count = 0
}
service pop3 {
  executable = /usr/local/dovecot2/libexec/dovecot/pop3
  process_limit = 25
  process_min_avail = 50
  service_count = 250
}
service quota-warning {
  executable = script /usr/local/dovecot2/bin/quota-warning.sh
  unix_listener quota-warning {
mode = 0600
user = popuser
  }
  user = popuser
}
service replicator {
  unix_listener replicator-doveadm {
mode = 0600
user = popuser
  }
}
ssl_cert = 

Re: How to dsync mdbox compressed to maildir uncompressed

2017-02-21 Thread Timo Sirainen
On 21 Feb 2017, at 12.49, Daniel Betz  wrote:
> 
> Hello,
> 
> we are using doveadm sync to export mdbox to maildir format, so we can use an 
> external tool to convert into an pst file.
> Since we have enabled zlib compression doveadm sync always exports the 
> maildir gzip compressed.
> 
> Are there any ways to prevent the doveadm sync to export the maildir 
> compressed ?
> 
> Have tried this: doveadm -o "maildir_copy_with_hardlinks=no" sync -u 
> i...@test.de maildir:~/Maildir
> From Wiki: If you want to use dsync to convert to a compressed Maildir you 
> may need -o maildir_copy_with_hardlinks=no (this is set to yes by default and 
> will prevent compression).

Run it via two processes so you can give separate settings for them, something 
like:

doveadm sync -u imap@test.d  'doveadm -o mail=~/Maildir -o 
mail_plugins=everything-but-zlib dsync-server'
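
For reference, "everything-but-zlib" just means spelling out the configured 
mail_plugins minus zlib. With the mail_plugins from the doveconf -n above 
(quota notify mail_log zlib), a sketch of that could look like this (the user 
address is a placeholder, and the full mail_location setting name is used):

doveadm sync -u user@example.com 'doveadm -o mail_location=maildir:~/Maildir -o "mail_plugins=quota notify mail_log" dsync-server'

Quoting a plugin list that contains spaces inside the remote command can be 
fiddly; the plugin/zlib_save override shown in the next reply sidesteps that.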


Re: How to dsync mdbox compressed to maildir uncompressed

2017-02-21 Thread Thomas Leuxner
* Daniel Betz  2017.02.21 11:49:

> Have tried this: doveadm -o "maildir_copy_with_hardlinks=no" sync -u 
> i...@test.de maildir:~/Maildir
> From Wiki: If you want to use dsync to convert to a compressed Maildir you 
> may need -o maildir_copy_with_hardlinks=no (this is set to yes by default and 
> will prevent compression).

doveadm -o plugin/quota= -o plugin/zlib_save= backup -u i...@test.de 
maildir:~/Maildir

Regards
Thomas




Re: How to dsync mdbox compressed to maildir uncompressed

2017-02-21 Thread Daniel Betz
Hi Timo,

thank you for the hint, but it doesn't seem to work.

doveadm sync -u i...@test.de 'doveadm -o mail="maildir:~/Maildir" -o 
"mail_plugins=quota" dsync-server -u i...@test.de'
Also tried -o mail=~/Maildir .. -o maildir:~/Maildir .. 

The log throws an error:
Feb 21 13:05:35 doveadm: Error: Panic: io_add(0x1) called twice fd=9, 
callback=0x7f49baa06840 -> 0x7f49ba991e30
Feb 21 13:05:35 doveadm: Error: Error: Raw backtrace: 
/usr/local/dovecot2/lib/dovecot/libdovecot.so.0(+0x92d70) [0x7f49ba9efd70] -> 
/usr/local/dovecot2/lib/dovecot/libdovecot.so.0(default_fatal_handler+0x2a) 
[0x7f49ba9efdda] -> /usr/local/dovecot2/lib/dovecot/libdovecot.so.0(i_fatal+0) 
[0x7f49ba98b4e0] -> 
/usr/local/dovecot2/lib/dovecot/libdovecot.so.0(ioloop_iolist_add+0x83) 
[0x7f49baa03dc3] -> 
/usr/local/dovecot2/lib/dovecot/libdovecot.so.0(io_loop_handle_add+0x3b) 
[0x7f49baa046db] -> /usr/local/dovecot2/lib/dovecot/libdovecot.so.0(+0xa599f) 
[0x7f49baa0299f] -> /usr/local/dovecot2/lib/dovecot/libdovecot.so.0(io_add+0xd) 
[0x7f49baa02a4d] -> 
/usr/local/dovecot2/lib/dovecot/libdovecot.so.0(master_service_io_listeners_add+0x65)
 [0x7f49ba9916d5] -> 
/usr/local/dovecot2/lib/dovecot/libdovecot.so.0(master_service_init_finish+0xb7)
 [0x7f49ba9917a7] -> /usr/local/dovecot2/bin/doveadm(main+0x189) [0x4143a9] -> 
/lib64/libc.so.6(__libc_start_main+0xf5) [0x7f49ba5bcb15] -> 
/usr/local/dovecot2/bin/doveadm() [0x414785]
Feb 21 13:05:35 dsync-local(i...@test.de): Error: read(remote) failed: EOF 
(version not received)
Feb 21 13:05:35 dsync-local(i...@test.de): Error: Remote command died with 
signal 6: doveadm -o mail="maildir:~/Maildir" -o "mail_plugins=quota" 
dsync-server -u i...@test.de dsync-server

Regards,
Daniel





Re: doveadm: Fatal: All your namespaces have a location setting

2017-02-21 Thread Aki Tuomi


On 20.02.2017 11:46, Ben wrote:
>
>> Hi!
>>
>> Can you post doveconf -n
>>
>> Aki
>
> # 2.2.10: /etc/dovecot/dovecot.conf
> # OS: Linux 3.10.0-514.6.1.el7.x86_64 x86_64 CentOS Linux release
> 7.3.1611 (Core)
> auth_mechanisms = plain login
> auth_verbose = yes
> auth_verbose_passwords = sha1
> first_valid_uid = 1000
> mail_location = maildir:~/Maildir
> managesieve_notify_capability = mailto
> managesieve_sieve_capability = fileinto reject envelope
> encoded-character vacation subaddress comparator-i;ascii-numeric
> relational regex imap4flags copy include variables body environment
> mailbox date ihave enotify
> mbox_write_locks = fcntl
> namespace inbox {
>   inbox = yes
>   location =

Try removing this (the empty "location =" line).
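
That is, leave the location out of the inbox namespace entirely so it falls 
back to mail_location; roughly (a sketch keeping only the settings visible in 
the quote above):

namespace inbox {
  inbox = yes
}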

Aki


Scaling to 10 Million IMAP sessions on a single server

2017-02-21 Thread KT Walrus
I just read this blog about scaling to 12 Million Concurrent Connections on a 
single server, and it got me thinking:
https://mrotaru.wordpress.com/2013/10/10/scaling-to-12-million-concurrent-connections-how-migratorydata-did-it/

Would it be possible to scale Dovecot IMAP server to 10 Million IMAP sessions 
on a single server?

I think the current implementation of having a separate process manage each 
active IMAP session (with the possibility of moving idling sessions to a single 
hibernate process) will never allow a single server to manage 10 Million IMAP 
sessions.
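
As an aside, idle-session hibernation already exists in recent 2.2.x releases. 
A rough sketch of enabling it, with the timeout value chosen arbitrarily and 
the listener group an assumption that may need adjusting to how the imap 
processes run:

imap_hibernate_timeout = 30s
service imap-hibernate {
  unix_listener imap-hibernate {
    mode = 0660
    group = $default_internal_group
  }
}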

But would it be possible to implement a new IMAP server plugin that uses a 
fixed, configurable pool of “worker” processes, much like NGINX or PHP-FPM do? 
Such servers can probably scale to 10 Million TCP connections, if the server is 
carefully tuned and has enough cores/memory to support that many active 
sessions.

I’m thinking that the new IMAP server could use some external database (e.g., 
Redis or Memcached) to save all the session state and have the “worker” 
processes poll the TCP sockets for new IMAP commands to process (fetching the 
session state from the external database when a command is waiting for a 
response). The Dovecot IMAP proxies could even queue incoming commands, 
proxying many incoming requests over a smaller number of backend connections 
(like ProxySQL does for MySQL requests). That might allow each Dovecot proxy to 
support 10 Million IMAP sessions, and a single backend could support multiple 
front-end Dovecot proxies (scaling to 100 Million concurrent IMAP connections 
with 10 proxies, each backend server handling 10 Million connections).

Of course, the backend server may need to be beefy and have very fast NVMe SSDs 
for local storage, but changing the IMAP server to manage a pool of workers 
instead of requiring a process per active session would allow bigger scale-up 
and could save large sites a lot of money.

Is this a good idea? Or, am I missing something?

Kevin

Could not login as root or other Linux user account

2017-02-21 Thread Basdove
Ubuntu server 16.04.2. Samba has been upgraded from the repository (latest 
version). I was configuring Samba as per the wiki document 
"ActivedirectoryWINbindHowto". After editing common-account and common-auth I 
rebooted the server. I could not login as root or any Linux user; the server 
says "Incorrect login". All of these Linux user logins, including root, worked 
well before. How can I login now?

Below is the relevant part of the document:
---
Note: You can use pam-auth-update to add the necessary entries for winbind 
authentication. If you installed libpam-winbind above, this step is all you 
need to do to configure PAM. You may want to add the line to automatically 
create the home directory.

sudo pam-auth-update

This PAM configuration does not acquire a Kerberos TGT at login. To acquire a 
ticket, use kinit after logging in, and consider using kdestroy in a logout 
script.

file: /etc/pam.d/common-account
account sufficient   pam_winbind.so
account required     pam_unix.so

file: /etc/pam.d/common-auth
auth sufficient pam_winbind.so
auth sufficient pam_unix.so nullok_secure use_first_pass
auth required   pam_deny.so


Re: Could not login as root or other Linux user account

2017-02-21 Thread Aki Tuomi



On 2017-02-21 17:35, Basdove wrote:

Ubuntu server 16.04.2samba has upgraded from As per repository (latest version)I was configuring samba as 
per document from wiki "ActivedirectoryWINbindHowto"After editing the common-account and 
common-auth I rebooted the server.I could notlogin as root or any Linux user. Server says 
"Incorrect login" But I tried with all otherLinux user login which  are all 
logged well before inducing  root.How to login nowBelow is part of document 
:--- Note:
 You can use pam-auth-update to add the necessary entries for winbind authentication. If you installed 
libpam-winbind above, this step is all you need to do to configure pam. You may want to add the line to 
automatically create the home directory.sudo pam-auth-updateThis PAM configuration does not acquire a 
Kerberos TGT at login. To acquire a ticket, use kinit after logging in, and
   consider using kdestroy in a logout script.file: /etc/pam.d/common-accountaccount 
sufficient   pam_winbind.soaccount 
required pam_unix.sofile: /etc/pam.d/common-authauth sufficient 
pam_winbind.soauth sufficient pam_unix.so nullok_secure use_first_passauth required   
pam_deny.so 


Could you send your email in some readable format?

Aki


segfault in lib20_expire_plugin

2017-02-21 Thread Mario Arnold
Hello,

after upgrading from [2.2.devel (34f7cc3)] to [2.2.devel (b3443fc)], Dovecot
stops with a segfault:

Fatal: master: service(imap): child 21179 killed with signal 11 (core dumped)
imap[21179]: segfault at 0 ip f726eef1 sp ffa3b050 error 4 in
lib20_expire_plugin.so[f726d000+3000]

gdb /usr/lib/dovecot/imap /var/_core/core_imap-11-5000-5000-21179
GNU gdb (Debian 7.12-6) 7.12.0.20161007-git
Reading symbols from /usr/lib/dovecot/imap...Reading symbols from
/usr/lib/debug/.build-id/99/6f1cf1a262cf5738f075ec046d9a7d344d9693.debug...done.
done.
[New LWP 21179]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/i386-linux-gnu/libthread_db.so.1".
Core was generated by `dovecot/imap imap-postlogin'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  expire_mail_namespaces_created (ns=0xf814db90) at expire-plugin.c:428
428 expire-plugin.c: Datei oder Verzeichnis nicht gefunden.
(gdb) bt full
#0  expire_mail_namespaces_created (ns=0xf814db90) at expire-plugin.c:428
user = 0xf814d028
v = 0x0
db = 0xf815b960
error = 0xf81522c0 "Trash"
#1  0xf75f9b21 in hook_mail_namespaces_created (namespaces=0xf8151008) at
mail-storage-hooks.c:304
_data_stack_cur_id = 4
hooks__foreach_end = 0xf814dae8
hooks = 0xf814dad4
#2  0xf75ebf9f in mail_namespaces_init_finish (namespaces=0xf8151008,
error_r=0xffa3b23c) at mail-namespace.c:383
_data_stack_cur_id = 3
ns = 0x0
prefixless_found = false
__FUNCTION__ = "mail_namespaces_init_finish"
#3  0xf75ec1bb in mail_namespaces_init (user=0xf814d028, error_r=0xffa3b23c)
at mail-namespace.c:438
mail_set = 0xf814d118
ns_set = 
unexpanded_ns_set = 
namespaces = 0xf8151008
ns_p = 
i = 
count = 
count2 = 
__FUNCTION__ = "mail_namespaces_init"
#4  0xf75fcd30 in mail_storage_service_init_post (ctx=,
error_r=, mail_user_r=, priv=,
user=) at mail-storage-service.c:728
mail_set = 0xf814d118
mail_user = 0xf814d028
#5  mail_storage_service_next_real (mail_user_r=,
user=, ctx=) at mail-storage-service.c:1426
len = 4162116496
priv = {uid = 5000, gid = 5000, uid_source = 0xf76afeb4 "userdb
lookup", gid_source = 0xf76afeb4 "userdb lookup", home = 0xf813ea71
"/srv/vmail/xtlv.de/1000", chroot = 0xf8130a20 ""}
error = 0xf75b9934 "4\210\024"
#6  mail_storage_service_next (ctx=0xf814d118, user=0xf813da90,
mail_user_r=0xffa3b304) at mail-storage-service.c:1444
No locals.
#7  0xf75fd0ff in mail_storage_service_lookup_next (ctx=0xf81399b0,
input=0xffa3b368, user_r=0xffa3b300, mail_user_r=0xffa3b304,
error_r=0xffa3b360) at mail-storage-service.c:1477
user = 0xf813da90
ret = 
#8  0xf77832c9 in client_create_from_input (input=0xffa3b368, fd_in=15,
fd_out=15, client_r=0xffa3b35c, error_r=0xffa3b360) at main.c:228
user = 0x81a4
mail_user = 0xc34a5
ns = 0xf814d000
client = 0xffa3b304
imap_set = 0xffa3b360
lda_set = 0xffa3b304
errstr = 0xf814db90 ""
mail_error = 49663
#9  0xf77834ea in login_client_connected (login_client=0xf813b450,
username=0xf81300c8 "1...@xtlv.de", extra_fields=0xf81300ac) at main.c:316
input = {module = 0xf778b616 "imap", service = 0xf778b616 "imap",
username = 0xf81300c8 "1...@xtlv.de", session_id = 0xf813b4c0
"doleKgxJ3s8l6zee", session_id_prefix = 0x0, session_create_time = 0, local_ip 
= {
family = 2, u = {ip6 = {__in6_u = {__u6_addr8 = "T&K\217", '\000'
, __u6_addr16 = {9812, 36683, 0, 0, 0, 0, 0, 0}, __u6_addr32
= {2404066900, 0, 0, 0}}}, ip4 = {s_addr = 2404066900}}}, remote_ip = {
family = 2, u = {ip6 = {__in6_u = {__u6_addr8 = "%\353\067\236",
'\000' , __u6_addr16 = {60197, 40503, 0, 0, 0, 0, 0, 0},
__u6_addr32 = {2654464805, 0, 0, 0}}}, ip4 = {s_addr = 2654464805}}},
  local_port = 0, remote_port = 0, userdb_fields = 0xf81300ac,
flags_override_add = (unknown: 0), flags_override_remove = (unknown: 0),
no_userdb_lookup = 0, debug = 0}
client = 0xf750331c 
flags = 
error = 0xf8130130 "auth_token=3763fd48bfdfeea2a3617cbda148915c19e125fa"
__FUNCTION__ = "login_client_connected"
#10 0xf749b0d1 in master_login_auth_finish (client=0xf813b450,
auth_args=0xf814d000, auth_args@entry=0xf81300a8) at master-login.c:210
login = 0xf813ab88
service = 0xf81383e8
__FUNCTION__ = "master_login_auth_finish"
#11 0xf749b6a9 in master_login_postlogin_input (pl=0xf813d720) at
master-login.c:284
login = 0xf813ab88
buf =
"1...@xtlv.de\tquota_rule=*:storage=5M\tuid=5000\tgid=5000\thome=/srv/vmail/xtlv.de/1000\tauth_token=3763fd48bfdfeea2a3617cbda148915c19e125fa\nQ\345td",
'\000' ,
"\006\000\000\000\020\000\000\000R\345td\260.\000\000\260>\000\000\260>\000\000P\001\000\000P\001\000\000\004\000\000\000\001\000\000\000"...
auth_args = 0xf81300a8
p = 0xf81300c0

Sieve and multi-auth databases

2017-02-21 Thread dovecot

Hello Community,

I am currently facing the following:

- dovecot+postfix+sieve are running smoothly using passwd-file authentication
- if I add a second authentication scheme (let's say MySQL), I face a problem 
with sieve:
-- receiving through Postfix is OK for both passwd-file and MySQL entries, 
and the mail is correctly stored
-- I am able to send from the server as before
** BUT sieve
== does not authenticate anymore from a client (using the same configuration 
as before, i.e. using IMAP credentials)
== does not process the messages anymore

Digging in the sieve logs, it reports not finding the scripts anymore 
for existing accounts found in the passwd-file.


Any idea?

Thank you!


 - - - - + - - - -
# Here is the 90-sieve.conf:

plugin {
  sieve = 
file:/sd/MAIL_IMAP_POP/%d/%n/_dovecot-sieve;active=/sd/MAIL_IMAP_POP/%d/%n/_dovecot-sieve-active

  sieve_default = /sd/myhost/var/lib/dovecot/sieve/default.sieve

  sieve = /sd/MAIL_IMAP_POP/%d/%n/__Sieve

  sieve_global_dir = /sd/myhost/var/lib/dovecot/sieve/global/

  sieve_before = /sd/MAIL_IMAP_POP/SieveBefore
  sieve_after = /sd/MAIL_IMAP_POP/%d/SieveAfter/
  sieve_after2 = /sd/MAIL_IMAP_POP/SieveAfter/

  sieve_plugins = sieve_extprograms
  sieve_extensions = +vnd.dovecot.filter
  sieve_filter_bin_dir = /etc/dovecot/sieve-filters

}

 - - - - + - - - -
# Authentication for SQL users. Included from 10-auth.conf.
passdb sql {
  driver = sql
  args = /etc/dovecot/dovecot-sql.conf.ext
  # Associated query:
  # password_query = SELECT email as user, password FROM virtual_users 
WHERE email='%u';

}

userdb sql {
  driver = static
  args = uid=vmail gid=vmail home=/sd/MAIL_IMAP_POP/%d/%n:LAYOUT=fs
}


Re: segfault in lib20_expire_plugin

2017-02-21 Thread Aki Tuomi

> On February 21, 2017 at 6:04 PM Mario Arnold  wrote:
> 
> 
> Hello,
> 
> after upgrade from [2.2.devel (34f7cc3)] to [2.2.devel (b3443fc)] dovecot
> stops with a segfault:
> 
> Fatal: master: service(imap): child 21179 killed with signal 11 (core dumped)
> imap[21179]: segfault at 0 ip f726eef1 sp ffa3b050 error 4 in
> lib20_expire_plugin.so[f726d000+3000]
> 

Hi!

Thank you for your report, we'll look into it.

Aki


Re: Sieve and multi-auth databases

2017-02-21 Thread Stephan Bosch
Op 2/21/2017 om 5:19 PM schreef dovecot@avv.solutions:
> Hello Community,
>
> I am currently facing the following:
>
> - dovecot+postfix+sieve are running smoothly using passwd-file
> authentication
> - if a add a second authentication scheme (let's say mysql), I face a
> problem with sieve:
> -- receiving thru postfix is ok on both passwd-file and mysql
> entries and correctly stored
> -- I am able to send from the server as before
> ** BUT sieve
> == does not authenticate anymore from a client (using the same
> configuration as before ie using imap credentials)
> == does not process the messages anymore
>
> Digging in the sieve logs, it reports not finding the scripts anymore
> for existing accounts found in the passwd-file
>
> Any idea?
>

You should enable mail_debug. That will provide details on what Sieve is
doing regarding file system storage paths. Also a full `dovecot -n`
output is helpful.
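
For example, set it in dovecot.conf (or pass it for a one-off test with 
doveadm -o mail_debug=yes):

mail_debug = yes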

Regards,

Stephan.



Re: Sieve and multi-auth databases

2017-02-21 Thread Stephan Bosch
Op 2/21/2017 om 6:09 PM schreef Stephan Bosch:
> Op 2/21/2017 om 5:19 PM schreef dovecot@avv.solutions:
>
>>  - - - - + - - - -
>> # Authentication for SQL users. Included from 10-auth.conf.
>> passdb sql {
>>   driver = sql
>>   args = /etc/dovecot/dovecot-sql.conf.ext
>>   # Associated query:
>>   # password_query = SELECT email as user, password FROM virtual_users
>> WHERE email='%u';
>> }
>>
>> userdb sql {
>>   driver = static
>>   args = uid=vmail gid=vmail home=/sd/MAIL_IMAP_POP/%d/%n:LAYOUT=fs
>> }


Based on the log file you sent me, the above sql userdb is the problem.
The configured home field makes no sense. A home directory is strictly a
filesystem path and does not accept options such as LAYOUT. That only
applies to a mail storage location; i.e., the "mail" field.
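
A corrected static userdb would keep home as a plain path and move the layout 
option into a separate mail field, along these lines (a sketch only; the 
maildir driver is an assumption, use whatever mailbox format the accounts 
actually have):

userdb {
  driver = static
  args = uid=vmail gid=vmail home=/sd/MAIL_IMAP_POP/%d/%n mail=maildir:/sd/MAIL_IMAP_POP/%d/%n:LAYOUT=fs
}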

What I find puzzling, though, is that this userdb does not appear in the
configuration you sent me.

Regards,


Stephan.


Re: Scaling to 10 Million IMAP sessions on a single server

2017-02-21 Thread Christian Balzer
On Tue, 21 Feb 2017 09:49:39 -0500 KT Walrus wrote:

> I just read this blog about scaling to 12 Million Concurrent Connections on a 
> single server, and it got me thinking:
> https://mrotaru.wordpress.com/2013/10/10/scaling-to-12-million-concurrent-connections-how-migratorydata-did-it/
> 

While that's a nice article, nothing in it was news to me or particularly
complex when one does large-scale stuff, like Ceph for example. 

> Would it be possible to scale Dovecot IMAP server to 10 Million IMAP sessions 
> on a single server?
> 
I'm sure Timo's answer will (or would, if he could be bothered) be along
the lines of: 
"Sure, if you give me all your gold and then some for a complete rewrite
of, well, everything".

What you're missing, and what makes this a bad idea, is that, as mentioned
before, scale-up only goes so far. 
I felt that my goal of 500k users/sessions in a 2-node active/active cluster
was quite ambitious, and currently I'm looking at 200k sessions as something
achievable with the current Dovecot and other limitations.

But even if you were to implement something that can handle 1 million or
more sessions per server, would you want to?
As in, if that server goes down, the resulting packet and authentication
storm will be huge and will most likely result in a proverbial shit storm later.
Having more than 10% or so of your customers on one machine and thus
involved in an outage that you KNOW will hit you eventually strikes me as
a bad idea.

I'm not sure how the design below meshes with Timo's lofty goals and
standards when it comes to security as well.

And a push with the right people (clients) to support IMAP NOTIFY would of
course reduce the number of sessions significantly.

Finally, Dovecot in proxy mode already scales quite well.

Christian

> I think the current implementation of having a separate process manage each 
> active IMAP session (w/ the possibility of moving idling sessions to a single 
> hibernate process) will never be able to deploy a single server managing 10 
> Million IMAP sessions.
> 
> But, would it be possible to implement a new IMAP server plugin that uses a 
> fixed configurable pool of “worker” processes, much like NGINX or PHP-FPM 
> does. These servers can probably scale to 10 Million TCP connections, if the 
> server is carefully tuned and has enough cores/memory to support that many 
> active sessions.
> 
> I’m thinking that the new IMAP server could use some external database (e.g., 
> Redis or Memcached) to save all the sessions state and have the “worker” 
> processes poll the TCP sockets for new IMAP commands to process (fetching the 
> session state from the external database when it has a command that is 
> waiting on a response). The Dovecot IMAP proxies could even queue incoming 
> commands to proxy many incoming requests to a smaller number of backend 
> connections (like ProxySQL does for MySQL requests). That might allow each 
> Dovecot proxy to support 10 Million IMAP sessions and a single backend could 
> support multiple front end Dovecot proxies (to scale to 100 Million 
> concurrent IMAP connections using 10 proxies for 100 Million connections and 
> 1 backend server for 10 Million connections).
> 
> Of course, the backend server may need to be beefy and have very fast NVMe 
> SSDs for local storage, but changing the IMAP server to manage a pool of 
> workers instead of requiring a process per active session, would allow bigger 
> scale up and could save large sites a lot of money.
> 
> Is this a good idea? Or, am I missing something?
> 
> Kevin


-- 
Christian Balzer        Network/Systems Engineer
ch...@gol.com   Global OnLine Japan/Rakuten Communications
http://www.gol.com/


Re: Scaling to 10 Million IMAP sessions on a single server

2017-02-21 Thread Ruga
A more efficient algorithm would reduce computational complexity, and the need 
for expensive power-hungry CPUs.
