Sieve Symlink Error
Hello,

I'm in the process of moving our mail server from RHEL 6 to RHEL 9. We will be moving to:

# dovecot --version
2.3.16 (7e2e900c1a)

My issue is that sieve does not appear to work on the new setup, whereas it does work on the old one. I made a simple filter rule:

# cat /u/mail0test/.sieve/ingo.sieve
# Sieve Filter
# Generated by Ingo (http://www.horde.org/apps/ingo/) (06/28/2024, 11:14:52 PM)
require "fileinto";
# Test
if header :comparator "i;ascii-casemap" :contains "Subject" "filtertest" {
    fileinto "Fun";
    stop;
}

Upon sending an email to this test account, the following appears in /var/log/maillog:

Jun 29 23:19:56 mail5 dovecot[3066980]: lda(mail0test)<3066980>: Warning: sieve: file storage: Active sieve script symlink /u/mail0test/.dovecot.sieve is broken: Invalid/unknown path to storage (points to /u/mail0test/.sieve).
Jun 29 23:19:56 mail5 dovecot[2987026]: doveadm(mail0test)<3066983>: Warning: sieve: file storage: Active sieve script symlink /u/mail0test/.dovecot.sieve is broken: Invalid/unknown path to storage (points to /u/mail0test/.sieve).
Jun 29 23:19:56 mail5 dovecot[2987026]: doveadm(mail0test)<3067016>: Warning: sieve: file storage: Active sieve script symlink /u/mail0test/.dovecot.sieve is broken: Invalid/unknown path to storage (points to /u/mail0test/.sieve).

Yet:

# ll /u/mail0test/.dovecot.sieve
lrwxrwxrwx. 1 mail0test sysguest 17 Jun 28 23:26 /u/mail0test/.dovecot.sieve -> .sieve/ingo.sieve
# file /u/mail0test/.sieve/ingo.sieve
/u/mail0test/.sieve/ingo.sieve: ASCII text

That is the filter file I've pasted above. I've set the following directives in /etc/dovecot/conf.d/90-sieve.conf via Puppet:

augeas { "dovecot_sieve_settings":
  context => "/files/etc/dovecot/conf.d/90-sieve.conf",
  changes => [
    "set plugin/sieve_dir ~/.sieve",
    "set plugin/sieve_user_log ~/.sieve/log"
  ],
  require => Package["dovecot"],
  notify  => Service["dovecot"];
}

The full configuration dump is attached.
/u in our environment is the path for user home directories, which is an NFS mount to a NetApp. The OS is Springdale Linux 9.2, a clone of Red Hat from before the IBM license change. It will soon be RHEL 9.4, as we have obtained a license, but for all intents and purposes Springdale 9.2 and RHEL 9.2 should be considered bug-for-bug compatible. The arch is x86_64, with both machines, mail5 and mail6 (replicated), having Intel(R) Xeon(R) Gold 6244 CPU @ 3.60GHz and 768 GB of memory. I have the same issue with SELinux in both enforcing and permissive modes, so this is not a permissions error due to SELinux.

Am I doing something wrong, or is this a bug? I've seen that there have been some previous issues similar to this that ended up being bugs in Pigeonhole, so here I am.

Thanks,
Ben

# dovecot -n
# 2.3.16 (7e2e900c1a): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.16 (09c29328)
# OS: Linux 5.14.0-284.11.1.el9_2.x86_64 x86_64 Springdale Open Enterprise Linux release 9.2 (Parma)
# Hostname: mail5.math.princeton.edu.private
auth_cache_negative_ttl = 5 mins
auth_cache_size = 32 M
auth_debug = yes
auth_mechanisms = plain login
auth_username_format = %Ln
auth_verbose = yes
auth_verbose_passwords = sha1
doveadm_password =  # hidden, use -P to show it
doveadm_port = 12345
first_valid_gid = 500
lda_mailbox_autocreate = yes
lda_mailbox_autosubscribe = yes
listen = *
mail_location = mbox:~/mail:INBOX=/var/spool/mail/%u:INDEX=/home/%u/indexes
mail_nfs_storage = yes
mail_plugins = " fts fts_squat zlib notify replication"
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date index ihave duplicate mime foreverypart extracttext
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
    special_use = \Drafts
  }
  mailbox Junk {
    special_use = \Junk
  }
  mailbox Sent {
    special_use = \Sent
  }
  mailbox "Sent Messages" {
    special_use = \Sent
  }
  mailbox Trash {
    special_use = \Trash
  }
  prefix =
}
passdb {
  args = /etc/dovecot/deny-users
  deny = yes
  driver = passwd-file
}
passdb {
  args = /etc/dovecot/dovecot-ldap.conf.ext
  driver = ldap
}
plugin {
  fts = squat
  fts_squat = partial=4 full=10
  mail_replica = tcp:mail6.math.princeton.edu.private:12345
  replication_sync_timeout = 2
  sieve = file:~/sieve;active=~/.dovecot.sieve
  sieve_dir = ~/.sieve
  sieve_user_log = ~/.sieve/log
}
protocols = imap lmtp
replication_max_conns = 64
service aggregator {
  fifo_listener replication-notify-fifo {
    group = mail
    mode = 0666
    user = dovecot
  }
  unix_listener replication-notify {
    group = mail
    mode = 0666
    user = dovecot
  }
}
service anvil {
  unix_listener anvil {
    group = mail
    mode = 0666
  }
}
service auth {
  unix_listener auth-userdb {
    mode = 0666
  }
}
service doveadm {
  inet_listener {
Re: Sieve Symlink Error
On 6/30/24 16:48, John Fawcett via dovecot wrote:

> On 30/06/2024 07:17, Benjamin Rose via dovecot wrote:
> [...]

Hi Ben

what version of Pigeonhole are you using? I read here that sieve_dir is deprecated since v0.3.1:

https://doc.dovecot.org/settings/pigeonhole/#pigeonhole_setting-sieve_dir

In any case, these settings look as though they don't really match up. Is the correct directory .sieve or sieve?

sieve = file:~/sieve;active=~/.dovecot.sieve
sieve_dir = ~/.sieve

Also, I was curious if your inboxes are really under /var/spool/mail/%u and your indexes under /home/%u/indexes?

John

___
dovecot mailing list -- dovecot@dovecot.org
To unsubscribe send an email to dovecot-le...@dovecot.org

Hello,

Thank you! Adding the line to Puppet to enforce that this exists in /etc/dovecot/conf.d/90-sieve.conf:

sieve = file:~/.sieve;active=~/.dovecot.sieve

has solved the problem. Filters now work as expected!
To answer your questions, I am using dovecot-pigeonhole-2.3.16-8.el9.x86_64, and yes, user mail spools live under /var/spool/mail (NFS-mounted mbox files) and indexes live under /home (local disk, soon to be SSD). That's only for users who are using mbox format / pine / mutt. Most users are using only modern clients, and in that case their storage is mdbox and kept entirely inside /home. This is configured on a per-user basis via an LDAP attribute named mailMessageStore. Either it exists, such as "mdbox:/home//mail", or it does not exist at all, in which case delivery falls back to old-style mbox format. If they are on mbox format, only INBOX is kept in /var/spool/mail; all other folders are kept in ~/mail (/u//mail/).

Ben
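To make the original warning concrete: the "Invalid/unknown path to storage" message appears because the active-script symlink must resolve to a script inside the configured sieve storage directory. Below is a simplified Python model of that check (this is an illustration of the idea, not Pigeonhole's actual implementation); with the shipped default of `sieve = file:~/sieve` the link into `~/.sieve` falls outside the storage directory, and with the fixed setting it does not.

```python
import os
import tempfile

def active_link_ok(active_link: str, storage_dir: str) -> bool:
    """Simplified model: the active-script symlink must point at a
    script located inside the configured storage directory, otherwise
    Pigeonhole reports the symlink as 'broken'."""
    target = os.path.realpath(active_link)   # resolve the symlink fully
    storage = os.path.realpath(storage_dir)
    return os.path.dirname(target) == storage

# Recreate the layout from the post in a temporary "home directory".
home = tempfile.mkdtemp()
os.makedirs(os.path.join(home, ".sieve"))
with open(os.path.join(home, ".sieve", "ingo.sieve"), "w") as f:
    f.write('require "fileinto";\n')
os.symlink(".sieve/ingo.sieve", os.path.join(home, ".dovecot.sieve"))

# With the defaulted storage 'sieve = file:~/sieve', the link target
# lies outside the storage directory, so the check fails:
print(active_link_ok(os.path.join(home, ".dovecot.sieve"),
                     os.path.join(home, "sieve")))      # False
# With the fix 'sieve = file:~/.sieve', target and storage agree:
print(active_link_ok(os.path.join(home, ".dovecot.sieve"),
                     os.path.join(home, ".sieve")))     # True
```

This also explains why `ll` and `file` looked fine: the symlink itself was never dangling; it simply pointed outside the directory the plugin was told to manage.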
dsync crashing
Hello,

I am running Dovecot 2.3.16 (7e2e900c1a) on RHEL 9.2. I attach my "doveconf -n" configuration. I have replication enabled between 2 servers, both very beefy with 16 cores of Intel(R) Xeon(R) Gold 6244 CPU @ 3.60GHz, 768 GB memory, and 100-gig ethernet. The configs are the same except for the replication part, which points at each other via a Puppet template. It seems to work well for most users on modern email clients using the mdbox storage format.

The issue I'm having is that some of my users are on the old-style mbox storage format so they can use legacy mail readers such as pine and mutt natively. For some of these users, syncing works just fine. For others, right now about a dozen users, the sync never completes, and I get the following error & backtrace in /var/log/maillog:

Aug 4 23:54:22 mail6 dovecot[1177530]: doveadm: Panic: file dsync-mailbox-import.c: line 2163 (dsync_mailbox_import_handle_mail): assertion failed: (array_count(&wanted_uids) > 0)
Aug 4 23:54:22 mail6 dovecot[1177530]: doveadm: Error: Raw backtrace:
  /usr/lib64/dovecot/libdovecot.so.0(backtrace_append+0x46) [0x7fa6ab05c486] ->
  /usr/lib64/dovecot/libdovecot.so.0(backtrace_get+0x22) [0x7fa6ab05c5a2] ->
  /usr/lib64/dovecot/libdovecot.so.0(+0x10a41b) [0x7fa6ab06b41b] ->
  /usr/lib64/dovecot/libdovecot.so.0(+0x10a4b7) [0x7fa6ab06b4b7] ->
  /usr/lib64/dovecot/libdovecot.so.0(+0x5d11a) [0x7fa6aafbe11a] ->
  dovecot/doveadm-server(+0x24029) [0x56433f2f7029] ->
  dovecot/doveadm-server(dsync_mailbox_import_changes_finish+0x29c) [0x56433f32fb9c] ->
  dovecot/doveadm-server(dsync_brain_sync_mails+0x8d5) [0x56433f331f15] ->
  dovecot/doveadm-server(+0x52255) [0x56433f325255] ->
  dovecot/doveadm-server(+0x52639) [0x56433f325639] ->
  dovecot/doveadm-server(+0x5fa53) [0x56433f332a53] ->
  /usr/lib64/dovecot/libdovecot.so.0(io_loop_call_io+0x6d) [0x7fa6ab081cbd] ->
  /usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x13a) [0x7fa6ab083bba] ->
  /usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0x54) [0x7fa6ab083c64] ->
  /usr/lib64/dovecot/libdovecot.so.0(io_loop_run+0x40) [0x7fa6ab083e20] ->
  dovecot/doveadm-server(+0x37d7c) [0x56433f30ad7c] ->
  dovecot/doveadm-server(+0x3974d) [0x56433f30c74d] ->
  dovecot/doveadm-server(+0x5020b) [0x56433f32320b] ->
  /usr/lib64/dovecot/libdovecot.so.0(io_loop_call_io+0x6d) [0x7fa6ab081cbd] ->
  /usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x13a) [0x7fa6ab083bba] ->
  /usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0x54) [0x7fa6ab083c64] ->
  /usr/lib64/dovecot/libdovecot.so.0(io_loop_run+0x40) [0x7fa6ab083e20] ->
  /usr/lib64/dovecot/libdovecot.so.0(master_service_run+0x17) [0x7fa6aaff2337] ->
  dovecot/doveadm-server(main+0x102) [0x56433f2f8f92] ->
  /lib64/libc.so.6(+0x3feb0) [0x7fa6aac3feb0] ->
  /lib64/libc.so.6(__libc_start_main+0x80) [0x7fa6aac3ff60] ->
  dovecot/doveadm-server(_start+0x25) [0x56433f2f9015]
Aug 4 23:54:22 mail6 dovecot[1177530]: doveadm: Fatal: master: service(doveadm): child 1117475 killed with signal 6 (core dumped)

I'm not sure what might be causing this, and wanted to see if anyone here could offer any suggestions. Please let me know if any additional information is needed.
Thanks,
Ben

# 2.3.16 (7e2e900c1a): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.16 (09c29328)
# OS: Linux 5.14.0-284.11.1.el9_2.x86_64 x86_64 Springdale Open Enterprise Linux release 9.2 (Parma)
# Hostname: mail5.math.princeton.edu.private
auth_cache_negative_ttl = 5 mins
auth_cache_size = 32 M
auth_debug = yes
auth_mechanisms = plain login
auth_username_format = %Ln
auth_verbose = yes
auth_verbose_passwords = sha1
doveadm_password =  # hidden, use -P to show it
doveadm_port = 12345
first_valid_gid = 500
lda_mailbox_autocreate = yes
lda_mailbox_autosubscribe = yes
listen = *
mail_location = mbox:~/mail:INBOX=/var/spool/mail/%u:INDEX=/home/%u/indexes
mail_nfs_storage = yes
mail_plugins = " fts fts_squat zlib notify replication"
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date index ihave duplicate mime foreverypart extracttext
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
    special_use = \Drafts
  }
  mailbox Junk {
    special_use = \Junk
  }
  mailbox Sent {
    special_use = \Sent
  }
  mailbox "Sent Messages" {
    special_use = \Sent
  }
  mailbox Trash {
    special_use = \Trash
  }
  prefix =
}
passdb {
  args = /etc/dovecot/deny-users
  deny = yes
  driver = passwd-file
}
passdb {
  args = /etc/dovecot/dovecot-ldap.conf.ext
  driver = ldap
}
plugin {
  fts = squat
  fts_squat = partial=4 full=10
  mail_replica = tcps:mail6.math.princeton.edu:12345
  sieve = file:~/.sieve;active=~/.dovecot.sieve
  sieve_dir = ~/.sieve/
  sieve_user_log = ~/.sieve/log
}
protocols = imap lmtp
replication_max_conns = 1024
service aggregator
Re: dsync crashing
Hello,

The requested information has been sent off-list.

Thanks,
Ben

On 8/5/24 02:10, Aki Tuomi via dovecot wrote:

Hi!

We have seen this before but were unable to reproduce the issue. Could you please send directly to me the core file processed with

https://raw.githubusercontent.com/dovecot/core/master/src/util/dovecot-sysreport

You should use

dovecot-sysreport --core /path/to/core /usr/lib/dovecot/imap

as parameters. Also, if possible, index files that match the core from both source & destination server. If you can't provide these, we'll be happy to look just at the core.

Aki

> On 05/08/2024 06:59 EEST Benjamin Rose via dovecot wrote:
> [...]
Re: dsync crashing
Hello,

Disappointing, but understandable. It is not trivial to move mbox users who have been used to direct access for decades now. As a workaround, is it possible to disable replication on a per-user basis? Since mbox files are stored on an NFS-mounted /u (vs. mdbox, which is stored locally in /home), I do not think replication is really needed for this dozen or so users. These users are differentiated by the lack of a mailMessageStore LDAP value in their DN, vs. users such as myself on the newer format, who have a mailMessageStore value of "mdbox:/home/benrose/mail".

Thanks,
Ben

On 8/5/24 03:38, Aki Tuomi via dovecot wrote:

I just realized that the bug is in mbox handling, which is nowadays frozen, so even if there is a bug, we won't fix it. Have you considered suggesting to your mbox users that they could use pine with imap instead of direct access?

Aki

> On 05/08/2024 09:35 EEST Benjamin Rose via dovecot wrote:
> [...]
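The per-user storage split described in this thread (mailMessageStore present means mdbox, absent means the global mbox default) can be sketched as a tiny lookup. This is an illustration only; the helper name `mail_location_for` and `DEFAULT_MBOX` are invented here, and the real selection is done by Dovecot from the userdb `mail` field, not by Python.

```python
# Global default taken from the doveconf -n dump in this thread.
DEFAULT_MBOX = "mbox:~/mail:INBOX=/var/spool/mail/%u:INDEX=/home/%u/indexes"

def mail_location_for(ldap_entry: dict) -> str:
    """Hypothetical helper: use the per-user mailMessageStore override
    if the LDAP entry carries one, otherwise fall back to the global
    mbox-style mail_location."""
    return ldap_entry.get("mailMessageStore", DEFAULT_MBOX)

# A user on the modern format, with an explicit override:
print(mail_location_for({"mailMessageStore": "mdbox:/home/benrose/mail"}))
# -> mdbox:/home/benrose/mail

# A legacy user with no override falls back to mbox:
print(mail_location_for({}))
```

This mirrors the `mailmessagestore=mail` mapping in dovecot-ldap.conf.ext: when the attribute is absent, the userdb supplies no `mail` field and the global `mail_location` applies.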
Re: dsync crashing
Hello,

Thanks everyone for the attention and advice. Following the documentation, I did implement "noreplicate". To do so, I found an unused attribute in my LDAP schema, mailDeliveryOption. I probably could have made a new one, but I wanted to be done before the workweek started in earnest, so I just quickly used that one. I wrote a quick script so that users with "mdbox" format specified in mailMessageStore got the value "no" in this attribute, and users without any mailMessageStore override (therefore using the default mbox storage) got the value "yes". I then amended my /etc/dovecot/dovecot-ldap.conf.ext and assigned those values into the userdb (and into the passdb prefetch too):

user_attrs = homeDirectory=home,uidNumber=uid,gidNumber=gid,mailmessagestore=mail,mailDeliveryOption=noreplicate
pass_attrs = uid=user,userPassword=password,homeDirectory=userdb_home,uidNumber=userdb_uid,gidNumber=userdb_gid,mailmessagestore=userdb_mail,mailDeliveryOption=userdb_noreplicate

I pushed the changes, restarted Dovecot, and then ran 'doveadm replicator replicate "*"' to force a global resync. After waiting a while, all 78 user accounts on the old-style storage format dropped out of the "doveadm replicator status" table. I was surprised to find there are still 78 such accounts, but that exactly matches the number of "mailDeliveryOption: yes" values in LDAP.

So now users with mbox storage on NFS mounts are not replicating, and the crashes / backtraces have disappeared from the logs. Hopefully there are no other concerns with this setup in case one of the 2 active hosts has a problem or is rebooted, such as index cache. But so far Dovecot seems to be smart enough to detect this sort of problem and fsck the indexes / reset the IMAP connection.

Thanks again, everyone!
Ben

On 8/5/24 07:57, Markus Bach via dovecot wrote:

Since v2.3.1 you can disable replication for a user by providing the noreplicate user database field <https://doc.dovecot.org/configuration_manual/authentication/user_database_extra_fields/#authentication-user-database-extra-fields>.

https://doc.dovecot.org/configuration_manual/replication/

On 8/5/24 13:14, Aki Tuomi via dovecot wrote:

I can't recall off the bat what version it was added, but there is a noreplicate key in the userdb reply that can be used to stop replication for a user.

Aki

> On 05/08/2024 14:09 EEST Benjamin Rose via dovecot wrote:
> [...]
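The classification script Ben describes was not posted, so the following is a reconstruction of its logic only: users whose mailMessageStore starts with "mdbox:" get mailDeliveryOption "no" (keep replicating), and users with no override (default mbox storage) get "yes" (which the userdb mapping turns into noreplicate). The function name and the in-memory user dicts are illustrative; a real version would read from and write to LDAP.

```python
def delivery_option(entry: dict) -> str:
    """Reconstruction of the classification rule described in the post:
    mdbox users keep replicating ("no"), mbox-default users do not ("yes")."""
    store = entry.get("mailMessageStore", "")
    return "no" if store.startswith("mdbox:") else "yes"

# Two example entries matching the accounts discussed in this thread.
users = [
    {"uid": "benrose", "mailMessageStore": "mdbox:/home/benrose/mail"},
    {"uid": "mail0test"},  # no override -> default mbox storage on NFS
]
for u in users:
    u["mailDeliveryOption"] = delivery_option(u)

print([(u["uid"], u["mailDeliveryOption"]) for u in users])
# [('benrose', 'no'), ('mail0test', 'yes')]
```

With `mailDeliveryOption=noreplicate` in user_attrs, a "yes" value in that attribute marks the user as noreplicate, which is why the 78 mbox accounts dropped out of "doveadm replicator status".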