Re: mail-crypt and mbox format
Many thanks Aki. Good to know. I'll look into mdbox as well. Doug

On 2/17/2022 11:06 AM, Aki Tuomi wrote: mdbox is compatible with the mail-crypt plugin, since it works differently compared to mbox. Aki

On 17 February 2022 15.58.13 UTC, "cincodemayo...@yahoo.com" wrote: Thanks for that Marc. I'm guessing that mdbox format isn't compatible with mail-crypt for the same reasons as mbox. Can anyone confirm? The built-in dovecot dsync utility so far looks much more promising. I have about 20 users consuming less than 50GB of email storage. I tend to doubt that on a good-sized Linux server I will run into storage issues or run out of inodes. For those still following along, mb2md didn't seem to work well for me. It lost or corrupted enough email that I gave up on it. Less than 1%, but still too high. --Doug

Marc wrote: What you should also consider, if you are using some distributed filesystem or block devices, is that the maildir format creates lots of small files, which in some storage backends will give you quite a bit of storage amplification. That is why I chose the mdbox format.

On Friday, February 11, 2022, 05:02:25 PM EST, cincodemayo...@yahoo.com wrote: Agreed, I am looking to migrate in place, if I migrate at all. I'm still researching the benefits (aside from mail-crypt) of using Maildir instead. It may be the new default, but to me it is an unknown. I am going to play with the mb2md utility (https://centos.pkgs.org/7/epel-x86_64/mb2md-3.20-17.el7.noarch.rpm.html) in a test instance to see if migrating in place is feasible. Thanks for your responses, very much appreciated.

On Friday, February 11, 2022, 03:57:39 PM EST, John Stoffel wrote: Unfortunately, this document doesn't really address the OP's need, which is to migrate mailbox formats on the same server. Migrating to a new server would work, where the new server was set up to use maildir as the default.
Maybe a new section could be added talking about this situation in more explicit terms, with some real examples of conversions? John

Aki> https://doc.dovecot.org/admin_manual/migrating_mailboxes/
Aki>
Aki

On 11/02/2022 21:29 cincodemayo...@yahoo.com wrote: Thank you for confirming. That was the conclusion I came to, particularly after seeing the structure of Maildir mailboxes and how the individual messages were encrypted. Clearly it would be difficult to do the same with an unlimited number of messages stored in a single file. A followup question if I may. I probably should just start another thread, but how difficult is it to convert to Maildir? Any gotchas? Any differences in how to manage the server? How effective is mb2md? Not looking for a cookbook, just an opinion on whether it is worth converting. We've used mbox format going back before CentOS 5, so change is hard. Environment is CentOS 7, Dovecot, Sendmail, Pigeonhole, MailScanner, Mailwatch SQL, Thunderbird clients. Thanks, Doug

On Friday, February 11, 2022, 09:31:09 AM EST, Aki Tuomi wrote: On 11/02/2022 16:26 cincodemayo...@yahoo.com wrote: Hi, My Dovecot server of many years has been set up to use mbox email folders. I want to implement mail-crypt, and after banging my head against a wall for a few days trying to get mail-crypt to work, I decided to try it against a test instance of my server that I reset to use Maildir format, and mail-crypt worked instantly. Does mail-crypt work with mbox format mail folders, or am I wasting my time unless I switch over to Maildir? The documentation at https://doc.dovecot.org/configuration_manual/mail_crypt_plugin/ doesn't explicitly say Maildir is required. Thanks in advance, Doug

Mail crypt will not work with it. Mbox format has limited support of features. --- Aki Tuomi
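For later readers wondering what an in-place conversion might look like: the migration doc referenced above describes dsync's mirror mode. A dry-run sketch only (it prints the per-user command rather than running it; the user names and mdbox path here are placeholders, not from the thread — remove the leading "echo" to convert for real, after pointing mail_location at the new format):

```shell
# Print (rather than run) a dsync mirror command per user.
for user in doug mary; do    # placeholder user names
    echo dsync -u "$user" mirror "mdbox:/mailstore/$user/mail"
done
```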
SIS and tracing the origin of an attachment
Hi All, I'm trying to trace an attachment within an SIS subdirectory to the email message(s) that link to it. I say messages because I'm also using dovecot dedup. My understanding is the linked file name is the hash value of the attachment's contents concatenated with the GUID of the email message. I have had marginal success with a message I created myself.

Example: I generated an email with two attachments. Here are the links in my attachment directory.

./26/c5/26c5c540d41779d83d2f5388041d05c67d720d9a-73eca8051acd27627231f2bc99a3
./65/cd/65cd73112a489ef07f17ed5740aa60358e2dd3fb-74eca8051acd27627231f2bc99a3

In my Sent folder the actual GUID of the message is 75eca8051acd27627231f2bc99a3. So the GUID of the attachment is based on the GUID of the message, but not exact. The second hex byte seems to be decremented as an offset of the attachment index from the GUID of the message. At least in my one example.

# doveadm dump /mailstore/doug/mail/mailboxes/Sent/dbox-Mails/dovecot.index | grep guid | tail -1
 - guid: 75eca8051acd27627231f2bc99a3

With that actual GUID I can find the message with a search:

# doveadm search -u doug mailbox Sent guid 75eca8051acd27627231f2bc99a3
doug e5711f1cf2c9294f7109059b96e4 53526

Now let's try to track down another email when only the HASH-GUID value is known. Here is one randomly picked.

./00/a2/00a2d5de3e41053d59bd10084826bbe094aa1c59-57857b09d1a327627e26f2bc99a3

# doveadm search -A mailbox '*' guid 57857b09d1a327627e26f2bc99a3
# doveadm search -A mailbox '*' guid 58857b09d1a327627e26f2bc99a3
# doveadm search -A mailbox '*' guid 59857b09d1a327627e26f2bc99a3

I repeated this, incrementing and decrementing from 5085... through 5f85..., and never located the message. This seems like it should be trivial, but I've been struggling with it for days. The GUID isn't random; there must be a way to track the attachment back. What am I missing?
And for those wondering why: our virus scanner flagged a number of attachments, some with several links, and I want to ask the users to delete the offending messages so I can purge them from the server. If I can find the emails, I can give them the mail folder, date/time, and subject of the message. -- Doug
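For later readers: the filename layout Doug describes (content hash, a dash, then a per-message suffix) can be split apart mechanically in plain shell. A sketch using one of the link names from above — note that the hash/suffix interpretation is Doug's observation from his own mail store, not documented behavior:

```shell
link="./00/a2/00a2d5de3e41053d59bd10084826bbe094aa1c59-57857b09d1a327627e26f2bc99a3"
name=${link##*/}    # basename: strip the hashed directory prefix
hash=${name%%-*}    # part before the dash: SHA-1 of the attachment contents
suffix=${name#*-}   # part after the dash: the per-message GUID-like value
echo "$hash"
echo "$suffix"
```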
Re: SIS and tracing the origin of an attachment
On 3/8/2022 5:51 PM, doug wrote: [snip]

I keep experimenting with this and I still haven't found a reliable way to track an attachment back to its original message so I can either notify the user or delete the message with doveadm. Is this not possible? I'm using mdbox if that matters. I see a similar thread going right now about virus scanning and deleting messages, but that is maildir and I suspect not using SIS for attachments. -- Doug
Re: SIS and tracing the origin of an attachment
On 3/15/2022 3:45 PM, Oscar del Rio wrote: On 2022-03-15 9:02 a.m., doug wrote: [snip]

The very few times I've needed to trace a SIS attachment to a mailbox, I just grep the "storage" folders for the file hash:

find username/storage -type f -exec grep 9ffa4b246589f8039d123ea909f1520e791bd880 {} +
username/storage/m.46588:X908 2409141 B72 9f/fa/9ffa4b246589f8039d123ea909f1520e791bd880-c9ee303687e13062cf740012bfe47a40
username/storage/m.46589:X1918 2409141 B72 9f/fa/9ffa4b246589f8039d123ea909f1520e791bd880-080ce71390e1306299730012bfe47a40
username/storage/m.46588: BSent X908 2409141 B72 9f/fa/9ffa4b246589f8039d123ea909f1520e791bd880-c9ee303687e13062cf740012bfe47a40
username/storage/m.46589: BINBOX X1918 2409141 B72 9f/fa/9ffa4b246589f8039d123ea909f1520e791bd880-080ce71390e1306299730012bfe47a40

-> Attachment in username's INBOX and Sent folders.

Thank you for the suggestion Oscar.
My mdbox files are encrypted and compressed, so unfortunately directly grepping them will not work.
Re: SIS and tracing the origin of an attachment
On 3/16/2022 6:05 AM, Patrick Cernko wrote: Hi all, On 15.03.22 22:40, doug wrote: [snip] Thank you for the suggestion Oscar. My mdbox files are encrypted and compressed, so unfortunately directly grepping them will not work.

You can use "doveadm dump" to decompress the files for grepping them, not sure about encryption:

find path/to/userhomes/mdbox/storage -name 'm.*' | \
while read f; do
    doveadm dump $f | \
        grep -E '^msg.(ext-ref|orig-mailbox|guid)' | \
        grep -B2 xx/yy/hash-guid || continue
    echo "Match in $f"
done

The dump also contains several other fields you might want to display. Best,

I'll give that a try. With access to the encryption key, doveadm dump should handle it just fine. I was hopeful there was a method using search and index files to minimize overhead. To summarize what I think I have learned on this journey: the link to the hash file only exists within the contents of the email body, but not in a way that doveadm search will find it. Hence raw scanning the contents of the emails is required. Many thanks for everyone's help. -- Doug
Permissions and ownership on /dev/shm/dovecot
Hi, Environment: Dovecot 2.3.18 running on CentOS 7, mdbox, LDAP users.

I'm in the process of moving my mailboxes to NFS and moving the lock and index files to temp storage, following the instructions from https://doc.dovecot.org/configuration_manual/nfs. I set mail_location as:

mail_location = mdbox:/mailstore/%u/mail:VOLATILEDIR=/dev/shm/dovecot/%u:LISTINDEX=/dev/shm/dovecot/%u/dovecot.list.index

What I discovered is that /dev/shm/dovecot is created by the initial user who accesses their mail from a client, and with permissions 700. This prevents subsequent users from creating their own index and lock files.

# ls -l /dev/shm/dovecot
total 0
drwx------ 2 mary users 60 Mar 25 10:00 mary

Sample error messages from the maillog during mail delivery and from a dsync script:

Mar 25 10:37:15 mailsrv1 dovecot: imap(doug)<19284>: Error: mkdir(/dev/shm/dovecot/doug) failed: Permission denied (euid=1002(doug) egid=100(users) missing +x perm: /dev/shm/dovecot, dir owned by 97:100 mode=0700)
dsync(test): Error: mkdir(/dev/shm/dovecot/test) failed: Permission denied (euid=2003(test) egid=100(users) missing +x perm: /dev/shm/dovecot, dir owned by 97:100 mode=0700)

I couldn't locate documentation or discussions on how to set the ownership or permissions for /dev/shm/dovecot in the Dovecot configuration files. As a hack, I added this to /usr/libexec/dovecot/prestartscript:

! [[ -d /dev/shm/dovecot ]] && mkdir /dev/shm/dovecot
chown dovecot:users /dev/shm/dovecot
chmod 770 /dev/shm/dovecot

This solved the problem, but left me wondering if I missed something obvious or if I am setting myself up for a problem later on, like with a Dovecot version upgrade. I could run these commands at bootup out of rc.local or a systemd script rather than customizing a Dovecot-provided script. Is there an appropriate way of doing this that I missed? TIA, Doug
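For later readers: on a systemd host such as CentOS 7, an alternative to patching the Dovecot prestart script would be a tmpfiles.d entry, which recreates the directory with the desired owner and mode at each boot. A sketch only — the filename is arbitrary, and the ownership simply mirrors the chown/chmod commands above:

```
# /etc/tmpfiles.d/dovecot-shm.conf  (hypothetical filename)
# type  path              mode  user     group  age
d       /dev/shm/dovecot  0770  dovecot  users  -
```

After creating the file, running systemd-tmpfiles --create applies it immediately without a reboot.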
Re: Permissions and ownership on /dev/shm/dovecot
Thank you João! I too am concerned whether this is a risky configuration. My understanding is that the list indexes are not critical, and that is why the recommendation in an NFS environment is to place just those and the lock files in memory. Other index files are on permanent storage:

[doug@mailserverdev doug]$ find ./ -name *index*
./mail/mailboxes/INBOX/dbox-Mails/dovecot.index.cache
./mail/mailboxes/INBOX/dbox-Mails/dovecot.index.log
./mail/storage/dovecot.map.index.log.2
./mail/storage/dovecot.map.index
./mail/storage/dovecot.map.index.log

Should I still be concerned? Doug

On 3/25/2022 11:46 AM, João Silva wrote: I'm not sure about that configuration. I have seen huge index cache files for users with lots of mail; putting those in memory may be a risk.

On 25/03/2022 14:56, doug wrote: [snip]
Re: Permissions and ownership on /dev/shm/dovecot
Good to know. Many thanks for your comments. Always appreciate when someone points out risks. As to my original question, are any others locating LISTINDEX files in memory successfully with unique UIDs? Or perhaps it only works out of the box with a virtual mail user?

On 3/25/2022 12:57 PM, João Silva wrote: In that case things can be more peaceful. I once had the mail on NFS storage and was told to move to local storage because of speed issues. I really don't know if the .cache and .log files should be put on fast local storage to speed things up.

On 25/03/2022 16:40, doug wrote: [snip]
Re: Permissions and ownership on /dev/shm/dovecot
Thanks Aki. For the number of users I have, I'm sure the NFS is fast enough, but I've implemented it in shared memory already and it seems to be working fine. I appreciate the confirmation that it is OK to make changes to the prestart script. Doug

On 4/10/2022 2:28 PM, Aki Tuomi wrote: Hi! Dovecot uses permissions from the mail user storage folder and, in the absence of that, the parent folder. Your pre-start script looks good. If your NFS is fast enough, it's ok to keep .cache and .log on NFS. Aki

On 25/03/2022 18:57 João Silva wrote: [snip]
Duplicate UID warning in dsync backups
Hi, Running dovecot version 2.3.18 with mdbox storage. I run daily backups using dsync on the same server into maildir format. Several times per week I will get an error like the one below that prevents backups from completing. The error is always on my user, never on any of my other couple dozen users.

dsync(doug): Warning: Deleting mailbox 'INBOX': UID=133060 already exists locally for a different mail: GUIDs don't match (1657786027.M158587P19048.maildev.domain.com vs 581a08178e8ecf62f0472bad4ea1)
dsync(doug): Error: Couldn't delete mailbox INBOX: INBOX can't be deleted.

Subsequent backups typically have a slightly different error.

dsync(doug): Warning: Deleting mailbox 'INBOX': UID=132994 GUID=407c871b0ec1c562b5102bad4ea1 is missing locally
dsync(doug): Error: Couldn't delete mailbox INBOX: INBOX can't be deleted.

Notice the UID and GUID are different from the first error. At this point I will identify 407c871b0ec1c562b5102bad4ea1 in the source mailbox and move it to another folder. More often than not that clears things up. The backup command I am using is:

dsync -u doug all backup maildir:/home/doug/.mailbkup/mailboxes

My questions are:

Is the "locally" referring to the source or the target of the backup?
What might be causing this problem?
What the heck is this GUID: "1657786027.M158587P19048.maildev.domain.com"? Note: The server is actually named mail.domain.com, not maildev.domain.com. All file names in the backup maildir have mail.domain.com in them. No files exist in the target maildir directory that begin with 1657786027*. My original migration from mbox format to mdbox was tested and run on a server named maildev.domain.com. Is that why I am seeing a reference to maildev here? Is that name cached somewhere? Can it be corrected? Does it matter?
Are there any options I should add to the dsync command to fix this problem?

-- Doug
Re: Duplicate UID warning in dsync backups
Posting here in case anyone stumbles across this thread while researching the same error message: I finally solved it. At some point, to be able to view the results of my backups, I had configured my test dovecot server (maildev.domain.com) to access the same maildir location used by the backups. I never disabled that. A cron job runs on maildev that creates an email in my inbox on maildev, assigning the next UID in sequence. The result is an email on the backup with the same UID as the source but a different GUID. I read elsewhere that dsync deletes the destination mailbox and rebuilds it when it encounters this type of conflict, but INBOX, being a "special" folder, cannot be deleted and rebuilt. The messages reported in my logs explain exactly what was happening. In simple terms:

Message: dsync(doug): Warning: Deleting mailbox 'INBOX': UID=133060 already exists locally for a different mail: GUIDs don't match (1657786027.M158587P19048.maildev.domain.com vs 581a08178e8ecf62f0472bad4ea1)
Explanation: dsync found a mismatch in the source and destination that it could not resolve and will delete the mailbox.

Message: dsync(doug): Error: Couldn't delete mailbox INBOX: INBOX can't be deleted.
Explanation: It tried to delete INBOX and rebuild it, but can't.

-- Doug

On 7/14/2022 9:24 AM, Doug wrote: [snip]
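For later readers chasing the same warning: the UID and the two mismatched GUIDs can be pulled out of the log line mechanically before going hunting with doveadm. A plain-shell sketch, using the warning line quoted above as the sample input:

```shell
# The warning line from the log above; extract UID and the two GUID values.
line="dsync(doug): Warning: Deleting mailbox 'INBOX': UID=133060 already exists locally for a different mail: GUIDs don't match (1657786027.M158587P19048.maildev.domain.com vs 581a08178e8ecf62f0472bad4ea1)"
uid=$(printf '%s\n' "$line" | sed -n 's/.*UID=\([0-9]*\).*/\1/p')
pair=${line##*\(}          # text after the opening parenthesis
pair=${pair%\)*}           # ...minus the closing parenthesis
first=${pair%% vs *}       # first GUID reported
second=${pair##* vs }      # second GUID reported
echo "$uid"
echo "$first"
echo "$second"
```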
Re: mdbox vs. maildir format
On 10/18/2022 7:46 PM, Steve Litt wrote: On Tue, 2022-10-18 at 16:48 +0200, Bernardo Reino wrote: On 18/10/2022 12:17, Michael wrote: > > [...] so, RAID is mandatory, which is already the case, but what about backup? How can I achieve a backup/snapshot of both the mdbox (NFS share) and the index files (local RAID) and assure they are consistent?

You can use doveadm to back up the mailboxes, which should work correctly even on a live system. My backup "strategy" (hopefully it deserves that name) is to weekly run something like:

for MAILBOX in $USERS; do
    doveadm expunge -u "$MAILBOX" mailbox Trash savedbefore 7d
    doveadm expunge -u "$MAILBOX" mailbox Spam savedbefore 30d
    doveadm purge -u "$MAILBOX"
    LOCATION2="mdbox:/srv/snap_mail/$MAILBOX/mdbox"
    doveadm -v backup -u "$MAILBOX" -P "$LOCATION2"
done

Do you think the preceding shellscript will work if I store my Dovecot messages in the Maildir form? Thanks, SteveT

Yes it will. The source format is your current format (maildir) and the target format is whatever you specify (mdbox: or maildir:). I do something similar with my daily backups using dsync. Like others, I was hesitant about using mdbox in the beginning, and my solution was to create my point-in-time backups in maildir format.

for user in $users; do
    dsync -u ${user} backup maildir:/home/$user/.mailbkup/mailboxes
done

This is a simplified version of my command. In my backup script this runs inside another loop to make backups for all users in parallel, but I only have about 20 users and plenty of excess CPU on my server. I run this about 4 times per day to sync changes to my backup copy. Once the initial sync is done, the incremental changes run pretty quickly. Doug
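Doug mentions wrapping his loop in another loop to back up users in parallel; one way to sketch that idea is with xargs -P instead of hand-rolled nesting. A dry-run sketch only — the user names are placeholders, and the leading "echo" prints each dsync command rather than running it (remove it to invoke dsync for real; -P4 caps concurrency at four jobs):

```shell
users="alice bob carol"   # placeholder user list
printf '%s\n' $users | xargs -P4 -I{} \
    echo dsync -u {} backup "maildir:/home/{}/.mailbkup/mailboxes"
```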
Sieve with LDA
I found an email that sieve stored in Deleted Messages incorrectly. The log messages show sieve doing that, but don't give me any indication of which sieve rule caused the problem. I went through it manually, but didn't see anything that matched. I seem to recall that there was a way to use sieve-test to show the rules and how they were applied, but I can't seem to get it to do that now. -- Doug
Re: Sieve with LDA
> On 17 December 2017, at 02:42, Jerry wrote: > > On Sat, 16 Dec 2017 18:17:39 -0800, Doug Hardie stated: > >> I found an email that sieve stored in Deleted Messages incorrectly. The log >> messages show sieve doing that, but don't give me any indication of which >> sieve rule caused the problem. I went through it manually, but didn't see >> anything that matched. I seem to recall that there was a way to use >> sieve-test to show the rules and how they were applied, but I can't seem to >> get it to do that now. >> > > It depends on how much info you want. Read the "man sieve-test" for more info. > > sieve-test -d- "script file" "mail-file" > > That will give you the most complete info. Omit the "-d-" for an abbreviated > output. > Thanks. I got it figured out now. The man page had me confused for awhile. Found the logic error in my script. Now to figure out how to remember this months from now... -- Doug
Re: Sieve with LDA
> On 17 December 2017, at 15:16, Stephan Bosch wrote: > > Op 12/17/2017 om 12:22 PM schreef Doug Hardie: >>> On 17 December 2017, at 02:42, Jerry wrote: >>> >>> On Sat, 16 Dec 2017 18:17:39 -0800, Doug Hardie stated: >>> >>>> I found an email that sieve stored in Deleted Messages incorrectly. The >>>> log >>>> messages show sieve doing that, but don't give me any indication of which >>>> sieve rule caused the problem. I went through it manually, but didn't see >>>> anything that matched. I seem to recall that there was a way to use >>>> sieve-test to show the rules and how they were applied, but I can't seem to >>>> get it to do that now. >>>> >>> It depends on how much info you want. Read the "man sieve-test" for more >>> info. >>> >>> sieve-test -d- "script file" "mail-file" >>> >>> That will give you the most complete info. Omit the "-d-" for an abbreviated >>> output. >>> >> Thanks. I got it figured out now. The man page had me confused for awhile. >> Found the logic error in my script. Now to figure out how to remember this >> months from now... > > If version is recent enough, you can also use: > https://wiki2.dovecot.org/Pigeonhole/Sieve/Configuration#Trace_Debugging > Thanks, I'll investigate that. -- Doug
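For anyone finding this thread later: the trace debugging Stephan points to is enabled with a few plugin settings, roughly like this (a sketch from memory; check the linked page for your Pigeonhole version's exact names and values):

```
plugin {
  # Write a trace file per delivery into this directory:
  sieve_trace_dir = ~/sieve-trace
  # "matching" is the most detailed level; others include "actions",
  # "commands" and "tests":
  sieve_trace_level = matching
  sieve_trace_debug = yes
}
```

This produces the same kind of per-rule trace that sieve-test gives, but for real deliveries.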
Re: creation of ssl-parameters fails
On 08/19/2018 09:38 AM, Kai Schaetzl wrote:
> the machine hasn't enough entropy

I believe you mentioned that you're using Ubuntu. If so, install haveged.
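A quick way to confirm the entropy diagnosis before installing anything (Linux-specific path; values in the low hundreds or less generally indicate starvation):

```shell
# Available kernel entropy, in bits; haveged exists to keep this topped up.
cat /proc/sys/kernel/random/entropy_avail
```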
Re: [Sieve] Matches on body content - looking for working example
> On 19 September 2018, at 12:54, Adam Raszkiewicz wrote:
>
> I have tried to do something like
>
> if body :content ["multipart"] :matches ["Original-Message-ID" “*”] { set "Original_Message_ID" "${0}"; }
>
> but instead of getting the Original Message ID I’m getting the value from the previous match, which was
>
> if envelope :matches "From" "*" { set "sender" "${0}"; }
>
> Is there any example of a working :matches matching-type with body?
>
> Thanks

I have the following that works:

if allof (header :contains "from" "fbl-no-re...@postmaster.aol.com",
          body :contains :raw "some text") {
    fileinto "Deleted Messages";
    stop;
}
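A side note on the capture part of the question: with the "variables" extension, match variables are set by a :matches header test, and "${1}" is the first wildcard while "${0}" is the entire matched value. Capturing a header is normally done like this (a hedged sketch; header and variable names are from the OP's example):

```
require ["variables"];

# Capture the Original-Message-ID header value; the "*" wildcard lands in ${1}.
if header :matches "Original-Message-ID" "*" {
    set "original_message_id" "${1}";
}
```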
Re: "no shared cypher", no matter what I try
I ran into that error message with a different application, and it turned out that the server certificate was expired.

-- Doug

> On 8 December 2018, at 12:22, David Gardner wrote:
>
> Have you tried connecting with openssl s_client, with a cipher list of all?
>
> My suspicion is that one of the pair of programs is only using old, weak ciphers [due to age] and the other only strong ones.
>
> David
Re: "no shared cypher", no matter what I try
Have you tried connecting with openssl s_client, with a cipher list of all?

My suspicion is that one of the pair of programs is only using old, weak ciphers [due to age] and the other only strong ones.

David
doveadm pw
mail# doveadm pw
Enter new password:
Retype new password:
{CRYPT}$2y$05$oSB6end9V.YumJMzON7lfeOL9N8TXK6jhYqjHOEnPd1NLZ9.QNaTy

I thought the default was supposed to be CRAM-MD5. I don't find anywhere I have entered CRYPT. There is one reference to it in auth-passwdfile.conf.ext, but changing that has no effect. Is this a bug, change, or my mistake? Thanks,

mail# doveconf -n
# 2.3.15 (0503334ab1): /usr/local/etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.15 (e6a84e31)
# OS: FreeBSD 13.0-RELEASE-p1 amd64 ufs
# Hostname: mail
auth_mechanisms = plain cram-md5
auth_stats = yes
base_dir = /var/run/home_mail/
first_valid_gid = 0
lda_mailbox_autocreate = yes
login_log_format_elements = user=<%u> method=%m rip=%r lip=%l mpid=%e %c %k session=<%{session}> port=%a
mail_gid =
mail_home = /var/mail/home_mail/%n
mail_location = maildir:/var/mail/home_mail/%n/Maildir
mail_log_prefix = "%s(%u)[%r]<%{session}>: "
mail_uid =
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date index ihave duplicate mime foreverypart extracttext
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
    autoexpunge = 5 days
    special_use = \Drafts
  }
  mailbox Junk {
    autoexpunge = 2 days
    special_use = \Junk
  }
  mailbox Sent {
    autoexpunge = 1 weeks
    special_use = \Sent
  }
  mailbox "Sent Messages" {
    autoexpunge = 2 days
    special_use = \Sent
  }
  mailbox Trash {
    autoexpunge = 2 days
    special_use = \Trash
  }
  prefix =
}
passdb {
  args = scheme=CRYPT username_format=%n /usr/local/etc/dovecot/users
  driver = passwd-file
}
plugin {
  mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename
  mail_log_fields = uid box msgid size from
  sieve = file:/var/mail/home_mail/%n/sieve;active=/var/mail/home_mail/%n/.dovecot.sieve
  stats_refresh = 30 secs
  stats_track_cmds = yes
}
postmaster_address = d...@sermon-archive.info
protocols = imap
service auth {
  unix_listener /var/spool/postfix/private/auth {
    group = postfix
    mode = 0660
    user = postfix
  }
  unix_listener auth-userdb {
    group = vmail
    mode = 0666
    user = vmail
  }
}
service imap-login {
  inet_listener imap {
    port = 143
  }
  inet_listener imaps {
    port = 993
    ssl = yes
  }
  inet_listener imaps2 {
    port = 998
    ssl = yes
  }
}
service stats {
  unix_listener stats-reader {
    group = vmail
    mode = 0660
    user = vmail
  }
  unix_listener stats-writer {
    group = vmail
    mode = 0660
    user = vmail
  }
}
ssl_cert =
Re: doveadm pw
> On 7 August 2021, at 09:50, Timo Sirainen wrote:
>
> On 7. Aug 2021, at 14.07, Alexander Dalloz wrote:
>>
>> On 07.08.2021 at 08:06, Doug Hardie wrote:
>>> mail# doveadm pw
>>> Enter new password:
>>> Retype new password:
>>> {CRYPT}$2y$05$oSB6end9V.YumJMzON7lfeOL9N8TXK6jhYqjHOEnPd1NLZ9.QNaTy
>>> I thought the default was supposed to be CRAM-MD5. I don't find anywhere I have entered CRYPT. There is one reference to it in auth-passwdfile.conf.ext, but changing that has no effect. Is this a bug, change, or my mistake? Thanks,
>>
>> https://doc.dovecot.org/configuration_manual/authentication/password_schemes/
>
> Looks like both doc.dovecot.org and the man page are wrong, each in a different way. CRYPT is what is actually the default.

Great. At least it wasn't something I caused. Thanks for all the work and assistance.

-- Doug
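For the archives: since CRYPT turned out to be the actual default, the scheme can be forced explicitly with doveadm pw's -s option, so the session at the top of the thread becomes (output elided; the scheme prefix is the part to check):

```
mail# doveadm pw -s CRAM-MD5
Enter new password:
Retype new password:
{CRAM-MD5}...
```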
Forcibly terminated after 10 milliseconds
After an OS upgrade (to FreeBSD 11 with pkg Dovecot 2.2.26) I'm getting this sort of thing in my logs:

Nov 3 12:15:16 toma dovecot: lda(doug): Error: program `/usr/local/lib/dovecot/sieve-pipe/growlmail' was forcibly terminated with signal 15

Debugging gives a little more info:

Nov 3 12:05:51 toma dovecot: lda(doug): Debug: waiting for program `/usr/local/lib/dovecot/sieve-pipe/growlmail' to finish after 0 msecs
Nov 3 12:05:51 toma dovecot: lda(doug): Debug: program `/usr/local/lib/dovecot/sieve-pipe/growlmail'(11794) execution timed out after 10 milliseconds: sending TERM signal

growlmail is specified via a sieve rule:

pipe :try :copy "growlmail";

This would seem to be a function of input_idle_timeout_msecs in lib-program-client/program-client-local.c, but it's not clear where this is set (or why it would be 10 milliseconds by default). Is there a way to up this timeout?

Thanks, Doug

# 2.2.26 (54d6540): /usr/local/etc/dovecot/dovecot.conf
# Pigeonhole version 0.4.15 (97b3da0)
# OS: FreeBSD 11.0-RELEASE-p2 amd64
auth_debug = yes
auth_debug_passwords = yes
auth_mechanisms = plain login
auth_verbose = yes
auth_verbose_passwords = plain
base_dir = /var/dovecot/
debug_log_path = /var/log/dovecot-debug.log
default_login_user = nobody
mail_debug = yes
mail_fsync = never
mail_location = maildir:~/Maildir:INDEX=/var/indexes/%u
mail_plugins = " fts fts_solr"
maildir_very_dirty_syncs = yes
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date index ihave duplicate mime foreverypart extracttext vnd.dovecot.pipe vnd.dovecot.execute
mbox_write_locks = fcntl
passdb {
  args = failure_show_msg=yes dovecot
  driver = pam
}
plugin {
  fts = solr
  fts_autoindex = yes
  fts_solr = url=http://localhost:4949/solr/dovecot/
  fts_tika = http://localhost:9998/tika
  sieve = ~/.dovecot.sieve
  sieve_dir = ~/sieve
  sieve_execute_bin_dir = /usr/local/lib/dovecot/sieve-execute
  sieve_extensions = +vnd.dovecot.pipe +vnd.dovecot.execute
  sieve_pipe_bin_dir = /usr/local/lib/dovecot/sieve-pipe
  sieve_plugins = sieve_extprograms
  sieve_vacation_dont_check_recipient = yes
}
protocols = imap sieve lmtp
service auth {
  service_count = 0
  unix_listener /var/spool/postfix/private/auth {
    group = postfix
    mode = 0660
    user = postfix
  }
  user = root
}
service imap-login {
  idle_kill = 0
  inet_listener imap {
    address = 127.0.0.1
    port = 143
  }
  inet_listener imaps {
    address = 0.0.0.0 127.0.0.1
    port = 993
  }
  service_count = 0
  user = dovecot
}
service lmtp {
  service_count = 0
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
    group = postfix
    mode = 0600
    user = postfix
  }
}
service managesieve-login {
  group = dovecot
  inet_listener sieve {
    port = 4190
  }
  user = dovecot
}
ssl_cert =
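If memory serves, the extprograms plugin exposes a dedicated execution timeout in the plugin block; a hedged sketch (setting name and value worth verifying against your Pigeonhole version's extprograms documentation):

```
plugin {
  sieve_plugins = sieve_extprograms
  # Allow piped programs such as growlmail up to 30 seconds to finish:
  sieve_pipe_exec_timeout = 30s
}
```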
Re: Problem with Horde "Mailbox does not support mod-sequences"
> On 13 February 2017, at 01:26, Luca Bertoncello wrote: > > Hi list! > > I already asked about this problem about two years ago, but I couldn't solve > my problem... > > Now I have a new Server, with Debian 8 and Dovecot 2.2.13-12 (from Debian > repositories) and Horde 5.2.13. > > When I delete an E-Mail, I always get the error "Mailbox does not support > mod-sequences". > It results in having the E-Mail not moved to Trash and I must update more > times the folder to see the E-Mail moved to Trash... > > Could someone help me to add this support for mod-sequences? > A quick search turned up: https://dovecot.org/list/dovecot/2015-February/099674.html Perhaps that will help.
Re: Sieve not filtering
> On 17 February 2017, at 08:24, Ben wrote:
>
> Hi,
>
> I have copied across a known-good sieve file from a working server and it's not filtering. Everything just gets chucked into INBOX.

What I did when encountering a similar issue was to take one of the messages from INBOX that should have been moved elsewhere and use sieve-test on it:

sieve-test -Tlevel=matching

That generates a lot of output as it goes through every line of the sieve file and shows the actual values that are used for the tests. However, it pointed out my problem quite clearly.
Postfix Ignoring lmtp, delivering straight to maildir
First I'd like to thank all the developers and contributors to dovecot. I've been using it for many years, and deeply appreciate your fine work. :)

dovecot --version
2.2.22 (fe789d2)

I have a working installation with postfix and dovecot, and I want to add sieve to it, so I am trying to configure postfix to use lmtp instead of 'virtual' for its delivery service. However it is ignoring that request, and for every message I get "status=sent (delivered to maildir)" and it shows up in my Inbox.

On my mail host I have 1 normal user, let's say the username is 'myuser'. I have postfix configured to accept mail for several different domains, and each domain has a lot of different mail usernames (I use this for mailing lists and such). I use the virtual_maps feature of postfix, and have a map file that looks like this:

ab...@dougbarton.us      myuser
hostmas...@dougbarton.us myuser
do...@dougbarton.us      myuser
...

All of this works great, and mail for all the different usernames and domains gets delivered into my one real user's Maildir, and I can see the mail with my IMAP clients.

I've configured sieve in dovecot, and I can see the socket for lmtp in /var/spool/postfix/private/. I can also see the managesieve port in netstat, and I can use a sieve client to connect to it and edit scripts, etc. So according to all the tutorials I've read my next step is this in postfix' main.cf:

virtual_transport = lmtp:unix:private/dovecot-lmtp

which I did, and postfix restarts with no errors. But it seems to avoid lmtp altogether, and as I mentioned above it delivers straight to my Maildir Inbox every time.

I do have a sieve file, and the ~/dovecot.sieve symlink exists. I created a very simple filter:

require ["fileinto", "imap4flags"];
if header :contains "Subject" "test" {
    fileinto "Junk";
}

which my sieve client says is correct syntax. Still no joy. :-/ Any thoughts or suggestions are welcome.
(And sorry this is so long, but based on my extensive searches it seems my configuration is a bit unique, so I explained it in some length.) Doug
Re: Postfix Ignoring lmtp, delivering straight to maildir
I considered sending to the postfix list instead, and would be happy to do that if it's more appropriate. In regards to your suggestion, I've tried local_transport and mailbox_transport, but both result in mail bouncing because "User doesn't exist." I've added my virtual_maps file to local_recipient_maps, and that still doesn't work. I did get the expected result with local_transport though (delivered to lmtp). So I'll keep poking that a bit. Thanks! Doug On 03/15/2017 10:16 AM, chaouche yacine wrote: Hello Doug, First off since this is a postfix configuration problem I beleive it would be better suited in the postfix mailing list. The way I understand it is that you are editing the virtual transport map when you should be changing the local transport map because you are delivering to a normal, system user, not a virtual user. Try and see if that works for you. -- Yassine. On Wednesday, March 15, 2017 6:12 PM, Doug Barton wrote: First I'd like to thank all the developers and contributors to dovecot. I've been using it for many years, and deeply appreciate your fine work. :) dovecot --version 2.2.22 (fe789d2) I have a working installation with postfix and dovecot, and I want to add sieve to it, so I am trying to configure postfix to use lmtp instead of 'virtual' for its delivery service. However it is ignoring that request, and for every message I get "status=sent (delivered to maildir)" and it shows up in my Inbox. On my mail host I have 1 normal user, let's say the username is 'myuser'. I have postfix configured to accept mail for several different domains, and each domain has a lot of different mail usernames (I use this for mailing lists and such). I use the virtual_maps feature of postfix, and have a map file that looks like this: ab...@dougbarton.us myuser hostmas...@dougbarton.us myuser do...@dougbarton.us myuser ... 
All of this works great, and mail for all the different usernames and domains gets delivered into my one real user's Maildir, and I can see the mail with my IMAP clients. I've configured sieve in dovecot, and I can see the socket for lmtp in /var/spool/postfix/private/. I can also see the managesieve port in netstat, and I can use a sieve client to connect to it and edit scripts, etc. So according to all the tutorials I've read my next step is this in postfix' main.cf: virtual_transport = lmtp:unix:private/dovecot-lmtp which I did, and postfix restarts with no errors. But, it seems to avoid lmtp altogether, and as I mentioned above it delivers straight to my Maildir Inbox every time. I do have a sieve file, and the ~/dovecot.sieve symlink exists. I created a very simple filter: require ["fileinto", "imap4flags"]; if header :contains "Subject" "test" { fileinto "Junk"; } which my sieve client says is correct syntax. Still no joy. :-/ Any thoughts or suggestions are welcome. (And sorry this is so long, but based on my extensive searches it seems my configuration is a bit unique, so I explained it in some length.) Doug
Re: Postfix Ignoring lmtp, delivering straight to maildir
Looks like this is a dovecot problem after all. :)

I can get Postfix to deliver to lmtp, but it's telling it to deliver to a fully qualified 'u...@domain.tld' address. Postfix says that it can't find that user, and that turns out to be the case.

dovecot: auth: Debug: master in: USER#0112#011u...@domain.tld#011service=lmtp

So I read up on that error, and it looked like I needed to do this in auth-system.conf.ext:

userdb {
  driver = passwd
  override_fields = username_format=%n
}

But that didn't work, same error. So how do I convince dovecot that u...@domain.tld is really a local Unix account named "user"??

Doug
Re: Postfix Ignoring lmtp, delivering straight to maildir
And the answer is, auth_username_format=%n in dovecot.conf. On 03/16/2017 01:04 AM, Doug Barton wrote: Looks like this is a dovecot problem after all. :) I can get Postfix to deliver to lmtp, but it's telling it to deliver to a fully qualified 'u...@domain.tld' address. Postfix says that it can't find that user, and that turns out to be the case. dovecot: auth: Debug: master in: USER#0112#011u...@domain.tld#011service=lmtp So I read up on that error, and it looked like I needed to do this in auth-system.conf.ext userdb { driver = passwd override_fields = username_format=%n } But that didn't work, same error. So how do I convince dovecot that u...@domain.tld is really local Unix account named "user" ?? Doug
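So for anyone hitting this thread later, the whole fix boils down to one line in dovecot.conf (%n keeps only the part of the login name before the @, so the fully qualified address resolves to the local system account):

```
# dovecot.conf: strip the domain before user lookup
auth_username_format = %n
```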
Re: dovecot problem with ssl
On 03/17/2017 01:21 AM, Nilton Jose Rizzo wrote: Hi all, I already searched for this error on google and nothing I never install dovecot, this is a first time. This error, I know, is too newbie and stupid, but I checked more than twice. root@server:/usr/local/etc/dovecot # sievec /home3/virtual/default.sieve doveconf: Fatal: Error in configuration file /usr/local/etc/dovecot/conf.d/10-ssl.conf line 7: Unknown setting: ssl root@server:/usr/local/etc/dovecot # I'm running a FreeBSD 12-current As someone else pointed out, that 7: means the error is on line 7 of the file. Go into dovecot's conf.d folder (in /usr/local/etc/) and do this: diff -u 10-ssl.conf.sample 10-ssl.conf If that doesn't clearly indicate the problem to you, post the results to the list. hope this helps, Doug
Re: sievec
Your pattern seems a little too complicated. See below.

On 03/16/2017 02:20 PM, Robert Moskowitz wrote:

> if exists "X-Spam-Flag" {

This isn't needed. If the flag doesn't exist, the 'if header ...' line won't match. You're doing two tests for every message where one is all that's needed.

> if header :contains "X-Spam-Flag" "NO" {

You can just do "YES" here, and go straight to the command (fileinto). Yes/No is a boolean flag; it will either be one or the other.

> fileinto "Spam"; stop;

It's not clear that you need the 'stop' here.

hope this helps,

Doug
Re: sievec
On 03/16/2017 11:50 PM, Robert Moskowitz wrote:
> Doug,
>
> On 03/16/2017 11:23 PM, Doug Barton wrote:
>> Your pattern seems a little too complicated. See below.
>
> I acquired this script from:
> http://www.campworld.net/thewiki/pmwiki.php/LinuxServersCentOS/Cent6VirtMailServer
> No telling where he got it from. So I greatly appreciate any and all advice.

Blindly following things you find on the Internet is not a path to success. :)

> I am writing my own howto, and I would like to think I am doing a better job of it.

You may consider whether your own depth of understanding is sufficient to improve the situation, or whether you are simply adding more noise. I wish you luck in any case.

> Not completely. I 'program' in English writing standards like IEEE 802.1AR, 802.15.9, and RFCs. I have not really programmed since the mid-80s with 'B'. I leave the converting of our carefully worded standards to executables to others :)

We all have our own areas of expertise. Nothing wrong with that.

> That said, is this what you are advising:

Not precisely. You want to remove the 'else' in there, as the clause you have will do the opposite of what you intend. Also note that I removed your superfluous square brackets.

require "fileinto";
if header :contains "X-Spam-Flag" "YES" {
    fileinto "Spam";
}
if header :contains "subject" "***SPAM***" {
    fileinto "Spam";
}

The best way to work with this is to start with simple rules on an individual client. Once you get a rule set that works, then you can move on to compiling it for the system. Always start as simple as possible though, and only add to it if your simple thing does not work.

This is a pretty good tutorial on the syntax and options for Sieve. Given your intended purpose you should pay special attention to the 'create' modifier for 'fileinto'. Also, I would accomplish both things in the same rule using 'anyof', which should be slightly more efficient (which could make a big difference to server load depending on how many users you are supporting).
https://support.tigertech.net/sieve

hope this helps,

Doug
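Putting that advice together, the combined rule would look something like this (a sketch; ':create' comes from the "mailbox" extension and makes fileinto create the folder if it's missing):

```
require ["fileinto", "mailbox"];

# One rule instead of two: match either spam signal, file into Spam.
if anyof (header :contains "X-Spam-Flag" "YES",
          header :contains "subject" "***SPAM***") {
    fileinto :create "Spam";
}
```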
Re: dovecot problem with ssl
This sounds like a problem with the FreeBSD port. You should take up the conversation on freebsd-po...@freebsd.org.

Good luck,

Doug

On 03/19/2017 03:13 PM, Nilton Jose Rizzo wrote:

> I solved my problem, but I have no idea how or why this fixed it. I recompiled dovecot without support for Postgres, SQLite3, and LDAP, configured MySQL support only, and everything works fine. Does anyone have any idea why this works? TIA

---
/*
** Nilton José Rizzo            UFRRJ
** http://www.rizzo.eng.br      http://www.ufrrj.br
** http://lattes.cnpq.br/0079460703536198
*/
Re: Tip: update dovecot MD5 password from PAM
This is nonsense. You made a mistake in your configuration. Before you try again next time, you should probably discuss your plan with the list to make sure you're on the right track. Good luck, Doug On 03/26/2017 03:13 PM, Ruga wrote: (I tried to protect dovecot passwords with bcrypt, but the mail clients refused it.)
Re: v2.2.29.1 released
On 04/12/2017 01:18 PM, Timo Sirainen wrote:
> (1) Timo Sirainen 1024 bit DSA key 40558AC9, created: 2002-05-25, expires: 2003-05-25 (expired)
>
> It's this one. Weird that nobody has complained about it being expired. Also in my keyring it expires "never". I tried pushing the key now, but not sure if it's slow or if it just doesn't work. Maybe I need to set some explicit expire date.

No need for a new one, the key is fine (old, but fine). It takes time for the newly uploaded key to propagate across servers. There's no expiration date on it now.

It's not usually a good idea to include an expiration date on a key without a good reason. We've learned over the years that people rarely refresh their key rings, so even if you're conscientious about updating the expiration date over the years, people won't see the changes.

Doug
Re: No doveadm-save in wiki2?
As an open source project, success relies on collaboration and contributions from the community. Perhaps you should consider documenting some of the things you've discovered to help others who come after.

Doug

On 5/10/2017 8:06 AM, KT Walrus wrote:
> Anyway, I'm hijacking my own thread in discussing these production issues, but maybe you and your team could consider bumping up the priority on documentation just a bit in the future…
Re: Can only enable Sieve scripts not edit them (Roundcube)
If it's feasible for you, try the 1.3-RC. Many improvements in the Sieve plugin (and other areas).

hope this helps,

Doug

On 5/5/2017 7:25 AM, Paul Littlefield wrote:

> Hello, (my first post so be gentle)
>
> I will be posting this to the Roundcube mailing list too, but thought it worth asking here as well. I have a Roundcube installation running Apache with Dovecot and Sieve. When logged in to Roundcube, a user can see the Sieve scripts and enable or disable them but NOT edit them or create new scripts. In other words, Dovecot will happily EDIT the script to mark it as 'false' but will not edit the actual rules or create a new rule... e.g.
>
> require ["fileinto","vacation"];
> # rule:[Out of Office]
> if false # true
> {
>     vacation :days 1 :addresses "m...@mydomain.com" :subject "Out of Office" :from "m...@mydomain.com" "I am out of the office.";
> }
> # rule:[Spam]
> if header :contains "subject" "{Spam}"
> {
>     fileinto "Spam";
> }
>
> Any ideas? Thanks,
Dovecot Statistics
I tried to set up statistics as shown on the Statistics wiki page. I encountered a problem with the mail_plugins setting for imap. In the protocol imap configuration, the wiki shows adding imap_stats to mail_plugins. When I do that, dovecot stops authenticating and throws error messages:

Jul 15 12:47:46 mail dovecot: imap(doug)[10.0.1.251]: Error: Couldn't load required plugin /usr/local/lib/dovecot/lib95_imap_stats_plugin.so: Plugin stats must be loaded also (you must set: mail_plugins=$mail_plugins stats)
Jul 15 12:47:46 mail dovecot: imap(doug): Error: Internal error occurred. Refer to server log for more information.

Changing that line to just add stats to mail_plugins resolves the problem. Hopefully that is an error in the wiki...

However, looking at the stats, I never see any values for any of the auth_ counters. They all show zero even though there have been numerous authentications by dovecot and postfix. I also notice that there is a new statistics output about every 3 minutes, and the numbers appear to cover only that 3-minute window. Is there a way to set the statistics so that I can see the last 24 hours, or the previous day?

-- Doug
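In config terms, the error message quoted above is asking for a layout along these lines (a hedged sketch of the 2.2-era settings; the key point is that the core stats plugin must be loaded wherever imap_stats is):

```
# Load the core stats plugin for all protocols...
mail_plugins = $mail_plugins stats

# ...and the imap-specific counters on top of it for imap only.
protocol imap {
  mail_plugins = $mail_plugins imap_stats
}
```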
Re: under another kind of attack
On 07/25/2017 07:54 AM, mj wrote: Since we implemented country blocking, Please don't do that. Balkanizing the Internet doesn't really benefit anyone, and makes innovation a lot more difficult. Instead, take a look at the fail2ban scenarios in this thread, which solve the actual problem with a precision tool, instead of a hammer. Doug
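For reference, fail2ban ships a ready-made dovecot filter, so the precision tool amounts to enabling one jail (a sketch; retry counts and ban times are illustrative and worth tuning to your traffic):

```
# /etc/fail2ban/jail.local
[dovecot]
enabled  = true
maxretry = 5
findtime = 600
bantime  = 3600
```

This bans only the offending source IPs, for a limited time, rather than whole countries.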
Re: is a self signed certificate always invalid the first time?
> On 10 August 2017, at 04:37, Alef Veld wrote: > > I completely agree (having said that I'm pretty new to all this so I might be > full of it). > > You should run your own CA if you have an active financial interest in your > company (say your the owner). No added benefit to have your certificate > certified by a third party, why would they care about that one client). > Ofcourse people would say "but ofcourse you would verify your own > certificate" but in that case they probably don't understand how it all works. > > Ofcourse once your own company grows large you run the same risk of entropy > (incorrect documentation or records, no trained staff, no up to date > procedures etc.) large companies have to deal with. Maybe if you had one > person working full time on it, or an automated process handling things it > would be more secure and reliable. > > Was diginotar the Dutch company, I think I remember that one. > > Sent from my iPhone > >> On 10 Aug 2017, at 08:18, Stephan von Krawczynski wrote: >> >> On Wed, 9 Aug 2017 08:39:30 -0700 >> Gregory Sloop wrote: >> >>> AV> So i’m using dovecot, and i created a self signed certificate >>> AV> with mkcert.sh based on dovecot-openssl.cnf. The name in there matches >>> AV> my mail server. >>> >>> AV> The first time it connects in mac mail however, it says the >>> AV> certificate is invalid and another server might pretend to be me etc. >>> >>> AV> I then have the option of trusting it. >>> >>> AV> Is this normal behaviour? Will it always be invalid if it’s not signed >>> AV> by a third party? >>> >>> Yes. >>> The point of a trusted CA signing your cert is that they have steps to >>> "verify" who you are and that you're "authorized" to issue certs for the >>> listed FQDNs. Without that, ANYONE could create a cert, and sign it and then >>> present it to people connecting to your mail server [perhaps using a MITM >>> style attack.] 
The connecting party would have no way to tell if your cert >>> vs the attackers cert was actually valid. >>> >>> It would be like showing up at the bank and having this exchange: >>> >>> You: "Hey, I'm Jim Bob - can I take money out of his account?" >>> Bank: "Do you have some ID?" >>> You: "Yeah! See, I have this plastic card with my picture and name, that I >>> ginned up in the basement." >>> >>> Now does the bank say: "Yeah, that looks fine." or do they say "You know we >>> really need ID [a certificate] that's authenticated and issued [signed] by >>> the state [third-party/trusted CA.]." >>> >>> I think it's obvious that accepting your basement produced ID would be a >>> problem. [Even if we also admit that while the state issued ID (or trusted >>> CA signed certs) has some additional value, it isn't without potential >>> flaws, etc.] >>> >>> The alternative would be to add your CA cert [the one you signed the server >>> cert with] to all the connecting clients as a trusted CA. This way your self >>> signed cert would now be "trusted." >>> >>> [The details are left as an exercise to the reader. Google is your friend.] >>> >>> -Greg >> >> This was exactly the global thinking - until the day DigiNotar fell. >> Since that day everybody should be aware that the true problem of a >> certificate is not its issuer, but the "trusted" third party CA. >> This could have been known way before of course by simply thinking about the >> basics. Do you really think your certificate gets more trustworthy because >> some guys from South Africa (just an example) say it is correct, running a >> _business_? Honestly, that is just naive. >> It would be far better to use a self-signed certificate that can be checked >> through some instance/host set inside your domain. Because only then the only >> one being responsible and trustworthy is yourself. And that is the way it >> should be. >> Everything else involving third party business is just bogus. 
>> --
>> Regards,
>> Stephan

If you use a self-signed certificate, your users either have to accept the certificate when requested or install your root certificate. Installing the root certificate is not easy to explain to non-tech users, even with step-by-step instructions and screen shots attached. I have taken this approach ever since the RSA patents expired, and it can be a pain at times. Users just don't understand the obnoxious warning (panic) messages the browsers put out, which are intended to keep them from accepting self-signed certificates. The browser developers don't understand the certificate trust issues either. Several Microsoft versions did not provide a way to accept the certificates; those users were forced to install your root certificate. However, as stated before, if you are only certifying your own certificates, then that is the most appropriate approach.

-- Doug
Re: is a self signed certificate always invalid the first time?
Having gone through the process to get "approved" certificates a few times, I don't believe it would be all that difficult to get a certificate with your domain name from several of the "approved" certificate authorities. The process some of them use to "certify" the applicant is pretty easy to spoof. Clearly the hackers don't see that as much of an obstacle. -- Doug > On 10 August 2017, at 13:41, Frank-Ulrich Sommer wrote: > > I can't see any security advantages of a self signed cert. If the keypair is > generated locally (which it should) a certificate signed by an external CA > can't be worse just by the additional signature of the external CA. > > Better security can only be gained if all users are urged to remove all > preinstalled trusted CAs from their mail clients (which seems impractical). > Else an attacker could still use a fake cert signed by one of those CAs. > Public key pinning could be an (academic) alternative and would still work > with a cert signed by an external CA without restrictions. > > If someone tells me to add security exceptions this rings all alarm bells. > Users who are not experts should not get used to doing this as they soon will > accept everything. > > Am 10. August 2017 21:40:25 MESZ schrieb Doug Hardie : >> >> >>> On 10 August 2017, at 04:37, Alef Veld wrote: >>> >>> I completely agree (having said that I'm pretty new to all this so I >> might be full of it). >>> >>> You should run your own CA if you have an active financial interest >> in your company (say your the owner). No added benefit to have your >> certificate certified by a third party, why would they care about that >> one client). Ofcourse people would say "but ofcourse you would verify >> your own certificate" but in that case they probably don't understand >> how it all works. >>> >>> Ofcourse once your own company grows large you run the same risk of >> entropy (incorrect documentation or records, no trained staff, no up to >> date procedures etc.) 
large companies have to deal with. Maybe if you >> had one person working full time on it, or an automated process >> handling things it would be more secure and reliable. >>> >>> Was diginotar the Dutch company, I think I remember that one. >>> >>> Sent from my iPhone >>> >>>> On 10 Aug 2017, at 08:18, Stephan von Krawczynski >> wrote: >>>> >>>> On Wed, 9 Aug 2017 08:39:30 -0700 >>>> Gregory Sloop wrote: >>>> >>>>> AV> So i’m using dovecot, and i created a self signed certificate >>>>> AV> with mkcert.sh based on dovecot-openssl.cnf. The name in there >> matches >>>>> AV> my mail server. >>>>> >>>>> AV> The first time it connects in mac mail however, it says the >>>>> AV> certificate is invalid and another server might pretend to be >> me etc. >>>>> >>>>> AV> I then have the option of trusting it. >>>>> >>>>> AV> Is this normal behaviour? Will it always be invalid if it’s not >> signed >>>>> AV> by a third party? >>>>> >>>>> Yes. >>>>> The point of a trusted CA signing your cert is that they have steps >> to >>>>> "verify" who you are and that you're "authorized" to issue certs >> for the >>>>> listed FQDNs. Without that, ANYONE could create a cert, and sign it >> and then >>>>> present it to people connecting to your mail server [perhaps using >> a MITM >>>>> style attack.] The connecting party would have no way to tell if >> your cert >>>>> vs the attackers cert was actually valid. >>>>> >>>>> It would be like showing up at the bank and having this exchange: >>>>> >>>>> You: "Hey, I'm Jim Bob - can I take money out of his account?" >>>>> Bank: "Do you have some ID?" >>>>> You: "Yeah! See, I have this plastic card with my picture and name, >> that I >>>>> ginned up in the basement." >>>>> >>>>> Now does the bank say: "Yeah, that looks fine." or do they say "You >> know we >>>>> really need ID [a certificate] that's authenticated and issued >> [signed] by >>>>> the state [third-party/trusted CA.]." >>>>> >>>&
Tracing Sieve actions
I encountered an interesting problem: mail from one originator was being filed directly into the Deleted folder by my sieve script. The sieve file was quite large and it was not obvious which entry was causing the issue. I recall there was a way to get sieve-test to show what is going on and which lines it used, but I could not replicate it tonight for anything. I ended up having to change all the rules that deliver to the Deleted folder to something else and test them one at a time to find the offending entry. It took a long time. How do you get sieve-test to show the actual path it took through the file? -- Doug
Re: Tracing Sieve actions
Thanks, that's basically the same as the man page. I finally figured out that the way to do it is with: sieve-test -t - -Tlevel=tests .dovecot.sieve /xxx where /xxx is the test message. That gives the actual line numbers. I thought I tried that combination, but apparently not. Anyway, I am going to save that command line somewhere "in a safe spot" ;-) -- Doug > On 19 July 2022, at 23:35, Aki Tuomi wrote: > > >> On 20/07/2022 09:34 EEST Doug Hardie wrote: >> >> >> I encountered an interesting problem that one originator was being dumped >> into the Deleted file directly by my sieve. The sieve file was quite large >> and it was not obvious which entry was causing the issue. I recall there >> was a way to get sieve-test to show what is going on and which lines it >> used, but I could not replicate it tonight for anything. I ended up having >> to change all the deliver to the Deleted files to something else and test >> one at a time to find the offending entry. It took a long time. How do you >> get sieve-test to show the actual path it took through the file? >> >> -- Doug > > Hi Doug, take a look at > https://doc.dovecot.org/configuration_manual/sieve/configuration/#trace-debugging > > It might help. > > Kind regards, > Aki
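For future reference, a generalized sketch of the trace invocation found above. The `-Tlevel=tests` form is confirmed working in this thread; the other level names are from the Pigeonhole sieve-test documentation as I understand it, so verify them against your installed version:

```
# Write trace output to stdout ("-t -"); -Tlevel controls the detail.
# "tests" shows each test as it is evaluated, with line numbers:
sieve-test -t - -Tlevel=tests .dovecot.sieve /path/to/test-message

# Other levels reportedly available: actions, commands, matching
sieve-test -t - -Tlevel=matching .dovecot.sieve /path/to/test-message
```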
Matching Addresses in Sieve
I have an email with the following header line:

From: 'Thank you!Kohls'

I am trying to match that with:

if address :contains "from" "Thank you!Kohls"
{
    addflag "\\Seen";
    fileinto "Junk";
    stop;
}

However, the matching portion of the from address is only the section between < and >. Since that section changes with each email, I can't use it. I wanted to match the text before the <. I have tried numerous formats for the if statement but none of them have worked. What is the proper way to make that match work? Thanks,

-- Doug
Re: Matching Addresses in Sieve
> On 30 September 2022, at 16:46, Shawn Heisey wrote: > > On 9/30/22 15:14, Doug Hardie wrote: >> I have an email with the following header line: >> >> From: 'Thank you!Kohls' >> >> >> I am trying to match that with: >> if address :contains "from" "Thank you!Kohls" >> { >> addflag "\\Seen"; >> fileinto "Junk"; >> stop; >> } >> >> However, the matching portion of the from address is only the section >> between < and >. Since there are changing sections that are different for >> each email, I can't use that. I wanted to match the stuff before <. I have >> tried numerous formats for the if statement but none of them have worked. >> What is the proper way to make that match work? Thanks, > > I did what looked like the right thing in a sieve plugin for roundcube: > > https://www.dropbox.com/s/abhpc7rf9rokmfl/junk_rule_for_sieve.png?dl=0 > > > And this is what that created in the script. Only one word of difference > from yours -- it looks at the entire From header and not an address. > > # rule:[testing] > if header :contains "from" "Testing" > { > addflag "\\Seen"; > fileinto "Junk"; > stop; > } > > Hope this helps. > Thanks. That was the magic incantation I needed. -- Doug
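To summarize the fix for anyone searching the archives: per RFC 5228, the Sieve `address` test extracts and matches only the addr-spec (the part between < and >), while the `header` test matches the raw header value, display name included. A minimal sketch (the `require` lines are assumptions about what the surrounding script already declares):

```
require ["imap4flags", "fileinto"];

# "address :contains" would only see the part between < and >,
# so match the whole From header, which includes the display name:
if header :contains "from" "Thank you!Kohls"
{
    addflag "\\Seen";
    fileinto "Junk";
    stop;
}
```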
Backups
I started to investigate using doveadm backup to back up my mail system. I have a small number of users and the mail store is not large. It uses maildir format. I set up a test system that is not connected to the internet and started up dovecot. I used the following command to back up one user:

doveadm backup -u ben remote:test

ben is a user in the mail store. test is the actual server name. That worked just fine. The maildir was copied completely (as best I can tell with ls). Then I tried the second user:

doveadm backup -u jean remote:test

This gives two error messages:

doveadm(jean)[]: Error: Mailbox INBOX: Failed to get attribute vendor/vendor.dovecot/pvt/server/sieve/files/.dovecot: Mailbox attributes not enabled

doveadm(jean)[]<0IwxIlI0jGMgUwAAZU03Dg>: Error: Remote command returned error 65: ssh test doveadm dsync-server -ujean -U

In addition, the maildir directories are created, but there are no emails in any of them (e.g., cur). What is the problem with the second user, and why does it behave differently from the first? -- Doug
Re: Backups
> On Dec 3, 2022, at 11:50 PM, Doug Hardie wrote: > > I started to investigate using doveadm backup to backup my mail system. I > have a small number of users and the mail store is not large. It uses > maildir format. I setup a test system that is not connected to the internet > and started up dovecot. I used the following command to backup one user: > > doveadm backup -u ben remote:test > > ben is the user is in the mail store. Test is the actual server name. That > worked just fine. The maildir was copied completely (as best as I can tell > with ls). Then I tried the second user: > > doveadm backup -u jean remote:test > > This gives 2 error messages: > > doveadm(jean)[]: Error: Mailbox INBOX: Failed to get > attribute vendor/vendor.dovecot/pvt/server/sieve/files/.dovecot: Mailbox > attributes not enabled > > doveadm(jean)[]<0IwxIlI0jGMgUwAAZU03Dg>: Error: Remote command returned error > 65: ssh test doveadm dsync-server -ujean -U > > In addition, the maildir directories are created, but there are no emails in > any of them (e.g., cur). What is the problem with the 2nd and why does it > behave differently from the first? I managed to resolve most of the issue. I use pigeonhole on the primary server. Apparently not having pigeonhole installed on the test machine caused the errors above. The test machine was never intended to receive mail hence, no need to install pigeonhole as the LDA would never be used. However, when the backup was running, it choked on transferring the sieve file. I have no idea where the mentioned file resides as I couldn't find it anywhere on the primary server. Installing pigeonhole resolved the issue for all but one user. 
With that user, I get the following error messages:

doveadm(doug)[]: Panic: file istream-seekable.c: line 238 (read_from_buffer): assertion failed: (*ret_r > 0)
Abort

doveadm(doug)[]: Error: read(test) failed: EOF (last sent=mail, last recv=mail_request (EOL))

doveadm(doug)[]: Error: Remote command returned error 134: ssh test doveadm dsync-server -udoug -U

In addition, only a few cur files are transferred in INBOX. Repeating the backup generates the same errors and no additional emails are transferred. I am wondering if the problem is something in the INBOX. -- Doug
Re: Backups
> On Dec 4, 2022, at 1:42 PM, Doug Hardie wrote: > >> On Dec 3, 2022, at 11:50 PM, Doug Hardie wrote: >> >> I started to investigate using doveadm backup to backup my mail system. I >> have a small number of users and the mail store is not large. It uses >> maildir format. I setup a test system that is not connected to the internet >> and started up dovecot. I used the following command to backup one user: >> >> doveadm backup -u ben remote:test >> >> ben is the user is in the mail store. Test is the actual server name. That >> worked just fine. The maildir was copied completely (as best as I can tell >> with ls). Then I tried the second user: >> >> doveadm backup -u jean remote:test >> >> This gives 2 error messages: >> >> doveadm(jean)[]: Error: Mailbox INBOX: Failed to get >> attribute vendor/vendor.dovecot/pvt/server/sieve/files/.dovecot: Mailbox >> attributes not enabled >> >> doveadm(jean)[]<0IwxIlI0jGMgUwAAZU03Dg>: Error: Remote command returned >> error 65: ssh test doveadm dsync-server -ujean -U >> >> In addition, the maildir directories are created, but there are no emails in >> any of them (e.g., cur). What is the problem with the 2nd and why does it >> behave differently from the first? > > I managed to resolve most of the issue. I use pigeonhole on the primary > server. Apparently not having pigeonhole installed on the test machine > caused the errors above. The test machine was never intended to receive mail > hence, no need to install pigeonhole as the LDA would never be used. > However, when the backup was running, it choked on transferring the sieve > file. I have no idea where the mentioned file resides as I couldn't find it > anywhere on the primary server. Installing pigeonhole resolved the issue for > all but one user. 
> With that user, I get the following error messages:
>
> doveadm(doug)[]: Panic: file istream-seekable.c: line 238 (read_from_buffer): assertion failed: (*ret_r > 0)
> Abort
>
> doveadm(doug)[]: Error: read(test) failed: EOF (last sent=mail, last recv=mail_request (EOL))
>
> doveadm(doug)[]: Error: Remote command returned error 134: ssh test doveadm dsync-server -udoug -U
>
> In addition, only a few cur files are transferred in INBOX. Repeating the backup generates the same errors and no additional emails are transferred. I am wondering if the problem is something in the INBOX.

I have pretty much reached a dead end. I am unable to determine the cause of this abnormal termination. The logged messages don't give much help. I can't tell if it is the primary server or the test machine that is terminating ssh. I have set up a second test machine. Both of the test machines are Raspberry Pi 4s. The real backup machine is Intel. On both test machines, two of the three users are backed up properly with no errors. Only one user has the issue. It builds the directory structure correctly, then starts transferring the INBOX cur files. Eight files are transferred correctly before it stops, and it is the same set of eight on both test machines. That leads me to believe there is something funny in one of the messages, but which one? There are over 100 messages in that directory, and the order of file transfer is not by date. How do I get doveadm to tell me which file is being transferred when the problem occurs?
Re: Backups
Interesting idea. I am not sure just how to go about that. Just deleting the files from the cur directory still leaves the various indexes unchanged. I don't see any way in doveadm to clean that up. I believe doveadm backup will build a new user that can be used to play around with. I don't want to delete anything from the real user's mailbox. Unfortunately, I have captured all the transmissions between the two systems. It doesn't show anything of value since everything is encrypted. I tried using the tcp transfer, but couldn't get it to work. -- Doug > On Dec 10, 2022, at 11:09 AM, John Tulp wrote: > > > > On Fri, 2022-12-09 at 20:03 -0800, Doug Hardie wrote: >>> On Dec 4, 2022, at 1:42 PM, Doug Hardie wrote: >>> >>>> On Dec 3, 2022, at 11:50 PM, Doug Hardie wrote: >>>> >>>> I started to investigate using doveadm backup to backup my mail >>>> system. I have a small number of users and the mail store is not >>>> large. It uses maildir format. I setup a test system that is not >>>> connected to the internet and started up dovecot. I used the >>>> following command to backup one user: >>>> >>>> doveadm backup -u ben remote:test >>>> >>>> ben is the user is in the mail store. Test is the actual server >>>> name. That worked just fine. The maildir was copied completely >>>> (as best as I can tell with ls). Then I tried the second user: >>>> >>>> doveadm backup -u jean remote:test >>>> >>>> This gives 2 error messages: >>>> >>>> doveadm(jean)[]: Error: Mailbox INBOX: >>>> Failed to get attribute >>>> vendor/vendor.dovecot/pvt/server/sieve/files/.dovecot: Mailbox >>>> attributes not enabled >>>> >>>> doveadm(jean)[]<0IwxIlI0jGMgUwAAZU03Dg>: Error: Remote command >>>> returned error 65: ssh test doveadm dsync-server -ujean -U >>>> >>>> In addition, the maildir directories are created, but there are no >>>> emails in any of them (e.g., cur). What is the problem with the >>>> 2nd and why does it behave differently from the first? 
>>> >>> I managed to resolve most of the issue. I use pigeonhole on the >>> primary server. Apparently not having pigeonhole installed on the >>> test machine caused the errors above. The test machine was never >>> intended to receive mail hence, no need to install pigeonhole as the >>> LDA would never be used. However, when the backup was running, it >>> choked on transferring the sieve file. I have no idea where the >>> mentioned file resides as I couldn't find it anywhere on the primary >>> server. Installing pigeonhole resolved the issue for all but one >>> user. With that user, I get the following error messages: >>> >>> doveadm(doug)[]: Panic: file >>> istream-seekable.c: line 238 (read_from_buffer): assertion failed: >>> (*ret_r > 0) >>> Abort >>> >>> doveadm(doug)[]: Error: read(test) failed: >>> EOF (last sent=mail, last recv=mail_request (EOL)) >>> >>> doveadm(doug)[]: Error: Remote command >>> returned error 134: ssh test doveadm dsync-server -udoug -U >>> >>> In addition, only a few cur files are transferred in INBOX. >>> Repeating the backup generates the same errors and no additional >>> emails are transferred. I am wondering if the problem is something >>> in the INBOX. >> >> >> >> >> I have pretty much reached a dead end. I am unable to determine the >> cause of this abnormal termination. The logged messages don't give >> much help. I can't tell if it is the primary server or test machine >> that is terminating ssh. I have setup a second test machine. Both of >> the test machines are raspberry pi 4s. The real backup machine is >> intel. On both the test machines, 2 of the three users are backedup >> properly with no errors. Only one use has the issue. It builds the >> directory structure correctly then starts transferring the INBOX cur >> files. Eight files are transferred correctly before it stop. It is >> the same set of 8 on both test machines. That leads me to believe >> there is something funny in one of the messages, but which one. 
There >> are over 100 messages in that directory. The order of file transfer >> is not by date. >> >> >> How do I get doveadm to tell me which file is being transferred when >> the problem occurs? > > if the problem is in one of the messages, perhaps change the test set. > for convenience, can temporarily rename/move things to get them out of > the way. divide test set in half, then in half again, etc., to zero in > on the one causing the issue, that'll show if a particular message is > problem or not. > > once you find a problem message, if the why isn't obvious, i'd try > looking at it in hex, check permissions, etc. > > j >
Re: Backups
> On Dec 10, 2022, at 11:09 AM, John Tulp wrote: > > > > On Fri, 2022-12-09 at 20:03 -0800, Doug Hardie wrote: >>> On Dec 4, 2022, at 1:42 PM, Doug Hardie wrote: >>> >>>> On Dec 3, 2022, at 11:50 PM, Doug Hardie wrote: >>>> >>>> I started to investigate using doveadm backup to backup my mail >>>> system. I have a small number of users and the mail store is not >>>> large. It uses maildir format. I setup a test system that is not >>>> connected to the internet and started up dovecot. I used the >>>> following command to backup one user: >>>> >>>> doveadm backup -u ben remote:test >>>> >>>> ben is the user is in the mail store. Test is the actual server >>>> name. That worked just fine. The maildir was copied completely >>>> (as best as I can tell with ls). Then I tried the second user: >>>> >>>> doveadm backup -u jean remote:test >>>> >>>> This gives 2 error messages: >>>> >>>> doveadm(jean)[]: Error: Mailbox INBOX: >>>> Failed to get attribute >>>> vendor/vendor.dovecot/pvt/server/sieve/files/.dovecot: Mailbox >>>> attributes not enabled >>>> >>>> doveadm(jean)[]<0IwxIlI0jGMgUwAAZU03Dg>: Error: Remote command >>>> returned error 65: ssh test doveadm dsync-server -ujean -U >>>> >>>> In addition, the maildir directories are created, but there are no >>>> emails in any of them (e.g., cur). What is the problem with the >>>> 2nd and why does it behave differently from the first? >>> >>> I managed to resolve most of the issue. I use pigeonhole on the >>> primary server. Apparently not having pigeonhole installed on the >>> test machine caused the errors above. The test machine was never >>> intended to receive mail hence, no need to install pigeonhole as the >>> LDA would never be used. However, when the backup was running, it >>> choked on transferring the sieve file. I have no idea where the >>> mentioned file resides as I couldn't find it anywhere on the primary >>> server. Installing pigeonhole resolved the issue for all but one >>> user. 
With that user, I get the following error messages: >>> >>> doveadm(doug)[]: Panic: file >>> istream-seekable.c: line 238 (read_from_buffer): assertion failed: >>> (*ret_r > 0) >>> Abort >>> >>> doveadm(doug)[]: Error: read(test) failed: >>> EOF (last sent=mail, last recv=mail_request (EOL)) >>> >>> doveadm(doug)[]: Error: Remote command >>> returned error 134: ssh test doveadm dsync-server -udoug -U >>> >>> In addition, only a few cur files are transferred in INBOX. >>> Repeating the backup generates the same errors and no additional >>> emails are transferred. I am wondering if the problem is something >>> in the INBOX. >> >> >> >> >> I have pretty much reached a dead end. I am unable to determine the >> cause of this abnormal termination. The logged messages don't give >> much help. I can't tell if it is the primary server or test machine >> that is terminating ssh. I have setup a second test machine. Both of >> the test machines are raspberry pi 4s. The real backup machine is >> intel. On both the test machines, 2 of the three users are backedup >> properly with no errors. Only one use has the issue. It builds the >> directory structure correctly then starts transferring the INBOX cur >> files. Eight files are transferred correctly before it stop. It is >> the same set of 8 on both test machines. That leads me to believe >> there is something funny in one of the messages, but which one. There >> are over 100 messages in that directory. The order of file transfer >> is not by date. >> >> >> How do I get doveadm to tell me which file is being transferred when >> the problem occurs? > > if the problem is in one of the messages, perhaps change the test set. > for convenience, can temporarily rename/move things to get them out of > the way. divide test set in half, then in half again, etc., to zero in > on the one causing the issue, that'll show if a particular message is > problem or not. 
> > once you find a problem message, if the why isn't obvious, i'd try > looking at it in hex, check permissions, etc. I have found the cause for the errors: Somehow I had overlooked the -D argument to doveadm. That is really helpful in diagnosing backup issues. I discovered that there was one email that was 130+ MB. Doveadm cannot handle that. It appears there is a message size limit somewhere. I don't know if that is changeable or not. Anyway, removing that email from the mailbox, deleting it from the Trash, and purging the account enabled backup to work properly. I don't get too many emails that large, but it does happen occasionally. How do I go about telling dovecot to handle those? -- Doug
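For anyone retracing this thread: -D is a global doveadm flag, so it goes before the subcommand. A sketch of the debug invocation that exposed the failing message, using the user and hostnames from this thread:

```
# Global -D turns on debug logging, which shows per-mailbox and
# per-message progress during the sync and made the oversized
# message visible:
doveadm -D backup -u doug remote:test
```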
Re: Backups
> On Dec 11, 2022, at 12:42 AM, John Tulp wrote: > > > -- > John Tulp > tulpex > > On Sun, 2022-12-11 at 00:05 -0800, Doug Hardie wrote: >>> On Dec 10, 2022, at 11:09 AM, John Tulp wrote: >>> >>> >>> >>> On Fri, 2022-12-09 at 20:03 -0800, Doug Hardie wrote: >>>>> On Dec 4, 2022, at 1:42 PM, Doug Hardie wrote: >>>>> >>>>>> On Dec 3, 2022, at 11:50 PM, Doug Hardie wrote: >>>>>> >>>>>> I started to investigate using doveadm backup to backup my mail >>>>>> system. I have a small number of users and the mail store is not >>>>>> large. It uses maildir format. I setup a test system that is not >>>>>> connected to the internet and started up dovecot. I used the >>>>>> following command to backup one user: >>>>>> >>>>>> doveadm backup -u ben remote:test >>>>>> >>>>>> ben is the user is in the mail store. Test is the actual server >>>>>> name. That worked just fine. The maildir was copied completely >>>>>> (as best as I can tell with ls). Then I tried the second user: >>>>>> >>>>>> doveadm backup -u jean remote:test >>>>>> >>>>>> This gives 2 error messages: >>>>>> >>>>>> doveadm(jean)[]: Error: Mailbox INBOX: >>>>>> Failed to get attribute >>>>>> vendor/vendor.dovecot/pvt/server/sieve/files/.dovecot: Mailbox >>>>>> attributes not enabled >>>>>> >>>>>> doveadm(jean)[]<0IwxIlI0jGMgUwAAZU03Dg>: Error: Remote command >>>>>> returned error 65: ssh test doveadm dsync-server -ujean -U >>>>>> >>>>>> In addition, the maildir directories are created, but there are no >>>>>> emails in any of them (e.g., cur). What is the problem with the >>>>>> 2nd and why does it behave differently from the first? >>>>> >>>>> I managed to resolve most of the issue. I use pigeonhole on the >>>>> primary server. Apparently not having pigeonhole installed on the >>>>> test machine caused the errors above. The test machine was never >>>>> intended to receive mail hence, no need to install pigeonhole as the >>>>> LDA would never be used. 
However, when the backup was running, it >>>>> choked on transferring the sieve file. I have no idea where the >>>>> mentioned file resides as I couldn't find it anywhere on the primary >>>>> server. Installing pigeonhole resolved the issue for all but one >>>>> user. With that user, I get the following error messages: >>>>> >>>>> doveadm(doug)[]: Panic: file >>>>> istream-seekable.c: line 238 (read_from_buffer): assertion failed: >>>>> (*ret_r > 0) >>>>> Abort >>>>> >>>>> doveadm(doug)[]: Error: read(test) failed: >>>>> EOF (last sent=mail, last recv=mail_request (EOL)) >>>>> >>>>> doveadm(doug)[]: Error: Remote command >>>>> returned error 134: ssh test doveadm dsync-server -udoug -U >>>>> >>>>> In addition, only a few cur files are transferred in INBOX. >>>>> Repeating the backup generates the same errors and no additional >>>>> emails are transferred. I am wondering if the problem is something >>>>> in the INBOX. >>>> >>>> >>>> >>>> >>>> I have pretty much reached a dead end. I am unable to determine the >>>> cause of this abnormal termination. The logged messages don't give >>>> much help. I can't tell if it is the primary server or test machine >>>> that is terminating ssh. I have setup a second test machine. Both of >>>> the test machines are raspberry pi 4s. The real backup machine is >>>> intel. On both the test machines, 2 of the three users are backedup >>>> properly with no errors. Only one use has the issue. It builds the >>>> directory structure correctly then starts transferring the INBOX cur >>>> files. Eight files are transferred correctly before it stop. It is >>>> the same set of 8 on both test machines. That leads me to believe >>>> there is something funny in one of the messages, but which one. There >>>> are over 100 messages in
Blacklistd
Are there any plans to interface to blacklistd? -- Doug ___ dovecot mailing list -- dovecot@dovecot.org To unsubscribe send an email to dovecot-le...@dovecot.org
Re: Blacklistd
> On Apr 20, 2023, at 02:04, Odhiambo Washington wrote: > > > > On Thu, Apr 20, 2023 at 9:08 AM Doug Hardie wrote: >> Are there any plans to interface to blacklistd? >> >> -- Doug > > Hi Doug, > > Since blacklistd uses PF, you can already use fail2ban or sshguard > <https://www.sshguard.net/> to achieve the same thing you are after. > Given that blacklistd is just an intermediary like fail2ban, is there a real > need for dovecot interfacing with it? Fail2ban and sshguard are log scanners. They are a very inelegant approach that requires a lot of horsepower to scan logs that are designed not for scanning but for human reading. Log formats tend to change with time, necessitating updates to the scanners. Blacklistd needs only a very small amount of code in the daemon to send a small packet to a socket when the decision is made to deny access, so there is no real delay in the actual blocking. Scanning large logs in a high-traffic environment is expensive. For a product intended for high-volume environments, I find it interesting that a log-scanning solution would be considered appropriate. -- Doug
Re: Replication going away?
> On Jul 26, 2023, at 05:01, Paul Kudla wrote: > > > I know this might have already been answered > > Can some one give a link to the paid site that does what dovecot project does > now > > more then happy to keep the lights on ! > > pls advise link ? > I believe the URL is https://www.open-xchange.com -- Doug
Multiple Domains
I have Dovecot 2.2.19 running. However, one client has a rather unusual need. There are multiple domains that currently access their mail on the server, but this one domain/client wants to use a different port for their users to access mail. Anyone trying to download this client's mail on the standard port would receive an invalid-user error (or whatever the proper error for that is). Likewise, only this client's users could access their mail on the new port. Is this possible? — Doug
Re: Multiple Domains
> On 18 January 2016, at 23:37, Steffen Kaiser > wrote: > > On Mon, 18 Jan 2016, Doug Hardie wrote: > >> I have Dovecot 2.2.19 running. However, one client has a rather unusual >> need. There are multiple domains that currently access their mail on the >> server. However, this on domain/client wants to use a different port for >> their users to access mail. Anyone on the standard port trying to download >> mail would receive an invalid user error (or whatever the proper error for >> that is). Likewise, only this clients users could access their mail on the >> new port. Is this possible? > > try to use %a in passdb, I'm not sure if "These variables work only in > Dovecot-auth" applies to this scenario, though. > > You could run two instances with different configs on the same host. That's the best idea I have seen. Thanks. Don't know why I didn't think of it ;-)
Re: Dovecot Bulletin
> On 20 February 2016, at 18:14, Timo Sirainen wrote: > > On 21 Feb 2016, at 02:50, Kevin Kershner wrote: >> >> I'd like to revisit and old post if I may, will/does Dovecot support the old >> qpopper "Bulletin" ability? >> >> Basically I need a simple way of posting bulletins to all domain users. >> Qpopper maintained a bulletin db for each user and sent them the next >> bulletin in sequence. > > I guess there could be a plugin that does this check on each login. But would > it actually be useful? Why would it be better than simply sending the mail to > all the users? For example: > > doveadm save -A < bulletin.txt

The reasons for bulletins as I see them are:

1. The doveadm save command is undocumented. It does show a cryptic line in the output of the command "doveadm", but it doesn't give any clue what it does or how to provide the message. Your note above provides considerably more information on that command. I tested it, and it works as you have indicated.

2. The doveadm save command causes the email to be saved in each user's mailbox. If you have a lot of users, that's a lot of wasted disk space. Qpopper's bulletins kept only one copy, and every user downloaded from that copy. All that was retained per user was a counter of the last bulletin's sequence number that was downloaded.

— Doug
Mailbox location
I am running a small server with a fixed number of users. Postfix is using the dovecot LDA so that I can run pigeonhole. I have set up a user file with the ids and passwords, and everything authenticates properly. Postfix uses that also. However, mail is consistently delivered to user@domain. How do I tell it to deliver to just user? I have tried setting a variety of different things like:

10-mail.conf:mail_location = maildir:/var/mail/home_mail/%u

userdb {
  driver = static
  args = uid= gid= home=/var/mail/home_mail/%u
}

and a few other things. None of them affected the mailbox location. Fortunately, this is a test system, as I have probably mucked up the config files by now.

— Doug
Re: Mailbox location
> On 16 June 2016, at 22:53, Doug Hardie wrote:
>
> I am running a small server with a fixed number of users. Postfix is using dovecot lda so that I can run pigeonhole. I have setup a user file with the ids and passwords and everything authenticates properly. Postfix uses that also. However, mail is consistently delivered to user@domain. How do I tell it to deliver to just user? I have tried setting a variety of different things like:
>
> 10-mail.conf:mail_location = maildir:/var/mail/home_mail/%u
>
> userdb {
>   driver = static
>   args = uid= gid= home=/var/mail/home_mail/%u
> }
>
> and a few other things. None of them affected the mailbox location. Fortunately, this is a test system as I probably have mucked up the config files by now.
>
> — Doug

here is config:

root@test:/usr/local/etc/dovecot/conf.d # doveconf -n
# 2.2.22 (fe789d2): /usr/local/etc/dovecot/dovecot.conf
# Pigeonhole version 0.4.13 (7b14904)
# OS: FreeBSD 10.3-RELEASE amd64 ufs
auth_debug = yes
auth_debug_passwords = yes
auth_mechanisms = plain login
auth_verbose_passwords = yes
base_dir = /var/run/home_mail/
first_valid_gid = 0
login_log_format_elements = user=<%u> method=%m rip=%r lip=%l mpid=%e %c %k session=<%{session}> port=%a
mail_debug = yes
mail_gid =
mail_location = maildir:/var/mail/home_mail/%u
mail_uid =
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date index ihave duplicate mime foreverypart extracttext
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
    special_use = \Drafts
  }
  mailbox Junk {
    special_use = \Junk
  }
  mailbox Sent {
    special_use = \Sent
  }
  mailbox "Sent Messages" {
    special_use = \Sent
  }
  mailbox Trash {
    special_use = \Trash
  }
  prefix =
}
passdb {
  args = scheme=CRYPT username_format=%u /usr/local/etc/dovecot/users
  driver = passwd-file
}
plugin {
  mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename
  mail_log_fields = uid box msgid size from
}
postmaster_address = d...@sermon-archive.info
protocols = imap
service auth {
  unix_listener /var/spool/postfix/private/auth {
    group = postfix
    mode = 0660
    user = postfix
  }
  unix_listener auth-userdb {
    group = vmail
    mode = 0666
    user = vmail
  }
}
service imap-login {
  inet_listener imap {
    port = 143
  }
  inet_listener imaps {
    port = 993
    ssl = yes
  }
}
ssl_cert =
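For readers hitting the same thing: the delivery path here follows directly from the %u in mail_location. A sketch of the variable expansion (an illustration of documented Dovecot semantics, not Dovecot code): %u is the full login, %n the part before the "@", %d the domain, which is why the later configs in these threads switch to %n.

```python
# Sketch of Dovecot-style config-variable expansion (not Dovecot source).
# With a virtual user logging in as doug@example.com, a mail_location
# built with %u includes the domain; %n yields just the user part.

def expand(template, login):
    """Expand %u (full login), %n (local part), %d (domain)."""
    local, _, domain = login.partition("@")
    return (template.replace("%u", login)
                    .replace("%n", local)
                    .replace("%d", domain))

print(expand("maildir:/var/mail/home_mail/%u", "doug@example.com"))
# maildir:/var/mail/home_mail/doug@example.com
print(expand("maildir:/var/mail/home_mail/%n", "doug@example.com"))
# maildir:/var/mail/home_mail/doug
```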
Deletion of mail from Junk mailbox
I have a Pigeonhole sieve running which directs some of my received mail to the Junk folder. That works just fine. However, a couple of minutes later, it is moved to the Deleted mailbox and deleted from Junk. At first I thought my client was doing that, so I shut down the client, and it still happens. Here are the log entries:

Jul 2 00:36:31 mail dovecot: imap(doug): copy from INBOX: box=Junk, uid=10842, msgid=, size=3340, from="jnilj"
Jul 2 00:36:31 mail dovecot: imap(doug): delete: box=INBOX, uid=55719, msgid=, size=3340, from="jnilj"
Jul 2 00:39:33 mail dovecot: imap(doug): copy from Junk: box=Deleted Messages, uid=31049, msgid=, size=3340, from="jnilj"
Jul 2 00:39:33 mail dovecot: imap(doug): delete: box=Junk, uid=10842, msgid=, size=3340, from="jnilj"
Jul 2 00:50:29 mail dovecot: imap(doug): expunge: box=Junk, uid=10842, msgid=, size=3340, from="jnilj"
Jul 2 00:50:29 mail dovecot: imap(doug): expunge: box=INBOX, uid=55719, msgid=, size=3340, from="jnilj"

Is this the intended way the Junk mailbox is supposed to work? I couldn't find any settings that appear to control (or affect) this behavior.

— Doug
Feature Request
I would like to request an additional optional argument for queue-id to dovecot-lda. The intended use for this argument is to include it in the logging. From what I can tell, the queue-id size is not consistent between the various MTAs, so it would need to be allocated dynamically when read during initialization. This element in the log messages would make it easier to find the trace of a received email. Generally I can easily get the queue-id generated by Postfix (or Sendmail; I use both). One grep would then give me the whole picture, rather than having to dig out the message-id and do a secondary grep to obtain the LDA log messages.

— Doug

I find it interesting that every submission to this list results in a quick response that says moderation is required since I "am not a member". However, I am a member...
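Until such an argument exists, the two-step correlation described above can be scripted: first resolve the MTA's queue-id to a message-id, then pull every line mentioning either. A hedged sketch; the sample log lines below are illustrative stand-ins, not authoritative Postfix or Dovecot log formats.

```python
import re

# Hypothetical combined mail log; the exact line formats are assumptions
# made to keep the sketch self-contained.
LOG = """\
postfix/qmgr[123]: 4F2A1B: from=<a@example.com>, size=3340
postfix/cleanup[124]: 4F2A1B: message-id=<xyz@example.com>
dovecot: lda(doug): msgid=<xyz@example.com>: saved mail to INBOX
"""

def trace(queue_id, log):
    """Two-pass trace: queue-id -> message-id -> every related line."""
    msgid = None
    for line in log.splitlines():
        m = re.search(re.escape(queue_id) + r": message-id=(<[^>]+>)", line)
        if m:
            msgid = m.group(1)
    return [l for l in log.splitlines()
            if queue_id in l or (msgid and msgid in l)]

for line in trace("4F2A1B", LOG):
    print(line)  # the MTA entries plus the LDA entry, in one pass
```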
Re: Deletion of mail from Junk mailbox
> On 2 July 2016, at 02:29, Noel Butler wrote:
>
> On 02/07/2016 19:16, Doug Hardie wrote:
>> I have a pigeon sive running which directs some of my received mail to the Junk folder. That works just fine. However, a couple minutes later, it is moved to Deleted mailbox and deleted from Junk. At first I thought my client was doing that so I shut down the client and it still happens. Here are the log entries:
>> Jul 2 00:36:31 mail dovecot: imap(doug): copy from INBOX: box=Junk, uid=10842, msgid=, size=3340, from="jnilj"
>> Jul 2 00:36:31 mail dovecot: imap(doug): delete: box=INBOX, uid=55719, msgid=, size=3340, from="jnilj"
>> Jul 2 00:39:33 mail dovecot: imap(doug): copy from Junk: box=Deleted Messages, uid=31049, msgid=, size=3340, from="jnilj"
>> Jul 2 00:39:33 mail dovecot: imap(doug): delete: box=Junk, uid=10842, msgid=, size=3340, from="jnilj"
>> Jul 2 00:50:29 mail dovecot: imap(doug): expunge: box=Junk, uid=10842, msgid=, size=3340, from="jnilj"
>> Jul 2 00:50:29 mail dovecot: imap(doug): expunge: box=INBOX, uid=55719, msgid=, size=3340, from="jnilj"
>> Is this the intended way the Junk maibox is supposed to work? I couldn't find any settings that appear to control (or affect) this behavior.
>> — Doug
>
> and your dovecot version is?
>
> I suggest you'll also need to show doveconf -n and example of sieve rules, because it doesnt seem right, certainly does not do that here.

After some more experimentation, it seemed like the messages above were created by a MUA and not the LDA. However, I was not able to identify the MUA that caused it. I modified logging to include the remote IP address and restarted dovecot with all the MUAs disabled. The problem has not reoccurred since. I have been restarting the MUAs one at a time, however I still don't know which one did it. I have only had a couple of junk emails in the last few days, so it's not much of a test yet. I guess the volume will return to normal tomorrow.
mail# doveconf -n
# 2.2.24 (a82c823): /usr/local/etc/dovecot/dovecot.conf
# Pigeonhole version 0.4.14 (099a97c)
# OS: FreeBSD 9.3-RELEASE-p43 amd64 ufs
auth_mechanisms = plain login
base_dir = /var/run/home_mail/
first_valid_gid = 0
lda_mailbox_autocreate = yes
login_log_format_elements = user=<%u> method=%m rip=%r lip=%l mpid=%e %c %k session=<%{session}> port=%a
mail_gid =
mail_location = maildir:/var/mail/home_mail/%n
mail_log_prefix = "%s(%u)[%r]<%{session}>: "
mail_uid =
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date index ihave duplicate mime foreverypart extracttext
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
    autoexpunge = 5 days
    special_use = \Drafts
  }
  mailbox Junk {
    autoexpunge = 2 days
    special_use = \Junk
  }
  mailbox Sent {
    special_use = \Sent
  }
  mailbox "Sent Messages" {
    special_use = \Sent
  }
  mailbox Trash {
    autoexpunge = 2 days
    special_use = \Trash
  }
  prefix =
}
passdb {
  args = scheme=CRYPT username_format=%n /usr/local/etc/dovecot/users
  driver = passwd-file
}
plugin {
  mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename
  mail_log_fields = uid box msgid size from
  sieve = file:/var/mail/home_mail/%n/sieve;active=/var/mail/home_mail/%n/.dovecot.sieve
}
postmaster_address = d...@sermon-archive.info
protocols = imap
service auth {
  unix_listener /var/spool/postfix/private/auth {
    group = postfix
    mode = 0660
    user = postfix
  }
  unix_listener auth-userdb {
    group = vmail
    mode = 0666
    user = vmail
  }
}
service imap-login {
  inet_listener imap {
    port = 143
  }
  inet_listener imaps {
    port = 993
    ssl = yes
  }
  inet_listener imaps2 {
    port = 998
    ssl = yes
  }
}
ssl_cert =
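For what it's worth, the autoexpunge settings in this config mean Dovecot itself will eventually remove old mail from Junk, Trash, and Drafts, independent of any MUA. A rough illustration of that behavior follows; this is a simplified sketch, not Dovecot's implementation (the real autoexpunge uses the save time recorded in the index, whereas this stand-in uses file mtime, and Dovecot expunges silently rather than copying anything to a "Deleted Messages" folder).

```python
import os
import time

def autoexpunge(maildir_cur, max_age_secs, now=None):
    """Remove (and report) files in a maildir cur/ older than max_age_secs.

    Assumption: file mtime stands in for the message save time.
    """
    now = time.time() if now is None else now
    expunged = []
    for name in sorted(os.listdir(maildir_cur)):
        path = os.path.join(maildir_cur, name)
        if now - os.path.getmtime(path) > max_age_secs:
            os.unlink(path)        # the "expunge" seen in mail_log
            expunged.append(name)
    return expunged
```

With `autoexpunge = 2 days` on Junk, a pass like this would be the equivalent of `autoexpunge(junk_cur, 2 * 86400)`.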
Re: Deletion of mail from Junk mailbox
> On 4 July 2016, at 13:18, Doug Hardie wrote: > >> >> On 2 July 2016, at 02:29, Noel Butler wrote: >> >> On 02/07/2016 19:16, Doug Hardie wrote: >>> I have a pigeon sive running which directs some of my received mail to >>> the Junk folder. That works just fine. However, a couple minutes >>> later, it is moved to Deleted mailbox and deleted from Junk. At first >>> I thought my client was doing that so I shut down the client and it >>> still happens. Here are the log entries: >>> Jul 2 00:36:31 mail dovecot: imap(doug): copy from INBOX: box=Junk, >>> uid=10842, msgid=, size=3340, >>> from="jnilj" >>> Jul 2 00:36:31 mail dovecot: imap(doug): delete: box=INBOX, >>> uid=55719, msgid=, size=3340, >>> from="jnilj" >>> Jul 2 00:39:33 mail dovecot: imap(doug): copy from Junk: box=Deleted >>> Messages, uid=31049, msgid=, >>> size=3340, from="jnilj" >>> Jul 2 00:39:33 mail dovecot: imap(doug): delete: box=Junk, uid=10842, >>> msgid=, size=3340, from="jnilj" >>> >>> Jul 2 00:50:29 mail dovecot: imap(doug): expunge: box=Junk, >>> uid=10842, msgid=, size=3340, >>> from="jnilj" >>> Jul 2 00:50:29 mail dovecot: imap(doug): expunge: box=INBOX, >>> uid=55719, msgid=, size=3340, >>> from="jnilj" >>> Is this the intended way the Junk maibox is supposed to work? I >>> couldn't find any settings that appear to control (or affect) this >>> behavior. >>> — Doug >> >> and your dovecot version is? >> >> I suggest you'll also need to show doveconf -n and example of sieve rules, >> because it doesnt seem right, certainly does not do that here. >> > > > After some more experimentation, it seemed like the messages above were > created by a MUA and not the LDA. However, I was not able to identify the > MUA that caused that. I modified logging to include the remote IP address, > restarted dovecot with all the MUAs disabled. Now the problem has not > reoccurred. I have been restarting the MUSs one at a time, however I still > don't know who did it. 
I have only had a couple junk emails in the last few > days so its not much of a test yet. I guess the volume will return to normal > tomorrow. > > mail# doveconf -n > # 2.2.24 (a82c823): /usr/local/etc/dovecot/dovecot.conf > # Pigeonhole version 0.4.14 (099a97c) > # OS: FreeBSD 9.3-RELEASE-p43 amd64 ufs > auth_mechanisms = plain login > base_dir = /var/run/home_mail/ > first_valid_gid = 0 > lda_mailbox_autocreate = yes > login_log_format_elements = user=<%u> method=%m rip=%r lip=%l mpid=%e %c %k > session=<%{session}> port=%a > mail_gid = > mail_location = maildir:/var/mail/home_mail/%n > mail_log_prefix = "%s(%u)[%r]<%{session}>: " > mail_uid = > managesieve_notify_capability = mailto > managesieve_sieve_capability = fileinto reject envelope encoded-character > vacation subaddress comparator-i;ascii-numeric relational regex imap4flags > copy include variables body enotify environment mailbox date index ihave > duplicate mime foreverypart extracttext > namespace inbox { > inbox = yes > location = > mailbox Drafts { >autoexpunge = 5 days >special_use = \Drafts > } > mailbox Junk { >autoexpunge = 2 days >special_use = \Junk > } > mailbox Sent { >special_use = \Sent > } > mailbox "Sent Messages" { >special_use = \Sent > } > mailbox Trash { >autoexpunge = 2 days >special_use = \Trash > } > prefix = > } > passdb { > args = scheme=CRYPT username_format=%n /usr/local/etc/dovecot/users > driver = passwd-file > } > plugin { > mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename > mail_log_fields = uid box msgid size from > sieve = > file:/var/mail/home_mail/%n/sieve;active=/var/mail/home_mail/%n/.dovecot.sieve > } > postmaster_address = d...@sermon-archive.info > protocols = imap > service auth { > unix_listener /var/spool/postfix/private/auth { >group = postfix >mode = 0660 >user = postfix > } > unix_listener auth-userdb { >group = vmail >mode = 0666 >user = vmail > } > } > service imap-login { > inet_listener imap { >port = 143 > } > inet_listener 
imaps { >port = 993 >ssl = yes > } > inet_listener imaps2 { >port = 998 >ssl = yes > } > } > ssl_cert = ssl_key = syslog_facility = local0 > userdb { > args = home=/var/mail/home_mail/%d/%n allow_all_users=yes > driver = static > } > verbose_proctitle = yes > protocol lda { > mail_plugins = " sieve" > } > protocol imap { > mail_plugins = " mail_log notify" > } > protocol pop3 { > mail_plugins = " mail_log notify" > } > mail#

Well, it's been running a few days now and I still have been unable to reproduce the problem. There has been quite a bit of mail moved by sieve to Junk, but none was deleted. It appears that changing the logging fixed the problem. I have a lot of trouble believing that, though. I still suspect one of the MUAs, but have no idea which one it might have been.

— Doug
[Dovecot] v2.0.13 problems after kernel patch for CVE-2011-1083 applied on Centos 5
Greetings,

This email is both a request for assistance/help and a heads-up.

[8irgehuq] CVE-2011-1083: Algorithmic denial of service in epoll.

After ksplice automatically installed the above patch on our mail servers, most/all IMAP/POP3 connections began experiencing time-outs trying to connect, or extreme timeouts in the auth procedure.

dovecot: imap-login: Disconnected (no auth attempts): rip=a.a.a.a, lip=b.b.b.b, TLS handshaking: Disconnected
dovecot: pop3-login: Disconnected (no auth attempts): rip=a.a.a.a, lip=b.b.b.b, TLS handshaking: Disconnected
dovecot: pop3-login: Panic: epoll_ctl(add, 6) failed: Invalid argument
dovecot: pop3-login: Error: Raw backtrace: /usr/lib64/dovecot/libdovecot.so.0 [0x3cb543baa0] -> /usr/lib64/dovecot/libdovecot.so.0 [0x3cb543baf6] -> /usr/lib64/dovecot/libdovecot.so.0 [0x3cb543afb3] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_handle_add+0x118) [0x3cb5447708] -> /usr/lib64/dovecot/libdovecot.so.0(io_add+0xa5) [0x3cb5446e15] -> /usr/lib64/dovecot/libdovecot.so.0(master_service_init_finish+0x1c6) [0x3cb54355a6] -> /usr/lib64/dovecot/libdovecot-login.so.0(main+0x136) [0x37a000bdf6] -> /lib64/libc.so.6(__libc_start_main+0xf4) [0x3cb301d994] -> dovecot/pop3-login(main+0x49) [0x401b99]
dovecot: master: Error: service(pop3-login): child 27603 killed with signal 6 (core not dumped - add -D parameter to service pop3-login { executable }
dovecot: master: Error: service(pop3-login): command startup failed, throttling
dovecot: imap-login: Panic: epoll_ctl(add, 6) failed: Invalid argument
dovecot: imap-login: Error: Raw backtrace: /usr/lib64/dovecot/libdovecot.so.0 [0x3cb543baa0] -> /usr/lib64/dovecot/libdovecot.so.0 [0x3cb543baf6] -> /usr/lib64/dovecot/libdovecot.so.0 [0x3cb543afb3] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_handle_add+0x118) [0x3cb5447708] -> /usr/lib64/dovecot/libdovecot.so.0(io_add+0xa5) [0x3cb5446e15] -> /usr/lib64/dovecot/libdovecot.so.0(master_service_init_finish+0x1c6) [0x3cb54355a6] -> /usr/lib64/dovecot/libdovecot-login.so.0(main+0x136) [0x37a000bdf6] -> /lib64/libc.so.6(__libc_start_main+0xf4) [0x3cb301d994] -> dovecot/imap-login(main+0x39) [0x402069]
dovecot: master: Error: service(imap-login): child 27604 killed with signal 6 (core not dumped - add -D parameter to service imap-login { executable }

Once this patch was removed, everything started working again.

Is it possible that dovecot is trying to re-add already-added connections to the polling list - which this specific 'patch' prevents?

We haven't dug deeper yet, but the error is being thrown from the method io_loop_handle_add in ioloop-epoll.c:

http://hg.dovecot.org/dovecot-2.0/file/aa8dfa085a99/src/lib/ioloop-epoll.c

Thanks
Doug
Re: [Dovecot] v2.0.13 problems after kernel patch for CVE-2011-1083 applied on Centos 5
On Feb 24, 2012, at 4:39 PM, Timo Sirainen wrote:

> On 25.2.2012, at 0.49, Doug Henderson wrote:
>
>> [8irgehuq] CVE-2011-1083: Algorithmic denial of service in epoll.
>>
>> After ksplice automatically installed the above patch on our mail servers, most/all IMAP/POP3 connections began experiencing time-outs trying to connect, or extreme timeouts in the auth procedure.
>
> I'd guess this patch is already in new Linux kernel versions, so other people should have seen any problems caused by it?

Actually, it was only released a couple of days ago (2/21) by Red Hat for EL 5.8; see: https://rhn.redhat.com/errata/RHSA-2012-0150.html

"A flaw was found in the way the Linux kernel's Event Poll (epoll) subsystem handled large, nested epoll structures. A local, unprivileged user could use this flaw to cause a denial of service. (CVE-2011-1083, Moderate)"

Our automated patching (ksplice) installed it at around 10am PST today. Other distributions may vary.

>> dovecot: pop3-login: Panic: epoll_ctl(add, 6) failed: Invalid argument
> ..
>> Once this patch was removed, everything started working again.
>>
>> Is it possible that dovecot is trying to re-add already-added connections to the polling list - which this specific 'patch' prevents?
>
> It shouldn't be possible .. EPOLL_CTL_ADD is done only once, EPOLL_CTL_MOD is done afterwards. And if the same fd is attempted to be added/modded twice, Dovecot should assert-crash first in ioloop_iolist_add().

We haven't spent enough time investigating to be sure, but epoll_ctl was certainly "in the thick of it". The only outward evidence (in logs, even with debug turned on) that there was anything wrong with Dovecot at all was the Panic shown for that method. Dovecot may have been an innocent bystander in this case, but something was causing it to fail on inbound IMAP/POP3 connections, and when the patch was removed everything started working again.
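The add-once/modify-after pattern Timo describes can be observed from userspace with Python's select.epoll wrapper (Linux-only; a sketch, not Dovecot's ioloop code). Notably, a duplicate EPOLL_CTL_ADD on the same fd is rejected by the kernel with EEXIST, not the EINVAL seen in the panic, which is consistent with the suspicion that the patched kernel, rather than a double add, was rejecting the call.

```python
import select
import socket

def demo():
    """Register once, modify afterwards; return errno of a duplicate ADD."""
    ep = select.epoll()
    a, b = socket.socketpair()
    try:
        ep.register(a.fileno(), select.EPOLLIN)                   # EPOLL_CTL_ADD, once
        ep.modify(a.fileno(), select.EPOLLIN | select.EPOLLOUT)   # EPOLL_CTL_MOD after
        try:
            ep.register(a.fileno(), select.EPOLLIN)               # duplicate ADD
            return None
        except OSError as e:
            return e.errno                                        # EEXIST from the kernel
    finally:
        ep.close()
        a.close()
        b.close()

if hasattr(select, "epoll"):  # select.epoll exists only on Linux
    import errno
    print(demo() == errno.EEXIST)
```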
Re: [Dovecot] v2.0.13 problems after kernel patch for CVE-2011-1083 applied on Centos 5
On Feb 25, 2012, at 3:15 AM, Morten Stevens wrote: > > Try it without ksplice. (yum update and reboot) I don't know if I'll be permitted to do that in a production environment - possibly a test one. I'll need to get some opinions from our Ops people as to if/how they might want to go about it. > Which kernel is running exactly? 2.6.18-274.3.1.el5 > Best regards, > > Morten
Re: [Dovecot] v2.0.13 problems after kernel patch for CVE-2011-1083 applied on Centos 5
On Feb 26, 2012, at 2:44 AM, Morten Stevens wrote: > On 26.02.2012 03:55, Doug Henderson wrote: >> On Feb 25, 2012, at 3:15 AM, Morten Stevens wrote: >>> >>> Try it without ksplice. (yum update and reboot) >> >> I don't know if I'll be permitted to do that in a production >> environment - possibly a test one. >> I'll need to get some opinions from our Ops people as to if/how they >> might want to go about it. >> >>> Which kernel is running exactly? >> >> 2.6.18-274.3.1.el5 > > That is probably the problem. The current RHEL 5.8 kernel is 2.6.18-308.el5. > There are many changes between 2.6.18-274 (EL 5.7) and 2.6.18-308 (EL 5.8). > So I do not know if it is a good idea to apply ksplice patches between minor > 5.x releases. > > Best regards, > > Morten Thanks Morten, We'll install the latest kernel on a test machine tomorrow and see how things go - we'll probably also attempt to reinstall the patch (if appropriate) and see if it still breaks things. Doug
Re: [Dovecot] dsync error: "Mailboxes don't have unique GUIDs"
Hey, just a point of clarification. In at least some of the cases (possibly all; I'll leave that up to Jeff to state), an initial dsync (as documented in Jeff's message) completed successfully, and the problem occurred when we ran it a second time (using exactly the same command) to catch any changes since the original sync (since the initial sync took many hours).

Doug

On Jun 22, 2012, at 2:24 PM, Jeff Gustafson wrote:

> I'm getting an error backing up mailboxes. I'm using the mirror command:
>
> dsync -fvo mail_home=/home/users/bob mirror ssh vmail@10.1.4.1 dsync -o mail_home=/home/.incoming_mail_migrations/users/bob
>
> dsync-remote(vmail): Error: Mailboxes don't have unique GUIDs: 1ef6ee37c694894d78310581a675 is shared by INBOX and INBOX
> dsync-remote(vmail): Error: command BOX-LIST failed
> dsync-local(vmail): Error: Worker server's mailbox iteration failed
>
> The mail user doesn't yet exist on the destination, thus the use of the mail_home parameter.
> I found a mailing list message where a person was having a similar problem but I couldn't find confirmation that the issue was resolved.
> In our case, the backup goes from maildir to mdbox format (we can't to convert to mdbox). Things seemed to be moving along, but there are quite a few examples of dsync failing. I think the issue happens more often with large mailboxes ( > 50GB ).
> We're running version 2.0.13.
> doveconf -n:
>
> # 2.0.13: /etc/dovecot/dovecot.conf
> # OS: Linux 2.6.18-274.12.1.el5 x86_64 CentOS release 5.7 (Final)
> auth_mechanisms = plain login
> default_client_limit = 15000
> default_process_limit = 1
> disable_plaintext_auth = no
> listen = *
> mail_gid = vmail
> mail_location = maildir:~/Maildir
> mail_plugins = zlib
> mail_uid = vmail
> mmap_disable = yes
> namespace {
>   inbox = yes
>   location =
>   prefix = INBOX.
>   separator = .
> }
> passdb {
>   args = /etc/dovecot/conf.d/dovecot-sql.conf.ext
>   driver = sql
> }
> plugin {
>   zlib_save = gz
> }
> protocols = imap pop3
> service auth {
>   client_limit = 1
>   unix_listener auth-userdb {
>     mode = 0666
>   }
> }
> service imap-postlogin {
>   executable = script-login /usr/bin/postlogin-imap.sh
>   user = $default_internal_user
> }
> service imap {
>   drop_priv_before_exec = yes
>   executable = imap
>   process_limit = 1
> }
> service pop3-postlogin {
>   executable = script-login /usr/bin/postlogin-pop.sh
>   user = $default_internal_user
> }
> }
> service pop3 {
>   drop_priv_before_exec = yes
>   executable = pop3
>   process_limit = 2500
> }
> ssl_cert =
> ssl_key =
> userdb {
>   driver = prefetch
> }
> userdb {
>   args = /etc/dovecot/conf.d/dovecot-sql.conf.ext
>   driver = sql
> }
> protocol lmtp {
>   mail_plugins = zlib
> }
> protocol lda {
>   mail_plugins = zlib
> }
> protocol imap {
>   mail_max_userip_connections = 100
>   mail_plugins = zlib
> }
> protocol pop3 {
>   mail_max_userip_connections = 30
>   mail_plugins = zlib
> }
>
> ...Jeff
[Dovecot] NFS lock contention for dovecot-uidlist
We are in the process of migrating away from Courier-IMAP/POP3 and Maildrop. I want to use Dovecot (LDA, IMAP, POP3). During my testing, it has worked great except for dotlocking on the dovecot-uidlist file.

The problem: When a delivery is being made with deliver and a mail client has the mailbox open (Thunderbird in this case), neither Thunderbird nor deliver can get a dotlock on the dovecot-uidlist file, causing both deliver and Thunderbird to hang until the dotlock timeout runs out and the lock gets replaced. Once the lock is replaced, both will go about their business until the next lock miss, and then hang again. Eventually, everything is delivered and Thunderbird wakes up. Looking at each of the processes with truss, they are looping trying to stat the dovecot-uidlist.lock file, which no longer exists.

We are using NFS, and based on reading through the mailing list archives, it can be a little difficult to get working reliably. But I've read quite a few posts with our same or similar configuration having good luck with the setup. To reduce multiple-box access issues for now, I've been doing all testing with a single NFS client.

Our configuration:
  NetApp filers for storage
  FreeBSD 6.2-RELEASE NFS clients
  Postfix 2.3.9 MTA
  Dovecot 1.0.0 LDA for local deliveries
  Dovecot 1.0.0 IMAP for pickup

My dovecot.conf file is at the end of this message. NFS access caching on the FreeBSD clients has been disabled (vfs.nfs.access_cache_timeout = 0, see NFS mount options below). Postfix destination recipient and concurrency limit for the Dovecot LDA is set to 1.

The NFS mount options:

rw,tcp,-r=32768,-w=32768,nfsv3,dumbtimer,noatime,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0

The dovecot.conf file:

protocols = imap imaps pop3 pop3s
disable_plaintext_auth = no
syslog_facility = local0
ssl_cert_file = /nethere/conf/dovecot/ssl-nh-cert.pem
ssl_key_file = /nethere/conf/dovecot/ssl-nh-key.pem
login_greeting = Server ready.
login_log_format_elements = user=<%u> ip=[%r] method=%m encryption=%c pid=%p
login_log_format = %U$: %s
mail_location = maildir:~/Maildir:INDEX=MEMORY
mmap_disable = yes
dotlock_use_excl = no
lock_method = dotlock
first_valid_uid = 200
last_valid_uid = 200
first_valid_gid = 200
last_valid_gid = 200
maildir_copy_with_hardlinks = yes
namespace private {
  prefix = INBOX.
  inbox = yes
}
protocol imap {
  login_executable = /usr/local/libexec/dovecot/imap-login
  mail_executable = /usr/local/libexec/dovecot/imap
  imap_client_workarounds = outlook-idle delay-newmail
}
protocol pop3 {
  login_executable = /usr/local/libexec/dovecot/pop3-login
  mail_executable = /usr/local/libexec/dovecot/pop3
  pop3_uidl_format = UID%u-%v
  pop3_client_workarounds = outlook-no-nuls oe-ns-eoh
}
protocol lda {
  postmaster_address = [EMAIL PROTECTED]
  sendmail_path = /usr/sbin/sendmail
  auth_socket_path = /var/run/dovecot/auth-master
  syslog_facility = mail
}
auth_executable = /usr/local/libexec/dovecot/dovecot-auth
auth default {
  mechanisms = plain digest-md5 cram-md5
  passdb ldap {
    args = /nethere/conf/dovecot/dovecot-ldap.conf
  }
  userdb ldap {
    args = /nethere/conf/dovecot/dovecot-ldap.conf
  }
  user = root
  socket listen {
    master {
      path = /var/run/dovecot/auth-master
      mode = 0600
      user = mailuser
      group = mailuser
    }
  }
}

It may just be "how it works", but the lock contention seems a little too fragile for busy mailboxes. Does anyone have any ideas? Thanks in advance for any assistance.

-Doug
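For anyone unfamiliar with the mechanism under discussion: a dotlock is taken by creating <file>.lock with O_CREAT|O_EXCL, and a contender polls with short sleeps until the lock file disappears. A minimal sketch follows (simplified; Dovecot's file-dotlock.c also does staleness detection and more). On the FreeBSD NFS client described here, it is the stat-style check in the retry loop that keeps "seeing" a lock file that is already gone.

```python
import errno
import os
import time

def acquire_dotlock(path, timeout=5.0, poll=0.1):
    """Take <path>.lock via O_CREAT|O_EXCL, polling until timeout."""
    lock = path + ".lock"
    deadline = time.monotonic() + timeout
    while True:
        try:
            fd = os.open(lock, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o644)
            os.close(fd)
            return True
        except OSError as e:
            if e.errno != errno.EEXIST:
                raise               # real error, not contention
        if time.monotonic() >= deadline:
            return False            # the "stall timeout" case from the thread
        time.sleep(poll)            # sub-second retry, as Dovecot does

def release_dotlock(path):
    os.unlink(path + ".lock")
```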
Re: [Dovecot] NFS lock contention for dovecot-uidlist
I wanted to follow up on my NFS lock issue with dovecot-uidlist. After doing some research, the current FreeBSD NFS client (as of 6.2-STABLE at least) appears to have a long-standing bug with caching on files with high create/removal rates. With the NFS access cache enabled or disabled, the NFS client still uses another cache for certain file attributes, and requires at least a second to go by before it will invalidate an entry for a deleted file. If the file attributes are accessed before the second is up, the timer is restarted.

Since the dotlocking code in Dovecot micro-sleeps for less than a second between each check for the .lock file, the entry is never removed from the kernel's cache, so the lstat() on the lock file always returns 0 (success). This never allows the lock file to be re-created until the stall timeout is reached. All Dovecot processes (IMAP, POP3, deliver) hang until the kernel invalidates the entry, causing the problem.

Using a sleep() call > 1 second after removing the lock and before attempting to use it again helps, but is obviously not very performance-friendly for a high-volume mail server. The other solution I've found that seems to work is updating the mtime on the .lock file if all other dotlocking checks fail in check_lock() in src/lib/file-dotlock.c (see attached patch). This invalidates the cached entry in the kernel and allows lstat() to return the correct response (-1), as the .lock file no longer exists. I didn't check whether the utime() fails, as a failure just means the kernel invalidated the entry when it should have, and it can be ignored.

I have performed some high-volume delivery (deliver) and pickup testing (imap and pop3) using the workaround, and so far everything has worked as expected for all Dovecot control files, including indexes. Does anyone know of any side effects the forced mtime update may have that I may not be seeing? Thanks again for any assistance.
-Doug

file-dotlock.c.diff
Description: Binary data

On May 17, 2007, at 10:45 AM, Doug Council wrote:

> We are in the process of migrating away from Courier-IMAP/POP3 and Maildrop. I want to use Dovecot (LDA, IMAP, POP3). During my testing, it has worked great except for dotlocking on the dovecot-uidlist file.
>
> [...]
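In outline, the workaround in the attached patch amounts to touching the suspect lock's mtime so the NFS client revalidates its cached attributes. A sketch of that idea, not the actual file-dotlock.c change:

```python
import os

def poke_stale_lock(lock_path):
    """Bump the mtime of a lock file that other checks say is stale.

    On the buggy FreeBSD NFS client, this forces the kernel to
    revalidate the cached entry, after which lstat() correctly fails
    with ENOENT and the lock can be re-created.
    """
    try:
        os.utime(lock_path, None)   # update mtime to current time
    except FileNotFoundError:
        pass  # kernel already invalidated the entry; nothing to do
```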
Re: [Dovecot] NFS lock contention for dovecot-uidlist
On May 20, 2007, at 5:43 PM, Timo Sirainen wrote:

If the lock file is really stale, updating mtime might cause the lock to never be overwritten. Would it work if you chown()ed the lock file to its current uid/gid (that lstat() returned) instead?

Yes, that worked as well. I've attached the diff for what I changed. Thanks for the suggestion.

-Doug

file-dotlock.c.diff
Description: Binary data
[Dovecot] Quota warning not generated
I am testing out the unofficial quota warning patch with Dovecot 1.0.0, and no matter what values I use for the storage or messages values, the warning script is never executed. My plugin settings are:

plugin {
  quota = maildir:storage=20480
  quota_warning = storage=80%:messages=10 /usr/local/bin/quota-warning.sh
}

I've tried the following quota_warning lines without any luck:

quota_warning = storage=80% /usr/local/bin/quota-warning.sh
quota_warning = messages=10 /usr/local/bin/quota-warning.sh
quota_warning = storage=80%:messages=10 /usr/local/bin/quota-warning.sh

The quota-warning.sh script is set to 0755 and currently just syslogs a warning using logger(1). Run independently of Dovecot, it works fine. Here is a sample maildirsize file (if that is at all important) from when I tried the messages=10 setting:

10485760S
335322 12
14016 1
58058 1
46529 1
37623 1
55252 1
61901 1
53422 1
56228 1

There are more than 10 messages in the mailbox, but the quota-warning.sh script never gets executed. When using storage=80%, the maildirsize file has the correct values (not included in this message), but it never executes the script either.

Does anyone have any ideas?

Thanks,
-Doug
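For readers unfamiliar with the file quoted above, a sketch of how a maildirsize file can be read, assuming the Maildir++ convention: the first line is the quota definition ("<bytes>S" and/or "<count>C", comma-separated), and each following line is a "<bytes> <messages>" delta that is summed to get current usage. This is an illustrative parser, not Dovecot's quota code.

```python
def parse_maildirsize(text):
    """Return (quota_bytes, quota_msgs, used_bytes, used_msgs)."""
    lines = text.strip().splitlines()
    quota_bytes = quota_msgs = None
    for part in lines[0].split(","):       # e.g. "10485760S" or "10485760S,1000C"
        if part.endswith("S"):
            quota_bytes = int(part[:-1])
        elif part.endswith("C"):
            quota_msgs = int(part[:-1])
    used_bytes = used_msgs = 0
    for line in lines[1:]:                 # per-recalc/per-delivery deltas
        b, m = line.split()
        used_bytes += int(b)
        used_msgs += int(m)
    return quota_bytes, quota_msgs, used_bytes, used_msgs

sample = """10485760S
335322 12
14016 1
58058 1
46529 1
37623 1
55252 1
61901 1
53422 1
56228 1
"""
qb, qm, ub, um = parse_maildirsize(sample)
print(ub, "bytes in", um, "messages; storage quota", qb, "bytes")
```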
Re: [Dovecot] Quota warning not generated
On Fri, 25 May 2007, Nicolas Boullis wrote:

As I wrote the patch, I do. ;-)

Always good to go to the source :)

The warning is triggered when the free space goes below the specified value, not when the used space goes above. Hence, if you have no limit on the message count, no warning on the message count will ever be executed.

Fair enough. I knew I wasn't setting a message-count quota, but I didn't realize it was smart enough to know that.

Moreover, if the free space is already below the specified value, no warning is triggered. It is only triggered when a message *brings* the free space below the specified value.

I had it reversed; after correcting the percentage to reflect how much of the storage quota is left, things worked as they should. Thanks for the clarification.

-Doug
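The trigger rule Nicolas describes can be sketched in a few lines (a simplified model, not the patch itself): the warning fires only when a delivery brings free space below the configured value, so a mailbox that is already over the line does not fire it again.

```python
def warning_fires(quota, used_before, msg_size, warn_free_pct):
    """True only when this delivery *crosses* the free-space threshold.

    warn_free_pct is the percentage of the quota that must remain free,
    matching the corrected reading from the thread above.
    """
    threshold = quota * warn_free_pct / 100.0   # free-space threshold, bytes
    free_before = quota - used_before
    free_after = quota - (used_before + msg_size)
    return free_before >= threshold > free_after

# A delivery that crosses the line fires the warning once:
print(warning_fires(quota=100, used_before=50, msg_size=40, warn_free_pct=20))   # True
# A mailbox already below the free-space threshold stays silent:
print(warning_fires(quota=100, used_before=95, msg_size=1, warn_free_pct=20))    # False
```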
Re: [Dovecot] FreeBSD NFS file locking mechanism
On Jun 21, 2007, at 7:56 PM, Tony Tsang wrote:

I have some machines running FreeBSD with dovecot deployed. Users' home dirs are on NFS mounts, and I've found that dovecot only works with the dotlock file locking mechanism; fcntl and flock failed. Now it causes a problem with Thunderbird (Thunderbird is caching connections) waiting forever, and I noticed that dovecot is trying to acquire a lock but is unsuccessful since the lock file is in place. Is it possible to use file locking other than dotlock on a FreeBSD NFS-mounted homedir? How do I achieve this?

I had a similar problem with NFS and dotlock contention on dovecot-uidlist with FreeBSD 6.2. The problem is a long-standing bug in the FreeBSD NFS client. If I remember right from reviewing the code, dotlock is the only option for dovecot-uidlist. I posted a workaround patch:

http://www.dovecot.org/list/dovecot/2007-May/022883.html

Not the best solution, but it is working fine on my setup.

-Doug
Re: [Dovecot] NFS cache flush tester
On Wed, 11 Jul 2007, Timo Sirainen wrote: http://dovecot.org/tools/nfstest.c

Results for FreeBSD 6.2-RELEASE-p5 clients using a NetApp NFS server:

Info: Connected: client
Info: Testing attribute cache..
Info: Attr cache flush fchown(-1, -1): OK
Info: Attr cache flush fchown(uid, -1): OK
Info: Attr cache flush fchmod(mode): OK
Info: Attr cache flush chown(-1, -1): OK
Info: Attr cache flush chown(uid, -1): OK
Info: Attr cache flush chmod(mode): OK
Info: Testing data cache..
Info: data cache: Appends weren't noticed (ret = 0)
Info: - Attribute cache flush helped
Info: data cache (no caching): failed
Info: data cache (attr cache): OK
Info: data cache (lockf()): failed
Info: data cache (flock(shared)): failed
Info: data cache (flock(exclusive)): failed
Info: data cache (O_DIRECT): failed

-Doug
Re: [Dovecot] NFS cache flush tester
On Thu, 12 Jul 2007, Timo Sirainen wrote: On Wed, 2007-07-11 at 22:21 +0300, Timo Sirainen wrote: http://dovecot.org/tools/nfstest.c I've done several updates for this. Updated results for Linux 2.6:

Different result this time:

Info: Connected: client
Info: O_EXCL works
Info: Testing attribute cache..
Fatal: open(/mnt/nfs/blah) failed: Stale NFS file handle

This is on FreeBSD 6.2-RELEASE-p5 and a NetApp NFS server. I'm wondering if it is the same issue with the NFS client cache that I had to work around in the past with dotlocking (http://www.dovecot.org/list/dovecot/2007-May/022883.html). Before I implement the same workaround, I wanted to check whether it would invalidate the test results, assuming the workaround works?

-Doug
Re: [Dovecot] NFS cache flush tester
On Thu, 12 Jul 2007, Timo Sirainen wrote: Hmm. I updated the nfstest.c to now just retry if this happens. Does it help?

Yes, that worked. Here are the results for FreeBSD 6.2-RELEASE-p5 and a NetApp NFS server. FYI, the fcntl errors also appeared on the server instance.

Info: Connected: client
Info: O_EXCL works
Info: Testing attribute cache..
Info: Attr cache flush fchown(-1, -1): OK
Info: Attr cache flush fchown(uid, -1): OK
Info: Attr cache flush fchmod(mode): OK
Info: Attr cache flush chown(-1, -1): OK
Info: Attr cache flush chown(uid, -1): OK
Info: Attr cache flush chmod(mode): OK
Info: Testing write flushing..
Info: Write flush no caching: failed
Info: Write flush fcntl(shared): failed
Info: Write flush fcntl(exclusive): failed
Info: Write flush flock(shared): failed
Info: Write flush flock(exclusive): failed
Info: Write flush reopen: OK
Info: Testing data cache..
Info: data cache: Reading EOF requires attribute cache flush
Info: Data cache flush no caching: failed
Info: Data cache flush attr cache: OK
Error: fcntl(setlk, read) failed: Operation not supported
Info: Data cache flush fcntl(shared): failed
Error: fcntl(setlk, write) failed: Operation not supported
Info: Data cache flush fcntl(exclusive): failed
Info: Data cache flush flock(shared): failed
Info: Data cache flush flock(exclusive): failed
Info: Data cache flush dotlock: failed
Info: Data cache flush O_DIRECT: failed

-Doug
Re: [Dovecot] NFS cache flush tester
On Thu, 12 Jul 2007, Timo Sirainen wrote: On Wed, 2007-07-11 at 15:43 -0700, Doug Council wrote: Here are the results for FreeBSD 6.2-RELEASE-p5 and a NetApp NFS server. FYI, the fcntl errors also appeared on the server instance.
..
Error: fcntl(setlk, read) failed: Operation not supported
Info: Data cache flush fcntl(shared): failed
Error: fcntl(setlk, write) failed: Operation not supported
Info: Data cache flush fcntl(exclusive): failed

Would you be able to enable lockd and see what these show then?

The NetApp filer is in production, so I am not able to. But Adam has posted his results using rpc.lockd/statd on FreeBSD 6.2 with a NetApp filer in http://www.dovecot.org/list/dovecot/2007-July/024070.html.

-Doug
Re: [Dovecot] NFS cache flush tester
On Thu, 12 Jul 2007, Timo Sirainen wrote: Thanks, could you try once more with an updated nfstest.c version? I added "dup+close" which works for data and write cache flushing with Linux. I really hope it works with Solaris+FreeBSD too. I didn't expect fchown() to flush write cache, but since dup+close didn't work this is just as good.

Info: Data cache flush dup+close: OK

Great. Now how about FreeBSD once more? :)

FreeBSD 6.2-RELEASE-p5 and a NetApp NFS server:

Info: Connected: client
Info: O_EXCL works
Info: Testing attribute cache..
Info: Attr cache flush fchown(-1, -1): OK
Info: Attr cache flush fchown(uid, -1): OK
Info: Attr cache flush fchmod(mode): OK
Info: Attr cache flush chown(-1, -1): OK
Info: Attr cache flush chown(uid, -1): OK
Info: Attr cache flush chmod(mode): OK
Info: Attr cache flush dup+close: failed
Info: Testing write flushing..
Info: Write flush no caching: failed
Info: Write flush fcntl(shared): failed
Info: Write flush fcntl(exclusive): failed
Info: Write flush flock(shared): failed
Info: Write flush flock(exclusive): failed
Info: Write flush reopen: OK
Info: Write flush dup+close: failed
Info: Write flush attr cache: failed
Info: Testing data cache..
Info: data cache: Reading EOF requires attribute cache flush
Info: Data cache flush no caching: failed
Info: Data cache flush attr cache: OK
Error: fcntl(setlk, read) failed: Operation not supported
Info: Data cache flush fcntl(shared): failed
Error: fcntl(setlk, write) failed: Operation not supported
Info: Data cache flush fcntl(exclusive): failed
Info: Data cache flush flock(shared): failed
Info: Data cache flush flock(exclusive): failed
Info: Data cache flush dotlock: failed
Info: Data cache flush O_DIRECT: failed
Info: Data cache flush dup+close: failed

-Doug
Re: [Dovecot] NFS cache flush tester
On Thu, 12 Jul 2007, Timo Sirainen wrote: So still nothing usable. Updated nfstest.c once again to include fdatasync() test. It has to work.

fdatasync() isn't implemented in FreeBSD (see http://www.freebsd.org/cgi/query-pr.cgi?pr=64875). It defaults to mapping fdatasync() to fsync(). Results for FreeBSD 6.2-RELEASE-p5 and NetApp NFS server:

Info: Connected: client
Info: O_EXCL works
Info: Testing attribute cache..
Info: Attr cache flush fchown(-1, -1): OK
Info: Attr cache flush fchown(uid, -1): OK
Info: Attr cache flush fchmod(mode): OK
Info: Attr cache flush chown(-1, -1): OK
Info: Attr cache flush chown(uid, -1): OK
Info: Attr cache flush chmod(mode): OK
Info: Attr cache flush dup+close: failed
Info: Testing write flushing..
Info: Write flush no caching: failed
Info: Write flush fcntl(shared): failed
Info: Write flush fcntl(exclusive): failed
Info: Write flush flock(shared): failed
Info: Write flush flock(exclusive): failed
Info: Write flush reopen: OK
Info: Write flush dup+close: failed
Info: Write flush attr cache: failed
Info: Write flush fdatasync: OK
Info: Testing data cache..
Info: data cache: Reading EOF requires attribute cache flush
Info: Data cache flush no caching: failed
Info: Data cache flush attr cache: OK
Error: fcntl(setlk, read) failed: Operation not supported
Info: Data cache flush fcntl(shared): failed
Error: fcntl(setlk, write) failed: Operation not supported
Info: Data cache flush fcntl(exclusive): failed
Info: Data cache flush flock(shared): failed
Info: Data cache flush flock(exclusive): failed
Info: Data cache flush dotlock: failed
Info: Data cache flush dup+close: failed
Info: Data cache flush fdatasync: failed

And with O_DIRECT enabled via vfs.nfs.nfs_directio_enable:

Info: Connected: client
Info: O_EXCL works
Info: Testing attribute cache..
Info: Attr cache flush fchown(-1, -1): OK
Info: Attr cache flush fchown(uid, -1): OK
Info: Attr cache flush fchmod(mode): OK
Info: Attr cache flush chown(-1, -1): OK
Info: Attr cache flush chown(uid, -1): OK
Info: Attr cache flush chmod(mode): OK
Info: Attr cache flush dup+close: failed
Info: Testing write flushing..
Info: Write flush no caching: failed
Info: Write flush fcntl(shared): failed
Info: Write flush fcntl(exclusive): failed
Info: Write flush flock(shared): failed
Info: Write flush flock(exclusive): failed
Info: Write flush reopen: OK
Info: Write flush dup+close: failed
Info: Write flush attr cache: failed
Info: Write flush fdatasync: OK
Info: Testing data cache..
Info: data cache: Reading EOF requires attribute cache flush
Info: Data cache flush no caching: failed
Info: Data cache flush attr cache: OK
Error: fcntl(setlk, read) failed: Operation not supported
Info: Data cache flush fcntl(shared): failed
Error: fcntl(setlk, write) failed: Operation not supported
Info: Data cache flush fcntl(exclusive): failed
Info: Data cache flush flock(shared): failed
Info: Data cache flush flock(exclusive): failed
Info: Data cache flush dotlock: failed
Info: Data cache flush dup+close: failed
Info: Data cache flush fdatasync: failed
Info: Data cache flush O_DIRECT: OK

-Doug
[Dovecot] Quota warning patch and config parsing in 1.0.2
Dovecot 1.0.2 patches and compiles fine with the quota warning patch (http://www.dovecot.org/patches/quota-warning.patch). But when parsing the configuration file, deliver now seems to strip all of the spaces in the QUOTA_WARNING environment variable, generating an error. This is the plugin{} section from my dovecot.conf:

plugin {
  quota = maildir:storage=20480
  quota_warning = storage=10% /usr/local/bin/quota-warning
}

And this is the error Dovecot generates for anyone with a quota:

quota warning: No command specified: storage=10/usr/local/bin/quota-warning

Doing a diff on src/deliver/deliver.c between 1.0.1 and 1.0.2 shows some changes to how the config file is parsed, but nothing I can find that would cause the spaces to get stripped out of the configuration value. Has anyone successfully used the quota warning patch with 1.0.2?

Thanks,
-Doug
Re: [Dovecot] Quota bug in deliver?
On Thu, 6 Sep 2007, Marcin Michal Jessa wrote: I do use prefetch, and I have a separate query, too. Without that the quota fails completely. Having both statements and prefetch, the quota works fine with IMAP and deliver when I have no quota line in the plugin section; when I add the line (see !!MARK!! below), deliver takes the quota from that line instead of the database information. IMAP uses the information from the database all the time, no matter if I have a quota line in the config. [...]

plugin {
  # !!MARK!!
  # deliver seems to use the userdb quota only when I don't have the following line
  quota = maildir:storage=102400:messages=1000
  acl = vfile:/etc/dovecot/acls
  trash = /etc/dovecot/dovecot-trash.conf
}

I discovered something similar. The user's quota from the DB was not used when the user's quota was over the limit in the plugin section. According to the docs, the DB quota values should always take precedence over the plugin section, but they do not.

I think you might be experiencing a bug that Timo recently fixed: http://www.dovecot.org/list/dovecot/2007-August/025016.html. Right now it is only available in HG, but it should be included in 1.0.4.

-Doug
[Dovecot] Assertion failure in mail-index-view-sync.c
I have been getting these assertion failures every couple of days, killing the IMAP process. After it happens, I can log in immediately and everything is fine until it happens again. Here is the log entry:

Sep 10 10:03:41 mailbox-4 dovecot: nh: IMAP(username): file mail-index-view-sync.c: line 666 (mail_index_view_sync_end): assertion failed: (view->log_file_offset >= view->map->hdr.log_file_int_offset)
Sep 10 10:03:41 mailbox-4 dovecot: child 38458 (imap) killed with signal 6

It started after I disabled indexing for deliver and IMAP:

mail_location = maildir:~/Maildir:INDEX=MEMORY

I removed the dovecot.index* files from all of the maildir directories (including the root/INBOX), but that didn't help. I have upgraded to 1.0.5, but that didn't help either. Any ideas as to what might be causing this?

Thanks,
-Doug
RE: Strange problem with sieve
> -Original Message- > From: Peter via dovecot > Sent: Tuesday, April 9, 2024 5:18 AM > To: dovecot@dovecot.org > Subject: Strange problem with sieve > > Hello, > > I use Dovecot 2.3.20 on FreeBSD 13.2 (in jail) as a part of iRedMail > installation. > > Some mailboxes are configured for automatic mails processing using sieve > (execute :pipe) and a custom binary (started by script). The system was > configured and was working correctly during several weeks. > > Since ~10 days the system starts to work strangely. _*/Some/*_ mails > cannot be decoded anymore. No changes at our side, no updates etc. > > I dumped the content received by my binary, and it looks really strange: > > pOUc9Z33O0GbfzbW5Mrmi > 3L4tTlvKfsD8wP+hc6vN1v1bv+Vx827kW+YX5n/Zxtl240erH4t+nNyeuL1zh92 > O0p24nYKd > vbtcd1UVyBdkFwztDtrdsIexJ2/Pu71L914vnFdYto+0T7JvoCiwqGm/7v6d+78U > Jxf3lHiU > 1B9QO7D1wIeD3IO3S91K68rUy/LLPv/E/6m/3Le8oUK/ovAQ7lDWoaeHow6 > 3/8z8ubpStTK/ > > This is a start of one mail. No headers, no readable data. > > I removed "discard;" from sieve script to put such mails in the mailbox, > and the mails look normally: > > Mime-Version:1.0 > Content-Type:multipart/mixed; boundary="=-/3n/zeVGlN5thgeL28RKiw==" > > --=-/3n/zeVGlN5thgeL28RKiw== > Content-Type: multipart/alternative; boundary="=- > sE/MdvfJZlakBmrKkcdzCg==" > > --=-sE/MdvfJZlakBmrKkcdzCg== > Content-Type: multipart/related; boundary="=-cgJVnLNlzX5YuD6yaI5USQ==" > > --=-cgJVnLNlzX5YuD6yaI5USQ== > Content-Transfer-Encoding: base64 > Content-Type: text/html; charset=utf-8 > > etc. > > Really, I have no idea about any direction to explore. The server is > totally under my control, I can do anything, but I have no idea how to > debug this situation. > > I tried to redirect such mails to another mailbox and process them > there, but I get the same strange data. If I connect Thunderbird to the > mailbox - I can read the mails correctly. 
> So, the problem arrives at the moment when the sieve system is invoked -
> the mail data is corrupted somehow.
>
> Any advice will be really appreciated.
>
> Peter

RE:
> --=-cgJVnLNlzX5YuD6yaI5USQ==
> Content-Transfer-Encoding: base64

What has changed is that the body content of incoming emails is now base64 encoded. Because you are trying to process these messages with a script, I'm going to guess that the emails in question are automatically generated somewhere. Go back to the process that is creating these emails and disable base64 encoding. Or, add a base64 decode step to the sieve execute script.

Doug
___
dovecot mailing list -- dovecot@dovecot.org
To unsubscribe send an email to dovecot-le...@dovecot.org
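For the second option — decoding before the real processor sees the message — here is a minimal sketch of such a pre-processing step. The function name is mine, it handles only simple single-part messages, and a multipart body like the one quoted above really needs a proper MIME parser:

```sh
#!/bin/sh
# Sketch: read a mail message on stdin; if it declares a base64
# Content-Transfer-Encoding, print the headers unchanged and the body
# decoded; otherwise pass the message through untouched.
# Note: "base64 -d" is the GNU coreutils spelling; FreeBSD may ship the
# decoder under a different name (e.g. b64decode).
decode_base64_body() {
    tmp=$(mktemp) || return 1
    cat > "$tmp"
    if grep -qi '^Content-Transfer-Encoding:[[:space:]]*base64' "$tmp"; then
        sed -n '1,/^$/p' "$tmp"            # headers plus the separating blank line
        sed '1,/^$/d' "$tmp" | base64 -d   # everything after it, decoded
    else
        cat "$tmp"
    fi
    rm -f "$tmp"
}

# In the sieve wrapper, something like:
#   decode_base64_body | /path/to/real-processor
```

Decoding the raw base64 sample by hand with `base64 -d` fails if the text was truncated or re-wrapped when pasted, which may explain the manual decode attempt failing even though the encoding itself is ordinary base64.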
RE: Strange problem with sieve
That looks like base64 encoding to me. Possibly your sieve script is parsing the output and truncating the data before handing it off to base64. Doug > -Original Message- > From: Peter via dovecot > Sent: Tuesday, April 9, 2024 2:03 PM > To: dovecot@dovecot.org > Subject: Re: Strange problem with sieve > > Thanks for the advise. > > Yes, the mails are automatically created. Unfortunately, the software > that generates them are out of our control. > > I supposed that it is a base64 code, but if I try to manually pass the > text into 'base64 -d' - nothing usable goes out (you can try yourself > with the first part of the mail I've posted). Maybe it is salted, but I > don't see how to decode it... > > Peter > > > On 09/04/2024 16:15, Doug via dovecot wrote: > > > >> -Original Message- > >> From: Peter via dovecot > >> Sent: Tuesday, April 9, 2024 5:18 AM > >> To: dovecot@dovecot.org > >> Subject: Strange problem with sieve > >> > >> Hello, > >> > >> I use Dovecot 2.3.20 on FreeBSD 13.2 (in jail) as a part of iRedMail > >> installation. > >> > >> Some mailboxes are configured for automatic mails processing using sieve > >> (execute :pipe) and a custom binary (started by script). The system was > >> configured and was working correctly during several weeks. > >> > >> Since ~10 days the system starts to work strangely. _*/Some/*_ mails > >> cannot be decoded anymore. No changes at our side, no updates etc. > >> > >> I dumped the content received by my binary, and it looks really strange: > >> > >> pOUc9Z33O0GbfzbW5Mrmi > >> > 3L4tTlvKfsD8wP+hc6vN1v1bv+Vx827kW+YX5n/Zxtl240erH4t+nNyeuL1zh92 > >> O0p24nYKd > >> > vbtcd1UVyBdkFwztDtrdsIexJ2/Pu71L914vnFdYto+0T7JvoCiwqGm/7v6d+78 > U > >> Jxf3lHiU > >> > 1B9QO7D1wIeD3IO3S91K68rUy/LLPv/E/6m/3Le8oUK/ovAQ7lDWoaeHow6 > >> 3/8z8ubpStTK/ > >> > >> This is a start of one mail. No headers, no readable data. 
> >> > >> I removed "discard;" from sieve script to put such mails in the mailbox, > >> and the mails look normally: > >> > >> Mime-Version:1.0 > >> Content-Type:multipart/mixed; boundary="=- > /3n/zeVGlN5thgeL28RKiw==" > >> > >> --=-/3n/zeVGlN5thgeL28RKiw== > >> Content-Type: multipart/alternative; boundary="=- > >> sE/MdvfJZlakBmrKkcdzCg==" > >> > >> --=-sE/MdvfJZlakBmrKkcdzCg== > >> Content-Type: multipart/related; boundary="=- > cgJVnLNlzX5YuD6yaI5USQ==" > >> > >> --=-cgJVnLNlzX5YuD6yaI5USQ== > >> Content-Transfer-Encoding: base64 > >> Content-Type: text/html; charset=utf-8 > >> > >> etc. > >> > >> Really, I have no idea about any direction to explore. The server is > >> totally under my control, I can do anything, but I have no idea how to > >> debug this situation. > >> > >> I tried to redirect such mails to another mailbox and process them > >> there, but I get the same strange data. If I connect Thunderbird to the > >> mailbox - I can read the mails correctly. So, the problem arrives at the > >> moment when the sieve system is invoked - the mail data is corrupted > >> somehow. > >> > >> Any advise will be really appreciated. > >> > >> Peter > > RE: > > --=-cgJVnLNlzX5YuD6yaI5USQ== > > Content-Transfer-Encoding: base64 > > > > What has changed is the body content of incoming emails is now base64 > encoded. Because you are trying to process these messages with a script, I'm > going to guess that the emails in question are automatically generated > somewhere. Go back to the process that is creating these emails and disable > base64 encoding. Or, add a base64 decode step to the sieve execute script. > > > > Doug > > > > ___ > > dovecot mailing list -- dovecot@dovecot.org > > To unsubscribe send an email to dovecot-le...@dovecot.org > ___ > dovecot mailing list -- dovecot@dovecot.org > To unsubscribe send an email to dovecot-le...@dovecot.org ___ dovecot mailing list -- dovecot@dovecot.org To unsubscribe send an email to dovecot-le...@dovecot.org
RE: newbie dsync problems
In your working example you are connecting as root but in your dsync example your user is remoteprefix:root. Try removing the "remoteprefix:" which is being treated as part of the user name. > -Original Message- > From: Kent Borg via dovecot > Sent: Thursday, January 23, 2025 3:34 PM > To: cdm...@yahoo.com; 'Kent Borg' ; > dovecot@dovecot.org > Subject: Re: newbie dsync problems > > On 1/23/25 12:26 PM, cdm...@yahoo.com wrote: > > Kent, > > > > You are being prompted for a password, so it isn't using private key > authentication. I recommend you get ssh working first, prove you are indeed > connecting to your secondary server, and only then introduce doveadm. > > Yes, I checked that: > > > I think I have root's ssh keys set up correctly, I can run this: > > > >> root@la:/etc/dovecot# ssh -i /root/.ssh/id_rsa_rc.borg.org.dsync > >> mail.borg.org > >> PTY allocation request failed on channel 0 > >> C-c C-croot@la:/etc/dovecot# > > …and on the remote end I see some debugging output I put in the remote > > script, outputting an empty username. Makes sense. > > > > Is mail.borg.org the name of your "matching server" or is that the name of > your primary server? > > mail.borg.org is the name of the (priority 10) backup, I am running this > on my (priority 1) primary server, mail2.borg.org, I am pretty certain I > am not ssh-ing to myself. > > > kb > > ___ > dovecot mailing list -- dovecot@dovecot.org > To unsubscribe send an email to dovecot-le...@dovecot.org ___ dovecot mailing list -- dovecot@dovecot.org To unsubscribe send an email to dovecot-le...@dovecot.org
RE: newbie dsync problems
Kent,

You are being prompted for a password, so it isn't using private key authentication. I recommend you get ssh working first, prove you are indeed connecting to your secondary server, and only then introduce doveadm.

Is mail.borg.org the name of your "matching server" or is that the name of your primary server? If so, it looks to me like you are using ssh to connect back to yourself. You should have something like mail.borg.org as primary, mail2.borg.org as backup, and if necessary, add mail2.borg.org to your hosts file if there is no DNS for it. Or even connect via IP address like this:

ssh root@172.16.20.11

I'll leave the discussion of whether using root in this fashion even makes sense to others. Suffice it to say, once you get something working, perhaps consider removing the private key and using a non-root user.

Doug

> -Original Message-
> From: Kent Borg via dovecot
> Sent: Thursday, January 23, 2025 2:12 PM
> To: dovecot@dovecot.org
> Subject: Re: newbie dsync problems
>
> I had a typo (I said I'm a newbie).
>
> On 1/23/25 10:50 AM, Kent Borg via dovecot wrote:
> > But when I try to make the command more complete and send a username
> > to the remote end, and now I am no longer talking to the remote end:
> >
> >> root@la:/etc/dovecot# doveadm sync -u kentborg -1 ssh -i
> >> /root/.ssh/id_rsa_rc.borg.org.dsync remotepre...@mail.borg.org
> >> remotepre...@mail.borg.org's password:
>
> This better version also doesn't work:
>
> > root@la:/etc/dovecot# doveadm sync -u kentborg -1 ssh -i
> > /root/.ssh/id_rsa_rc.borg.org.dsync remoteprefix:r...@mail.borg.org
> > remoteprefix:r...@mail.borg.org's password
>
> Sorry for the error,
>
> -kb//
> ___
> dovecot mailing list -- dovecot@dovecot.org
> To unsubscribe send an email to dovecot-le...@dovecot.org
RE: newbie dsync problems
I don't have a working example because I do my dsync backups on the local machine with output to shared NFS storage that is accessible to both my primary and backup systems. No ssh or remote connection required; that is provided by NFS.

This excerpt of my backup script runs dsync in a loop, where 'USERS' is populated with the email account names to be backed up. The backup target location is on separate storage. If you can't figure out the doveadm sync to another server, you could NFS-share a file system from your secondary system to your primary and do something similar.

for user in ${USERS}; do
    dsync -u ${user} backup maildir:/home/${user}/backup/mailboxes
done

> -Original Message-
> From: Kent Borg
> Sent: Thursday, January 23, 2025 5:22 PM
> To: cdm...@yahoo.com; dovecot@dovecot.org
> Subject: Re: newbie dsync problems
>
> On 1/23/25 1:41 PM, cdm...@yahoo.com wrote:
> > In your working example you are connecting as root but in your dsync
> example your user is remoteprefix:root. Try removing the "remoteprefix:"
> which is being treated as part of the user name.
>
> If I take off the "remoteprefix" it logs in, but it doesn't send the
> user to the other end, the wrapper script on mail.borg.org gets "VERSION
> dsync 3 5" as the parameter.
> > > root@la:/etc/dovecot# doveadm sync -u kentborg -1 ssh -i > > /root/.ssh/id_rsa_rc.borg.org.dsync r...@mail.borg.org > > Error: Extraneous arguments found: 3 5 > > doveadm(kentborg)<1052944>: Error: > > read(remote) failed: EOF (version not received) > > doveadm(kentborg)<1052944>: Error: > Remote > > command returned error 64: ssh -i /root/.ssh/id_rsa_rc.borg.org.dsync > > r...@mail.borg.org dsync-server > > root@la:/etc/dovecot# doveadm sync -u kentborg -1 ssh -i > > /root/.ssh/id_rsa_rc.borg.org.dsync remoteprefix:r...@mail.borg.org > > remoteprefix:r...@mail.borg.org's password: > > According to the man page, that should be the destination: > > > ARGUMENTS > >destination > > This argument specifies the synchronized destination. > > It can be > > one of: > > > > location > > Same as mail_location setting, e.g. maildir:~/Maildir > > > > remote:login@host > > Uses dsync_remote_cmd setting to connect to the > > remote > > host (usually via ssh) > > > > remoteprefix:login@host > > This is the same as remote, except > > "user@domain\n" is > > sent before dsync protocol starts. This allows > > imple‐ > > menting a trusted wrapper script that runs > > doveadm > > dsync-server by reading the username from the > > first line. > > > > tcp:host[:port] > > Connects to remote doveadm server via TCP. The > > default > > port is specified by doveadm_port setting. > > > > tcps:host[:port] > > This is the same as tcp, but with SSL. > > > > command [arg1 [, arg2, ...]] > > Runs a local command that connects its standard > > input & > > output to a dsync server. > > > One of the examples on the man page is: > > > doveadm sync -u usern...@example.com ssh -i id_dsa.dovecot \ > > mailu...@example.com doveadm dsync-server -u > > usern...@example.com > > Which I don't understand. What is "mailu...@example.com"? What are the > two parameters and the option after that? 
> > > Their simpler example: > > > doveadm sync -u usern...@example.com remote:server- > replica.example.com > Makes much more sense, but I can't find anything based on that example > works. > > Does "doveadm sync" maybe not work in version 2.3.19.1? > > > root@la:/etc/dovecot# dovecot --version > > 2.3.19.1 (9b53102964) > > Thanks, > > -kb, the Kent who would love to see some working "doveadm sync" examples. > ___ dovecot mailing list -- dovecot@dovecot.org To unsubscribe send an email to dovecot-le...@dovecot.org
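On the man-page example Kent quotes: in the long form, everything after the ssh key option is the command run on the remote side — you log in as one ssh account and then start `doveadm dsync-server` for the mailbox user, which is why two different addresses appear. A sketch using the thread's mailbox user but hypothetical hosts, ssh accounts, and key paths:

```sh
# Long form: ssh to the backup host as a dedicated "vmail" ssh account
# (hypothetical -- any account allowed to run doveadm works) and start a
# dsync server for the same mailbox user on the far end.
doveadm sync -u kentborg -1 ssh -i /root/.ssh/id_dsync \
    vmail@backup.example.com doveadm dsync-server -u kentborg

# Short form: "remote:" delegates the transport to the dsync_remote_cmd
# setting (usually ssh), per the man page quoted above.
doveadm sync -u kentborg -1 remote:root@backup.example.com
```

These require a reachable Dovecot installation on the remote side, so treat them as a shape to adapt rather than commands to paste.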
RE: Userdb lookup problems
" Access denied for user.. 'dovecot'@'localhost' " Is a mysql error. mysql isn't allowing the user dovecot to open the database to run your SQL query. Until you can open the database, you aren't even retrieving the account password. That is where you should concentrate your efforts. I don't use mysql for authentication so I can't really tell you how to configure dovecot or mysql to make it work. > -Original Message- > From: Ken Wright via dovecot > Sent: Saturday, February 15, 2025 4:41 PM > To: Aki Tuomi ; Ken Wright via dovecot > ; Timo Sirainen > Subject: Re: Userdb lookup problems > > On Sat, 2025-02-15 at 20:24 +0200, Aki Tuomi wrote: > > > > > > > > > > > > On 15/02/2025 18:29 EET Ken Wright via dovecot > > > wrote: > > > > > > > > > > > > > > > > > > On Sat, 2025-02-15 at 17:53 +0200, Aki Tuomi wrote: > > > > > > > > > > > > > > > > > On 15/02/2025 17:39 EET Ken Wright via dovecot > > > > > > > > > > wrote: > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > On Sat, 2025-02-15 at 08:59 +0200, Timo Sirainen wrote: > > > > > > > > > > > > > > > > > On 15. Feb 2025, at 0.06, Ken Wright via dovecot > > > > > > > > > > > > wrote: > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > These need to be converted to the new syntax. > > > > > > > > > > > > > > > > > > > > > > > > > > > > Is this correct? 
> > > > > > > > > > > > > > > > > > > > passdb sql { > > > > > > > > > > query = SELECT username AS username, domain, password FROM > > > > > > > > > > mailbox > > > > > > > > > > WHERE username = '%{user | username}' AND domain = '%{user | > > > > > > > > > > domain}' > > > > > > > > > > AND active= '1' > > > > > > > > > > } > > > > > > > > > > userdb sql { > > > > > > > > > > query = SELECT maildir, 2000 AS uid, 2000 AS gid FROM mailbox > > > > > > > > > > WHERE > > > > > > > > > > username = '%{user | username}' AND domain = '%{user | domain}' > > > > > AND > > > > > > > > > > active= '1' > > > > > > > > > > # For using doveadm -A: > > > > > > > > > > iterate_query = SELECT username AS username, domain FROM > > > > > mailbox > > > > > > > > > > > > > > > > > > > > > > > Like mentioned already on some thread, returning maildir is not > > > > > > > > right, check > > > > > > > > https://doc.dovecot.org/2.4.0/core/config/mailbox/mail_location.html > > > > > > > > > > > > and return mail_path instead. > > > > > > > > > > Okay, I changed maildir to mail_path, but I still can't log in. > > > > > > > > > > > > I'm sorry, but I think I need to see the fix spelled out. I'm an > > > > > > idiot. > > > > > > > > > > > > Ken > > > > > > > > > > > > > > > > > Did you check logs for details? If there is not much, try > > > > > > > > log_debug=category=auth > > > > mail_debug=yes > > These two lines in /var/log/mail.log seem to be pertinent: > > 2025-02-15T16:33:29.976767-05:00 grace dovecot: auth: Error: > mysql(localhost): Connect failed to database (): Access denied for user > 'dovecot'@'localhost' (using password: NO) - waiting for 1 seconds > before retry > 2025-02-15T16:33:36.560826-05:00 grace dovecot: imap-login: Login > aborted: Connection closed (auth failed, 1 attempts in 7 secs) > (auth_failed): user=, method=PLAIN, > rip=192.168.1.1, lip=192.168.1.10, TLS, session= > > I don't understand why access is denied. I don't understand why it > didn't use the password. 
Help! > > Ken > ___ > dovecot mailing list -- dovecot@dovecot.org > To unsubscribe send an email to dovecot-le...@dovecot.org ___ dovecot mailing list -- dovecot@dovecot.org To unsubscribe send an email to dovecot-le...@dovecot.org
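One concrete detail in that log: "(using password: NO)" means Dovecot opened the MySQL connection without sending any password at all, which usually points at a `connect` string in the SQL config that is missing its `password=` field. A sketch of both halves of a fix, with a hypothetical database name ("vmail") and password:

```sh
# 1) Make sure MySQL actually allows the dovecot user in (run as MySQL root):
mysql -u root -p <<'EOF'
CREATE USER IF NOT EXISTS 'dovecot'@'localhost' IDENTIFIED BY 'secret';
GRANT SELECT ON vmail.* TO 'dovecot'@'localhost';
FLUSH PRIVILEGES;
EOF

# 2) Then make sure Dovecot's SQL config sends the same credentials,
#    e.g. in the sql passdb/userdb configuration:
#      connect = host=localhost dbname=vmail user=dovecot password=secret
```

Until the connection itself succeeds, none of the passdb/userdb query syntax discussed above is even exercised.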
RE: Doveadm Backup
My experience was very similar when I originally set up doveadm backup from my production machine to my backup machine. My mailbox was the only one on the system that failed. I discovered the root cause was a cron job running on the backup machine delivering its output to my email. The result was my inbox was out of sync. Perhaps you also have something on your backup machine that is altering the email. -- also Doug > -Original Message- > From: Doug Hardie via dovecot > Sent: Tuesday, February 18, 2025 3:11 AM > To: Dovecot > Subject: Re: Doveadm Backup > > > On Feb 17, 2025, at 23:11, Doug Hardie > wrote: > > > > I tried the backup again tonight. I am now getting a new error: > > > > mail# doveadm backup -f -u doug remote:checkout > > dsync-remote(doug)<1c8BDo4xtGe74wAA+dxtXQ>: Warning: Deleting > mailbox 'INBOX': UID=92440 GUID=1465118975.V4eI7cfa32M232845.mail is > missing locally > > dsync-remote(doug)<1c8BDo4xtGe74wAA+dxtXQ>: Error: Couldn't delete > mailbox INBOX: INBOX can't be deleted. > > dsync-local(doug): Error: Remote command > returned error 65: ssh checkout doveadm dsync-server -udoug -U > > > > Anytime I run it now generates the same error messages. However on the > backup machine the .cur directory is empty. I presume that is the INBOX > referred to in the messages. All the other directories appear to be intact. > > Deleted the complete user on the backup machine. Then ran the backup > again. As soon as it finished, I ran it again. Got a new error: > > mail# doveadm backup -u doug remote:checkout > mail# doveadm backup -u doug remote:checkout > dsync-remote(doug): Error: Mailbox INBOX > sync: mailbox_delete failed: INBOX can't be deleted. > dsync-local(doug): Error: Remote command > returned error 65: ssh checkout doveadm dsync-server -udoug -U > > > I am beginning to believe that backup is not a viable production option. I > suspect rsync would be better. 
>
> -- Doug
>
> ___
> dovecot mailing list -- dovecot@dovecot.org
> To unsubscribe send an email to dovecot-le...@dovecot.org
Doveadm backup and sieves
I am using doveadm backup to back up mail files to a second drive. That works just fine. However, the sieve files are not backed up. I couldn't find anything in the backup documentation that addresses sieve. Is there a way to back those up also?

-- Doug
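Since dsync/doveadm backup synchronizes mailboxes, the sieve scripts (which live outside the mail store, wherever your sieve settings put them) have to be copied separately. A sketch with hypothetical paths — rsync is one obvious tool:

```sh
#!/bin/sh
# Sketch: copy one user's sieve scripts next to the mail backup.
# Both paths are assumptions -- the real source location depends on your
# sieve configuration.
backup_sieve() {
    src="$1"
    dst="$2"
    mkdir -p "$dst"
    rsync -a "$src"/ "$dst"/
}

# In a backup script, alongside the per-user dsync loop:
#   for user in ${USERS}; do
#       backup_sieve "/home/${user}/sieve" "/backup/${user}/sieve"
#   done
```

This keeps the sieve copy on the same schedule as the mail backup without involving doveadm at all.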
Re: Doveadm Backup
> On Feb 18, 2025, at 05:59, Doug via dovecot wrote:
>
> My experience was very similar when I originally set up doveadm backup from
> my production machine to my backup machine. My mailbox was the only one on
> the system that failed. I discovered the root cause was a cron job running
> on the backup machine delivering its output to my email. The result was my
> inbox was out of sync. Perhaps you also have something on your backup
> machine that is altering the email.
>
> -- also Doug

Good catch. That was the problem. Interestingly enough, the new emails from cron were in the new directory, not cur. Doveadm was trying to delete cur, which obviously didn't fix the problem. Deleting the contents of new then let the backup work properly. I'm not sure my fix to crontab will completely eliminate those emails, but at least I know what the problem is and can eventually eliminate it.

Thanks

-- Doug (good name)
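One reliable way to keep cron on the backup machine from ever delivering into the synced mailbox is to silence job mail entirely. A sketch of such a crontab entry (the job path is hypothetical):

```sh
# crontab on the backup machine: an empty MAILTO stops cron from mailing
# job output, and the redirection discards it as a belt-and-braces measure,
# so nothing lands in the mailbox that doveadm backup manages.
MAILTO=""
0 3 * * * /usr/local/sbin/nightly-job >/dev/null 2>&1
```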
Re: autoexpunge
> On Feb 18, 2025, at 06:45, Michael Slusarz wrote:
>
>> On 02/18/2025 1:09 AM MST Doug Hardie via dovecot wrote:
>>
>> I have the following in 15-mailboxes.conf:
>>
>> mailbox Trash {
>>   special_use = \Trash
>>   autoexpunge = 30 days
>> }
>>
>> I thought that would empty the deleted emails after 30 days. However, I find that in .Deleted Messages/cur there are over 18K messages dating back to the 90's. What do I need to set to make the autoexpunge work?
>
> Autoexpunge only works if the mailbox is accessed.

My Deleted Messages mailbox has been accessed virtually every day for many years. It still had lots of messages from the late 90's in it.

-- Doug
Re: autoexpunge
> On Feb 18, 2025, at 00:20, Marc via dovecot wrote:
>
> run a cron job with purge?
>
>> I have the following in 15-mailboxes.conf:
>>
>> mailbox Trash {
>>   special_use = \Trash
>>   autoexpunge = 30 days
>> }
>>
>> I thought that would empty the deleted emails after 30 days. However, I find that in .Deleted Messages/cur there are over 18K messages dating back to the 90's. What do I need to set to make the autoexpunge work?

That did nothing. I ran:

doveadm expunge -u doug mailbox "Deleted Messages" savedbefore 2w

That worked.

-- Doug
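If autoexpunge keeps misbehaving, the manual expunge above can simply be scheduled. A hypothetical crontab entry (the schedule and 30-day window are illustrative; the command, user, and mailbox name are from the thread):

```shell
# Hypothetical crontab entry: every night at 03:15, expunge mail saved to
# "Deleted Messages" more than 30 days ago.
15 3 * * * doveadm expunge -u doug mailbox "Deleted Messages" savedbefore 30d
```

This sidesteps autoexpunge entirely, at the cost of one more cron job to maintain.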
Doveadm Backup
I tried the backup again tonight. I am now getting a new error:

mail# doveadm backup -f -u doug remote:checkout
dsync-remote(doug)<1c8BDo4xtGe74wAA+dxtXQ>: Warning: Deleting mailbox 'INBOX': UID=92440 GUID=1465118975.V4eI7cfa32M232845.mail is missing locally
dsync-remote(doug)<1c8BDo4xtGe74wAA+dxtXQ>: Error: Couldn't delete mailbox INBOX: INBOX can't be deleted.
dsync-local(doug): Error: Remote command returned error 65: ssh checkout doveadm dsync-server -udoug -U

Any time I run it now, it generates the same error messages. However, on the backup machine the cur directory is empty. I presume that is the INBOX referred to in the messages. All the other directories appear to be intact.

-- Doug
autoexpunge
I have the following in 15-mailboxes.conf:

mailbox Trash {
  special_use = \Trash
  autoexpunge = 30 days
}

I thought that would empty the deleted emails after 30 days. However, I find that in .Deleted Messages/cur there are over 18K messages dating back to the 90's. What do I need to set to make the autoexpunge work?

-- Doug
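One thing worth checking, given the .Deleted Messages directory named above: autoexpunge only applies to the mailbox whose name appears in the config block, and some clients (Apple Mail in particular) store deletions in "Deleted Messages" rather than "Trash". A hedged guess at a matching block for the same 15-mailboxes file, in case the Trash block simply targets the wrong mailbox name:

```
mailbox "Deleted Messages" {
  special_use = \Trash
  autoexpunge = 30 days
}
```

If the client really is using "Deleted Messages", the original Trash block never matched anything, which would explain why nothing was ever expunged.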
Re: Doveadm Backup
> On Feb 17, 2025, at 23:11, Doug Hardie wrote:
>
> I tried the backup again tonight. I am now getting a new error:
>
> mail# doveadm backup -f -u doug remote:checkout
> dsync-remote(doug)<1c8BDo4xtGe74wAA+dxtXQ>: Warning: Deleting mailbox 'INBOX': UID=92440 GUID=1465118975.V4eI7cfa32M232845.mail is missing locally
> dsync-remote(doug)<1c8BDo4xtGe74wAA+dxtXQ>: Error: Couldn't delete mailbox INBOX: INBOX can't be deleted.
> dsync-local(doug): Error: Remote command returned error 65: ssh checkout doveadm dsync-server -udoug -U
>
> Anytime I run it now generates the same error messages. However on the backup machine the cur directory is empty. I presume that is the INBOX referred to in the messages. All the other directories appear to be intact.

Deleted the complete user on the backup machine. Then ran the backup again. As soon as it finished, I ran it again. Got a new error:

mail# doveadm backup -u doug remote:checkout
mail# doveadm backup -u doug remote:checkout
dsync-remote(doug): Error: Mailbox INBOX sync: mailbox_delete failed: INBOX can't be deleted.
dsync-local(doug): Error: Remote command returned error 65: ssh checkout doveadm dsync-server -udoug -U

I am beginning to believe that backup is not a viable production option. I suspect rsync would be better.

-- Doug
Doveadm Backup
I am using doveadm backup to back up the mail server's files for a specific user. This is a test environment for the backup machine. All it does is sit there until I run backup on the mail server. However, I am occasionally encountering an issue where backup throws the following message:

Warning: Deleting mailbox 'Deleted Messages': UID=533920 already exists locally for a different mail: GUIDs don't match (1739791386.M526904P56414.checkout,S=2206,W=2264 vs 1739674845.M569094P6598.mail,S=6530202,W=6615064)

At this point it stops, and the indicated mailbox has been deleted. It is not always the same mailbox. I then have to run the backup again and wait for it to download the entire directory, which takes a long time. I was hoping to use this approach for a frequently run backup, but it appears it is not feasible. Some mailboxes are large enough that a second backup would be initiated before the complete reload had completed. Can this be corrected?

-- Doug
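One way to keep a frequently scheduled backup from starting while a previous run is still reloading is to serialize the runs with flock(1) (available on Linux via util-linux). A sketch, with a placeholder echo standing in for the doveadm command from the thread and a lock path that is an assumption:

```shell
#!/bin/sh
# Sketch: allow only one backup run at a time; a second invocation exits
# instead of overlapping a still-running reload.
LOCK=/tmp/doveadm-backup-doug.lock
(
  flock -n 9 || { echo "previous backup still running, skipping"; exit 1; }
  # Placeholder for the real command: doveadm backup -u doug remote:checkout
  echo "running doveadm backup -u doug remote:checkout"
) 9>"$LOCK"
```

Invoked from cron, this makes overlapping runs skip cleanly rather than piling up behind a long reload.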