Re: [Dovecot] Dovecot error with Symbian mail client
[EMAIL PROTECTED] wrote:
> Greetings list,
>
> I have recently acquired a Nokia E71 (which comes with Symbian 3rd edition, feature pack 3, I believe). Accessing my emails has worked before, but now I cannot connect to the mail server any longer.

Worked before with this phone? I have this exact phone and have no issues with Dovecot here.

> If I enable verbose_ssl, I get the following error in the log:
>
> SSL_accept() failed: error:140943F2:SSL routines:SSL3_READ_BYTES:sslv3 alert unexpected message [141.84.69.67]
>
> I cannot access my mail; the dialog just closes and the wireless connection ends.
>
> I am using dovecot version 1.0.13 and my config is

Update! I'm using 1.1.3, and there have been many, many times that updating has fixed someone's problems on this list. For completeness, here is my dovecot -n:

# 1.1.3: /usr/local/etc/dovecot.conf
ssl_cert_file: /usr/local/etc/ssl/certs/dovecot.pem
ssl_key_file: /usr/local/etc/ssl/private/dovecot.pem
login_dir: /var/run/dovecot/login
login_executable: /usr/local/libexec/dovecot/imap-login
login_processes_count: 1
verbose_proctitle: yes
mail_location: maildir:~/Maildir
imap_client_workarounds: delay-newmail netscape-eoh tb-extra-mailbox-sep
auth default:
  passdb:
    driver: pam
  userdb:
    driver: passwd
  socket:
    type: listen
    client:
      path: /var/spool/postfix/private/auth
      mode: 432
      user: postfix
      group: postfix

Jonathan
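When an SSL_accept() failure like that turns up, it can help to test the TLS handshake from outside the phone first. A minimal check with the stock openssl CLI (the hostname is a placeholder for your own server):

openssl s_client -connect mail.example.com:993
openssl s_client -connect mail.example.com:143 -starttls imap

If the handshake completes here but not from the phone, the problem is more likely the client's SSL stack or an unsupported cipher than the Dovecot configuration.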
Re: [Dovecot] deliver vs lda
Timo Sirainen wrote:
> deliver is the binary name, but it's configured inside the protocol lda {} section. This is getting annoying; any thoughts on what would be a good unifying name?
>
> c) dovecot-lda binary, protocol lda {}

I'd vote for C as well.
Re: [Dovecot] New messages notification - (IMAP PUSH?)
On 5/4/2009 4:21 AM, Proskurin Kirill wrote:
> Hello all. I have a strange problem. As I understand it, IMAP uses a server-to-client command to tell the client that new email has arrived. All was good, but a few weeks ago I noticed that my Thunderbird does not show new messages. I thought it was a local problem (I switched distros that day), but soon my users' email clients, mostly Outlook 2007 SP1, started to confirm this problem.

Is mail.check_all_imap_folders_for_new set in your Thunderbird configuration? It's an advanced setting: Tools->Options->Advanced->General->Config Editor. If that value is not true, Thunderbird only checks the currently selected folder and the inbox for new messages. A similar issue was brought up in the recent thread titled "Imap notifications in IDLE".

Jonathan
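For reference, the same preference can also be set directly in the prefs.js file of the Thunderbird profile while Thunderbird is closed (the profile path varies per install); a one-line sketch:

user_pref("mail.check_all_imap_folders_for_new", true);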
[Dovecot] [OT] preferred clients
I'm getting tired of Thunderbird telling me I have unread messages in folders that haven't gotten new messages for months, so I'm looking for a new mail client. I know the problem lies with Thunderbird because everything is fine via RoundCube, and if I tell Thunderbird to rebuild its index it shows the folder correctly again. Except, of course, for a subset of the messages in my inbox that it insists were delivered at the exact time I re-indexed it, every time.

So what IMAP clients do people prefer these days? Preferably Windows or cross-platform, and it needs to have decent key bindings, because (probably like many of you) I get 100s of emails a day via lists and anything that speeds my way through them is good.

I run my own server (probably obvious, being on this list) and can install webmail clients as well. I ran SquirrelMail for a while, but although functional it's quite dated. I'm using RoundCube for access away from my systems now, but it lacks keyboard shortcut support, and trying to click one email after another with a laptop touchpad gets painful fast.

Thanks,
Jonathan
Re: [Dovecot] [OT] preferred clients
On 11/20/2009 1:27 PM, John Gateley wrote:
> Jonathan wrote:
>> So what IMAP clients do people prefer these days? Preferably windows or cross platform and it needs to have decent key bindings...
>
> Have you tried the new Thunderbird 3 beta? There was a thread on this list recently about it. It has a lot of IMAP improvements.

I'm running Beta 4 now. I could try dropping back to Thunderbird 2.x, but I don't want to have to choose between features and stability like that. I'm greedy and want both.

> Sylpheed has a new beta out as well with improved IMAP support. (Sylpheed runs on Windows and Linux, I wish it ran on Macs).

I'm giving Claws Mail, a fork of Sylpheed apparently, a try. Haven't found a way to change key bindings yet, and SHIFT-! is really awkward for marking a message unread.

Jonathan
Re: [Dovecot] [OT] preferred clients
On 11/21/2009 7:22 PM, Thomas wrote:
> Hi Jonathan,
>
>> I'm getting tired of Thunderbird telling me I have unread messages in folders that haven't gotten new messages for months so I'm looking for a new mail client. [..]
>
> Yes, it's a Thunderbird issue only. Usually that appears when you don't compact your folders (you can ask TB to compact by itself as well). 90% of the time when you have weird stuff in your folders, it's because you didn't compact your folders.

I would agree with that if it happened to folders I used actively, but I've had Thunderbird mark emails as unread in my 2003-2006 archive folder, which obviously hasn't been touched since early 2007, so something weird is going on.

As someone else noted, it may be related to the amount of email I have. I probably have nearly 100,000 messages spread across 30-40 folders right now.

Jonathan
Re: [Dovecot] [OT] preferred clients
On 11/21/2009 9:15 PM, Thomas wrote:
> Re,
>
>> As someone else noted, it may be related to the amount of email I have. I probably have nearly 100,000 messages spread across 30-40 folders right now.
>
> Close TB. Delete your .msf to recreate indexes. Start TB again and let it re-index (it will take a while). Then everything should be fine. If not, do a bug report.

MSF files deleted. The problem occurs pretty randomly, so it will be a few days to a week before I know whether that fixed it. Do you know anything about the date issue I mentioned, where TB shows emails with a date of the last time the folder was indexed instead of when the email was actually delivered?

Thanks,
Jonathan
Re: [Dovecot] [OT] preferred clients
On 11/21/2009 9:15 PM, Thomas wrote:
> Close TB. Delete your .msf to recreate indexes. Start TB again and let it re-index (it will take a while). Then everything should be fine. If not, do a bug report.

Okay, that didn't take long. I have another spurious unread message already. Should I do what it says here [1] and grab a nightly build and create an entire new profile, or should I just report with what I have? Any suggestions on what component to file the report against?

Jonathan

[1] http://www.mozilla.org/support/thunderbird/bugs
Re: [Dovecot] Moving new email from the mail spool to the inbox
Adrian Barker wrote:
> Thanks for replying. We cannot easily change the way we deliver email, as we have over 30,000 users, who use a mixture of IMAP, POP and Unix email clients, so we have to continue to deliver email to a central mail spool. The MTA that we run is Exim, which has the flexibility to deliver into the 'Inbox', but we need to remain compatible with non-IMAP mailers.

Take a look at the convert_mail plugin option. The example is:

convert_mail = mbox:/home/%u:INBOX=/var/spool/mail/%u

but something similar might help. There's some discussion in the mail archives.

-- Jonathan
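As a rough sketch of how that could look in dovecot.conf with the 1.x convert plugin (the plugin name and the home-directory layout here are assumptions; check the wiki page for your version):

mail_plugins = convert
plugin {
  convert_mail = mbox:/home/%u/mail:INBOX=/var/spool/mail/%u
}

Note that the convert plugin moves mail from the old location into the configured mail_location at login, which may or may not be what you want alongside non-IMAP clients that keep reading the spool directly.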
Re: [Dovecot] Moving new email from the mail spool to the inbox
Kenny Dail wrote:
> Adrian Barker wrote:
>> Thanks for replying. We cannot easily change the way we deliver email, as we have over 30,000 users, who use a mixture of IMAP, POP and Unix email clients, so we have to continue to deliver email to a central mail spool. The MTA that we run is Exim, which has the flexibility to deliver into the 'Inbox', but we need to remain compatible with non-IMAP mailers.
>
> This is not clear at all; why does the number of users affect the delivery method? What mailbox format are you using?

I believe Adrian is saying that only some of his 30,000 clients use IMAP and he does not want to break existing behaviour for those using Unix command-line clients.

-- Jonathan
Re: [Dovecot] Replication plans
Hi Timo,

MySQL gets around the problem of multiple masters allocating the same primary key by giving each server its own address range (e.g. the first machine uses 1, 5, 9, 13; the next one uses 2, 6, 10, 14, ...). Would this work for UIDs?

Jonathan.
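For reference, MySQL implements that interleaving with two server variables; a sketch for a two-master pair (the values are illustrative):

# my.cnf on master 1
auto_increment_increment = 2
auto_increment_offset = 1

# my.cnf on master 2
auto_increment_increment = 2
auto_increment_offset = 2

Each AUTO_INCREMENT column then steps by 2 from a different offset, so the two masters can never hand out the same key; the sequence is ascending with gaps, which is the property being proposed for UIDs here.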
Re: Dovecot on Ubuntu 20.04
> On Aug 20, 2020, at 11:53 AM, spamv...@googlemail.com wrote:
>
> Hi,
>
> is anyone using the "Bionic (18.04 LTS)" packages on Focal Fossa (20.04 LTS)? I'm not sure if it's working after the upgrade.
>
> Hans

Can you tell us a little more about what isn't working? I have the feeling that I know already, and it's related to this:

https://bugs.centos.org/view.php?id=17341

I know that says CentOS, but the same information worked for me when I did the 18.04 to 20.04 upgrade recently. Check out /etc/dovecot/conf.d/10-ssl.conf.ucf-dist too; it's mentioned in there.

I could be wrong. If it's something else, let us know. :)

— j
Core dump with search in a virtual folder with FTS enabled
Hi all,

When I try to search in a virtual mailbox with FTS enabled, the imap process core dumps.

Sample IMAP session in a virtual mailbox:

1 OK [CAPABILITY IMAP4rev1 SASL-IR LOGIN-REFERRALS ID ENABLE IDLE SORT SORT=DISPLAY THREAD=REFERENCES THREAD=REFS THREAD=ORDEREDSUBJECT MULTIAPPEND URL-PARTIAL CATENATE UNSELECT CHILDREN NAMESPACE UIDPLUS LIST-EXTENDED I18NLEVEL=1 CONDSTORE QRESYNC ESEARCH ESORT SEARCHRES WITHIN CONTEXT=SEARCH LIST-STATUS BINARY MOVE SNIPPET=FUZZY PREVIEW=FUZZY STATUS=SIZE SAVEDATE LITERAL+ NOTIFY] Logged in
2 SELECT Search/All
* FLAGS (\Answered \Flagged \Deleted \Seen \Draft)
* OK [PERMANENTFLAGS (\Answered \Flagged \Deleted \Seen \Draft \*)] Flags permitted.
* 1 EXISTS
* 0 RECENT
* OK [UNSEEN 1] First unseen.
* OK [UIDVALIDITY 1602234810] UIDs valid
* OK [UIDNEXT 2] Predicted next UID
2 OK [READ-WRITE] Select completed (0.002 + 0.000 + 0.001 secs).
2 UID SEARCH HEADER From user
Connection closed by foreign host.

When I search in INBOX, all is OK:

1 OK [CAPABILITY IMAP4rev1 SASL-IR LOGIN-REFERRALS ID ENABLE IDLE SORT SORT=DISPLAY THREAD=REFERENCES THREAD=REFS THREAD=ORDEREDSUBJECT MULTIAPPEND URL-PARTIAL CATENATE UNSELECT CHILDREN NAMESPACE UIDPLUS LIST-EXTENDED I18NLEVEL=1 CONDSTORE QRESYNC ESEARCH ESORT SEARCHRES WITHIN CONTEXT=SEARCH LIST-STATUS BINARY MOVE SNIPPET=FUZZY PREVIEW=FUZZY STATUS=SIZE SAVEDATE LITERAL+ NOTIFY] Logged in
2 SELECT INBOX
* FLAGS (\Answered \Flagged \Deleted \Seen \Draft)
* OK [PERMANENTFLAGS (\Answered \Flagged \Deleted \Seen \Draft \*)] Flags permitted.
* 1 EXISTS
* 0 RECENT
* OK [UNSEEN 1] First unseen.
* OK [UIDVALIDITY 1602234520] UIDs valid
* OK [UIDNEXT 2] Predicted next UID
* OK [NOMODSEQ] No permanent modsequences
2 OK [READ-WRITE] Select completed (0.003 + 0.000 + 0.002 secs).
2 UID SEARCH HEADER From user
* SEARCH 1
2 OK Search completed (0.002 + 0.000 + 0.001 secs).
Error log:

Oct 09 12:58:22 Panic: imap(user)<31554>: Module context fts_mailbox_list_module missing
Oct 09 12:58:22 Error: imap(user)<31554>: Raw backtrace: /usr/lib64/dovecot/libdovecot.so.0(backtrace_append+0x42) [0x7fbe88afddf2] -> /usr/lib64/dovecot/libdovecot.so.0(backtrace_get+0x1e) [0x7fbe88afdefe] -> /usr/lib64/dovecot/libdovecot.so.0(+0xec42e) [0x7fbe88b0842e] -> /usr/lib64/dovecot/libdovecot.so.0(+0xec4d1) [0x7fbe88b084d1] -> /usr/lib64/dovecot/libdovecot.so.0(i_fatal+0) [0x7fbe88a5f4ea] -> /usr/lib64/dovecot/lib20_fts_plugin.so(+0x123f7) [0x7fbe8821f3f7] -> /usr/lib64/dovecot/lib20_fts_plugin.so(+0xe2f4) [0x7fbe8821b2f4] -> /usr/lib64/dovecot/lib20_fts_plugin.so(fts_search_lookup+0xd4) [0x7fbe8821b7b4] -> /usr/lib64/dovecot/lib20_fts_plugin.so(+0x11a98) [0x7fbe8821ea98] -> dovecot/imap(imap_search_start+0x70) [0x55e251f4e620] -> dovecot/imap(cmd_search+0xd6) [0x55e251f3ea86] -> dovecot/imap(command_exec+0x64) [0x55e251f46cc4] -> dovecot/imap(+0x1ce4f) [0x55e251f44e4f] -> dovecot/imap(+0x1ced7) [0x55e251f44ed7] -> dovecot/imap(client_handle_input+0x205) [0x55e251f45365] -> dovecot/imap(client_input+0x75) [0x55e251f45925] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_call_io+0x65) [0x7fbe88b20b45] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x12b) [0x7fbe88b2249b] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0x59) [0x7fbe88b20c49] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_run+0x38) [0x7fbe88b20e88] -> /usr/lib64/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7fbe88a90393] -> dovecot/imap(main+0x332) [0x55e251f36f12] -> /lib64/libc.so.6(__libc_start_main+0xf5) [0x7fbe88670555] -> dovecot/imap(+0xf115) [0x55e251f37115]
Oct 09 12:58:22 Fatal: imap(user)<31554>: master: service(imap): child 31554 killed with signal 6 (core dumped)

Dovecot configuration (doveconf -n):

# 2.3.11.3 (502c39af9): /etc/dovecot/dovecot.conf
# OS: Linux 3.10.0-1127.19.1.el7.x86_64 x86_64 CentOS Linux release 7.8.2003 (Core)
first_valid_uid = 1000
mail_debug = yes
mail_location = maildir:~/Maildir:INDEX=MEMORY
mail_plugins = virtual fts fts_lucene
mbox_write_locks = fcntl
namespace inbox {
  inbox = yes
  location =
  prefix =
  separator = /
}
namespace virtual {
  hidden = yes
  list = no
  location = virtual:/etc/dovecot/virtual:INDEX=~/virtual
  prefix = Search/
  separator = /
  subscriptions = no
  type = private
}
passdb {
  args = scheme=PLAIN username_format=%u /etc/dovecot/users
  driver = passwd-file
}
plugin {
  fts = lucene
  fts_lucene = whitespace_chars=@.
}
userdb {
  args = username_format=%u /etc/dovecot/users
  driver = passwd-file
}

Backtrace:

#0 0x7fbe88684387 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:55
#1 0x7fbe88685a78 in __GI_abort () at abort.c:90
#2 0x7fbe88b083e7 in default_fatal_finish (status=0, type=) at failures.c:459
#3 fatal_handler_real (ctx=, format=, args=) at failures.c:471
#4 0x7fbe88b084d1 in i_internal_fatal_handler (ctx=, format=, args=) at failures.c:848
#5 0x7fbe88a5f4ea in i_panic (format=format@entry=0x7fbe88225058 "Module context fts_mailbox_list_module missing") at
Virtual folder with 'younger' not updating
I have a virtual folder set up with the following in dovecot-virtual:

INBOX
INBOX.temp
ALL YOUNGER 86400

The problem is that messages older than 86,400 seconds (24 hours) never disappear from the folder. Is there any fix for this apart from adding INDEX=MEMORY to the mailbox location in the config file? I'm running Dovecot 2.2.36 on CentOS.

Regards,
-- Jonathan
Re: Virtual folder with 'younger' not updating
On 30/12/2020 06:25, Aki Tuomi wrote:
> On 29/12/2020 21:15 Jonathan Casiot wrote:
>> I have a virtual folder set up with the following in dovecot-virtual:
>> [...]
>> The problem is that messages older than 86,400 seconds (24 hours) never disappear from the folder.
>
> Hi! This is a known bug, we are tracking it as DOP-281. The indexes get updated when new mail is received.
>
> Aki

Hi,

In my case new mails are shown in the virtual folder, so presumably these have been added to the index, but those over 24 hours old are not removed - a 'doveadm search' of the mailbox still shows messages that are older than 24 hours.

Regards,
-- Jonathan
Re: Virtual folder with 'younger' not updating
On 30/12/2020 10:00, Aki Tuomi wrote:
> On 30/12/2020 11:52 Jonathan Casiot wrote:
>> In my case new mails are shown in the virtual folder, so presumably these have been added to the index, but those over 24 hours old are not removed - a 'doveadm search' of the mailbox still shows messages that are older than 24 hours.
>
> Can you try expunging something from the folder, as an experiment?
>
> Aki

An expunge works, e.g. 'before 40h' - those messages are now no longer visible either in the mail client or a 'doveadm search'.

-- Jonathan
Re: Virtual folder with 'younger' not updating
On 30/12/2020 10:22, Aki Tuomi wrote:
> On 30/12/2020 12:11 Jonathan Casiot wrote:
>> An expunge works, e.g. 'before 40h' - those messages are now no longer visible either in the mail client or a 'doveadm search'.
>
> I was thinking that you'd expunge *one* mail and see if the indexes get updated.
>
> Aki

I expunged three!

-- Jonathan
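Until DOP-281 is fixed, a scheduled expunge along the lines of what worked above could serve as a workaround; a sketch, assuming the backing folder is INBOX.temp and the user is jonathan (adjust both, and note this really deletes the matching messages):

doveadm expunge -u jonathan mailbox INBOX.temp before 40h

Run from cron, that keeps the folder's contents within the window the virtual folder is supposed to show, so stale entries cannot linger in the index.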
sieve passing body as arguments to executed script
I ran into an issue when trying to wildcard-match the email body, where the variable was empty, which led me to extracttext and foreverypart. However, I am running into an odd error. When I deliver an email with the body 'this is the body text of email', it dies with the following error:

sogo: line 236: error: specified :args item `this is the body text of email??' is invalid

The sieve in question:

if header :matches "Subject" "*" {
  set "emailsubject" "${1}";
}
if header :matches "From" "*" {
  set "emailfrom" "${1}";
}
foreverypart {
  if header :mime :type :is "Content-Type" "text" {
    extracttext :first 80 "emailbody";
    break;
  }
}
execute "script.php" ["${emailfrom}", "${emailsubject}", "${emailbody}"];

I tried downloading the log to see if ?? was a character not rendering correctly in the console, but local text editors show the same. I'm not sure where it is coming from or what it is. I tried :quotewildcard with

set :quotewildcard "quotedbody" "${emailbody}";

and passed "${quotedbody}" as an argument, but the same error was appearing in the logs. If I'm approaching this problem incorrectly: the end result I want is to pass the entire contents of the email to the script.

- Jonathan
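The trailing ?? looks like the line ending at the end of the extracted text being rejected by the :args validation, though that is a guess. One way to avoid passing the body as an argument at all (a sketch, assuming the Pigeonhole sieve-extprograms plugin, which provides vnd.dovecot.execute) is to hand the text to the program on standard input instead:

require ["vnd.dovecot.execute", "variables"];
execute :input "${emailbody}" "script.php" ["${emailfrom}", "${emailsubject}"];

script.php would then read the body from stdin, which sidesteps both argument quoting and control characters in the extracted text.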
Re: [Dovecot] Can anyone tell me quick, if the howto on dovecot.org is actual:
Marko Weber said the following on 2/1/12 7:11 AM:
> Is this Howto http://wiki2.dovecot.org/HowTo/DovecotLDAPostfixAdminMySQL still current, also for Dovecot 2.x? Because I read Dovecot 1.x in the Howto. Thank you for any hints / tipps.

My gut feeling is version 1.2, because it was last modified on 2010-06-29 11:37:39 (see bottom of page), which is in the 1.2 days (http://www.dovecot.org/oldnews.html).

-Jonathan
[Dovecot] Telephone systems and Dovecot
Hi folks,

We're looking to integrate our telephone system with our email system. The telephone system will use IMAP4 to store WAV files in a user's mailbox and then retrieve them for playing if necessary. This is usually called "unified messaging". The manufacturers are claiming full integration with Microsoft Exchange and Lotus Notes using IMAP4 and a single username and password to access all the users' mailboxes.

I can see that Dovecot has a master user feature that looks like it will do the job. Has anyone had any experience of using Dovecot as a unified messaging server for a telephone system, and has anyone any experience of configuring Dovecot so that an MS Exchange IMAP4 client using a master user can use Dovecot without changing the client?

Jon.
Re: [Dovecot] Telephone systems and Dovecot
> Sorry, couldn't grok this last part. Isn't MS exchange IMAP4 IMAP? And why would you want to allow the client to login with a master user? Anyway, you can put "loginuser*masteruser" and "masterpassword" in the username and password boxes on the client, as explained by the documentation.

I might not have asked that last question in the clearest way. The telephone systems access MS Exchange and Lotus Notes using the IMAP4 interface of Exchange/Notes and a single (presumably master) username and password. The telephone system then adds voicemail messages (as a WAV attachment to an ordinary message) to the user's inbox, and if the user dials in for their voicemail, it can access the INBOX to find the voicemail messages and play them back. From the user's perspective they can either get their voicemail by dialing in, or by reading their inbox and playing the WAV files.

I was wondering whether anyone had any documentation on the differences between using Dovecot with a master user and using Exchange/Notes with a master user. In other words, if the telephone exchange says it fully supports Exchange/Notes, how much work should I expect in getting it to work with Dovecot?

Jon.
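On the Dovecot side, the master-user piece is roughly this (a sketch based on the wiki's MasterUser page; the file path is a placeholder and the exact syntax differs between 1.x and 2.x):

# dovecot.conf
auth_master_user_separator = *
passdb {
  driver = passwd-file
  args = /etc/dovecot/master-users
  master = yes
}

The telephone system would then authenticate as "realuser*masteruser" with the master password, exactly as in the quoted advice above; whether the phone system's Exchange integration can be pointed at that login form is the part to verify with the vendor.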
[Dovecot] dovecot 1.1.1 compilation errors (was: Re: [Dovecot-news] v1.1.1 released)
1.1.x doesn't compile for me (rPath Linux 2 and Foresight Linux 2):

rquota_xdr.c:125: error: expected declaration specifiers or ‘...’ before ‘gqr_status’
rquota_xdr.c:126: warning: no previous prototype for ‘xdr_gqr_status’
rquota_xdr.c: In function ‘xdr_gqr_status’:
rquota_xdr.c:129: error: ‘objp’ undeclared (first use in this function)
rquota_xdr.c:129: error: (Each undeclared identifier is reported only once
rquota_xdr.c:129: error: for each function it appears in.)
rquota_xdr.c: In function ‘xdr_getquota_rslt’:
rquota_xdr.c:139: error: too many arguments to function ‘xdr_gqr_status’
make[4]: *** [rquota_xdr.lo] Error 1
make[4]: Leaving directory `/home/smithj/build/dovecot/dovecot-1.1.1/src/plugins/quota'
make[3]: *** [all-recursive] Error 1
make[3]: Leaving directory `/home/smithj/build/dovecot/dovecot-1.1.1/src/plugins'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/home/smithj/build/dovecot/dovecot-1.1.1/src'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/home/smithj/build/dovecot/dovecot-1.1.1'
make: *** [all] Error 2

I do *not* get this error with 1.0.15. Please reply-to-all, as I'm not on the dovecot@dovecot.org list.

smithj
Re: [Dovecot] dovecot 1.1.1 compilation errors
Timo Sirainen wrote:
> Does it work if you change that to:
>
> #include "rquota.h"

Yes, it does. Thanks.

smithj
[Dovecot] Dovecot & (Al)pine - resaving messages to Inbox
I have an x86 Linux box running Fedora 4. Up until 2008 I was using pine 4.64 and Dovecot 0.99.x. In early 2008 I transitioned to Alpine 1.10 and didn't notice any major changes. I upgraded to Dovecot 1.0.15 this past weekend. My dovecot -n settings are at the bottom of the email.

I've noticed a change since Dovecot was upgraded. I used to reorder items within my INBOX in al/pine by "saving" messages back to the INBOX, to which I was connecting via IMAP. Now when I try to do it I get an error in alpine that says:

[Can't copy mails inside same folder]

then:

[Save to folder "{localhost:143/i...r=jherbach}INBOX" FAILED]

While these are alpine messages, they appear to be from the dovecot upgrade. I really miss the ability to reorder messages in my inbox, since I often use it as a queue of important items. Does anybody have any ideas why I'm seeing this error message? Or better yet, how to potentially (re)configure the system to allow me to resave messages into the same folder?

Jonathan

---
My settings:

# 1.0.15: /usr/local/etc/dovecot.conf
protocols: imap pop3 imaps pop3s
ssl_cert_file: /etc/pki/dovecot/dovecot.pem
ssl_key_file: /etc/pki/dovecot/private/dovecot.pem
disable_plaintext_auth: no
login_dir: /var/run/dovecot-login
login_executable(default): /usr/local/libexec/dovecot/imap-login
login_executable(imap): /usr/local/libexec/dovecot/imap-login
login_executable(pop3): /usr/local/libexec/dovecot/pop3-login
first_valid_gid: 100
last_valid_gid: 100
mail_location: mbox:%h/mail:INBOX=%h/Mailbox
mbox_write_locks: fcntl
mail_executable(default): /usr/local/libexec/dovecot/imap
mail_executable(imap): /usr/local/libexec/dovecot/imap
mail_executable(pop3): /usr/local/libexec/dovecot/pop3
mail_plugin_dir(default): /usr/local/lib/dovecot/imap
mail_plugin_dir(imap): /usr/local/lib/dovecot/imap
mail_plugin_dir(pop3): /usr/local/lib/dovecot/pop3
pop3_no_flag_updates(default): no
pop3_no_flag_updates(imap): no
pop3_no_flag_updates(pop3): yes
pop3_uidl_format(default):
pop3_uidl_format(imap):
pop3_uidl_format(pop3): %08Xu%08X
pop3_client_workarounds(default):
pop3_client_workarounds(imap):
pop3_client_workarounds(pop3): outlook-no-nuls
auth default:
  passdb:
    driver: pam
  userdb:
    driver: passwd
Re: [Dovecot] Dovecot & (Al)pine - resaving messages to Inbox
> I suppose the OP does not sort by date but Arrival time (this means arrival time in the mailbox).

Exactly. I sort by arrival time, which in practice is the "unsorted" ordering based upon the mbox file itself. (Except I'd like to control the arbitrary "unsorted" order, which is what I want "resave to inbox" for, to handle reordering.)

Nicolas wrote:
> - a quick grep shows this is 'normal' behaviour, as written in src/lib-storage/index/mbox/mbox-save.c
>
> if (mbox->mbox_lock_type == F_RDLCK) {
> 	/* FIXME: we shouldn't fail here. it's just
> 	   a locking issue that should be possible to
> 	   fix.. */
> 	mail_storage_set_error(storage, MAIL_ERROR_NOTPOSSIBLE,
> 		"Can't copy mails inside same mailbox");
> 	return -1;
> }

So I'm currently using:

mbox_write_locks: fcntl

When I was using version 0.99, the relevant config setting was "mbox_locks = fcntl", which, plus the config suggestion "fcntl: Use this if possible", led me down this path. Does anybody know why this code was designed this way / why it is different from 0.99 behavior / etc.?

Jonathan
[Dovecot] [dovecot] Enable logging of all client commands in dovecot-1.2.alpha3
Hello,

I would like to log all IMAP client commands sent to Dovecot. The format would be: time, pid, command, arguments. I reviewed http://wiki.dovecot.org/Logging and started digging through dovecot-1.2.alpha3/src/master. I don't need this turned on all the time, just enough to see how clients do things, and I don't need to see passwords. Any tips would be appreciated.

-Jonathan
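One mechanism that may fit is rawlog (a sketch based on the wiki's debugging page for 1.x; double-check the binary path for your build). It wraps the post-login mail process, so no passwords are captured, but the full command stream is:

# dovecot.conf: wrap the imap process with the rawlog binary
mail_executable = /usr/local/libexec/dovecot/rawlog /usr/local/libexec/dovecot/imap

Logging is then opt-in per user: create a dovecot.rawlog/ directory in the user's home directory, and the client's in/out protocol streams are written there as timestamped files.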
[Dovecot] [dovecot] INDEX variable and mbox_snarf plugin
Is there a way to tell the Dovecot mbox_snarf plugin to use an alternate location for the index/cache files? It doesn't seem to want to use the INDEX variable. I'm guessing the answer is no, because of all the ties in lib-storage/index/mbox/*.c to the directory where the file (that is, the inbox) lives. The filesystem where I keep the inbox file doesn't have a directory for the user to own; the user only owns the mbox file.

Thanks,
Jonathan
Re: [Dovecot] [dovecot] INDEX variable and mbox_snarf plugin
Since I didn't get any bytes for this question, I'll pose it a different way. From http://wiki.dovecot.org/MailLocation#indexfiles :

==
Index files

Index files are by default stored under the same directory as mails. With maildir they are stored in the actual maildirs, with mbox they are stored under the .imap/ directory. You may want to change the index file location if you're using NFS or if you're setting up shared mailboxes.

You can change the index file location by adding :INDEX=<path> to mail_location. For example:

mail_location = maildir:~/Maildir:INDEX=/var/indexes/%u

The index directories are created automatically, but note that this requires that Dovecot actually has access to create the directories. Either make sure that the index root directory (/var/indexes in the above example) is writable to the logged-in user, or create the user's directory with proper permissions before the user logs in. If you really want to, you can also disable the index files completely by appending :INDEX=MEMORY.
==

How are people setting the INDEX for mbox_snarf if you are using NFS? I'm guessing I will just be rewriting lib-storage/index/mbox/ to have a choice of directories, or creating 2 new files for mbox_snarf that look a lot like the mbox driver index code. Was there a reason not to have a variable for index files for mbox_snarf?

Thanks,
Jonathan
Re: [Dovecot] How to specify /var/spool/mail/m/i/mike
On Nov 10, 2008, at 12:25 PM, Network Operations wrote:
> I was wondering if somebody could tell me how I can tell dovecot (IMAP) to read users' mail boxes at /var/spool/mail/{1st char}/{2nd char}/username. For example, the mailbox for user 'mike' would be located at /var/spool/mail/m/i/mike, 'karen' would be at /var/spool/mail/k/a/karen, etc...

Look at http://wiki.dovecot.org/Variables. I think the notation would be %1u/%1.1u/%u

-Jonathan
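Put together, the mail_location might look like this (a sketch; it assumes mbox format and a home-relative folder root, so adjust both to taste):

mail_location = mbox:~/mail:INBOX=/var/spool/mail/%1u/%1.1u/%u

Here %1u expands to the first character of the username and %1.1u to the second, giving /var/spool/mail/m/i/mike for user 'mike'.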
[Dovecot] [dovecot] Pre-populate index files for mbox
Is there a program written that can be used to build the index files for a given mbox file without using IMAP/POP? If not, I'll be happy to donate it when I'm done. The args would be the userid, the full path to the mbox, and where to put the index files.

-Jonathan
Re: [Dovecot] [dovecot] Pre-populate index files for mbox
Mark Zealey said the following on 11/25/08 11:31 AM:
> As a quick hack, surely you could deliver a dummy file to the inbox and then login over pop/imap and remove it?
>
> Mark

Sure. This is for 85k users with about 50 folders each..
Re: [Dovecot] [dovecot] Pre-populate index files for mbox
Timo Sirainen said the following on 11/25/08 12:01 PM:
> On Nov 25, 2008, at 6:28 PM, Jonathan Siegle wrote:
>> Is there a program written that can be used to build the index files for a given mbox file without using IMAP/POP? If not I'll be happy to donate it when I'm done. The args would be the userid, the full path to the mbox and where to put the index files.
>
> There's a difference between IMAP and POP3: With POP3 you really want to have message sizes cached. With IMAP you might not want to cache any fields, but depending on the client you might want to cache many different fields. That's not really easy to figure out beforehand (unless you just cache pretty much everything). Anyway, assuming you want POP3 to be fast for most people, this'll do it:
>
> export MAIL=mbox:/tmp:INBOX=/path/to/mbox/file
> echo "quit" | /usr/local/libexec/dovecot/pop3

I'm actually doing this step for our Webmail application. It has its own set of toc files and also has native access to the filesystems. I want to migrate to the toc files in dovecot instead. I still don't think it would be a bad idea to create a mailutil-type application for dovecot.

Thanks,
Jonathan
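Scaled up to the 85k-user case, Timo's trick could be wrapped in a loop along these lines (a sketch; the user list file, spool layout, and binary path are assumptions, and the mbox location spec should match your production mail_location so the indexes land where the real config expects them):

#!/bin/sh
# Pre-warm Dovecot's index/cache files for each user's INBOX
# by running the pop3 binary once per user.
while read user; do
  echo "quit" | MAIL="mbox:/tmp:INBOX=/var/spool/mail/14/$user" \
    /usr/local/libexec/dovecot/pop3
done < userlist.txt

It also needs to run with enough privilege to read each mbox, or be su'd to each user in turn.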
[Dovecot] Imap logging and inetd
When I ran /usr/local/sbin/dovecot, the variable login_log_format_elements from dovecot.conf was honored. Now, when I run /usr/local/libexec/dovecot/imap-login from /etc/inetd.conf, it isn't. How do I get imap-login to write log lines that use login_log_format_elements for the format? This is dovecot-1.2.alpha3, soon to be alpha4.

Thanks,
Jonathan
Re: [Dovecot] Imap logging and inetd
Jonathan Siegle said the following on 12/2/08 7:16 AM:
> When I ran /usr/local/sbin/dovecot, the variable login_log_format_elements from dovecot.conf was honored. Now when I run /usr/local/libexec/dovecot/imap-login from /etc/inetd.conf, it isn't. How do I get imap-login to write log lines that use login_log_format_elements for the format? This is dovecot-1.2.alpha3 soon to be alpha4.

I found where to get my desired result in src/master/mail-process.c.

-Jonathan
[Dovecot] Roadmap
I've been running 1.2alpha3 for weeks now without issue. However, some people in my testing department object to even testing "alpha" code. Are there specific features/bugs that need to be resolved before alpha reaches beta?

Thanks,
Jonathan
Re: [Dovecot] Imap logging and inetd
On Dec 13, 2008, at 10:54 PM, Timo Sirainen wrote:
> On Tue, 2008-12-02 at 13:48 -0500, Jonathan Siegle wrote:
>> Jonathan Siegle said the following on 12/2/08 7:16 AM:
>>> When I ran /usr/local/sbin/dovecot, the variable login_log_format_elements from dovecot.conf was honored. Now when I run /usr/local/libexec/dovecot/imap-login from /etc/inetd.conf, it isn't. [...]
>>
>> I found where to get my desired result in src/master/mail-process.c.
>
> Huh? mail-process.c only affects post-login imap/pop3. What did you change?

The goal was to print a line with the user and the IP address with a date/time stamp, so I send those to syslog from here. Is that bad information? I haven't gotten bad info from it yet..

-Jonathan
Re: [Dovecot] SSL cert problems.
On Dec 29, 2008, at 2:31 PM, Geoff Sweet wrote:
> So my conf looks similar to yours:
>
> # Disable SSL/TLS support.
> #ssl_disable = no
> ssl_cert_file = /etc/pki/dovecot/certs/pop.x10.com.cer
> ssl_key_file = /etc/pki/dovecot/private/pop.x10.com.key
> # If key file is password protected, give the password here. Alternatively
> # give it when starting dovecot with -p parameter.
> #ssl_key_password =
> # File containing trusted SSL certificate authorities. Usually not needed.
> # The CAfile should contain the CA-certificate(s) followed by the matching
> # CRL(s). CRL checking is new in dovecot .rc1
> ssl_ca_file = /etc/pki/verisign/intermediate_ca.cer

Reading the OpenSSL book on page 120 (chapter 5), it says that you should have the whole chain in one file. I see that if you are using the SSL_CTX_use_certificate_chain_file function (as dovecot 1.2alpha4 ./login-common/ssl-proxy-openssl.c does), you just need to put the whole chain in one file, with your certificate FIRST and the intermediate SECOND. The book also claims that you should put the root certificate in here. I have seen conflicting documentation on putting the root cert in here because, as another poster mentioned, you will never send it out. I may have missed a post that had my info above, so sorry if I'm giving already-provided information.

-Jonathan

> # Request client to send a certificate.
> #ssl_verify_client_cert = no
>
> and the ssl_ca_file is a copy and paste from this:
> http://www.verisign.com/support/verisign-intermediate-ca/extended-validation/index.html
>
> Yet the cert still doesn't work. And the OpenSSL people are telling me this is an issue with my application, dovecot. For reference this is all that is in my /etc/pki/verisign/intermediate_ca.cer:
>
> -----BEGIN CERTIFICATE-----
> MIIFEzCCBHygAwIBAgIQV7/7A/ssRtThns7g10N/EzANBgkqhkiG9w0BAQUFADBf
> MQswCQYDVQQGEwJVUzEXMBUGA1UEChMOVmVyaVNpZ24sIEluYy4xNzA1BgNVBAsT
> LkNsYXNzIDMgUHVibGljIFByaW1hcnkgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkw
> HhcNMDYxMTA4MDAwMDAwWhcNMjExMTA3MjM1OTU5WjCByjELMAkGA1UEBhMCVVMx
> FzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQLExZWZXJpU2lnbiBUcnVz
> dCBOZXR3b3JrMTowOAYDVQQLEzEoYykgMjAwNiBWZXJpU2lnbiwgSW5jLiAtIEZv
> ciBhdXRob3JpemVkIHVzZSBvbmx5MUUwQwYDVQQDEzxWZXJpU2lnbiBDbGFzcyAz
> IFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5IC0gRzUwggEi
> MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCvJAgIKXo1nmAMqudLO07cfLw8
> RRy7K+D+KQL5VwijZIUVJ/XxrcgxiV0i6CqqpkKzj/i5Vbext0uz/o9+B1fs70Pb
> ZmIVYc9gDaTY3vjgw2IIPVQT60nKWVSFJuUrjxuf6/WhkcIzSdhDY2pSS9KP6HBR
> TdGJaXvHcPaz3BJ023tdS1bTlr8Vd6Gw9KIl8q8ckmcY5fQGBO+QueQA5N06tRn/
> Arr0PO7gi+s3i+z016zy9vA9r911kTMZHRxAy3QkGSGT2RT+rCpSx4/VBEnkjWNH
> iDxpg8v+R70rfk/Fla4OndTRQ8Bnc+MUCH7lP59zuDMKz10/NIeWiu5T6CUVAgMB
> AAGjggHeMIIB2jAPBgNVHRMBAf8EBTADAQH/MDEGA1UdHwQqMCgwJqAkoCKGIGh0
> dHA6Ly9jcmwudmVyaXNpZ24uY29tL3BjYTMuY3JsMA4GA1UdDwEB/wQEAwIBBjBt
> BggrBgEFBQcBDARhMF+hXaBbMFkwVzBVFglpbWFnZS9naWYwITAfMAcGBSsOAwIa
> BBSP5dMahqyNjmvDz4Bq1EgYLHsZLjAlFiNodHRwOi8vbG9nby52ZXJpc2lnbi5j
> b20vdnNsb2dvLmdpZjA9BgNVHSAENjA0MDIGBFUdIAAwKjAoBggrBgEFBQcCARYc
> aHR0cHM6Ly93d3cudmVyaXNpZ24uY29tL2NwczAdBgNVHQ4EFgQUf9Nlp8Ld7Lvw
> MAnzQzn6Aq8zMTMwNAYDVR0lBC0wKwYJYIZIAYb4QgQBBgpghkgBhvhFAQgBBggr
> BgEFBQcDAQYIKwYBBQUHAwIwgYAGA1UdIwR5MHehY6RhMF8xCzAJBgNVBAYTAlVT
> MRcwFQYDVQQKEw5WZXJpU2lnbiwgSW5jLjE3MDUGA1UECxMuQ2xhc3MgMyBQdWJs
> aWMgUHJpbWFyeSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eYIQcLrkHRDZKTS2OMp7
> A8y6vzANBgkqhkiG9w0BAQUFAAOBgQCpe2YpMPfVtKaWEtDucvBYEWkVVV9B/9IS
> hBOk2QNm/6ngTMntjHKLtNdVOykVYMg8Ie9ELpM9xgsMjSQ/HvsBWnrdg2YU0cf9
> MFNIUYWFE6hU4e52ookY05eJesb9s72UYVo6CM8Uk72T/Qmpe1bIALhEWOneW3e9
> BxxsCzAwxw==
> -----END CERTIFICATE-----
>
> Like I said, just a copy and paste from the Verisign site. Any thoughts?
>
> -Geoff
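For anyone building that combined file, the concatenation the book describes is just this (a sketch; the filenames are placeholders for your own server cert and the CA's intermediate):

# server certificate first, then the intermediate(s)
cat pop.x10.com.cer intermediate_ca.cer > /etc/pki/dovecot/certs/chain.pem

and then point ssl_cert_file at the combined chain.pem instead of at the bare server cert.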
Re: [Dovecot] New SSL certificate problem
On Jan 5, 2009, at 2:50 PM, Stewart Dean wrote:
> Although I was told by Digicert that the order of chained certs in /var/ssl/certs/dovecot.pem should make no difference, after I put our public cert first, followed by Digicert's intermediate cert, dovecot started up fine. Of course, there were so many things I looked into, it might have been something else I touched..

Stewart, I posted this answer last week in another thread (12/29/2008, Subject: SSL cert problems.). Yes, order seems to be important. I found this answer in the OpenSSL book on page 120.

-Jonathan

> Stewart Dean wrote:
>> Our DC has been using a Verisign certificate. Over the past year, we've been using a Digicert Wildcard Plus certificate for almost all of our machines, and I wanted to switch over our DC mailserver. I used the following command to generate the CSR and key:
>>
>> openssl req -new -newkey rsa:1024 -nodes -out star_bard_edu.csr -keyout star_bard_edu.key -subj "/C=US/ST=NY/L=ourtown/O=Bard College IT/OU=Bard College /CN=*.bard.edu"
>>
>> The resultant CSR verified and I submitted it to Digicert and got back our cert, plus their intermediate and trusted root certs. I killed the root instance of dovecot and waited for all the children to die. I put together the intermediate cert (first) and our cert (second) into /usr/ssl/certs/dovecot.pem. I put the key star_bard_edu.key in /var/ssl/private/dovecot.pem. I restarted dovecot, but the imap login instances didn't appear, so I shifted back to the original combined cert file and key, restarted dovecot and it came up OK. I checked the syslog and saw these error messages:
>>
>> Jan 5 10:19:49 mercury mail:err|error dovecot: imap-login: Can't load private key file /var/ssl/private/dovecot.pem: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch
>> Jan 5 10:19:49 mercury mail:err|error last message repeated 8 times
>> Jan 5 10:19:49 mercury mail:err|error dovecot: child 4051108 (login) returned error 89
>> Jan 5 10:19:49 mercury mail:err|error dovecot: child 4231382 (login) returned error 89
>>
>> I checked my key and it has the same time stamp as my CSR, so I didn't somehow get the wrong key. Both the old and new key are 600; if the old one works based on perms, the new one should too. Would some kind soul tell me what I'm missing? Or is there a problem using a wildcard certificate with DC? Is there an openssl command to verify the key? Or is it that the key is unencrypted?
>>
>> -- Once upon a time, the Internet was a friendly, neighbors-helping-neighbors small town, and no one locked their doors. Now it's like an apartment in Bed-Stuy: you need three heavy duty pick-proof locks, one of those braces that goes from the lock to the floor, and bars on the windows
>> Stewart Dean, Unix System Admin, Bard College, New York 12504 sd...@bard.edu voice: 845-758-7475, fax: 845-758-7035
Re: [Dovecot] [dovecot] INDEX variable and mbox_snarf plugin
On Jan 15, 2009, at 6:47 PM, Timo Sirainen wrote:
> Sorry for the late reply.
>
> On Thu, 2008-11-06 at 13:39 -0500, Jonathan Siegle wrote:
>> Is there a way to tell the dovecot mbox_snarf plugin to use an alternate location for the index/cache files? It doesn't seem to want to use the INDEX variable. I'm guessing the answer is no because of all the ties in lib-storage/index/mbox/*c to the directory where the file (that is the inbox) lives. The filesystem where I keep the inbox file doesn't have a directory for the user to own. The user only owns the mbox file.
>
> It's not possible to specify the index location because of how the plugin has been implemented. The best you could easily do by modifying sources is to disable indexes for INBOX. Other than that it would require larger changes, perhaps a full rewrite.

That's what I gleaned from the source. For now, I will go forward with adding another 300k files to the filesystem..

Thanks,
Jonathan
Re: [Dovecot] I've moved to US
On Feb 6, 2009, at 9:49 PM, Timo Sirainen wrote:
> On Feb 6, 2009, at 8:29 PM, Ron Wilhoite wrote:
>> Congratulations! Wow, Finland to Blacksburg. That could make for some interesting 'culture shock' posts.
>
> Actually I find Blacksburg to be very similar to Finland. I haven't really had any shocks. Just some small annoyances how some things are better/easier in Finland :)

Wow! Welcome, my new neighbor to the south..

http://maps.google.com/maps?f=d&source=s_d&saddr=State+College,+PA&daddr=Blacksburg,+Va&hl=en&geocode=&mra=ls&sll=37.0625,-95.677068&sspn=53.167773,96.503906&ie=UTF8&z=7
Re: [Dovecot] I've moved to US
On Feb 10, 2009, at 4:24 PM, Jerry wrote:
> On Tue, 10 Feb 2009 15:44:09 -0500 (EST) Kyle George wrote:
>> On Mon, 9 Feb 2009, Timo Sirainen wrote:
>>> All bread tastes weird, I'm not sure why.
>>
>> You have to get "Arthur Avenue" bread.
>
> See if you can locate a local bakery. The regular 'commercial' bread is loaded with preservatives.

Agreed wrt preservatives. I'm partial to Martin's Famous Whole Wheat Potato Bread (http://www.potatoroll.com/pages/products.asp) when I don't have time to make my own bread for the week.

>>> It could be the water. But there are some positive things, like grocery stores being open 24h and, when driving, turning right is allowed on a red light :)
>>
>> Careful: there's no right-on-red in New York City (and there are no signs about it!).
>
> There are on the West Side Drive. I see them every morning when I have to drive into the city.

They are usually located at the city borders.

> --
> Jerry ges...@yahoo.com
> Technological progress has merely provided us with more efficient means for going backwards. Aldous Huxley
[Dovecot] mbox snarf plugin + idle
I'm having a problem with mbox_snarf not looking at /var/spool/mail/ when in idle mode, thus never giving me a RECENT line even though there are new messages in /var/spool/mail/. Here are the imap commands to reproduce the problem:

1 login userid password
2 select inbox
3 idle

When I run "select inbox" it does see my messages in /var/spool/mail/ and moves them over fine. When I truss the process, I see it only running stat calls on my "mbox-snarf" file. To get new messages I issue DONE, CLOSE, and SELECT INBOX.

I'm not sure why it is reporting alpha5 two lines below. I did an hg pull just the other day and see 1.2.beta1 in the output of hg tags.

# /usr/ladmin2/sbin/dovecot -n
# 1.2.alpha5: /usr/ladmin2/etc/dovecot.conf
Warning: fd limit 2000 is lower than what Dovecot can use under full load (more than 4224). Either grow the limit or change login_max_processes_count and max_mail_processes settings
# OS: AIX 3 0001112AD300
syslog_facility: local0
protocols: imap
listen: *:
ssl: no
disable_plaintext_auth: no
login_dir: /usr/ladmin2/var/run/dovecot/login
login_executable: /usr/ladmin2/libexec/dovecot/imap-login
login_greeting: Dovecot ready.
login_processes_count: 30
max_mail_processes: 4096
mail_location: mbox:%h/new:INBOX=/var/spool/mail/14/%u
mmap_disable: yes
dotlock_use_excl: no
mbox_write_locks: fcntl
mbox_lazy_writes: no
mail_plugins: mbox_snarf
imap_client_workarounds: delay-newmail
auth default:
  mechanisms: plain gssapi
  krb5_keytab: /etc/krb5/dovecot.keytab
  gssapi_hostname: $ALL
  verbose: yes
  debug: yes
  passdb:
    driver: pam
  userdb:
    driver: passwd
plugin:
  mbox_snarf: /gpfs/inbox/14/%u

Thanks,
Jonathan
[Dovecot] Fwd: mbox snarf plugin + idle
Ok. I went back and reread RFC 2177 and now I understand. Another bug in Mail.app. I'll go tell Apple.

"The IDLE command is sent from the client to the server when the client is ready to accept unsolicited mailbox update messages. The server requests a response to the IDLE command using the continuation ("+") response. The IDLE command remains active until the client responds to the continuation, and as long as an IDLE command is active, the server is now free to send untagged EXISTS, EXPUNGE, and other messages at any time."

Mail.app doesn't respond to the "+" response.

Begin forwarded message:

From: Jonathan Siegle
Date: February 12, 2009 7:55:58 AM EST
To: Dovecot Mailing List
Subject: [Dovecot] mbox snarf plugin + idle

[forwarded message snipped; it repeats the post quoted in full above]
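To make the RFC 2177 exchange concrete, a compliant dialogue looks roughly like this (C: and S: lines; the untagged updates are illustrative):

C: a002 IDLE
S: + idling
S: * 4 EXISTS
S: * 1 RECENT
C: DONE
S: a002 OK IDLE terminated

The server answers IDLE with the "+" continuation and may then push untagged updates at any time until the client terminates the command with DONE.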
[Dovecot] dovecot1.2beta2( hg tags yields 8834:5284f45c249a) fetch error
Steps to reproduce:

1 login testuser testpw
2 select inbox
3 fetch 1 body.peek[HEADER.FIELDS (date)]

I get the error:

3 BAD Error in IMAP command FETCH: Unknown FETCH modifier

This is AIX 5.3 with mbox files.

-Jonathan
Re: [Dovecot] dovecot1.2beta2( hg tags yields 8834:5284f45c249a) fetch error
On Mar 18, 2009, at 2:00 PM, Timo Sirainen wrote:
> On Wed, 2009-03-18 at 13:06 -0400, Jonathan Siegle wrote:
>> Steps to reproduce:
>>
>> 1 login testuser testpw
>> 2 select inbox
>> 3 fetch 1 body.peek[HEADER.FIELDS (date)]
>>
>> I get the error
>>
>> 3 BAD Error in IMAP command FETCH: Unknown FETCH modifier
>
> So it seems. Strange that no one had noticed it before. I thought Evolution did that, at least it used to.. Anyway, fixed: http://hg.dovecot.org/dovecot-1.2/rev/dc6880dcbbba

I've noticed it for a little while, but I didn't want to send noise. pine/alpine use this when you go to get a message that is postponed. I've just taken the time to learn how to fire up pine in debug mode and get that fetch statement out of the .pine-debug files..

Thanks, this works now:

2 fetch 1 body.peek[HEADER.FIELDS (date)]
* 1 FETCH (BODY[HEADER.FIELDS (DATE)] {41}
Date: Wed, 18 Feb 2009 15:28:46 +0000
[Dovecot] LIST command claims children exist in empty folder
Dovecot 1.2 (8834:5284f45c249a)

Should LIST return \HasChildren if no folders exist under it? I'm using mbox format.

2 create testfolder/
2 OK Create completed.
3 list "testfolder/" *
* LIST (\Noselect \HasChildren) "/" "testfolder/"
3 OK List completed.
4 list "testfolder/" %
* LIST (\Noselect \HasChildren) "/" "testfolder/"
4 OK List completed.
5 list "testfolder" %
* LIST (\Noselect \HasChildren) "/" "testfolder"
5 OK List completed.
6 list "testfolder" *
* LIST (\Noselect \HasChildren) "/" "testfolder"

Thanks,
Jonathan
Re: [Dovecot] LIST command claims children exist in empty folder
On Mar 24, 2009, at 11:36 AM, Jonathan Siegle wrote:
> Dovecot 1.2 (8834:5284f45c249a)
>
> Should LIST return \HasChildren if no folders exist under it? I'm using mbox format.
> [...]

Here is my patch:

diff -r d975ed910613 src/lib-storage/index/mbox/mbox-storage.c
--- a/src/lib-storage/index/mbox/mbox-storage.c Wed Mar 25 07:34:55 2009 -0400
+++ b/src/lib-storage/index/mbox/mbox-storage.c Wed Mar 25 07:44:52 2009 -0400
@@ -888,7 +888,7 @@
 	path = t_strconcat(dir, "/", fname, NULL);
 	if (stat(path, &st) == 0) {
 		if (S_ISDIR(st.st_mode))
-			*flags |= MAILBOX_NOSELECT | MAILBOX_CHILDREN;
+			*flags |= MAILBOX_NOSELECT ;
 		else {
 			*flags |= MAILBOX_NOINFERIORS | STAT_GET_MARKED(st);
 			if (is_inbox_file(ctx->list, path, fname) &&

With the patch:

3 list "testfolder/" *
* LIST (\Noselect \HasChildren) "/" "testfolder/"
3 OK List completed.
4 list "testfolder" %
* LIST (\Noselect \HasNoChildren) "/" "testfolder"
4 OK List completed.
5 list "testfolder" %
* LIST (\Noselect \HasNoChildren) "/" "testfolder"
5 OK List completed.
6 list "testfolder" *
* LIST (\Noselect \HasNoChildren) "/" "testfolder"
6 OK List completed.

So, slightly different behavior. Is this correct?
Re: [Dovecot] LIST command claims children exist in empty folder
On Mar 25, 2009, at 8:42 PM, Timo Sirainen wrote:
> On Wed, 2009-03-25 at 07:52 -0400, Jonathan Siegle wrote:
>> On Mar 24, 2009, at 11:36 AM, Jonathan Siegle wrote:
>>> Should LIST return \HasChildren if no folders exist under it? I'm using mbox format.
>
> Is this really a problem?..

alpine/pine can't delete empty folders, because the empty folder reports \HasChildren.

>> Here is my patch.
>
> Problem with that is that it doesn't return any children flags when using the LISTEXT command:
>
> 1 list (subscribed) "" % return (children)
>
> Fixing this would require adding new code to fs_list_subs() to scan the subdirectory if children flags are missing. list_file_subdir() handles that for non-subscription listing, but it can't be directly used for subscription listing.

OK, thanks. I'll look at that today.
Re: [Dovecot] Dovecot v2.0 hg tree
On Apr 23, 2009, at 8:04 PM, Timo Sirainen wrote:
> http://hg.dovecot.org/dovecot-2.0/
>
> I just did the initial commit for the master process rewrite, which marks the beginning of Dovecot v2.0. Several things are still missing/broken, but at least I was just able to successfully log in using imap :)
>
> I left the v1.3 hg tree there for now, but once the v2.0 tree is fully usable I'll just delete the v1.3 tree.

Timo,

Is there any reason to follow the 1.3 tree?

Thanks,
Jonathan
Re: [Dovecot] Timeout leak with dovecot version dovecot1.2(8985:f43bebab3dac)
On Apr 29, 2009, at 2:58 PM, Timo Sirainen wrote:
> On Wed, 2009-04-29 at 14:52 -0400, jsie...@psu.edu wrote:
>> This is 64bit AIX 5.3. Looking through previous versions of dovecot, I see this warning. I didn't realize this was something bad until today.
>
> It's not exactly bad. It gets logged only when the process is exiting. But it shouldn't be happening either.
>
>> local0.log.20090429:Apr 29 12:41:16 hostname dovecot: IMAP(jsiegle): Timeout leak: 1100054c0
>
> How easily can you reproduce this? For example if you do:
>
> telnet localhost 143
> 1 login user pass
> 2 select inbox
> 3 logout
>
> Does it get logged? What if you select some other mailbox instead?

Yes, it gets logged. I did your steps and reproduced it. I also did login; select; close; logout and that also gave me the error.
Re: [Dovecot] Timeout leak with dovecot version dovecot1.2(8985:f43bebab3dac)
On Apr 30, 2009, at 3:06 PM, Timo Sirainen wrote:
> On Thu, 2009-04-30 at 15:04 -0400, Jonathan Siegle wrote:
>>> telnet localhost 143
>>> 1 login user pass
>>> 2 select inbox
>>> 3 logout
>>>
>>> Does it get logged? What if you select some other mailbox instead?
>>
>> Yes, it gets logged. I did your steps and reproduced it. I also did login; select; close; logout and that also gave me the error.
>
> What about another mailbox than INBOX? If it happens only with INBOX, the problem is the mbox-snarf plugin.

It doesn't do it on folders, so login; select folder; logout doesn't produce the error.
[Dovecot] mbox-snarf plugin and istream-mail-stats.c revision 9000:b02c642b4e51
I'm getting this error:

May 1 09:09:30 tr27n12.aset.psu.edu syslog: PSU mbox-snarf name is INBOX
May 1 09:09:30 tr27n12.aset.psu.edu dovecot: Panic: IMAP(tstem38): file istream-mail-stats.c: line 75: assertion failed: (ret != -1 || stream->istream.eof || stream->istream.stream_errno != 0)

Will this error go away before the general 1.2 release?

Thanks,
Jonathan
Re: [Dovecot] mbox-snarf plugin and istream-mail-stats.c revision 9000:b02c642b4e51
On May 1, 2009, at 11:38 AM, Timo Sirainen wrote:
> On May 1, 2009, at 11:37 AM, Jonathan Siegle wrote:
>> May 1 09:09:30 tr27n12.aset.psu.edu syslog: PSU mbox-snarf name is INBOX
>> May 1 09:09:30 tr27n12.aset.psu.edu dovecot: Panic: IMAP(tstem38): file istream-mail-stats.c: line 75: assertion failed: (ret != -1 || stream->istream.eof || stream->istream.stream_errno != 0)
>>
>> Will this error go away before the general 1.2 release?
>
> Hopefully.. You can reproduce this always by selecting INBOX?

Yeah. This is on my development server. I can't get into my inbox anymore.
Re: [Dovecot] mbox-snarf plugin and istream-mail-stats.c revision 9000:b02c642b4e51
On May 1, 2009, at 2:50 PM, Timo Sirainen wrote:
> On Fri, 2009-05-01 at 11:37 -0400, Jonathan Siegle wrote:
>> I'm getting this error:
>>
>> May 1 09:09:30 tr27n12.aset.psu.edu syslog: PSU mbox-snarf name is INBOX
>> May 1 09:09:30 tr27n12.aset.psu.edu dovecot: Panic: IMAP(tstem38): file istream-mail-stats.c: line 75: assertion failed: (ret != -1 || stream->istream.eof || stream->istream.stream_errno != 0)
>
> http://hg.dovecot.org/dovecot-1.2/rev/06bd1266f0c7

Thank you so much for the two fixes (this plus http://hg.dovecot.org/dovecot-1.2/rev/66b6cd495702). I've done basic testing with no errors. Have a good weekend!
[Dovecot] dovecot 2.0 (revision 9271:d467712aee77) compile problems on AIX 5.3
Having some problems compiling on AIX 5.3 with IBM vac version 8. Programs that I had problems building:

test-mail
test-imap
test-index

To fix this undefined symbols problem:

ld: 0711-317 ERROR: Undefined symbol: .charset_to_utf8_end
ld: 0711-317 ERROR: Undefined symbol: .charset_to_utf8_begin
ld: 0711-317 ERROR: Undefined symbol: .charset_to_utf8
ld: 0711-317 ERROR: Undefined symbol: .charset_is_utf8
ld: 0711-317 ERROR: Undefined symbol: .charset_to_utf8_str
ld: 0711-317 ERROR: Undefined symbol: .message_header_decode_utf8
ld: 0711-317 ERROR: Undefined symbol: .rfc822_parser_init
ld: 0711-317 ERROR: Undefined symbol: .rfc822_skip_lwsp
ld: 0711-317 ERROR: Undefined symbol: .rfc822_parse_content_type
ld: 0711-317 ERROR: Undefined symbol: .rfc2231_parse
ld: 0711-317 ERROR: Undefined symbol: .rfc822_parse_mime_token
ld: 0711-317 ERROR: Undefined symbol: .rfc822_parse_atom
ld: 0711-317 ERROR: Undefined symbol: .message_address_parse
ld: 0711-317 ERROR: Undefined symbol: .iconv
ld: 0711-317 ERROR: Undefined symbol: .iconv_open
ld: 0711-317 ERROR: Undefined symbol: .iconv_close

I add "../lib-charset/.libs/libcharset.a -liconv" to the link rules in the Makefiles (./src/lib-imap/Makefile, ./src/lib-index/Makefile, ./src/lib-mail/Makefile) as shown below:

$ find . -name Makefile -exec egrep -p libcharset.a {} \; -print

test-imap$(EXEEXT): $(test_imap_OBJECTS) $(test_imap_DEPENDENCIES)
	@rm -f test-imap$(EXEEXT)
	$(LINK) $(test_imap_LDFLAGS) $(test_imap_OBJECTS) $(test_imap_LDADD) $(LIBS) \
	  ../lib-mail/.libs/libmail.a ../lib-charset/.libs/libcharset.a -liconv
./src/lib-imap/Makefile

test-index$(EXEEXT): $(test_index_OBJECTS) $(test_index_DEPENDENCIES)
	@rm -f test-index$(EXEEXT)
	$(LINK) $(test_index_LDFLAGS) $(test_index_OBJECTS) $(test_index_LDADD) $(LIBS) ../lib-charset/.libs/libcharset.a -liconv
./src/lib-index/Makefile

test-mail$(EXEEXT): $(test_mail_OBJECTS) $(test_mail_DEPENDENCIES)
	@rm -f test-mail$(EXEEXT)
	$(LINK) $(test_mail_LDFLAGS) $(test_mail_OBJECTS) $(test_mail_LDADD) $(LIBS) ../lib-charset/.libs/libcharset.a -liconv
./src/lib-mail/Makefile

Then compilation occurs fine.

Thanks!
Jonathan
Re: [Dovecot] Possibly dumb questions about DC and user/system limits
On May 14, 2009, at 2:17 PM, Stewart Dean wrote:

Warning: fd limit 2000 is lower than what Dovecot can use under full load (more than 2054). Either grow the limit or change login_max_processes_count and max_mail_processes settings

So I changed nofiles and nofiles_hard to 3500 and 4000 respectively for both dovecot and root. AIX defines nofiles as "Sets the soft limit for the number of file descriptors a user process may have open at one time." An lsuser dovecot returns:

dovecot id=417 pgrp=dovecot groups=dovecot shell=/bin/false daemon=true admin=false ... fsize=2097151 cpu=-1 data=262144 stack=65536 core=2097151 rss=65536 nofiles=3500 nofiles_hard=4000

I kill dovecot and all children and restart it; same error message. What am I missing? Does the machine have to be rebooted for nofiles to be updated?

AIX should not have to be rebooted for this to happen. For debug, I would change the shell to a real shell; change the fsize to 600 or something small; logout and then login (as root and then su, or just as the user) and try to make large files and verify limits. Ping me off list for more debugging unless people consider this on topic. -Jonathan
[Dovecot] namespace list not working in dovecot 1.2 revision 9027:421393827a81
In my config that has worked for some time I have:

namespace private {
  separator = /
  prefix = ~/
  hidden = yes
  list = no # for v1.1+
}

1 list "~/" *
1 OK List completed.
2 list "~/" %
2 OK List completed.

I _think_ that the last time I tested this was 2 weeks ago but it might have been 1 week ago. Thanks, Jonathan
Re: [Dovecot] Possibly dumb questions about DC and user/system limits
On May 14, 2009, at 2:17 PM, Stewart Dean wrote:

The only change was that max_mail_processes went from 1024 to 1280. Now I get an error message when I start DC: Warning: fd limit 2000 is lower than what Dovecot can use under full load (more than 2054). Either grow the limit or change login_max_processes_count and max_mail_processes settings. So I changed nofiles and nofiles_hard to 3500 and 4000 respectively for both dovecot and root. AIX defines nofiles as "Sets the soft limit for the number of file descriptors a user process may have open at one time."

I found a way to recreate the problem on my side. Since I start dovecot from inetd, I must verify my shell has the proper ulimits and then do stopsrc -s inetd; startsrc -s inetd to pick up the new ulimits. I didn't notice how you started dovecot. Maybe this helps.
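For reference, the whole sequence on AIX looks roughly like this (a sketch; the 3500/4000 values are the ones from this thread, and chuser needs root):

# raise the descriptor limits for the dovecot user
chuser nofiles=3500 nofiles_hard=4000 dovecot

# verify from a shell running as that user
ulimit -n

# restart inetd so services it spawns inherit the new limits
stopsrc -s inetd
startsrc -s inetd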
Re: [Dovecot] dovecot 2.0 (revision 9271:d467712aee77) compile problems on AIX 5.3
On May 17, 2009, at 2:42 PM, Timo Sirainen wrote: On Wed, 2009-05-13 at 15:04 -0400, Jonathan Siegle wrote: Having some problems compiling on AIX 5.3 with IBM vac version 8. Programs that I had problems building: test-mail test-imap test-index Should be fixed in hg now?

Yes, fixed. Is it too "early" to be reporting stuff like this? I now have this error (rev 9321:4c4b95def1fa):

"ssl-proxy.c", line 12.5: 1506-343 (S) Redeclaration of ssl_proxy_new differs from previous declaration on line 17 of "ssl-proxy.h".
"ssl-proxy.c", line 12.5: 1506-376 (I) Redeclaration of ssl_proxy_new has a different number of fixed parameters than the previous declaration.
"ssl-proxy.c", line 12.5: 1506-377 (I) The type "struct ssl_proxy**" of parameter 3 differs from the previous type "const struct login_settings*".
"ssl-proxy.c", line 19.5: 1506-343 (S) Redeclaration of ssl_proxy_client_new differs from previous declaration on line 19 of "ssl-proxy.h".
"ssl-proxy.c", line 19.5: 1506-376 (I) Redeclaration of ssl_proxy_client_new has a different number of fixed parameters than the previous declaration.
"ssl-proxy.c", line 19.5: 1506-377 (I) The type "int(*)(void*)" of parameter 3 differs from the previous type "const struct login_settings*".
gmake[3]: *** [ssl-proxy.lo] Error 1
gmake[3]: Leaving directory `/usr/sadmin/src/imapservers/dovecothg/dovecot-2.0psu/src/login-common'

thanks, Jonathan
Re: [Dovecot] dovecot 2.0 (revision 9271:d467712aee77) compile problems on AIX 5.3
On May 18, 2009, at 2:02 PM, Timo Sirainen wrote: On Mon, 2009-05-18 at 13:57 -0400, Jonathan Siegle wrote: Yes, fixed. Is it too "early" to be reporting stuff like this? No, it's not too early. Better early than late :) I now have this error (rev 9321:4c4b95def1fa): "ssl-proxy.c", line 12.5: 1506-343 (S) Redeclaration of ssl_proxy_new differs from previous declaration on line 17 of "ssl-proxy.h". Fixed: http://hg.dovecot.org/dovecot-2.0/rev/9c6597ba9e3e Thanks. No more compilation errors to report.
[Dovecot] Problem with pam/krb5 auth on AIX 5.3
I'm using pam to authenticate users against my krb5 realm. Here is the problem scenario:

User test2 attempts to login and their password is not expired, so dovecot says:
0 login test2 myfavoritepassword
0 OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE SORT THREAD=REFERENCES THREAD=REFS MULTIAPPEND UNSELECT IDLE CHILDREN NAMESPACE UIDPLUS LIST-EXTENDED I18NLEVEL=1 CONDSTORE QRESYNC ESEARCH ESORT SEARCHRES WITHIN CONTEXT=SEARCH] Logged in
1 logout
* BYE Logging out
1 OK Logout completed.

User test1 attempts to login, but their password is expired. So dovecot says:
0 login test1 myfavoritepassword
0 NO d expired

User test2 attempts to login and their password is not expired. But dovecot still says:
0 login test2 myfavoritepassword
0 NO d expired

If I kill the pid with name "dovecot-auth -w", user test2 can login just fine unless I login with the user test1 before trying user test2. So it seems like something is getting cached. I'm running imap-login out of inetd, in case that matters. In my dovecot.conf, I don't have any caching/authentication variables activated, and I don't see anything obvious to set in passdb pam { }. For debug, I've enabled pam for telnet and tested that without error. This is dovecot revision 9062:694714d59cd9. Looking at the logs, I see user test2 authenticate correctly in all instances. thanks, Jonathan
Re: [Dovecot] Problem with pam/krb5 auth on AIX 5.3
On May 20, 2009, at 1:38 PM, Timo Sirainen wrote: On Wed, 2009-05-20 at 13:22 -0400, Jonathan Siegle wrote: I'm using pam to authenticate users against my krb5 realm. Here is the problem scenario: I guess pam_krb5 doesn't like it if the same process tries to authenticate multiple times. Use passdb pam { args = max_requests=1 } Ah yes that is the flag I need. Thanks! Jonathan
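For reference, in dovecot 1.x configuration syntax that fix looks like this (a sketch; max_requests=1 makes each auth process handle a single PAM request before exiting, which avoids state lingering in pam_krb5 between authentications):

passdb pam {
  # work around PAM modules that misbehave when one process
  # authenticates more than once
  args = max_requests=1
}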
[Dovecot] uid fetch error with revision 9112:9d634c93d28a
This command fails:

2 uid fetch somevaliduid (BODYSTRUCTURE BODY.PEEK[])
2 BAD Error in IMAP command UID: Unknown command BODYSTRUCTURE

Thanks, Jonathan
Re: [Dovecot] uid fetch error with revision 9112:9d634c93d28a
On Jun 1, 2009, at 12:27 PM, Timo Sirainen wrote: On Mon, 2009-06-01 at 11:03 -0400, Jonathan Siegle wrote: This command fails: 2 uid fetch somevaliduid (BODYSTRUCTURE BODY.PEEK[]) 2 BAD Error in IMAP command UID: Unknown command BODYSTRUCTURE http://hg.dovecot.org/dovecot-1.2/rev/f1a6c9dd4c33 ? Nope. I can tell you that it definitely works in 1.2rc3 from May 4th and I think it _might_ have worked as of early last week. This is using mbox storage. The client doing this is OSX Mail.app version 3.6(935/935.3). Thanks, Jonathan
Re: [Dovecot] uid fetch error with revision 9112:9d634c93d28a
On Jun 1, 2009, at 12:59 PM, Jonathan Siegle wrote: On Jun 1, 2009, at 12:27 PM, Timo Sirainen wrote: On Mon, 2009-06-01 at 11:03 -0400, Jonathan Siegle wrote: This command fails: 2 uid fetch somevaliduid (BODYSTRUCTURE BODY.PEEK[]) 2 BAD Error in IMAP command UID: Unknown command BODYSTRUCTURE http://hg.dovecot.org/dovecot-1.2/rev/f1a6c9dd4c33 ? Nope. I can tell you that it definitely works in 1.2rc3 from May 4th and I think it _might_ have worked as of early last week. This is using mbox storage. The client doing this is OSX Mail.app version 3.6(935/935.3). Thanks, Jonathan I can also tell you that it worked on the May 14th revision.
Re: [Dovecot] uid fetch error with revision 9112:9d634c93d28a
On Jun 1, 2009, at 1:59 PM, Timo Sirainen wrote: On Mon, 2009-06-01 at 12:59 -0400, Jonathan Siegle wrote: On Jun 1, 2009, at 12:27 PM, Timo Sirainen wrote: On Mon, 2009-06-01 at 11:03 -0400, Jonathan Siegle wrote:

This command fails:
2 uid fetch somevaliduid (BODYSTRUCTURE BODY.PEEK[])
2 BAD Error in IMAP command UID: Unknown command BODYSTRUCTURE
5 uid fetch 4 (BODYSTRUCTURE BODY.PEEK[])
5 BAD Error in IMAP command UID: Unknown command BODYSTRUCTURE

http://hg.dovecot.org/dovecot-1.2/rev/f1a6c9dd4c33 ?

Nope. I can tell you that it definitely works in 1.2rc3 from May 4th and I think it _might_ have worked as of early last week. This is using mbox storage. The client doing this is OSX Mail.app version 3.6 (935/935.3).

I can't seem to be able to reproduce this. What plugins do you have loaded? Can you manually try a few commands?

telnet localhost 143
1 login user pass
2 select inbox
3 fetch 1:* flags
4 uid fetch 1:* flags
5 fetch 1 bodystructure

Sure. The problem for me is somewhere between revision 9061 (works) and 9098 (doesn't work). I don't select inbox; I select a folder to try to take plugins out of the picture. I'll keep doing my binary search to find the changeset that breaks..

2 select foo4
* FLAGS (\Answered \Flagged \Deleted \Seen \Draft $NotJunk NonJunk)
* OK [PERMANENTFLAGS (\Answered \Flagged \Deleted \Seen \Draft $NotJunk NonJunk \*)] Flags permitted.
* 10 EXISTS
* 0 RECENT
* OK [UIDVALIDITY 1] UIDs valid
* OK [UIDNEXT 11] Predicted next UID
* OK [HIGHESTMODSEQ 1]
2 OK [READ-WRITE] Select completed.
3 fetch 1:* flags
* 1 FETCH (FLAGS (\Seen $NotJunk))
* 2 FETCH (FLAGS (\Seen $NotJunk))
* 3 FETCH (FLAGS (\Seen $NotJunk NonJunk))
* 4 FETCH (FLAGS (\Seen $NotJunk NonJunk))
* 5 FETCH (FLAGS (\Seen $NotJunk NonJunk))
* 6 FETCH (FLAGS (\Seen $NotJunk NonJunk))
* 7 FETCH (FLAGS (\Seen $NotJunk))
* 8 FETCH (FLAGS (\Seen $NotJunk))
* 9 FETCH (FLAGS (\Seen $NotJunk))
* 10 FETCH (FLAGS (\Seen $NotJunk))
3 OK Fetch completed.
4 fetch 1 bodystructure
4 BAD Error in IMAP command FETCH: Unknown command BODYSTRUCTURE
4 uid fetch 1:* flags
* 1 FETCH (UID 1 FLAGS (\Seen $NotJunk))
* 2 FETCH (UID 2 FLAGS (\Seen $NotJunk))
* 3 FETCH (UID 3 FLAGS (\Seen $NotJunk NonJunk))
* 4 FETCH (UID 4 FLAGS (\Seen $NotJunk NonJunk))
* 5 FETCH (UID 5 FLAGS (\Seen $NotJunk NonJunk))
* 6 FETCH (UID 6 FLAGS (\Seen $NotJunk NonJunk))
* 7 FETCH (UID 7 FLAGS (\Seen $NotJunk))
* 8 FETCH (UID 8 FLAGS (\Seen $NotJunk))
* 9 FETCH (UID 9 FLAGS (\Seen $NotJunk))
* 10 FETCH (UID 10 FLAGS (\Seen $NotJunk))
4 OK Fetch completed.
5 uid fetch 4 (BODYSTRUCTURE BODY.PEEK[])
5 BAD Error in IMAP command UID: Unknown command BODYSTRUCTURE

./dovecot -n
# 1.2.rc4: /usr/ladmin3/etc/dovecot.conf
# OS: AIX 3 0001112AD300
syslog_facility: local0
protocols: imap
listen: *:someport
ssl: no
disable_plaintext_auth: no
login_dir: /usr/ladmin3/var/run/dovecot/login
login_executable: /usr/ladmin3/libexec/dovecot/imap-login
login_greeting: Dovecot baseline ready.
login_processes_count: 30
max_mail_processes: 4096
mail_location: mbox:%h
mmap_disable: yes
dotlock_use_excl: no
mbox_write_locks: fcntl
mail_plugins: mbox_snarf
mail_plugin_dir: /usr/ladmin3/lib/dovecot/imap/
imap_client_workarounds: tb-extra-mailbox-sep
imap_id_log: *
namespace:
  type: private
  separator: /
  inbox: yes
  list: yes
  subscriptions: yes
namespace:
  type: private
  separator: /
  prefix: ~/
  hidden: yes
  list: no
  subscriptions: yes
auth default:
  krb5_keytab: /etc/myfavoritekeytab
  verbose: yes
  debug: yes
  passdb:
    driver: pam
  userdb:
    driver: passwd
plugin:
  mbox_snarf: %h/SNARF
Re: [Dovecot] uid fetch error with revision 9112:9d634c93d28a
On Jun 1, 2009, at 2:37 PM, Timo Sirainen wrote: On Mon, 2009-06-01 at 14:31 -0400, Timo Sirainen wrote: On Mon, 2009-06-01 at 14:20 -0400, Jonathan Siegle wrote:

2 uid fetch somevaliduid (BODYSTRUCTURE BODY.PEEK[])
5 uid fetch 4 (BODYSTRUCTURE BODY.PEEK[])
5 BAD Error in IMAP command UID: Unknown command BODYSTRUCTURE

This should fix it: http://hg.dovecot.org/dovecot-1.2/rev/4d2b2adfd415
And http://hg.dovecot.org/dovecot-1.2/rev/9ae55b68cf61

3 fetch 1 bodystructure
* 1 FETCH (BODYSTRUCTURE ("text" "plain" ("charset" "iso-8859-1") NIL NIL "7bit" 555 13 NIL ("inline" NIL) NIL NIL))
3 OK Fetch completed.
4 uid fetch 1 (BODYSTRUCTURE BODY.PEEK[])
* 1 FETCH (UID 1 BODYSTRUCTURE ("text" "plain" ("charset" "iso-8859-1") NIL NIL "7bit" 555 13 NIL ("inline" NIL) NIL NIL) BODY[] {3953}
. . . .
4 OK Fetch completed.

And there was much cheering in central Pennsylvania... thanks, Jonathan
[Dovecot] record points outside file error with dovecot revision 9116:9ae55b68cf61
I use mbox with the mbox-snarf plugin. I get errors like:

dovecot: IMAP(testuser): Corrupted index cache file /full/path/dovecot.index.cache: record points outside file

The errors can be in either folders or INBOX. I put some syslog statements in src/lib-index/mail-cache-lookup.c to help understand this:

if (offset + sizeof(*rec) > cache->mmap_length) {
        mail_cache_set_corrupted(cache, "record points outside file");
        syslog(LOG_DEBUG, "mail_cache_get_record rec->size is %d ", rec->size);
        syslog(LOG_DEBUG, "cache->mmap_length is %d ", cache->mmap_length);
        syslog(LOG_DEBUG, "offset is %d ", offset);
        return -1;
}

syslog: mail_cache_get_record rec->size is 268595472
syslog: cache->mmap_length is 4096
syslog: offset is 1630760037

Oh, I forgot to print off sizeof(*rec). Would that help? Thanks, Jonathan
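For what it's worth, a version of that debug block that also logs sizeof(*rec) and uses unsigned/size_t format specifiers might look like this (a sketch against the same spot, untested):

if (offset + sizeof(*rec) > cache->mmap_length) {
        mail_cache_set_corrupted(cache, "record points outside file");
        /* log every term of the bounds check, casting explicitly so
           the format strings match the values */
        syslog(LOG_DEBUG, "mail_cache_get_record: rec->size=%u sizeof(*rec)=%zu offset=%zu mmap_length=%zu",
               (unsigned int)rec->size, sizeof(*rec),
               (size_t)offset, (size_t)cache->mmap_length);
        return -1;
}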
[Dovecot] Panic with signal 6 core dump with revision 9116:9ae55b68cf61
Jun 2 10:05:14 hostname dovecot: IMAP(testuser): Panic: file istream-raw-mbox.c: line 380: assertion failed: (new_pos > 0)
Jun 2 10:05:14 hostname dovecot: dovecot: child 544822 (imap) killed with signal 6

(dbx) where
raise(??) at 0x905a68c
abort() at 0x9085c2c
default_fatal_finish(type = LOG_TYPE_PANIC, status = 0), line 160 in "failures.c"
i_internal_fatal_handler(type = LOG_TYPE_PANIC, status = 0, fmt = "file %s: line %d: assertion failed: (%s)", args = ""), line 440 in "failures.c"
i_panic(format = "file %s: line %d: assertion failed: (%s)", ... = 0x1001731b4, 0x17c, 0x100173328, 0x0, 0x15b5, 0x0, 0x0), line 207 in "failures.c"
i_stream_raw_mbox_read(stream = 0x0001100ca6b0), line 380 in "istream-raw-mbox.c"
i_stream_raw_mbox_read(stream = 0x0001100ca6b0), line 379 in "istream-raw-mbox.c"
i_stream_read(stream = 0x0001100ca700), line 80 in "istream.c"
i_stream_limit_read(stream = 0x0001100cb930), line 64 in "istream-limit.c"
i_stream_read(stream = 0x0001100cb980), line 80 in "istream.c"
i_stream_read_copy_from_parent(istream = 0x0001100cbb20), line 118 in "istream.c"
i_stream_header_filter_read(stream = 0x0001100cbad0), line 315 in "istream-header-filter.c"
i_stream_read(stream = 0x0001100cbb20), line 80 in "istream.c"
i_stream_read_copy_from_parent(istream = 0x0001100cbd20), line 118 in "istream.c"
i_stream_mail_stats_read_mail_stats(stream = 0x0001100cbcd0), line 47 in "istream-mail-stats.c"
i_stream_read(stream = 0x0001100cbd20), line 80 in "istream.c"
i_stream_read_data(stream = 0x0001100cbd20, data_r = 0x0fffecc0, size_r = 0x0fffecc8, threshold = 1), line 361 in "istream.c"
message_parser_read_more(ctx = 0x0001100cc118, block_r = 0x0fffecb0, full_r = 0x0fffeb84), line 118 in "message-parser.c"
parse_next_body_to_boundary(ctx = 0x0001100cc118, block_r = 0x0fffecb0), line 330 in "message-parser.c"
message_parser_parse_next_block(ctx = 0x0001100cc118, block_r = 0x0fffecb0), line 768 in "message-parser.c"
message_parser_parse_body(ctx = 0x0001100cc118, hdr_callback = (nil), context = (nil)), line 831 in "message-parser.c"
index_mail_parse_body(mail = 0x0001100c9878, field = MAIL_CACHE_FLAGS), line 792 in "index-mail.c"
index_mail_get_parts(_mail = 0x0001100c9878, parts_r = 0x0fffef28), line 224 in "index-mail.c"
mail_get_parts(mail = 0x0001100c9878, parts_r = 0x0fffef28), line 71 in "mail.c"
unnamed block in search_arg_match_text(args = 0x0001100c7e70, ctx = 0x0001100c95f0, ret = -1), line 647 in "index-search.c"
search_arg_match_text(args = 0x0001100c7e70, ctx = 0x0001100c95f0, ret = -1), line 647 in "index-search.c"
search_match_next(ctx = 0x0001100c95f0), line 1101 in "index-search.c"
unnamed block in index_storage_search_next_nonblock(_ctx = 0x0001100c95f0, mail = 0x0001100c9878, tryagain_r = 0x01e0), line 1301 in "index-search.c"
index_storage_search_next_nonblock(_ctx = 0x0001100c95f0, mail = 0x0001100c9878, tryagain_r = 0x01e0), line 1301 in "index-search.c"
mailbox_search_next_nonblock(ctx = 0x0001100c95f0, mail = 0x0001100c9878, tryagain_r = 0x01e0), line 754 in "mail-storage.c"
cmd_search_more(cmd = 0x000110022058), line 347 in "imap-search.c"
cmd_search_more_callback(cmd = 0x000110022058), line 434 in "imap-search.c"
unnamed block in io_loop_handle_timeouts_real(ioloop = 0x00011001f310), line 316 in "ioloop.c"
io_loop_handle_timeouts_real(ioloop = 0x00011001f310), line 316 in "ioloop.c"
unnamed block in io_loop_handle_timeouts(ioloop = 0x00011001f310), line 327 in "ioloop.c"
io_loop_handle_timeouts(ioloop = 0x00011001f310), line 327 in "ioloop.c"
io_loop_handler_run(ioloop = 0x00011001f310), line 162 in "ioloop-poll.c"
io_loop_run(ioloop = 0x00011001f310), line 338 in "ioloop.c"
main(argc = 1, argv = 0x0630, envp = 0x0640), line 323 in "main.c"
[Dovecot] Running imaptest revision 209:939fa886391a built against dovecot revision 9116:9ae55b68cf61 on AIX 5.3 core dumps
# ./imaptest host=127.0.0.1 port=143 user=tstem38 pass=pass4you mbox=/gpfs/users/t/s/tstem38/IMAP/foo4
Panic: file client.c: line 620: assertion failed: (idx >= array_count(&clients) == NULL)
IOT/Abort trap(coredump)
Re: [Dovecot] record points outside file error with dovecot revision 9116:9ae55b68cf61
On Jun 2, 2009, at 1:31 PM, Timo Sirainen wrote: On Tue, 2009-06-02 at 13:24 -0400, jsie...@psu.edu wrote: dovecot: IMAP(testuser): Corrupted index cache file /full/path/dovecot.index.cache: record points outside file

So you're using AIX? Do you also happen to use NFS? Can you reproduce this error by running imaptest for a while? http://imapwiki.org/ImapTest

No NFS here. The filesystem is called GPFS. It is a clustered FS.

Ah, that probably explains it. Can multiple different servers modify the same mailbox? Cache file is the part of Dovecot that demands the most from the OS/filesystem. The most difficult part is probably that it writes to the file without locking. It first reserves a space and then starts writing there. Multiple processes can write to the same file at the same time.

Timo, Are you saying that multiple processes on the same folder (INBOX) on the same IMAP server can cause this collision as well? Is there a difference between running multiple processes on the same folder (INBOX) on multiple IMAP servers vs running multiple processes on the same folder on a single IMAP server?

This should probably be changed at some point, since it could just buffer more data to memory and then lock, write, unlock. That would also make the code simpler, since it can currently leave holes to the file because it has to guess initially how much space to reserve.. This is definitely on my wish list.

Thanks, Jonathan
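A rough sketch of the buffered lock/write/unlock approach described above, in plain POSIX C (an illustration of the idea, not Dovecot's actual cache code):

#include <fcntl.h>
#include <sys/types.h>
#include <unistd.h>

/* Append a fully built record buffer under an fcntl() write lock.
 * The append offset is chosen only while the lock is held, so
 * concurrent writers can't interleave and no holes are reserved. */
static int cache_append(int fd, const void *buf, size_t len)
{
        struct flock fl;
        fl.l_type = F_WRLCK;
        fl.l_whence = SEEK_SET;
        fl.l_start = 0;
        fl.l_len = 0;           /* lock the whole file */
        if (fcntl(fd, F_SETLKW, &fl) < 0)
                return -1;
        off_t offset = lseek(fd, 0, SEEK_END);
        ssize_t ret = offset < 0 ? -1 : write(fd, buf, len);
        fl.l_type = F_UNLCK;
        (void)fcntl(fd, F_SETLK, &fl);
        return ret == (ssize_t)len ? 0 : -1;
}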
Re: [Dovecot] record points outside file error with dovecot revision 9116:9ae55b68cf61
On Jun 3, 2009, at 9:35 AM, Timo Sirainen wrote: On Wed, 2009-06-03 at 09:31 -0400, Jonathan Siegle wrote: Are you saying that multiple processes on the same folder(INBOX) on the same IMAP server can cause this collision as well? Is there a difference between running multiple processes on the same folder(INBOX) on multiple IMAP servers vs running multiple processes on the same folder on a single IMAP server? I don't know. That depends on how GPFS is implemented. Pick a local filesystem, say ext3?
[Dovecot] Dovecot 1.2 + AIX setups
Is anyone running Dovecot on AIX? I'm trying to debug the "Corrupted index cache file... record points outside file" problem. I can give gory details about what I've tried, but for now I would just like to see setups on AIX. Thanks, Jonathan
Re: [Dovecot] record points outside file error with dovecot revision 9116:9ae55b68cf61
On Jun 3, 2009, at 12:17 PM, Timo Sirainen wrote: On Wed, 2009-06-03 at 12:11 -0400, Timo Sirainen wrote: On Wed, 2009-06-03 at 10:14 -0400, Jonathan Siegle wrote: On Jun 3, 2009, at 9:35 AM, Timo Sirainen wrote: On Wed, 2009-06-03 at 09:31 -0400, Jonathan Siegle wrote: Are you saying that multiple processes on the same folder(INBOX) on the same IMAP server can cause this collision as well? Is there a difference between running multiple processes on the same folder(INBOX) on multiple IMAP servers vs running multiple processes on the same folder on a single IMAP server? I don't know. That depends on how GPFS is implemented. Pick a local filesystem, say ext3? But with ext3 you can't have multiple servers accessing the same filesystem. But of course there are no problems (well, some very rare random ones maybe) having multiple processes accessing the same mailbox on the same server. Ever since I wrote my imaptest tool (a few years ago?) I've been heavily stress testing multiple connections modifying the mailbox at the same time. Timo, Can you tell me what platform/filesystem you use for testing? Oh and can you hint at what "bad things" may happen when I get the error "record points outside file"? Thanks, Jonathan
Re: [Dovecot] minimize mbox mdbox fragmentation
On Oct 20, 2010, at 10:14 PM, Timo Sirainen wrote: > On 21.10.2010, at 3.12, Denny Lin wrote: > >> On Wed, Oct 20, 2010 at 06:45:17PM +0100, Timo Sirainen wrote: >>> On Wed, 2010-10-20 at 13:32 -0400, Charles Marcus wrote: On 2010-10-20 12:53 PM, Timo Sirainen wrote: > Oh, interesting. I didn't know that was possible. And even better: Linux > has fallocate() that can do it for other filesystems than just XFS. Or > looks like it's only XFS and ext4 (ext3 doesn't support it). How about reiserfs (3, not 4)? >>> >>> Doesn't support. >> >> Is it possible with UFS and ZFS? > > Linux doesn't support either and my googling didn't find any FreeBSD or > Solaris interface for this feature, so I don't know. > AIX supports posix_fallocate, but only for JFS2. For GPFS (what we use), the function is gpfs_prealloc.
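For illustration, preallocation through the portable interface looks like this (a sketch; the file name and size are made up, and as noted above the call only succeeds on filesystems that support it, e.g. JFS2 on AIX, while GPFS would use gpfs_prealloc() instead):

#include <fcntl.h>
#include <stdio.h>

int main(void)
{
        /* preallocate 64 MB for a hypothetical mail storage file */
        int fd = open("m.1", O_RDWR | O_CREAT, 0600);
        if (fd < 0) { perror("open"); return 1; }
        /* posix_fallocate() returns an error number, not -1/errno */
        int err = posix_fallocate(fd, 0, 64 * 1024 * 1024);
        if (err != 0)
                fprintf(stderr, "posix_fallocate failed: %d\n", err);
        return 0;
}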
Re: [Dovecot] Question about mbox_snarf and dovecot2.0
On Nov 18, 2010, at 2:08 PM, Timo Sirainen wrote: > On Thu, 2010-04-29 at 11:11 -0400, Jonathan Siegle wrote: > >> As I need this to function, I've been thinking about your words above >> and been reading http://wiki.dovecot.org/Design/Storage/Plugins. The >> mentality of it reminds me of the lazy-expunge-plugin, but this API is >> taking some time getting used to. Any pointers would be appreciated. > > So I guess you never got around to implementing it? I finally did: > http://dovecot.org/list/dovecot/2010-November/055020.html > > I did do it. I've been testing it for a few months now. Sorry. Should have said something.
[Dovecot] dovecot 2.0 revision 12532:e030df616faf: problem with Snarf plugin
this command fails when the snarf plugin is enabled:

5 status inbox (UIDNEXT MESSAGES)

with the error:

Dec 10 13:57:57 tr27n12.aset.psu.edu dovecot: imap(tstem38): Panic: file index-transaction.c: line 71: assertion failed: (box->opened)

If it isn't easy to reproduce, I'll spit out the config. thanks, Jonathan
Re: [Dovecot] dovecot 2.0 revision 12532:e030df616faf: problem with Snarf plugin
Oops! Config problem. Typing should be second nature by now huh.

4 status inbox (UIDNEXT MESSAGES)
* STATUS "inbox" (MESSAGES 4716 UIDNEXT 101501)
4 OK Status completed.
5 status inbox (UIDNEXT MESSAGES)
* STATUS "inbox" (MESSAGES 4717 UIDNEXT 101502)
5 OK Status completed.

On Dec 10, 2010, at 2:04 PM, Jonathan Siegle wrote: > this command fails when the snarf plugin is enabled. > > 5 status inbox (UIDNEXT MESSAGES) > > with the error: > > Dec 10 13:57:57 tr27n12.aset.psu.edu dovecot: imap(tstem38): Panic: file index-transaction.c: line 71: assertion failed: (box->opened) > > If it isn't easy to reproduce, I'll spit out the config. > > thanks, > Jonathan
Re: [Dovecot] dovecot 2.0 revision 12532:e030df616faf: problem with Snarf plugin
On Dec 13, 2010, at 8:06 AM, Timo Sirainen wrote: > On Fri, 2010-12-10 at 14:45 -0500, Jonathan Siegle wrote: >> Oops! Config problem. Typing should be second nature by now huh. > .. >>> Dec 10 13:57:57 tr27n12.aset.psu.edu dovecot: imap(tstem38): Panic: file index-transaction.c: line 71: assertion failed: (box->opened) > > What kind of a config problem? It still shouldn't crash. Actually it wasn't a config problem. I'm now able to reproduce it again. I'm using mail.local from sendmail as my lda. This is on AIX 5.3. What other info do you need?
Re: [Dovecot] question about snarf plugin in dovecot 2
On Dec 14, 2010, at 4:46 PM, Tom Lieuallen wrote: > >> After these changes, snarfing doesn't work for me, either straight or UW optional style. Either way I try it, when I connect, it creates an empty ~/mbox file but does not snarf my inbox into it. My mail client (thunderbird) doesn't see any messages at all in my inbox. I remove the snarf namespace and change my mail_location back and my inbox is, of course, back. Note that I have all the UW compatibility namespaces in there. > Once I do that, I see that the plugin does load. And now it no longer creates an ~/mbox if it doesn't first exist. Then again, it now panics, so perhaps that is happening before getting to the point of creating the ~/mbox file. Since the panic is happening just after reference to .../var/run/dovecot/empty, I'm wondering if it has something to do with that directory.

I can tell you that my config looks a little different and snarfing works. I have no need for optional snarfing. My mail_location qualifies the full path to the file where I want the mail to go. So it looks like:

mail_location = mbox:%h:INBOX=/gpfs/inbox/%Ju/%u:INDEX=%h/.dovecot2.0.2

Namespace wise, I have:

namespace Snarf {
  prefix = ~~Snarfbox/
  location = mbox:/var/empty:INBOX=/var/spool/mail/%Ju/%u:INDEX=MEMORY
  list = no
  hidden = yes
}

and finally:

plugin {
  snarf = ~~Snarfbox/INBOX
}

I have not figured out my problem with using the IMAP status command:

1 status inbox (messages)

-Jonathan
Re: [Dovecot] dovecot 2.0 revision 12532:e030df616faf: problem with Snarf plugin
On Dec 13, 2010, at 9:27 AM, Jonathan Siegle wrote: > > On Dec 13, 2010, at 8:06 AM, Timo Sirainen wrote: > >> On Fri, 2010-12-10 at 14:45 -0500, Jonathan Siegle wrote: >>> Oops! Config problem. Typing should be second nature by now huh. >> .. >>>> Dec 10 13:57:57 tr27n12.aset.psu.edu dovecot: imap(tstem38): Panic: file >>>> index-transaction.c: line 71: assertion failed: (box->opened) >> >> What kind of a config problem? It still shouldn't crash. >> >> > > Actually it wasn't a config problem. I'm now able to reproduce it again. I'm using mail.local from sendmail as my lda. This is on AIX 5.3. What other info do you need? > > Timo, If you give me some logic to throw at this, I'll take a shot at the programming. I tested my snarf code and it does the same thing. thanks, Jonathan
Re: [Dovecot] dovecot 2.0 revision 12532:e030df616faf: problem with Snarf plugin
On Dec 17, 2010, at 8:16 AM, Timo Sirainen wrote: > On Fri, 2010-12-17 at 07:45 -0500, Jonathan Siegle wrote: >>>>>> Dec 10 13:57:57 tr27n12.aset.psu.edu dovecot: imap(tstem38): Panic: file >>>>>> index-transaction.c: line 71: assertion failed: (box->opened) > .. >> If you give me some logic to throw at this, I'll take a shot at the >> programming. I tested my snarf code and it does the same thing. > > Does http://hg.dovecot.org/dovecot-2.0/rev/b7dd7a966a3a fix it? > > Yeah thanks!
[Dovecot] Multiple Authentication Databases
Hi Everyone, I wish to run Dovecot on my "Front End" outbound mail relay, and use Dovecot purely for authentication purposes. However, each mysql database for each domain will be on a separate server. Is there a way for dovecot to authenticate against different databases depending on domain name? Thanks
Re: [Dovecot] Multiple Authentication Databases
I had a look there, but that doesn't have anything on a domain-by-domain basis.

On 11/01/11 13:31, Henrique Fernandes wrote: http://wiki2.dovecot.org/Authentication/MultipleDatabases Is that what you are looking for? []'s f.rique

On Tue, Jan 11, 2011 at 11:28 AM, Jonathan Tripathy <jon...@abpni.co.uk> wrote: Hi Everyone, I wish to run Dovecot on my "Front End" outbound mail relay, and use Dovecot purely for authentication purposes. However, each mysql database for each domain will be on a separate server. Is there a way for dovecot to authenticate against different databases depending on domain name? Thanks
Re: [Dovecot] Multiple Authentication Databases
Yes, but the problem there is that each database is controlled by different untrusted individuals. If someone were to create a username/password on a database that is higher in the list, they could authenticate as that user, which is undesirable.

On 11/01/11 13:37, Henrique Fernandes wrote: Well, at least it works; it will fail until it gets the right database. []'s f.rique

On Tue, Jan 11, 2011 at 11:32 AM, Jonathan Tripathy <jon...@abpni.co.uk> wrote: I had a look there, but that doesn't have anything on a domain-by-domain basis.

On 11/01/11 13:31, Henrique Fernandes wrote: http://wiki2.dovecot.org/Authentication/MultipleDatabases Is that what you are looking for? []'s f.rique

On Tue, Jan 11, 2011 at 11:28 AM, Jonathan Tripathy <jon...@abpni.co.uk> wrote: Hi Everyone, I wish to run Dovecot on my "Front End" outbound mail relay, and use Dovecot purely for authentication purposes. However, each mysql database for each domain will be on a separate server. Is there a way for dovecot to authenticate against different databases depending on domain name? Thanks
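One way to close that hole is to pin each SQL passdb to its own domain in the query itself, so a user row created in the wrong database can never match another domain's logins. A sketch (the file names, host and the example.com domain are made up; the passdb and SQL config keys are standard Dovecot ones):

passdb {
  driver = sql
  args = /etc/dovecot/sql-example-com.conf.ext
}
passdb {
  driver = sql
  args = /etc/dovecot/sql-example-net.conf.ext
}

# /etc/dovecot/sql-example-com.conf.ext
driver = mysql
connect = host=db1.example.com dbname=mail user=dovecot password=secret
# the literal domain check means this database can only ever
# authenticate @example.com users, whatever rows it contains
password_query = SELECT password FROM users \
  WHERE userid = '%n' AND domain = '%d' AND domain = 'example.com'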
[Dovecot] Best Cluster Storage
Hi Everyone, I wish to create a Postfix/Dovecot active-active cluster (each node will run Postfix *and* Dovecot), which will obviously have to use central storage. I'm looking for ideas to see what's the best out there. All of this will be running on multiple Xen hosts, however I don't think that matters as long as I make sure that the cluster nodes are on different physical boxes. Here are my ideas so far for the central storage:

1) NFS Server using DRBD+LinuxHA. Export the same NFS share to each mail server. While this seems easy, how well does Dovecot work with NFS? I've read the wiki page, and it doesn't sound promising. But it may be outdated..

2) Export block storage using iSCSI from targets which have GFS2 on DRBD+LinuxHA. This is tricky to get working well, and it's only a theory.

3) GlusterFS. Easy to set up, but apparently very slow to run.

So what's everybody using? I know that Postfix runs well on NFS (according to their docs). I intend to use Maildir. Thanks
Re: [Dovecot] Best Cluster Storage
In this Xen setup, I think the best way to accomplish your goals is to create 6 guests: 2 x Linux Postfix 2 x Linux Dovecot 1 x Linux NFS server 1 x Linux Dovecot director Each of these can be painfully small stripped down Linux instances. Configure each Postfix and Dovecot server to access the same NFS export. Configure Postfix to use native local delivery to NFS/maildir. Don't use LDA (deliver). Ok so this is interesting. As long as I use Postfix native delivery, along with Dovecot director, NFS should work ok? For any meaningful use of virtualized clusters with Xen, ESX, etc, a prerequisite is shared storage. If you don't have it, get it. The hypervisor is what gives you fault tolerance. This requires shared storage. If you do not intend to install shared storage, and intend to use things like drbd between guests to get your storage redundancy, then you really need to simply throw out your hypervisor, in this case Xen, and do direct bare metal host clustering with drbd, gfs2, NFS, etc. Why is this the case? Apart from the fact that Virtualisation becomes "more useful" with shared storage (which I agree with), is there anything wrong with doing DR between guests? We don't have shared storage set up yet for the location this email system is going. We will get one in time though.
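For reference, the Postfix side of that arrangement is just local(8) writing maildir format directly under each user's home on the NFS export; a minimal main.cf sketch (the path convention is an assumption, not from this thread):

# main.cf (sketch)
# trailing slash = maildir format; local(8) writes straight into
# ~user/Maildir/ on the shared NFS export, so Dovecot's index files
# are never touched at delivery time
home_mailbox = Maildir/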
Re: [Dovecot] Best Cluster Storage
On 13/01/11 10:57, Stan Hoeppner wrote: Jonathan Tripathy put forth on 1/13/2011 2:24 AM: Ok so this is interesting. As long as I use Postfix native delivery, along with Dovecot director, NFS should work ok?

One has nothing to do with the other. Director doesn't touch smtp (afaik), only imap. The reason for having Postfix use its native local(8) delivery agent for writing into the maildir, instead of Dovecot deliver, is to avoid Dovecot index locking/corruption issues with a back end NFS mail store. So if you want to do sorting you'll have to use something other than sieve, such as maildrop or procmail. These don't touch Dovecot's index files, while deliver (LDA) does write to them during message delivery into the maildir.

Yes, I thought it had something to do with that.

For any meaningful use of virtualized clusters with Xen, ESX, etc, a prerequisite is shared storage. If you don't have it, get it. The hypervisor is what gives you fault tolerance. This requires shared storage. If you do not intend to install shared storage, and intend to use things like drbd between guests to get your storage redundancy, then you really need to simply throw out your hypervisor, in this case Xen, and do direct bare metal host clustering with drbd, gfs2, NFS, etc.

Why is this the case? Apart from the fact that Virtualisation becomes "more useful" with shared storage (which I agree with), is there anything wrong with doing DR between guests? We don't have shared storage set up yet for the location this email system is going. We will get one in time though.

I argue that datacenter virtualization is useless without shared storage. This is easy to say for those of us who have done it both ways. You haven't yet. Your eyes will be opened after you do Xen or ESX atop a SAN. If you're going to do drbd replication between two guests on two physical Xen hosts then you may as well not use Xen at all. It's pointless.

Where did I say I haven't done that yet? I have indeed worked with VM infrastructures using SAN storage, and yes, it's fantastic. Just this particular location doesn't have a SAN box installed. And we will have to agree to disagree, as I personally do see the benefit of using VMs with local storage.

What you need to do right now is build the justification case for installing the SAN storage as part of the initial build out and setup your virtual architecture around shared SAN storage. Don't waste your time on this other nonsense of replication from one guest to another, with an isolated storage pool attached to each physical Xen server. That's just nonsense. Do it right or don't do it at all. Don't take my word for it. Hit Novell's website and VMWare's and pull up the recommended architecture and best practices docs.

You don't need to tell me :) I already know how great it is.

One last thing. I thought I read something quite some time ago about Xen working on adding storage layer abstraction which would allow any Xen server to access directly connected storage on another Xen server, creating a sort of quasi shared SAN storage over ethernet without the cost of the FC SAN. Did anything ever come of that?

I haven't really been following how the 4.x branch is going as it wasn't stable enough for our needs. Random lockups would always occur. The 3.x branch is rock solid. There have been no crashes (yet!)

Would DRBD + GFS2 work better than NFS? While NFS is simple, I don't mind experimenting with DRBD and GFS2 if it means fewer problems?
Re: [Dovecot] Best Cluster Storage
On 13/01/11 21:34, Stan Hoeppner wrote: Jonathan Tripathy put forth on 1/13/2011 7:11 AM: Would DRBD + GFS2 work better than NFS? While NFS is simple, I don't mind experimenting with DRBD and GFS2 if it means fewer problems?

Depends on your definition of "better". If you do two dovecot+drbd nodes you have only two nodes. If you do NFS you have 3 including the NFS server. Performance would be very similar between the two. Now, when you move to 3 dovecot nodes or more you're going to run into network scaling problems with the drbd traffic, because it increases logarithmically (or is it exponentially?) with node count. If using GFS2 atop drbd across all nodes, each time a node writes to GFS, the disk block gets encapsulated by the drbd driver and transmitted to all other drbd nodes. With each new mail that's written by each server, or each flag is updated, it gets written 4 times, once locally, and 3 times via drbd. With NFS, each of these writes occurs over the network only once.

With drbd it's always a good idea to dedicate a small high performance GbE switch to the cluster nodes just for drbd traffic. This may not be necessary in a low volume environment, but it's absolutely necessary in high traffic setups. Beyond a certain number of nodes even in a moderately busy mail network, drbd mirroring just doesn't work. The bandwidth requirements become too high, and nodes bog down from processing all of the drbd packets. Without actually using it myself, and just using some logical reasoning based on the technology, I'd say the ROI of drbd mirroring starts decreasing rapidly between 2 and 4 nodes, and beyond 4 nodes... You'd be much better off with an NFS server, or GFS2 directly on a SAN LUN. CXFS would be far better, but it's not free. In fact it's rather expensive, and it requires a dedicated metadata server(s), which is one of the reasons it's so #@! damn fast compared to most clustered filesystems. Another option is a hybrid setup, with dual NFS servers each running GFS2 accessing the shared SAN LUN(s). This eliminates the one NFS server as a potential single point of failure, but also increases costs significantly as you have to spend about $15K USD minimum for a low end SAN array, and another NFS server box, although the latter need not be expensive.

Hi Stan, The problem is that we do not have the budget at the minute to buy a SAN box, which is why I'm just looking to set up a Linux environment as a substitute for now. Regarding the servers, I was thinking of having a 2 node drbd cluster (in active+standby), which would export a single iSCSI LUN. Then, I would have a 2 node dovecot+postfix cluster (in active-active), where each node would mount the same LUN (with GFS2 on top). This is 4 servers in total (well, 4 VMs running on 4 physically separate servers). I'm hearing different things on whether dovecot works well or not with GFS2. Of course, I could simply replace the iSCSI LUN above with an nfs server running on each DRBD node, if you feel NFS would work better than GFS2. Either way, I would probably use a crossover cable for the DRBD cluster. Could maybe even bond 2 cables together if I'm feeling adventurous!

The way I see it, there are 2 issues to deal with: 1) Which "Shared Disk" technology is best (GFS2 over LUN or a simple NFS server) and 2) What is the best method of HA for the storage system. Any advice is appreciated.
Re: [Dovecot] Best Cluster Storage
Either way, I would probably use a crossover cable for the DRBD cluster. I use 2 1Gb links bonded together, over crossover cables... Could maybe even bond 2 cables together if I'm feeling adventurous! Yes, recommended. That is what I do on all my clusters. How do you bond the connections? Do you just use Linux kernel bonding? Or some driver level stuff?
Re: [Dovecot] Best Cluster Storage
Does gfs2 guarantee integrity without any fence device? You make a fair point. Would I need any hardware fencing for DRBD (and GFS2)?
Re: [Dovecot] Best Cluster Storage
On 14/01/11 03:39, Eric Rostetter wrote: Quoting Henrique Fernandes: for drbd you only need a heartbeat, I guess. Fencing is not needed for drbd, though recommended. But to use gfs2 you need a fence device; ocfs2 does not require one, as the ocfs2 driver takes care of it: it reboots if it thinks it is desynchronized. gfs2 technically requires fencing, since it technically requires a cluster, and red hat clustering requires fencing. Some people "get around this" by using "manual" fencing, though this is "not recommended for production" as it could result in a machine staying down until manual intervention, which usually conflicts with the "uptime" desire for a cluster... But that is up to the implementor to decide on... []'s f.rique

I've actually been reading on ocfs2 and it looks quite promising. According to this presentation: http://www.gpaterno.com/publications/2010/dublin_ossbarcamp_2010_fs_comparison.pdf ocfs2 seems to work quite well with lots of small files (typical of maildir). I'm guessing that since ocfs2 reboots a system automatically, it doesn't require any additional fencing? I was thinking of following this article: http://wiki.virtastic.com/display/howto/Clustered+Filesystem+with+DRBD+and+OCFS2+on+CentOS+5.5 with the only difference being that I'm going to export the drbd device via iSCSI to my active-active mail servers.
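For the iSCSI export step, assuming the iSCSI Enterprise Target (ietd) that shipped with CentOS 5, the target definition is only a couple of lines (the IQN is made up):

# /etc/ietd.conf (sketch)
Target iqn.2011-01.com.example:mailstore
    # export the DRBD device as LUN 0; blockio avoids double caching
    Lun 0 Path=/dev/drbd0,Type=blockio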
Re: [Dovecot] Best Cluster Storage
On 14/01/11 03:26, Eric Rostetter wrote: Quoting Jonathan Tripathy : Either way, I would probably use a crossover cable for the DRBD cluster. I use 2 1Gb links bonded together, over crossover cables... Could maybe even bond 2 cables together if I'm feeling adventurous! Yes, recommended. That is what I do on all my clusters. How do you bond the connections? Do you just use Linux kernel bonding? Or some driver level stuff? Linux kernel bonding, mode=4 (IEEE 802.3ad Dynamic link aggregation). I'm guessing that since you're using a cross over cable, by just setting up the bond0 interfaces as usual (As per this article http://www.cyberciti.biz/tips/linux-bond-or-team-multiple-network-interfaces-nic-into-single-interface.html), you didn't need to do anything else, since there is no switch?
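For reference, a back-to-back 802.3ad bond on RHEL/CentOS-style systems looks roughly like this (a sketch; interface names and the address are made up, and both ends must run LACP since there is no switch to negotiate with):

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=10.0.0.1
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=4 miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth1 (eth2 is identical)
DEVICE=eth1
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none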
Re: [Dovecot] Best Cluster Storage
On 14/01/11 20:07, Eric Rostetter wrote: Quoting Patrick Westenberg: just to get it right: DRBD for shared storage replication is OK? Yes, but only if done correctly. ;) There is some concern on Stan's part (and mine) that you might do it wrong (e.g., in a vm guest rather than at the vm host, etc). What is actually wrong with doing it in VM guests? I appreciate that there will be a slight performance hit, but not too much as Xen PV guests have excellent disk and network performance.
Re: [Dovecot] Best Cluster Storage
On 14/01/11 19:00, Stan Hoeppner wrote: Jonathan Tripathy put forth on 1/13/2011 4:17 PM: Regarding the servers, I was thinking of having a 2 node drbd cluster (in active+standby), which would export a single iSCSI LUN. Then, I would have a 2 node dovecot+postfix cluster (in active-active), where each node would mount the same LUN (with GFS2 on top). This is 4 servers in total (well, 4 VMs running on 4 physically separate servers).

Something you need to consider very carefully: drbd is a kernel block storage driver. You run it ON a PHYSICAL cluster node, and never inside a virtual machine guest. drbd is RAID 1 over a network instead of a SCSI cable. It is meant to protect against storage and node failures. This is how you need to look at drbd. Again, DO NOT run DRBD inside of a VM guest. If you have a decent background in hardware and operating systems, it won't take you 30 seconds to understand what I'm saying here. If it takes you longer, then consider this case: You have a consolidated Xen cluster of two 24 core AMD Magny Cours servers each with 128GB RAM, an LSI MegaRAID SAS controller with dual SFF8087 ports backed by 32 SAS drives in external jbod enclosures setup as a single hardware RAID 10. You spread your entire load of 97 virtual machine guests across this two node farm. Within this set of 97 guests, 12 of them are clustered network applications, and two of these 12 are your Dovecot/Postfix guests. If you use drbd in the way you currently have in your head, you are mirroring virtual disk partitions with drbd _SIX times_ instead of once. Here, where you'd want to run drbd is within the Xen hypervisor kernel. drbd works at the BLOCK DEVICE level, not the application layer. Eric already mentioned this once. Apparently you weren't paying attention.

I'm sorry I don't follow this. It would be appreciated if you could include a simpler example. The way I see it, a VM disk is just a small chunk (an LVM LV in my case) of a real disk.
Re: [Dovecot] Best Cluster Storage
On 15/01/11 00:59, Eric Shubert wrote: On 01/14/2011 03:58 PM, Jonathan Tripathy wrote: On 14/01/11 19:00, Stan Hoeppner wrote: Jonathan Tripathy put forth on 1/13/2011 4:17 PM: Regarding the servers, I was thinking of having a 2 node drbd cluster (in active+standby), which would export a single iSCSI LUN. Then, I would have a 2 node dovecot+postfix cluster (in active-active), where each node would mount the same LUN (with GFS2 on top). This is 4 servers in total (well, 4 VMs running on 4 physically separate servers).

Something you need to consider very carefully: drbd is a kernel block storage driver. You run it ON a PHYSICAL cluster node, and never inside a virtual machine guest. drbd is RAID 1 over a network instead of a SCSI cable. It is meant to protect against storage and node failures. This is how you need to look at drbd. Again, DO NOT run DRBD inside of a VM guest. If you have a decent background in hardware and operating systems, it won't take you 30 seconds to understand what I'm saying here. If it takes you longer, then consider this case: You have a consolidated Xen cluster of two 24 core AMD Magny Cours servers each with 128GB RAM, an LSI MegaRAID SAS controller with dual SFF8087 ports backed by 32 SAS drives in external jbod enclosures setup as a single hardware RAID 10. You spread your entire load of 97 virtual machine guests across this two node farm. Within this set of 97 guests, 12 of them are clustered network applications, and two of these 12 are your Dovecot/Postfix guests. If you use drbd in the way you currently have in your head, you are mirroring virtual disk partitions with drbd _SIX times_ instead of once. Here, where you'd want to run drbd is within the Xen hypervisor kernel. drbd works at the BLOCK DEVICE level, not the application layer. Eric already mentioned this once. Apparently you weren't paying attention.

I'm sorry I don't follow this. It would be appreciated if you could include a simpler example. The way I see it, a VM disk is just a small chunk (an LVM LV in my case) of a real disk.

Perhaps if you were to compare and contrast a virtual disk to a raw disk, that would help. If you wanted to use drbd with a raw disk being accessed via a VM guest, that would probably be all right. Might not be "supported" though.

Thanks Eric, Now I understand where you are coming from: it's not the fact that DRBD is running in a VM that's the problem; it's the fact that DRBD should be replicating a raw physical disk, which of course is still possible from within a Xen VM. Also thanks to Stan and everyone else for the helpful comments. I still haven't decided between GFS2 or OCFS2 yet. I guess I'll have to try both and see what works the best. I really wish NFS didn't have the caching issue, as it's the most simple to set up.
Re: [Dovecot] Best Cluster Storage
On 15/01/11 01:14, Brad Davidson wrote: -Original Message- I'm sorry I don't follow this. It would be appreciated if you could include a simpler example. The way I see it, a VM disk is just a small chunk (an LVM LV in my case) of a real disk. Perhaps if you were to compare and contrast a virtual disk to a raw disk, that would help. If you wanted to use drbd with a raw disk being accessed via a VM guest, that would probably be all right. Might not be "supported" though.

Depending on your virtualization method, raw device passthrough would probably be OK. Otherwise, think about what you're doing:
- putting a filesystem
- on a replicated block device
- that's presented through a virtualization layer
- that's on a filesystem
- that's on a block device.
If you're running GFS/GlusterFS/etc on the DRBD disk, and the VM is on VMFS, then you're actually using two clustered filesystems! Each layer adds a bit of overhead, and each block-on-filesystem layering adds the potential for block misalignments and other issues that will affect your overall performance and throughput. It's just hard to do right. -Brad

Generally, I would give an LVM LV to each of my Xen guests, which according to the DRBD site, is ok: http://www.drbd.org/users-guide/s-lvm-lv-as-drbd-backing-dev.html I do not use img files with loopback devices. Is this a bit better now?
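For reference, using an LVM LV as the DRBD backing device is just a matter of pointing "disk" at the LV; a minimal drbd.conf resource sketch (the hostnames, addresses and vg/lv names are made up):

resource r0 {
  protocol C;
  on node1 {
    device    /dev/drbd0;
    disk      /dev/vg0/mailstore;   # the LVM LV backing this node
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/vg0/mailstore;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}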
[Dovecot] [dovecot] Question about master user and PAM with dovecot 2.0.12
Is it possible to allow master users to be authenticated against PAM? Something like:

passdb {
  driver = pam
  #args = /etc/dovecot/passwd.masterusers
  master = yes
  #pass = yes
}

and then have a userdb which qualifies what accounts are master accounts but doesn't have passwords? Thanks, Jonathan
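For comparison, the shape such a setup usually takes (a sketch in dovecot 2.0 syntax; whether driver = pam honors master = yes exactly like the passwd-file example on the MasterUsers wiki page is the open question here):

# master passdb: a successful PAM auth here lets the client log in
# as loginuser*masteruser
passdb {
  driver = pam
  master = yes
  # pass = yes would additionally verify that the login user exists
  # in the normal passdb
}

# normal passdb/userdb for the real accounts
passdb {
  driver = pam
}
userdb {
  driver = passwd
}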