Re: [Dovecot] Random timeouts on mailboxes

2009-03-09 Thread listacc
Hi Bastien,

just a little thought because I had an issue like this (but with cyrus):

Where is your Thunderbird installation stored?

In my case it was not local but on a mapped Windows drive, so Thunderbird's IMAP cache was on that drive as well. Because we have a lot of mail in our accounts, the cache was around 50-100 MB for some users, and users with many mail accounts got timeouts very often. This was due to performance problems on our network storage.
Sometimes it also helps to delete Thunderbird's cache files. It then has to download all the headers again (painful on poorly performing network storage :-/), but these cache files seem to get corrupted from time to time, and that also produces timeouts.

kind regards

  Andreas 


Re: [Dovecot] Clustering dovecot?

2009-05-29 Thread listacc
Hi Rick,

at the moment I'm building the same setup as you. I have no production experience with it yet, but I built the setup in our testing lab, and under test conditions it seems to run quite nicely.

I took 2 servers with heartbeat1 in active/passive mode. Each server has its own IP, and they share a cluster IP that is managed by heartbeat only. This cluster IP is published in our DNS for accessing the mail storage cluster, and only the active node holds it at any given time.

Then I have DRBD shared storage on the two nodes.
On the DRBD storage I only put the Dovecot maildirs and the MySQL databases. The Dovecot and MySQL binaries are not shared, and neither is the configuration.

DRBD, Dovecot and MySQL are managed by heartbeat.
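
Just to illustrate the idea, the resource group in /etc/ha.d/haresources looks roughly like this (node name, cluster IP, DRBD resource name and mount point are placeholders from my lab, yours will differ):

node1 IPaddr::192.168.1.10/24 drbddisk::r0 Filesystem::/dev/drbd0::/drbd/mail::ext3 mysql dovecot

On takeover, heartbeat starts these from left to right (cluster IP, DRBD primary, mount, then the services) and stops them from right to left when it releases them.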

There is always a danger that the connection between the 2 nodes fails; you then get a "split brain" and a big data mess. So it's important to build redundancy into the connections.
For heartbeat I have one dedicated LAN connection and a serial connection.
For DRBD I use 2 bonded NICs on different PCI cards.
Take a look at dopd for DRBD. It marks the passive DRBD partition "outdated" if the DRBD connection fails, and because heartbeat can only take over if it can start all resources of a resource group, a failover is no longer possible while the DRBD link is down, so you can't mess up your DRBD so easily any more.
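
Roughly, the relevant pieces of that look like this here (a sketch of my lab config; device names, the DRBD resource name and the helper paths may differ on your distribution):

# /etc/ha.d/ha.cf (fragment): two independent heartbeat links, plus dopd
serial  /dev/ttyS0
baud    19200
bcast   eth1
respawn hacluster /usr/lib/heartbeat/dopd
apiauth dopd gid=haclient uid=hacluster

# /etc/drbd.conf (fragment): let dopd mark the peer outdated when the link dies
resource r0 {
  disk     { fencing resource-only; }
  handlers { outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5"; }
}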

If both heartbeat connections fail, you will have lots of trouble, and that's easy to achieve with a few wrong iptables rules if you use only LAN connections. So the serial cable is a nice thing because it's not affected!

We use heartbeat1 because we had some trouble getting heartbeat2 to run. Heartbeat1 is not able to monitor its resources, so we thought about using mon for that, and about adding STONITH devices like telnet-accessible power outlets to switch off the power of a failing node automatically. But that setup seemed rather complex, which is the enemy of reliability, and we heard of people having problems with accidental automatic failovers or reboots. So in the end we decided against automatic failover when a single service dies. We only use the failover of heartbeat1, i.e. if the active node dies completely, there is a failover to the passive node. And we use connection redundancy to hopefully avoid a split brain. And we make good backups ;-)

(Take care not to use NFS for storage if you choose a different setup from the one described here, because you can run into trouble with file locking!)
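
(If you do end up with NFS anyway, Dovecot 1.1 has a few settings meant for it; roughly something like the following, a sketch based on the Dovecot NFS wiki page, to be adapted and tested for your own setup:

mmap_disable = yes
dotlock_use_excl = yes      # ok on NFSv3 and later
mail_nfs_storage = yes
mail_nfs_index = yes        # only if the index files are on NFS too

I have not run this in production myself, so treat it as a pointer, not a recipe.)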

Our cluster protects against hardware problems and against some kinds of software problems. Because of DRBD, if you do an "rm -rf" on the maildir, you lose all data on _both_ nodes in the same second, so the protection against administration mistakes is not very good! Backups are really important.
But if we have trouble with the active node and can't fix it within a few minutes, we can try a failover to the passive node, and there is a good chance that the service will run fine on the other node. A nice thing for software updates, too.
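
For such planned switchovers we simply tell heartbeat1 to give up its resources on the active node, something like this (the script location differs between distributions, check where your heartbeat package installs it):

/usr/share/heartbeat/hb_standby   # run on the active node; the peer then takes over the resource group

After the update you can fail back the same way from the other node.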

For the MTA we use Postfix. Because it's not a good idea to put the Postfix mail queue on DRBD (bad experiences), some mails will be (temporarily) lost if you do a failover. So it's a good idea to minimize the time mails are held in the queue. Because of this, and because we need long-term stable mail storage but an always up-to-date spam and virus filter, we decided to put 2 Postfix/Amavis/SpamAssassin/antivirus relays in front of the IMAP cluster. They're identical, with the same MX priority in DNS, so if one of the relays fails, the other one takes the load.
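
In DNS that just means two MX records with the same preference, e.g. (relay names are placeholders):

mydomain.tld.   IN  MX  10  relay1.mydomain.tld.
mydomain.tld.   IN  MX  10  relay2.mydomain.tld.

Sending MTAs then pick either relay and fall back to the other one if it doesn't answer.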

As I said, this solution is only running in the lab for now and not yet in production, but there the failover seems to be no problem at all for the clients. So I hope I could give you some ideas.

regards,

  Andreas 


[Dovecot] strange quota behaviour with dovecot 1.1.7

2009-06-23 Thread listacc
Hello!

I'm running dovecot 1.1.7 (the most recent binary in the openSUSE 11.1 repository) with Postfix, MySQL and Postfixadmin.

For some days now I have been trying to get quota working, but I get some strange behaviour and have not yet been able to figure out where the error in my configuration is.

Quota information is inserted by Postfixadmin into a MySQL database. Dovecot seems to read this information correctly: when a mailbox fills up and goes over quota, further emails are rejected.

But then, if I increase the quota for this filled-up mailbox in Postfixadmin, Postfixadmin correctly changes the quota information in the MySQL table.

After raising the quota, mails to the account are accepted again. But my Thunderbird with the Quota plugin keeps showing the old quota setting, and when more mails are sent to this account, it shows a quota > 100%.
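
To rule out the Thunderbird plugin, the quota can also be queried by hand over IMAP, since the imap_quota plugin answers GETQUOTAROOT; roughly like this (hostname and password are placeholders):

openssl s_client -quiet -connect imap.mydomain.tld:993
a login t...@mydomain.tld secret
b getquotaroot INBOX
c logout

The untagged QUOTA reply shows current usage and limit (STORAGE values are in kilobytes), so it tells you what Dovecot itself believes, independent of any client cache.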

If I look into the "maildirsize" file of this account, it still shows the old quota value from before I raised it. If I delete the entry in maildirsize, dovecot writes the old value back in again.
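
(For reference: as far as I understand the Maildir++ format, the first line of maildirsize is the quota definition and the following lines are running totals of bytes and message counts per delivery, so a file might look roughly like this, with made-up numbers:

51200S
4099 2
12840 5

It's that first line which keeps coming back with the old value for me.)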

It seems that dovecot isn't looking at the MySQL table (although it did when creating the account). But why are mails accepted again if dovecot doesn't recognize the raised quota??

If I now try, as a user, to delete mails from the "overfilled" Inbox, this is denied with a "quota exceeded" message. But I'm still able to send further mails to this account. The only way to get the account working properly again is to delete the messages directly on the server's filesystem.

I'm very grateful for every hint!

regards,

  Andreas



-

dovecot -n:

# 1.1.7: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.27.23-0.1-default x86_64 openSUSE 11.1 (x86_64) ext3
base_dir: /drbd/mail/var/run/dovecot/
log_path: /var/log/dovecot.err
info_log_path: /var/log/dovecot.info
protocols: imaps managesieve
listen(default): *
listen(imap): *
listen(managesieve): 192.168.1.8:2000
ssl_listen(default): *:993
ssl_listen(imap): *:993
ssl_listen(managesieve): 
ssl_cert_file: /etc/ssl/certs/imap.domain.crt
ssl_key_file: /etc/ssl/private/imap.domain.key
verbose_ssl: yes
login_dir: /drbd/mail/var/run/dovecot//login
login_executable(default): /usr/lib/dovecot/imap-login
login_executable(imap): /usr/lib/dovecot/imap-login
login_executable(managesieve): /usr/lib/dovecot/managesieve-login
max_mail_processes: 2000
mail_max_userip_connections(default): 30
mail_max_userip_connections(imap): 30
mail_max_userip_connections(managesieve): 10
first_valid_uid: 5001
last_valid_uid: 5001
mail_location: maildir:/drbd/mail/vmail/%d/%n
mail_debug: yes
mail_executable(default): /usr/lib/dovecot/imap
mail_executable(imap): /usr/lib/dovecot/imap
mail_executable(managesieve): /usr/lib/dovecot/managesieve
mail_plugins(default): quota imap_quota
mail_plugins(imap): quota imap_quota
mail_plugins(managesieve): 
mail_plugin_dir(default): /usr/lib64/dovecot/modules/imap
mail_plugin_dir(imap): /usr/lib64/dovecot/modules/imap
mail_plugin_dir(managesieve): /usr/lib64/dovecot/modules/managesieve
managesieve_implementation_string(default): dovecot
managesieve_implementation_string(imap): dovecot
managesieve_implementation_string(managesieve): Cyrus timsieved v2.2.13
sieve_storage(default): 
sieve_storage(imap): 
sieve_storage(managesieve): /drbd/mail/vmail/%d/%n/sieve
sieve(default): 
sieve(imap): 
sieve(managesieve): /drbd/mail/vmail/%d/%n/dovecot.sieve
auth default:
  mechanisms: plain login cram-md5
  user: nobody
  verbose: yes
  debug: yes
  debug_passwords: yes
  passdb:
driver: pam
  passdb:
driver: sql
args: /etc/dovecot/dovecot-sql.conf
  userdb:
driver: passwd
  userdb:
driver: sql
args: /etc/dovecot/dovecot-sql.conf
  socket:
type: listen
client:
  path: /var/spool/postfix/private/auth
  mode: 432
  user: postfix
  group: postfix
master:
  path: /var/run/dovecot/auth-master
  mode: 432
  user: vmail
  group: vmail
plugin:
  sieve: /drbd/mail/vmail/%d/%n/dovecot.sieve
  quota: maildir


---
dovecot-sql.conf:

(...)
password_query = SELECT username AS user, password, '/drbd/mail/vmail/%d/%n' AS 
userdb_home, 'maildir:/drbd/mail/vmail/%d/%n' AS userdb_mail, 5001 AS 
userdb_uid, 5001 AS userdb_gid FROM mailbox WHERE username = '%u' AND active = 
'1'
(...)
user_query = SELECT '/drbd/mail/vmail/%d/%n' AS home, 
'maildir:/drbd/mail/vmail/%d/%n' AS mail, 5001 AS uid, 5001 AS gid, 
concat('*:storage=', quota, 'B') AS quota_rule FROM mailbox WHERE username = 
'%u' AND active = '1'


---

mysql> select quota from mailbox;
+---+
| quota |
+---+
| 51200 | 
| 51200 | 
| 51200 | 
| 51200 | 
| 51200 | 
| 51200 | 
| 51200 | 
| 51200 | 
| 51200 | 
|  3072 | 
|  3072 | 
+---+
11 rows in set (0.00 sec)



[Dovecot] RESOLVED: strange quota behaviour with dovecot 1.1.7

2009-07-15 Thread listacc
Hi,

I just wanted to tell you that my problems are resolved now. It seems Timo gave me the essential hint. Thank you very much!! :-)

On 27.06.2009 23:40, Timo Sirainen wrote:
>>   passdb:
>> driver: pam
>>   passdb:
>> driver: sql
>> args: /etc/dovecot/dovecot-sql.conf
> 
> Do you really want to have system users too? 

No, I don't want to :-) I switched this off.


>> password_query = SELECT username AS user, password, '/drbd/mail/vmail/%d/%n' 
>> AS userdb_home, 'maildir:/drbd/mail/vmail/%d/%n' AS userdb_mail, 5001 AS 
>> userdb_uid, 5001 AS userdb_gid FROM mailbox WHERE username = '%u' AND active 
>> = '1'
> 
> You're also not using userdb prefetch, so all these userdb_* fields are
> ignored here.
> 

It seems that was it! Quota works just fine now!
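
In case somebody else finds this thread later: as far as I understand Timo's hint, the prefetch userdb has to be enabled explicitly so that the userdb_* fields from password_query are actually used for IMAP logins, while the sql userdb stays in place for deliver. A rough sketch of how that can look in the auth section (adapt paths; this is not a copy of my exact config):

auth default {
  passdb sql {
    args = /etc/dovecot/dovecot-sql.conf
  }
  userdb prefetch {
  }
  userdb sql {
    args = /etc/dovecot/dovecot-sql.conf
  }
}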

> After growing quota, mails to the account are accepted again. But my 
> Thunderbird with Quota plugin continues showing the old quota setting. When 
> more mails are sent to this account, it shows a quota > 100%.
> (...)
> If I try now, as a user, to delete mails from the "overfilled" Inbox, it is 
> denied with a "quota exceeded" message. But I´m able to send further mails to 
> this account. The only way to get the account working properly again is to 
> delete the messages directly on the server´s filesystem. 
> 

This still happens even after correcting the MySQL query, but it's very easy to fix: the user just has to log out and in again (e.g. close and reopen Thunderbird) :-)

Thank you very much for the fix and for this very nice IMAP server!!!

  Andreas




[Dovecot] Managesieve configuration problem

2009-07-30 Thread listacc
Hello!

I have been trying for quite some time now to make managesieve work with dovecot 1.1.17 and virtual users on openSUSE 11.1. I think I have some misconfiguration somewhere, but I can't figure it out. Perhaps it's something with the paths to the sieve scripts?

In dovecot.conf I find this in the comments:

~~
protocol managesieve {
#(...)
  # Specifies the location of the symlink pointing to the active script in
  # the sieve storage directory. This must match the SIEVE setting used by
  # deliver (refer to http://wiki.dovecot.org/LDA/Sieve#location for more
  # info). Variable substitution with % is recognized.
  # Take care: if a file in a maildir: begins with a '.', it is recognized
  # as a folder; so, avoid this.
  #sieve=~/.dovecot.sieve
  sieve = /drbd/mail/vmail/%d/%n/dovecot.sieve
}
~~

"This must match the SIEVE setting used by deliver", so I have to take a look 
to LDA section, no?
But there I don't find any hint for sieve settings:

~~
protocol lda {
  # Address to use when sending rejection mails.
  postmaster_address = postmas...@mydomain.tld

  # Hostname to use in various parts of sent mails, eg. in Message-Id.
  # Default is the system's real hostname.
  #hostname = 

  # Support for dynamically loadable plugins. mail_plugins is a space separated
  # list of plugins to load.
  mail_plugins = cmusieve quota
  mail_plugin_dir = /usr/lib64/dovecot/modules/lda

  # If user is over quota, return with temporary failure instead of
  # bouncing the mail.
  #quota_full_tempfail = no

  # Format to use for logging mail deliveries. You can use variables:
  #  %$ - Delivery status message (e.g. "saved to INBOX")
  #  %m - Message-ID
  #  %s - Subject
  #  %f - From address
  #deliver_log_format = msgid=%m: %$

  # Binary to use for sending mails.
  sendmail_path = /usr/lib/sendmail

  # Human readable error message for rejection mails. You can use variables:
  #  %n = CRLF, %r = reason, %s = subject, %t = recipient
  rejection_reason = Your message to <%t> was automatically rejected:%n%r

  # UNIX socket path to master authentication server to find users.
#  auth_socket_path = /drbd/mail/var/run/dovecot/auth-master
  auth_socket_path = /var/run/dovecot/auth-master

}
~~
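
The only deliver-side sieve path I can find in my own config is the one in the plugin section (this is from my dovecot -n, same path as in the managesieve block above):

~~
plugin {
  sieve = /drbd/mail/vmail/%d/%n/dovecot.sieve
}
~~

So if I read the comment correctly, the two settings do match, and the problem must be somewhere else.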

I tried to log in to managesieve manually with gnutls-cli, and that was successful: I was able to authenticate.
But when I try to put the example script from the dovecot wiki
(http://wiki.dovecot.org/ManageSieve/Troubleshooting)

~~~
PUTSCRIPT "hutsefluts" {6+}
keep;
~~~

I don't get an

~
OK "Putscript completed."
~

in response; nothing happens, and managesieve seems to be waiting for something else.
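
(For reference, as far as I understand the managesieve protocol, the {6+} is a non-synchronizing literal: it announces that exactly 6 octets of script data follow, and the server reads until it has received that many bytes. "keep;" plus a line feed is exactly 6 octets, so a successful exchange should look like:

~~~
C: PUTSCRIPT "hutsefluts" {6+}
C: keep;
S: OK "Putscript completed."
~~~

If the client sent CRLF line endings, the count would be 7 and the server would keep waiting; maybe that is what happens with gnutls-cli here?)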

When I then look into the directory where the script should be stored (the folder has been created automatically), I don't find any script file, only a folder "tmp" containing a plain-text file with the script commands inside, e.g.

~~~
hutsefluts-12345678.M033562P2271.imap.sieve
~~~

After disconnecting, this file disappears.


In the dovecot.info log it looks like this:

~~
dovecot: Jul 30 14:51:56 Info: auth(default): master out: USER 16 t...@mydomain.tld home=/drbd/mail/vmail/mydomain.tld/test mail=maildir:/drbd/mail/vmail/mydomain.tld/test uid=5001 gid=5001 quota_rule=*:storage=51200B
dovecot: Jul 30 14:51:56 Info: managesieve-login: Login: user=<t...@mydomain.tld>, method=PLAIN, rip=192.168.200.39, lip=192.168.200.40, TLS
dovecot: Jul 30 14:51:56 Info: MANAGESIEVE(t...@mydomain.tld): Effective 
uid=5001, gid=5001, home=/drbd/mail/vmail/mydomain.tld/test
dovecot: Jul 30 14:51:56 Info: MANAGESIEVE(t...@mydomain.tld): sieve-storage: 
using active sieve script path: /drbd/mail/vmail/mydomain.tld/test/dovecot.sieve
dovecot: Jul 30 14:51:56 Info: MANAGESIEVE(t...@mydomain.tld): sieve-storage: 
using active sieve script path: /drbd/mail/vmail/mydomain.tld/test/dovecot.sieve
dovecot: Jul 30 14:51:56 Info: MANAGESIEVE(t...@mydomain.tld): sieve-storage: 
using sieve script storage directory: /drbd/mail/vmail/mydomain.tld/test/sieve
dovecot: Jul 30 14:51:56 Info: MANAGESIEVE(t...@mydomain.tld): sieve-storage: 
relative path to sieve storage in active link: sieve/
~~

../vmail/mydomain.tld/test/dovecot.sieve is missing completely.


Perhaps you can give me a small hint?



And here is my "dovecot -n":

~~
# 1.1.7: /etc/dovecot/dovecot.conf
Warning: fd limit 1024 is lower than what Dovecot can use