Re: Getting panic in http-client-request.c: line 1240 during indexing on Ubuntu 20.04

2021-02-08 Thread deano-dovecot
 

On 2021-02-07 7:32 pm, John Fawcett wrote: 

> On 07/02/2021 20:15, @lbutlr wrote:
>> On 07 Feb 2021, at 02:07, @lbutlr wrote:
>>> On 06 Feb 2021, at 11:06, John Fawcett wrote:
>>> 19.08.20
>> Is that a malformed ISO date 2019-08-20 or a truncated European-style
>> 19-08-2020?
>> Either way, I cannot find the message in my dovecot folder. Closest I
>> can find is Message-ID:
>> <20200820141341.ga1...@meili.valhalla.31bits.net> from 2020-08-20 at
>> 16:13:52, but there is no patch in that mail.
>
> Here's the post in the list archives:
>
> https://dovecot.org/pipermail/dovecot/2020-August/119703.html [1]

Do we know when (or even if) that patch will make it into the mainline ?
I would much prefer pulling from the repo ...

Links:
--
[1] https://dovecot.org/pipermail/dovecot/2020-August/119703.html


Re: Getting panic in http-client-request.c: line 1240 during indexing on Ubuntu 20.04

2021-02-08 Thread deano-dovecot
 

On 2021-02-08 1:29 pm, John Fawcett wrote: 

> On 08/02/2021 18:40, deano-dove...@areyes.com wrote:
>> On 2021-02-07 7:32 pm, John Fawcett wrote:
>>> On 07/02/2021 20:15, @lbutlr wrote:
>>>> On 07 Feb 2021, at 02:07, @lbutlr wrote:
>>>>> On 06 Feb 2021, at 11:06, John Fawcett wrote:
>>>>> 19.08.20
>>>> Is that a malformed ISO date 2019-08-20 or a truncated European-style
>>>> 19-08-2020?
>>>> Either way, I cannot find the message in my dovecot folder. Closest I
>>>> can find is Message-ID:
>>>> <20200820141341.ga1...@meili.valhalla.31bits.net> from 2020-08-20 at
>>>> 16:13:52, but there is no patch in that mail.
>>>
>>> Here's the post in the list archives:
>>>
>>> https://dovecot.org/pipermail/dovecot/2020-August/119703.html [1]
>>
>> Do we know when (or even if) that patch will make it into the mainline ?
>> I would much prefer pulling from the repo ...
>
> +1 from me.
>
> I'd like to see this patch (or something equivalent) go in. Without it,
> Tika is unusable for me.
>
> Tika is also unusable, in my opinion, without the (basic) limits on
> processing unbounded amounts of text. There is also another patch
> introducing basic auth, which I find useful since my Tika server is
> password protected.
>
> If all three could make it into the repo then I could avoid the manual
> patching that I do at each release.

Unfortunately they don't make the source repos (deb-src entries for
http://repo.dovecot.org/) available, at least as far as I've found.
So patching the .deb is not a simple affair.

Normally it would be a case of 

> apt-get source dovecot-solr

which would grab the whole dovecot tree, then patching, then a few deb
builder cmds to rebuild the .debs. Simple (ish) ... 
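
In concrete terms, that flow would be roughly this (a sketch only -
untestable here precisely because those deb-src entries don't exist
for repo.dovecot.org, and the unpacked directory name is an assumption):

apt-get source dovecot-solr               # fetch and unpack the source tree
cd dovecot-2.3.*                          # version-specific directory (assumed)
patch -p1 < ../fts-solr-tika.patch        # apply the fix
dpkg-buildpackage -rfakeroot -b -uc -us   # rebuild unsigned binary .debs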

Are you installing from source tarballs, or rebuilding debs ? What's
your environment ? 

DC 

Links:
--
[1] https://dovecot.org/pipermail/dovecot/2020-August/119703.html


Re: Simple patch script for fts tika faulting on Ubuntu 20.04

2021-02-18 Thread deano-dovecot
 

On 2021-02-07 7:32 pm, John Fawcett wrote: 

> On 07/02/2021 20:15, @lbutlr wrote:
>> On 07 Feb 2021, at 02:07, @lbutlr wrote:
>>> On 06 Feb 2021, at 11:06, John Fawcett wrote:
>>> 19.08.20
>> Is that a malformed ISO date 2019-08-20 or a truncated European-style
>> 19-08-2020?
>> Either way, I cannot find the message in my dovecot folder. Closest I
>> can find is Message-ID:
>> <20200820141341.ga1...@meili.valhalla.31bits.net> from 2020-08-20 at
>> 16:13:52, but there is no patch in that mail.
>
> Here's the post in the list archives:
>
> https://dovecot.org/pipermail/dovecot/2020-August/119703.html [1]

Thanks so much for the help - I've verified the patch and it works
nicely. Here's a simple script for 20.04 - it's very manual; you have
to choose the version to download.

It's just two lines of patch - is there any chance this will be fixed in
the next point release ? 

Links:
--
[1] https://dovecot.org/pipermail/dovecot/2020-August/119703.html
#!/bin/bash

## dovecot solr fts has a bug that causes tika to fault when indexing.
# https://dovecot.org/pipermail/dovecot/2020-August/119703.html

# Browse https://repo.dovecot.org/ce-2.3-latest/ to find correct package to pull down
#  e.g. https://repo.dovecot.org/ce-2.3-latest/ubuntu/focal/pool/main/2.3.13-2_ce/

sudo apt install -y --no-install-recommends devscripts build-essential
mkdir dovecot
cd dovecot

# Fetch patch
# https://dovecot.org/pipermail/dovecot/2020-August/119703.html

cat > fts-solr-tika.patch << 'EOF'
diff --git a/src/plugins/fts-solr/solr-connection.c b/src/plugins/fts-solr/solr-connection.c
index ae720b5e2870a852c1b6c440939e3c7c0fa72b5c..9d364f93e2cd1b716b9ab61bd39656a6c5b1ea04 100644
--- a/src/plugins/fts-solr/solr-connection.c
+++ b/src/plugins/fts-solr/solr-connection.c
@@ -103,7 +103,7 @@ int solr_connection_init(const struct fts_solr_settings *solr_set,
 		http_set.ssl = ssl_client_set;
 		http_set.debug = solr_set->debug;
 		http_set.rawlog_dir = solr_set->rawlog_dir;
-		solr_http_client = http_client_init(&http_set);
+		solr_http_client = http_client_init_private(&http_set);
 	}

 	*conn_r = conn;
diff --git a/src/plugins/fts/fts-parser-tika.c b/src/plugins/fts/fts-parser-tika.c
index a4b8b5c3034f57e22e77caa759c090da6b62f8ba..b8b57a350b9a710d101ac7ccbcc14560d415d905 100644
--- a/src/plugins/fts/fts-parser-tika.c
+++ b/src/plugins/fts/fts-parser-tika.c
@@ -77,7 +77,7 @@ tika_get_http_client_url(struct mail_user *user, struct http_url **http_url_r)
 		http_set.request_timeout_msecs = 60*1000;
 		http_set.ssl = &ssl_set;
 		http_set.debug = user->mail_debug;
-		tika_http_client = http_client_init(&http_set);
+		tika_http_client = http_client_init_private(&http_set);
 	}
 	*http_url_r = tuser->http_url;
 	return 0;
EOF

# Download dovecot
dget -u https://repo.dovecot.org/ce-2.3-latest/ubuntu/focal/pool/main/2.3.13-2_ce/dovecot-Ubuntu_20.04.dsc

# Get build dependencies
sudo apt install -y --no-install-recommends pkg-config libpam0g-dev libldap2-dev libpq-dev libmysqlclient-dev libsqlite3-dev libsasl2-dev krb5-multidev libbz2-dev libdb-dev libcurl4-gnutls-dev libexpat-dev libwrap0-dev dh-systemd libclucene-dev liblzma-dev liblz4-dev libexttextcat-dev libstemmer-dev dh-exec liblua5.2-dev libsodium-dev libzstd-dev

# Patch dovecot and build
cd dovecot-2.3.13
patch -p1 < ../fts-solr-tika.patch
dpkg-buildpackage -rfakeroot -b -uc -us

# Install new dovecot-solr package

cd ..
# sudo dpkg -i dovecot-solr_2.3.13-2+ubuntu20.04_amd64.deb
# sudo systemctl restart dovecot

# Test indexing

# sudo doveadm -D index -u j...@example.com INBOX

exit



Some questions about mail_crypt setups

2021-02-21 Thread deano-dovecot
 

Some questions about mail_crypt setups 

I have global mail encryption working nicely, and replication works
nicely between two systems. The main problem is that the private and
public keys are *right there* on the server in /etc/dovecot/private ...
Fine for a completely controlled system, but not so fine on a
rented VPS etc.

When are the keys read in by dovecot ? Are they ever read in again while
dovecot is running, or does it cache them in RAM until dovecot is
restarted ?

Would it be possible for dovecot to read the keys as output from a
script ? I'm thinking of a small script that would reach out to an
authentication service like Authy or Okta or similar. Admin gets an
alert on their phone, taps OK, UNLOCK and the two keys are returned to
the script, which then hands them back to dovecot and away it goes. 

The mail_crypt config normally contains

> mail_crypt_global_private_key =
> mail_crypt_global_public_key =
> mail_crypt_global_script =
>
> # /etc/dovecot/conf.d/99-mailcrypt.conf
> #--
> mail_attribute_dict = file:%h/Maildir/dovecot-attributes
> plugin {
>   mail_crypt_require_encrypted_user_key = yes
>   mail_crypt_save_version = 2
>   mail_crypt_curve = secp521r1
> }
> 
> # /etc/dovecot/dovecot-sql.conf.ext
> #--
> # CREATE TABLE IF NOT EXISTS `users` (
> #   `username` varchar(64) character set utf8 collate utf8_bin NOT NULL COMMENT 'localpart of email-address',
> #   `domain` varchar(64) character set utf8 collate utf8_bin NOT NULL COMMENT 'domain-part of email-address',
> #   `name` varchar(64) character set utf8 collate utf8_bin NOT NULL COMMENT 'Full name of user',
> #   `password` varchar(128) character set utf8 collate utf8_bin NOT NULL COMMENT 'base64-encoded SHA512 hash of password',
> #   PRIMARY KEY (`username`,`domain`)
> # ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='users';
>
> driver = mysql
> connect = host=/var/run/mysqld/mysqld.sock dbname=emailers user=dovecot password=RandomPassword
> default_pass_scheme = SHA512-CRYPT
>
> password_query = SELECT username, password, '%w' AS userdb_mail_crypt_private_password, '/var/mail/%d/%n' AS userdb_home, 'vmail' AS userdb_uid, 'vmail' AS userdb_gid FROM users WHERE username = '%n' AND domain = '%d'
>
> # For LDA:
> user_query = SELECT '/var/mail/%d/%n' AS home, 'vmail' AS uid, 'vmail' AS gid FROM users WHERE username = '%n' AND domain = '%d'
>
> # For using doveadm -A:
> iterate_query = SELECT username, domain FROM users

Except that replication doesn't work due to the user password not being
available. Actually, indexing fails too for the same reason. 

> Feb 21 14:02:13 
> indexer-worker(testu...@example.com)<120846>:
>  Error: Mailbox INBOX: UID=1: read() failed: 
> read(/var/mail/example.com/testuser/Maildir/INBOX/new/1613934133.M132059P120842.dove1,S=2568,W=2624)
>  failed: Private key not available: Cannot decrypt key 
> f64e7c12a60b3df12ebf865a70bec57fedd3e9b4fd98df93205f1096db14fda7: Cannot 
> decrypt key eca099273f525ca46b2f5640253770ad19e0578543244d8cd34bde183e996bd5: 
> Password not available (read reason=fts indexing) 
> 
> Feb 21 14:02:13 
> indexer-worker(testu...@example.com)<120846>:
>  Error: Failed to read mailbox INBOX mail UID=1 stream: Mailbox INBOX: UID=1: 
> read() failed: 
> read(/var/mail/example.com/testuser/Maildir/INBOX/new/1613934133.M132059P120842.dove1,S=2568,W=2624)
>  failed: Private key not available: Cannot decrypt key 
> f64e7c12a60b3df12ebf865a70bec57fedd3e9b4fd98df93205f1096db14fda7: Cannot 
> decrypt key eca099273f525ca46b2f5640253770ad19e0578543244d8cd34bde183e996bd5: 
> Password not available (read reason=fts indexing) 
> 
> Feb 21 14:02:13 
> indexer-worker(testu...@example.com)<120846>:
>  Error: Mailbox INBOX: Mail search failed: Internal error occurred. Refer to 
> server log for more information. [2021-02-21 14:02:13] 
> 
> Feb 21 14:02:13 
> indexer-worker(testu...@example.com)<120846>:
>  Error: Mailbox INBOX: Transaction commit failed: FTS transaction commit 
> failed: transaction context (attempted to index 1 messages (UIDs 1..1)) 
> 
> Feb 21 14:02:13 dsync-local(testu...@example.com): 
> Error: Mailbox INBOX: UID=1: read() failed: 
> read(/var/mail/example.com/testuser/Maildir/INBOX/new/1613934133.M132059P120842.dove1,S=2568,W=2624)
>  failed: Private key not available: Cannot decrypt key 
> f64e7c12a60b3df12ebf865a70bec57fedd3e9b4fd98df93205f1096db14fda7: Cannot 
> decrypt key eca099273f525ca46b2f5640253770ad19e0578543244d8cd34bde183e996bd5: 
> Password not available (read reason=prefetch)

What are the options here for providing the decryption password or key ?
The user password is already stored in the mysql database as a
SHA512-CRYPT hash, so we don't want to store it unencrypted ... I saw the
mail_crypt docs
(https://doc.dovecot.org/configuration_manual/mail_crypt_plugin/)
mention 

> mail_crypt_private_key - Private key to decrypt user's master key, can be 
> base64 encoded

I'm assuming tha

Re: Dovecot HA/Resilience

2020-01-13 Thread deano-dovecot
My own personal setup is a 3-node system using three cheap VPS'.  I also 
helped set the same thing up for a previous company using proper 
systems; that one was handling customer email.


Everything possible is kept in mariadb with galera for master-master 
replication.  Two main mail nodes with 
dovecot/nginx/roundcube/spamassassin/etc and a third as a mariadb quorum 
node.  Dovecot uses replication to keep the encrypted mailstores in 
sync.


This way there is no need for HA storage - you're relying on 
replication.  Oh, and the replication all happens over a tinc vpn mesh 
network, but would work equally well over zerotier or whatever.


I have an ansible playbook to set the whole thing up automagically.  I'm 
working on cleaning it up and documenting it so others can use it as 
well.  So long as you have ssh key'd access to 3 nodes, it will build 
the entire setup.


I'll put it up on github in a few weeks.  NOTE: this is built for MY 
needs.  It might not meet your needs.  But when it's ready(ish) you're 
welcome to try it out.  For example, it's not true HA - you have to hit 
one node or the other.  If you control your own DNS you could set up 
round-robin for a mail.yourdomain.com rather than using 
mail1.yourdomain.com and mail2.yourdomain.com.  For me, I don't bother.
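
For reference, the dovecot side of that replication is only a handful of
lines. This is a generic sketch along the lines of the standard dsync
replication setup, not my actual config - the hostname, port and password
here are placeholders:

# conf.d/99-replication.conf - generic dsync replication sketch
mail_plugins = $mail_plugins notify replication

service replicator {
  process_min_avail = 1
}

# each node needs to reach the other node's doveadm server
doveadm_port = 12345                  # placeholder port
doveadm_password = shared-secret      # placeholder secret

service doveadm {
  inet_listener {
    port = 12345
  }
}

plugin {
  mail_replica = tcp:mail2.example.com:12345   # the other node
}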


Dean.

On 2020-01-11 3:50 am, Jean-Daniel wrote:

If you just want active/standby, you can simply use corosync/pacemaker
as others already suggest, and don't use Director.
I have a dovecot HA server that uses a floating IP and pacemaker to
manage it, and it works quite well.

The only real hard part is having HA storage.
You can simply use NFS storage shared by both servers (as long as
only one has the floating IP, you won't have issues with the same
client accessing it from both servers), but the storage will then be a
single point of failure.
You may have both servers use their own storage and sync it using
dovecot replicator (I have never tried, so I can't say for sure), or
have another layer taking care of the storage sync (like DRBD).

While DRBD is fine to sync dovecot storage, it may not be enough if
you really want HA and have other services (postfix, rspamd, …)
running on that server, as you may need to also have the postfix
queues (or other data) synced on both servers.



On 10 Jan 2020, at 21:12, Adrian Minta wrote:


Yes, but it works for small systems if you set IP source-address
persistence on the LB or, even better, if you set the priority to
Active/Standby. I couldn't find a good example with dovecot director
and backend on the same server, so adding another two machines seems
overkill for small setups.


If someone has a working example for this please make it public !

Quote from https://wiki2.dovecot.org/Director

"Director and Backend in same server (broken)
NOTE: This feature never actually worked. It would require further 
development to fix (director would need to add "proxy" field to extra 
fields and notify auth that the auth_request can be freed)."


Also:

https://dovecot.org/pipermail/dovecot/2012-May/135600.htm

https://www.dovecot.org/list/dovecot/2012-June/083983.html


On 1/10/20 8:09 PM, Aki Tuomi wrote:
Also you should probably use dovecot director to ensure same user 
sessions end up on same server, as it's not supported to access same 
user on different backends in this scenario.


Aki


On 10/01/2020 19:49 Adrian Minta  wrote:


 Hello,
 you need to "clone" the first server, change the ip address, mount 
the same maildir storage and use some mechanism to share the 
accounts database.


 Then you need to put a TCP load-balancer in front of the servers and 
you are good to go. This is the easiest solution if you already have 
an appliance in the network that can do LB, for instance if you 
already have a firewall with that function.




 Another solution is to make a cluster with corosync/pacemaker out 
of the two servers:


 
https://www.digitalocean.com/community/tutorials/how-to-create-a-high-availability-setup-with-corosync-pacemaker-and-floating-ips-on-ubuntu-14-04
 
https://linuxacademy.com/blog/linux-academy/configure-a-failover-cluster-with-pacemaker/






 On 1/10/20 7:16 PM, Kishore Potnuru wrote:



Thank you all for the replies


 I have a test environment with the same configuration, but I have 
been asked to go with the same environment for HA/Resilience in 
Live.
   Yes, I have only one Live server. It is configured in "Maildir" 
format. The data is stored on network/shared storage (definitely not 
local disk; it's a mount point).
   I have been asked to create HA/Resilience for this environment. 
They gave me another server with the same RAM/CPU/OS and I need to 
configure dovecot on it.
   Please provide your suggestions/steps as I am new to this kind 
of environment.
   When an email arrives at one or both of the two servers, how will 
it be read by the user from Outlook? How do I create the environment?



 Thanks,
 Kishore Potnuru
   On Fri, Jan 10,

Re: Dovecot - Upgrade Solr 7.7.2 to 8.4.1

2020-01-22 Thread deano-dovecot

On 2020-01-22 8:42 am, Domenico Pastore wrote:

Hello,
I have Dovecot configured with Solr for the indexes.

I need your support to upgrade Solr 7.7.2 to 8.4.1.
Solr 7.7.2 has a security issue, CVE-2019-12409.

Is it possible to upgrade Solr?
Does Dovecot work correctly with Solr 8.x?

The Solr documentation recommends after updating:
"It is always strongly recommended that you fully reindex your
documents after a major version upgrade."

Are there any tips for Dovecot?


Easy mitigation - block or control all access on port 18983 via 
iptables ?  Might be a bit of a blanket statement though ...
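
Something along these lines is all I mean - an untested sketch, assuming
the vulnerable JMX/RMI listener from CVE-2019-12409 is the one on 18983
(the documented fix is ENABLE_REMOTE_JMX_OPTS="false" in solr.in.sh):

# drop anything that isn't loopback traffic to the JMX/RMI port
iptables -A INPUT -p tcp --dport 18983 ! -i lo -j DROP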


Be aware that later versions of Solr use a *lot* more RAM.  I tested 
last year with 8.3.0 and even with tuning was seeing much higher RES 
memory usage.


DC


Re: Current thinking on backups ?

2020-05-29 Thread deano-dovecot
I run a pair of dovecot servers for personal small domains with several 
layers of backup in place ...

- The two dovecot servers replicate to each other via a Tinc vpn mesh. 
That gives email resiliency.
- All mail is replicated via offlineimap to a 3rd server over that Tinc 
vpn. It's on the mesh, it has space, so why not ?
- All mail is replicated as well via mbsync to a zfs dataset on my 
main media server at home once an hour.
- That zfs dataset (and others) is snapshotted hourly, and zfs 
send/recv'd to a backup box nightly (sketch below).
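
The zfs leg is just the standard incremental send/recv pattern - a
sketch only, with placeholder dataset and host names rather than my
real ones:

# hourly snapshot on the media server
zfs snapshot tank/mail@$(date +%Y%m%d-%H%M)

# nightly incremental send to the backup box
zfs send -i tank/mail@prev tank/mail@latest | \
    ssh backupbox zfs recv -F backup/mail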


Outside of dovecot procedures, I find mbsync to work extremely well.  It 
was easy enough to set up a systemd timer and service to pull the mail 
down.



mysync.timer
==

# Run the mbsync process to sync mail down to local mediabox

[Unit]
Description=mbsync timer
ConditionPathExists=%h/.mbsyncrc
ConditionPathIsDirectory=/stuff/Backups/Mailsystems/mbsync-backups

[Timer]
OnBootSec=15m
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target


mysync.service
==
# mbsync service

[Unit]
Description=mbsync backup from mailsystems
ConditionPathExists=%h/.mbsyncrc
ConditionPathIsDirectory=/stuff/Backups/Mailsystems/mbsync-backups

[Service]
Type=oneshot
ExecStart=/usr/local/bin/mbsync backup

[Install]
WantedBy=default.target


"backup" is the mbsync group that includes all the defined channels that 
determine what should be backed up.  Transparent.  In the background.  
Don't have to think about it, it's just there.
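
The group itself is defined in ~/.mbsyncrc. A minimal sketch with
placeholder account/host names (my real config has more channels; note
that newer isync releases say Far/Near where older ones said
Master/Slave):

IMAPAccount mail1
Host mail1.example.com
User backupuser
PassCmd "cat ~/.mbsync-pass"
SSLType IMAPS

IMAPStore mail1-remote
Account mail1

MaildirStore mail1-local
SubFolders Verbatim
Path /stuff/Backups/Mailsystems/mbsync-backups/mail1/
Inbox /stuff/Backups/Mailsystems/mbsync-backups/mail1/INBOX

Channel mail1
Far :mail1-remote:
Near :mail1-local:
Patterns *
Create Near
Sync Pull

Group backup
Channel mail1

Since the units above use %h and default.target they're user units, so
they get enabled with "systemctl --user enable --now mysync.timer".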


I've done test restores to test environments via mbsync, and it all 
works flawlessly.



On 2020-05-26 12:31 am, Germain Le Chapelain wrote:
On 24 May 2020, at 14:42, Laura Smith wrote:


Hi,

What are people doing for backups ?
My current process is LVM snapshot and backup from that to NFS share.
But there seem to be hints around the internet that people use/abuse 
"doveadm backup" for backup purposes even though it seems its original 
intention was for transferring mailboxes between dovecot instances.
Assuming it's ok to "doveadm backup" to an NFS share, is it ok to use 
"doveadm backup" when dovecot has replication set up 
(replication-notify etc.) ? Or will it interfere ?

Thanks!
Laura


This has come up in the past:

https://dovecot.org/pipermail/dovecot/2020-February/thread.html#118206

I ended up developing my own system based on forwarding all emails to
a program (from which I back them up as they come in).

I am hoping that if disaster and/or misfortune were to strike my server,
I could simply cat >> back all those files in order (or not, come to
think of it) into /var/mail/ (or somewhere better suited in Postfix).

I am not interested in saving the state of the mailbox so much as all
the mails that ever came in (or went out).


--
Dean Carpenter
deano is at areyes dot com
203 six oh four 6644


Re: dovecot-fts-solr Solr9 support

2023-04-24 Thread deano-dovecot
 

Shawn - 

You had mentioned in another email (somewhere) that you were hopefully
going to do a write-up of setting up Solr 9.x with Dovecot. Any chance
you've had time for that ?

Thanks - 

On 2022-09-30 1:52 pm, Shawn Heisey wrote: 

> On 9/27/22 19:32, Nathanael Anderson wrote:
> 
>> I was trying a new install of dovecot w/ solr9. I've manually fixed the file 
>> linking to the proper directories, however one plugin is no longer shipped. 
>> Since the solr files aren't updated yet to 9, can anyone tell me if I need 
>> the discontinued velocity plugin that was default in the dovecot solr 7.7 
>> config file. It appears it is now a third party plugin that hasn't been 
>> updated for 3 years.
> 
> The velocity stuff that Solr ships with is a templating system that 
> allows Solr to host a little website showcasing its capabilities. It is 
> strongly recommended to never use this in production, as it requires 
> that end users have direct network access to the Solr install, which is 
> never a good idea.
> 
> Dovecot accesses the API directly and does not need velocity.
> 
> I am running a dev version of Solr 9.1.0 with the config and schema 
> stripped down to just what is needed for Dovecot. I have added the jars 
> necessary for the ICU analysis components and I am using two of those 
> analysis components in my schema.
> 
> I installed Solr on Ubuntu Server using the service installer script 
> included in the download. This extracts the tarball in /opt, and then 
> sets up /opt/solr as a symlink to the version-specific directory in 
> /opt. It creates a directory structure under /var/solr and creates 
> /etc/default/solr.in.sh. If you use a service name other than solr, 
> that will be named /etc/default/${servicename}.in.sh and I believe the 
> data will go to /var/${servicename}.
> 
> For ICU, I created /var/solr/data/lib, then copied icu4j-70.1.jar and 
> lucene-analysis-icu-9.3.0.jar from /opt/solr/modules/analysis-extras/lib 
> to that new lib directory. Solr 9.0.0 would have lucene jars from Lucene 
> 9.0.0, but the 9.x branch is currently using Lucene 9.3.0. Do not use 
> <lib> config elements in solrconfig.xml to load the jars. My 
> solrconfig.xml and managed-schema.xml files can be found here:
> 
> https://paste.elyograg.org/view/97597ed3 [1]
> https://paste.elyograg.org/view/dca55086 [2]
> 
> My index is quite small by Solr standards, which is why I have such a 
> low maxTime on autoSoftCommit. Larger indexes may do better with a 
> larger interval there.
> 
> I use LATEST for luceneMatchVersion, which generates a warning when Solr 
> starts. I am also using 2.0 for the schema version so that it will 
> automatically pick up new defaults after the 1.6 version when those 
> versions are created in later versions of Solr.
> 
> This is the current contents of /etc/default/solr.in.sh with commented 
> lines removed:
> 
> ---
> SOLR_PID_DIR="/var/solr"
> SOLR_HOME="/var/solr/data"
> LOG4J_PROPS="/var/solr/log4j2.xml"
> SOLR_LOGS_DIR="/var/solr/logs"
> SOLR_PORT="8983"
> SOLR_HEAP="1g"
> GC_TUNE=" 
> -XX:+UseG1GC 
> -XX:+ParallelRefProcEnabled 
> -XX:MaxGCPauseMillis=100 
> -XX:+UseLargePages 
> -XX:+AlwaysPreTouch 
> -XX:+ExplicitGCInvokesConcurrent 
> -XX:ParallelGCThreads=2 
> -XX:+UseStringDeduplication 
> -XX:+UseNUMA 
> "
> SOLR_JAVA_STACK_SIZE="-Xss1m"
> SOLR_ULIMIT_CHECKS=false
> SOLR_GZIP_ENABLED=true
> SOLR_JETTY_HOST=0.0.0.0
> ---
> 
> Once you have all that in place, start and stop solr using service or 
> systemctl. Don't run the solr script directly except to create the 
> index ... and for that you must run it as the solr user. Running it as 
> root is prohibited by default, and forcing it will cause problems.
> 
> My Solr install is running in cloud mode, but I have removed the things 
> that configure that to make this info easier to use.
> 
> One final note: Solr 9 cannot use indexes touched by Solr 7 or 
> earlier. You will need to completely reindex.
> 
> Thanks,
> Shawn
 

Links:
--
[1] https://paste.elyograg.org/view/97597ed3
[2] https://paste.elyograg.org/view/dca55086
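
For convenience, the ICU jar copy Shawn describes comes down to
something like this (a sketch based on his description; the exact jar
versions will differ per release):

sudo mkdir -p /var/solr/data/lib
sudo cp /opt/solr/modules/analysis-extras/lib/icu4j-*.jar \
        /opt/solr/modules/analysis-extras/lib/lucene-analysis-icu-*.jar \
        /var/solr/data/lib/
sudo chown -R solr:solr /var/solr/data/lib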


Re: [EXT] Replication going away?

2023-07-20 Thread deano-dovecot
 

On 2023-07-19 4:08 pm, Gerald Galster wrote: 

>>>> A 50-100 mailbox user server will run Dovecot CE just fine. Pro
>>>> would be overkill.
>>>
>>> What is overkill? I always thought it had a bit more features and
>>> support.
>>
>> For Pro 2.3, you need (at minimum) 7 Dovecot nodes + HA authentication
>> + HA storage + (minimum) 3 Cassandra nodes if using object storage. This
>> is per site; most of our customers require data center redundancy as
>> well, so multiply as needed. And this is only email retrieval; this
>> doesn't even begin to touch upon email transfer. Email high availability
>> isn't cheap. (I would argue that if you truly need this sort of
>> carrier-grade HA for 50 users, it makes much more sense to use email
>> as-a-service than trying to do it yourself these days. Unless you have
>> very specific reasons and a ton of cash.)
>
> High availability currently is cheap with a small two-server setup:
> you need 3 servers or virtual machines: dovecot (and maybe postfix)
> running on two of them and mysql galera on all three.
> This provides very affordable active/active geo-redundancy.
>
> No offence, it's just a pity to see that feature disappearing.

That's exactly how my own 3-node personal setup works. I shove all I can
into mariadb with galera (dovecot auth, spamassassin, etc) across the 3
nodes. Dovecot replication keeps the 2 dovecot instances in sync; the
3rd node is the quorum node for galera. 

This is on 3 cheap VPS' in 3 locations around the US. Mesh VPN
between them for the encrypted connectivity. It works, and it works
well. 

And now replication is going away ? A perfectly well-working feature is
being removed ?? It's not as if it's a problematic one, nor would it
interfere with anything if it remained ... 

I only see a couple of routes forward, at least for me. 

* Stay on the last dovecot release that supports replication. 
* Switch away from dovecot and cobble something else together. 
* Move to gmail 

The removal of replication feels very arbitrary.


Re: antispam plugin, pipe backend, how to make it work?

2016-04-13 Thread deano-dovecot

Johannes -

I'm running 2.2.9 under Ubuntu 14.04.  I gave up on using the pipe 
backend, just could not get the damn thing to work.  I wound up using 
spool2dir and incron, which works perfectly.  The issue was that 
sa-learn would cause a pthread_cancel error with libgcc_s.so.1


Below is an excerpt from my install script :


  # Enable antispam - Damn, not working right with pipe backend
  # now using spool2dir and incron
  if [ -e /etc/spamassassin ]; then
    sed -i "
      s/^  #mail_plugins.*/  mail_plugins = \$mail_plugins antispam ${DOVENOTIFY}/
      s/^  #mail_max_userip.*/mail_max_userip_connections = 20/
    " /etc/dovecot/conf.d/20-imap.conf
cat > /etc/dovecot/conf.d/99-Installerbox-antispam.conf << EOF
##
## antispam configuration
##
plugin {
  antispam_debug_target = syslog
  # antispam_verbose_debug = 1
  antispam_trash_pattern = Trash;Deleted *
  antispam_spam = Junk;Spam

  antispam_backend = spool2dir
  antispam_spool2dir_spam = /var/cache/dovecot-antispam/spam/%%020lu-%u-%%05luS
  antispam_spool2dir_notspam = /var/cache/dovecot-antispam/ham/%%020lu-%u-%%05luH


  # pipe backend not working with sa-learn - causes pthread_cancel error with libgcc_s.so.1

  # antispam_backend = pipe
  # antispam_pipe_program = /usr/local/bin/sa-learn-pipe.sh
  # antispam_pipe_program_args = --for;%u
  # antispam_pipe_program_spam_arg = --spam
  # antispam_pipe_program_notspam_arg = --ham
  # antispam_pipe_tmpdir = /tmp
}
EOF

# incron watches the spam/ham spool dirs, calls sa-learn-pipe.sh to handle

echo "root" >> /etc/incron.allow
mkdir -p /var/cache/dovecot-antispam/spam /var/cache/dovecot-antispam/ham

chown -R ${VMAIL_ID}.dovecot /var/cache/dovecot-antispam/
cat > /var/spool/incron/root << "EOF"
/var/cache/dovecot-antispam/spam IN_CLOSE_WRITE /usr/local/bin/sa-learn-pipe.sh --spam /var/cache/dovecot-antispam/spam/$#
/var/cache/dovecot-antispam/ham IN_CLOSE_WRITE /usr/local/bin/sa-learn-pipe.sh --ham /var/cache/dovecot-antispam/ham/$#

EOF
chgrp incron /var/spool/incron/root
chmod 600 /var/spool/incron/root

# inotify needs a little more room to breathe - default of 128 too low

cat > /etc/sysctl.d/60-inotify.conf << EOF
# inotify changes for Dovecot
# http://dovecot.org/list/dovecot/2011-March/058300.html

# Defaults are
# fs.inotify.max_queued_events = 16384
# fs.inotify.max_user_instances = 128
# fs.inotify.max_user_watches = 8192

fs.inotify.max_user_instances = 2048
EOF

# spamassassin learning script
cat > /usr/local/bin/sa-learn-pipe.sh << "EOFSPAM"
#!/bin/bash
# Pipe script to learn/unlearn single email file
# Set to read from file or from stdin
# From stdin to accommodate dovecot-antispam pipe backend (not currently working)


# echo /usr/bin/sa-learn $* /tmp/sendmail-msg-$$.txt
FILE=`echo $* | sed "s/^.* //"`
echo "$$-start ($*)" >> /var/log/sa-learn-pipe.log
echo -n "$$ " >> /var/log/sa-learn-pipe.log
egrep --no-filename "^Subject: " /tmp/sendmail-msg-$$.txt ${FILE} | head -1 >> /var/log/sa-learn-pipe.log

cat<&0 >> /tmp/sendmail-msg-$$.txt
/usr/bin/sa-learn --progress $* /tmp/sendmail-msg-$$.txt >> /tmp/sa-learn-pipe.$$.log 2>&1

echo $$ sa-learn rc=$? id=$(id) HOME=$HOME >> /var/log/sa-learn-pipe.log

while read line; do
  echo $$-sa-learn "$line" >> /var/log/sa-learn-pipe.log
done < /tmp/sa-learn-pipe.$$.log

rm -f /tmp/sendmail-msg-$$.txt /tmp/sa-learn-pipe.$$.log
rm -f ${FILE}
echo "$$-end" >> /var/log/sa-learn-pipe.log

exit 0
EOFSPAM
chmod 755 /usr/local/bin/sa-learn-pipe.sh
touch /var/log/sa-learn-pipe.log
chown ${VMAIL_ID}.dovecot /var/log/sa-learn-pipe.log
chmod 660 /var/log/sa-learn-pipe.log
cat > /etc/logrotate.d/sa-learn-pipe.log << EOFLOG
/var/log/sa-learn-pipe.log {
daily
missingok
rotate 10
compress
delaycompress
notifempty
create 660 ${VMAIL_ID} dovecot
}
EOFLOG
  fi # spamassassin




On 2016-04-12 14:14, Johannes Rohr wrote:

Hi, my setup is a dovecot 2.0.19 IMAP server on Ubuntu Precise with
the antispam plugin at version 2.0+20120225-2 and spamassassin at
version 3.2.2.

I have been trying and failing to get the pipe backend of the antispam
plugin to work. Spamassassin by itself works; a manual call of sa-learn
works fine. Bayes data is stored in a mysql DB.

I have the following configuration in 
/etc/dovecot/conf.d/90-plugin.conf


plugin {
  #setting_name = value
  sieve = ~/.dovecot.sieve
  sieve_dir = ~/sieve
  antispam_pipe_program_spam_arg = --spam
  antispam_pipe_program_notspam_arg = --ham
  antispam_pipe_program = /usr/local/bin/sa-learn-pipe.sh
  antispam_pipe_program_args = --username=%u # % expansion done by dovecot
  antispam_trash = trash;Trash;Deleted Items;Deleted Messages
  antispam_spam = SPAM;Junk
  antispam_backend = pipe
  antispam_verbose_debug = 1
  antispam_debug_target = syslog
  antispam_pipe_tmpdir = /t

[Dovecot] Replication with virtual users and static userdb possible ?

2014-06-03 Thread deano-dovecot
 

Is it possible to get replication working in a virtual user setup
that uses a static userdb ? My environment is fairly simple and typical
- there's a single system user (vmail) that owns all the home dirs
(/var/mail/domain.com/user). The virtual users
(use...@domain.com:secretpassword) are kept in a single file
(/var/mail/domain.com/PASSWD) that's unique per domain, and referenced
as a static userdb : 

passdb {
  driver = passwd-file
  args = scheme=plain username_format=%u /var/mail/%d/PASSWD
}

userdb {
  driver = static
  args = uid=vmail gid=vmail home=/var/mail/%d/%n
}

I know the wiki http://wiki2.dovecot.org/Replication states that user
listing must be enabled, but that's not available for a static userdb.
The wiki http://wiki2.dovecot.org/UserDatabase/Static also says that it
shouldn't be a problem because it will do a passdb lookup instead
(except for PAM which isn't used here).

Unfortunately, it's not working. I've tested with ssh : 

dsync_remote_cmd = ssh -l vmail %{host} doveadm dsync-server -u%u -l%{lock_timeout} -n%{namespace}
mail_replica = remote:vm...@server2.domain.com 

as well as with straight tcp (SSL for later) 

mail_replica = tcp:server2.domain.com:999 

/var/log/mail.err shows the problems ... 

Jun 3 11:30:53 server1 dovecot: auth: Error: Trying to iterate users, but userdbs don't support it
Jun 3 11:30:53 server1 dovecot: replicator: Error: User listing returned failure
Jun 3 11:30:53 server1 dovecot: replicator: Error: listing users failed, can't replicate existing data

Anyone else have it working ? I'm sure it's
something simple that I've just overlooked. 

Thanks - 

D. 
 


Re: [Dovecot] General questions about TCP replication with dsync

2014-06-04 Thread deano-dovecot
 

How does this affect the other packages and the dependencies ? For
example, my test system is an Ubuntu 14.04/trusty box ... 

$ dpkg -l dove* | fgrep ii
ii dovecot-antispam      2.0+20130822-2build1  Dovecot plugins for training spam filters
ii dovecot-core          1:2.2.9-1ubuntu2.1    secure POP3/IMAP server - core files
ii dovecot-imapd         1:2.2.9-1ubuntu2.1    secure POP3/IMAP server - IMAP daemon
ii dovecot-lmtpd         1:2.2.9-1ubuntu2.1    secure POP3/IMAP server - LMTP server
ii dovecot-managesieved  1:2.2.9-1ubuntu2.1    secure POP3/IMAP server - ManageSieve server
ii dovecot-sieve         1:2.2.9-1ubuntu2.1    secure POP3/IMAP server - Sieve filters support
ii dovecot-solr          1:2.2.9-1ubuntu2.1    secure POP3/IMAP server - Solr support

So rather than the stock 

sudo apt-get install dovecot-imapd dovecot-sieve dovecot-antispam dovecot-managesieved dovecot-lmtpd dovecot-solr

one would install dovecot from the mamarley PPA ? To integrate cleanly
it would have to "Provides: dovecot-common" and "Replaces:
dovecot-common, mailavenger" like the stock dovecot-core does. While we
can and do use non-stock packages, we really try to stay with stock(ish)
packages as much as possible to ease upgrade administration. I'm dealing
with a slew of custom apache2 installs, all unique, on a bunch of old
CentOS 5.2 boxes right now, and it's a mighty pain. 

D. 

On 2014-06-04 16:41, Robert Schetterer wrote: 

> On 04.06.2014 19:53, Patrick De Zordo wrote:
>
>> Well, not so easy.. we are working on a productive server; this
>> version ships as default for this distro.. I don't even know how to
>> compile my own dovecot version..
>
> see
>
> http://wiki2.dovecot.org/PrebuiltBinaries#Automatically_Built_Packages
>
> or
>
> https://launchpad.net/~mamarley/+archive/updates/+packages
>
> recompile
>
> https://sys4.de/de/blog/2013/06/17/dovecot-patching-mit-debian-und-ubuntu/


Re: [Dovecot] Replication with virtual users and static userdb possible ?

2014-06-05 Thread deano-dovecot
Ugh, stuff got mangled in formatting below. Anyway, I've had no luck
with various permutations, so it's looking like a virtual-user setup
can't make use of replication ?

I guess what I want is for it to activate replication upon ANY
notification of updated emails.

On 2014-06-03 11:54, deano-dove...@areyes.com wrote:


Is it possible to get replication working in a virtual user setup
that uses a static userdb ? My environment is fairly simple and
typical - there's a single system user (vmail) that owns all the home
dirs (/var/mail/domain.com/user). The virtual users
( userid @ domain.com : secretpassword ) are kept in a single file
(/var/mail/domain.com/PASSWD) that's unique per domain, and referenced
as a static userdb :

  passdb {
driver = passwd-file
args = scheme=plain username_format=%u /var/mail/%d/PASSWD
  }

  userdb {
driver = static
args = uid=vmail gid=vmail home=/var/mail/%d/%n
  }

I know the wiki http://wiki2.dovecot.org/Replication states that user
listing must be enabled, but that's not available for a static userdb.
The wiki http://wiki2.dovecot.org/UserDatabase/Static also says that it
shouldn't be a problem because it will do a passdb lookup instead
(except for PAM which isn't used here).

Unfortunately, it's not working. I've tested with ssh :

  dsync_remote_cmd = ssh -l vmail %{host} doveadm dsync-server -u%u -l%{lock_timeout} -n%{namespace}

  mail_replica = remote:vm...@server2.domain.com

as well as with straight tcp (SSL for later)

  mail_replica = tcp:server2.domain.com:999

/var/log/mail.err shows the problems ...

Jun 3 11:30:53 server1 dovecot: auth: Error: Trying to iterate users, but userdbs don't support it
Jun 3 11:30:53 server1 dovecot: replicator: Error: User listing returned failure
Jun 3 11:30:53 server1 dovecot: replicator: Error: listing users failed, can't replicate existing data


Anyone else have it working ? I'm sure it's
something simple that I've just overlooked.


Re: [Dovecot] Replication with virtual users and static userdb possible ?

2014-06-10 Thread deano-dovecot
 

Is there no-one out there using replication with virtual users ? If
so, how did you do it ? 

I just *know* someone is going to point me to a simple page describing
how to do it ... 

On 2014-06-05 09:57, deano-dove...@areyes.com wrote: 

> Ugh, stuff got mangled in formatting below. Anyway, I've had no luck
> with various permutations, so it's looking like a virtual-user setup
> can't make use of replication ?
>
> I guess what I want is for it to activate replication upon ANY
> notification of updated emails.
>
> On 2014-06-03 11:54, deano-dovecot@areyes.com wrote:
>
>> Is it possible to get replication working in a virtual user setup that
>> uses a static userdb ? My environment is fairly simple and typical -
>> there's a single system user (vmail) that owns all the home dirs
>> (/var/mail/domain.com/user). The virtual users
>> ( userid @ domain.com : secretpassword ) are kept in a single file
>> (/var/mail/domain.com/PASSWD) that's unique per domain, and referenced
>> as a static userdb :
>>
>> passdb {
>>   driver = passwd-file
>>   args = scheme=plain username_format=%u /var/mail/%d/PASSWD
>> }
>>
>> userdb {
>>   driver = static
>>   args = uid=vmail gid=vmail home=/var/mail/%d/%n
>> }
>>
>> I know the wiki http://wiki2.dovecot.org/Replication [1] states that
>> user listing must be enabled, but that's not available for a static
>> userdb. The wiki http://wiki2.dovecot.org/UserDatabase/Static [2] also
>> says that it shouldn't be a problem because it will do a passdb lookup
>> instead (except for PAM which isn't used here).
>>
>> Unfortunately, it's not working. I've tested with ssh :
>>
>> dsync_remote_cmd = ssh -l vmail %{host} doveadm dsync-server -u%u
>> -l%{lock_timeout} -n%{namespace}
>> mail_replica = remote:vm...@server2.domain.com [3]
>>
>> as well as with straight tcp (SSL for later)
>>
>> mail_replica = tcp:server2.domain.com:999
>>
>> /var/log/mail.err shows the problems ...
>>
>> Jun 3 11:30:53 server1 dovecot: auth: Error: Trying to iterate users, but userdbs don't support it
>> Jun 3 11:30:53 server1 dovecot: replicator: Error: User listing returned failure
>> Jun 3 11:30:53 server1 dovecot: replicator: Error: listing users failed, can't replicate existing data
>>
>> Anyone else have it working ? I'm sure it's something simple that I've
>> just overlooked.
 

Links:
--
[1] http://wiki2.dovecot.org/Replication
[2] http://wiki2.dovecot.org/UserDatabase/Static
[3] mailto:vm...@server2.domain.com


Re: [Dovecot] Replication with virtual users and static userdb possible ?

2014-06-16 Thread deano-dovecot
 

I'm trying to avoid switching the userdb from a nice simple static
setup to something else to enable replication. Is there anyone using
replication with a virtual user configuration ? How did you do it ?
Actually, anyone doing replication at all - what does your config look
like ? 

Thanks - 

D. 

On 2014-06-03 11:54, deano-dove...@areyes.com wrote: 

> Is it possible to get replication working in a virtual user setup
> that uses a static userdb ? My environment is fairly simple and typical
> - there's a single system user (vmail) that owns all the home dirs
> (/var/mail/domain.com/user). The virtual users
> (use...@domain.com:secretpassword) are kept in a single file
> (/var/mail/domain.com/PASSWD) that's unique per domain, and referenced
> as a static userdb :
>
> passdb {
>   driver = passwd-file
>   args = scheme=plain username_format=%u /var/mail/%d/PASSWD
> }
>
> userdb {
>   driver = static
>   args = uid=vmail gid=vmail home=/var/mail/%d/%n
> }
>
> I know the wiki http://wiki2.dovecot.org/Replication states that user
> listing must be enabled, but that's not available for a static userdb.
> The wiki http://wiki2.dovecot.org/UserDatabase/Static also says that
> it shouldn't be a problem because it will do a passdb lookup instead
> (except for PAM which isn't used here).
>
> Unfortunately, it's not working. I've tested with ssh :
>
> dsync_remote_cmd = ssh -l vmail %{host} doveadm dsync-server -u%u
> -l%{lock_timeout} -n%{namespace}
> mail_replica = remote:vm...@server2.domain.com
>
> as well as with straight tcp (SSL for later)
>
> mail_replica = tcp:server2.domain.com:999
>
> /var/log/mail.err shows the problems ...
>
> Jun 3 11:30:53 server1 dovecot: auth: Error: Trying to iterate users, but userdbs don't support it
> Jun 3 11:30:53 server1 dovecot: replicator: Error: User listing returned failure
> Jun 3 11:30:53 server1 dovecot: replicator: Error: listing users failed, can't replicate existing data
>
> Anyone else have it working ? I'm sure it's something simple that I've
> just overlooked.
 


Managing users and home dirs

2014-06-21 Thread deano-dovecot
 

For those of you using virtual users, and SQL, how are you managing
your users and their home dirs ? That is, what process do you use for
adding/deleting users, creating their home dirs etc ? I suppose it's
easy enough to do manually, inserting rows in the database, creating
dirs, chown/chmod yada yada, but there must be a better way to do it ...
If you're doing dovecot replication then it gets even more cumbersome,
having to duplicate the effort in two places (and make sure it's
correct). 

I have a nice test setup using Percona XtraDB Cluster in
a 3-node cluster which works swimmingly, albeit in VMs only at the
moment. A master DB node and two dovecot nodes. Dovecot replication is
up and running nicely too, and I almost have all the communications
going over ipsec tunnels, so it will be nice and secure. 

I'm thinking
of something like a cronjob with two tasks, the first would periodically
scan the home dirs and compare the users to what's in the database. When
it finds a new userdir (plus a file labeled PASSWD) the script would add
the user to the database, create the Maildir and whatever else, then
delete the PASSWD file. DB replication will push that to the other
nodes. 

The second task is scanning the user database and comparing to
the home dirs - basically opposite of the first cronjob. When it finds a
user in the DB that doesn't have a home dir, it would create it and
whatever else is needed. 

This way, to add a user one would just create
a PASSWD file in /var/mail/domain.com/newusername/PASSWD on either of
the dovecot replication partner systems. The first cronjob task would
discover the newusername dir, create the user in the DB, create the
Maildir, chown/chmod etc. and delete the PASSWD file, so it's ready to
go on that system. DB replication pushes the user table to the other
nodes. The second task on the other dovecot system will discover a new
user in the DB that doesn't have a home dir, and do its thing to create
it all. 

So the whole create-a-new-user process becomes something like
this on either dovecot system : 

mkdir -p /var/mail/domain.com/newusername ; echo "changeme" > /var/mail/domain.com/newusername/PASSWD 

A max of 5 minutes later the
user is added to the database, and the home dir/Maildir/etc/etc is
created on both dovecot systems. 
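
The cronjob itself would be something like this rough sketch -
add_user_to_db and create_maildir are hypothetical helpers standing in
for the actual INSERT and mkdir/chown/chmod steps:

#!/bin/bash
# First task: find new userdirs flagged with a PASSWD file
for pwfile in /var/mail/*/*/PASSWD; do
    [ -e "$pwfile" ] || continue
    dir=$(dirname "$pwfile")                   # /var/mail/domain.com/user
    user=$(basename "$dir")
    domain=$(basename "$(dirname "$dir")")
    add_user_to_db "$user" "$domain" "$(cat "$pwfile")"  # hypothetical: INSERT row
    create_maildir "$dir"                                # hypothetical: mkdir/chown/chmod
    rm -f "$pwfile"                            # done - ready to go
done

The second task would be the mirror image: walk the user table and
create any home dir/Maildir that's missing.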

D. 
 


Re: Managing users and home dirs

2014-06-25 Thread deano-dovecot
 

Just a quick update on the below ... The 3-node setup is working
cleanly now. One master/backup DB node, two dovecot nodes, using
Percona XtraDB Cluster 5.5. All replication (percona and dovecot
dsync) is via ipsec tunnels. 

Adding a user or new domain is a matter of creating a
/var/mail/newusers.txt file, containing the list of users to be added.

john,doe.com,password,John Doe user 

A cronjob on both dovecot nodes scans the user database and the
/var/mail dirs. For any new users in the file it adds them to the DB
and creates their userdir/Maildir. Any new user in the DB without a
userdir, it creates their userdir/Maildir. So it's a max of 5 minutes
for a new user to be available on node1, and another 5 minutes to be
replicated to node2. Once the users are created, the newusers.txt file
is deleted. 

It would be nice to use a database trigger to create the
userdir/Maildir immediately rather than the cronjob, but I haven't got
that figured out yet. I found the lib_mysqludf_sys UDF library, but it
doesn't seem to be working. Some issue with the db replication I think. 

Any ideas for creating a directory from a mysql trigger ? 
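
What I've been poking at is roughly this (a sketch only - it assumes
lib_mysqludf_sys's sys_exec() is installed, the command runs as the
mysqld user, and the username/domain values are trusted):

mysql emailers << 'EOF'
-- hypothetical trigger: create the userdir as soon as a row appears
CREATE TRIGGER users_mkdir AFTER INSERT ON users
FOR EACH ROW
  SET @rc = sys_exec(CONCAT('mkdir -p /var/mail/',
                            NEW.domain, '/', NEW.username, '/Maildir'));
EOF

One suspicion: with row-based/galera replication, triggers generally
don't fire for writesets applied on the other nodes, which would explain
the issue above and mean the cronjob is still needed per node.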

On 2014-06-21 11:12, deano-dove...@areyes.com wrote: 

> For those of you using virtual users, and SQL, how are you managing
> your users and their home dirs ? That is, what process do you use for
> adding/deleting users, creating their home dirs etc ? I suppose it's
> easy enough to do manually, inserting rows in the database, creating
> dirs, chown/chmod yada yada, but there must be a better way to do it ...
> If you're doing dovecot replication then it gets even more cumbersome,
> having to duplicate the effort in two places (and make sure it's
> correct).
>
> I have a nice test setup using Percona XtraDB Cluster in a 3-node
> cluster which works swimmingly, albeit in VMs only at the moment. A
> master DB node and two dovecot nodes. Dovecot replication is up and
> running nicely too, and I almost have all the communications going
> over ipsec tunnels, so it will be nice and secure.
>
> D.
 


Any issues with dsync between 2.1.7 and 2.2.9 ?

2014-06-28 Thread deano-dovecot
 

My current production system is running dovecot 2.1.7-7ubuntu1 on
Ubuntu 13.04, and I'm building a new setup based on Ubuntu 14.04 with
dovecot 2.2.9-1ubuntu2.1 - current standard in the repos. I'd like to
use replication to get the mailstore from the old system to the new. Are
there any caveats or gotchas to be aware of ? 

Is it possible to make
it a one-way replication ? So problems or glitches on the new setup
don't propagate to the old production ? 
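
One thought: if one-way is the hard requirement, periodic runs of
doveadm backup (which makes the destination mirror the source without
pushing anything back) might be safer than the replicator - a sketch
with a placeholder host:

# one-way: make the new host match this one
doveadm backup -u 'someuser@example.com' remote:vmail@newbox.example.com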

Thanks - 

D. 
 


Re: Transition from one server to another.

2014-07-14 Thread deano-dovecot
I've done this a couple of times, and there are a couple of things you 
can do to help make it go smoothly.  I did it not too long ago to move 
to a new single server, and am getting ready to do it again to a fully 
redundant setup (3 nodes for percona clustering, 2 of the nodes as 
dovecot with sync).


* Set your current MX records TTL to the lowest you can, usually 30 
minutes.  This will make it quicker to do the final transition when you 
do.
* Create an A record in DNS for the new server, but no MX records for it 
just yet.

* If you're using SPF, add your new server IP address to the TXT record.

Get your new server up and ready and *tested*.  Verify everything works. 
 Web access to roundcube/squirrelmail/whatever, imaps access from 
thunderbird/outlook/whatever and so on, sending mail from the new 
server.  The works.


Get bi-directional replication going between the two servers.  This 
doesn't have to be dovecot dsync, you can use offlineimap too.  
Whatever, get sync going.  The aim is to be sure that any changes on one 
server are synced to the other one.  Test it - use swaks to create mail 
on the new server, make sure it shows up cleanly on the old (that people 
are still using).
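
A quick smoke test is something like this (addresses made up):

swaks --to someuser@example.com --server newmail.example.com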


* NOW, add an MX record of higher (lower number) priority pointing to 
the new server.  Remove the MX record for the old server.


Point clients to the new server.  This can be done gradually, as mail 
will be replicated between the two systems.


All new mail should now be going to the new server, while some systems 
with cached DNS will still use the old one.  Any mail they deliver to 
the old server will replicate to the new one.  The cached MX records 
should expire fairly soon (remember the 30 minute TTL you set) but 
sometimes they don't for a while.


Wait a few hours.

* Disable inbound smtp on the old server.

Now any connections coming into the old will fail, and the source 
systems should spool the mail for retry later, delivering to the new 
server.  Clients can still use the old server, and even send mail from 
it.


Wait a day or so and watch the old server.  Pretty quickly you should 
see no more connections coming into it.  "tshark port 25 or port 587" is 
your friend here.


Finally decommission the old server once you have all clients moved over 
to the new server.


If anyone is interested, that redundant setup is part of an automated 
installer I've been working on.  You can set up a 3-node environment on 
cheap VPS', two 2gig ram nodes for the 
dovecot/exim4/roundcube/spamassassin/clamav and a 512meg ram one for the 
3rd Percona cluster DB node (to make a quorum).  Everything replicates 
via encrypted vpn, so you can point to either main node for roundcube or 
imaps.  Works fine in Amazon AWS too, though that's a little pricier 
than the cheap VPS providers.




On 2014-07-14 16:22, Anders Wegge Keller wrote:

A friend of mine and I are running a dedicated server that, among
other things, hosts mail for ourselves and friends and families. All in
all about 15 different domains with 35-40 users. The machine in
question is old, so we are doing a slow transition from the old server
to the new one. So far, we've managed to move web hosts
seamlessly. Due to the technical capabilities at some of the user
base, it would be nice to get to a setup, where we can move individual
users imaps from the old server to the new one, as we get the time to
visit them.

 I have an idea how such a transition could go:

 1. Upgrade the old dovecot 1.2.15 to 2.1.whateveritis from debian
squeeze backports.

 2. Set dsync up to replicate mails from the old server to the new
server. I know that 2.2 is recommended, but with a limited number
of users, I'm willing to take a performance hit.

 3. Migrate my parents &c to use the new server.

 4. When all users have been moved on to using the new server, update
MX records for the domain to point at the new server.

 5. When all MX records are updated, declare success.


 Is this feasible, and what would the risks be? For instance, during
step 4, mails are bound to arrive at both the old and new server for
some time. Will this cause problems?

 Is there a simpler solution to the problem?


Re: To dovecot-ee or not to dovecot-ee

2014-10-13 Thread deano-dovecot

On 2014-10-13 08:30, Jens Dueholm Christensen wrote:

Apart from the need to register an account in order to "purchase" an
-ee license, are there any caveats to switching to the -ee version
compared to compiling and running the regular releases?

I've got no problems with downloading, building and installing the
normal releases and I have no need for object storage, so will the -ee
version give me anything else but access to a YUM repo and RPM
packages?


If you're planning on an Ubuntu platform, right now it only supports 
12.04.  14.04 is in the works as I recall, but no idea when.


Personally I run 2.2.9-1ubuntu2.1 on Ubuntu 14.04 from stock repo.

--
Dean


Re: Invoking the spam checker on the sieve script

2014-10-24 Thread deano-dovecot

On 2014-10-23 12:19, Alejandro Exojo wrote:
That most of my mail comes from 100% assured not spam sources: mailing 
lists
that are already filtered or rss2email (the second probably can be 
skipped
easily because it comes locally). I only have a small VPS, so I'm 
trying to
save some resources if possible. Spamassassin consumes quite a lot, 
AFAIK.


What kind of VPS are you using ?  I'm in a similar boat to you, running 
my own domain(s) and email, and have built the mail system on a set of 3 
VPS', two 6G ram that cost $7/mo and one 1G ram that's $3.50/mo.  The 
two larger ones run exim4, spamassassin, clamav, nginx, roundcube, 
dovecot, munin (stats), solr (search), zpush, tinyrss, percona (mysql).


It all works swimmingly well.  The main setup will run in a 2G ram VPS, 
albeit with some swapping.  If you're on an SSD-backed VPS, it works OK 
- that was my old setup with Digital Ocean.


ClamAV is the memory hog, spamassassin really isn't bad, so you might 
give it a shot ...


24576 www-data     php /usr/share/tt-rss/www/u     0  10732  12943  17572
 3310 unbound      /usr/sbin/unbound               0  17644  17779  19084
 5298 debian-spamd spamd chil                      0   1860  34989 101596
 5297 debian-spamd spamd chil                      0   2156  35137 101596
 5292 root         /usr/sbin/spamd --max-child     0   3148  36869 104944
 3474 tomcat6      /usr/lib/jvm/default-java/b     0 122240 122621 124692
 5480 clamav       /usr/sbin/clamd                 0 416496 416726 417804
20010 mysql        /usr/sbin/mysqld --basedir=     0 684200 684523 686692


All the mysql stuff is a 3-node replication cluster, the two main 
systems and a 3rd (small one) just running percona.  Dovecot is also 
replicating between the two main systems.  This way ALL the data is 
replicated between them, and I can hit either main system for all 
functionality.  Replication is over tinc encrypted sessions.


--
Dean Carpenter
deano is at areyes dot com
203 six oh four 6644