> On 24 Dec 2022, at 09:35, David Bürgin <dbuer...@gluet.ch> wrote:
>
> raf:
>> On Fri, Dec 23, 2022 at 06:20:08PM +0100, Gerben Wierda
>> <gerben.wie...@rna.nl> wrote:
>>> What is the best way to do this? Or is it too troublesome and should
>>> I just use postfix outside of docker, installing it with apt? I would
>>> rather like to have a single (docker) deployment model which would
>>> make it easier later to migrate once more.
>>
>> It's probably heretical, but I don't think Docker is
>> well-suited to Postfix. You would need to configure
>> Docker to map many UNIX domain sockets to allow
>> Postfix's own processes to communicate with each other
>> and with any milters and policy services. Docker seems
>> to be primarily aimed at things that communicate only
>> via TCP. But take that with a grain of salt. I am
>> barely a Docker novice. I don't doubt that Postfix
>> could be packaged up with Docker, and that would make
>> migration easier, but so would Ansible. I prefer apt
>> and automated security upgrades to immutable
>> infrastructure. In general, that's silly, but Docker
>> (and immutable infrastructure) makes more sense when you
>> need many equivalent transient VMs, not a single,
>> stable MX host. But of course, that's just my opinion.
>
> +1 for the Ansible over Docker suggestion.
>
> Server migration was always a worry to me, given the landscape where
> hosters can become unreliable (being bought off, no longer allowing
> email hosting, etc.). Because of this I have a playbook for deployment
> and a documented migration plan. This allows me to migrate at any time.
> I keep it up-to-date and even practice migration every few years. What
> advantages does Docker provide over this?
I agree that the sockets would be troublesome from a setup/maintenance
point of view and that other approaches would probably be better. I guess
this is becoming a bit off-topic, as there are many decent
planning/migration setups. The advantages of Docker would be (as far as I
can see):

- As much of the mail server setup as possible runs as (semi-)sandboxed
  components (e.g. apache-solr8, redis, dcc, rspamd, dovecot). I do run my
  containers as non-root (otherwise a Docker setup would be more
  vulnerable, not less).

- Independent dependencies, as far as shared libraries and such are
  concerned. Deploy on the host OS, update some shared library as a result
  of whatever maintenance job, and your service may break. In Docker, each
  container has its own set. The disadvantage, of course, is that you may
  have outdated stuff running inside your containers.

- If I can keep all of my services inside Docker, I have only one area of
  management (a directory with all the service descriptions) to keep
  maintained (e.g. in git), with no need to add more services and
  complexity with Ansible, Puppet, etc. Easy to back up as well.

But frankly, you and others are right. Given the need to (safely) connect
via Unix sockets, the idea of being able to easily move to (e.g.) a
cloud-based container setup later is a no-go anyway. Hmm.
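To make the "one directory of service descriptions" idea concrete: a minimal docker-compose sketch, where the image tags, config paths, and the non-root UID/GID values in `user:` are illustrative assumptions, not a tested configuration:

```yaml
# docker-compose.yml -- hypothetical sketch of the setup described above.
# Image versions and UID/GID numbers are assumptions for illustration.
services:
  rspamd:
    image: rspamd/rspamd:latest
    user: "2000:2000"           # run as non-root, as discussed above
    volumes:
      - ./rspamd:/etc/rspamd:ro # per-service config lives next to this file
  redis:
    image: redis:7
    user: "2001:2001"
  dovecot:
    image: dovecot/dovecot:latest
    user: "2002:2002"
    volumes:
      - ./dovecot:/etc/dovecot:ro
```

Keeping this file plus the per-service config directories in a single git repository is the "one area of management" mentioned above; a plain tarball of the directory (plus the data volumes) covers backup.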
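On the socket point specifically: letting a host-side Postfix talk to a containerized milter means bind-mounting a directory that both sides can reach, and keeping ownership/permissions consistent across the host/container UID boundary. A hedged sketch, where the socket path and mount point are assumptions:

```yaml
# Hypothetical fragment: expose rspamd's milter socket to a chrooted
# host Postfix. With Postfix chrooted in /var/spool/postfix, a setting
# like "smtpd_milters = unix:/var/run/rspamd/milter.sock" resolves
# relative to that chroot, hence the host-side path below.
services:
  rspamd:
    image: rspamd/rspamd:latest
    volumes:
      - /var/spool/postfix/var/run/rspamd:/var/run/rspamd
```

This is exactly the kind of per-socket plumbing the thread calls troublesome: every Unix-socket consumer needs its own mount, and the scheme breaks down once the containers are meant to move to a cloud host away from the MX.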