On 9/6/2012 2:19 PM, Ed W wrote:
> On 06/09/2012 18:56, Ben Johnson wrote:
>> On 9/6/2012 6:10 AM, Charles Marcus wrote:
>>> On 2012-09-05 6:20 PM, Ben Johnson <b...@indietorrent.org> wrote:
>>>
>>>> My configuration is Dovecot (1.2.9) + Sieve + SpamAssassin on
>>>> Ubuntu 10.04.
>>>
>>> 1.2.9 is really old... you really need to upgrade to a
>>> recent/stable version.
>>
>> Thanks, Charles. I do see your point. One of the challenges we face
>> in this regard is that we're using a Long-Term-Support version of
>> Ubuntu (10.04) and 1.2.9 is the latest package in the OS's
>> repository.
>>
>> That said, we could upgrade manually, but this is a production
>> server on which downtime must be minimized, and we all know how
>> unexpected issues arise during installation (even when the
>> procedure is tested in a closely equivalent development
>> environment).
>
> I personally use (lightweight) virtualisation on any new machine; I
> really don't see any reason why NOT to. I would typically also set
> up my mounts such that the operating system is separate from "the
> data". This makes it easy to upgrade the OS/services without
> touching the data (test before/after on the same data, for example).
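If I'm reading the "OS separate from the data" suggestion correctly,
that would mean keeping the mail store on its own filesystem, along
the lines of the fstab layout below (the device names and the
/var/vmail path are my own placeholders, not something Ed specified):

    # /etc/fstab -- operating system on one volume, mail data on another
    /dev/vg0/root    /            ext4    defaults            0  1
    /dev/vg0/vmail   /var/vmail   ext4    defaults,noatime    0  2

That way, the root volume could be rebuilt or upgraded without the
mail store ever being touched.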
Thanks for your valuable insights, Ed. That seems like a worthwhile
approach.

> So in my situation I would boot a fairly small (gentoo in my case)
> virtual environment that runs only dovecot + postfix, and it mounts
> the mail spools separately. I say "boot", but because I'm using
> linux-vservers, it's really a fancy chroot, and so the instance will
> start in 2-3 seconds (restarts are similarly near instant). I would
> upgrade by cloning this installation, upgrading it, testing it to
> bits, and then, to make it live, basically you swap this "machine"
> for the live machine. There are various ways it could be made near
> seamless, but in my situation I can bear a couple of seconds whilst
> I literally restart the "machine".
>
> Similarly, I segregate all my services into a dozen or so "virtual
> machines": DNS has its own "machine", as do logging and databases,
> and almost every webservice gets its own virtual environment, etc.
> You could use a full-blown vmware/kvm/etc. if that floats your boat
> better, but the point remains: it's so trivial to install, makes
> upgrades so trivial, and so massively decreases your downtime risk
> that it's very hard to find a reason NOT to do it...

While I'm with you here, and I understand the theory (and, to some
extent, the practice), doesn't all of this require a true, physical
machine? We can't justify the expense associated with a physical
machine in a hosted environment, so we're left with so-called VPSs.
My understanding is that OpenVZ cannot be installed on a VPS (for
seemingly obvious reasons -- namely, that the VPS is itself an OpenVZ
container).

> I haven't tried too hard to keep my instances tiny, so each is
> probably around 400-600 MB in my case. However, if it were
> important, this could easily be reduced to 10-100s of MB each using
> various hardlink features. As you can see, it's easy to snapshot a
> whole machine to manage upgrades/backups, etc.
>
> This is more about infrastructure, but I honestly can't get over how
> many people are sitting on their hands, shackled by "I'm on Debian
> xxx and I can't install any software newer than 5 years old"... It's
> so easy to escape from that trap...!!

Perhaps easy, but not necessarily inexpensive. ;-)

Thanks again for sharing the details of your strategy; I'll bear all
of this in mind moving forward. (I've jotted down my understanding of
the clone-and-swap procedure in a P.S. below.)

> Good luck
>
> Ed W

-Ben
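P.S. Re the clone-and-swap workflow you describe: mostly as a note to
myself, here is roughly how I picture it with the util-vserver tools.
The guest names ("mail", "mail-new") are my own, and I've glossed over
the clone's configuration (context id, hostname, IP), so please treat
this as a sketch rather than a recipe:

    # Clone the existing guest's root filesystem and configuration
    # (presumably the hardlink features Ed mentions would make this
    # copy far cheaper)
    cp -a /vservers/mail      /vservers/mail-new
    cp -a /etc/vservers/mail  /etc/vservers/mail-new
    # ...adjust the clone's context id / hostname / IP here...

    # Upgrade and test the clone in isolation, e.g. on a spare IP and
    # against a copy of the mail spool
    vserver mail-new start
    vserver mail-new enter    # run the dovecot/postfix upgrade and tests
    vserver mail-new stop

    # When satisfied, re-point the clone at the real spool/IP and
    # swap; a couple of seconds of downtime while the guests restart
    vserver mail stop
    vserver mail-new start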