Hey TJ!

> Steve, I'd like to ask you a question from a technical standpoint. I am
> not being sarcastic or condescending in any way whatsoever. If you, in
> your experience, think that perhaps systemd is an attempt to clone a
> Windows desktop, the server side of Linux be damned?
There are quite a few things in the systemd ecosystem that are very specifically targeted at servers and not so much at desktops. Some are even useful to an extent, but badly implemented. Some others are so bad for servers that I am positively shocked serious admins would consider deploying them.

Of course, with systemd you cannot really take a pick-and-mix approach, as most components are interdependent. I have run a few servers with the whole systemd collective by way of trial, and was depressingly unsurprised at the outcome: unstable, difficult to manage, almost impossible to troubleshoot when things go wrong.

The nice bits of systemd are also available in different packages, and following the tried and tested (but apparently out of fashion) tradition of "do one thing, and do it well", these packages tend to be superior in implementation. Socket activation, file/inotify activation, unit files, introspection, process supervision - these are all cool and useful features. If I need any of these on any of my servers, I can choose from multiple alternatives that will do the job well.

The not-so-nice bits of systemd (too many to list), as well as specific systemd idiosyncrasies, positively preclude me (or any other sane operator) from running it for any kind of serious production use. systemd has positioned itself too centrally in the overall architecture to allow an incremental approach to implementation and deployment, and it has way too many untested (or proven to be disastrous) edge cases where it will go off and make your server go down in flames.

The most touted benefit of systemd - one-second boot times - is pretty much irrelevant for servers. Firstly, you don't really want to reboot production servers. Secondly, if for whatever reason you do need to reboot a machine, there is a very high likelihood that you are waiting many minutes for the hardware to boot - having to wait a few extra seconds for the OS really doesn't matter much (most of my servers are VMs and will run a full reboot cycle in about 30 seconds - works for me...). Thirdly, the majority of workloads are moving to multi-host hypervisor environments (VMware, OpenStack/Xen, etc.), and the correct way to deal with host/hardware reboots is to "drain" the host: live-migrate (vMotion, etc.) your workloads to other hosts, then take the now-idle machine offline for whatever you need to do on it. Same goes for containers: the process is a bit different, but the idea is the same.

It isn't a case of "servers be damned" - they really do want the servers inside the systemd Borg Cube. It's more a case of serious server admins thinking "hell no, not on my systems".

Martijn