Speaking as a relatively obsolete dinosaur, I would suggest that if you are going to discuss specific deployment practices, you start with the most fundamental ones: SSH, the unix shell and so on.
We have had issues over the years with people coming in and introducing sexy new deployment tools, but ultimately they all just run unix commands. Anyone managing a web application in the non-Microsoft world is ultimately depending on this.

Some key skills (assuming a Linux/Mac/Unix-ish environment):

- know about SSH keys and logging into remote machines
- know the basics of at least one command line editor (e.g. vi)
- basic shell knowledge: environment variables, testing for the existence of files and directories, etc.
- how to interact with your database from the command line, if you use one (including dump and restore)
- how your web server works: starting, stopping, configuration files, where the log files live and how it talks to Python

Fabric may be useful if you want to control half a dozen machines from your desktop, and it might add a lot of value if you want to control a hundred of them. But to update one server, you deploy by logging into it and running commands or short scripts. For example, we have a 'demo site' we rebuild pretty often that uses Django, MySQL, Celery and a few other things. It runs on plain vanilla Ubuntu machines we build ourselves. The sequence is...

1. Log in via SSH
2. cd to the correct directory
3. Activate the virtual environment
4. Stop any Celery worker processes
5. Stop web server processes (in our setup, we leave Apache itself running)
6. Pull the latest code from Mercurial - both the app and the 3-4 libraries it depends on
7. Run a management command to rebuild the database
8. Run a smallish in-place test suite
9. Restart the Celery workers
10. Restart the web server
11. Log out

Everything after the login and cd can be handled by a shell script on the server's path, so you can just run a single command called something like ./update_server (a rough sketch of such a script is at the end of this mail).

More realistically, we tend to end up with a management shell script called 'server' that takes a bunch of commands/arguments like 'stop / start / restart / update-code-in-staging / copy-live-data-to-staging / run-health-checks / swap-live-and-staging' and so on; that pattern is sketched at the end too. SSH can execute remote commands like this just fine with the right arguments, if actually logging in is too tedious.

Production sites are complex and all different. You might want to do instantaneous swaps from live to staging (and be able to back out fast if something goes wrong); to switch DNS so the world is looking at another server while you update one; you might have large databases to copy or migrate that need significant time; it may or may not be acceptable to lose sessions and have downtime; and so on.

It takes less time to learn the fundamentals than you will spend debugging why your fancy new deployment tool stopped working after some Python dependency upgrade somewhere. And your new hires are less likely to disagree with you if you stick with the lowest common denominator.

If you already know the fundamentals and make an informed decision to use a popular deployment tool, that's fine. Just take the time to write down in your docs why you use it, so people will know if it's no longer appropriate one day.

---

So, my 2p worth is that in the book you might want to show a Linux/Apache setup, discuss what kind of scripts ought to exist on the box for managing it, discuss the concerns you MIGHT need to address during deployment, and tell people to automate it. Then point out that there are many popular higher-level tools.

- Andy
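
For concreteness, here is roughly what that ./update_server script could look like for the demo site described above. This is a minimal sketch rather than our actual script: the paths (/srv/demo), the celeryd service name, and the rebuild_demo_db management command are all made up for illustration.

    #!/bin/bash
    # update_server - sketch of the demo-site update sequence (steps 2-10 above).
    # Paths, service names and the management command are placeholders.
    set -euo pipefail

    cd /srv/demo/app                     # 2. the project checkout (hypothetical path)
    source ../venv/bin/activate          # 3. activate the virtualenv

    sudo service celeryd stop            # 4. stop Celery workers (however you run them)
                                         # 5. Apache itself stays running in our setup

    hg pull -u                           # 6. latest app code...
    for lib in ../libs/*; do             #    ...and the libraries it depends on
        (cd "$lib" && hg pull -u)
    done

    python manage.py rebuild_demo_db     # 7. made-up management command to rebuild the DB
    python manage.py test demo           # 8. smallish test suite; set -e aborts on failure

    sudo service celeryd start           # 9. restart Celery workers
    sudo apache2ctl graceful             # 10. reload the web server processes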
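And a sketch of the 'server'-style management script that takes subcommands. The command names are the ones mentioned above; the bodies, paths and URLs are placeholders to replace with whatever your site actually needs.

    #!/bin/bash
    # server - one management script, many subcommands: ./server stop, ./server start, ...
    set -euo pipefail

    case "${1:-}" in
        start)
            sudo service celeryd start
            sudo apache2ctl graceful ;;
        stop)
            sudo service celeryd stop ;;
        restart)
            "$0" stop
            "$0" start ;;
        update-code-in-staging)
            (cd /srv/demo-staging/app && hg pull -u) ;;         # hypothetical staging checkout
        run-health-checks)
            curl -fsS http://localhost/health/ > /dev/null ;;    # hypothetical health-check URL
        *)
            echo "usage: $0 {start|stop|restart|update-code-in-staging|run-health-checks}" >&2
            exit 1 ;;
    esac

If logging in first is too tedious, ssh will happily run it remotely, e.g. 'ssh deploy@demo-server ./server run-health-checks' (user and host names made up), which is all most higher-level deployment tools are doing under the hood anyway.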