Upfront disclaimer: I work for Amazon in the AWS division. Anything I
say in this email is entirely my own opinion and may or may not
reflect the views of my employer.
Prior to joining AWS I'd used Amazon for some website hosting, but
nothing particularly fancy. Other than that, my only experience with
cloud services has been a Linode VPS and a Digital Ocean instance I
briefly spun up for some testing.
I'll try to keep things as generic as possible rather than just tout a
long list of cloud services that Amazon provides :) As ever when buying
into a cloud service, try to work out how easy it will be to get *out*
of it too. There are some terrific advantages to using specialist
services like Amazon's RDS or Route53, but remember that should you wish
to leave the provider you'll potentially have to re-engineer parts of
your solution, so weigh up the pros and cons :)
Answers follow, inline, but you're going to find a number of these
involve configuration management. If you don't already use something
like Chef or Puppet, I would *strongly* encourage you to look at it
before bringing up cloud infrastructure. It really will make your life
easier. I actually used cloud as a way to get config management into
my last job, and of course once it was in place it started getting used
for more and more of our physical infrastructure too:
On 6/3/2014 7:16 AM, Yves Dorfsman wrote:
> With clouds (private and public) where you spin up new VMs or
> containers for every deploy, how do you guys deal with:
>
> Logging in:
>  - Can you ssh to all your servers/containers? Or just check
>    centralised logs?
>  - If you can't ssh to them, how do you solve hard problems, problems
>    where you'd traditionally use netstat, iostat, strace etc...?
Most cloud providers allow you to set up a root SSH key that gives you
direct access to the instance when it boots clean. From there, how you
handle authentication is up to you, same as if it were a physical box.
At $job-1 I was using cloud instances inside an Amazon VPC (private
network), so I was replicating our LDAP tree off to one of the servers
there and using that for auth; an added bonus was having a nice
hot-spare LDAP server. If it's not in a VPC I'm not sure that would be
the preferable approach; you'd probably be better off with local users
on the box, something you can handle fairly easily through config
management.
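For what it's worth, here's a minimal sketch of what the "root SSH key
at boot" part looks like on EC2 using the boto3 library; the AMI ID,
key pair name and instance type are all placeholder assumptions:

    import boto3

    ec2 = boto3.client("ec2")

    # Launch an instance with a named key pair; the matching private
    # key on your workstation then gives you SSH access at first boot.
    ec2.run_instances(
        ImageId="ami-12345678",   # placeholder AMI
        InstanceType="t2.micro",  # placeholder size
        KeyName="ops-key",        # key pair previously registered with EC2
        MinCount=1,
        MaxCount=1,
    )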
That said, in the 'servers are cattle, not pets' world, you should be
avoiding logging in to servers at all, and I'd say that probably
applies to physical servers too for that matter. The ideal to strive
for is only ever logging in to a box because something has gone badly
wrong; logging in to run iostat, netstat, strace and your usual
debugging tools is fine.
Centralised logging is always a good thing, IMO, from a security
perspective as well as an operational one.
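As a hedged illustration, shipping application logs to a central
syslog host from Python only takes a few lines; the hostname here is a
placeholder for whatever collector you run:

    import logging
    import logging.handlers

    log = logging.getLogger("myapp")
    log.setLevel(logging.INFO)

    # Forward log records over UDP to a central syslog server
    # ("loghost.example.com" is a placeholder for your collector).
    log.addHandler(logging.handlers.SysLogHandler(
        address=("loghost.example.com", 514)))

    log.info("deploy finished on web1")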
> DNS:
>  - Do your servers/containers register themselves on a DNS server?
>  - What about containers sharing their server's IP? How do you access
>    them individually?
The general pattern seems to be to use a DNS provider like
dnsimple.com, Amazon's Route53 or Dyn.com. Among the advantages: they
have global, highly responsive and reliable DNS infrastructures to rely
on, combined with APIs that make it really easy to programmatically add
and remove servers. Alternatively you could just build zone files from
your config management software, or use a DNS server like PowerDNS that
allows you to have a database backend you can interact with directly.
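To give a feel for the API-driven approach, here's a minimal sketch of
registering a new server in Route53 with boto3; the zone ID, hostname
and address are placeholder assumptions:

    import boto3

    r53 = boto3.client("route53")

    # UPSERT creates the record if it's missing and updates it
    # otherwise, so the same call works for first boot and redeploys.
    r53.change_resource_record_sets(
        HostedZoneId="Z1EXAMPLE",  # placeholder hosted zone
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "web1.example.com.",
                "Type": "A",
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }]},
    )

The same ChangeBatch with "DELETE" as the action removes the record
again when you tear the instance down.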
Note that the cloud provider will generally allocate some kind of DNS
name and/or IP automatically that allows you to log in to the host.
For most purposes you'd probably be fine relying on that IP address
rather than registering the box with a DNS service. Remember that
everything you do that relies on a DNS name slows things down, even if
it's cached locally.
Not quite sure what you mean by containers here; if you're talking
about something like Docker, I can't offer any specific comment as I've
never used it myself. I would assume you'd handle it exactly the same
as if it were physical infrastructure.
> Authentication:
>  - In this new world, do you create one userid per staff, or is
>    everybody using the same one?
>  - If the latter, how do you deal with finding out who did what? What
>    about audit/reporting requirements?
>  - If the former, specifically in public clouds, what auth mechanism
>    do you use? Do you set up your own NIS/samba/AD server in the cloud?
One per staff member, same as you would with physical, for exactly the
same reasons: security and auditability. As mentioned earlier, either
local user accounts or LDAP/NIS/AD is fine, though I'd only set up the
latter in a private address space. All of the main config management
solutions allow you to create and manage local users with consistent
UIDs and GIDs. There is some advantage in the local account approach,
in that logins keep working even when the directory server is
unreachable, which makes your service a little more resilient.
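As a rough sketch of what that looks like in Puppet (the username, UID
and GID are placeholders; Chef and the others have equivalents):

    user { 'jbloggs':
      ensure     => present,
      # Keep UIDs/GIDs consistent across every box you manage.
      uid        => '1501',
      gid        => '1501',
      home       => '/home/jbloggs',
      managehome => true,
      shell      => '/bin/bash',
    }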
I would encourage you to use SSH keys where possible, and to have
different SSH keys for your cloud infrastructure than you use for your
physical one. That's really easy to handle by setting up an
~/.ssh/config file:
http://paulgraydon.co.uk/blog/2012/07/30/host-specific-ssh-options/ .
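For instance, a minimal ~/.ssh/config along these lines keeps a
separate key for cloud hosts; the host pattern, username and key path
are placeholder assumptions:

    # ~/.ssh/config
    # Cloud hosts get their own user and a cloud-only key,
    # kept separate from the key used for physical kit.
    Host *.compute.amazonaws.com
        User ec2-user
        IdentityFile ~/.ssh/cloud_key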
Be sure to update your SSH keys periodically, like you do your
password; again, config management can help there by pushing the new
keys out to servers.
Paul