= Review =
Juju is a very flexible system for deploying services based on industry best 
practices and expertise, and it can deploy services to multiple providers. 
Given that breadth, though my review took quite a bit of time, it should still 
be considered a shallow audit. With that understanding, here is my security 
review:

juju has support for different providers, which are simply the types of
cloud frameworks it can use. For example, there is an EC2 provider
(which also works with OpenStack) and a Local (LXC) provider. More
providers are expected. The providers are configured via
~/.juju/environments.yaml on the admin system. juju abstracts out the
specifics of working with a provider once environments.yaml is correctly
configured. The juju admin host stores sensitive information in
~/.juju/environments.yaml, but juju does not currently enforce safe
permissions on that file (LP: #956009).
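
Since juju does not yet enforce this itself, a security-conscious admin can
tighten the file mode manually. The helper below is only an illustrative
sketch (the function name is mine, not juju's):

```python
import os
import stat

def enforce_private_permissions(path):
    """Chmod a sensitive file (eg ~/.juju/environments.yaml) to 0600 and
    report whether it was previously group- or world-accessible.

    Illustrative only -- juju does not ship this helper (see LP: #956009).
    """
    mode = stat.S_IMODE(os.stat(path).st_mode)
    was_exposed = bool(mode & (stat.S_IRWXG | stat.S_IRWXO))
    if was_exposed:
        os.chmod(path, 0o600)
    return was_exposed
```

Running something like this before invoking juju at least limits exposure of
the provider credentials to the admin's own account.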

juju's architecture is such that an admin runs juju commands on her
system and they are delivered to a bootstrapping node. The bootstrapping
node runs a zookeeper database and has the ability to start and stop
units (nodes) and deliver setup code (charms) to the nodes. The nodes
execute the charms code as root. In addition to setup code, charms
provide other hooks like 'start' and 'stop' which are executed when the
service unit is stopped or started. All the hooks run with root
permissions. All the nodes share the same database, but only the configured
zookeeper servers can participate in leader election, so ordinary nodes
should not be able to become the zookeeper leader (see server.* in
/etc/zookeeper/conf/zoo.cfg). All nodes currently are able to read and write
to the zookeeper database.
With the Local provider, zookeeper is started as the user invoking juju
(uses a high non-default port), not in a separate bootstrapping node. In
all ways I could see, the admin's system is effectively the
bootstrapping node with the Local provider.
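
For reference, the leader-election restriction comes from zookeeper's own
configuration: only hosts listed in server.* lines may participate in the
ensemble. An illustrative /etc/zookeeper/conf/zoo.cfg fragment (hostname is a
placeholder):

```
# Port clients (the juju agents) connect on
clientPort=2181
# Ensemble members, as host:peer-port:leader-election-port.
# With a single entry, no other node can be elected leader.
server.1=bootstrap-node:2888:3888
```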

In terms of network connectivity, juju allows ssh access to all nodes.
When the admin deploys a node via a charm, the node's new service is
still not available over the network (but is to other nodes in the
environment). Only when the service is 'exposed' does the application
become available over the network.  For example, if an admin deploys
mysql and wordpress services, wordpress is only available to the world
after the admin uses 'juju expose wordpress'. This is a good design, as
it allows the admin to verify the configuration, perform updates, etc.
before the service is exposed to the world. Also, in this example, mysql is
correctly not exposed to the world. This is all accomplished via
security groups in EC2/OpenStack. In the current version of juju,
network access is not a problem with the Local provider because
zookeeper and the services are all on the libvirt NAT network and not
exposed to the world directly. Expose/unexpose doesn't seem to have any
meaning with the Local provider as no firewall rules are added via
iptables and the service is not accessible from other hosts (besides the
admin machine).

There are many problems surrounding zookeeper access. Anyone who can
connect to zookeeper (ie, all nodes) can see and modify anything in the
database. Note that this does not require subverting the juju agent--
all that is required is a network connection to the zookeeper server and
standard tools. Some information appears to be rewritten each time (eg,
/environments).

While juju uses security groups for network access (thus limiting who can 
connect) on EC2/OpenStack, it would be best if this were explicit in the 
nodes' firewall configuration (which is a requirement for Maas anyway). For 
example, these ports on the bootstrapping node are visible to other nodes in 
the environment:
2181/tcp  open  unknown
38830/tcp open  unknown

2181 is zookeeper's standard client port (which the nodes connect to) and
38830 is presumably used for leader election.

juju uses ssh for communications with the nodes. The specified ssh key
on the admin machine is copied to authorized_keys in the 'ubuntu'
account on all nodes. The 'ubuntu' account has an entry in
/etc/sudoers.d/90-cloudimg-ubuntu which allows full root access without
a password. This mirrors Ubuntu's EC2 implementation and is acceptable.
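
For reference, that sudoers entry typically looks like the following (as
shipped in Ubuntu cloud images; shown here for illustration):

```
# /etc/sudoers.d/90-cloudimg-ubuntu
ubuntu ALL=(ALL) NOPASSWD:ALL
```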

As mentioned, charm code is executed as root. Security-conscious users
will need to verify all charms before deployment. Deploying charms from
unknown sources is the equivalent of running executables or installing
packages from unknown sources and should be avoided. The juju design of
deploying hooks from the admin's machine (as opposed to pulling charms
onto the bootstrap node) is good because it allows the admin to verify
all charm code and track changes locally. That said, charms are cached
on the bootstrapping node when a charm is deployed.
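
As a starting point for such verification, the hooks shipped in a charm can be
enumerated before deployment, since every one of them will run as root on the
node. This is an illustrative audit helper of my own, not part of juju:

```python
import os

def list_charm_hooks(charm_dir):
    """Return the hook scripts in a charm's hooks/ directory.

    Illustrative helper (not juju's API): each file listed here is
    executed as root on the deployed node, so read and understand all of
    them before deploying the charm.
    """
    hooks_dir = os.path.join(charm_dir, "hooks")
    if not os.path.isdir(hooks_dir):
        return []
    return sorted(os.listdir(hooks_dir))
```

Listing hooks is of course only the first step; the point is that each listed
script needs the same scrutiny as any other code run as root.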

Various upgrade scenarios are documented as not being implemented yet
(service upgrades-- https://juju.ubuntu.com/docs/upgrades.html). Charms
provide an upgrade hook as well and work is ongoing to improve charm
upgrades. While unattended-upgrades is installed, running 'sudo dpkg-
reconfigure unattended-upgrades' shows deployed systems are not set up to
download and install security updates automatically.
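
Admins who do want unattended security updates on deployed nodes can enable
them; 'sudo dpkg-reconfigure unattended-upgrades' (answering yes) writes a
configuration along these lines:

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```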

The Local provider is a bit rough around the edges in the version of
juju I tested. While it is intended to only be used for local testing,
reboots of the admin machine result in zookeeper and nodes not starting
(LP: #955576). This is a major usability issue, reflects poorly on juju
and could be construed as data loss. Also, since many people are going
to want to test juju in virtualized environments, juju needs to stop
hard-coding 192.168.122 in its code (LP: #955540). Also, 'juju
destroy-environment' currently needs to be run with sudo, and the
'data-dir' must be removed manually. These all combine to show that the
Local provider doesn't work as
well as the EC2 provider. I had a lot of problems running it in a VM--
it would be good to document how to do this if there are specific steps
to perform to make this work right (beyond fixing the above bugs).

There is a lot of documentation about juju scattered around the internet.
The canonical documentation seems to be at
https://juju.ubuntu.com/docs/index.html. There are no man pages, and
finding the correct documentation to use proved difficult. Because so
much has been blogged and written about juju, the conflicting sources
can make its initial use difficult. Also, because juju is rapidly
changing, much of what is on the internet is out of date. I had several
problems using the Local provider in general and configuring
environments.yaml appropriately for different providers. I was also not
able to easily understand from the documentation what control-bucket and
admin-secret are, though I eventually figured it out. It was also
difficult to find a high level design overview or security design
document.

In terms of packaging, juju looks fine with no surprises (excepting the
lack of man pages).

In terms of coding:
 * juju/control/debug_hooks.py makes use of os.system and should probably be 
converted to subprocess
 * juju/environment/config.py has a TOCTOU race when creating 
environments.yaml. This is not a security concern since the file is in the 
user's directory.
 * http:// calls that can be subject to MITM attacks exist in 
juju/providers/ec2/utils.py (LP: #965507), juju/providers/orchestra/cobbler.py 
and juju/providers/orchestra/files.py. I'm assuming the new Maas provider will 
also be affected as it is using cobbler.
 * txaws does not verify certificates: LP: #781949
 * juju/providers/common/cloudinit.py does not sanitize shell metacharacters 
in its input for:
   - 'branch' in _branch_install_scripts()
   - 'secret', 'provider_type' in _zookeeper_scripts()
   These aren't necessarily security vulnerabilities because the admin 
controls the machines anyway, but it is good form to perform some sort of 
input sanitization when writing out scripts in this manner (since they aren't 
going through something like subprocess.Popen)
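
To illustrate the first and last points, the sketch below shows the safer
patterns: subprocess with an argument list instead of os.system, and quoting
values before embedding them in generated shell scripts. The function names
here are hypothetical, not juju's actual API:

```python
import subprocess

try:
    from shlex import quote  # Python 3.3+
except ImportError:
    from pipes import quote  # Python 2 fallback

def run_remote(host, command):
    """Run a command on a node over ssh without invoking a local shell.

    Hypothetical replacement for an os.system() call-site: passing an
    argument list to subprocess avoids local shell metacharacter
    interpretation entirely.
    """
    return subprocess.call(["ssh", host, command])

def shell_literal(value):
    """Quote a value (eg a branch name or secret) so it can be safely
    embedded in a generated shell script as a single literal word."""
    return quote(value)
```

For example, shell_literal() turns a hostile value like "x; rm -rf /" into a
single-quoted literal that the shell treats as data rather than as commands.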

= Conclusion =
Unfortunately, there are a number of problems that would prevent me from 
endorsing juju's promotion to main in its current state. However, juju is an 
important project, and I understand there is a lot of work going on to improve 
it for 12.04 and that this work will continue for 12.04.1. Based on 
conversations with Robbie Williamson, the server team is committing to a 
two-stage rollout of fixes to be completed by 12.04.1. I have taken the 
liberty of filing tracking bugs for these issues (see below). As such, I am ok 
with promotion to main at this time.

== Requirements (12.04) ==
- address coding concerns, above
- make sure the official documentation is as discoverable as possible from 
official blog posts, etc
- explicit ingress rule on non-Local provider bootstrapping node for zookeeper 
(LP: #966558)
- juju needs to enforce secure permissions on environments.yaml (LP: #956009)
- document best practices for keeping systems up to date (LP: #966563)
- (create/)document the charm store review process. This process should 
include promoting the deployment of AppArmor policy in charms (LP: #966566)
- document in release notes the lack of zookeeper ACLs, with an appropriate 
warning. This should also be mentioned in juju documentation as something being 
worked on (LP: #966573)
- explicit egress 'owner' rule on non-bootstrapping nodes to require root 
access to zookeeper (LP: #966577)
- document lack of encryption in the juju environment in documentation and 
release notes (LP: #966583)
- ensure Local provider properly exposes and unexposes services if this is part 
of the final release (ie, something akin to security groups for the Local 
provider such that you must expose services before they are available to the 
network)
- explicit ingress filtering for Maas nodes which should mimic the 
EC2/OpenStack security groups such that expose and unexpose work as expected 
(LP: #966584)

== Requirements (12.04.1) ==
- fix Local provider bug LP: #955576 (and ideally LP: #955540)
- explicit ingress filtering on non-Local provider bootstrapping node (eg 
allow ping and 22/tcp from anywhere, but only ping, 22/tcp, and 2181 from 
nodes in this environment) (LP: #966590) 
- implement zookeeper ACLs (eg admin for writes and something else for juju 
agent reads; world would be closed off entirely) (LP: #813773)
- if possible, encipher or remove sensitive credentials from zookeeper (LP: 
#966601)
- document best practices for securing communication between juju nodes and in 
the case of Maas, also between juju and cobbler. Remove any hard-coded 
assumptions from juju (eg, http vs https) and ideally add some functionality 
to make this easier to deploy. Suggestions are stunnel4 or openvpn (LP: 
#966605)
- man pages for juju commands. Utilize 'make man' in the build to generate 
them automatically. Also have an overview man page that points to the official 
documentation and includes a brief tutorial for using each supported provider, 
overviews of how charms work, a warning to trust charms carefully, how to do 
maintenance, etc. This overview can include best practices for using the 
various providers (LP: #966611)
- supply high level design and security document (LP: #966617)


** Changed in: juju (Ubuntu)
       Status: New => Fix Committed

** Changed in: juju (Ubuntu)
     Assignee: Jamie Strandboge (jdstrand) => (unassigned)

https://bugs.launchpad.net/bugs/912861

Title:
  [MIR] juju, txaws, txzookeeper
