On Mon, Oct 19, 2015 at 02:52:25PM -0700, Gurucharan Shetty wrote:
> Docker removed 'experimental' tag for their multi-host
> networking constructs last week and did a code freeze for
> Docker 1.9.
> 
> This commit adds two drivers for OVN integration
> with Docker. The first driver is a pure overlay driver
> that does not need OpenStack integration. The second driver
> needs OVN+OpenStack.
> 
> The description of the Docker API exists here:
> https://github.com/docker/libnetwork/blob/master/docs/remote.md
> 
> Signed-off-by: Gurucharan Shetty <gshe...@nicira.com>
> ---
> v1->v2:
> Some style adjustments with error messages.
> Consolidation of some duplicate code to function: get_logical_port_addresses

Thanks for doing this!  I have a few comments, see below.

> +For multi-host networking with OVN and Docker, Docker has to be started
> +with a distributed key-value store.  For example, if you decide to use consul
> +as your distributed key-value store, and your host IP address is $HOST_IP,
> +start your Docker daemon with:
> +
> +```
> +docker daemon --cluster-store=consul://127.0.0.1:8500 --cluster-advertise=$IP:0
> +```

I guess that $IP should be $HOST_IP here.
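
For the record, the corrected invocation would then read (still assuming
consul listening on localhost):

```
docker daemon --cluster-store=consul://127.0.0.1:8500 \
    --cluster-advertise=$HOST_IP:0
```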

> +The "overlay" mode
> +==================
> +
> +OVN in "overlay" mode needs a minimum Open vSwitch version of 2.5.
> +
> +* Start the central components.
> +
> +OVN architecture has a central component which stores your networking intent
> +in a database.  So on any machine, with an IP Address of $CENTRAL_IP, where you
> +have installed and started Open vSwitch, you will need to start some
> +central components.

I think that the paragraph above means that you have to do this on one
machine selected from your hypervisors.  But "on any machine...where you
have installed and started Open vSwitch" could be interpreted to mean
that you run this on *every* hypervisor.  Maybe s/on any machine/on one
of your machines/?

> +Start ovn_northd daemon.  This daemon translates networking intent from Docker
> +stored in OVN_Northbound database to logical flows in OVN_Southbound database.

It might be worth s/ovn_northd/ovn-northd/ above since that's the
correct name of the daemon.

> +```
> +/usr/share/openvswitch/scripts/ovn-ctl start_northd
> +```
> +
> +* One time setup.
> +
> +On each host, where you plan to spawn your containers, you will need to
> +run the following commands once.

I wonder whether it's worth adding some additional clarification, e.g.:

    (You need to run it again if your OVS database gets cleared.  It is
    harmless to run it again in any case.)

> +```
> +ovs-vsctl set Open_vSwitch . external_ids:ovn-remote="tcp:$CENTRAL_IP:6640 \

There's a missing " near the end of the line above.
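
With the quote restored, the full command would presumably be:

```
ovs-vsctl set Open_vSwitch . external_ids:ovn-remote="tcp:$CENTRAL_IP:6640" \
    external_ids:ovn-encap-ip=$LOCAL_IP external_ids:ovn-encap-type="geneve"
```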

> +    external_ids:ovn-encap-ip=$LOCAL_IP external_ids:ovn-encap-type="geneve"
> +```
> +
> +And finally, start the ovn-controller.
> +
> +```
> +/usr/share/openvswitch/scripts/ovn-ctl start_controller
> +```

I don't know whether that's really "one time" since it needs to happen
on each boot.
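
To make the point concrete, a minimal sketch of a boot-time hook (an
rc.local-style script is just an example; the actual mechanism will vary
by distribution):

```
# Run at every boot, e.g. from /etc/rc.local or an init script,
# not just once at install time:
/usr/share/openvswitch/scripts/ovn-ctl start_controller
```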

> +Source the openrc file. e.g.:
> +````
> +source openrc.sh
>  ```

The "source" command name isn't portable, since POSIX does not require
it.  You can use "." instead.

Are you expecting openrc.sh to be in the current directory?  POSIX says
that ". ./openrc.sh" is required to source a file in the current
directory (unless . is in $PATH).
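
To make that concrete, a minimal sketch (the openrc.sh contents here are
hypothetical, created only so the snippet is self-contained):

```
# Hypothetical openrc.sh, written here only to make the example runnable:
printf 'OS_USERNAME=demo\n' > openrc.sh

# POSIX-portable equivalent of "source openrc.sh"; note the explicit "./"
# so the file is found even when . is not in $PATH.
. ./openrc.sh
echo "$OS_USERNAME"    # prints: demo
```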

In the Python code, I wonder whether there are any concerns about
malicious input.  I mean, what if someone names a subnet "-- emer-reset",
for example (or similar)?  Would that delete basically the whole OVS
database?  Or does everything show up as a UUID and therefore make it
safe?  I didn't investigate enough to figure that out.

Assuming there's some kind of security story that way,
Acked-by: Ben Pfaff <b...@ovn.org>
_______________________________________________
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev
