On 03/05/2015 03:28 PM, Gurucharan Shetty wrote:
> The design came out of inputs and discussions with multiple
> people, including (in alphabetical order) Aaron Rosen, Ben Pfaff,
> Ganesan Chandrashekhar, Justin Pettit and Somik Behera.  There
> are still some chinks around the OVN schema that need to be
> sorted out, so this is an early version.

Cool stuff, thanks!

> diff --git a/ovn/CONTAINERS.md b/ovn/CONTAINERS.md
> new file mode 100644
> index 0000000..0bc7eee
> --- /dev/null
> +++ b/ovn/CONTAINERS.md
> @@ -0,0 +1,101 @@
> +Integration of Containers with OVN and Openstack

micro-nit: OpenStack (capitalization here and elsewhere)

> +------------------------------------------------
> +
> +In a multi-tenant environment, creating containers directly on hypervisors
> +has many risks.  A container application can break out and make changes to
> +the Open vSwitch flows and thus impact other tenants.  This document
> +describes creation of containers inside VMs and how they can be made part
> +of the logical networks securely.  The created logical network can include VMs,
> +containers and physical machines as endpoints.  To better understand the
> +proposed integration of Containers with OVN and Openstack, this document
> +describes the end to end workflow with an example.
> +
> +* An OpenStack tenant creates a VM (say VM-A) with a single network interface
> +that belongs to a management logical network.  The VM is meant to host
> +containers.  OpenStack Nova chooses the hypervisor on which VM-A is created.
> +
> +* A logical port is created in Neutron with a port id that is the same as the
> +vif-id associated with the virtual network interface (VIF) of VM-A.

I get what this is saying.  It's not clear if the wording exactly
matches what happens, though.  Here's my attempt at expressing it:

* A Neutron port may have been created in advance and passed in to Nova
with the request to create a new VM.  If not, Nova will issue a request
to Neutron to create a new port.  The ID of the logical port from
Neutron will also be used as the vif-id for the virtual network
interface (VIF) of VM-A.
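
For example (the network, image, and flavor names here are all made up), the
pre-created port case might look like:

```
% neutron port-create --tenant-id $TENANT_ID private
% nova boot --image ubuntu-14.04 --flavor m1.small \
  --nic port-id=$PORT_ID VM-A
```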

> +* When VM-A is created on a hypervisor, its VIF gets added to the
> +Open vSwitch integration bridge.  This creates a row in the Interface table
> +of the Open_vSwitch database.  As explained in the [IntegrationGuide.md],
> +the vif-id associated with the VM network interface gets added in the
> +external_ids:iface-id column of the newly created row in the Interface table.
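
For anyone trying this out, the result is visible on the hypervisor with
something like the following, where "vnet0" stands in for whatever the VIF's
interface is actually named:

```
% ovs-vsctl get Interface vnet0 external_ids:iface-id
```
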
> +
> +* Since VM-A belongs to a logical network, it gets an IP address.  This IP
> +address is used to spawn containers (either manually or through container
> +orchestration systems) inside that VM and to monitor their health.

This is a very minor clarification, but "their health" read as slightly
confusing to me.  It's not obvious if it means the health of the
containers specifically or both the containers and the VM.  I suppose
either interpretation is fine, but rewording might help.

> +* The vif-id associated with the VM's network interface can be obtained by
> +making a call to Neutron using tenant credentials.
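
An example might help here too; if I remember the old neutronclient filter
syntax correctly, something like:

```
% neutron port-list -- --device_id=$VM_UUID
```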

> +* All the calls to Neutron will need tenant credentials.  These calls can
> +either be made from inside the tenant VM as part of a container network plugin

I figured out what this meant by "container network plugin" eventually,
but it wasn't clear at first.  It might be worth a bullet defining what
"container network plugin" means.  An attempt based on my understanding
so far:

* This flow assumes a logical component called a "container network
plugin".  Hypothetically, you could envision either a wrapper for docker
or a feature of docker itself that understands how to perform part of
this flow to get a container connected to a logical network managed by
Neutron.  The rest of the flow refers to this logical component that
does not yet exist as the "container network plugin".

> +or from outside the tenant VM (if the tenant is not comfortable using temporary
> +Keystone tokens from inside the tenant VMs).  For simplicity, this document
> +explains the workflow using the former method.
> +
> +* The container hosting VM will need Open vSwitch installed in it.  The only
> +work for Open vSwitch inside the VM is to tag network traffic coming from
> +containers.
> +
> +* When a container needs to be created inside the VM with a container network
> +interface that is expected to be attached to a particular logical switch, the
> +network plugin in that VM chooses any unused VLAN (This VLAN tag only needs to
> +be unique inside that VM.  This limits the number of Container interfaces to
> +4096 inside a single VM).  This VLAN tag is stripped out in the hypervisor
> +by OVN and is only useful as a context (or metadata) for OVN.
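
For "chooses any unused VLAN", I'd assume the plugin just looks at the tags
already in use on the VM's local bridge and picks one that isn't taken, e.g.:

```
% ovs-vsctl --columns=name,tag list Port
```
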
> +
> +* The container network plugin then makes a call to Neutron to create a
> +logical port.  In addition to all the inputs that a call to create a port in
> +Neutron is currently needed, it sends the vif-id and the VLAN tag as inputs.

I would change "is currently needed" to "that are currently needed".

Where would the vif-id and VLAN tag be specified in the request to
create a port?  It looks like there's a way to pass completely custom
data in the port create request in binding:profile.  Is there anything
better?

http://developer.openstack.org/api-ref-networking-v2.html#port_binding-ext
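
If binding:profile is the way to go, I imagine the port-create request
carrying the extra data like this (a sketch; the "vif_id" and "vlan" key
names are made up):

```
% curl -X POST -H "X-Auth-Token: $TOKEN" \
  -H "Content-Type: application/json" \
  -d "{\"port\": {\"network_id\": \"$NET_ID\",
       \"binding:profile\": {\"vif_id\": \"$VIF_ID\", \"vlan\": 42}}}" \
  http://$NEUTRON_HOST:9696/v2.0/ports
```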

As an aside, it makes me cringe that Neutron provides the ability to pass
through plugin-specific data.  It devalues Neutron as an abstraction layer,
but I suppose in this case it's handy to get something working faster.

> +* Neutron in turn will verify that the vif-id belongs to the tenant in question
> +and then uses the OVN-specific plugin to create a new row in the Logical_Port
> +table of the OVN Northbound Database.  Neutron responds with an
> +IP address and MAC address for that network interface.  So Neutron becomes
> +the IPAM system and provides unique IP and MAC addresses across VMs and
> +Containers in the same logical network.
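
It might also be worth showing how to confirm that the row landed in the
northbound database, e.g. (the database socket path here is a guess on my
part):

```
% ovsdb-client dump unix:/var/run/openvswitch/ovnnb_db.sock OVN_Northbound
```
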
> +
> +* When a container is eventually deleted, the network plugin in that VM
> +will make a call to Neutron to delete that port.  Neutron in turn will
> +delete the entry in the Logical_Port table of the OVN Northbound Database.
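
Presumably that boils down to something like the following inside the VM
(the bridge and port names carry over from my earlier sketches):

```
% neutron port-delete $PORT_ID
% ovs-vsctl del-port br-int veth_h
```
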
> +
> +As an example, consider Docker containers.  Since Docker currently does not
> +have a network plugin feature, this example uses a hypothetical wrapper
> +around Docker to make calls to Neutron.
> +
> +* Create a Logical switch, e.g.:
> +
> +```
> +% ovn-docker --cred=cca86bd13a564ac2a63ddf14bf45d37f create network LS1
> +```
> +
> +The above command will make a call to Neutron with the credentials to create
> +a logical switch.  The above is optional if the logical switch has already
> +been created from outside the VM.
> +
> +* List networks available to the tenant.
> +
> +```
> +% ovn-docker --cred=cca86bd13a564ac2a63ddf14bf45d37f list networks
> +```
> +
> +* Create a container and attach an interface to the previously created switch
> +as a logical port.
> +
> +```
> +% ovn-docker --cred=cca86bd13a564ac2a63ddf14bf45d37f --vif-id=$VIF_ID \
> +--network=LS1 run -d --net=none ubuntu:14.04 /bin/sh -c \
> +"while true; do echo hello world; sleep 1; done"
> +```
> +
> +The above command will make a call to Neutron with all the inputs it currently
> +needs to create a logical port.  In addition, it passes the $VIF_ID and an
> +unused VLAN.  Neutron will add that information in OVN and return
> +a MAC address and IP address for that interface.  ovn-docker will then create
> +a veth pair, insert one end inside the container as 'eth0' and the other end
> +as a port of a local OVS bridge as an access port of the chosen VLAN.
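
It might also be worth sketching that last step out in the document.  My
guess at what ovn-docker would end up doing under the hood (the nsenter
approach and all of the names here are just my assumptions):

```
% ip link add veth_h type veth peer name veth_c
% pid=$(docker inspect -f '{{.State.Pid}}' $CONTAINER_ID)
% ip link set veth_c netns $pid
% nsenter -t $pid -n ip link set veth_c name eth0
% nsenter -t $pid -n ip link set eth0 address $MAC
% nsenter -t $pid -n ip addr add $IP/24 dev eth0
% nsenter -t $pid -n ip link set eth0 up
% ovs-vsctl add-port br-int veth_h tag=$VLAN
```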

As a higher level comment, if I understand correctly, the current
proposal would be to get this going as an OVN specific feature via
Neutron.  An alternative would be to create the container interface
concept in Neutron itself more officially.  That would certainly take
more time and work, so it could just be considered potential future work.

-- 
Russell Bryant
_______________________________________________
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev
