On 01/05/2011 08:20 AM, Daniel Lezcano wrote:
> Rob thanks for the howto !

It's a work in progress. :)

> I have a few comments/questions:
>
> In step 6: I guess you can change the "lxc's console bug" by specifying the correct inittab for busybox.

I'm happy to just wait for the next release of lxc, if that'll fix it.

> There is a typo in the last sentence of this section:
> "... Repeat: you hae to run lxc-start, ..."
>                  ^^^

Fixed, although the sentence will go away next lxc release. (And I can fluff up the lxc-console bit to talk about -t ttynum.)

> In step 7: you kill the lxc-start processes, you should not. Why don't you use the lxc-stop command ?

Fixed. Attached is a less unfinished version of part 2, setting up a simple isolated network in the container. I still have to comment out the "lxc.network.name = eth0" bit or the KVM/container interfaces get entangled, can you reproduce that? (Quite possibly I'm doing something wrong on my end again, but I can't figure out what...)
Thanks, Rob
Last time, we set up a three layer container test environment:
Laptop - the host system running on real hardware (my Ubuntu laptop).
KVM - a virtual debian Sid system running under KVM.
Container - a simple busybox-based system running in a container.
So "Laptop" hosts "KVM" which hosts "Container". This lets us reconfigure and reboot the container host (the KVM system) without screwing up our real host environment (the Laptop system).
We ended with a shell prompt inside a container. Now we're going to set up networking in the container, with different routing than the KVM system so the Container system and KVM system have different views of the outside world.
LXC supports several different virtual network types, listed in the lxc.conf man page: vlan sets up a virtual interface that selects packets by IP address and routes them at the IP level, macvlan sets up a virtual interface that selects packets by mac address, and veth joins interfaces together using Linux's ethernet bridging support (and the ebtables subsystem).
The other two networking options LXC supports are "empty" (just the loopback interface), and "phys" to move one of the host's ethernet interfaces into the container (removing it from the host system).
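For comparison, here's roughly what a veth setup looks like in a container config file. This is just a sketch for context (it assumes a bridge named br0 already exists on the host, which we haven't set up); we won't use it in this howto:

lxc.utsname = bridged
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.ipv4 = 192.168.1.50/24

The host end of the veth pair gets attached to br0, so the container shows up on whatever network the bridge is on instead of getting its own routing.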
We're going to add a second ethernet interface to the KVM system, and use the "phys" option to move it into the container.
Step 1: Add a TAP interface to the Laptop.
The TUN/TAP subsystem creates a virtual ethernet interface attached to a process. (A TUN interface allows a userspace program to read/write IP packets, and a TAP interface works with ethernet frames instead.) For details, see the kernel TUN/TAP documentation.
We're going to attach a TAP interface to KVM, to add a second ethernet interface to the KVM system. Doing so requires root access on the laptop, but we can use the "tunctl" program (from the "uml-utilities" package) to create a new TUN/TAP interface and then hand it over to a non-root user (so we don't have to run KVM as root).
Run this as root:
# Replace "landley" with your username
tunctl -u landley -t kvm0
ifconfig kvm0 192.168.254.1 netmask 255.255.255.0
echo 1 > /proc/sys/net/ipv4/ip_forward
The above commands last until the next time you reboot your Laptop system, at which point you'll have to re-run them. They associate the address 192.168.254.1 with the TAP interface on the Laptop host, and tell the Laptop to route packets between interfaces.
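To double-check that it took, look at the new interface and the forwarding flag (ordinary sanity checks, nothing specific to this setup):

ifconfig kvm0
cat /proc/sys/net/ipv4/ip_forward

The first should show the 192.168.254.1 address, and the second should print 1.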
If you want to remove the tun/tap interface from the host (without rebooting), the command is:
tunctl -d kvm0
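If your distribution doesn't package tunctl, a reasonably recent iproute2 can do the same thing with its "ip tuntap" subcommand (same caveats: run as root, and replace "landley" with your username):

# create the interface and hand it to a non-root user
ip tuntap add dev kvm0 mode tap user landley
# remove it again
ip tuntap del dev kvm0 mode tap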
Step 2: Launch KVM with two ethernet interfaces.
We need to reboot our KVM system, still using the kernel and root filesystem we built last time, but this time specifying two ethernet interfaces. The first is still eth0, masqueraded through a virtual 10.0.2.x LAN (for use by the KVM system), and the other's a TAP device connected directly to the host (for use by the container).
To do this, we append a couple new arguments to the end of the previous KVM command line:
kvm -m 1024 -kernel arch/x86/boot/bzImage -no-reboot -hda ~/sid.ext3 \
  -append "root=/dev/hda rw panic=1" -net nic,model=e1000 -net user \
  -redir tcp:9876::22 -net nic,model=e1000 -net tap,ifname=kvm0,script=no
The first "-net nic" still creates an e1000 interface as KVM's eth0, the "-net user" plugs that interface into the masqueraded 10.0.2.x LAN, and -redir forwards port 9876 of the laptop's loopback to port 22 on that interface. What's new is the second "-net nic" which adds another e1000 interface (eth1) to KVM, and "-net tap" which connects that interface to the TUN/TAP device we just created on the Laptop.
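Before going further, it's worth confirming from a shell in the KVM system that both interfaces actually showed up:

# should list lo, eth0, and an unconfigured eth1
ifconfig -a

Leave eth1 unconfigured on the KVM side; the container is about to take it over.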
Step 3: Set up a new container in the KVM system.
To add a network interface to the container, we need a new configuration file in the format described by the "lxc.conf" man page. We're going to move a physical interface (eth1) from the host into the container. This will remove it from the host's namespace, and make it appear only in the container.
In the KVM system, go to the directory containing the static "busybox" binary and, as root, run:
cat > busybox.conf << EOF
lxc.utsname = busybox
lxc.network.type = phys
lxc.network.flags = up
lxc.network.link = eth1
#lxc.network.name = eth0
EOF
PATH=$(pwd):$PATH lxc-create -f busybox.conf -t busybox -n busybox
lxc-start -n busybox
The reason the last line of busybox.conf is commented out is to work around another bug: if the container's interface has the same name as the host interface, the two bleed together. So the host's eth1 interface will still be called "eth1" in the container, even though there's no eth0 there.
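If you're curious, you can watch the "phys" move happen: from a second SSH session into the KVM system, eth1 should be missing from the host's interface list while the container runs, and reappear when it exits. For example:

# run in the KVM system, not the container; eth1 should be absent
ifconfig -a
# confirm the container is up
lxc-info -n busybox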
Leave that running, SSH into the KVM system again, get a shell prompt in the container, and configure the container's new network interface:
lxc-console -n busybox
ifconfig eth1 192.168.254.2 netmask 255.255.255.0
route add default gw 192.168.254.1
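A quick way to confirm the container's new routing works before moving on is to ping the Laptop's end of the TAP link from the container's shell (this just exercises what we configured above):

ping -c 3 192.168.254.1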
Step 4: Fun with routing.
Now let's show that the container can access things the KVM can't. On the Laptop system, set up an alias of the loopback interface with the same IP address assigned to the KVM's eth0 (10.0.2.15). Then download the busybox binary to the Laptop and run busybox netcat in server mode so it prints "hello world" when you connect to port 12345.
sudo ifconfig lo:1 10.0.2.15 netmask 255.255.255.0
wget http://busybox.net/downloads/binaries/1.18.0/busybox-i686 -O busybox
chmod +x busybox
./busybox nc -p 12345 -lle echo hello world
Now from the container, try to connect to it with netcat:
nc 10.0.2.15 12345
It should print "hello world", meaning you connected to the laptop's lo:1 interface rather than the KVM's eth0. If you try the same command from the KVM system (./busybox nc 10.0.2.15 12345), it won't connect.
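When you're done experimenting, cleanup on the Laptop side is just tearing down what we created: stop the netcat server with ctrl-C, take down the loopback alias, and (if you like) remove the TAP interface as shown back in step 1:

sudo ifconfig lo:1 down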