On Sat, 2012-11-03 at 17:10 -0400, Patrick LeBoutillier wrote:
> Hi all,
>
> We have been using linux-vserver for years (we actually created the
> project before eventually passing on maintainership), but after years
> of kernel patching and unending API and syscall changes, we are looking
> to move towards a more mainstream approach, ideally fully integrated into
> the stock linux kernel.
>
> Some research has led me to LXC as perhaps a viable replacement
> solution, but before going further I would like to get a better
> grasp on some of the key aspects of the project.

I tried using linux-vservers a long time ago but had to abandon it due to
removal / failure of IPv6 support (which I think has since been corrected)
something like a decade ago. At the time, I switched to OpenVZ and never
looked back.

I feel your pain on the kernel patching and maintenance. That finally got
me fed up and I had to abandon OpenVZ for LXC. For what I do, it works
great. It has some significant resource control limitations vis-a-vis
OpenVZ, but I wasn't using those throttles and tuning knobs anyway. I'm
running a set of tightly controlled environments with no "untrusted"
users, so all the owners of my containers are no more than a strong brick
throw away from me, and that takes care of resource management for me. :-)

> Here goes:
>
> - One of the things we use a lot with linux-vserver is an "enter"
> functionality: from a shell in the host, use the "enter" command
> to get a shell inside a container. This is a variant of the "exec"
> feature, which allows the execution of an arbitrary command inside
> a container from a shell on the host.

I believe with recent kernels this works with lxc-attach, but it could be
problematical. I'm working on development versions right now and it's
having problems attaching. At one point this feature needed a kernel
patch, but I was thinking it has been integrated upstream. Not certain.
(There's a minimal sketch of the invocation at the end of this message.)

> A lot of our internal processes are based on this feature. The vast
> majority of our containers do not run SSH servers and are accessed
> interactively (or not) from the host.
>
> Is this functionality (or something equivalent) supported by LXC?

You really don't want to do that. Just use bridging. In the end, once it's
set up, it's cleaner. I used bridging on linux-vservers, OpenVZ, and now
LXC. (A sketch of a bridged container config is also at the end of this
message.)

> - As far as networking is concerned, we simply use IP aliases (eth0:1)
> to allocate IP addresses for the containers. This offers very basic
> network isolation (the container is limited to using specific aliases)
> but it suits our needs.
>
> With LXC, is this simple technique usable or does one have to
> necessarily setup bridges and/or tunnels?

There's also macvlan in LXC, which lets the container share the network
device (sort of what you are doing, without the need for alias
interfaces), but there have been problems in the past with that (solved
now, I think). A macvlan config is also sketched at the end of this
message.

> Thanks a lot,
>
> Patrick
>
> --
> *Patrick LeBoutillier*
> Directeur TI
> 2, Place Laval, bureau 510
> Laval (Québec) H7N 5N6
> patrick.leboutill...@croesus.com
> 450-662-6101, poste 136

Regards,
Mike

--
Michael H. Warfield (AI4NB)  |  (770) 985-6132  |  m...@wittsend.com
   /\/\|=mhw=|\/\/           |  (678) 463-0932  |  http://www.wittsend.com/mhw/
   NIC whois: MHW9           |  An optimist believes we live in the best of all
 PGP Key: 0x674627FF         |  possible worlds.  A pessimist is sure of it!
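
For reference, a minimal sketch of entering a running container with
lxc-attach, assuming a container named "web01" (the name and the command
are just examples):

    # Interactive shell inside the running container
    lxc-attach -n web01

    # Run a single command inside the container from the host
    lxc-attach -n web01 -- ps aux

lxc-attach relies on setns() support for all of the namespace types a
container uses, which is why older kernels needed patching before it
worked fully.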
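
A rough sketch of a bridged setup, assuming a host bridge named "br0" with
the physical NIC eth0 attached to it, using the classic lxc.network.*
config keys (the container name, MAC, and addresses below are only
examples):

    # Host side (Debian-style setups usually do this in
    # /etc/network/interfaces rather than by hand):
    brctl addbr br0
    brctl addif br0 eth0

    # Container config, e.g. /var/lib/lxc/web01/config:
    lxc.network.type = veth
    lxc.network.link = br0
    lxc.network.flags = up
    lxc.network.hwaddr = 00:16:3e:12:34:56
    lxc.network.ipv4 = 192.168.1.50/24
    lxc.network.ipv4.gateway = 192.168.1.1

With that in place the container gets its own veth pair plugged into br0
and looks like just another machine on the LAN, so you can SSH straight to
it instead of entering it from the host.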
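
The macvlan variant is similar, just pointing lxc.network.link at the
physical interface instead of a bridge (again, the address is only an
example):

    lxc.network.type = macvlan
    lxc.network.macvlan.mode = bridge
    lxc.network.link = eth0
    lxc.network.flags = up
    lxc.network.ipv4 = 192.168.1.51/24

One caveat with macvlan in bridge mode: containers on the same host can
reach each other and the outside world, but the host itself cannot talk to
the containers over that interface, which catches people out.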