On Mon, Feb 17, 2020 at 1:21 PM Nikos Chantziaras <rea...@gmail.com> wrote:
>
> probably much slower.) A chroot or container on the other hand is
> extremely lightweight. There's no virtualization involved (or very
> little of it), so it should be pretty much as fast as a native system.
Chroots and containers are exactly as fast as the native system, and don't involve any virtualization. In fact, on Linux you can't NOT run a process in a chroot or container. Every process has a root directory and a set of namespaces applied to it, including init, and every new process just inherits the settings of the process that exec'd it (see the P.S. for an easy way to inspect this). All Linux users are essentially using at least one container, so running more than one container doesn't involve any kernel behavior that running a single container doesn't.

Now, it is true that if you're running multiple containers you're more likely to have multiple copies of glibc and so on in RAM, since each container typically has its own copy on disk and the page cache can't share them, so there is a memory overhead. That applies system-wide, though, and not just to the processes running inside the additional containers. Maybe the one other bit of overhead is that the first time you launch a particular program in a particular container, any shared libraries it uses have to be loaded into RAM, while on the host there's a decent chance some of them are already there. We're really splitting hairs at this point, however.

I wouldn't use a chroot for anything at this point - anything you can do with one you can do just as easily with a container, with more separation. They're just as easy to set up as well - I personally use nspawn to run my containers, but I'm sure lxc is almost as simple, and of course lxc doesn't require running systemd.

Getting back to the original topic - you can just build binary packages for stuff like qt without using a container, but if you do so you can't build more than one layer of dependencies ahead of time: anything that needs a new dependency installed before it can compile has to wait until you actually merge that dependency. It still cuts down on the merge time considerably, but obviously not as much as building everything ahead of time does.

--
Rich
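P.S. If you want to see the "everything is already in a container" point for yourself, here's a quick C sketch (untested, but it only reads the standard /proc entries, so it should behave the same on any modern kernel) that prints the root directory and namespace IDs the kernel has already assigned to whatever process runs it:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* /proc/self exposes the per-process root and namespaces as
     * symlinks; readlink() shows where each one points. */
    const char *links[] = {
        "/proc/self/root",   /* the chroot every process has */
        "/proc/self/ns/mnt", /* mount namespace */
        "/proc/self/ns/pid", /* PID namespace */
        "/proc/self/ns/net", /* network namespace */
    };
    char buf[256];

    for (size_t i = 0; i < sizeof(links) / sizeof(links[0]); i++) {
        ssize_t n = readlink(links[i], buf, sizeof(buf) - 1);
        if (n < 0) {
            perror(links[i]);
            continue;
        }
        buf[n] = '\0';
        printf("%-18s -> %s\n", links[i], buf);
    }
    return 0;
}

Run it on the host and then inside a container and compare the output: every process on the host (including init) should report the same namespace IDs, while inside an nspawn or lxc container at least some of them should differ.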