On Sunday, 19 May 2019 00:49:03 BST Rich Freeman wrote:

> In general you can mount stuff in containers without issue.  There are
> two ways to go about it.  One is to mount something on the host and
> bind-mount it into the container, typically at launch time.  The other
> is to give the container the necessary capabilities so that it can do
> its own mounting (typically containers are not given the necessary
> capabilities, so mounting will fail even as root inside the
> container).

That's a useful wrinkle; thanks.
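
A note for the archive: if I read that right, the second approach would
look something like this with systemd-nspawn (my guess at the tooling;
the machine name is made up):

    # Let the container do its own mounting by retaining CAP_SYS_ADMIN,
    # which nspawn normally drops; that's why mount fails even as root.
    systemd-nspawn -D /var/lib/machines/buildbox --capability=CAP_SYS_ADMIN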

> I believe the reason the wiki says to be careful with mounts has more
> to do with UID/GID mapping.  As you are using NFS, this is an issue
> you're probably already dealing with.  You're probably aware that
> running NFS across multiple hosts with unsynchronized passwd/group
> files can be tricky, because Linux (and Unix in general) works with
> UIDs/GIDs, not directly with names, so if you're doing something with
> one UID on one host and a different UID on another host you might get
> unexpected permissions behavior.

Nope. I'm alone here and have the same UID and GID everywhere.
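
For anyone else following along, the check is just that the numeric IDs
agree on every host touching the share ('peter' and 'buildbox' being
placeholder names here):

    id peter                # on the NFS server
    ssh buildbox id peter   # on each client
    # the uid= and gid= numbers should match on both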

> In a nutshell the same thing can happen with containers, or for that
> matter with chroots.  If you have identical passwd/group files it
> should be a non-issue.  However, if you want to do mapping with
> unprivileged containers you have to be careful with mounts, as they
> might not get translated properly.  Using completely different UIDs
> in the container is the wiki's suggested solution, which is fine as
> long as the actual container filesystem isn't shared with anything
> else.  That tends to be the case anyway when you're using container
> implementations that do a lot of fancy image management.  If you're
> doing something very minimal and just using a path/chroot on the host
> as your container, then you need to be mindful of your UIDs/GIDs if
> you access anything from the host directly.

Minimal. Build binaries for another box, that's all.
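
Noting my understanding of the mapping Rich describes, for the archive:
on an unprivileged setup the host's /etc/subuid and /etc/subgid assign
the container a private range, something like this (the range is purely
illustrative, and I haven't confirmed it applies to a minimal chroot-style
setup like mine):

    # /etc/subuid on the host
    peter:100000:65536
    # Container UID 0 then shows up on the host as 100000, so files the
    # container writes into a shared host path won't be owned by host root.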

> The other thing I'd be careful with is mounting physical devices in
> more than one place.  Since the containers share a kernel, I suspect
> Linux will "do the right thing" if you mount an ext4 filesystem on
> /dev/sda2 in two different containers, but I've never tried it (and
> again, doing that requires giving the containers access to even see
> sda2, because they probably won't see physical devices by default).
> In a VM environment you definitely can't do this: the VMs are
> completely isolated at the kernel level, and two different kernels
> holding dirty buffers on the same physical device will kill any
> filesystem that isn't designed to be clustered.  In a container
> environment the two containers aren't really isolated at the physical
> filesystem level since they share the kernel, so I think you'd be
> fine, but I'd want to test or do some research before relying on it.

Good point. I'll bear it in mind.
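
If I ever do need one filesystem visible in two containers, the safer
pattern sounds like a single mount on the host bound into each, so the
kernel keeps one superblock and one set of dirty buffers (systemd-nspawn
again, with made-up machine names):

    mount /dev/sda2 /mnt/shared
    systemd-nspawn -D /var/lib/machines/one --bind=/mnt/shared
    systemd-nspawn -D /var/lib/machines/two --bind=/mnt/shared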

> In any case, the more typical solution is to just mount everything on
> the host and then bind-mount it into the container.  So, you could
> mount the NFS share in /mnt and then bind-mount that into your
> container.  There is really no performance hit and it should work
> fine without giving the container a bunch of capabilities.

Thanks Rich. You've given me the confidence to give containers a whirl.
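
The plan, then, as I understand it (server name and export path are
placeholders for my own):

    # Mount the NFS export on the host...
    mount -t nfs fileserver:/export/src /mnt/src
    # ...and bind it into the container at launch, no extra capabilities
    systemd-nspawn -D /var/lib/machines/buildbox --bind=/mnt/src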

-- 
Regards,
Peter.



